A few months ago I spoke with the owner of a script-hosting marketplace where writers upload spec screenplays and producers browse listings by genre. His entire company depends on identifying genuine creative talent and high-impact insight. He cares deeply about authenticity.

He does use AI to help structure his thinking. But he wouldn’t tell anyone about it. In his industry, major union agreements now require explicit AI disclosure; silence isn’t just uncomfortable, it’s a contractual risk.

That’s the moral problem he sees plaguing writers everywhere.

“Everyone knows AI use is unavoidable,” he said. “Everyone also knows for any high-stakes written work, authenticity matters to the people paying for it. Everyone knows that their own discomfort about how others would perceive their work proves that provenance matters. Yet everyone hides.”

Everyone hides. I’ve heard that again and again. Over the last 16 months, I’ve had 124 such conversations with professionals across at least nine different verticals.

The hiding is understandable. A professional who discloses AI use signals that less of their own expertise went into the work. A professional who hides it carries legal and reputational exposure they cannot offload. A professional who avoids AI altogether risks becoming uncompetitive against peers who use it well. There is no clean exit from this trilemma.

Trust in written content is breaking down because the cost of producing words has gone to zero.

The problem is structural, not individual.

I had a call with the owner of a 50-person B2B agency serving enterprise clients. He has no way of knowing which content produced by his writers reflects original ideas and which is AI-generated. He’s flying blind on his own company’s reputation. His problem is not AI use, or the absence of an AI policy. It’s that he has no mechanism to ensure that what goes out under his firm’s name actually reflects his firm’s judgment.

This is not a small agency problem. It scales directly to the largest professional services companies in the world.

Last year, Deloitte delivered a 237-page report to the Australian government under a contract worth AU$440,000. It contained fabricated academic citations generated by AI. Deloitte refunded the final payment. Six weeks later, it happened again in Canada: a 526-page healthcare report with citations to nonexistent academic papers, delivered under a contract worth nearly CA$1.6 million. The Canadian report appeared after the Australian scandal had already generated international headlines. There was no standardized verification workflow, no disclosure protocol, no mechanism to carry the lesson from one failure to the next active project.

The legal profession has its own version of this failure. By early March 2026, courts had documented over 1,000 cases of AI-generated hallucinations in legal filings, according to Judge Scott Schlegel. When I spoke to attorneys at law firms, they were clear about what they actually want: not transparency about AI use, but a credential proving that a partner was meaningfully involved. They agreed that self-attestation is insufficient.

Everyone in that chain sees it. Most are waiting until it becomes intolerable or mandatory.

The prevailing response has been AI detection tools.
They are not a viable solution.

Detection is an arms race. It produces false positives, flagging genuine human writing as AI-generated. But the deeper harm is what it signals: it treats every professional as a suspect. And detection is trivially gameable: lightly revising a text and re-running it through the major detection platforms until it registers as 100% human is not difficult. Detection catches the careless and the innocent.

Imagine if police officers were allowed to write speeding tickets by guessing how fast you were going. That is what AI detection is: confidence without conviction.

Detection is a tool. You reach for it when you suspect a problem. What’s missing is different: a system where the record of human engagement is generated as a byproduct of the work itself, not checked afterward. Every financial transaction leaves a trail not because someone decided to verify it, but because the system requires it. Written work has never had that.

AI can’t bring the experience, judgment, or accountability that a professional has earned over decades. That judgment is what clients are actually buying: not the words, but the thinking behind them. When the written work no longer reliably carries evidence of that thinking, vendor credibility erodes with it. Forrester’s 2026 Predictions for B2B finds that human expertise will rival genAI in appeal as buyers seek deeper validation. Written work is how that expertise gets delivered. Right now, there is no way to verify it.

Licensed professions back accountability with infrastructure: bar admission, audit standards, medical boards. Written professional work in PR, marketing communications, journalism, and research has no equivalent. No licensing. No process standards. Historically, it didn’t need them, because written work was self-evidencing: you could read it and judge the quality yourself, and quality was a reliable proxy for human engagement. That proxy is gone. The output can be excellent and tell you nothing about who or what produced it. There is no signal today that makes human engagement visible as a byproduct of how the work gets done. It’s a gap that has never existed before.

Trust depends on accountability.

And accountability requires transparency, not as a compliance statement but as a verifiable fact.

That means process verification at origin, at creation time. Not after the fact, not through detection, not through self-attestation. The verification happens at the moment of creation, as a byproduct of the writing itself. A record of human engagement that the professional owns, that clients can reference, and that over time becomes as meaningful a credential as a degree or years of experience.
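What might that look like concretely? Here is a minimal sketch in Python, assuming an append-only, hash-chained log of writing events in which each entry commits to the hash of the one before it. The `EngagementLog` class, the event names, and the author identifier are hypothetical illustrations, not an existing tool or standard:

```python
import hashlib
import json
import time

class EngagementLog:
    """Append-only, hash-chained record of writing activity.

    Hypothetical sketch: each entry commits to the hash of the
    previous entry, so events cannot be reordered, inserted, or
    edited after the fact without breaking the chain.
    """

    def __init__(self, author_id: str):
        self.author_id = author_id
        self.entries = []
        # Genesis hash derived from the author's identifier.
        self._last_hash = hashlib.sha256(author_id.encode()).hexdigest()

    def record(self, event_type: str, detail: str) -> None:
        """Append an event such as 'ai_draft', 'human_edit', or 'review'."""
        entry = {
            "ts": time.time(),
            "type": event_type,
            # Store a digest of the content, not the content itself,
            # so the log proves engagement without leaking the draft.
            "detail_digest": hashlib.sha256(detail.encode()).hexdigest(),
            "prev": self._last_hash,
        }
        serialized = json.dumps(entry, sort_keys=True).encode()
        self._last_hash = hashlib.sha256(serialized).hexdigest()
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain and confirm no entry was altered."""
        prev = hashlib.sha256(self.author_id.encode()).hexdigest()
        for entry in self.entries:
            if entry["prev"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()
            ).hexdigest()
        return prev == self._last_hash


log = EngagementLog("partner@firm.example")
log.record("ai_draft", "first pass generated with model assistance")
log.record("human_edit", "partner restructured the argument in section 2")
assert log.verify()  # tampering with any entry would fail here
```

A real system would need signed timestamps and integration with the writing environment, but the core property is visible even in the sketch: the record accrues while the work happens, and checking it is a computation, not a judgment call.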

If the agency owner could share this engagement signal with his clients, they would not question his retainer. At a large consulting firm, a senior partner and a team of associates could use AI to develop a better report and still have a clear record of which parts reflect partner-level thinking and which were delegated. Legal practitioners could use the signal to verify human involvement in copyright disputes.

Proof of human involvement will become a standard ask before most firms are ready to provide it. The ones that wait will be explaining themselves to clients who have already moved on.