Cal Poly dropped it. So did Vanderbilt. So did Northwestern, UC Berkeley, UT Austin, Pitt, and dozens of other universities across the country, leaning on guidance from researchers at MIT and other respected institutions.
Here’s the truth: Detection was never the right pathway for higher education. It’s easy to work around, it exposes institutions to liability, and its assessments are ambiguous: “human: low confidence,” “human: medium confidence.” Those aren’t verifiable results. That’s a shrug. In practice, detection turns schools into surveillance systems. A peer-reviewed paper in Patterns by J. Scott Christianson puts this plainly: Established edtech companies built a business around faculty anxiety, and faculty bought in.
Over the past year and a half, I’ve had more than 25 conversations with faculty and administrators at universities across the country. They told me about the stress of feeling like they had to become AI detectives. About the fear of false positives, or of discriminating against students who didn’t grow up speaking English. About wanting to teach students how to use AI responsibly, but not having pathways to do it.
AI is now broadly recognized as the future of learning. But the integrity infrastructure hasn’t caught up. The result: Nobody is happy. Students hate detection. Faculty who are pro-learning resent it. Administrators worry about the value of the credential for their graduates in the real world.
Detection also has a framing problem. It answers the question: “Did AI write this?” That question assumes the output is the evidence.
The right question to ask is: “Did the student actually do the work?” That’s what process verification answers. Instead of hunting for proof writers did or didn’t use AI, it records evidence that someone engaged — with their ideas, their drafts, their thinking.
Many institutions are still anchored to detection. But that anchor is getting harder to justify.
Faculty are already there
A professor and former vice-provost at a research university told us he believes detection and proctoring are inherently flawed.
In his view, the valuable human input in an AI-augmented world isn’t generating a first draft — it’s the curation, revision, and judgment that follows. The focus should be on verifying that human involvement, not detecting AI’s initial touch.
Where detection is in place, submitting an essay now comes with a ritual. Students run their own writing through multiple AI checkers. They screen-record themselves typing. Some deliberately “dumbcraft” their prose, introducing errors and awkward structures to avoid triggering false positives. A Drexel University study found that of 49 students accused of using ChatGPT, 78% said they did not use it. The dominant emotional themes were frustration, anxiety, helplessness, and growing cynicism about the value of higher education itself.
That last point is especially pressing for postgraduate programs that charge high tuition and award prestigious credentials. Law school? If graduates cannot recognize and correct AI errors, they’re in trouble: Stanford researchers found that general-purpose LLMs hallucinate legal citations 30%-45% of the time.
In medical schools, Dartmouth is exploring a forward-thinking solution: structured sessions where students present which AI model they consulted, what it recommended, and where they disagreed with it. Not “Did you use AI?” but “Show me how you thought alongside it.”
The ACCA, the world’s largest professional credentialing body, ceased routine online exams in March 2026 because cheating technology had outpaced its safeguards. It conceded what higher education hasn’t yet: detection cannot work.
I spoke to a 30-year education sales veteran who spent his career at leading academic integrity companies. “AI detection companies are solving the wrong problem,” he told me. They focus on catching cheaters and generating scores rather than on the real question: what authentic learning looks like in a world where AI is standard.
What keeps detection in place isn’t evidence that it works. It’s the false sense of psychological safety, the feeling of still being in control.
Professors have been finding workarounds on their own: requiring multiple drafts, grading revision history, asking students to present their research choices in class. Or even going back to bluebooks. All to answer one question: Did you actually think about this?
A political science professor at CUNY we spoke to started out skeptical. After our conversation, he became convinced that documenting the writing process was the only path that made sense. A business school professor at the University of Washington is now designing experiments to measure creativity and original thinking through how students work, not what they produce.
And a professor at a California university just ran a quarter-long experiment in her writing course where students documented their own process.
“I don’t want to be the AI police,” she told us. “This is such a great solution to that problem.”
None of these faculty know each other. None of them used the phrase “process verification.” They all arrived at the same conclusion independently.
What process verification actually means
Before AI, learning still had shortcuts. A student preparing a legal brief could refer to class notes, look up prior cases, read how others had approached similar problems. But that required effort. You had to read, select, synthesize. The engagement was built in.
AI created a fork. There are now two paths: the lazy path, where you generate output without engagement, and the learning path, where you use AI as a collaborator, sometimes a tutor, while engaging in real thinking. Both paths produce polished output. Detection can’t tell them apart, so it defaults to assuming everyone is on the lazy path. That’s the moral failure of the current system. Students don’t pay $60,000 a year to not learn anything.
The faculty side compounds the problem because the pedagogy hasn’t caught up. Most instructors aren’t sure what good AI use looks like. So institutionally, the default has been to police the output.
Process verification doesn’t ask whether AI was used. It asks whether the person did the work. The first question leads to an arms race that detection has already lost. The second question leads to infrastructure that documents effort, revision patterns, time spent, and engagement.
It distinguishes someone who was thinking from someone who pressed a button.
The generational divide matters here. Younger users don’t see AI use as cheating. They see hiding it as the problem. Faculty can’t win a fight built on detection because students won’t accept it. Process verification gives both sides what they need: faculty get evidence of engagement, students get agency over how their effort becomes visible.
And it isn’t complex. The writer writes. The infrastructure captures behavioral signals in the background. The credential attaches to the work. The reader, the professor, the admissions committee: all can see that sustained human effort was present.
Surveillance implies a lack of consent and a presumption of guilt. Process verification simply gives students a cryptographic, tamper-proof receipt for their own hard work, one they can choose to attach to what they submit. No surveillance. No detection. No arms race.
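To make that concrete, here is a minimal sketch of what such a receipt could look like: a hash-chained log of writing events whose final digest travels with the submission. This is an illustration under my own assumptions, not any vendor’s implementation; the class and event names are hypothetical.

```python
# Illustrative sketch only: a hash-chained "receipt" of a writing session.
# ProcessReceipt and the event names are hypothetical, not a real product's API.
import hashlib
import json
import time
from dataclasses import dataclass, field

GENESIS = "0" * 64  # starting value for the chain


@dataclass
class ProcessReceipt:
    """Chains each captured writing event to the previous one, so the log
    cannot be edited after the fact without breaking every later hash."""
    chain: list = field(default_factory=list)
    last_hash: str = GENESIS

    def record(self, event_type: str, detail: str) -> None:
        # Each event commits to the hash of the event before it.
        event = {
            "ts": time.time(),
            "type": event_type,   # e.g. "draft_saved", "revision", "pause"
            "detail": detail,
            "prev": self.last_hash,
        }
        payload = json.dumps(event, sort_keys=True).encode()
        event_hash = hashlib.sha256(payload).hexdigest()
        self.chain.append({**event, "hash": event_hash})
        self.last_hash = event_hash

    def digest(self) -> str:
        """The single value a student could attach to a submission."""
        return self.last_hash

    def verify(self) -> bool:
        """Recompute every link to confirm nothing in the log was altered."""
        prev = GENESIS
        for entry in self.chain:
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if body["prev"] != prev or recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return prev == self.last_hash


# Usage: record a few events during a session, then hand the digest to the reader.
receipt = ProcessReceipt()
receipt.record("draft_saved", "outline, 240 words")
receipt.record("revision", "restructured the argument in section 2")
print(receipt.digest(), receipt.verify())
```

Because each entry commits to the hash of the one before it, editing the log after the fact breaks every later link; a reader needs only the log and the digest to check it. A production system would also sign the entries so a receipt can’t simply be regenerated from scratch, but the chaining is the core idea.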
I’ve seen this in a different industry. I co-founded a company that licensed premium video content for home distribution. Before any deal could happen, studios required cryptographic proof of every step in the chain, from encoding to delivery to playback. The master file couldn’t move from studio to screen without verified proof at every step. Writing is in that same moment now.
Institutions are embedding AI into how they evaluate student work at scale. But every one of those tools operates on a finished product. The more AI tools institutions adopt for assessment, the wider the verification gap gets. As output gets more polished and harder to attribute, this mismatch between learning and assessment becomes an impossible problem to solve without process verification.
We don’t need better detectors. We need a receipt for the work.