Cal Poly and the detection problem
In early 2024, driven by low faculty adoption and high costs, Cal Poly San Luis Obispo let its Turnitin license expire. The campus had already spent $171,000 on the tool since 2020. But the decision pointed to a more systemic problem. Detection-first tools analyze finished text and try to guess whether AI was used to write it. They were often wrong, labeling AI-written text as human and, worse, labeling human-written text as AI. They put faculty in the role of investigator, and most faculty don’t want that job.
While the rest of the Cal State system paid more than $1.1 million in 2025 alone on AI detection tools, Cal Poly stepped back.
Kim Bisheff, a journalism professor, didn’t wait for a policy answer to evaluate what’s next. She was drawn to OKhuman and decided to test it in her class.
“I don’t want to be the AI police. And this is such a great solution to that problem.”
The pilot
Kim taught a first-year multimedia journalism course in Winter 2026, a writing-intensive class built around publishing. Students conduct interviews, develop sources, and even publish their writing on Substack.
Kim wanted something that gave students agency over their own writing process, without putting them under a microscope. The question wasn’t whether students were cheating by using AI for writing. The question was whether students could own and verify the work they were doing, on their terms.
She introduced OKhuman into the classroom to find out.
Students wrote. And it showed.
Students installed OKhuman on their laptops with few hiccups, wrote their assignments in the tool of their choice, stamped them with OKhuman, and associated each assignment with a “drop code” that Kim shared with the class.
What students appreciated most was owning the narrative around their work.
“From the student perspective, they love the idea of taking control.”
The work itself reflected that ownership. Kim found it notable that she saw hardly any work that appeared to be AI-assisted. Some of the writing lacked polish, as is expected from first-year students, but none of the submissions had what she called that “bland, impersonal” quality she associates with generative AI.
The need extends beyond the classroom. Cal Poly’s student newspaper has a strict AI policy but no reliable way to verify compliance. Students publishing under a masthead face a sharper version of the same problem: their credibility depends on proving the work is theirs.
A helpful guardrail. No more detective work. More teaching.
Kim found that OKhuman changed how she thought about her own use of AI in writing. Working as what she described as a “benevolent force,” it made her process visible, and she became more aware of when and how she turned to chatbots.
“Every once in a while, I thought, ‘Oh, I’m using AI as more of a crutch than I realized.’”
The OKhuman educator dashboard made it simple for her to manage her class. The drop codes were easily adopted by students, and she could view all of their work in one place.
Most importantly, she leaned into the framework. Students are accountable to their own process. Just the presence of the OKhuman stamp was enough to show the student was paying attention and cared about their work.
The time she would have spent scrutinizing submissions went back into teaching. Kim is now planning to bring OKhuman into upper-division courses, where students already have more skills under their belt, along with the desire to exercise them.
“I think students with that added level of confidence would be quicker to adopt it. ... It would be such a natural thing to say ‘Sign up for OKhuman and put your stamp at the end.’”
Every faculty member, their own way
In January 2026, Cal Poly’s College of Liberal Arts AI task force conducted a survey and found that faculty hold widely different views on how to handle AI, with no consensus on approach. OKhuman doesn’t require consensus. It gives individual faculty a tool that works within their own course design, and it gives the institution a consistent, verifiable credential across all of them.
In Kim’s first-year journalism class, she integrated OKhuman the way she felt was appropriate. And when she did, she didn’t feel the need to police submissions. Instead, she could see where her students needed to grow, the kind of insight teaching actually depends on.
“I would feel very comfortable saying, ‘Install OKhuman because then you can take control of that narrative and, p.s., it also saves my time.’”
This experience is something Kim wants to share with colleagues, who are all grappling with what AI means for their profession. She has talked with them about how the tool returns control to students and saves faculty time.
“I am shocked at how often OKhuman comes up in conversation. ... There’s definitely a hunger for a solution to this problem.”
Kim also sees value beyond the classroom. The same credential that verifies student work could verify faculty scholarship, supporting the university’s reputation for authentic research.
“As a faculty member who’s working to get my research published, I think that being able to demonstrate that human stamp would be a value-add.”
What’s next
Kim is expanding OKhuman into additional courses and has begun sharing what she found with colleagues across her school. One quarter in one classroom opened several conversations.
The conditions that made this pilot work aren’t unique to Cal Poly. Across the country, faculty are wrestling with the same question Kim was asking at the start of Winter quarter — not how to catch students using AI, but how to give them a reason not to. Anywhere written work matters and credibility is on the line, the gap that Kim identified exists.