AI Detection Software Is An Incomplete Solution to Educational Transformation
Why Educational Leaders Need to Rethink Academic Integrity
Before I get started, I know many of you have been following my career transition. It continues to be a wild ride. Perhaps I’ll do a formal update in an upcoming post as an encouragement to others navigating career transition. Candidly, the whole thing has been a rollercoaster of emotions. Practically speaking, I recently updated my website, so if you haven’t checked it out in a bit, I’d love it if you did. Also, I’m learning to ask for help, so if you or someone you know would benefit from what I do, let me know. It’d mean a lot. I know I can’t do it alone, and you’re all part of the community supporting me along my journey.
Alright, enough about that. On to the main event.
I recognize I’ve recently been covering a lot of heavy topics at the intersection of politics, business, and economics. Honestly, I’ve been really encouraged that they didn’t spark outrage or arguments but instead inspired thoughtful reflection and discussion. However, this week, I needed a mental break. So, this topic isn’t about governmental politics or corporate chaos, though it still has the potential to be just as divisive. If you are new to my work, please take a deep breath and hear the heart of my writing before reacting to the content. Because whenever you talk about AI, particularly its use (or misuse) in education, people tend to have strong reactions.
This one was inspired by a number of articles I’ve seen recently about how academic institutions are scrambling to “combat” AI in the classroom. I’ve also had many conversations with higher education leaders and institutions, counseling them on this very issue. The issue typically presents as students using AI to bypass assignments, with institutions and educators wondering what they need to do to get ahead of it. Most of the conversations are reactionary in nature. This is compounded by the fact that, like so many challenges we face today, the default approach seems to be quickly seeking out a simple answer to a complicated problem.
I’ve observed that the majority of schools take an aggressive stance, going so far as to ban helpful tools like Grammarly, require handwritten assignments, or pour millions into the latest AI detection software as a new “plagiarism” check. Far fewer are wrestling with integrating AI into the learning process instead of fighting against it. All this brings up a really important question: What are we actually assessing, and why? Unfortunately, in almost every situation, this is the question nobody wants to ask.
I made my usual off-the-cuff, raw reflection video on it, so if you like the less polished version or enjoy seeing my smiling face, feel free to check it out. However, as always, I wanted to take the time here to go a little deeper. I wanted to go deeper because this isn’t just a conversation about cheating or detection software. It’s about how we think about learning, critical thinking, and the future of education in an AI-driven world.
With that, let’s get to it!
Deeper Reflections
"AI is here to stay; resisting it only leaves you behind."
Whenever new technology comes along, our default response is resistance. Even as a tech enthusiast, I find myself fighting my resistance to change. That’s not necessarily a bad thing. I regularly advocate slowing down, reflecting, and considering your desired outcomes. However, I promise you that AI is the new electricity. It isn’t a passing trend or an optional tool. It will eventually impact literally everything you do. No amount of resisting will make it go away. Resistance is the wrong option, and we know this because history has never been kind to those who dig in their heels against inevitable progress. Those who insist AI can be held at bay are setting themselves up to be relics of a past era.
However, this isn’t just about staying relevant. There’s actual harm that comes from the wrong kind of resistance. When we resist change for the sake of preserving comfort or the status quo, we stop serving the people we’re supposed to be helping. Education is supposed to prepare students for the world they’re stepping into, not the one we’re nostalgic for. In the world ahead, AI will never be a threat to be eliminated (let’s hope). It will be a reality to be understood and applied. The harder we fight it, the more we signal that we’re more interested in preserving our comfort than preparing others for the future.
When we try to “protect” education from AI, we’re actually making education less relevant because it’s left behind as the rest of the world keeps moving forward.
"You don’t stop AI by throwing different AI at it."
Maybe you’re one of the folks who would argue with my first point by saying, “they’re not resisting AI; they’re actually using AI to prevent AI misuse.” I’ve heard it before, and it might sound like a clever argument. Unfortunately, it doesn’t hold up since you don’t solve the challenges of AI by waging an arms race against it. When you build an entire strategy around adopting AI detection software to hunt down AI-generated work, you’re not using AI in a meaningful way. You’re setting yourself up to be trapped in a never-ending game of cat and mouse.
Like an equally balanced game of tug-of-war, the more institutions tighten their grip on AI detection, the more students focus their attention on working around it. Adding to the competition, AI will continue getting better at mimicking human writing, so detection software will scramble to keep up. The cycle endlessly repeats. Nobody ever makes any forward progress. Everyone digs in and pulls harder, all the while going nowhere, burning precious time, energy, and resources trying to stop something instead of figuring out how to use it productively.
AI is a tool with massive potential to augment and improve education. If containment is your only strategy, you’ve already lost.
"AI detection software promises a solution it can’t deliver."
If you check any AI detection tool’s website, you’ll notice they all consistently make big promises. Unfortunately, none of them can do what they claim consistently or in real-world situations. Their “99% accuracy” detection stats sound really impressive, but keep in mind those numbers come from controlled tests against the easiest cases: someone using generic prompts followed by a direct copy-paste job. I mean, yes, some people do that, but that’s not how the majority of AI is actually used. Those are the submissions a seasoned pro can spot a mile away.
You have to keep in mind that AI is inherently designed to look, sound, and read like a human, and if you follow my work, you know it’s only getting exponentially better. Any student who thinks critically, edits, and uses AI to augment their work isn’t producing the kind of AI garbage a detection tool will reliably flag. As a result, you’ll flag the bad actors you would have already caught while being flooded with false positives, arguing with students whose writing style happens to resemble AI training data (which was created by humans, by the way).
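To see why the math works against detection, here’s a back-of-the-envelope sketch. Every number in it is an illustrative assumption I made up for the example, not any vendor’s published figure, but the base-rate effect it demonstrates holds for any realistic combination:

```python
# Back-of-the-envelope math on why an impressive-sounding detector
# still floods instructors with false accusations. All numbers here
# are illustrative assumptions, not any vendor's published figures.

def detection_outcomes(total, ai_share, catch_rate, false_positive_rate):
    """Return (correctly_flagged, falsely_accused) for a batch of submissions."""
    ai_written = total * ai_share
    human_written = total - ai_written
    correctly_flagged = ai_written * catch_rate            # lazy copy-paste jobs
    falsely_accused = human_written * false_positive_rate  # honest students flagged
    return correctly_flagged, falsely_accused

# Assume 1,000 submissions, 5% blatantly AI-generated, a detector that
# catches 90% of those, and a false positive rate of just 2%.
caught, accused = detection_outcomes(1000, 0.05, 0.90, 0.02)
print(f"Cheaters caught: {caught:.0f}")                    # 45
print(f"Honest students wrongly flagged: {accused:.0f}")   # 19
# Nearly 1 in 3 accusations lands on an innocent student.
```

And the trend only makes it worse: as students learn to edit and blend AI output, the catch rate drops while the false positive rate doesn’t, so an ever larger share of accusations hits the innocent.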
Even if detection software were perfect (which it will never be), it would always be behind the latest AI models. No software is going to stop what’s already here.
"It’s time to reimagine what we’re really assessing."
So, if not AI detection, then what? My answer to that is simple in concept but more complicated in practice. We need to stand naked in the mirror to assess what we’re working with and reimagine what we’re trying to accomplish. The thing is, unless you’re teaching a keyboarding, grammar, or handwriting class, I think it’s safe to assume the objective of an assignment isn’t to measure the way someone takes the thoughts from their head and translates them to a screen or scribbles them onto a page. AI is forcing us to confront a harsh reality: Many of the assessments we’re using were either ineffective already or built for a world that no longer exists.
You might hate to hear this, but despite all the PhDs, education isn’t immune to complacency. Many of the assignments and assessments we’ve relied on for years are still there because they’re familiar, not because they’re effective. AI isn’t just shortcutting how students complete their work; it’s exposing that many of our methods are grossly outdated. We need to take an honest look at what skills matter most and reevaluate the outcomes we’re driving people toward. After that, we can start determining the best way for AI to help us get there.
It’s time we deconstruct education to its fundamental ones and zeros rather than tweaking policies or layering AI detection on top of old methods. If we’re going to help students thrive in this new world, they need to be able to think critically, engage meaningfully, and apply technology in ways that elevate their abilities instead of replacing them.
Concluding Thoughts
Despite having eight kids, I’ve been fortunate that nobody has told me any of our babies were ugly. However, I’ve been told that in other areas of my life, which is never fun. It stings, and this is that moment for educators. And, the baby isn’t ugly because educators don’t care. In most cases, it’s the opposite. They’ve poured years into architecting assignments, assessments, and learning experiences they believe will help students succeed. However, AI is revealing it’s time for a change.
At the end of the day, education should never be about activity; it’s about outcomes. I’ve spent my whole career fighting against a system that focuses too much on whether a student or employee shows up, does the thing, and passes the test. While those can be part of the process, they don’t matter in the end. We need to be helping people leave better prepared for the world ahead. And, whether you like it or not, that world will include AI.
So, let’s stop the nonsense and not respond to this moment by dragging students back to the Stone Age or forcing them to jump through artificial hoops so we can feel like we’re maintaining control. It’s time to go back to the drawing board, strip away what no longer serves us, and build something better.
AI isn’t an enemy to be defeated. It’s the next generation of how we work, think, and create. If we’re willing to do the hard work now and rethink education from the ground up, AI won’t be a threat to academic integrity; it will be a trusted partner in achieving it.
The future is coming, whether we’re ready or not. Let’s do whatever it takes to make sure we’re all prepared.
With that, I’ll see you on the other side.