AI Claims a PhD, but It Still Needs a Supervisor
OpenAI’s new agent has a PhD price tag, but without human oversight it’s a dangerous illusion.
If you hadn’t heard, OpenAI officially (re)announced its o3 model. Yes, the one that technically launched, then quietly disappeared. (Cue the “We Don’t Talk About Bruno” soundtrack.) Alongside it comes o4-mini; both are positioned as powerful leaps forward, with bold claims of PhD-level intelligence and a $20K/month price tag to match.
Now, while these models are impressive on paper, we have yet to see them in action, and I predict that’s where things will get interesting. It will be a real-time test of what happens when cutting-edge AI gets priced like a luxury good and marketed like a genius. For business leaders already stretched thin, this adds yet another layer of complexity to the mix. Companies are already juggling hybrid teams, shifting priorities, and economic pressures. Talk about a perfect time to add one more decision: whether a digital PhD is worth betting your strategy on.
Given that, I understand why it’d be tempting to ignore the trend. $20K/month for an app sounds like something only the tech giants and big companies need to worry about. However, I’d encourage you not to look the other way. This launch is about more than luxury AI goods. It’s about preparing for the inevitable future. And, if you’re not asking the right questions now, you might be solving the wrong problems later.
All of that is why I chose to make it the focus of my deeper reflection this week. Per usual, I recorded my off-the-cuff reflections video if you’re into that.
With that, let’s get to it.
Key Reflections
“Human work isn’t disappearing. But, the bar for doing it well just got higher.”
AI is only going to keep getting smarter by every measurable benchmark. That shouldn’t surprise anyone anymore. With OpenAI’s latest model claiming PhD-level capability, the bar is now brushing up against the top of our academic systems. At this pace, we’re going to need a new vocabulary to describe what’s coming next. However, there’s something many people don’t consider. The smarter AI gets, the sharper the humans managing it need to become. Yes, people will still matter, but not in default mode. Waving the “creativity, critical thinking, emotional intelligence” flag isn’t going to be enough if those skills aren’t being actively developed.
In fact, the more advanced AI becomes, the more dangerous it will be to leave human growth stagnant. If your people fall behind, they won’t just slow things down; they’ll introduce blind spots, bottlenecks, and bad decisions that scale faster than ever. As AI levels up, we have to level up with it. More than ever, we need to develop the capacity to ask questions, apply judgment, and bring clarity to increasingly complex outputs. It won’t be about competing with AI; it will be about learning to lead AI with greater precision and perspective.
Relevance in this next chapter won’t be based on what makes us human. It will be based on how much we’re willing to grow what makes us human.
We hear “PhD” and assume smart. But, we rarely ask, “smart at what?” and “according to what standard?”
When I heard the claim that this new model was “PhD-level,” my immediate thought was, “What does that even mean?” Saying someone has a PhD could signal deep expertise, original thought, or years of research and writing, but it might just mean they spent a lot of time in school and use big words nobody understands to hide the fact that they have no practical understanding. Now, the problem isn’t necessarily the degree (although I have some thoughts there). It’s that we’ve turned the term into an unquestioned stamp of intelligence despite never actually defining intelligence. And, now we’re applying that same vague stamp to AI.
When we blindly accept that something being called “PhD-level” means it’s exceptional, we skip past the questions that matter. We fail to critically examine things like, “What kind of thinking can this model actually do? In what context? With what depth? And, how will we know if it’s successful?” That’s extremely dangerous when you consider how people are proposing to use these tools. Consider the fact that you’ve probably met a PhD you wouldn’t trust to make any major decision for you. You probably also know people with no formal credentials who can outthink most rooms they walk into. If that’s true for humans, why wouldn’t it be true for machines?
All that to say, we have to stop assigning weight to words that sound credible without asking whether they mean anything. If we don’t, we’re handing authority to whoever’s best at branding, not best equipped for the task at hand.
AI can do more work and do it faster. However, if it’s the wrong work, it will just make a bigger mess.
Now, I want to emphasize my previous points from a slightly different angle. I’ve already made the case that “PhD-level” AI requires upgraded human oversight and tempts people to believe it can, and should, take the wheel without question. The volume on those risks only gets louder when we consider the fact that speed and intelligence aren’t the same as direction and wisdom. Faster, more intelligent systems without clear alignment and protective guardrails don’t solve problems; they multiply them. This is the risk I see on a tragically frequent basis as more and more leaders chase automation.
Many reason that if AI gets smart enough to do a job and can do it autonomously, they should let it run full throttle. Let me be really clear on this: Automation is not a strategy; it’s an accelerator for your strategy. If you’re not crystal clear on what you’re trying to do, why it matters, and how you’ll measure success, the acceleration will drive you into chaos faster than you can imagine. AI may be able to give you a detailed answer, backed by complex logic, in perfect grammar, but it can (and frequently will) be dead wrong. Worse, you won’t realize the scope and size of the mistake until it’s too late.
Please, before you hand off work to AI (PhD or GED), evaluate whether you’re prepared to manage its output well, because “more and faster” is a terrible goal.
AI was supposed to level the field. Unfortunately, the further we go, the wider the gap seems to get.
This one is more of a personal sigh of disappointment. What first had me so excited about advanced AI wasn’t the speed or sophistication; it was its power to democratize. It felt like a tool with so much potential to lift people up instead of leaving them out, closing the gap between the haves and have-nots. I was genuinely excited that anyone with an internet connection could tap into capabilities once reserved for the elite. That felt hopeful. With this latest release and some other emerging trends I’m following, I’m no longer quite as hopeful. The launch of a $20K/month AI model signals a shift from democratization to exclusivity.
Now, don’t get me wrong. I fully appreciate that innovation costs money and advanced research requires funding. If you read any headlines, you’re also very aware of the reality that AI costs a lot to run. It’s far from a free service, and it’s not run by a charity. However, when access to capability shifts toward a luxury, the gap between the empowered and the excluded grows. And, despite what you might think, that’s not good for anyone. Even if you don’t hold a biblical worldview, the data still shows that when the middle falls out, the whole system becomes unstable. A future that benefits only the few will eventually collapse under its own weight.
This isn’t about pricing models. It’s about keeping a clear focus on the reality that progress worth having includes everyone.
Concluding Thoughts
Hopefully, I’ve convinced you that this week’s announcement is about more than a new “PhD-level” AI or a premium price tag for tech. It’s a moment that reminds us there’s always more happening beneath the headlines. We’re not just watching technology evolve. We’re watching assumptions being made about what it means to be smart and to succeed. And, like everything in life, the real story is layered. What looks like progress always carries risk. The things that promise convenience demand more oversight. And, some of the things that have the potential to raise the tide for everyone will quietly widen the gap if we don’t stay grounded.
That said, here’s some good news. I promise that you don’t need the most advanced AI to move forward. You can soar if you pair the right tool with the right mindset. A mid-tier or even free-tier AI model in the hands of a thoughtful, intentional human will outperform the flashiest model in careless hands every single time. It’s a reminder that we are living in a time of incredible opportunity. However, that opportunity belongs to those willing to grow, get clear, and sharpen their thinking.
So, before you ask whether you need to blow $20K a month on a PhD AI, I’d encourage you to pause. A better question to ask is, “Am I clear on what matters, and am I making the most of what I already have?”
Sometimes the smartest move isn’t upgrading; it’s making the most of what you currently have available.
With that, I’ll see you on the other side.
Reader Comments

A PhD-level AI is wonderful marketing but likely meaningless in reality. My daughter and son-in-law both have doctorates. A PhD is a research degree. It gives someone the skills to write research papers, but it doesn’t guarantee genius-level insights. Often, research papers are overviews and summations of existing research, and that’s likely all anyone would ever get from an AI. I don’t think any AI systems today do any original research.
The notion that AI systems are smart or have any real intelligence is also a misconception. PQ Rubin, who also writes about AI, said it very well when he reminded people that AI systems, especially the large language models, aren’t smart; they are just really good at predicting the next most logical thing to say in a string of words. The mathematical modeling is great in terms of prediction, but that shouldn’t be confused with any actual intelligence.
After a long career in business working with technology, I found that simple systems that people understood and implemented well worked better than whiz-bang technology that was supposed to be superior but in practice rarely performed as well as what it replaced. Your concern about people’s ability to effectively use this expensive, complicated technology is well founded. In my experience, company management falls in love with the hype and rarely makes the long-term investments in people that are needed to take advantage of new technology. I think what DOGE is doing for government “efficiency” is a perfect cautionary tale.
Good analysis. Personally, I see little use. My workflow is about anointed creativity in the midst of writing and design. I pray and do something. Then I pray to find out what the Lord has added to the mix. Then I pray again for the anointing to continue moving on the path the Lord is shaping before my eyes. Then I take the next step.
The goal is to be on the path which the Lord is revealing as I go. The walk of faith has always been one step at a time, the equivalent of moving a couple of feet at a time along the path the Lord is revealing as we go. AI would mess that up badly, as far as I can tell.
Also, compared to Yeshua Messiah, AI is incredibly stupid. However, I am learning how to use its superior drawing skills for my illustration needs. But it is certainly like working with a helper who is extremely thick-headed. It is often impossible to communicate what I need accurately. So, I have to build in smaller pieces, gradually assembling an image: I generate a dozen or more variants for a prompt, pick the one I can use best, and then prompt for the next piece to add.