The Problem With Trusting AI When the Stakes Are Human
A Stanford study exposes how AI fails in moments of crisis and what it means for leadership.
AI is embedding itself into nearly every corner of our lives, including therapy, one of the most intimate and human interactions out there.
AI-powered mental health apps are popping up everywhere. You can download an unlicensed "digital therapist" in seconds. And, in some ways, I understand the appeal. They're inexpensive, listen without judgment, respond instantly with what we want to hear, and are available 24/7. It sounds helpful in a world where access to care is expensive, slow, and often inhumane.
That’s why some new research out of Stanford caught my attention.
Now, I had my suspicions about where the data might lead, but I try to remain objective and open-minded, especially with something as important as this. I wanted to understand and sit with what the research had to say. Spoiler alert: It’s not good. Interestingly, even the limitations (which usually make me hold to research loosely) made the findings worse, not better.
However, this article isn’t really about therapy because the longer I sat with it, the more I realized this was about more than the exponential rise of AI-powered therapy bots. This study shined a light on something much bigger and far more applicable to everyday work.
Per usual, I shared my raw thoughts in this week’s YouTube video, but here are my more refined reflections I think every professional, especially those leading others, should seriously consider.
With that, let’s get to it.
Key Reflections
"Just because AI can do the job doesn't mean it deserves the responsibility."
Something that didn't surprise me but gave me pause was where AI fell short, especially given the margin of failure. Crisis intervention and mental health support are some of the most fragile, human-centered situations out there. And yet, those seem to be where AI is being positioned, and gaining popularity, as a reasonable substitute. Now, that miss says less about the technology and more about us. It highlights our tendency to be drawn to solutions that sound good on paper without examining whether they actually work, or worse, considering what happens when they don't.
Now, I don't mean to take potshots at AI therapy apps. This isn't isolated to therapy bots. Business leaders everywhere are doing exactly the same thing. Each week I tap the brakes on plans for AI to replace employees, automate workflows, and make critical business decisions before anyone has deeply considered what's being handed over and what will happen when it fails. While not always popular, I hold my ground because I recognize that when something sounds smart and saves time, it's tempting to skip past the hard questions. However, we can't shortcut that step. If we do, the damage will be done before we realize it was even a possibility.
So, before you delegate a moment that carries weight, I’d encourage you to slow down and ask whether this is something you’d delegate to someone who’s never been trained. If not, why do you believe it’s wise to hand it to a machine?
“AI bias doesn’t just shape what the system does, it shapes the things we believe are worth doing.”
Something unsettling in the study was the consistent presence of stigma toward depression, schizophrenia, and addiction. This wasn't a glitch. It's what happens when you train AI models on the only data we have: messy, biased human data. But the deeper issue isn't just that the bias exists; it's that we absorb it. Even when humans are technically "in the loop," we tend to accept AI outputs at face value because they're wrapped in confidence and convenience. Tragically, when we let that happen, the model doesn't just shape decisions. It shapes the entire culture. Like a frog in slowly boiled water, we begin believing what it tells us is normal, valuable, or worth our attention.
And this is where we often miss the point when it comes to AI bias. Mitigating it isn’t a technical fix; it’s a human capability. Even though cleaning your data is essential, you can’t debug your way out of bias. The real risk isn’t just that AI gets things wrong, it’s that we often don’t realize we’ve stopped questioning what it’s suggesting. That’s one of the reasons I built the AI Effectiveness Rating (which, subtle plug, is available to individuals now). It’s important we understand not just how well we can use a tool but how prepared we are to prevent it from shaping our lenses. Because mitigating bias isn’t solved with better prompts. It’s mitigated by developing better judgment.
So, if you're starting to integrate AI into your decision-making, you need to make sure you're equipped to recognize when the data is quietly steering the outcome.
“When you’re lost in the fog, AI is a flashlight, but what you need is a compass.”
Something else the study covered highlights a major AI risk that's supported by other trends. That risk is how quickly we look to AI in moments of crisis. On the surface, it seems like a reasonable and understandable response. When we're disoriented, desperate for help, and unsure where to turn, we're vulnerable to grabbing anything that seems capable of guiding us, even if in calmer circumstances we'd recognize it's not capable of truly helping. What amplifies this risk is that AI checks all the boxes. It's fast, confident, and always available. It's capable of rushing to our aid when we're least able to discern whether where it's taking us is actually right.
Now, don't write this risk off because AI therapy isn't in your portfolio. This risk isn't limited to mental health. Chaos plus unchecked AI is the culture of most workplaces. People are unclear on priorities, stretched too thin, and worried about their livelihood, all alongside a tidal wave of AI tools promising to help sort it all out. It's why we need to do something fast. The risk is too big, and I promise the solution isn't found in trying to buy time by banning AI. We need to make sure people have clarity before they're lost. When people are grounded in where they are, what's getting in the way, and what to focus on before the chaos, it prevents them from outsourcing their thinking.
So, don’t wait. The best time to reach for a compass isn’t after you’ve gotten lost; it’s before you step into unfamiliar terrain.
“If your leadership is just polished words, AI already knows how to do your job.”
One of the most underestimated insights from the study wasn't just that AI failed in crisis moments. It's that it failed even when it said all the right things. In other words, AI can say the right words and use the right tone but still completely miss what matters. And that disconnect is more than a tech limitation; it's a leadership warning. Because the truth is, many leaders are already doing the same thing. What we can take from the research is that words disconnected from understanding fail, and they fail in the moments that matter most. So, if you're going through the motions, you're not building trust, and you're absolutely not improving performance.
However, this surfaces another layer of risk, one most leaders I encounter aren't taking seriously. In an AI-saturated world, being able to follow the script and say the right things is no longer impressive. The machines can deliver a solid one-on-one, generate a perfectly supportive response, and follow the corporate talking points to the letter. Given that, if that's what you're doing as a leader, it won't take long before your leaders start questioning whether you're adding any more value than the AI tools. It's long been said that being human in leadership matters. But now? It might be the only thing that sets you apart.
So remember that authenticity isn't just essential for morale and performance. It might soon become your last remaining competitive advantage.
Concluding Thoughts
Thanks for hanging with me through another reflection. While I hopefully broadened the application of the research for you, I also hope that if any of you are considering using AI for therapy, this gave you reason to pause.
Also, if reading this made you a little uneasy and even challenged, good. It means you're still self-aware, thinking critically, and paying attention. It means you not only recognize the risks of AI but are dialed into the parts of leadership and decision-making that should never be automated in the first place.
Now, if you read my stuff, you know I’m a big supporter of using AI to speed things up or simplify the process. However, we need to keep asking whether something should be automated. If we don’t, we risk losing the very things that make our work and our leadership worth doing.
Fortunately, there’s always a bright side, and it’s that this research doesn’t just expose what AI can’t do but reveals what people still uniquely can. Humans are uniquely capable of caring, discerning, and showing up with context and conviction when it matters. Even better, it’s okay if you’re not super technical in the world ahead. You just need to be human on purpose because that’s the part AI will never be able to reach. However, you have to consciously choose to show up that way, and I hope you make the right choice.
With that, I’ll see you on the other side.
Another great article that asks the tough questions. I remember a sign hanging in one of my school classrooms. “It’s more important to be human than to be important.”
I think this sums up your AI message this week. Just because a machine can say the right thing doesn't make it meaningful or impactful.
You rightly brought up the corporate rush to save costs by cutting people and your warning to slow down. People and personal connections matter more than many realize. During my career I did a fair amount of travel to different manufacturing plants, sometimes in other countries. What people will tell you on the phone versus in person, walking through their factory, is quite different. It takes time to go out for meals together, but these informal meetings are where the true connections are made, and they pay dividends later, when a phone call asking for a favor can make things happen. AI can never accomplish this.
These are the pieces that are really, really dangerous. These guys only looked at clinical AIs that were supposed to be therapists, yet there are thousands of AIs out there working right now, and we have no idea what they've been trained on, what their role is, or how they'll be used.
For example, many astrology apps now have AIs in them. Sure, most of the content is seemingly innocent, "how to dress like your Venus sign," for example, but there are no guardrails. If the person says they can't do that, what does the app do? Some of the ones I have tried have given some very disturbing answers.