You'll Take The Fall When AI Fails
Workday may be the one in court, but the companies that outsourced their judgment to AI could take the heat.
Hey! Editor Christopher jumping in before things get started. I wanted to make you aware of two virtual events I’m hosting next week. Consider this your personal invitation to one or both if they’re of interest. And, since they’re both virtual, there are no space limitations, so feel free to pass this along to anyone else you think might enjoy or benefit from them. (And, to proactively address the question, if the date/time doesn’t work, registration gets you access to the live replay.)
The first is on August 20th at 11am CT and focuses on maximizing AI effectiveness in your organization through your people. I’ll be talking about the six behavioral disciplines that matter most as well as how best to assess and enable them at scale.
The second is on August 22nd at 3pm CT and is a slight deviation. I’ll be hosting a live Q&A sharing how I’m navigating AI through a biblical worldview. So, Christian or not, if you’re curious about my perspective or have questions about how faith intersects with AI and its impact on the world, check it out.
That’s it. Now back to your regularly scheduled reading.
When AI makes a mistake or possibly does something illegal, who will ultimately be held accountable? It’s an important question on many people’s minds, and it’s looking like we may get more legal clarity on the answer very soon.
Now, if you’ve been following my content for any length of time, there’s a good chance you’ve heard me mention a legal case involving Workday (one of the top HR software companies) and its AI-powered hiring tools. The lawsuit was in response to rising concerns that the AI was fueling discriminatory, potentially illegal, hiring practices.
Well, there’s been an interesting new development that may have some C-suite execs losing sleep. A judge ordered Workday to turn over a complete list of every employer that utilizes the AI capability as part of their hiring. Granted, it’s still early, but it’s the kind of shift that could begin framing how accountability falls in a world where AI “regulations” feel more like best practices than an established legal system.
I shared my usual off-the-cuff video on YouTube. However, this update in particular kept me thinking. Given the subject matter and the makeup of my audience, I think this one called for something deeper. That said, I think you’ll all appreciate my thoughts on what this means for leaders, tech buyers, or anyone tempted to believe that trusting AI is a ticket out of accountability.
With that, let’s get into it.
Key Reflections
“AI might do more of the work, but you might find yourself accountable for both the work and the decision to let AI perform it.”
Quick lesson for anyone who hasn’t been around executive decision-making. Execs frequently bring in vendors and task them with the messy stuff for a reason. If it soars, you get the credit. If it fails, the vendor takes the heat and you move on. It’s why enterprise software and consulting fees tend to be so high. You’re paying to mitigate your personal risk. There’s even a line for it: “Nobody gets fired for hiring McKinsey.” I’m not advocating for it; I personally hate it because of the broader impact. However, it’s the game. Interestingly, AI may be breaking the system. Why? AI is more than a failed digital transformation or a strategy that didn’t deliver. In many cases, it’s leading to real harm. That’s not a branding issue. It’s a legal one. And, it seems courts aren’t buying the idea that you should be allowed to hide behind a machine.
What many aren’t taking into consideration is that AI is shrinking the number of parties you can blame. Vendors aside, even in a traditional team, if you made the call, another human or even teams of humans executed it. There was a surplus of warm bodies you could point at if things went south. However, when AI is the one doing the modeling, filtering, execution, and decision support? Well, there are no team members to fire or ding in a performance review, no contractor to distance yourself from. You’ll find yourself alone on stage with a big, bright spotlight on your face. And, it would seem the courts aren’t okay with a casual shrug and “I didn’t know” when you chose not to understand.
The moment you bring AI into your process, be prepared for the fact that accountability doesn’t decrease; it multiplies. It won’t vanish because the decision passed through a fancy neural network. It will be easily traced back to the one who chose to trust it.
“You don’t need a law to know what’s right, and the absence of precedent won’t protect you forever.”
There’s a dangerous AI mindset creeping into leadership right now. Many are buying into the idea that if there’s no official rule against it, there won’t be consequences. That’s a losing bet since loopholes don’t equal permission and most certainly don’t equal wisdom. Just because the law hasn’t caught up doesn’t mean it’s okay. If your AI system is filtering candidates based on protected attributes or denying services to people who need them, it doesn’t matter that a judge hasn’t ruled on it yet. You already know it’s not right. The absence of legislation doesn’t erase the presence of harm. The ethical ground doesn’t shift because the courts lag behind. It will, however, painfully expose who was willing to take advantage of the delay.
Now, that last sentence should give you pause, even if you like playing in the ethical gray areas. When the tide turns, and it always does, it won’t just be future misuse that gets punished. There’s a good chance everything will be retroactively reevaluated under the newly formalized moral lens. We’ve seen it with social media posts, old interviews, and even corporate campaigns. Things people “got away with” have a funny way of resurfacing with new weight. They won’t be written off as products of a “different time.” So, when AI regulation inevitably catches up, don’t be surprised if all the companies and leaders that exploited the gap are suddenly front-page stories. Getting away with something now just means the consequences haven’t arrived yet.
What’s acceptable in a vacuum rarely holds up under scrutiny. And, I have a funny feeling AI history will have a very long memory.
“We like to blame the system, but right now we are the ones building the system.”
With the rise of automation, we’re becoming further disconnected, and with that, it becomes easier to forget that real people live on the other side of our decisions. A rapidly rejected candidate isn’t just a résumé. They’re a person trying to provide for their family. A quickly denied prescription isn’t just a cost-saving measure. It could be the difference between life and death. These aren’t edge cases anymore. They’re happening every day as we let AI drive without a human hand on the wheel. Now, I’ll acknowledge it’s not usually because someone set out to hurt anyone. Someone was probably tasked with faster processing, not realizing it would set off a chain reaction that ends with someone else paying the price.
However, right now the worst excuse we can make is, “Well, that’s just how the system works.” AI isn’t just augmenting our systems; it’s completely redefining them. Every time we plug it into a workflow, we are rebuilding the infrastructure of how decisions get made. That means we have a choice. It presents us with an opportunity to reimagine a system that’s better all around. Or, we can choose to amplify the same broken patterns at greater scale and speed. But if you automate what you know is already flawed, you are no longer a passive participant. You become the architect of the harm.
Right now we have the power to build a different system. The only question is whether we will. Oh, and pretending you didn’t see it doesn’t make you innocent.
“It’s not the big moves that scare me; it’s the small ones we keep making without thinking.”
The vast majority of AI fallout we’re seeing right now didn’t start with massive system overhauls or enterprise-scale deployments. It started with a checkbox, preference toggle, or setting buried inside a UI that no one thought to double-check. Don’t believe me? Consider what happened with OpenAI’s recent “share” disaster. One little option resulted in people’s private conversations being indexed and publicly searchable on Google. It wasn’t a system failure; it was a UX oversight with AI-amplified consequences because that’s exactly what AI does. It magnifies impact and risk beyond our comprehension. In a traditional system, a small mistake might stay small. In an AI system, there are no small mistakes. Every decision matters.
And, I’m empathetic to the fact that everyone is overwhelmed. There are impossible demands and shifting expectations, all while being told that speed is the only thing that matters. Interestingly, this whole situation highlights how AI isn’t just a technical challenge. It’s deeply human, touching hiring, ethics, communication, compliance, culture, and everything in between, which is why the people designing and deploying it need to understand all of it. That’s why I do the work I do. Not because I have all the answers, but because we need people who can help others translate between the business, the tech, and the human, people who can see the system and the individual.
The small decisions we’re ignoring today will be the messes we’re scrambling to clean up tomorrow because small decisions never feel dangerous until you realize the whole system is built on them.
Concluding Thoughts
Well, there you have it. But before you go, here are my final spoken thoughts.
Thanks again for sticking around. If you found value in the content I create, would you consider buying me lunch so I can keep it coming?
And, if you’d benefit from my expertise, check out my website to learn more or share with someone else who would.
Finally, no matter what, be encouraged. Whatever seems overwhelming or impossible today will feel like a vapor tomorrow.
With that, I’ll see you on the other side.
I appreciate your "everyone has good intentions" mindset. I do not share it. The ability to blame someone else has little to do with the legal mess. If they get caught, they'll pay and not care. We see it all too often: how many times has Facebook been fined and still acts like Facebook (or Meta or whatever the hell that cesspool calls itself)?
It shows why tech CEOs and the "move fast, break things" culture are going to ruin capitalism. We've moved that mindset from social media apps to the most important aspects of our lives. Moving slow is seen as a disadvantage instead of justifiable caution. Fire a bunch of people and then rehire a bunch; they're just a resource.
The real question for you, Christopher: why do you keep writing on topics that get me fired up? :)