Dependence by Design Is the Most Misunderstood Risk of AI
Everyone’s talking about the OpenAI memo, but the risks are hiding in plain sight.
Since you’re reading this, there’s a good chance you know I sort through a lot of AI research, headlines, and hype. Even so, this story gave me pause for reasons that will become clear.
An internal OpenAI memo surfaced as part of the DOJ’s ongoing case against Google. In it was the “master plan” to make ChatGPT so useful, so integrated into people’s lives, that they’d feel like they “couldn’t live without it.” As you can imagine, the memo didn’t take long to ignite panic.
“They’re trying to addict us all to an ‘entity’?!”
“AI is the new tobacco.”
“This reveals the Illuminati’s endgame and fulfills the book of Revelation.”
Some of it was off the beaten path, but I can at least understand why it would lead people to react the way they did. Big tech meets redacted document uncovered in government legal discovery meets emotionally charged language around dependency. It has all the ingredients required to trigger fear, especially in a cultural moment where everyone feels like they’re on a razor’s edge.
However, as I dug past the headlines and read what the memo said (or at least what wasn’t redacted), it didn’t read like an Illuminati or New World Order master plan. It read like something I’ve heard countless times from founders and product leaders. “We want to build something people love so much they can’t imagine not using it.” It’s not new or surprising. In fact, it’s the playbook for every successful product.
Innocent enough, right? Well, not quite. This memo surfaces risks that are still widely misunderstood, including by the very people investing in, deploying, and building AI.
Per usual, I shared my raw thoughts on YouTube. This, however, is the deeper reflection that kept unfolding as I sat with it. The longer I thought about it, the clearer it became: dependence is the stated business model. And knowing that should at least make us all pause.
With that, let’s get to it.
Key Reflections
“A single word, even with good intent, can send your entire system into chaos.”
It’s common for leaders to talk in corporate shorthand, a vernacular of platitudes we pull from when we’re moving fast. Most of the time, it’s not intended to be malicious or manipulative. It’s practical. When jumping from meeting to meeting, it’s easy to hit cruise control, casually repeating the legally approved talking points that sound ‘good enough’ in the moment. However, this memo is a reminder that platitudes like building something people “can’t live without” can detonate in ways you didn’t expect. Every leader, myself included, has done it. It happens when we don’t pause to ask how our words might land with the people on the other side of them.
It’s a reminder that the environment we’re operating in right now isn’t calm; it’s chaotic. People are stretched thin, skeptical of leadership, and unsure where they fit in our rapidly changing world. The weight of our words is heavier than usual. Words like “automation,” “efficiency,” “optimizing,” or “reskilling” aren’t positive, or even neutral, right now. The takeaway is you can’t afford to lead on autopilot. You need to be intentional and calibrate your language based not just on what you mean but on how it might be received. Because the wrong word, even if honest, can undermine trust before you realize it’s happened.
Oh, and when you get it wrong (because you will), own it quickly, adjust your language, and stay connected to the people hearing you the loudest.
“When we believe the intent and the outcome are good, we have a bad habit of not questioning what it takes to get there.”
Good intentions are important. They should absolutely be the starting point of everything we do. However, we have to recognize they’re not enough. You can have the best motives in the world and still cause serious harm if you don’t step back and objectively assess the outcome. It’s way too easy to assume that because you believe you’re trying to do something good, the outcomes will be good too. Yet history is filled with countless examples of people who started from the right place and ended up somewhere they never intended, leaving a trail of catastrophic damage in their wake, all because they didn’t stop to ask hard questions.
But it doesn’t stop there. There’s a third layer many people miss, especially over time. When you believe your intentions are noble and your outcomes are good, you can fall into the trap of doing whatever it takes to make it happen. Before you know it, shortcuts get approved, and decisions that used to feel non-negotiable suddenly become “necessary.” Over time, everyone starts justifying things they never would’ve entertained, and people who object are pushed out. You end up destroying everyone and everything in the process. This one doesn’t happen overnight. It happens silently, when we stop holding motives, methods, and outcomes up to the light.
Now, I know it’s a lot to carry. However, the cost of not carrying it is much higher. The easy road is wide and lined with justifications. The narrow one is often uncomfortable, but it leads to outcomes you won’t have to explain away later.
“AI is rapidly ‘optimizing’ and ‘integrating’ into everything, but have you considered what you’ll do when it’s not available?”
Companies everywhere are sprinting to implement and integrate AI into everything, which isn’t inherently bad. In many cases, it makes sense. The tech isn’t going away, and figuring out how to use it well is necessary. However, integration done right isn’t fast and loud. It’s intentional and requires a lot more than enthusiasm. It demands clarity because you don’t need AI in everything. You need it in the right things. I’ve observed we’re getting better at recognizing that. Many are starting to see the cost of a spaghetti-on-the-wall AI strategy, and most leaders are accepting, sometimes painfully, that automation without purpose isn’t innovation; it’s chaos wrapped in fancy wallpaper.
While the progress is good, a deeper risk largely remains unaddressed: What’s the plan when AI doesn’t work, or behaves in a way you didn’t expect and you have to pull back? Both of those scenarios are inevitable. Models will fail, vendors will shift, and promises will outpace delivery. That’s a big problem if your entire operation depends on everything going right. You’ll find you’ve built something fragile, not scalable. Nobody wants to think about that. We like to believe that once it’s implemented, everything will just work. Unfortunately, betting your business on uninterrupted performance from a system you don’t control is a strategy of hope, not design.
So yes, be intentional, even bold. But don’t assume AI will always be there. Make sure you’ve got a plan for what you’ll do when it’s not.
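To make that concrete, here’s a minimal sketch of what a “plan for when it’s not there” can look like in code. It’s Python, and call_model is a hypothetical stand-in for whatever vendor client you actually use; the point is the shape, not the specifics: a feature flag that doubles as a kill switch, and a deterministic fallback the business can live on while the model is down.

```python
import logging
import random

logger = logging.getLogger("ai_fallback")


class ModelUnavailable(Exception):
    """Raised when the AI dependency fails or misbehaves."""


def call_model(text: str) -> str:
    """Hypothetical stand-in for a vendor API call; swap in your real client."""
    if random.random() < 0.3:  # simulate an outage or a bad response
        raise ModelUnavailable("upstream model call failed")
    return f"[model summary of: {text[:40]}...]"


def summarize(text: str, ai_enabled: bool = True) -> str:
    """Summarize with AI when it's available; degrade gracefully when it's not.

    The ai_enabled flag doubles as a kill switch you can flip the day the
    model misbehaves and you need to pull back.
    """
    if ai_enabled:
        try:
            return call_model(text)
        except ModelUnavailable:
            logger.warning("Model unavailable; using deterministic fallback")
    # Deterministic fallback: crude, but predictable and always available.
    return text[:100] + ("..." if len(text) > 100 else "")


if __name__ == "__main__":
    logging.basicConfig(level=logging.WARNING)
    print(summarize("A long report about quarterly results. " * 5))
```

The fallback here is deliberately boring. That’s the design choice: the degraded path doesn’t have to be impressive, it has to be something you control end to end.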
“If your boundaries aren’t clearly defined, operationalized, and measured, they’re not boundaries; they’re just platitudes and aspirations.”
Earlier, I talked about the importance of defining guardrails for mitigating dependence. There’s one final addendum I have to add because I see it all the time. You must make your guardrails measurable. We frequently stop at the term, maybe add a generic definition. “Responsible AI” or “best-in-class” sound great, but without measurements and accountability they become little more than decorations. When you stop short of measurably operationalizing your framework, you’re inviting chaos. In the absence of clarity, people will default to whatever definition seems right in their own eyes, which is when vision drifts, trust erodes, and unintended consequences multiply.
That’s why I created the AER (AI Effectiveness Rating). It offers organizations and leaders a structured, measurable way to assess people’s effectiveness with AI across the six disciplines that matter. However, this gap is bigger than AI. It applies to everything. You need ways to measure alignment and a structure that gives managers something solid to coach against. Unfortunately, many treat it like a compliance exercise: “reduce how frequently people mess up.” I see it as something else entirely. This is about performance. When you don’t define a measurable standard, nobody knows if they’re improving. And you certainly can’t lead toward it.
Now, you may not adopt my framework (although I’d encourage you to check it out). However, the truth still stands that you need one. Fail here and you’re drifting and hoping the wind takes you somewhere good.
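If it helps, here’s a toy illustration of the difference between a platitude and an operationalized boundary. This is not the AER; it’s a hypothetical Python sketch where the metric, name, and threshold are all made up for illustration. The point is that a boundary earns the name only once it has a definition, a measurement, and a line you’ve agreed not to cross.

```python
from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass(frozen=True)
class Guardrail:
    """A boundary is only real if it's defined, measured, and checked."""
    name: str
    description: str
    metric: Callable[[Sequence[float]], float]  # turns raw observations into a score
    threshold: float  # the line you've agreed not to cross

    def check(self, observations: Sequence[float]) -> bool:
        score = self.metric(observations)
        ok = score <= self.threshold
        status = "OK" if ok else "BREACH"
        print(f"{self.name}: score={score:.2f}, threshold={self.threshold} -> {status}")
        return ok


# Hypothetical guardrail: cap how often AI output ships without human review,
# a crude proxy for creeping over-dependence. Illustrative only.
unreviewed_rate = Guardrail(
    name="unreviewed-acceptance-rate",
    description="Share of AI outputs accepted without human review",
    metric=lambda xs: sum(xs) / len(xs),
    threshold=0.25,
)

if __name__ == "__main__":
    # 1 = shipped unreviewed, 0 = reviewed before shipping
    unreviewed_rate.check([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])
```

Run it and the guardrail reports a breach (0.30 against a 0.25 threshold), which is exactly the conversation a platitude never forces you to have.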
Concluding Thoughts
I recognize a lot of what I have to say can be read as resistance to progress, or friction for its own sake. “Christopher, wouldn’t it be faster if we didn’t think about over-dependence?” I promise that’s not the intent. Despite all the uncertainty and risk surrounding AI, I see and feel just as much excitement and want to move quickly. I work around extraordinary breakthroughs, help many people accelerate progress, and am genuinely hopeful about where some of it could lead.
I didn’t write this because I’m convinced we’re headed for an addict’s dystopia with no way out. I wrote it because I am optimistic about how far and fast a “product we couldn’t live without” could take us if we build with care, intention, and integrity.
However, it kills me when we justify harm in the name of progress. I cringe when people lean on lines like “you have to break a few eggs to make an omelet,” as if that’s just the cost of doing something meaningful. It doesn’t have to be that way. I see it as a convenient excuse to skip the hard work of doing it better. The reality is, I’ve been around enough to know we don’t have to choose between world-changing innovation and responsibility. We can have both.
In fact, with the tools we have today, we can probably make the best omelets the world has ever seen without breaking a single egg, harming a single animal, or compromising a single thing that matters. That’s the future I’m working toward, and I hope you are too.
With that, I’ll see you on the other side.