2025 Predictions Mid-Year Check-In
What’s Held Up, What Went Off the Rails, and What I Didn't See Coming
If you’re new here, in January I laid out ten realistic predictions I believed we’d see in motion by the end of 2025. As always, my goal wasn’t to create noise or chase hype-driven moonshots, but to identify grounded, realistic trends I was already seeing ripple through organizations and leadership conversations.
As part of that, I committed to checking back in mid-year and sharing how things were progressing. While I’m a little taken aback that we’re already at the halfway point, what better way to celebrate than to deviate from my usual flow with a mid-year update?
To set some expectations, this article won’t be a comprehensive rundown of everything, because there’s more to unpack than ten quick updates allow. I did an extended (~70-minute) video podcast with all the details, which you can check out on YouTube, Spotify, or any of your favorite listening channels. If you opt to check that out, you’ll find it broken into three major sections:
An update on each of the original ten predictions
Five new areas of focus that weren’t on my radar then but would have been if I’d known then what I know now
Three emerging trends that aren’t urgent quite yet, but worth keeping on your radar
I recognize that’s a lot, so I kept it all timestamped for easy navigation. If you want the full rundown, head over there before or after you read this.
However, as always, the more I sat with my raw thoughts and reactions, the more three key patterns began to emerge. These weren’t just changes; they were cracks in how we’re approaching AI, widening gaps in how we’re leading and what we’re overlooking.
So, in this article I want to zoom in on those three cracks, because they keep showing up both in the headlines and in the work I’m doing with organizations and leaders.
You’ll see they aren’t future predictions; they’re present realities.
And, if you haven’t felt them yet, I firmly believe you will very soon.
With that, let’s get to it.
3 Cracks In The System
“Everyone’s moving. Many don’t even realize it, and almost nobody knows where they’re going.”
No matter where I look or who I talk to, everyone is actively doing something with AI. The pace and the posture always look different, but not a single person I encounter is standing still. And, no matter where they sit on the adoption spectrum, everyone thinks they’re being careful, resisting total surrender to the AI wave, and remaining cautious until things are “more proven.”
An interesting observation about the AI-resistors is that they aren’t quite as resistant as they think they are.
They’re unquestionably using AI in email, copywriting, research, and meeting notes. Some are even experimenting with automations in their existing tech tools, even if they’re not talking openly about it. I bring this up because a strategy not being corporate-approved, or even acknowledged, doesn’t mean it’s not happening. It absolutely still is. I’d go so far as to say unacknowledged movement may be the most dangerous kind, and it’s popping up everywhere.
This risk raises major concerns because when you don’t believe you’re in motion, you don’t bother to check the map or ask where you’re headed. And, you sure as heck don’t notice how far off course you’ve already drifted, because you never charted a course in the first place.
I worked with a team earlier this year that brought me in thinking they were “behind.” The assumption was they were missing opportunities because they hadn’t adopted AI yet. However, when I started digging into how their people were actually working, I found AI peppered everywhere, some of which no one could explain or trace to an owner. They weren’t behind; they were adrift, and they didn’t even know it.
Now, this problem isn’t limited to the resistors. Those on the other end of the spectrum are often flying just as blind. They’re spinning up agents with little to no oversight, handing over decisions and workflows to AI without understanding the impact, and assuming that “more AI” or “better AI” automatically equals better results. But, adding speed or complexity to uncertainty doesn’t improve strategy. It accelerates and amplifies risk.
All this highlights crack #1: we’re all moving without asking some important questions, like:
Where are we becoming dependent on AI?
Who’s tracking and evaluating what’s being built?
Is where this is taking us better than where we are today?
Whether you feel like you’re running or resisting, the real risk is in the drift. And, not being intentional about direction doesn’t mean you’re not still going somewhere. There’s just a very high risk you won’t like where you end up.
“We’re making a lot of big changes on assumptions and promises, not grounded decisions.”
Crack #1 is about unintentional drift. Crack #2 is about the conscious decisions being made with false confidence. We’ve already established that many leaders are acting. Unfortunately, a lot of that action is a response to assumptions, and most don’t even realize how flimsy those assumptions are.
I’ve sat in more executive meetings this year than I can count where someone says something like, “Let’s not backfill that role until we’re sure AI can’t take it on,” or “I’m confident we can cut 20-30% of operational spend (code word for employees) once we get the agents up and running.” Now, these aren’t always bad hypotheses, but they’re always terrible first steps.
Because here’s what’s not happening in most of those rooms:
No analysis of what those roles actually did
No audit of what AI is capable of right now
No evidence of how AI performs in context
No assessment of risk if the assumptions are wrong
In other words, people aren’t making decisions. They’re making bets, and they’re making them as if they’re backed by data.
This crack isn’t just about layoffs or hiring freezes. Companies are redefining entire roles. I know someone who was recently informed their job was no longer writing code but approving or denying a never-ending queue of AI-generated code. Leaders are actively rewriting the responsibilities of their employees based on what they’ve heard is possible, not on what’s actually been proven to work at scale. The most dangerous sentence I keep hearing opens with, “We believe AI can…”
What we’re seeing right now is a kind of strategic theater. AI is playing the role of a pre-fab solution to operational inefficiencies, cost pressures, or performance gaps, but nobody ever checks if it’s actually solving the problem. We’re implementing promises, not solutions.
And when the promises don’t hold? It’s not just the AI that takes the hit. It’s the people, the processes, and the leadership trust that go with them. In one instance, I’m trying to prevent an entire company from folding.
“We’re not paying attention to the right things, and it’s going to cost us.”
Crack #3 is a sneaky one because many of the serious risks don’t show up in headlines. They don’t scream; they whisper. And right now, a lot of people’s attention is focused on what’s loud, immediate, and understandable. However, many of the dangerous, long-term consequences of this AI shift are quietly building in the shadows, beneath the visible surface.
I’m talking about things like:
The militarization of AI, where defense contracts and autonomous systems are advancing faster than ethical oversight
The environmental strain of massive-scale model training, with data centers pulling power from already stressed grids and relying on water-cooling systems most leaders don’t even know exist
The impact on children, who are growing up learning from, talking to, and being shaped by systems never designed with them in mind
The erosion of trust caused by deepfakes and AI-generated content that looks real but isn’t, and the fatigue that follows when people stop believing anything they see online
People trusting whatever AI tells them rather than exercising critical thinking, good judgment, and discernment
The organizational black boxes being built as employees spin up unauthorized automations and agents with no documentation, no transparency, and no one tracking what happens if they leave
None of these will make your Q3 priority list, but that doesn’t make them science fiction. They’re already happening, and the cost of ignoring them is going to continue compounding.
Where my empathy kicks in is knowing the majority of people and organizations don’t mean to look away. There’s a kind of cultural and operational gravity that pulls everyone’s focus toward productivity, short-term ROI, and visible outputs. If it’s not breaking right now or screaming red on a dashboard, it doesn’t feel urgent.
This crack is about not feeling a problem until it’s too big to ignore.
To illustrate, a tech leader I know eliminated multiple positions across a team to balance the budget, trusting that “AI would fill the gap.” What they didn’t realize was the team was barely staying afloat, and only because of a cobbled-together stack of AI tools that one of them had built and no one else could explain. After everyone was eliminated, including the one who had cobbled those tools together, productivity collapsed. This wasn’t a tech failure; it was a visibility failure.
That is why I keep trying to help leaders see that technology doesn’t just change what’s visible; it changes what’s hidden, too. If you’re only tracking what’s loud, obvious, or on fire, you’re missing what’s fragile. And eventually, you will pay for that.
Concluding Thoughts
Whoosh. Breathe in; breathe out. If you made it this far, you might be feeling like we’re on the verge of the apocalypse, and who knows, maybe we are. But, I don’t think so because none of these cracks are inevitable.
Yes, they’re real, and they’re already showing up inside companies, leadership conversations, and decision-making everywhere. I won’t sugarcoat it or pretend it’s not bad. (It’s worse than most even realize.) However, that doesn’t mean we’re powerless. We still have time to interrupt these cracks if we start leading with clarity, intention, and courage.
Here’s the thing: most organizations I work with aren’t failing because they’re “behind” on AI. They’re failing because they’re moving without direction, making changes without validation, or ignoring threats that don’t feel immediate.
That’s why I’m so committed to the work I do every day.
I’m determined to help leaders and teams:
Cut through the noise to figure out what actually matters
Get honest about where they are and where they’re headed
Build people-centered, technology-informed strategies that hold up under pressure
That’s why I created tools like the AI Effectiveness Rating (AER), which helps you measure how effectively your teams are using AI, and the Pathfinder Pulse Analysis, which gives leaders a fast, structured way to find clarity in the chaos.
So, if any of what I’ve written here feels familiar, or if you’re reading this and thinking, “Crap. He’s right and we’re already drifting…” I’d love to talk. Whether it’s a focused advisory session, ongoing support, or just a conversation to make sense of what you’re seeing, that’s why I’m here.
And, even if you don’t reach out to me, please do something.
Don’t assume you’ll be fine if you “wait it out” or build your future on a guess. There’s way too much at stake and so many paths that don’t require falling apart first.
With that, I’ll see you on the other side.