I was off last Friday for the holiday weekend but am back this week with another rundown of questions.
If you missed the original podcast that these questions are based on, you can catch that on Spotify, YouTube, or any of your favorite platforms.
This week, I covered four topics:
MIT’s Report on 95% AI Failure
Updates on DeepSeek 3.1 and Liquid AI
Google Mangle
AI in Sales
What you’ll find here are my answers to the most popular and thought-provoking questions that came from the community.
Without further ado, here they are:
MIT Report & 95% Failure Rate
If 95% of specialized AI pilots are failing, how should leaders decide which projects are worth pursuing instead of waiting for the tech to “mature”?
You mentioned the danger of binary thinking around AI success and failure. What’s a practical way leaders can spot when they’re falling into that trap in their own organizations?
If adoption is at 90% but outcomes aren’t clear, how should companies even measure “success” with AI at this stage?
Human-Centered Risk
You said the real issue isn’t the models but how people use them. What skills or habits actually separate the effective AI user from the one just creating risk?
Shadow AI is everywhere. If banning tools doesn’t work, what are the first two or three things leaders should do to manage this reality without slowing people down?
Model Landscape (DeepSeek & Liquid AI)
With DeepSeek being open source and incredibly cheap, do you see companies seriously moving away from OpenAI/Anthropic, or will the risks outweigh the cost savings?
Liquid AI’s ability to run powerful models on mobile devices sounds game-changing. What are the hidden risks of AI becoming that embedded and untethered?
Google Mangle
You compared Mangle to a universal translator across databases. If it really works, what kinds of roles or jobs are most at risk—or does it actually create new ones?
What’s the biggest misconception leaders might have if they hear about Mangle and assume it “solves” complexity?
Leadership & Risk
You said even good leaders could lose their jobs if failure rates stay this high. What’s one blind spot you see in otherwise strong leaders that could put them in the 95% without realizing it?
Alright, that’s it for this week. I hope you found it helpful, and if you did, would you consider saying thanks by buying me a coffee or lunch?
All that said, I hope all is going well amidst the chaos. I’m still working through my list of topics for next week, but I’ll probably include an interesting AI use case out of KPMG that shows how AI is changing jobs, hilarious mishaps companies are running into as they hand AI the reins, and a breakdown of Salesforce’s bizarre AI path.
However, there’s still time, so if you have suggestions or things on your mind, send them my way.
With that, we’ll see you on the other side.