Well, it took a little longer than usual to get this published. As usual, a bit after the livestream ended I was notified it was ready to share. However, when I opened the draft, I found someone else’s livestream sitting there instead of mine.
And, while it was a hilarious and engaging discussion between two Italian-Americans exchanging cooking tips and family stories, I didn’t feel it was right to steal their content.
Thankfully, Substack corrected the matter, so here we are.
If you missed the original podcast, you can catch that on Spotify, YouTube, or any of your favorite platforms.
This week I touched on four topics:
The ChatGPT Conversation “Leak” Debacle
Developments in the Workday AI Hiring Case
The Human Cost of AI Decision-Making
A Story of AI Hiring Done Right
However, what you’ll find here are my answers to ten of the most popular and thought-provoking questions that came from the community. I’ve broken them down by topic.
On the ChatGPT “Leak” Segment
You said people need to “slow down” when using AI tools. But in high-speed work cultures, how do you actually build that pause into real workflows?
Are there any AI features you personally think should never be publicly available, even with opt-ins—because they’re too risky for the average user?
On the Workday Lawsuit + Accountability
You mentioned the courts asking for a list of companies that used Workday’s AI. What should a company do right now if they’re one of them—or even if they used a similar tool?
You talked about “moving with wisdom, not speed.” In your consulting experience, what does that actually look like in a fast-moving AI rollout? Any practical signals that a leader is moving too fast?
Do you think there will be a “scapegoat culture” that develops in companies when these things go public? How can leaders avoid turning on each other?
On the Healthcare Denial Story
You mentioned that people stepped in to help this person when the AI system failed them. But what kind of escalation paths should companies build to ensure there’s a human override before it hits social media?
Is it ever ethical to automate a decision that carries life-or-death consequences, even if the AI is 95% accurate? Where do you personally draw the line?
On the Positive Hiring Use Case
You gave an example of AI actually helping candidates find better-fit roles. How did you ensure it didn’t just become another resume filter? What guardrails did you build in?
How do you sell the idea of human-centered AI to execs who are mainly focused on efficiency and speed?
Meta-Level Wrap
You’ve said several times that the issue isn’t AI—it’s how we’re using it. But what’s the one mindset shift you wish every leader would make today to avoid being part of the problem tomorrow?
Alright, that’s it for this week. I hope you found it helpful, and if you did, would you consider saying thanks by buying me a coffee or lunch?
All that said, I hope all is going well amidst the chaos. Next week I’ll be talking about the GPT-5 release and Grok 4, which will be a fun one. There’s a lot I want to cover that you may not expect, so be sure to tune in.
With that, we’ll see you on the other side.