Great article. Although I don't use AI for business (yet?), I do see that it will infiltrate many parts of our lives. I too (like another commenter) question how it will be profitable - there's lots of settle-out yet to happen.
I do especially like your recommendation for rest, as designed.
One last note: please don't use the phrase "If I’m being honest..." - that should never be an if. Perhaps that's a pet peeve of mine, but it's a phrase that makes one wonder when the speaker or author wasn't being honest.🧐
Lots of unknowns for sure. We’ve got a lot of work ahead, but we’ve definitely reached a tipping point. I see a lot of orgs that are already becoming so dependent on AI systems that they can’t go back.
Thanks for the feedback on the statement. I think it’s me just calling attention to something, not so much a statement of only being honest when I say that. However, I can see how it could come across. I’ll try and make note of it. 😉
Christopher, thank you for this piece! Also, I recently came across a New York Times article which featured an organization made up of many former OpenAI employees who are predicting that AGI will be accomplished by 2027. Yet they also predict a grim situation due to the lack of regulation on AI developers. I’d love to get your take on that — both the year prediction and the gloom-and-doom outlook they have forecasted. Here’s the link to their site: https://ai-2027.com/
Hey Drew. First, thanks for sharing the site. While it’s similar to a lot of the stuff I’m involved with, it’s one I hadn’t seen. (Absolutely love the way the chart animates as you go through.) I think they do a really good job of laying out a lot of very legitimate risks that genuinely need to be taken seriously. It also highlights one of the major acceleration points: when AI reaches the point where it can do its own independent research. Given OpenAI’s announcement of their $20k “PhD-level” agent, that may come sooner than 2027. (However, its capability is yet to be seen.)
It would probably take days to get into everything in my head related to your question. However, here’s my best quick response. I think a lot of the dystopian potential is legitimate and the capability will be there. Honestly, I don’t think we’ll have to wait several years. Some of the government and defense work happening with AI is terrifying. We truly are in a modern arms race on a global scale. I also fully anticipate we’re going to see some really, really bad stuff happen. However, I don’t think it will happen on quite as wide a scale as the grim predictions indicate. Let’s just say I’m not building a doomsday bunker anytime soon. There are a few reasons why.
1. While the tech can advance and the potential is there, change and adoption take time. Just because we “can” do something doesn’t mean we’ll be able to do it at scale. Getting it integrated into systems takes time. People are extremely resistant to change. And things are always exponentially more complex than people realize. On paper things look simple, but once you get into it, they’re not. Also, a lot of the dystopian predictions assume AI will just “do it all.” AI doesn’t work that way. As much as we keep saying it’s “autonomous,” it’s really not. Someone is always driving it in some capacity, and it still has to fit into the existing ecosystem, which is very resilient.
2. What you don’t hear enough about is how the more advanced these systems get, the more dysfunctional they seem to get. Granted, some of that dysfunction seems nefarious (it isn’t really, but it comes across that way), but some of it is just legitimate foolishness. What I mean is, some new model will come out crushing a capability it couldn’t handle before, only to fall behind or completely fall apart in other areas. Also, in some areas it’s still just really, really dumb. My mind is still blown by some of the things that AI should be able to do that it just can’t. Sure, analysts and the hype cycle keep saying “it’s only a matter of time,” but the data says otherwise. The fragmentation and dysfunction just seem to scale alongside the capability of the models.
3. At the end of the day, the world is still ultimately designed for people, and AI isn’t a person. So it’s honestly really bad at doing things that people like unless people are intimately involved in what it’s doing. As just a small example, look at AI writing. There was all this hype that AI would take writing away from people because it was so much better. However, when the audience for the writing is human, people don’t really like what AI creates. It feels stale and hollow. So, yes, I think AI will augment a lot more than it does today in the very near future, but we’ll find new ways for people to be involved or it will all fall apart.
Now, to be clear, I do absolutely think we’re going to see some really catastrophic stuff. I already get called in to clean up a lot of big messes that happen when people get too excited with AI and let it loose. And, tragically, it creates exponentially more damage a heck of a lot faster. I’ve had some situations where we’ve literally had to scrap the whole thing and start over because the damage was too much.
Something that also isn’t talked about enough amid the fear-driven hype cycle is how much incredibly positive stuff is happening with AI right now. Some of the medical advancements that are happening. Some of the progress on clean energy. Some of the educational advancements. There truly is a lot of really good stuff too that nobody talks about.
Ultimately, like the article says, everyone is literally making predictions. Nobody really “knows” where it’s going. There are some things I thought we’d have blown past by now that we’re still nowhere close to. There are other things I thought would take years that we’ve far surpassed. It’s a field that’s changing faster than anything I’ve ever seen.
Ultimately, my only comfort comes from my Christian worldview. I truly believe it’s all in God’s hands and he’s got it figured out.
Someone asked me recently if I thought this could lead to the end of the world. I told them I absolutely believe it has the potential to. However, it’s not the first time we’ve had something that could have led to the end of the world. Black plague, the nuclear arms race, etc… If it’s time for the world to end, it’s going to end. If it’s not, it won’t. I’m just taking it one day at a time and enjoying the ride.
“Can all your worries add a single moment to your life?” Matthew 6:27 NLT
Christopher, I appreciate your honest look at a tough issue. Your frank assessment rings true for me. Embracing AI is a huge change and it’s not fast, cheap, or easy. It will take a massive investment and true management commitment. I’m fortunate that I’m retired and get to sit this transition out. I just don’t have the energy or corporate rah-rah left for such a huge change. I’m a writer now and am sitting on the AI sidelines. I’ll let others blaze the trail. AI will be common but likely not as ubiquitous as people fear. There will be plenty of opportunities to sit out, though it may mean a change in job or company. Mostly, people have to decide the road they want to take and make that decision themselves rather than letting someone else make it for them.
I love this reflection, and thanks for the encouraging words, Bruce.
Your last line I want to 1000x. It’s not that AI, no AI, or somewhere in between is a matter of making the right or wrong decision. It’s about the process of critically thinking through that decision, making it wisely, and then holding it with an open hand while embracing the road you walk.
There are no easy roads, but it’s a lot easier when you at least know where you’re walking and why.
I spent a long career living through and introducing new technology. Along the way people embraced it or not, including myself. The reality is people need to get on the bus, get off the bus, or be in danger of being run over by the bus.
Irrespective of the political circus that is going on, which quite frankly is enough to contend with, I can’t help but see it as a diversionary tactic so that, by comparison, AI just slides on in. If you had to choose between the two, which one seems more palatable right now… which one feels more certain? Human experience tells us that certainty, even if it’s uncomfortable, will always be the first choice. I’d love to get your thoughts on this perspective.
Yeah, the political stage, state of economics, and geopolitical tensions are… interesting and distracting… to say the least.
To your point, I do think it is allowing a lot of AI stuff to slip in, not so much in a nefarious way but because people are exhausted, overwhelmed, and mentally fatigued. As such, AI seems like an easy way to free up capacity.
As for what feels more certain? That’s a really good question. I don’t necessarily see them as being in competition. Humans focusing on human skills and experiences will always win. It’s a matter of how we pull it off. I think even those going all in on AI will need to maximize the “human” elements alongside the tech. AI done right optimizes and maximizes the human experience. When it diminishes it, any gains are short-lived and come with long-term fallout.
I think you could also go full-on analog, but that will be a very different path. You’d need to really find your audience and be one of the best in your niche to pull it off.
Ultimately, I think “success” will look very different for everyone and every company. The only things that will be universal are being relentlessly intentional about what you’re doing and why, always being ready to adapt and learn, and creating products and experiences that bring out the best in people.
If that wasn’t quite what you were asking or want further clarity, let me know. I tend to work through it on a very personal level with folks, so when I talk in generalities it can sound a bit like an incomplete answer.
Great reflections and wisdom
I really don't like "top down" demands for AI like this because they take the human expertise factor out of the equation. It's no longer a matter of knowing how to use AI and using it appropriately; you *have* to use it because the boss is demanding it, even if it doesn't fit the job or project in front of you, which can actually slow down the project considerably.
Second, it is telling that he requires the use of these tools but learning must be self-driven, according to his manifesto. The company is going to require you to use something but not invest in its human staff learning how to use it.
Third, while the manifesto is written in "techno-optimism", it's de facto downsizing. Do this or lose your job. End of story. No room for discussion or options for folks who are committed to a human priority over the AI priority.
There's a big difference between saying something like "Explore the AI market. Show us a prototype or idea for incorporating it into your project so we can discuss" and saying "You have to do this".
The other thing I notice in the manifesto is that while the humans are being judged on their use of AI, the AIs are not being evaluated for effectiveness, which should definitely be part of the process. The companies that are doing this right are making AIs members of the team, with equal responsibilities and measurements just like the humans. If an AI is not adding productivity to the team, the humans should not have to use it. Posting prompts to a Slack channel is not proof of effectiveness.
So, I share many of the concerns you have, which is why I consider what they're doing ahead of the curve but still a far cry from "best-in-class."
To some degree, it is the CEO's job to say "this is where we're going as a company." Having sat at the most senior levels, I understand everyone wants to feel like they're part of picking the direction and driving the ship, but we need leaders who make bold moves and decisions (so long as they hold themselves accountable to it and then carry it forward in a human-centered way).
That said, all the things you bring up are why I think how well this goes for Shopify is yet to be seen. Even so, I think this is still a clear indicator that this IS where things are going, regardless of our feelings about it.
Masterfully penned!
💫
Thank you sir!
Like many things going on in the world, things are moving fast but without consideration of the impact. "Fail fast" in the software world does not readily translate into the real world. AI is the line in the sand, and it won't be long before it becomes another half-built, polarizing wall.
I have some serious concerns about how fast and recklessly things are moving right now as well.
Given that's where we are, I think it's now a matter of figuring out what we do within it.
I remain an AI skeptic for two main reasons - one is addressed in the article, one not.
First, I still don't see the business case for AI providers. They are losing money at unfathomable rates and have no real plan to be profitable. The tech may be good, but if no one can afford it, then its use will be very limited.
Second, the problem with statements like the CEO's is that they actually don't know how to use it yet. It is a radical restructuring of roles and no one is really working at figuring it out. They are making grand statements and big pronouncements, but doing no real work. Shopify included. Too many companies are saying "start using this" with no clue how it will apply.
By the time they figure it out, all these AI companies will be bankrupt. (My prediction, for entertainment purposes only)