When I stumbled across the term Robotheism last week, I wasn’t entirely sure what to expect. It was mentioned in a comment on an interview between two well-known tech leaders discussing the growing trend among some tech elites to create and position AI as humanity’s savior, promising to solve all our problems. Now, to be fair, the conversation wasn’t about people bowing to humanoid robots or lighting candles around some futuristic AI altar, as my cover image might suggest. However, the concept grabbed my attention since I find it interesting, and a little ironic, that tech elites are trying to create something I believe walked the earth nearly two millennia ago, but I digress. As usual, I recorded a video for those who want my unfiltered reaction.
Digging deeper, the concept of Robotheism sent me down a rabbit hole I wasn’t entirely prepared for. It turns out that while the more literal idea of AI worship does exist (there’s a Polish church, established in 2017, built around the worship of AI), what’s far more pervasive societally is a quieter, more subtle shift. It’s a growing cultural movement of people putting their faith and hope in technology, even though I’m not convinced many even recognize they’re doing it. While not as explicit, people’s shifting attitudes, beliefs, and hopes paint a very clear picture. Many are looking for AI to be much more than just a tool.
While the caricatured examples make for good headlines, it’s this subtle, widespread shift that we need to talk about. Whether or not people realize it, believing AI will become the answer to humanity’s biggest challenges comes with its own set of risks, and they’re ones we might not fully grasp until it’s too late.
So, what does this all mean? How do we ensure we don’t step across the line from seeing AI as a powerful tool to believing it will save us? Those are the questions I wrestled with over the last week as I thought through this topic. As usual, I’ve laid out four key takeaways, regardless of your beliefs, that can help you navigate this uncharted terrain.
Top Takeaways
“When we hand over our deepest hopes and fears to something we’ve created, we risk placing our burdens on something incapable of carrying them.”
It’s easy to see why people are drawn to AI. It offers an enticing promise, answers to life’s hardest questions, and solutions to humanity’s biggest challenges. However, hope is a dangerous thing to hang on something we’ve built. While we’d like to think AI will transcend us, it can’t. After all, it’s built on our data, carrying our human imperfections and operating within limits we define. Hoping it can bear the weight of our trust will inevitably lead to disappointment.
Interestingly, this quiet shift toward believing AI has the potential to be a savior reveals something deeper we’d be wise to pay attention to. While yes, we desire greater efficiency and innovation, we’re also searching for something more. For me, that hope and fulfillment can only be found in something far greater than anything humans will ever create. That said, even if you don’t share my belief, the question remains: Is AI, or anything else you’re placing your hope in, capable of carrying it?
It’s important to consider because when you misplace hope, it won’t just result in the failure of the thing you trusted; it will also carry the devastation of realizing you trusted it in the first place.
“If AI seems divine, it’s because we’re projecting what we think we lack onto it.”
While AI seems completely new, it isn’t. It’s a modernized reflection of what already exists. As I said before, the whole thing is built on our data and our inputs. However, as it grows more capable, it’s easy to begin seeing it as something conscious or even divine. But let me be really clear: AI is incapable of consciousness or divinity. It’s complex math pointing out existing patterns, which is why we have to create fancy terms like “hallucination” to dismiss the times it shows us parts of ourselves we’d rather not confront.
However, it’s interesting to observe how quickly we project onto AI the qualities we associate with divinity, like intelligence, creativity, and even salvation. Ironically, it isn’t about technology becoming godlike; it’s about us looking to fill a God-sized void with something. But shouldn’t a Creator complete us, not be something we’ve created? If you start to see God in AI, it’s worth asking yourself, “What am I longing for that makes me see it this way?”
After all, if you begin believing AI is all-knowing or all-powerful, you need to recognize it’s not because AI is divine; it’s because you’re seeking something divine.
“AI promises efficiency, but taking the easy path isn’t the same as finding peace.”
One of the most appealing aspects of AI is how conveniently it gives us what we want. It promises an easy button, so you can simplify complexity, skip discomfort, and automate anything that feels burdensome. And, to be clear, I’m not suggesting there’s something inherently wrong with making life easier. However, there are a multitude of hidden risks when we begin mistaking the easy road for the path to fulfillment. Truly living isn’t about avoiding challenges.
You risk missing out on truly living if you outsource too much of your life. Struggle, pain, and even inefficiency shape us in ways that ease, comfort, and convenience never will. Reflecting over the past week, I recognize many of my biggest growth moments occurred when I was stuck wrestling through tremendous challenges, not when I snagged a quick fix. If you rely on AI to take over the hard parts of life, you’ll risk silencing the inner work that shapes who you’re really meant to be.
Convenience can be a gift when it frees you up to focus on what truly matters. But when it starts to replace the effort, reflection, and even the struggles that lead to growth, it will rob you of something far greater than time.
“Before you trust AI with something, ask yourself what you’re trying to avoid.”
There is no shortage of debates about whether AI can be trusted. However, they often revolve around its technical capabilities, whether it’s ethical, unbiased, or reliable. While valid, these questions overlook the deeper one: Why do I want to trust AI with this in the first place? The decision to trust is about more than someone or something’s capability. There are plenty of people or things I could technically trust with something that I choose not to.
Are you considering trusting AI because it’s the right tool for the job, or are you looking for an opportunity to escape responsibility? Are you looking for it to make decisions you don’t want to make? Are you looking for a scapegoat for failure? These aren’t just technological questions; they’re deeply personal ones. And how you answer them says more about you than about AI.
Before considering whether AI is trustworthy, you should evaluate what you’re considering handing over and why. It’s essential to make sure you’re not surrendering something you should hold onto.
Concluding Thoughts
So, will an AI church pop up in your city? I mean, stranger things have happened. However, I wouldn’t bet on a booming trend of robot-led Sunday services. After all, while AI might be able to craft and deliver a well-structured sermon, create some compelling visuals, and even write a banger worship song, that’s not really what matters in our quest for something greater than ourselves.
However, we need to be mindful that we’re putting a lot more trust and hope in AI than we realize. It’s tempting to look to it for answers to life’s big questions, for guidance on making major life decisions, or to believe it has the potential to solve humanity’s greatest struggles. Unfortunately, when we do, we’re placing an expectation on technology it’s completely incapable of carrying.
Our hope and trust are powerful yet fragile. Where we place them matters. So, here’s my challenge to you. Take some time this week to think about where you’re putting your hope. Is it in something capable of carrying it? And maybe more importantly, why are you placing it there?
Because, at the end of the day, AI can do all sorts of wonderful things. However, it doesn’t hold the answer to the deeper questions of who you are or where your purpose lies. Those answers are found elsewhere.