15 Comments
Philip Teale

You're absolutely spot on with your thoughts on reform, Christopher. We must lean into AI rather than resist it. People seem to think it spells self-destruction because of how alien it is to our established ways, but they don't see how it's actually a catalyst for transformation. I talked about this in a recent piece comparing AI to the Shimmer from Alex Garland's Annihilation (2018): mutantfutures.substack.com/p/005

Christopher Lind

Thanks for the encouraging words and the added perspective. I'll have to check out the article.

Emily Harrison

You wrote, "AI isn’t an enemy to be defeated. It’s the next generation of how we work, think, and create."

Can you explain what you mean by saying AI is how we will think?

Christopher Lind

It's fundamentally changing the way we seek out, gather, validate, and synthesize information, which is why it's so important that we're thoughtful and intentional about how we engage with it.

Just as an example: because of its power and capability, we're able to bring in and synthesize exponentially more information than we've ever been able to before. That's a lot more to consider, and we have a choice: let AI decide for us, or lean in and think more critically.

I know some folks are set on resisting, but what they don't realize is that it's being, and in large part already is, fundamentally baked into the systems we use.

To be clear, I'm not saying AI comes in and warps our brains. It's much more subtle than that. It enables the best and worst in us, but that then has a ripple effect.

Emily Harrison

I agree that it's baked into the systems we use far more than most people realize! And more is yet to come, to be sure.

I guess I don't understand the point about how we think. How we interact with data, and how much data we interact with, are changing, but are AI/LLMs changing how humans actually think? What I'm asking is: are LLMs changing what it means to be human? We've long known that multitasking is a myth. Humans can only hold so many thoughts in their brains at a time. Just because we have more information, does it mean we are thinking differently? On a neurological level? Is that the point you were making about AI changing how we think?

Christopher Lind

I understand your question better now. Thanks, Emily.

Okay, I'll be honest that this is a conversation I could probably have for several hours, but let me try to distill it down to my highest-level response.

To your point, yes, there are a lot of human bottlenecks, and aspects of how our brains simply work, that won't necessarily be changed by AI. Like you said, we'll still gather data, process it, and do something with it.

However, what AI is actually showing is that we know a lot less about how we "think" than we originally thought, and that there are a lot of environmental factors that influence it.

As an example, there is emerging research suggesting that over-dependency on AI can lead to cognitive decay. In short, it can make us dumber. What we don't know yet is whether that decay can be reversed. We know we can create new neural pathways, but exactly how much can be overcome is unclear because it's all so new. So while on the surface it looks like AI might not change how we think from a physiological standpoint, that may not hold true if we're not intentional and thoughtful about its use.

Then, if we think about "how" we go about our thinking, it's not limited to just what's in our heads, and AI is dramatically affecting that too. We're already seeing people become overconfident in their "thinking" despite having no practical experience and without engaging or talking with people who have subject-matter expertise.

So again, it's changing how we think, and if we're not careful, for the worse.

The point of my article is that educators have an opportunity to help create and reinforce positive "thinking practices" augmented with AI, and to encourage the kind of critical thinking and social engagement that prevents cognitive decay and fosters not only greater knowledge retention but also practical skill development.

Okay, I still feel like I've barely scratched the surface, but hopefully that gives you a little more insight into the complexity of what I mean when I say it will affect how we think.

If it'd be helpful for me to say more, or if you've got other clarifying questions, let me know.

Emily Harrison

This helps a ton! Thanks. I'm familiar with much of what you referenced. I take the opposing view, though. Since we don't have a good long-term understanding of how AI may (or may not) shape the human brain, shouldn't we figure that out BEFORE we start using it on kids? We've known for years about "brain drain," learning loss, and the negative effects of screen use on students. All of this leads me to say that we should exercise extreme caution with our kids' developing brains.

Christopher Lind

We probably agree more than we differ. I definitely agree we should approach it with caution, especially with kids. At the same time, I think prohibition isn't the best approach. I grew up in the 90s around a lot of people, and attended a school, that treated the Internet and computers with prohibition.

I was fortunate enough to have my own ways of working around it, but I saw how not being taught to use the tech well really had a negative impact on folks.

Prohibition doesn't give them the guidance they need to use it wisely.

Emily Harrison

I agree.

So how do you teach them to use it wisely? That's the million-dollar question no one has been able to answer. We keep throwing darts at the board, but nothing is hitting.

David Bergsland

Excellent! As a retired college instructor, I can say you've nailed it. I spent most of my career fighting people (supervisors, fellow teachers, and governing personnel) over the need for digital publishing skills. Many were still fighting that old battle when I retired in 2009. The only concern I have with AI is the built-in surveillance, which goes far beyond reasonable standards. That, and the fact that the copy AI produces is filled with subtle little errors, especially if you ask it about spiritual reality.

We need to be teaching what the dangers are and how to use AI well. I use it for graphic production all the time, but as slick as it is, eliminating the nonsense takes major effort at this point.

Christopher Lind

Spot on, David!

If you were still an instructor, we'd have been fighting the good fight together.

David Bergsland

Actually, I still am and we are... Yeehaw! Here it comes.

Bruce Landay

AI is creeping into our lives in many small and big ways. Google searches now show an AI summary as the top listing, and in many cases it's enough to answer the question. Grammarly is the default when drafting emails, and while it's sometimes helpful, it's often wrong. Writing ability among young people has been declining for years. While I agree we can't turn AI off or wish it away, it needs to be treated like any other tool and used appropriately. People need to understand the limits of what AI does and doesn't do well.

Far better to acknowledge AI's existence and allow its use, but hold students and professionals to the higher standard of not just accepting whatever the computer spits out. My fear is that with each new round of whiz-bang technology we forget how to think and, worse yet, hold neither ourselves nor anyone else to that standard. At the risk of turning political, consider low-information voters and the results we are now experiencing.
