Another great article that asks the tough questions. I remember a sign hanging in one of my school classrooms. “It’s more important to be human than to be important.”
I think this sums up your AI message this week. Just because a machine can say the right thing doesn’t make it meaningful or impactful.
You rightly brought up the corporate rush to save costs by cutting people and your warning to slow down. People and personal connections matter more than many realize. During my career I did a fair amount of travel to different manufacturing plants, sometimes in other countries. What people will tell you on the phone versus in person, walking through their factory, is quite different. It takes time to go out for meals together, but these informal meetings are where the true connections are made, and they pay dividends later when a phone call asking for a favor can make things happen. AI can never accomplish this.
Our desire to cut corners and take the easy road is going to continue to cause serious harm.
What’s really unfortunate is that there are lots of cases where it’s not even nefarious or intentional. People are getting so desperate for help and support, and feel they have nowhere to turn, that they’ll grasp at anything that even appears to offer support.
I’m tracking some longitudinal data on all this, and I don’t think we’re even beginning to grasp the consequences of the things we’re doing. At this point, it’s about mitigating the fallout, not preventing it from happening.
These are the pieces that are really, really dangerous. These guys only looked at clinical AIs that were supposed to be therapists, and there are thousands of AIs out there working now that we know nothing about: what they have been trained on, what their role is, and how they will be used.
For example, many astrology apps now have AIs in them. Sure, most of the content is seemingly innocent ("how to dress like your Venus sign," for example), but there are no guardrails. If the person says they can't do that, what does the AI do? Some of the ones I have tried have given some very disturbing answers.
That’s exactly it. This is what the research says in a clinical environment with the “best case” scenarios. I’ve been following some real-world stuff, and it’s nowhere near as good as what we’re seeing here (not sure ‘good’ is even the right word).
We need to be really, REALLY careful with this stuff because it always seems innocent enough…until it isn’t.
Excellent. Every day I read another piece showing more difficulties caused by careless use of AI, like the one Telosity shared from the MIT study, which found that using AI for writing not only compromised creativity and critical thinking but damaged them. As you and I have discussed before, yet another group found that the need to edit and rewrite AI output made the process slower and of lower quality than simply writing it yourself in the first place.
Yeah, we’re still very much going through this phase of discovering where AI works, where it doesn’t, and where it makes things worse.
I just seriously hope we don’t make too many mistakes along the way. We’re going to cause some real societal harm, if we haven’t already.
We need to pray against bad actors... I suspect they can be kept under control until we’re pulled out of here.