Daniel Kahneman, in "Thinking, Fast and Slow," spent a lot of time focused on the benefits of GROUP decision making among senior managers. But even the older Oracle data, showing that 70% of senior managers would prefer "robots" to make their decisions for them, anticipates some of this new data. Scary. And we are paying them such BIG BIG BUCKS for this???
Personally, I really think we need to move to a SkyNet app to deal with the bad managers who are making so many bad decisions. They have created a Supervisor Hellscape where 70% of them are disengaged. Why are we paying them BIG BIG BUCKS to engineer poor workplace performance?
My SkyNet post about dealing with bad managers using AI tools is here:
https://medium.com/@scottsimmerman/build-a-skynet-to-improve-bad-managers-0464f131975d
Yeah, it’s definitely not a new trend. If anything, it’s just getting worse as people grow more and more confident in AI’s capability, despite the data continuing to pour in showing that it’s not more effective.
If anything, it’s just getting more complicated and detailed in what it does, so it’s harder to quickly distinguish if it’s off or on track.
AI's great, if you keep a solid leash in your hand.
You pretty much need a choker collar and chain leash. haha