Artificial intelligence is the most powerful innovation we’ve seen in a generation, maybe ever.
I know many people still don’t see the impact it will have. But as someone who studies failure, I can’t help but think about the things we’re not seeing. The things we can’t predict. The blind spots history has exposed again and again.
This isn’t fear-mongering. I’m not saying we should stop building or exploring AI. The cat’s already out of the bag, and much of what’s happening is remarkable. We’re likely entering an era of discoveries that will radically reshape what’s possible in science, medicine, education, and even creativity.
But what concerns me isn’t the progress. It’s our arrogance.
We often think we can control systems far more complex than we actually understand. We’ve made that mistake before, with nuclear power, financial systems, and the internet, and every time we’ve learned the same hard truth: we’re not as prepared as we thought.
What makes AI different is how fast it’s moving, how invisible its risks are, and how hard the damage will be to reverse once it’s done.
Yes, some dangers are obvious: weaponization, manipulation, data exploitation. But others are subtle. They creep in through overconfidence.