Supremacy - The Race Nobody Saw Coming
Parmy Olson's "Supremacy: AI, ChatGPT, and the Race That Will Change the World" won the 2024 Financial Times Business Book of the Year Award. After reading it, I understand why. This isn't another breathless book about how AI will transform everything - we've had enough of those. This is investigative journalism at its finest, revealing what's actually happening behind the closed doors of the labs building artificial general intelligence.
What Makes This Different
Most tech books feel like they're written from press releases and public blog posts. Olson did the work - she got inside OpenAI, DeepMind, Anthropic, and the other key players. She talked to the people actually building these systems, the researchers who left because of ethical concerns, the executives making billion-dollar bets, and the regulators trying to figure out what to do about all of it.
The result is a book that reads like a thriller while being meticulously researched. You get the sense that this is what's really happening, not the sanitized version these companies want us to see.
The OpenAI Story You Haven't Heard
Everyone knows the ChatGPT story - or thinks they do. Olson shows you what was happening inside OpenAI before the launch, during the explosive growth, and through the chaos that followed. The tension between the "safety-focused non-profit" narrative and the reality of raising billions from Microsoft. The researchers who left because they couldn't reconcile what they were building with what they were being told.
The most fascinating part? She shows how OpenAI went from an organization formed specifically to counter Google's AI dominance to becoming exactly what it feared - a secretive lab racing to build AGI as fast as possible, consequences be damned.
DeepMind's Quiet Dominance
While everyone was obsessing over ChatGPT, DeepMind was playing a different game entirely. Olson's account of how DeepMind operates within Google reveals something critical - they've been thinking about AGI safety and alignment longer than anyone else, and they're still the most technically sophisticated lab in the world.
The AlphaFold story alone is worth the price of admission. Cracking protein-structure prediction wasn't just a scientific breakthrough - it demonstrated that AI could make fundamental contributions to human knowledge, not just generate convincing text. DeepMind proved AI could do science, not just simulate understanding.
The Race That Matters
Here's what Olson captures that most coverage misses - this isn't a race to build the best chatbot or the most impressive demo. It's a race to artificial general intelligence, and the stakes couldn't be higher. The companies involved know this. The question isn't whether AGI is possible - everyone she talks to believes it is. The question is who builds it first, and what safeguards exist when they do.
What's terrifying is how few people are making these decisions. A handful of labs, a few dozen key researchers, maybe a hundred people total who understand what's being built and have any influence over how it's deployed. That's it. That's who's deciding the future of intelligence on Earth.
The Safety Question
Olson doesn't take a position on AI safety - she doesn't need to. She just shows you what the people building these systems are saying to each other when they think nobody's listening. Some are genuinely worried. Others think the concerns are overblown. Most are so focused on the technical challenges that they haven't thought through the implications.
The most chilling moments come when she talks to researchers who left these labs because they couldn't stomach what they were being asked to build without proper safety measures. These aren't AI skeptics or Luddites - these are people who understand the technology better than almost anyone, and they walked away because they couldn't sleep at night.
What She Gets Right
Olson understands that this story isn't really about technology - it's about people, power, and the choices we make when transformative capabilities emerge faster than our ability to think through their implications.
She captures the hubris of Silicon Valley mixed with genuine technical brilliance. The idealism that launched these projects alongside the commercial pressures that distorted them. The researchers who believe they're saving the world and the executives who see a once-in-a-generation business opportunity.
Most importantly, she shows how the narrative shifted from "we need to be careful" to "we need to move fast before someone else does" - and how that shift happened so gradually that nobody really noticed until it was too late to reverse course.
What's Missing
The book focuses heavily on the Anglo-American AI ecosystem. There's some coverage of Chinese AI development, but it feels cursory compared to the depth of the OpenAI and DeepMind reporting. Given that China is arguably the other major player in this race, that's a significant gap.
I also wanted more technical depth in places. Olson writes for a general audience, which is fine, but sometimes she skips over important technical distinctions that matter for understanding what these labs are actually building.
Who Should Read This
If you work in tech and haven't been paying attention to the AGI race, read this. If you're in leadership and need to understand where AI is heading beyond the marketing hype, read this. If you're trying to make sense of why every tech company suddenly has an AI strategy, read this.
Hell, if you're just curious about what's actually happening in AI beyond the ChatGPT demos and Gemini announcements, read this. Olson gives you the story behind the story, and it's more interesting - and more concerning - than the public narrative suggests.
The Uncomfortable Truth
What makes "Supremacy" so valuable is that it forces you to confront an uncomfortable reality: we're in the middle of the most significant technological race in human history, and most of us are just spectators. The decisions being made right now in a few labs in San Francisco, London, and Beijing will shape the trajectory of intelligence itself.
Olson doesn't offer easy answers because there aren't any. She just shows you what's happening and lets you draw your own conclusions. For me, the conclusion is clear - we need more transparency, more diverse voices in these conversations, and a lot more time to think through the implications of what we're building.
Unfortunately, time is the one thing nobody in this race seems willing to take. And that's what makes this book both essential reading and profoundly unsettling.
Read it. Then decide whether the people building AGI should be moving this fast with this little oversight. I suspect you'll have the same answer I did.