Welcome to Are We Cooked? with Tor Bair
AI is accelerating. Reality is fragmenting. Why can't the smartest, most informed, most powerful people agree on what's happening? And what does that mean for the rest of us?
Hello, human readers and website scrapers.
This is Are We Cooked?, and I’m Tor Bair. More on who I am in just a moment, but let’s start with where you are.
This Substack is my newest experiment: a public investigation into what’s actually happening with our technology, its new capabilities, and the consequences. I’ll bring all my weird experience and abundant free time to bear—through original writing, podcasts, guest interviews, and the occasional applied deep-dive—and I’m hoping you’ll contribute too.
Why is this so hard to untangle? Well, the cost of content is approaching zero, our online echo chambers are fracturing, and everyone has blind spots and conflicts of interest. That's why you can read a thousand thought-pieces, fall down endless YouTube rabbit holes, and feel further from the truth than when you started.
Or, you could let me torture myself with that burden instead.
Here I’ll attempt to sort fact from fiction, separate hype from reality, and try to understand why smart, informed people keep arriving at wildly different conclusions about the present and future. I’ll show my work, make predictions I can be held accountable for, and update when I’m wrong.
AI is accelerating. Markets are destabilizing. Truth and reality are fragmenting.
So I’m asking: What’s going on? What’s going to happen next? And why do the smartest people I know completely disagree on the answers?
Which leads me to the biggest question of all: Are We Cooked?
Oh, so this is just another AI blog / podcast / whatever?
No.
To the extent AI advancement explains what’s happening or could happen, I want to talk about it. But that’s only one piece of the puzzle.
The real puzzle is: “What if this time is different?”
This gets asked during every technological, cultural, and political upheaval. What if this is the existential turning point? The extinction event? And yet, it never has been.
But new technologies—from AI agents to TikTok to cryptocurrency and beyond—are warping the way that data, power, and money flow. Our world is networked and interdependent like never before. Autonomous and intelligent non-human actors, whose abilities rival and surpass our own, are beginning to interface with critical global information systems like social media and financial protocols. Their workings and intentions are often inscrutable, and our control of them is beginning to falter.
The potential of these technologies to create unprecedented positive change is real. But the potential to cause unprecedented harm is real too: AI slop, manipulative bots, exploitative black-box algorithms, the teen mental health crisis, the casinofication of the global economy, accelerating climate change, the erosion of democratic institutions. There have been plenty of reasons to ask: Are We Cooked?
But here’s what has been bothering me most: the people who should agree—who need to agree—don’t. Instead, they’re diverging to opposite poles of reality.
So why can’t we align on the potential and risks of these technologies? How do personal and collective incentives drive these differing perspectives? And are the technologies themselves making it easier or harder to reach the consensus we need to implement—and safely control—those technologies at scale?
That’s what I’ll be investigating here—through writing, conversations, and whatever else seems useful. These are heavy questions. I'll try to treat them that way while remaining, whenever achievable, fun for you to read and hear.
Okay, but who are you, anyway?
I’m Tor. I’ve struggled to articulate a throughline for the things I’ve found interesting enough to pursue in my life. That’s not always helpful for employment opportunities or dinner parties, but it turns out to be very useful when traditional sense-making stops explaining the world.
In 2025, I closed a few long-term chapters in my career. I’ve taken short sabbaticals for experiments in the past (2013, 2017), but after 8 straight years of sprinting and multiple CEO stints, I decided the time was right for another reset. The world has reached an obvious inflection point that I feel compelled to explore and understand—from what I’m hoping is a safe vantage point. Thus this project was born.
I’ve studied game theory and economics, worked as a trader for one of the oldest options market-making firms, and been a Big Tech data scientist. In 2017 I dove full-time into crypto, founding and running multiple startups through the full emotional spectrum: the mania, the crashes, the regulatory chaos, and the occasional moment of clarity. Along the way I picked up an MBA from MIT, which mostly confirmed that the most interesting problems resist any frameworks taught in a business school. (Thankfully, the people you meet can more than make up for that.) Now I run an advisory and coaching firm working with founders, investors, and senior leadership — helping them think through the kinds of problems that don't have clean answers.
I have hobbies, too. I love game design and world-building. I do improvisational performance because I find it strange and wonderful. I obsess over language, cognitive science, and the limits of human and technological potential.
I guess what I do—and what this blog is really about—is synthesis.
I'm less interested in already having the right answers than in understanding why brilliant, well-informed people arrive at completely different ones. That usually means following the incentives, pressure-testing the underlying and unspoken models, and mapping where systems interact in ways nobody planned for.
I've spent my career and life operating in high-uncertainty, real-time environments where being wrong has real consequences—and where finding the truth has critical value. So I have a healthy respect for epistemic humility and a low tolerance for confident nonsense.
I'm not here to tell you what to believe. I'm here to drag the most interesting disagreements into the open, show you how I'm working through them, and occasionally make a prediction I'll be held accountable for. Some of it will be wrong. That's the point.
I’m still reading. How can I help?
Thanks for asking!
First of all, please subscribe. It’s free, it takes ten seconds, and it’s the single most useful thing you can do to keep this project active. Every subscriber signals that these questions are worth pursuing—and that makes it easier for me to attract the guests, collaborators, and resources that will make this better over time.
I am also collecting your stories and ideas. If you have your own story or perspective to share on these questions, please get in touch or comment here. Or if you have a guest to recommend, someone whose thinking or work has given you insight into any of these questions. Or if you have a topic, paper, or quote you’d like to see explored. I want to hear from you.
I will be listening to your feedback. If you are studying or reckoning with any of these questions (either existentially or in your day-to-day work), I want your contributions, criticisms, suggestions, and honesty. If I only wanted sycophantic support of my work, I’d just ask an LLM!
Finally, I’d love for you to share my work. With greater reach and engagement, I can pursue higher-profile or harder-to-reach guests, tackle larger projects, and do more to answer these critical questions. I want to make this project worthy of the time you give me in return for mine.
We all know social media algorithms and long-form explorations are fundamentally incompatible. If content isn’t commercial or finely-tuned ragebait, it can be invisible. That’s why I intend for this project to grow at human speed: its future relies on your referrals and direct distribution, not on TikTok dances.
So thank you for reading. This doesn’t happen without you.
Let’s get cooking. 👨🏻‍🍳
Follow me (Tor) on X:
Follow Are We Cooked? on X:
➡️ https://x.com/arewecookedhq
And please share this Substack with your friends :)


