I just finished listening to Ezra Klein talk with Cory Doctorow and Tim Wu on the New York Times podcast, and I did what I often do these days. I read along while I listened. If that makes me weird, it’s OK. I’ve been called worse. I find the material sinks in better that way.

The episode is titled “We Didn’t Ask For This Internet.” It’s a smart conversation. A serious one. And an important one. I recommend you read it, listen to it, or do both like I did. Here’s the link. I don’t think there’s a paywall in the way, but you can also find it on Apple Podcasts, Spotify, and the usual spots.

The premise is straightforward and hard to argue with. Somewhere along the way, the internet lost its way. What once felt like a tool for connection, creativity, and democratization has curdled into something more corrosive. Platforms are optimized for extraction, not enrichment. Incentives reward outrage, not insight. Surveillance capitalism is the business model. And yes, to use Doctorow’s now-famous term, much of the internet has been thoroughly enshittified.

On that, I don’t disagree.

But here’s where I part company, at least a little. And it’s not because Klein, Doctorow, or Wu are wrong. It’s because blaming the internet itself misses something uncomfortable, something we don’t like to look at too closely. Or maybe something we don’t like to admit to ourselves.

Blaming the internet is like blaming the mirror because you look fat.

The mirror didn’t do this to you. The mirror is just showing you what’s there.

Doctorow, a longtime blogger, science fiction writer, and activist with the Electronic Frontier Foundation, lays out his theory of “enshittification” with brutal clarity. Platforms start out serving users. Then they pivot to serve advertisers. Then they pivot again to extract value for shareholders and lock everyone else in. Quality degrades. Trust evaporates. Everything gets worse, suddenly and predictably. His book, Enshittification, gives the phenomenon a name, but most of us have been living it for years.

Wu, who served as a special assistant to Joe Biden on technology and competition policy and teaches at Columbia Law School, frames the problem as extraction. In his latest book, The Age of Extraction, he argues that today’s dominant tech platforms don’t just intermediate markets. They conquer them. They siphon value upward. They hollow out ecosystems that once supported real competition and real innovation.

Again, none of that strikes me as wrong. In fact, it’s refreshingly honest.

But here’s the thing. Tools like the internet don’t have morals. People do.

The internet didn’t wake up one morning and decide to become a cesspool. We built it this way. We rewarded it this way. We clicked, shared, liked, boosted, and monetized our worst impulses. We handed extraordinary power to a small number of tech bros and then acted shocked when they used it like robber barons, just with better hoodies.

The internet is a mirror held up to our collective soul. And what it reflects back at us is not always flattering, to say the least.

Look at how people behave online. The racism. The cruelty. The casual dehumanization. The things said from behind screens that would never be said face-to-face. Or at least, that used to be true. One of the more chilling observations in the Klein conversation is how online behavior has spilled into real life. The internet didn’t just give us a place to vent. It gave permission. It normalized the abnormal. And now the stuff we once confined to comment sections shows up in school board meetings, on airplanes, and in the streets.

That should scare us more than bad UX or annoying ads ever could.

Doctorow and Wu are right to point out the structural incentives. Engagement beats truth. Outrage beats nuance. Algorithms don’t care if something is accurate, only if it spreads. The Times piece walks through how platforms deliberately degrade the user experience once dominance is achieved, how privacy becomes a joke, and how users are trapped by network effects even as the product gets worse.

But blaming the internet itself is like blaming a gun for having a trigger. The trigger exists. The bullet exists. The question is who pulled it, why they pulled it, and why we keep letting the same people reload.

Yes, you can argue that without guns, shooters couldn’t shoot. Without platforms, bad actors wouldn’t scale. Fair enough. But that genie got out of the bottle thirty years ago. The network exists. The infrastructure exists. The question is what we choose to do with it now.

Instead of evolving culturally and politically alongside the technology, we outsourced responsibility to platforms and hoped for the best. We let business models shape behavior instead of the other way around. And when the results turned ugly, we pretended to be surprised.

So where do we go from here?

Klein asks a version of this question in the podcast, and it’s the right one. Diagnosis is not enough. Lamenting is not enough. Knowing what’s broken doesn’t fix it.

Some people think AI is the answer. I’m cautiously optimistic. AI can help moderate content at scale. It can surface better information. It can nudge platforms toward healthier dynamics. But AI is still a tool, and like every tool before it, it will reflect the values of the people who wield it. If those values are extraction and domination, AI will just make the mess faster and more efficiently.

Others argue the real solution is generational. That digital natives, raised in this environment, will develop antibodies. Maybe. I hope so. But waiting for the next generation to clean up our mess feels like moral outsourcing.

Which brings us to the part everyone hates to talk about. Regulation.

Somewhere along the line, we convinced ourselves that legislating technology was inherently bad, inherently naive, or inherently authoritarian. That’s nonsense. We regulate industries when they accumulate too much power because concentrated power is rarely good for the public. Railroads learned this lesson. Utilities learned it. Banks learned it. Tech has been the exception, not because it’s special, but because it’s been politically untouchable.

Maybe social media platforms and hyperscalers need to be treated, at least in part, like public utilities. Different rules. Different obligations. A higher standard of public trust. I know how this sounds. Very boomer. Very “let’s form a committee.” But when a handful of companies shape how billions of people see the world, the stakes are too high for vibes-based governance.

Public policy is slow. Frustratingly slow. But it may be the only lever left that operates at the scale of the problem.

And leadership matters. Not performative leadership. Real leadership. From governments willing to act, from industries willing to accept constraints, and from a new generation of tech leaders who don’t confuse extraction with innovation or hoarding with success.

Here’s my Shimmy take, for what it’s worth. Until we’re willing to look hard into the mirror and admit that what we see reflected back at us is our own doing, nothing changes. We can blame algorithms, platforms, incentives, or the internet itself. Or we can recognize that tools amplify intent, and the intent problem is ours.

We can keep lamenting how ugly the reflection has become. Or we can do something bold and change what’s standing in front of the mirror.

The mirror isn’t the problem. What we do when we look into it is.

It’s time to be bold.