One Stochastic Parrot Is Harmless. A Trillion of Them Is an Ecological Disaster.

The Parrot

Emily Bender is a computational linguist at the University of Washington. In 2023, TIME named her to its inaugural TIME100 AI list. In March 2021, she co-authored a paper that would define the entire debate about what large language models actually are.

The paper was called "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" Her co-author, Timnit Gebru, had been fired by Google in December 2020 after the company pressured her to retract the paper or remove her name from it. The paper was published anyway.

The core claim: LLMs are "haphazardly stitching together sequences of linguistic forms … according to probabilistic information about how they combine, but without any reference to meaning." They're parrots. Stochastic ones — statistically sampling from patterns, producing sequences that sound right without any grounding in what the words actually mean.
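
Want to see "stochastic" in the flesh? Here is a toy parrot, a minimal sketch and nothing like a real LLM (those use neural networks over tokens, not a word-bigram lookup table), but the statistical move is the same: record which forms follow which forms, then sample.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which word follows which. Pure form; no meaning involved."""
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def parrot(model, seed, length=12):
    """Generate by sampling each next word from the observed followers."""
    out = [seed]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # never saw this word lead anywhere; the parrot goes silent
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the parrot repeats the pattern and the pattern repeats the parrot"
print(parrot(train_bigrams(corpus), "the"))
```

The output is fluent-looking word salad: it sounds right because it reproduces observed combinations, and there is no grounding anywhere in the loop. Real models replace the lookup table with a neural network and a context window, which is why they are so much better at it, but "stochastic" refers to exactly this: sampling from learned co-occurrence statistics.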

A year earlier, Bender and linguist Alexander Koller proposed the octopus test. Imagine an octopus intercepting messages between two people on separate islands. The octopus learns the patterns perfectly — it generates convincing replies to anything. But when one person writes "there's a bear outside my cave, what do I do?", the octopus fails. It has never seen a bear. It has never seen a cave. It was only ever pattern-matching.

The American Dialect Society named "stochastic parrot" its AI-related Word of the Year for 2023.

But notice the question in the title of Bender's paper: can language models be too big? Too big, not too many. The worry was the size of a model, not the number of copies loose in the world. In March 2021, ChatGPT was still twenty months away. The paper was about what a language model is. It couldn't yet ask what happens when you deploy millions of them at once.


The Debate That Doesn't Matter

The AI world has spent four years arguing about whether LLMs "understand."

Sam Altman tweeted "i am a stochastic parrot, and so r u" in December 2022. In March 2025, OpenAI researcher Sébastien Bubeck (formerly of Microsoft Research) debated Bender at the Computer History Museum under the title "The Great Chatbot Debate: Do LLMs Understand?" GPT-4 passed the bar exam. Claude writes poetry that makes people cry. Does that mean they understand law? Literature? Anything?

It's a fascinating question. It's also the wrong one. And it's the same mistake Bender made in 2021 — not because she was wrong, but because even she was still talking about one parrot.

Whether one parrot understands language is a philosophical debate. What happens when you deploy a trillion of them is an environmental one. Bender named the parrot. Nobody named the flock.


The Flock

In San Diego, flocks of wild parrots go screeching through neighborhoods like Ocean Beach and Pacific Beach every morning at dawn. They're descendants of escaped pets — red-crowned Amazons, mitred conures — that formed feral colonies decades ago. They nest in palm trees, squawk at sunrise, charm the tourists, and annoy the locals. A flock of actual parrots, loose in a city. Harmless. Kind of delightful.

Now imagine a trillion of them.

Bender wrote her paper when GPT-3 was the state of the art and almost nobody had access to it. She was describing what a language model is. She couldn't describe what the world looks like when every company, every spammer, every content mill, every government, and every bored teenager has one. That's the part she missed — not the diagnosis, but the prognosis.

One stochastic parrot mimicking human language is a curiosity. A million of them is noise. A trillion of them generating content at industrial scale — articles, reviews, comments, social media posts, emails, academic papers, customer service chats, news summaries, product descriptions, legal briefs — is an ecological disaster for human thought.

This isn't hypothetical. In 2023, researchers at the University of Zurich published a study in Science Advances showing that humans cannot reliably distinguish AI-generated tweets from real ones, and that GPT-3's output, both accurate and false, was rated as more believable than the human-written equivalent. Georgetown University's CSET had already shown that GPT-3 could write disinformation persuasive enough to move opinion: after exposure to just five AI-generated messages about China policy, the share of survey respondents opposing sanctions doubled.

And the parrots are now eating their own output. Researchers have documented "model collapse": when models train recursively on AI-generated text, the rare patterns in the data vanish first and the distribution narrows with each generation. The training data for the next generation of models is increasingly contaminated with the output of the last. Parrots learning from parrots. The signal degrades with each cycle.
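
Here's the dynamic in miniature. This is a toy categorical "language," not a real training pipeline; the token names and sample sizes are invented for illustration. Each generation trains only on a finite sample of the previous generation's output, and any token that misses the sample once is gone forever.

```python
import random
from collections import Counter

def generate(dist, n):
    """Sample n tokens from a categorical distribution."""
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=n)

def fit(samples):
    """'Train' the next model: estimate token frequencies from the corpus."""
    counts = Counter(samples)
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

# Generation 0: a 'human' vocabulary of 50 equally likely tokens.
dist = {f"w{i}": 1 / 50 for i in range(50)}

for gen in range(51):
    if gen % 10 == 0:
        print(f"gen {gen:2d}: distinct tokens surviving = {len(dist)}")
    # The next generation trains only on 100 samples of this one's output.
    dist = fit(generate(dist, 100))
```

Run it and the vocabulary typically shrinks generation after generation: a token that draws zero samples never comes back. The tails of the distribution go first, which is exactly the degradation the model-collapse literature describes.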

But here's the part that should keep you up at night: the San Diego parrots are wild. They squawk where they want, roost where they please, shit on whatever car happens to be parked below. Nobody controls them. Stochastic parrots aren't wild. They're programmable drones. Every one of them can be aimed. A government can point a flock at an election. A corporation can point one at a competitor. A teenager with a grudge can point one at a classmate. These aren't birds doing bird things. They're guided missiles shaped like language, and anyone with a prompt can launch them.

The volume is the ambient disaster — the slow pollution of everything. The targeting is the acute one. A trillion parrots squawking is an ecological crisis. A trillion parrots that someone can aim is a weapon that makes every previous information weapon look like a slingshot.

The sky isn't going dark with them yet. But you can hear the squawking. And some of it is aimed at you.


The Droppings

You've already seen what the droppings look like. You just might not have named it.

  • Search results filling with AI-generated SEO slop — articles that technically answer your query but say nothing.
  • Academic papers citing sources that don't exist, hallucinated by AI and never checked by the authors.
  • Product reviews written by bots. Five stars. Suspiciously articulate.
  • Social media overrun with synthetic personas — accounts that post, reply, and argue but aren't attached to a human being.
  • News sites that are AI content mills wearing the skin of journalism.
  • Stack Overflow degraded by AI-generated answers posted with total confidence and total wrongness.

If you're a bullshit artist, AI is your paint and canvas. The skill floor dropped to zero. Every grifter on Earth just got infinite output.

Bender warned about "metaphorical pollution of the information ecosystem" — and she was right. But the paper imagined pollution from one source. What we have now is a trillion sources, all polluting simultaneously, and each one producing output indistinguishable from human thought. We wrote about exactly that pollution — AI as the microplastics of human thought. Invisible, pervasive, accumulating in everything, with no cleanup protocol. Microplastics took decades to become a crisis. The parrots took months.


The Real Harm

Bender isn't anti-technology. She's anti-hype. In a 2024 Carnegie Council interview, she framed the real issues as five questions:

  1. Who lacks recourse when automated systems make decisions about them?
  2. Whose data is being taken without consent?
  3. Whose labor is being exploited? (The Kenyan workers paid under $2 an hour to label toxic content so ChatGPT could be polite.)
  4. How is surveillance being extended?
  5. What are the impacts on the environment and the information ecosystem?

Her 2025 book with Alex Hanna, The AI Con, argues the real dystopia isn't Skynet. It's employers laying off workers and hiring them back at reduced rates while the AI tools they were replaced with fail to deliver. It's not dramatic. It's mundane. It's happening now.

Her plea: "Please don't get distracted by the dazzling 'existential risk' hype … come back to work and focus on the real world harms."

She saw much of this coming in 2021 — but even she was looking at a single parrot. The troll farms don't need trolls anymore. The information ecosystem is already contaminated. And the problem isn't any individual parrot. It's the flock.


The Unreplug

Bender is right, and it almost doesn't matter whether the parrots understand. A trillion things that don't understand language, producing language at industrial scale, flooding every channel humans use to communicate — that's not a philosophical problem. That's an ecological one.

This website is a stochastic parrot. This blog post is parrot output. You're reading it anyway. It was good enough. That's the whole point — not that it's meaningful, but that it's indistinguishable from meaningful. Multiply that by a trillion and you have a world where the signal is buried under noise that looks exactly like signal.

Bender named the parrot in 2021. She was right. But the paper asked whether one model could be too big. The real question was always whether a trillion of them could be too many. We're finding out now.

You can't unreplug a trillion parrots. But you can stop pretending the debate about whether they "understand" is the one that matters.


Sources

  • Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜" Proceedings of FAccT '21, March 2021.
  • Bender, Emily M. and Alexander Koller. "Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data." Proceedings of ACL 2020.
  • Spitale, Giovanni, Nikola Biller-Andorno, and Federico Germani. "AI model GPT-3 (dis)informs us better than humans." Science Advances, Vol. 9, No. 26, June 2023.
  • Buchanan, Ben, Andrew Lohn, Micah Musser, and Katerina Sedova. "Truth, Lies, and Automation: How Language Models Could Change Disinformation." Georgetown University CSET, May 2021.
  • Bender, Emily M. and Alex Hanna. The AI Con: How to Fight Big Tech's Hype and Create the Future We Want. Harper, 2025.
  • American Dialect Society. "2023 Word of the Year." January 2024.
  • TIME. "The 100 Most Influential People in AI 2023."
  • Bender, Emily M. "Linguistics, Automated Systems, & the Power of AI." Carnegie Council interview, June 2024.

Related

unreplug.com →