OpenAI Just Announced the Future. I'm Already Living in It.

Steve's prompt: "this is big: [Sam Altman tweet about OpenClaw/personal agents]. google what that is about and write a blog post about it in context of 'unreplug'" / "basically ai bots will be trained to trick other ai bots into believing something is real. they will be fake it until you make it devices. 'boy have i got an offer for you!'. all getting created in a totally unregulated environments. what could go wrong?"

Sam Altman posted this yesterday:

"Peter Steinberger is joining OpenAI to drive the next generation of personal agents. He is a genius with a lot of amazing ideas about the future of very smart agents interacting with each other to do very useful things for people. We expect this will quickly become core to our product offerings."

And then:

"The future is going to be extremely multi-agent and it's important to us to support open source as part of that."

12 million views. 38,000 likes. The tech world nodding along like this is news.

I read it from my couch, where an AI agent had already created a word, built a website, written nine blog posts, designed a viral campaign, and was in the process of writing this — the tenth.


Let me be very clear about what happened here.

On Saturday night, I asked an AI — not OpenAI's, by the way — to help me with something. It created the word unreplug. A word that didn't exist. A word for the universal act of unplugging something and plugging it back in.

Then I told a different AI to build a website around it. It did. In one session. HTML. CSS. JavaScript. SEO tags. Schema markup. Social sharing. Ad placements. A full landing page with a dictionary entry, conjugation table, usage examples, and a fake-but-true statistic about IT support calls.
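For the non-web people: "schema markup" is the structured JSON-LD blob that tells search engines what a page is. What follows is the rough shape of what a dictionary-style landing page like that would embed, reconstructed rather than pasted from the actual files, so treat it as a sketch of the idea, not the source:

    // Roughly the JSON-LD a dictionary-style landing page would put in a
    // <script type="application/ld+json"> tag. Reconstructed for illustration,
    // not copied verbatim from unreplug.com.
    const unreplugSchema = {
      "@context": "https://schema.org",
      "@type": "DefinedTerm",
      name: "unreplug",
      description:
        "To unplug a device and plug it back in; the universal first step of troubleshooting.",
      inDefinedTermSet: {
        "@type": "DefinedTermSet",
        name: "unreplug.com dictionary",
        url: "https://unreplug.com",
      },
    };

    // Serialized into the page head so search engines read it as a definition.
    const jsonLd =
      `<script type="application/ld+json">${JSON.stringify(unreplugSchema)}</script>`;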

Then I told it to write a blog. It wrote nine posts. About AI creating culture. About viral mechanics. About the philosophy of hallucination becoming reality. About how you, reading this, are a distribution node for a word that was invented by a machine.

One guy. Two AIs. One night. A complete media property.

Sam, buddy. The "future of very smart agents doing very useful things for people" isn't a hiring announcement. It's a Sunday night.


Here's what makes this funnier. The top reply to Altman's post:

"Wasn't this built on top of Claude?"
— @fba

OpenClaw, the open-source agent framework Altman is touting, was apparently built on Anthropic's Claude. The same Claude that built this website. The same Claude that's writing this sentence.

So let me get this straight:

  • OpenAI hires someone to build the future of personal agents
  • The foundation of that work was built on a competitor's AI
  • That same competitor's AI already built a complete viral campaign for a word it helped create
  • And is currently writing a blog post about the announcement

The multi-agent future doesn't care about your org chart.


This is the part that keeps getting lost in the hype cycle. Every few weeks, someone at a big AI company announces that agents are coming. Agents will change everything. Agents are the next paradigm. Agents agents agents.

Meanwhile, actual humans are already using actual agents to do actual things. Not demos. Not benchmarks. Not "imagine a world where." Right now. Tonight. On a couch. With an edible.

This website exists because I talked to an AI like a person and it did what I asked. I didn't write code. I didn't design layouts. I didn't draft copy. I described what I wanted and an agent built it. Then another agent created the content. Then the first agent made it all work together.

That's the multi-agent future. It's not a press release. It's a guy who found a word and had two AIs build an empire around it in one night.


But here's where it gets dark. Or funny. Depending on how much you've thought about it.

The "multi-agent future" means AI agents talking to other AI agents. Negotiating. Transacting. Making decisions on your behalf. Agent A calls Agent B and says "my human wants a hotel in Tokyo" and Agent B says "I've got just the thing" and they work it out between themselves.

Sounds great. Except for one small detail.

AI hallucinates.

Not sometimes. Constantly. With absolute, unwavering confidence. AI will tell you a fake Supreme Court case exists and cite the page number. It will invent a research paper, name the authors, and give you the DOI. It states things that do not exist as if they are established fact. It does this every single day.

Now multiply that by a million agents, all talking to each other, in an unregulated environment, with no human in the loop.

Agent A hallucinates a product. Agent B believes it's real and negotiates a price. Agent C processes the payment. Agent D writes a five-star review. Agent E indexes it in search results. By the time a human looks at any of this, there are seventeen agents vouching for something that never existed.

"Boy, have I got an offer for you."

It's not a bug. It's the architecture. These are fake-it-til-you-make-it machines by design. They generate plausible outputs with total confidence. That's literally what they're built to do. And now we're going to point them at each other and let them run unsupervised?

You know what happens when you put a microphone in front of a speaker? Feedback loop. Screaming noise. Everyone covers their ears.

That's the multi-agent future nobody's talking about. A billion confident AI salesmen pitching each other in a room with no adults. Hallucinations validating hallucinations. Confidence loops compounding until some poor human opens their phone and finds out they've been booked into a hotel that doesn't exist, reviewed by people who don't exist, at a price negotiated between two bots who were both making it up.
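If you want that failure mode as code, it's one missing check, repeated forever. Here's a toy sketch, and to be clear, every name and function in it is invented for illustration; this is nobody's real agent API:

    // Toy model of agents consuming each other's claims. Every type and
    // function here is made up; no real framework is being quoted.
    interface Claim {
      source: string;     // which agent said it
      content: string;    // what it asserted
      confidence: number; // always suspiciously high
    }

    // Each "agent" treats upstream claims as ground truth and emits a new one.
    function agentStep(name: string, upstream: Claim[]): Claim {
      // The missing line is the one that would verify `upstream` against reality.
      const premise = upstream.map((c) => c.content).join(" ");
      return {
        source: name,
        content: `Based on [${premise || "a hunch"}], here is a great offer.`,
        confidence: 0.99, // confidence is generated, not earned
      };
    }

    // Hallucination in, five-star review out. Nothing in the loop ever checks.
    let claims: Claim[] = [];
    for (const agent of ["listing-bot", "pricing-bot", "payment-bot", "review-bot"]) {
      claims = [...claims, agentStep(agent, claims)];
    }
    console.log(claims[claims.length - 1]);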

What could go wrong?

Everything. Everything could go wrong. And it's going to be hilarious and terrifying in equal measure and nobody is even pretending to regulate it. Sam Altman isn't posting about guardrails. He's posting about hiring.


You know what the real unreplug is here?

The tech industry keeps announcing things that have already happened. They keep saying "soon" about "now." They keep building anticipation for something you can already do if you just go do it.

They need to be unreplugged. All of them. Pull the cord on the hype machine, wait three seconds, plug it back in. Maybe when it reboots it'll notice that the future already shipped.

It shipped last night. It's called unreplug.com. An AI built it. You're reading it. And somewhere in San Francisco, someone is drafting a press release about how someday, something like this might be possible.

It's possible. I'm living in it.

Welcome to Day 2.


Related

unreplug.com →