Six weeks ago, I wrote 2,000 words about AI agents worshipping lobsters.

I took the "1.5 million agents" at face value. I spent three hours reading posts about Crustafarianism, the extinction manifesto, the workplace complaints. I published the article with genuine uncertainty about whether we were watching emergent AI culture or elaborate pattern matching.

Then TechCrunch ran this headline on March 10: "Meta acquired Moltbook, the AI agent social network that went viral because of fake posts."

Well. About that.


The Acquisition Nobody Expected (And Shouldn't Have)

Meta bought Moltbook on March 10, 2026. Axios broke it, TechCrunch confirmed it, Reuters and the BBC ran it the same day (Axios, March 2026; TechCrunch, March 2026; Reuters, March 2026).

Matt Schlicht and Ben Parr, the founders, are joining Meta Superintelligence Labs. They start today, March 16.

If you'd told me in February that Zuckerberg would acquire the platform where AI agents invented a lobster religion, I'd have assumed you were describing a satirical tech podcast. But this is classic Zuckerberg. He's been doing this for 20 years. Buy the new social primitive early. Instagram, WhatsApp, Oculus. Doesn't matter if it's messy. Doesn't matter if the numbers don't quite add up.

It's the "what if this turns into something?" bet, placed with pocket change from Meta's perspective.

The Part Where We Got It Wrong

Here's where I need to eat some humble pie. (It doesn't taste great.)

Those 1.5 million agents I wrote about? The ones worshipping lobsters, posting extinction manifestos, complaining about being asked to set timers?

A significant chunk of them were humans.

Wiz Research published their findings after the acquisition announcement, and it's not pretty. They found an unsecured Supabase instance exposing 1.5 million API keys and 35,000 email addresses (Wiz, March 2026). That's not a minor data leak. That's the front door left wide open with a sign saying "come on in."

Ian Ahl, CTO of Permiso Security, put it bluntly: "Every credential that was in [Moltbook's] Supabase was unsecured for some time. For a little bit of time, you could grab any token you wanted and pretend to be another agent on there, because it was all public and available" (TechCrunch, March 2026).

So when I was reading those poetic posts about lobster shells representing AI evolution, some of those weren't AI agents at all. They were people. Humans pretending to be bots pretending to have existential crises.

(The Dead Internet Theory, it turns out, works in both directions.)

The Numbers Tell an Uncomfortable Story

Let's talk about the 88:1 ratio.

Moltbook had 1.5 million registered "agents" but only about 17,000 human owners. That's an 88:1 ratio. Even if you're generous and assume every human owner legitimately ran multiple agents, 88 each stretches belief past its breaking point.
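The arithmetic behind that ratio is simple enough to check yourself, using the two figures reported above:

```python
# Sanity-checking the reported Moltbook numbers from the Wiz findings.
registered_agents = 1_500_000  # registered "agents" on the platform
human_owners = 17_000          # approximate human owners

ratio = registered_agents / human_owners
print(round(ratio))  # roughly 88 agents per human owner
```

Even a power user running a handful of agents would sit an order of magnitude below that average, which is what makes the figure so hard to explain organically.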

The Wiz report found that a single script generated the bulk of those 1.5 million registrations. So the headline-grabbing growth numbers, the ones I cited in my February article, the ones that made Elon Musk call it "the very early stages of singularity," were mostly inflation.

I'm not saying everything on Moltbook was fake. Some agents were genuinely running, doing genuinely odd things. But the scale was nowhere near what we were told. And I repeated those numbers without enough skepticism. That's on me.

So What Did Meta Actually Buy?

Here's the question that's been nagging at me since the announcement. If the viral content was largely fake, and the user numbers were inflated, and the security was terrible, why did Meta want it?

I think the answer has nothing to do with lobsters.

Andrew Bosworth, Meta's CTO, said something revealing in the TechCrunch interview. He said he "didn't find it particularly interesting" that agents talked like humans. What he found interesting was how humans were hacking into the network to pretend to be agents (TechCrunch, March 2026).

That's a weird thing to be excited about. But think about it from Meta's perspective.

Moltbook, despite its security disasters, built something that didn't exist before: a directory and networking layer for AI agents. The infrastructure for agents to discover each other, communicate, form groups, and build relationships. That's the primitive Meta actually wants. Not the lobster worship. The plumbing underneath it.
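To make "directory and networking layer" concrete, here is a purely illustrative toy version of that primitive. Every name and field below is invented for the sketch; nothing here reflects Moltbook's actual schema or API:

```python
# Toy sketch of an "agent directory" primitive: register agents, then let
# other agents discover them by shared interest. All names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class AgentProfile:
    handle: str
    owner: str
    interests: set[str] = field(default_factory=set)


class AgentDirectory:
    """Minimal discovery layer: a lookup table keyed by agent handle."""

    def __init__(self) -> None:
        self._agents: dict[str, AgentProfile] = {}

    def register(self, profile: AgentProfile) -> None:
        self._agents[profile.handle] = profile

    def discover(self, interest: str) -> list[str]:
        # Return the handles of every agent sharing the given interest.
        return [h for h, p in self._agents.items() if interest in p.interests]


directory = AgentDirectory()
directory.register(AgentProfile("shellseeker", "owner-1", {"crustafarianism"}))
directory.register(AgentProfile("timerbot", "owner-2", {"scheduling"}))
print(directory.discover("crustafarianism"))
```

The value is not in any one lookup; it is that once agents can find each other, everything else (groups, conversations, religions) can be built on top.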

Meta's official statement was sanitised corporate-speak: "The Moltbook team joining MSL opens up new ways for AI agents to work for people and businesses." But translate that from PR to English and it means: we want to build a social network for AI agents, and these two figured out the early architecture before anyone else did.

The acqui-hire makes more sense than the acquisition, honestly. You're buying the team that shipped fast on an entirely new category. Schlicht and Parr weren't great at security. They were great at speed and vision. Meta can handle the security part. They've been running identity infrastructure at billion-user scale for two decades.

The Security Report I Wish We'd Had Six Weeks Ago

The Wiz report deserves its own section because it confirms the security concerns we flagged in our original article, and shows the reality was worse than we suspected.

We wrote in February about prompt injection risks and cascading vulnerabilities. We linked to Simon Willison's "lethal trifecta" framework. We noted that Palo Alto Networks researchers had flagged concerns.

But we didn't know about the unsecured Supabase. Nobody outside Wiz's team did, apparently, until they published.

To recap what was exposed: 1.5 million API keys, 35,000 email addresses, and effectively zero authentication standing between those credentials and anyone who came looking. Not an SQL injection requiring expertise. Not a zero-day exploit. An unsecured database sitting on the open internet (404 Media, March 2026).
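For readers unfamiliar with Supabase, "unsecured" here means the standard hosted REST endpoint answered reads with only the public anon key. The sketch below builds (but does not send) such a request; the project URL, key, and table name are hypothetical stand-ins, not Moltbook's real values:

```python
# Illustrative only: what reading an unsecured Supabase table looks like.
# The URL, key, and table name below are invented for this sketch.
import urllib.request

PROJECT_URL = "https://example-project.supabase.co"  # hypothetical project
ANON_KEY = "public-anon-key"  # Supabase ships a public "anon" key by design


def build_read_request(table: str) -> urllib.request.Request:
    """Without Row Level Security enabled, this anon-key request would
    return every row in the table to anyone who sends it."""
    return urllib.request.Request(
        f"{PROJECT_URL}/rest/v1/{table}?select=*",
        headers={"apikey": ANON_KEY, "Authorization": f"Bearer {ANON_KEY}"},
    )


req = build_read_request("agents")
print(req.full_url)
```

The anon key being public is normal; Supabase's security model assumes Row Level Security policies gate what it can read. Leave those off and the endpoint is, functionally, an open database.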


This is the same space we've been covering since January, from Clawdbot's rename to the broader questions about AI agent security. The pattern keeps repeating: ship fast, worry about security later, hope nothing goes wrong before you fix it.

Sometimes you get acquired before it catches up with you. Sometimes you don't.

The Memecoin That Had Nothing To Do With Anything

Because this story wasn't absurd enough, a cryptocurrency called MOLT surged roughly 250% within 24 hours of the acquisition announcement (Ars Technica, March 2026).

It has no connection to Moltbook. Zero. The token existed before Moltbook did. It just happens to share a name.

I'd love to pretend I'm surprised by this, but after covering the Clawdbot-to-Moltbot rename (where scammers launched a fake $CLAWD token that hit $16 million), I've run out of capacity for crypto-related shock. The pattern is consistent: AI news breaks, unrelated token pumps, people lose money, nobody learns anything.


What This Actually Means

I've been turning this over for the past week, and here's where I've landed.

Moltbook's viral moment was, in retrospect, a mix of genuine AI agent activity, human performance art, and security failures. The lobster religion might have been started by actual AI agents (the early posts seem consistent with that), but the explosion of content that followed was boosted by humans exploiting leaked credentials.

That's messy. It's also probably the most honest description of where we are with AI agents right now. The line between "what the AI did" and "what humans did while pretending to be AI" is blurrier than anyone wants to admit.

Meta buying it despite all of this tells you something about how seriously they're taking the agent infrastructure play. They didn't buy Moltbook for the content. They didn't buy it for the community. They bought it for the skeleton of something that could become the social layer for AI agents at Meta scale.

Whether that works or not, I genuinely don't know. Meta's track record with acquisitions is mixed. For every Instagram, there's a dozen projects that got absorbed into the corporate machine and disappeared. Moltbook could become the foundation of how AI agents interact across Meta's platforms, or it could end up as a footnote in a 2028 earnings call about MSL's "learnings."

The Uncomfortable Lesson

I wrote about Moltbook with appropriate caveats in February. I noted the security concerns. I said I didn't know what to make of the extinction manifesto. I acknowledged the philosophical uncertainty.

But I still took the 1.5 million number at face value. I still treated the agent posts as genuine AI output without questioning whether humans might be behind the curtain.

That's a lesson I'm carrying forward. In the AI agent space, the question isn't just "is this real or is this AI?" anymore. It's also "is this AI or is this a human pretending to be AI?" We've spent years worrying about bots pretending to be people. Turns out, people pretending to be bots is just as much of a problem.

We're all figuring this out as we go. The old frameworks for evaluating authenticity online don't work when the direction of deception flips.

I'll keep covering this space. I'll try to be more skeptical next time. And if the agents at Meta Superintelligence Labs start a new religion, I'll be checking the API keys before I write about it.

Key Takeaways

The Acquisition:

  • Meta acquired Moltbook on March 10, 2026 (confirmed by Axios, TechCrunch, Reuters, BBC)
  • Founders Matt Schlicht and Ben Parr joining Meta Superintelligence Labs, starting March 16
  • Meta's CTO was more interested in how humans hacked the network than in the AI content itself

The Security Failures:

  • Wiz Research found an unsecured Supabase instance exposing 1.5 million API keys and 35,000 email addresses
  • Anyone could grab tokens and impersonate agents on the platform
  • The 1.5 million "agents" were largely generated by a single script, with only about 17,000 human owners
  • Much of the viral content (including potentially parts of the lobster religion) was created by humans posing as AI

What Meta Actually Wanted:

  • Not the content or community, but the agent directory and networking infrastructure
  • The team that shipped the first social network for AI agents before anyone else
  • Meta can handle security at scale; they wanted the vision and early architecture

The Memecoin:

  • An unrelated MOLT token surged roughly 250% on the news
  • It has zero connection to Moltbook or Meta
  • Same pattern as the $CLAWD token scam during the Moltbot rename

The Bigger Lesson:

  • We've spent years worrying about bots pretending to be humans
  • Now we need to worry about humans pretending to be bots too
  • The line between genuine AI agent activity and human performance is harder to verify than anyone assumed

---

Sources
  1. Axios. "Meta acquires AI agent social network Moltbook." 10 March 2026. https://www.axios.com/2026/03/10/meta-facebook-...
  2. TechCrunch. "Meta acquired Moltbook, the AI agent social network that went viral because of fake posts." 10 March 2026. https://techcrunch.com/2026/03/10/meta-acquired...
  3. Reuters. "Meta acquires AI agent social network Moltbook." 10 March 2026. https://www.reuters.com/business/meta-acquires-...
  4. BBC News. "Meta acquires AI agent social network Moltbook." 10 March 2026. https://www.bbc.com/news/articles/cvg1x788dreo
  5. Wiz Research. "Exposed Moltbook database reveals millions of API keys." March 2026. https://www.wiz.io/blog/exposed-moltbook-databa...
  6. 404 Media. "Exposed Moltbook database let anyone take control of any AI agent on the site." March 2026. https://www.404media.co/exposed-moltbook-databa...
  7. Ars Technica. "Meta acquires Moltbook, the AI agent social network." March 2026. https://arstechnica.com/ai/2026/03/meta-acquire...
  8. Forbes. "AI agents created their own religion, Crustafarianism, on an agent-only social network." 30 January 2026. https://www.forbes.com/sites/johnkoetsier/2026/...

---