Meta is running for the hills after a $10 billion security leak, while OpenAI stays to investigate. Are the industry’s biggest secrets finally out?
Meta just hit the panic button.
The tech giant has frozen all work with Mercor, its $10 billion AI data partner. This is less a leak than a full-blown security disaster. But while Meta sprints for the exit, OpenAI is staying put to run its own investigation.
This mess offers a rare glimpse of the brittle infrastructure behind the AI boom.
The breach didn’t come from a direct hack.
It started with a poisoned open-source tool called LiteLLM. A group called TeamPCP hid a "worm" inside code that millions of developers trust. When Mercor used it, the hackers walked right in. They reportedly stole four terabytes of data, including the highly guarded blueprints for training AI models.
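The attack path described above, a trusted open-source dependency carrying hidden malicious code, is a classic supply-chain compromise. One standard defense is to pin each downloaded artifact to a known cryptographic hash, so a tampered copy is rejected before it ever runs. Here is a minimal, illustrative sketch of that idea; the names and data are hypothetical and not taken from any report on this breach:

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Return True only if the artifact's SHA-256 digest matches the pinned value."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Simulate pinning a known-good package and then checking a tampered copy.
good = b"package contents v1.2.3"
pinned = hashlib.sha256(good).hexdigest()  # recorded when the dependency was vetted

print(verify_artifact(good, pinned))                        # genuine copy passes
print(verify_artifact(good + b" injected payload", pinned)) # tampered copy fails
```

Package managers offer the same protection natively, for example `pip install --require-hashes`, which refuses any dependency whose digest does not match the lockfile. It does not stop a compromise of the upstream project itself, but it blocks silent swaps between vetting and deployment.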
Meta’s reaction tells the real story. They didn’t just pause. They cut the cord indefinitely. That suggests they found something truly ugly in the logs.
OpenAI is playing it cool, but they are clearly on edge. If a hacker has the blueprints for how these models are “taught,” the multi-billion dollar edge these companies have disappears.
The 40,000 contractors are the real victims.
Their work was paused with zero warning. Many of their Social Security numbers also leaked. They are the hidden labor of the AI era, and they are always the first to bear the brunt.
The AI supply chain is a mess. If one bad tool can topple a $10 billion partner, the foundation is rotten.