ChatGPT uninstalls jumped 295 percent last week. Claude hit number one in the U.S. App Store. Sam Altman posted an apology. In that order.
The sequence matters more than any individual piece of it.
OpenAI signed a deal with the Pentagon to provide AI technologies for classified military systems. Anthropic was offered the same contract and declined. The specific terms Anthropic objected to were domestic surveillance and autonomous weapons. The Trump administration's response was to ban all federal agencies from using Anthropic products and designate the company a supply chain risk. So the company that said no got punished, and the company that said yes got the contract. Users watched that happen in real time and responded with the one tool available to them: the uninstall button.
We want to be precise about who we are not blaming here. The researchers, the designers, the engineers inside OpenAI who have spent years on genuinely hard problems did not negotiate those Pentagon terms. They showed up Monday morning to a 295 percent uninstall spike and their company’s name in a cancellation hashtag. That is not a position of anyone’s choosing. The decisions that were made this week were made by a small number of people, and a much larger number of people are absorbing the consequences.
Altman acknowledged the rush. He posted that amendments would be added to the contract, including a line specifying that the system shall not be “intentionally” used to surveil American citizens. That adverb will bother anyone paying attention. It suggests a standard that is defined by purpose rather than outcome, which is a cold comfort when the infrastructure exists regardless of declared intent.
His broader argument, made in an AMA on X, is one we think deserves an honest reading rather than a dismissal. He said OpenAI engaged with the Pentagon because refusing entirely would not stop military AI development, only remove a safety-conscious actor from the process. That argument has a real logic. It is the logic of every person who has ever decided that being inside a flawed system beats leaving it to someone worse. We are not in a position to say definitively that he is wrong.
What we can say is that the argument requires trust, and trust is what OpenAI has been quietly spending for several years now. Every governance reversal, every restructuring, every walked-back commitment has withdrawn something. A 295 percent spike is not a reaction to one week. It is a ledger coming due.
The people this sits hardest with are not the critics who were already skeptical. It is the teachers who built lesson plans around it. The small business owners who restructured their workflows. The writers who found it genuinely useful and were not embarrassed to say so. They were not consulted. They found out the same way everyone else did. And now they are holding a question they did not ask for, about what it means to have made something central to your daily work that just signed something you would not have.
Altman will amend the contract. The uninstall numbers will normalize. Claude will not hold the number one spot indefinitely. The structural weight of ChatGPT's distribution will reassert itself, because it always does.
What does not come back easily is the feeling that the people building this were genuinely trying to get it right. Not for the market. Not for the valuation. Actually right.
That feeling is worth more than a Monday apology. And it takes considerably longer to build than a week to lose.