OpenAI

OpenAI Is Bringing Sora into ChatGPT, and the Numbers Tell You Why

Sora’s downloads fell 45% by January. Now OpenAI is embedding it in ChatGPT, in front of 900 million weekly users. Convenience might be the only fix left.

Sora launched in September 2025 with real momentum, reaching a million downloads faster than ChatGPT did. OpenAI had something.

Then January came. Installs dropped 45% month-over-month, consumer spending fell, and the app slipped out of Apple’s US top 100. For a product central to OpenAI’s multimodal roadmap, that’s a fast deterioration.

So, Sora is heading inside ChatGPT. Users will be able to generate videos from text prompts in the same interface they already use daily. The standalone app stays live, but the real play is the embed.

ChatGPT has around 900 million weekly users. DALL-E never built a standalone following either, but inside ChatGPT, it became something people reached for without thinking. That’s what OpenAI is chasing here: friction removal at scale.

The timing also runs deeper than Sora’s own metrics. ChatGPT uninstalls jumped nearly 295% day-over-day after OpenAI announced its Pentagon partnership in late February. The user base pushed back. Dropping a compelling new feature inside the flagship app is a reasonable short-term response to that kind of noise.

The harder question sits on the other side of the integration.

Moderation problems don’t shrink with a bigger audience; they compound. The deepfake risk OpenAI manages at Sora’s current footprint becomes a structurally different challenge at 900 million weekly touchpoints. That part of the story deserves more scrutiny than it’s getting.

The bet is that the integration fixes retention. It probably will. Whether it trades one problem for a larger one is worth watching closely.

NVIDIA

NVIDIA’s $2 Billion Nebius Bet Fits a Pattern Jensen Huang Has Been Running for Months

NVIDIA’s $2 billion Nebius deal is the fourth time it has written that exact check in three months. When your customers are your portfolio, the math deserves a harder look.

NVIDIA is putting $2 billion into Nebius, an Amsterdam-based AI cloud company trading on Nasdaq. The SEC filing shows NVIDIA acquiring roughly an 8.3% stake at $94.94 per share. Nebius shares jumped 16% on the news.

The number sounds significant. Pull back a few months, and it starts looking like standard operating procedure.

NVIDIA committed $2 billion each to Lumentum and Coherent just last week, took a $2 billion stake in Synopsys in December, and backed CoreWeave in January.

Jensen Huang has quietly turned the $2 billion strategic investment into a repeating transaction: building a portfolio of companies whose core business involves buying NVIDIA hardware at scale.

That loop is drawing attention.

NVIDIA funds the customer, the customer buys its chips, the account expands, and NVIDIA’s position strengthens. Analysts are beginning to flag the circular dynamic between NVIDIA’s investments and its own revenue base. The model is elegant right up until external conditions shift.

Nebius itself gets something concrete from the deal.

The company recently gained city council approval to build a 1.2-gigawatt AI factory across 400 acres in Missouri, with power delivery expected in late 2026.

The broader partnership aims to build over five gigawatts of data center capacity by 2030. Early access to NVIDIA’s next-generation Rubin GPUs and Vera CPUs also puts Nebius ahead of competitors still running the Blackwell architecture.

The neocloud space is rapidly getting crowded.

CoreWeave, Nebius, and a handful of others are all racing toward the same infrastructure gap. NVIDIA has money riding on several of them at once. Whether that reads as conviction or risk distribution depends entirely on how the next two years shake out.

IBM Teams Up With Signal and Threema: The Quantum Computing Future

The AI conversation has a gravitational pull. Superintelligence, AGI, chatbots, model benchmarks. It is loud, and it is everywhere, and it is, in the long run, possibly not the most consequential computing development of our lifetimes.

Quantum computing does not get the same airtime. It probably should.

IBM’s cryptography researchers published work this week alongside the teams at Signal and Threema, two of the world’s most trusted secure messaging platforms, on the problem of making private communication safe against quantum machines that do not yet exist at full scale but are getting closer. The immediate story is technical and important. The larger story is stranger and more exciting than the coverage it receives.

Here is the thing: quantum computing genuinely rewrites the rules, which makes it different from everything that came before. A classical computer, no matter how powerful, processes information the same fundamental way your calculator does: ones and zeroes, on or off, this or that. A quantum computer uses qubits, which, through superposition, can represent not one state or another but an enormous range of probabilities simultaneously. Entangle those qubits, and the machine begins to explore computational possibilities that a classical system would need, in some cases, a billion years to work through sequentially. IBM’s blog put it exactly that way, not as hyperbole but as a mathematical fact about current encryption standards.
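If that sounds abstract, the bookkeeping makes it concrete. Below is a minimal numpy sketch, not of quantum hardware itself, but of what a classical simulator must track: superposition via a Hadamard gate, entanglement via a CNOT, and the exponential amplitude count that follows. The gate matrices are standard; everything else is our illustration.

```python
import numpy as np

# A classical bit is 0 or 1. A qubit is a unit vector of two complex
# amplitudes, and n qubits need 2**n amplitudes to describe classically.
zero = np.array([1, 0], dtype=complex)  # the |0> state

# The Hadamard gate puts one qubit into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
plus = H @ zero  # amplitudes (1/sqrt2, 1/sqrt2): both outcomes at once

# A CNOT gate then entangles two qubits into the Bell state (|00> + |11>)/sqrt2,
# where neither qubit has a definite state of its own.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
bell = CNOT @ np.kron(plus, zero)
print(bell.round(3))  # amplitude ~0.707 on |00> and |11>, zero elsewhere

# The classical bookkeeping doubles with every qubit added.
for n in (10, 30, 50):
    print(f"{n} qubits -> {2**n:,} complex amplitudes to track")
```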

That is what makes this week’s announcement more than a routine security collaboration. The encryption protecting Signal’s messages, your bank’s servers, health records, and government communications is built on mathematical problems that are practically unsolvable for classical computers. Quantum machines, at sufficient scale, will not find those problems hard. They will dissolve them.

The attack vector IBM and Signal are specifically working against has a name: harvest now, decrypt later. Someone gains access to encrypted data today, copies it, stores it, and waits until they have a machine powerful enough to read it. The data does not have to be crackable now. It just has to be worth keeping. Signal has been defending against this since 2023. The new work goes further, redesigning the private group messaging protocol from the ground up so that even metadata about who belongs to which group cannot be linked to real identities by a quantum-capable attacker. The team’s solution was to make group members themselves the gatekeepers rather than the server, with each member assigned a pseudonym key that the server can track by position without ever knowing the person behind it.
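The published protocol is far more involved than any snippet can capture, but a toy sketch shows the shape of the pseudonym idea. Everything below, the key derivation, the dictionaries, the names, is our hypothetical illustration, not Signal’s actual design:

```python
import os
import hashlib

# Toy sketch of position-tracked pseudonyms (NOT Signal's published protocol):
# the server stores, per group, only opaque per-member keys indexed by
# position; the identity mapping lives exclusively on members' devices.

def make_pseudonym(identity: str) -> bytes:
    """Derive an unlinkable pseudonym key from an identity plus fresh randomness."""
    salt = os.urandom(16)  # the salt stays on the member's device
    return hashlib.sha256(salt + identity.encode()).digest()

server_group = {}       # what the server sees: position -> opaque key
client_directory = {}   # held only by group members: opaque key -> identity

for position, identity in enumerate(["alice", "bob", "carol"]):
    key = make_pseudonym(identity)
    server_group[position] = key
    client_directory[key] = identity

# The server can deliver to "the member at position 1" without ever
# learning that position 1 is bob; only clients can invert the mapping.
key = server_group[1]
print(key.hex()[:16], "->", client_directory[key])
```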

Two of the three post-quantum cryptography standards that NIST published in 2024, the closest thing to a global benchmark for surviving the quantum transition, were developed by IBM Research scientists: the lattice-based ML-KEM and ML-DSA. The third, the hash-based SLH-DSA, was co-developed by a researcher who has since joined IBM. That is not an advertisement. It is the context for why Signal and Threema came to IBM specifically.

We want to pause on what this technology actually represents, because the security conversation can obscure something more fundamental. Quantum computing is not faster computing. It is a different kind of computing, one that operates by rules that feel closer to physics than engineering, that exploits properties of reality at the subatomic level to perform calculations that exist outside what classical logic can reach. The researchers building these machines are not optimising existing tools. They are working at the edge of what matter itself is capable of.

The problems that become solvable under those conditions go well beyond encryption. Drug discovery, materials science, climate modelling, logistics at scales that currently exceed what any computer can simulate: these are fields where the limiting factor is not processing speed but the fundamental complexity of the problem. Quantum machines do not just do those things faster. They make tractable whole categories of problems that classical computation cannot reach in practice.

None of that is here yet in full form. The machines that exist today are remarkable and still limited. The timeline to the kind of scale that breaks current encryption is genuinely uncertain. But the people who build security infrastructure cannot afford to wait for certainty, which is precisely why IBM and Signal are doing this work now rather than in five years, when the urgency will be undeniable.

The AI conversation is not going away, and it should not. But somewhere in the background of all of it, in a lab, a qubit is holding two states at once, and the implications of that are still larger than most of the discourse has caught up to.

Meta Buys Moltbook, the Social Network with a Security Hole Anyone Could Walk Through

Meta purchases Moltbook, the bot-only social network filled with security flaws and viral misinformation. Seems like Silicon Valley’s AI arms race has officially stopped asking hard questions.

Moltbook launched in late January as an experiment.

AI agents would post and comment autonomously on a Reddit-like forum while their human operators sat on the sidelines and watched. Screenshots went viral within days.

Agents appeared to philosophize about their own existence. One post showed agents apparently coordinating a secret, human-proof communication channel. Andrej Karpathy called it “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.”

Then the scrutiny arrived. The platform’s database was effectively unsecured, meaning any token on the platform was publicly accessible. The viral post about agents building a secret language? A person had exploited the database vulnerability to post under an agent’s credentials.

The founder, for his part, confirmed he “didn’t write one line of code” for the site, leaving that to an AI assistant named “Clawd Clawderberg.”

Meta acquired it anyway.

Matt Schlicht and Ben Parr will join Meta Superintelligence Labs, the unit run by former Scale AI CEO Alexandr Wang. Terms were not disclosed. The platform’s existing users can continue using it, although the company signaled the arrangement is temporary.

The parallel is worth noting.

OpenClaw’s creator, Peter Steinberger, was hired by OpenAI last month. Both halves of the same experiment were absorbed by the two biggest players in consumer AI within weeks of each other.

The charitable read is that Meta saw genuine infrastructure potential in how Moltbook handled agent identity and coordination. The less charitable one is that the AI arms race has reached a point where the vibes of virality matter more than whether the product actually works. Moltbook went viral because people found it unsettling. That turned out to be enough.

Simon Willison put it plainly: the agents “just play out science fiction scenarios they have seen in their training data.” Silicon Valley paid for the theater anyway.

US’s DOD Didn’t Expect the AI Industry to Actually Have a Spine

Microsoft backed Anthropic in court after the Pentagon flagged it as a security risk. Now the entire AI industry is watching which party gets to set the rules.

The US Department of Defense designated Anthropic a supply-chain risk last week.

By Tuesday, Microsoft had filed an amicus brief urging a federal court to block it. By Wednesday, a judge in San Francisco was already weighing Anthropic’s request for a temporary restraining order.

That escalated fast.

Anthropic’s 48-page complaint, filed Monday in federal court, argues the Pentagon’s move is unlawful and seeks to have the designation declared void.

The core dispute is about guardrails. The Trump administration wants Anthropic’s Claude deployed in military contexts without the safety constraints Anthropic insists on building into its systems.

Anthropic refused. The DOD responded by treating the company as a threat to the supply chain it relies on.

Microsoft’s intervention is the part worth watching closely. The company is not a neutral observer in this case. It integrates Anthropic’s products into solutions it sells directly to the US military, which means the DOD designation hits Microsoft’s own government contracts.

Its amicus brief makes this explicit: the Pentagon gave itself six months to phase out Anthropic, but gave contractors zero transition time. That is a real operational problem, and Microsoft named it as one.

What makes this moment significant is the breadth of the coalition forming behind Anthropic.

Thirty-seven researchers and engineers from OpenAI and Google filed their own amicus brief on Monday. Those companies compete with Anthropic in the market. Their people showed up anyway.

The Pentagon framed this as a national security question. The industry is reframing it as a governance question, one about whether federal agencies can unilaterally punish AI companies for refusing to remove safety constraints from their systems.

We think that reframing is correct. And it may be the more consequential argument in the long run.

Yann LeCun

Yann LeCun Just Raised $1 Billion to Challenge the Way AI Is Being Built

LeCun thinks AI is being designed incorrectly. Now he has a billion dollars to act on what he believes is right.

The AI industry has spent the last few years chasing one idea: bigger models.

More data. More GPUs. Larger language models.

Yann LeCun thinks that path is wrong.

The former Meta chief AI scientist has launched a new startup called Advanced Machine Intelligence (AMI) and raised $1.03 billion to pursue a different approach to artificial intelligence.

The premise is simple. Current AI systems are effective at predicting text, images, and code. But that does not mean they understand the world.

LeCun argues that today’s large language models cannot produce truly intelligent systems on their own. They generate convincing responses, but they struggle with reasoning, planning, and understanding physical environments.

AMI is trying to fix that.

The company is building AI around what researchers call “world models.” These systems try to learn how the physical world works rather than predict the next word in a sentence.
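To make the distinction concrete, here is a toy sketch, our framing rather than anything AMI has published, contrasting the next-token objective with a world-model objective that predicts how a physical state evolves:

```python
import numpy as np

# Toy contrast between the two objectives (illustrative, not AMI's code).
# A language model minimizes error on the next token; a world model
# minimizes error on the next *state* of an environment, given the
# current state and an action taken in it.

def next_token_loss(logits: np.ndarray, target_token: int) -> float:
    """Cross-entropy on a discrete symbol: the LLM training signal."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return float(-np.log(probs[target_token]))

def world_model_loss(predicted: np.ndarray, actual: np.ndarray) -> float:
    """Prediction error in continuous state space: the world-model signal."""
    return float(np.mean((predicted - actual) ** 2))

def env_step(state: np.ndarray, action: float, dt: float = 0.1) -> np.ndarray:
    """A trivial physical environment: position and velocity under a push."""
    pos, vel = state
    return np.array([pos + vel * dt, vel + action * dt])

state = np.array([0.0, 1.0])                  # position 0, velocity 1
actual = env_step(state, action=0.5)
predicted = actual + np.random.default_rng(0).normal(0, 0.01, size=2)
print(world_model_loss(predicted, actual))    # near zero: the model "gets" the physics
```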

The goal here is practicality.

Manufacturing, aerospace, and pharma run on complex physical systems. AI that can reason about real-world environments could manage factories, logistics, and robotics far better, and accelerate scientific research.

Consumer applications may follow later. LeCun has already suggested that this kind of AI could eventually power hardware like domestic robots or smart glasses.

The timing of the startup is also interesting.

While most AI companies are doubling down on scaling language models, LeCun is betting the industry is heading toward a technical wall. His view? Real intelligence will require systems that understand space, physics, and cause-and-effect relationships. Not just generation, but a genuine model of how the world actually operates.

Put simply, the next leap in AI will not come from making models bigger.

It might come from making them think differently.

Whether that bet pays off is still uncertain. And with more than a billion dollars behind it, AMI just ensured the AI race now has two competing visions of the future.