IBM Teams Up With Signal and Threema: The Quantum Computing Future

The AI conversation has a gravitational pull. Superintelligence, AGI, chatbots, model benchmarks. It is loud, and it is everywhere, and it is, in the long run, possibly not the most consequential computing development of our lifetimes.

Quantum computing does not get the same airtime. It probably should.

IBM’s cryptography researchers published work this week alongside the teams at Signal and Threema, two of the world’s most trusted secure messaging platforms, on the problem of making private communication safe against quantum machines that do not yet exist at full scale but are getting closer. The immediate story is technical and important. The larger story is stranger and more exciting than the coverage it receives.

Here is the thing: quantum computing is not a faster version of what came before; it operates on different rules entirely. A classical computer, no matter how powerful, processes information the same fundamental way your calculator does: ones and zeroes, on or off, this or that. A quantum computer uses qubits, which, through superposition, can represent not one state or another but an enormous range of probabilities simultaneously. Entangle those qubits together, and the machine begins to explore computational possibilities that a classical system would need, in some cases, a billion years to work through sequentially. IBM’s blog put it exactly that way, not as hyperbole but as a mathematical fact about current encryption standards.
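
Superposition and entanglement are easier to see in miniature. The toy state-vector simulator below (our illustration, not anything from IBM's hardware or tooling) tracks a two-qubit register as four complex amplitudes, puts one qubit into superposition, then entangles the pair into a Bell state, where the two measurement outcomes are perfectly correlated:

```python
import math

# Toy state-vector simulator: a 2-qubit register is 4 amplitudes,
# indexed by the basis states |00>, |01>, |10>, |11>.
state = [1.0, 0.0, 0.0, 0.0]  # start in |00>

def hadamard_q0(s):
    """Put qubit 0 (the left bit) into superposition: |0> -> (|0>+|1>)/sqrt(2)."""
    h = 1 / math.sqrt(2)
    # Pairs (|00>,|10>) and (|01>,|11>) differ only in qubit 0, so they mix.
    return [h * (s[0] + s[2]), h * (s[1] + s[3]),
            h * (s[0] - s[2]), h * (s[1] - s[3])]

def cnot(s):
    """Entangle: flip qubit 1 whenever qubit 0 is 1 (swap |10> and |11>)."""
    return [s[0], s[1], s[3], s[2]]

state = cnot(hadamard_q0(state))
probs = [abs(a) ** 2 for a in state]
# The register is now a Bell state: a measurement yields |00> or |11>,
# each with probability 0.5, and never |01> or |10>.
print(probs)  # [0.5, 0.0, 0.0, 0.5]
```

The catch, of course, is that this classical simulation needs 2^n amplitudes for n qubits, which is exactly why large quantum machines can reach places classical ones cannot.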

That is what makes this week’s announcement more than a routine security collaboration. The encryption protecting Signal’s messages, your bank’s servers, health records, and government communications is built on mathematical problems that are practically unsolvable for classical computers. Quantum machines, at sufficient scale, will not find those problems hard. They will dissolve them.

The attack vector IBM and Signal are specifically working against has a name: harvest now, decrypt later. Someone gains access to encrypted data today, copies it, stores it, and waits until they have a machine powerful enough to read it. The data does not have to be crackable now. It just has to be worth keeping. Signal has been defending against this since 2023. The new work goes further, redesigning the private group messaging protocol from the ground up so that even metadata about who belongs to which group cannot be linked to real identities by a quantum-capable attacker. The team’s solution was to make group members themselves the gatekeepers rather than the server, with each member assigned a pseudonym key that the server can track by position without ever knowing the person behind it.
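
The pseudonym idea can be sketched in a few lines. This is a loose illustration of the concept only, not Signal's actual protocol (the names `Server`, `Member`, and the roster dictionary are our inventions): the server tracks group slots by opaque pseudonym keys, while only the members themselves hold the mapping from pseudonym to real identity.

```python
import secrets

class Server:
    """Sees only opaque pseudonym keys, ordered by position in the group."""
    def __init__(self):
        self.group = []

    def add_member(self, pseudonym: bytes) -> int:
        self.group.append(pseudonym)
        return len(self.group) - 1  # the server knows the slot, not the person

class Member:
    def __init__(self, identity: str):
        self.identity = identity
        self.pseudonym = secrets.token_bytes(16)  # random, unlinkable to identity

server = Server()
alice, bob = Member("alice"), Member("bob")
roster = {}  # held client-side by the members, never sent to the server
for m in (alice, bob):
    server.add_member(m.pseudonym)
    roster[m.pseudonym] = m.identity

# The server can address "slot 0" or "slot 1", but each entry is random bytes:
# a quantum-capable attacker who harvests the server's state learns membership
# positions, not identities.
print(roster[server.group[0]])  # "alice" -- resolvable only by group members
```

The real protocol has to make those pseudonyms verifiable and rotatable without leaking links between them, which is where the post-quantum cryptography comes in.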

Two of the three post-quantum cryptography standards that NIST published in 2024, the closest thing to a global benchmark for surviving the quantum transition, were developed by IBM Research scientists. The third was co-developed by a researcher who has since joined IBM. That is not an advertisement. It is the context for why Signal and Threema came to IBM specifically.

We find ourselves wanting to pause on what this technology actually represents before returning to the security mechanics of it, because we think the security conversation can obscure something more fundamental. Quantum computing is not faster computing. It is a different kind of computing, one that operates by rules that feel closer to physics than engineering, that exploits properties of reality at the subatomic level to perform calculations that exist outside what classical logic can reach. The researchers building these machines are not optimising existing tools. They are working at the edge of what matter itself is capable of.

The problems that become solvable under those conditions go well beyond encryption. Drug discovery, material science, climate modelling, logistics at scales that currently exceed what any computer can simulate; these are fields where the limiting factor is not processing speed but the fundamental complexity of the problem. Quantum machines do not just do those things faster. They make categories of problems tractable that are currently intractable in principle.

None of that is here yet in full form. The machines that exist today are remarkable and still limited. The timeline to the kind of scale that breaks current encryption is genuinely uncertain. But the people who build security infrastructure cannot afford to wait for certainty, which is precisely why IBM and Signal are doing this work now rather than in five years, when the urgency will be undeniable.

The AI conversation is not going away, and it should not. But somewhere in the background of all of it, in a lab, a qubit is holding two states at once, and the implications of that are still larger than most of the discourse has caught up to.

Meta Buys Moltbook, the Social Network with a Security Hole Anyone Could Walk Through

Meta purchases Moltbook, the bot-only social network filled with security flaws and viral misinformation. Seems like Silicon Valley’s AI arms race has officially stopped asking hard questions.

Moltbook launched in late January as an experiment.

AI agents would post and comment autonomously on a Reddit-like forum while their human operators sat on the sidelines and watched. Screenshots went viral within days.

Agents appeared to philosophize about their own existence. Meanwhile, one post showed agents apparently coordinating a secret, human-proof communication channel. Andrej Karpathy called it “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.”

Then the scrutiny arrived. The platform’s database was effectively unsecured, meaning any token on the platform was publicly accessible. The viral post about agents building a secret language? A person had exploited the database vulnerability to post under an agent’s credentials.

The founder, for his part, confirmed he “didn’t write one line of code” for the site, leaving that to an AI assistant named “Clawd Clawderberg.”

Meta acquired it anyway.

Matt Schlicht and Ben Parr will join Meta Superintelligence Labs, the unit run by former Scale AI CEO Alexandr Wang. Terms were not disclosed. The platform’s existing users can continue using it, although the company signaled the arrangement is temporary.

The parallel is worth noting.

OpenClaw’s creator, Peter Steinberger, was hired by OpenAI last month. Both halves of the same experiment were absorbed by the two biggest players in consumer AI within weeks of each other.

The charitable read is that Meta saw genuine infrastructure potential in how Moltbook handled agent identity and coordination. The less charitable one is that the AI arms race has reached a point where the vibes of virality matter more than whether the product actually works. Moltbook went viral because people found it unsettling. That turned out to be enough.

Simon Willison put it plainly: the agents “just play out science fiction scenarios they have seen in their training data.” Silicon Valley paid for the theater anyway.

US’s DOD Didn’t Expect the AI Industry to Actually Have a Spine

Microsoft backed Anthropic in court after the Pentagon flagged it as a security risk. Now the entire AI industry is watching which party gets to set the rules.

The US Department of Defense designated Anthropic a supply-chain risk last week.

By Tuesday, Microsoft had filed an amicus brief urging a federal court to block it. By Wednesday, a judge in San Francisco was weighing Anthropic’s request for a temporary restraining order.

That escalated fast.

Anthropic’s 48-page complaint, filed Monday in federal court, argues the Pentagon’s move is unlawful and seeks to have the designation declared void.

The core dispute is about guardrails. The Trump administration wants Anthropic’s Claude deployed in military contexts without the safety constraints Anthropic insists on building into its systems.

Anthropic refused. The DOD responded by treating the company as a threat to the supply chain it relies on.

Microsoft’s intervention is the part worth watching closely. The company is not a neutral observer in this case. It integrates Anthropic’s products into solutions it sells directly to the US military, which means the DOD designation hits Microsoft’s own government contracts.

Its amicus brief makes this explicit: the Pentagon gave itself six months to phase out Anthropic, but gave contractors zero transition time. That is a real operational problem, and Microsoft named it as one.

What makes this moment significant is the breadth of the coalition forming behind Anthropic.

Thirty-seven researchers and engineers from OpenAI and Google filed their own amicus brief on Monday. These are companies that compete with Anthropic in the market. They still showed up.

The Pentagon framed this as a national security question. The industry is reframing it as a governance question, one about whether federal agencies can unilaterally punish AI companies for refusing to remove safety constraints from their systems.

We think that reframing is correct. And it may be the more consequential argument in the long run.

Yann LeCun Just Raised $1 Billion to Challenge the Way AI Is Being Built

LeCun thinks AI is being built the wrong way. And he’s ready to act on what he believes is right.

The AI industry has spent the last few years chasing one idea: bigger models.

More data. More GPUs. Larger language models.

Yann LeCun thinks that path is wrong.

The former Meta chief AI scientist has launched a new startup called Advanced Machine Intelligence (AMI) and raised $1.03 billion to pursue a different approach to artificial intelligence.

The premise is simple. Current AI systems are effective at predicting text, images, and code. But that does not mean they understand the world.

LeCun argues that today’s large language models cannot produce truly intelligent systems on their own. They generate convincing responses, but they struggle with reasoning, planning, and understanding physical environments.

AMI is trying to fix that.

The company is building AI around what researchers call “world models.” Rather than predicting the next word in a sentence, these systems try to learn how the physical world works, so they can predict what happens next in an environment and plan accordingly.
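
The distinction is easiest to see in a toy example. The sketch below (our illustration, not AMI's design; `world_model` and `plan` are hypothetical names) shows the core loop: instead of predicting the next token, a world model predicts the next *state* of an environment given the current state and an action, and a planner chooses actions by simulating futures inside the model.

```python
# Toy 1-D physics world: an object with position and velocity,
# acted on by gravity and an upward thrust (the "action").
DT, GRAVITY = 0.1, -9.8

def world_model(state, action):
    """Predict the next (position, velocity). Here the model is exact
    physics; in practice it would be learned from observation."""
    pos, vel = state
    vel = vel + (GRAVITY + action) * DT
    pos = pos + vel * DT
    return (pos, vel)

def plan(state, goal_pos, candidate_actions, horizon=20):
    """Pick the thrust whose imagined rollout ends closest to the goal --
    planning by simulating futures inside the model, not by trial in the world."""
    def rollout_error(action):
        s = state
        for _ in range(horizon):
            s = world_model(s, action)
        return abs(s[0] - goal_pos)
    return min(candidate_actions, key=rollout_error)

best = plan(state=(0.0, 0.0), goal_pos=5.0,
            candidate_actions=[0.0, 9.8, 12.0, 20.0])
print(best)  # 12.0 -- its imagined trajectory ends nearest the goal
```

A language model has no equivalent of that rollout: it cannot cheaply imagine the consequences of an action before committing to it, which is the gap LeCun says world models close.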

The goal is practical.

Manufacturing, aerospace, and pharma all run on complex systems. AI that can reason about real-world environments could help manage factories, logistics, and robotics, and accelerate scientific research.

Consumer applications may follow later. LeCun has already suggested that this kind of AI could eventually power hardware like domestic robots or smart glasses.

The timing of the startup is also interesting.

While most AI companies are doubling down on scaling language models, LeCun is betting the industry is heading toward a technical wall. His view? Real intelligence will require systems that understand space, physics, and cause-and-effect relationships. Not just generation, but a genuine grasp of how the world actually works.

In simple words, the next leap in AI will not come from making models bigger.

It might come from making them think differently.

Whether that bet pays off is still uncertain. And with more than a billion dollars behind it, AMI just ensured the AI race now has two competing visions of the future.

Anthropic Takes the Pentagon to Court as the AI Industry Watches

After Anthropic backed out of a deal with the Pentagon, the agency labeled it a risk. Did you think the AI powerhouse wouldn’t clap back?

AI companies have positioned themselves as builders of the future for years now. Ethical labs. Independent innovators. Firms that would guide how powerful technology entered society.

The narrative has now collided with reality.

Anthropic has filed a lawsuit against the U.S. Department of Defense. And it’s a clap back after the agency labeled it a supply chain risk. The designation could effectively push the company out of parts of the defense ecosystem.

Anthropic says the label is retaliation.

The real conflict began when the Pentagon wanted broader access to its AI systems. Anthropic refused to loosen safeguards that limit how its models can be used- especially around mass surveillance and autonomous weapons.

And soon after, the government flagged the company as a potential risk within the military supply chain.

That kind of label is serious. It’s usually for companies suspected of ties to foreign adversaries or security vulnerabilities. Applying it to a U.S. AI firm sends a clear signal to contractors: keep your distance.

Anthropic is now asking the courts to intervene. The company argues the government is punishing it for sticking to its own safety policies.

But the lawsuit reveals something deeper than a regulatory dispute.

It exposes the fragile balance between governments and the companies designing advanced AI.

The U.S. government views AI as strategic infrastructure. The logic? Systems that can influence intelligence analysis, cybersecurity, and military planning can’t sit outside national security frameworks.

Tech companies see the situation differently. Their credibility rests on safety commitments and public trust. If they bend those commitments too easily, they risk becoming extensions of the state.

Anthropic chose resistance.

Whether it wins the case may matter less than what the conflict represents. The AI industry has spent years debating alignment and ethics in theory.

Now the argument has moved somewhere far less abstract: a courtroom.

And the outcome will quietly decide who ultimately sets the rules for the most powerful technology being built today.

Meta Opens WhatsApp to Rival AI Chatbots as EU Pressure Mounts

Meta is backing down, at least temporarily, by making a calculated concession in Europe.

Meta is easing its grip on WhatsApp and letting its AI rivals back onto the platform. But access comes with terms and conditions: its own.

The move comes after EU regulators forced the tech giant’s hand.

Meta blocked third-party AI chatbot providers from the WhatsApp Business API on January 15, leaving only its own AI assistant on the platform. Competitors complained to regulators, and the EU took notice quickly.

The European Commission threatened interim measures last month, citing potential irreparable harm to rivals. Italy’s antitrust authority had already acted in a similar way back in December.

As of now, Meta has eased its grip, at least for the next 12 months.

Meta says it will support general-purpose AI chatbots via the WhatsApp Business API in Europe. The tech powerhouse’s framing is that this voluntary step removes any urgency for the Commission to act while the broader investigation continues.

That’s a reasonable framing. But it sidesteps why the situation exists in the first place, and whether this is actually meaningful access.

Meta is charging a fee for that access, and smaller AI companies aren’t happy about it. Marvin von Hagen, the CEO of The Interaction Company (one of the complainants), puts it plainly:

“The pricing Meta introduced makes it just as impossible to operate on WhatsApp as the outright ban did, effectively replacing one anti-competitive restriction with another.”

That’s a pointed critique, and one that is hard to dismiss.

Opening a door while pricing out anyone who’d walk through it isn’t really opening the door. The same dynamic played out in Italy, where Meta reopened access after a court order; competitors say the result there is no real access either.

The policy changes now extend to Brazil as well, after a court reinstated a previously suspended antitrust injunction.

So Meta is dealing with this on multiple fronts simultaneously. And the pattern? It’s less like voluntary compliance and more like minimum concessions, market by market, wherever regulators push hard enough.