OpenAI

OpenAI Closes $122bn Funding Round, Achieves $852bn Valuation

OpenAI just raised a staggering $122 billion. If this is a roadmap redesign, why does it smell like desperation to stay ahead of the hype cycle?

OpenAI just closed a massive funding round, one that rivals a small country’s GDP.

It isn’t just about paying the electric bill for servers. It is a loud, expensive bet that the path to AGI is paved with sheer, brute-force capital. By bringing in a mix of sovereign wealth funds and tech giants, Sam Altman is effectively trying to out-spend the laws of diminishing returns.

The real story here isn’t the number of zeros, but the shift in OpenAI’s identity.

The company is reportedly moving away from its complex non-profit roots to become a fully for-profit entity. This change is the price of admission for such a massive check.

Investors are no longer satisfied with saving humanity as a mission statement; they want a clear, legal path to a return on their hundred-billion-dollar investment. The road to AGI requires a level of commercialization that the founders once fought against.

However, there’s a lingering tension behind the hype.

As OpenAI scales, its hunger for data and power is hitting physical limits. We are seeing a pivot toward synthetic data and custom nuclear power deals because the internet is simply running out of fresh human thoughts to feed the machine.

This funding round buys them time to solve those engineering hurdles, and it places a massive target on their back for regulators who worry about a single company holding the keys to the future.

We are entering a phase where the AI race is determined by who has the deepest pockets. The $122 billion is a signal to rivals like Google and Anthropic that OpenAI is willing to burn through cash simply to remain the pack leader.

The question is whether all that money can actually buy the breakthrough they’ve promised.

Bluesky

Bluesky Expands AI strategy with Attie, an App for Curating Personalized Social Feeds

Bluesky is handing you the keys to the algorithm, but is a perfectly curated feed exactly what we need?

Social media feeds have always felt like a black box.

You start by following a few friends, and hidden code decides you want to see ads for floor mats. Bluesky is trying to break that cycle. Their new partnership with Attie, an AI customization startup, will allow users to create their own discovery engines. It is a sharp departure from the “take what we give you” model that defines X and Instagram.

The project, dubbed “Feed Gen AI,” lets people describe exactly what they want to see in plain English. Instead of hoping the algorithm catches on that you like mid-century architecture but hate home renovations, you simply tell it.

Attie’s tech handles the heavy lifting, scanning posts for context rather than just hashtags. It turns the feed from a passive stream into a tool you actually control.
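Attie’s actual system is not public, so here is a toy sketch of the underlying idea: score each post against the user’s plain-English likes and dislikes, then keep only the positive matches. The real product would use a language model to judge context; this stand-in uses simple keyword matching so the control flow is runnable.

```python
# Toy sketch of a user-described feed filter. Attie's real engine is not
# public; this illustrates the "describe your feed in plain English" idea
# with keyword scoring standing in for an LLM's contextual judgment.

def build_feed(posts, likes, dislikes):
    """Rank posts by how well they match the user's stated interests."""
    def score(text):
        lowered = text.lower()
        s = sum(1 for term in likes if term in lowered)
        s -= sum(1 for term in dislikes if term in lowered)
        return s

    # Keep only posts with a positive score, best matches first.
    scored = [(score(p), p) for p in posts]
    return [p for s, p in sorted(scored, key=lambda t: -t[0]) if s > 0]

posts = [
    "Stunning mid-century architecture tour in Palm Springs",
    "10 home renovation hacks you need to try",
    "Eames chairs and mid-century design history",
]
feed = build_feed(
    posts,
    likes=["mid-century", "architecture", "design"],
    dislikes=["renovation"],
)
```

The renovation post scores negative and drops out, while the two design posts surface, which is the whole pitch: the filter follows your words, not an advertiser’s.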

It is a clever play for a platform that brands itself on decentralization.

By outsourcing the brain of the app to a third party, Bluesky avoids the ethical headache of being the sole arbiter of truth.

If you dislike the results, you don’t leave the platform; you just switch AI providers. It treats social media like a browser where you can pick your own extensions.

There’s, of course, a flip side.

If everyone builds their own perfectly curated bubble, the social part of social media might erode. We are moving toward a world where no two people see the same internet.

Bluesky is betting that we are so tired of corporate manipulation that we’ll risk total isolation just to feel like we’re back in the driver’s seat.

NVIDIA

Why NVIDIA Just Wrote a $2 Billion Check to Marvell

NVIDIA just spent $2 billion to make its biggest rival its closest partner. Is this a new era of open tech or just a cleverer way to stay in control?

For a long time, the narrative in the chip world was NVIDIA versus everyone. But Jensen Huang has flipped the script.

By investing $2 billion in Marvell Technology, NVIDIA isn’t just buying a stake in a rival; it’s building a bridge. The partnership centers on NVLink Fusion, a technology stack that allows other companies’ custom chips to plug directly into NVIDIA’s world-class AI factories.

The nuance here is a strategy shift.

Tech giants are increasingly designing their own specialized processors to save power and costs. That would usually be a threat to NVIDIA’s dominance. But NVIDIA is now effectively saying: “Go ahead and build your own ‘brains,’ but let us provide the ‘nervous system’ that connects them.”

NVIDIA aims to integrate Marvell’s high-speed networking and optical tech into its own ecosystem, a move that ensures that even if you aren’t using the company’s chips, you are still using its platform.

It’s a masterclass in staying indispensable.

Marvell is a leader in silicon photonics: using light to move data at speeds that traditional copper wires can’t reach. As AI models become massive, the bottleneck isn’t how fast a chip thinks, but how fast it talks to its neighbors.

By tying Marvell’s pipes to NVIDIA’s architecture, they’ve created a walled garden that actually feels like an open field.

It’s a win for Marvell, whose stock jumped 13% on the news, but the real winner is NVIDIA’s long-term moat. They are moving from being a hardware vendor to becoming the universal operating system for AI infrastructure.

If the world is moving toward custom silicon, NVIDIA just ensured it’s the one holding the blueprint for how it all fits together.

CoreWeave

CoreWeave Takes Out Billion-Dollar Loan to Expand AI Infrastructure

Wall Street just gave CoreWeave’s AI chips an investment-grade rating. Is this a sign of a maturing industry or just a very expensive house of cards?

CoreWeave just closed an $8.5 billion financing deal that feels less like a startup loan and more like a structural shift in how the world funds technology.

It isn’t just about a mountain of cash. It’s the first time high-performance computing infrastructure, specifically the chips and servers that run AI, has received an “investment-grade” rating from Moody’s.

In plain English? Wall Street has officially decided that AI hardware is now as safe a bet as a utility company or a toll road.

The deal is a masterclass in modern financial engineering.

CoreWeave is basically using its massive fleet of Nvidia GPUs and pre-signed customer contracts as collateral. It’s a “delayed draw” loan, meaning they can pull the money as they build, specifically to fulfill a massive, high-priority contract with a major AI enterprise.

By securing a lower cost of capital, CoreWeave is pivoting from a high-risk disruptor to a foundational landlord of the AI era.

But there is a catch that most headlines skip.

While the investment-grade tag suggests stability, the company’s stock has been a rollercoaster, losing nearly half its value since its 2025 highs. Investors are in a tug-of-war: they love the “land and expand” strategy, but they are wary of the sheer amount of debt CoreWeave is stacking up, now totaling $28 billion in just a year.

That’s the ultimate test of the “AI bubble” theory.

If demand for compute proves effectively bottomless, CoreWeave becomes the backbone of the next century. If the appetite for large language models suddenly cools, the industry will be left holding the world’s most expensive pile of silicon.

For now, Blackstone and JPMorgan are betting billions that we are nowhere near the ceiling.

Meta

Meta, Google, TikTok Under Fire for Breaching Australia’s Under-16s Ban

Australia’s under-16 social media ban is facing its first real crisis, but can a government actually win a game of cat-and-mouse with the world’s biggest algorithms?

Australia’s “world-first” social media ban for under-16s was supposed to be a clean break from a decade of digital addiction. Instead, the government is accusing Big Tech of “taking the mickey” three months in.

The eSafety Commissioner recently launched a massive investigation into Meta, TikTok, and Google, signaling that the honeymoon phase of voluntary compliance is over.

The numbers tell a story of a system full of holes.

While platforms have been bragging about purging five million accounts in December, a new report found that 70% of kids who had accounts before the ban still have access. The regulator isn’t just mad about the numbers; they are calling out the “playbook” tactics used to bypass the law. Some platforms allegedly let kids retry age-verification tests over and over until they finally guessed a birth year that got them back in.

It’s more than a technical glitch; it’s a fundamental disagreement on what “reasonable steps” look like.

Minister Anika Wells isn’t buying the industry’s excuses about technology being imperfect. From the government’s perspective, billion-dollar companies that can map the globe shouldn’t struggle to verify a teenager’s age.

But for the platforms, the pushback is about more than just profit. They argue that forcing kids into “age-blind” corners of the web or demanding government IDs creates a privacy nightmare that far outweighs the benefits of a ban.

The stakes go beyond Australia’s borders.

With Indonesia and parts of Europe watching closely, this investigation will determine if a mid-sized democracy can actually force Silicon Valley to change its DNA.

If the eSafety Commissioner moves toward the maximum $49.5 million fines by mid-year, the platforms may blink. Or they may abandon the Australian market entirely.

Microsoft

Microsoft’s Multi-Model Gambit: Copilot Can Now Critique Itself Using Rival Models

Microsoft is now pitting GPT against Claude inside Copilot to fix AI’s lying problem, but is a self-correcting bot worth the new premium price tag?

Microsoft is fundamentally changing how its AI works by allowing rival models to converse.

In a major update to Copilot released today, the tech giant introduced a feature called “Critique” that forces OpenAI’s GPT and Anthropic’s Claude to collaborate on a single task. It is a striking admission that no single AI model is currently perfect enough to handle the complex demands of enterprise work alone.

The new workflow functions like a high-speed editorial desk.

When a user submits a research query, GPT drafts the initial response while Claude simultaneously reviews it for accuracy and citation quality. This “model council” approach has reportedly led to a double-digit improvement in research quality, pushing Microsoft ahead of standalone tools from Google and Perplexity.

By layering these models, Microsoft aims to resolve the industry’s biggest headache: the tendency for AI to hallucinate facts.
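Microsoft has not published Critique’s internals, so the following is a minimal sketch of what a draft-and-critique loop like this could look like. The `drafter` and `critic` functions are hypothetical stand-ins for the two models (GPT drafting, Claude reviewing in Microsoft’s setup), written as plain functions so the control flow runs offline.

```python
# Minimal sketch of a draft-and-critique loop in the spirit of Copilot's
# "Critique" feature. The drafter and critic are stand-in functions, not
# real model calls; only the control flow is the point here.

def drafter(query):
    # Hypothetical stand-in for the drafting model (GPT in Microsoft's setup).
    return f"Draft answer to: {query} [citation needed]"

def critic(draft):
    # Hypothetical stand-in for the reviewing model (Claude in Microsoft's
    # setup). Flags unsupported claims; returns (approved, feedback).
    if "[citation needed]" in draft:
        return False, "Add a citation for the main claim."
    return True, ""

def answer(query, max_rounds=3):
    """Run the draft/critique loop until the critic approves or rounds run out."""
    draft = drafter(query)
    for _ in range(max_rounds):
        approved, feedback = critic(draft)
        if approved:
            return draft
        # Revision step: a real system would have the drafter rewrite using
        # the critic's feedback; here we just patch the flagged text.
        draft = draft.replace("[citation needed]", "(source attached)")
    return draft

result = answer("Q4 revenue trends")
```

The loop terminates either when the reviewer signs off or after a fixed number of rounds, which is the trade-off any “model council” has to make between quality and latency.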

Beyond better research, Microsoft is also pushing Copilot Cowork into early access. It’s a much-needed pivot to autonomous agents.

Earlier versions of Copilot focused on summarizing email; Cowork actually does the work, like reconciling budgets or organizing entire project timelines.

But this intelligence comes with a price tag.

Microsoft is simultaneously pulling the free version of Copilot from core Office apps and reserving the integrated experience for paid commercial subscribers. It’s clearly a strategic step.

The tech giant is no longer interested in giving AI away for free. It’s now betting that businesses will pay a premium for a “coworker” that finally knows how to check its own work.