Meta, Google, TikTok Under Fire for Breaching Australia’s Under-16s Ban

Australia’s under-16 social media ban is facing its first real crisis, but can a government actually win a game of cat-and-mouse with the world’s biggest algorithms?

Australia’s “world-first” social media ban for under-16s was supposed to be a clean break from a decade of digital addiction. Instead, the government is accusing Big Tech of “taking the mickey” three months in.

The eSafety Commissioner recently launched a massive investigation into Meta, TikTok, and Google, signaling that the honeymoon phase of voluntary compliance is over.

The numbers tell the story of a system riddled with holes.

While platforms have been bragging about purging five million accounts in December, a new report found that 70% of kids who had accounts before the ban still have access. The regulator isn’t just mad about the numbers; they are calling out the “playbook” tactics used to bypass the law. Some platforms allegedly prompted kids to try age-verification tests over and over until they finally guessed a birth year that let them back in.

It’s more than a technical glitch; it’s a fundamental disagreement on what “reasonable steps” look like.

Minister Anika Wells isn’t buying the industry’s excuses about technology being imperfect. From the government’s perspective, billion-dollar companies that can map the globe shouldn’t struggle to verify a teenager’s age.

But for the platforms, the pushback is about more than just profit. They argue that forcing kids into “age-blind” corners of the web or demanding government IDs creates a privacy nightmare that far outweighs the benefits of a ban.

The stakes go beyond Australia’s borders.

With Indonesia and parts of Europe watching closely, this investigation will determine if a mid-sized democracy can actually force Silicon Valley to change its DNA.

If the eSafety Commissioner moves toward the maximum $49.5 million fines by mid-year, we will find out whether the platforms blink or simply abandon the Australian market entirely.

Microsoft’s Multi-Model Gambit: Copilot Can Now Critique Itself Using Rival Models

Microsoft is now pitting GPT against Claude inside Copilot to fix AI’s lying problem, but is a self-correcting bot worth the new premium price tag?

Microsoft is fundamentally changing how its AI works by letting rival models check each other’s work.

In a major update to Copilot released today, the tech giant introduced a feature called “Critique” that forces OpenAI’s GPT and Anthropic’s Claude to collaborate on a single task. It is a striking admission that no single AI model is currently reliable enough to handle the complex demands of enterprise work alone.

The new workflow functions like a high-speed editorial desk.

When a user submits a research query, GPT drafts the initial response while Claude simultaneously reviews it for accuracy and citation quality. This “model council” approach has reportedly led to a double-digit improvement in research quality, pushing Microsoft ahead of standalone tools from Google and Perplexity.

By layering these models, Microsoft aims to resolve the industry’s biggest headache: the tendency for AI to hallucinate facts.
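To make the “editorial desk” idea a little more concrete, here is a minimal sketch of what a draft-then-critique loop across two different models could look like. Everything in it, from the Model protocol to the complete() method and the prompts, is an illustrative assumption, not Microsoft’s actual Copilot implementation.

```python
# Minimal sketch of a "draft, critique, revise" loop across two models.
# The Model protocol and complete() method are placeholders for whatever
# client library is actually used; this is not Microsoft's Copilot code.

from typing import Protocol


class Model(Protocol):
    def complete(self, prompt: str) -> str: ...


def answer_with_critique(query: str, drafter: Model, critic: Model) -> str:
    # 1. The drafting model writes the initial, citation-backed answer.
    draft = drafter.complete(
        f"Answer this research question and cite sources:\n{query}"
    )

    # 2. A second model from a different provider reviews the draft,
    #    flagging unsupported claims and weak or missing citations.
    critique = critic.complete(
        "Review the draft below for factual accuracy and citation quality. "
        f"List concrete problems.\n\nQuestion: {query}\n\nDraft:\n{draft}"
    )

    # 3. The drafting model revises its answer using the critique.
    return drafter.complete(
        "Revise the draft to fix every issue in the critique.\n\n"
        f"Draft:\n{draft}\n\nCritique:\n{critique}"
    )
```

The design choice worth noting is that the critic never writes the final answer; it only audits, which is what makes the approach feel like an editorial desk rather than a committee.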

Beyond better research, Microsoft is also pushing Copilot Cowork into early access. It’s a much-needed pivot to autonomous agents.

Earlier versions of Copilot focused on summarizing email; Cowork changes that by actually doing the work, like reconciling budgets or organizing entire project timelines.

But this intelligence comes with a price tag.

Microsoft is simultaneously pulling the free version of Copilot from core Office apps and reserving the integrated experience for paid commercial subscribers. The move is clearly strategic.

The tech giant is done giving AI away as a novelty. It is betting that businesses will pay a premium for a “coworker” that finally knows how to check its own work.

NVIDIA’s Valuation Hits a Seven-Year Floor

NVIDIA’s valuation just hit its lowest point since 2019, leaving investors to wonder if the AI boom is finally cooling or if this is the bargain of a lifetime.

NVIDIA was the undisputed engine of the stock market for the last three years. But now the engine is knocking.

NVIDIA’s P/E ratio has tumbled to a seven-year low of 19.6 despite massive profit margins and record-breaking revenue. That’s a level not seen since the pre-ChatGPT era, signaling a major shift in how Wall Street views AI’s future.
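For readers less familiar with the metric, the price-to-earnings ratio is simply the share price divided by earnings per share, and the arithmetic below is an illustration of what the multiple means rather than a figure from NVIDIA’s filings:

P/E = price per share ÷ earnings per share
At a P/E of 19.6, investors are paying roughly $19.60 for every $1 of annual profit, a multiple more typical of mature, slower-growth companies than of a stock priced for explosive expansion.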

The primary culprit is a growing sense of AI angst among institutional investors. While NVIDIA is still shipping millions of chips, the big tech companies purchasing them are facing intense scrutiny over their spending.

The question now is whether these multi-billion-dollar infrastructure investments will ever turn into bottom-line profits. The world wants to see the product.

Geopolitics is adding to the mounting pressure.

It is primarily fueling inflation fears, which always hit high-growth tech stocks first, and investors are de-risking their portfolios in favor of safe-haven assets.

Once a bulletproof bet, NVIDIA is now being treated like a cyclical hardware company.

The irony is that NVIDIA’s fundamentals have rarely looked better.

Gross margins remain at a staggering 75%, and the company is preparing to launch its next-gen Vera Rubin architecture. Yet, NVIDIA is trading at a valuation lower than the S&P 500 average for the first time in a decade.

The stock hasn’t necessarily become a bad investment, but the market has decided that the era of unquestioned optimism is over.

Is ChatGPT Trading Its Soul for Ad Dollars?

OpenAI’s ad pilot just hit a $100 million run rate in six weeks. As ChatGPT leans into ads to fund its future, can it stay the neutral tool we trust?

OpenAI just proved that the “free” in free software always has an expiration date. Within six weeks of launching its U.S. advertising pilot, ChatGPT has already cleared a $100 million annualized revenue mark.

For a company burning through billions in compute costs, this isn’t just a milestone. It is a survival strategy.

The strategy is a classic pivot.

While Sam Altman’s team spent years positioning ChatGPT as a pure, distraction-free utility, the reality of the balance sheet has finally set in. By showing ads to users on the free and “Go” tiers, OpenAI is following the well-worn path of every tech giant before it. They claim these ads are separate from the AI’s logic and won’t influence answers.

But in the world of high-stakes algorithms, the line between “useful suggestion” and “paid placement” can get blurry very fast.

The real nuance is in the price tag. OpenAI is reportedly charging a $60 CPM (triple what Meta asks) and demanding $200,000 minimum commitments. They are selling “premium” attention, betting that a user in the middle of a deep research session is more valuable than someone mindlessly scrolling through a feed.

Yet, early data shows a click-through rate of less than 1%, far below the gold standard of Google Search.
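To put those figures in perspective, here is a rough back-of-the-envelope calculation (an illustration, not numbers reported by OpenAI). CPM is the cost per 1,000 impressions, so:

$200,000 minimum ÷ $60 per 1,000 impressions ≈ 3.3 million impressions
3.3 million impressions × a sub-1% click-through rate ≈ fewer than roughly 33,000 clicks

In other words, advertisers are paying for attention and context rather than raw traffic.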

OpenAI is currently walking a tightrope.

They need the cash to keep the lights on for GPT-5 and beyond. However, they also risk turning ChatGPT into just another digital billboard. If the ads become too intrusive, or if the “relevance” starts to feel like manipulation, the very trust that built ChatGPT’s massive user base could evaporate.

We are watching the transition of an oracle into a marketplace. The question is whether we will still value the advice when we know it comes with a sponsor.

Google’s 2029 Warning Asks an Important Question: Is Our Digital Past Compromised?

Google’s 2029 quantum breakthrough isn’t just a future threat. If our current encryption is destined to fail, are today’s secrets already compromised?

The tech industry used to treat “Q-Day,” the moment quantum computers break modern encryption, as a problem for the next generation.

Google’s latest assessment has shattered that complacency. By pinpointing 2029 as the year our digital locks might fail, they have moved the finish line from a comfortable distance to our immediate doorstep.

This isn’t merely a warning about some future hack.

The real nuance lies in a strategy known as “harvest now, decrypt later.” Sophisticated actors and intelligence agencies are likely gathering encrypted data today, betting they can store it until quantum processors are ready to crack it.

Your medical records, financial transfers, and private messages sent this morning are being quietly archived, waiting for a key that is still being forged.

Google’s aggressive timeline has rattled the industry. While many experts previously expected this breakthrough in the late 2030s or beyond, Google is already overhauling its internal security models.

By moving Android and its core authentication services to post-quantum cryptography (PQC) now, they are signaling that the era of “safe” classical encryption is effectively over.

The challenge is that updating global infrastructure is a slow and grueling task.

Upgrading a single government database or international banking network can take half a decade.

If we wait until 2028 to take this transition seriously, we will have already lost the race. To put things into perspective: we are racing against a machine that is still being designed while trying to protect data that has probably already been stolen.

The real question is no longer when the walls will fall. It is how much of our digital history we have already surrendered to the future.

Wikipedia’s Human Wall Might Be the Last Stand for Authenticity

Wikipedia is officially banning AI-generated content to save its soul. In a digital world of synthetic noise, is being “human-only” a luxury or a losing battle?

Wikipedia has spent two decades as the internet’s most successful “trust me, bro” experiment. It works because, for all our flaws, we care about being right. But the site just made a massive gamble by banning AI-generated content.

Wikipedia is choosing to stay slow, stubborn, and strictly biological, especially in an era when silicon can churn out a million words in seconds.

The logic is simple: LLMs don’t actually know things. They predict the next most likely word in a sequence. That makes them world-class liars.

When an AI hallucinates a fake historical event, it does so with the confidence of a tenured professor. For a platform built on the bedrock of verifiability, allowing AI to write entries is akin to inviting a high-speed rumor mill to manage a library.

The Reality Check

The ban is a noble attempt to avoid a “dead internet” feedback loop. If AI begins learning from AI-generated Wikipedia articles, the truth starts to degrade like a photocopy of a photocopy.

But there is a glaring practical problem:

AI detectors are notoriously unreliable, and the models are getting better at mimicking human quirks every day.

Why It Should Matter

The ban isn’t just about blocking bots. It is a fundamental shift in how we value information.

By banning AI, Wikipedia is positioning itself as the organic section of the information grocery store. It is betting that as the rest of the web becomes a soup of synthetic text, users will crave the friction and accountability that only come from a human author.

The risk is that humans cannot keep up with the sheer volume of global events.

We are watching a digital sanctuary being built. Whether it remains a source of truth or becomes a curated museum of a slower age is the real question. If the wall holds, Wikipedia might be the last place on earth where you know for sure that a person is behind the screen.