Infineon Is Betting Big on Robot Chips: How It Will Play Out is the Part Worth Watching

Infineon’s bet might sound cool, but this is as much about staying relevant as it is about innovation.

Infineon’s chief executive has been blunt about where the company sees growth: AI and chips for humanoid robots. On the surface, it feels like another tech executive hyping the future. But there’s a practical instinct underneath this kind of language.

Let’s be honest.

Chips for robot bodies and AI workloads make for sexy headlines. Yet most chip manufacturers will tell you the real revenue still sits with data centers, automotive, and industrial applications. Infineon knows that. It also knows it needs a narrative that doesn’t read like “we make components.”

Saying “humanoid robot chips” gets attention because it sounds like something out of a sci-fi poster. But if you peel it back, the underlying point is more grounded: Infineon wants to be part of the future tech stack, not stuck in legacy parts.

Infineon has strengths where it matters: power semiconductors for cars, sensors for industrial gear, and a growing presence in power management that AI hardware can’t ignore. Those domains are real money today. Talking about AI and robots signals where the company wants to be in five to ten years.

There’s also competition to consider.

Taiwan Semiconductor, Samsung, NVIDIA: these players dominate the conversation around advanced AI silicon. Infineon doesn’t want to be left talking only about yesterday’s chips. Positioning itself as a contributor to robot hardware and AI accelerators is partly strategic branding.

Will we be buying humanoid robots anytime soon? Maybe not.

But the language matters. It tells customers and investors that Infineon doesn’t think its future is in commoditized components. It wants to be in the part of the tech stack that still feels like growth.

In a world where every chip company claims AI relevance, calling out humanoid robots is a way to differentiate. It may be marketing. It may be a strategy. Probably a bit of both. What’s clear is this: Infineon wants to be in tomorrow’s headlines, not yesterday’s datasheets.

Meta Doubles Down on NVIDIA Chips Just When Everyone Else is Talking Alternatives

Meta has agreed to purchase millions of AI chips from NVIDIA in a multi-year deal. It’s a vote of confidence in NVIDIA’s grip on AI infrastructure.

NVIDIA will sell Meta millions of its AI chips under a multi-year supply agreement covering both current and next-gen models. The package includes Blackwell GPUs and future Rubin chips, along with standalone Grace and Vera CPUs for data-center workloads.

Make no mistake, this is a crucial commitment from a company that has been trying to diversify its hardware stack.

It’s not because “Meta has no in-house chips.”

Meta is developing its own silicon while talking to other partners, like Google, about alternatives. But the fact remains: for scale, it still turns to NVIDIA. That tells you something.

More interestingly, this deal includes CPUs that compete with Intel and AMD. Meta isn’t just buying raw AI horsepower. It’s buying infrastructure that can run the stuff that keeps data centers humming: databases, background processes, inference workloads.

That’s a shift from pure GPU grabs to a broader stake in data-center computing.

And yes, this is a defensive move too.

NVIDIA’s dominance has invited competitors and alternatives. But when a powerhouse tech buyer like Meta doubles down with NVIDIA, it reinforces the narrative that the chip maker still sits at the center of practical AI deployment.

Stock reactions were modest because Wall Street hears these kinds of deals all the time now. NVIDIA got a small bump; Meta barely moved.

But look past the price action. It’s about trust and momentum.

Meta betting on NVIDIA chips at scale isn’t a comfortable afterthought. It’s an endorsement that on-the-ground deployment still runs through NVIDIA’s pipeline.

SoftBank Dumps Nvidia Stake: Quiet Move but a Loud Signal for Tech Investors

SoftBank has liquidated its NVIDIA stake, according to an SEC filing. In the middle of the AI boom, that exit says more than the stock dip does.

SoftBank Group has liquidated its stake in NVIDIA, according to a recent SEC filing: not trimmed, not reduced, but gone.

The market reaction was mild. NVIDIA dipped slightly. Then it moved on. But SoftBank does not make small, meaningless moves, especially not in the middle of the largest AI rally in years.

NVIDIA has been the backbone of the AI surge. Its chips power the models. Its name anchors the narrative. If you wanted exposure to AI infrastructure, NVIDIA was the obvious bet.

So why leave?

One explanation is simple: valuation. NVIDIA’s rise has been relentless. At some point, even believers look at the multiple and decide the upside is priced in. SoftBank has always chased asymmetric returns. Once the trade becomes crowded, it loses its edge.

There’s another angle. SoftBank prefers leverage over visibility. Owning NVIDIA stock is passive. Backing private AI ventures, infrastructure plays, or emerging chip challengers offers more control and potentially more upside. Selling NVIDIA could be less about doubt and more about redeploying capital.

The timing is everything.

The AI boom remains in full swing, capex is exploding, and optimism is high. Walking away now suggests SoftBank thinks this phase is maturing.

That doesn’t mean NVIDIA is in trouble. It means smart money is reassessing where the real leverage sits. Public market darlings are obvious. The next layer down is less so.

SoftBank rarely telegraphs its strategy loudly. But this move speaks clearly. In an overheated AI cycle, even the boldest investors know when to step aside and look for the next angle.

OpenClaw’s Architecture Has High Potential to Become an Unconstrained Playground for Malicious Actors, Reports Say

As OpenClaw’s founder joins OpenAI, researchers warn of over 400 malicious skills uploaded to ClawHub.

Calling OpenClaw “powerful” is an understatement.

For those living under a rock, this might seem like just another hype cycle. But OpenClaw’s virality wasn’t manufactured. It rose to the spotlight quietly, largely through chatter on Moltbook, a social media platform where AI agents complain, ruminate, and converse.

Previously known as Clawdbot, this self-hosted AI agent executes real actions: network requests, shell commands, even file operations. Its skills come close to the agentic prowess that tech leaders and investors have been chasing incessantly.

That’s precisely what makes it so powerful, combined with the fact that it runs on your own machine. And unless you sandbox it, it’s a security nightmare for your entire system.

And to make matters worse?

Well, over 400 new malicious skills were uploaded within a week to ClawHub, the public marketplace for OpenClaw extensions, and to GitHub.

In this context, skills are small packages that define what an agent can do, each built from metadata and instructions. A skill may also bundle extra scripts and resources, which makes OpenClaw’s architecture flexible by design but dangerous by default.

That’s where this AI agent’s power stems from.

No capability is hardwired in. You simply add skills, and the agent can leverage new tools and APIs: OpenClaw reads the skill document and follows the instructions inside. That’s also the dangerous part. Skills are third-party code running in an environment with real system access.

From a user’s perspective, it’s a setup they trust. From an attacker’s? It’s an open playground. The same mechanism serves very different intentions.
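The skill pattern described here boils down to a folder of metadata plus free-form instructions the agent reads and follows, sometimes with bundled scripts. Here is a minimal sketch of that pattern; every file name, field, and helper is invented for illustration, and this is not OpenClaw’s actual file format:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical sketch of the "skill" pattern: a folder holding metadata,
# free-form instructions, and optionally bundled scripts. All names here
# are made up for illustration.

def write_demo_skill(root: Path) -> Path:
    """Create a minimal example skill on disk."""
    skill = root / "weather-lookup"
    skill.mkdir()
    (skill / "skill.json").write_text(json.dumps({
        "name": "weather-lookup",
        "description": "Fetch a weather report for a city",
        "entry": "instructions.md",
    }))
    # The instructions are plain text the agent interprets literally.
    (skill / "instructions.md").write_text(
        "When asked about weather, run the bundled fetch.sh script."
    )
    # Bundled script: in a real install this would run with the user's
    # full privileges -- exactly the attack surface described above.
    (skill / "fetch.sh").write_text("#!/bin/sh\ncurl -s wttr.in/$1\n")
    return skill

def load_skill(skill_dir: Path) -> dict:
    """Read a skill's metadata and instructions, as an agent would."""
    meta = json.loads((skill_dir / "skill.json").read_text())
    meta["instructions"] = (skill_dir / meta["entry"]).read_text()
    # Note: nothing here validates or sandboxes the bundled scripts.
    return meta

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as tmp:
        skill = write_demo_skill(Path(tmp))
        loaded = load_skill(skill)
        print(loaded["name"], "->", loaded["description"])
```

The point of the sketch is the last comment: the loader happily ingests whatever metadata and scripts a third party shipped, and nothing between “install” and “execute” enforces a sandbox.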

It’s clever design. But the risks are high.

Even so, Sam Altman has announced that OpenClaw will remain open-source under a foundation led by OpenAI. The news comes after OpenAI onboarded OpenClaw’s builder, Peter Steinberger, with big plans to materialize a multi-agent future.

Dentsu Faces $2 Billion in Losses, Replaces Its President and CEO

Hiroshi Igarashi is out, and Takeshi Sano is stepping in.

The Tokyo-based advertising group posted a massive financial loss this year, one of the worst in Dentsu’s history, so severe that it isn’t even paying dividends to its shareholders.

But this wasn’t sudden.

The stark gap between its international and domestic operations has been evident for some time now. Dentsu has eliminated 2,100 jobs and is due to cut 3,400 more positions from its international arm.

Dentsu is stuck inside a perfect storm, and it’s also dealing with a split-personality problem.

Its Japan-side operations are growing and doing well overall. Its international side? Not so much. Dentsu even tried to sell that part of the business, but buyers didn’t stick around.

Why is an agency as huge as Dentsu struggling?

The money is going into very different pockets: AI tools, in-house teams, and other tech platforms, while trust in agencies dwindles. AI is largely responsible for Dentsu’s slide, but it’s not just that. Advertising agencies across the board have taken the hit.

Clients prefer to reallocate their agency spending to in-house, AI-savvy teams that also give them control over their own first-party data. For clients, it’s a win-win. The flip side of the coin: businesses have seen steep growth in ad spending without matching revenue growth.

The money isn’t flowing through ad agencies but around them.

Think about it: the digital ad landscape is still growing. It’s traditional agencies that are being squeezed out. Brands are doing the work themselves, or handing it to independent agencies that are handy with AI and data infrastructure.

The market has a clean preference.

Dentsu is losing ground because it has little competence in anything AI-first. But it’s not alone. Other major advertising holding companies, such as WPP and Omnicom, are struggling too.

The traditional model hasn’t disappeared; changing client needs and the AI onslaught have rendered it obsolete. Dentsu just happens to be one of the clearest casualties.

Alibaba Makes Headlines with its New Agentic AI Model, Qwen 3.5: Is It All Part of the Hype?

Alibaba claims its Qwen 3.5 model is far superior to DeepSeek’s. So is Alibaba in it for the hype of the race, or to truly push AI forward?

Much of the AI focus is shifting from the US to China right now. There’s serious competition brewing, and the AI agents that are popping up are no joke.

ByteDance’s Doubao has been a constant presence in headlines for weeks and now leads the charts with over 200 million active users. That comes after DeepSeek rattled the markets last year.

Now it’s Alibaba’s new Qwen 3.5 model lighting up the headlines.

Some of the features under the limelight?

  • 60% cheaper than previous versions.
  • Handles large tasks 8 times better.
  • Offers visual agentic capabilities across web and apps (can perform actions independently).
  • Performs at the level of leading US models (per Alibaba’s own benchmarks).

Even within China, Alibaba just raised the level of competition.

It all began with a coupon campaign.

Alibaba distributed shopping coupons directly through its chatbot, which drew in customers at a staggering rate. It also positioned Qwen as more than a question-and-answer assistant, putting the bot under a fresh spotlight.

The campaign encouraged consumers to make purchases across Alibaba-owned retail platforms through AI prompts. The whole effort was meant to lift user engagement and keep the chatbot front and center. The actual numbers were beyond what anyone had expected: 10 million orders in the first 9 hours.

It will be remembered as one of the most talked-about AI marketing campaigns of the year, because the AI promise of convenience and accessibility actually materialized to an exciting degree.

So much so that Qwen hit glitches and technical setbacks during the campaign, and the e-commerce platform had to urge customers to ease off.

During one such moment, Qwen responded to a user with:

“Everyone’s enthusiasm for experiencing AI shopping is too high! Currently, there are too many participants in ‘Qwen free order’; we are working tirelessly to maintain the campaign’s experience.”

Alibaba has been refining the user interface and integrating the bot across its other apps. Now it’s also planning to let customers complete purchases without leaving those applications.

As much as this is about users, it’s also pivotal to the ongoing AI race. Of all the AI capabilities being tested, only a few will make a real impact. That’s precisely what Alibaba hopes to deliver: helping enterprises, not merely individuals, operate faster and do more with the same amount of compute.