NVIDIA

The AI Industry’s Eyes Are on Jensen Huang at the AI Megaconference GTC

The AI Industry’s Eyes Are on Jensen Huang at the AI Megaconference GTC

NVIDIA’s GTC 2026 keynote is today. And the AI industry is tuned in- new chips, new software, and a CEO who knows exactly how to work a crowd.

Jensen Huang is all set to make history on the floor of the SAP Center in San Jose on Monday to deliver his keynote across 30k attendees from 190 countries.

It’s no longer a tech conference but a coronation.

Huang’s presentation covers NVIDIA’s push into AI inference, with new chips and software for autonomous agents. That matters. NVIDIA already commands an estimated 80% of the AI training market share. Inference is the next frontier, and as of now, Google, Amazon, and others are competing rigorously with custom chips. Huang wants that territory too.

He promised “a chip that will surprise the world” and teased “a few new chips the world has never seen before.” Bold word- but they better deliver.

GTC 2026 is where NVIDIA officially kicks off its Vera Rubin platform, replacing Blackwell and Blackwell Ultra. On the software side, NVIDIA is expected to unveil NemoClaw, an open-source platform for enterprise AI agents that offers businesses the right structure to build and deploy AI software.

Then there’s Groq. It’s the first major showcase since NVIDIA’s $20 billion licensing deal with the inference company in late 2025. Everyone wants to know how that integration actually works.

The broader picture is straightforward. NVIDIA is just selling chips, but it’s not merely that. It’s selling the whole stack: hardware, software, models, infrastructure. The company’s announcements today will influence technology roadmaps across the global semiconductor and server supply chains.

No other company in AI has that kind of reach right now. That’s the real story from San Jose.

Accenture

Accenture to Acquire Verum Partners, Expanding its Capital Projects Capabilities in Latin America

Accenture to Acquire Verum Partners, Expanding its Capital Projects Capabilities in Latin America

So Accenture is moving into Latin America in a meaningful way. Last week, the firm announced it is acquiring Verum Partners, a Belo Horizonte-based infrastructure and capital projects management company with 180 people and serious on-the-ground experience in mining, metals, energy, chemicals, and transportation. No price disclosed, as is customary for these things.

Verum does something specific and genuinely difficult. It takes the kind of industrial megaproject that routinely runs over budget and behind schedule and tries to make it not do that. Accenture’s own research puts the failure rate of large infrastructure projects at around 90% against original targets. That number is staggering every time you read it. Verum’s value is that it has people who actually go to the site, coordinate across contractors, and solve problems where the problems are. Accenture’s value is that it can layer AI and digital infrastructure on top of that. Together, the pitch is: faster, more predictable, less wasteful delivery of very large, very complex projects.

It is a good pitch. Brazil’s investment cycle is accelerating right now across mining expansion, grid modernization, transportation, and energy transition. There is a lot to build and a long history of it taking longer and costing more than anyone planned. This acquisition makes sense.

Belo Horizonte is an interesting place to anchor this. The name of the state it sits in, Minas Gerais, means General Mines, and that is not a historical footnote so much as an active description. The region is one of the most resource-rich in the Southern Hemisphere and has been the site of some of the most consequential infrastructure decisions Brazil has made, good and otherwise.

The announcement stays focused on the opportunity, which is fair. Efficiency, productivity, faster operational handover. These are the terms of the deal and they are real improvements worth making.

What does not make it into the press release, and rarely does in these situations, is the question of what sits alongside all this building. The Cerrado, the enormous biodiverse savanna that borders much of this industrial activity, is under significant pressure from exactly the kind of expansion this acquisition is designed to support. Brazil’s environmental licensing process is stretched. These are not Accenture’s problems to solve and the announcement was never going to raise them.

But they are the backdrop. And the companies whose projects Verum will now help deliver faster are operating inside that backdrop every day.

We are not saying do not build. Infrastructure matters, energy transition is real, and poorly managed projects have their own costs. We are just noting that “efficient” is a description of how something happens, not whether it should, and those two questions tend to travel separately in announcements like this one.

The Verum team built something worth acquiring. That much is clear.

Google

Google leaves the door open for ads in Gemini

Google leaves the door open for ads in Gemini

Nick Fox runs Google’s Knowledge and Information division. That means he is responsible for Search, Gemini, and the Assistant. Wired sat down with him recently and the interview is making rounds, mostly because of one thing he said about advertising.

Before we get into that, a quick note on who Nick Fox is. He spent years at Google running ads. That is not a criticism, it is context. The person now overseeing Gemini’s direction came up through the advertising side of the business. Google made that choice deliberately, and it is worth knowing.

Now, the thing everyone is running with.

In January, Demis Hassabis told reporters at Davos that Google had no current plans to put ads inside the Gemini app. Ten weeks later, Fox told Wired that advertising in Gemini is not off the table and that learnings from AI Mode, which does carry ads, will “likely carry over” to the broader Gemini product over time.

Does that mean ads are definitively coming to Gemini? No. Fox was careful. He framed it as a prioritization question, not an announcement. The honest read is that nobody at Google has decided yet, which is actually worth saying plainly instead of treating this as a bombshell. It is not a bombshell. It is a company with 750 million Gemini users and an expensive AI infrastructure bill leaving its options open. That is a business, not a conspiracy.

What is actually interesting is the specific thing Fox called his “holy grail.” Personalization. Gemini already connects to Gmail, Photos, and Calendar through a feature called Personal Intelligence. The product knows a lot about you, by design, because that is what makes it useful.

And that is where the real question lives. Not whether ads are coming, but what an ad means inside a system that has read your emails. Search ads were always a legible transaction. You searched, Google showed you results, some were sponsored, most were labeled. You knew the deal. A personalized AI assistant that also carries advertising is a structurally different arrangement, and nobody, including Google, has fully worked out what the user relationship looks like inside it.

Fox acknowledged this. He said user data will not be sold or shared. He said the company is still figuring out what users will accept in this context. These are not the words of someone with a plan already in motion.

So let us be precise about what this story actually is. An executive with an advertising background now runs the product. A CEO said no ads in January. That same executive said not necessarily in March. A decision has not been made.

Whether the hype around this interview is proportionate to what was actually said is a fair question. The underlying tension it points to, between a product built on intimacy and a business built on advertising, is real and worth watching.

That part is not hype. That part is just the math.

Anthropic

Anthropic invests $100 million into the Claude Partner Network

Anthropic invests $100 million into the Claude Partner Network

Most of the coverage around this announcement will focus on the number. $100 million, Claude Partner Network, Accenture training 30,000 people, Deloitte in, Cognizant in, Infosys in. That is the press release reading itself back to you. It is accurate and it is not the point.

The point is what Anthropic is actually building, and how fast.

Claude is in Chrome. It is in Excel. It is in PowerPoint. It is in Slack. It has a desktop app, an enterprise plan, a coding product, and a consumer subscription tier. It runs on AWS, Google Cloud, and Microsoft Azure simultaneously, something no other frontier model does. It now has a formal partner network with nine-figure backing and the four largest professional services firms on the planet co-signing the vision.

That is not a model company. That is a company building the operating system for work. And it is doing it methodically, one surface area at a time, in a way that is easy to miss if you are only reading individual announcements instead of laying them next to each other.

The SaaS industry has had a version of this conversation before and mostly dismissed it. The argument was always that AI would augment existing tools, not replace them. The Partner Network is the clearest signal yet that Anthropic is not thinking in terms of augmentation. A Code Modernization starter kit that helps enterprises migrate legacy codebases. Certifications for solution architects. Sales playbooks. A services directory where enterprise buyers find Claude-certified implementation partners. This is not the infrastructure of a company selling a feature. This is the infrastructure of a company replacing a category.

The second-order effect worth watching is what happens to the software companies currently sitting inside the workflows Anthropic is systematically entering. Project management, customer support, financial analysis, code review, document processing. Claude has a stated solution for every one of these. The Partner Network is how it gets into the enterprise deals where those solutions get chosen.

For the consultancies involved, the math is straightforward. Accenture does not train 30,000 people on a tool unless it expects that tool to generate a practice worth building. What Accenture is signaling, more than anything Anthropic said in the announcement, is that enterprise demand for Claude implementation is real enough to staff for at scale.

The companies that should be reading this most carefully are not the other AI labs. They are the mid-size SaaS businesses whose entire value proposition is a workflow that Claude can now run inside a side panel.

That conversation is only just beginning, and $100 million is a very deliberate way of starting it.

Open ai

OpenAI to acquire Promptfoo

OpenAI to acquire Promptfoo

On Monday, OpenAI announced it is acquiring Promptfoo, a two-year-old AI security startup founded by Ian Webster and Michael D’Angelo.

The deal brings Promptfoo’s technology into OpenAI Frontier, the company’s enterprise platform for what it is now calling “AI coworkers.” Terms were not disclosed. The Promptfoo team will join OpenAI.

Here is what Promptfoo actually does, because it matters more than the acquisition price. It helps companies find out what their AI systems will do when someone tries to break them. Prompt injections, jailbreaks, data leaks, tool misuse, out-of-policy agent behavior. You build something on an LLM, you point Promptfoo at it, and it tries to make the thing go wrong before your users do. More than 350,000 developers use it. A quarter of Fortune 500 companies rely on it. For a two-year-old company with 11 employees, that is a remarkable footprint.

So the good news is that this capability is being taken seriously at the highest level. That is genuinely worth noting.

The reason it needs to be taken seriously at the highest level is also worth sitting with for a moment.

AI agents are now moving into real enterprise workflows. They are reading emails, drafting responses, scheduling meetings, making purchasing decisions, accessing internal databases. OpenAI’s Frontier platform, launched just last month, is built specifically for this. The promise is a more productive workplace. The surface area for something to go wrong, quietly and at scale, is something the industry is only beginning to map.

Prompt injection, which is one of the core threats Promptfoo is built to detect, is not a complicated concept but it is an uncomfortable one. It means that a malicious actor can embed instructions inside content that an AI agent reads, and the agent, unable to distinguish between data and commands the way a human instinctively does, follows them. An AI coworker processing a vendor invoice that contains hidden instructions is not a hypothetical. It is a documented class of attack that becomes more consequential the more access the agent has.

The deeper thing, the one that does not make it into most coverage of this acquisition, is that we are not just talking about external attacks. We are also talking about what happens when the system gets something wrong and neither the user nor the organization notices in time. An agent that confidently produces an incorrect output, then acts on it, then logs it for compliance, is a different kind of problem than a hacked system. It is subtler. It compounds. The error does not look like an error.

Webster, Promptfoo’s CEO, put it plainly in his announcement: adversarial tests for security, safety, and behavioral risks turned out to be the biggest blockers to actually shipping AI in enterprise environments. Not the models. Not the cost. The question of what the thing will do when reality gets complicated.

OpenAI acquiring the company that surfaces that question is not a coincidence. It is a signal that the answer is harder than the demos suggest.

Promptfoo will stay open source, OpenAI has committed to that. Whether that commitment holds as Frontier’s commercial roadmap develops is a question 130,000 active monthly users will be watching with some attention.

For now, the acquisition makes sense on every level. The capability is real, the need is real, and the timing tracks with where enterprise AI deployment actually is, which is somewhere between excited and quietly nervous.

That second part is appropriate. It means people are paying attention.

Meta

Meta Delays Launch of New ‘Avocado’ AI Model

Meta Delays Launch of New ‘Avocado’ AI Model

Meta is delaying Avocado, its next flagship AI model, after internal benchmarks came back uncomfortable. The model, originally expected earlier this year, is now pushed to at least May.

It did not outperform Google’s Gemini 3. It trails leading models from OpenAI and Anthropic. It did beat Gemini 2.5 and improved on Llama 4, which is something, though not the kind of something you lead a press release with.

In response, Meta’s senior leadership is reportedly exploring licensing Gemini models from Google to keep Meta AI competitive across Facebook, Instagram, and WhatsApp while internal development catches up. Apple already did something similar, paying roughly a billion dollars to integrate Gemini into Siri. So there is precedent. Still, the image of two of the world’s largest technology companies licensing their AI brains from a third is worth pausing on.

The model itself, Avocado, comes out of Meta’s newly formed Superintelligence Labs, led by Alexandr Wang, whose company Scale AI Meta acquired last year for $14.5 billion. It is designed for logical reasoning, software development, and agentic behavior, meaning it is meant to plan and execute tasks across multiple steps autonomously. Meta is spending between $115 billion and $135 billion on AI infrastructure this year. That number is not a typo.

So we have a company spending at a scale almost impossible to conceptualize, building toward a model it had to delay, potentially filling the gap by licensing from a competitor. The honest question this raises is not about Avocado specifically.

It is about what all of this is starting to look like.

SaaS, at its peak, worked on a simple premise. Big companies built software, smaller companies and enterprises paid monthly to use it, and the value was in the product being better than whatever you could build yourself. The switching costs were real, the integrations ran deep, and the recurring revenue was extraordinarily predictable. Salesforce, Workday, ServiceNow. The model printed money for two decades.

AI is replicating that architecture almost beat for beat, except the product is not software anymore. It is intelligence. OpenAI has a subscription. Anthropic has a subscription. Google has a subscription. Meta wants one too. The enterprise deals, the partner networks, the platform integrations, the certifications for implementation consultants. If you squint, it is SaaS with a different name on the door and a much larger infrastructure bill.

The difference, and it matters, is that in SaaS the product mostly stayed where you put it. An AI model that is behind the competition is a much more immediately felt problem because the user knows. They have used something better. They will go find it again. The switching cost that protected SaaS incumbents for years is much thinner here because the interface is often just a text box and the alternative is one tab away.

This is what Meta’s delay actually tells us. In a world where the product is intelligence, being second is a real problem in a way it was not when the product was a feature set that took months to migrate away from. The benchmarks that came back short on Avocado are not just an engineering setback. They are a user retention problem, a distribution problem, and a positioning problem, all arriving at the same time.

Meta has the infrastructure spend to fix the engineering part. The rest of it is harder to budget for.

Whether these companies have thought carefully enough about what it means to be in a subscription business where the customer can feel, in real time, whether what they are paying for is good enough, is the question we keep coming back to.

SaaS companies spent years making it hard to leave. AI companies are making it very easy to compare. That is a different game entirely.