Google’s Bid to Own the Enterprise AI Stack

At its Cloud Next conference on Tuesday, Google delivered a broad new pitch to corporate customers: stop thinking of artificial intelligence as a chatbot bolted onto office software, and start treating it as an operating layer for the company itself.

The company called that vision the “agentic enterprise,” and it used the event to introduce an interlocking set of products meant to make the case. There were new eighth-generation Tensor Processing Units, or TPUs, split into one system designed for training frontier models and another tuned for low-latency inference. There was a reworked enterprise agent platform built around Gemini. There was a new contextual AI layer for Workspace. And there were updated research agents, including Deep Research Max, intended to automate complex information gathering across the web and private corporate data.

The message was less about any single product than about vertical integration. Google is arguing that the next phase of enterprise AI will belong not to stand-alone models, but to companies that can provide the chips, networking, cloud infrastructure, orchestration tools, productivity software and domain-specific applications in one package.

That is a significant shift in emphasis. For much of the generative AI boom, technology companies sold businesses on copilots and chat interfaces. Google’s presentation suggested it now wants to move beyond assisting workers with prompts and summaries toward systems that can reason through tasks, plan multistep actions and carry out work across business software and data environments.

A Hardware Push for AI at Scale

Central to that argument was new infrastructure.

Google introduced its eighth-generation TPUs in two versions: TPU 8t, aimed at training large AI models, and TPU 8i, built for serving models with lower latency. The company also tied the chips to broader data-center infrastructure, including its Virgo networking technology, underscoring that it sees compute, networking and software as one enterprise offering rather than separate layers.

That focus reflects a reality of the current AI market. As businesses move from experimenting with chatbots to deploying AI systems across departments, demand is shifting from occasional model access to continuous, large-scale computing capacity. The economics of that transition could determine which cloud provider wins the next phase of enterprise spending.

Google said customer API usage has climbed to more than 16 billion tokens per minute, a figure meant to show both the scale of demand and the company’s need to keep expanding infrastructure. It also said that just over half of its machine-learning compute investment this year is expected to go to cloud customers and partners, a notable signal that Google is trying to prove it is not building AI capacity solely for its own products.

The company is hardly alone. Microsoft, Amazon and OpenAI, along with chipmakers like Nvidia, are all vying to define the infrastructure layer of enterprise AI. But Google’s latest move was notable for how explicitly it tied custom silicon to the application layer above it.

From Copilots to Agents

The heart of Google’s strategy is the idea that businesses want AI agents, not just assistants.

Its Gemini Enterprise Agent Platform is intended to let companies build and manage systems that can take action across workflows, using business context and tools rather than simply answering questions. In Workspace, Google introduced what it described as an intelligence layer meant to draw on organizational context and make its office software more responsive to the needs of teams and individual workers.

The updated Deep Research products push that concept further. Deep Research and Deep Research Max are designed to automate more complex investigations, drawing from web sources and, increasingly, proprietary enterprise data. Developers can connect specialized sources, including financial feeds, through the Model Context Protocol, or MCP, which has emerged as an important way to link models to external tools and information.

The ambition is clear: an analyst, lawyer, consultant or operations manager would no longer use AI merely to draft a memo or summarize a meeting, but to conduct broad research, synthesize findings from internal and external sources, and return with a structured output ready for review.

That vision remains aspirational in some important respects. Google promoted performance gains for its research agents, but outside observers still have limited visibility into how those benchmarks were produced and how reliably such systems perform in real, high-stakes settings. Autonomous research tools can be compelling in demonstrations while remaining error-prone in practice, especially when they are asked to evaluate ambiguous, rapidly changing or specialized information.

AI for Maps, Media and Infrastructure

Google also used the event to showcase a more industry-specific side of its AI strategy, especially in imaging and geospatial analysis.

Among the new tools were systems that let creative professionals place AI-generated scenes into real Street View environments, potentially streamlining location scouting and previsualization for film and media work. Google also presented products aimed at city planning, infrastructure analysis and remote sensing, saying some satellite image analysis tasks that once took weeks could be completed in minutes, and highlighted models that can identify infrastructure such as bridges and power lines.

Those announcements may have seemed far from Workspace and enterprise agents, but they fit neatly into Google’s larger case. Few companies possess both a large cloud business and an archive of mapping, Street View and satellite data extensive enough to build such tools. By turning those assets into enterprise AI products, Google is trying to show that its advantage lies not just in foundation models, but in the combination of models with unique data, cloud infrastructure and existing software ecosystems.

For customers in industries like logistics, urban planning, media and utilities, that matters. The enterprise AI market is beginning to split between general-purpose assistants and tools tailored to sector-specific workflows. Google appears to want both.

Why the Push Matters Now

The timing is important.

The first years of the generative AI boom were dominated by experimentation. Companies tested chatbots, bought coding assistants and ran pilots inside customer support or marketing teams. But many executives have become more demanding. They want systems that can be embedded into operations, tied to internal data and justified in terms of productivity or revenue, not just novelty.

Google’s Cloud Next announcements were a direct response to that shift. The company is presenting AI as a full enterprise platform, one that stretches from the data center to the employee desktop and into specialized industry applications.

That strategy could appeal to businesses looking to simplify procurement and reduce integration headaches. But it also raises familiar concerns about concentration and lock-in. The more of the AI stack a company buys from one provider — chips, models, orchestration, productivity tools, security and data connectors — the harder it may become to switch later.

There are also practical questions Google did not fully resolve. Some of the newly announced hardware and platform features will need to prove they can scale broadly. Customers will want evidence that Google’s systems offer a meaningful advantage in cost or performance over rival clouds and model providers. And companies considering autonomous research or workflow agents will need to weigh speed against reliability, governance and accountability.

Still, Google’s message at Cloud Next was unmistakable. The company is no longer merely trying to show that it can compete in AI models. It is trying to persuade businesses that the next era of enterprise computing will be built around agents — and that Google should supply the entire foundation.
