When the API Becomes the Interface

A New Front Door for Software

For years, software companies have treated the application screen as the main event: the dashboard, the workflow, the browser tab where work gets done. Now some of the industry’s largest players are advancing a different idea — that the real product is not the interface humans click through, but the underlying capabilities software exposes to machines.

That shift came into sharper focus in mid-April, when Salesforce introduced what it calls Headless 360, a move to expose its platform so that artificial intelligence agents can work across Salesforce, Slack and related tools without relying on the company’s browser-based interface. The company said that “everything on Salesforce is now an API, MCP tool, or CLI command,” adding more than 60 new MCP tools and more than 30 coding skills intended to let agents retrieve data, trigger workflows and complete tasks directly.

Two days later, Google released A2UI v0.9, a framework-agnostic standard meant to let AI agents generate interfaces on the fly from an application’s existing component library across web and mobile environments. Taken together, the announcements suggest a broader change underway in enterprise computing: software may increasingly be designed first for agent access, with user interfaces becoming a secondary, generated layer.

The slogan emerging around this idea is blunt: the API is the new UI.

From Clicking Screens to Calling Capabilities

The argument behind agent-first design is straightforward. If AI systems are going to carry out work on behalf of users — filing reports, updating customer records, fetching information, routing approvals — it is more efficient for them to call trusted APIs directly than to imitate a human moving through a maze of menus and buttons.

Calling an API directly is faster and less fragile than browser automation, which has long been used to script software that was not built for machine interaction. The shift also reflects a growing belief inside the industry that the traditional graphical interface is poorly suited to a world in which software is increasingly mediated by conversational assistants and autonomous agents.
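The contrast can be sketched in a few lines. The endpoint, field names and function below are hypothetical, invented purely for illustration, not any real vendor's API:

```python
import json

def build_update_request(record_id: str, fields: dict) -> dict:
    """One stable, typed request aimed at the capability itself,
    independent of whatever the web UI looks like this week."""
    return {
        "method": "PATCH",
        "url": f"https://crm.example.com/api/records/{record_id}",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(fields),
    }

# The browser-automation equivalent encodes the *screen* instead: find the
# search box, type the record ID, wait for the page to render, click "Edit",
# fill the form, click "Save" -- and it breaks whenever any selector changes.
```

The point is not the four lines of request-building but what they omit: no selectors, no waits, no assumptions about layout, which is why the direct path tends to survive redesigns that break scripted UIs.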

Salesforce’s push is notable because it comes from one of enterprise software’s defining companies. Marc Benioff, the company’s chief executive, has framed the move in stark terms, arguing that the browser is no longer the necessary center of business software if agents can reach the same systems through APIs, command-line tools and machine-readable protocols.

The company’s emphasis on MCP — shorthand for Model Context Protocol, an emerging way for AI systems to discover and use tools — signals that software vendors are beginning to package enterprise functions not just for developers, but for large language model-based agents as a distinct class of user.
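Concretely, MCP describes each tool to an agent as a name, a description, and a JSON Schema for its arguments. The sketch below follows that general shape; the tool itself (get_opportunity) is a hypothetical example, not an actual Salesforce tool:

```python
# An MCP-style tool description: what an agent sees when it lists available
# tools. The tool name and fields here are illustrative assumptions.
get_opportunity_tool = {
    "name": "get_opportunity",
    "description": "Retrieve a sales opportunity record by its ID.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "opportunity_id": {"type": "string", "description": "Record ID"},
        },
        "required": ["opportunity_id"],
    },
}

def has_required_args(tool: dict, args: dict) -> bool:
    """Minimal required-field check a runtime might perform before invoking
    the tool (real runtimes apply full JSON Schema validation)."""
    required = tool["inputSchema"].get("required", [])
    return all(key in args for key in required)
```

Because the description and schema travel with the tool, a language model can decide when to call it and how to fill in its arguments without any human-oriented documentation.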

Google’s Bet on Interfaces That Appear Only When Needed

If Salesforce’s announcement addresses how agents perform work behind the scenes, Google’s A2UI effort addresses what happens when a human still needs to be in the loop.

A2UI is designed to let agents assemble task-specific interfaces dynamically from an application’s prebuilt components. Instead of hard-coding every possible workflow into a permanent screen, developers can expose trusted interface elements — forms, buttons, tables, selectors — that an agent can compose as needed for a particular task. Google has positioned the approach as more secure than having agents generate executable front-end code on their own, because the agent sends structured UI descriptions rather than arbitrary software.
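The security model can be illustrated with a small sketch. The field names below are hypothetical, not A2UI's actual wire format; what matters is that the agent emits a structured description referencing approved components, never executable front-end code:

```python
# Components the application has pre-approved for agent use.
APPROVED_COMPONENTS = {"Form", "TextField", "DatePicker", "Button"}

# A structured UI description an agent might send for one task.
ui_message = {
    "component": "Form",
    "children": [
        {"component": "TextField", "props": {"label": "Customer name"}},
        {"component": "DatePicker", "props": {"label": "Renewal date"}},
        {"component": "Button", "props": {"label": "Submit"}},
    ],
}

def components_approved(node: dict) -> bool:
    """Recursively verify every referenced component is on the allow-list,
    so the agent can compose screens but never introduce arbitrary code."""
    if node["component"] not in APPROVED_COMPONENTS:
        return False
    return all(components_approved(child) for child in node.get("children", []))
```

The renderer that receives such a message draws only from the application's own component library, which is why the approach keeps design systems and permissions in the enterprise's hands.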

That distinction matters. In many business settings, companies want automation, but not a black box improvising the entire user experience. A generated interface built from approved components offers a middle path: agents can adapt the presentation to the moment, while enterprises retain control over design systems, permissions and behavior.

In effect, the software stack begins to separate into two layers: one where capabilities are exposed programmatically for agents, and another where interface fragments are assembled only when a person needs to review, approve or intervene.

The Return of the API

To longtime developers, the current moment has a familiar feel. In the early 2010s, web companies raced to release APIs, presenting them as the connective tissue of the internet economy. That enthusiasm faded as many platforms tightened access, restricted third-party developers or found that APIs created support burdens without obvious revenue.

Now some developers argue that a second API-first era is beginning. The reason is not mobile apps or startup ecosystems this time, but AI.

The developer and writer Brandur Leach has described the past several years as an “API winter,” with platforms often treating external access as a liability. In the age of AI agents, he argues, that logic is reversing. An API is becoming a competitive asset again because it gives customers a way to let software act on their behalf. In categories where products look increasingly similar, usable machine access could become a deciding factor in which tool wins adoption.

That view has spread quickly through technical circles. Simon Willison, a prominent independent developer and writer on AI tools, recently pointed to the convergence of “headless” services and agent-based software use, linking the idea to comments from the British technologist Matt Webb that personal AI systems will prefer dependable machine interfaces over brittle GUIs. In that framing, software that cannot be accessed headlessly may begin to feel incomplete.

Why This Matters for Enterprise Software

The practical implications could be significant.

For one thing, direct agent access could make automation more durable. Businesses have spent years using robotic process automation tools to mimic human interactions with applications, often with mixed results; small UI changes can break those systems. APIs and structured agent tooling promise something more stable.

It could also make software more composable. If agents can move cleanly among customer records, communications tools, financial systems and internal workflows, enterprises may be able to string together tasks across products with less custom integration work.

And it may begin to change how software is bought and priced. Traditional SaaS economics revolve around named users and per-seat subscriptions. But if an organization’s main operator becomes an AI agent acting across many systems for many employees, those assumptions begin to wobble. Companies will have to decide whether to price for humans, for usage, for outcomes, for agents — or for some combination of the four.
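The tension is easy to see with a toy calculation. All prices and volumes below are made up for illustration; no vendor's actual pricing is implied:

```python
SEAT_PRICE = 150.0       # hypothetical $/user/month
PRICE_PER_TASK = 0.05    # hypothetical $/agent task

def per_seat_revenue(active_users: int) -> float:
    """Traditional SaaS revenue: priced on named human users."""
    return active_users * SEAT_PRICE

def usage_revenue(tasks_per_month: int) -> float:
    """Usage-based revenue: priced on work the agent actually performs."""
    return tasks_per_month * PRICE_PER_TASK

# 200 human seats yield per_seat_revenue(200) -> 30000.0 per month.
# If agents displace 150 of those seats, per_seat_revenue(50) -> 7500.0,
# and the vendor must recover the difference through usage, outcomes,
# or agent-based pricing instead.
```

In this toy scenario the seat count no longer tracks the value delivered, which is exactly the wobble in the traditional model the passage describes.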

That business question may prove as important as the technical one. A world in which one agent can do work previously spread across dozens of software seats would challenge one of the industry’s foundational revenue models.

The Unsettled Questions

Still, the shift is far from complete.

One open question is whether vendors will expose true feature parity through APIs or reserve critical functions for their human-facing interfaces. Many companies have long claimed to be API-first while still leaving advanced settings, exception handling or administrative functions trapped inside the GUI.

Another uncertainty is trust. Enterprises may like the efficiency of agents, but granting them broad permissions over sensitive data and workflows raises governance, auditing and security concerns. Identity management, approval chains and fine-grained access controls will matter more, not less, in an agent-driven environment.
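One common pattern for that kind of governance is a deny-by-default scope check between the agent and every tool it tries to call. The agent identities, scope names and tools below are hypothetical, sketched only to show the shape of the control:

```python
# Scopes each agent identity has been granted (illustrative names).
AGENT_SCOPES = {
    "reporting-agent": {"records:read"},
    "ops-agent": {"records:read", "workflows:trigger"},
}

# The scope each tool requires before it may be invoked.
TOOL_REQUIRED_SCOPE = {
    "get_record": "records:read",
    "update_record": "records:write",
    "start_approval": "workflows:trigger",
}

def authorize(agent_id: str, tool: str) -> bool:
    """Deny by default: unknown tools are refused, and the agent must hold
    the exact scope the tool requires. Every decision can also be logged
    for auditing."""
    required = TOOL_REQUIRED_SCOPE.get(tool)
    if required is None:
        return False
    return required in AGENT_SCOPES.get(agent_id, set())
```

A reporting agent in this sketch can read records but cannot update them or trigger workflows, which is the kind of fine-grained separation an agent-driven environment makes mandatory rather than optional.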

There is also a standards contest beginning to take shape. Alongside A2UI and MCP-related tooling, other interoperability efforts are vying to define how agents should discover tools, invoke services and render interfaces. It is not yet clear whether the market will coalesce around a common set of protocols or fragment into competing ecosystems.

For now, though, the direction of travel is becoming easier to see. Salesforce is pushing the idea that enterprise software should be directly operable by agents. Google is building for a world in which the interface itself can be generated at the moment of need. Together they are sketching a future in which software is less a fixed destination on a screen than a set of capabilities waiting to be invoked — by people, by programs, and increasingly by AI acting in between.
