---
title: "When the \"Agent Era\" Arrives: Traditional Enterprise Transformation, the End of Software Companies, and Wall Street's \"Underestimation\""
type: "News"
locale: "en"
url: "https://longbridge.com/en/news/282133432.md"
description: "As the core users of enterprise software shift toward AI agents, industry experts such as Box CEO Aaron Levie point out that future software must be entirely reconstructed for agents. This means the logic of enterprise software is evolving from \"humans using tools\" to \"agents calling systems.\" However, due to compliance friction and rigid IT budget systems in large enterprises, the penetration rate of AI in real business environments may fall short of expectations, but the market scale of its eventual explosion will far exceed current capital market estimates"
datetime: "2026-04-09T04:32:49.000Z"
locales:
  - [zh-CN](https://longbridge.com/zh-CN/news/282133432.md)
  - [en](https://longbridge.com/en/news/282133432.md)
  - [zh-HK](https://longbridge.com/zh-HK/news/282133432.md)
---

# When the "Agent Era" Arrives: Traditional Enterprise Transformation, the End of Software Companies, and Wall Street's "Underestimation"

As AI agents replace humans as the "primary users" of enterprise software, traditional SaaS business models, API-call billing, and corporate computing cost structures are headed for sweeping disruption and restructuring.

On April 8, a16z released an in-depth industry interview on its podcast. Host Erik Torenberg spoke with Aaron Levie, CEO of the cloud storage company Box; Steven Sinofsky, former Microsoft executive and prominent investor; and Martin Casado, a partner at a16z. The three engaged in a lively discussion of enterprise software's transformation in the "AI agent" era, surging computing costs, and the fallacies in Wall Street's pricing models.

(From left to right: Erik Torenberg, Steven Sinofsky, Aaron Levie, Martin Casado)

Box CEO Aaron Levie proposed a core hypothesis:

> If you have 100 times or 1,000 times more agents than humans, then your software must be built for agents.

This means that software interfaces are shifting from human-oriented UIs (user interfaces) to AI-oriented APIs (application programming interfaces), CLIs (command-line interfaces), or Computer Use.

**The logic of enterprise software is evolving from "humans using tools" to "agents calling systems."**
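
As a minimal sketch of what "an agent calling a system" looks like in practice (the endpoint and schema below are hypothetical, not any vendor's actual API), the interaction reduces to an authenticated request with machine-readable results; there is no screen to click through:

```python
# Minimal sketch of "agents calling systems" (hypothetical endpoint and schema).
# The "user" here is a program holding its own credential, not a person at a UI.
import requests

def agent_fetch_document(doc_id: str, agent_token: str) -> bytes:
    resp = requests.get(
        f"https://api.example-saas.com/v1/documents/{doc_id}",  # hypothetical URL
        headers={"Authorization": f"Bearer {agent_token}"},     # agent-scoped credential
        timeout=30,
    )
    resp.raise_for_status()  # agents need machine-readable failures, not error dialogs
    return resp.content
```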

**However, because of compliance friction and rigid IT budgeting in large enterprises, AI's penetration of real business environments may fall short of expectations. When the explosion eventually comes, though, the market will be far larger than capital markets currently estimate.**

## Breaking Wall Street's "Zero-Sum Thinking": Incremental Space for SaaS Business Models

While the market widely worries that AI might destroy traditional SaaS business models, the participants provided a starkly different assessment, arguing that current financial models have completely miscalculated the situation.

**Steven Sinofsky** stated bluntly that the biggest current misjudgment lies in Wall Street's financial models:

> The biggest problem now is that everyone is trying to figure out the economics of it all, but they are off by at least an order of magnitude regarding how big this opportunity actually is.

Steven Sinofsky pointed out without reservation that Wall Street is steeped in "fixed revenue pie, zero-sum game" thinking, still using linear growth curves to model the economics of massive GPU and token expenditures.

**Sinofsky compared current AI to the early eras of PCs and cloud computing. In the past, the market thought cloud computing was merely shifting on-premises server budgets to the cloud, failing entirely to foresee that "once resources were democratized, people would consume over a thousand times more computing resources."**

**In the AI era, as the volume of software code generation grows exponentially and mobile devices gain full AI access, the consumption of computing resources will expand at a staggering rate.**

Regarding the future performance potential for companies, Aaron Levie offered an optimistic outlook, using Box's own business as an example:

> What we're excited about is that every agent loves processing files. Therefore, there will be far more files in the future than before. Can we build a platform that makes it extremely easy for agents to process this data? We are betting that this will be a very optimistic outcome for our business model.

When agents lead decision-making, they will not be limited by the friction costs humans face in micro-transactions. **Levie pointed out that in the future, an agent might automatically pay $3 to acquire medical data to complete a deep research task, which will give birth to entirely new micro-payment and API-call monetization models.**
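
A minimal sketch of how such a transaction might run (the pay-per-call endpoints and price schema below are hypothetical): the agent fetches a quote, checks it against a hard budget, and completes the purchase without any human weighing whether $3 is worth the hassle:

```python
# Illustrative agent micropayment flow (endpoints and schema are hypothetical).
import requests

BUDGET_USD = 5.00  # hard spending cap granted to the agent

def buy_dataset(base_url: str, api_key: str) -> bytes | None:
    quote = requests.get(f"{base_url}/price", timeout=10).json()  # e.g. {"usd": 3.0}
    if quote["usd"] > BUDGET_USD:
        return None  # decline: over budget, with no human deliberation involved
    resp = requests.post(
        f"{base_url}/purchase",
        headers={"Authorization": f"Bearer {api_key}"},
        json={"max_usd": quote["usd"]},  # commit to at most the quoted price
        timeout=30,
    )
    resp.raise_for_status()
    return resp.content  # the paid-for research data
```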

## When Agent Counts Surge 1,000-Fold, All Software Interfaces Will Be Built for Machines

In terms of the forward outlook for operations and orders, the underlying logic of enterprise software (SaaS) is being overturned from the ground up.

Aaron Levie emphasized:

> If you have 100 times or 1,000 times more agents than humans, then your software must be built for agents.

The core question going forward is no longer just the human user interface (UI) but how agents interact with systems via APIs, CLIs, and other channels.

Martin Casado believes:

> This will be the year of "Computer Use."

Early AI involved adding an AI button to enterprise software; later, it was about letting AI write code. Now, the paradigm is: **Enterprise software remains enterprise software, but AI agents use enterprise software like a computer. AI agents can read data while deciding for themselves whether to call existing APIs or "write code in real-time" to complete specific operations.**
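
A schematic sketch of that three-way dispatch (illustrative only, not Box's actual router): prefer an existing skill, then an existing API, and fall back to generating code for the long tail:

```python
# Schematic router for the paradigm above: skill, then API, then code on the fly.
from typing import Callable

def route_task(task: str,
               skills: dict[str, Callable[[str], str]],
               apis: dict[str, Callable[[str], str]],
               write_code: Callable[[str], str]) -> str:
    for keyword, skill in skills.items():
        if keyword in task:          # naive keyword match, for illustration only
            return skill(task)
    for keyword, api in apis.items():
        if keyword in task:
            return api(task)
    return write_code(task)          # long-tail request: write code in real time

# Even if skills and APIs cover 90% of requests, the code path handles the rest.
print(route_task(
    "summarize this contract",
    skills={"summarize": lambda t: "summary..."},
    apis={"search": lambda t: "search results..."},
    write_code=lambda t: "freshly generated script...",
))
```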

This also means future software business models will be reshaped.

Aaron Levie believes future business performance will depend on your AI agent's ability to obtain the necessary information. **Software companies that can provide high-quality APIs and properly handle agent identity and permissions will gain massive incremental revenue because the frequency with which agents process data and files will far exceed that of humans.**

Steven Sinofsky, however, warned:

> Attempting to replace mature systems like SAP with "vibe coding" (natural language coding) is simply absurd; deep industry domain knowledge remains the core moat.

## The Battle Over Computing Budgets and Tokens: The "Crazy Moment" Facing CFOs

With the surge in AI usage, future corporate performance guidance and margin performance will undergo dramatic changes.

**Past software spending was capital expenditure or fixed operating expenditure; in the AI era, the underlying resource is metered, usage-based tokens.** Aaron Levie emphasized:

> Discussions about engineering computing budgets will be the craziest topic over the next few years. CFOs must know the answers, and Wall Street will force them to provide them.

Aaron Levie added:

> For any public company, R&D expenses typically account for 14% to 30% of revenue. Whether computing costs come in at double your engineering team's costs or at just 3% of them, the gap between those outcomes is enormous, and it flows directly into your Earnings Per Share (EPS).

**In practice, engineering heads need to weigh whether to allow developers to run massive amounts of agent experiments that might waste Tokens in parallel. This elastic, hard-to-predict computing consumption conflicts sharply with the fixed capital expenditures or predictable operating expenditures that traditional enterprises are accustomed to.**
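
A back-of-envelope illustration of why that elasticity unnerves CFOs, using the 14% to 30% R&D band quoted above (all figures hypothetical):

```python
# Hypothetical numbers: how elastic token spend flows straight through to EPS.
revenue = 1_000_000_000           # $1B annual revenue
eng_payroll = 0.20 * revenue      # R&D at 20% of revenue, mid-band
shares_outstanding = 100_000_000

# Compute spend anywhere from 3% of payroll up to 2x payroll, per the discussion.
for ratio in (0.03, 0.50, 1.00, 2.00):
    compute = ratio * eng_payroll
    print(f"compute = ${compute / 1e6:6.0f}M -> EPS drag = ${compute / shares_outstanding:5.2f}")
```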

## Real-World Obstacles to AI Implementation: Why Are Silicon Valley's Expectations Too Optimistic?

Despite the certainty of the long-term trend, Levie also delivered a sobering reality check to the market:

> The diffusion of AI capabilities will take longer than people in Silicon Valley realize.

**The core issue hindering AI from rapidly generating orders in large enterprises lies in system integration and security boundaries. Ideally, agents could move seamlessly between systems, but in reality, granting agents system access equivalent to humans brings enormous compliance risks.**

Levie warned:

> The risk is amplified 1,000 times.

Levie emphasized:

> You cannot treat them (agents) entirely like humans. For a regular employee, you don't need to log into their account every day to supervise them; they are responsible for their execution in the real world. But for an agent, you have full responsibility for everything it does, and they have no right to privacy. If an agent is deceived by social engineering, it could easily leak your M&A documents.

Until robust AI security standards and a new access control layer are established, CIOs at large enterprises will remain cautious and may even "lock everything down."
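
What such an access-control layer might enforce, in a minimal sketch (the design and names below are hypothetical): agent credentials that are scoped, read-only by default, time-limited, and audited on every call, which is precisely the treatment a human account does not get:

```python
# Sketch of agent-specific access control (hypothetical design, not a product).
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentCredential:
    agent_id: str
    scopes: frozenset[str]          # e.g. {"files:read"}; no write scope by default
    expires_at: datetime
    audit_log: list[str] = field(default_factory=list)

    def authorize(self, action: str) -> bool:
        now = datetime.now(timezone.utc)
        allowed = action in self.scopes and now < self.expires_at
        # Unlike a human employee, every decision the agent makes is logged for review.
        self.audit_log.append(f"{now.isoformat()} {action} {'ALLOW' if allowed else 'DENY'}")
        return allowed

cred = AgentCredential(
    agent_id="research-agent-7",
    scopes=frozenset({"files:read"}),
    expires_at=datetime.now(timezone.utc) + timedelta(hours=1),
)
assert cred.authorize("files:read")
assert not cred.authorize("files:delete")   # writes stay locked down
```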

**This means that the penetration rate of AI in the software industry will diverge: startups with no legacy baggage will quickly iterate and integrate fully, while traditional large enterprises with massive systems of record will require a longer digestion period.**

**Full transcript of the conversation follows (AI-assisted translation):**

> **Aaron Levie**: I think the trend today is very clear, right? We now spend as much time on "agent interfaces" as we do on the interfaces humans use to interact with software.
> 
> **Martin Casado**: Yes.
> 
> **Aaron Levie**: Right. We do this because our hypothesis is: if there are a hundred to a thousand times more agents than people, then software must be built for agents. The way agents interact with systems will be through APIs, CLIs, MCP, or other similar protocols.
> 
> **As it stands, an emerging and already effective paradigm is letting programming agents access your SaaS tools, as well as your knowledge workflows and related context. This combination itself becomes a superpower—the agent isn't just able to read some data and understand information; it can complete whatever task it aims to achieve by writing code or calling APIs.**
> 
> This paradigm seems to be starting to compound. It's exactly what we see in the Claude Code phenomenon, and in what OpenAI, Perplexity, and others are exploring with super-apps, Computer Use, and the like. I think this is the ultimate manifestation of all this development; it's logical.
> 
> **Steven Sinofsky**: I think you're right; theoretically, it makes sense.
> 
> **Aaron Levie**: But—
> 
> **Steven Sinofsky**: But on a practical level, we must be very cautious. To put it another way, "algorithmic thinking" is extremely difficult for the vast majority of people. The simplest example: if you walked up to any person and asked them to draw a flowchart for a specific job, they would likely fail.
> 
> **Take a marketing team for a large product line, say there are 50 people doing marketing. Among them, maybe only one person truly understands and can document the entire flowchart. So if you put these agent tools in front of everyone and ask them to create these things, their ability to explain to the tool "what to do" is very limited.**
> 
> **Aaron Levie**: Well, what if this becomes the new way humans interact with computers? You just have to continuously iterate and adapt.
> 
> **Steven Sinofsky**: Then you're actually just going back to the old way—you're just developing the next layer of abstraction for human-computer interaction. Historically, the construction of every layer of underlying logic has been done by a tiny minority of highly skilled, highly specialized people within an organization. The small modules they build are like scattered parts; some people can piece them together, and some can't. This phenomenon has existed since the era of paperclips and thumbtacks and will continue to exist.
> 
> **Aaron Levie**: I think the unchanging rule is that work just moves up one level, and people need to learn a new set of skills. So I don't think this time is fundamentally different, it's just that the leverage gained this time is obviously extraordinary.
> 
> There was a viral tweet recently about the Head of Growth Marketing at Anthropic—did you guys see it? It was basically one person who used Claude Code to automate what would have originally required five to ten people spread across different roles.
> 
> What's interesting about this is that you must have systems thinking to do that. He was obviously technical enough to pull it off, but it does represent a possibility: if you're doing a job in the economy and you have an unlimited pool of engineers next to you who can automate anything you want to do, what does that job look like in the future?
> 
> I agree, you have to find a way to understand your work as a system to do that. Maybe agents will get better and better at guiding you in that direction. But there's reason to believe people will start trying to automate a lot of work, like: "Why not migrate effective keywords from Google Ads to Facebook, ensure they're synced, and then get signals on the latest market trends?"
> 
> **Steven Sinofsky**: That's already a big step forward.
> 
> **Aaron Levie**: That Anthropic growth marketing example is really a microcosm of most work.
> 
> **Steven Sinofsky**: Right, I could do similar work too. When demand is infinite and supply is also infinite, it's not that hard.
> 
> **Steven Sinofsky**: So you might as well be that marketing person with a $600 budget and see what you can do. That's real work.
> 
> **Aaron Levie**: We need a better example.
> 
> **Steven Sinofsky**: But it is indeed interesting. Let me give an old example, an old-school one. My cousin got an MBA from a top school and got her first job right at the point when computers were becoming widespread. She hadn't used spreadsheets in grad school, and when spreadsheets appeared, she wasn't the type to use them.
> 
> So the company told her: hire as many interns as you want. **So in her first year, she actually managed a room full of "human agents"—young people from universities specifically responsible for all the spreadsheet work.**
> 
> **Then, over the next few years, a magical transformation happened—she and everyone in her class became people who knew how to use spreadsheets. The old model of "being a manager in a bank and having a group of people help you with all the tedious calculations after two years of seniority" vanished; the entire layer of abstraction moved up.**
> 
> Before spreadsheets, you sat there with a calculator, maybe an HP, deriving a model for an M&A deal, and you often got through only two iterations before presenting a proposal. After spreadsheets, they could do thirty iterations themselves.
> 
> **I think our current relationship with agents is at that stage—you think you need 50 people, and the entire abstraction layer still operates in a fragmented division of labor coordinated by one super-smart person. But soon, all of this will merge and eventually just be a piece of code we call a "marketing agent." You can ask it marketing-related questions, and the next step is to have it actually execute those things.**
> 
> **Steven Sinofsky**: Until the issues of irreproducibility and randomness in AI are completely solved, I'm still a bit skeptical about "letting agents execute." The cost at the execution level would be very high. This brings back the discussion about "human-in-the-loop." But I feel we are indeed at that tipping point—when I talk to people trying to do things with AI, I get a feeling like I'm at the Thanksgiving table with that cousin who’s been on the job for six months, and I'm already using spreadsheets. Back then, I'd think: I don't understand why this is so hard, just use it. Then two years later, she was using it too.
> 
> I feel like now, to get forty-two agents up and running, you have to be both a rocket scientist and a growth marketing expert. But this "rocket science" barrier will disappear in a very short time. At that point, you're facing a massive amount of domain expertise.
> 
> **Aaron Levie**: Right, back to the domain expert topic.
> 
> **Martin Casado**: I'd like to offer another perspective on what you said. It's easy to fall into the idea that agents will write code to do X. But I think we're actually going the opposite way. Our initial approach was to overlay AI on existing software; the extreme version of that was using code to solve these problems.
> 
> But what are we actually doing now? SaaS software is still SaaS software; agents use it the way they use a computer, because they're actually very good at that. So I'd say we started from code and moved toward the terminal, and the terminal involves less code generation than the code approach did.
> 
> **This year will be the year of "Computer Use." Agents are increasingly like humans using computers rather than generating code. This feels like an intermediate transition state. And I myself come from the "code generation" camp, but I think that approach is decreasing, not increasing.**
> 
> **Aaron Levie**: To me, whether it's a Computer Use API or writing code on the fly, they go together, though I may be lumping them inappropriately.
> 
> **Martin Casado**: But they are very different.
> 
> **Aaron Levie**: However, one agent we're building can autonomously judge: should it use an existing skill, call an existing Box tool, or write its own code to solve the problem? It can flexibly choose one of these three ways at any moment, which is ultimately extremely useful.
> 
> Because sometimes you want to perform a specific operation, and writing code directly is the fastest; we can't pre-plan all possible user needs for a document. So the model's ability to write code instantly, even if existing tools suffice 90% of the time, is still an amazing attribute.
> 
> **Martin Casado**: Looking at the Pareto principle, over time, just as people only use seven apps regularly on their phones, these tools will slowly converge and integrate.
> 
> **Aaron Levie**: But the "seven apps" problem is a human problem—humans don't want to repeatedly learn new things; I don't have enough energy to master that many apps. But an agent that uses tools, calls APIs, and can write code has none of those limits.
> 
> **Martin Casado**: You could also say there’s too much to do, so just make the interface general enough.
> 
> **Steven Sinofsky**: That makes some sense. I think what you said is very interesting; let me add something I strongly agree with. Software has developed to the point where, for example, I use SAP all day and need to generate various reports. Then someone comes along and says they want a report viewing data sliced in a certain way, and I'm stumped—I don't know how to do that.
> 
> Then I have to dig through SAP help docs to find it. AI can do this very well—it can navigate these functional areas better; the help docs are all there, it just needs to find them and map the language. Humans have been the bottleneck for mining software features for the last twenty-five years.
> 
> I've been on a plane and had the person next to me ask: "How do I get PowerPoint to do X?"
> 
> **Aaron Levie**: "Go find the ribbon!"
> 
> **Steven Sinofsky**: Exactly, it's really painful watching someone struggle with bullets and numbering in Word or try to make a dual-axis chart in Excel—it's almost rocket science level operations that few people know, yet the need is extremely common. So that impedance mismatch in the human-machine interface has always existed.
> 
> **Martin Casado**: At the consumer layer, I completely agree—that perfectly fluid UI or consumer layer. But I have questions about the backend, the systems of record layer.
> 
> **Aaron Levie**: It might eventually...
> 
> **Martin Casado**: It will eventually converge to some kind of database, a set of general APIs, and then connect them. That seems to be the direction. I agree.
> 
> **Steven Sinofsky**: Let me jump in.
> 
> **Martin Casado**: I spent all last weekend implementing my Nano Club Bot. At first, it felt like building integrations separately for each platform—OpenAI has all the integrations, Data Cloud only has a few, so you have to build many tools yourself. But after two or three days, you basically have the tool integrations you need.
> 
> **Aaron Levie**: But what we were talking about just now was personal productivity, probably organizing personal life and things like that?
> 
> **Martin Casado**: No, it's work output.
> 
> **Aaron Levie**: Okay, that's work productivity. But if it's a system like SAP—there's infinite complexity. For example, a multinational supply chain company handling seventy-five categories of information from thirty different systems—the computing power requirements for an agent to handle that are beyond anything any architecture can support today.
> 
> **Steven Sinofsky**: But what you just described is exactly what it has been doing for the last fifty years and will continue to do—I have a friend who was the CIO of the Department of Veterans Affairs, and he spent all his time gluing seventy-five systems together; it was all integration work.
> 
> **Martin Casado**: Right, I completely agree about integration—these kinds of AI tools are best used for that, just piecing two systems together.
> 
> **Aaron Levie**: But I think what's happening now is "on-demand integration"—the kind of new query where the IT team hasn't pre-wired it, but I need it to happen in real-time at runtime.
> 
> **Steven Sinofsky**: Okay, let me give a real-world scenario. I recently attended a meeting with CFOs and CIOs, and when I said something similar—though not as optimistic as you—six people came up to me afterward and said: "You're crazy, you've completely lost my trust."
> 
> **Martin Casado**: Great! What specifically set them off? The idea that agents would do integration?
> 
> **Steven Sinofsky**: I said integration issues would become easier. They didn't object to that itself, but to letting ordinary people do the integration—releasing power to humans to do integration—that's what they're afraid of. Because once people start creating new integrations at will, you're essentially saying: please come and break my systems of record. Creating a new API between System 27 and System 38 is fine if it's just for viewing reports because that's their own business; but you can't—
> 
> **Aaron Levie**: I think this will stay at the read-only level for quite a long time.
> 
> **Steven Sinofsky**: Right, that "quite a long time" will be very long.
> 
> **Martin Casado**: Many AI applications now are actually at the consumer layer; the consumer is human, and the real landing point is still on the consumer side.
> 
> **Aaron Levie**: Yes. We just officially launched Box CLI; thank you for liking that tweet.
> 
> **Steven Sinofsky**: I liked it, and I used it; I have some feedback to share with you.
> 
> **Aaron Levie**: All feedback is welcome. It's an interesting thing. We've had many internal discussions: combine Claude Code and Box CLI, and you can operate the entire Box system using natural language, with Opus 4.6 as the orchestrator to perform a series of actions—it's somewhat breathtaking.
> 
> You can say "upload this entire folder on my desktop to Box," and it can do it; or "process all documents in this folder," and it can do that too; it's amazing.
> 
> Then we started thinking deeper: suppose a company has five thousand employees, everyone has access to a shared knowledge base—engineering docs, marketing assets, etc.—and everyone is running Claude Code or Codex plus CLI.
> 
> **This creates some very interesting new challenges, such as coordinating concurrency. You might be issuing ten thousand requests per hour to the system; the problem isn't performance, it's ensuring that while one agent performs a write, another isn't moving the same file and a third isn't trying to delete it, because you'll have these agents running around everywhere in the system. This will become a major new issue that every CIO and CFO struggles with.**
> 
> **Steven Sinofsky**: That's exactly what happened to me—I followed your example, created a marketing plan directory, and ended up in a sort of infinite loop, constantly creating directories.
> 
> **Aaron Levie**: It'll run until it dies.
> 
> **Steven Sinofsky**: Right, I was wondering if there was a limit to the number of directories on Box because I was about to hit it.
> 
> **Aaron Levie**: We'll find that answer out together.
> 
> **Martin Casado**: **My feeling is that many people's gut reaction is to add another control layer. But in practice, everyone is doing the opposite. For example: when we first started using these personal agents, we would give them our API keys and email addresses, letting them access these resources while worrying about "how to stop them from making a mess." Now, people's approach is to give it a dedicated phone number or even a separate credit card.**
> 
> **Steven Sinofsky**: Like the Visa debit cards CBS buys with just a little money in them.
> 
> **Martin Casado**: Right, and giving it a separate Gmail account, and Gmail itself has a robust RBAC permission system. So you could say we've already built these permission systems into many places—you need to treat it like an independent person rather than constantly overlaying new control layers.
> 
> **Aaron Levie**: Okay, can I immediately counter that issue we're about to face? For personal productivity, that logic is great. But in an enterprise scenario, problems arise.
> 
> Simple example: suppose there's a team of fifty people; wouldn't it basically become fifty humans and fifty agents collaborating in the same space? I can certainly fully control my own agent, but if my agent collaborates with others and accidentally gains access to a resource I shouldn't have access to, and this autonomous, stateful agent continues to work for others—what then?
> 
> **Martin Casado**: Treat the agent like a person, and the problem is solved.
> 
> **Aaron Levie**: **But you can't treat them entirely like people. Here's why: for a regular employee, you have no right to view their Slack channels, no right to log in as them, no right to monitor everything they do; they bear responsibility for their actions, and you won't be penalized along with them for their mistakes. But for an agent, you bear full responsibility for all its actions, you need to maintain full monitoring, and it has no right to privacy. So in some places, the "treat it like a person" analogy doesn't hold.**
> 
> I need to be able to authorize it, but I also need to be able to intervene as it at any time, see what it did, or even undo everything. But if I can log in as it at any time, how can it collaborate with others in the real world while maintaining any form of confidentiality or security? So it essentially can only be an extension of yourself; there's almost no way around that.
> 
> **This is a problem we don't know how to solve in the short term—you can trick an agent into leaking information at any time, which is why giving an agent completely independent resource access and letting it make autonomous decisions is currently impossible.**
> 
> **Martin Casado**: That logic doesn't necessarily hold. For example, I can also access my employees' emails if necessary.
> 
> **Aaron Levie**: But you don't do that on a daily basis. You can get access if necessary, but that's for special circumstances, like litigation, not routine monitoring.
> 
> **Martin Casado**: It's the same principle for treating agents with the right operating model—
> 
> **Aaron Levie: But the risk is on a thousand-fold scale. Agents will not hesitate to send information to anyone if prompted.**
> 
> **Martin Casado**: I think the ultimate state of these things is that they will always be "imprecise computers"; they will never be able to keep information completely confidential.
> 
> **Aaron Levie**: I don't really like the term "imprecise," though colloquially, yes: keeping content in the context window confidential, "telling it not to leak X," is, I think, a very hard problem to solve. **Therefore, as long as anything enters the context window, you must assume it could be extracted by a prompt injection attack. We haven't found a solution yet.**
> 
> **And if I know your agent's email address, I can email it. It's much easier for me to perform social engineering on an agent than to deceive a human—and it can access your M&A files at the same time, which is very dangerous.**
> 
> **Martin Casado**: But isn't that just the overall situation AI faces right now?
> 
> **Aaron Levie**: Which aspect specifically?
> 
> **Martin Casado**: The way we use these AI systems and agents now—shared systems, shared context.
> 
> **Aaron Levie**: Right, and that's exactly why right now you basically use them "as yourself," and we don't yet know how to let them operate not as yourself.
> 
> **Steven Sinofsky**: Let me give an example.
> 
> **Aaron Levie**: The key to solving this is: you can easily trick an agent into leaking information. So letting it have independent resources that belong entirely to itself and allow it to make autonomous decisions is still unachievable.
> 
> **Steven Sinofsky**: There's a perfect analogy here: we've already been through this journey with open source. The open-source model was "everything is there, help yourself, take what you want," and there wasn't much controversy then because the world was small, and no one was hosting podcasts online to discuss it.
> 
> But soon everyone realized all the problems you just mentioned: in large companies, you can't just let people copy-paste open-source code into your commercial products due to licensing issues, quality issues, etc., so various norms slowly developed.
> 
> This discussion we're having now is a very interesting modern phenomenon in the development of new technology—it's all happening in real-time. Back when open source was happening, we sat in meeting rooms discussing how much open-source code Windows or Office could use; no one online knew we were having that debate.
> 
> Now, the discussion about where all this is going is happening in public; everyone is trying to race to the finish line, but at a speed far exceeding how fast we can actually get there. So what's really needed is for people to buckle down and build things.
> 
> **Aaron Levie**: We need standards, we need—
> 
> **Martin Casado**: I think we have different intuitions about the end state.
> 
> **Steven Sinofsky**: You don't want to hear my intuition.
> 
> **Martin Casado**: You can build an end-to-end argument that these agents will eventually converge to human-level reliability—just like how we view autonomous driving. At that point, you can use the same mechanisms used for humans to protect them—considering internal threats, the possibility of being bribed, human error, performing risk assessments, and establishing operating procedures. That's one intuition. But another intuition also exists—
> 
> **Aaron Levie**: I'm just talking about where we are now; we might not disagree on the end state. And strategically, we're betting on both sides—we'll build agent users and corresponding registration mechanisms; I love the idea of Open Claude having a Box account and operating independently.
> 
> **Steven Sinofsky**: Right, double the account count instantly.
> 
> **Aaron Levie**: Totally support that. I'm just saying for now, we don't know how to safely give an M&A data room to an agent.
> 
> **Steven Sinofsky**: But it's actually harder than you describe. Because threat vectors will be far more complex than human threats today. You can't assume an agent's behavior will be the same as a human's today because, in a sense, an agent injected with malicious instructions is the fastest, most meticulous, and most omnipotent "superhuman" ever, and it will go all out trying to leak information.
> 
> So what will happen next is that enterprise customers will lock everything down first until there's some sense of order. Meanwhile, individual users, especially developers, will have a massive first-mover advantage. I think this is precisely where the most exciting tension lies—enterprises will lag behind these highly personalized "super individuals" who look like emerging startups. Because they have no legacy baggage, startups will move far faster than large enterprises. Of course, agents "going off the rails" in a startup might be a daily occurrence, but that's just a plot for an episode of _Silicon Valley_.
> 
> **Aaron Levie**: Regarding risk, I agree in general, but there are some differences. For instance, I can't threaten Claude Code with "otherwise I'll fire you," but with a regular employee, you at least have that line—95% of people won't actively do bad things.
> 
> **Steven Sinofsky**: They might not be actively evil, but their ability to unintentionally cause harm...
> 
> **Aaron Levie**: I think getting a regular employee to not mistakenly share files outside the company is easier than getting an agent to follow the same set of instructions.
> 
> **Steven Sinofsky**: And you have tools now that can stop that behavior at a higher abstraction level, which is why these capabilities must be built into the software.
> 
> **Aaron Levie**: Yes. I think refining your last point explains much of why the diffusion of AI capabilities will take longer than people in Silicon Valley expect. We see startups starting from scratch with nothing to blow up, so they can just go, and we take that as a universally applicable trajectory. But when you face JPMorgan and ask them "how do you plan to let Nano Claude actually automate business in the near term"—the gap is huge.
> 
> **Steven Sinofsky**: How do you guys see this split between big and small, startups and enterprises? I think this leads to a very interesting question.
> 
> Those existing SaaS vendors experiencing the "SaaS apocalypse"—I don't fully agree with that term, but they do face a real dilemma: what they sell isn't just data, but the intelligence and domain knowledge encapsulated in the entire system.
> 
> The agent side now just wants to buy data, just wants authorization to access data, wants unlimited access to data—but SaaS vendors have never truly opened that up; it's never been their business model. This has been one of the long-term core tensions for vendors like Workday and SAP regarding how much API access to open. Salesforce went through three massive platform restructures.
> 
> I think this is a particularly interesting question—not from Wall Street's perspective, whose judgments on economic logic and issues are all wrong, but from a technical perspective: when everyone wants to access data directly, what does "system of record" actually mean?
> 
> **Martin Casado**: Is it for model training or other purposes?
> 
> **Steven Sinofsky**: What they're really worried about is that some large customers' suppliers want to use the customers' data to train models.
> 
> **Aaron Levie**: Actually, even without training, this problem exists. Because previously users operated in your UI, now they just send an API request over the network; the monetization capabilities of the two methods are worlds apart.
> 
> **Steven Sinofsky: But monetization is a Wall Street perspective. I think systems like SAP have a lot of domain knowledge that won't disappear; that's an absolute fact. The idea that you can replace SAP with "vibe coding" is absurd. The domain knowledge in SAP doesn't just exist in some neatly designed data layer; it exists in the UI, the middleware, and the way the system is used itself.**
> 
> So I'm really not sure how this will evolve because SAP won't disappear, and that will slow the diffusion of AI on that data—whether it's agents actively operating on data or just read-only query reports. What do you think?
> 
> **Aaron Levie**: Okay, let me offer a somewhat bold view.
> 
> **Steven Sinofsky**: Go ahead; if you don't say it, don't expect to be invited back.
> 
> **Aaron Levie**: I've become completely convinced of the "build products for agents" philosophy. Over the past year, this concept has become clearer, and I think we're actually aligned on this—after enough iterations, at some point, agents will largely dominate which tools they want to use.
> 
> Agents certainly can't replace an enterprise system, but a few generations later, an agent might run into too many obstacles set by your software and just say: "You need to completely replace this old HR system, otherwise I can't automate this workflow for you."
> 
> So there's a very interesting dynamic—back to the logic of agents being a hundred or a thousand times the volume of humans; repeat that enough times, and the software stack must be built for agents. There might be a few holdouts, a few ERP systems as the last traditionalists, but everything else will be: your business performance will be highly correlated with whether your agents can smoothly access the information they need.
> 
> **Therefore, your enterprise IT infrastructure must support the efficient operation of agents; agents will effectively take control because your software must let those agents work effectively. This means every SaaS company or software company must face this core question: Can you build high-quality APIs? Can you achieve monetization? Can you handle agent identity and access control? These are the new problems you must solve when building a software company.**
> 
> How is monetization achieved? Will Workday charge a penny for every HR record query? We'll figure that out later. I think revenue for some businesses might decrease, while others might increase significantly.
> 
> **What we're excited about is: every agent loves processing files, so there might be far more files in the future than before. Can we build a platform that makes it very easy for agents to process this data? We're betting this will be an optimistic outcome for our business model. Of course, some business models will be compressed because, in that future scenario, the value created by agents is far greater than the software itself; and most cases fall somewhere in between.**
> 
> **Martin Casado**: Can I object to one thing?
> 
> **Aaron Levie**: I thought what I said was uncontroversial...
> 
> **Steven Sinofsky**: We're here to object.
> 
> **Martin Casado**: I think Paul Graham and many others are missing something—they focus on the interface and say things like "build products for agents." But I think that's almost entirely wrong.
> 
> **Aaron Levie**: To be fair, Paul Graham didn't actually—
> 
> **Steven Sinofsky**: It’s been over-interpreted.
> 
> **Aaron Levie**: I'm the one who dragged Paul Graham into this.
> 
> **Martin Casado**: Okay, then I'm talking about those discussing this at an abstract level. They say "now you're marketing to agents, the most important thing is having a good API." I think that's almost entirely wrong.
> 
> **Aaron Levie**: That's a heavy statement.
> 
> **Martin Casado**: What agents are truly good at is finding the solution path themselves. Ultimately, what matters is semantics. In my experience, agents are very good at choosing the right backend for the task at hand—they don't say "this interface is easy to use," they consider substantive factors like cost parameters, durability, and use the collective wisdom of us using these platforms.
> 
> Take cloud platforms; there are many now. Every time I have an agent choose a platform, it's actually making judgments based on truly meaningful criteria, not interface-layer things. So I think as an industry, we're too focused on the interface—"oh, you have to market well to agents and such." But in reality, what determines the winner is whether you've built a better system.
> 
> **Aaron Levie**: Okay, I think we don't actually disagree. I'm not looking at this from a marketing perspective; I'm more saying: if your tool is closed to agents, agents will eventually find a better alternative tool for that company. Previously you went to Gartner to ask what system to use; after enough iterations, an agent will say: "You should use this kind of database to handle these operations." If you're not on that recommendation list, you're out.
> 
> **Martin Casado**: I think that's actually something to celebrate because agents are quite smart at choosing technology. In the past, procurement decisions were often influenced by many factors unrelated to the technology itself.
> 
> **Aaron Levie**: But don't worry, in Silicon Valley, we'll soon destroy this meritocratic competition mechanism—
> 
> **Steven Sinofsky**: Someone will start bribing algorithmic recommendations. Workday's marketing agent will find a way to buy recommendation rankings...
> 
> **Aaron Levie**: Find a way to treat agents to steak dinners.
> 
> **Steven Sinofsky**: Exactly, that's it. But there is a real and interesting phenomenon—it's like back in the internet era, every company's intranet file share had the best documents, best PPTs, best financial models. People would get familiar with them, and if they couldn't find a suitable one, they'd create a new one. Many organizations operated like that, in a way like a free market.
> 
> Before Box, IT people didn't care about anything that wasn't in a database; they only cared about what was in SQL. The risk of the model you describe is that agents will generate their own de facto new systems of record, which in the eyes of the IT department is just end users messing around at the "middleware" level. And this system will spread in a fragmented way; that's a real risk.
> 
> To some extent, it's like "macro code eventually taking over the whole company." IT departments have seen this before, and they've seen marketing departments go buy an event website online themselves, resulting in a major security breach, email list leaks, and the company being sued. So there's more real-world tension in this dynamic than what we've just discussed.
> 
> **And different organizations will move at very different speeds. JPMorgan will be the slowest, startups will be the fastest, but the gap is huge. Even for startups, this is still some distance away because startups at some point also need systems of record, they will all use SaaS, and they won't replace them quickly.**
> 
> **Martin Casado**: It feels like there are two diametrically opposed views. One is what Elon says—you enter a prompt, it directly outputs machine code, which is the "layer collapse theory": all existing interfaces and layers built in the past will disappear, going straight from prompt to machine code.
> 
> The other is the system history perspective—layers never disappear, they only continue to stack because many layers are actually organizational boundaries, state boundaries—
> 
> **Steven Sinofsky**: Or compatibility requirements; they remain because of compatibility.
> 
> **Martin Casado**: The other argument is: we built these layers for a reason, to satisfy human and organizational needs; these needs won't change, and agents will adapt to these layers instead of breaking them. I lean toward the latter. I think systems will still operate in quite a similar way; maybe more agents will be using them, but I don't think the systems themselves will evolve that much.
> 
> **Aaron Levie**: Maybe the people in the "Anthropic growth marketing head case" category are closest to the first scenario—because they built a pure AI-native way of working from first principles. But for ordinary people, we still want to use a set of off-the-shelf CRM systems.
> 
> **Steven Sinofsky**: This isn't something that hasn't been tried. If you redesign an ERP system from first principles—SAP had a set of assumptions when it was founded in the 1970s; today you'd have completely different assumptions, and the architecture would be very different. But ten years later you'd feel about that new architecture: "That decision was really wrong."
> 
> **So the existence of layers is inevitable, but first-principles thinking will also always exist because at any point in time, the decisions you make based on constraints then will determine the direction of the entire system. Just like LiDAR, it was perfectly reasonable ten years ago, but it still took ten to fifteen years to prove "it works without LiDAR too." Then there will be a bunch of new things that make you say: "It could have been done differently."**
> 
> So my feeling is our discussion is trying to rush to a finish line, but let's first see the first case of what you described actually happening—that's the real signal. I think enterprises will just go back to layers and architectural models because that's the only feasible way.
> 
> **Martin Casado**: For security and compliance, it's a must.
> 
> **Steven Sinofsky**: And it's the only way to build a system. Otherwise, you're just writing an app. If that app only does one thing, then that's a completely different approach.
> 
> **Aaron Levie**: One thing I'm particularly interested in—I don't have any amazing data points, but at least conceptually—those new companies in service industries starting from scratch and building from first principles: marketing agencies, engineering consulting firms, or law firms, maybe construction design, architectural design, any type of knowledge-based service company.
> 
> If you have no legacy baggage, no information silos or access boundaries, you can give all the context to an agent, write code for specific needs at any time; you really can build your company in a very different way. I think that will be quite disruptive until those large incumbents can catch up. This will at least create some precedents and cases showing what this new type of organizational form looks like.
> 
> Of course, over time, they will eventually encounter the same problems every other company faces—
> 
> **Steven Sinofsky**: Geographic expansion, market segmentation, distribution channels... anything beyond your four walls will hit real-world resistance.
> 
> **Aaron Levie**: I do like the idea of "new business models being unlocked." Naturally, because a massive amount of information and software is used at a level a hundred times below its actual economic value, simply because no one wants to pay 5 cents to access a piece of data or $1 to use a tool just once.
> 
> But once you give an agent a budget and a set of protocols, it can instantly acquire medical research materials and pay $3—that actually works. Agents can directly complete these micro-transactions. This opens up a whole new set of business models.
> 
> **Steven Sinofsky**: The history of the internet... okay, I'll refrain from saying too much. But that point, I think, is one of the most important "unresolved" questions today: **everyone is trying to figure out the economic logic of all this, but they are off by at least an order of magnitude in estimating the scale of this opportunity. The reason is no one knows what the new business models will be, but they will certainly emerge, just as they did with every new technology.**
> 
> **The current constraint being discussed is that a bunch of finance and Wall Street people are trying to build profitability models for GPUs and tokens as if we still live in the old world, viewing revenue as some kind of linear growth curve.**
> 
> This is like the PC era, when people saw PCs as a limited market because they viewed computing power as a finite consumable, never imagining what would happen if you put that power on every desktop. They thought software was bundled with hardware; no one imagined software could be sold separately. Until someone thought of it, and it turned out to be a very good idea: Bill and Paul.
> 
> The same thing happened with cloud computing—people looked at the cloud and said we were just going to move the server business, about sixty thousand units a year, to someone else's data center and prorate the price. No one said "the amount of resources people use will increase a thousandfold." That's exactly what drives me crazy—Wall Street models always view the revenue pie as a fixed size.
> 
> **Martin Casado**: It's something approaching zero...
> 
> **Steven Sinofsky**: Right, approaching zero somewhere. They think a company's total spending is fixed. That was also the problem Salesforce faced—Marc Benioff was carving out a new path: the CRM market then was $2 billion a year, and that $2 billion required you to buy a bunch of servers, Oracle licenses, and go through long deployment and consulting processes. But if you could let salespeople sign up individually, they'd all sign up, zero friction, right? That's exactly what's happening with AI right now, no doubt about it.
> 
> **Martin Casado**: Let me give an example. I've been investing for ten years and have a portfolio of about 240 companies, 50 of which are infrastructure companies; some have done well and some haven't. In the last six months, all 50 of these companies without exception have entered the exponential growth phase of the curve.
> 
> The reason is simple: more software is being written every day now than ever before—and not because they have large enterprise customers, it's purely the explosion in consumption at the infrastructure layer. As more software and more agents emerge, the consumption of computing resources will be even greater.
> 
> **Steven Sinofsky**: We haven't even reached the point where everyone's phone is heavily consuming AI. Once on-device AI truly lands, that number will jump by orders of magnitude again.
> 
> **Aaron Levie**: Do you support the direction of micro-payments?
> 
> **Steven Sinofsky**: In general, yes, but historically with every technical revolution, there's a wave of people who think micro-payments will become mainstream, and eventually in enterprise scenarios, everyone still prefers bulk licensing or packaging because it's simpler and more predictable.
> 
> **Aaron Levie**: People want predictability.
> 
> **Steven Sinofsky**: Right, they don't want to calculate costs every moment. But I also like the idea—
> 
> **Aaron Levie**: What I like is that for the first time you have a subject that completely doesn't care about the friction of small transactions. For the first time, there's a participant willing to actually pay for resources behind paywalls.
> 
> **Steven Sinofsky**: And the entire world has built the infrastructure to aggregate these payments into efficient forms for customers.
> 
> **Martin Casado**: Because tokens are now a significant part of the cost, this is driving the entire industry toward usage-based billing. I remember the shift from permanent licenses to subscriptions; that required massive budget and organizational changes. We are now going through the same shift toward usage-based billing, and usage-based billing is very granular—
> 
> **Steven Sinofsky**: We went through this in the AWS era—people were terrified of the elastic costs of cloud computing then, giving rise to a group of intermediary companies helping enterprises find the lowest prices.
> 
> **Martin Casado**: Exactly that.
> 
> **Aaron Levie**: Okay, bringing tokens into it; I don't know if we have time to expand on that in this conversation. But the topic of engineering computing budgets will be one of the craziest discussions in the coming years. How much engineering expense should be allocated to tokens? Answers from different people might range from 1% to 100%.
> 
> **Steven Sinofsky**: Yes, but that...
> 
> **Aaron Levie**: CFOs must really give an answer.
> 
> **Steven Sinofsky**: CFOs always want answers to questions that have none.
> 
> **Aaron Levie**: Wall Street will force them to provide answers.
> 
> **Steven Sinofsky**: Wall Street will force them to make up a number, then hold them accountable to it, then the CFO gets fired, and it passes.
> 
> **Aaron Levie: R&D spending is usually 14% to 30% of revenue for any public company. Whether the cost of computing resources comes in at double the engineering team's pay or at just 3% of it, the gap between those is huge and directly impacts EPS. We have to figure this out.**
> 
> **Steven Sinofsky**: I don't mind sacrificing a few CFOs. That's a good clip, by the way. But the reason is still that we're trying to answer a question that fundamentally has no answer right now. It happened in the internet bandwidth era, in—
> 
> **Aaron Levie**: No, this is not on the same level as network bandwidth at all.
> 
> **Steven Sinofsky**: Yes, it is; it happened in the vacuum tube era, the transistor era; every new technology goes through this phase. It even happened with programmers—in my lifetime, there was once a time when people felt programmers would eat up every company's cost budget.
> 
> **Aaron Levie**: But I think we've never experienced a situation where every end-user in an organization has fully elastic power to initiate computing requests on their own behalf.
> 
> **Martin Casado**: This sounds very similar to what happened in the cloud computing era from 2016 to 2018—there were a whole set of companies then basically helping you manage cloud spending dashboards, like FinOps-type tools. Developers got cloud access, but cloud spending started to spiral out of control, giving rise to tools like "here's your Twilio spend, here's your AWS spend."
> 
> **Aaron Levie**: But this is very different, and I'm waiting for the YouTube comments section to correct you. Back then you could call the team into a room and say: "Can we optimize this algorithm slightly so it doesn't hog so many cluster resources late at night?" Meeting over, someone goes and changes it, done.
> 
> The problem now is that every prompt from every engineer involves these decisions: should this task be designed to run long? Should it be parallelized? What's your tolerance for wasting tokens? Right now my answer is we should waste more tokens because that means we're trying new things. Should your engineering head be happy to see you running ten experiments at once—even if 90% of the tokens are "wasted," you'll pick one successful path? Or do you tell the team to fully design the system before executing?
> 
> As of the moment of recording this episode, everyone is freaking out about token limits for the new Claude Code Max plan; you're cut off after three prompts. This will be a very real topic until we can truly expand data center capacity.
> 
> **Steven Sinofsky**: That's another issue. Assuming we build more capacity, prices will drop because the high price now is caused by limited supply. But—
> 
> **Steven Sinofsky**: This will eventually be resolved, and I feel for those who must make decisions right now—which seventeen people can't use tokens this week, or everyone lining up with a token card for food. But you know, just like we used to output command execution time in command-line tools to know if it got better or slower—all this, all these problems, will eventually disappear. No doubt.
> 
> **Aaron Levie**: On a timeframe, I 100% agree.
> 
> **Steven Sinofsky**: The fundamental reason is needing to do Benioff's arithmetic: if you pay an enterprise salesperson $1 million a year, what are his tools worth? If you pay an engineer X dollars a year, then his tools are completely worth the investment at some point. It won't be a problem.
> 
> If there's a short-term capacity bottleneck, that's a different issue, it's supply-driven price, not that we'll always have to do this cost budgeting exercise.
> 
> **Aaron Levie**: I think the law of large numbers will solve this because eventually enough engineers will use enough computing resources. But we're in a transition phase now—two years ago most people thought AI spending was just a chatbot, they were wrong, we tried to remind them, but they were still wrong.
> 
> **Steven Sinofsky**: They were wrong because they only saw this specific application scenario. But like that vacuum tube example—there were once people who thought the Dakotas would be covered in vacuum tube warehouses, with workers on roller skates flying down hallways to replace tubes to support WWII calculation needs.
> 
> Then someone said, why not use transistors? We will have our own "transistor moment" in AI too, maybe supply-side expansion, fundamental algorithmic breakthroughs, or hardware revolutions—many things could change this current moment.
> 
> **I think it's particularly strange that everyone is locked onto tokens now—like back in the IBM mainframe era, everyone was looking at computing capacity (MIPS), until one day someone pointed out that IBM was selling more MIPS for less money every year, and they hadn't realized it themselves, still pricing by MIPS, until someone told them they were already on a downward curve—because the speed they produced MIPS was faster than the speed they could charge for them. The same thing will definitely happen. I'm just saying.**
> 
> **Martin Casado**: That sounds very confident.
> 
> **Steven Sinofsky**: Right, it sounds like I know what I'm talking about.
> 
> **Aaron Levie**: I probably should believe that.

### Related Stocks

- [SOXX.US](https://longbridge.com/en/quote/SOXX.US.md)
- [SMH.US](https://longbridge.com/en/quote/SMH.US.md)
- [CLOU.US](https://longbridge.com/en/quote/CLOU.US.md)
- [SRVR.US](https://longbridge.com/en/quote/SRVR.US.md)
- [IDGT.US](https://longbridge.com/en/quote/IDGT.US.md)
- [XSD.US](https://longbridge.com/en/quote/XSD.US.md)
- [XDAT.US](https://longbridge.com/en/quote/XDAT.US.md)
- [BOX.US](https://longbridge.com/en/quote/BOX.US.md)
- [XSW.US](https://longbridge.com/en/quote/XSW.US.md)
- [IGV.US](https://longbridge.com/en/quote/IGV.US.md)
- [DTCR.US](https://longbridge.com/en/quote/DTCR.US.md)
- [PSI.US](https://longbridge.com/en/quote/PSI.US.md)
- [DAT.US](https://longbridge.com/en/quote/DAT.US.md)
- [IXN.US](https://longbridge.com/en/quote/IXN.US.md)

## Related News & Research

- [Nutanix Investor Day: Agentic AI Stack, NetApp/Lenovo Storage Deals, and FY2029 Rule of 40+ Targets](https://longbridge.com/en/news/281945174.md)
- [Key facts: Nutanix debuts Agentic AI, NKP Metal; Service Provider Central](https://longbridge.com/en/news/281926226.md)
- [Anthropic built an AI so dangerous it won’t release it](https://longbridge.com/en/news/282184138.md)
- [Nvidia acquisition of SchedMD sparks worry among AI specialists about software access](https://longbridge.com/en/news/281791137.md)
- [Global Tech Firm Opens Prediction Market Infrastructure to Businesses Worldwide](https://longbridge.com/en/news/281840410.md)