---
title: "Google CEO Deep Dive: 10 Years as CEO, Lows, Reversals, and Regrets"
type: "News"
locale: "en"
url: "https://longbridge.com/en/news/282199853.md"
description: "In a tenth-anniversary interview, Google CEO Sundar Pichai reflects on Google's journey in the AI field, admitting the reasons for the delayed release of the Transformer architecture and emphasizing Google's full-stack vertical integration as a core strength. He warns of wafer capacity bottlenecks in 2026 and reveals that Google is exploring space data centers. Pichai predicts that by 2027, Google's business forecasting will be fully automated by AI, and search functions will evolve into an \"agent manager.\""
datetime: "2026-04-09T12:56:17.000Z"
locales:
  - [zh-CN](https://longbridge.com/zh-CN/news/282199853.md)
  - [en](https://longbridge.com/en/news/282199853.md)
  - [zh-HK](https://longbridge.com/zh-HK/news/282199853.md)
---

# Google CEO Deep Dive: 10 Years as CEO, Lows, Reversals, and Regrets

Recently, Google CEO Sundar Pichai, on the occasion of his tenth anniversary leading the company, gave a joint interview to John Collison, co-founder of payments giant Stripe, and Elad Gil, a tech angel investor.

During the interview, Pichai reviewed Google's journey from lagging behind to leading in the AI wave. He directly **addressed the history that remains a point of frustration for Google employees: although the Transformer architecture was born at Google, it was OpenAI that launched ChatGPT, which then became the cornerstone for disrupting the search industry.** He admitted that there is "a bit of a misunderstanding" here, stating that **Transformer was born out of a desire to improve translation quality from the very beginning**, rather than being just theoretical research.
The reason it wasn't released sooner was partly Google's "higher threshold" for search quality: **early internal versions were "too toxic" to be released.**

Facing the current AI race, Pichai believes the market is far from a zero-sum game, noting that the "value growth curve is extremely steep." He also revealed that he **personally spends at least an hour every week approving compute allocation**, stating, "This is the most important thing right now."

In Pichai's view, **Google's full-stack vertical integration is a core advantage**, spanning from the seventh-generation TPU to models and applications, and he disclosed that capital expenditure will reach $175 billion to $185 billion in 2026. On resource bottlenecks, he believes wafer capacity is the "fundamental constraint," warns that **2026 will be a "year of supply compression,"** and argues that the U.S. must learn to "build physical infrastructure at 10x the speed." He also confirmed that **Google is exploring space data centers, stating, "This is the Waymo of 2010."** It may seem distant, but it has already started with a small team and a small budget.

Pichai firmly believes that **search will not die; instead, it will evolve into an "agent manager."** You will only need to give a command, and an AI agent will complete the task for you. He even boldly predicted that by 2027, internal business forecasting at Google will be fully automated by AI, requiring no human intervention.

The following is a condensed version of Pichai's interview:

## **01** **"We Weren't Slow, Our Standards Were High"**

**Q:** People always bring up that history: Transformer was invented by Google, but it became the foundation for ChatGPT. How do you look back on that now?

**Pichai:** This is actually a bit misunderstood. Transformer didn't come out of nowhere. We had a very realistic need at the time: to make translation better. It was the same with TPUs.
Speech recognition technology already existed, but the problem was that we had to serve two billion users, and existing chips simply couldn't handle it; we had to solve the inference efficiency problem first.

**Q:** So Transformer was aimed at products from the start?

**Pichai:** Yes, our research team was focused on solving practical problems from the beginning. As soon as Transformer came out, we immediately applied it to search. Later, we developed BERT (Bidirectional Encoder Representations from Transformers) and MUM (Multitask Unified Model), and search quality saw a massive leap during that period. We actually developed products similar to LaMDA (Language Model for Dialogue Applications) internally; we just weren't the first to bring them to market.

**Q:** In other words, you did the research and saw the returns, but didn't use it to dominate everything.

**Pichai:** It's more than that. We had actually researched product forms like ChatGPT internally, specifically LaMDA. Do you remember? There was an engineer at the time who felt LaMDA had become sentient (he was later suspended and dismissed over it); that was essentially the precursor to the early version of ChatGPT. We had internal product versions long ago, but they were released about nine months later than ChatGPT. In fact, as early as the 2022 I/O conference, we launched AI Test Kitchen, with LaMDA running in the background. But we imposed many restrictions because that version hadn't undergone RLHF (Reinforcement Learning from Human Feedback), and its responses were quite "toxic," making us afraid to release it directly. Furthermore, Google has always had extremely high standards for search quality, and the bar for product releases is higher. Even when OpenAI released ChatGPT, their partnership with Microsoft had only just been finalized. So, looking back, ChatGPT's success wasn't as "obvious" or "a sure thing" as it might seem.
I think OpenAI was lucky in one regard: they first saw an opportunity in programming scenarios through GitHub. We might have missed that signal at the time. In programming, the progress in model capabilities is much more evident than in pure language scenarios. From GPT-2 to GPT-3, and then to GPT-4, the leap in writing code was more striking than in chatting. These factors combined to create the subsequent situation. So, I don't think it has much to do with "research not translating into products," but rather other factors coming together.

**Q:** I recall someone saying that when ChatGPT was released, it was actually quite low-key, launched during the week of Thanksgiving, and no one thought it would become what it is today. It was just an interesting experiment.

**Pichai:** That's the norm for the consumer internet; there are always surprises. When we were at Google, we did Google Video Search, and then YouTube came out. It was the same with Facebook; Instagram suddenly emerged. No one looks at these things with a dramatic sense of "I'm about to be disrupted"; Facebook's approach was simply to buy Instagram. My point is, there are always three to five people huddled together working on a prototype, throwing out millions of ideas every day. I'm not belittling anyone, but this is bound to happen. You can't just build the next iPhone in a garage, but that's how the consumer internet works. The key is to realize this and truly internalize it into the organization's DNA.

## **02** **Search Is Not Dead**

**Q:** Google has always been known for being "fast." Early search showed response times on the results page, and Gmail and Chrome were a step faster than competitors. Now, Gemini runs on TPUs, and the speed is still incredibly fast. Is this a deliberate product strategy, or is there a more complex reason?

**Pichai:** Speed is actually twofold.
One is response speed, the user's perception of fast or slow; the other is iteration speed, how quickly we launch new features and improve products. Both are important. You just asked about latency. The hard part is that we have to continuously add new features while maintaining rapid responses. The search team now has a millisecond-level latency budget; for example, if you save 3 milliseconds, 1.5 milliseconds must be given back to the user experience, and the other 1.5 milliseconds is the quota you've earned for yourself.

**Q:** The latency humans can perceive is only a few hundred milliseconds, right?

**Pichai:** True. But over the past five years, while adding a bunch of features, we've also reduced search latency by 30%. The same goes for Gemini; the Flash model has 90% of the strength of the Pro model but is much faster and cheaper. Vertical integration has played an important role in this.

**Q:** Do you think search will still be around in 10 years? Some say chat is the new interface, while others say everyone will have their own agent in the future, and you can directly command it to perform actions without personally searching.

**Pichai:** With every technological shift, search can do more. User expectations change, and you have to change with them. In the future, many "quick checks" will become agent-based—you give a task, and the agent helps you complete it. Search will become an agent manager. In the Antigravity tool I currently use, there are already a bunch of agents working.

**Q:** Will the form of inputting a line of keywords and getting back a bunch of links still exist?

**Pichai:** In the current AI search mode, some people are already doing deep research with it; it's a bit different from what you described, but that's how people are using it. In the future, there will be more and more long-running tasks, and they can be asynchronous.

**Q:** You just said search will become an agent manager.
But in ten years, will that search box still be there, with people just not taking it seriously anymore?

**Pichai:** Device forms will change, and the ways of input and output will change. But to be honest, thinking ten years ahead can easily paralyze you. We are lucky to be at a moment where just looking at the next year is exciting enough. The curve is too steep; models will be completely different a year from now. Just following the curve is exciting enough in itself. And many people don't realize that this is an expansive moment, not a zero-sum game. Look at YouTube; TikTok and Instagram have both developed, and aren't we still doing well? The more you feel that you must die if someone else rises, the more it truly becomes a zero-sum game. But as long as you are innovating, it won't be. We are doing both search and Gemini simultaneously; there is overlap, and they will gradually diverge. I think it is beneficial to have both.

**Q:** In the spring and summer of 2025, the market was extremely pessimistic about Google's future, saying search was finished, and your stock price dropped to around $150. Looking back, that was clearly a misunderstanding. Google has performed excellently across the entire tech stack, whether in applications, models, or TPUs, as well as with Waymo, YouTube, and all those cool bets. What do you think investors missed at that time?

**Pichai:** At that time, everyone's attention was on the "reversal," the so-called "OpenAI counter-attack." But to me, that moment instead made me feel that Google was born for this moment. This vertical integration was not accidental or arbitrary. In 2016, we released the TPU at the I/O conference and promised to build AI data centers; now, we've iterated to the seventh generation. That year, the company also set the "AI First" direction, which was not just a slogan.
We did fall behind a step on frontier large models, but internally we had all the necessary capabilities; all that was left was execution. What excites me is that, viewed full-stack, we have research teams, infrastructure teams, and various business platforms. AI happens to be able to accelerate all these businesses simultaneously, including search, YouTube, Cloud, and Waymo; they are all on the same curve. This is a very efficient lever. I didn't think it was a zero-sum game at the time. Everything will expand tenfold, and others will have room too. After Google rose, didn't Amazon and Facebook also do well? We always underestimate the space brought by growth. So my focus was simple: execute better.

**Q:** Was there a landmark moment that made the outside world feel "Google is finally back"? Was it Gemini 3?

**Pichai:** People really started to notice the trend probably with Gemini 2.5. Especially the multimodal capabilities, which stood right at the frontier. This is thanks to the Google DeepMind team. We paid a lot of fixed costs for multimodality from the beginning, and Gemini was designed for this direction from day one. By the time of Gemini 2.5, the advantages started to show. Nano Banana, for example: you can see the effect of everything integrated together. However, this field is changing too fast. Two or three top labs are pushing each other; this month you think "Great, we're leading in this area," and the next month it's "Oh no, we're behind over there." The landscape might be different again in a few months. Frontier competition is just that intense.

## **03** **Spending $180 Billion Annually to Explore AGI**

**Q:** Some external researchers feel there is a difference between Google and other top labs: Google isn't as "obsessed with AGI." In other words, Google doesn't quite believe AGI will be realized immediately and doesn't accelerate its pace around that idea. Do you think this observation is correct?
If it is, will it affect your judgment of future directions?

**Pichai:** Look at our capital expenditure; it has risen from $30 billion to $180 billion. Who would spend that kind of money without truly believing in this curve? I think this is largely a semantic issue. We are a large company, our products cover too many people and levels, and our way of speaking might be different. But to say Google doesn't understand AGI makes no sense. Many founders themselves are AGI fans; Demis Hassabis, Jeff Dean, Ilya Sutskever, and Dario Amodei all worked at Google back then. I think the reason we appear different to the outside world is partly geography, such as San Francisco gathering more young companies and research labs. But these are just appearances. At root, everyone's judgment of the technology curve, and of how to understand and apply AI, is not essentially different. The real gap lies in whether you have witnessed changes on the front line. In our company, there is a group of people who run at the very front every day, personally deploying and testing AI agents, watching them gain new skills step by step and handle complex tasks. If you look back at what they could do three months ago, you can truly feel the impact of exponential growth.

**Q:** I'm very curious: when was the last time you felt the AGI moment was approaching?

**Pichai:** The first time I had that feeling was in 2012. At the time, Jeff Dean demonstrated the earliest version of Google Brain; this neural network recognized a cat. Later, Larry Page and I went to the DARPA challenge to watch cars driving autonomously. Demis demonstrated an early model, and the model showed something we called "imagination." There are many such moments. Most recently, it's the rapid progress in the programming field.
You give a programming agent a complex task, and from beginning to end, without opening an IDE (integrated development environment), you just watch it complete the task in the manager. That feeling, you could call it an AGI moment.

**Q:** I was working on a small project myself the other day, and only after it was running did I realize I didn't even know what programming language it was using and had to specifically ask it. It felt like magic.

**Pichai:** Exactly. The slope of the curve (the speed at which it gets stronger) is what's truly surprising. You look back three months and you know how much progress has been made.

**Q:** Speaking of this firsthand experience, I'm curious how you maintain a real touch with the products. Tech products are too abstract; you can't just look at reports and PPTs. Besides routine operations like using Gmail every day, how do you ensure you don't disconnect from users?

**Pichai:** I use internal versions and specifically arrange time for intensive use. Two weeks ago, I was working out at the gym with Gemini Live open on my phone, and for the next 30 minutes, I went deep on one topic with it. Some experiences were great, some were frustrating, but you learn things. I force myself to use them in a "power user" way to maintain contact. X (Twitter) also helps, because sometimes you get the most direct feedback there. Also, I now go into Antigravity (our internal version) and directly ask the AI: "We released this feature; what does everyone think? Tell me the five worst and five best comments." It pulls them right out. Has my life become easier? Yes. In the past, I had to spend a lot of time trying to understand situations; now, AI agents help me do that part. Of course, I still have to spend the time experiencing it myself; it's a learning process. I'm also working hard to adapt to this future.
**Q:** You just said this isn't a zero-sum game, and the productivity improvement is real. But looking back at previous technology cycles—Internet, mobile, SaaS—it took a long time for them to show up in GDP. With AI, we've already seen data center construction driving GDP growth. Do you think that in the next three to five years, the U.S. economy will become larger because of AI? By how much?

**Pichai:** For these returns to be meaningful, they have to show up somewhere. I remember someone from Sequoia wrote an article saying that with everyone investing so much money, the returns have to match. Of course, that was two and a half years ago. At the time, some said this was illogical because the rate of return had to reach a certain level to be considered reasonable. But now the investment scale may have grown tenfold; we need to re-examine these numbers. At some point, the math has to work out. What is very clear is that we are now supply-constrained, and we see strong compute demand in all application areas.

**Q:** I have no doubt this is a huge market. The problem is, the way many people do the math might be wrong. For example, they compare token budgets with engineer salaries; I think the software engineering market is larger than anyone imagines, and an increase in supply will instead expand the market tenfold. I'm not questioning the relationship between capital expenditure and returns; I'm just curious how large you think the growth can truly be.

**Pichai:** Looking back at the development of the Internet, the GDP growth figures didn't fully reflect the kind of change we felt. Maybe without the Internet, GDP growth would have been negative. It's hard to make precise predictions; every level of society has natural damping mechanisms. One of the most obvious examples: the compute build-out curve and the model improvement curve are completely different; the former is slower.
Then you also have to consider how to diffuse the technology into society. Waymo is an example. It's safer than human drivers, but you still have to be cautious with the speed of rollout; all these levels have constraints. The U.S. economy is much larger than ten years ago; even if the growth rate increases by only half a percentage point, that's a huge contribution. I think it will move in this direction.

## **04** **Supply Chain Alert: Memory, Electricians**

**Q:** You mentioned supply constraints, which are indeed a defining feature of 2026. You said Google's capital expenditure is about $180 billion?

**Pichai:** Between $175 billion and $185 billion.

**Q:** Interestingly, even if Google wanted to spend $400 billion, it couldn't, because there isn't enough memory, there isn't enough electricity, and various components are in short supply. Can you talk about these bottlenecks?

**Pichai:** You can't even find the electricians you need.

**Q:** Tell us what the bottlenecks are.

**Pichai:** Ultimately, it goes back to wafer capacity; that's the fundamental constraint. Electricity and energy are relatively easier to solve, but the permitting and regulatory environment is a big issue, slowing down the speed at which you can do things.

**Q:** States like Texas, Nevada, and Montana have plenty of land, but it's still not enough?

**Pichai:** We are making great progress, but the U.S. truly needs to learn how to build faster. **Look at the construction speed in China; it's amazing.** We need to shift our mindset and think about how to increase the construction speed of the physical world tenfold. This will be the real constraint. And the resistance will get bigger and bigger; it's not something that can be solved by a few people saying, "We need to speed up construction."

**Q:** There are also issues like data center moratoriums.

**Pichai:** Wafer capacity, permitting, construction speed—these are all bottlenecks.
The government has already done a lot, and everyone realizes the need for improvement. Then there are critical components in the supply chain; memory is a classic example. In the short term, everyone is stuck there. For companies like us, no matter how much you "revere AGI," you have to face a reality: your judgment cannot be 100% accurate; there's always a margin of error. You have to think clearly: how optimistic are you about future development? How much margin compression can you bear? Because external factors could go wrong at any time. Everyone is making adjustments based on these uncertainties.

**Q:** So memory is what you feel is the biggest component bottleneck?

**Pichai:** Definitely one of the most critical ones currently.

**Q:** You say this is short-term. Will the market stimulate supply through price increases?

**Pichai:** Leading memory manufacturers cannot significantly expand production. Supply will be restricted in the short term but will gradually ease. Moreover, this restriction will force innovation—we will increase efficiency by 30 times. These things are happening simultaneously.

**Q:** Won't this reinforce the oligopolistic pattern? Models self-improve, write their own code, and label their own data, and compute is a game of musical chairs: whoever has more compute can run further. But if everyone's compute is allocated proportionally, that effectively sets an upper limit for people. Do you think this argument is correct?

**Pichai:** There is some truth to it. But we just released Gemma 4, a very good open-source model. **Chinese models are very good, but I believe that outside of China, this is also a very good open-source model.** Gemma 4 is of course a long way behind Gemini 3's frontier architecture in capability, but the release interval between the two is not that long. A model is not a behemoth like a SpaceX rocket.
**Q:** I've always found it shocking: you run a data center for several months, and what comes out in the end is just a flat file, something like a Word document, and that's your model—it's amazing!

**Pichai:** The uniqueness of this makes me want to challenge that framework. At least from an inference perspective, you're right. But everyone is looking for ways to use the power of capital to break through these limits; the incentive is huge.

**Q:** But you just said the world only has so much memory. Supply problems in 2026 and 2027 cannot be solved by capital incentives alone. This may be exactly when more divergence appears among models.

**Pichai:** Yes, but it has to be looked at together with factors like wafer capacity and permitting. Balanced as a whole, the constraints might not be as serious as imagined. You have to consider everything together, including capital.

**Q:** Logically, everyone is willing to invest more money, but it hits the reality bottleneck of 2026 and 2027. It's like the Strait of Hormuz: you can set the oil price as high as you want, but if supply drops by 20 million barrels a day, then 20 million barrels of demand must be eliminated. Memory is the same; in the end, someone will definitely not get it.

**Pichai:** Of course, there are other constraints, like security. But the key is that these models will soon break through the carrying limits of almost all existing software—maybe they already have, and we're just sitting here unaware.

**Q:** So supply constraints instead force you to optimize and become more efficient.

**Pichai:** Yes, it forces you to have some necessary conversations. Take security as an example: we need more coordination, but today that coordination is far from enough. Someday there will be a moment—maybe it will come quite suddenly. You can't just wishfully hope these problems will disappear on their own.
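The interviewer's rationing argument (a hard supply cap means some demand goes unserved no matter how high prices rise) can be sketched as a toy market-clearing computation. All quantities below are hypothetical illustrations, not real market figures:

```python
def ration(supply, bids):
    """Fill the highest-value bids first from a fixed supply; return what is
    served and how much total demand goes unserved regardless of price."""
    served, remaining = [], supply
    for price, qty in sorted(bids, reverse=True):  # highest willingness-to-pay first
        take = min(qty, remaining)
        if take > 0:
            served.append((price, take))
        remaining -= take
    unserved = sum(q for _, q in bids) - sum(q for _, q in served)
    return served, unserved

# Hypothetical memory market: 80 units of supply against 120 units of demand.
served, unserved = ration(80, [(10, 50), (8, 40), (5, 30)])
print(served)    # [(10, 50), (8, 30)]
print(unserved)  # 40
```

However high the price goes, 40 of the 120 units of demand simply drop out, which is the Strait of Hormuz point applied to memory.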
## **05** **Three "Hidden Gems"**

**Q:** Speaking of which, Google's investment portfolio is indeed impressive. You invested in SpaceX; I remember a long time ago it was about 10%? And Anthropic, also around 10%. You hold a majority stake in Waymo. Internally there are TPUs and quantum computing; are there other "hidden gems" that people might not know about or might underestimate?

**Pichai:** We have been doing various long-term projects; the more fringe ones looked a bit absurd when they were first announced. For example, space data centers: we are currently in the earliest stages. You just said constraints inspire creativity, and that's exactly the principle at play. From a 20-year long-term perspective, where do you plan to build these data centers? This question is hard, but this is what we are thinking about today, just as in 2010 we started doing Waymo. Quantum computing is also one of them; we are steadily moving forward, and I'm very excited about it.

**Q:** What fields do you think quantum computing will have the biggest impact on? People mainly talk about molecular modeling and cryptography. But some people are developing post-quantum cryptography (new cryptographic techniques that can resist quantum computing attacks), and on the molecular modeling side, deep learning is already very strong; AlphaFold is an example. Will quantum truly be important? If so, where will it have the biggest impact?

**Pichai:** At an abstract level, I think quantum computers are better suited for simulating nature. Because nature itself follows the laws of quantum mechanics, using a quantum system to simulate it will be more direct and efficient. Of course, classical computers plus enough compression algorithms could theoretically do it, but my intuition is that quantum will have more advantages. For example: we still haven't fully understood the "Haber process" in fertilizer production, and there are many complex natural phenomena.
My intuition is that in fields like simulating weather and simulating reality, quantum computing will ultimately win out. Technological history tells us a truth: after you make something usable, people will find all sorts of applications for it that you never thought of at the beginning. I always like to give this example: mobile phones plus GPS later enabled Uber. Who among those making mobile phones back then could have thought of that? So I believe that once quantum computers are truly built, their applications will be so numerous that they will exceed everyone's imagination.

**Q:** Sorry to interrupt; please continue talking about those advanced projects you mentioned.

**Pichai:** The Google DeepMind team is working deeply on robotics. Google actually dabbled in robotics very early, but it was too early then. Looking back now, AI is the puzzle piece that was missing. The Gemini Robotics model is already at a top level in spatial reasoning. What's interesting is that we have turned around and are now partnering with companies like Boston Dynamics and Agile to push forward together. There's also Wing, drone delivery. We are expanding its scale, and in the not-too-distant future, 40 million Americans will be able to use Wing's service; this isn't something many years away but will be realized very soon. These long-term projects are accumulated bit by bit. There's also Isomorphic.

**Q:** Isomorphic is indeed very exciting.

**Pichai:** Yes, we focus on using models to improve every link in drug discovery. Although procedures like phase III clinical trials still follow, with the help of AI we are more confident in moving towards success.

## **06** **Regret Not Investing in Waymo Earlier**

**Q:** How is Google's capital actually allocated? Textbooks say capital allocation is putting money where the returns are highest.
Boeing's example is classic: the internal rate of return (IRR) for defense contracts is 16%, and for new airliners 19%; everyone will choose the latter. But Google's projects simply can't be calculated this way. Give YouTube more money, and once the algorithm is optimized, user dwell time increases and revenue rises. Give Waymo more money and expansion accelerates, but no one knows when it can make money at scale. Invest in an AI research project, and there might not even be a result in five years. The return curves of these three projects are completely different; how do you compare them?

**Pichai:** This is a good question. Ironically, this question comes up more often now than ever before because of TPU allocation. To some extent even Waymo needs TPUs, and compute makes the capital allocation problem particularly prominent. By the way, I especially look forward to AI helping me do this. Once we open up all the data, models can actually handle it; currently, it's stuck on unlocking the data. I think this will become helpful very soon. Looking back, Google has a big advantage: we often make decisions at a very early stage. This has a lot to do with the company's technical DNA. For long-term projects, the early stage is actually easier because not much capital is needed at the beginning. What's truly hard is the long-term continuous investment and the constant assessment of progress in foundational technology. Take quantum computing as an example: how do we judge whether to continue investing? We look at the error rate of logical qubits, watch for when the threshold for stable, large-scale logical qubits can be reached, and see whether the team can break through these technical hurdles. A very important lesson I've learned is: you need to bet deeply on technology early on. From a long-term view, you are actually using intuition to judge the option value and potential market size of a project 5 to 10 years out.
You first assume a very aggressive growth curve and then work backwards: is this decision reasonable after all? The investment in TPUs was done this way; we have been investing steadily. Waymo too: about two or three years ago, the whole world was extremely pessimistic about autonomous driving, and we instead increased investment. Others were retreating; we were doubling down.

**Q:** Back to the capital allocation you mentioned. Google does cut projects; Loon (the balloon network project) was stopped, but you persisted with Waymo for so long and never gave up. What did you see then? Was it a qualitative or quantitative judgment? How do you decide to cut this project and keep that one?

**Pichai:** We do have some quantitative indicators. For example, we look at Waymo's driving system and track how its safety and reliability are improving. This is a long-term curve; you set goals first and then continuously track execution. Our team has always been excellent. In some stages, progress was indeed relatively slow, but you have to believe the team can break through. The more you can make assessments at the level of the underlying technology, the more accurate the decisions will be. At least that's how I do it.

**Q:** I've heard it said that Waymo's early days relied on hand-drawn maps and heuristic rules, and the situations it could handle were very limited. The real breakthrough was the shift to end-to-end deep learning a few years ago, just catching the Transformer wave. If Waymo had only started five years ago, would it be about the same as now? Or is that decade-plus of accumulation actually essential?

**Pichai:** You can look at Waymo as a robot. Logically, those who only started doing robotics in the past three years should progress faster. But Waymo is different; it's a highly integrated system, not like TSMC or SpaceX, which push technological complexity in a single dimension.
For that kind of systems integration, timing and accumulated craftsmanship are critical. That said, end-to-end methods will indeed be an accelerator. **Q:** So continuously nurturing a team is itself a huge advantage: you keep investing, and the moment the technology takes off, it pays for itself. That's very smart. Does that extend to other fields? In robotics, for example, will you build hardware yourselves again, or rely mainly on partners? **Pichai:** We keep an open mind. But Waymo and TPUs taught me one thing: in fields involving safety and regulation, you need a firsthand product feedback loop, and owning first-party hardware eventually becomes very important. ## **07** **Personally Reviewing Compute Allocation Every Week** **Q:** R&D budgets used to go mainly to salaries, with technical costs secondary. Now TPU compute has become the biggest line item. How does Google run this internally? Is there a total TPU budget? Were projects previously budgeted by headcount, and is it now "headcount plus compute"? How do quarterly reviews work? **Pichai:** We have always had a compute budget, but compute is now genuinely, severely constrained. I spend at least an hour every week going through, in great detail, how much compute each project and each team is using, and deciding how to allocate it. This is now the top priority. **Q:** So compute has become a scarce resource, and you want to make sure it is spent where it matters most. **Pichai:** Exactly. **Q:** What about Google Cloud? You need compute yourselves while also selling it to customers. How do you handle that tension? **Pichai:** By planning ahead. The Cloud team does forward-looking planning, and we are determined to keep our promises to customers. Everyone operates under constraints; the Cloud team also always says there isn't enough compute, but planning ahead solves most of the problem.
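The "headcount plus compute" budgeting and weekly review Pichai describes can be sketched as a toy tracker. This is a minimal illustration only: the team names, numbers, and the 90% review threshold are all hypothetical, not Google's actual process.

```python
from dataclasses import dataclass

@dataclass
class TeamBudget:
    """Quarterly allocation for one team: headcount plus a compute quota."""
    name: str
    headcount: int
    tpu_hours_allocated: float
    tpu_hours_used: float = 0.0

    def utilization(self) -> float:
        """Fraction of the compute quota consumed so far."""
        if self.tpu_hours_allocated == 0:
            return 0.0
        return self.tpu_hours_used / self.tpu_hours_allocated

def weekly_review(teams: list[TeamBudget], threshold: float = 0.9) -> list[str]:
    """Flag teams whose compute usage is at or past the threshold,
    so their quota can be revisited in the weekly allocation pass."""
    return [t.name for t in teams if t.utilization() >= threshold]

# Hypothetical numbers for illustration.
teams = [
    TeamBudget("search", headcount=500, tpu_hours_allocated=10_000, tpu_hours_used=9_500),
    TeamBudget("waymo", headcount=200, tpu_hours_allocated=4_000, tpu_hours_used=1_200),
]
print(weekly_review(teams))  # → ['search']
```

The point of the sketch is the shape of the review, not the mechanism: a fixed total quota, per-team consumption tracking, and a periodic pass that surfaces where reallocation decisions are needed.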
**Q:** Speaking of Google Cloud: the MCP integration for GCP (MCP, the Model Context Protocol, lets AI assistants interact with services like Google Cloud programmatically) is very useful; your AI can do almost anything short of changing core permission settings. Google Cloud's biggest pain point used to be its sprawl of features: after logging in you had to create organizations, create projects, and hunt for services, which was a real chore. None of that matters now; you just say "add this capability" and it's done. The AI has read all the API documentation and become a navigation layer. The experience is very good. **Pichai:** AI as an orchestration layer can handle anything you can think of. It's the same inside enterprises: CEOs don't lack data, they lack a way to put the data together. You used to need a big ERP project; now AI is that orchestration layer. **Q:** The more complex the product, the bigger the payoff from AI navigation. Stripe has seen this too, but the effect on GCP must be even more pronounced. **Pichai:** We can still do better, but you're right, the opportunity is huge. **Q:** What interests me about products like OpenClaw is that they give consumers stateful AI. Something like "summarize and send me the news I care about every morning" requires persistent memory, and mainstream AI applications can't do it. Is this feature coming soon? **Pichai:** The direction is certain. Users need to run persistent, long-running tasks in a reliable and safe way, and questions like identity and permissions have to be thought through. But this is the future of AI agents, and bringing this capability to consumers is an exciting frontier we are exploring. **Q:** That's what I wanted to bring up. Dreamer, the company founded by Stripe's former CTO, was just acquired by Meta; they were particularly good at stateful AI. You can build small applications for yourself, and the experience is very smooth.
It genuinely delights people. (**_Note: Stateful AI refers to AI systems that can retain and use historical context, memory, and state across multi-step interactions or complex workflows._**) **Pichai:** The consumer-facing interface will sit on top of a full coding model, plus the right tools and skills, and the ability to run safely and persistently in the cloud. These building blocks are converging. Today only about 0.1% of people live in this future, building things for themselves. Pushing it to the mass market is an exciting frontier. **Q:** The companies I'm involved with, even recently founded ones, have completely changed product development, engineering practice, and even the role of design teams. Is Google rethinking these too? Are workflows changing in a big way? **Pichai:** Think of it as concentric circles. Some teams have already been profoundly transformed; my job is to diffuse that change. Early on, many things were half-baked and couldn't be rolled out, but this year the curve is bending sharply. Google DeepMind and some software engineering teams are already living that agent-driven workflow; the internal tool they use is called Jet Ski, which is actually Antigravity. Just last week we rolled it out to the search team. In large companies, change management is the biggest obstacle to technology diffusion; small companies switch much faster. **Q:** Let me add several problems we've hit in actual AI adoption. First, engineers need time to learn to prompt AI effectively, and every company has its own domain knowledge. Second, AI-generated codebases are hard to share: changes are large in scope and the code moves fast, which makes multi-person collaboration complicated.
Third, beyond engineering, data permissions are a big problem: you want the agent to answer "what's the status of this transaction," and the company has that information, but the permission engine has to be rewritten. Fourth, role definitions are changing too; engineering, product, and design roles may need to merge. In short, model capability has reached the required level, but we are still far from using it fully. What do you think? **Pichai:** The problems you list are being solved one by one by the Gemini enterprise team and the Antigravity team; that is our roadmap. We use the tools internally, hit obstacles, overcome them, and then turn the solutions into products we ship. Identity and access control is a real difficulty, and our security bar is especially high, so we must be cautious. But precisely because of that, when we do solve a problem, what we ship will be more robust. We are working through this fixed-cost stage now. ## **08** **AI Handover Timeline** **Q:** Google produces several formal business forecasts every year. In theory, AI could fully automate this with no human involvement. When do you think Google will first produce a forecast done entirely by AI agents, and in which quarter? **Pichai:** I expect 2027 to be an important turning point. At first someone will still be responsible for checking the output, but it will gradually switch over. In 2027 these transformations will become very visible. **Q:** So beyond engineering, you think non-engineering processes will also truly start to be AI-ified in 2027? **Pichai:** Yes. This is also the startups' advantage: they can recruit AI-native teams, and that playbook is there from day one, whereas we have to retrain and transform. Young companies really do have an edge here; we have to drive the transformation ourselves.
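The stateful-AI idea raised earlier in the interview, an agent that keeps memory across runs so a recurring task like "summarize my news every morning" never repeats itself, can be sketched minimally. Everything here is a toy illustration with hypothetical names; it is not any real product's API.

```python
import json
import tempfile
from pathlib import Path

class StatefulTask:
    """Toy recurring agent task that persists state between runs.

    Each invocation reloads its memory (already-delivered items) from
    disk, so a daily job only surfaces what the user hasn't seen yet.
    """
    def __init__(self, state_path):
        self.path = Path(state_path)
        # Reload memory from the previous run, if any.
        self.seen = set(json.loads(self.path.read_text())) if self.path.exists() else set()

    def run(self, headlines: list[str]) -> list[str]:
        """Return only headlines not delivered in earlier runs, then persist."""
        fresh = [h for h in headlines if h not in self.seen]
        self.seen.update(fresh)
        self.path.write_text(json.dumps(sorted(self.seen)))
        return fresh

# Demo: use a throwaway state file so reruns start fresh.
state_file = Path(tempfile.mkdtemp()) / "news_state.json"
print(StatefulTask(state_file).run(["A", "B"]))  # first run: both headlines are new
print(StatefulTask(state_file).run(["B", "C"]))  # second run: only "C" is new
```

The durable external state is what distinguishes this from a stateless chat call, and it is also why Pichai's caveats about identity and permissions matter: the stored memory outlives any single session and must be protected accordingly.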
**Q:** Is there a small project at Google right now that excites you? **Pichai:** The answer may be surprising: space data centers. We started with a small team of a few people and a very small budget to reach the first milestone. Big ideas, too, have to start small. ### Risk Warning and Disclaimer Markets involve risk, and investment requires caution. This article does not constitute personal investment advice, nor does it take into account the specific investment objectives, financial situation, or needs of individual users. Users should consider whether any opinions, views, or conclusions in this article are appropriate for their specific circumstances. Any investment made on this basis is at your own risk.