---
title: "Broadcom conference call: AI chip revenue will exceed $100 billion by 2027, AI will not disrupt infrastructure software"
type: "News"
locale: "en"
url: "https://longbridge.com/en/news/277876363.md"
description: "Broadcom expects that by 2027, revenue from AI chips alone will far exceed $100 billion, with total shipments approaching 10 gigawatts. Broadcom believes that infrastructure software such as VCF, serving as a permanent abstraction layer between artificial intelligence software and physical chips (silicon), cannot be replaced or substituted. Relying on deep, long-term cooperation with customers, Broadcom has already secured key component capacity for 2026 to 2028, becoming one of the first companies in the industry to lock in capacity for 2028."
datetime: "2026-03-05T04:16:57.000Z"
locales:
  - [zh-CN](https://longbridge.com/zh-CN/news/277876363.md)
  - [en](https://longbridge.com/en/news/277876363.md)
  - [zh-HK](https://longbridge.com/zh-HK/news/277876363.md)
---

> Supported Languages: [简体中文](https://longbridge.com/zh-CN/news/277876363.md) | [繁體中文](https://longbridge.com/zh-HK/news/277876363.md)

# Broadcom conference call: AI chip revenue will exceed $100 billion by 2027, AI will not disrupt infrastructure software

Driven by the doubling of its AI chip business, Broadcom's revenue for the first quarter of fiscal year 2026 reached a record high, and the company now expects to pass the milestone of $100 billion in AI chip revenue in 2027. As the global generative AI race heats up, the top player in underlying computing infrastructure is delivering results far exceeding market expectations. On the subsequent earnings call for the first quarter of fiscal 2026, Broadcom issued bold guidance: "We are now confident that by 2027, AI revenue from our chip business alone will exceed $100 billion."
Overall, Broadcom is turning "AI custom chips + Ethernet networking" into a scalable, repeatable, and long-term locked-in infrastructure business, and has already secured supply through 2028. Here are the key points from the call:

## AI chip revenue > $100 billion in 2027, with 6 long-term strategic customers

**Hock Tan gave striking guidance on the call: in 2027, AI revenue from chips alone (XPU + switch chips + DSP) will exceed $100 billion. Note the scope: chips only, excluding racks and excluding system integration.** When pressed by analysts, he confirmed that expected installed capacity in 2027 will be close to 10 gigawatts. At the industry benchmark of roughly $20 billion of content per gigawatt, 10 gigawatts implies a potential scale of about $200 billion; Broadcom capturing a portion of that makes the > $100 billion figure plausible rather than exaggerated.

Another core change on this call is that the number of customers was explicitly stated as 6 for the first time. Public or inferred customers include Google (TPU), Meta (MTIA), OpenAI, Anthropic, and two other undisclosed LLM platform customers. Among them, Anthropic's demand for TPU computing power is expected to surge past 3 gigawatts by 2027, while OpenAI will also deploy over 1 gigawatt of computing power at scale in the same year.

**The key point is that all six are LLM platform-level companies, all building their own custom XPUs, with multi-generational roadmap collaborations planned over 2–4 years. Management emphasized that this is not a short-term transaction but a multi-generational strategic binding.**

## XPUs may continue to erode GPUs

Hock was very clear in the Q&A: GPUs are general-purpose dense matrix-multiplication architectures, while XPUs can be customized for workloads such as MoE, inference, pre-fill, and decode.
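As a quick sanity check on the gigawatt sizing above, here is a minimal sketch using only the figures cited on the call (10 gigawatts of installed capacity and the ~$20 billion-per-gigawatt industry approximation; both are the article's numbers, not independent estimates):

```python
# Back-of-envelope sizing from the call's figures.
GIGAWATTS_2027 = 10        # expected installed capacity in 2027, per Hock Tan
VALUE_PER_GW_USD_B = 20    # ~$20B of chip content per gigawatt (industry approximation)

# Implied potential market for AI chip content in 2027.
potential_market_usd_b = GIGAWATTS_2027 * VALUE_PER_GW_USD_B
print(potential_market_usd_b)  # 200 -> ~$200B potential scale

# Broadcom guiding to > $100B of chip revenue implies capturing over half of that pool.
broadcom_implied_share = 100 / potential_market_usd_b
print(broadcom_implied_share)  # 0.5
```

This is why the article calls the > $100 billion guidance "not exaggerated": it requires roughly half of the implied content pool, consistent with Broadcom's stated six-customer position.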
As models evolve, customized XPUs will ultimately become the preferred choice for customers, because they allow the architecture to be tailored to specific workloads, delivering lower cost and power consumption. Broadcom has observed that technologically mature customers are moving toward developing two dedicated chips each year: one specifically for model training and another specifically for productizing inference. This means the demand for custom chips is not a one-time replacement of GPUs but a long-term, dual-track expansion.

## The Network is a Severely Underestimated Growth Engine

In the third question of the Q&A, management put heavy emphasis on networking. The network accounted for 33% of AI revenue in Q1 and is expected to reach 40% in Q2, with a long-term forecast range of 33%–40%. The growth logic of the network comes from two directions:

> In terms of scale-out: Ethernet is the preferred solution. Broadcom's first-to-market 100 Tbps Tomahawk 6 switch faces huge market demand, and the company plans to launch the performance-doubling Tomahawk 7 in 2027.
>
> In terms of scale-up: within the rack's cluster domain, direct attach copper (DAC) should be used for as long as possible to connect XPUs or GPUs, because copper has the lowest latency, lowest power consumption, and a cost advantage over optical solutions. Broadcom's current solutions drive 200G over copper, with 400G planned for 2028.

This means Broadcom benefits simultaneously along three dimensions: switch chips, DSPs, and Ethernet scale-up.

## Supply Chain Capacity Locked Until 2028

Relying on deep, long-term cooperation with customers, **Broadcom has locked in key component capacity (including cutting-edge wafers, high-bandwidth memory, substrates, etc.)
for 2026 to 2028 in advance, becoming one of the first companies in the industry to secure capacity through 2028.** When asked how they could extend supply judgments to 2028, Tan said, "We mastered the locking technology for photomasks early on... we are definitely among the first to master this technology." He attributed this to "early expectations" and "very excellent partners." Charlie Kawwas, president of the semiconductor solutions division, added that customers share their expectations for the next 2–4 years, prompting the company to lock in capacity and technology investments in advance. When analysts asked, "Given the current supply situation, can you achieve growth in 2028?" Kawwas replied, "Yes."

On the inventory side, CFO Kirsten Spears disclosed, "Due to our continued procurement of components to meet strong AI demand, inventory at the end of the first quarter was $3 billion." Inventory turnover days rose to 68 (up from 58 in the previous quarter), which she attributed mainly to the expectation that "the AI semiconductor business will accelerate growth."

## Infrastructure Software Will Not Be Replaced by AI, but Will Benefit Instead

In addition to hardware, Broadcom also emphasized the "certainty" of its software business during the call.
Tan stated, "Our infrastructure software will not be impacted by AI." He described VMware Cloud Foundation (VCF) as the "core software layer" of the data center and emphasized its long-term value: **"As a permanent abstraction layer between AI software and physical chips (silicon), VCF cannot be replaced or substituted."**

The company disclosed that VMware revenue grew 13% year-over-year in the first fiscal quarter, with infrastructure software "order volume remaining strong, with total contract value exceeding $9.2 billion in the first quarter," and annual recurring revenue (ARR) growing 19% year-over-year. Tan further stated, "We believe that the growth of generative AI and agentic AI will increase demand for VMware, rather than decrease it."

**Below is the transcript of the Q1 fiscal 2026 earnings call**

> Operator:
>
> Welcome to Broadcom's Q1 fiscal 2026 financial performance conference call. I will now turn the call over to Broadcom's Head of Investor Relations, Mr. Ji Yoo, for the opening remarks and introductions.
>
> Head of Investor Relations Ji Yoo:
>
> Thank you, operator, and good afternoon, everyone. Joining today's call are: President and CEO Hock Tan, CFO Kirsten Spears, President of the Semiconductor Solutions Division Charlie Kawwas, and President of the Infrastructure Software Division Ram Velaga. Broadcom released a press release and financial statements after the market close, detailing our financial performance for Q1 of fiscal 2026. If you have not received it, you can find the relevant information in the "Investor Relations" section of Broadcom's website, broadcom.com. This conference call is being webcast live, and a replay will be available for one year through the "Investor Relations" section of Broadcom's website.
> During the prepared remarks, Hock and Kirsten will provide detailed insights into our Q1 fiscal 2026 performance, our outlook for Q2 fiscal 2026, and comments on the current business environment. After the remarks, we will take your questions. Please refer to the press release we issued today and our recent filings with the U.S. Securities and Exchange Commission for the specific risk factors that could cause our actual results to differ materially from the forward-looking statements made on this call. In addition to GAAP reporting, Broadcom also reports certain financial metrics on a non-GAAP basis. Reconciliation tables for GAAP and non-GAAP metrics are included in the attachment to today's press release. The remarks on today's call will primarily focus on our non-GAAP financial performance. Now, I will turn the call over to Hock.
>
> President and CEO Hock E. Tan:
>
> Thank you, Ji, and thank you all for joining today. In Q1 of fiscal 2026, our total revenue reached a record $19.3 billion, up 29% year-over-year and exceeding expectations, primarily driven by stronger-than-expected growth in our AI semiconductor business. Strong revenue growth translated into exceptional profitability, with Q1 adjusted EBITDA reaching a record $13.1 billion, or 68% of revenue. These figures show that our scale advantages continue to drive significant operating leverage.
>
> We expect this growth momentum to accelerate as our custom AI XPUs enter the next phase of deployment with five customers. Looking ahead to Q2 of fiscal 2026, we anticipate consolidated revenue of approximately $22 billion, a 47% year-over-year increase.
>
> Now let me cover our semiconductor business in more detail. In the first quarter, revenue reached a new record of $12.5 billion, a year-over-year increase of 52%.
> This strong growth was driven mainly by AI semiconductors, where revenue grew 106% year-over-year to $8.4 billion, far exceeding our expectations. **In the second quarter, this momentum will strengthen further: we expect semiconductor revenue to reach $14.8 billion, a year-over-year increase of 76%.** The main driver of this growth is AI revenue, which is expected to grow approximately 140% year-over-year to $10.7 billion.
>
> Our custom accelerator business grew 140% year-over-year in the first quarter, and this momentum has continued into the second quarter. The ramp of custom AI accelerators across all five of our customers is progressing well. For Google, strong demand for the seventh-generation Ironwood TPU keeps us on a growth trajectory into 2026. **We expect demand for the next-generation TPU to be even stronger in 2027 and beyond.**
>
> In terms of installed capacity, we are off to a very good start in 2026, with TPU computing capacity expected to reach 1 gigawatt. By 2027, this demand is expected to exceed 3 gigawatts. Beyond that, I want to emphasize that our XPU product line goes far beyond the TPU. Contrary to recent analyst reports, Meta's custom accelerator MTIA roadmap is still progressing steadily, and we have now begun shipping. In fact, for the next-generation XPU, we will expand its computing capacity to several gigawatts in 2027 and beyond. As for our fourth and fifth customers, we expect very strong shipments this year and anticipate they will more than double in 2027. We now have a sixth customer: we expect OpenAI to deploy its first-generation XPU at scale in 2027, with computing capacity exceeding 1 gigawatt.
>
> I would like to take this opportunity to emphasize that our collaboration with these six customers on AI XPU development is deep, strategic, and long-term.
> We bring unparalleled technology in chip design, process technology, advanced packaging, and networking to each partnership, helping each customer achieve optimal performance for their differentiated LLM workloads. We have extensive experience rapidly delivering these XPUs at very high yields and in volume production. Beyond our technology advantages, we also offer multi-year supply agreements to support customers in scaling their computing infrastructure deployments.
>
> Despite current constraints in cutting-edge wafers, high-bandwidth memory, and substrate capacity, we are still able to secure supply, and thereby the sustainability of these partnerships. We have fully secured capacity for these components from 2026 through 2028. In line with the strong outlook for our XPUs, demand for AI networking is also growing robustly. In the first quarter, AI networking revenue grew 60% year-over-year, accounting for one-third of total AI revenue. We expect the AI networking business to accelerate in the second quarter, reaching 40% of total AI revenue. Our share of the networking market is increasing significantly.
>
> Let me explain. For scale-out, our first-to-market Tomahawk 6 switch (with 100 Tbps of throughput) and our 200G switch series are rapidly meeting the demands of hyperscalers this year, whether they deploy XPUs or GPUs. By 2027, with the launch of the next-generation Tomahawk 7, which doubles performance, this lead will widen further. Meanwhile, for scale-up, as cluster sizes and customer counts grow, our 200G solutions let these customers keep using direct attach copper. As we upgrade our solutions to 400G in 2028, our XPU customers are likely to continue using direct attach copper.
> This is a huge advantage because optical solutions are more expensive and consume more power.
>
> Given these factors, we have significantly raised our outlook for 2027. **In fact, we are now confident that AI revenue from our chip business alone will exceed $100 billion in 2027.** We have also secured the supply chain necessary to achieve this goal.
>
> Next, our non-AI semiconductor business. First-quarter revenue was $4.1 billion, flat year-over-year and in line with expectations. Revenue from enterprise networking, broadband, and server storage grew year-over-year but was offset by a seasonal decline in wireless. **We expect second-quarter non-AI semiconductor revenue to be approximately $4.1 billion, a year-over-year increase of 4%.**
>
> Now let me discuss our infrastructure software business. First-quarter infrastructure software revenue was $6.8 billion, in line with expectations and up 1% year-over-year. We expect second-quarter infrastructure software revenue of approximately $7.2 billion, up 9% year-over-year. VMware revenue grew 13% year-over-year. Order volume remains strong, with total contract value exceeding $9.2 billion in the first quarter, and annual recurring revenue (ARR) grew 19% year-over-year. I want to emphasize that the growth of our infrastructure software business reflects our commitment to and investment in infrastructure. Moreover, our infrastructure software will not be impacted by artificial intelligence. **In fact, VMware Cloud Foundation (VCF) is the core software layer of the data center, integrating CPU, GPU, storage, and networking into a unified, high-performance private cloud environment. As a permanent abstraction layer between AI software and physical chips (silicon), VCF is irreplaceable.** It enables enterprises to efficiently scale complex generative AI workloads with an agility that hardware alone cannot achieve.
> We believe that the growth of generative AI and agentic AI will increase demand for VMware, rather than decrease it. In summary, for the second quarter of fiscal 2026 we expect consolidated revenue growth to accelerate to 47% year-over-year, reaching approximately $22 billion, and adjusted EBITDA of about 68% of revenue. With that, let me hand the call over to Kirsten.
>
> Chief Financial Officer and Chief Accounting Officer Kirsten Spears:
>
> Thank you, Hock. Now let me detail our first-quarter financial performance. This quarter, consolidated revenue reached a record $19.3 billion, up 29% year-over-year. Gross margin was 77%. Consolidated operating expenses were $2 billion, of which R&D was $1.5 billion. First-quarter operating profit reached a record $12.8 billion, up 31% year-over-year. Thanks to favorable operating leverage, operating margin expanded 50 basis points year-over-year to 66.4%. Adjusted EBITDA was $13.1 billion, or 68% of revenue, above our prior expectation of 67%.
>
> Now let's look at the two business segments, starting with semiconductors. The Semiconductor Solutions segment delivered record revenue of $12.5 billion, up 52% year-over-year, primarily driven by strong AI growth. Semiconductor revenue accounted for 65% of total revenue this quarter. Semiconductor Solutions gross margin increased 30 basis points year-over-year to approximately 68%. Operating expenses of $1.1 billion reflect increased investment in leading-edge AI semiconductor R&D and represented 8% of revenue. Semiconductor operating margin was 60%, up 260 basis points year-over-year, demonstrating strong operating leverage.
>
> Next, the infrastructure software segment.
> Infrastructure software revenue was $6.8 billion, up 1% year-over-year, accounting for 35% of total revenue. Infrastructure software gross margin was 93% this quarter, with operating expenses of $979 million. First-quarter software operating margin increased 190 basis points year-over-year to 78%.
>
> Next, cash flow. Free cash flow this quarter was $8 billion, or 41% of revenue. We spent $250 million on capital expenditures. Due to our ongoing procurement of components to meet strong AI demand, inventory at the end of the first quarter was $3 billion. **First-quarter inventory turnover days were 68, compared with 58 in the fourth quarter, primarily because we expect the AI semiconductor business to accelerate.**
>
> Turning to capital allocation. In the first quarter, we paid shareholders $3.1 billion in cash dividends, or $0.65 per share of common stock. During the quarter, we repurchased $7.8 billion (approximately 23 million shares) of common stock. In total, we returned $10.9 billion to shareholders in the first quarter through dividends and buybacks. For the second quarter, we expect the non-GAAP diluted share count to be approximately 4.94 billion shares, excluding any potential repurchases. We ended the first quarter with $14.2 billion in cash. Today, we announced that the board has approved an additional $10 billion for the stock repurchase program, effective through the end of 2026.
>
> **Next, the outlook.
> We expect consolidated revenue for the second quarter to be $22 billion, a year-over-year increase of 47%.** Of this, semiconductor revenue is expected to be approximately $14.8 billion, up 76% year-over-year. We expect second-quarter AI semiconductor revenue of $10.7 billion, up approximately 140% year-over-year. Infrastructure software revenue is expected to be around $7.2 billion, up 9% year-over-year. To help with your models, we expect second-quarter consolidated gross margin to remain flat sequentially at 77%, and adjusted EBITDA to be approximately 68% of revenue.
>
> We expect the non-GAAP tax rate for the second quarter of fiscal 2026 to be approximately 16.5%, reflecting the global minimum tax and changes in the geographic mix of revenue relative to fiscal 2025. That concludes my remarks. Operator, please begin the Q&A session.
>
> Here is the analyst Q&A:
>
> Operator:
>
> Thank you. (Operator instructions) Our first question comes from Brian Curtis of Jefferies. Your line is open.
>
> Q1 Analyst Brian Curtis:
>
> Hello, good afternoon, and thank you for taking my question. I have one question to start and then a clarification. Hock, regarding the revenue exceeding $100 billion, I assume you are referring to AI chips. I just want to confirm how you split that between ASIC chips and networking chips, and how the revenue is reflected in those two areas. For my other question: I believe the biggest debate around your company right now is that the AI business nearly doubled this quarter, and I think this is the main trend driving growth in cloud capital expenditures this year. I'm curious about your thoughts; given your outlook for 2027, I believe your company should perform well.
> I also want to understand: investors generally believe hyperscale cloud providers will need this year, next year, or even the year after to achieve a return on these investments. What are your thoughts on that, and how do you factor that pessimism into your outlook?
>
> President and CEO Hock E. Tan:
>
> What we have seen over the past few months, and this trend continues, is not primarily about hyperscale data centers. Our customer base is limited to a few companies, some of which are hyperscalers and some are not, but they all have one thing in common: they are creating large language models (LLMs), productizing them, and building platforms, whether for enterprise code assistance, agentic AI, or consumer subscription services. Among these companies, a few prospects, as well as many of our current customers, are creating these general platforms, whether generative AI or agentic AI. These are our customers, and we see their demand for computing power increasing significantly. Training is something they continuously need, but what really surprises us is the strong demand for inference capacity to productize and monetize their latest generation of models. This inference demand is driving a significant increase in computing power, which is good for us, because these five or six customers are building their own custom accelerators, and not only that, they are also designing the network cluster architecture around those accelerators. So, as we have heard over the past six months, demand will continue to grow.
>
> **Now, Brian, to clarify your first part: when I say we have reason to believe our revenue will significantly exceed $100 billion in 2027, I emphasize that this revenue is almost entirely chips,** whether XPUs, switch chips, or DSPs. We are talking about silicon.
>
> Analyst Brian Curtis: Thank you very much.
> Operator: Please hold for a moment while we move to the next question. This question comes from Mr. Harlan Sur of JP Morgan. Your line is open.
>
> Q2 Analyst Harlan Sur:
>
> Good afternoon. Thank you for taking my question, and congratulations to the team on the outstanding performance. Hock, there has been a lot of news recently about cloud service providers (CSPs) and hyperscalers working on in-house XPU and TPU design efforts, right? We call this COT, or customer-owned tooling. This is not new in the ASIC space, right? I believe the Broadcom team has lived through this COT competitive dynamic over the past 30 years. You have always been a leader in the ASIC industry, and there have been few successful COT programs.
>
> Now, in AI, some COT programs have launched, but their performance appears to be at best half that of your current solutions, with chip design complexity, packaging complexity, and IP complexity similarly far behind. So let me ask two questions. First, given your forecast for next year, do you think COT programs can take any meaningful TPU/XPU share away from Broadcom? Second, from the perspective of performance, complexity, and IP, Broadcom's TPU/XPU programs are 12 to 18 months ahead of any COT program; how will the Broadcom team widen this lead further?
>
> President and CEO Hock E. Tan:
>
> Well, that's a great question. As I mentioned in my opening remarks, when any hyperscaler or LLM developer tries to build chips completely independently through so-called customer-owned tooling (COT), they face enormous challenges. One of them is technology: specifically the technology involved in producing silicon, especially the XPU used for compute, and the technology required to optimize and run LLM training and inference workloads.
> The technologies involved operate at different levels. You need the best chip design teams. You need cutting-edge, truly advanced silicon, very sophisticated packaging technology, and, equally important, you need to know how to network these chips into clusters. We have been in this business for over 20 years. In silicon, and especially in this specific area of generative AI, if you as an LLM vendor want to develop chips independently, you cannot settle for "good enough" chips. You need the best chips on the market, because you are competing with other LLM vendors. Most importantly, you are also competing with NVIDIA, who is not letting its guard down at all; they keep launching better-performing chips with every generation.
>
> So, as an LLM company, if you want to establish your platform globally, you must build chips superior to what exists, competing not only with NVIDIA but with every other platform vendor. For that, you need the best technology, intellectual property, and execution in the industry as your silicon partner, and that is what customers see in us first. Humbly speaking, **we are far ahead, and for many years to come there will be no competitor in the COT field. Competition will eventually come, but we have a long runway, because this race is still being run.**
>
> Another point that is unique to us: when you build silicon, you must get it into volume production and to market as fast as possible. We have extensive experience here. Anyone can design a well-performing chip in the lab. But can you quickly produce 100,000 of them at an acceptable yield? We rarely see any company in the world that can do this. Charlie?
>
> Analyst Harlan Sur: Thank you. Thank you, Charlie.
>
> Operator:
>
> Please hold for a moment while we move to the next question.
> This question comes from Ross Seymore of Deutsche Bank. Your line is open.
>
> Q3 Analyst Ross Seymore:
>
> Hello, thank you for taking my question. Hock, in your remarks you focused more than ever on your differentiated advantages in networking, so I want to ask a short-term and a long-term question. Short term: what factors are driving the networking business to 40% of AI revenue? Long term: will that percentage, within the more than $100 billion of revenue, change? What kind of lead do you expect to maintain in this business, in scale-out or scale-up? And does your lead here help your XPU business, since you can co-optimize compute and networking?
>
> President and CEO Hock E. Tan:
>
> Okay, let's take the first question. Ross, it is quite involved. Yes, in networking, especially with the next generation of GPUs and XPUs about to launch, our bandwidth has reached 200G per lane and beyond. The Tomahawk 6, which we launched roughly nine months ago, is currently the only 100 Tbps switch on the market. Our customers and hyperscale operators want their clusters on the best networks with the highest bandwidth.
>
> So we see tremendous demand here; there is only one 100 Tbps switch on the market today, and that greatly drives demand. On top of that, with our 1.6T DSP for scale-out optical transceivers, we are again the only vendor in the market shipping a DSP at that bandwidth. I believe these factors together are driving growth in our networking components that even outpaces the growth of our XPUs, which is already quite impressive. That is what you see now.
> But I believe that at some point these demands will stabilize, although we will not slow down, because, as I said, next year, in 2027, we will launch the next-generation Tomahawk 7, with twice the performance of the current part, and we are likely to be among the first to market with it, which will sustain growth momentum. Finally, to answer your question directly: yes, **I expect that in any given quarter, AI networking will account for between 33% and 40% of our total AI revenue.**
>
> Analyst Ross Seymore: Great, thank you, Hock.
>
> President and CEO Hock E. Tan: Thank you.
>
> Operator:
>
> Please hold on for a moment while we move to the next question. This question comes from CJ Muse of Cantor Fitzgerald. Your line is open.
>
> Q4 Analyst CJ Muse:
>
> Good afternoon. Thank you for taking my question. I'm curious about your thoughts on disaggregating pre-fill and decode within the GPU ecosystem and how that might affect demand for custom chips. Do you foresee any change in the relative mix between GPUs and custom chips?
>
> President and CEO Hock E. Tan:
>
> CJ, I don't quite follow your question; could you clarify what you mean by "disaggregating"?
>
> Analyst CJ Muse:
>
> Sure. Pushing pre-fill workloads to parts like CPX and handling decode on separate compute; you know, **whether this disaggregated environment puts any pressure on demand for custom solutions versus adopting a full GPU stack.**
>
> President and CEO Hock E. Tan:
>
> Okay, I understand what you mean. This disaggregation approach, in a way, what you are really asking is how the architecture of AI accelerators, whether GPU or XPU, evolves as workloads evolve. And this is precisely what we are very focused on. General-purpose GPUs can serve every demand, but they can only go so far.
> They can still keep working, because you can still run different workloads, such as mixture-of-experts models, even when you want those expert models to exploit sparsity to be very efficient (you've heard this term). But GPUs are designed for dense matrix multiplication.
>
> So, while it can be done with software kernels, it is not as efficient as implementing it in the chip, and XPUs are designed to deliver higher performance on mixture-of-experts workloads. The same goes for inference. Ultimately, you will find XPU designs becoming increasingly customized to the specific workloads of our LLM customers, and diverging from traditional standard GPU designs, which is why we have said before that the XPU will ultimately be the better choice: it can be flexibly designed for specific workloads, such as one chip for training and one for inference.
>
> **As you said, one may be better at pre-fill, while another may excel at post-training, reinforcement learning, or test-time scaling. You can tune the XPU, or more precisely, tailor it, for the specific LLM workload you want. We have seen this trend. We have observed it across all five of our clients.**
>
> Operator:
>
> Please hold on for a moment as we move to the next question. This question comes from Mr. Timothy Arcuri of UBS. Your line is open.
>
> Q5 Analyst Timothy Arcuri:
>
> Thank you very much. I have a question about gross margin movement once these rack systems start shipping. I mean, it's clear that shipping racks will pull down overall gross margin, but could you provide some benchmarks? The gross margin for those products seems to be between 45% and 50%. So I'm wondering, as the racks start to ship, should we expect gross margin to decline by about 500 basis points?
Also, Hock, is there a lower limit for the gross margin below which you wouldn't continue to produce more racks? Thank you. > > President and CEO Hock E. Tan: > > I'm afraid you are mistaken. Our gross margin is indeed in line with the numbers Kirsten reported. **Gross-margin fluctuations and AI product price increases will not affect us. Our yields and costs are well controlled, and margins in the AI business will be fundamentally consistent with the margins in our semiconductor business.** Kirsten? > > Chief Financial Officer and Chief Accounting Officer Kirsten Spears: I agree with that perspective. I think, on further examination, the impact on our overall business structure is actually minimal compared to the comments I made last quarter. So I'm not concerned. > > Analyst Timothy Arcuri: Okay, thank you very much. > > Operator: > > Please hold for a moment as we move to the next question. This question will be posed by Stacy Rasgon from Bernstein. Your line is open. > > Q6 Analyst Stacy Rasgon: > > Hello everyone. Thank you for taking my question. I'm not sure whether this question should go to Hock or Kirsten, but I'd like to dig deeper into the projects exceeding $100 billion next year. I'm trying to calculate the installed capacity in gigawatts. I count about eight or nine: Anthropic has three, OpenAI one, so that's four. You mentioned that Meta has several, so at least two, including Tomahawk 6. I estimate that Google's scale should be larger than Meta's, so at least three, adding up to nine, and then you have some others. I thought the content value per gigawatt is around $20 billion. What I want to ask is whether my calculation of the installed capacity in gigawatts you plan to deliver by 2027 is correct? Additionally, as delivery volumes increase, how should we think about your content value per gigawatt? It may ultimately far exceed $100 billion. > > President and CEO Hock E.
Tan: > > Your point is very interesting, and I must remind you: you are correct that it should be measured in gigawatts, which is the right way to measure it, rather than in dollars, because our chips are sold by the gigawatt. So you need to understand that the price per gigawatt of our chips will vary depending on the LLM customer (there are now six customers, sorry, not five, but six). **The price per gigawatt of chips can differ significantly at times.** There will indeed be differences. But you are right, it is not far off from the dollar figure you mentioned. If you look at gigawatts in 2027, we expect to be close to 10 gigawatts. > > Analyst Stacy Rasgon: Understood, that's very helpful, thank you. > > Operator: Next, we will invite Mr. Ben Reitzes from Melius Research to ask a question. Your line is now open. > > Q7 Analyst Ben Reitzes: > > Hi, thank you. It's great to talk to you. I want to ask about your analysis of the outlook for the four major component suppliers through 2028; how did you approach that? You might be the first to extend the forecast to 2028. Secondly, given the remarkable growth of your AI business in 2027, based on your analysis, do you think the supply outlook for 2028 is sufficient to support further growth? Thank you very much. > > President and CEO Hock E. Tan: > > The best answer I can give is yes, you are right. We anticipated this rapid growth. While no one could have predicted the current growth rate, we did anticipate a significant portion of it, or at least that it would continue for more than the next six months. We locked in mask capacity early on. If you have heard about the infamous mask lock-ins we mentioned before, we were definitely among the first to do so. We have locked in the masks. We have made progress in other areas with excellent partners, as we mentioned before.
So, to your question, the answer is: this is partly due to our early expectations and the fact that we have very good partners in these key component areas. Besides "yes," what else can I say? Charlie, do you have anything to add? > > President of the Semiconductor Solutions Group Charlie Kawwas: > > Yes, I might just add a few simple points. I think you explained that part very thoroughly. Another very important point, as you mentioned, is that we customize chips for six customers. We have established very deep, long-term strategic partnerships with them. Because of this customization, they share with us their specific expectations for at least the next two to three years, and sometimes even four years. That is why we are taking steps to secure all the elements that Hock mentioned. And when we secure these elements, we need to invest with these partners, **sometimes not only to develop larger capacity but also to develop the right technologies and capacity.** So we must secure our partnerships for the coming years. And, you are right, we might be the first company to secure this through 2028 or even longer. > > Analyst Ben Reitzes: > > **Given the current supply situation, can you achieve growth in 2028? Sorry for springing that question.** > > **President of the Semiconductor Solutions Group Charlie Kawwas: Yes.** > > Analyst Ben Reitzes: Thank you. > > Operator: Thank you. Next, we will have Mr. Vivek Arya from Bank of America Securities asking a question. Your line is open. > > Q8 Analyst Vivek Arya: > > Thank you for answering my question. Hock, I first want to clarify the projects you are undertaking this year at Anthropic, such as the $20 billion investment to build 1 gigawatt of installed capacity: how much of that is chips and how much is rack equipment? What I want to understand is, the $100 billion chip figure you mentioned, does it refer to chips alone or to rack equipment as well?
Because the scale of the rack project alone is expected to triple next year. > > So my question is: your AI business is transitioning from an exclusive partnership with one large customer to collaborating with multiple customers, who in turn use multiple suppliers. Given that these customers are spread across many cloud service providers, and the collaboration model is very fragmented, how can you clearly understand and confidently predict your market-share growth among these customers? What measures have you taken to ensure you understand this dispersed customer base and capture the appropriate share? > > President and CEO Hock E. Tan: > > Vivek, first of all, as Charlie pointed out succinctly, we have very few customers; specifically, **only six. In terms of our current business volume and revenue, we have only six customers, and previously there were even fewer.** Secondly, you must also understand that, given how much each customer spends and how important the business they are building is, they take these investments very seriously. That is why Meta launched MTIA, its custom AI accelerator program. > > For them, just as for every customer I have in this field, it is a strategic commitment, not an option. **For them, it is strategic in the long term, short term, and medium term; an extremely important strategy. They will not stand still, and each of them is very clear about where they want to position these custom chips in the LLM development trajectory and how to develop inference technology to productize their LLMs.** This part, we are very clear about. As for GPU, cloud, and cloud-based ultra-high-speed computing business, those are transactional operations and options. So, as you pointed out, this can indeed look confusing. > > Trust me, neither we nor our customers operate that way.
They are very strategic, have clear goals, and they know exactly what they want to build and how much capacity they want to add each year. The only thing they consider is: can it be completed faster? Other than that, everything follows the established roadmap, with strong strategic focus. Any other approach you see is purely opportunistic for them, just to increase optionality. So, this point is very clear. > > Analyst Vivek Arya: On the clarification part, what is the split between Anthropic's racks and chips? Thank you. > > President and CEO Hock E. Tan: > > I don't really want to answer that question, but we are doing well. As Kirsten said, we have ample funding and profits. > > Analyst Vivek Arya: Thank you. > > Operator: > > Thank you. The next question comes from Tom O'Malley of Barclays. Your line is open. > > Q9 Analyst Tom O'Malley: > > Hello everyone, thank you for taking my question. I have one question for Hock and one for Charlie. Hock, I know you are very particular about the leading edge, and you mentioned that customers require direct-attach copper cables at speeds reaching 400 Gbps. Is there a specific reason you emphasize this, especially since you are a pioneer in the CPO field? Charlie, as your customer base expands, I assume the customers you work with will be using scale-up Ethernet. Can you talk about scale-up protocols and your views on the future development of Ethernet? Thank you. > > President and CEO Hock E. Tan: > > Okay. I just want to emphasize our advantages in networking; our technology is indeed very unique and can help our customers, including those using GPUs (not just XPUs). That is to say, if you are running and trying to create LLMs and build your own AI data center, designing the architecture, you definitely want to build larger domains or clusters and connect GPUs or XPUs directly to one another as much as possible. The best way to achieve this is with direct-attach copper cables.
This is the lowest-latency, lowest-power, and lowest-cost method. Therefore, you should continue to use this method as much as possible, especially in vertical scaling (scale-up). For horizontal scaling (scale-out), which goes beyond the reach of copper, we use fiber optics. That's fine. But what I'm talking about is vertical scaling within the rack cluster domain; there you really should use direct-attach copper as much as possible. We are still based on Broadcom's technology, especially for XPU-to-XPU and even GPU-to-GPU connections. We can achieve this with copper cables, and we can increase the transmission speed from 100G to 200G, even 400G. We have now achieved 400G transmission, and the reach of copper cables is sufficient. What I want to say is that you don't need to rush toward CPO (co-packaged optics); even though we are pioneers of CPO, it will appear at the right time, not this year, maybe not next year, but one day it will come. Do you understand? > > President of the Semiconductor Solutions Group, Charlie Kawwas: > > Yes. That's right, Hock made a good point. Regarding Ethernet, with the rise of cloud computing, Ethernet has become the de facto standard for all cloud platforms over the past two decades. Looking back at the birth of backend networks, as Hock explained, there was a fierce debate two years ago about which protocol should be used to achieve the required latency and scalability. At that time, 24 months ago, the industry was not clear about this. But we were very clear about what the answer should be. Moreover, through our deep collaboration with partners, they have made it clear to us and to the entire industry that whether it's GPU or XPU, Ethernet is the preferred solution for horizontal scaling. Indeed, today everyone is talking about using Ethernet for scale-out.
Now, when it comes to vertical scaling, just as with horizontal scaling three or four years ago, the question is: what is the correct answer now? The answer we keep hearing and seeing is: Ethernet. Just last year, we announced with several hyperscale data center operators and many peers in the semiconductor industry that Ethernet scale-up is the right choice. We believe this will be the direction of future development. Time will prove everything, but many of the XPU designs we are working on require scale-up through Ethernet, and we are happy to support that. > > Analyst Tom O'Malley: Thank you both. > > Operator: > > Thank you. Next, we will have a question from Mr. Jim Schneider of Goldman Sachs. Your line is open. > > Q10 Analyst James Schneider: > > Good afternoon, thank you for taking my question. Hock, it's great to hear you talk about the progress of the other fully custom XPU projects aside from TPU. Looking ahead to next year, can we assume these projects are primarily aimed at inference applications? Could you qualitatively discuss the performance or cost advantages relative to GPUs, and how these advantages help customers commit at such large scale? Thank you. > > President and CEO Hock E. Tan: > > Thank you. Most of our customers start with inference because it is often the simplest entry point. This is not for any other reason than that the computational load for inference is much smaller. Moreover, the question is: if tasks can be completed more efficiently using custom inference chips (XPUs), is there still a need for general-purpose, large-scale dense matrix multiplication GPUs? **The XPU performs at least as well, at lower cost and power consumption.** And that is precisely why we find that these customers initially choose to start with XPU. > > But they are now in the training phase, and many of our XPUs are being used for both training and inference.
By the way, they are interchangeable, just as GPUs can be used not only for training (which they may be better suited for) but also for inference. What we see is that our XPUs are being used for both purposes simultaneously. And this is happening. But we will also soon see that customers who are already quite mature in implementing full XPU will start to develop two chips each year, one for training and one for inference, to achieve specialization. Why? Because we clearly see that for these LLM makers, training is aimed at pushing their LLM to a higher level of intelligence. > > Great, you have an excellent LLM, arguably the most advanced. Now you need to productize it, which means building out inference capacity. But you cannot wait until you conclude your model is the best, because if you only start building inference capacity then, it will take at least a year, and during that time others may have developed better LLMs than yours. Therefore, while you train to create higher levels of intelligence in your LLM, you must also invest in inference, including chips and capacity. As we see these six customers maturing in their LLM technology and moving toward better LLMs, our market prospects are becoming increasingly bright. Yes, this is the trend we have observed. Although not all six customers are like this, we see that most of them are moving in this direction. > > Operator: > > Thank you. Please hold for a moment as we move to the next question. This question will be asked by Joshua Buchalter from TD Cowen. Your line is now open. > > Q11 Analyst Joshua Buchalter: > > Hello everyone. Thank you for answering my questions, and congratulations on your results. I really appreciate your detailed explanation of the deployment expectations for specific customers.
I hope you can talk about any changes in project visibility over the past one or two quarters that have given you the confidence to provide more detail. Additionally, you mentioned that OpenAI's installed capacity will exceed 1 gigawatt by 2027. Given that the agreement covers 10 gigawatts of installed capacity before 2029, I guess this means there will be significant growth in 2028. Is my understanding correct? Has this always been part of your plan? Thank you. > > President and CEO Hock E. Tan: > > Yes. Well, that's right. As everyone can see, and as you all know, we are currently in a race for generative artificial intelligence. Perhaps I shouldn't use the word "race"; let's call it competition among several makers. I mean, it is indeed a competition. Each maker is striving to create larger LLMs that are better and more suitable for specific purposes, whether enterprise-level, consumer-level, or search, and they are continuously improving. All of this requires not only training (which is crucial for the continuous improvement of LLM models) but also inference to productize and monetize LLMs. To reiterate, we have been collaborating with some of these makers for over two years. As they become increasingly confident that the XPU we are developing together can achieve their goals, our influence grows. As they bring up the XPU under development along with the software and algorithms it requires, they become more confident that this XPU is exactly what they need, and the situation will only get better. As Charlie said, we also have a clearer view of the progress because, after all, we have only six core AI customers. And as I mentioned, these six customers view XPU and artificial intelligence in a very strategic way. They are not planning one generation at a time, but over the next several years, spanning multiple generations.
Despite the noise from the outside world about existing products, their thinking is very long-term: they are focused on how to deploy what we co-develop, how to apply it to build the higher-quality LLMs they want to create, and, more importantly, how to achieve profitability. Therefore, we are part of their strategic roadmap. We are not just one option among many, like "Oh, should I use a GPU? Should I use the cloud? Because I need to train for six months." No, it is far more than that. Their investment is long-term, which allows us to participate in their long-term roadmap rather than a short-term transactional roadmap, and I feel very honored about that. As I answered in a previous question, the noise refers to the many short-term transactions that distract from the long-term strategic positioning of our business and products. In summary, I believe our current business in the XPU field is a sustainable strategic initiative for our existing six customers. > > Analyst Joshua Buchalter: Thank you. > > Operator: Thank you. The Q&A session for today has concluded. I would now like to hand the call back to Ji Yoo for her final remarks. > > Head of Investor Relations Ji Yoo: Thank you. Broadcom currently plans to release its fiscal year 2026 second-quarter financial results after the market closes on Wednesday, June 3, 2026. The public webcast of the Broadcom earnings call will take place at 2:00 PM Pacific Time. This concludes today's earnings call. Thank you all for participating. Sree, you may disconnect the call now. > > Operator: That concludes today's program. Thank you all for participating. You may now disconnect.