Broadcom reported record first-quarter fiscal 2025 revenue of $14.9 billion, with strong growth in semiconductors and infrastructure software. The company expects continued growth in the second quarter, driven by AI revenue. It discussed XPU development with its hyperscale partners and highlighted the impact of generative AI on semiconductor technology, with a focus on performance optimization and investment in new networking and connectivity products.
Broadcom is navigating challenges such as tariffs and rising operating costs, but also sees opportunities in new technologies and AI. The company is not considering M&A and will report second-quarter results on June 5, 2025.
Operator
Welcome to Broadcom Inc.'s first quarter fiscal year 2025 financial results conference call.
At this time, for opening remarks and introductions, I would like to turn the call over to Ji Yoo, Head of Investor Relations of Broadcom Inc.
Ji Yoo - Investor Relations
Thank you, Sherry, and good afternoon, everyone. Joining me on today's call are Hock Tan, President and CEO; Kirsten Spears, Chief Financial Officer; and Charlie Kawwas, President, Semiconductor Solutions Group.
Broadcom distributed a press release and financial tables after the market closed, describing our financial performance for the first quarter of fiscal year 2025. If you did not receive a copy, you may obtain the information from the investors section of Broadcom's website at broadcom.com.
This conference call is being webcast live and an audio replay of the call can be accessed for one year through the Investors section of Broadcom's website. During the prepared comments, Hock and Kirsten will be providing details of our first quarter fiscal year 2025 results, guidance for our second quarter of fiscal year 2025, as well as commentary regarding the business environment. We'll take questions after the end of our prepared comments.
Please refer to our press release today and our recent filings with the SEC for information on the specific risk factors that could cause our actual results to differ materially from the forward-looking statements made on this call.
In addition to US GAAP reporting, Broadcom reports certain financial measures on a non-GAAP basis. A reconciliation between GAAP and non-GAAP measures is included in the tables attached to today's press release. Today's call will primarily refer to our non-GAAP financial results.
I'll now turn the call over to Hock.
Hock Tan - President, Chief Executive Officer, Director
Thank you, Ji. And thank you, everyone, for joining today. In our fiscal Q1 2025, total revenue was a record $14.9 billion, up 25% year on year; and consolidated adjusted EBITDA was a record again, $10.1 billion, up 41% year on year.
So let me first provide color on our semiconductor business. Q1 semiconductor revenue was $8.2 billion, up 11% year on year. Growth was driven by AI. AI revenue of $4.1 billion was up 77% year on year. We beat our guidance for AI revenue of $3.8 billion due to stronger shipments of networking solutions to hyperscalers for AI. Our hyperscale partners continue to invest aggressively in their next generation of frontier models, which still require high-performance accelerators as well as AI data centers with larger clusters.
And consistent with this, we are stepping up our R&D investment on two fronts. One, we're pushing the envelope of technology in creating the next generation of accelerators. We're taping out the industry's first 2-nanometer AI XPU in 3.5D packaging as we drive towards a 10,000-teraflops XPU.
Secondly, we have a view towards scaling clusters of 500,000 accelerators for hyperscale customers. We have doubled the radix capacity of the existing Tomahawk 5. And beyond this, to enable AI clusters to scale up on Ethernet towards 1 million XPUs, we have taped out our next-generation 100-terabit Tomahawk 6 switch, running 200G SerDes at 1.6-terabit bandwidth. We will be delivering samples to customers within the next few months.
These R&D investments are very aligned with the road maps of our three hyperscale customers as they each race towards 1 million XPU clusters by the end of 2027. And accordingly, we do reaffirm what we said last quarter: we expect these three hyperscale customers will generate a serviceable addressable market, or SAM, in the range of $60 billion to $90 billion in fiscal 2027.
Beyond these three customers, we had also mentioned previously that we are deeply engaged with two other hyperscalers in enabling them to create their own customized AI accelerators. We are on track to tape out their XPUs this year. In the process of working with the hyperscalers, it has become very clear that while they are excellent in software, Broadcom is the best in hardware.
Working together is what optimizes their large language models. It is, therefore, no surprise to us that since our last earnings call, two additional hyperscalers have selected Broadcom to develop custom accelerators to train their next-generation frontier models. So even as we have three hyperscale customers we are shipping XPUs to in volume today, there are now four more who are deeply engaged with us to create their own accelerators.
And to be clear, of course, these four are not included in our estimated SAM of $60 billion to $90 billion in 2027. So we do see an exciting trend here. New frontier models and techniques put unexpected pressures on AI systems. It's difficult to serve all classes of models with a single system design point, and therefore, it is hard to imagine that a general-purpose accelerator can be configured and optimized across multiple frontier models.
And as I mentioned before, the trend towards XPUs is a multiyear journey.
So coming back to 2025, we see a steady ramp in deployment of our XPUs and networking products. In Q1, AI revenue was $4.1 billion, and we expect Q2 AI revenue to grow to $4.4 billion which is up 44% year on year.
Turning to non-AI semiconductors: revenue of $4.1 billion was down 9% sequentially on a seasonal decline in wireless. In aggregate, during Q1, the recovery in non-AI semiconductors continued to be slow. Broadband, which bottomed in Q4 of 2024, showed a double-digit sequential recovery in Q1 and is expected to be up similarly in Q2 as service providers and telcos step up spending.
Server storage was down single digits sequentially in Q1, but is expected to be up high single digits sequentially in Q2.
Meanwhile, enterprise networking continues to remain flattish in the first half of fiscal '25 as customers continue to work through channel inventory. While wireless was down sequentially due to a seasonal decline, it remained flat year on year. In Q2, wireless is expected to be the same, flat again year on year. Resales in industrial were down double digits in Q1 and are expected to be down in Q2.
So reflecting the foregoing puts and takes, we expect non-AI semiconductor revenue in Q2 to be flattish sequentially, even though we are seeing bookings continue to grow year on year. In summary, for Q2, we expect total semiconductor revenue to grow 2% sequentially, up 17% year on year, to $8.4 billion.
Turning now to the infrastructure software segment. Q1 infrastructure software revenue of $6.7 billion was up 47% year on year and up 15% sequentially, helped in part by deals that slipped from Q4 into Q1. Now this is the first quarter, Q1 '25, where the year-on-year comparables include VMware in both quarters.
We're seeing significant growth in the software segment for two reasons. One, we're converting from a footprint of largely perpetual license to one of full subscription, and as of today, we are over 60% done. Two, these perpetual licenses were largely for compute virtualization, otherwise called vSphere. We are upselling customers to a full-stack VCF, which enables the entire data center to be virtualized.
And this enables customers to create their own private cloud environment on-prem. As of the end of Q1, approximately 70% of our largest 10,000 customers have adopted VCF. As these customers consume VCF, we do see a further opportunity for future growth. As large enterprises adopt AI, they have to run their AI workloads on their on-prem data centers, which will include both GPU servers as well as traditional CPUs.
And just as VCF virtualizes these traditional data centers using CPUs, VCF will also virtualize GPUs on a common platform and enable enterprises to import AI models to run on their own data on-prem. This platform, which virtualizes the GPU, is called VMware Private AI Foundation. And as of today, in collaboration with NVIDIA, we have 39 enterprise customers for VMware Private AI Foundation.
Customer demand has been driven by our open ecosystem, superior load balancing, and automation capabilities that allow them to intelligently pool and run workloads across both GPU and CPU infrastructure, lowering costs.
Moving on to the Q2 outlook for software. We expect revenue of $6.5 billion, up 23% year on year. So in total, we're guiding Q2 consolidated revenue to be approximately $14.9 billion, up 19% year on year. And we expect this will drive Q2 adjusted EBITDA to approximately 66% of revenue.
With that, let me turn the call over to Kirsten.
Kirsten Spears - Chief Financial Officer, Chief Accounting Officer
Thank you, Hock. Let me now provide additional detail on our Q1 financial performance. From a year-on-year comparable basis, keep in mind that Q1 of fiscal 2024 was a 14-week quarter and Q1 of fiscal 2025 is a 13-week quarter. Consolidated revenue was $14.9 billion for the quarter, up 25% from a year ago. Gross margin was 79.1% of revenue in the quarter, better than we originally guided on higher infrastructure software revenue and more favorable semiconductor revenue mix.
Consolidated operating expenses were $2 billion, of which $1.4 billion was for R&D. Q1 operating income of $9.8 billion was up 44% from a year ago, with operating margin at 66% of revenue. Adjusted EBITDA was a record $10.1 billion or 68% of revenue, above our guidance of 66%. This figure excludes $142 million of depreciation.
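As a quick arithmetic check, the quoted revenue, gross margin, and operating expenses reproduce the stated operating income and operating margin. A minimal sketch using only the figures quoted in the call:

```python
# Cross-check of the Q1 FY25 figures quoted above.
revenue = 14.9e9        # consolidated revenue
gross_margin = 0.791    # 79.1% of revenue
opex = 2.0e9            # consolidated operating expenses

gross_profit = revenue * gross_margin
operating_income = gross_profit - opex         # quoted at ~$9.8B
operating_margin = operating_income / revenue  # quoted at ~66%

print(f"gross profit: ${gross_profit / 1e9:.2f}B")
print(f"operating income: ${operating_income / 1e9:.2f}B")
print(f"operating margin: {operating_margin:.0%}")
```

The pieces tie out: 79.1% of $14.9 billion less $2 billion of opex gives roughly the $9.8 billion of operating income and 66% operating margin stated above.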
Now a review of the P&L for our two segments, starting with semiconductors. Revenue for our Semiconductor Solutions segment was $8.2 billion and represented 55% of total revenue in the quarter. This was up 11% year on year. Gross margin for our Semiconductor Solutions segment was approximately 68%, up 70 basis points year on year driven by revenue mix.
Operating expenses increased 3% year on year to $890 million on increased investment in R&D for leading-edge AI semiconductors, resulting in semiconductor operating margin of 57%. Now moving on to infrastructure software. Revenue for infrastructure software of $6.7 billion was 45% of total revenue and up 47% year on year based primarily on increased revenue from VMware.
Gross margin for infrastructure software was 92.5% in the quarter compared to 88% a year ago. Operating expenses were approximately $1.1 billion in the quarter, resulting in infrastructure software operating margin of 76%. This compares to operating margin of 59% a year ago, this year-on-year improvement reflects our disciplined integration of VMware and sharp focus on deploying our VCF strategy.
Moving on to cash flow. Free cash flow in the quarter was $6 billion and represented 40% of revenue. Free cash flow as a percentage of revenue continues to be impacted by cash interest expense from debt related to the VMware acquisition and cash taxes due to the mix of US taxable income, the continued delay in the reenactment of Section 174 and the impact of corporate AMT. We spent $100 million on capital expenditures.
Days sales outstanding were 30 days in the first quarter compared to 41 days a year ago. We ended the first quarter with inventory of $1.9 billion, up 8% sequentially to support revenue in future quarters. Our days of inventory on hand were 65 days in Q1 as we continue to remain disciplined on how we manage inventory across the ecosystem.
We ended the first quarter with $9.3 billion of cash and $68.8 billion of gross principal debt. During the quarter, we repaid $495 million of fixed rate debt and $7.6 billion of floating rate debt with new senior notes, commercial paper and cash on hand, reducing debt by a net $1.1 billion. Following these actions, the weighted average coupon rate and years to maturity of our $58.8 billion in fixed rate debt are 3.8% and 7.3 years, respectively.
The weighted average coupon rate and years to maturity of our $6 billion in floating rate debt are 5.4% and 3.8 years, respectively, and our $4 billion in commercial paper is at an average rate of 4.6%. Turning to capital allocation. In Q1, we paid stockholders $2.8 billion of cash dividends based on a quarterly common stock cash dividend of $0.59 per share. We spent $2 billion to repurchase 8.7 million AVGO shares from employees as those shares vested, to cover withholding taxes. In Q2, we expect the non-GAAP diluted share count to be approximately 4.95 billion shares.
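The stated debt tranches imply a blended coupon and annual cash interest cost. A rough sketch, assuming the three tranches quoted above make up the full $68.8 billion of gross principal debt:

```python
# Blended coupon implied by the stated debt mix.
# Assumption: these three tranches are the entire $68.8B of gross debt.
tranches = {
    "fixed": (58.8e9, 0.038),             # $58.8B at 3.8%
    "floating": (6.0e9, 0.054),           # $6.0B at 5.4%
    "commercial_paper": (4.0e9, 0.046),   # $4.0B at 4.6%
}

total = sum(principal for principal, _ in tranches.values())
annual_interest = sum(principal * rate for principal, rate in tranches.values())
blended = annual_interest / total

print(f"gross principal debt: ${total / 1e9:.1f}B")
print(f"implied annual cash interest: ${annual_interest / 1e9:.2f}B")
print(f"blended coupon: {blended:.2%}")
```

The tranches sum exactly to the $68.8 billion of gross principal debt stated above, with a blended coupon just under 4%, consistent with the commentary that cash interest expense from the VMware-related debt weighs on free cash flow.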
Now moving on to guidance. Our guidance for Q2 is for consolidated revenue of $14.9 billion, with semiconductor revenue of approximately $8.4 billion, up 17% year on year. We expect Q2 AI revenue of $4.4 billion, up 44% year on year. For non-AI semiconductors, we expect Q2 revenue of $4 billion. We expect Q2 infrastructure software revenue of approximately $6.5 billion, up 23% year on year.
We expect Q2 adjusted EBITDA to be about 66% of revenue. For modeling purposes, we expect Q2 consolidated gross margin to be down approximately 20 basis points sequentially on the revenue mix of infrastructure software and product mix within semiconductors.
As Hock discussed earlier, we are increasing our R&D investment in leading edge AI in Q2. And accordingly, we expect adjusted EBITDA to be approximately 66%. We expect the non-GAAP tax rate for Q2 and fiscal year 2025 to be approximately 14%.
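The pieces of the Q2 FY25 guidance can be cross-checked for internal consistency. A minimal sketch using only the figures quoted in the call:

```python
# Cross-check of the Q2 FY25 guidance quoted above.
ai = 4.4e9        # AI semiconductor revenue, up 44% YoY
non_ai = 4.0e9    # non-AI semiconductor revenue
software = 6.5e9  # infrastructure software, up 23% YoY

semis = ai + non_ai              # guided at ~$8.4B
consolidated = semis + software  # guided at ~$14.9B
ebitda_margin = 0.66             # guided at ~66% of revenue
implied_ebitda = consolidated * ebitda_margin

print(f"semiconductors: ${semis / 1e9:.1f}B")
print(f"consolidated: ${consolidated / 1e9:.1f}B")
print(f"implied adjusted EBITDA: ${implied_ebitda / 1e9:.2f}B")
```

The segment pieces sum to the $8.4 billion semiconductor and $14.9 billion consolidated guides, implying adjusted EBITDA of roughly $9.8 billion at the 66% margin.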
That concludes my prepared remarks. Operator, please open up the call for questions.
Operator
(Operator Instructions) Ben Reitzes, Melius.
Ben Reitzes - Analyst
Congrats on the results. Hock, you talked about four more customers coming online. Can you just talk a little bit more about the trend you're seeing? Can any of these customers be as big as the current 3? And what does it say about the custom silicon trend overall and your optimism and upside to the business long term?
Hock Tan - President, Chief Executive Officer, Director
Very interesting question, Ben, and thanks for your kind wishes. But what we've seen is -- and by the way, these four are not yet customers as we define it. As I've always said, in developing and creating XPUs, we are not really the creator of those XPUs, to be honest. We enable each of those hyperscaler partners we engage with to create that chip -- basically, to create that compute system, call it that way. And it comprises the model -- the software model we work closely with -- and the compute engine, the XPU, and the networking that binds the clusters of those multiple XPUs together as a whole to train those large frontier models.
And so even though we create the hardware, it still has to work with the software models and algorithms of those partners of ours before it becomes fully deployable at scale, which is why we define customers in this case as those where we know they have deployed at scale and we receive the production volume to enable it to run. And for that, we only have three, just to reiterate. The four, I call them partners, are trying to create the same thing as the first three -- to train and run their own frontier models.
And as I also said, it doesn't happen overnight. Doing the first chip would typically take 1.5 years, and that's very accelerated -- and we could accelerate it, given that we essentially have a framework and a methodology that works right now.
And as for the three customers -- there's no reason it would not work for the four, but we still need those four partners to create and develop the software, which we don't do, to make it work. And to answer your question, there's no reason why these four would not create demand in the range of what we're seeing with the first three, but probably later. It's a journey. They started later, and so they will probably get there later.
Operator
Harlan Sur, JPMorgan.
Harlan Sur - Analyst
And great job on the strong quarterly execution, Hock and team. Great to see the continued momentum in the AI business here in the first half of your fiscal year and the continued broadening out of your AI customers. I know, Hock, last earnings you did call out a strong ramp in the second half of the fiscal year, driven by new 3-nanometer AI accelerator programs ramping. Can you just help us qualitatively profile the second-half step-up relative to what the team just delivered here in the first half?
Has the profile changed, either favorably or less favorably, versus maybe 90 days ago? Because quite frankly, a lot has happened since last earnings, right? You've had dynamics like DeepSeek and the focus on AI model efficiency, but on the flip side, you've had strong CapEx outlooks from your cloud and hyperscale customers. So any color on the second-half AI profile would be helpful.
Hock Tan - President, Chief Executive Officer, Director
You're asking me to look into the minds of my customers, and I hate to tell you, they don't show me their entire mindset here. But why we're beating the numbers so far in Q1 -- and it seems to be encouraging in Q2 -- is partly improved networking shipments, as I indicated, alongside shipments of AI accelerators and, in some cases, even GPUs for the hyperscalers. And that's good. And partly also, we think there are some pull-ins and, call it, acceleration of shipments in fiscal '25.
Harlan Sur - Analyst
And on the second half that you talked about 90 days ago -- the second-half 3-nanometer ramp -- is that still very much on track?
Hock Tan - President, Chief Executive Officer, Director
Harlan, thank you. I only guide one quarter at a time. Sorry, let's not speculate on the second half.
Operator
William Stein, Truist Securities.
William Stein - Analyst
Congrats on these pretty great results. It seems from the news headlines about tariffs and about DeepSeek that there may be some disruptions; some customers and some other complementary suppliers seem to feel a bit paralyzed, perhaps, and have difficulty making tough decisions. Those tend to be really useful times for great companies to emerge as something bigger and better than they were in the past.
You've grown this company in a tremendous way over the last decade-plus, and you're doing great now, especially in this AI area. But I wonder if you're seeing that sort of disruption from these dynamics that we suspect are happening based on headlines and what we see from other companies? And aside from adding these customers in AI -- I'm sure there's other great stuff going on -- should we expect some bigger changes to come from Broadcom as a result of this?
Hock Tan - President, Chief Executive Officer, Director
You posed a very interesting set of issues and questions, and those are very relevant, interesting issues. The only problem we have at this point is, I would say, it's really too early to know where we all land. I mean, that's right -- the noise of tariffs, especially on chips, hasn't materialized yet, nor do we know how it will be structured. So we don't know. But what we do experience, and are living through now, is disruption in a positive way -- I should add, a very positive disruption in semiconductors from generative AI.
Generative AI for sure. And I said that before, at the risk of repeating, but we feel it more than ever. It's really accelerating the development of semiconductor technology -- both process and packaging -- as well as the design towards higher- and higher-performance accelerators and networking functionality. We're seeing that innovation -- those upgrades -- occur every month as we face new, interesting challenges.
And particularly with XPUs, we've been asked to optimize to the frontier models of our partners -- our customers as well as our hyperscale partners. And it's almost a privilege for us to participate in it and try to optimize. And by optimize, I mean, you look at an accelerator: you can't look at it on just one single metric, which is compute capacity -- how many teraflops. It's more than that.
It's also tied to the fact that this is a distributed computing problem. It's not just the compute capacity of a single XPU or GPU; it's also the network bandwidth as it ties itself to the next adjacent XPU or GPU -- so that has an impact. So as you're doing that, you have to balance against that. Then you decide: are you doing training, or are you doing prefill, post-training fine-tuning?
And then comes how much memory you balance against that, and within how much latency you can afford, which is memory bandwidth. So you look at at least four variables -- maybe even five if we include memory bandwidth, not just memory capacity, when you go straight to inference.
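The trade-off Hock describes -- weighing compute against network bandwidth, memory capacity, and memory bandwidth per workload -- can be sketched as a toy scoring exercise. Everything below (the design points, weights, and numbers) is invented purely for illustration; none of it is Broadcom data:

```python
from dataclasses import dataclass


@dataclass
class DesignPoint:
    """A hypothetical accelerator configuration (illustrative numbers only)."""
    name: str
    compute_tflops: float  # raw compute capacity
    net_tbps: float        # bandwidth to adjacent XPUs in the cluster
    mem_gb: float          # memory capacity
    mem_tbps: float        # memory bandwidth (proxy for latency headroom)


def score(d: DesignPoint, w: dict) -> float:
    # Weighted geometric score: the workload shifts the weights,
    # which in turn shifts which design point wins.
    return (d.compute_tflops ** w["compute"] * d.net_tbps ** w["net"]
            * d.mem_gb ** w["mem_cap"] * d.mem_tbps ** w["mem_bw"])


candidates = [
    DesignPoint("compute-heavy", 5000, 3.2, 128, 6.0),
    DesignPoint("bandwidth-heavy", 2500, 6.4, 192, 9.0),
]
# Training leans on compute and interconnect; inference leans on memory.
training = {"compute": 0.5, "net": 0.2, "mem_cap": 0.1, "mem_bw": 0.2}
inference = {"compute": 0.2, "net": 0.1, "mem_cap": 0.3, "mem_bw": 0.4}

for workload, w in [("training", training), ("inference", inference)]:
    best = max(candidates, key=lambda d: score(d, w))
    print(f"{workload}: best design point -> {best.name}")
```

Under these hypothetical weights, a training workload favors the compute-heavy point while inference favors the bandwidth-heavy one -- a toy version of why a single general-purpose design point is hard to optimize across multiple frontier models.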
So we have all these variables to play with, and we try to optimize. All this is very -- I mean, it's a great experience for our engineers to push the envelope on how to create all those chips. And so that's the biggest disruption we see right now: trying to create and push the envelope on generative AI, trying to create the best hardware infrastructure to run it.
Beyond that, there are other things that come into play, because AI, as I indicated, does not just drive hardware for enterprises; it defines the way they architect their data centers -- data requirements, keeping data private and under control, become important. So suddenly, the push of workloads towards public cloud may take a little pause as large enterprises in particular come to recognize where they want to run their AI workloads.
You probably think very hard about running them on-prem. And suddenly, you push yourself towards saying you've got to upgrade your own data centers and manage your own data to run it on-prem. And that's also pushing a trend that we have been seeing over the past 12 months. Hence my comments on VMware Private AI Foundation -- enterprises pushing in this direction are quickly recognizing where they should run their AI workloads.
So those are trends we see today, a lot of it coming out of AI and a lot of it coming out of sensitive rules on sovereignty in cloud and data. As far as the tariffs you mention are concerned, I think it's too early for us to figure out where they go. Give it maybe another three to six months, and we'll probably have a better idea then.
Operator
Ross Seymore, Deutsche Bank.
Ross Seymore - Analyst
Thanks for taking the question. Hock, I want to go back to the XPU side of things, going from the four new engagements -- the not-yet-named customers: two from last quarter and two more that you announced today. I want to talk about going from design into deployment and how to judge that, because there is some debate about tons of design wins where the deployments actually don't happen -- either they never occur, or the volume is never what was originally promised.
How do you view that kind of conversion ratio? Is there a wide range around it? Or is there some way you could help us kind of understand how that works?
Hock Tan - President, Chief Executive Officer, Director
Well, Ross, interesting question. I'll take the opportunity to say the way we look at a design win is probably very different from the way many of our peers look at it out there. Number one, to begin with, we believe in a design win when we know our product is produced at scale and is actually deployed -- literally deployed in production. So that takes a long lead time, from taping out to getting the product into production.
It easily takes six months to a year from the product being in the hands of our partner to when it goes into scale production. That's the experience we have seen, number one. And number two, producing and deploying 5,000 XPUs, that's a joke. That's not real production in our view.
從產品到達我們合作夥伴的手中到投入規模生產,很容易就需要一年的時間。這將需要六個月到一年的時間。首先,這是我們親眼見證的經驗。第二,我的意思是,生產和部署 5,000 個 XPU,這是一個笑話。我們認為這不是真正的生產。
And so we also limit ourselves in selecting partners to people who really need that large volume. From our viewpoint, you need that large volume at scale right now, mostly in training, the training of large language frontier models on a continuing trajectory. So we limit ourselves to however many customers or potential customers exist out there, Ross, and we tend to be very selective about who we pick at the beginning.
因此,我們在選擇合作夥伴時也將自己限制在真正需要大量產量的人身上。從我們目前的規模來看,你需要大量的數據,主要用於訓練,大型語言模型的訓練,以及持續發展軌跡中的前沿模型的訓練。因此,我們會根據現有的客戶數量或潛在客戶數量來決定,羅斯,我們傾向於非常謹慎地選擇一開始的客戶。
So when we say design win, it really is at scale. It's not something that starts in six months and dies in a year. Basically, it's a selection of customers. It's the way we have run our ASIC business in general for the last 15 years. We pick and choose the customers because we know this, and we do multiyear road maps with these customers because we know these customers are sustainable.
因此,當我們說設計時——它確實具有規模。它不是六個月內開始、一年內消亡、然後再次消亡的事物。基本上,這是對顧客的選擇。這只是我們過去 15 年來經營 ASIC 業務的整體方式。我們挑選客戶是因為我們了解這一點,我們與這些客戶一起制定多年路線圖,因為我們知道這些客戶是可持續的。
I'll put it bluntly. We don't do it for start-ups.
我就直說吧。我們不會為新創公司做這件事。
Operator
Operator
Stacy Rasgon, Bernstein Research.
伯恩斯坦研究公司的史黛西‧拉斯岡 (Stacy Rasgon)。
Stacy Rasgon - Analyst
Stacy Rasgon - Analyst
I wanted to go to the three customers that you do have in volume today. And what I wanted to ask was, is there any concern about some of the new regulations, the AI diffusion rules that are supposedly going to be put in place in May, impacting any of those design wins or shipments? It sounds like you think all three of those are still on at this point, but anything you could tell us about whether those new regulations or AI diffusion rules impact any of those wins would be helpful.
我想拜訪一下今天你們的三位客戶。我想問的是,是否有人擔心 5 月實施的一些新法規或人工智慧傳播規則會影響任何設計訂單或出貨量。聽起來你認為這三個因素目前仍然存在,但是如果你能告訴我們新法規或人工智慧擴散對這些勝利有何影響,那將會很有幫助。
Hock Tan - President, Chief Executive Officer, Director
Hock Tan - President, Chief Executive Officer, Director
Thank you. In this current era of geopolitical tensions and fairly dramatic actions all around by governments, yes, there's always some concern at the back of everybody's mind. But to answer your question directly: no, we don't have any concerns.
謝謝。在這個時代,或者說當前的時代,地緣政治局勢緊張,各國政府都採取了相當激烈的行動。是的,每個人心裡總是會有一些擔憂。但直接回答你的問題,沒有,我們沒有任何擔憂。
Stacy Rasgon - Analyst
Stacy Rasgon - Analyst
Got it. So none of those are going into China or to Chinese customers then?
知道了。那麼這些產品都不會進入中國市場或銷往中國客戶嗎?
Hock Tan - President, Chief Executive Officer, Director
Hock Tan - President, Chief Executive Officer, Director
No comment. [Are you trying to love me in Cuneo?]
沒有意見。[你想在庫尼奧愛我嗎?]
Operator
Operator
Vivek Arya, Bank of America.
美國銀行的維韋克·艾莉亞(Vivek Arya)。
Vivek Arya - Analyst
Vivek Arya - Analyst
Hock, whenever you have described your AI opportunity, you've always emphasized the training workload. But the perception is that the AI market could be dominated by the inference workload, especially with these new reasoning models. So what happens to your opportunity and share if the mix moves more towards inference? Does it create a bigger TAM for you than the $60 billion to $90 billion? Does it keep it the same, but with a different mix of products? Or does a more inference-heavy market favor a GPU over the next year?
霍克,每當您描述您的人工智慧機會時,您總是強調訓練工作量。但人們的看法是,人工智慧市場可能由推理工作量主導,尤其是在這些新的推理模型下。那麼,如果組合更傾向於推理,您的機會和份額會發生什麼變化?它是否為您創造了比 600 億到 900 億美元更大的 TAM?是否維持不變,但產品組合有所不同?或者,在未來一年中,更注重推理的市場會青睞 GPU?
Hock Tan - President, Chief Executive Officer, Director
Hock Tan - President, Chief Executive Officer, Director
That's a good, interesting question. By the way, I do talk a lot about training, but we do have (inaudible) XPUs also focused on inference as a separate product line. They do. And that's why I can say the architecture of those chips is very different from the architecture of the training chips.
這是一個好問題,很有趣。順便說一句,我從來沒有——而且我確實談論了很多有關訓練的事情。我們確實(聽不清楚)XPU 讓我們也將推理作為一條單獨的產品線。是的。這就是為什麼我可以說這些晶片的架構與訓練晶片的架構非常不同。
And so it's the combination of those two, I should add, that adds up to this $60 billion to $90 billion. So if I had not been clear, I do apologize; it's a combination of both. But having said that, the larger part of the dollars comes from training, not inference, within the served market we have talked about so far.
所以我應該補充一下,這兩者加起來就是 600 億到 900 億美元。所以,如果我沒有表達清楚,我很抱歉,這是兩者的結合。但話雖如此,大部分資金來自於培訓,而不是服務內部的推斷,這是我們迄今為止所討論的事情。
Operator
Operator
Harsh Kumar, Piper Sandler.
哈什·庫馬爾,派珀·桑德勒。
Harsh Kumar - Analyst
Harsh Kumar - Analyst
Broadcom team, again, great execution. Hock, just a quick question. We've been hearing that almost all of the large clusters, 100,000 XPUs plus, are going to Ethernet. I was wondering if you could help us understand what matters when the customer is making a selection, choosing between a company that has the best switch ASIC, such as you, versus a company that might have the compute there. Can you talk about what the customer is thinking and what the final points are that they want to hit when they make that selection for the NIC cards?
博通團隊再次表現出色。只是霍克,有一個簡單的問題。我們聽說幾乎所有十萬以上的大型集群都將採用乙太網路。我想知道您是否可以幫助我們了解客戶在進行選擇時的重要性,在擁有最佳交換機 ASIC 的公司(例如您)與可能擁有計算能力的公司之間進行選擇,您能否談談客戶的想法以及他們在選擇 NIC 卡時想要考慮的最終要點是什麼?
Hock Tan - President, Chief Executive Officer, Director
Hock Tan - President, Chief Executive Officer, Director
Okay, I see. In the case of the hyperscalers, it's very much driven by performance, the performance you're mentioning in connecting, scaling up and scaling out those AI accelerators, be they XPUs or GPUs. That's the case in most of the engagements we have with those hyperscalers when it comes to connecting those clusters.
好的。我懂了。不,是的,這取決於——就超大規模計算而言,現在這在很大程度上取決於性能。還有它的效能,您提到的連接、擴展和擴展這些 AI 加速器,無論是超大規模器中的 XPU 還是 GPU。大多數情況下,在連接這些集群時,我們會與這些超大規模器進行合作。
They are very driven by performance. If you are in a race to really get the best performance out of your hardware as you train and continue to train your frontier models, that matters more than anything else. So the basic first thing they go for is proven hardware.
他們非常注重績效。我的意思是,如果你在訓練並繼續訓練你的前沿模型時,你正在競賽真正地從硬體中獲得最佳性能。這比什麼都重要。因此,他們首先要追求的基本目標就是得到證實。這是一個已經過驗證的硬體。
It's a proven system, a subsystem in our case, that makes it work. And in that case, we tend to have a big advantage, because networking is us; switching and routing have been us for at least the last 10 years.
在我們的案例中,它是一個經過驗證的系統、子系統,能夠讓它發揮作用。在這種情況下,我們往往具有很大的優勢,因為我的意思是,網路就是我們——至少在過去的 10 年裡,交換和路由就是我們。
And the fact that it's AI just makes it more interesting for our engineers to work on. But it's basically based on proven technology and experience in pushing the envelope, going from 800 gigabits per second of bandwidth to 1.6 terabits and moving on to 3.2, which is exactly why we keep stepping up the rate of investment in our products. Take Tomahawk 5: we doubled the radix to deal with just one hyperscaler, because they want high radix to create larger clusters while running bandwidths that are smaller. But that doesn't stop us from moving ahead to the next generation, Tomahawk 6.
事實上,因為它是 AI,這只會讓我們的工程師覺得工作更有趣。但它基本上是基於經過驗證的技術和經驗,不斷突破極限,將頻寬從每秒 800 Gb 提升到 1.6 Tb,再邁向 3.2 Tb,這正是我們不斷加快產品投資步伐的原因。以 Tomahawk 5 為例:我們將 radix(埠基數)提高了一倍,只為應對一家超大規模業者,因為他們希望以高 radix 建立更大的集群,同時運行較低的頻寬,但這並沒有阻止我們邁向下一代 Tomahawk 6。
And I did say we're even planning Tomahawk 7 and 8 right now, and we're speeding up the rate of development. It's all largely for those few guys, by the way. So we're making a lot of investment for very few customers, hopefully with very large served available markets. If nothing else, those are the big bets we are placing.
我確實說過我們現在甚至正在規劃戰斧 7 和 8。我們正在加快發展速度。順便說一句,這一切主要都是為了那少數人。因此,我們為極少數客戶進行了大量投資,希望能夠獲得非常大的可用服務市場。如果我沒有別的打算,這就是我們所下的大賭注。
Operator
Operator
Timothy Arcuri, UBS.
瑞銀的提摩西·阿庫裡。
Timothy Arcuri - Analyst
Timothy Arcuri - Analyst
Hock, in the past you have mentioned XPU units growing from about 2 million last year to about 7 million in the 2027-2028 timeframe. My question is, do these four new customers add to that 7 million unit number? I know in the past you talked about an ASP of about $20,000 by then. The first three customers are clearly a subset of that 7 million units. So do these four new engagements drive that 7 million higher? Or do they just fill in to get to that 7 million?
Hock,您過去曾提到,XPU 單元數量將從去年的約 200 萬增長到 2027 年、2028 年的約 700 萬。我的問題是,這四個新客戶是否會增加 700 萬台的數量?我知道過去你曾經談論過當時的 ASP 為 20,000 美元。因此,前三位客戶顯然是其中的一部分,即 700 萬台。那麼,這四項新活動是否會讓 700 萬這一數字增加呢?還是他們只是填補空缺以達到那 700 萬?
Hock Tan - President, Chief Executive Officer, Director
Hock Tan - President, Chief Executive Officer, Director
Thanks, Tim, for asking that. To clarify, I thought I made it clear in my comments: no. The market we are talking about, including when you translate it into units, is only among the three customers we have today. The other four we talk about as engagement partners. We don't consider them customers yet, and therefore they are not in our served available market.
謝謝蒂姆問這個問題。為了澄清起見,我認為我在我的評論中已經說得很清楚了。不。我們所談論的市場,包括你翻譯的單位,只是我們今天的三個客戶之一。另外四個我們討論的是參與夥伴。我們尚未將其視為客戶,因此不在我們服務的可用市場中。
Timothy Arcuri - Analyst
Timothy Arcuri - Analyst
Okay. So they would add to that number.
好的。所以他們會增加這個數字。
Operator
Operator
C.J. Muse, Cantor Fitzgerald.
C.J. Muse,Cantor Fitzgerald。
C.J. Muse - Analyst
C.J. Muse - Analyst
I guess, Hock, to follow up on your prepared remarks and your earlier comments around optimization, your best hardware with the hyperscalers' great software: I'm curious how expanding your engagements now to six mega-scale frontier-model players will enable you, at one blush, to share tremendous information, while at the same time these six truly want to differentiate.
霍克,我想跟進一下您之前準備好的評論和意見,關於使用您最好的硬體進行優化,以及使用超大規模計算平台及其優秀軟體進行優化,我很好奇您現在如何將您的產品組合擴展到六個超大規模的前沿模型,這些模型將使您能夠一目了然地分享巨大的信息,但與此同時,這六個模型真正想要在這個世界中進行區分。
So obviously, the goal for all of these players is floating-point operations per second per dollar of CapEx per watt. And I guess, to what degree are you aiding them in this effort? And where does the Chinese wall kind of start, where they want to differentiate and not share with you some of the work that you're doing?
因此顯然,所有這些參與者的目標都是每瓦每美元資本支出每秒的故障次數。我想問一下,您在多大程度上幫助了他們實現這一目標?那麼,中國牆的起點可能在哪裡呢? 他們想要有所區分,不與你分享你正在做的一些工作。
Hock Tan - President, Chief Executive Officer, Director
Hock Tan - President, Chief Executive Officer, Director
We only provide very basic, fundamental technology in semiconductors, enabling these guys to take what we have and optimize it to their own particular models and the algorithms that relate to those models. That's it. That's all we do. So that's the level of optimization we do for each of them. And as I mentioned earlier, there are maybe five degrees of freedom that we work with.
我們僅提供半導體領域最基礎的基本技術,以使這些人能夠使用我們所擁有的技術,並根據他們自己的特定模型以及與這些模型相關的演算法對其進行最佳化。就是這樣。這就是我們所做的一切。這就是我們為每一個進行了大量優化的層次。正如我之前提到的,我們的自由度可能有 5 個。
And we play within that. So even with five degrees of freedom, there's only so much we can do at that point.
我們就玩這個。所以,即使有 5 個自由度,我們能做的事情也非常有限。
And how we optimize it is all tied to the partner telling us how they want to do it, so there's only so much we have visibility on. But what we do now is what the XPU model is: shared optimization, translating to performance, but also power. That's very important, how they play it.
但事實是,以及他們如何優化它,基本上我們如何優化它,都與合作夥伴告訴我們他們想如何做有關。因此,我們總是能夠看到很多東西。但我們現在所做的是 XPU 模型,共享最佳化,轉換為效能,還有功率,這非常重要,他們如何發揮它的作用。
It's not just cost, though power translates into total cost of ownership eventually. It's how we design and power it, and how we balance it in terms of the size of the cluster, and whether they use it for training, pretraining, post-training, inference, or test-time scaling; all of them have their own characteristics. And that's the advantage of doing that XPU work and collaborating closely with them to create that stuff.
雖然功率最終會轉化為總擁有成本,但這不僅僅是成本問題。它是如何設計它賦予權力,我們如何在集群規模方面平衡它,以及它們是否用於訓練、預訓練、後訓練、推理、測試時間擴展,它們都有自己的特點。這就是採用 XP 並與他們密切合作創造這些產品的優勢。
Now, as far as your question on China and all that, frankly, I don't have any opinion on that at all. To us, it's a technical game.
至於你關於中國的問題,坦白說,我對此沒有任何意見。對我們來說,這是一場技術遊戲。
Operator
Operator
Christopher Rolland, Susquehanna.
克里斯多福羅蘭,薩斯奎哈納。
Christopher Rolland - Analyst
Christopher Rolland - Analyst
And this one's maybe for Hock and for Kirsten. Since you have pretty much the complete connectivity portfolio, I'd love to know how you see new greenfield scale-up opportunities playing out here, whether in optical or copper or really anything, and how additive this could be for your company.
這一個也許是獻給霍克 (Hock) 和克斯汀 (Kirsten) 的。我很想知道,因為您擁有完整的連接產品組合,您如何看待新的綠地擴大機會,可能是光纖、銅線還是其他任何東西,以及這對您的公司來說可能意味著什麼?
And then, Kirsten, I think OpEx is up. Maybe just talk about where those OpEx dollars are going towards within the AI opportunity and whether they relate
然後,Kirsten,我認為 OpEx 已經上升了。也許只是談談這些營運支出在人工智慧機會中的去向,以及它們是否相關
Hock Tan - President, Chief Executive Officer, Director
Hock Tan - President, Chief Executive Officer, Director
Your question reaches very broadly across our portfolio. Yes, we have the advantage, and a lot of the hyperscale customers we deal with are talking about a lot of expansion, and it's almost all greenfield, less so brownfield. It's very greenfield. It's all expansion, and it all tends to be next-generation work we do, which is very exciting.
您的問題涉及我們的產品組合非常廣泛。是的,我們有優勢——我們打交道的許多超大規模客戶都在談論大量擴張,或者幾乎都是綠地。 ——棕地就不那麼重要了。這裡一片綠地。這一切都是擴張,而且我們所做的一切都趨向下一代,這是非常令人興奮的。
So the opportunity is very, very high. And in deploying, I mean, we can do it in copper, but where we see a lot of opportunity is when you provide the networking connectivity through optical. So there are a lot of active elements, including either multimode lasers, which are called VCSELs, or (inaudible) lasers for basically single mode, and we do both. So there's a lot of opportunity in scale-up versus scale-out alone.
所以機會非常非常高。我們正在部署 — — 我的意思是,我們可以用銅來完成。但是,我們看到很多機會在於當您連接時—透過光纖提供網路連線。因此有很多活躍元素,包括多模雷射(稱為 VCSEL)或(聽不清楚)基本上為單模的雷射器,我們兩種都做。因此,與擴大規模相比,擴大規模仍有許多機會。
We used to do, and we still do, a lot of other protocols beyond Ethernet. Consider PCI Express, where we are on the leading edge of PCI Express.
我們過去就在做,現在仍然在做乙太網路以外的許多其他協定。以 PCI Express 為例,我們處於 PCI Express 的前沿。
And on the architecture of networking, of switching, so to speak, we offer both. One is a very intelligent switch, which is our Jericho family, paired with a dumb NIC. The other is a dumb switch, which is the Tomahawk, paired with a very smart NIC. We offer both architectures as well.
至於網路與交換架構,我們兩者都提供。一種是非常聰明的交換器,也就是我們的 Jericho 系列,搭配簡單的 NIC;另一種是簡單的交換器,也就是 Tomahawk,搭配非常聰明的 NIC。我們同樣提供這兩種架構。
So yes, we have a lot of opportunities from it. All things said and done, this nice wide portfolio adds up, as I said in prior quarters, to about 20% of our total AI revenue, maybe going to 30%. Last quarter we hit almost 40%, but that's not the norm.
所以是的,我們從中獲益良多。總而言之,所有這些出色的白色產品組合以及所有這些加起來可能相當於我們總 AI 收入的 20% 到 30%,正如我在前幾個季度所說的那樣。雖然上個季度我們的成長率接近 40%,但這並不是常態。
I would say that typically, all those other portfolio products still end up as a decent amount of revenue for us, but within AI sales they add up, on average, to close to 30%, with XPUs, the accelerators, at 70%. If that's what you're driving at, perhaps that sheds some light on how one matters relative to the other. But we have a wide range of products on the connectivity, networking side; they just add up to that 30%.
我想說,通常情況下,所有其他產品組合最終仍會為我們帶來相當可觀的收入,但在人工智慧的銷售中,它們加起來平均接近 30%,而 XPU 加速器佔 70%,如果這就是你所追求的,也許這可以給你一些啟發,說明一個產品比另一個產品更重要。但我們在連接性和網路方面擁有廣泛的產品。它們加起來就達到了那 30%。
Kirsten Spears - Chief Financial Officer, Chief Accounting Officer
Kirsten Spears - Chief Financial Officer, Chief Accounting Officer
And then on the R&D front, as I outlined, on a consolidated basis we spent $1.4 billion on R&D in Q1, and I stated that it would be going up in Q2. Hock clearly outlined in his script the two areas we're focusing on. Now, I would tell you that as a company we focus on R&D across all of our product lines so that we can stay competitive with next-generation product offerings. But he did lay out that we are focusing on taping out the industry's first 2-nanometer AI XPU packaged in 3D. That was one in the script, and that's an area we're focusing on.
然後在研發方面,正如我所概述的,在合併基礎上,我們在第一季的研發上花費了 14 億美元,我表示第二季這一數字將會增加。霍克在他的講稿中清楚地概述了我們關注的兩個領域。現在我想告訴你,作為一家公司,我們專注於所有產品線的研發,以便我們能夠透過下一代產品保持競爭力。但他確實指出,我們正專注於流片業界首個採用 3D 封裝的 2 奈米 AI XPU。這是講稿中的內容,也是我們關注的領域。
And then he mentioned that we've doubled the radix capacity of the existing Tomahawk 5 to enable our AI customers to scale up on Ethernet towards 1 million XPUs. So that's a huge focus of the company.
然後他提到,我們將現有 Tomahawk 5 的 radix(埠基數)容量提高了一倍,使我們的 AI 客戶能夠在乙太網路上擴展到 100 萬個 XPU。所以這是公司關注的重點。
Operator
Operator
Vijay Rakesh, Mizuho.
瑞穗的 Vijay Rakesh。
Vijay Rakesh - Analyst
Vijay Rakesh - Analyst
Just a quick question on the networking side. Just wondering how much it was up sequentially on the AI side? And any thoughts on M&A going forward, seeing a lot of the headlines around Intel?
我只想問一個關於網路方面的簡單問題。我只是想知道人工智慧方面連續漲了多少?對於未來的併購有什麼想法嗎?看到很多有關英特爾產品項目的頭條新聞嗎?
Hock Tan - President, Chief Executive Officer, Director
Hock Tan - President, Chief Executive Officer, Director
Okay. On the networking side, as I indicated, Q1 showed a bit of a surge, but I don't expect that mix of 60-40, 60% compute and 40% networking, to be the norm. I think the norm is closer to 70-30, with networking at best 30%. And who knows; we kind of see Q2 as continuing that, but to my mind it's a temporary blip. The norm will be 70-30.
好的。在網路方面,正如我所指出的,第一季出現了一些激增,但我不認為 60-40(60% 是運算,40% 是網路)的組合會成為常態。我認為常態更接近 70-30,網路最多佔 30%。誰知道呢,我們認為第二季會延續這種情況,但在我看來這只是暫時現象。常態將是 70-30。
If you take it across a period of time, like six months to a year, that answers your question. M&A? No, I'm too busy; we're too busy doing AI and VMware at this point. We're not thinking about it at this point.
如果你將其跨越一年六個月這樣的時間段,這就是你的問題。併購,不,我太忙了,我們現在忙於做人工智慧和 VMware。我們目前還沒有考慮這個問題。
Operator
Operator
That's all the time we have for our question-and-answer session. I would now like to turn the call back over to Ji Yoo for any closing remarks.
我們的問答環節的時間就這麼多了。現在我想將電話轉回 Ji Yoo 來做最後發言。
Ji Yoo - Investor Relations
Ji Yoo - Investor Relations
Thank you, Sherry. Broadcom currently plans to report its earnings for the second quarter of fiscal year 2025 after close of market on Thursday, June 5, 2025. Public webcast of our company's earnings conference call will follow at 2:00 PM Pacific.
謝謝你,雪莉。博通目前計劃於 2025 年 6 月 5 日星期四收盤後公佈其 2025 財年第二季度收益。我們公司的收益電話會議將於太平洋時間下午 2:00 進行公開網路直播。
That will conclude our earnings call today. Thank you all for joining. Sherry, you may end the call.
今天的收益電話會議就到此結束。感謝大家的加入。雪莉,你可以結束通話了。
Operator
Operator
Thank you. Ladies and gentlemen, thank you for participating. This concludes today's program. You may now disconnect.
謝謝。女士們、先生們,感謝你們的參與。今天的節目到此結束。您現在可以斷開連線。