Operator
Good afternoon. My name is Sarah, and I will be your conference operator today. At this time, I would like to welcome everyone to NVIDIA's second-quarter fiscal 2026 financial results conference call. (Operator Instructions)
Toshiya Hari, you may begin your conference.
Toshiya Hari - Investor Relations
Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the second quarter of fiscal 2026. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer.
I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the third quarter of fiscal 2026. The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without our prior written consent.
During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, August 27, 2025, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements.
During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website.
With that, let me turn the call over to Colette.
Colette Kress - Executive Vice President, Chief Financial Officer
Thank you, Toshiya. We delivered another record quarter while navigating what continues to be a dynamic external environment. Total revenue was $46.7 billion, exceeding our outlook, as we grew sequentially across all market platforms. Data center revenue grew 56% year over year. Data center revenue also grew sequentially despite the $4 billion decline in H20 revenue. NVIDIA's Blackwell revenue reached record levels, growing sequentially by 17%. We began production shipments of GB300 in Q2. Our full stack AI solutions for cloud service providers, neoclouds, enterprises, and sovereigns are all contributing to our growth.
We are at the beginning of an industrial revolution that will transform every industry. We see $3 trillion to $4 trillion in AI infrastructure spend by the end of the decade. The scale and scope of these build-outs present significant long-term growth opportunities for NVIDIA.
The GB200 NVL system is seeing widespread adoption, with deployments at CSPs and consumer Internet companies. Lighthouse model builders, including OpenAI, Meta, and Mistral, are using GB200 NVL72 at data center scale, both for training next-generation models and for serving inference models in production.
The new Blackwell Ultra platform has also had a strong quarter, generating tens of billions in revenue. The transition to the GB300 has been seamless for major cloud service providers because it shares its architecture, software, and physical footprint with the GB200, enabling them to build and deploy GB300 racks with ease. Factory builds in late July and early August were successfully converted to support the GB300 ramp, and today full production is underway. The current run rate is back at full speed, producing approximately 1,000 racks per week. This output is expected to accelerate even further throughout the third quarter as additional capacity comes online.
We expect widespread market availability in the second half of the year as CoreWeave prepares to bring its GB300 instances to market; it is already seeing 10x more inference performance on reasoning models compared to the H100. Compared to the previous Hopper generation, GB300 NVL72 AI factories promise a 10x improvement in tokens-per-watt energy efficiency, which translates into revenue because data centers are power limited.
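The link between tokens-per-watt efficiency and revenue in a power-limited data center can be sketched with simple arithmetic (all figures in this sketch are hypothetical placeholders, not NVIDIA data):

```python
def tokens_per_second(tokens_per_joule: float, power_watts: float) -> float:
    """In a power-limited data center, throughput is capped by power:
    tokens/s = (tokens/J) * (J/s)."""
    return tokens_per_joule * power_watts

def annual_token_revenue(tokens_per_joule: float, power_watts: float,
                         price_per_million_tokens: float) -> float:
    seconds_per_year = 365 * 24 * 3600
    tokens = tokens_per_second(tokens_per_joule, power_watts) * seconds_per_year
    return tokens / 1e6 * price_per_million_tokens

# Hypothetical 10 MW facility at a fixed token price: a 10x improvement
# in tokens per joule yields 10x the revenue, since power is the binding
# constraint.
base = annual_token_revenue(50, 10e6, 2.0)
improved = annual_token_revenue(500, 10e6, 2.0)
```

Because the power budget and token price are fixed, any efficiency gain flows directly through to the revenue line.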
The chips of the Rubin platform are in fab: the Vera CPU, Rubin GPU, CX9 SuperNIC, NVLink 144 scale-up switch, Spectrum-X scale-out and scale-across switches, and the silicon photonics processor. Rubin remains on schedule for volume production next year. Rubin will be our third-generation NVLink rack-scale AI supercomputer with a mature and full-scale supply chain. This keeps us on track with our annual product cadence and continuous innovation across compute, networking, systems, and software.
In late July, the US government began reviewing licenses for sales of H20 to China customers. While a select number of our China-based customers have received licenses over the past few weeks, we have not shipped any H20 based on those licenses. USG officials have expressed an expectation that the USG will receive 15% of the revenue generated from licensed H20 sales, but to date, the USG has not published a regulation codifying such a requirement.
We have not included H20 in our Q3 outlook as we continue to work through geopolitical issues. If geopolitical issues subside, we should ship $2 billion to $5 billion in H20 revenue in Q3. And if we had more orders, we could build more.
We continue to advocate for the US government to approve Blackwell for China. Our products are designed and sold for beneficial commercial use, and every licensed sale we make will benefit the US economy and US leadership.
In highly competitive markets, we want to win the support of every developer. America's AI technology stack can be the world's standard if we race and compete globally.
Notable in the quarter was an increase in H100 and H200 shipments. We also sold approximately $650 million of H20 in Q2 to an unrestricted customer outside of China. The sequential increase in Hopper demand indicates the breadth of data center workloads that run on accelerated computing and the power of CUDA libraries and full stack optimizations, which continuously enhance the performance and economic value of our platform.
As we continue to deliver both Hopper and Blackwell GPUs, we are focused on meeting the soaring global demand. This growth is fueled by capital expenditures from the cloud to enterprises, which are on track to invest $600 billion in data center infrastructure and compute this calendar year alone, nearly doubling in two years. We expect annual AI infrastructure investments to continue growing, driven by several factors: reasoning and agentic AI requiring orders of magnitude more training and inference compute; global build-outs for sovereign AI; enterprise AI adoption; and the arrival of physical AI and robotics.
Blackwell has set the benchmark as the new standard for AI inference performance. The market for AI inference is expanding rapidly, with reasoning and agentic AI gaining traction across industries. Blackwell's rack-scale NVLink and CUDA full stack architecture addresses this by redefining the economics of inference. New NVFP4 4-bit precision and NVLink 72 on the GB300 platform deliver a 50x increase in energy efficiency per token compared to Hopper, enabling companies to monetize their compute at unprecedented scale.
For instance, a $3 million investment in GB200 infrastructure can generate $30 million in token revenue, a 10x return.
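That return multiple is straightforward arithmetic on the two figures quoted above:

```python
def token_roi(infrastructure_cost: float, token_revenue: float) -> float:
    """Revenue multiple on infrastructure spend."""
    return token_revenue / infrastructure_cost

# Figures quoted on the call: $3M of GB200 infrastructure,
# $30M of token revenue.
multiple = token_roi(3e6, 30e6)
```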
NVIDIA software innovation, combined with the strength of our developer ecosystem, has already improved Blackwell's performance by more than 2x since its launch. Advances in CUDA, TensorRT-LLM, and Dynamo are unlocking maximum efficiency. CUDA library contributions from the open source community, along with NVIDIA's open libraries and frameworks, are now integrated into millions of workflows. This powerful flywheel of collaborative innovation between NVIDIA and the global community strengthens NVIDIA's performance leadership. NVIDIA is a top contributor to open AI models, data, and software.
Blackwell has introduced a groundbreaking numerical approach to large language model pretraining. Using NVFP4, computations on the GB300 can now achieve 7x faster training than the H100, which uses FP8. This innovation delivers the accuracy of 16-bit precision with the speed and efficiency of 4-bit, setting a new standard for AI factory efficiency and scalability.
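The core idea behind block-scaled 4-bit formats such as NVFP4, storing values in 4 bits while keeping a shared higher-precision scale per block, can be illustrated with a toy symmetric quantizer (a simplified sketch for intuition only, not NVIDIA's actual format):

```python
def quantize_block_4bit(values):
    """Quantize a block of floats to signed 4-bit integers in [-7, 7]
    sharing one scale factor, the basic mechanism behind block-scaled
    low-precision formats."""
    scale = max(abs(v) for v in values) / 7 or 1.0
    return [round(v / scale) for v in values], scale

def dequantize_block(quantized, scale):
    return [q * scale for q in quantized]

block = [0.1, -0.5, 0.25, 0.7]
q, s = quantize_block_4bit(block)
approx = dequantize_block(q, s)  # close to the original block
```

Because the scale adapts per block, the worst-case rounding error stays within about half a quantization step, which is how small blocks of 4-bit values can approximate higher-precision tensors.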
The AI industry is quickly adopting this revolutionary technology with major players such as AWS, Google Cloud, Microsoft Azure, and OpenAI as well as Cohere, Mistral, Kimi, Perplexity, Reflection and Runway, already embracing it.
NVIDIA's performance leadership was further validated in the latest MLPerf training benchmarks, where the GB200 delivered a clean sweep. Be on the lookout for the upcoming MLPerf inference results in September, which will include benchmarks based on Blackwell Ultra.
NVIDIA RTX Pro servers are in full production with the world's system makers. These are air-cooled, PCIe-based systems that integrate seamlessly into standard IT environments and run traditional enterprise IT applications as well as the most advanced agentic and physical AI applications. Nearly 90 companies, including many global leaders, are already adopting RTX Pro servers. Hitachi uses them for real-time simulation and digital twins, Lilly for drug discovery, Hyundai for factory design and AV validation, and Disney for immersive storytelling. As enterprises modernize data centers, RTX Pro servers are poised to become a multibillion-dollar product line.
Sovereign AI is on the rise, as nations' ability to develop their own AI using domestic infrastructure, data, and talent presents a significant opportunity for NVIDIA. NVIDIA is at the forefront of landmark initiatives across the UK and Europe. The European Union plans to invest EUR20 billion to establish 20 AI factories across France, Germany, Italy, and Spain, including 5 gigafactories, to increase its AI compute infrastructure by tenfold.
In the UK, the Isambard-AI supercomputer powered by NVIDIA was unveiled as the country's most powerful AI system, delivering AI performance to accelerate breakthroughs in fields such as drug discovery and climate modeling. We are on track to achieve over $20 billion in sovereign AI revenue this year, more than double last year's total.
Networking delivered record revenue of $7.3 billion, as the escalating demands of AI compute clusters necessitate high-efficiency, low-latency networking. This represents a 46% sequential and 98% year-on-year increase, with strong demand across Spectrum-X Ethernet, InfiniBand, and NVLink.
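As a quick consistency check, the quoted growth rates imply the prior-period networking revenue levels:

```python
def implied_prior_revenue(current: float, growth_rate: float) -> float:
    """Back out the base-period figure from a growth rate."""
    return current / (1 + growth_rate)

networking_q2 = 7.3  # $ billions, as reported
prior_quarter = implied_prior_revenue(networking_q2, 0.46)  # roughly $5.0B
year_ago = implied_prior_revenue(networking_q2, 0.98)       # roughly $3.7B
```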
Our Spectrum-X enhanced Ethernet solutions provide the highest-throughput, lowest-latency network for Ethernet AI workloads. Spectrum-X Ethernet delivered double-digit sequential and year-over-year growth, with annualized revenue exceeding $10 billion. At Hot Chips, we introduced Spectrum-XGS Ethernet, a technology designed to unify disparate data centers into giga-scale AI super factories. CoreWeave is an initial adopter of the solution, which is projected to double GPU-to-GPU communication speed.
InfiniBand revenue nearly doubled sequentially, fueled by the adoption of XDR technology, which provides double the bandwidth of its predecessor, especially valuable for model builders.
NVLink, the world's fastest switch with 14x the bandwidth of PCIe Gen 5, delivered strong growth as customers deployed Blackwell NVLink rack-scale systems. The positive reception of NVLink Fusion, which enables semi-custom AI infrastructure, has been widespread.
Japan's upcoming FugakuNEXT will integrate Fujitsu's CPUs with our architecture via NVLink Fusion. It will run a range of workloads, including AI, supercomputing, and quantum computing. FugakuNEXT joins a rapidly expanding list of leading quantum supercomputing and research centers running on NVIDIA's CUDA-Q quantum platform, including ULC, AIST, and NNS, supported by over 300 ecosystem partners, including AWS, Google Quantum AI, Quantinuum, QuEra, and [PsiQuantum].
Jetson Thor, our new robotics computing platform, is now available. It delivers an order of magnitude greater AI performance and energy efficiency than NVIDIA AGX Orin. It runs the latest generative and reasoning AI models at the edge in real time, enabling state-of-the-art robotics. Adoption of NVIDIA's robotics full stack platform is growing at a rapid rate, with over 2 million developers and 1,000-plus hardware, software application, and sensor partners taking our platform to market. Leading enterprises across industries have adopted Thor, including Agility Robotics, Amazon Robotics, Boston Dynamics, Caterpillar, Figure, Hexagon, Medtronic, and Meta.
Robotic applications require exponentially more compute on the device and in infrastructure, representing a significant long-term demand driver for our data center platform. NVIDIA Omniverse with Cosmos is our data center physical AI digital twin platform built for the development of robots and robotic systems. This quarter, we announced a major expansion of our partnership with Siemens to enable AI-powered automated factories. Leading European robotics companies, including Agile Robots, Neura Robotics, and Universal Robots, are building their latest innovations with the Omniverse platform.
Turning to a quick summary of our revenue by geography. China declined on a sequential basis to a low-single-digit percentage of data center revenue. Note, our Q3 outlook does not include H20 shipments to China customers.
Singapore revenue represented 22% of second quarter's billed revenue as customers have centralized their invoicing in Singapore. Over 99% of data center compute revenue billed to Singapore was for US-based customers.
Our gaming revenue was a record $4.3 billion, a 14% sequential increase and a 49% jump year on year. This was driven by the ramp of Blackwell GeForce GPUs, with strong sales continuing as we increased supply availability.
This quarter, we shipped the GeForce RTX 5060 desktop GPU. It brings double the performance along with advanced ray tracing, neural rendering, and AI-powered DLSS 4 gameplay to millions of gamers worldwide.
Blackwell is coming to GeForce NOW in September. This is GeForce NOW's most significant upgrade, offering RTX 5080-class performance, minimal latency, and 5K resolution at 120 frames per second. We are also doubling the GeForce NOW catalog to over 4,500 titles, the largest library of any cloud gaming service.
For AI enthusiasts, on-device AI performs best on RTX GPUs. We partnered with OpenAI to optimize their open-source GPT models for high-quality, fast, and efficient inference on millions of RTX-enabled Windows devices. With the RTX platform stack, Windows developers can create AI applications designed to run on the world's largest AI PC user base.
Professional visualization revenue reached $601 million, a 32% year-on-year increase. Growth was driven by adoption of high-end RTX workstation GPUs and AI-powered workloads like design, simulation, and prototyping. Key customers are leveraging our solutions to accelerate their operations: Activision Blizzard uses RTX workstations to enhance creative workflows, while robotics innovator Figure AI powers its humanoid robots with RTX embedded GPUs.
Automotive revenue, which includes only in-car compute revenue, was $586 million, up 69% year on year, primarily driven by self-driving solutions. We have begun shipments of the NVIDIA Thor SoC, the successor to Orin. Thor's arrival coincides with the industry's accelerating shift to vision language model architectures, generative AI, and higher levels of autonomy. Thor is the most successful robotics and AV computer we've ever created.
Our full stack Drive AV software platform is now in production, opening up billions in new revenue opportunities for NVIDIA while improving vehicle safety and autonomy.
Now, moving to the rest of our P&L. GAAP gross margin was 72.4% and a non-GAAP gross margin was 72.7%. These figures include a $180 million or 40-basis-point benefit from releasing previously reserved H20 inventory. Excluding this benefit, non-GAAP gross margins would have been 72.3%, still exceeding our outlook.
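The 40-basis-point figure can be verified against the reported numbers:

```python
def basis_point_impact(benefit: float, revenue: float) -> float:
    """Margin impact of a one-time cost-of-goods benefit, in basis points."""
    return benefit / revenue * 10_000

bp = basis_point_impact(benefit=0.18e9, revenue=46.7e9)  # roughly 40 bp
margin_ex_benefit = 72.7 - bp / 100                      # roughly 72.3% non-GAAP
```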
GAAP operating expenses rose 8% sequentially, and 6% on a non-GAAP basis. This increase was driven by higher compute and infrastructure costs as well as higher compensation and benefit costs. To support the ramp of Blackwell and Blackwell Ultra, inventory increased sequentially from $11 billion to $15 billion in Q2.
While we prioritize funding our growth and strategic initiatives, in Q2, we returned $10 billion to shareholders through share repurchases and cash dividends. Our Board of Directors recently approved a $60 billion share repurchase authorization to add to our remaining $14.7 billion of authorization at the end of Q2.
Okay. Let me turn to the outlook for the third quarter. Total revenue is expected to be $54 billion, plus or minus 2%. This represents over $7 billion in sequential growth. Again, we do not assume any H20 shipments to China customers in our outlook. GAAP and non-GAAP gross margins are expected to be 73.3% and 73.5%, respectively, plus or minus 50 basis points. We continue to expect to exit the year with non-GAAP gross margins in the mid-70s. GAAP and non-GAAP operating expenses are expected to be approximately $5.9 billion and $4.2 billion, respectively.
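The guidance arithmetic works out as follows:

```python
q2_revenue = 46.7   # $ billions, reported
q3_midpoint = 54.0  # $ billions, guided midpoint, plus or minus 2%

sequential_growth = q3_midpoint - q2_revenue  # "over $7 billion"
guidance_low = q3_midpoint * 0.98
guidance_high = q3_midpoint * 1.02
```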
For the full year, we expect operating expenses to grow in the high-30s range year over year, up from our prior expectations of the mid-30s. We are accelerating investments in the business to address the magnitude of growth opportunities that lie ahead.
GAAP and non-GAAP other income and expenses are expected to be income of approximately $500 million, excluding gains and losses from non-marketable and publicly held equity securities. GAAP and non-GAAP tax rates are expected to be 16.5%, plus or minus 1%, excluding any discrete items. Further financial data are included in the CFO commentary and other information available on our website.
In closing, let me highlight upcoming events for the financial community. We will be at the Goldman Sachs Technology Conference on September 8 in San Francisco. Our annual NDR will commence the first part of October. GTC DC begins on October 27, with Jensen's keynote scheduled for the 28th. We look forward to seeing you at these events. Our earnings call to discuss the results of our third quarter of fiscal 2026 is scheduled for November 19.
We will now open the call for questions. Operator, would you please poll for questions?
Operator
(Operator Instructions) C.J. Muse, Cantor Fitzgerald.
C.J. Muse - Analyst
Good afternoon. Thanks for taking the question. I guess, with wafer-in to rack-out lead times of 12 months, you confirmed on the call today that Rubin is on track for the second half. And obviously, many of these investments are multiyear projects contingent on power, cooling, et cetera. I was hoping you could take a high-level view and speak to your vision for growth into 2026. And as part of that, any comment on the split between networking and data center compute would be very helpful. Thank you.
Jensen Huang - President, Chief Executive Officer, Director
Yeah. Thanks, CJ. At the highest level, the growth driver would be the evolution, the introduction, if you will, of reasoning agentic AI. Chatbots used to be one-shot: you give it a prompt, and it would generate the answer.
Now the AI does research; it thinks and makes a plan, and it might use tools. And so it's called long thinking, and the longer it thinks, oftentimes, the better the answers it produces. The amount of computation necessary for reasoning agentic AI models versus one-shot could be 100 times, 1,000 times, and potentially even more, given the amount of research, and basically reading and comprehension, that it goes off to do. And so the amount of computation required for agentic AI has grown tremendously.
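The one-shot versus agentic comparison can be illustrated with a toy token-budget model (all token counts below are invented for illustration, not NVIDIA measurements):

```python
def inference_tokens(prompt: int, generated: int, research_docs: int = 0,
                     doc_tokens: int = 0, reasoning_passes: int = 1) -> int:
    """Toy token budget: reading retrieved material and running multiple
    reasoning passes multiply the work done per query."""
    return (prompt + generated + research_docs * doc_tokens) * reasoning_passes

one_shot = inference_tokens(prompt=500, generated=500)
agentic = inference_tokens(prompt=500, generated=5_000,
                           research_docs=20, doc_tokens=4_000,
                           reasoning_passes=4)
ratio = agentic / one_shot  # hundreds of times more compute in this toy
```

Even with modest assumptions for document reading and re-reasoning, the per-query compute multiple quickly reaches the hundreds, consistent with the 100x to 1,000x range described above.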
And of course, the effectiveness has also grown tremendously. Because of agentic AI, the amount of hallucination has dropped significantly. The AI can now use tools and perform tasks, and the enterprise market has been opened up. As a result of agentic AI and vision language models, we are now seeing a breakthrough in physical AI and robotics, autonomous systems. So in the last year, AI has made tremendous progress, and agentic systems, reasoning systems, are completely revolutionary.
Now, we built the Blackwell NVLink 72 system, a rack-scale computing system, for this moment. We've been working on it for several years. This last year, we transitioned from NVLink 8, which is node-scale computing where each node is a computer, to NVLink 72, where each rack is a computer. That disaggregation of NVLink 72 into a rack-scale system was extremely hard to do, but the results are extraordinary. We're seeing orders-of-magnitude speedups, and therefore energy efficiency, and therefore cost-effectiveness of token generation, because of NVLink 72.
And so over the next couple of years, well, you asked about longer term: over the next five years, we're going to scale with Blackwell, with Rubin, and with follow-ons into effectively a $3 trillion to $4 trillion AI infrastructure opportunity.
Over the last couple of years, you have seen that CapEx at just the top four CSPs has doubled and grown to about $600 billion. So we're at the beginning of this build-out, and AI technology advances have really enabled AI to be adopted to solve problems across many different industries.
Operator
Vivek Arya, Bank of America Securities.
Vivek Arya - Analyst
Thanks for taking my questions. Colette, I just wanted to clarify the $2 billion to $5 billion in China: what needs to happen? And what is the sustainable pace of that China business as you get into Q4?
And then, Jensen, for you on the competitive landscape: several of your large customers already have or are planning many ASIC projects. I think one of your ASIC competitors, Broadcom, signaled that they could grow their AI business almost 55% to 60% next year. Is there any scenario in which you see the market moving more toward ASICs and away from NVIDIA GPUs? What are you hearing from your customers, and how are they managing this split between the use of merchant silicon and ASICs?
Colette Kress - Executive Vice President, Chief Financial Officer
Thanks, Vivek. So let me first answer your question regarding what it will take for the H20s to be shipped. There is interest in our H20s. There is the initial set of licenses that we received. And additionally, we do have supply that is ready, and that's why we communicated that we could potentially ship somewhere in the range of about $2 billion to $5 billion this quarter.
We're still waiting on several of the geopolitical issues going back and forth between the governments and the companies trying to determine their purchases and what they want to do. So it's still open at this time, and we're not exactly sure what the full amount will be this quarter. However, if more interest arrives and more licenses arrive, we can also still build additional H20s and ship more as well.
我們仍在等待政府和公司之間就一些地緣政治問題進行反覆討論,以確定他們的購買行為以及他們想要做什麼。所以目前它仍然開放,我們不確定本季的全部金額是多少。然而,如果有更多人有興趣,獲得更多許可證,我們仍然可以製造更多 H20 並運送更多。
Jensen Huang - President, Chief Executive Officer, Director
NVIDIA builds very different things than ASICs. So let's talk about ASICs first. A lot of projects are started and many start-up companies are created, but very few products go into production. And the reason for that is it's really hard.
Accelerated computing is unlike general-purpose computing. You don't just write software and compile it into a processor. Accelerated computing is a full-stack co-design problem. And AI factories in the last several years have become so much more complex because the scale of the problems has grown so significantly. It is really the ultimate, the most extreme computer science problem the world has ever seen.
And so the stack is complicated. The models are changing incredibly fast, from generative models based on autoregression, to generative models based on diffusion, to mixed models, to multimodality. The number of different models coming out that are either derivatives of transformers or evolutions of transformers is just daunting.
One of the advantages that we have is that NVIDIA is available in every cloud. We're available from every computer company. We're available from the cloud to on-prem to edge to robotics on the same programming model. And so it's sensible that every framework in the world supports NVIDIA.
When you're building a new model architecture, releasing it on NVIDIA is most sensible. And so the diversity of our platform, both in its ability to evolve into any architecture and in the fact that we're everywhere, is a real advantage.
And also, we accelerate the entire pipeline: everything from data processing to pretraining to post-training with reinforcement learning, all the way out to inference. And so when you build a data center with the NVIDIA platform in it, its utility is best. The lifetime usefulness is much, much longer.
And then I would just say that, in addition to all of that, it's just a really extremely complex systems problem now. People talk about the chip itself. There's one ASIC, the GPU, that many people talk about. But in order to build the Blackwell platform and the Rubin platform, we had to build CPUs that connect fast, extremely energy-efficient memory -- for the large KV caching necessary for agentic AI -- to the GPU, to a SuperNIC, to a scale-up switch we call NVLink, completely revolutionary, now in its fifth generation, to a scale-out switch, whether it's Quantum InfiniBand or Spectrum-X Ethernet, to now a scale-across switch, so that we could prepare for these AI super factories with multiple gigawatts of computing, all connected together.
We call that Spectrum-XGS; we just announced it at Hot Chips this week. And so the complexity of everything that we do is really quite extraordinary. It's just done at a really, really extreme scale now.
And then lastly, if I could just say one more thing: we're in every cloud for a good reason. Not only are we the most energy efficient, our perf per watt is the best of any computing platform. And in a world of power-limited data centers, perf per watt drives directly to revenues. And you've heard me say before that, in a lot of ways, the more you buy, the more you grow.
And because our perf per dollar, the performance per dollar, is so incredible, you also have extremely great margins. So the growth opportunity with NVIDIA's architecture and the gross margin opportunity with NVIDIA's architecture are absolutely the best. And so there are a lot of reasons why NVIDIA is chosen by every cloud and every startup and every computer company; we're really a holistic, full-stack solution for AI factories.
Operator
Ben Reitzes, Melius.
Ben Reitzes - Equity Analyst
Hey, thanks a lot. Jensen, I wanted to ask you about your $3 trillion to $4 trillion in data center infrastructure spend by the end of the decade. Previously, you talked about something in the $1 trillion range, which I believe was just for compute by 2028. If you take past comments, $3 trillion to $4 trillion would imply maybe $2 trillion plus in compute spend. I just wanted to know if that was right, and if that's what you're seeing by the end of the decade. And I'm wondering what you think your share of that will be. Your share right now of total infrastructure, compute-wise, is very high. So I wanted to see.
And also, are there any bottlenecks you're concerned about, like power, in getting to the $3 trillion to $4 trillion?
Jensen Huang - President, Chief Executive Officer, Director
As you know, the CapEx of just the top four hyperscalers has doubled in two years. As the AI revolution went into full steam and the AI race got underway, the CapEx spend doubled to $600 billion per year. There are five years between now and the end of the decade, and $600 billion only represents the top four hyperscalers. We still have the rest of the enterprise companies building on-prem. You have cloud service providers building around the world. The United States represents about 60% of the world's compute, and over time, you would think that artificial intelligence would reflect GDP scale and growth. And we'll be, of course, accelerating GDP growth.
And so our contribution to that is a large part of the AI infrastructure. Out of a gigawatt AI factory, which can cost, plus or minus 10%, let's say $50 billion to $60 billion, we represent about $35 billion, plus or minus, of that: $35 billion out of $50 billion per gigawatt data center. And of course, what you get for that is not a GPU.
I think people know we're famous for building the GPU and inventing the GPU. But as you know, over the last decade, we've really transitioned to become an AI infrastructure company. It takes six chips -- six different types of chips -- just to build a Rubin AI supercomputer. And just to scale that out to a gigawatt, you have hundreds of thousands of GPU compute nodes and a whole bunch of racks. And so we're really an AI infrastructure company. And we're hoping to continue to contribute to growing this industry, making AI more useful, and then, very importantly, driving the performance per watt, because, as you mentioned, the limiters will always likely be power limitations or building limitations.
And so we need to squeeze as much out of that factory as possible. NVIDIA's performance per unit of energy used drives the revenue growth of that factory. It translates directly: if you have a 100-megawatt factory, perf per 100 megawatts drives your revenues. It's tokens per 100 megawatts of factory.
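The revenue logic here can be sketched as a toy calculation. All figures below are hypothetical, illustrative stand-ins, not numbers from the call; the point is only that, at fixed power, revenue scales linearly with perf per watt.

```python
# Toy model of a power-limited AI factory: revenue = tokens generated x price.
# All figures here are hypothetical, not NVIDIA data.

SECONDS_PER_YEAR = 365 * 24 * 3600

def factory_revenue_per_year(power_mw: float,
                             tokens_per_sec_per_mw: float,
                             usd_per_million_tokens: float) -> float:
    """Annual revenue of a factory whose output is capped by its power budget."""
    tokens_per_year = power_mw * tokens_per_sec_per_mw * SECONDS_PER_YEAR
    return tokens_per_year / 1e6 * usd_per_million_tokens

# A 100 MW factory: doubling perf per watt doubles revenue at the same power.
base = factory_revenue_per_year(100, tokens_per_sec_per_mw=1e6, usd_per_million_tokens=2.0)
doubled = factory_revenue_per_year(100, tokens_per_sec_per_mw=2e6, usd_per_million_tokens=2.0)
print(doubled / base)  # → 2.0
```

Because power is the fixed input, every multiplicative gain in tokens per watt passes straight through to the top line, which is the sense in which "perf per watt drives directly to revenues."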
In our case, also, the performance per dollar spent is so high that your gross margins are also the best. But anyhow, these are the limiters going forward and $3 trillion to $4 trillion is fairly sensible for the next five years.
Operator
Joe Moore, Morgan Stanley.
Joseph Moore - Analyst
Great, thank you. Congratulations on reopening the China opportunity. Can you talk about the long-term prospects there? You've talked about, I think, half of the AI software world being there. How much can NVIDIA grow in that business? And how important is it that you ultimately get the Blackwell architecture licensed there?
Jensen Huang - President, Chief Executive Officer, Director
The China market, I've estimated, is about a $50 billion opportunity for us this year if we were able to address it with competitive products. And if it's $50 billion this year, you would expect it to grow, say, 50% per year, as the rest of the world's AI market is growing as well. It is the second-largest computing market in the world, and it is also the home of AI researchers.
About 50% of the world's AI researchers are in China. The vast majority of the leading open source models are created in China. And so it's fairly important, I think, for the American technology companies to be able to address that market. And open source, as you know, is created in one country, but it's used all over the world.
The open source models that have come out of China are really excellent. DeepSeek, of course, gained global notoriety. [Q1] is excellent. Kimi is excellent. There's a whole bunch of new models coming out. They're multimodal; they're great language models. And they've really fueled the adoption of AI in enterprises around the world, because enterprises want to build their own custom, proprietary software stacks. And so open source models are really important for enterprise.
It's really important for SaaS companies who would also like to build proprietary systems. It has been really incredible for robotics around the world. And so open source is really important, and it's important that American companies are able to address it. This is going to be a very large market.
We're talking to the administration about the importance of American companies being able to address the Chinese market. And as you know, H20 has been approved for companies that are not on the entities list, and many licenses have been approved. And so I think the opportunity for us to bring Blackwell to the China market is a real possibility. We just have to keep advocating how sensible and important it is for American tech companies to be able to lead and win the AI race and help make the American tech stack the global standard.
Operator
Aaron Rakers, Wells Fargo.
Aaron Rakers - Analyst
Yeah. Thank you for the question. I want to go back to the Spectrum-XGS announcement this week, and to the Ethernet product now pushing over $10 billion of annualized revenue. Jensen, what is the opportunity set that you see for Spectrum-XGS? Do we think about this as kind of the data center interconnect layer? Any thoughts on the sizing of this opportunity within that Ethernet portfolio?
Jensen Huang - President, Chief Executive Officer, Director
We now offer three networking technologies: one for scale-up, one for scale-out, and one for scale-across. Scale-up is so that we can build the largest possible virtual GPU, the virtual compute node. NVLink is revolutionary; NVLink 72 is what made it possible for Blackwell to deliver such an extraordinary generational jump over Hopper's NVLink 8 at a time when we have long-thinking models and agentic AI reasoning systems. NVLink basically amplifies the memory bandwidth, which is really critical for reasoning systems, and so NVLink 72 is fantastic.
We then scale out with networking, of which we have two kinds. We have InfiniBand, which is unquestionably the lowest-latency, lowest-jitter, best scale-out network. It does require more expertise in managing those networks, and for supercomputing and the leading model makers, Quantum InfiniBand is the unambiguous choice. If you were to benchmark AI factories, the ones with InfiniBand have the best performance.
For those who would like to use Ethernet, because their whole data center is built with Ethernet, we have a new type of Ethernet called Spectrum Ethernet. Spectrum Ethernet is not off the shelf. It has a whole bunch of new technologies designed for low latency, low jitter, and congestion control. And it has the ability to come much, much closer to InfiniBand than anything else out there. We call that Spectrum-X Ethernet.
And then finally, we have Spectrum-XGS, giga-scale, for connecting multiple data centers, multiple AI factories, into a super factory, a gigantic system. And you're going to see that networking obviously is very important in AI factories.
In fact, choosing the right networking and getting the throughput improvement -- going from 65% to 85% or 90% effective utilization, that kind of step-up from your networking capability -- effectively makes networking free. Choosing the right networking, you'll get a return on it like you can't believe, because an AI factory at a gigawatt, as I mentioned before, could be $50 billion. And so the ability to improve the efficiency of that factory by tens of percent results in $10 billion, $20 billion worth of effective benefit. And so the networking is a very important part of it.
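The arithmetic behind "networking is effectively free" can be made concrete. The figures below simply restate the ballpark numbers from the remarks (a $50 billion gigawatt factory, utilization stepping from 65% to 90%); they are illustrative, not actual data.

```python
# Rough arithmetic behind "choosing the right networking makes networking free":
# value the step-up in effective utilization against the factory's cost.

factory_cost_usd = 50e9    # ballpark cost of a gigawatt AI factory, per the remarks
util_commodity = 0.65      # effective utilization with ordinary networking
util_purpose_built = 0.90  # with a fabric purpose-built for AI workloads

# Extra compute effectively unlocked, valued at what the factory cost to build.
effective_benefit = factory_cost_usd * (util_purpose_built - util_commodity)
print(f"${effective_benefit / 1e9:.1f}B of effective benefit")  # → $12.5B of effective benefit
```

A 25-point utilization step-up on a $50 billion facility lands squarely in the "$10 billion, $20 billion worth of effective benefit" range cited above, which is why the fabric can pay for itself.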
It's the reason why NVIDIA dedicates so much to networking. That's the reason why we purchased Mellanox 5.5 years ago. And Spectrum-X, as we mentioned earlier, is now quite a sizable business, and it's only about 1.5 years old. So Spectrum-X is a home run. All three of them are going to be fantastic: NVLink for scale-up, Spectrum-X and InfiniBand for scale-out, and then Spectrum-XGS for scale-across.
Operator
Stacy Rasgon, Bernstein Research.
Stacy Rasgon - Analyst
Hi, guys. Thanks for taking the question. I have a more tactical question for Colette. So on the guidance, we're up over $7 billion, and the vast bulk of that is going to be from data center. How do I think about apportioning that $7 billion across Blackwell versus Hopper versus networking? I mean, it looks like Blackwell was probably $27 billion in the quarter, up from maybe $23 billion last quarter. Hopper is still $6 billion or $7 billion post the H20. Do you think the Hopper strength continues? Just how do I think about parsing that $7 billion out across those components?
Colette Kress - Executive Vice President, Chief Financial Officer
Thanks, Stacy, for the question. On the first part of it, looking at our growth between Q2 and Q3, Blackwell is still going to be the lion's share of what we have in terms of data center. But keep in mind, that helps both our compute side and our networking side, because we are selling those significant systems that incorporate the NVLink that Jensen just spoke about.
Hopper, we are still selling. H100s, H200s -- again, they are HGX systems -- and we still believe our Blackwell will be the lion's share of what we're doing there. So we'll continue.
We don't have any more specific details in terms of how we'll finish our quarter, but you should expect Blackwell again to be the driver of the growth.
Operator
Jim Schneider, Goldman Sachs.
James Schneider, Ph.D. - Analyst
Good afternoon. Thank you for taking my question. You've been very clear about the reasoning model opportunity that you see, and you've also been relatively clear about the technical specs for Rubin, but maybe you could provide a little bit of context about how you view the Rubin product transition going forward? What incremental capability does it offer to customers? And would you say that Rubin is a bigger, smaller, or similar step up in performance, from a capability perspective, relative to what we saw with Blackwell?
Jensen Huang - President, Chief Executive Officer, Director
Thanks. Rubin -- we're on an annual cycle. And the reason why we're on an annual cycle is that doing so lets us accelerate the cost reduction and maximize the revenue generation for our customers.
When we increase the perf per watt, the token generation per unit of energy used, we are effectively driving the revenues of our customers. The perf per watt of Blackwell will be, for reasoning systems, an order of magnitude higher than Hopper's. And so for the same amount of energy -- and everybody's data center is, by definition, energy-limited -- for any data center using Blackwell, you'll be able to maximize your revenues compared to anything we've done in the past and anything in the world today. And because the performance per dollar is so good, the perf per dollar invested in the capital would also allow you to improve your gross margins.
To the extent that we have great ideas for every single generation, we could improve the revenue generation, improve the AI capability, improve the margins of our customers by releasing new architectures. And so we advise our partners, our customers to pace themselves and to build these data centers on an annual rhythm. And Rubin is going to have a whole bunch of new ideas.
I will pause for a second, because I've got plenty of time between now and a year from now to tell you about all the breakthroughs that Rubin is going to bring. But Rubin has a lot of great ideas. I'm anxious to tell you, but I can't right now. I'll save it for GTC, where I'll tell you more and more about it.
But nonetheless, for the next year, we're ramping really hard now into Grace Blackwell, GB200, and Blackwell Ultra, GB300; we're ramping really hard into data centers. This year is obviously a record-breaking year, and I expect next year to be a record-breaking year as well, as we continue to increase the performance of AI capabilities, racing towards artificial superintelligence on the one hand, and continue to increase the revenue-generation capabilities of our hyperscalers on the other.
Operator
Timothy Arcuri, UBS.
Timothy Arcuri - Analyst
Thanks a lot. Jensen, I wanted to follow up on the question you just answered. You threw out a number: you said a 50% CAGR for the AI market. So I'm wondering how much visibility you have into next year. Is that a reasonable bogey in terms of how much your data center revenue should grow next year? I would think you'd grow at least in line with that CAGR. And are there any puts and takes to that?
Jensen Huang - President, Chief Executive Officer, Director
Well, I think the best way to look at it is that we have reasonable forecasts from our large customers for next year -- a very, very significant forecast. And we still have a lot of business that we're still winning and a lot of start-ups that are still being created. Don't forget that AI start-up funding was $100 billion last year. This year, and the year is not even over yet, it's $180 billion funded.
If you look at the AI natives, the top AI-native start-ups generated $2 billion in revenues last year. This year, it's $20 billion, and next year being 10x higher than this year is not inconceivable. And the open source models are now opening up large enterprises, SaaS companies, industrial companies, and robotics companies to join the AI revolution, another source of growth. And whether it's AI natives or enterprise SaaS or industrial AI or start-ups, we're just seeing an enormous amount of interest in AI and demand for AI.
Right now, the buzz is -- I'm sure all of you know about the buzz out there -- the buzz is that everything is sold out. H100s are sold out. H200s are sold out. Large CSPs are coming out renting capacity from other CSPs. And so the AI-native start-ups are really scrambling to get capacity so that they can train their reasoning models. And so the demand is really, really high.
But on the long-term outlook from where we are today: CapEx has doubled in two years. It is now running at about $600 billion a year just in the large hyperscalers. For us to grow into that $600 billion a year, representing a significant part of that CapEx, isn't unreasonable. And so I think over the next several years, surely through the decade, we see really fast-growing, really significant growth opportunities ahead.
Let me conclude with this. Blackwell is the next-generation AI platform the world has been waiting for. It delivers an exceptional generational leap. NVIDIA's NVLink 72, rack scale computing is revolutionary, arriving just in time as reasoning AI models drive order of magnitude increases in training and inference performance requirement. Blackwell Ultra is ramping at full speed and the demand is extraordinary.
Our next platform, Rubin, is already in fab. We have six new chips that represent the Rubin platform. They have all taped out at TSMC. Rubin will be our third-generation NVLink rack-scale AI supercomputer, and so we expect to have a much more mature and fully scaled-up supply chain.
The Blackwell and Rubin AI factory platforms will be scaling into the $3 trillion to $4 trillion global AI factory build-out through the end of the decade. Customers are building ever-greater-scale AI factories, from thousands of Hopper GPUs in tens-of-megawatts data centers to now hundreds of thousands of Blackwells in 100-megawatt facilities. And soon, we'll be building millions of Rubin GPU platforms, powering multi-gigawatt, multisite AI super factories.
With each generation, demand only grows. One-shot chatbots have evolved into reasoning, agentic AI that researches, plans, and uses tools, driving orders-of-magnitude jumps in compute for both training and inference. Agentic AI is reaching maturity and has opened the enterprise market to build domain- and company-specific AI agents for enterprise workflows, products, and services.
The age of physical AI has arrived, unlocking entirely new industries in robotics and industrial automation. Every industrial company will need to build two factories: one to build the machines, and another to build their robotic AI.
This quarter, NVIDIA reached record revenues and an extraordinary milestone in our journey. The opportunity ahead is immense. A new industrial revolution has started. The AI race is on.
Thanks for joining us today, and I look forward to addressing you on our next earnings call. Thank you.
Operator
This concludes today's conference call. You may now disconnect.