NVIDIA reported record revenue for the fourth quarter of fiscal 2025, with notable growth in data center revenue and demand for its Blackwell architecture. The company expects continued growth in the first quarter, driven by new product launches. Management discussed the growing importance of AI models and the challenges of configuring data centers for different tasks.
NVIDIA is preparing to launch Blackwell Ultra and Vera Rubin, with discussion of custom ASICs versus merchant GPUs. The speakers highlighted the mainstream adoption of AI across industries and AI's potential to transform software and services. Enterprise business within the data center segment is growing rapidly, with attention on the growth potential of hyperscalers and enterprise AI.
NVIDIA's versatile architecture and efficient performance set it apart from competitors. The speakers expressed optimism about the future development and impact of AI technology.
Operator
Good afternoon.
My name is Krista, and I'll be your conference operator today.
At this time, I would like to welcome everyone to NVIDIA's fourth-quarter earnings call.
(Operator Instructions)
Thank you.
Stewart Stecker, you may begin your conference.
Stewart Stecker - Investor Relations
Thank you.
Good afternoon, everyone, and welcome to NVIDIA's conference call for the fourth quarter of fiscal 2025.
With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer.
I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website.
The webcast will be available for replay until the conference call to discuss our financial results for the first quarter of fiscal 2026.
The content of today's call is NVIDIA's property.
It can't be reproduced or transcribed without prior written consent.
During this call, we may make forward-looking statements based on current expectations.
These are subject to a number of significant risks and uncertainties, and our actual results may differ materially.
For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent forms 10-K and 10-Q and the reports that we may file on Form 8-K with the Securities and Exchange Commission.
All our statements are made as of today, February 26, 2025, based on information currently available to us.
Except as required by law, we assume no obligation to update any such statements.
During this call, we will discuss non-GAAP financial measures.
You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website.
With that, let me turn the call over to Colette.
Colette Kress - Chief Financial Officer, Executive Vice President
Thanks, Stewart.
Q4 was another record quarter.
Revenue of $39.3 billion was up 12% sequentially and up 78% year on year, and above our outlook of $37.5 billion.
For fiscal 2025, revenue was $130.5 billion, up 114% from the prior year.
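As a quick cross-check of those growth rates (a sketch; the quoted percentages are rounded, so the implied prior-period figures are approximate):

```python
# Implied prior-period revenue from the quoted Q4 FY2025 figures.
q4_revenue = 39.3   # $B, reported Q4 FY2025 revenue
seq_growth = 0.12   # up 12% sequentially
yoy_growth = 0.78   # up 78% year on year

implied_q3 = q4_revenue / (1 + seq_growth)        # prior quarter (Q3 FY2025)
implied_q4_prior = q4_revenue / (1 + yoy_growth)  # year-ago quarter (Q4 FY2024)

print(round(implied_q3, 1), round(implied_q4_prior, 1))  # 35.1 22.1
```

Both implied figures are consistent with the roughly $35.1 billion and $22.1 billion NVIDIA reported in those periods.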
Let's start with data center.
Data center revenue for fiscal 2025 was $115.2 billion, more than doubling from the prior year.
In the fourth quarter, data center revenue of $35.6 billion was a record, up 16% sequentially and 93% year on year, as the Blackwell ramp commenced and the H200 continued its sequential growth.
In Q4, Blackwell sales exceeded our expectations.
We delivered $11 billion of Blackwell revenue to meet strong demand.
This is the fastest product ramp in our company's history, unprecedented in its speed and scale.
Blackwell production is in full gear across multiple configurations, and we are increasing supply quickly, expanding customer adoption.
Our Q4 data center compute revenue jumped 18% sequentially and over 2x year on year.
Customers are racing to scale infrastructure to train the next generation of cutting-edge models and unlock the next level of AI capabilities.
With Blackwell, it will be common for these clusters to start with 100,000 GPUs or more.
Shipments have already started for multiple infrastructures of this size.
Post-training and model customization are fueling demand for NVIDIA infrastructure and software as developers and enterprises leverage techniques such as fine-tuning, reinforcement learning, and distillation to tailor models for domain-specific use cases.
Hugging Face alone hosts over 90,000 derivatives created from the Llama foundation model.
The scale of post-training and model customization is massive and can collectively demand orders of magnitude more compute than pre-training.
Our inference demand is accelerating, driven by test-time scaling and new reasoning models like OpenAI's o3, DeepSeek-R1, and Grok 3.
Long-thinking reasoning AI can require 100x more compute per task compared to one-shot inference.
Blackwell was architected for reasoning AI inference.
Blackwell supercharges reasoning AI models with up to 25x higher token throughput and 20x lower cost versus the H100.
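A back-of-the-envelope sketch of how those multipliers compose (purely illustrative; the 100x, 25x, and 20x figures are the ones quoted above, and the normalization to a one-shot query is an assumption):

```python
# Illustrative composition of the multipliers quoted on the call.
reasoning_compute_multiplier = 100  # long-thinking task vs. one-shot inference
blackwell_throughput_gain = 25      # token throughput vs. Hopper (per the call)
blackwell_cost_reduction = 20       # cost per token vs. Hopper (per the call)

# Relative to serving a one-shot query on Hopper (normalized to 1.0):
reasoning_time_on_blackwell = reasoning_compute_multiplier / blackwell_throughput_gain
reasoning_cost_on_blackwell = reasoning_compute_multiplier / blackwell_cost_reduction

print(reasoning_time_on_blackwell)  # 4.0
print(reasoning_cost_on_blackwell)  # 5.0
```

Under these quoted figures, a 100x-compute reasoning task served on Blackwell would cost roughly 5x a one-shot Hopper query rather than 100x, which is the economic argument being made for reasoning inference on the new architecture.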
It is revolutionary.
The Transformer Engine is built for LLM and mixture-of-experts inference.
And its NVLink domain delivers 14x the throughput of PCIe Gen 5, ensuring the response time, throughput, and cost efficiency needed to tackle the growing complexity of inference at scale.
Companies across industries are tapping into NVIDIA's full-stack inference platform to boost performance and slash costs. One customer tripled inference throughput and cut costs by 66% using NVIDIA TensorRT for its screenshot feature.
Perplexity sees 435 million monthly queries and reduced its inference costs 3x with NVIDIA Triton Inference Server and TensorRT-LLM.
Microsoft Bing achieved a 5x speedup and major TCO savings for visual search across billions of images with NVIDIA TensorRT and acceleration libraries.
Blackwell is in great demand for inference.
Many of the early GB200 deployments are earmarked for inference, a first for a new architecture.
Blackwell addresses the entire AI market, from pre-training and post-training to inference, across cloud, on-premise, and enterprise.
CUDA's programmable architecture accelerates every AI model and over 4,400 applications, protecting large infrastructure investments against obsolescence in rapidly evolving markets.
Our performance and pace of innovation is unmatched.
We've driven a 200x reduction in inference cost in just the last two years.
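A one-line sanity check on that claim (assuming the reduction compounded evenly across the two years, which is an illustrative simplification):

```python
# Implied per-year cost-reduction factor if a 200x reduction
# compounded evenly over 2 years.
total_reduction = 200  # "200x reduction in inference cost" (per the call)
years = 2

annual_factor = total_reduction ** (1 / years)
print(round(annual_factor, 1))  # 14.1
```

Even spread evenly, the claim implies inference cost falling by roughly 14x per year.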
We delivered the lowest TCO and the highest ROI, and full stack optimizations for NVIDIA and our large ecosystem, including 5.9 million developers, continuously improve our customers' economics.
In Q4, large CSPs represented about half of our data center revenue.
And these sales increased nearly 2x year on year.
Large CSPs were some of the first to stand up Blackwell, with Azure, GCP, AWS, and OCI bringing GB200 systems to cloud regions around the world to meet surging customer demand for AI.
Regional cloud hosting of NVIDIA GPUs increased as a percentage of data center revenue, reflecting continued AI factory buildouts globally and rapidly rising demand for AI reasoning models and agents.
We've launched a 100,000 GB200 cluster-based instance with NVLink Switch and Quantum-2 InfiniBand.
Consumer Internet revenue grew 3x year on year, driven by an expanding set of generative AI and deep learning use cases.
These include recommender systems, vision-language understanding, synthetic data generation, search, and agentic AI.
For example, xAI is adopting the GB200 to train and inference its next generation of Grok AI models.
Meta's cutting-edge Andromeda advertising engine runs on NVIDIA's Grace Hopper Superchip, serving vast quantities of ads across Instagram and Facebook applications.
Andromeda harnesses Grace Hopper's fast interconnect and large memory to boost inference throughput by 3x, enhance ad personalization, and deliver meaningful jumps in monetization and ROI.
Enterprise revenue increased nearly 2x year on year on accelerating demand for model fine-tuning, RAG, and agentic AI workflows, and GPU-accelerated data processing.
We introduced NVIDIA Llama Nemotron model family NIMs to help developers create and deploy AI agents across a range of applications, including customer support, fraud detection, and product supply chain and inventory management.
Leading AI agent platform providers, including SAP and ServiceNow, are among the first to use the new models.
Healthcare leaders IQVIA, Illumina, and Mayo Clinic, as well as the Arc Institute, are using NVIDIA AI to speed drug discovery, enhance genomic research, and pioneer advanced healthcare services with generative and agentic AI.
As AI expands beyond the digital world, NVIDIA infrastructure and software platforms are increasingly being adopted to power robotics and physical AI development.
One of the earliest and largest robotics applications is autonomous vehicles, where virtually every AV company is developing on NVIDIA in the data center, the car, or both.
NVIDIA's automotive vertical revenue is expected to grow to approximately $5 billion this fiscal year.
At CES, Hyundai Motor Group announced it is adopting NVIDIA's technologies to accelerate AV and robotics development and smart factory initiatives.
Vision transformers, self-supervised learning, multimodal sensor fusion, and high-fidelity simulation are driving breakthroughs in AV development and will require 10x more compute.
At CES, we announced the NVIDIA Cosmos world foundation model platform.
Just as language foundation models have revolutionized language AI, Cosmos is a physical AI platform built to revolutionize robotics.
Leading robotics and automotive companies, including ride-sharing giant Uber, are among the first to adopt the platform.
From a geographic perspective, sequential growth in our data center revenue was strongest in the US, driven by the initial ramp of Blackwell.
Countries across the globe are building their AI ecosystems, and demand for compute infrastructure is surging.
France's EUR200 billion AI investment and the EU's EUR200 billion InvestAI initiative offer a glimpse into the buildout set to redefine global AI infrastructure in the coming years.
Now, as a percentage of total data center revenue, data center sales in China remained well below levels seen at the onset of export controls.
Absent any change in regulations, we believe that China shipments will remain roughly at the current percentage.
The market in China for data center solutions remains very competitive.
We will continue to comply with export controls while serving our customers.
Networking revenue declined 3% sequentially.
Our networking attach rate to GPU compute systems is robust at over 75%.
We are transitioning from small NVLink 8 systems with InfiniBand to large NVLink 72 systems with Spectrum-X.
Spectrum-X and NVLink Switch revenue increased and represents a major new growth sector.
We expect networking to return to growth in Q1.
AI requires a new class of networking.
NVIDIA offers NVLink Switch systems for scale-up compute.
For scale-out, we offer Quantum InfiniBand for HPC supercomputers and Spectrum-X for Ethernet environments.
Spectrum-X enhances Ethernet for AI computing and has been a huge success.
Microsoft Azure, OCI, CoreWeave, and others are building large AI factories with Spectrum-X.
The first Stargate data centers will use Spectrum-X.
Yesterday, Cisco announced it is integrating Spectrum-X into its networking portfolio to help enterprises build AI infrastructure.
With its large enterprise footprint and global reach, Cisco will bring NVIDIA Ethernet to every industry.
Now moving to gaming and AI PCs.
Gaming revenue of $2.5 billion decreased 22% sequentially and 11% year on year.
Full-year revenue of $11.4 billion increased 9% year on year, and demand remained strong throughout the holiday.
However, Q4 shipments were impacted by supply constraints.
We expect strong sequential growth in Q1 as the supply increases.
The new GeForce RTX 50 Series desktop and laptop GPUs are here. Built for gamers, creators, and developers, they fuse AI and graphics, redefining visual computing.
Powered by the Blackwell architecture, Fifth-Generation Tensor Cores, and Fourth-Generation RT Cores, and featuring up to 3,400 AI TOPS, these GPUs deliver a 2x performance leap and new AI-driven rendering, including neural shaders, digital human technologies, geometry, and lighting.
The new DLSS 4 boosts frame rates up to 8x with AI-driven frame generation, turning one rendered frame into three.
It also features the industry's first real-time application of transformer models, packing 2x more parameters and 4x the compute for unprecedented visual fidelity.
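A minimal sketch of the frame-rate arithmetic implied above (the 30 fps base rate and the 2x super-resolution speedup are assumptions for illustration; the three generated frames per rendered frame is the figure quoted above):

```python
# DLSS 4 multi-frame-generation arithmetic (illustrative).
rendered_fps = 30           # assumed natively rendered frame rate
generated_per_rendered = 3  # generated frames per rendered frame (per the call)
upscale_speedup = 2         # assumed extra speedup from AI super resolution

# Each rendered frame is displayed alongside its generated frames.
displayed_fps = rendered_fps * (1 + generated_per_rendered)
effective_gain = (1 + generated_per_rendered) * upscale_speedup

print(displayed_fps)   # 120
print(effective_gain)  # 8
```

One rendered frame plus three generated frames yields a 4x frame-rate multiplier; an assumed additional 2x from rendering at lower resolution and upscaling brings the combined gain to the "up to 8x" cited.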
We also announced a wave of GeForce Blackwell laptop GPUs with new NVIDIA Max-Q technology that extends battery life by up to an incredible 40%.
These laptops will be available starting in March from the world's top manufacturers.
Moving to our professional visualization business, revenue of $511 million was up 5% sequentially and 10% year on year.
Full-year revenue of $1.9 billion increased 21% year on year.
Key industry verticals driving demand include automotive and healthcare.
NVIDIA technologies and generative AI are reshaping design, engineering, and simulation workloads.
Increasingly, these technologies are being leveraged in leading software platforms from ANSYS, Cadence, and Siemens, fueling demand for NVIDIA RTX workstations.
Now moving to automotive.
Revenue was a record $570 million, up 27% sequentially and up 103% year on year.
Full-year revenue of $1.7 billion increased 55% year on year.
Strong growth was driven by the continued ramp in autonomous vehicles, including cars and robotaxis.
At CES, we announced Toyota, the world's largest automaker, will build its next-generation vehicles on NVIDIA Orin, running the safety-certified NVIDIA DRIVE OS.
We announced Aurora and Continental will deploy driverless trucks at scale, powered by NVIDIA DRIVE Thor.
Finally, our end-to-end autonomous vehicle platform, NVIDIA DRIVE Hyperion, has passed industry safety assessments by TUV SUD and TUV Rheinland, two of the industry's foremost authorities for automotive-grade safety and cybersecurity.
NVIDIA is the first AV platform to receive a comprehensive set of third-party assessments.
Okay, moving to the rest of the P&L.
GAAP gross margin was 73%, and non-GAAP gross margin was 73.5%, down sequentially as expected with our first deliveries of the Blackwell architecture.
As discussed last quarter, Blackwell is a customizable AI infrastructure with several different types of NVIDIA-built chips, multiple networking options, and options for air- and liquid-cooled data centers.
We exceeded our expectations in Q4 in ramping Blackwell, increasing system availability, and providing several configurations to our customers.
As Blackwell ramps, we expect gross margins to be in the low 70s.
We -- initially, we are focused on expediting the manufacturing of Blackwell systems to meet strong customer demand as customers race to build out Blackwell infrastructure.
When fully ramped, we have many opportunities to improve the cost and gross margin.
We expect gross margins to improve and return to the mid-70s late this fiscal year.
Sequentially, GAAP operating expenses were up 9% and non-GAAP operating expenses were up 11%, reflecting higher engineering development costs and higher compute and infrastructure costs for new product introductions.
In Q4, we returned $8.1 billion to shareholders in the form of share repurchases and cash dividends.
Let me turn to the outlook for the first quarter.
Total revenue is expected to be $43 billion, plus or minus 2%.
Continuing with its strong demand, we expect a significant ramp of Blackwell in Q1.
We expect sequential growth in both data center and gaming.
Within data center, we expect sequential growth from both compute and networking.
GAAP and non-GAAP gross margins are expected to be 70.6% and 71%, respectively, plus or minus 50 basis points.
GAAP and non-GAAP operating expenses are expected to be approximately $5.2 billion and $3.6 billion, respectively.
We expect full-year fiscal 2026 operating expense growth to be in the mid-30s percent range.
GAAP and non-GAAP other income and expense is expected to be an income of approximately $400 million, excluding gains and losses from non-marketable and publicly held equity securities.
GAAP and non-GAAP tax rates are expected to be 17%, plus or minus 1%, excluding any discrete items.
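The guided ranges above imply the following rough dollar figures (a sketch using the stated midpoints; actual results may differ):

```python
# Implied Q1 FY2026 figures from the guidance quoted above.
revenue_mid = 43.0   # $B, guided revenue midpoint
revenue_band = 0.02  # plus or minus 2%
non_gaap_gm = 0.71   # guided non-GAAP gross margin midpoint

rev_low = revenue_mid * (1 - revenue_band)
rev_high = revenue_mid * (1 + revenue_band)
implied_gross_profit = revenue_mid * non_gaap_gm  # non-GAAP, at both midpoints

print(round(rev_low, 2), round(rev_high, 2), round(implied_gross_profit, 2))
# 42.14 43.86 30.53
```

So the guidance brackets revenue at roughly $42.1 billion to $43.9 billion, with implied non-GAAP gross profit of about $30.5 billion at the midpoints.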
Further financial details are included in the CFO commentary and other information available on our IR website, including a new financial information AI agent.
In closing, let me highlight upcoming events for the financial community.
We will be at the TD Cowen Health Care Conference in Boston on March 3 and at the Morgan Stanley Technology, Media & Telecom Conference in San Francisco on March 5.
Please join us for our annual GTC conference starting Monday, March 17, in San Jose, California.
Jensen will deliver a news-packed keynote on March 18, and we will host a Q&A session for our financial analysts the next day, March 19.
We look forward to seeing you at these events.
Our earnings call to discuss the results for our first quarter of fiscal 2026 is scheduled for May 28, 2025.
Operator, we are going to open up the call to questions.
If you could start that, that would be great.
Operator
(Operator Instructions) CJ Muse, Cantor Fitzgerald.
CJ Muse - Analyst
I guess for me, Jensen: as test-time compute and reinforcement learning show such promise, we're clearly seeing an increasing blurring of the lines between training and inference. What does this mean for the future of potentially inference-dedicated clusters?
And how do you think about the overall impact to NVIDIA and your customers?
Thank you.
Jensen Huang - Founder, President, Chief Executive Officer
Yeah, I appreciate that, CJ.
There are now multiple scaling laws.
There's the pre-training scaling law, and that's going to continue to scale, because we have multimodality, and we have data that came from reasoning that is now used to do pre-training.
And then the second is the post-training scaling law, using reinforcement learning with human feedback, reinforcement learning with AI feedback, and reinforcement learning with verifiable rewards.
The amount of computation you use for post-training is actually higher than pre-training.
And it's kind of sensible in the sense that, while you're using reinforcement learning, you could generate an enormous amount of synthetic data or synthetically generated tokens.
AI models are basically generating tokens to train AI models.
And that's post-training.
And the third part, this is the part that you mentioned, is test-time compute or reasoning, long thinking, inference scaling; they're all basically the same ideas.
And there you have chain of thought, you have search.
The amount of tokens generated, the amount of inference compute needed, is already 100 times more than the one-shot examples and the one-shot capabilities of large language models in the beginning.
And that's just the beginning.
The idea that the next generation could require thousands of times more compute, and even, hopefully, extremely thoughtful, simulation-based and search-based models that could require hundreds of thousands or millions of times more compute than today, is in our future.
And so, the question is, how do you design such an architecture?
Some of it -- some of the models are autoregressive.
Some of the models are diffusion-based.
Some of it -- some of the time, you want your data center to have disaggregated inference.
Sometimes it is compacted.
And so, it's hard to figure out what the best configuration of a data center is, which is the reason why NVIDIA's architecture is so popular.
We run every model.
We are great at training.
The vast majority of our compute today is actually inference, and Blackwell takes all of that to a new level.
We designed Blackwell with the idea of reasoning models in mind.
And when you look at training, it's many times more performant.
But what's really amazing is for long thinking, test-time scaling, reasoning AI models, it is tens of times faster, with 25 times higher throughput.
And so, Blackwell is going to be incredible across the board.
And when you have a data center that allows you to configure and use your data center based on whether you are doing more pre-training now, post-training now, or scaling out your inference, our architecture is fungible and easy to use in all of those different ways.
And so, we're seeing, in fact, much, much more concentration of a unified architecture than ever before.
Operator
Joe Moore, JPMorgan.
Joe Moore - Analyst
Morgan Stanley, actually.
Thank you.
I wonder if you could talk about GB200. At CES, you sort of talked about the complexity of the rack-level systems and the challenges you have.
And then, as you said in the prepared remarks, we've seen a lot of general availability -- where are you in terms of that ramp?
Are there still bottlenecks to consider at a systems level, above and beyond the chip level?
And have you maintained your enthusiasm for the NVL72 platforms?
Jensen Huang - Founder, President, Chief Executive Officer
Well, I'm more enthusiastic today than I was at CES.
And the reason for that is because we've shipped a lot more since CES.
We have some 350 plants manufacturing the 1.5 million components that go into each one of the Blackwell racks, the Grace Blackwell racks.
Yes, it's extremely complicated.
And we successfully and incredibly ramped up Grace Blackwell, delivering some $11 billion of revenue last quarter.
We're going to have to continue to scale, as demand is quite high, and customers are anxious and impatient to get their Blackwell systems.
You've probably seen on the web a fair number of celebrations about Grace Blackwell systems coming online, and we have them, of course.
We have a fairly large installation of Grace Blackwell systems for our own engineering, design, and software teams.
CoreWeave has now been quite public about the successful bring-up of theirs.
Microsoft has, of course; OpenAI has; and you're starting to see many come online.
And so, I think the answer to your question is that nothing is easy about what we're doing, but we're doing great, and all of our partners are doing great.
Operator
Vivek Arya, Bank of America Securities.
Vivek Arya - Analyst
Colette, if you wouldn't mind confirming whether Q1 is the bottom for gross margins?
And then, Jensen, my question is for you.
What is on your dashboard to give you the confidence that the strong demand can sustain into next year?
And has DeepSeek, and whatever innovations they came up with, changed that view in any way?
Colette Kress - Chief Financial Officer, Executive Vice President
Let me first take the first part of the question, regarding gross margin.
During our Blackwell ramp, our gross margins will be in the low 70s.
And at this point, we are focusing on expediting our manufacturing to make sure that we can provide systems to customers as soon as possible.
Once our Blackwell ramp is fully complete, we can improve our cost and our gross margin.
So we expect to probably be in the mid-70s later this year.
Walking through what you heard Jensen speak about regarding the systems and their complexity: they are customizable in some cases.
They've got multiple networking options.
They have air-cooled and liquid-cooled options.
So we know there is an opportunity for us to improve these gross margins going forward.
But right now, we are going to focus on getting the manufacturing complete and to our customers as soon as possible.
Jensen Huang - Founder, President, Chief Executive Officer
We know several things that we have a fairly good line of sight of the amount of capital investment that data centers are building out towards.
我們知道一些事情,我們對資料中心建設的資本投資金額有相當好的了解。
We know that going forward, the vast majority of software is going to be based on machine learning.
我們知道,未來絕大多數軟體將基於機器學習。
And so accelerated computing and generative AI, reasoning AI are going to be the type of architecture you want in your data center.
因此,加速運算、產生人工智慧和推理人工智慧將成為您在資料中心所希望的架構類型。
We have, of course, forecast and plans from our top partners.
當然,我們有來自頂級合作夥伴的預測和計劃。
And we also know that there are many innovative, really exciting start-ups that are still coming online as new opportunities for developing the next breakthroughs in AI, whether it's agentic AIs, reasoning AI or physical AIs.
我們也知道,有許多創新的、真正令人興奮的新創公司正在不斷湧現,為開發人工智慧的下一個突破帶來新的機會,無論是代理人工智慧、推理人工智慧還是物理人工智慧。
Start-ups remain quite vibrant, and each one of them needs a fair amount of computing infrastructure.
新創企業仍然非常活躍,每一家都需要相當數量的運算基礎設施。
And so, I think the -- whether it's the near-term signals or the mid-term signals -- near-term signals, of course, are POs and forecasts and things like that.
所以,我認為——無論是近期訊號還是中期訊號——近期訊號當然是 PO 和預測之類的東西。
Midterm signals would be the level of infrastructure and CapEx scale-out compared to previous years.
中期訊號將是與前幾年相比基礎設施和資本支出的擴大水準。
And then the long-term signals has to do with the fact that we know fundamentally software has changed from hand coding that runs on CPUs, to machine learning and AI-based software that runs on GPUs and accelerated computing systems.
然後,長期訊號與這樣一個事實有關:我們知道,軟體從根本上已經從在 CPU 上運行的手動編碼轉變為在 GPU 和加速計算系統上運行的機器學習和基於 AI 的軟體。
And so, we have a fairly good sense that this is the future of software.
因此,我們非常清楚這就是軟體的未來。
And then maybe as you roll it out, another way to think about that is we've really only tapped consumer AI and search and some amount of consumer generative AI, advertising, recommenders, kind of the early days of software.
然後也許當你推出它時,另一種思考方式是,我們實際上只利用了消費者人工智慧和搜尋以及一定數量的消費者生成人工智慧、廣告、推薦器,這些都屬於軟體的早期階段。
The next wave is coming, agentic AI for enterprise, physical AI for robotics, and sovereign AI as different regions build out their AI for their own ecosystems.
下一波浪潮即將到來,企業將迎來代理人工智慧,機器人將迎來物理人工智慧,而不同地區將為自己的生態系統建構人工智慧,從而迎來主權人工智慧。
And so, each one of these are barely off the ground, and we can see them.
所以,它們每一個都剛離開地面,我們就能看到它們。
We can see them because, obviously, we're in the center of much of this development and we can see great activity happening in all these different places and these will happen.
我們可以看到它們,因為顯然我們正處於這個發展過程的中心,我們可以看到在所有這些不同的地方正在發生偉大的活動,而這些將會發生。
So near term, midterm, long term.
因此,短期、中期、長期都是如此。
Operator
Operator
Harlan Sur, JPMorgan.
摩根大通的 Harlan Sur。
Harlan Sur - Analyst
Harlan Sur - Analyst
Your next-generation Blackwell Ultra is set to launch in the second half of this year, in line with the team's annual product cadence.
您的下一代 Blackwell Ultra 預計將於今年下半年推出,與團隊的年度產品節奏一致。
Jensen, can you help us understand the demand dynamics for Ultra given that you'll still be ramping the current generation Blackwell solutions?
詹森,鑑於您仍將擴大當前一代 Blackwell 解決方案的規模,您能否幫助我們了解 Ultra 的需求動態?
How do your customers and the supply chain also manage the simultaneous ramps of these two products?
您的客戶和供應鏈如何管理這兩種產品的同時產量提升?
And is the team still on track to execute Blackwell Ultra in the second half of this year?
今年下半年,車隊是否仍將按計畫實施 Blackwell Ultra?
Jensen Huang - Founder, President, Chief Executive Officer
Jensen Huang - Founder, President, Chief Executive Officer
Yes.
是的。
Blackwell Ultra is second half.
布萊克威爾超級聯賽是下半場。
As you know, the first Blackwell was -- and we had a hiccup that probably cost us a couple of months.
如您所知,第一台 Blackwell 出現了小問題,可能因此耽誤了我們幾個月的時間。
We're fully recovered, of course.
當然,我們已經完全康復了。
The team did an amazing job recovering and all of our supply chain partners and just so many people helped us recover at the speed of light.
團隊的復原工作非常出色,我們所有的供應鏈合作夥伴和許多人都幫助我們以光速恢復。
And so now we've successfully ramped production of Blackwell.
現在我們已經成功提高了 Blackwell 的產量。
But that doesn't stop the next train.
但這並不能阻止下一班火車的運行。
The next train is on an annual rhythm and Blackwell Ultra with new networking, new memories, and of course, new processors, and all of that is coming online.
下一班列車將按照年度節奏運行,Blackwell Ultra 將配備新的網路、新的內存,當然還有新的處理器,所有這些都將上線。
We have been working with all of our partners and customers, laying this out.
我們一直在與所有合作夥伴和客戶合作制定這項計劃。
They have all of the necessary information, and we'll work with everybody to do the proper transition.
他們掌握了所有必要的信息,我們將與大家合作,實現適當的過渡。
This time between Blackwell and Blackwell Ultra, the system architecture is exactly the same.
這次Blackwell與Blackwell Ultra的系統架構完全相同。
It's a lot harder going from Hopper to Blackwell because we went from an NVLink 8 system to an NVLink 72-based system.
從 Hopper 轉到 Blackwell 要困難得多,因為我們從 NVLink 8 系統轉移到了基於 NVLink 72 的系統。
So the chassis, the architecture of the system, the hardware, the power delivery, all of that had to change.
因此,底盤、系統架構、硬體、電力傳輸等都必須改變。
This was quite a challenging transition.
這是一個相當具有挑戰性的轉變。
But the next transition will slot right in.
但下一次轉變將會順利進行。
Grace Blackwell Ultra will slot right in.
Grace Blackwell Ultra 將立即加入。
And we've also already revealed and been working very closely with all of our partners on the click after that.
我們也已經透露了這項消息,並將繼續與所有合作夥伴就此展開密切合作。
And the click after that is called Vera Rubin and all of our partners are getting up to speed on the transition of that and so preparing for that transition.
之後的點擊被稱為 Vera Rubin,我們所有的合作夥伴都在加快這一轉變,並為這一轉變做準備。
And again, we're going to provide a big, huge step-up.
再次,我們將提供巨大的進步。
And so come to GTC, and I'll talk to you about Blackwell Ultra, Vera Rubin, and then show you what's the one click after that.
那麼來到 GTC,我將和你們討論 Blackwell Ultra、Vera Rubin,然後向你們展示之後的「一鍵點擊」是什麼。
Really exciting new products, so to come to GTC.
確實令人興奮的新產品,所以來參加GTC。
Operator
Operator
Timothy Arcuri, UBS.
瑞銀的提摩西·阿庫裡。
Timothy Arcuri - Analyst
Timothy Arcuri - Analyst
Jensen, we heard a lot about custom ASICs.
詹森,我們聽到了很多關於客製化 ASIC 的消息。
Can you speak to the balance between custom ASICs and merchant GPUs?
您能談談客製化 ASIC 和商用 GPU 之間的平衡嗎?
We hear about some of these heterogeneous superclusters to use both GPU and ASIC?
我們聽過一些異質超級叢集同時使用 GPU 和 ASIC?
Is that something customers are planning on building?
這是客戶計劃建造的東西嗎?
Or will these infrastructures remain fairly distinct?
或者這些基礎設施將保持相當獨特的狀態?
Thanks.
謝謝。
Jensen Huang - Founder, President, Chief Executive Officer
Jensen Huang - Founder, President, Chief Executive Officer
Well, we built very different things than ASICs, in some ways, completely different in some areas we intercept.
嗯,我們製造的東西與 ASIC 非常不同,在某些方面,在我們攔截的某些領域完全不同。
We're different in several ways.
我們在很多方面都存在著不同。
One, NVIDIA's architecture is general, whether you've optimized for autoregressive models or diffusion-based models or vision-based models or multimodal models or text models.
首先,無論您是針對自迴歸模型、擴散模型、視覺模型、多模態模型還是文字模型進行最佳化,NVIDIA 的架構都是通用的。
We're great in all of it.
我們在所有方面都表現出色。
We're great on all of it because our architecture is flexible and our software stack -- our ecosystem -- is so rich that we're the initial target of the most exciting innovations and algorithms.
我們在所有方面都表現出色,因為我們的架構非常靈活,我們的軟體堆疊和生態系統非常豐富,因此我們是大多數令人興奮的創新和演算法的首選目標。
And so, by definition, we're much, much more general than narrow.
所以,根據定義,我們的普遍性遠大於狹隘性。
We're also really good end-to-end, from data processing and the curation of the training data, to training, to the reinforcement learning used in post-training, all the way to inference with test-time scaling.
我們在端到端方面也非常出色:從資料處理、訓練資料的整理,到訓練本身,再到後訓練中使用的強化學習,一直到具備測試時間擴展的推理。
So we're general, we're end-to-end, and we're everywhere.
所以我們是通用的、端到端的、而且無所不在。
And because we're not in just one cloud, we're in every cloud, we could be on-prem, we could be in a robot, our architecture is much more accessible and a great target -- initial target for anybody who's starting up a new company.
而且因為我們不只存在於一朵雲中,而是存在於每一朵雲中,我們可以在本地,我們可以在機器人中,我們的架構更容易訪問,也是一個很好的目標——對於任何創辦新公司的人來說,這都是最初的目標。
And so, we're everywhere.
所以,我們無所不在。
And the third thing I would say is that our performance in our rhythm is so incredibly fast.
我想說的第三件事是我們的節奏表現得非常快。
Remember that these data centers are always fixed in size.
請記住,這些資料中心的大小始終是固定的。
They're fixed in size or they're fixed in power.
它們的大小是固定的,或者功率是固定的。
And if our performance per watt is anywhere from 2x to 4x to 8x, which is not unusual, it translates directly to revenues.
如果我們的每瓦效能提高 2 倍、4 倍或 8 倍(這並不罕見),那麼這將直接轉化為收入。
And so if you have a 100-megawatt data center, and the performance or the throughput in that 100-megawatt or gigawatt data center is 4 times or 8 times higher, your revenues for that data center are 4 or 8 times higher.
因此,如果您擁有一個 100 兆瓦的資料中心,而該 100 兆瓦或千兆瓦資料中心的效能或吞吐量高出 4 倍或 8 倍,那麼該資料中心的收入也會高出 4 倍或 8 倍。
And the reason that is so different than data centers of the past is because AI factories are directly monetizable through its tokens generated.
這與過去的資料中心如此不同的原因在於,人工智慧工廠可以透過其產生的代幣直接貨幣化。
And so, the token throughput of our architecture being so incredibly fast is just incredibly valuable to all of the companies that are building these things for revenue generation reasons and capturing the fast ROIs.
因此,我們架構的令牌吞吐量如此之快,對於所有為了創造收入和快速獲取投資回報而建立這些東西的公司來說,都是非常有價值的。
And so, I think the third reason is performance.
所以,我認為第三個原因是性能。
And then the last thing that I would say is the software stack is incredibly hard.
我最後要說的是軟體堆疊非常難。
Building an ASIC is no different than what we do.
建構 ASIC 與我們所做的沒有什麼不同。
We build a new architecture.
我們建立了一個新的架構。
And the ecosystem that sits on top of our architecture is 10 times more complex today than it was two years ago.
如今,我們架構上的生態系統比兩年前複雜了 10 倍。
And that's fairly obvious because the amount of software that the world is building on top of architecture is growing exponentially and AI is advancing very quickly.
這是相當明顯的,因為世界在架構之上構建的軟體數量呈指數級增長,而且人工智慧正在快速發展。
So bringing that whole ecosystem on top of multiple chips is hard.
因此,將整個生態系統置於多個晶片之上非常困難。
And so, I would say that those four reasons, and then finally, I will say this, just because the chip is designed doesn't mean it gets deployed.
所以,我想說的是這四個原因,最後,我想說的是,僅僅因為晶片的設計並不意味著它會被部署。
And you've seen this over and over again.
你已經一次又一次地看到過這種情況。
There are a lot of chips that gets built, but when the time comes, a business decision has to be made, and that business decision is about deploying a new engine, a new processor into a limited AI factory in size, in power, and in time.
有很多晶片可以被製造出來,但是當時間到來時,必須做出一個商業決策,而這個商業決策就是將一個新引擎、一個新的處理器部署到規模、功率和時間都有限的人工智慧工廠中。
And our technology is not only more advanced, more performant, it has much, much better software capability and very importantly, our ability to deploy is lightning fast.
我們的技術不僅更先進、效能更佳,而且軟體功能也更加強大,而且非常重要的是,我們的部署能力極快。
And so this stuff is not for the faint of heart, as everybody knows now.
所以,正如現在每個人都知道的那樣,這些事情不適合膽小的人。
And so, there's a lot of different reasons why we do well, why we win.
所以,我們之所以表現優異、取得勝利有很多不同的原因。
Operator
Operator
Ben Reitzes, Melius Research.
Ben Reitzes,Melius Research。
Ben Reitzes - Analyst
Ben Reitzes - Analyst
Hey Jensen, it's a geography-related question. You did a great job explaining some of the underlying demand factors behind the strength.
嘿 Jensen,這是一個與地理有關的問題。您很好地解釋了這次強勁表現背後的一些潛在需求因素。
But US was up about $5 billion or so sequentially.
但美國季增了約 50 億美元左右。
And I think there is a concern about whether US can pick up the slack if there's regulations towards other geographies.
我認為,人們擔心如果對其他地區也實施監管,美國是否能夠彌補這一不足。
And I was just wondering, as we go throughout the year, if this kind of surge in the US continues and it's going to be -- whether that's okay?
我只是想知道,隨著時間的流逝,如果美國的這種激增趨勢繼續下去,這是否會沒問題?
And if that underlies your growth rate, how can you keep growing so fast with this mix shift towards the US?
如果這是你們成長率的基礎,那麼在業務結構轉移到美國的情況下,你們如何能夠維持如此快速的成長?
Your guidance looks like China is probably up sequentially.
您的指導看起來中國可能會持續上漲。
So just wondering if you could go through that dynamic and maybe Colette can weigh in.
所以只是想知道你是否可以經歷這種動態,也許 Colette 可以參與其中。
Thanks a lot.
多謝。
Jensen Huang - Founder, President, Chief Executive Officer
Jensen Huang - Founder, President, Chief Executive Officer
China is approximately the same percentage as Q4 and as previous quarters.
中國的比例與第四季及之前幾季大致相同。
It's about half of what it was before the export control.
這大約是出口管制之前的一半。
But it's approximately the same in percentage.
但百分比大致相同。
With respect to geographies, the takeaway is that AI is software.
就地理位置而言,重點是人工智慧是軟體。
It's modern software.
這是現代軟體。
It's incredible modern software, but it's modern software, and AI has gone mainstream.
這是令人難以置信的現代軟體,但它是現代軟體,而且人工智慧已經成為主流。
AI is used in delivery services everywhere, shopping services everywhere.
人工智慧應用於各地的送貨服務和各地的購物服務。
If you were to buy a quart of milk, it's delivered to you.
如果你買一夸脫牛奶,我們就會送貨上門。
AI was involved.
人工智慧也參與其中。
And so, almost every consumer service has AI at the core of it.
因此,幾乎每一項消費者服務的核心都是人工智慧。
Every student will use AI as a tutor; healthcare services use AI; financial services use AI.
每個學生都會有AI作為導師;醫療保健服務使用人工智慧;金融服務使用人工智慧。
No fintech company will not use AI.
沒有一家金融科技公司不會使用人工智慧。
Every fintech company will.
每家金融科技公司都會這麼做。
Climate tech companies use AI.
氣候科技公司使用人工智慧。
Mineral discovery now uses AI.
礦產發現現在使用人工智慧。
Every higher-education institution, every university uses AI.
每所高等教育機構、每所大學都使用人工智慧。
And so, I think it is fairly safe to say that AI has gone mainstream and that it's being integrated into every application.
因此,我認為可以肯定地說,人工智慧已經成為主流,並且正在融入每個應用程式中。
And our hope is that, of course, the technology continues to advance safely and advance in a helpful way to our society.
當然,我們的希望是科技能夠繼續安全地進步,並以有益於社會的方式進步。
And with that, we're -- I do believe that we're at the beginning of this new transition.
有了這些,我相信我們正處於這新轉變的開始。
And what I mean by that in the beginning is, remember, behind us has been decades of data centers and decades of computers that have been built.
我一開始說的意思是,記住,我們已經建立了幾十年的資料中心和幾十年的電腦。
And they've been built for a world of hand coding and general-purpose computing and CPUs and so on and so forth.
它們是為手工編碼、通用計算、CPU 等等的世界而構建的。
And going forward, I think it's fairly safe to say that world is going to be almost all software to be infused with AI.
展望未來,我認為可以肯定地說,世界上幾乎所有的軟體都將融入人工智慧。
All software and all services will be based on -- ultimately, based on machine learning and the data flywheel is going to be part of improving software and services and that the future computers will be accelerated, the future computers will be based on AI.
所有軟體和服務最終都將基於機器學習,數據飛輪將成為改進軟體和服務的一部分,未來的電腦將加速發展,未來的電腦將基於人工智慧。
And we're really two years into that journey and in modernizing computers that have taken decades to build out.
我們實際上已經踏上了這段旅程的兩年,並且對花了幾十年才建成的電腦進行了現代化改造。
And so, I'm fairly sure that we're in the beginning of this new era.
因此,我相當確信我們正處於這個新時代的開始。
And then lastly, no technology has ever had the opportunity to address a larger part of the world's GDP than AI.
最後,沒有任何技術有機會比人工智慧解決世界 GDP 更大一部分問題。
No software tool ever has.
沒有任何軟體工具曾經實現過這一點。
And so, this is now a software tool that can address a much larger part of the world's GDP more than any time in history.
因此,現在這是一個可以解決世界 GDP 更大一部分問題的軟體工具,比歷史上任何時候都更重要。
And so the way we think about growth and the way we think about whether something is big or small, has to be in the context of that.
因此,我們思考成長的方式以及我們思考某件事是大還是小的方式都必須放在這樣的背景下。
And when you take a step back and look at it from that perspective, we're really just in the beginnings.
當你退一步從這個角度來看時,我們會發現我們才剛開始。
Operator
Operator
Aaron Rakers, Wells Fargo.
富國銀行的 Aaron Rakers。
(Operator Instructions) Mark Lipacis, Evercore ISI.
(操作員指示)Mark Lipacis,Evercore ISI。
Mark Lipacis - Analyst
Mark Lipacis - Analyst
I had a clarification and a question.
我有一個澄清和疑問。
Colette, first for the clarification.
Colette,先請您澄清一下。
Did you say that enterprise within the data center grew 2x year-on-year for the January quarter?
您是否說過,1 月這一季資料中心內的企業業務年成長了 2 倍?
And if so, would that make it faster growing than the hyperscalers?
如果是這樣,這是否會使其成長速度超過超大規模企業?
And then, Jensen, for you, the question, hyperscalers are the biggest purchasers of your solutions, but they buy equipment for both internal and external workloads, external workloads being cloud services that enterprise is used.
然後,Jensen,對你來說,問題是,超大規模企業是你的解決方案的最大購買者,但他們購買設備用於內部和外部工作負載,外部工作負載是企業使用的雲端服務。
So the question is, can you give us a sense of how that hyperscaler spend splits between that external workload and internal?
所以問題是,您能否讓我們了解一下超大規模伺服器在外部工作負載和內部工作負載之間的支出分配情況?
And as these new AI workflows and applications come up, would you expect enterprises to become a larger part of that consumption mix?
隨著這些新的 AI 工作流程和應用程式的出現,您是否預期企業將成為消費組合中更大的一部分?
And does that impact how you develop your service, your ecosystem.
這會影響您開發服務和生態系統的方式嗎?
Colette Kress - Chief Financial Officer, Executive Vice President
Colette Kress - Chief Financial Officer, Executive Vice President
Sure.
當然。
Thanks for the question regarding our Enterprise business.
感謝您提出有關我們企業業務的問題。
Yes, it grew 2x and very similar to what we were seeing with our large CSPs.
是的,它增長了 2 倍,與我們在大型 CSP 中看到的情況非常相似。
Keep in mind, these are both important areas to understand.
請記住,這些都是需要理解的重要領域。
Working with the CSPs can be working on large language models, can be working on inference in their own work.
與 CSP 合作可以研究大型語言模型,也可以在自己的工作中進行推理。
But keep in mind, that is also where the enterprises are surfacing.
但請記住,這也是企業出現的地方。
Your enterprises are both with your CSPs as well as in terms of building on their own.
您的企業既要與 CSP 合作,又要自行建造。
They're both growing quite well.
它們都長得很好。
Jensen Huang - Founder, President, Chief Executive Officer
Jensen Huang - Founder, President, Chief Executive Officer
The CSPs are about half of our business.
CSP 約占我們業務的一半。
And the CSPs have internal consumption and external consumption, as you say.
正如您所說,CSP 有內部消耗和外部消耗。
And we're using -- of course, used for internal consumption.
當然,我們正在使用,用於內部消費。
We work very closely with all of them to optimize workloads that are internal to them, because they have a large installed base of NVIDIA gear that they can take advantage of.
我們與他們所有人密切合作,優化其內部工作負載,因為他們擁有大量可以利用的 NVIDIA 設備。
And the fact that we could be used for AI on the one hand, video processing on the other hand, data processing like Spark, we're fungible.
事實上,一方面我們可以用於人工智慧,另一方面可以用於視訊處理和 Spark 等資料處理,我們是可互換的。
And so, the useful life of our infrastructure is much better.
因此,我們基礎設施的使用壽命會更長。
If the useful life is much longer, then the TCO is also lower.
如果使用壽命更長,那麼 TCO 也會更低。
And so, the second part is how do we see the growth of enterprise or not CSPs, if you will, going forward?
那麼第二部分是,如果您願意的話,我們如何看待未來企業或非 CSP 的成長?
And the answer is, I believe, long term, it is by far larger and the reason for that is because if you look at the computer industry today and what is not served by the computer industry is largely industrial.
答案是,我相信,從長遠來看,它的規模要大得多,原因在於,如果你看看今天的電腦產業,你會發現電腦產業所沒有服務的大部分是工業領域。
So let me give you an example.
讓我給你舉個例子。
When we say enterprise -- and let's use the car company as an example because they make both soft things and hard things.
當我們說到企業時——讓我們以汽車公司為例,因為他們既生產軟體產品,也生產硬體產品。
And so, in the case of a car company, the employees will be what we call enterprise and agentic AI and software planning systems and tools, and we have some really exciting things to share with you guys at GTC, those agentic systems are for employees, to make employees more productive, to design, to market plan, to operate their company.
因此,對於一家汽車公司來說,員工就是我們所說的企業和代理人工智慧以及軟體規劃系統和工具,我們在 GTC 上有一些非常令人興奮的事情要與大家分享,這些代理系統是為員工服務的,旨在提高員工的生產力,設計、行銷計劃和運營他們的公司。
That's agentic AI.
這就是代理人工智慧。
On the other hand, the cars that they manufacture also need AI.
另一方面,他們生產的汽車也需要人工智慧。
They need an AI system that trains the cars, treats this entire giant fleet of cars.
他們需要一個可以訓練汽車、處理整個龐大車隊的人工智慧系統。
And today, there are some billion cars on the road.
如今,道路上行駛的汽車約有十億輛。
Someday, every single one of those cars will be robotic cars, and they'll all be collecting data, and we'll be improving them using an AI factory; whereas they have a factory today, in the future, they'll have a car factory and an AI factory.
總有一天,這些汽車每一輛都將是機器人汽車,它們都會收集數據,我們將使用 AI 工廠來改進它們;他們今天擁有一座工廠,未來他們將同時擁有一座汽車工廠和一座 AI 工廠。
And then inside the car itself is a robotic system.
汽車內部本身就有一個機器人系統。
And so, as you can see, there are three computers involved.
正如您所看到的,有三台電腦參與其中。
And there's the computer that helps the people.
還有幫助人們的計算機。
There's the computer that builds the AI for the machinery, which could be, of course, a tractor; it could be a lawn mower.
還有為機器建構 AI 的電腦,這些機器當然可能是拖拉機,也可能是割草機。
It could be a humanoid robot that's being developed today.
它可能是當今正在開發的人形機器人。
It could be a building; it could be a warehouse.
它可能是一棟建築物;它可能是一個倉庫。
These physical systems require new type of AI we call physical AI.
這些實體系統需要我們稱為實體人工智慧的新型人工智慧。
They can't just understand the meaning of words and languages, but they have to understand the meaning of the world, friction and inertia, object permanence, its cause and effect.
他們不能只理解字詞和語言的意義,還必須理解世界的意義、摩擦和慣性、物體永久性及其因果關係。
And all of those type of things that are common sense to you and I, but AIs have to go learn those physical effects.
所有這些事情對你我來說都是常識,但人工智慧必須去學習這些物理效應。
So we call that physical AI.
所以我們稱之為物理 AI。
That whole part of using agentic AI to revolutionize the way we work inside companies, that's just starting.
使用代理人工智慧徹底改變公司內部工作方式的整個過程才剛開始。
This is now the beginning of the agentic AI era, and you hear a lot of people talking about it, and we've got some really great things going on.
現在正是代理式人工智慧時代的開始,你會聽到很多人談論它,而且我們也取得了一些非常棒的進展。
And then there's the physical AI after that, and then there are robotic systems after that.
之後是實體人工智慧,之後是機器人系統。
And so, these three computers are all brand new.
所以,這三台電腦都是全新的。
And my sense is that long term, this will be by far the larger of them all, which kind of makes sense.
我的感覺是,從長遠來看,這將是迄今為止規模最大的一個,這也是有道理的。
The world's GDP is represented largely by heavy industries or industrials and the companies that provide for those.
世界 GDP 主要由重工業或工業,以及為這些產業提供服務的公司所構成。
Operator
Operator
Aaron Rakers, Wells Fargo.
富國銀行的 Aaron Rakers。
Aaron Rakers - Analyst
Aaron Rakers - Analyst
Jensen, I'm curious as we now approach the two-year anniversary of really the Hopper inflection that you saw in 2023 in Gen AI in general.
詹森,我很好奇,因為我們現在即將迎來霍珀拐點的兩週年紀念日,你在 2023 年看到的 Gen AI 總體情況就是如此。
And when we think about the road map you have in front of us, how do you think about the infrastructure that's been deployed from a replacement cycle perspective?
當我們考慮您面前的路線圖時,您如何從更換週期的角度看待已經部署的基礎架構?
And whether if it's GB300 or if it's the Rubin cycle where we start to see maybe some refresh opportunity?
無論是 GB300 還是魯賓週期,我們都會開始看到一些更新的機會嗎?
I'm just curious how you look at that.
我只是好奇您是如何看待這一點的。
Jensen Huang - Founder, President, Chief Executive Officer
Jensen Huang - Founder, President, Chief Executive Officer
I appreciate it.
我很感激。
First of all, people are still using Voltas and Pascals and Amperes.
首先,人們仍在使用 Volta、Pascal 和 Ampere 世代的 GPU。
And the reason for that is because there are always things that -- because CUDA is so programmable, you could use it right -- well, one of the major use cases right now is data processing and data curation.
原因在於總是有一些事情 — — 因為 CUDA 具有很強的可程式性,所以您可以正確使用它 — — 目前的主要用例之一就是資料處理和資料管理。
You find a circumstance that an AI model is not very good at.
你發現 AI 模型不太擅長處理某種情況。
You present that circumstance to a vision language model, let's say, it's a car.
你將該情況呈現給視覺語言模型,假設它是一輛汽車。
You present that circumstance to a vision language model.
您將該情況呈現給視覺語言模型。
The vision language model actually looks in the circumstances and, says, this is what happened, and I wasn't very good at it.
視覺語言模型實際上會觀察當時的情況,然後說,這就是發生的事情,而我在這方面並不是很擅長。
You then take that response to the prompt, and you go and prompt an AI model to go find in your whole lake of data other circumstances like that, whatever that circumstance was.
然後,你會根據提示做出回應,並提示 AI 模型在整個資料湖中尋找類似的其他情況,無論情況是什麼。
And then you use an AI to do domain randomization and generate a whole bunch of other examples.
然後您使用 AI 進行域隨機化並產生一大堆其他範例。
And then from that, you can go train the model.
然後從那裡,你可以去訓練模型。
And so, you could use Amperes to go do data processing and data curation and machine learning-based search.
因此,您可以使用 Ampere 來進行資料處理、資料整理以及基於機器學習的搜尋。
And then you create the training data set, which you then present to your Hopper systems for training.
然後建立訓練資料集,並將其提供給 Hopper 系統進行訓練。
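The curation loop walked through above (spot a failure case, have a vision-language model describe it, mine the data lake for similar scenes, domain-randomize to synthesize more examples, then assemble a training set) can be sketched in plain Python. Every function below is a hypothetical stand-in, not a real NVIDIA API; the substring match stands in for embedding-based search, and the string suffixes stand in for synthetic variants.

```python
from typing import List, Optional

def describe_failure(vlm: Optional[object], scene: str) -> str:
    # Stand-in for a VLM call that explains what happened in a hard scene.
    return f"description of: {scene}"

def search_data_lake(lake: List[str], description: str) -> List[str]:
    # Stand-in for embedding search over fleet data; here, a naive keyword filter.
    return [s for s in lake if "rain" in s]

def domain_randomize(examples: List[str], n_variants: int) -> List[str]:
    # Stand-in for generating synthetic variants (lighting, weather, viewpoint).
    return [f"{e}#variant{i}" for e in examples for i in range(n_variants)]

def build_training_set(vlm: Optional[object], lake: List[str],
                       hard_scene: str, n_variants: int = 3) -> List[str]:
    desc = describe_failure(vlm, hard_scene)          # 1. explain the failure
    similar = search_data_lake(lake, desc)            # 2. mine similar cases
    synthetic = domain_randomize(similar, n_variants) # 3. augment them
    return similar + synthetic                        # 4. hand off to training

lake = ["sunny highway", "rain at night", "rain on bridge"]
dataset = build_training_set(None, lake, "rain at night")
assert len(dataset) == 8  # 2 retrieved scenes + 2*3 synthetic variants
```

In the scenario described, steps 1-3 could run on the older installed base (the Amperes), while step 4's output is what gets presented to the Hopper systems for training.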
And so, each one of these architectures is completely -- they're all CUDA-compatible, and so everything runs on everything.
因此,這些架構每一個都完全 CUDA 相容,所以一切軟體都可以在一切架構上運行。
But if you have infrastructure in place, then you can put the less intensive workloads onto the installed base of the past.
但如果您已經擁有基礎設施,那麼您可以將不太密集的工作負載放到過去的安裝基礎上。
All of our GPUs are very well employed.
我們所有的 GPU 都得到了很好的利用。
Operator
Operator
Atif Malik, Citi.
花旗銀行的阿蒂夫馬利克 (Atif Malik) 說:
Atif Malik - Analyst
Atif Malik - Analyst
I have a follow-up question on gross margins for Colette.
我有一個關於 Colette 毛利率的後續問題。
Colette, I understand there are many moving parts that Blackwell yields, NVLink 72 and Ethernet mix.
科萊特,我知道 Blackwell 生產了許多活動部件,NVLink 72 和乙太網路混合。
And you kind of tipped to the earlier question, the April quarter is the bottom, but second half would have to ramp like 200 basis points per quarter to get to the mid-70s range that you're giving for the end of the fiscal year.
您之前提到過,四月季度是最低點,但下半年每季必須增加 200 個基點,才能達到您給出的財年末 70 年代中期的範圍。
And we still don't know much about tariff's impact to broader semiconductor.
而且我們仍然不太了解關稅對更廣泛的半導體產業的影響。
So what kind of gives you the confidence in that trajectory in the back half of this year?
那麼是什麼讓您對今年下半年的發展軌跡充滿信心呢?
Colette Kress - Chief Financial Officer, Executive Vice President
Colette Kress - Chief Financial Officer, Executive Vice President
Yeah.
是的。
Thanks for the question.
謝謝你的提問。
Our gross margins are quite complex in terms of the materials and everything that we put together in a Blackwell system; there is a tremendous amount of opportunity to look at a lot of different pieces of that and how we can better improve our gross margins over time.
我們的毛利率相當複雜,涉及材料以及我們在 Blackwell 系統中組裝的所有東西;我們有大量機會檢視其中許多不同的部分,研究如何隨著時間的推移更好地提高毛利率。
Remember, we have many different configurations as well on Blackwell that will be able to help us do that.
請記住,Blackwell 上也有許多不同的配置可以幫助我們做到這一點。
So together, working after we get some of these really strong ramping completed for our customers, we can begin a lot of that work.
因此,在我們共同努力為客戶完成一些真正強大的提升之後,我們就可以開始更多的工作了。
If not, we're going to probably start as soon as possible if we can.
如果沒有的話,我們可能會盡快開始。
And if we can improve it in the short term, we will also do that.
如果我們能在短期內改善它,我們也會這樣做。
Tariff at this point, it's a little bit of an unknown.
目前而言,關稅還有點未知。
It's an unknown until we understand further what the US government's plan is, both its timing, it's where and how much.
在我們進一步了解美國政府的計劃(包括其時間、地點和規模)之前,這一切都仍然未知。
So at this time, we are awaiting, but again, we would, of course, always follow export controls and/or tariffs in that manner.
因此,目前我們正在等待,但是,我們當然會始終以這種方式遵守出口管制和/或關稅。
Operator
Operator
Ladies and gentlemen, that does conclude our question-and-answer session -- I'm sorry.
女士們、先生們,我們的問答環節到此結束——很抱歉。
Jensen Huang - Founder, President, Chief Executive Officer
Jensen Huang - Founder, President, Chief Executive Officer
Thank you.
謝謝。
No, no
不,不
--
--
Colette Kress - Chief Financial Officer, Executive Vice President
Colette Kress - Chief Financial Officer, Executive Vice President
We are going to open up to Jensen.
我們將把時間交給 Jensen。
I believe he has a couple of things.
我相信他有幾件事。
Jensen Huang - Founder, President, Chief Executive Officer
Jensen Huang - Founder, President, Chief Executive Officer
I just wanted to thank you.
我只是想感謝你。
Thank you, Colette.
謝謝你,科萊特。
The demand for Blackwell is extraordinary.
布萊克威爾的需求量非常大。
AI is evolving beyond perception and generative AI into reasoning.
人工智慧正從感知人工智慧和產生人工智慧向推理人工智慧演進。
With reasoning AI, we're observing another scaling law: inference-time or test-time scaling, meaning more computation.
透過推理 AI,我們觀察到另一個擴展定律:推理時間或測試時間擴展,也就是更多的計算。
The more the model thinks, the smarter the answer.
模型思考得越多,答案就越聰明。
Models like OpenAI, Grok 3, DeepSeek-R1 are reasoning models that apply inference time scaling.
OpenAI、Grok 3、DeepSeek-R1 等模型是應用推理時間縮放的推理模型。
Reasoning models can consume 100 times more compute.
推理模型可能消耗100倍以上的計算能力。
Future reasoning models can consume much more compute.
未來的推理模型可以消耗更多的計算。
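The 100x figure above follows from token counts: for a fixed model, decode compute grows roughly linearly with the number of tokens generated, so a chain-of-thought answer that emits 100 times more tokens costs roughly 100 times more compute. A back-of-envelope sketch, where the ~2 FLOPs per parameter per generated token rule of thumb and the token counts are illustrative assumptions, not figures from the call:

```python
def decode_flops(n_params: float, tokens_generated: int) -> float:
    # Rough rule of thumb: ~2 FLOPs per parameter per generated token.
    return 2.0 * n_params * tokens_generated

# Hypothetical 70B-parameter model: short direct answer vs. long chain of thought.
direct = decode_flops(70e9, tokens_generated=200)
reasoning = decode_flops(70e9, tokens_generated=20_000)
assert reasoning / direct == 100.0  # 100x more tokens -> ~100x more decode compute
```

The sketch ignores prompt processing and attention's growth with context length, so it understates long-context cost; the linear token term alone is enough to show why reasoning models multiply inference demand.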
DeepSeek-R1 has ignited global enthusiasm.
DeepSeek-R1點燃了全球愛好者的熱潮。
It's an excellent innovation.
這是一個出色的創新。
But even more importantly, it has open-sourced a world-class reasoning AI model.
但更重要的是,它開源了一個世界一流的推理 AI 模型。
Nearly every AI developer is applying R1 or chain of thought and reinforcement learning techniques like R1 to scale their model's performance.
幾乎每個 AI 開發人員都在應用 R1 或思路鍊和類似 R1 的強化學習技術來擴展其模型的效能。
We now have three scaling laws, as I mentioned earlier, driving the demand for AI computing.
正如我之前提到的,我們現在有三個擴展定律,推動著對人工智慧運算的需求。
The traditional scaling laws of AI remain intact.
AI 的傳統擴展定律仍然完好無損。
Foundation models are being enhanced with multimodality, and pretraining is still growing.
基礎模型正在透過多模態性得到增強,預訓練仍在不斷發展。
But it's no longer enough.
但這已經不夠了。
We have two additional scaling dimensions.
我們還有兩個額外的擴展維度。
Post-training scaling, where reinforcement learning, fine-tuning and model distillation require orders of magnitude more compute than pretraining alone.
訓練後擴展:強化學習、微調和模型蒸餾所需的計算量比單獨的預訓練多出幾個數量級。
Inference-time scaling and reasoning, where a single query can demand 100 times more compute.
推理時間擴展與推理:單一查詢可能需要多達 100 倍的計算量。
We designed Blackwell for this moment: a single platform that can easily transition among pre-training, post-training and test-time scaling.
我們正是為這一時刻設計了 Blackwell:一個可以在預訓練、後訓練和測試時間擴展之間輕鬆轉換的單一平台。
Blackwell's FP4 transformer engine, NVLink 72 scale-up fabric, and new software technologies let Blackwell process reasoning AI models 25x faster than Hopper.
Blackwell 的 FP4 Transformer 引擎、NVLink 72 縱向擴展結構以及新的軟體技術,使 Blackwell 處理推理 AI 模型的速度比 Hopper 快 25 倍。
Blackwell in all of its configurations is in full production.
所有配置的 Blackwell 均已全面投入生產。
Each Grace Blackwell NVLink 72 rack is an engineering marvel. 1.5 million components produced across 350 manufacturing sites by nearly 100,000 factory operators.
每個 Grace Blackwell NVLink 72 機架都是一個工程奇蹟。近 10 萬名工廠操作員在 350 個製造基地生產了 150 萬個零件。
AI is advancing at light speed.
人工智慧正在以光速前進。
We're at the beginning of reasoning AI and inference time scaling.
我們正處於推理人工智慧和推理時間擴展的開始階段。
But we're just at the start of the age of AI, multimodal AIs, enterprise AI, sovereign AI and physical AI are right around the corner.
但我們才剛處於 AI 時代的開始,多模態 AI、企業 AI、主權 AI 和實體 AI 即將到來。
We will grow strongly in 2025.
2025年我們將實現強勁成長。
Going forward, data centers will dedicate most of CapEx to accelerated computing and AI.
展望未來,資料中心將把大部分資本支出用於加速運算和人工智慧。
Data centers will increasingly become AI factories, and every company will have them either rented or self-operated.
資料中心將越來越成為AI工廠,每家公司都會租用或自營資料中心。
I want to thank all of you for joining us today.
我感謝大家今天的出席。
Come join us at GTC in a couple of weeks.
幾週後,歡迎來參加我們的 GTC。
We're going to be talking about Blackwell Ultra, Rubin and other new computing, networking, reasoning AI, physical AI products and a whole bunch more.
我們將討論 Blackwell Ultra、Rubin 和其他新的運算、網路、推理 AI、實體 AI 產品等等。
Thank you.
謝謝。
Operator
Operator
This concludes today's conference call.
今天的電話會議到此結束。
You may now disconnect.
您現在可以斷開連線。