輝達 (NVDA) 2025 Q4 法說會逐字稿

內容摘要

NVIDIA 報告稱,2025 財年第四季營收創下歷史新高,資料中心收入和對其 Blackwell 架構的需求顯著增長。在新產品推出的推動下,該公司預計第一季將繼續成長。他們討論了人工智慧模型日益增長的重要性以及為不同任務配置資料中心所面臨的挑戰。

NVIDIA 正在準備發布 Blackwell Ultra 和 Vera Rubin,專注於客製化 ASIC 和商用 GPU。演講者強調了人工智慧技術跨行業的主流融合以及人工智慧徹底改變軟體和服務的潛力。資料中心內的企業業務正在快速成長,重點關注超大規模企業和企業 AI 的成長潛力。

NVIDIA 的多功能架構和高效能使其在競爭對手中脫穎而出。演講者對 AI 技術未來的發展和影響表示樂觀。

完整原文

使用警語:中文譯文來源為 Google 翻譯,僅供參考,實際內容請以英文原文為主

  • Operator

    Operator

  • Good afternoon. My name is Krista, and I'll be your conference operator today. At this time, I would like to welcome everyone to NVIDIA's fourth-quarter earnings call. (Operator Instructions)

    午安。我叫克里斯塔,今天我將擔任您的會議主持人。現在,我歡迎大家參加 NVIDIA 第四季財報電話會議。(操作員指示)

  • Thank you. Stewart Stecker, you may begin your conference.

    謝謝。史都華·斯特克,你可以開始你的會議了。

  • Stewart Stecker - Investor Relations

    Stewart Stecker - Investor Relations

  • Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the fourth quarter of fiscal 2025. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer.

    謝謝。大家下午好,歡迎參加NVIDIA 2025財年第四季電話會議。今天與我一起出席的還有 NVIDIA 總裁兼執行長 Jensen Huang 和執行副總裁兼財務長 Colette Kress。

  • I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the first quarter of fiscal 2026. The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without prior written consent.

    我想提醒您,我們的電話會議正在 NVIDIA 的投資者關係網站上進行網路直播。網路直播將可重播,直至電話會議討論我們 2026 財年第一季的財務業績。今天電話會議的內容屬於 NVIDIA 的財產。未經事先書面同意,不得複製或轉錄。

  • During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent forms 10-K and 10-Q and the reports that we may file on Form 8-K with the Securities and Exchange Commission.

    在本次電話會議中,我們可能會根據目前預期做出前瞻性陳述。這些都受到許多重大風險和不確定性的影響,我們的實際結果可能會有重大差異。有關可能影響我們未來財務表現和業務的因素的討論,請參閱今天的收益報告中的披露內容、我們最新的 10-K 和 10-Q 表格以及我們可能向美國證券交易委員會提交的 8-K 表格報告。

  • All our statements are made as of today, February 26, 2025, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements.

    我們所有的聲明都是根據我們目前掌握的資訊截至今天(2025 年 2 月 26 日)做出的。除法律要求外,我們不承擔更新任何此類聲明的義務。

  • During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website.

    在本次電話會議中,我們將討論非 GAAP 財務指標。您可以在我們網站上發布的 CFO 評論中,找到這些非 GAAP 財務指標與 GAAP 財務指標的對帳。

  • With that, let me turn the call over to Colette.

    說完這些,讓我把電話轉給科萊特。

  • Colette Kress - Chief Financial Officer, Executive Vice President

    Colette Kress - Chief Financial Officer, Executive Vice President

  • Thanks, Stewart. Q4 was another record quarter. Revenue of $39.3 billion was up 12% sequentially and up 78% year on year, and above our outlook of $37.5 billion. For fiscal 2025, revenue was $130.5 billion, up 114% from the prior year.

    謝謝,斯圖爾特。第四季又是一個創紀錄的季度。營收為 393 億美元,季增 12%,年增 78%,高於我們預期的 375 億美元。2025財年營收為1,305億美元,較上一年成長114%。
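
為便於核對上述成長數字,以下以一段簡短的 Python 程式示範如何由季增與年增幅度回推前期營收(數字取自上文,計算僅供示意,並非官方對帳):

```python
# 依上文數字回推:Q4 營收 393 億美元,季增 12%、年增 78%
q4_revenue = 39.3  # 單位:十億美元

prior_quarter = q4_revenue / 1.12   # 回推季增 12% 之前的上一季營收
prior_year_q4 = q4_revenue / 1.78   # 回推年增 78% 之前的去年同期營收

print(round(prior_quarter, 1))  # 35.1
print(round(prior_year_q4, 1))  # 22.1
```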

  • Let's start with data center. Data center revenue for fiscal 2025 was $115.2 billion, more than doubling from the prior year. In the fourth quarter, data center revenue of $35.6 billion was a record, up 16% sequentially and 93% year on year, as the Blackwell ramp commenced and the H200 continued sequential growth.

    讓我們從資料中心開始。2025 財年的資料中心營收為 1,152 億美元,比前一年成長一倍以上。第四季度,資料中心營收達 356 億美元,創下新高,季增 16%、年增 93%,這得益於 Blackwell 開始放量以及 H200 持續環比成長。

  • In Q4, Blackwell sales exceeded our expectations. We delivered $11 billion of Blackwell revenue to meet strong demand. This is the fastest product ramp in our company's history, unprecedented in its speed and scale. Blackwell production is in full gear across multiple configurations, and we are increasing supply quickly, expanding customer adoption.

    第四季度,Blackwell 的銷售額超出了我們的預期。我們交付了 110 億美元的 Blackwell 營收,以滿足強勁的需求。這是我們公司史上最快的產品爬坡,速度和規模都前所未有。Blackwell 的生產正在多種配置中全面展開,我們正在快速增加供應、擴大客戶採用。

  • Our Q4 data center compute revenue jumped 18% sequentially and over 2x year on year. Customers are racing to scale infrastructure to train the next generation of cutting-edge models and unlock the next level of AI capabilities.

    我們的第四季資料中心計算營收季增 18%,年增 2 倍多。客戶正在競相擴展基礎設施,以訓練下一代尖端模型並釋放更高水準的人工智慧能力。

  • With Blackwell, it will be common for these clusters to start with 100,000 GPUs or more. Shipments have already started for multiple infrastructures of this size.

    有了 Blackwell,這些叢集通常以 100,000 個或更多 GPU 開始。多個此類規模的基礎設施已開始出貨。

  • Post training and model customization are fueling demand for NVIDIA infrastructure and software as developers and enterprises leverage techniques such as fine tuning, reinforcement learning, and distillation to tailor models for domain specific use cases. Hugging Face alone hosts over 90,000 derivatives created from the Llama foundation model. The scale of post training and model customization is massive and can collectively demand orders of magnitude more compute than pre-training.

    隨著開發人員和企業利用微調、強化學習和提煉等技術來為特定領域的用例客製化模型,後期訓練和模型客製化正在推動對 NVIDIA 基礎設施和軟體的需求。光是 Hugging Face 就擁有超過 90,000 個基於 Llama 基礎模型創建的衍生產品。後期訓練和模型定制的規模非常龐大,整體而言,所需的計算量比預訓練高出幾個數量級。

  • Our inference demand is accelerating, driven by test-time scaling and new reasoning models like OpenAI's o3, DeepSeek-R1, and Grok 3. Long-thinking reasoning AI can require 100x more compute per task compared to one-shot inference. Blackwell was architected for reasoning AI inference.

    我們的推理需求正在加速成長,這得益於測試時擴展 (test-time scaling) 以及 OpenAI 的 o3、DeepSeek-R1、Grok 3 等新推理模型。與一次性推理相比,長思考的推理 AI 每項任務可能需要多達 100 倍的運算量。Blackwell 正是為推理 AI 的推論而設計。

  • Blackwell supercharges reasoning AI models with up to 25x higher token throughput and 20x lower cost versus the H100. It is revolutionary. The Transformer Engine is built for LLM and mixture-of-experts inference. And its NVLink domain delivers 14x the throughput of PCIe Gen 5, ensuring the response time, throughput, and cost efficiency needed to tackle the growing complexity of inference at scale.

    與 H100 相比,Blackwell 可使推理 AI 模型的 token 吞吐量提高至多 25 倍、成本降低 20 倍,這是革命性的。Transformer Engine 專為 LLM 與混合專家 (MoE) 推理而打造。其 NVLink 域可提供 PCIe Gen 5 的 14 倍吞吐量,確保應對日益複雜的大規模推理所需的回應時間、吞吐量與成本效率。

  • Companies across industries are tapping into NVIDIA's full-stack inference platform to boost performance and slash costs. One company tripled inference throughput and cut costs by 66% using NVIDIA TensorRT for its screenshot feature.

    各行各業的公司都在利用 NVIDIA 的全端推理平台來提高效能並削減成本。有公司使用 NVIDIA TensorRT,為其螢幕截圖功能將推理吞吐量提高至三倍,並將成本降低 66%。

  • Perplexity sees 435 million monthly queries and reduced its inference costs 3x with NVIDIA Triton Inference Server and TensorRT-LLM.

    Perplexity 每月處理 4.35 億次查詢,並透過 NVIDIA Triton Inference Server 和 TensorRT-LLM 將推理成本降至約三分之一。

  • Microsoft Bing achieved a 5x speed up and major TCO savings for visual search across billions of images with NVIDIA TensorRT and acceleration libraries.

    透過 NVIDIA TensorRT 和加速庫,Microsoft Bing 在數十億張影像的視覺搜尋中實現了 5 倍的速度提升和大幅的 TCO 節省。

  • Blackwell is in great demand for inference. Many of the early GB200 deployments are earmarked for inference, a first for a new architecture. Blackwell addresses the entire AI market, from pre-training and post-training to inference, across cloud, on-premise, and enterprise. CUDA's programmable architecture accelerates every AI model and over 4,400 applications, safeguarding large infrastructure investments against obsolescence in rapidly evolving markets.

    市場對 Blackwell 用於推理的需求很大。許多早期的 GB200 部署都專門用於推理,這對新架構來說尚屬首次。Blackwell 覆蓋整個 AI 市場,從預訓練、後訓練到推理,橫跨雲端、本地部署與企業環境。CUDA 的可程式化架構可加速每個 AI 模型和超過 4,400 個應用程式,確保大型基礎設施投資在快速演變的市場中不致過時。

  • Our performance and pace of innovation are unmatched. We've driven a 200x reduction in inference cost in just the last two years. We deliver the lowest TCO and the highest ROI, and full-stack optimizations for NVIDIA and our large ecosystem, including 5.9 million developers, continuously improve our customers' economics.

    我們的表現和創新步伐無可匹敵。僅在過去兩年內,我們就將推理成本降低了 200 倍。我們提供最低的 TCO 和最高的 ROI,而針對 NVIDIA 與我們龐大生態系(包括 590 萬名開發人員)的全端優化,持續改善客戶的經濟效益。
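
上文「兩年內推理成本降低 200 倍」可換算為年化改善倍數,以下為示意計算(假設兩年間均勻改善,此假設並非文中所述):

```python
# 兩年累計 200 倍的成本降幅,換算為每年的等效倍數
total_reduction = 200
years = 2

annualized = total_reduction ** (1 / years)  # 幾何平均:每年約 14 倍
print(round(annualized, 1))  # 14.1
```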

  • In Q4, large CSPs represented about half of our data center revenue. And these sales increased nearly 2x year on year. Large CSPs were some of the first to stand up Blackwell, with Azure, GCP, AWS, and OCI bringing GB200 systems to cloud regions around the world to meet surging customer demand for AI.

    在第四季度,大型 CSP 約占我們資料中心營收的一半,這些銷售額較去年同期成長近一倍。大型 CSP 是最早部署 Blackwell 的公司之一,Azure、GCP、AWS 和 OCI 都將 GB200 系統引入世界各地的雲端區域,以滿足客戶對 AI 激增的需求。

  • Regional cloud hosting NVIDIA GPUs increased as a percentage of data center revenue, reflecting continued AI factory buildouts globally and rapidly rising demand for AI reasoning models and agents. We've launched a 100,000 GB200 cluster-based instance with NVLink Switch and Quantum-2 InfiniBand.

    區域雲端託管 NVIDIA GPU 佔資料中心收入的百分比有所增加,這反映了全球 AI 工廠的持續建設以及對 AI 推理模型和代理的需求的快速增長。我們已經啟動了一個基於 NVLink Switch 和 Quantum-2 InfiniBand 的 100,000 GB200 叢集的實例。

  • Consumer Internet revenue grew 3x year on year, driven by an expanding set of generative AI and deep learning use cases. These include recommender systems, vision language understanding, synthetic data generation, search, and agentic AI. For example, xAI is adopting the GB200 to train and inference its next generation of Grok AI models.

    受生成式 AI 和深度學習用例不斷擴展的推動,消費者網路營收較去年同期成長 3 倍。這些用例包括推薦系統、視覺語言理解、合成資料生成、搜尋以及代理式 AI。例如,xAI 正在採用 GB200 來訓練和推理其下一代 Grok AI 模型。

  • Meta's cutting-edge Andromeda advertising engine runs on NVIDIA's Grace Hopper Superchip, serving vast quantities of ads across Instagram and Facebook applications. Andromeda harnesses Grace Hopper's fast interconnect and large memory to boost inference throughput by 3x, enhance ad personalization, and deliver meaningful jumps in monetization and ROI.

    Meta 最先進的 Andromeda 廣告引擎運行在 NVIDIA 的 Grace Hopper 超級晶片上,為 Instagram 和 Facebook 應用程式投放大量廣告。Andromeda 利用 Grace Hopper 的快速互連和大容量記憶體,將推理吞吐量提高 3 倍,增強廣告個人化,並在營利與投資報酬率方面實現顯著躍升。

  • Enterprise revenue increased nearly 2x year on year, driven by accelerating demand for model fine-tuning, RAG, agentic AI workflows, and GPU-accelerated data processing.

    企業營收年增近一倍,這得益於模型微調、RAG、代理式 AI 工作流程以及 GPU 加速資料處理的需求加速成長。

  • We introduced NVIDIA Llama Nemotron model family NIMs to help developers create and deploy AI agents across a range of applications including customer support, fraud detection, and product supply chain and inventory management. Leading AI agent platform providers including SAP and ServiceNow are among the first to use new models.

    我們推出了 NVIDIA Llama Nemotron 模型系列 NIM,幫助開發人員在一系列應用程式中建立和部署 AI 代理,包括客戶支援、詐欺偵測以及產品供應鍊和庫存管理。SAP 和 ServiceNow 等領先的 AI 代理平台供應商是首批使用新模型的公司之一。

  • Healthcare leaders IQVIA, Illumina, Mayo Clinic as well as Arc Institute are using NVIDIA AI to speed drug discovery, enhance genomic research, and pioneer advanced healthcare services with generative and agentic AI.

    醫療保健領導者 IQVIA、Illumina、Mayo Clinic 以及 Arc Institute 正在使用 NVIDIA AI 來加速藥物發現、加強基因組研究,並透過生成性和代理性 AI 開拓先進的醫療保健服務。

  • As AI expands beyond the digital world, NVIDIA infrastructure and software platforms are increasingly being adopted to power robotics and physical AI development.

    隨著人工智慧在數位世界之外的擴展,NVIDIA 基礎設施和軟體平台越來越多地被用於推動機器人和實體人工智慧的發展。

  • One of the earliest and largest robotics applications is autonomous vehicles, where virtually every AV company is developing on NVIDIA in the data center, in the car, or both. NVIDIA's automotive vertical revenue is expected to grow to approximately $5 billion this fiscal year.

    自動駕駛汽車是最早、規模最大的機器人應用之一,幾乎每家自駕車公司都在資料中心、車端或兩者上使用 NVIDIA 進行開發。NVIDIA 的汽車垂直領域營收預計本財年將成長至約 50 億美元。

  • At CES, Hyundai Motor Group announced it is adopting NVIDIA's technologies to accelerate AV and robotics development and smart factory initiatives. Vision transformers, self-supervised learning, multimodal sensor fusion, and high-fidelity simulation are driving breakthroughs in AV development and will require 10x more compute.

    在 CES 上,現代汽車集團宣布將採用 NVIDIA 的技術來加速 AV 和機器人技術的發展以及智慧工廠計畫。視覺轉換器、自監督學習、多模態感測器融合和高保真模擬正在推動 AV 開發的突破,並且需要 10 倍以上的運算能力。

  • At CES, we announced the NVIDIA Cosmos world foundation model platform. Just as language foundation models have revolutionized language AI, Cosmos is a physical AI platform built to revolutionize robotics. Leading robotics and automotive companies, including ride-sharing giant Uber, are among the first to adopt the platform.

    在 CES 上,我們發表了 NVIDIA Cosmos 世界基礎模型平台。正如語言基礎模型徹底改變了語言 AI,Cosmos 是一個旨在徹底改變機器人技術的實體 AI 平台。包括共乘巨頭 Uber 在內的領先機器人和汽車公司是首批採用該平台的公司之一。

  • From a geographic perspective, sequential growth in our data center revenue was strongest in the US, driven by the initial ramp of Blackwell. Countries across the globe are building their AI ecosystems, and demand for compute infrastructure is surging. France's EUR200 billion AI investment and the EU's EUR200 billion InvestAI initiative offer a glimpse into the buildout set to redefine global AI infrastructure in the coming years.

    從地理角度來看,受 Blackwell 初期放量的推動,我們的資料中心營收在美國的環比成長最為強勁。世界各國都在建構自己的 AI 生態系,對運算基礎設施的需求正在激增。法國 2,000 億歐元的 AI 投資以及歐盟 2,000 億歐元的 InvestAI 計畫,讓我們得以一窺未來幾年將重新定義全球 AI 基礎設施的建設浪潮。

  • Now, as a percentage of total data center revenue, data center sales in China remained well below levels seen prior to the onset of export controls. Absent any change in regulations, we believe that China shipments will remain roughly at the current percentage. The market in China for data center solutions remains very competitive. We will continue to comply with export controls while serving our customers.

    目前,以占資料中心總營收的百分比來看,中國資料中心的銷售額仍遠低於出口管制實施前的水平。若法規沒有任何變化,我們相信對中國的出貨量將大致維持在目前的比例。中國資料中心解決方案市場的競爭依然十分激烈。我們在為客戶提供服務的同時,將繼續遵守出口管制。

  • Networking revenue declined 3% sequentially. Our networking attach rate to GPU compute systems is robust at over 75%. We are transitioning from small NVLink 8 with InfiniBand to large NVLink 72 with Spectrum-X. Spectrum-X and NVLink Switch revenue increased and represents a major new growth sector. We expect networking to return to growth in Q1.

    網路營收季減 3%。我們的網路與 GPU 運算系統的搭配率穩健,超過 75%。我們正從搭配 InfiniBand 的小型 NVLink 8 過渡到搭配 Spectrum-X 的大型 NVLink 72。Spectrum-X 和 NVLink Switch 營收有所增加,代表一個重要的新成長領域。我們預計網路業務將在第一季恢復成長。

  • AI requires a new class of networking. NVIDIA offers NVLink Switch systems for scale up compute. For scale out, we offer Quantum InfiniBand for HPC supercomputers and Spectrum-X for Ethernet environments. Spectrum-X enhances the Ethernet for AI computing and has been a huge success. Microsoft Azure, OCI, CoreWeave, and others are building large AI factories with Spectrum-X.

    人工智慧需要一種新型的網路。NVIDIA 提供 NVLink Switch 系統用於擴展計算。為了實現橫向擴展,我們為 HPC 超級電腦提供 Quantum InfiniBand,為乙太網路環境提供 Spectrum-X。Spectrum-X 增強了乙太網路的 AI 運算能力,並取得了巨大的成功。Microsoft Azure、OCI、CoreWeave 和其他公司正在使用 Spectrum-X 建造大型 AI 工廠。

  • The first Stargate data centers will use Spectrum-X. Yesterday Cisco announced integrating Spectrum-X into their networking portfolio to help enterprises build AI infrastructure. With its large enterprise footprint and global reach, Cisco will bring NVIDIA Ethernet to every industry.

    第一個 Stargate(星際之門)資料中心將使用 Spectrum-X。昨天,思科宣布將 Spectrum-X 整合到其網路產品組合中,以幫助企業建立 AI 基礎架構。憑藉其龐大的企業足跡和全球影響力,思科將把 NVIDIA 乙太網路帶入各行各業。

  • Now moving to gaming and AI PCs. Gaming revenue of $2.5 billion decreased 22% sequentially and 11% year on year. Full-year revenue of $11.4 billion increased 9% year on year, and demand remained strong throughout the holiday. However, Q4 shipments were impacted by supply constraints. We expect strong sequential growth in Q1 as supply increases. The new GeForce RTX 50 Series desktop and laptop GPUs are here. Built for gamers, creators, and developers, they fuse AI and graphics, redefining visual computing.

    現在轉向遊戲和 AI PC。遊戲營收為 25 億美元,季減 22%、年減 11%。全年營收 114 億美元,年增 9%,假期期間需求依然強勁。然而,第四季的出貨量受到供應限制的影響。隨著供應增加,我們預計第一季將出現強勁的環比成長。全新 GeForce RTX 50 系列桌上型與筆記型電腦 GPU 現已推出,專為遊戲玩家、創作者和開發者打造,融合 AI 和圖形,重新定義視覺運算。

  • Powered by the Blackwell architecture, Fifth-Generation Tensor Cores and Fourth-Generation RT Cores and featuring up to 3,400 AI TOPS, these GPUs deliver a 2x performance leap and new AI-driven rendering including neural shaders, digital human technologies, geometry and lighting.

    這些 GPU 採用 Blackwell 架構、第五代 Tensor 核心和第四代 RT 核心,具有高達 3,400 AI TOPS,可實現 2 倍性能飛躍和新的 AI 驅動渲染,包括神經著色器、數位人技術、幾何和照明。

  • The new DLSS 4 boosts frame rates up to 8x with AI-driven frame generation, turning one rendered frame into three. It also features the industry's first real-time application of transformer models, packing 2x more parameters and 4x the compute for unprecedented visual fidelity.

    新的 DLSS 4 透過 AI 驅動的幀生成將幀率提高至多 8 倍,可由一張渲染幀生成三張。它還是業界首個 Transformer 模型的即時應用,參數多 2 倍、運算量多 4 倍,帶來前所未有的視覺保真度。
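
上文「最高 8 倍」的幀率提升,一種常見的拆解方式是把超解析度加速與多幀生成的倍數相乘。以下 Python 片段為假設性的示意:其中「每張渲染幀輸出 4 張(1 張渲染 + 3 張 AI 生成)」與「超解析度約 2 倍」皆為說明用的假設,並非官方規格拆解:

```python
# 假設性拆解:總倍數 ≈ 超解析度加速 × 每張渲染幀的輸出幀數
upscaling_speedup = 2            # 假設:DLSS 超解析度帶來約 2 倍加速
frames_out_per_rendered = 1 + 3  # 假設:1 張渲染幀 + 3 張 AI 生成幀

total_multiplier = upscaling_speedup * frames_out_per_rendered
print(total_multiplier)  # 8
```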

  • We also announced a wave of GeForce Blackwell laptop GPUs with new NVIDIA Max-Q technology that extends battery life by up to an incredible 40%. These laptops will be available starting in March from the world's top manufacturers.

    我們也發布了一系列採用全新 NVIDIA Max-Q 技術的 GeForce Blackwell 筆記型電腦 GPU,可將電池壽命延長高達 40%。這些筆記型電腦將於三月由全球頂級製造商推出。

  • Moving to our professional visualization business, revenue of $511 million was up 5% sequentially and 10% year on year. Full year revenue of $1.9 billion increased 21% year on year. Key industry verticals driving demand include automotive and healthcare.

    我們的專業視覺化業務收入達 5.11 億美元,季增 5%,年增 10%。全年營收19億美元,年增21%。推動需求的關鍵垂直產業包括汽車和醫療保健。

  • NVIDIA technologies and generative AI are reshaping design, engineering, and simulation workloads. Increasingly, these technologies are being leveraged in leading software platforms from ANSYS, Cadence, and Siemens, fueling demand for NVIDIA RTX workstations.

    NVIDIA 技術和生成式 AI 正在重塑設計、工程和模擬工作負載。這些技術越來越多地被 ANSYS、Cadence 和 Siemens 等領先的軟體平台所採用,從而推動了對 NVIDIA RTX 工作站的需求。

  • Now moving to automotive. Revenue was a record $570 million, up 27% sequentially and up 103% year on year. Full year revenue of $1.7 billion increased 55% year on year. Strong growth was driven by the continued ramp in autonomous vehicles, including cars and robotaxis.

    現在轉向汽車領域。營收達到創紀錄的 5.7 億美元,比上一季成長 27%,比去年同期成長 103%。全年營收17億美元,年增55%。強勁的成長是由自動駕駛汽車(包括汽車和機器人計程車)的持續成長所推動的。

  • At CES, we announced Toyota, the world's largest automaker, will build its next generation vehicles on NVIDIA Orin, running the safety certified NVIDIA DRIVE OS. We announced Aurora and Continental will deploy driverless trucks at scale powered by NVIDIA DRIVE Thor.

    在 CES 上,我們宣布全球最大的汽車製造商豐田將在 NVIDIA Orin 上打造其下一代汽車,並運行經過安全認證的 NVIDIA DRIVE OS。我們宣布 Aurora 和 Continental 將大規模部署由 NVIDIA DRIVE Thor 支援的無人駕駛卡車。

  • Finally, our end-to-end autonomous vehicle platform, NVIDIA DRIVE Hyperion, has passed industry safety assessments by TUV SUD and TUV Rheinland, two of the industry's foremost authorities for automotive grade safety and cybersecurity.

    最後,我們的端到端自動駕駛汽車平台 NVIDIA DRIVE Hyperion 已通過 TUV SUD 和 TUV Rheinland 的產業安全評估,這兩家機構是汽車級安全和網路安全領域的兩大產業權威。

  • NVIDIA is the first AV platform to receive a comprehensive set of third-party assessments.

    NVIDIA 是第一個獲得全面第三方評估的 AV 平台。

  • Okay, moving to the rest of the P&L. GAAP gross margin was 73%, and non-GAAP gross margin was 73.5%, down sequentially as expected with our first deliveries of the Blackwell architecture.

    好的,接著看損益表的其餘部分。GAAP 毛利率為 73%,非 GAAP 毛利率為 73.5%,隨著 Blackwell 架構的首批交付,一如預期地環比下降。

  • As discussed last quarter, Blackwell is a customizable AI infrastructure with several different types of NVIDIA-built chips, multiple networking options, and both air- and liquid-cooled data center configurations. We exceeded our expectations in Q4 in ramping Blackwell, increasing system availability, and providing several configurations to our customers. As Blackwell ramps, we expect gross margins to be in the low 70s.

    如同上季所討論的,Blackwell 是可客製化的 AI 基礎設施,具有多種不同類型的 NVIDIA 自研晶片、多種網路選項,以及氣冷與液冷兩種資料中心配置。我們在第四季度的 Blackwell 放量超出了預期,提高了系統可用性,並為客戶提供了多種配置。隨著 Blackwell 放量,我們預計毛利率將在 70% 出頭。

  • We -- initially, we are focused on expediting the manufacturing of Blackwell systems to meet strong customer demand as they race to build out Blackwell infrastructure. When fully ramped, we have many opportunities to improve the cost and gross margin. We'll improve and return to the mid-70s late this fiscal year.

    我們——最初,我們專注於加快 Blackwell 系統的製造,以滿足客戶競相建置 Blackwell 基礎設施的強勁需求。當產能全面提升後,我們有許多機會改善成本和毛利率。我們預計毛利率將在本財年稍晚改善並回到 75% 左右(mid-70s)。

  • Sequentially, GAAP operating expenses were up 9% and non-GAAP operating expenses were up 11%, reflecting higher engineering development costs and higher compute and infrastructure costs for new product introductions.

    與上一季相比,GAAP 營運費用上漲 9%,非 GAAP 營運費用上漲 11%,這反映了工程開發成本的增加以及新產品推出的計算和基礎設施成本的增加。

  • In Q4, we returned $8.1 billion to shareholders in the form of share repurchases and cash dividends.

    第四季度,我們以股票回購和現金分紅的形式向股東返還了 81 億美元。

  • Let me turn to the outlook for the first quarter.

    我來談談第一季的展望。

  • Total revenue is expected to be $43 billion, plus or minus 2%. With continuing strong demand, we expect a significant ramp of Blackwell in Q1. We expect sequential growth in both data center and gaming. Within data center, we expect sequential growth from both compute and networking.

    預計總收入為 430 億美元,上下浮動 2%。由於需求持續強勁,我們預計 Blackwell 的銷售量將在第一季大幅成長。我們預計資料中心和遊戲都將實現連續成長。在資料中心內,我們預期計算和網路都會實現連續成長。
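
上述展望區間與隱含的季增幅度,可用下列 Python 程式示意(數字取自上文的第一季展望與第四季實績,僅供參考):

```python
# Q1 營收展望:430 億美元,上下浮動 2%
guidance_mid = 43.0  # 單位:十億美元
low = guidance_mid * 0.98
high = guidance_mid * 1.02

q4_actual = 39.3  # 上文第四季實際營收
implied_seq_growth = guidance_mid / q4_actual - 1

print(round(low, 2), round(high, 2))       # 42.14 43.86
print(round(implied_seq_growth * 100, 1))  # 9.4(以展望中值計的隱含季增百分比)
```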

  • GAAP and non-GAAP gross margins are expected to be 70.6% and 71% respectively, plus or minus 50 basis points.

    預計 GAAP 和非 GAAP 毛利率分別為 70.6% 和 71%,上下浮動 50 個基點。

  • GAAP and non-GAAP operating expenses are expected to be approximately $5.2 billion and $3.6 billion, respectively. We expect full-year fiscal 2026 operating expense growth to be in the mid-thirties percent.

    預計 GAAP 和非 GAAP 營運費用分別約為 52 億美元和 36 億美元。我們預計 2026 財年全年營運費用的成長率約在 35% 上下(mid-30s)。

  • GAAP and non-GAAP other income and expenses are expected to be an income of approximately $400 million, excluding gains and losses from non-marketable and publicly-held equity securities.

    預計 GAAP 和非 GAAP 其他收益及費用為約 4 億美元的淨收益,不包括非上市及公開持有股權證券的損益。

  • GAAP and non-GAAP tax rates are expected to be 17%, plus or minus 1%, excluding any discrete items.

    預計 GAAP 和非 GAAP 稅率為 17%,上下浮動 1%,不包括任何個別一次性項目。

  • Further financial details are included in the CFO commentary and other information available on our IR website, including a new financial information AI agent.

    進一步的財務細節包含在財務長評論和我們 IR 網站上提供的其他資訊中,包括一個新的財務資訊 AI 代理。

  • In closing, let me highlight upcoming events for the financial community. We will be at the TD Cowen Health Care Conference in Boston on March 3 and at the Morgan Stanley Technology, Media & Telecom Conference in San Francisco on March 5. Please join us for our Annual GTC Conference starting Monday, March 17, in San Jose, California. Jensen will deliver a news packed keynote on March 18, and we will host a Q&A session for our financial analysts the next day, March 19. We look forward to seeing you at these events.

    最後,讓我重點介紹一下金融界即將發生的事件。我們將於 3 月 3 日參加在波士頓舉行的 TD Cowen 醫療保健會議,並於 3 月 5 日參加在舊金山舉行的摩根士丹利科技、媒體和電信會議。歡迎參加我們於 3 月 17 日星期一在加州聖荷西舉行的年度 GTC 會議。詹森將於 3 月 18 日發表一場新聞豐富的主題演講,第二天,即 3 月 19 日,我們將為我們的財務分析師舉辦一場問答環節。我們期待在這些活動中見到您。

  • Our earnings call to discuss the results for our first quarter of fiscal 2026 is scheduled for May 28, 2025.

    我們將於 2025 年 5 月 28 日召開收益電話會議,討論 2026 財年第一季的業績。

  • We are now going to open up the call to questions. Operator, if you could start that, that would be great.

    我們現在將開放提問。操作員,麻煩您開始,謝謝。

  • Operator

    Operator

  • (Operator Instructions) CJ Muse, Cantor Fitzgerald.

    (操作員指示)CJ Muse,Cantor Fitzgerald。

  • CJ Muse - Analyst

    CJ Muse - Analyst

  • I guess for me, Jensen, as test-time compute and reinforcement learning shows such promise, we're clearly seeing an increasing blurring of the lines between training and inference, what does this mean for the potential future of potentially inference dedicated clusters? And how do you think about the overall impact to NVIDIA and your customers? Thank you.

    我想對我來說,Jensen,由於測試時間計算和強化學習顯示出如此大的前景,我們清楚地看到訓練和推理之間的界限越來越模糊,這對潛在推理專用集群的潛在未來意味著什麼?您如何看待這對 NVIDIA 及其客戶的整體影響?謝謝。

  • Jensen Huang - Founder, President, Chief Executive Officer

    Jensen Huang - Founder, President, Chief Executive Officer

  • Yeah, I appreciate that CJ. There are now multiple scaling laws. There's the pre-training scaling law, and that's going to continue to scale because we have multimodality, we have data that came from reasoning that are now used to do pretraining.

    是的,謝謝你,CJ。現在有多個擴展定律 (scaling law)。首先是預訓練擴展定律,它將持續發揮作用,因為我們有了多模態,也有了來自推理過程的數據,如今可用於預訓練。

  • And then the second is post-training scaling law, using reinforcement learning human feedback, reinforcement learning AI feedback, reinforcement learning verifiable rewards. The amount of computation you use for post-training is actually higher than pre-training. And it's kind of sensible in the sense that you could, while you're using reinforcement learning, generate an enormous amount of synthetic data or synthetically generated tokens. AI models are basically generating tokens to train AI models. And that's post-training.

    第二是後訓練擴展定律,使用人類回饋強化學習 (RLHF)、AI 回饋強化學習、可驗證獎勵強化學習。後訓練實際使用的運算量高於預訓練。這在某種意義上是合理的:在使用強化學習時,你可以生成大量合成資料或合成生成的 token。AI 模型基本上是在生成 token 來訓練 AI 模型,這就是後訓練。

  • And the third part, this is the part that you mentioned, is test-time compute, or reasoning, long thinking, inference scaling -- they're all basically the same idea. And there you have a chain of thought, you have search. The amount of tokens generated, the amount of inference compute needed, is already 100 times more than the one-shot examples and one-shot capabilities of large language models in the beginning. And that's just the beginning. This is just the beginning.

    第三部分,也就是您提到的部分,是測試時運算,或者說推理、長思考、推理擴展——它們基本上是同一個概念。在這裡你會有思維鏈,你會進行搜尋。所生成的 token 數量、所需的推理運算量,已經是早期大型語言模型一次性範例與一次性能力的 100 倍。而這只是個開始。這只是個開始。
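
上文提到長思考推理所需的運算量已是一次性推理的 100 倍。以下 Python 片段以常見的粗略估算「解碼式推理每個 token 約需 2 × 參數量 FLOPs」來示意這個倍數關係;模型參數量與 token 數皆為假設值,並非文中數字:

```python
# 粗略估算:每個輸出 token 的運算量 ≈ 2 × 參數量(FLOPs)
params = 70e9  # 假設:700 億參數的模型

one_shot_tokens = 200       # 假設:一次性回答的輸出 token 數
reasoning_tokens = 20_000   # 假設:長思考/思維鏈的輸出 token 數(100 倍)

one_shot_flops = 2 * params * one_shot_tokens
reasoning_flops = 2 * params * reasoning_tokens

print(reasoning_flops / one_shot_flops)  # 100.0
```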

  • The idea that the next generation could take thousands of times more compute -- and hopefully, extremely thoughtful, simulation-based and search-based models that could take hundreds of thousands, even millions of times more compute than today -- is in our future.

    下一代模型的運算量可能高出數千倍,甚至我們希望出現極度深思熟慮、基於模擬與搜尋的模型,其運算量可能比現在高出數十萬倍、數百萬倍——這就是我們的未來。

  • And so, the question is how do you design such an architecture? Some of it -- some of the models are auto regressive. Some of the models are diffusion based. Some of it -- some of the times you want your data center to have disaggregated inference. Sometimes it is compacted.

    那麼,問題是如何設計這樣的架構呢?有些模型是自回歸的,有些模型是基於擴散的。有時你會希望資料中心採用分離式 (disaggregated) 推理,有時則是集中在一起。

  • And so, it's hard to figure out what is the best configuration of a data center, which is the reason why NVIDIA's architecture is so popular. We run every model. We are great at training. The vast majority of our compute today is actually inference and Blackwell takes all of that to a new level. We designed Blackwell with the idea of reasoning models in mind. And when you look at training, it's many times more performing.

    因此,很難弄清楚資料中心的最佳配置是什麼,這也是NVIDIA架構如此受歡迎的原因。我們運行每個模型。我們非常擅長訓練。我們今天的絕大多數計算實際上都是推理,而 Blackwell 將這一切提升到了一個新的水平。我們在設計 Blackwell 時就考慮到了推理模型。當你觀察訓練時,你會發現它的表現好很多倍。

  • But what's really amazing is for long thinking, test-time scaling, reasoning AI models were tens of times faster, 25 times higher throughput. And so, Blackwell is going to be incredible across the board. And when you have a data center that allows you to configure and use your data center based on are you doing more pre-training now, post-training now or scaling out your inference, our architecture is fungible and easy to use in all of those different ways. And so, we're seeing, in fact, much, much more concentration of a unified architecture than ever before.

    但真正驚人的是,在長思考、測試時擴展的推理 AI 模型上,速度快了數十倍、吞吐量高出 25 倍。因此,Blackwell 將全面表現出色。當你的資料中心可以依據目前是進行更多預訓練、後訓練,還是擴展推理來配置與使用時,我們的架構是通用且易於以上述各種方式運用的。因此,事實上,我們看到統一架構的集中度比以往任何時候都高得多。

  • Operator

    Operator

  • Joe Moore, JPMorgan.

    摩根大通的喬摩爾。

  • Joe Moore - Analyst

    Joe Moore - Analyst

  • Morgan Stanley, actually. Thank you. I wonder if you could talk about GB200. At CES, you sort of talked about the complexity of the rack-level systems and the challenges you have. And then, as you said in the prepared remarks, we've seen a lot of general availability -- where are you in terms of that ramp?

    其實是摩根士丹利。謝謝。我想請您談談 GB200。在 CES 上,您談到了機架級系統的複雜性以及你們面臨的挑戰。而且,正如您在準備好的發言中所說,我們已經看到很多正式上線的消息——就產能爬坡而言,你們目前進展到哪裡?

  • Are there still bottlenecks to consider at a systems level above and beyond the chip level? And just have you maintained your enthusiasm for the NVL72 platforms?

    除了晶片級之外,系統級是否還存在需要考慮的瓶頸?您對 NVL72 平台的熱情還保持著嗎?

  • Jensen Huang - Founder, President, Chief Executive Officer

    Jensen Huang - Founder, President, Chief Executive Officer

  • Well, I'm more enthusiastic today than I was at CES. And the reason for that is because we've shipped a lot more since CES. We have some 350 plants manufacturing the 1.5 million components that go into each one of the Blackwell racks, Grace Blackwell racks.

    嗯,我今天比在 CES 時更有熱情,原因是自 CES 以來我們的出貨量大幅增加。我們有約 350 家工廠,負責生產每個 Blackwell 機架(Grace Blackwell 機架)所需的 150 萬個零組件。

  • Yes, it's extremely complicated. And we successfully and incredibly ramped up Grace Blackwell, delivering some $11 billion of revenues last quarter. We're going to have to continue to scale as demand is quite high, and customers are anxious and impatient to get their Blackwell systems.

    是的,這非常複雜。我們成功且令人驚奇地擴大了 Grace Blackwell 的業務,上個季度實現了約 110 億美元的收入。由於需求很高,我們必須繼續擴大規模,而且客戶急切地想要得到他們的 Blackwell 系統。

  • You've probably seen on the web, a fair number of celebrations about Grace Blackwell systems coming online and we have them, of course. We have a fairly large installation of Grace Blackwell systems for our own engineering and our own design teams and software teams.

    您可能已經在網路上看到不少慶祝 Grace Blackwell 系統上線的消息,我們自己當然也有這些系統。我們為自己的工程、設計與軟體團隊安裝了相當大規模的 Grace Blackwell 系統。

  • CoreWeave has now been quite public about the successful bring-up of theirs. Microsoft has, of course, OpenAI has, and you're starting to see many come online. And so, I think the answer to your question is nothing is easy about what we're doing, but we're doing great, and all of our partners are doing great.

    CoreWeave 現在已相當公開地宣布他們的系統成功啟用。微軟當然也有,OpenAI 也有,你會開始看到許多系統陸續上線。所以,我想你這個問題的答案是:我們所做的事情沒有一件是容易的,但我們做得很好,我們所有的合作夥伴也都做得很好。

  • Operator

    Operator

  • Vivek Arya, Bank of America Securities.

    美國銀行證券的 Vivek Arya。

  • Vivek Arya - Analyst

    Vivek Arya - Analyst

  • Colette, if you wouldn't mind confirming if Q1 is the bottom for gross margins? And then Jensen, my question is for you. What is on your dashboard to give you the confidence that the strong demand can sustain into next year? And has DeepSeek, and whatever innovations they came up with, changed that view in any way?

    Colette,能否請您確認第一季是否為毛利率的低點?接著,Jensen,我的問題是問您的。您的儀表板上有什麼資訊,讓您有信心這股強勁需求能持續到明年?另外,DeepSeek 及其提出的種種創新,是否在任何方面改變了這個看法?

  • Colette Kress - Chief Financial Officer, Executive Vice President

    Colette Kress - Chief Financial Officer, Executive Vice President

  • Let me first take the first part of the question there regarding the gross margin. During our Blackwell ramp, our gross margins will be in the low 70s. And at this point, we are focusing on expediting our manufacturing, expediting our manufacturing to make sure that we can provide to customers as soon as possible. Our Blackwell is fully ramping. And once it does -- I'm sorry, once our Blackwell fully ramps, we can improve our cost and our gross margin. So we expect to probably be in the mid-70s later this year.

    首先,讓我回答問題中關於毛利率的第一部分。在 Blackwell 產能爬坡期間,我們的毛利率將在 70% 出頭。目前,我們專注於加快生產、加快生產,以確保能盡快交付給客戶。我們的 Blackwell 正在全面爬坡。一旦完成——抱歉,一旦 Blackwell 產能完全提升,我們就能改善成本和毛利率。因此,我們預計今年稍晚毛利率可能會回到 75% 左右(mid-70s)。

Walking through what you heard Jensen speak about the systems and their complexity, they are customizable in some cases. They've got multiple networking options. They have liquid-cooled and water-cooled options. So we know there is an opportunity for us to improve these gross margins going forward. But right now, we are going to focus on getting the manufacturing complete and to our customers as soon as possible.

    透過聽 Jensen 講述系統及其複雜性,你會發現在某些情況下它們是可自訂的。他們有多種網路選擇。它們有液體冷卻和水冷卻。因此我們知道未來還有機會提高毛利率。但現在,我們將專注於盡快完成製造並交付給客戶。

  • Jensen Huang - Founder, President, Chief Executive Officer

    Jensen Huang - Founder, President, Chief Executive Officer

We know several things. We have a fairly good line of sight of the amount of capital investment that data centers are building out toward. We know that going forward, the vast majority of software is going to be based on machine learning. And so accelerated computing and generative AI, reasoning AI are going to be the type of architecture you want in your data center.

    我們知道一些事情,我們對資料中心建設所需的資本投資金額有相當好的了解。我們知道,未來絕大多數軟體將基於機器學習。因此,加速運算、產生人工智慧和推理人工智慧將成為您在資料中心中想要的架構類型。

  • We have, of course, forecast and plans from our top partners. And we also know that there are many innovative, really exciting start-ups that are still coming online as new opportunities for developing the next breakthroughs in AI, whether it's agentic AIs, reasoning AI or physical AIs. The number of start-ups are still quite vibrant and each one of them need a fair amount of computing infrastructure.

    當然,我們有來自頂級合作夥伴的預測和計劃。我們也知道,許多創新的、真正令人興奮的新創公司仍在不斷湧現,為開發人工智慧的下一個突破帶來新的機會,無論是代理人工智慧、推理人工智慧還是物理人工智慧。新創企業的數量仍然非常活躍,每家企業都需要相當數量的運算基礎設施。

  • And so, I think the -- whether it's the near-term signals or the mid-term signals -- near-term signals, of course, are POs and forecasts and things like that. Midterm signals would be the level of infrastructure and CapEx scale-out compared to previous years. And then the long-term signals has to do with the fact that we know fundamentally software has changed from hand coding that runs on CPUs, to machine learning and AI-based software that runs on GPUs and accelerated computing systems.

    所以,我認為——無論是近期訊號還是中期訊號——近期訊號當然是採購訂單和預測之類的東西。中期訊號將是與前幾年相比基礎設施和資本支出的水平。然後,長期訊號與這樣一個事實有關:我們知道軟體從根本上已經從在 CPU 上運行的手動編碼轉變為在 GPU 和加速計算系統上運行的機器學習和基於 AI 的軟體。

  • And so, we have a fairly good sense that this is the future of software. And then maybe as you roll it out, another way to think about that is we've really only tapped consumer AI and search and some amount of consumer generative AI, advertising, recommenders, kind of the early days of software. The next wave is coming, agentic AI for enterprise, physical AI for robotics, and sovereign AI as different regions build out their AI for their own ecosystems.

    因此,我們非常清楚這就是軟體的未來。然後也許當你推出它時,另一種思考方式是,我們實際上只利用了消費者人工智慧和搜尋以及一定數量的消費者生成人工智慧、廣告、推薦器,這些都是軟體的早期階段。下一波浪潮即將到來,企業將迎來代理人工智慧,機器人將迎來物理人工智慧,不同地區將為自己的生態系統建構人工智慧,主權人工智慧也將到來。

  • And so, each one of these are barely off the ground, and we can see them. We can see them because, obviously, we're in the center of much of this development and we can see great activity happening in all these different places and these will happen. So near term, midterm, long term.

    所以,它們每一個都剛離開地面,我們就能看到它們。我們可以看到它們,因為顯然我們正處於這一發展的中心,我們可以看到在所有這些不同的地方正在發生巨大的活動,而這些都會發生。所以短期、中期、長期。

  • Operator

    Operator

  • Harlan Sur, JPMorgan.

    摩根大通的 Harlan Sur。

  • Harlan Sur - Analyst

    Harlan Sur - Analyst

  • Your next-generation Blackwell Ultra is set to launch in the second half of this year, in line with the team's annual product cadence. Jensen, can you help us understand the demand dynamics for Ultra given that you'll still be ramping the current generation Blackwell solutions? How do your customers and the supply chain also manage the simultaneous ramps of these two products? And is the team still on track to execute Blackwell Ultra in the second half of this year?

您的下一代 Blackwell Ultra 預計將於今年下半年推出,與團隊的年度產品節奏一致。詹森,鑑於您仍在提升當前一代 Blackwell 解決方案的產量,您能否幫助我們了解 Ultra 的需求動態?您的客戶和供應鏈如何管理這兩種產品的同時產能提升?團隊是否仍有望在今年下半年如期推出 Blackwell Ultra?

  • Jensen Huang - Founder, President, Chief Executive Officer

    Jensen Huang - Founder, President, Chief Executive Officer

  • Yes. Blackwell Ultra is second half. As you know, the first Blackwell was -- and we had a hiccup that probably cost us a couple of months. We're fully recovered, of course. The team did an amazing job recovering and all of our supply chain partners and just so many people helped us recover at the speed of light. And so now we've successfully ramped production of Blackwell.

是的。Blackwell Ultra 在下半年。如您所知,第一代 Blackwell——我們遇到了一個小問題,可能因此耽誤了幾個月的時間。當然,我們已經完全恢復了。團隊的恢復工作非常出色,我們所有的供應鏈合作夥伴以及許多人都幫助我們以光速恢復。現在我們已經成功提升了 Blackwell 的產量。

But that doesn't stop the next train. The next train is on an annual rhythm, and Blackwell Ultra with new networking, new memory, and of course, new processors, all of that is coming online. We have been working with all of our partners and customers, laying this out. They have all of the necessary information, and we'll work with everybody to do the proper transition.

    但這並不能阻止下一班火車的運行。下一班列車將按照年度節奏推出,Blackwell Ultra 將配備新的網路、新的內存,當然還有新的處理器,所有這些都將上線。我們一直在與所有合作夥伴和客戶合作,制定這項計劃。他們掌握了所有必要的信息,我們將與大家合作,完成適當的過渡。

  • This time between Blackwell and Blackwell Ultra, the system architecture is exactly the same. It's a lot harder going from Hopper to Blackwell because we went from an NVLink 8 system to a NVLink 72-based system. So the chassis, the architecture of the system, the hardware, the power delivery, all of that had to change. This was quite a challenging transition.

    這次 Blackwell 和 Blackwell Ultra 的系統架構完全相同。從 Hopper 轉到 Blackwell 要困難得多,因為我們從 NVLink 8 系統轉移到了基於 NVLink 72 的系統。因此,底盤、系統架構、硬體、電力傳輸等都必須改變。這是一個相當具有挑戰性的轉變。

  • But the next transition will slot right in. Grace Blackwell Ultra will slot right in. And we've also already revealed and been working very closely with all of our partners on the click after that. And the click after that is called Vera Rubin and all of our partners are getting up to speed on the transition of that and so preparing for that transition.

但下一次轉變將可以直接插入。Grace Blackwell Ultra 將可直接放入。我們也已經公開了再下一代的產品,並一直與所有合作夥伴就此密切合作。再下一代稱為 Vera Rubin,我們所有的合作夥伴都在加緊了解這一轉變,並為其做好準備。

And again, we're going to provide a big, huge step-up. And so come to GTC, and I'll talk to you about Blackwell Ultra, Vera Rubin, and then show you what's the one click after that. Really exciting new products, so come to GTC.

再次強調,我們將帶來巨大的進步。所以請來參加 GTC,我會和你們談談 Blackwell Ultra、Vera Rubin,然後向你們展示再下一步是什麼。真的非常令人興奮的新產品,所以請來參加 GTC。

  • Operator

    Operator

  • Timothy Arcuri, UBS.

    瑞銀的提摩西·阿庫裡。

  • Timothy Arcuri - Analyst

    Timothy Arcuri - Analyst

Jensen, we heard a lot about custom ASICs. Can you kind of speak to the balance between custom ASICs and merchant GPUs? We hear about some of these heterogeneous superclusters that use both GPUs and ASICs. Is that something customers are planning on building? Or will these infrastructures remain fairly distinct? Thanks.

詹森,我們聽到了很多關於客製化 ASIC 的消息。您能談談客製化 ASIC 和商用 GPU 之間的平衡嗎?我們聽說有一些異質超級叢集會同時使用 GPU 和 ASIC?這是客戶計劃建造的東西嗎?或者這些基礎設施將保持相當的區隔?謝謝。

  • Jensen Huang - Founder, President, Chief Executive Officer

    Jensen Huang - Founder, President, Chief Executive Officer

Well, we build very different things than ASICs; in some ways completely different, and in some areas we intersect. We're different in several ways. One, NVIDIA's architecture is general, whether you've optimized for autoregressive models or diffusion-based models or vision-based models or multimodal models or text models. We're great in all of it.

嗯,我們製造的東西與 ASIC 非常不同,在某些方面完全不同,而在某些領域則有交集。我們在很多方面都有所不同。首先,無論您針對自迴歸模型、基於擴散的模型、基於視覺的模型、多模態模型或文字模型進行了最佳化,NVIDIA 的架構都是通用的。我們在所有方面都表現出色。

We're great on all of it because our architecture is flexible and our software stack -- our ecosystem -- is so rich that we're the initial target of the most exciting innovations and algorithms. And so, by definition, we're much, much more general than narrow. We're also really good end-to-end, from data processing, the curation of the training data, to the training of the data, of course, to reinforcement learning used in post-training, all the way to inference with test-time scaling.

我們在所有方面都很出色,因為我們的架構非常靈活,我們的軟體堆疊——生態系統——非常豐富,因此我們成為大多數令人興奮的創新和演算法的最初目標。因此,根據定義,我們的通用性遠遠大於狹隘性。我們在端到端方面也非常出色,從資料處理、訓練資料的管理,到資料訓練,當然還有後期訓練中使用的強化學習,一直到測試時間擴展(test-time scaling)的推理。

  • So we're general, we're end-to-end, and we're everywhere. And because we're not in just one cloud, we're in every cloud, we could be on-prem, we could be in a robot, our architecture is much more accessible and a great target -- initial target for anybody who's starting up a new company. And so, we're everywhere.

    所以我們是通用的、端到端的、無所不在的。而且因為我們不只存在於一個雲端中,而是存在於每個雲端中,我們可以在本地,我們可以在機器人中,我們的架構更容易訪問,並且是一個很好的目標——對於任何創辦新公司的人來說都是最初的目標。所以,我們無所不在。

And the third thing I would say is that our performance and our rhythm are so incredibly fast. Remember that these data centers are always fixed in size. They're fixed in size or they're fixed in power. And if our performance per watt is anywhere from 2x to 4x to 8x, which is not unusual, it translates directly to revenues. And so if you have a 100-megawatt data center, if the performance or the throughput in that 100-megawatt or the gigawatt data center is 4 times or 8 times higher, your revenues for that data center are 4 times or 8 times higher.

我想說的第三件事是,我們的效能和產品節奏都非常快。請記住,這些資料中心的規模始終是固定的。它們的規模是固定的,或者功率是固定的。如果我們的每瓦效能提高 2 倍、4 倍甚至 8 倍(這並不罕見),那麼它就會直接轉化為收入。因此,如果您擁有 100 兆瓦的資料中心,而該 100 兆瓦或千兆瓦資料中心的效能或吞吐量高出 4 倍或 8 倍,那麼該資料中心的收入也會高出 4 倍或 8 倍。

And the reason that is so different than data centers of the past is because AI factories are directly monetizable through the tokens they generate. And so, the token throughput of our architecture being so incredibly fast is just incredibly valuable to all of the companies that are building these things for revenue generation reasons and capturing the fast ROIs. And so, I think the third reason is performance.

    這與過去的資料中心如此不同的原因是,人工智慧工廠可以透過其產生的代幣直接貨幣化。因此,我們架構的令牌吞吐量如此之快,對於所有為了創造收入和快速獲得投資回報而建立這些東西的公司來說都是非常有價值的。所以,我認為第三個原因是性能。

  • And then the last thing that I would say is the software stack is incredibly hard. Building an ASIC is no different than what we do. We build a new architecture. And the ecosystem that sits on top of our architecture is 10 times more complex today than it was two years ago. And that's fairly obvious because the amount of software that the world is building on top of architecture is growing exponentially and AI is advancing very quickly.

    最後我想說的是軟體堆疊非常難。建構 ASIC 與我們所做的沒有什麼不同。我們建構了一個新的架構。如今,我們架構之上的生態系統比兩年前複雜了 10 倍。這是相當明顯的,因為世界在架構之上構建的軟體數量正在呈指數級增長,而且人工智慧正在快速發展。

So bringing that whole ecosystem on top of multiple chips is hard. And so, I would say that those four reasons, and then finally, I will say this: just because the chip is designed doesn't mean it gets deployed. And you've seen this over and over again. There are a lot of chips that get built, but when the time comes, a business decision has to be made, and that business decision is about deploying a new engine, a new processor into an AI factory that is limited in size, in power, and in time.

    因此,將整個生態系統置於多個晶片之上非常困難。所以,我想說的是這四個原因,最後,我想說的是,僅僅因為晶片被設計出來並不意味著它會被部署。你已經多次看到這種情況。有很多晶片被製造出來,但時機成熟時,必須做出一個商業決策,而這個商業決策就是將一個新的引擎、一個新的處理器部署到規模、功率和時間都有限的人工智慧工廠中。

And our technology is not only more advanced, more performant, it has much, much better software capability, and very importantly, our ability to deploy is lightning fast. And so these things are not for the faint of heart, as everybody knows now. And so, there's a lot of different reasons why we do well, why we win.

我們的技術不僅更先進、效能更佳,而且軟體功能也更加強大,而且非常重要的是,我們的部署速度非常快。正如現在每個人都知道的那樣,這些事情並不適合膽小的人。所以,我們之所以做得好、取得勝利,有很多不同的原因。

  • Operator

    Operator

  • Ben Reitzes, Melius Research.

    Ben Reitzes,Melius Research。

  • Ben Reitzes - Analyst

    Ben Reitzes - Analyst

Hey Jensen, it's a geography-related question. You did a great job explaining some of the demand factors underlying the strength here. But the US was up about $5 billion or so sequentially. And I think there is a concern about whether the US can pick up the slack if there are regulations toward other geographies.

    嘿,詹森,這是一個與地理相關的問題。你很好地解釋了這裡一些關於強度的需求潛在因素。但美國環比成長了約 50 億美元。我認為,如果對其他地區實施監管,人們會擔心美國是否能彌補這一不足。

  • And I was just wondering, as we go throughout the year, if this kind of surge in the US continues and it's going to be -- whether that's okay? And if that underlies your growth rate, how can you keep growing so fast with this mix shift towards the US? Your guidance looks like China is probably up sequentially. So just wondering if you could go through that dynamic and maybe Colette can weigh in. Thanks a lot.

    我只是想知道,隨著時間的流逝,如果美國的這種激增趨勢繼續下去,情況會如何?如果這是你們成長率的基礎,那麼在業務結構轉移到美國的情況下,你們如何能夠維持如此快速的成長?您的指導看起來中國可能會持續上漲。所以我只是想知道你是否可以經歷這種動態,也許 Colette 可以參與其中。多謝。

  • Jensen Huang - Founder, President, Chief Executive Officer

    Jensen Huang - Founder, President, Chief Executive Officer

  • China is approximately the same percentage as Q4 and as previous quarters. It's about half of what it was before the export control. But it's approximately the same in percentage.

    中國的比例與第四季及前幾季大致相同。這大約是出口管制之前的一半。但百分比大致相同。

With respect to geographies, the takeaway is that AI is software. It's modern software. It's incredible modern software, but it's modern software, and AI has gone mainstream. AI is used in delivery services everywhere, shopping services everywhere. If you were to buy a quart of milk and have it delivered to you, AI was involved.

就地理位置而言,重點是人工智慧是軟體。這是現代軟體。這是令人難以置信的現代軟體,但它是現代軟體,而且人工智慧已經成為主流。人工智慧應用於各地的送貨服務和購物服務。如果您購買一夸脫牛奶並送貨到府,其中就有人工智慧的參與。

And so, almost every consumer service has AI at the core of it. Every student will use AI as a tutor; healthcare services use AI; financial services use AI. No fintech company will not use AI. Every fintech company will. Climate tech companies use AI. Mineral discovery now uses AI. Every higher education institution, every university uses AI. And so, I think it is fairly safe to say that AI has gone mainstream and that it's being integrated into every application.

    因此,幾乎所有消費者服務所提供的內容都以人工智慧為核心。每個學生都會使用人工智慧作為導師;醫療保健服務使用人工智慧;金融服務使用人工智慧。沒有一家金融科技公司不會使用人工智慧。每家金融科技公司都會這麼做。氣候科技公司使用人工智慧。礦物發現現在使用人工智慧。許多高等教育機構、每所大學都使用人工智慧。因此,我認為可以肯定地說,人工智慧已經成為主流,並且正在融入每個應用程式中。

  • And our hope is that, of course, the technology continues to advance safely and advance in a helpful way to our society. And with that, we're -- I do believe that we're at the beginning of this new transition. And what I mean by that in the beginning is, remember, behind us has been decades of data centers and decades of computers that have been built. And they've been built for a world of hand coding and general-purpose computing and CPUs and so on and so forth.

    當然,我們希望科技能繼續安全地進步,並以有益於社會的方式進步。有了這些,我相信我們正處於這新轉變的開始。我一開始就說的意思是,請記住,我們已經建立了幾十年的資料中心和幾十年的電腦。它們是為手工編碼、通用計算、CPU 等的世界而構建的。

  • And going forward, I think it's fairly safe to say that world is going to be almost all software to be infused with AI. All software and all services will be based on -- ultimately, based on machine learning and the data flywheel is going to be part of improving software and services and that the future computers will be accelerated, the future computers will be based on AI.

    展望未來,我認為可以肯定地說,世界幾乎所有的軟體都將融入人工智慧。所有軟體和服務最終都將基於機器學習,數據飛輪將成為改進軟體和服務的一部分,未來的電腦將加速,未來的電腦將基於人工智慧。

And we're really only two years into that journey of modernizing computers that took decades to build out. And so, I'm fairly sure that we're in the beginning of this new era.

    實際上,我們已踏上這趟旅程兩年,並對耗費數十年時間打造的電腦進行了現代化改造。因此,我確信我們正處於這個新時代的開始。

And then lastly, no technology has ever had the opportunity to address a larger part of the world's GDP than AI. No software tool ever has. And so, this is now a software tool that can address a much larger part of the world's GDP than at any time in history. And so the way we think about growth, and the way we think about whether something is big or small, has to be in that context. And when you take a step back and look at it from that perspective, we're really just in the beginnings.

    最後,沒有任何一項技術有機會像人工智慧一樣解決全球 GDP 的更大份額。沒有任何軟體工具曾經有過這樣的經驗。因此,現在這是一個可以解決世界 GDP 更大一部分問題的軟體工具,比歷史上任何時候都更重要。因此,我們思考成長的方式以及我們思考某件事是大還是小的方式都必須放在這樣的背景下。當你退一步從這個角度來看時,你會發現我們才剛開始。

  • Operator

    Operator

  • Aaron Rakers, Wells Fargo. (Operator Instructions) Mark Lipacis, Evercore ISI.

    富國銀行的 Aaron Rakers。(操作員指示)Mark Lipacis,Evercore ISI。

  • Mark Lipacis - Analyst

    Mark Lipacis - Analyst

I had a clarification and a question. Colette, first the clarification. Did you say that enterprise within the data center grew 2x year-on-year for the January quarter? And if so, would that make it faster growing than the hyperscalers?

我有一個澄清和一個問題。Colette,先請您澄清一下。您是否說過,一月份這一季資料中心內的企業業務年增了一倍?如果是這樣,這是否意味著它的成長速度比超大規模企業更快?

And then, Jensen, for you, the question: hyperscalers are the biggest purchasers of your solutions, but they buy equipment for both internal and external workloads, external workloads being cloud services that enterprises use.

    然後,Jensen,對您來說,問題是,超大規模企業是您的解決方案的最大購買者,但他們購買設備用於內部和外部工作負載,外部工作負載是企業使用的雲端服務。

  • So the question is, can you give us a sense of how that hyperscaler spend splits between that external workload and internal? And as these new AI workflows and applications come up, would you expect enterprises to become a larger part of that consumption mix? And does that impact how you develop your service, your ecosystem.

    所以問題是,您能否讓我們了解一下超大規模資料中心的支出在外部工作負載和內部工作負載之間的分配情況?隨著這些新的人工智慧工作流程和應用程式的出現,您是否預期企業將成為消費組合中更大的一部分?這是否會影響您開發服務和生態系統的方式?

  • Colette Kress - Chief Financial Officer, Executive Vice President

    Colette Kress - Chief Financial Officer, Executive Vice President

Sure. Thanks for the question regarding our Enterprise business. Yes, it grew 2x, very similar to what we were seeing with our large CSPs. Keep in mind, these are both important areas to understand. Working with the CSPs can mean working on large language models, or working on inference in their own work. But keep in mind, that is also where the enterprises are surfacing. The enterprises are both with your CSPs as well as building on their own. They're both growing quite well.

    當然。感謝您提出有關我們企業業務的問題。是的,它增長了 2 倍,與我們在大型 CSP 中看到的情況非常相似。請記住,這兩個都是需要理解的重要領域。與 CSP 合作可以研究大型語言模型,也可以在自己的工作中進行推理。但請記住,這也是企業出現的地方。您的企業既要與 CSP 合作,也要自行建置。它們都長得很好。

  • Jensen Huang - Founder, President, Chief Executive Officer

    Jensen Huang - Founder, President, Chief Executive Officer

The CSPs are about half of our business. And the CSPs have internal consumption and external consumption, as you say. And we're, of course, used for internal consumption. We work very closely with all of them to optimize workloads that are internal to them, because they have a large infrastructure of NVIDIA gear that they could take advantage of.

CSP 約占我們業務的一半。正如您所說,CSP 有內部消耗和外部消耗。當然,我們也被用於內部消耗。我們與他們密切合作,以優化他們的內部工作負載,因為他們擁有可以利用的大量 NVIDIA 設備基礎設施。

And the fact that we can be used for AI on the one hand, video processing on the other, data processing like Spark -- we're fungible. And so, the useful life of our infrastructure is much better. If the useful life is much longer, then the TCO is also lower.

    事實上,一方面我們可以用於人工智慧,另一方面可以用於視訊處理,以及像 Spark 這樣的資料處理,我們是可替代的。因此,我們的基礎設施的使用壽命會更長。如果使用壽命更長,那麼 TCO 也會更低。

And so, the second part is how do we see the growth of enterprise, or non-CSPs, if you will, going forward? And the answer is, I believe, long term, it is by far larger, and the reason for that is because if you look at the computer industry today, what is not served by the computer industry is largely industrial.

    那麼,第二部分是,如果您願意的話,我們如何看待企業或非 CSP 的未來成長?答案是,我相信,從長遠來看,它的規模要大得多,原因在於,如果你看看今天的電腦產業,你會發現電腦產業所不服務的領域主要是工業領域。

So let me give you an example. When we say enterprise -- and let's use a car company as an example, because they make both soft things and hard things. And so, in the case of a car company, the employees will use what we call enterprise and agentic AI and software planning systems and tools, and we have some really exciting things to share with you guys at GTC. Those agentic systems are for employees, to make employees more productive, to design, to market, to plan, to operate their company. That's agentic AI.

    讓我給你舉個例子。當我們說企業時-讓我們以汽車公司為例,因為他們既生產軟體產品,也生產硬體產品。因此,對於一家汽車公司來說,員工就是我們所說的企業和代理人工智慧以及軟體規劃系統和工具,我們在 GTC 上有一些非常令人興奮的事情要與大家分享,這些代理系統是為員工服務的,旨在提高員工的工作效率、進行設計、制定市場計劃和運營公司。這就是代理人工智慧。

On the other hand, the cars that they manufacture also need AI. They need an AI system that trains the cars and treats this entire giant fleet of cars. And today, there are some billion cars on the road. Someday, there will be a billion cars on the road, and every single one of those cars will be robotic cars, and they'll all be collecting data, and we'll be improving them using an AI factory; whereas they have a factory today, in the future they'll have a car factory and an AI factory.

    另一方面,他們生產的汽車也需要人工智慧。他們需要一個人工智慧系統來訓練汽車,處理整個龐大的車隊。如今,道路上行駛的汽車數量已達數十億輛。有一天,路上將有十億輛汽車,每一輛都是機器人汽車,它們都將收集數據,我們將使用人工智慧工廠來改進它們;今天他們有一個工廠,而未來他們將擁有一個汽車工廠和一個人工智慧工廠。

And then inside the car itself is a robotic system. And so, as you can see, there are three computers involved. There's the computer that helps the people. There's the computer that builds the AI for the machinery, which could, of course, be a tractor; it could be a lawn mower. It could be a humanoid robot that's being developed today. It could be a building; it could be a warehouse.

汽車內部本身就是一個機器人系統。如您所見,有三台電腦參與其中。有幫助人們工作的電腦。有為機械建構人工智慧的電腦,這些機械當然可以是拖拉機,也可以是割草機。它可能是當今正在開發的人形機器人。它可能是一棟建築物;也可能是倉庫。

These physical systems require a new type of AI we call physical AI. They can't just understand the meaning of words and language; they have to understand the meaning of the world: friction and inertia, object permanence, cause and effect. All of those types of things that are common sense to you and I, but AIs have to go learn those physical effects. So we call that physical AI.

    這些實體系統需要我們稱為實體人工智慧的新型人工智慧。他們不能僅僅理解字詞和語言的意義,還必須理解世界、摩擦和慣性、物體永久性及其因果關係的意思。所有這些事情對你我來說都是常識,但人工智慧必須學習這些物理效應。所以我們稱之為物理人工智慧。

That whole part of using agentic AI to revolutionize the way we work inside companies is just starting. This is now the beginning of the agentic AI era, and you hear a lot of people talking about it, and we've got some really great things going on. And then there's physical AI after that, and then there are robotic systems after that.

    使用代理人工智慧來徹底改變公司內部的工作方式,這才剛開始。現在正是代理 AI 時代的開始,你會聽到很多人談論它,而且我們也取得了一些非常棒的成就。然後是物理人工智慧,然後是機器人系統。

And so, these three computers are all brand new. And my sense is that long term, this will be by far the largest of them all, which kind of makes sense. The world's GDP is representing -- represented by either heavy industries or industrials and the companies that are providing for those.

    所以,這三台電腦都是全新的。我的感覺是,從長遠來看,這將是迄今為止規模最大的一個,這也是有道理的。世界 GDP 是由重工業或工業以及為其提供服務的公司所代表的。

  • Operator

    Operator

  • Aaron Rakers, Wells Fargo.

    富國銀行的 Aaron Rakers。

  • Aaron Rakers - Analyst

    Aaron Rakers - Analyst

Jensen, I'm curious as we now approach the two-year anniversary of the Hopper inflection that you saw in 2023 with gen AI in general. And when we think about the road map you have in front of us, how do you think about the infrastructure that's been deployed from a replacement-cycle perspective? And whether it's GB300 or the Rubin cycle where we start to see maybe some refresh opportunity? I'm just curious how you look at that.

詹森,我很好奇,我們現在即將迎來您在 2023 年於生成式 AI 領域看到的 Hopper 拐點的兩週年。當我們考慮您面前的路線圖時,您如何從更換週期的角度看待已部署的基礎架構?無論是 GB300 還是 Rubin 週期,我們是否會開始看到一些更新換代的機會?我只是好奇您是如何看待這件事的。

  • Jensen Huang - Founder, President, Chief Executive Officer

    Jensen Huang - Founder, President, Chief Executive Officer

I appreciate it. First of all, people are still using Voltas and Pascals and Amperes. And the reason for that is because there are always things that -- because CUDA is so programmable, you could use it -- right now, one of the major use cases is data processing and data curation. You find a circumstance that an AI model is not very good at. You present that circumstance to a vision language model. Let's say it's a car. You present that circumstance to a vision language model.

我很感激。首先,人們仍在使用 Volta、Pascal 和 Ampere。原因在於,總是有一些事情——因為 CUDA 具有高度可編程性,所以您可以使用它——目前的主要用例之一就是資料處理和資料管理。你發現 AI 模型不太擅長處理某種情況。你將該情況呈現給視覺語言模型,假設它是一輛汽車。您將該情況呈現給視覺語言模型。

The vision language model actually looks at the circumstance and says, "This is what happened, and I wasn't very good at it." You then take that response to the prompt, and you go and prompt an AI model to go find, in your whole lake of data, other circumstances like that, whatever that circumstance was. And then you use an AI to do domain randomization and generate a whole bunch of other examples.

    視覺語言模型實際上會觀察當時的情況,然後說,這就是發生的事情,而我對此並不擅長。然後,你會根據提示做出回應,並提示 AI 模型在整個資料湖中尋找類似的其他情況,無論情況是什麼。然後,您使用 AI 進行域隨機化並產生一大堆其他範例。

And then from that, you can go train the model. And so, you could use Amperes to go and do data processing and data curation and machine learning-based search. And then you create the training data set, which you then present to your Hopper systems for training.

    然後,您就可以開始訓練模型了。因此,您可以使用安培來進行資料處理、資料管理和基於機器學習的搜尋。然後建立訓練資料集,並將其提供給 Hopper 系統進行訓練。

And so, each one of these architectures is completely -- they're all CUDA-compatible, and so everything runs on everything. But if you have infrastructure in place, then you can put the less intensive workloads onto the installed base of the past. All of our GPUs are very well employed.

因此,這些架構中的每一個都完全——它們都與 CUDA 相容,因此所有軟體都可以在所有架構上運行。但如果您已經擁有既有的基礎設施,那麼您可以將不太密集的工作負載放到過去已安裝的基礎上。我們所有的 GPU 都得到了很好的利用。

  • Operator

    Operator

  • Atif Malik, Citi.

    花旗銀行的阿蒂夫馬利克 (Atif Malik)。

  • Atif Malik - Analyst

    Atif Malik - Analyst

I have a follow-up question on gross margins for Colette. Colette, I understand there are many moving parts: Blackwell yields, NVLink 72 and Ethernet mix. And you kind of hinted at the earlier question that the April quarter is the bottom, but second half would have to ramp like 200 basis points per quarter to get to the mid-70s range that you're giving for the end of the fiscal year. And we still don't know much about tariffs' impact on the broader semiconductor industry. So what kind of gives you the confidence in that trajectory in the back half of this year?

    我有一個關於 Colette 毛利率的後續問題。科萊特,我知道 Blackwell 有很多活動部件,NVLink 72 和乙太網路混合。您剛才提到了先前的問題,四月份季度是最低點,但下半年必須每季增加 200 個基點才能達到您給出的財年末 70 年代中期的水平。我們還不太了解關稅對更廣泛的半導體產業的影響。那麼是什麼讓您對今年下半年的發展軌跡充滿信心呢?

  • Colette Kress - Chief Financial Officer, Executive Vice President

    Colette Kress - Chief Financial Officer, Executive Vice President

Yeah. Thanks for the question. Our gross margins are quite complex in terms of the materials and everything that we put together in a Blackwell system. There is a tremendous amount of opportunity to look at a lot of different pieces of that, on how we can better improve our gross margins over time. Remember, we have many different configurations on Blackwell as well that will be able to help us do that.

    是的。謝謝你的提問。我們的毛利率在材料和我們在 Blackwell 系統中組合的所有東西方面都相當複雜,有大量的機會可以研究其中的許多不同部分,以了解如何隨著時間的推移更好地提高我們的毛利率。請記住,Blackwell 上也有許多不同的配置可以幫助我們做到這一點。

So together, after we get some of these really strong ramps completed for our customers, we can begin a lot of that work. If not, we're going to probably start as soon as possible if we can. And if we can improve it in the short term, we will also do that.

    因此,在我們共同努力為客戶完成一些真正強大的提升之後,我們就可以開始很多這樣的工作了。如果沒有的話,我們可能會盡快開始。如果我們能在短期內改善它,我們也會這樣做。

Tariffs at this point are a little bit of an unknown. It's an unknown until we understand further what the US government's plan is -- its timing, its where, and how much. So at this time, we are awaiting, but again, we would, of course, always follow export controls and/or tariffs in that manner.

    目前關稅還有些未知。在我們進一步了解美國政府的計劃(包括時間、地點和規模)之前,這都是未知的。因此,目前我們正在等待,但當然,我們始終以這種方式遵守出口管制和/或關稅。

  • Operator

    Operator

  • Ladies and gentlemen, that does conclude our question-and-answer session -- I'm sorry.

    女士們、先生們,我們的問答環節到此結束——很抱歉。

  • Jensen Huang - Founder, President, Chief Executive Officer

    Jensen Huang - Founder, President, Chief Executive Officer

  • Thank you. No, no --

    謝謝。不,不--

  • Colette Kress - Chief Financial Officer, Executive Vice President

    Colette Kress - Chief Financial Officer, Executive Vice President

  • We are going to open up to Jensen. I believe he has a couple of things.

接下來我們請 Jensen 發言。我相信他有幾點想說。

  • Jensen Huang - Founder, President, Chief Executive Officer

    Jensen Huang - Founder, President, Chief Executive Officer

  • I just wanted to thank you. Thank you, Colette. The demand for Blackwell is extraordinary. AI is evolving beyond perception and generative AI into reasoning. With reasoning AI, we're observing another scaling law, inference time or test time scaling, more computation.

    我只是想謝謝你。謝謝你,科萊特。布萊克威爾的需求量非常大。人工智慧正在從感知和產生人工智慧向推理人工智慧演進。透過推理人工智慧,我們觀察到另一個縮放定律,推理時間或測試時間縮放,更多的計算。

The more the model thinks, the smarter the answer. Models like OpenAI's, Grok 3, and DeepSeek-R1 are reasoning models that apply inference-time scaling. Reasoning models can consume 100 times more compute. Future reasoning models can consume much more compute. DeepSeek-R1 has ignited global enthusiasm. It's an excellent innovation. But even more importantly, it has open-sourced a world-class reasoning AI model.

    模型思考越多,答案就越聰明。OpenAI、Grok 3、DeepSeek-R1 等模型是應用推理時間縮放的推理模型。推理模型可能消耗 100 倍以上的計算量。未來的推理模型可以消耗更多的計算。DeepSeek-R1點燃了全球愛好者的熱情。這是一個出色的創新。但更重要的是,它已經開源,擁有世界一流的推理AI模型。

Nearly every AI developer is applying R1, or chain-of-thought and reinforcement learning techniques like R1's, to scale their model's performance. We now have three scaling laws, as I mentioned earlier, driving the demand for AI computing. The traditional scaling law of AI remains intact. Foundation models are being enhanced with multimodality, and pretraining is still growing.

幾乎每個人工智慧開發人員都在應用 R1,或類似 R1 的思維鏈和強化學習技術,來擴展其模型的效能。正如我之前提到的,我們現在有三個擴展定律,推動著對人工智慧運算的需求。人工智慧的傳統擴展定律依然有效。基礎模型正在透過多模態性得到增強,並且預訓練仍在不斷發展。

But it's no longer enough. We have two additional scaling dimensions. Post-training scaling, where reinforcement learning, fine-tuning, and model distillation require orders of magnitude more compute than pretraining alone. Inference-time scaling and reasoning, where a single query can demand 100 times more compute. We designed Blackwell for this moment, a single platform that can easily transition between pre-training, post-training, and test-time scaling.

但這已經不夠了。我們還有兩個額外的擴展維度。訓練後擴展(post-training scaling),其中強化學習、微調、模型蒸餾需要比單獨的預訓練多幾個數量級的計算。推理時間擴展與推理,其中單一查詢可能需要多 100 倍的計算。我們正是為這一刻設計了 Blackwell,它是一個可以輕鬆在預訓練、後訓練和測試時間擴展之間轉換的單一平台。

Blackwell's FP4 transformer engine, NVLink 72 scale-up fabric, and new software technologies let Blackwell process reasoning AI models 25x faster than Hopper. Blackwell in all of its configurations is in full production. Each Grace Blackwell NVLink 72 rack is an engineering marvel: 1.5 million components produced across 350 manufacturing sites by nearly 100,000 factory operators.

Blackwell 的 FP4 Transformer 引擎、NVLink 72 擴展結構以及新軟體技術,使 Blackwell 處理推理 AI 模型的速度比 Hopper 快 25 倍。Blackwell 的所有配置均已全面投入生產。每個 Grace Blackwell NVLink 72 機架都是一個工程奇蹟:近 10 萬名工廠作業員在 350 個製造基地生產了 150 萬個組件。

  • AI is advancing at light speed. We're at the beginning of reasoning AI and inference time scaling. But we're just at the start of the age of AI, multimodal AIs, enterprise AI, sovereign AI and physical AI are right around the corner.

    人工智慧正在以光速前進。我們正處於推理人工智慧和推理時間擴展的開始階段。但我們才剛處於人工智慧時代的開始,多模式人工智慧、企業人工智慧、主權人工智慧和實體人工智慧即將到來。

  • We will grow strongly in 2025. Going forward, data centers will dedicate most of CapEx to accelerated computing and AI. Data centers will increasingly become AI factories, and every company will have them either rented or self-operated.

    2025年我們將實現強勁成長。展望未來,資料中心將把大部分資本支出用於加速運算和人工智慧。資料中心將日益成為人工智慧工廠,每家公司都會租用或自營資料中心。

  • I want to thank all of you for joining us today. Come join us at GTC in a couple of weeks. We're going to be talking about Blackwell Ultra, Rubin and other new computing, networking, reasoning AI, physical AI products and a whole bunch more. Thank you.

    我要感謝大家今天的出席。幾週後歡迎來參加我們的 GTC 活動。我們將討論 Blackwell Ultra、Rubin 和其他新運算、網路、推理 AI、實體 AI 產品等等。謝謝。

  • Operator

    Operator

  • This concludes today's conference call. You may now disconnect.

    今天的電話會議到此結束。您現在可以斷開連線。