NVIDIA (NVDA) Q3 FY2025 Earnings Call Transcript

Summary

Driven by growth in data center, cloud, and consumer internet, NVIDIA reported record revenue for the third quarter of fiscal 2025. The company's AI enterprise platform is being adopted across enterprise and industrial sectors, with revenue expected to more than double. Demand for Blackwell AI infrastructure is high, and production is ramping.

NVIDIA is optimistic about the fourth quarter, guiding to revenue of $37.5 billion. The company is focused on improving performance and lowering costs for customers, and plans to reach mid-70s gross margins as Blackwell fully ramps. It is working through supply constraints and the shift toward machine learning and AI computing.
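The Q4 guidance figures cited in this call imply the following ranges; a minimal sketch in Python (the midpoints and tolerances are NVIDIA's, the range arithmetic is ours):

```python
# Implied Q4 FY2025 guidance ranges from the midpoints stated on the call.

def guidance_range(midpoint: float, tolerance: float) -> tuple[float, float]:
    """Return (low, high) for a midpoint +/- a fractional tolerance."""
    return midpoint * (1 - tolerance), midpoint * (1 + tolerance)

# Revenue guidance: $37.5 billion, plus or minus 2%
rev_low, rev_high = guidance_range(37.5, 0.02)  # roughly 36.75 to 38.25, in $B

# Non-GAAP gross margin guidance: 73.5%, plus or minus 50 basis points
gm_low, gm_high = 73.5 - 0.5, 73.5 + 0.5  # 73.0% to 74.0%

print(f"Revenue: ${rev_low:.2f}B - ${rev_high:.2f}B")
print(f"Non-GAAP gross margin: {gm_low:.1f}% - {gm_high:.1f}%")
```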

NVIDIA is well positioned to capitalize on opportunities in the AI and robotics markets.

Full Transcript

  • Operator

  • Good afternoon. My name is JL, and I'll be your conference operator today. At this time, I would like to welcome everyone to NVIDIA's third-quarter earnings call. (Operator Instructions)

  • Thank you. Stewart Stecker, you may begin your conference.

  • Stewart Stecker - Investor Relations

  • Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the third quarter of fiscal 2025. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer.

  • I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the fourth quarter of fiscal 2025. The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without our prior written consent.

  • During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission.

  • All our statements are made as of today, November 20, 2024, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements.

  • During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website.

  • With that, let me turn the call over to Colette.

  • Colette Kress - Chief Financial Officer, Executive Vice President

  • Thank you, Stewart. Q3 was another record quarter. We continue to deliver incredible growth. Revenue of $35.1 billion was up 17% sequentially and up 94% year on year and well above our outlook of $32.5 billion. All market platforms posted strong sequential and year-over-year growth, fueled by the adoption of NVIDIA accelerated computing and AI.

  • Starting with data center. Another record was achieved in data center. Revenue of $30.8 billion was up 17% sequentially and up 112% year on year. NVIDIA Hopper demand is exceptional. And sequentially, NVIDIA H200 sales increased significantly to double-digit billions, the fastest product ramp in our company's history. The H200 delivers up to 2x faster inference performance and up to [50%] improved TCO.

  • Cloud service providers were approximately half of our data center sales, with revenue increasing more than 2x year on year. CSPs deployed NVIDIA H200 infrastructure and high-speed networking, with installations scaling to tens of thousands of GPUs, to grow their business and serve rapidly rising demand for AI training and inference workloads. NVIDIA H200-powered cloud instances are now available from AWS, CoreWeave, and Microsoft Azure, with Google Cloud and OCI coming soon.

  • Alongside significant growth from our large CSPs, NVIDIA GPU regional cloud revenue jumped 2x year on year as North America, EMEA, and Asia-Pacific regions ramped NVIDIA cloud instances and sovereign cloud build-outs. Consumer internet revenue more than doubled year on year as companies scale their NVIDIA Hopper infrastructure to support next-generation AI models, training multi-modal and agentic AI, deep learning recommender engines, and generative AI inference and content creation workloads.

  • NVIDIA's Ampere and Hopper infrastructures are fueling inference revenue growth for customers. NVIDIA is the largest inference platform in the world. Our large installed base and rich software ecosystem encourage developers to optimize for NVIDIA and deliver continued performance and TCO improvements. Rapid advancements in NVIDIA software algorithms boosted Hopper inference throughput by an incredible 5x in one year and cut time to first token by 5x.

  • Our upcoming release of NVIDIA NIM will boost Hopper inference performance by an additional 2.4x. Continuous performance optimizations are a hallmark of NVIDIA and drive increasingly economic returns for the entire NVIDIA installed base.

  • Blackwell is in full production after a successfully executed mask change. We shipped 13,000 GPU samples to customers in the third quarter including one of the first Blackwell DGX engineering samples to OpenAI.

  • Blackwell is a full-stack, full-infrastructure AI data center scale system with customizable configurations needed to address a diverse and growing AI market from x86 to ARM, training to inferencing GPUs, InfiniBand to Ethernet switches, and NVLink, and from liquid-cooled to air-cooled. Every customer is racing to be the first to market. Blackwell is now in the hands of all of our major partners, and they are working to bring up their data centers.

  • We are integrating Blackwell systems into the diverse data center configurations of our customers. Blackwell demand is staggering, and we are racing to scale supply to meet the incredible demand customers are placing on us. Customers are gearing up to deploy Blackwell at scale.

  • Oracle announced the world's first Zettascale AI cloud computing clusters that can scale to over 131,000 Blackwell GPUs to help enterprises train and deploy some of the most demanding next-generation AI models. Yesterday, Microsoft announced they will be the first CSP to offer in private preview, Blackwell-based cloud instances powered by NVIDIA GB200 and Quantum InfiniBand.

  • Last week, Blackwell made its debut on the most recent round of MLPerf training results, sweeping the per-GPU benchmarks and delivering a 2.2x leap in performance over Hopper. The results also demonstrate our relentless pursuit to drive down the cost of compute. Just 64 Blackwell GPUs are required to run the GPT-3 benchmark compared to 256 H100s, or a 4x reduction in cost. The NVIDIA Blackwell architecture with NVLink Switch enables up to 30x faster inference performance and a new level of inference scaling throughput and response time that is excellent for running new reasoning inference applications like OpenAI's o1 model.

  • With every new platform shift, a wave of start-ups is created. Hundreds of AI-native companies are already delivering AI services with great success. Though Google, Meta, Microsoft, and OpenAI are the headliners, Anthropic, Perplexity, [Blackstraw], Adobe Firefly, Runway, Midjourney, Lightricks, Harvey, Podium, Cursor, and Abridge are seeing great success, while thousands of AI-native startups are building new services.

  • The next wave of AI is enterprise AI and industrial AI. Enterprise AI is in full throttle. NVIDIA AI Enterprise, which includes NVIDIA NeMo and NIM microservices, is an operating platform of agentic AI. Industry leaders are using NVIDIA AI to build copilots and agents. Working with NVIDIA, Cadence, Cloudera, Cohesity, NetApp, Nutanix, Salesforce, SAP, and ServiceNow are racing to accelerate development of these applications, with the potential for billions of agents to be deployed in the coming years.

  • Consulting leaders like Accenture and Deloitte are taking NVIDIA AI to the world's enterprises. Accenture launched a new business group with 30,000 professionals trained on NVIDIA AI technology to help facilitate this global build out. Additionally, Accenture with over 770,000 employees is leveraging NVIDIA-powered agentic AI applications internally, including one case that cuts manual steps in marketing campaigns by 25% to 35%.

  • Nearly 1,000 companies are using NVIDIA NIM, and the speed of its uptake is evident in NVIDIA AI Enterprise monetization. We expect NVIDIA AI Enterprise full-year revenue to increase over 2x from last year, and our pipeline continues to build. Overall, our software, service, and support revenue is annualizing at $1.5 billion, and we expect to exit this year annualizing at over $2 billion.

  • Industrial AI and robotics are accelerating. This is triggered by breakthroughs in physical AI, foundation models that understand the physical world. Like NVIDIA NeMo for enterprise AI agents, we built NVIDIA Omniverse for developers to build, train, and operate industrial AI and robotics. Some of the largest industrial manufacturers in the world are adopting NVIDIA Omniverse to accelerate their businesses, automate their workflows, and to achieve new levels of operating efficiency.

  • Foxconn, the world's largest electronics manufacturer, is using digital twins and industrial AI built on NVIDIA Omniverse to speed the bring-up of its Blackwell factories and drive new levels of efficiency. In its Mexico facility alone, Foxconn expects a reduction of over 30% in annual kilowatt-hour usage.

  • From a geographic perspective, our data center revenue in China grew sequentially due to shipments of export-compliant Hopper products to industries. As a percentage of total data center revenue, it remains well below levels prior to the onset of export controls. We expect the market in China to remain very competitive going forward. We will continue to comply with export controls while serving our customers.

  • Our sovereign AI initiatives continue to gather momentum as countries embrace NVIDIA accelerated computing for a new industrial revolution powered by AI. India's leading CSPs, including Tata Communications and Yotta Data Services, are building AI factories for tens of thousands of NVIDIA GPUs. By year end, they will have boosted NVIDIA GPU deployment in the country by nearly 10x. Infosys, TCS, and Wipro are adopting NVIDIA AI Enterprise and up-skilling nearly half a million developers and consultants to help clients build and run AI agents on our platform.

  • In Japan, SoftBank is building the nation's most powerful AI supercomputer with NVIDIA DGX Blackwell and Quantum InfiniBand. SoftBank is also partnering with NVIDIA to transform the telecommunications network into a distributed AI network with the NVIDIA AI Aerial and AI-RAN platform that can process both 5G RAN and AI on CUDA. We are launching the same in the US with T-Mobile. Leaders across Japan, including Fujitsu, NEC, and NTT, are adopting NVIDIA AI Enterprise, and major consulting companies, including EY Strategy and Consulting, will help bring NVIDIA AI technology to Japan's industries.

  • Networking revenue increased 20% year on year. Areas of sequential revenue growth included InfiniBand and Ethernet switches, SmartNICs, and BlueField DPUs. While networking revenue was sequentially down, networking demand is strong and growing, and we anticipate sequential growth in Q4.

  • CSPs and supercomputing centers are using and adopting the NVIDIA InfiniBand platform to power new H200 clusters. NVIDIA's Spectrum-X Ethernet for AI revenue increased over 3x year on year. And our pipeline continues to build with multiple CSPs and consumer internet companies planning large cluster deployments.

  • Traditional Ethernet was not designed for AI. NVIDIA Spectrum-X uniquely leverages technology previously exclusive to InfiniBand to enable customers to achieve massive scale of their GPU compute. Utilizing Spectrum-X, xAI's Colossus 100,000-Hopper supercomputer experienced zero application latency degradation and maintained 95% data throughput, versus 60% for traditional Ethernet.

  • Now moving to gaming and AI PCs. Gaming revenue of $3.3 billion increased 14% sequentially and 15% year on year. Q3 was a great quarter for gaming with notebook, console, and desktop revenue all growing sequentially and year on year. RTX end demand was fueled by strong back-to-school sales as consumers continue to choose GeForce RTX GPUs and devices to power gaming, creative, and AI applications.

  • Channel inventory remains healthy, and we are gearing up for the holiday season. We began shipping new GeForce RTX AI PCs with up to 321 AI TOPS from Asus and MSI with Microsoft's Copilot+ capabilities anticipated in Q4. These machines harness the power of RTX ray tracing and AI technologies to supercharge gaming, photo and video editing, image generation, and coding.

  • This past quarter, we celebrated the 25th anniversary of the GeForce 256, the world's first GPU. From transforming (technical difficulty) graphics to igniting the AI revolution, NVIDIA's GPUs have been the driving force behind some of the most consequential technologies of our time.

  • Moving to pro viz, revenue of $486 million was up 7% sequentially and 17% year on year. NVIDIA RTX workstations continue to be the preferred choice to power professional graphics, design, and engineering-related workloads. Additionally, AI is emerging as a powerful demand driver, including autonomous vehicle simulation, generative AI model prototyping for productivity-related use cases, and generative AI content creation in media and entertainment.

  • Moving to automotive, revenue was a record $449 million, up 30% sequentially and up 72% year on year. Strong growth was driven by self-driving ramps of NVIDIA Orin and robust end-market demand for NEVs. Volvo Cars is rolling out its fully electric SUV built on NVIDIA Orin and DriveOS.

  • Okay. Moving to the rest of the P&L, GAAP gross margin was 74.6% and non-GAAP gross margin was 75%, down sequentially, primarily driven by a mix shift from H100 systems to more complex and higher-cost systems within data center. Sequentially, GAAP operating expenses and non-GAAP operating expenses were up 9% due to higher compute, infrastructure, and engineering development costs for new product introductions.

  • In Q3, we returned $11.2 billion to shareholders in the form of share repurchases and cash dividends.

  • Well, let me turn to the outlook for the fourth quarter. Total revenue is expected to be $37.5 billion, plus or minus 2%, which incorporates continued demand for the Hopper architecture and the initial ramp of our Blackwell products. While demand greatly exceeds supply, we are on track to exceed our previous Blackwell revenue estimate of several billion dollars as our visibility into supply continues to increase. On gaming, although sell-through was strong in Q3, we expect fourth-quarter revenue to decline sequentially due to supply constraints.

  • GAAP and non-GAAP gross margins are expected to be 73% and 73.5% respectively, plus or minus 50 basis points. Blackwell is a customizable AI infrastructure with seven different types of NVIDIA-built chips, multiple networking options, and configurations for air- and liquid-cooled data centers.

  • Our current focus is on ramping to strong demand, increasing system availability, and providing the optimal mix of configurations to our customers. As Blackwell ramps, we expect gross margins to moderate to the low 70s. When fully ramped, we expect Blackwell margins to be in the mid-70s.

  • GAAP and non-GAAP operating expenses are expected to be approximately $4.8 billion and $3.4 billion respectively. We are a data center scale AI infrastructure company. Our investments include building data centers for development of our hardware and software stacks and to support new introductions.

  • GAAP and non-GAAP other income and expenses are expected to be an income of approximately $400 million, excluding gains and losses from non-affiliated investments. GAAP and non-GAAP tax rates are expected to be 16.5% plus or minus 1%, excluding any discrete items. Further financial details are included in the CFO commentary and other information available on our IR websites.

  • In closing, let me highlight upcoming events for the financial community. We will be attending the UBS Global Technology and AI conference on December 3 in Scottsdale. Please join us at CES in Las Vegas where Jensen will deliver a keynote on January 6, and we will host a Q&A session for financial analysts the next day on January 7. Our earnings call to discuss results for the fourth quarter of fiscal 2025 is scheduled for February 26, 2025.

  • We will now open the call for questions. Operator, can you poll for questions please?

  • Operator

  • (Operator Instructions)

  • C.J. Muse, Cantor Fitzgerald.

  • CJ Muse - Analyst

  • Good afternoon. Thank you for taking the question. I guess just a question for you on the debate around whether scaling for large language models has stalled. Obviously, we're very early here, but would love to hear your thoughts on this front. How are you helping your customers as they work through these issues?

  • And then obviously part of the context here is we're discussing clusters that have yet to benefit from Blackwell. So is this driving even greater demand for Blackwell? Thank you.

  • Jen-hsun Huang - President, Chief Executive Officer, Director

  • Foundation model pre-training scaling is intact, and it's continuing. As you know, this is an empirical law, not a fundamental physical law. But the evidence is that it continues to scale. What we're learning, however, is that it's not enough, and that we've now discovered two other ways to scale. One is post-training scaling. Of course, the first generation of post-training was reinforcement learning from human feedback, but now we have reinforcement learning from AI feedback and all forms of synthetic data generation -- data that assists in post-training scaling.

  • And one of the biggest events and one of the most exciting developments is Strawberry -- ChatGPT o1, or OpenAI's o1 -- which does inference-time scaling, what's called test-time scaling. The longer it thinks, the better and higher-quality answer it produces, and it considers approaches like chain of thought and multi-path planning and all kinds of techniques necessary to reflect, and so on and so forth. And intuitively, it's a little bit like us thinking in our head before we answer a question.

  • And so we now have three ways of scaling, and we're seeing all three ways of scaling. And as a result of that, the demand for our infrastructures is really great.

  • You see now that at the tail-end of the last generation of foundation models, we're at about 100,000 Hoppers. The next generation starts at 100,000 Blackwells. And so that kind of gives you a sense of where the industry is moving with respect to pre-training scaling, post-training scaling, and then now, very importantly, inference time scaling. And so the demand is really great for all of those reasons.

  • But remember, simultaneously, we're seeing inference really starting to scale up for our company. We are the largest inference platform in the world today because our installed base is so large. And everything that was trained on Amperes and Hoppers inferences incredibly on Amperes and Hoppers. And as we move to Blackwells for training foundation models, it leaves behind a large installed base of extraordinary infrastructure for inference.

  • And so we're seeing inference demand go up. We're seeing inference-time scaling go up. We see the number of AI-native companies continue to grow. And of course, we're starting to see enterprise adoption of agentic AI, which really is the latest rage. And so we're seeing a lot of demand coming from a lot of different places.

  • Operator

  • Toshiya Hari, Goldman Sachs.

  • Toshiya Hari - Analyst

  • Hi, good afternoon. Thank you so much for taking the question. Jensen, you executed the mask change earlier this year. There were some reports over the weekend about some heating issues. On the back of this, we've had investors ask about your ability to execute to the road map you presented at GTC this year with Ultra coming out next year and the transition to Rubin in '26.

  • Can you sort of speak to that? And some investors are questioning that. So if you can sort of speak to your ability to execute on time, that would be super helpful.

  • And then a quick part B, on supply constraints: is it a multitude of componentry that's causing this, or is it specifically CoWoS or HBM? Are the supply constraints getting better? Are they worsening? Any sort of color on that would be super helpful as well. Thank you.

  • Jen-hsun Huang - President, Chief Executive Officer, Director

  • Yeah. Thanks. So let's see. Back to the first question, Blackwell production is in full steam. In fact, as Colette mentioned earlier, we will deliver this quarter more Blackwells than we had previously estimated. And so the supply chain team is doing an incredible job of working with our supply partners to increase Blackwell, and we're going to continue to work hard to increase Blackwell through next year.

  • It is the case that demand exceeds our supply, and that's expected as we're in the beginnings of this generative AI revolution as we all know. And we're at the beginning of a new generation of foundation models that are able to do reasoning and able to long thinking. And of course, one of the really exciting areas is physical AI, AI that now understands the structure of the physical world.

  • And so Blackwell demand is very strong. Our execution is going well. And there's obviously a lot of engineering that we're doing across the world. You see now systems that are being stood up by Dell and CoreWeave. I think you saw systems from Oracle stood up. You have systems from Microsoft, and they're about to preview their Grace Blackwell systems. You have systems that are at Google.

  • And so all of these CSPs are racing to be first. The engineering that we do with them is, as you know, rather complicated. And the reason for that is because although we build full stack and full infrastructure, we disaggregate all of this AI supercomputer, and we integrate it into all of the custom data centers in architectures around the world. That integration process is something we've done several generations now.

  • We're very good at it, but still, there's still a lot of engineering that happens at this point. But as you see from all of the systems that are being stood up, Blackwell is in great shape. And as we mentioned earlier, the supply and what we're planning to ship this quarter is greater than our previous estimates.

  • With respect to the supply chain, there are seven different chips -- seven custom chips that we built in order for us to deliver the Blackwell systems. The Blackwell systems come air-cooled or liquid-cooled, in NVLink 8, NVLink 36, or NVLink 72 configurations, with x86 or Grace. And the integration of all of those systems into the world's data centers is nothing short of a miracle.

  • And so the component supply chain necessary to ramp at this scale -- you have to go back and take a look at how much Blackwell was shipped last quarter, which was zero, and how much Blackwell total systems we'll ship this quarter, which is measured in billions -- the ramp is incredible. And so almost every company in the world seems to be involved in our supply chain, and we've got great partners: everybody from, of course, TSMC and Amphenol, the connector company, incredible company, Vertiv, and SK hynix and Micron, SPIL, Amkor, and KYEC, and there's Foxconn and the factories that they've built, and Quanta and Wiwynn and -- gosh, Dell and HP and Supermicro, Lenovo -- the number of companies is just really quite incredible. And I'm sure I've missed partners that are involved in the ramping of Blackwell, which I really appreciate. And so anyways, I think we're in great shape with respect to the Blackwell ramp at this point.

    因此,要以這種規模爬升所需的零組件供應鏈,您必須回頭看看上個季度 Blackwell 的出貨量(為零),以及本季我們將出貨的 Blackwell 系統總量(以數十億美元計),這樣的爬升令人難以置信。世界上幾乎每家公司似乎都參與了我們的供應鏈,而且我們擁有出色的合作夥伴:當然有台積電(TSMC)、連接器公司安費諾(Amphenol)、令人讚嘆的維諦(Vertiv)、SK 海力士和美光、矽品(SPIL)、艾克爾(Amkor)和京元電(KYEC),還有富士康和他們建造的工廠,還有廣達和緯穎(Wiwynn),以及戴爾、惠普、美超微、聯想,公司的數量實在多得驚人。我相信一定還漏掉了一些參與 Blackwell 爬升的合作夥伴,在此致上感謝。總之,我認為我們目前在 Blackwell 爬升方面狀態良好。

  • And then lastly, your question about our execution of our road map: we're on an annual road map, and we're expecting to continue to execute on that annual road map. And by doing so, we increase the performance, of course, of our platform. But it's also really important to realize that when we're able to increase performance, and do so by X-factors at a time, we're reducing the cost of training, we're reducing the cost of inferencing, and we're reducing the cost of AI so that it can be much more accessible.

    最後,關於我們執行路線圖的問題:我們採用年度路線圖,並且預期會繼續按照年度路線圖執行。透過這樣做,我們當然能提升平台的效能。但同樣重要的是要認識到,當我們能夠一次將效能提升好幾倍時,我們就降低了訓練成本、降低了推理成本,也降低了 AI 的成本,使其更容易取得。

  • But the other factor that's very important to note is that when there's a data center of some fixed size -- and the data center always is of some fixed size. It could be, of course, tens of megawatts in the past, and now it's -- most data centers are now 100 megawatts to several 100 megawatts, and we're planning on gigawatt data centers. It doesn't really matter how large the data centers are; the power is limited. And when you're in a power-limited data center, the best -- the highest performance per watt translates directly into the highest revenues for our partners.

    但另一個需要注意的重要因素是,資料中心總是有某種固定的規模。過去當然可能是幾十兆瓦,現在大多數資料中心是 100 兆瓦到數百兆瓦,我們也正在規劃千兆瓦級的資料中心。資料中心有多大並不重要,功率是有限的。當您身處功率受限的資料中心時,最高的每瓦效能將直接轉化為我們合作夥伴的最高收入。

  • And so on the one hand, our annual road map reduces cost, but on the other hand, because our Perf per watt is so good compared to anything out there, we generate for our customers, the greatest possible revenues. And so that annual rhythm is really important to us, and we have every intention of continuing to do that. And everything's on track as far as I know.

    因此,一方面,我們的年度路線圖降低了成本,但另一方面,由於我們的每瓦性能比其他任何產品都好,我們為客戶創造了盡可能大的收入。因此,年度節奏對我們來說非常重要,我們完全打算繼續這樣做。就我所知,一切都步入正軌。
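
The revenue logic Huang describes for power-limited data centers can be put in simple arithmetic: with the power budget fixed, the tokens a facility can generate, and therefore the revenue, scale directly with performance per watt. A minimal sketch; the power budget, tokens-per-joule figures, and token price below are hypothetical illustrations, not NVIDIA numbers:

```python
# Illustrative sketch only: all constants are hypothetical assumptions.
# In a power-limited data center the power budget is fixed, so output
# (tokens, and hence revenue) scales with performance per watt.

def annual_revenue(power_budget_mw, tokens_per_joule, price_per_million_tokens):
    """Yearly revenue for a data center running flat out within its power budget."""
    seconds_per_year = 365 * 24 * 3600
    joules_per_year = power_budget_mw * 1e6 * seconds_per_year  # MW -> joules/year
    tokens_per_year = joules_per_year * tokens_per_joule
    return tokens_per_year / 1e6 * price_per_million_tokens     # USD/year

# Same 100 MW budget; generation B has 2.5x the perf per watt of generation A.
rev_a = annual_revenue(100, tokens_per_joule=10, price_per_million_tokens=0.5)
rev_b = annual_revenue(100, tokens_per_joule=25, price_per_million_tokens=0.5)
print(rev_b / rev_a)  # the revenue ratio tracks the perf-per-watt ratio
```

Under a fixed power envelope, improving perf per watt by some factor lifts token output, and so revenue, by the same factor; nothing else in the model has to change.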

  • Operator

    Operator

  • Timothy Arcuri, UBS.

    提摩西‧阿庫裡,瑞銀集團。

  • Timothy Arcuri - Analyst

    Timothy Arcuri - Analyst

  • Thanks a lot. I'm wondering if you can talk about the trajectory of how Blackwell is going to ramp this year. I know, Jensen, you did just talk about Blackwell being better than the several billions of dollars you had said in January; it sounds like you're going to do more than that. But I think in recent months also, you said that Blackwell crosses over Hopper in the April quarter. So I guess I had two questions.

    多謝。我想知道您能否談談 Blackwell 今年的爬升軌跡。Jensen,我知道您剛才談到 Blackwell 會優於您一月時所說的數十億美元,聽起來實際會超過那個數字。但我記得最近幾個月您也說過,Blackwell 會在四月那一季超越 Hopper。所以我有兩個問題。

  • First of all, is that still the right way to think about it, that Blackwell will cross over Hopper in April? And then, Colette, you kind of talked about Blackwell bringing down gross margin to the low 70s as it ramps. So I guess if April is the crossover, is that the worst of the pressure on gross margin? So you're going to be kind of in the low 70s as soon as April; I'm just wondering if you can sort of shape that for us. Thanks.

    首先,Blackwell 將在四月超越 Hopper,這仍然是正確的思考方式嗎?另外,Colette,您談過 Blackwell 在爬升期間會將毛利率拉低到 70% 出頭。那麼如果四月是交叉點,那是不是毛利率壓力最大的時候?也就是說毛利率最快在四月就會落在 70% 出頭,我想知道您能否為我們描繪一下。謝謝。

  • Jen-hsun Huang - President, Chief Executive Officer, Director

    Jen-hsun Huang - President, Chief Executive Officer, Director

  • Colette, why don't you start?

    科萊特,你為什麼不開始呢?

  • Colette Kress - Chief Financial Officer, Executive Vice President

    Colette Kress - Chief Financial Officer, Executive Vice President

  • So let me first start with your question; thank you, regarding our gross margins. We discussed that as we are ramping Blackwell in the very beginning, with the many different configurations and the many different chips that we are bringing to market, we are going to focus on making sure we have the best experience for our customers as they stand that up. We will start ramping our gross margins, but we do believe those will be in the low 70s in that first part of the ramp.

    那麼讓我先回答您的問題,謝謝您關於毛利率的提問。我們討論過,在 Blackwell 爬升的最初階段,由於我們推向市場的配置和晶片種類繁多,我們會專注於確保客戶在部署時獲得最佳體驗。我們的毛利率之後會開始回升,但我們確實認為,在爬升的第一階段,毛利率會落在 70% 出頭。

  • So you're correct, as you look at the quarters following after that, we will start increasing our gross margins and we hope to get to the mid-70s quite quickly as part of that ramp.

    所以您說得對,在那之後的幾個季度,我們會開始提高毛利率,並希望在爬升過程中很快達到 70% 中段。

  • Jen-hsun Huang - President, Chief Executive Officer, Director

    Jen-hsun Huang - President, Chief Executive Officer, Director

  • Hopper demand will continue through next year, certainly the first several quarters of next year. And meanwhile, we'll ship more Blackwells next quarter than this quarter, and we'll ship more Blackwells the quarter after that than in our first quarter.

    Hopper 的需求將持續到明年,至少持續到明年的前幾個季度。同時,我們下一季的 Blackwell 出貨量將超過本季,再下一季的出貨量又會超過第一季。

  • And so that kind of puts it in perspective. We are really at the beginnings of two fundamental shifts in computing that is really quite significant. And the first is moving from coding that runs on CPUs to machine learning that creates neural networks that runs on GPUs. And that fundamental shift from coding to machine learning is widespread at this point. There are no companies who are not going to do machine learning.

    這樣就可以正確地看待它。我們確實正處於計算領域兩個非常重要的根本轉變的開端。第一個是從在 CPU 上運行的編碼轉向創建在 GPU 上運行的神經網路的機器學習。從編碼到機器學習的根本性轉變在這一點上已經很普遍了。沒有一家公司不打算進行機器學習。

  • And so machine learning is also what enables Generative AI. And so on the one hand, the first thing that's happening is that $1 trillion worth of computing systems and data centers around the world is now being modernized for machine learning.

    因此,機器學習也是生成式人工智慧的基礎。因此,一方面,發生的第一件事是世界各地價值 1 兆美元的運算系統和資料中心正在進行機器學習現代化改造。

  • On the other hand, secondarily, I guess, is that on top of these systems we're going to be creating a new type of capability called AI. And when we say Generative AI, we're essentially saying that these data centers are really AI factories. They're generating something; just like we generate electricity, we're now going to be generating AI.

    另一方面,我想,在這些系統之上,我們將創造一種稱為人工智慧的新型功能。當我們說生成式人工智慧時,我們本質上是在說這些資料中心實際上是人工智慧工廠,它們正在發電,就像我們發電一樣,我們現在將產生人工智慧。

  • And if the number of customers is large, just as the number of consumers of electricity is large, these generators are going to be running 24/7. And today, many AI services are running 24/7 just like an AI factory. And so we're going to see this new type of system come online and I call it an AI factory because that's really as close to what it is. It's unlike a data center of the past.

    如果客戶數量很大,就像電力消費者數量很大一樣,這些發電機將 24/7 運行。如今,許多人工智慧服務就像人工智慧工廠一樣全天候(24/7)運作。因此,我們將看到這種新型系統上線,我稱之為人工智慧工廠,因為這確實非常接近它的本質。它與過去的數據中心不同。

  • And so these two fundamental trends are really just beginning. And so we expect this to happen, this growth, this modernization and the creation of a new industry to go on for several years.

    因此,這兩個基本趨勢實際上才剛開始。因此,我們預計這種情況會發生,這種成長、這種現代化和新行業的創建將持續數年。

  • Operator

    Operator

  • Vivek Arya, Bank of America Securities.

    Vivek Arya,美國銀行證券公司。

  • Vivek Arya - Analyst

    Vivek Arya - Analyst

  • Thanks for taking my question. Could I just to clarify, do you think it's a fair assumption to think NVIDIA could recover to kind of mid 70s gross margin in the back half of calendar '25? Just wanted to clarify that.

    感謝您回答我的問題。我想先澄清一下:您認為 NVIDIA 在 2025 曆年下半年恢復到 70% 中段的毛利率,是一個合理的假設嗎?只是想確認這一點。

  • And then, Jensen my main question, historically, when we have seen hardware deployment cycles, they have inevitably included some digestion along the way. When do you think we get to that phase or is it just too premature to discuss that, because you're just the start of Blackwell?

    然後,Jensen,我的主要問題是:從歷史上看,硬體部署週期在過程中難免會出現一段消化期。您認為我們什麼時候會進入那個階段?還是因為 Blackwell 才剛開始,現在討論還為時過早?

  • So how many quarters of shipments do you think is required to kind of satisfy this first wave? Can you continue to grow this into calendar '26? Just how should we be prepared to see what we have seen historically, right? The periods of digestion along the way of a long-term kind of secular hardware deployment?

    那麼,您認為需要多少個季度的出貨量才能滿足第一波需求?這樣的成長能否延續到 2026 曆年?我們是否應該準備好再次看到歷史上出現過的情況,也就是長期硬體部署過程中的消化期?

  • Colette Kress - Chief Financial Officer, Executive Vice President

    Colette Kress - Chief Financial Officer, Executive Vice President

  • Okay. Vivek, thank you for the question. Let me clarify your question regarding gross margins. Could we reach the mid-70s in the second half of next year?

    好的。維韋克,謝謝你的提問。讓我澄清一下您關於毛利率的問題。明年下半年我們能達到70年代中期嗎?

  • And yes, I think it is a reasonable assumption, or a goal for us, but we'll just have to see how that mix of the ramp goes. But yes, it is definitely possible.

    是的,我認為這是一個合理的假設,也是我們的目標,但我們還是得看爬升過程中的產品組合如何。但是,是的,這絕對是有可能的。

  • Jen-hsun Huang - President, Chief Executive Officer, Director

    Jen-hsun Huang - President, Chief Executive Officer, Director

  • The way to think through that, Vivek, is: I believe that there will be no digestion until we [modernize] $1 trillion of data centers. You know, those -- if you just look at the world's data centers, the vast majority of them are built for a time when we wrote applications by hand and we ran them on CPUs. It's just not a sensible thing to do anymore.

    Vivek 的思考方式是,我相信,在我們對 1 兆美元的資料中心進行[現代化]改造之前,不會有任何消化。你知道,如果你看看世界上的資料中心,絕大多數資料中心都是為我們手工編寫應用程式並在 CPU 上運行它們的時代而建構的。這不再是一件明智的事了。

  • If you have -- if every company's CapEx, if they're ready to build a data center tomorrow, they ought to build it for a future of machine learning and Generative AI, because they have plenty of old data centers. And so what's going to happen over the course of next x number of years? And let's assume that over the course of four years, the world's data centers could be modernized as we grow into IT. As you know, IT continues to grow about 20% 30% a year, let's say.

    如果你有——如果每個公司的資本支出,如果他們準備好明天建立一個資料中心,他們應該為機器學習和產生人工智慧的未來建立它,因為他們有大量的舊資料中心。那麼在接下來的 x 年裡會發生什麼事?讓我們假設,在四年的時間裡,隨著我們向 IT 領域的發展,世界資料中心將會現代化。如您所知,IT 每年持續成長約 20% 至 30%。

  • And so let's say by 2030, the world's data centers for computing is, call it a couple trillion dollars. And we have to grow into that. We have to modernize the data center from coding to machine learning. And that's number one.

    假設到 2030 年,全球計算資料中心的價值將達到數兆美元。我們必須成長為這樣的人。我們必須對資料中心進行現代化改造,從編碼到機器學習。這是第一名。
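
The "couple trillion dollars" figure is consistent with compounding the roughly $1 trillion installed base mentioned earlier at the 20% to 30% annual IT growth rate Huang cites. A quick sketch of that arithmetic, using the call's round numbers rather than precise forecasts:

```python
# Compound the ~$1T installed base (in trillions) at the growth rates
# cited on the call; both inputs are round figures, not forecasts.

def compound(value, annual_rate, years):
    return value * (1 + annual_rate) ** years

base_trillions = 1.0
low = compound(base_trillions, 0.20, 4)   # 20%/yr for 4 years: ~2.07
high = compound(base_trillions, 0.30, 4)  # 30%/yr for 4 years: ~2.86
print(round(low, 2), round(high, 2))
```

At either rate, a ~$1T base lands around "a couple trillion dollars" within roughly four years, which is the modernization horizon Huang sketches.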

  • The second part of it is Generative AI. And we're now producing a new type of capability that the world's never known, a new market segment that the world's never had. If you look at OpenAI, it didn't replace anything. It's something that's completely brand new. It's in a lot of ways -- as when the iPhone came, it was completely brand new, it wasn't really replacing anything.

    第二部分是產生人工智慧。我們現在正在創造一種世界從未知曉的新型能力,一個世界從未有過的新細分市場。如果你看看 OpenAI,你會發現它並沒有取代任何東西。這是全新的東西。從很多方面來說——就像 iPhone 出現時一樣,它是全新的,並沒有真正取代任何東西。

  • And -- so we're going to see more and more companies like that and they're going to create and generate out of their services essentially intelligence. Some of it would be a digital artist intelligence like Runway and some of it would be basic intelligence like OpenAI. Some of it would be legal intelligence like Harvey. Digital marketing intelligence like Writer's.

    而且 - 所以我們將看到越來越多的這樣的公司,他們將透過他們的服務創建和產生本質上的智慧。其中一些是像 Runway 這樣的數位藝術家智能,有些是像 OpenAI 這樣的基礎智能。其中一些是像哈維這樣的法律情報。像 Writer 一樣的數位行銷情報。

  • So on and so forth. And the number of these companies -- these, what are they called? AI-native companies -- is just in the hundreds. And in almost every platform shift there were new companies: there were internet companies, as you recall; there were cloud-first companies; there were mobile-first companies; now they're AI natives. And so these companies are being created because people see that there's a platform shift and there's a brand-new opportunity to do something completely new.

    等等。這類公司,也就是所謂的 AI 原生公司,已有數百家。而且正如您記得的,幾乎每次平台轉移都會誕生新公司:有網際網路公司、有雲端優先公司、有行動優先公司,現在則是 AI 原生公司。因此,這些公司的創立,是因為人們看到了平台的轉移,以及一個做全新事情的全新機會。

  • And so my sense is that we're going to continue to build out -- to modernize IT, modernized computing, number one. And then number two, create these AI factories that are going to be for a new industry for the production of artificial intelligence.

    因此,我的感覺是,我們將繼續建立 — — 實現 IT 現代化、現代化計算,這是第一。第二,創建這些人工智慧工廠,這些工廠將成為人工智慧生產的新產業。

  • Operator

    Operator

  • Stacy Rasgon, Bernstein research.

    史黛西‧拉斯貢,伯恩斯坦研究。

  • Stacy Rasgon - Analyst

    Stacy Rasgon - Analyst

  • Hi guys. Thanks for taking my questions. Colette, I had a clarification and a question for you. The clarification just, when you say low 70s gross margins, does 73.5 count as low 70s or do you have something else in mind? And for my question, you're guiding total revenues and so, I mean total data center revenues in the next quarter must be up, quote unquote several billion dollars, but it sounds like Blackwell now should be up more than that.

    嗨,大家好。感謝您回答我的問題。Colette,我有一個想澄清的點和一個問題。想澄清的是,當您說毛利率在 70% 出頭(low 70s)時,73.5% 算不算 low 70s,還是您有其他想法?我的問題是,您給的是總營收的財測,也就是說下一季的資料中心總營收必須成長所謂的「數十億美元」,但聽起來 Blackwell 現在的成長應該不止於此。

  • But you also said Hopper was still strong. So is Hopper down sequentially next quarter? And if it is, why? Is it because of the supply constraints? China's been pretty strong, and China is kind of rolling off a bit into Q4. So any color you can give us on the Blackwell ramp and the Blackwell-versus-Hopper behavior into Q4 would be really helpful. Thank you.

    但您也說 Hopper 仍然強勁。那麼 Hopper 下一季是否會環比下滑?如果是,原因是什麼?是因為供應受限嗎?中國市場一直相當強勁,而進入第四季會稍微回落。因此,若您能就 Blackwell 的爬升,以及第四季 Blackwell 與 Hopper 的消長提供任何說明,都會非常有幫助。謝謝。

  • Colette Kress - Chief Financial Officer, Executive Vice President

    Colette Kress - Chief Financial Officer, Executive Vice President

  • So, first starting on your first question there, Stacy, regarding our gross margin and defining "low." Low, of course, is below the mids. And let's say we might be at 71, maybe about 72, 72.5; we're going to be in that range. We could be higher than that as well. We're just going to have to see how it comes through. We do want to make sure that we are ramping and continuing that improvement -- the improvement in terms of our yields, the improvement in terms of the product -- as we go through the rest of the year. So we'll get up to the mid-70s by that point.

    所以,Stacy,先從你的第一個問題開始,關於我們的毛利率以及「低」的定義。「低」當然是指低於中段。假設我們可能在 71%,或大約 72%、72.5%,我們會在這個區間內,也可能更高。我們還得看實際情況。我們確實希望在今年剩下的時間裡持續改善,包括良率的改善和產品的改善。到那時,毛利率將回到 70% 中段。
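
The effect of the "mix of the ramp" on the blended number can be illustrated with a revenue-weighted margin calculation. Every margin and mix value below is a hypothetical assumption chosen only to land in the ranges discussed, not a disclosed figure:

```python
# Hypothetical mix math: a new product ramping at a lower initial margin
# pulls the blended gross margin down; the blend recovers as yields improve.

def blended_margin(mix_new, margin_new, margin_old):
    """Revenue-weighted gross margin across two products."""
    return mix_new * margin_new + (1 - mix_new) * margin_old

# Early in the ramp: 40% of revenue from the new product at a 68% margin.
early = blended_margin(mix_new=0.4, margin_new=0.68, margin_old=0.75)
# Later: yields improve and mix grows; the blend moves back toward the mid-70s.
late = blended_margin(mix_new=0.8, margin_new=0.74, margin_old=0.75)
print(f"{early:.3f} -> {late:.3f}")  # low 70s early, recovering later
```

The blend sits in the low 70s while the lower-margin early units dominate, then climbs as yields and mix improve, which is the trajectory described in the answer above.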

  • The second question was regarding our Hopper, and what is our Hopper doing? We have seen substantial growth for H200, not only in terms of orders but in the quickness of those standing it up. It is an amazing product, and it's the fastest-growing and ramping that we've seen. We will continue to be selling Hopper in this quarter, in Q4, for sure. That is across the board in terms of all of our different configurations, and our configurations include what we may do in terms of China.

    第二個問題是關於 Hopper,以及 Hopper 的表現如何。我們看到 H200 的大幅成長,不僅體現在訂單上,也體現在客戶部署的速度上。這是一款令人驚嘆的產品,也是我們所見過成長和爬升最快的產品。本季(第四季)我們肯定會繼續銷售 Hopper,涵蓋我們所有不同的配置,其中也包括我們在中國可能採取的做法。

  • But keep that in mind that folks are also at the same time looking to build out their Blackwell. So we've got a little bit of both happening in Q4. But yes -- is it possible for Hopper to grow between Q3 and Q4? It's possible. But we'll just have to see.

    但請記住,人們同時也在尋求建造他們的布萊克威爾。所以我們在第四季這兩種情況都發生了一些。但是,Hopper 有可能在第三季和第四季之間實現成長嗎?這是有可能的。但我們只需要看看。

  • Operator

    Operator

  • Joseph Moore, Morgan Stanley.

    約瑟夫‧摩爾,摩根士丹利。

  • Joseph Moore - Analyst

    Joseph Moore - Analyst

  • Great. Thank you. I wonder if you could talk a little bit about what you're seeing in the inference market. You've talked about Strawberry and some of the ramifications of longer-scaling inference projects. But you've also talked about the possibility that, as some of these Hopper clusters age, you could use some of the latent Hopper chips for inference. So, I guess, do you expect inference to outgrow training in the next 12-month time frame? And just generally your thoughts there.

    太好了,謝謝。我想知道您能否談談在推理市場看到的情況。您談過 Strawberry,以及推理規模拉長所帶來的一些影響。但您也提過,隨著一些 Hopper 叢集逐漸老化,可以把其中閒置的 Hopper 晶片用於推理。那麼,您是否預期推理會在未來 12 個月內超越訓練?以及您對此的整體想法。

  • Jen-hsun Huang - President, Chief Executive Officer, Director

    Jen-hsun Huang - President, Chief Executive Officer, Director

  • Our hopes and dreams are that someday the world does a ton of inference, and that's when AI has really succeeded: when every single company is doing inference inside their companies, for the marketing department and forecasting department and supply chain group and their legal department and engineering, and of course coding. And so we hope that every company is doing inference 24/7, and that there will be a whole bunch of AI-native startups, thousands of AI-native startups, that are generating tokens and generating AI. And every aspect of your computer experience, from using Outlook to PowerPoint, or when you're sitting there with Excel, you're constantly generating tokens. And every time you read a PDF, open a PDF, it generates a whole bunch of tokens.

    我們的希望和夢想是,有一天世界會進行大量的推理,那時人工智慧才真正成功。每家公司都在其公司內部為行銷部門、預測部門、供應鏈團隊、法律部門和工程部門(當然還有編碼)進行推理。因此,我們希望每家公司都在進行 24/7 推理,這樣就會有一大堆 AI 原生新創公司,數以千計的 AI 原生新創公司正在產生代幣並產生 AI。從使用 Outlook 到 PowerPoint,或者當您坐在那裡使用 Excel 時,您的電腦體驗的各個方面都會不斷產生令牌,每次您閱讀 PDF、開啟 PDF 時,它都會產生一大堆令牌。

  • One of my favorite applications is NotebookLM, this Google application that came out. I use the living daylights out of it just because it's fun, and I put every PDF, every archived paper into it, just to listen to it as well as scan through it. And so I think that's the goal: to train these models so that people use them. And there's now a whole new era of AI, if you will, a whole new genre of AI called Physical AI.

    我最喜歡的應用程式之一是 NotebookLM。這個谷歌應用程式就出來了。我使用它只是因為它很有趣,我把每個 PDF、每份檔案文件都放進去只是為了聽它、掃描它。所以我認為目標是訓練這些模型以便人們使用它。如果你願意的話,現在有一個全新的人工智慧時代,一種全新的人工智慧類型,稱為實體人工智慧。

  • You know, just as large language models understand the human language and how we -- the thinking process, if you will -- Physical AI understands the physical world. It understands the meaning of the structure, and understands what's sensible and what's not, and what could happen and what won't. And not only does it understand, but it can predict and roll out a short future.

    你知道,就像大型語言模型理解人類語言以及我們的思考過程一樣,實體 AI 理解物理世界。它理解結構的含義,理解什麼合理、什麼不合理,什麼可能發生、什麼不會發生;它不僅能理解,還能預測並推演出短期的未來。

  • That capability is incredibly valuable for industrial AI and robotics. And so that's fired up so many AI-native companies and robotics companies and physical AI companies that you're probably hearing about. And it's really the reason why we built Omniverse. You know, Omniverse is so that we can enable these AIs to be created and to learn in Omniverse, and to learn from synthetic data generation and reinforcement learning with physics feedback -- instead of just human feedback, it's now physics feedback.

    這種能力對工業 AI 和機器人技術來說非常有價值。因此,這激發了許多您可能聽說過的 AI 原生公司、機器人公司和實體 AI 公司。這也正是我們打造 Omniverse 的原因。Omniverse 讓這些 AI 能夠在 Omniverse 中被創造出來並進行學習,從合成資料生成和帶有物理回饋的強化學習中學習;不再只是人類回饋,現在是物理回饋。

  • To have these capabilities, Omniverse was created so that we can enable physical AI. And so the goal is to generate tokens, the goal is to do inference, and we're starting to see that growth happening, so I'm super excited about that. Now, let me just say one more thing: inference is super hard. And the reason why inference is super hard is because you need the accuracy to be high on the one hand. You need the throughput to be high so that the cost could be as low as possible.

    為了擁有這些功能,Omniverse 被創建,以便我們能夠啟用實體人工智慧。因此,我們的目標是產生代幣,目標是推理,我們開始看到這種增長的發生,所以我對此感到非常興奮。現在,我再說一件事,推理是非常困難的。而推理之所以超級困難,一方面是因為你需要很高的準確度。您需要較高的吞吐量,以便盡可能降低成本。

  • But you also need the latency to be low and computers that are high throughput as well as low latency is incredibly hard to build. And these applications have long context lengths because they want to understand, they want to be able to inference within understanding the context of what they're being asked to do. And so the context length is growing larger and larger.

    但您還需要低延遲,而建立高吞吐量和低延遲的電腦非常困難。這些應用程式具有很長的上下文長度,因為它們想要理解,它們希望能夠在理解上下文的情況下推斷它們被要求執行的操作。因此上下文的長度變得越來越大。
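
The throughput/latency tension Huang describes can be sketched with a toy batching model: serving more requests per decode step raises the server's total tokens per second, but each step slows down, so every individual request waits longer. All timing constants below are invented for illustration and model no real system:

```python
# Toy serving model (hypothetical constants): one decode step emits one
# token for every request in the batch, and steps slow as the batch grows.

def serve(batch_size, base_step_ms=20.0, per_request_ms=2.0, tokens_out=100):
    step_ms = base_step_ms + per_request_ms * batch_size   # time per decode step
    latency_s = tokens_out * step_ms / 1000.0              # one request's latency
    throughput = batch_size * tokens_out / latency_s       # server-wide tokens/sec
    return latency_s, throughput

lat_1, thr_1 = serve(batch_size=1)     # low latency, low total throughput
lat_32, thr_32 = serve(batch_size=32)  # far higher throughput, worse latency
print(f"b=1: {lat_1:.1f}s; b=32: {lat_32:.1f}s")
```

A serving stack has to pick a point on this curve while also handling growing context lengths, which is why building computers that are simultaneously high-throughput and low-latency is described as so hard.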

  • On the other hand, the models are getting larger, they're multi-modality; you know, just the number of dimensions in which inference is innovating is incredible. And this innovation rate is what makes NVIDIA's architecture so great, because our ecosystem is fantastic. Everybody knows that if they innovate on top of CUDA, on top of NVIDIA's architecture, they can innovate more quickly, and they know that everything should work. And if something were to happen, it's probably their code and not ours.

    另一方面,模型變得越來越大,它們是多模態的,你知道推理所創新的維度數量是令人難以置信的。這種創新速度使得 NVIDIA 的架構如此出色。你知道,因為我們的生態系統非常棒。每個人都知道,如果他們在 CUDA 和 NVIDIA 架構之上進行創新,他們就能更快地創新,而且他們知道一切都應該有效,如果發生什麼事情,很可能是他們的程式碼而不是我們的程式碼。

  • And, and so that ability to innovate in every single direction at the same time having a large installed base so that whatever you create could land on a NVIDIA computer and be deployed broadly all around the world in every single data center, all the way out to the edge, you know, into robotic systems. You know that capability is really quite phenomenal.

    並且,能夠在各個方向上創新,同時擁有龐大的安裝基礎,這樣您創造的任何東西都可以落在 NVIDIA 電腦上,廣泛部署在世界各地的每個資料中心,一路延伸到邊緣,進入機器人系統。要知道,這種能力確實相當驚人。

  • Operator

    Operator

  • Aaron Rakers, Wells Fargo.

    亞倫·雷克斯,富國銀行。

  • Aaron Rakers - Analyst

    Aaron Rakers - Analyst

  • Yeah, thanks for taking the question. I wanted to ask, as we kind of focus on the Blackwell cycle and think about the data center business: when I look at the results this last quarter, Colette, you mentioned that obviously the networking business was down about 15% sequentially. But then your comments were that you were seeing very strong demand. You mentioned also that you had multiple cloud CSP design wins for these large-scale clusters.

    是的,感謝您讓我提問。在我們關注 Blackwell 週期並思考資料中心業務時,我想請教:看上一季的結果時,Colette,您提到網路業務環比下降了約 15%,但您又表示看到非常強勁的需求。您還提到,在這些大型叢集上贏得了多家雲端服務供應商(CSP)的設計案。

  • So I'm curious if you could unpack what's going on in the networking business and where maybe you've seen some constraints and just your confidence in the pace of Spectrum-X progressing to multiple billions of dollars that you previously had talked about. Thank you.

    因此,我很好奇您是否可以了解網路業務中正在發生的事情,以及您可能在哪些方面看到了一些限制,以及您對 Spectrum-X 發展到您之前談到的數十億美元的步伐的信心。謝謝。

  • Colette Kress - Chief Financial Officer, Executive Vice President

    Colette Kress - Chief Financial Officer, Executive Vice President

  • Let's first start with the networking. The growth year over year is tremendous, and our focus since the beginning of our acquisition of Mellanox has really been about building together. In the work that we do in the data center, the networking is such a critical part of that. Our ability to sell our networking with many of the systems that we are doing in the data center is continuing to grow and doing quite well.

    讓我們先從網路談起。年成長非常可觀,而自收購 Mellanox 以來,我們的重點一直是共同打造。在我們資料中心的工作中,網路是非常關鍵的一環。我們將網路與資料中心的許多系統一起銷售的能力持續成長,而且表現相當好。

  • So this quarter is just a slight dip down and we're going to be right back up in terms of growing. They're getting ready for Blackwell and more and more systems that will be using not only our existing networking but also the networking that is going to be incorporated in a lot of these large systems that we're providing them to.

    因此,本季度只是略有下降,但我們將在成長方面立即回升。他們正在為 Blackwell 和越來越多的系統做好準備,這些系統不僅將使用我們現有的網絡,而且還將使用我們提供給它們的許多大型系統中將要整合的網絡。

  • Operator

    Operator

  • Atif Malik, Citi.

    阿蒂夫·馬利克,花旗銀行。

  • Atif Malik - Analyst

    Atif Malik - Analyst

  • Thank you for taking my question. I have two quick ones for Colette. Colette, on the last earnings call, you mentioned that Sovereign demand is in low double-digit billions. Can you provide an update on that?

    感謝您回答我的問題。我有兩個簡短的問題想請教 Colette。Colette,在上次財報電話會議上,您提到主權 AI 的需求在低雙位數的十億美元之譜(即一百多億美元)。能否提供最新情況?

  • And then can you explain the supply constrained situation in gaming? Is that because you're shifting your supply towards data center?

    那麼您能解釋一下遊戲產業供應緊張的情況嗎?這是因為您正在將供應轉向資料中心嗎?

  • Colette Kress - Chief Financial Officer, Executive Vice President

    Colette Kress - Chief Financial Officer, Executive Vice President

  • So first starting in terms of a Sovereign AI, such an important part of growth, something that is really surfaced with the onset of Generative AI and building models in the individual countries around the world. And we see a lot of them and we talked about a lot of them in the call today and the work that they're doing. So our Sovereign AI and our pipeline going forward is still absolutely intact as those are working to build these foundational models and their own language and their own culture. And working in terms of the enterprises within those countries.

    因此,首先從主權人工智慧開始,這是成長的重要組成部分,隨著生成人工智慧的出現和世界各地各國建立模型的出現,這一點才真正浮出水面。我們看到了他們中的許多人,我們在今天的電話會議中討論了他們中的許多人以及他們正在做的工作。因此,我們的主權人工智慧和我們未來的管道仍然絕對完整,因為它們正在努力建立這些基礎模型以及他們自己的語言和文化。並針對這些國家內的企業開展工作。

  • And I think you'll continue to see this be a growth opportunity, as you may see with our regional clouds that are being stood up and/or those that are focusing in terms of AI factories for many parts of Sovereign AI. These are areas where this is growing, not only in Europe, but you're also seeing this growth in Asia-Pac as well.

    我認為您將繼續看到這是一個成長機會,您可能會看到我們正在建立的區域雲和/或那些專注於主權人工智慧許多部分的人工智慧工廠的雲端。不僅在歐洲,而且在亞太地區,您也看到了這一成長。

  • Let me flip to your second question that you asked regarding gaming. So our gaming right now, from a supply standpoint: we're busy trying to make sure that we can ramp all of our different products, and in this case, our gaming supply, given what we saw selling through, was moving quite fast. Now, the challenge that we have is how fast we can get that supply ready into the market for this quarter. Not to worry, I think we'll be back on track with more supply as we turn the corner into the new calendar year. We're just going to be tight for this quarter.

    讓我回到您關於遊戲業務的第二個問題。從供應面來看,我們正忙於確保所有不同產品都能順利爬升;而就遊戲而言,從我們看到的銷售速度來看,庫存消化得相當快。現在我們面臨的挑戰是,本季能多快把供應補足並投入市場。不用擔心,我認為進入新的曆年後,隨著供應增加,我們會重回正軌。只是本季會比較吃緊。

  • Operator

    Operator

  • Ben Reitzes, Melius Research.

    本‧雷茨 (Ben Reitzes),Melius 研究中心。

  • Ben Reitzes - Analyst

    Ben Reitzes - Analyst

  • Yeah, hi. Thanks a lot for the question. I wanted to ask Colette and Jensen about sequential growth. So, very strong sequential growth this quarter, and you're guiding to about 7%. Do your comments on Blackwell imply that we reaccelerate from there as you get more supply? In the first half, it would seem that there would be some catch-up, so I was wondering how prescriptive you could be there.

    是的,您好,非常感謝您讓我提問。我想請教 Colette 和 Jensen 關於環比成長的問題。本季環比成長非常強勁,而您對下季的指引約為 7%。您對 Blackwell 的評論是否意味著,隨著供應增加,成長會從那裡重新加速?上半年似乎會有一些追趕,所以我想知道您能給出多具體的說明。

  • And then Jensen just overall, you know, with the change in administration that's going to take place here in the US and the China situation. Have you gotten any sense or any conversations about tariffs or anything with regard to your China business? Any sense of what may or may not go on? It's probably too early but wondering if you had any thoughts there? Thanks so much.

    然後,Jensen,整體而言,隨著美國政府即將更迭,以及中國的情況,您是否對關稅或與中國業務相關的事項有任何了解或進行過任何對話?對於可能或不可能發生的事情有任何判斷嗎?現在可能還為時過早,但想知道您是否有任何想法?非常感謝。

  • Jen-hsun Huang - President, Chief Executive Officer, Director

    Jen-hsun Huang - President, Chief Executive Officer, Director

  • We guide one quarter at a time.

    我們一次只給一個季度的財測。

  • Colette Kress - Chief Financial Officer, Executive Vice President

    Colette Kress - Chief Financial Officer, Executive Vice President

  • We are working right now on the quarter that we're in and building what we need to ship in terms of Blackwell. We have every supplier on the planet working seamlessly with us to do that. And once we get to next quarter, we'll help you understand in terms of that ramp that we'll see to the next quarter going after that.

    我們目前正專注於本季,打造我們在 Blackwell 方面需要出貨的產品。全球的每一家供應商都與我們無縫合作來達成這一點。等到了下一季,我們會幫助大家了解再下一季的爬升情況。

  • Jen-hsun Huang - President, Chief Executive Officer, Director

    Jen-hsun Huang - President, Chief Executive Officer, Director

  • Whatever the new administration decides, we will of course support the administration, and that's our highest mandate. And then after that, do the best we can, just as we always do. And so we have to do these simultaneously: we will comply fully with any regulation that comes along, support our customers to the best of our abilities, and compete in the marketplace. We'll do all three of these things simultaneously.

    無論新政府做出什麼決定,我們當然都會支持政府,這是我們的最高原則;在此之後,盡我們所能,一如既往。因此我們必須同時做到:完全遵守任何出台的法規、盡最大努力支持我們的客戶,並在市場上競爭。我們會同時做好這三件事。

  • Operator

    Operator

  • Pierre Ferragu, New Street Research.

    皮埃爾費拉古,新街研究。

  • Pierre Ferragu - Analyst

    Pierre Ferragu - Analyst

  • Hey, thanks for taking my question. Jensen, you mentioned in your comments you have the pre-training, the actual language models, and you have reinforcement learning that becomes more and more important in training and in inference as well. And then you have inference itself. And I was wondering if you have a sense, like a high-level, typical sense of an overall AI ecosystem, like maybe one of your clients or one of the large models that are out there: today, how much of the compute goes into each of these buckets, how much for the pretraining, how much for the reinforcement, and how much into inference today? Do you have any sense for how it's splitting, and where the growth is the most important as well?

    嘿,謝謝您回答我的問題。Jensen,您在發言中提到,有預訓練,也就是語言模型本身;有強化學習,它在訓練和推理中都變得越來越重要;然後還有推理本身。我想知道,就整體 AI 生態系而言,例如您的某個客戶或市面上的某個大型模型,您是否有一個大致的概念:今天有多少算力投入到每一塊,多少用於預訓練、多少用於強化學習、多少用於推理?您是否知道算力大致是如何分配的,以及哪一塊的成長最重要?

  • Jen-hsun Huang - President, Chief Executive Officer, Director

    Jen-hsun Huang - President, Chief Executive Officer, Director

  • Well, today it's vastly in pretraining of foundation models, because, as you know, in post-training the new technologies are just coming online. And whatever you could do in pretraining and post-training, you would try to do, so that the inference cost could be as low as possible for everyone. However, there are only so many things that you could do a priori, and so you'll always have to do on-the-spot thinking and in-context thinking and reflection.

    今天,算力絕大部分用在基礎模型的預訓練上,因為如您所知,後訓練的新技術才剛上線。凡是能在預訓練和後訓練中完成的,都會盡量去做,這樣對每個人來說推理成本才能盡可能低。然而,能夠事先(a priori)做到的事情是有限的,所以你總是必須進行即時思考、情境內思考和反思。

  • And so I think the fact that all three are scaling is actually very sensible given where we are. In the area of foundation models, we now have multimodal foundation models, and the amount of petabytes of video that these foundation models are going to be trained on is incredible. And so my expectation is that, for the foreseeable future, we're going to be scaling pretraining, post training, as well as inference-time scaling, which is the reason why I think we're going to need more and more compute. And we're going to have to drive as hard as we can to keep increasing the performance by X factors at a time, so that we can continue to drive down the cost, continue to increase the revenues, and get the AI revolution going.

    因此,我認為,基於我們的身份和基礎模型領域,這三個方面的擴展實際上是非常明智的,現在我們擁有多模態基礎模型以及這些基礎模型將要訓練的 PB 級視頻量上,太不可思議了。因此,我的期望是,在可預見的未來,我們將擴展預訓練、訓練後以及推理時間擴展。這就是為什麼我認為我們需要越來越多的計算的原因。我們必須盡最大努力不斷將效能提高 x 倍,這樣我們才能繼續降低成本、繼續增加收入並推動人工智慧革命。
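    The inference-time ("test-time") scaling discussed above can be pictured with a toy best-of-n sampling sketch. This is purely illustrative and not anything NVIDIA or OpenAI ships; the stand-in "model" and "scorer" here are this example's own assumptions. The idea it shows is that spending more model calls per question and keeping the best-scored candidate improves expected answer quality without any retraining.

    ```python
    import random

    def sample_answer(rng):
        # Hypothetical stand-in for a model's stochastic answer to one fixed
        # question; higher values mean the scorer rates the answer better.
        return rng.gauss(0.0, 1.0)

    def best_of_n(n, rng):
        # Spend n model calls at inference time and keep the best candidate.
        return max(sample_answer(rng) for _ in range(n))

    rng = random.Random(0)
    trials = 2000
    avg_1 = sum(best_of_n(1, rng) for _ in range(trials)) / trials
    avg_16 = sum(best_of_n(16, rng) for _ in range(trials)) / trials
    print(avg_1 < avg_16)  # more inference compute, better expected answers
    ```

    Real systems use a learned verifier or reward model rather than a known score, but the compute trade-off is the same shape: quality rises with n, which is one reason inference demand scales alongside training demand.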

  • Pierre Ferragu - Analyst

    Pierre Ferragu - Analyst

  • Thank you.

    謝謝。

  • Operator

    Operator

  • Thank you. I'll turn the call back over to Jensen Huang for closing remarks.

    謝謝。我會將電話轉回黃仁勳以作結束語。

  • Jen-hsun Huang - President, Chief Executive Officer, Director

    Jen-hsun Huang - President, Chief Executive Officer, Director

  • Thank you. The tremendous growth in our business is being fueled by two fundamental trends that are driving global adoption of NVIDIA computing. First, the computing stack is undergoing a reinvention, a platform shift from coding to machine learning, from executing code on CPUs to processing neural networks on GPUs. The trillion-dollar installed base of traditional data center infrastructure is being rebuilt for Software 2.0, which applies machine learning to produce AI.

    謝謝。推動全球採用 NVIDIA 運算的兩大基本趨勢推動了我們業務的巨大成長。首先,計算堆疊正在經歷重塑,平台從編碼轉向機器學習,從在 CPU 上執行程式碼轉向在 GPU 上處理神經網路。傳統資料中心基礎設施價值數兆美元的安裝基礎正在為軟體 2.0 進行重建,該軟體應用機器學習來產生人工智慧。
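    A minimal sketch of the "coding to machine learning" shift described above (purely illustrative; the function names and the tiny gradient-descent fit are this example's own assumptions, not NVIDIA code): Software 1.0 writes the rule by hand, while Software 2.0 recovers equivalent behavior from input-output examples.

    ```python
    def celsius_to_fahrenheit_v1(c):
        # Software 1.0: the programmer encodes the rule explicitly.
        return c * 9 / 5 + 32

    def fit_celsius_to_fahrenheit_v2(samples, lr=1e-3, steps=20000):
        # Software 2.0: learn weights w, b from (input, output) examples
        # by gradient descent on the mean squared error.
        w, b = 0.0, 0.0
        n = len(samples)
        for _ in range(steps):
            grad_w = grad_b = 0.0
            for c, f in samples:
                err = (w * c + b) - f
                grad_w += 2 * err * c / n
                grad_b += 2 * err / n
            w -= lr * grad_w
            b -= lr * grad_b
        return w, b

    data = [(c, celsius_to_fahrenheit_v1(c)) for c in range(-10, 41, 5)]
    w, b = fit_celsius_to_fahrenheit_v2(data)
    print(round(w, 2), round(b, 2))  # approaches 1.8 and 32.0
    ```

    At data-center scale the "fit" step becomes neural-network training on GPUs, which is why the shift rebuilds the underlying infrastructure rather than just the software layer.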

  • Second, the age of AI is going full steam. Generative AI is not just a new software capability but a new industry, with AI factories manufacturing digital intelligence, a new industrial revolution that can create a multi-trillion-dollar AI industry. Demand for Hopper and anticipation for Blackwell, which is now in full production, are incredible, for several reasons.

    第二。人工智慧時代正如火如荼地進行。生成式人工智慧不僅是一種新的軟體能力,而且是一個新的產業,人工智慧工廠製造數位智能,這是一場新的工業革命,可以創造一個數萬億美元的人工智慧產業。由於多種原因,對 Hopper 的需求和對現已全面生產的 Blackwell 的預期令人難以置信。

  • There are more foundation model makers now than there were a year ago. The computing scale of pretraining and post training continues to grow exponentially. There are more AI-native startups than ever, and the number of successful inference services is rising. And with the introduction of ChatGPT o1, OpenAI o1, a new scaling law called test-time scaling has emerged.

    現在的基礎模型製作者比一年前要多。訓練前和訓練後的計算規模持續呈指數級增長。人工智慧原生新創公司比以往任何時候都多,成功的推理服務數量也不斷增加。隨著 Chat GPTo1、OpenAI o1 的引入,出現了一種稱為測試時間縮放的新縮放法則。

  • All of these consume a great deal of computing. AI is transforming every industry, company, and country. Enterprises are adopting agentic AI to revolutionize workflows. Over time, AI co-workers will assist employees in performing their jobs faster and better. Investments in industrial robotics are surging due to breakthroughs in physical AI, driving new training infrastructure demand as researchers train world foundation models on petabytes of video and Omniverse synthetically generated data. The age of robotics is coming.

    所有這些都消耗大量的計算。人工智慧正在改變每個產業、公司和國家。企業正在採用 Agentic AI 來徹底改變工作流程。隨著時間的推移,人工智慧同事將幫助員工更快、更好地完成工作。由於物理人工智慧的突破,工業機器人的投資激增。隨著研究人員在 PB 級視訊和全宇宙綜合生成的資料上訓練世界基礎模型,推動新的培訓基礎設施需求。機器人時代即將來臨。

  • Countries across the world recognize the fundamental AI trends we are seeing and have awakened to the importance of developing their national AI infrastructure. The age of AI is upon us, and it's large and diverse. NVIDIA's expertise, scale, and ability to deliver full-stack, full-infrastructure solutions let us serve the entire multi-trillion-dollar AI and robotics opportunity ahead, from every hyperscale cloud and enterprise private cloud, to sovereign regional AI clouds, to on-prem, industrial edge, and robotics deployments.

    世界各國都認識到我們所看到的人工智慧基本趨勢,並意識到發展國家人工智慧基礎設施的重要性。人工智慧時代已經來臨,它規模龐大且多樣化。 NVIDIA 的專業知識、規模以及提供全堆疊和完整基礎設施的能力使我們能夠為從每個超大規模雲、企業私有雲到本地主權區域人工智慧雲再到工業邊緣和機器人在內的整個數萬億美元的人工智慧和機器人機會提供服務。

  • Thanks for joining us today, and we'll catch up next time.

    感謝您今天加入我們,下次再見。

  • Operator

    Operator

  • This concludes today's conference call. You may now disconnect.

    今天的電話會議到此結束。您現在可以斷開連線。