輝達 (NVDA) 2026 Q3 法說會逐字稿

內容摘要

  1. 摘要
    • Q3 營收達 570 億美元,年增 62%,季增 22%;EPS 未揭露
    • Q4 營收指引為 650 億美元(±2%),預期續受 Blackwell 架構帶動;毛利率預估 74.8%(GAAP)/75%(Non-GAAP)
    • 盤後市場反應未揭露,同業對比未提及
  2. 成長動能 & 風險
    • 成長動能:
      • AI 基礎建設需求持續超預期,雲端服務供應商(CSP)全面售罄
      • 三大平台轉型(加速運算、生成式 AI、Agentic AI)推動長期成長
      • Blackwell 架構(GB300)快速滲透,Rubin 平台預計 2026 下半年量產
      • 資料中心網路業務(NVLink、InfiniBand、Spectrum-X Ethernet)年增 162%,成為全球最大 AI 專用網路供應商
      • 與 OpenAI、Anthropic、xAI 等頂尖 AI 公司建立深度合作與投資,擴大 CUDA 生態系
    • 風險:
      • 中國市場因地緣政治與競爭加劇,資料中心產品銷售受限,H20 銷售僅 5000 萬美元
      • 原物料與零組件成本上升,對毛利率造成壓力
      • AI 基礎建設資本支出規模龐大,產業資金籌措與 ROI 仍受市場關注
  3. 核心 KPI / 事業群
    • 資料中心營收:510 億美元,年增 66%,創新高,主因 GB300 與網路產品強勁成長
    • 網路業務營收:82 億美元,年增 162%,NVLink、InfiniBand、Spectrum-X Ethernet 全面貢獻
    • Gaming 營收:43 億美元,年增 30%,Blackwell 架構帶動需求,庫存水位正常
    • 專業視覺化(ProViz)營收:7.6 億美元,年增 56%,DGX Spark 帶動
    • 汽車營收:5.92 億美元,年增 32%,自駕解決方案推動
  4. 財務預測
    • Q4 營收預估 650 億美元(±2%)
    • Q4 毛利率預估 74.8%(GAAP)/75%(Non-GAAP),全年維持中段 70% 水準
    • CapEx 未揭露
  5. 法人 Q&A
    • Q: Blackwell + Rubin 2025-26 年 5000 億美元營收目標進度?有無上修空間?
      A: 目前進度符合預期,已出貨 500 億美元,未來幾季將持續出貨,近期如 KSA、Anthropic 等新訂單有望推升總額超過 5000 億美元。
    • Q: AI 基礎建設供需展望,供給何時能追上需求?
      A: 三大平台轉型同時推動需求,NVIDIA 供應鏈規劃充足,應用層面廣泛,需求成長動能強勁,短期內供給仍緊張。
    • Q: 每 GW 資料中心 NVIDIA 內容金額假設?未來資本支出誰來負擔?
      A: Blackwell 世代每 GW 約 300 億美元,Rubin 會更高。效率提升帶動 TCO 降低。資本支出主要由雲端業者現金流支撐,未來各國、各產業也會自行籌資。
    • Q: 未來自由現金流規劃?生態系投資與回購策略?
      A: 強大資產負債表支撐供應鏈與客戶成長,持續進行庫藏股回購。生態系投資(如 OpenAI、Anthropic)聚焦擴大 CUDA 生態,深化技術合作,預期帶來長期回報。
    • Q: Rubin CPX 產品定位?AI inference 佔比未來展望?
      A: CPX 聚焦長上下文推理型工作負載,效能/成本優異。Inference 需求持續爆發,Grace Blackwell 在推理領域領先同業,預期 inference 佔比將持續提升。
    • Q: 成長最大瓶頸為何?電力、資金、供應鏈哪個最關鍵?
      A: 所有環節皆具挑戰,但供應鏈規劃與合作夥伴關係強大,電力、土地、資金等問題可控,最重要是持續提供最佳效能/成本架構。
    • Q: 明年毛利率與 OpEx 展望?成本壓力來源?
      A: 明年毛利率目標維持中段 70%,主要成本壓力來自零組件與系統複雜度提升,將持續優化成本結構與產品組合。OpEx 會隨創新與新架構推進而增加。
    • Q: AI ASIC/專用晶片對 GPU 架構競爭看法?
      A: AI 應用多元且複雜,NVIDIA 架構能支援所有 AI 模型與平台,生態系廣泛、彈性高,具備長期領先優勢。

完整原文

使用警語:中文譯文來源為 Google 翻譯,僅供參考,實際內容請以英文原文為主

  • Operator

    Operator

  • Good afternoon. My name is [Sarah]. I will be your conference operator today.

    午安。我的名字是[莎拉],我今天將擔任你們的會議接線生。

  • At this time, I would like to welcome everyone to NVIDIA's third-quarter earnings call.

    此時此刻,我謹代表英偉達歡迎各位參加第三季財報電話會議。

  • (Operator Instructions) Thank you.

    (操作說明)謝謝。

  • Toshiya Hari, you may begin your conference.

    Toshiya Hari,你可以開始你的會議了。

  • Toshiya Hari - Vice President - Investor Relations & Strategic Finance

    Toshiya Hari - Vice President - Investor Relations & Strategic Finance

  • Thank you. Good afternoon, everyone. Welcome to NVIDIA's conference call for the third quarter of fiscal 2026.

    謝謝。大家下午好。歡迎參加英偉達2026財年第三季業績電話會議。

  • With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer.

    今天與我一同出席的還有英偉達總裁兼執行長黃仁勳,以及執行副總裁兼財務長科萊特·克雷斯。

  • I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the fourth quarter of fiscal 2026.

    我想提醒各位,我們的電話會議正在英偉達投資者關係網站上進行網路直播。網路直播將提供回放,直至召開電話會議討論我們 2026 財年第四季的財務業績。

  • The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without our prior written consent.

    本次電話會議的內容歸英偉達所有。未經我們事先書面同意,不得複製或轉錄。

  • During this call, we may make forward-looking statements, based on current expectations. These are subject to a number of significant risks and uncertainties. Our actual results may differ materially.

    在本次電話會議中,我們可能會根據目前的預期發表一些前瞻性聲明。這些都存在著許多重大風險和不確定因素。我們的實際結果可能與此有重大差異。

  • For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission.

    有關可能影響我們未來財務表現和業務的因素的討論,請參閱今天發布的收益報告中的披露資訊、我們最新的 10-K 表格和 10-Q 表格,以及我們可能向美國證券交易委員會提交的 8-K 表格報告。

  • All our statements are made as of today, November 19, 2025, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements.

    我們所有聲明均截至今日(2025年11月19日),並基於我們目前掌握的資訊。除法律另有規定外,我們不承擔更新任何此類聲明的義務。

  • During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website.

    在本次電話會議中,我們將討論非GAAP財務指標。您可以在我們的財務長評論中找到這些非GAAP財務指標與GAAP財務指標的調節表,該評論已發佈在我們的網站上。

  • With that, let me turn the call over to Colette.

    那麼,現在讓我把電話交給科萊特。

  • Colette Kress - Executive Vice President, Chief Financial Officer

    Colette Kress - Executive Vice President, Chief Financial Officer

  • Thank you, Toshiya.

    謝謝你,Toshiya。

  • We delivered another outstanding quarter, with revenue of $57 billion, up 62% year over year; and a record sequential revenue growth of $10 billion or 22%. Our customers continue to lean into three platform shifts, fueling exponential growth for accelerated computing, powerful AI models, and Agentic applications. Yet, we are still in the early innings of these transitions that will impact our work across every industry.

    我們又交出了一個出色的季度:營收達到 570 億美元,年增 62%;環比營收成長 100 億美元、增幅 22%,創歷史新高。我們的客戶持續加碼投入三大平台轉型,推動加速運算、強大的 AI 模型和智慧體應用呈指數級成長。然而,我們仍處於這些轉變的早期階段,它們將影響各行各業的工作。

  • We currently have visibility to a $0.5 trillion in Blackwell and Rubin revenue from the start of this year through the end of calendar year 2026. By executing our annual product cadence and extending our performance leadership through full stack design, we believe NVIDIA will be the superior choice for the $3 trillion to $4 trillion in annual AI infrastructure build we estimate by the end of the decade.

    我們目前可以預見,從今年年初到 2026 曆年年底,Blackwell 與 Rubin 的營收將達到 0.5 兆美元。透過執行我們的年度產品節奏,並以全端設計持續擴大效能領先優勢,我們相信,在我們預估到本十年末每年 3 兆至 4 兆美元的 AI 基礎設施建設中,NVIDIA 將是最佳選擇。

  • Demand for AI infrastructure continues to exceed our expectations. The clouds are sold out. Our GPU installed base, both new and previous generations, including Blackwell, Hopper, and Ampere, is fully utilized.

    對人工智慧基礎設施的需求持續超出我們的預期。雲端容量已全數售罄。我們的 GPU 裝機量,無論新舊世代,包括 Blackwell、Hopper 和 Ampere,皆已充分利用。

  • Record Q3 data center revenue of $51 billion increased 66% year over year, a significant feat at our scale. Compute grew 56% year over year, driven primarily by the GB300 ramp; while networking more than doubled, given the onset of NVLink's scale-up and robust double-digit growth across Spectrum-X Ethernet and Quantum-X InfiniBand.

    第三季資料中心營收創下 510 億美元的歷史新高,年增 66%,以我們的規模而言是一項了不起的成就。運算業務年增 56%,主要受 GB300 量產爬坡推動;網路業務則成長超過一倍,得益於 NVLink 垂直擴展的起飛,以及 Spectrum-X Ethernet 和 Quantum-X InfiniBand 的強勁兩位數成長。

  • The world hyperscalers, a trillion-dollar industry, are transforming search, recommendations, and content understanding from classical machine learning to Generative AI. NVIDIA CUDA excels at both and is the ideal platform for this transition, driving infrastructure investment measured in hundreds of billions of dollars. At Meta, AI recommendation systems are delivering higher quality and more relevant content, leading to more time spent on apps such as Facebook and Threads.

    全球超大規模資料中心(一個兆美元的產業)正在將搜尋、推薦和內容理解從傳統的機器學習轉變為生成式人工智慧。NVIDIA CUDA 在這兩方面都表現出色,是實現這項轉型的理想平台,推動了數千億美元的基礎設施投資。在 Meta,人工智慧推薦系統正在提供更高品質、更相關的內容,從而讓用戶在 Facebook 和 Threads 等應用程式上花費更多時間。

  • Analyst expectations for the top CSPs and hyperscalers in 2026 aggregate CapEx have continued to increase and now sit roughly at $600 billion, more than $200 billion higher, relative to the start of the year. We see the transition to accelerated computing and Generative AI across current hyperscale workloads contributing toward roughly half of our long-term opportunity.

    分析師對頂級 CSP 和超大規模營運商 2026 年總資本支出的預期持續增長,目前已達到約 6,000 億美元,比年初高出 2,000 多億美元。我們看到,目前超大規模工作負載向加速運算和生成式人工智慧的過渡,將為我們帶來大約一半的長期機會。

  • Another growth pillar is the ongoing increase in compute spend driven by foundation model builders, such as Anthropic, Mistral, OpenAI, Reflection, Safe Superintelligence, Thinking Machines Lab, and xAI, all scaling compute, aggressively, to scale intelligence.

    另一個成長支柱是運算支出的持續成長,這主要由 Anthropic、Mistral、OpenAI、Reflection、Safe Superintelligence、Thinking Machines Lab 和 xAI 等基礎模型建構者推動,它們都在積極擴展運算能力,以擴展智慧。

  • The three scaling laws -- pre-training, post-training, and inference -- remain intact. In fact, we see a positive virtuous cycle emerging, whereby the three scaling laws and access to compute are generating better intelligence and, in turn, increasing adoption and profits.

    預訓練、後訓練和推理這三大擴展定律依然有效。事實上,我們看到一個正向循環正在形成:三大擴展定律加上可取得的運算資源正在產生更好的智能,進而提高採用率和利潤。

  • OpenAI recently shared that their weekly user base has grown to 800 million; enterprise customers have increased to 1 million; and that their gross margins were healthy.

    OpenAI 最近分享稱,其每週用戶數量已增長至 8 億;企業客戶數量已增至 100 萬;且其毛利率狀況良好。

  • Well, Anthropic recently reported that its annualized run rate revenue has reached $7 billion as of last month, up from $1 billion at the start of the year. We

    Anthropic 最近公佈,截至上個月,其年化營收運轉率(annualized run rate revenue)已達到 70 億美元,高於年初的 10 億美元。我們

  • are also witnessing a proliferation of Agentic AI across various industries and tasks. Companies such as Cursor, Anthropic, OpenEvidence, Epic, and Abridge are experiencing a surge in user growth, as they supercharge the existing workforce, delivering unquestionable ROI for coders and healthcare professionals.

    我們也目睹了智能體人工智慧在各行業和任務中的廣泛應用。Cursor、Anthropic、OpenEvidence、Epic 和 Abridge 等公司正在經歷用戶成長激增,因為它們極大地增強了現有員工的能力,為程式設計師和醫療保健專業人員帶來了毋庸置疑的投資回報率。

  • The world's most important enterprise software platforms like ServiceNow, CrowdStrike, and SAP are integrating NVIDIA's accelerated computing and AI stack.

    全球最重要的企業軟體平台,如 ServiceNow、CrowdStrike 和 SAP,正在整合 NVIDIA 的加速運算和 AI 技術堆疊。

  • Our new partner, Palantir, is supercharging the incredibly popular ontology platform with NVIDIA CUDA-X libraries and AI models for the first time.

    我們的新合作夥伴 Palantir,首次以 NVIDIA CUDA-X 函式庫與 AI 模型,為其廣受歡迎的 Ontology 平台注入強大動能。

  • Previously, like most enterprise software platforms, ontology runs only on CPUs. Lowe's is leveraging the platform to build supply chain agility, reducing costs, and improving customer satisfaction.

    在此之前,如同大多數企業軟體平台,Ontology 僅能在 CPU 上運行。Lowe's(勞氏)正利用該平台打造供應鏈敏捷性,降低成本並提升客戶滿意度。

  • Enterprises, broadly, are leveraging AI to boost productivity, increase efficiency, and reduce cost. RBC is leveraging Agentic AI to drive significant analyst productivity, slashing report generation time from hours to minutes.

    整體而言,企業正在利用人工智慧來提高生產力、提升效率並降低成本。RBC 正在利用智慧人工智慧來顯著提高分析師的工作效率,將報告生成時間從數小時縮短到數分鐘。

  • AI and digital twins are helping Unilever accelerate content creation by 2x and cut costs by 50%. And

    人工智慧和數位孿生技術正在幫助聯合利華將內容創作速度提高 2 倍,並將成本降低 50%。而

  • Salesforce's engineering team has seen at least 30% productivity increase in new code development after adopting Cursor.

    Salesforce 的工程團隊在採用 Cursor 後,新程式碼開發效率至少提高了 30%。

  • This past quarter, we announced AI factory and infrastructure projects amounting to an aggregate of 5 million GPUs. This demand spans every market -- CSPs, sovereigns, model builders, enterprises, and supercomputing centers -- and includes multiple landmark build-outs: xAI's Colossus 2, the world's first gigawatt-scale data center; Lilly's AI factory for drug discovery, the pharmaceutical industry's most powerful data center; and, just today, AWS and HUMAIN expanded their partnership, including the deployment of up to 150,000 AI accelerators, including our GB300.

    上個季度,我們宣布了總計達 500 萬個 GPU 的人工智慧工廠和基礎設施專案。這些需求涵蓋所有市場,包括雲端服務供應商(CSP)、主權國家、模型開發商、企業和超級運算中心,並包含多個里程碑式的建設案:例如 xAI 的 Colossus 2,全球首座千兆瓦級資料中心;禮來公司用於藥物研發的 AI 工廠,是製藥業最強大的資料中心;以及就在今天,AWS 和 HUMAIN 擴大了合作,包括部署多達 15 萬個 AI 加速器,其中包括我們的 GB300。

  • xAI and HUMAIN also announced a partnership in which the two will jointly develop a network of world-class GPU data centers, anchored by the flagship 500-megawatt facilities.

    xAI 和 HUMAIN 也宣佈建立合作夥伴關係,雙方將共同開發一個世界一流的 GPU 資料中心網絡,以旗艦級 500 兆瓦設施為核心。

  • Blackwell gained further momentum in Q3, as GB300 crossed over GB200 and contributed roughly two-thirds of the total Blackwell revenue. The transition to GB300 has been seamless, with production shipments to the major cloud service providers, hyperscalers, and GPU clouds -- and is already driving their growth.

    Blackwell 在第三季獲得進一步的成長動能,GB300 出貨超越 GB200,貢獻了 Blackwell 總營收約三分之二。向 GB300 的過渡非常順利,已向主要雲端服務供應商、超大規模業者和 GPU 雲端業者交付量產產品,並且已經在推動它們的成長。

  • The Hopper platform, in its 13th quarter since its introduction, recorded approximately $2 billion in revenue in Q3. H20 sales were approximately $50 million.

    Hopper 平台自推出以來已進入第 13 個季度,第三季營收約 20 億美元。H20 的銷售額約為 5,000 萬美元。

  • Sizeable purchase orders never materialized in the quarter due to geopolitical issues and the increasingly competitive market in China. While we were disappointed in the current state that prevents us from shipping more competitive data center compute products to China, we are committed to continued engagement with the US and China governments and will continue to advocate for America's ability to compete around the world.

    由於地緣政治問題和中國市場競爭日益激烈,本季未能實現大額採購訂單。儘管我們對目前阻礙我們向中國出口更具競爭力的資料中心運算產品的狀況感到失望,但我們致力於繼續與美國和中國政府進行溝通,並將繼續倡導美國在全球競爭的能力。

  • To establish a sustainable leadership position in AI computing, America must win the support of every developer and be the platform of choice for every commercial business, including those in China. The

    要想在人工智慧運算領域確立可持續的領導地位,美國必須贏得每位開發者的支持,並成為包括中國在內的所有商業企業的首選平台。這

  • Rubin platform is on track to ramp in the second half of 2026. Powered by seven chips, the Vera Rubin platform will once again deliver an x-factor improvement in performance, relative to Blackwell.

    Rubin平台預計在2026年下半年實現量產。Vera Rubin 平台由七顆晶片驅動,與 Blackwell 相比,其性能將再次顯著提升。

  • We have received silicon back from our supply chain partners and are happy to report that NVIDIA teams across the world are executing the bring-up beautifully.

    我們已經從供應鏈合作夥伴那裡收到了晶片,很高興地報告說,NVIDIA 世界各地的團隊正在出色地執行啟動工作。

  • Rubin is our third-generation rack-scale system, designed to substantially redefine manufacturability while remaining compatible with Grace Blackwell.

    Rubin 是我們的第三代機架級系統,旨在大幅重新定義可製造性,同時保持與 Grace Blackwell 的相容性。

  • Our supply chain data center ecosystem and cloud partners have now mastered the build-to-installation process of NVIDIA's rack architecture. Our ecosystem will be ready for a fast Rubin ramp.

    我們的供應鏈資料中心生態系統和雲端合作夥伴現在已經掌握了 NVIDIA 機架架構的建置到安裝流程。我們的生態系統已做好準備,迎接 Rubin 的快速發展。

  • Our annual x-Factor performance lead increases performance per dollar, while driving down computing costs for our customers. The long useful life of NVIDIA's CUDA GPUs is a significant TCO advantage over accelerators. CUDA's compatibility and our massive installed base extend the life of NVIDIA systems well beyond their original, estimated, useful life.

    我們每年 x 倍的效能領先優勢提高了每美元效能,同時降低了客戶的運算成本。NVIDIA CUDA GPU 的長使用壽命,相較於其他加速器是一項顯著的 TCO 優勢。CUDA 的相容性和我們龐大的裝機量,使 NVIDIA 系統的使用壽命遠超過其原先估計的使用年限。

  • For more than two decades, we have optimized the CUDA ecosystem, improving existing workloads, accelerating new ones, and increasing throughput with every software release. Most accelerators without CUDA and NVIDIA's time-tested and versatile architecture became obsolete within a few years, as model technologies evolve.

    二十多年來,我們不斷優化 CUDA 生態系統,改進現有工作負載,加速新工作負載,並在每次軟體版本發佈時提高吞吐量。隨著模型技術的演進,大多數不具備 CUDA 和 NVIDIA 久經考驗且用途廣泛的架構的加速器在幾年內就被淘汰了。

  • Thanks to CUDA, the A100 GPUs we shipped six years ago are still running at full utilization today, powered by a vastly improved software stack.

    由於 CUDA,我們六年前推出的 A100 GPU 至今仍在滿載運行,這得益於大幅改進的軟體堆疊。

  • We have evolved over the past 25 years from a gaming GPU company to now, an AI data center infrastructure company. Our ability to innovate across the CPU, the GPU, networking, software, and, ultimately, drive down cost per token is unmatched across the industry.

    在過去 25 年裡,我們從一家遊戲 GPU 公司發展成為如今的 AI 資料中心基礎設施公司。我們橫跨 CPU、GPU、網路與軟體進行創新、並最終降低每 token 成本的能力,在整個產業中無可匹敵。

  • Our networking business, purpose-built for AI, and now the largest in the world, generated revenue of $8.2 billion, up 162% year over year, with NVLink, InfiniBand, and Spectrum-X Ethernet all contributing to the growth.

    我們專為人工智慧打造的網路業務,如今已成為全球最大的網路業務,創造了 82 億美元的收入,年成長 162%,其中 NVLink、InfiniBand 和 Spectrum-X 乙太網路都為這一成長做出了貢獻。

  • We are winning in data center networking, as the majority of AI deployments now include our switches with Ethernet GPU attach rates roughly on par with InfiniBand. Meta, Microsoft, Oracle, and xAI are building gigawatt AI factories with Spectrum-X Ethernet switches. Each will run its operating system of choice, highlighting the flexibility and openness of our platform.

    我們正在資料中心網路領域勝出:如今大多數 AI 部署都採用了我們的交換器,乙太網路的 GPU 附掛率(attach rate)已與 InfiniBand 大致相當。Meta、微軟、Oracle 和 xAI 正在利用 Spectrum-X 乙太網路交換器建造千兆瓦級 AI 工廠。每家業者都將運行其自選的作業系統,凸顯了我們平台的靈活性和開放性。

  • We recently introduced Spectrum-XGS, a scale-across technology that enables gigascale AI factories. NVIDIA is the only company with AI scale up, scale out, and scale across platforms, reinforcing our unique position in the market as the AI infrastructure provider.

    我們最近推出了 Spectrum-XGS,這是一種可跨規模擴展的技術,能夠實現千兆級人工智慧工廠。NVIDIA 是唯一一家能夠實現 AI 縱向擴展、橫向擴展和跨平台擴展的公司,這鞏固了我們作為 AI 基礎設施供應商在市場上的獨特地位。

  • Customer interest in NVLink Fusion continues to grow. We announced a strategic collaboration with Fujitsu in October, where we will integrate Fujitsu's CPUs and NVIDIA GPUs via NVLink Fusion, connecting our large ecosystems.

    客戶對 NVLink Fusion 的興趣持續增長。我們在 10 月宣布與富士通達成策略合作,將透過 NVLink Fusion 整合富士通的 CPU 和 NVIDIA 的 GPU,連接我們雙方龐大的生態系統。

  • We also announced a collaboration with Intel to develop multiple generations of custom data center and PC products connecting NVIDIA and Intel's ecosystems using NVLink.

    我們也宣布與英特爾合作,開發多代客製化資料中心和 PC 產品,利用 NVLink 將 NVIDIA 和英特爾的生態系統連接起來。

  • This week at Supercomputing '25, Arm announced that it will be integrating NVLink IP for customers to build CPU SOCs that connect with NVIDIA.

    本週在超級運算 '25 大會上,Arm 宣布將整合 NVLink IP,供客戶建置可與 NVIDIA 連接的 CPU SoC。

  • Currently on its fifth generation, NVLink is the only proven scale-up technology available on the market today.

    NVLink 目前已發展到第五代,是目前市場上唯一經過驗證的可擴展技術。

  • In the latest MLPerf training results, Blackwell Ultra delivered 5x faster time to train than Hopper. NVIDIA swept every benchmark. Notably, NVIDIA is the only training platform to leverage FP4 while meeting the MLPerf's strict accuracy standards.

    在最新的 MLPerf 訓練結果中,Blackwell Ultra 的訓練速度比 Hopper 快 5 倍。NVIDIA橫掃所有基準測試。值得注意的是,NVIDIA 是唯一能夠利用 FP4 並符合 MLPerf 嚴格準確度標準的訓練平台。

  • In SemiAnalysis InferenceMAX benchmark, Blackwell achieved the highest performance and lowest total cost of ownership across every model and use case. Particularly important is Blackwell's NVLink's performance on a Mixture-of-Experts, the architecture for the world's most popular reasoning models.

    在 SemiAnalysis InferenceMAX 基準測試中,Blackwell 在所有模型和用例中均實現了最高的效能和最低的總擁有成本。Blackwell 的 NVLink 在混合專家模型上的表現尤其重要,而混合專家模型是世界上最受歡迎的推理模型的架構。

  • On DeepSeek R1, Blackwell delivered 10x higher performance per watt and 10x lower cost per token versus H200, a huge generational leap fueled by our extreme co-design approach.

    在 DeepSeek R1 上,相較於 H200,Blackwell 的每瓦效能提高 10 倍,每 token 成本則降低 10 倍,這是我們極致協同設計方法所帶來的巨大世代躍進。

  • NVIDIA Dynamo, an open-source, low-latency, modular inference framework, has now been adopted by every major cloud service provider. Leveraging Dynamo's enablement of disaggregated inference and the resulting increase in performance of complex AI models such as MoE models, AWS, Google Cloud, Microsoft Azure, and OCI have boosted AI inference performance for enterprise cloud customers.

    NVIDIA Dynamo 是一款開源、低延遲、模組化的推理框架,目前已被所有主要雲端服務供應商採用。借助 Dynamo 對分離式推理(disaggregated inference)的支援,以及由此帶來的 MoE 等複雜 AI 模型的效能提升,AWS、Google Cloud、Microsoft Azure 和 OCI 均已為企業雲端客戶提升了 AI 推理效能。

  • We are working on a strategic partnership with OpenAI, focused on helping them build and deploy at least 10 gigawatts of AI data centers. In addition, we have the opportunity to invest in the company. We serve OpenAI through their cloud partners, Microsoft Azure, OCI, and CoreWeave. We will continue to do so for the foreseeable future.

    我們正在與 OpenAI 建立策略合作夥伴關係,重點是幫助他們建立和部署至少 10 吉瓦的 AI 資料中心。此外,我們還有機會投資該公司。我們透過其雲端合作夥伴 Microsoft Azure、OCI 和 CoreWeave 為 OpenAI 提供服務。在可預見的未來,我們將繼續這樣做。

  • As they continue to scale, we are delighted to support the company to add self-build infrastructure. We are working toward a definitive agreement and are excited to support OpenAI's growth.

    隨著公司規模的不斷擴大,我們很高興能夠支持公司增加自建基礎設施。我們正在努力達成最終協議,並很高興能夠支持 OpenAI 的發展。

  • Yesterday, we celebrated an announcement with Anthropic. For the first time, Anthropic is adopting NVIDIA. We are establishing a deep technology partnership to support Anthropic's fast growth. We will collaborate to optimize Anthropic models for CUDA and deliver the best possible performance, efficiency, and TCO.

    昨天,我們與 Anthropic 共同慶祝了一項公告。這是 Anthropic 首次採用 NVIDIA 的產品。我們正在建立深度技術合作夥伴關係,以支援 Anthropic 的快速發展。我們將合作為 CUDA 優化 Anthropic 的模型,以提供最佳的效能、效率和總擁有成本。

  • We will also optimize future NVIDIA architectures for Anthropic workloads. Anthropic's compute commitment is initially including up to 1 gigawatt of compute capacity with Grace Blackwell and Vera Rubin Systems.

    我們也將針對 Anthropic 的工作負載優化未來的 NVIDIA 架構。Anthropic 的運算承諾最初包括高達 1 吉瓦、採用 Grace Blackwell 與 Vera Rubin 系統的運算容量。

  • Our strategic investments in Anthropic, Mistral, OpenAI, Reflection, Thinking Machines, and others represent partnerships that grow the NVIDIA CUDA AI ecosystem and enable every model to run optimally on NVIDIA, everywhere.

    我們對 Anthropic、Mistral、OpenAI、Reflection、Thinking Machines 等公司的策略投資,代表著發展 NVIDIA CUDA AI 生態系統的合作夥伴關係,並使每個模型都能在任何地方的 NVIDIA 設備上以最佳方式運作。

  • We will continue to invest strategically while preserving our disciplined approach to cash flow management.

    我們將繼續進行策略性投資,同時維持我們嚴謹的現金流管理方式。

  • Physical AI is already a multi-billion-dollar business addressing a multi-trillion-dollar opportunity and the next leg of growth for NVIDIA. Leading US manufacturers and robotics innovators are leveraging NVIDIA's three-computer architecture to train on NVIDIA, test on Omniverse computers, and deploy real-world AI on Jetson robotic computers.

    實體 AI 已經是一個數十億美元規模的業務,所對應的是數兆美元的市場機會,也是 NVIDIA 的下一個成長動能。美國領先的製造商和機器人創新者正在利用 NVIDIA 的三電腦架構:在 NVIDIA 上訓練、在 Omniverse 電腦上測試,並在 Jetson 機器人電腦上部署真實世界的 AI。

  • PTC and Siemens introduced new services that bring Omniverse-powered digital twin workflows to their extensive installed base of customers. Companies including Belden, Caterpillar, Foxconn, Lucid Motors, Toyota, TSMC, and Wistron are building Omniverse digital twin factories to accelerate AI-driven manufacturing and automation.

    PTC 和西門子推出了新的服務,將 Omniverse 驅動的數位孿生工作流程帶給他們龐大的既有客戶群。包括貝爾登、卡特彼勒、富士康、Lucid Motors、豐田、台積電和緯創在內的多家公司,正在建造 Omniverse 數位孿生工廠,以加速 AI 驅動的製造和自動化。

  • Agility Robotics, Amazon Robotics, Figure, and Skild.ai are building on our platform, tapping offerings such as NVIDIA Cosmos world foundation models for development, Omniverse for simulation and validation, and Jetson to power next-generation intelligent robots.

    Agility Robotics、Amazon Robotics、Figure 和 Skild.ai 正基於我們的平台進行開發,採用 NVIDIA Cosmos 世界基礎模型(開發)、Omniverse(模擬與驗證)以及 Jetson 等產品,為下一代智慧機器人提供動力。

  • We remain focused on building resiliency and redundancy in our global supply chain. Last month, in partnership with TSMC, we celebrated the first Blackwell wafer produced on US soil. We will continue to work with Foxconn, Wistron, Amkor, SPIL, and others to grow our presence in the US over the next four years.

    我們將持續致力於強化全球供應鏈的韌性和冗餘性。上個月,我們與台積電共同慶祝了第一片在美國本土生產的 Blackwell 晶圓問世。未來四年,我們將繼續與富士康、緯創、Amkor、SPIL 等公司合作,擴大我們在美國的布局。

  • Gaming revenue was $4.3 billion, up 30% year on year, driven by strong demand as Blackwell momentum continued. End-market sell-through remains robust. Channel inventories are at normal levels, heading into the holiday season.

    受 Blackwell 持續強勁的成長勢頭推動,遊戲收入達到 43 億美元,年增 30%。終端市場銷售依然強勁。進入假日季,渠道庫存處於正常水準。

  • Steam recently broke its concurrent user record, with 42 million gamers, while thousands of fans packed the GeForce Gamer Festival in South Korea to celebrate 25 years of GeForce.

    Steam 最近打破了同時線上用戶記錄,達到 4,200 萬玩家;與此同時,成千上萬的粉絲湧入韓國的 GeForce 玩家節,慶祝 GeForce 成立 25 週年。

  • NVIDIA Pro Visualization has evolved into computers for engineers and developers, whether for graphics or for AI. Professional Visualization revenue was $760 million, up 56% year over year, another record. Growth was driven by DGX Spark, the world's smallest AI supercomputer, built on a small configuration of Grace Blackwell.

    NVIDIA 專業視覺化技術已發展成為面向工程師和開發人員的計算機,無論是用於圖形處理還是人工智慧。專業視覺化收入為 7.6 億美元,年增 56%,再次創下紀錄。成長的驅動力是 DGX Spark,它是世界上最小的 AI 超級計算機,基於 Grace Blackwell 的小型配置構建而成。

  • Automotive revenue was $592 million, up 32% year-over-year, primarily driven by self-driving solutions. We are partnering with Uber to scale the world's largest Level 4-ready autonomous fleet, built on the new NVIDIA Hyperion L4 robotaxi reference architecture.

    汽車業務收入為 5.92 億美元,年增 32%,主要得益於自動駕駛解決方案的成長。我們正與 Uber 合作,以擴大全球最大的 L4 級自動駕駛車隊規模,該車隊基於全新的 NVIDIA Hyperion L4 機器人計程車參考架構建構。

  • Moving to the rest of the P&L, GAAP gross margins were 73.4% and non-GAAP gross margins were 73.6%, exceeding our outlook. Gross margins increased sequentially due to our data center mix, improved cycle time, and cost structure.

    再來看損益表的其他部分,GAAP毛利率為73.4%,非GAAP毛利率為73.6%,都超過了我們的預期。由於資料中心組合最佳化、週期時間縮短和成本結構調整,毛利率較上季成長。

  • GAAP operating expenses were up 8% sequentially and up 11% on a non-GAAP basis. The increase was driven by infrastructure compute, as well as higher compensation and benefits and engineering development costs.

    以 GAAP 計算,營運費用較上季成長 8%;以 non-GAAP 計算則成長 11%。成長主要來自基礎設施運算,以及較高的薪酬福利與工程開發成本。

  • Non-GAAP effective tax rate for the third quarter was just over 17%, higher than our guidance of 16.5%, due to strong US revenue.

    由於美國市場收入強勁,第三季非GAAP有效稅率略高於17%,高於我們先前16.5%的預期。

  • On our balance sheet, inventory grew 32% quarter over quarter, while supply commitments increased 63% sequentially. We are preparing for significant growth ahead and feel good about our ability to execute against our opportunity set.

    從資產負債表來看,庫存較上季成長 32%,而供應承諾則較上季成長 63%。我們正在為未來的顯著成長做好準備,並對把握機會的能力充滿信心。

  • Okay. Let me turn to the outlook for the fourth quarter. Total revenue is expected to be $65 billion, plus or minus 2%. At the midpoint, our outlook implies 14% sequential growth, driven by continued momentum in the Blackwell architecture. Consistent with last quarter, we are not assuming any data center compute revenue from China.

    好的。接下來談第四季的展望。預計總營收為 650 億美元,上下浮動 2%。以指引中點計算,我們的展望意味著環比成長 14%,動能來自 Blackwell 架構的持續強勁。與上一季一致,我們未假設任何來自中國的資料中心運算收入。
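  上述指引所隱含的環比成長率與指引區間,可以用簡單的算式驗證。以下是一段示意用的 Python 小程式(數字取自本次指引中點與浮動幅度,僅為驗算示範,並非官方揭露內容):

    ```python
    # 以 Q4 指引驗算隱含的環比成長率與指引區間(單位:十億美元)
    q3_revenue = 57.0      # Q3 實際營收
    q4_guide_mid = 65.0    # Q4 營收指引中點
    tolerance = 0.02       # 指引上下浮動 ±2%

    # 指引中點所隱含的環比(季增)成長率
    seq_growth = q4_guide_mid / q3_revenue - 1
    print(f"隱含環比成長率: {seq_growth:.1%}")  # 約 14.0%,與管理層所述一致

    # 指引區間的上下緣
    low = q4_guide_mid * (1 - tolerance)
    high = q4_guide_mid * (1 + tolerance)
    print(f"指引區間: {low:.1f} ~ {high:.1f} 十億美元")  # 63.7 ~ 66.3
    ```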

  • GAAP and non-GAAP gross margins are expected to be 74.8% and 75%, respectively, plus or minus 50 basis points.

    GAAP 和非 GAAP 毛利率預計分別為 74.8% 和 75%,上下浮動 50 個基點。

  • Looking ahead to fiscal year 2027, input costs are on the rise but we are working to hold gross margins in the mid-[70s].

    展望 2027 財年,投入成本持續上漲,但我們正努力將毛利率維持在 70% 中段(mid-70s)的水準。

  • GAAP and non-GAAP operating expenses are expected to be approximately $6.7 billion and $5 billion, respectively. GAAP and non-GAAP other income and expenses are expected to be an income of approximately $500 million, excluding gains and losses from non-marketable and publicly held equity securities.

    GAAP 和非 GAAP 營運費用預計分別約為 67 億美元和 50 億美元。GAAP 和非 GAAP 其他收入和支出預計為 5 億美元左右,不包括非上市和公開持有的股權證券的損益。

  • GAAP and non-GAAP tax rates are expected to be 17%, plus or minus 1%, excluding any discrete items.

    GAAP 和非 GAAP 稅率預估為 17%,上下浮動 1%,不包括任何一次性項目。
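  綜合以上各項指引,可以粗略拼出 Q4 的 Non-GAAP 獲利輪廓。以下 Python 小程式僅為依指引中點所做的示意性推算(所有輸入數字均取自上述指引,推算結果並非公司提供的預測):

    ```python
    # 依 Q4 指引中點做的示意性 Non-GAAP 損益推算(單位:十億美元)
    revenue = 65.0        # 營收指引中點
    gross_margin = 0.75   # Non-GAAP 毛利率指引
    opex = 5.0            # Non-GAAP 營運費用指引
    other_income = 0.5    # 其他收入(不含股權證券損益)
    tax_rate = 0.17       # 有效稅率指引中點

    gross_profit = revenue * gross_margin            # 48.75
    operating_income = gross_profit - opex           # 43.75
    pretax_income = operating_income + other_income  # 44.25
    net_income = pretax_income * (1 - tax_rate)      # 約 36.7
    print(f"示意 Non-GAAP 淨利: 約 {net_income:.1f} 十億美元")
    ```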

  • At this time, let me turn the call over to Jensen for him to say a few words.

    現在,我把電話交給 Jensen,請他說幾句話。

  • Jensen Huang - President, Chief Executive Officer, Director

    Jensen Huang - President, Chief Executive Officer, Director

  • Thanks, Colette.

    謝謝你,科萊特。

  • There's been a lot of talk about an AI bubble. From our vantage point, we see something very different.

    關於人工智慧泡沫的討論很多。從我們的角度來看,情況則截然不同。

  • As a reminder, NVIDIA is unlike any other accelerator. We excel at every phase of AI, from pre-training and post training to inference. With our two-decade investment in CUDA-X acceleration libraries, we are also exceptional at science and engineering simulations, computer graphics, structured data processing, to classical machine learning.

    再次提醒,NVIDIA 與任何其他加速器都不同。從預訓練、後訓練到推理,我們在 AI 的每個階段都表現卓越。憑藉二十年來對 CUDA-X 加速函式庫的投入,我們在科學與工程模擬、電腦圖形、結構化資料處理乃至經典機器學習等領域同樣出色。

  • The world is undergoing three massive platform shifts at once, the first time since the dawn of Moore's Law. NVIDIA is uniquely addressing each of the three transformations.

    世界正同時經歷三大平台變革,這是自摩爾定律誕生以來的第一次。NVIDIA 針對這三種變革分別提出了獨特的解決方案。

  • The first transition is from CPU general-purpose computing to GPU accelerated computing, as Moore's Law slows. The world has a massive investment in non-AI software, from data processing to science and engineering simulations, representing hundreds of billions of dollars in cloud computing spend each year.

    第一個轉變是隨著摩爾定律放緩,從 CPU 通用運算轉向 GPU 加速運算。全球在非 AI 軟體上有龐大投資,涵蓋資料處理、科學和工程模擬等領域,每年對應數千億美元的雲端運算支出。

  • Many of these applications -- which ran, once, exclusively on CPUs -- are now rapidly shifting to CUDA GPUs. Accelerated computing has reached a tipping point.

    許多曾經只在 CPU 上運行的應用程序,現在正迅速轉向 CUDA GPU。加速計算已經達到了一個臨界點。

  • Secondly, AI has also reached a tipping point and is transforming existing applications, while enabling entirely new ones. For existing applications, Generative AI is replacing classical machine learning in search ranking, recommender systems, ad targeting, click-through prediction, to content moderation, the very foundations of hyperscale infrastructure.

    其次,人工智慧也已達到臨界點,正在改變現有的應用,同時也催生了全新的應用。對於現有應用程式而言,生成式人工智慧正在取代傳統的機器學習,應用於搜尋排名、推薦系統、廣告定向、點擊率預測以及內容審核等領域,這些都是超大規模基礎設施的基礎。

  • Meta's GEM, a foundation model for ad recommendations trained on large-scale GPU clusters, exemplifies this shift. In Q2, Meta reported over a 5% increase in ad conversions on Instagram and 3% gain on Facebook feed, driven by Generative AI-based GEM. Transitioning to Generative AI represents substantial revenue gains for hyperscalers.

    Meta 的 GEM 是一個基於大規模 GPU 叢集訓練的廣告推薦基礎模型,它體現了這種轉變。第二季度,Meta 報告稱,在基於生成式人工智慧的 GEM 的推動下,Instagram 上的廣告轉換率增長超過 5%,Facebook 動態上的廣告轉換率增長 3%。向生成式人工智慧轉型將為超大規模資料中心帶來可觀的收入成長。

  • Now, a new wave is rising -- Agentic AI systems capable of reasoning, planning, and using tools -- from coding assistants like Cursor and Claude Code to radiology tools like Aidoc, legal assistants like Harvey, and AI chauffeurs like Tesla FSD and Waymo. These systems mark the next frontier of computing.

    現在,一股新浪潮正在興起:能夠推理、規劃並使用工具的智慧體 AI 系統。從 Cursor 和 Claude Code 等程式開發助手,到 Aidoc 等放射科工具、Harvey 等法律助理,以及 Tesla FSD 和 Waymo 等 AI 司機,這些系統標誌著運算領域的下一個前沿。

  • The fastest-growing companies in the world today -- OpenAI, Anthropic, xAI, Google, Cursor, Lovable, Replit, Cognition AI, OpenEvidence, Abridge, Tesla -- are pioneering Agentic AI.

    當今世界上發展最快的公司,如 OpenAI、Anthropic、xAI、Google、Cursor、Lovable、Replit、Cognition AI、OpenEvidence、Abridge 和 Tesla,正在引領智能體人工智慧的發展。

  • So there are three massive platform shifts. The transition to accelerated computing is foundational and necessary, essential in a post-Moore's Law era. The transition to Generative AI is transformational and necessary, supercharging existing applications and business models. The transition to Agentic and Physical AI will be revolutionary, giving rise to new applications, companies, products, and services.

    因此,平台發生了三次巨大的轉變。向加速計算的過渡是基礎性的、必要的,也是後摩爾定律時代必不可少的。向生成式人工智慧的過渡是變革性的,也是必要的,它將極大地增強現有應用程式和商業模式。向智慧體人工智慧和實體人工智慧的過渡將是革命性的,它將催生新的應用、公司、產品和服務。

  • As you consider infrastructure investments, consider these three fundamental dynamics, each will contribute to infrastructure growth in the coming years.

    在考慮基礎設施投資時,請考慮以下三個基本動態,它們都將在未來幾年促進基礎設施的發展。

  • NVIDIA is chosen because our singular architecture enables all three transitions. And thus, for any form and modality of AI, across all industries, across every phase of AI, across all of the diverse computing needs in the cloud -- and from cloud to enterprise to robots -- one architecture.

    之所以選擇 NVIDIA,是因為我們獨特的架構能夠實現所有三種過渡。因此,對於所有產業、所有階段、所有不同運算需求的 AI 的任何形式和模式,以及從雲端到企業再到機器人,都採用一種架構。

  • Toshiya, back to you.

    Toshiya,交還給你。

  • Toshiya Hari - Vice President - Investor Relations & Strategic Finance

    Toshiya Hari - Vice President - Investor Relations & Strategic Finance

  • We will now open the call for questions. Operator, would you please poll for questions?

    現在開始接受提問。接線生,請開放提問。

  • Operator

    Operator

  • (Operator Instructions)

    (操作說明)

  • Joseph Moore, Morgan Stanley.

    約瑟夫‧摩爾,摩根士丹利。

  • Joseph Moore - Analyst

    Joseph Moore - Analyst

  • I wonder if you could update us -- you talked about the $500 billion of revenue for Blackwell plus Rubin in '25 and '26 at GTC. At that time, you talked about $150 billion of that already having been shipped.

    我想知道您能否為我們更新一下情況——您在 GTC 上談到了 Blackwell 和 Rubin 在 2025 年和 2026 年的 5000 億美元收入。當時,你提到其中價值 1500 億美元的貨物已經發貨。

  • As the quarter has wrapped up, are those still the general parameters -- that there's $350 billion in the next 14 months or so? I would assume, over that time, you haven't seen all the demand, but is there any possibility of upside to those numbers as we move forward?

    本季已經結束,這些大致的參數是否仍然成立——也就是未來 14 個月左右還有 3,500 億美元的出貨?我估計在這段期間內,您還沒看到全部的需求,但隨著時間推進,這些數字是否有上修的空間?

  • Colette Kress - Executive Vice President, Chief Financial Officer

    Colette Kress - Executive Vice President, Chief Financial Officer

  • Yeah. Thanks, Joe. I'll start first with a response here on that.

    是的。謝謝,喬。我先在這裡回覆一下。

  • Yes, that's correct. We are working into our $500 billion forecast. We are on track for that, as we have finished some of the quarters. Now, we have several quarters now in front of us to take us through the end of calendar year '26. The number will grow. We will achieve, I'm sure, additional needs for compute that will be shippable by fiscal year '26.

    是的,沒錯。我們正在努力實現5000億美元的預測目標。我們正按計劃推進,因為我們已經完成了部分季度的工作。現在,我們還有好幾個季度的時間,直到 2026 年底。這個數字還會成長。我相信,到 2026 財年,我們將滿足額外的運算需求,並交付相應的解決方案。

  • So we shipped $50 billion this quarter. But we would be not finished if we didn't say that we'll probably be taking more orders.

    我們本季出貨 500 億美元。但若不提到我們可能還會接到更多訂單,這個說法就不完整。

  • For example, just even today, our announcements with KSA. That agreement in itself is 400,000 to 600,000 more GPUs over three years.

    例如,就在今天,我們與沙烏地阿拉伯王國發布了公告。該協議本身意味著三年內將增加 40 萬至 60 萬個 GPU。

  • Anthropic is also net-new.

    Anthropic 也是淨新增的訂單。

  • So there's definitely an opportunity for us to have more, on top of the $500 billion that we announced.

    因此,在我們宣布的 5000 億美元之外,我們肯定還有機會拿到更多訂單。
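The order-of-magnitude math behind this answer can be sketched as a quick consistency check. The $500B target, the ~$150B shipped as of GTC, and the ~$50B shipped this quarter are the figures quoted above; treating the $50B as incremental to the $150B, and the ~$40k per-GPU price used for the KSA estimate, are purely illustrative assumptions, not disclosed numbers.

```python
# Consistency check of the Blackwell + Rubin figures quoted on the call.
# Assumption: the ~$50B shipped this quarter is incremental to the ~$150B
# already shipped as of GTC (the transcript does not state this explicitly).

TOTAL_TARGET_B = 500    # $500B Blackwell + Rubin revenue target, CY2025-26
SHIPPED_AT_GTC_B = 150  # ~$150B shipped as of GTC
SHIPPED_THIS_Q_B = 50   # ~$50B shipped this quarter

remaining_b = TOTAL_TARGET_B - SHIPPED_AT_GTC_B - SHIPPED_THIS_Q_B
print(f"Remaining to ship through end of CY2026: ~${remaining_b}B")

# The KSA agreement adds 400k-600k GPUs over three years. At a purely
# hypothetical ~$40k average selling price per GPU, the incremental
# revenue on top of the $500B would be roughly:
for gpus in (400_000, 600_000):
    print(f"{gpus:,} GPUs x $40k ~= ${gpus * 40_000 / 1e9:.0f}B")
```

Under these assumptions the remainder is about $300B, consistent with the analyst's "$350 billion in the next 14 months" framing once the current quarter's shipments are netted out.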

  • Operator

    Operator

  • CJ Muse, Cantor Fitzgerald.

    CJ Muse,坎托·菲茨杰拉德。

  • CJ Muse - Analyst

    CJ Muse - Analyst

  • There's clearly a great deal of consternation around the magnitude of AI infrastructure build-outs and the ability to fund such plans and the ROI. Yet, at the same time, you're talking about being sold out, every suite of GPUs is taken.

    顯然,人們對人工智慧基礎設施建設的規模、此類計劃的資金籌措能力以及投資回報率感到非常擔憂,但與此同時,你們又表示產品全面售罄,所有 GPU 系統都已被預訂一空。

  • The AI world hasn't seen the enormous benefit yet from B300 -- never mind Rubin. Gemini 3 was just announced; Grok 5 is coming soon.

    人工智慧領域尚未完全感受到 B300 帶來的巨大效益,更不用說 Rubin 了。Gemini 3 剛剛發布,Grok 5 也即將推出。

  • And so the question is this: When you look at that as the backdrop, do you see a realistic path for supply to catch up with demand over the next 12 to 18 months? Or do you think it can extend beyond that timeframe?

    因此,問題是:在這種背景下,你認為未來 12 到 18 個月內,供應是否有可能趕上需求?還是你認為它會持續到超出這個時間範圍?

  • Jensen Huang - President, Chief Executive Officer, Director

    Jensen Huang - President, Chief Executive Officer, Director

  • Well, as you know, we've done a really good job planning our supply chain. NVIDIA's supply chain basically includes every technology company in the world. TSMC and their packaging and our memory vendors and memory partners and all of our system ODMs have done a really good job planning with us.

    如您所知,我們在供應鏈規劃方面做得非常出色。英偉達的供應鏈基本上涵蓋了世界上的每家科技公司。台積電及其封裝廠商、我們的記憶體供應商、記憶體合作夥伴以及所有系統ODM廠商都與我們一起出色地完成了規劃工作。

  • We were planning for a big year. We've seen for some time the three transitions that I spoke about just a second ago: accelerated computing from general-purpose computing. It's really important to recognize that AI is not just Agentic AI but Generative AI is transforming the way that hyperscalers did the work that they used to do on CPUs.

    我們原本計劃今年會大有作為。我們已經看到了我剛才提到的三種轉變:從通用計算到加速計算。真正重要的是要認識到,人工智慧不僅僅是智慧體人工智慧,生成式人工智慧正在改變超大規模資料中心營運商過去在 CPU 上完成工作的方式。

  • Generative AI made it possible for them to move search and recommender systems and ad recommendations and targeting, all of that has been moved to Generative AI and still transitioning.

    生成式人工智慧使他們能夠將搜尋和推薦系統、廣告推薦和定向等功能轉移到生成式人工智慧,並且仍在不斷過渡。

  • And so whether you install NVIDIA GPUs for data processing or you did it for Generative AI for your recommender system or you're building it for Agentic chatbots and the type of AIs that most people see when they think about AI, all of those applications are accelerated by NVIDIA.

    因此,無論您安裝 NVIDIA GPU 用於資料處理,還是用於生成式 AI 的推薦系統,或用於建立智慧聊天機器人以及大多數人想到 AI 時所看到的 AI 類型,所有這些應用程式都由 NVIDIA 加速。

  • And so when you look at the totality of the spend, it's really important to think about each one of those layers. They're all growing. They're related but not the same. But the wonderful thing is that they all run on NVIDIA GPUs.

    因此,在考慮總支出時,仔細考慮每一層支出都非常重要。它們都在成長。它們有親緣關係,但並不相同。但最棒的是,它們都運行在 NVIDIA GPU 上。

  • Simultaneously, because the quality of the AI models are improving so incredibly, the adoption of it in the different use cases, whether it's in code assistance, which NVIDIA uses fairly exhaustively -- and we're not the only one.

    同時,由於人工智慧模型的品質得到了極大的提高,因此它在各種應用場景中的採用也越來越廣泛,例如在程式碼輔助方面,NVIDIA 已經相當徹底地使用了這項技術——而且我們並不是唯一一家這樣做的公司。

  • The fastest-growing application in history, a combination of Cursor and Claude Code and OpenAI's Codex, and GitHub Copilot -- these applications are the fastest-growing in history. It's not just used by software engineers. Because of [vibe] coding, it's used by engineers and marketers all over companies; supply chain planners, all over companies.

    歷史上成長最快的應用程式——Cursor、Claude Code、OpenAI 的 Codex 以及 GitHub Copilot 的組合——這些應用程式是史上成長最快的。它們並非僅供軟體工程師使用。由於氛圍式編碼([vibe] coding)的普及,各公司的工程師、行銷人員以及供應鏈規劃人員都在使用它。

  • And so I think that that's just one example and the list goes on, whether it's OpenEvidence and the work that they do in healthcare; or the work that's being done in digital video editing in Runway. A number of really, really exciting start-ups that are taking advantage of Generative AI and Agentic AI is growing quite rapidly.

    所以我認為這只是一個例子,這樣的例子還有很多,無論是 OpenEvidence 及其在醫療保健領域所做的工作;還是 Runway 在數位影片編輯領域所做的工作。一些利用生成式人工智慧和智慧體人工智慧的新創公司發展迅猛,令人振奮。

  • Not to mention, we're all using it a lot more.

    更何況,我們現在使用它的頻率也大大增加了。

  • And so all of these exponentials -- not to mention, just today, I was reading a text from Demis. He was saying that pre-training and post-training are fully intact. Gemini 3 takes advantage of the scaling laws and got a huge jump in quality performance, model performance.

    所以所有這些指數級成長——更不用說,就在今天,我還讀到 Demis 傳來的訊息。他說預訓練和後訓練的擴展仍完全有效。Gemini 3 善用了擴展定律(scaling laws),在品質表現和模型效能上獲得了巨大的飛躍。

  • And so we're seeing all of these exponentials running at the same time. Just always go back to first principles and think about what's happening from each one of the dynamics that I mentioned before: general purpose computing to accelerated computing; Generative AI replacing classical machine learning; and, of course, Agentic AI, which is a brand-new category.

    因此,我們看到所有這些指數級增長同時發生。始終要回歸基本原理,思考我之前提到的每一種動態變化:從通用計算到加速計算;生成式人工智慧取代經典機器學習;當然還有智能體人工智慧,這是一個全新的類別。

  • Operator

    Operator

  • Vivek Arya, Bank of America Securities.

    Vivek Arya,美國銀行證券。

  • Vivek Arya - Analyst

    Vivek Arya - Analyst

  • I'm curious what assumptions are you making on NVIDIA content per gigawatt in that $500 billion number? Because we have heard numbers as low as $25 billion per gigawatt of content or as high as $30 billion or $40 billion per gigawatt.

    我很好奇,在計算5000億美元的數字時,你對每千兆瓦NVIDIA內容的貢獻做了哪些假設?因為我們聽說過每吉瓦內容價值低至 250 億美元,也聽說過每吉瓦內容價值高達 300 億美元或 400 億美元的數字。

  • I'm curious what power and what dollar per gigawatt assumptions you are making as part of that $500 billion number.

    我很好奇,在計算這 5000 億美元的數字時,你分別採用了哪些功率單位和每吉瓦多少美元的假設。

  • And then, longer term, Jensen, the $3 trillion to $4 trillion in data center by 2030 was mentioned. How much of that do you think will require vendor financing? How much of that can be supported by cash flows of your large customers or governments or enterprises?

    然後,從長遠來看,Jensen,您提到過 2030 年資料中心規模將達 3 兆至 4 兆美元。您認為其中有多少需要供應商融資?又有多少可以由大客戶、政府或企業的現金流來支撐?

  • Jensen Huang - President, Chief Executive Officer, Director

    Jensen Huang - President, Chief Executive Officer, Director

  • In each generation -- from Ampere to Hopper, from Hopper to Blackwell, Blackwell to Rubin -- our part of the data center increases. The Hopper generation was probably something along the lines of [$20-some-odd billion] per gigawatt, [$20 billion] to [$25 billion]. The Blackwell generation, Grace Blackwell particularly, is probably around [$30 billion] -- say, 30 plus or minus. And then, Rubin is probably higher than that.

    每一代——從 Ampere 到 Hopper、從 Hopper 到 Blackwell、再從 Blackwell 到 Rubin——我們在資料中心中的佔比都在提高。Hopper 世代每吉瓦大約是 [200 多億美元],約 [200 億] 到 [250 億美元]。Blackwell 世代,尤其是 Grace Blackwell,大約是 [300 億美元] 上下。而 Rubin 可能還會更高。

  • In each one of these generations, the speed-up is X-factors. Therefore, their TCO, the customer TCO, improves by X-factors.

    在每一世代中,速度提升都是數倍(X factors)。因此,客戶的總擁有成本(TCO)也以數倍幅度改善。

  • The most important thing is, in the end, you still only have 1 gigawatt of power. 1 gigawatt data centers, 1 gigawatt power. Therefore, performance per watt, the efficiency of your architecture is incredibly important. The efficiency of your architecture can't be brute force. There is no brute forcing about it.

    最重要的是,到頭來,你最終也只有1吉瓦的電力。 1吉瓦的資料中心,1吉瓦的電力。因此,每瓦性能,也就是架構的效率,至關重要。架構效率不能靠蠻力提升。這並非蠻力所能及。

  • That 1 gigawatt translates directly. Your performance per watt translates directly -- absolutely directly -- to your revenues, which is the reason why choosing the right architecture matters so much now. The world doesn't have an excess of anything to squander. And so we have to be really, really -- we use this concept called co-design across our entire stack, across the frameworks and models, across the entire data center -- even power and cooling, optimized across the entire supply chain or ecosystem.

    那1吉瓦可以直接轉換。每瓦性能直接——絕對直接——轉化為您的收入,這就是為什麼現在選擇正確的架構如此重要的原因。世界上沒有什麼東西是多餘的,可以浪費。因此,我們必須真正做到——我們在整個技術堆疊、框架和模型、整個資料中心(甚至包括電力和冷卻)中運用了協同設計這一概念,並在整個供應鏈或生態系統中進行最佳化。

  • And so each generation, our economic contribution will be greater. Our value delivered will be greater. But the most important thing is our energy efficiency per watt is going to be extraordinary, every single generation.

    因此,每一代人,我們對經濟的貢獻都會更大。我們創造的價值將會更大。但最重要的是,每一代產品的每瓦能源效率都將非常出色。
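The per-gigawatt content figures in this answer can be expressed as a small illustrative calculation. The Hopper ($20-25B/GW) and Blackwell (~$30B/GW) numbers are as quoted above; the Rubin range below is an assumed placeholder, since management only said it would be higher.

```python
# Illustrative NVIDIA content per gigawatt by generation, in $B per GW.
# Hopper and Blackwell figures are as stated on the call; the Rubin
# range is an assumed placeholder (only "higher than that" was stated).
CONTENT_PER_GW_B = {
    "Hopper": (20, 25),
    "Blackwell": (30, 30),   # "say, 30 plus or minus"
    "Rubin": (30, 40),       # assumption, not a disclosed figure
}

def implied_gigawatts(spend_b, generation):
    """GW of data center implied by a given spend, per generation.
    Higher $/GW content means fewer gigawatts per dollar."""
    lo, hi = CONTENT_PER_GW_B[generation]
    return spend_b / hi, spend_b / lo

min_gw, max_gw = implied_gigawatts(500, "Blackwell")
print(f"$500B of Blackwell-class content implies ~{min_gw:.1f} GW")
```

At roughly $30B of content per gigawatt, a $500B program corresponds to on the order of 16-17 GW of data center capacity, which is why performance per watt, rather than raw unit count, is the variable Jensen keeps returning to.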

  • With respect to growing into -- continuing to grow, our customers' financing is up to them. We see the opportunity to grow for quite some time.

    至於發展壯大——以及持續發展,客戶的融資方式由他們自己決定。我們看到未來很長一段時間都有發展的機會。

  • Remember, today, most of the focus has been on the hyperscalers. One of the areas that is really misunderstood about the hyperscalers is that the investment in NVIDIA GPUs not only improves their scale, speed, and cost versus general purpose computing. That's number one. Because Moore's Law scaling has really slowed.

    請記住,目前大多數關注點都集中在超大規模業者身上。人們對超大規模業者最大的誤解之一,是 NVIDIA GPU 的投資不僅相對於通用運算提升了他們的規模、速度和成本效益。這是第一點。因為摩爾定律的微縮速度確實已經放緩。

  • Moore's Law is about driving cost down. It's about deflationary cost, the incredible deflationary cost, of computing, over time. But that has slowed. Therefore, a new approach is necessary for them to keep driving the cost down.

    摩爾定律的核心在於降低成本。這是關於通貨緊縮成本的問題,也是計算成本隨著時間而產生的驚人通貨緊縮成本。但這種情況已經放緩。因此,他們需要採取新的方法來繼續降低成本。

  • Going to NVIDIA GPU computing is really the best way to do so.

    使用NVIDIA GPU運算才是實現這一目標的最佳途徑。

  • The second is revenue boosting in their current business models. Recommender systems drive the world's hyperscalers. Every single -- whether it's watching short-form videos or recommending books or recommending the next item in your basket to recommending (inaudible) to recommending news to -- it's all about recommenders.

    第二點是提高他們目前商業模式的收入。推薦系​​統驅動全球超大規模資料中心的發展。每一個——無論是觀看短片、推薦書籍、推薦購物車中的下一個商品,還是推薦(聽不清楚)新聞——都與推薦有關。

  • The world has -- the Internet has trillions of pieces of content. How could they possibly figure out what to put in front of you and your little tiny screen, unless they have really sophisticated recommender systems to do so? Well, that has gone to Generative AI.

    世界擁有-網路擁有數萬億條內容,除非他們擁有非常複雜的推薦系統,否則他們怎麼可能知道該把什麼內容呈現在你和你那小小的螢幕上呢?嗯,這已經發展成生成式人工智慧了。

  • So the first two things that I've just said, hundreds of billions of dollars of CapEx that's going to have to be invested, is fully cash flow funded. What is above it, therefore, is Agentic AI. This is revenue -- this is net-new, net-new consumption. But it's also net-new applications. Some of the applications I mentioned before. But these new applications are also the fastest-growing applications in history, okay?

    所以我剛才說的前兩件事,就是需要投資的數千億美元的資本支出,都將完全由現金流提供資金。因此,在其之上的是智能體人工智慧。這是收入——這是新增的淨消費。但它也包含全新的應用程式。我之前提到的一些應用。但這些新應用也是史上成長最快的應用,懂嗎?

  • So I think that you're going to see that once people start to appreciate what is actually happening under the water, if you will, from the simplistic view of what's happening to CapEx investment, recognizing there's these three dynamics.

    所以我認為,一旦人們開始了解水下究竟發生了什麼,而不是僅僅從資本支出投資的簡單角度來看待問題,就會發現這三種動態。

  • And then, lastly, remember, we were just talking about the American CSPs. Each country will fund their own infrastructure. You have multiple countries. You have multiple industries. Most of the world's industries haven't really engaged Agentic AI yet. They're about to.

    最後,別忘了,我們剛才談的只是美國的雲端服務供應商(CSP)。每個國家都會自行為基礎建設籌資。世界上有許多國家、許多產業。大多數產業尚未真正導入代理式人工智慧,但他們即將開始。

  • All the names of companies that you know we're working with, whether it's autonomous vehicle companies or digital twins for Physical AI for factories and the number of factories and warehouses being built around the world, just a number of digital biology start-ups that are being funded so that we could accelerate drug discovery, all of those different industries are now getting engaged. They're going to do their own fundraising. And

    所有你知道正與我們合作的公司——無論是自動駕駛汽車公司、用於工廠實體人工智慧的數位孿生(以及世界各地正在興建的大量工廠和倉庫),還是許多為加速藥物開發而獲得資助的數位生物學新創公司——所有這些不同的產業現在都開始參與。他們會自行籌資。而且——

  • so don't just look at the hyperscalers as a way to build out for the future. You got to look at the world. You got to look at all the different industries. Enterprise computing is going to fund their own industry.

    所以不要只是把超大規模資料中心看作是面向未來的一種發展方式。你得看看這個世界。你需要考察所有不同的行業。企業運算將自給自足,發展壯大自身產業。

  • Operator

    Operator

  • Ben Reitzes, Melius.

    本‧雷茨斯,梅利烏斯。

  • Ben Reitzes - Equity Analyst

    Ben Reitzes - Equity Analyst

  • Jensen, I wanted to ask you about cash. Speaking of $0.5 trillion, you may generate about $0.5 trillion in free cash flow over the next couple of years. What are your plans for that cash? How much goes to buyback versus investing in the ecosystem? How do you look at investing in the ecosystem?

    Jensen,我想問你一些關於現金的問題。說到 0.5 兆美元,你未來幾年可能會產生約 0.5 兆美元的自由現金流。你打算如何使用這筆錢?股票回購和投資生態系建設的資金比例是多少?您如何看待生態系的投資?

  • I think there's just a lot of confusion out there about how these deals work and your criteria for doing those like the Anthropic, the OpenAIs, et cetera.

    我認為很多人對這些交易的運作方式以及你們達成這些交易(例如 Anthropic、OpenAIs 等)的標準存在著許多誤解。

  • Jensen Huang - President, Chief Executive Officer, Director

    Jensen Huang - President, Chief Executive Officer, Director

  • Yeah. I appreciate the question.

    是的。感謝您的提問。

  • Of course, using cash to fund our growth. No company has grown at the scale that we're talking about and have the connection and the depth and the breadth of supply chain that NVIDIA has. The reason why our entire customer base can rely on us is because we've secured a really resilient supply chain. We have the balance sheet to support them.

    當然,我們會用現金來資助我們的發展。沒有一家公司能像英偉達那樣發展到我們所說的規模,並擁有如此強大的供應鏈網絡、深度和廣度。我們所有客戶之所以能夠信賴我們,是因為我們建立了一條真正具有韌性的供應鏈。我們有足夠的財務實力來支撐這些投資。

  • When we make purchases, our suppliers can take it to the bank. When we make forecast and we plan with them, they take us seriously because of our balance sheet.

    當我們下採購訂單時,供應商可以完全放心(take it to the bank)。當我們和他們一起做預測與規劃時,他們會認真對待,因為我們的資產負債表實力雄厚。

  • We're not making up the offtake. We know what our offtake is.

    我們並非憑空捏造承購量(offtake)。我們清楚自己的承購需求是多少。

  • Because they've been planning with us for so many years, our reputation and our credibility is incredible. And so it takes really strong balance sheet to do that: to support the level of growth and the rate of growth and the magnitude associated with that.

    因為他們與我們合作規劃多年,我們的聲譽和信譽都非常出色。因此,要做到這一點,就需要非常強大的資產負債表:以支撐成長水準、成長速度以及與之相關的成長規模。

  • That's number one.

    這是第一點。

  • The second thing, of course, is we're going to continue to do stock buybacks. We're going to continue to do that.

    第二點當然是,我們將繼續進行股票回購。我們將繼續這樣做。

  • But with respect to the investments, this is really, really important work that we do. All of the investments that we've done so far -- all the period -- is associated with expanding the reach of CUDA, expanding the ecosystem.

    但就投資而言,這確實是我們所做的非常重要的工作。我們迄今為止所做的所有投資——整個時期——都與擴大 CUDA 的影響範圍、擴展生態系統有關。

  • If you look at the work that -- the investments that we did with OpenAI -- of course, that relationship we've had since 2016. I delivered the first AI supercomputer ever made to OpenAI. And so we've had a close and wonderful relationship with OpenAI since then. Everything that OpenAI does runs on NVIDIA today. So all the clouds that they deploy in, whether it's training or inference, run NVIDIA. We love working with them.

    如果你看看我們與 Open AI 所做的工作——投資,當然,這指的是我們自 2016 年以來建立的合作關係。我向 OpenAI 交付了世界上第一台人工智慧超級電腦。因此,從那時起,我們與 OpenAI 就建立了密切而美好的關係。OpenAI 目前的所有程式都運行在 NVIDIA 平台上。因此,他們部署的所有雲端平台,無論是用於訓練還是推理,都運行 NVIDIA 的技術。我們很喜歡和他們合作。

  • The partnership that we have with them is one -- so that we could work even deeper from a technical perspective so that we could support their accelerated growth. This is a company that's growing incredibly fast.

    我們與他們的合作關係是這樣的——這樣我們就可以從技術角度進行更深入的合作,從而支持他們的加速發展。這是一家發展速度驚人的公司。

  • Don't just look at what is said in the press, look at all the ecosystem partners and all the developers that are connected to OpenAI. They're all driving consumption of it. The quality of the AI that's being produced, huge step-up since a year ago. And so the quality of response is extraordinary.

    不要只看媒體報導,也要看看所有與 OpenAI 相連的生態系合作夥伴和開發者。他們都在推動其使用量。所產出的 AI 品質相比一年前有了巨大提升。因此,回應的品質非常出色。

  • So we invest in OpenAI for a deep partnership in co-development to expand our ecosystem and support their growth. Of course, rather than giving up a share of our company, we get a share of their company.

    因此,我們投資 OpenAI,旨在建立深度合作夥伴關係,共同開發,以擴展我們的生態系統並支持他們的發展。當然,我們不是放棄我們公司的股份,而是要拿到他們公司的股份。

  • We invested in them in one of the most consequential once-in-a-generation company -- (inaudible) that we have a share. And so I fully expect that investment to translate to extraordinary returns.

    我們投資了他們,投資了一家百年一遇、影響深遠的公司——(聽不清楚)我們持有該公司股份。因此,我完全相信這項投資會帶來非凡的回報。

  • Now, in the case of Anthropic, this is the first time that Anthropic will be on NVIDIA's architecture. And Anthropic is the second most successful AI in the world, in terms of total number of users.

    現在,就 Anthropic 而言,這是 Anthropic 首次採用 NVIDIA 的架構。而就用戶總數而言,Anthropic 是世界上第二成功的 AI。

  • But in enterprise, they're doing incredibly well. Claude Code is doing incredibly well. Claude is doing incredibly well -- all of the world's enterprises. Now, we have the opportunity to have a deep partnership with them in bringing Claude onto the NVIDIA platform.

    但在企業市場,他們的表現非常出色。Claude Code 的表現非常出色。Claude 的表現非常出色——遍及全球企業。現在,我們有機會與他們深度合作,將 Claude 帶到 NVIDIA 平台上。

  • And so what do we have now? NVIDIA's architecture -- taking a step back, NVIDIA's architecture, NVIDIA's platform, is the singular platform in the world that runs every AI model. We run OpenAI. We run Anthropic. We run xAI. Because of our deep partnership with Elon and xAI, we were able to bring that opportunity to Saudi Arabia -- to the KSA -- so that HUMAIN could also be hosting xAI.

    那我們現在的情況如何呢?NVIDIA 的架構——退一步講,NVIDIA 的架構、NVIDIA 的平台,是世界上唯一能運行所有 AI 模型的平台。我們運行 OpenAI。我們運行 Anthropic。我們運行 xAI。由於我們與 Elon 和 xAI 的深度合作,我們得以把這個機會帶到沙烏地阿拉伯(KSA),讓 HUMAIN 也能代管 xAI。

  • We run xAI. We run Gemini. We run Thinking Machines. Let's see, what else do we run? We've run them all.

    我們運行 xAI。我們運行 Gemini。我們運行 Thinking Machines。想想看,還有什麼呢?我們全都運行過。

  • And so not to mention, we run the science models, the biology models, DNA models, gene models, chemical models in all the different fields around the world. It's not just Cognitive AI that the world uses. AI is impacting every single industry.

    更不用說,我們在世界各地各個領域運行科學模型、生物學模型、DNA模型、基因模型、化學模型了。世界使用的不僅是認知人工智慧。人工智慧正在影響每一個產業。

  • And so we have the ability, through the ecosystem investments that we make, to partner -- deeply partner, on a technical basis -- with some of the best companies, most brilliant companies in the world.

    因此,透過我們所做的生態系投資,我們能夠與世界上一些最優秀、最傑出的公司進行深入的技術合作。

  • We are expanding the reach of our ecosystem. We're getting a share and investment in what will be a very successful company; oftentimes, once-in-a-generation company.

    我們正在擴大生態系統的影響範圍。我們將獲得一家未來會非常成功的公司的股份和投資;通常來說,這樣的公司是百年一遇的。

  • And so that basic -- that's our investment thesis.

    所以,這就是我們基本的投資理念。

  • Operator

    Operator

  • Jim Schneider, Goldman Sachs.

    吉姆‧施耐德,高盛集團。

  • James Schneider, Ph.D. - Analyst

    James Schneider, Ph.D. - Analyst

  • In the past, you've talked about roughly 40% of your shipments tied to AI inference. I'm wondering, as you look forward into next year, where do you expect that percentage could go in, say, a year's time? Can you maybe address the Rubin CPX product you expect to introduce next year? Or contextualize that: How big of the overall TAM you expect that can take? Maybe talk about some of the target customer applications for that specific product.

    過去,您曾提到大約 40% 的出貨量與人工智慧推理有關。我想知道,展望明年,您預計一年後這個百分比會達到什麼水平?您能否談談您預計明年推出的 Rubin CPX 產品?或者換個角度來看:你預期這會佔據多大的潛在市場規模?或許可以談談該產品的一些目標客戶應用場景。

  • Jensen Huang - President, Chief Executive Officer, Director

    Jensen Huang - President, Chief Executive Officer, Director

  • CPX is designed for long context-type workloads. And so long context, basically: before you start generating answers, you have to read a lot. Basically, long context.

    CPX 專為長上下文類型的工作負載而設計。所謂長上下文,基本上就是:在開始產生答案之前,必須先讀取大量內容。基本上,就是長上下文。

  • It could be a bunch of PDFs. It could be watching a bunch of videos, studying 3D images, so on and so forth. You have to absorb the context.

    可能是一堆PDF文件。可能是觀看大量影片、研究 3D 影像等等。你必須理解上下文。

  • And so CPX is designed for long context-type workloads. It's perf-per-dollar excellent. It's perf-per-(inaudible)-watt excellent, which -- maybe I forgot the first part of the question --

    因此,CPX 是為長上下文類型的工作負載而設計的。它的每美元效能極佳。它的每瓦(聽不清楚)效能也極佳——我可能忘了問題的第一部分——

  • Colette Kress - Executive Vice President, Chief Financial Officer

    Colette Kress - Executive Vice President, Chief Financial Officer

  • (inaudible) --

    (聽不清楚)——

  • Jensen Huang - President, Chief Executive Officer, Director

    Jensen Huang - President, Chief Executive Officer, Director

  • Oh, yeah. There are three scaling laws that are scaling at the same time.

    哦,是的。有三種同時適用的縮放定律。

  • The first scaling law called pre-training continues to be very effective.

    第一個被稱為預訓練的縮放定律仍然非常有效。

  • The second is post-training. Post-training basically has found incredible algorithms for improving an AI's ability to break a problem down and solve a problem step by step. Post-training is scaling exponentially. Basically, the more compute you apply to a model, the smarter it is, the more intelligent it is.

    第二是後訓練(post-training)。後訓練基本上找到了令人驚嘆的演算法,能提升 AI 分解問題並逐步解決問題的能力。後訓練正呈指數級擴展。基本上,投入模型的運算量越大,它就越聰明、越智慧。

  • And then, the third is inference. Inference. Because of chain of thought, because of reasoning capabilities, AIs are essentially reading, thinking before it answers. The amount of computation necessary as a result of those three things has gone completely exponential.

    第三點是推理。推理。由於具備思考鍊和推理能力,人工智慧本質上是在閱讀和思考之後才給出答案。由於這三件事,所需的計算量呈指數級增長。

  • I think that it's hard to know exactly what the percentage will be at any given point in time and who. But, of course, our hope is that inference is a very large part of the market because if inference is large, then what it suggests is that people are using it in more applications and they're using it more frequently.

    我認為很難確切地知道在任何特定時間點的具體百分比是多少,以及具體是哪些人。當然,我們希望推理技術能夠佔據很大的市場份額,因為如果推理技術的市場份額很大,那就表明人們在更多的應用中使用它,並且使用它的頻率也更高。

  • We should all hope for inference to be very large. This is where Grace Blackwell is just an order of magnitude better, more advanced than anything in the world.

    我們都應該希望推理市場非常龐大。在這方面,Grace Blackwell 比世界上任何其他產品都先進整整一個數量級。

  • The second best platform is H200. It's very clear now that GB300 -- GB200 and GB300 -- because of NVLink72, the scale-up network that we have achieved -- and you saw, and Colette talked about, the SemiAnalysis benchmark. It's the largest single inference benchmark ever done. GB200 NVLink72 is 10 times -- 10 times to 15 times higher performance. And so that's a big step-up. It's going to take a long time before somebody is able to take that on. Our leadership there is surely multiyear.

    第二好的平台是 H200。現在很清楚,GB300——GB200 和 GB300——憑藉我們實現的垂直擴展網路 NVLink72——正如各位所見、Colette 也談到的 SemiAnalysis 基準測試。那是迄今規模最大的單一推理基準測試。GB200 NVLink72 的效能高出 10 倍——10 到 15 倍。這是很大的躍進。別人要趕上這個水準還需要很長的時間。我們在這方面的領先優勢肯定能維持多年。

  • And so I think -- I'm hoping that inference becomes a very big deal. Our leadership in inference is extraordinary.

    所以我認為──我希望推理能夠變得非常重要。我們在推理領域的領先地位非同凡響。

  • Operator

    Operator

  • Timothy Arcuri, UBS.

    提摩西‧阿庫裡,瑞銀集團。

  • Timothy Arcuri - Analyst

    Timothy Arcuri - Analyst

  • Jensen, many of your customers are pursuing behind-the-meter power. But, like, what's the single biggest bottleneck that worries you, that could constrain your growth? Is it power? Or maybe it's financing? Or maybe it's something else, like memory or even foundry?

    Jensen,您的許多客戶都在尋求表後電力(behind-the-meter)。但是,最讓您擔心、可能限制成長的單一最大瓶頸是什麼?是電力嗎?還是融資?又或者是其他因素,例如記憶體,甚至是晶圓代工?

  • Jensen Huang - President, Chief Executive Officer, Director

    Jensen Huang - President, Chief Executive Officer, Director

  • Well, these are all issues. They're all constraints. The reason for that, when you're growing at the rate that we are and the scale that we are, how could anything be easy? What NVIDIA is doing obviously has never been done before. We've created a whole new industry.

    這些都是問題,都是限制因素。原因在於,以我們這樣的成長速度和規模,怎麼可能有容易的事?NVIDIA 正在做的事顯然是前所未有的。我們開創了一個全新的產業。

  • Now, on the one hand, we are transitioning computing from general purpose and classical or traditional computing to accelerated computing and AI. That's on the one hand.

    一方面,我們正在將計算從通用計算和經典或傳統計算過渡到加速計算和人工智慧。這是一方面。

  • On the other hand, we created a whole new industry called AI factories. The idea is that in order for software to run, you need these factories to generate every single token, instead of retrieving information that was pre-created. And so I think this whole transition requires extraordinary scale, all the way from the supply chain.

    另一方面,我們創造了一個全新的產業,叫做 AI 工廠。這個概念是:軟體要運行,需要這些工廠產生每一個 token,而不是檢索預先建立好的資訊。所以我認為整個轉型需要非凡的規模,從供應鏈開始貫穿始終。

  • Of course, the supply chain, we have much better visibility and control of it because, obviously, we're incredibly good at managing our supply chain. We have great partners that we've worked with for 33 years. And so the supply chain part of it, we're quite confident.

    當然,在供應鏈方面,我們擁有更好的可視性和控制力,因為很顯然,我們非常擅長管理我們的供應鏈。我們擁有優秀的合作夥伴,我們已經與他們合作了33年。所以,對於供應鏈部分,我們相當有信心。

  • Now, looking down our supply chain, we've now established partnerships with so many players in land and power and shell and, of course, financing. These things -- none of these things are easy, but they're all tractable and they're all solvable things.

    現在,往供應鏈下游看,我們已經與土地、電力、廠房(shell)以及融資領域的眾多參與者建立了合作關係。這些事情——沒有一件是容易的,但它們都是可以處理、可以解決的問題。

  • The most important thing that we have to do is do a good job planning. We plan up the supply chain, down the supply chain. We have established a whole lot of partners. And so we have a lot of routes to market.

    我們最重要的事情就是做好計劃。我們從供應鏈的上游到下游進行規劃。我們已經建立了許多合作夥伴關係。因此,我們有很多進入市場的管道。

  • Very importantly, our architecture has to deliver the best value to the customers that we have.

    非常重要的一點是,我們的架構必須為我們現有的客戶提供最大的價值。

  • And so at this point, I'm very confident that NVIDIA's architecture is the best performance for TCO. It is the best performance per (inaudible) watt. Therefore, for any amount of energy that is delivered, our architecture will drive the most revenues.

    因此,到目前為止,我非常有信心 NVIDIA 的架構在總擁有成本(TCO)上效能最佳,每(聽不清楚)瓦效能也最佳。因此,無論輸送多少能源,我們的架構都能創造最多的營收。

  • I think the increasing rate of our success -- I think that we're more successful this year at this point than we were last year at this point. The number of customers coming to us and the number of platforms coming to us after they've explored others, is increasing, not decreasing.

    我認為我們成功的速度越來越快——我認為今年我們比去年同期更加成功。前來光顧的客戶數量以及他們在探索其他平台後選擇我們的平台數量都在增加,而不是減少。

  • And so I think the -- I think all of that is just -- all the things that I've been telling you over the years are really coming -- are coming through or becoming evident.

    所以我覺得──我覺得這一切──這些年來我一直告訴你們的所有事情,現在真的要實現了,或者說變得顯而易見了。

  • Operator

    Operator

  • Stacy Rasgon, Bernstein Research.

    Stacy Rasgon,伯恩斯坦研究公司。

  • Stacy Rasgon - Analyst

    Stacy Rasgon - Analyst

  • Colette, I have some questions on margins. You said, for next year, you're working to hold them in the mid-[70s]. First of all, what are the biggest cost increases? Is it just memory, or is it something else? What are you doing to work toward that? How much is, like, cost optimizations versus pre-buys versus pricing?

    Colette,我有一些關於毛利率的問題。您說過,明年你們的目標是將毛利率維持在 70% 中段。首先,成本增加最大的是什麼?只是記憶體,還是另有其他?你們正採取哪些措施來達成目標?成本優化、提前採購與定價各佔多少比重?

  • And then, also, how should we think about OpEx growth next year, given the revenues seem likely to grow materially from where we're running, right now?

    此外,鑑於收入很可能比我們目前的水平大幅增長,我們該如何看待明年的營運支出成長?

  • Colette Kress - Executive Vice President, Chief Financial Officer

    Colette Kress - Executive Vice President, Chief Financial Officer

  • Thanks, Stacy. Let me see if I can start with remembering where we were with the current fiscal year that we're in.

    謝謝你,史黛西。讓我先回憶一下,我們現在所處的財政年度進展如何。

  • Remember, earlier this year, we indicated that through cost improvements and mix, we would exit the year with our gross margins in the mid-70s. We've achieved that and are getting ready to also execute that in Q4. Now, it's time for us to communicate where we are working, right now, in terms of next year.

    請記住,今年稍早我們曾表示,透過成本改善和產品組合優化,我們將以 70% 中段的毛利率結束本年度。我們已經達成了這一目標,並準備在第四季繼續執行。現在,是時候說明我們目前針對明年的工作方向了。

  • Next year, there are input prices that are well known in the industries that we need to work through. Our systems are by no means very easy to work with. There are tremendous amount of components in many different parts of it, as we think about that.

    明年,我們需要應對一些業界眾所周知的投入價格問題。我們的系統絕非易於處理。仔細想想,系統的許多不同部分都包含了大量的零組件。

  • So we're taking all of that into account. But we do believe, as we look at working again on cost improvement, cycle time, and mix, that we will work to try and hold our gross margins in the mid-70s. That's our overall plan for gross margin.

    所以我們會把所有這些因素都考慮進去。但我們相信,透過再次改善成本、縮短週期時間和優化產品組合,我們將努力把毛利率維持在 70% 中段。這就是我們關於毛利率的整體計劃。

  • Your second question is around OpEx. Right now, our goal, in terms of OpEx, is to really make sure that we are innovating with our engineering teams, with all of our business teams, to create more and more systems for this market.

    你的第二個問題與營運支出有關。目前,就營運支出而言,我們的目標是確保我們的工程團隊和所有業務團隊都在不斷創新,為這個市場創造越來越多的系統。

  • As you know, right now, we have a new architecture coming out. That means our teams are quite busy in order to meet that goal. And so we're going to continue to see our investments in innovating more and more -- the software, our systems, and the hard work to do so.

    如您所知,我們目前正在推出新的架構。這意味著他們為了實現這個目標非常忙碌。因此,我們將繼續增加對創新方面的投入,包括軟體、系統以及我們為此付出的辛勤努力。

  • I'll turn it over to Jensen if he wants to add a couple more comments.

    接下來交給詹森,看他是否還想補充幾句。

  • Jensen Huang - President, Chief Executive Officer, Director

    Jensen Huang - President, Chief Executive Officer, Director

  • I think that's spot on. I think the only thing that I would add is, remember that we forecast, we plan, and we negotiate with our supply chain well in advance.

    我覺得說得完全正確。我唯一要補充的是,請記住,我們會提前做好規劃、預測和與供應鏈進行談判。

  • Our supply chain has known for quite a long time our requirements. They've known for quite a long time our demand. We've been working with them and negotiating with them for quite a long time.

    我們的供應鏈夥伴很早就了解我們的規格要求,也很早就知道我們的需求量。我們已經和他們合作、談判很久了。

  • And so I think the recent surge -- obviously, quite significant.

    所以我認為最近的這波激增——顯然意義重大。

  • But, remember, our supply chain has been working with us for a very long time. So in many cases, we've secured a lot of supply for ourselves because, obviously, they're working with the largest company in the world in doing so.

    但是,請記住,我們的供應鏈已經與我們合作了很長時間。因此,在許多情況下,我們已經為自己確保了大量的供應,因為很顯然,他們這樣做是與世界上最大的公司合作。

  • We've also been working closely with them on the financial aspects of it and securing forecasts and plans and so on and so forth. So I think all of that has worked out well for us.

    我們也與他們密切合作處理財務方面的事宜,並確定預測和計劃等等。所以我認為這一切對我們來說都進展得很順利。

  • Operator

    Operator

  • Aaron Rakers, Wells Fargo.

    Aaron Rakers,富國銀行。

  • Aaron Rakers - Analyst

    Aaron Rakers - Analyst

  • Jensen, the question is for you. As you think about the Anthropic deal that was announced and just the overall breadth of your customers, I'm curious if your thoughts around the role that AI ASICs or dedicated XPUs play in these architecture build-outs has changed at all? Have you seen -- I think you've been fairly adamant in the past that some of these programs never really see deployments. But I'm curious if we're at a point where maybe that's even changed more in favor of just GPU architecture.

    Jensen,這個問題要問你。考慮到先前宣布的與 Anthropic 的交易,以及你們客戶群的整體廣度,我很好奇你對 AI ASIC 或專用 XPU 在這些架構建設中所扮演角色的看法是否有所改變?你有沒有注意到——我記得你過去一直非常堅定地認為,其中一些專案從未真正部署過。但我很好奇,我們是否已經到了這樣一個階段:局勢可能進一步轉向有利於 GPU 架構。

  • Jensen Huang - President, Chief Executive Officer, Director

    Jensen Huang - President, Chief Executive Officer, Director

  • Yeah. Thank you very much. I really appreciate the question.

    是的。非常感謝。非常感謝你的提問。

  • First of all, you're not competing against teams -- excuse me, against a company, you're competing against teams. There just aren't that many teams in the world who are built -- who are extraordinary at building these incredibly complicated things.

    首先,你不是在跟團隊競爭──抱歉,是和公司競爭,你是在跟團隊競爭。世界上真正擅長建造這些極其複雜事物的團隊並不多。

  • Back in the Hopper days and the Ampere days, we would build a GPU. That's the definition of an accelerated AI system. But, today, we've got to build entire racks and three different types of switches: a scale-up, a scale-out, and a scale-across switch. It takes a lot more than one chip to build a compute node anymore.

    回到 Hopper 和 Ampere 時代,我們只需打造一顆 GPU,那就是加速 AI 系統的定義。但如今,我們必須打造整個機架,以及三種不同類型的交換器:向上擴展(scale-up)、向外擴展(scale-out)和跨域擴展(scale-across)交換器。如今建構一個運算節點需要的遠不止一顆晶片。

  • Everything about that computing system -- because AI needs to have memory. AI didn't use to have memory at all. Now, it has to remember things. The amount of memory and context it has is gigantic.

    這個計算系統的一切——因為人工智慧需要記憶。人工智慧以前根本沒有記憶功能。現在,它需要記住一些東西。它所儲存的記憶體和上下文資訊量非常龐大。

  • The memory architecture implication is incredible. The diversity of models, from Mixture-of-Experts to dense models to diffusion models to auto-regressive models, not to mention biological models that are based on the laws of physics, is extraordinary. The list of different types of models has exploded in the last several years.

    這對記憶體架構的影響是巨大的。模型的多樣性十分驚人,從專家混合模型到密集模型、擴散模型,再到自迴歸模型,更不用說基於物理定律的生物模型了。近幾年來,不同類型模型的種類爆炸性成長。

  • And so the challenge is that the complexity of the problem is much higher. The diversity of AI models is incredibly, incredibly large. And so this is where, if I will say, there are five things that make us special, if you will.

    因此,挑戰在於問題的複雜性要高得多。人工智慧模型的種類極為繁多。所以,如果要我說的話,這就是我們與眾不同的五件事。

  • The first thing I would say that makes us special is that we accelerate every phase of that transition. That's the first phase: CUDA allows us to have CUDA-X for transitioning from general purpose to accelerated computing. We are incredibly good at Generative AI. We're incredibly good at Agentic AI. So every single phase of that transition -- every single layer of it -- we are excellent at.

    首先,我認為我們與眾不同的地方在於,我們加速了這個轉型過程的每個階段。這是第一個階段:CUDA 使我們能夠擁有 CUDA-X,實現從通用運算到加速運算的過渡。我們在生成式人工智慧方面非常出色,在智能體人工智慧方面也非常出色。因此,這個轉型的每一個階段、每一個層面,我們都做得非常好。

  • You can invest in one architecture, use it across the board. You can use one architecture and not worry about the changes in the workload across those three phases. That's number one.

    你可以投資一種架構,然後全面使用它。你可以使用一種架構,而不用擔心這三個階段工作負載的變化。這是第一點。

  • Number two, we're excellent at every phase of AI. Everybody's always known that we're incredibly good at pre-training. We're obviously very good at post-training. We're incredibly good, as it turns out, at inference because inference is really, really hard.

    第二,我們在人工智慧的各個階段都非常出色。大家都知道我們非常擅長預訓練(pre-training),我們顯然也非常擅長後訓練(post-training)。事實證明,我們還非常擅長推理(inference),因為推理真的非常非常困難。

  • How could thinking be easy? People think that inference is one shot and, therefore, it's easy; anybody could approach the market that way. But it turns out to be the hardest of all because thinking, as it turns out, is quite hard. We're great at every phase of AI, the second thing.

    思考怎麼可能很容易?人們認為推理只需一次完成,因此很簡單,任何人都可以用這種方式進入市場。但事實證明,推理是所有階段中最難的,因為思考本身就很難。這是第二點:我們在人工智慧的各個階段都表現出色。

  • The third thing is we're now the only architecture in the world that runs every AI model, every frontier AI model. We run open-source AI models incredibly well. We run science models, biology models, robotics models. We run every single model. We're the only architecture in the world that can claim that. It doesn't matter whether you're auto-regressive or diffusion-based. We run everything. We run it for every major platform, as I just mentioned. So we run every model.

    第三點是,我們現在是世界上唯一能夠運行所有 AI 模型、所有前沿 AI 模型的架構。我們運行開源 AI 模型的效果非常好。我們也運行科學模型、生物學模型、機器人模型。我們運行每一種模型,我們是世界上唯一能這樣宣稱的架構。無論是自迴歸模型還是擴散模型都無關緊要,我們全部都能運行。正如我剛才提到的,我們在每個主流平台上都運行它。所以我們運行每一種模型。

  • And then, the fourth thing I would say is that we're in every cloud. The reason why developers love us is because we're literally everywhere. We're in every cloud. We're in every -- we can even make you a little tiny cloud called DGX Spark. And so we're in every computer. We're everywhere, from cloud to on-prem to robotic systems, edge devices, PCs, you name it. One architecture, things just work. It's incredible.

    第四點是,我們存在於每一朵雲端。開發者喜歡我們的原因是,我們真的無所不在。我們存在於每一朵雲端,我們甚至可以為你打造一個名為 DGX Spark 的微型雲端。因此,我們存在於每一台電腦中。從雲端到本地部署,再到機器人系統、邊緣裝置、個人電腦,我們無所不在。一種架構,一切就能運作。這真是不可思議。

  • And then, the last thing -- and this is probably the most important thing -- the fifth thing is, if you are a cloud service provider, if you're a new company like HUMAIN; if you're a new company like CoreWeave -- (inaudible) -- OCI for that matter, the reason why NVIDIA is the best platform for you is because our offtake is so diverse. We can help you with offtake.

    最後一點——這可能是最重要的一點——第五點是,如果你是雲端服務供應商,如果你是像 HUMAIN 這樣的新公司,或像 CoreWeave——(聽不清楚)——乃至 OCI 這樣的公司,NVIDIA 之所以是你的最佳平台,是因為我們的承購(offtake)來源非常多元。我們可以協助你確保承購需求。

  • It's not about just putting a random ASIC into a data center. Where is the offtake coming from? Where is the diversity coming from? Where is the resilience coming from? Where is the versatility of the architecture coming from? Where is the diversity of capability coming from?

    這不只是把一顆隨機的 ASIC 放進資料中心那麼簡單。承購需求從何而來?多樣性從何而來?韌性從何而來?架構的多功能性從何而來?能力的多樣性從何而來?

  • NVIDIA has such incredibly good offtake because our ecosystem is so large.

    NVIDIA 擁有如此優異的承購需求,是因為我們的生態系統非常龐大。

  • So these five things: we accelerate every phase of the transition; we excel at every phase of AI; we run every model; we're everywhere, from every cloud to on-prem; and, of course, finally, it all leads to offtake.

    所以就是這五件事:我們加速轉型的每個階段;我們精通人工智慧的每個階段;我們運行每一種模型;我們無所不在,從每一朵雲端到本地部署;當然,最終,這一切都會帶來承購需求。

  • Operator

    Operator

  • Thank you. I will now turn the call to Toshiya Hari for closing remarks.

    謝謝。現在我將把發言權交給 Toshiya Hari,請他作總結發言。

  • Toshiya Hari - Vice President - Investor Relations & Strategic Finance

    Toshiya Hari - Vice President - Investor Relations & Strategic Finance

  • In closing, please note, we will be at the UBS Global Technology and AI Conference on December 2. Our earnings call to discuss the results of our fourth quarter of fiscal 2026 is scheduled for February 25.

    最後,請注意,我們將於 12 月 2 日參加瑞銀全球科技與人工智慧大會。我們將於2月25日召開財報電話會議,討論2026財年第四季的業績。

  • Thank you for joining us today. Operator, please go ahead and close the call.

    感謝大家今天的參與。接線員,請結束通話。

  • Operator

    Operator

  • Thank you. This concludes today's conference call. You may now disconnect.

    謝謝。今天的電話會議到此結束。您現在可以斷開連線了。