輝達 (NVDA) 2025 Q2 法說會逐字稿

內容摘要

受 NVIDIA Hopper GPU 運算和網路平台等資料中心產品強勁需求的推動,NVIDIA 第二季度營收達到創紀錄的 300 億美元。該公司強調了人工智慧工作負載、企業人工智慧、汽車、醫療保健和專業視覺化領域的成長。

他們提供了第三季的展望,預計收入為 325 億美元,並宣布授權 500 億美元的股票回購。 NVIDIA 討論了向加速運算和生成式 AI 的過渡,強調了提高 AI 應用效能和效率的重要性。

他們還提到了對其 Hopper 和 Blackwell GPU 的高需求,重點是實現資料中心現代化並擴展到企業人工智慧解決方案。

完整原文

使用警語:中文譯文來源為 Google 翻譯,僅供參考,實際內容請以英文原文為主

  • Operator

    Operator

  • Good afternoon. My name is Abby, and I will be your conference operator today. At this time, I would like to welcome everyone to NVIDIA's second-quarter earnings call. (Operator Instructions) Thank you.

    午安。我叫 Abby,今天我將擔任各位的會議接線員。此時此刻,我謹歡迎大家參加 NVIDIA 第二季財報電話會議。(接線員說明)謝謝。

  • Mr. Stewart Stecker, you may begin.

    史都華‧史特克先生,您可以開始了。

  • Stewart Stecker - Senior Director, Investor Relations

    Stewart Stecker - Senior Director, Investor Relations

  • Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the second quarter of fiscal 2025. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer.

    謝謝。大家下午好,歡迎參加 NVIDIA 2025 財年第二季的電話會議。今天與我一起出席的有 NVIDIA 總裁兼執行長黃仁勳 (Jensen Huang),以及執行副總裁兼財務長 Colette Kress。

  • I would like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. Our webcast will be available for replay until the conference call to discuss our financial results for the third quarter of fiscal 2025. The content of today's call is NVIDIA's property and cannot be reproduced or transcribed without prior written consent.

    我想提醒您,我們的電話會議正在 NVIDIA 投資者關係網站上進行網路直播。在我們召開討論 2025 財年第三季財務業績的電話會議之前,均可重播本次網路直播。今天電話會議的內容屬於 NVIDIA 的財產,未經事先書面同意,不得複製或轉錄。

  • During this call, we may make forward-looking statements based on current expectations. These are subject to a number of risks, significant risks, and uncertainties; and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission.

    在這次電話會議中,我們可能會根據目前的預期做出前瞻性陳述。這些陳述受到許多重大風險和不確定性的影響,我們的實際結果可能存在重大差異。有關可能影響我們未來財務表現和業務的因素的討論,請參閱今天財報中的揭露內容、我們最新的 10-K 和 10-Q 表格,以及我們可能向美國證券交易委員會提交的 8-K 表格報告。

  • All our statements are made as of today, August 28, 2024, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements.

    我們的所有聲明均基於我們目前掌握的信息,於今天(2024 年 8 月 28 日)作出。除法律要求外,我們不承擔更新任何此類聲明的義務。

  • During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website.

    在本次電話會議中,我們將討論非公認會計準則財務指標。您可以在我們網站上發布的財務長評論中找到這些非 GAAP 財務指標與 GAAP 財務指標的調整表。

  • Let me highlight an upcoming event for the financial community. We will be attending the Goldman Sachs Communacopia and Technology Conference on September 11 in San Francisco, where Jensen will participate in a keynote fireside chat. Our earnings call to discuss the results of our third quarter of fiscal 2025 is scheduled for Wednesday, November 20, 2024.

    讓我重點介紹一下金融界即將舉辦的活動。我們將於 9 月 11 日在舊金山參加高盛 Communacopia 和技術會議,Jensen 將參加一場主題爐邊談話。我們定於 2024 年 11 月 20 日星期三召開收益電話會議,討論 2025 財年第三季的業績。

  • With that, let me turn the call over to Colette.

    現在,讓我把電話轉給科萊特。

  • Colette Kress - Executive Vice President, Chief Financial Officer

    Colette Kress - Executive Vice President, Chief Financial Officer

  • Thanks, Stewart. Q2 was another record quarter. Revenue of $30 billion was up 15% sequentially and up 122% year on year and well above our outlook of $28 billion.

    謝謝,斯圖爾特。第二季又是創紀錄的季度。營收為 300 億美元,季增 15%,年增 122%,遠高於我們 280 億美元的預期。

  • Starting with data center. Data center revenue of $26.3 billion was a record, up 16% sequentially and up 154% year on year, driven by strong demand for NVIDIA Hopper GPU computing and our networking platforms.

    從資料中心開始。在對 NVIDIA Hopper GPU 運算和我們的網路平台的強勁需求推動下,資料中心營收達到 263 億美元,創歷史新高,季增 16%,年增 154%。

  • Compute revenue grew more than 2.5x. Networking revenue grew more than 2x from last year. Cloud service providers represented roughly 45% of our data center revenue, and more than 50% stemmed from consumer internet and enterprise companies.

    運算收入成長超過 2.5 倍。網路收入年增超過 2 倍。雲端服務供應商約占我們資料中心營收的 45%,超過 50% 來自消費性網路和企業公司。

  • Customers continue to accelerate their Hopper architecture purchases while gearing up to adopt Blackwell. Key workloads driving our data center growth include generative AI model training and inferencing; video, image, and text data, pre- and post-processing with CUDA and AI workloads; synthetic data generation; AI-powered recommender systems; SQL; and vector database processing as well.

    客戶繼續加速其 Hopper 架構的購買,同時準備採用 Blackwell。推動我們資料中心成長的關鍵工作負載包括生成式人工智慧模型訓練和推理;視訊、影像和文字數據,使用 CUDA 和 AI 工作負載進行預處理和後處理;合成數據生成;人工智慧驅動的推薦系統; SQL;以及向量資料庫處理。

  • Next-generation models will require 10 to 20 times more compute to train with significantly more data. The trend is expected to continue.

    下一代模型將需要多達 10 到 20 倍的運算量,並以多得多的資料進行訓練。預計這一趨勢將持續下去。

  • Over the trailing four quarters, we estimate that inference drove more than 40% of our data center revenue. CSPs, consumer internet companies, and enterprises benefit from the incredible throughput and efficiency of NVIDIA's inference platform.

    在過去四個季度中,我們估計推理占我們資料中心營收的 40% 以上。雲端服務供應商、消費性網路公司和企業都受惠於 NVIDIA 推理平台驚人的吞吐量和效率。

  • Demand for NVIDIA is coming from frontier model makers, consumer internet services, and tens of thousands of companies and startups building generative AI applications for consumers, advertising, education, enterprise and healthcare, and robotics. Developers desire NVIDIA's rich ecosystem and availability in every cloud. CSPs appreciate the broad adoption of NVIDIA and are growing their NVIDIA capacity given the high demand.

    對 NVIDIA 的需求來自前沿模型開發商、消費性網路服務,以及數以萬計為消費者、廣告、教育、企業與醫療保健以及機器人領域構建生成式 AI 應用程式的公司和新創公司。開發人員看重 NVIDIA 豐富的生態系統及其在每個雲端上的可用性。雲端服務供應商讚賞 NVIDIA 的廣泛採用,並因需求高漲而正在擴大其 NVIDIA 容量。

  • NVIDIA H200 platform began ramping in Q2, shipping to large CSPs, consumer internet, and enterprise companies. The NVIDIA H200 builds upon the strength of our Hopper architecture, offering over 40% more memory bandwidth compared to the H100.

    NVIDIA H200 平台在第二季開始量產,向大型 CSP、消費網路和企業公司供貨。 NVIDIA H200 是基於我們的 Hopper 架構的優勢而構建,與 H100 相比,記憶體頻寬增加了 40% 以上。

  • Our data center revenue in China grew sequentially in Q2 and is a significant contributor to our data center revenue. As a percentage of total data center revenue, it remains below levels seen prior to the imposition of export controls. We continue to expect the China market to be very competitive going forward.

    我們在中國的資料中心營收在第二季環比成長,是資料中心營收的重要貢獻來源。但以佔資料中心總營收的比例計算,仍低於實施出口管制之前的水準。我們仍預期中國市場未來的競爭將非常激烈。

  • The latest round of MLPerf inference benchmarks highlighted NVIDIA's inference leadership, with both the NVIDIA Hopper and Blackwell platforms combining to win gold medals on all tasks. At Computex, NVIDIA, with the top computer manufacturers, unveiled an array of Blackwell architecture-powered systems and NVIDIA networking for building AI factories and data centers.

    最新一輪的 MLPerf 推理基準測試突顯了 NVIDIA 在推理方面的領先地位,NVIDIA Hopper 和 Blackwell 平台在所有任務中雙雙贏得金牌。在 Computex 上,NVIDIA 與頂級電腦製造商一起推出了一系列採用 Blackwell 架構的系統和 NVIDIA 網路產品,用於建置 AI 工廠和資料中心。

  • With the NVIDIA MGX modular reference architecture, our OEM and ODM partners are building more than 100 Blackwell-based systems that can be designed quickly and cost-effectively. The NVIDIA Blackwell platform brings together multiple GPUs, CPUs, DPUs, NVLink, NVLink Switch, and the networking chips, systems, and NVIDIA CUDA software to power the next generation of AI across use cases, industries, and countries.

    透過 NVIDIA MGX 模組化參考架構,我們的 OEM 和 ODM 合作夥伴正在建構 100 多款可快速且經濟高效地設計的 Blackwell 系統。NVIDIA Blackwell 平台匯集了多個 GPU、CPU、DPU、NVLink、NVLink Switch,以及網路晶片、系統和 NVIDIA CUDA 軟體,為各種使用場景、產業和國家的下一代 AI 提供動力。

  • The NVIDIA GB200 NVL72 system with the fifth-generation NVLink enables all 72 GPUs to act as a single GPU and deliver up to 30 times faster inference for LLM workloads, unlocking the ability to run trillion-parameter models in real time. Hopper demand is strong, and Blackwell is widely sampling.

    配備第五代 NVLink 的 NVIDIA GB200 NVL72 系統使所有 72 個 GPU 能夠作為單一 GPU 運作,為 LLM 工作負載提供高達 30 倍的推理加速,並解鎖即時執行兆級參數模型的能力。Hopper 需求強勁,Blackwell 正在廣泛送樣。

  • We executed a change to the Blackwell GPU mask to improve production yields. Blackwell production ramp is scheduled to begin in the fourth quarter and continue into fiscal year '26.

    我們對 Blackwell GPU 的光罩進行了變更,以提高生產良率。Blackwell 的量產爬坡計劃於第四季開始,並持續到 26 財年。

  • In Q4, we expect several billion dollars in Blackwell revenue. Hopper shipments are expected to increase in the second half of fiscal 2025. Hopper supply and availability have improved. Demand for Blackwell platforms is well above supply, and we expect this to continue into next year. Networking revenue increased 16% sequentially.

    第四季,我們預計 Blackwell 將帶來數十億美元的營收。Hopper 出貨量預計在 2025 財年下半年增加,Hopper 的供應和可用性已有所改善。Blackwell 平台的需求遠高於供應,我們預計這種情況將持續到明年。網路收入季增 16%。

  • Our Ethernet for AI revenue, which includes our Spectrum-X end-to-end Ethernet platform, doubled sequentially with hundreds of customers adopting our Ethernet offerings. Spectrum-X has broad market support from OEM and ODM partners and is being adopted by CSPs, GPU cloud providers, and enterprises, including xAI, to connect the largest GPU compute cluster in the world.

    我們的 AI 乙太網路收入(包括我們的 Spectrum-X 端到端乙太網路平台)季增一倍,數百家客戶採用了我們的乙太網路產品。Spectrum-X 擁有來自 OEM 和 ODM 合作夥伴的廣泛市場支持,並被 CSP、GPU 雲端供應商和企業(包括 xAI)採用,以連接世界上最大的 GPU 運算叢集。

  • Spectrum-X supercharges Ethernet for AI processing and delivers 1.6x the performance of traditional Ethernet. We plan to launch new Spectrum-X products every year to support demand for scaling compute clusters from tens of thousands of GPUs today to millions of GPUs in the near future. Spectrum-X is well on track to become a multibillion-dollar product line within a year.

    Spectrum-X 強化了乙太網路的 AI 處理能力,效能可達傳統乙太網路的 1.6 倍。我們計劃每年推出新的 Spectrum-X 產品,以支持將運算叢集從目前數萬個 GPU 擴展到不久將來數百萬個 GPU 的需求。Spectrum-X 有望在一年內成為營收數十億美元的產品線。

  • Our sovereign AI opportunities continue to expand as countries recognize AI expertise and infrastructure as national imperatives for their society and industries. Japan's National Institute of Advanced Industrial Science and Technology is building its AI Bridging Cloud Infrastructure 3.0 supercomputer with NVIDIA. We believe sovereign AI revenue will reach low double-digit billions this year.

    隨著各國認識到 AI 專業知識和基礎設施是其社會與產業的國家要務,我們的主權 AI 機會持續擴大。日本產業技術綜合研究所正在與 NVIDIA 合作建構其 AI Bridging Cloud Infrastructure 3.0 超級電腦。我們相信今年主權 AI 營收將達到低兩位數十億美元的水準。

  • The enterprise AI wave has started. Enterprises also drove sequential revenue growth in the quarter. We are working with most of the Fortune 100 companies on AI initiatives across industries and geographies. A range of applications are fueling our growth, including AI-powered chatbots, generative AI copilots, and agents to build new, monetizable business applications and enhance employee productivity.

    企業人工智慧浪潮已經開始。企業也推動了本季營收的環比成長。我們正在與大多數財富 100 強公司合作,開展跨行業和跨地區的人工智慧計畫。一系列應用程式正在推動我們的發展,包括人工智慧驅動的聊天機器人、生成式人工智慧副駕駛以及用於建立新的、可貨幣化的業務應用程式並提高員工生產力的代理。

  • Amdocs is using NVIDIA generative AI for their smart agent, transforming the customer experience and reducing customer service costs by 30%. ServiceNow is using NVIDIA for its Now Assist offering, the fastest growing new product in the company's history. SAP is using NVIDIA to build Joule copilots.

    Amdocs 正在將 NVIDIA 生成式 AI 用於其智慧代理,從而改變客戶體驗並將客戶服務成本降低 30%。ServiceNow 正在使用 NVIDIA 開發 Now Assist 產品,這是該公司歷史上成長最快的新產品。SAP 正在使用 NVIDIA 打造 Joule 副駕駛。

  • Cohesity is using NVIDIA to build their generative AI agent and lower generative AI development costs. Snowflake, who serves over 3 billion queries a day for over 10,000 enterprise customers, is working with NVIDIA to build copilots. And lastly, Wistron is using NVIDIA AI Omniverse to reduce end-to-end cycle times for their factories by 50%.

    Cohesity 正在使用 NVIDIA 建立其生成式 AI 代理程式並降低生成式 AI 開發成本。Snowflake 每天為超過 10,000 家企業客戶處理超過 30 億次查詢,目前正在與 NVIDIA 合作打造副駕駛。最後,緯創資通正在使用 NVIDIA AI Omniverse 將其工廠的端到端週期時間縮短 50%。

  • Automotive was a key growth driver for the quarter, as every automaker developing autonomous vehicle technology is using NVIDIA in their data centers. Automotive will drive multi-billion dollars in revenue across on-prem and cloud consumption and will grow as next generation AV models require significantly more compute.

    汽車是本季的主要成長動力,因為每家開發自動駕駛汽車技術的汽車製造商都在其資料中心使用 NVIDIA。汽車產業將透過本地和雲端消費帶來數十億美元的收入,並將隨著下一代自動駕駛模型需要更多運算而成長。

  • Healthcare is also on its way to being a multi-billion dollar business as AI revolutionizes medical imaging, surgical robots, patient care, electronic health record processing, and drug discovery. During the quarter, we announced a new NVIDIA AI foundry service to supercharge generative AI for the world's enterprises with Meta's Llama 3.1 collection of models. This marked a watershed moment for enterprise AI.

    隨著 AI 徹底改變醫學影像、手術機器人、病患照護、電子健康記錄處理和藥物發現,醫療保健也正在成為一項價值數十億美元的業務。本季,我們宣布推出全新的 NVIDIA AI 代工服務,透過 Meta 的 Llama 3.1 系列模型為全球企業強化生成式 AI。這標誌著企業 AI 的分水嶺時刻。

  • Companies for the first time can leverage the capabilities of an open-source, frontier-level model to develop customized AI applications to encode their institutional knowledge into an AI flywheel to automate and accelerate their business. Accenture is the first to adopt the new service to build custom Llama 3.1 models for both its own use and to assist clients seeking to deploy generative AI applications.

    公司第一次可以利用開源前沿模型的功能來開發客製化的人工智慧應用程序,將其機構知識編碼到人工智慧飛輪中,以實現業務自動化和加速。埃森哲是第一家採用這項新服務來建立客製化 Llama 3.1 模型的公司,該模型既可以供自己使用,也可以幫助尋求部署生成式 AI 應用程式的客戶。

  • NVIDIA NIM accelerates and simplifies model deployment. Companies across healthcare, energy, financial services, retail, transportation, and telecommunications are adopting NIM, including Aramco, Lowe's, and Uber. AT&T realized 70% cost savings and an eight times latency reduction after moving to NIM for generative AI call transcription and classification.

    NVIDIA NIM 加速並簡化了模型部署。醫療保健、能源、金融服務、零售、運輸和電信等領域的公司正在採用 NIM,包括 Aramco、Lowe's 和 Uber。AT&T 在改用 NIM 進行生成式 AI 通話轉錄和分類後,實現了 70% 的成本節省和 8 倍的延遲降低。

  • Over 150 partners are embedding NIM across every layer of the AI ecosystem. We announced NIM Agent Blueprints, a catalog of customizable reference applications that include a full suite of software for building enterprise generative AI applications.

    超過 150 個合作夥伴正在將 NIM 嵌入到 AI 生態系統的各個層面。我們發布了 NIM Agent Blueprints,這是一個可自訂的參考應用程式目錄,其中包括用於建立企業生成式 AI 應用程式的全套軟體。

  • With NIM agent blueprints, enterprises can refine their AI applications over time, creating a data-driven data flywheel. The first NIM agent blueprints include workflows for customer service, computer-aided drug discovery, and enterprise retrieval augmented generation. Our system integrators, technology solution providers, and system builders are bringing NVIDIA NIM agent blueprints to enterprises.

    借助 NIM 代理藍圖,企業可以隨著時間的推移完善其 AI 應用程序,創建數據驅動的數據飛輪。第一個 NIM 代理藍圖包括客戶服務、電腦輔助藥物發現和企業檢索增強生成的工作流程。我們的系統整合商、技術解決方案供應商和系統建置商正在為企業帶來 NVIDIA NIM 代理藍圖。

  • NVIDIA NIM and NIM agent blueprints are available through the NVIDIA AI enterprise software platform, which has great momentum. We expect our software, SaaS, and support revenue to approach a $2 billion annual run rate exiting this year, with NVIDIA AI Enterprise notably contributing to growth.

    NVIDIA NIM 和 NIM Agent Blueprints 可透過 NVIDIA AI Enterprise 軟體平台取得,該平台發展勢頭強勁。我們預計今年結束時,我們的軟體、SaaS 和支援收入將接近 20 億美元的年化規模,其中 NVIDIA AI Enterprise 對成長貢獻顯著。

  • Moving to gaming and AI PCs. Gaming revenue of $2.88 billion increased 9% sequentially and 16% year on year. We saw sequential growth in console, notebook, and desktop revenue; and demand is strong and growing, and channel inventory remains healthy.

    轉向遊戲和 AI PC。遊戲收入為 28.8 億美元,季增 9%,年增 16%。我們看到遊戲主機、筆記型電腦和桌上型電腦收入均環比成長;需求強勁且持續成長,通路庫存保持健康。

  • Every PC with RTX is an AI PC. RTX PCs can deliver up to 1,300 AI TOPS, and there are now over 200 RTX AI laptop designs from leading PC manufacturers. With 600 AI-powered applications and games and an installed base of 100 million devices, RTX is set to revolutionize consumer experiences with generative AI.

    每台配備 RTX 的 PC 都是 AI PC。RTX PC 最高可提供 1,300 AI TOPS 的算力,目前領先的 PC 製造商已推出 200 多款 RTX AI 筆記型電腦設計。RTX 擁有 600 款 AI 驅動的應用程式和遊戲,以及 1 億台裝置的安裝基礎,將透過生成式 AI 徹底改變消費者體驗。

  • NVIDIA ACE, a suite of generative AI technologies, is available for RTX AI PCs. Mecha BREAK is the first game to use NVIDIA ACE, including our small language model, Minitron 4B, optimized for on-device inference. The NVIDIA gaming ecosystem continues to grow.

    NVIDIA ACE 是一套生成式 AI 技術,現已可用於 RTX AI PC。《Mecha BREAK》是第一款使用 NVIDIA ACE 的遊戲,其中包含我們針對裝置端推理最佳化的小型語言模型 Minitron 4B。NVIDIA 遊戲生態系統持續成長。

  • Recently added RTX and DLSS titles include Indiana Jones and the Great Circle, Dune: Awakening, and Dragon Age: The Veilguard. The GeForce NOW library continues to expand with a total catalog size of over 2,000 titles, the most content of any cloud gaming service.

    最近新增的 RTX 和 DLSS 遊戲包括《Indiana Jones and the Great Circle》、《Dune: Awakening》和《Dragon Age: The Veilguard》。GeForce NOW 遊戲庫持續擴充,目錄總數已超過 2,000 款,是所有雲端遊戲服務中內容最多的。

  • Moving to pro visualization. Revenue of $454 million was up 6% sequentially and 20% year on year. Demand is being driven by AI and graphics use cases, including model fine-tuning and Omniverse-related workloads. Automotive and manufacturing were among the key industry verticals driving growth this quarter.

    轉向專業視覺化。營收為 4.54 億美元,季增 6%,年增 20%。需求由 AI 和圖形用例驅動,包括模型微調和 Omniverse 相關工作負載。汽車和製造業是推動本季成長的關鍵垂直產業。

  • Companies are racing to digitalize workflows to drive efficiency across their operations. The world's largest electronics manufacturer, Foxconn, is using NVIDIA Omniverse to power digital twins of the physical plants that produce NVIDIA Blackwell systems. And several large global enterprises, including Mercedes-Benz, signed multi-year contracts for NVIDIA Omniverse Cloud to build industrial digital twins of factories.

    公司正在競相實現工作流程數位化,以提高整個營運的效率。全球最大的電子產品製造商富士康正在使用 NVIDIA Omniverse 為生產 NVIDIA Blackwell 系統的實體工廠的數位孿生提供動力。包括梅賽德斯-奔馳在內的多家全球大型企業簽署了 NVIDIA Omniverse Cloud 的多年合同,以建立工廠的工業數位孿生。

  • We announced new NVIDIA USD NIMs and connectors to open Omniverse to new industries and enable developers to incorporate generative AI copilots and agents into USD workflows, accelerating their ability to build highly accurate virtual worlds. WPP is implementing USD NIM microservices in its generative AI-enabled content creation pipeline for customers such as The Coca-Cola Company.

    我們發布了新的 NVIDIA USD NIM 和連接器,向新產業開放 Omniverse,並使開發人員能夠將生成式 AI 副駕駛和代理整合到 USD 工作流程中,加快他們建構高度精確虛擬世界的能力。WPP 正在其生成式 AI 內容創建管道中為可口可樂公司等客戶導入 USD NIM 微服務。

  • Moving to automotive and robotics. Revenue was $346 million, up 5% sequentially and up 37% year on year. Year-on-year growth was driven by the new customer ramps in self-driving platforms and increased demand for AI cockpit solutions.

    轉向汽車和機器人技術。營收為 3.46 億美元,季增 5%,年增 37%。自動駕駛平台新客戶的增加以及對人工智慧駕駛艙解決方案的需求增加推動了同比增長。

  • At the Computer Vision and Pattern Recognition conference, NVIDIA won the Autonomous Grand Challenge in the End-to-End Driving at Scale category, outperforming more than 400 entries worldwide. Boston Dynamics, BYD Electronics, Figure, Intrinsic, Siemens, Skild AI, and Teradyne Robotics are using the NVIDIA Isaac robotics platform for autonomous robot arms, humanoids, and mobile robots.

    在電腦視覺與模式識別 (CVPR) 大會上,NVIDIA 在端到端大規模駕駛類別中贏得了自動駕駛大挑戰 (Autonomous Grand Challenge),勝過全球 400 多件參賽作品。Boston Dynamics、比亞迪電子、Figure、Intrinsic、Siemens、Skild AI 和 Teradyne Robotics 正在將 NVIDIA Isaac 機器人平台用於自主機械手臂、人形機器人和移動機器人。

  • Now moving to the rest of the P&L. GAAP gross margins were 75.1% and non-GAAP gross margins were 75.7%, down sequentially due to a higher mix of new products within data center and inventory provisions for low-yielding Blackwell material. Sequentially, GAAP and non-GAAP operating expenses were up 12%, primarily reflecting higher compensation-related costs.

    現在轉向損益表的其餘部分。GAAP 毛利率為 75.1%,非 GAAP 毛利率為 75.7%,環比下降,原因是資料中心內新產品組合占比提高,以及針對低良率 Blackwell 材料的庫存撥備。環比來看,GAAP 和非 GAAP 營運費用增加了 12%,主要反映薪酬相關成本的增加。

  • Cash flow from operations was $14.5 billion. In Q2, we utilized cash of $7.4 billion toward shareholder returns in the form of share repurchases and cash dividends, reflecting the increase in dividend per share. Our Board of Directors recently approved a $50 billion share repurchase authorization to add to our remaining $7.5 billion of authorization at the end of Q2.

    營運現金流為 145 億美元。第二季度,我們以股票回購和現金股利的形式使用了 74 億美元的現金來回報股東,反映出每股股利的增加。我們的董事會最近批准了一項 500 億美元的股票回購授權,以增加我們第二季末剩餘的 75 億美元的授權。

  • Let me turn to the outlook for the third quarter. Total revenue is expected to be $32.5 billion, plus or minus 2%. Our third-quarter revenue outlook incorporates continued growth of our Hopper architecture and sampling of our Blackwell products. We expect the Blackwell production ramp in Q4.

    讓我談談第三季的前景。總收入預計為 325 億美元,上下浮動 2%。我們的第三季營收前景包括我們的 Hopper 架構的持續成長和我們的 Blackwell 產品的採樣。我們預計 Blackwell 產量將在第四季增加。

  • GAAP and non-GAAP gross margins are expected to be 74.4% and 75%, respectively, plus or minus 50 basis points. As our data center mix continues to shift to new products, we expect this trend to continue into the fourth quarter of fiscal 2025.

    GAAP 和非 GAAP 毛利率預計分別為 74.4% 和 75%,上下浮動 50 個基點。隨著我們的資料中心組合繼續轉向新產品,我們預計這一趨勢將持續到 2025 財年第四季。

  • For the full year, we expect gross margins to be in the mid-70% range. GAAP and non-GAAP operating expenses are expected to be approximately $4.3 billion and $3.0 billion, respectively. Full-year operating expenses are expected to grow in the mid to upper 40% range as we work on developing our next generation of products.

    全年來看,我們預計毛利率將處於 70% 的中段水準。GAAP 和非 GAAP 營運費用預計分別約為 43 億美元和 30 億美元。隨著我們致力於開發下一代產品,全年營運費用預計將在 40% 的中高段範圍內成長。

  • GAAP and non-GAAP other income and expenses are expected to be about $350 million, including gains and losses from non-affiliated investments and publicly held equity securities. GAAP and non-GAAP tax rates are expected to be 17%, plus or minus 1%, excluding any discrete items.

    GAAP 和非 GAAP 其他收入和支出預計約為 3.5 億美元,包括非關聯投資和公開持有股本證券的損益。 GAAP 和非 GAAP 稅率預計為 17%,上下浮動 1%(不包括任何離散項目)。

  • Further financial details are included in the CFO commentary and other information available on our IR website. We are now going to open the call for questions. Operator, would you please help us and poll for questions?

    更多財務細節請參閱 CFO 評論以及我們投資者關係網站上提供的其他資訊。我們現在開放提問。接線員,請您協助我們開放提問好嗎?

  • Operator

    Operator

  • Thank you. (Operator Instructions) Vivek Arya, Bank of America Securities.

    謝謝。 (操作員指令)Vivek Arya,美國銀行證券。

  • Vivek Arya - Analyst

    Vivek Arya - Analyst

  • Thanks for taking my question. Jensen, you mentioned in the prepared comments that there's a change in the Blackwell GPU mask. I'm curious, are there any other incremental changes in back-end packaging or anything else?

    感謝您回答我的問題。Jensen,您在準備好的評論中提到 Blackwell GPU 的光罩發生了變更。我很好奇,後端封裝或其他方面是否還有其他增量變更?

  • And I think related, you suggested that you could ship several billion dollars of Blackwell in Q4 despite a change in the design. Is it because all these issues will be solved by then? Just help us size what is the overall impact of any changes in Blackwell timing, what that means to your revenue profile, and how are customers reacting to it?

    我認為相關的是,儘管設計發生了變化,您還是建議您可以在第四季度運送數十億美元的 Blackwell。是因為到那時所有這些問題都會解決嗎?請協助我們評估 Blackwell 時間安排的任何變化的整體影響是什麼,這對您的收入狀況意味著什麼,以及客戶對此有何反應?

  • Jensen Huang - President, Chief Executive Officer, Director

    Jensen Huang - President, Chief Executive Officer, Director

  • Yeah. Thanks, Vivek. The change to the mask is complete. There were no functional changes necessary.

    是的。謝謝,Vivek。光罩的變更已經完成,無需進行任何功能性變更。

  • And so we're sampling functional samples of Blackwell, Grace Blackwell, in a variety of system configurations as we speak. There are something like a hundred different types of Blackwell-based systems that are built that were shown at Computex, and we're enabling our ecosystem to start sampling those. The functionality of Blackwell is as it is, and we expect to start production in Q4.

    因此,就在此刻,我們正在以各種系統配置對 Blackwell、Grace Blackwell 的功能樣品進行送樣。Computex 上展示了大約一百種不同類型的 Blackwell 系統,我們正在讓我們的生態系統開始對這些系統進行送樣。Blackwell 的功能已經定案,我們預計在第四季開始生產。

  • Operator

    Operator

  • Toshiya Hari, Goldman Sachs.

    Toshiya Hari,高盛。

  • Toshiya Hari - Analyst

    Toshiya Hari - Analyst

  • Hi. Thank you so much for taking the question. Jensen, I had a relatively longer term question. As you may know, there's a pretty heated debate in the market on your customers and customers' customers' return on investment and what that means for the sustainability of CapEx going forward.

    你好。非常感謝您提出這個問題。詹森,我有一個相對長期的問題。如您所知,市場上對於您的客戶和客戶的客戶的投資回報以及這對未來資本支出的可持續性意味著什麼存在著相當激烈的爭論。

  • Internally at NVIDIA, what are you guys watching? What's on your dashboard as you try to gauge customer return and how that impacts CapEx?

    在 NVIDIA 內部,你們在看什麼?當您嘗試衡量客戶回報時,儀表板上顯示了什麼以及這對資本支出有何影響?

  • And then a quick follow-up maybe for Colette. I think your sovereign AI number for the full year went up maybe a couple billion. What's driving the improved outlook and how should we think about fiscal '26? Thank you.

    然後是給 Colette 的一個快速追問。我認為你們全年的主權 AI 數字可能上調了幾十億美元。是什麼推動了前景的改善?我們又該如何看待 26 財年?謝謝。

  • Jensen Huang - President, Chief Executive Officer, Director

    Jensen Huang - President, Chief Executive Officer, Director

  • Thanks, Toshiya. First of all, when I said ship production in Q4, I mean shipping out. I don't mean starting to ship -- I mean, I don't mean starting production, but shipping out.

    謝謝,Toshiya。首先,當我說第四季量產時,我指的是對外出貨。我的意思不是開始出貨——我是說,我指的不是開始生產,而是實際出貨。

  • On the longer-term question, let's take a step back. And you've heard me say that we're going through two simultaneous platform transitions at the same time. The first one is transitioning from accelerated computing to -- from general purpose computing to accelerated computing.

    關於比較長期的問題,讓我們退一步來看。你們已經聽我說過,我們正在同時經歷兩個平台轉型。第一個是從通用運算過渡到加速運算。

  • And the reason for that is because CPU scaling has been known to be slowing for some time. And it has slowed to a crawl, and yet the amount of computing demand continues to grow quite significantly. You could maybe even estimate it to be doubling every single year.

    原因是 CPU 的擴展速度放緩已是眾所周知,而且已幾乎停滯,但運算需求量仍在大幅成長,你甚至可以估計它每年都會翻倍。

  • And so if we don't have a new approach, computing inflation would be driving up the cost for every company and it would be driving up the energy consumption of data centers around the world. In fact, you're seeing that.

    因此,如果我們沒有新的方法,計算通膨將推高每家公司的成本,並且會推高全球資料中心的能源消耗。事實上,你也看到了。

  • And so the answer is accelerated computing. We know that accelerated computing, of course, speeds up applications. It also enables you to do computing at a much larger scale. For example, scientific simulations or database processing. But what that translates directly to is lower cost and lower energy consumed.

    所以答案是加速計算。我們知道,加速運算當然可以加快應用程式的速度。它還使您能夠進行更大規模的計算。例如,科學模擬或資料庫處理。但這直接意味著更低的成本和更低的能源消耗。

  • And in fact, this week, there's a blog that came out that talked about a whole bunch of new libraries that we offer. And that's really the core of the first platform transition, going from general purpose computing to accelerated computing.

    事實上,本週出現了一個博客,討論了我們提供的一大堆新庫。這確實是第一個平台過渡的核心,從通用運算到加速運算。

  • And it's not unusual to see someone save 90% of their computing cost. And the reason for that is, of course, you just sped up an application 50x. You would expect the computing cost to decline quite significantly.

    人們節省 90% 的計算成本並不罕見。當然,原因是您剛剛將應用程式的速度提高了 50 倍。您會期望計算成本會顯著下降。
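Jensen's 50x-speedup, 90%-savings point is simple arithmetic; the sketch below makes the relationship explicit. The hourly prices are hypothetical placeholders, not NVIDIA figures: even if an accelerated node rents for 5x the price, finishing the job 50x faster cuts total cost by 90%.

```python
# Illustrative arithmetic only -- hourly prices are hypothetical, not NVIDIA
# figures. Total job cost = runtime x price per hour, so a large speedup
# dominates even a much higher hourly price.

def job_cost(hours: float, price_per_hour: float) -> float:
    """Cost of running one job to completion."""
    return hours * price_per_hour

cpu_cost = job_cost(hours=100.0, price_per_hour=1.0)       # baseline job
gpu_cost = job_cost(hours=100.0 / 50, price_per_hour=5.0)  # 50x faster, 5x pricier

savings = 1 - gpu_cost / cpu_cost
print(f"accelerated cost: {gpu_cost:.0f}, savings: {savings:.0%}")  # 90% saved
```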

  • The second was enabled by accelerated computing. Because we drove down the cost of training large language models or training deep learning so incredibly that it is now possible to have gigantic scale models, multi-trillion parameter models, and train it on, pre-train it, on just about the world's knowledge corpus and let the model go figure out how to understand human language representation and how to codify knowledge into its neural networks and how to learn reasoning.

    第二個轉型是由加速運算促成的。因為我們把訓練大型語言模型、訓練深度學習的成本降到如此之低,現在已經可以建立超大規模、數兆參數的模型,並幾乎用全世界的知識語料庫對其進行預訓練,讓模型自己學會如何理解人類語言表示、如何將知識編碼進神經網路,以及如何學習推理。

  • And so -- which caused the generative AI evolution. Now, generative AI, taking a step back about why it is that we went so deeply into it, is because it's not just a feature. It's not just a capability. It's a fundamental new way of doing software.

    因此,這引發了生成式人工智慧的進化。現在,退一步來說,我們為什麼要如此深入地研究生成式人工智慧,是因為它不僅僅是一個功能。這不僅僅是一種能力。這是一種全新的軟體開發方式。

  • Instead of human-engineered algorithms, we now have data. We tell the AI, we tell the model, we tell the computer what are the expected answers. What are our previous observations? And then for it to figure out what the algorithm is what's the function, it learns a universal -- AI is a bit of a universal function approximator, and it learns the function.

    我們現在擁有的不是人工設計的演算法,而是數據。我們告訴人工智慧,我們告訴模型,我們告訴電腦預期的答案是什麼。我們之前的觀察結果是什麼?然後為了弄清楚演算法是什麼,函數是什麼,它學習了一個通用函數——人工智慧有點像通用函數逼近器,它學習了這個函數。

  • And so you could learn the function of almost anything -- anything that you have that's predictable, anything that has structure, anything that you have previous examples of. And so now, here we are with generative AI. It's a fundamental new form of computer science. It's affecting how every layer of computing is done from CPU to GPU, from human-engineered algorithms to machine-learned algorithms. And the type of applications you could now develop and produce is fundamentally remarkable.

    所以你可以學習幾乎任何東西的功能——任何可預測的東西、任何有結構的東西、任何你以前有過例子的東西。現在,我們有了生成式人工智慧。它是計算機科學的一種基本新形式。它影響著從 CPU 到 GPU、從人類工程演算法到機器學習演算法的每一層計算的完成方式。您現在可以開發和生產的應用程式類型從根本上來說是非凡的。
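The "universal function approximator" idea above can be illustrated with the smallest possible learned function: a least-squares line fit, where we supply inputs and expected answers and the machine recovers the rule. The data points are invented for illustration; real models are neural networks with billions of parameters, but the principle (learn a function from examples rather than hand-code an algorithm) is the same.

```python
# A toy instance of "give the computer examples, let it learn the function":
# closed-form least-squares fit of y = a*x + b from (input, expected answer)
# pairs. No algorithm for the rule is written down; it is recovered from data.

def fit_line(points):
    """Least-squares estimate of slope a and intercept b for y = a*x + b."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Observations of an unknown process (here, secretly y = 2x + 1):
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
a, b = fit_line(data)
print(f"learned function: y = {a:.1f}x + {b:.1f}")  # recovers y = 2.0x + 1.0
```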

  • And there are several things that are happening in generative AI. So the first thing that's happening is the frontier models are growing in quite substantial scale. And they're still seeing -- we're still all seeing the benefits of scaling. And whenever you double the size of a model, you also have to more than double the size of the data set to go train it.

    生成人工智慧領域正在發生一些事情。因此,首先發生的事情是前沿模型正在大規模成長。他們仍然看到——我們仍然看到擴展的好處。每當你將模型的大小增加一倍時,你也必須將資料集的大小增加一倍以上才能訓練它。

  • And so the amount of flops necessary in order to create that model goes up quadratically. And so it's not unexpected to see that the next generation models could take 20 -- 10, 20, 40 times more compute than last generation. So we have to continue to drive the generational performance up quite significantly so we can drive down the energy consumed and drive down the cost necessary to do it.

    因此,建立該模型所需的浮點運算量 (flops) 會呈二次方成長。因此,下一代模型所需的運算量可能比上一代多 10 倍、20 倍、甚至 40 倍,這並不令人意外。因此,我們必須持續大幅提升世代效能,才能降低能耗並降低所需成本。
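The quadratic growth described above follows from a common rule of thumb (training flops scale roughly with parameters times training tokens); a sketch under that assumption, with hypothetical model sizes rather than any disclosed figures:

```python
# Hedged illustration of the scaling arithmetic above: if training compute
# scales roughly as c * parameters * tokens (a common rule of thumb, c ~ 6),
# then doubling the model while also doubling the data quadruples the flops.
# The sizes below are illustrative assumptions, not NVIDIA's internal numbers.

def training_flops(params: float, tokens: float, c: float = 6.0) -> float:
    """Rule-of-thumb training cost: flops ~ c * parameters * tokens."""
    return c * params * tokens

base = training_flops(params=1e12, tokens=1e13)    # 1T params, 10T tokens
bigger = training_flops(params=2e12, tokens=2e13)  # double both axes

print(f"compute ratio: {bigger / base:.0f}x")  # one doubling step -> 4x flops
```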

  • So the first one is there are larger frontier models trained on more modalities. And surprisingly, there are more frontier model makers than last year. And so you have more on more on more. That's one of the dynamics going on in generative AI.

    所以第一個是有更大的前沿模型接受更多模式的訓練。令人驚訝的是,前沿模型製造商比去年更多。所以你有更多更多更多。這是生成人工智慧中正在發生的動態之一。

  • The second is, although it's below the tip of the iceberg, what we see are ChatGPT, image generators. We see coding -- we use generative AI for coding quite extensively here at NVIDIA now. We, of course, have a lot of digital designers and things like that. But those are kind of the tip of the iceberg.

    第二個是,雖然它在冰山一角以下,但我們看到的是ChatGPT,影像產生器。我們看到了編碼——我們現在在 NVIDIA 廣泛使用生成式 AI 進行編碼。當然,我們有很多數位設計師和類似的東西。但這些只是冰山一角。

  • What's below the iceberg are the largest systems -- the largest computing systems in the world today, which are -- and you've heard me talk about this in the past -- recommender systems, now moving from CPUs to generative AI. So recommender systems, ad generation, custom ad generation, targeting ads at very large scale and quite hyper-targeted, search, and user-generated content. These very large-scale applications have all now evolved to generative AI.

    冰山下面是最大的系統——當今世界上最大的運算系統——你過去聽過我談論過這個——它們是從CPU轉移的推薦系統,現在正在從CPU轉移到生成式人工智慧。因此,推薦系統、廣告生成、自訂廣告生成、大規模定位廣告和超定位、搜尋和用戶生成的內容。這些都是非常大規模的應用程序,現在已經發展為生成式人工智慧。

  • Of course, the number of generative AI startups is generating tens of billions of dollars of cloud renting opportunities for our cloud partners and sovereign AI. Countries that are now realizing that their data is their natural and national resource and they have to use AI to build their own AI infrastructure so that they could have their own digital intelligence.

    當然,生成式人工智慧新創公司的數量正在為我們的雲端合作夥伴和主權人工智慧創造數百億美元的雲端租賃機會。各國現在意識到他們的數據是他們的自然資源和國家資源,他們必須利用人工智慧來建立自己的人工智慧基礎設施,這樣他們才能擁有自己的數位智慧。

  • Enterprise AI, as Colette mentioned earlier, is starting. And you might have seen our announcement that the world's leading IT companies are joining us to take the NVIDIA enterprise platform to the world's enterprises. The companies that we're talking to, so many of them were just so incredibly excited to drive more productivity out of the company.

    正如科萊特之前提到的,企業人工智慧正在起步。您可能已經看到我們宣布全球領先的 IT 公司將與我們一起將 NVIDIA 企業平台推向全球企業。在與我們交談的公司中,有許多都對提高公司生產力感到非常興奮。

  • And then general robotics, the big transformation last year, as we are now able to learn physical AI from watching video and human demonstration, from synthetic data generation, from reinforcement learning, from systems like Omniverse. We are now able to work with just about every robotics company to start thinking about -- start building general robotics.

    然後是通用機器人技術,這是去年的重大轉變,因為我們現在能夠透過觀看影片、人類演示和合成數據生成、強化學習、Omniverse 等系統來學習實體人工智慧。我們現在能夠與幾乎所有機器人公司合作,開始思考——開始建造通用機器人。

  • And so you can see that there are just so many different directions that generative AI is going. And so we're actually seeing the momentum of generative AI accelerating.

    所以你可以看到生成式人工智慧有許多不同的發展方向。因此,我們實際上看到了生成式人工智慧的發展動能正在加速。

  • Colette Kress - Executive Vice President, Chief Financial Officer

    Colette Kress - Executive Vice President, Chief Financial Officer

  • And Toshiya, to answer your question regarding sovereign AI and our goals in terms of growth, in terms of revenue, it certainly is a unique and growing opportunity, something that surfaced with generative AI and the desires of countries around the world to have their own generative AI that would be able to incorporate their own language, incorporate their own culture, incorporate their own data in that country.

    Toshiya,回答你關於主權人工智慧以及我們在成長和營收方面目標的問題:這無疑是一個獨特且不斷成長的機會,是隨著生成式人工智慧的出現而浮現的——世界各國都希望擁有自己的生成式人工智慧,能夠融入自己的語言、自己的文化,以及自己國家的資料。

  • So more and more excitement around these models and what they can be, specifically for those countries. So yes, we are seeing some growth opportunity in front of us.

    因此,各國對這些模型,以及它們能為這些國家量身打造的可能性,越來越感到興奮。所以,是的,我們看到眼前有一些成長機會。

  • Operator

    Operator

  • Joe Moore, Morgan Stanley.

    喬摩爾,摩根士丹利。

  • Joe Moore - Analyst

    Joe Moore - Analyst

  • Great. Thank you. Jensen, in the press release, you talked about Blackwell anticipation being incredible, but it seems like Hopper demand is also really strong. I mean, you're guiding for a very strong quarter without Blackwell in October.

    太好了,謝謝。詹森,在新聞稿中,你談到市場對 Blackwell 的期待令人難以置信,但 Hopper 的需求似乎也非常強勁。我的意思是,對於 10 月這個還沒有 Blackwell 出貨的季度,您給出了非常強勁的指引。

  • So how long do you see coexisting strong demand for both? And can you talk about the transition to Blackwell? Do you see people intermixing clusters? Do you think most of the Blackwell activity is new clusters? Just some sense of what that transition looks like.

    那麼,您認為兩者的強勁需求會共存多久?您能談談向 Blackwell 的過渡嗎?您是否看到人們混用叢集?您認為 Blackwell 的大部分活動是否都是新叢集?只是想了解這個過渡大概會是什麼樣子。

  • Jensen Huang - President, Chief Executive Officer, Director

    Jensen Huang - President, Chief Executive Officer, Director

  • Yeah. Thanks, Joe. The demand for Hopper is really strong. And it's true, the demand for Blackwell is incredible. There's a couple of reasons for that.

    是的。謝謝,喬。對霍珀的需求確實很強烈。確實,對布萊克威爾的需求令人難以置信。這有幾個原因。

  • The first reason is, if you just look at the world's cloud service providers and the amount of GPU capacity they have available, it's basically none. And the reason for that is because they're either being deployed internally for accelerating their own workloads -- data processing, for example.

    第一個原因是,如果你只看全球的雲端服務供應商以及他們可用的 GPU 容量,基本上沒有。原因是它們要么在內部部署,要么是為了加速自己的工作負載(例如資料處理)。

  • Data processing, we hardly ever talk about it because it's mundane. It's not very cool because it doesn't generate a picture or generate words, but almost every single company in the world processes data in the background.

    數據處理,我們幾乎從不談論它,因為它很平常。它不是很酷,因為它不會產生圖片或生成文字,但世界上幾乎每家公司都會在後台處理資料。

  • And NVIDIA GPUs are the only accelerators on the planet that process and accelerate data -- SQL data, data science toolkits like Pandas, and the new one, Polars. These are the most popular data processing platforms in the world. And aside from CPUs, which as I've mentioned before are really running out of steam, NVIDIA's accelerated computing is really the only way to get boosted performance out of that.

    NVIDIA GPU 是地球上唯一能夠處理和加速資料的加速器——SQL 資料、Pandas 等資料科學工具包,以及新推出的 Polars。這些是世界上最受歡迎的資料處理平台。除了 CPU(正如我之前提到的,CPU 確實已經後繼無力)之外,NVIDIA 的加速運算實際上是提升效能的唯一方法。
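The drop-in acceleration described here can be sketched with RAPIDS cuDF's pandas accelerator mode. `cudf.pandas` is a real NVIDIA library, but it requires an NVIDIA GPU; the sketch below falls back to stock pandas when cuDF is unavailable, so the identical dataframe code runs either way. The data is a made-up example.

```python
# Sketch of drop-in GPU dataframe acceleration. Assumes RAPIDS cuDF is
# installed on a machine with an NVIDIA GPU; otherwise the very same
# pandas code runs unchanged on the CPU.
try:
    import cudf.pandas
    cudf.pandas.install()  # transparently routes pandas calls to the GPU
except ImportError:
    pass  # no cuDF/GPU available: fall back to stock CPU pandas

import pandas as pd

df = pd.DataFrame({"region": ["emea", "apac", "emea", "apac"],
                   "revenue": [10, 20, 30, 40]})
totals = df.groupby("region")["revenue"].sum().to_dict()
print(totals)  # {'apac': 60, 'emea': 40}
```

The design point is that acceleration happens below the API: the dataframe code is unmodified, which is why existing data-processing workloads can migrate to GPUs without rewrites.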

  • And so that's number one: the primary -- the number one use case, long before generative AI came along, is the migration of applications, one after another, to accelerated computing. The second is, of course, the rentals.

    因此,第一個就是主要的——早在生成式人工智慧出現之前,第一大用例就是應用程式一個接一個地遷移到加速運算。第二個當然是租賃。

  • They're renting capacity to model makers or renting it to startup companies. And a generative AI company spends the vast majority of their invested capital into infrastructure so that they could use an AI to help them create products.

    他們將產能租給模型製造商或新創公司。生成式人工智慧公司將絕大多數投資資金投入基礎設施,以便他們可以使用人工智慧來幫助他們創造產品。

  • And so these companies need it now. They just simply can't afford -- you just raised money. They want you to put it to use now. You have processing that you have to do. You can't do it next year, you got to do it today.

    所以這些公司現在就需要它。他們實在等不起——你才剛募到資金,投資人希望你現在就把錢用起來。你有必須完成的處理工作,不能等到明年,今天就得做。

  • And so that's one reason. The second reason for Hopper demand right now is because of the race to the next plateau. The first person to the next plateau gets to be -- gets to introduce a revolutionary level of AI. The second person to get there is incrementally better or about the same.

    所以這就是原因之一。目前對 Hopper 需求強勁的第二個原因,是奔向下一個高原期的競賽。第一個到達下一個高原的人,將能夠推出革命性水準的人工智慧;第二個到達的人只是略有進步,或大致相同。

  • And so the ability to systematically and consistently race to the next plateau and be the next one there is how you establish leadership. NVIDIA's constantly doing that, and we showed that to the world in the GPUs we make, in AI factories that we make, the networking systems we make, the SOCs we create.

    因此,能夠有系統地、持續地奔向下一個平台並成為下一個平台的能力就是你建立領導力的方法。 NVIDIA 一直在這樣做,我們透過我們製造的 GPU、我們製造的 AI 工廠、我們製造的網路系統、我們創建的 SOC 向世界展示了這一點。

  • We want to set the pace. We want to be consistently the world's best. And that's the reason why we drive ourselves so hard.

    我們想引領步伐。我們希望始終成為世界上最好的。這就是我們如此努力激勵自己的原因。

  • Of course, we also want to see our dreams come true. And all of the capabilities that we imagine in the future and the benefits that we can bring to society, we want to see all that come true. And so these model makers are the same.

    當然,我們也希望看到我們的夢想成真。所有我們想像的未來的能力以及我們能為社會帶來的好處,我們都希望看到這一切成為現實。所以這些模型製作者都是一樣的。

  • Of course, they want to be the world's best. They want to be the world's first. And although Blackwell will start shipping out in billions of dollars at the end of this year, the standing up of the capacity is still probably weeks and a month or so away.

    當然,他們想成為世界上最好的,他們想成為世界第一。儘管 Blackwell 將於今年底開始出貨數十億美元,但產能的建置完成可能還需要幾週到一個月左右的時間。

  • And so between now and then is a lot of generative AI market dynamic. And so everybody is just really in a hurry. It's either operational reasons that they need it. They need accelerated computing. They don't want to build any more general purpose computing infrastructure, even Hopper.

    因此,從現在到那時,生成式人工智慧市場充滿了動態變化,所以每個人都非常著急。他們需要它,要么是出於營運上的原因:他們需要加速運算,不想再建造任何通用運算基礎設施——即使拿到的是 Hopper 也好。

  • Of course, H200 is state-of-the-art. If you have a choice right now between building CPU infrastructure for your business or Hopper infrastructure, that decision is relatively clear. And so I think people are just clamoring to transition the trillion dollars of established installed infrastructure to a modern infrastructure, and Hopper is state-of-the-art.

    當然,H200 是最先進的。如果您現在要在為企業建置 CPU 基礎設施和建置 Hopper 基礎設施之間做選擇,這個決定是相對明確的。因此,我認為人們正爭先恐後地把價值一兆美元的既有基礎設施轉換為現代化基礎設施,而 Hopper 就是當前最先進的選擇。

  • Operator

    Operator

  • Matt Ramsay, TD Cowen

    馬特·拉姆齊,TD·考恩

  • Matt Ramsay - Analyst

    Matt Ramsay - Analyst

  • Thank you very much. Good afternoon, everybody. I wanted to kind of circle back to an earlier question about the debate that investors are having about the ROI on all of this CapEx. And hopefully this question and the distinction will make some sense.

    非常感謝。大家下午好。我想回到之前的一個問題,即投資者對所有資本支出的投資回報率進行的辯論。希望這個問題和差異能夠有意義。

  • But what I'm having discussions about is the percentage of folks that you see that are spending all of this money and looking to push the frontier towards AGI convergence and, as you just said, a new plateau in capability -- and they're going to spend regardless to get to that level of capability, because it opens up so many doors for the industry and for their company -- versus customers that are really, really focused today on CapEx versus ROI.

    但我想討論的是:在您看到的客戶中,有多少比例的人不惜重金,希望把前沿推向 AGI 融合,也就是您剛才所說的能力上的新高原——他們為了達到那個能力水準,無論如何都會投入,因為這會為產業和他們的公司打開許多大門;相對地,又有多少客戶如今真正非常關注資本支出與投資報酬率的權衡。

  • I don't know if that distinction makes sense. I'm just trying to get a sense of how you're seeing the priorities of people that are putting the dollars on the ground on this new technology and what their priorities are their timeframes are for that investment. Thanks.

    我不知道這種差異是否有意義。我只是想了解一下,您如何看待那些在這項新技術上投入資金的人們的優先事項,以及他們的優先事項是什麼,他們的投資時間表是什麼。謝謝。

  • Jensen Huang - President, Chief Executive Officer, Director

    Jensen Huang - President, Chief Executive Officer, Director

  • Thanks, Matt. The people who are investing in NVIDIA infrastructure are getting returns on it right away. It's the best ROI computing infrastructure investment you can make today.

    謝謝,馬特。投資 NVIDIA 基礎設施的人可以立即獲得回報。這是您當今可以進行的最佳投資回報率計算基礎設施投資。

  • So one way to think through it, probably the most -- the easiest way to think through it is just go back to first principles. You have a trillion dollars' worth of general purpose computing infrastructure. And the question is, do you want to build more of that or not?

    因此,思考它的一種方法,可能是最——最簡單的思考方法就是回到第一原則。您擁有價值一萬億美元的通用運算基礎設施。問題是,你是否想建造更多這樣的東西?

  • And for every billion dollars' worth of general purpose CPU-based infrastructure that you stand up, you probably rent it for less than a billion. And so, because it's commoditized, there's already a trillion dollars on the ground. What's the point of getting more?

    對於您建置的每 10 億美元基於 CPU 的通用基礎設施,租金收入可能不到 10 億美元。由於它已經商品化,而且市場上已經部署了一兆美元的存量,再增加更多又有什麼意義呢?

  • And so the people who are clamoring to get this infrastructure, one, when they build out Hopper-based infrastructure and, soon, Blackwell-based infrastructure, they start saving money. That's tremendous return on investment.

    因此,那些吵著要獲得這種基礎設施的人,一,當他們建造基於霍珀的基礎設施以及很快基於布萊克韋爾的基礎設施時,他們開始省錢。這是巨大的投資回報。

  • And the reason why they start saving money is because data processing saves money. Data processing is probably just a giant part of it already. And so recommender systems save money, so on and so forth. Okay. And so you start saving money.

    他們之所以開始省錢,是因為數據處理可以省錢。數據處理可能只是其中一個重要的部分。因此推薦系統可以省錢,等等。好的。於是你開始存錢。

  • The second thing is everything you stand up is going to get rented, because so many companies are being founded to create generative AI. And so your capacity gets rented right away, and the return on investment of that is really good.

    第二件事是,你建置起來的所有產能都會被租出去,因為有太多公司正在成立以開發生成式人工智慧。因此,您的產能會立即被租用,而其投資報酬率非常好。
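The back-of-envelope economics in this answer can be sketched as a toy comparison. The capex and rental figures below are illustrative assumptions, not disclosed numbers; the point is only that commoditized general-purpose capacity rents back less than it cost, while accelerated capacity that is rented out immediately can return more than its capex.

```python
# Toy ROI comparison under stated assumptions (all numbers illustrative,
# echoing the "stand up a billion, rent it for less than a billion" point).
def rental_roi(capex: float, lifetime_rental_income: float) -> float:
    """Simple multiple of money returned over the asset's life."""
    return lifetime_rental_income / capex

# Commoditized general-purpose capacity: assumed to rent for less than cost.
cpu_roi = rental_roi(capex=1.0e9, lifetime_rental_income=0.9e9)

# Accelerated capacity: assumed to rent at a premium because it delivers
# far more work per dollar of power and floor space.
gpu_roi = rental_roi(capex=1.0e9, lifetime_rental_income=1.5e9)

print(f"CPU infra ROI: {cpu_roi:.2f}x, accelerated infra ROI: {gpu_roi:.2f}x")
```

With these assumed numbers, the general-purpose build returns less than 1x while the accelerated build returns more than 1x, which is the contrast the answer is drawing.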

  • And then the third reason is your own business. You want to either create the next frontier yourself or your own internet services benefit from a next-generation ad system or a next-generation recommender system or a next-generation search system. So for your own services, for your own stores, for your own user-generated content, social media platforms, for your own services, generative AI is also a fast ROI.

    第三個原因是你自己的業務。你要么想自己開創下一個前沿,要么讓自己的網路服務受益於下一代廣告系統、下一代推薦系統或下一代搜尋系統。因此,對於你自己的服務、你自己的商店、你自己的用戶生成內容和社群媒體平台而言,生成式人工智慧同樣能帶來快速的投資報酬。

  • And so there's a lot of ways you could think through it. But at the core, it's because it is the best computing infrastructure you could put in the ground today. The world of general-purpose computing is shifting to accelerated computing.

    因此,您可以透過多種方式進行思考。但從本質上講,這是因為它是當今可以投入使用的最佳運算基礎設施。通用運算領域正在轉向加速運算。

  • The world of human-engineered software is moving to generative AI software. If you were to build infrastructure to modernize your cloud and your data centers, build it with accelerated computing and NVIDIA. That's the best way to do it.

    人類工程軟體的世界正在轉向生成式人工智慧軟體。如果您要建置基礎架構來實現雲端和資料中心的現代化,請使用加速運算和 NVIDIA 進行建置。這是最好的方法。

  • Operator

    Operator

  • Timothy Arcuri, UBS.

    提摩西‧阿庫裡,瑞銀集團。

  • Timothy Arcuri - Analyst

    Timothy Arcuri - Analyst

  • Thanks a lot. I had a question on the shape of the revenue growth, both near and longer term. I know, Colette, you did increase OpEx for the year. And if I look at the increase in your purchase commitments and your supply obligations, that's also quite bullish.

    多謝。我對近期和長期收入成長的情況有疑問。我知道,Colette,您今年確實增加了營運支出。如果我看看你們的購買承諾和供應義務的增加,這也是相當樂觀的。

  • On the other hand, there's some school of thought that not that many customers really seem ready for liquid cooling. And I do recognize that some of these racks can be air cooled. But Jensen, is that something to consider on the shape of how Blackwell is going to ramp?

    另一方面,有一些觀點認為,似乎沒有多少客戶真正準備好採用液體冷卻。我確實認識到其中一些機架可以進行空氣冷卻。但是詹森,布萊克威爾的發展方式是否需要考慮這一點?

  • And then I guess when you look beyond next year, which is obviously going to be a great year, and you look into '26, do you worry about any other gating factors, like say the power supply chain or at some point models start to get smaller? I'm just wondering if you can speak to that. Thanks.

    然後我想,當你展望明年之後,這顯然將是偉大的一年,當你展望 26 年時,你是否擔心任何其他門控因素,例如電力供應鏈或在某些時候模型開始變小?我只是想知道你是否可以談談這一點。謝謝。

  • Jensen Huang - President, Chief Executive Officer, Director

    Jensen Huang - President, Chief Executive Officer, Director

  • I'm going to work backwards. I really appreciate the question, Tim. So remember, the world is moving from general purpose computing to accelerated computing. And the world builds about a trillion dollars' worth of data centers. A trillion dollars' worth of data centers in a few years will be all accelerated computing.

    我會倒過來回答。我真的很感激這個問題,提姆。請記住,世界正在從通用運算轉向加速運算。全球建造的資料中心價值約一兆美元;幾年後,這價值一兆美元的資料中心將全部是加速運算。

  • In the past, no GPUs were in data centers, just CPUs. In the future, every single data center will have GPUs. And the reason for that is very clear: we need to accelerate workloads so that we can continue to be sustainable, continue to drive down the cost of computing, so that when we do more computing, we don't experience computing inflation.

    過去,資料中心沒有 GPU,只有 CPU。未來,每個資料中心都將配備 GPU。原因很明顯,因為我們需要加速工作負載,以便我們能夠繼續保持可持續性,繼續降低計算成本,這樣當我們進行更多計算時,我們就不會遇到計算膨脹。

  • Second, we need GPUs for a new computing model called generative AI that we can all acknowledge is going to be quite transformative to the future of computing. And so I think working backwards, the way to think about that is the next trillion dollars of the world's infrastructure will clearly be different than the last trillion. And it will be vastly accelerated.

    其次,我們需要 GPU 來實現稱為生成式 AI 的新運算模型,我們都承認該模型將對運算的未來產生巨大的變革。因此,我認為逆向思考,思考世界基礎設施的下一個萬億美元的方式顯然將與上一個萬億美元不同。而且這將大大加速。

  • With respect to the shape of our ramp, we offer multiple configurations of Blackwell. Blackwell comes in either a Blackwell classic, if you will, that uses the HGX form factor that we pioneered with Volta -- and I think it was Volta. And so we've been shipping the HGX form factor for some time. It is air-cooled.

    關於我們產能爬坡的形狀,我們提供 Blackwell 的多種配置。Blackwell 有一種可以稱為經典款的版本,採用我們從 Volta 開始首創的 HGX 外形——我記得是 Volta。因此,我們出貨 HGX 外形已有一段時間了,它是風冷的。

  • The Grace Blackwell is liquid-cooled. However, the number of data centers that want to go liquid-cooled is quite significant. And the reason for that is because we can, in a liquid-cooled data center, in any power-limited data center, whatever size data center you choose, you could install and deploy anywhere from three to five times the AI throughput compared to the past.

    Grace Blackwell 採用液冷。然而,想要採用液冷的資料中心數量相當可觀。原因在於,在液冷資料中心裡——在任何功率受限的資料中心,無論您選擇什麼規模——您可以安裝和部署相當於過去三到五倍的 AI 吞吐量。

  • And so liquid cooling is cheaper. Liquid cooling -- [our] TCO is better. And liquid cooling allows you to have the benefit of this capability we call NVLink, which allows us to expand it to 72 Grace Blackwell packages, which has essentially 144 GPUs.

    因此,液冷更便宜,液冷的總持有成本(TCO)更好。而且液冷讓您能夠受益於我們稱為 NVLink 的這項能力,它使我們能夠擴展到 72 個 Grace Blackwell 封裝,基本上就是 144 個 GPU。

  • And so imagine 144 GPUs connected in NVLink, and we're increasingly showing you the benefits of that. And the next click is obviously very low latency, very high throughput, large language model inference. The large NVLink domain is going to be a game changer for that.

    因此,想像一下 144 個 GPU 以 NVLink 相連,我們正越來越多地向您展示其優勢。而下一步顯然是非常低延遲、非常高吞吐量的大型語言模型推論。大型 NVLink 網域將在這方面改變遊戲規則。
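The rack sizing in the last few answers can be checked with simple arithmetic, using the figures from the call (72 Grace Blackwell packages, "essentially 144 GPUs", and 3 to 5 times the AI throughput in a power-limited data center). The power budget below is an illustrative assumption.

```python
# Arithmetic behind the NVLink-domain and power-limited claims,
# using the figures quoted on the call.
packages_per_domain = 72   # Grace Blackwell packages in one NVLink domain
gpus_per_package = 2       # "essentially 144 GPUs" implies 2 per package
gpus_per_domain = packages_per_domain * gpus_per_package

# In a power-limited data center, throughput per watt is what matters:
# the same power budget deploys 3-5x more AI throughput (illustrative budget).
power_budget_mw = 10.0
baseline_throughput = power_budget_mw * 1.0      # arbitrary units
liquid_cooled_low = power_budget_mw * 3.0        # 3x per-watt gain
liquid_cooled_high = power_budget_mw * 5.0       # 5x per-watt gain

print(gpus_per_domain)  # 144
print(liquid_cooled_low / baseline_throughput,
      liquid_cooled_high / baseline_throughput)  # 3.0 5.0
```

Because the data center's power, not its floor space, is the binding constraint, the per-watt gain translates one-for-one into deployable throughput under a fixed budget.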

  • And so I think people are very comfortable deploying both. And so almost every CSP we're working with is deploying some of both. And so I'm pretty confident that we'll ramp it up just fine.

    所以我認為人們很樂於同時部署這兩種方案。幾乎每一家與我們合作的 CSP 都同時部署了兩者。因此,我非常有信心我們的產能爬坡會很順利。

  • The second of your three questions is looking forward. Yeah, next year is going to be a great year. We expect to grow our data center business quite significantly next year. Blackwell is going to be a complete game changer for the industry. And Blackwell is going to carry into the following year.

    你三個問題中的第二個是展望未來。是的,明年將會是偉大的一年。我們預計明年資料中心業務將大幅成長。Blackwell 將徹底改變這個產業的遊戲規則,而且 Blackwell 的動能將延續到再下一年。

  • And as I mentioned earlier, working backwards from first principles, remember that computing is going through two platform transitions at the same time. And that's just really, really important to keep your mind focused on, which is general purpose computing is shifting to accelerated computing and human engineered software is going to transition to generative AI or artificial intelligence learned software. Okay?

    正如我之前提到的,從第一原則倒推,請記住計算正在同時經歷兩個平台轉換。讓你的注意力集中在這一點上真的非常重要,那就是通用運算正在轉向加速運算,而人類工程軟體將過渡到生成人工智慧或人工智慧學習軟體。好的?

  • Operator

    Operator

  • Stacy Rasgon, Bernstein Research.

    史黛西‧拉斯貢,伯恩斯坦研究中心。

  • Stacy Rasgon - Analyst

    Stacy Rasgon - Analyst

  • Hi guys. Thanks for taking my questions. I have two short questions to Colette. The first, several billion dollars of Blackwell revenue in Q4. Is that additive? You said you expected Hopper demand to strengthen in the second half. Does that mean Hopper strengthens Q3 to Q4 as well on top of Blackwell adding several billion dollars?

    嗨,大家好。感謝您回答我的問題。我有兩個簡短的問題要問 Colette。第一個:第四季數十億美元的 Blackwell 營收,是額外增加的嗎?您說您預計下半年 Hopper 需求將會增強,這是否意味著在 Blackwell 增加數十億美元之外,Hopper 從第三季到第四季也會成長?

  • And the second question on gross margins. If I have mid-70%s for the year -- depending on where I want to draw that, if I have 75% for the year, I'd be at something like 71% to 72% for Q4, somewhere in that range. Is that the kind of exit rate for gross margins that you're expecting?

    第二個問題是關於毛利率的。如果全年是 75% 上下的中段水準——取決於怎麼取值,如果全年是 75%,那麼第四季大概會落在 71% 到 72% 左右的區間。這是您預期的毛利率退出水準嗎?
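Stacy's implied-Q4 arithmetic can be reproduced: if the full year is to average about 75% gross margin, the Q4 margin is whatever closes the gap after the first three quarters. The quarterly revenue and margin figures below are rough illustrative assumptions, not guidance.

```python
# Back out the implied Q4 gross margin from a full-year average target.
# All quarterly figures are rough illustrative assumptions, not guidance.
quarters = [  # (revenue $B, non-GAAP gross margin) for Q1-Q3, assumed
    (26.0, 0.789),
    (30.0, 0.757),
    (32.5, 0.750),
]
q4_revenue = 35.0        # assumed Q4 revenue, $B
full_year_target = 0.75  # "mid-70%s or approximately 75%"

gross_profit_q1_q3 = sum(rev * gm for rev, gm in quarters)
total_revenue = sum(rev for rev, _ in quarters) + q4_revenue
implied_q4_gm = (full_year_target * total_revenue - gross_profit_q1_q3) / q4_revenue

print(f"implied Q4 gross margin: {implied_q4_gm:.1%}")  # 71.5%
```

With these assumptions the implied Q4 margin lands in the 71-72% range the analyst cites, which is exactly the "exit rate" question being posed.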

  • And how should we think about the drivers of gross margin evolution into next year as Blackwell ramps? And I mean, hopefully, I guess the yields and the inventory reserves and everything come up.

    隨著布萊克韋爾的崛起,我們該如何思考明年毛利率演變的驅動因素?我的意思是,希望我猜產量和庫存儲備以及一切都會出現。

  • Colette Kress - Executive Vice President, Chief Financial Officer

    Colette Kress - Executive Vice President, Chief Financial Officer

  • Yeah, Stacy. Let's first take your question about Hopper and Blackwell. We believe our Hopper will continue to grow into the second half. We have many new products for Hopper -- on top of our existing products for Hopper -- that we believe will continue to ramp in the next quarters, including our Q3 and those new products moving into Q4.

    是的,史黛西。讓我們先回答你關於 Hopper 和 Blackwell 的問題。我們相信 Hopper 在下半年將持續成長。除了現有的 Hopper 產品之外,我們還有許多新的 Hopper 產品,我們相信它們將在接下來的幾個季度持續放量,包括第三季,而那些新產品會延續到第四季。

  • So let's say Hopper in the second half, therefore, versus H1 is a growth opportunity. Additionally, we have Blackwell on top of that, with Blackwell starting to ramp in Q4. So hope that helps you on those two pieces.

    因此可以說,下半年的 Hopper 相對於上半年是一個成長機會。此外,在此之上我們還有 Blackwell,而 Blackwell 將於第四季開始放量。希望這能幫助你理解這兩個部分。

  • Your second piece is on our gross margin. We provided gross margin guidance for our Q3 -- non-GAAP gross margin at about 75%. We'll work through all the different transitions that we're going through, but we do believe we can do that 75% in Q3.

    你的第二部分是關於我們的毛利率。我們提供了第三季的毛利率指引:非 GAAP 毛利率約為 75%。我們會處理好正在經歷的各種轉換,但我們確實相信第三季可以做到 75%。

  • We indicated that we're still on track for the full year, also in the mid-70%s, or approximately 75%. So we may see some slight difference, possibly in Q4, again with our transitions and the different cost structures that we have on our new product introductions.

    我們表示全年仍處於正軌,同樣在 75% 上下的中段水準,約為 75%。因此,可能在第四季,隨著我們的產品轉換以及新產品導入帶來的不同成本結構,毛利率會出現一些細微的差異。

  • However, I'm not at the same number that you are. We don't have exact guidance yet, but I do believe you're lower than where we are.

    然而,我得出的數字和你的不一樣。我們還沒有確切的指引,但我相信你的估計比我們的水準要低。

  • Operator

    Operator

  • Ben Reitzes, Melius.

    本‧雷茨,梅利厄斯。

  • Ben Reitzes - Analyst

    Ben Reitzes - Analyst

  • Yeah. Hey, thanks a lot for the question, Jensen and Colette. I wanted to ask about the geographies. There was the 10-Q that came out and the United States was down sequentially while several Asian geographies were up a lot sequentially. Just wondering what the dynamics are there.

    是的。嘿,非常感謝詹森和科萊特回答提問。我想問一下各地區的情況。10-Q 公布後,美國的營收環比下降,而幾個亞洲地區的營收則環比大幅上升。只是想了解其中的動態。

  • And obviously, China did very well. You mentioned it in your remarks. What are the puts and takes? And then I just wanted to clarify from Stacy's question if that means the sequential overall revenue growth rates for the company accelerate in the fourth quarter, given all those favorable revenue dynamics. Thanks.

    顯然,中國表現非常好,您在發言中也提到了這一點。其中有哪些正面和負面因素?然後我只是想就史黛西的問題做個確認:考慮到所有這些有利的營收動態,這是否意味著公司第四季的整體營收環比成長率會加速?謝謝。

  • Colette Kress - Executive Vice President, Chief Financial Officer

    Colette Kress - Executive Vice President, Chief Financial Officer

  • Let me talk a bit about our disclosure in terms of the 10-Q -- a required disclosure -- and the choice of geographies. It's sometimes very challenging to create the right disclosure, as we have to come up with one key basis: who we sell to and/or, specifically, who we invoice.

    讓我稍微談談我們在 10-Q 中的揭露——這是一項必要揭露——以及地區分類的選擇。有時候要做出恰當的揭露非常具有挑戰性,因為我們必須選定一個關鍵基準:我們把產品賣給誰,或更具體地說,我們向誰開立發票。

  • And so what you're seeing in terms of there is who we invoice. That's not necessarily where the product will eventually be and where it may even travel to the end customer. These are just moving to our OEMs, our ODMs, and our system integrators, for the most part, across our product portfolio.

    因此,您所看到的就是我們向誰開立發票。這不一定是產品最終會出現的地方,甚至可能會到達最終客戶手中。在我們的產品組合中,這些大部分只是轉移到我們的 OEM、ODM 和系統整合商。

  • So what you're seeing there is sometimes just a shift in terms of who they are using to complete their full configuration before those things are going into the data center, going into notebooks, and those pieces of it. And that shift happens from time to time.

    因此,您所看到的有時只是他們使用誰來完成完整配置的轉變,然後這些東西才進入資料中心,進入筆記型電腦以及其中的那些部分。這種轉變時常發生。

  • But yes, our China number there -- our invoicing to China, keep in mind that is incorporating both gaming, also data center, also automotive in those numbers that we have.

    但是,是的,我們的中國數據——我們向中國開具的發票,請記住,我們所擁有的數據中既包括遊戲,也包括數據中心和汽車。

  • Going back to your statement regarding gross margin, and also what we're seeing for Hopper and Blackwell in terms of revenue: Hopper will continue to grow in the second half. We'll continue to grow from what we are currently seeing.

    回到你關於毛利率的問題,以及我們對 Hopper 和 Blackwell 營收的看法:Hopper 在下半年將持續成長,我們將在目前所見的基礎上繼續成長。

  • We don't have the exact mix between Q3 and Q4 here. We are not here to guide on Q4 yet. But we do see right now the demand expectations. We do have the visibility that there will be a growth opportunity in Q4. On top of that, we will have our Blackwell architecture.

    我們這裡還沒有第三季和第四季之間的確切組合。我們目前還不會就第四季提供指引。但我們現在確實看到了需求預期,也確實看到第四季將有成長機會的能見度。在此之上,我們還將有 Blackwell 架構。

  • Operator

    Operator

  • C.J. Muse, Cantor Fitzgerald.

    C.J. 繆斯,Cantor Fitzgerald。

  • C.J. Muse - Analyst

    C.J. Muse - Analyst

  • Yeah, good afternoon. Thank you for taking the question. You've embarked on a remarkable annual product cadence, with challenges only likely becoming more and more numerous, given rising complexity and a reticle-limited advanced packaging world. So curious, if you take a step back, how does this backdrop alter your thinking around potentially greater vertical integration, supply chain partnerships, and then thinking through the consequential impact to your margin profile? Thank you.

    是的,下午好。感謝您回答提問。你們已經開始了令人矚目的年度產品節奏,而鑑於複雜性不斷上升,以及先進封裝受光罩尺寸限制的現實,這些挑戰只會越來越多。很好奇,如果您退一步看,這種背景如何改變您對潛在更大程度垂直整合、供應鏈合作夥伴關係的思考,以及隨之而來對利潤率結構的影響?謝謝。

  • Jensen Huang - President, Chief Executive Officer, Director

    Jensen Huang - President, Chief Executive Officer, Director

  • Yeah, thanks. Let's see. I think the answer to your first question is that the reason why our velocity is so high is simultaneously because the complexity of the model is growing and we want to continue to drive its cost down. It's growing, so we want to increase its scale.

    是的,謝謝。讓我們來看看。我認為你的第一個問題的答案是,我們的速度如此之高的原因同時是因為模型的複雜性正在增長,而我們希望繼續降低其成本。它正在成長,所以我們想擴大它的規模。

  • And we believe that by continuing to scale the AI models, that will reach a level of extraordinary usefulness and that it would open up -- realize the next industrial revolution. We believe it. And so we're going to drive ourselves really hard to continue to go up that scale.

    我們相信,透過繼續擴展人工智慧模型,它將達到非凡的實用性水平,並且它將開啟——實現下一次工業革命。我們相信。因此,我們將非常努力地推動自己繼續擴大規模。

  • We have the ability, fairly uniquely, to integrate -- to design an AI factory because we have all the parts. It's not possible to come up with a new AI factory every year unless you have all the parts. And so we have -- next year, we're going to ship a lot more CPUs than we've ever had in the history of our company, more GPUs, of course, but also NVLink switches, CX DPUs, ConnectX DPUs for East and West, Bluefield DPUs for North and South, and data and storage processing, to InfiniBand for supercomputing centers, to Ethernet, which is a brand new product for us, which is well on its way to becoming a multi-billion dollar business, to bring AI to Ethernet.

    我們有相當獨特的能力來整合——設計一個人工智慧工廠,因為我們擁有所有的零件。除非你擁有所有零件,否則不可能每年建立一個新的人工智慧工廠。因此,明年我們將交付比公司歷史上更多的 CPU,當然還有更多的 GPU,還有 NVLink 交換器、CX DPU、ConnectX DPU。到用於超級運算中心的InfiniBand,到以太網,這對我們來說是一個全新產品,正在成為一項價值數十億美元的業務,將人工智慧帶入以太網。

  • And so the fact that we could build -- we have access to all of this -- we have one architectural stack, as you know -- it allows us to introduce new capabilities to the market as we complete it. Otherwise, what happens, you ship these parts, you go find customers to sell it to, and then you've got to build -- somebody's got to build up an AI factory. And the AI factory's got a mountain of software.

    因此,我們可以構建——我們可以訪問所有這些——我們擁有一個架構堆棧,如您所知——這一事實使我們能夠在完成後向市場推出新功能。否則,會發生什麼,你運送這些零件,你去找客戶出售它,然後你必須建造——有人必須建造一個人工智慧工廠。人工智慧工廠擁有堆積如山的軟體。

  • And so it's not about who integrates it. We love the fact that our supply chain is disaggregated, in the sense that we can service Quanta, Foxconn, HP, Dell, Lenovo, Supermicro. We used to be able to service ZT; they were recently purchased, and so on and so forth.

    因此,這不是誰來整合的問題。我們喜歡我們的供應鏈是分散式的這一事實,也就是說我們可以服務廣達、富士康、惠普、戴爾、聯想、超微。我們以前還能服務 ZT,不過他們最近被收購了,等等。

  • And so the number of ecosystem partners that we have -- GIGABYTE, ASUS -- allows them to take our architecture, which all works, but integrate it in a bespoke way into all of the world's cloud service providers and enterprise data centers. The scale and reach necessary from our ODM and integrator supply chain is vast and gigantic, because the world is huge.

    因此,我們擁有的生態系統合作夥伴——技嘉、華碩等——能夠採用我們已經完整可用的架構,並以客製化的方式將其整合到世界各地的雲端服務供應商和企業資料中心。我們的 ODM 與整合商供應鏈所需的規模和觸及範圍極其龐大,因為這個世界很大。

  • And so that part, we don't want to do and we're not good at doing. But we know how to design the AI infrastructure, provide it the way that customers would like it, and let the ecosystem integrate it. Well, yeah. So anyways, that's the reason why.

    所以這部分,我們不想做,也不擅長做。但我們知道如何設計人工智慧基礎設施,以客戶喜歡的方式提供它,並讓生態系統整合它。嗯,是的。所以無論如何,這就是原因。

  • Operator

    Operator

  • Aaron Rakers, Wells Fargo.

    亞倫·雷克斯,富國銀行。

  • Aaron Rakers - Analyst

    Aaron Rakers - Analyst

  • Yeah, thanks for taking the question. I wanted to go back into the Blackwell product cycle. One of the questions that we tend to get asked is how you see the rack-scale system mix dynamic as you think about leveraging NVLink -- you think about GB NVL72 -- and how that go-to-market dynamic looks as far as the Blackwell product cycle. I guess, to put it simply, how do you see the mix of rack-scale systems as we start to think about the Blackwell cycle playing out?

    是的,感謝您回答提問。我想回到 Blackwell 產品週期。我們經常被問到的問題之一是:當您考慮運用 NVLink、考慮 GB NVL72 時,您如何看待機架級系統的組合動態,以及在 Blackwell 產品週期中,這種上市動態會是什麼樣子。簡單來說,當我們開始思考 Blackwell 週期的展開時,您如何看待機架級系統的組合?

  • Jensen Huang - President, Chief Executive Officer, Director

    Jensen Huang - President, Chief Executive Officer, Director

  • Yeah, Aaron. Thanks. The Blackwell rack system, it's designed and architected as a rack, but it's sold in disaggregated system components. We don't sell the whole rack. The reason for that is because everybody's rack is a little different, surprisingly.

    是的,亞倫。謝謝。 Blackwell 機架系統的設計和架構為機架,但以分解的系統組件形式出售。我們不出售整個機架。原因是每個人的機架都有點不同,令人驚訝。

  • Some of them are OCP standards, some of them are not. Some of them are enterprise. The power limits for everybody could be a little different. Choice of CDUs, the choice of power bus bars, the configuration and integration into people's data centers, all different.

    其中一些是 OCP 標準,有些則不是。其中一些是企業。每個人的功率限制可能略有不同。 CDU 的選擇、電源母線的選擇、資料中心的配置和集成,全都不同。

  • So the way we designed it, we architected the whole rack. The software is going to work perfectly across the whole rack. And then we provide the system components. Like for example, the CPU and GPU compute board is then integrated into an MGX. It's a modular system architecture. MGX is completely ingenious.

    因此,我們的設計方式是建造整個機架。該軟體將在整個機架上完美運作。然後我們提供系統組件。例如,CPU 和 GPU 運算板隨後整合到 MGX 中。它是一個模組化的系統架構。 MGX 非常巧妙。

  • And we have MGX ODMs and integrators and OEMs all over the planet. And so just about any configuration you would like, where you would like that 3,000-pound rack to be delivered, it's got to be close to -- it has to be integrated and assembled close to the data center because it's fairly heavy. And so everything from the supply chain from the moment that we ship the GPUs, CPUs, the switches, the NICs, from that point forward, the integration is done quite close to the location of the CSPs and the locations of the data centers.

    我們在全球擁有 MGX ODM、整合商和 OEM。因此,對於您想要的任何配置,您希望交付 3,000 磅重的機架,它都必須靠近資料中心進行整合和組裝,因為它相當重。因此,從我們運送 GPU、CPU、交換器、NIC 的那一刻起,供應鏈中的所有內容,從那時起,整合都是在距離 CSP 位置和資料中心位置非常近的地方完成的。

  • And so you can imagine how many data centers in the world there are and how many logistics hubs we've scaled out to with our ODM partners. And so I think that because we show it as one rack and because it's always rendered that way and shown that way, we might've left the impression that we're doing the integration.

    因此,您可以想像世界上有多少個資料中心以及我們與 ODM 合作夥伴一起擴展的物流中心。所以我認為,因為我們將其顯示為機架,並且因為它總是以這種方式呈現和顯示,所以我們可能會給人留下這樣的印象:我們正在進行整合。

  • Our customers hate that we do integration. The supply chain hates us doing integration. They want to do the integration. That's their value added.

    我們的客戶討厭我們進行整合。供應鏈討厭我們進行整合。他們想要進行整合。這就是他們的附加價值。

  • There's a final design-in, if you will. It's not quite as simple as shimmying into a data center; that design fit is really complicated.

    可以說,最後還有一個設計導入的環節。這並不像直接塞進資料中心那麼簡單;這種設計配合確實很複雜。

  • And so the design fit-in, the installation, the bring-up, the repair and replace, that entire cycle is done all over the world. And we have a sprawling network of ODM and OEM partners that does this incredibly well.

    因此,設計導入、安裝、啟用、維修和更換,整個週期在世界各地完成。我們擁有龐大的 ODM 和 OEM 合作夥伴網絡,在這方面做得非常好。

  • So integration is not the reason why we're doing racks. That's the anti-reason of doing it. We don't want to be an integrator. We want to be a technology provider.

    所以整合並不是我們做機架的原因。這就是這樣做的反理由。我們不想成為整合商。我們希望成為技術提供者。

  • Operator

    Operator

  • I will now turn the call back over to Jensen Huang for closing remarks.

    現在我將把電話轉回黃仁勳做總結發言。

  • Jensen Huang - President, Chief Executive Officer, Director

    Jensen Huang - President, Chief Executive Officer, Director

  • Thank you. Let me make a couple of comments that I made earlier again. Data centers worldwide are in full steam to modernize the entire computing stack with accelerated computing and generative AI. Hopper demand remains strong and the anticipation for Blackwell is incredible.

    謝謝。讓我再次重申我之前發表的一些評論。世界各地的資料中心正在全力以赴,透過加速運算和生成式 AI 實現整個運算堆疊的現代化。Hopper 的需求依然強勁,對 Blackwell 的期待令人難以置信。

  • Let me highlight the top five things of our company. Accelerated computing has reached a tipping point. CPU scaling slows. Developers must accelerate everything possible.

    讓我重點介紹一下我們公司最重要的五件事。加速計算已達到臨界點。 CPU 縮放速度變慢。開發人員必須盡一切可能加快速度。

  • Accelerated computing starts with CUDA-X libraries. New libraries open new markets for NVIDIA. We released many new libraries, including CUDA-X-accelerated Polars, pandas, and Spark, the leading data science and data processing libraries; cuVS for vector databases, this is incredibly hot right now; Aerial and Sionna for 5G wireless base stations, a whole suite of -- a whole world of data centers that we can go into now; Parabricks for gene sequencing. And AlphaFold2 for protein structure prediction is now CUDA accelerated.

    加速運算始於 CUDA-X 函式庫。新函式庫為 NVIDIA 開闢了新市場。我們發布了許多新函式庫,包括領先的資料科學和資料處理庫、經 CUDA-X 加速的 Polars、pandas 和 Spark;用於向量資料庫的 cuVS,現在非常熱門;用於 5G 無線基地台的 Aerial 和 Sionna,一整套——我們現在可以進入的整個資料中心世界;用於基因定序的 Parabricks。用於蛋白質結構預測的 AlphaFold2 現在也已透過 CUDA 加速。

  • We are at the beginning of our journey to modernize a trillion dollars' worth of data centers from general purpose computing to accelerated computing. That's number one.

    我們正處於將價值萬億美元的資料中心從通用運算升級為加速運算的現代化之旅的開始。這是第一名。

  • Number two, Blackwell is a step-function leap over Hopper. Blackwell is an AI infrastructure platform, not just a GPU. It also happens to be the name of our GPU, but it's an AI infrastructure platform. As we reveal more of Blackwell and sample systems to our partners and customers, the extent of Blackwell's leap becomes clear.

    第二,Blackwell 是相較 Hopper 的階躍式飛躍。Blackwell 是一個 AI 基礎設施平台,而不僅僅是 GPU。它也恰好是我們 GPU 的名稱,但它是一個 AI 基礎設施平台。隨著我們向合作夥伴和客戶展示更多 Blackwell 及樣品系統,Blackwell 飛躍的程度變得清晰起來。

  • The Blackwell vision took nearly five years and seven one-of-a-kind chips to realize: the Grace CPU; the Blackwell dual GPU in a CoWoS package; the ConnectX DPU for east-west traffic; the BlueField DPU for north-south and storage traffic; the NVLink Switch for all-to-all GPU communications; and Quantum and Spectrum-X for both InfiniBand and Ethernet, which can support the massive burst traffic of AI.

    Blackwell 的願景花了近五年的時間和七個獨一無二的晶片來實現:Grace CPU;CoWoS 封裝中的 Blackwell 雙 GPU;用於東西向流量的 ConnectX DPU;用於南北向和儲存流量的 BlueField DPU;用於全對全 GPU 通訊的 NVLink Switch;以及用於 InfiniBand 和乙太網路的 Quantum 和 Spectrum-X,可以支援 AI 的大量突發流量。

  • Blackwell AI factories are building-sized computers. NVIDIA designed and optimized the Blackwell platform full stack, end to end: from chips, systems, networking, even structured cables, power and cooling, and mountains of software to make it fast for customers to build AI factories. These are very capital-intensive infrastructures; customers want to deploy them as soon as they get their hands on the equipment and deliver the best performance and TCO. Blackwell provides three to five times more AI throughput in a power-limited data center than Hopper.

    布萊克韋爾人工智慧工廠是建築大小的計算機。 NVIDIA設計並優化了Blackwell平台全端;端對端;從晶片、系統、網絡,甚至結構化電纜、電源和冷卻,以及大量軟體,讓客戶能夠快速建造人工智慧工廠。這些都是資本密集型基礎設施,客戶希望在拿到設備後立即部署,並提供最佳性能和 TCO。 Blackwell 在功率有限的資料中心中提供的 AI 吞吐量是 Hopper 的三到五倍。

  • The third is NVLink. This is a very big deal. Its all-to-all GPU switch is game-changing. The Blackwell system lets us connect 144 GPUs in 72 GB200 packages into one NVLink domain with an aggregate NVLink bandwidth of 259 terabytes per second in one rack.

    第三個是NVLink。這是一件大事。其全面的 GPU 切換改變了遊戲規則。 Blackwell 系統讓我們能夠將 72 GB200 封裝中的 144 個 GPU 連接到一個 NVLink 域中,一個機架中的 NVLink 聚合頻寬為每秒 259 TB。

  • Just to put that in perspective, that's about 10 times higher than Hopper, 259 terabytes per second. It kind of makes sense because you need to boost the training of multi-trillion-parameter models on trillions of tokens. And so that much data needs to be moved around from GPU to GPU.

    客觀來看,這大約是 Hopper 的 10 倍,即每秒 259 TB。這是有道理的,因為您需要在數兆個 token 上加強訓練數兆參數的模型。因此,如此大量的資料需要在 GPU 之間移動。
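
    One way to reconcile the figures above in a back-of-the-envelope check, under an assumption not stated on the call (roughly 1.8 TB/s of NVLink bandwidth per GPU die):

    ```python
    # Back-of-the-envelope check of the NVLink figures quoted above.
    # Assumption (not stated on the call): fifth-generation NVLink provides
    # roughly 1.8 TB/s of bandwidth per GPU die.
    GPU_DIES_PER_RACK = 144       # 72 GB200 packages x 2 Blackwell dies each
    NVLINK_TBPS_PER_DIE = 1.8     # assumed per-die NVLink bandwidth, TB/s

    aggregate_tbps = GPU_DIES_PER_RACK * NVLINK_TBPS_PER_DIE
    print(f"Aggregate NVLink bandwidth: {aggregate_tbps:.1f} TB/s per rack")
    ```

    This yields 259.2 TB/s, matching the roughly 259 TB/s per rack quoted in the remarks.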

  • For inference, NVLink is vital for low-latency, high-throughput, large language model token generation. We now have three networking platforms: NVLink for GPU scale-up, Quantum InfiniBand for supercomputing and dedicated AI factories, and Spectrum-X for AI on Ethernet. NVIDIA's networking footprint is much bigger than before.

    對於推理而言,NVLink 對於低延遲、高吞吐量的大型語言模型 token 生成至關重要。我們現在擁有三個網路平台:用於 GPU 擴展的 NVLink、用於超級運算和專用 AI 工廠的 Quantum InfiniBand,以及用於乙太網路上 AI 的 Spectrum-X。NVIDIA 的網路足跡比以前大得多。

  • Generative AI momentum is accelerating. Generative AI frontier model makers are racing to scale to the next AI plateau to increase model safety and IQ. We're also scaling to understand more modalities: from text, images, and video to 3D, physics, chemistry, and biology.

    生成式人工智慧的發展動能正在加速。生成式 AI 前沿模型製造商正在競相擴展到下一個 AI 高原,以提高模型的安全性和智商。我們還在擴展以理解更多模態:從文字、圖像和影片,到 3D、物理、化學和生物學。

  • Chatbots, coding AIs, and image generators are growing fast; but it's just the tip of the iceberg. Internet services are deploying generative AI for large-scale recommenders, ad targeting, and search systems. AI startups are consuming tens of billions of dollars yearly of CSPs' cloud capacity, and countries are recognizing the importance of AI and investing in sovereign AI infrastructure.

    聊天機器人、人工智慧編碼和圖像生成器正在快速發展;但這只是冰山一角。網路服務正在為大規模推薦、廣告定位和搜尋系統部署生成式人工智慧。人工智慧新創公司每年消耗數百億美元的 CSP 雲端容量,各國正在認識到人工智慧的重要性並投資主權人工智慧基礎設施。

  • And NVIDIA Omniverse is opening up the next era of AI, general robotics. And now, the enterprise AI wave has started and we're poised to help companies transform their businesses. The NVIDIA AI Enterprise platform consists of NeMo, NIM, NIM Agent Blueprints, and AI Foundry that our ecosystem partners, the world-leading IT companies, use to help companies customize AI models and build bespoke AI applications.

    NVIDIA Omniverse 正在開啟 AI 的下一個時代:通用機器人技術。現在,企業人工智慧浪潮已經開始,我們準備好幫助企業實現業務轉型。NVIDIA AI Enterprise 平台由 NeMo、NIM、NIM Agent Blueprints 和 AI Foundry 組成,我們的生態系統合作夥伴(世界領先的 IT 公司)使用這些平台來幫助企業自訂 AI 模型並建立客製化的 AI 應用程式。

  • Enterprises can then deploy on the NVIDIA AI Enterprise runtime. And at $4,500 per GPU per year, NVIDIA AI Enterprise is an exceptional value for deploying AI anywhere. And NVIDIA's software TAM can be significant as the CUDA-compatible GPU installed base grows from millions to tens of millions. And as Colette mentioned, NVIDIA software will exit the year at a $2 billion run rate. Thank you all for joining us today.

    然後,企業可以在 NVIDIA AI Enterprise 運行環境上進行部署。NVIDIA AI Enterprise 的價格為每 GPU 每年 4,500 美元,對於在任何地方部署 AI 來說具有非凡的價值。隨著相容 CUDA 的 GPU 安裝基數從數百萬成長到數千萬,NVIDIA 的軟體 TAM 可能相當可觀。正如 Colette 所提到的,NVIDIA 軟體今年結束時的年化營收(run rate)將達到 20 億美元。感謝大家今天加入我們。
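
    To illustrate why the software TAM scales with the installed base, a rough sketch at the quoted $4,500 per GPU per year (the installed-base sizes below are hypothetical, not figures from the call):

    ```python
    # Illustrative only: annual NVIDIA AI Enterprise revenue at the quoted
    # $4,500 per GPU per year, across hypothetical installed-base sizes.
    PRICE_PER_GPU_PER_YEAR = 4_500  # USD, quoted on the call

    for gpus in (1_000_000, 10_000_000, 50_000_000):  # hypothetical counts
        annual_usd = gpus * PRICE_PER_GPU_PER_YEAR
        print(f"{gpus:>12,} GPUs -> ${annual_usd / 1e9:,.1f}B per year")
    ```

    At 1 million licensed GPUs this is $4.5B a year; at 10 million, $45B, which is why a growing CUDA-compatible installed base makes the software opportunity significant.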

  • Operator

    Operator

  • And ladies and gentlemen, this concludes today's call, and we thank you for your participation. You may now disconnect.

    女士們、先生們,今天的電話會議到此結束,我們感謝你們的參與。您現在可以斷開連線。