NVIDIA reported second-quarter revenue of $13.51 billion, driven by strong demand for its Data Center platform. The company expects supply to increase each quarter through next year.
NVIDIA highlighted the potential impact of increased regulations on exports to China. The company's Gaming revenue grew 11% sequentially, while Automotive revenue declined 15%. NVIDIA announced partnerships with several companies.
The company's third-quarter outlook includes expected total revenue of $16 billion. Speakers discussed the emerging application of large-model inference and the sustainability of generative AI demand.
The global data center industry is transitioning to accelerated computing and generative AI. NVIDIA's software ecosystem and architecture make it the platform of choice for developers. The company is planning next-generation infrastructure together with leading customers.
NVIDIA prioritizes customer choice when allocating GPUs and provides networking solutions. The company's DGX Cloud strategy aims to build partnerships with CSPs and improve high-performance computing.
NVIDIA expects continued growth in its software business. The company is expanding capacity and partnering with enterprise IT companies. NVIDIA is very excited about driving a generational shift in computing.
Operator
Good afternoon. My name is David, and I'll be your conference operator today. At this time, I'd like to welcome everyone to NVIDIA's second quarter earnings call. Today's conference is being recorded. (Operator Instructions)
Thank you. Simona Jankowski, you may begin your conference.
Simona Jankowski - VP of IR
Thank you. Good afternoon, everyone, and welcome to NVIDIA's Conference Call for the Second Quarter of Fiscal 2024. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer.
I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the third quarter of fiscal 2024. The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without our prior written consent.
During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission.
All our statements are made as of today, August 23, 2023, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website.
And with that, let me turn the call over to Colette.
Colette M. Kress - Executive VP & CFO
Thanks, Simona. We had an exceptional quarter. Record Q2 revenue of $13.51 billion was up 88% sequentially and up 101% year-on-year and above our outlook of $11 billion. Let me first start with Data Center. Record revenue of $10.32 billion was up 141% sequentially and up 171% year-on-year.
Data Center Compute revenue nearly tripled year-on-year, driven primarily by accelerating demand from cloud service providers and large consumer Internet companies for our HGX platform, the engine of generative AI and large language models. Major companies, including AWS, Google Cloud, Meta, Microsoft Azure and Oracle Cloud, as well as a growing number of GPU cloud providers, are deploying, in volume, HGX systems based on our Hopper and Ampere architecture Tensor Core GPUs.
Networking revenue almost doubled year-on-year, driven by our end-to-end InfiniBand networking platform, the gold standard for AI. There is tremendous demand for NVIDIA Accelerated Computing and AI platforms. Our supply partners have been exceptional in ramping capacity to support our needs.
Our data center supply chain, including HGX, with 35,000 parts and highly complex networking has been built up over the past decade. We have also developed and qualified additional capacity and suppliers for key steps in the manufacturing process such as CoWoS packaging. We expect supply to increase each quarter through next year.
By geography, Data Center growth was strongest in the U.S. as customers direct their capital investments to AI and Accelerated Computing. China demand was within the historical range of 20% to 25% of our Data Center revenue, including compute and networking solutions. At this time, let me take a moment to address recent reports on the potential for increased regulations on our exports to China.
We believe the current regulation is achieving the intended results. Given the strength of demand for our products worldwide, we do not anticipate that additional export restrictions on our Data Center GPUs, if adopted, would have an immediate material impact to our financial results. However, over the long term, restrictions prohibiting the sale of our Data Center GPUs to China, if implemented, will result in a permanent loss of an opportunity for the U.S. industry to compete and lead in one of the world's largest markets.
Our cloud service providers drove exceptionally strong demand for HGX systems in the quarter as they undertake a generational transition to upgrade their data center infrastructure for the new era of Accelerated Computing and AI. The NVIDIA HGX platform is the culmination of nearly 2 decades of full-stack innovation across silicon, systems, interconnects, networking, software and algorithms. Instances powered by the NVIDIA H100 Tensor Core GPUs are now generally available at AWS, Microsoft Azure and several GPU cloud providers, with others on the way shortly.
Consumer Internet companies also drove the very strong demand. Their investments in data center infrastructure purpose-built for AI are already generating significant returns. For example, Meta recently highlighted that since launching Reels, AI recommendations have driven a more than 24% increase in time spent on Instagram.
Enterprises are also racing to deploy generative AI, driving strong consumption of NVIDIA-powered instances in the cloud as well as demand for on-premise infrastructure. Whether we serve customers in the cloud or on-prem through partners or direct, their applications can run seamlessly on NVIDIA AI Enterprise software, with access to our acceleration libraries, pretrained models and APIs.
We announced a partnership with Snowflake to provide enterprises with accelerated clouds to create customized generative AI applications using their own proprietary data, all securely within the Snowflake Data Cloud. With the NVIDIA NeMo platform for developing large language models, enterprises will be able to make custom LLMs for advanced AI services, including chatbots, search and summarization, right from the Snowflake Data Cloud.
Virtually every industry can benefit from generative AI. For example, AI copilots, such as those just announced by Microsoft, can boost the productivity of over 1 billion office workers and tens of millions of software engineers. Billions of professionals in legal services, sales, customer support and education will be able to leverage AI systems trained in their field. AI copilots and assistants are set to create new multi-hundred billion dollar market opportunities for our customers.
We are seeing some of the earliest applications of generative AI in marketing, media and entertainment. WPP, the world's largest marketing and communication services organization, is developing a content engine using NVIDIA Omniverse to enable artists and designers to integrate generative AI into 3D content creation.
WPP designers can create images from text prompts using responsibly trained generative AI tools and content from NVIDIA partners such as Adobe and Getty Images, powered by NVIDIA Picasso, a foundry for custom generative AI models for visual design. Visual content provider Shutterstock is also using NVIDIA Picasso to build tools and services that enable users to create 3D scene backgrounds with the help of generative AI.
We've partnered with ServiceNow and Accenture to launch the AI Lighthouse program, fast-tracking the development of enterprise AI capabilities. AI Lighthouse unites the ServiceNow enterprise automation platform and engine with NVIDIA accelerated computing and Accenture's consulting and deployment services.
We are collaborating also with Hugging Face to simplify the creation of new and custom AI models for enterprises. Hugging Face will offer a new service for enterprises to train and tune advanced AI models, powered by NVIDIA DGX cloud.
And just yesterday, VMware and NVIDIA announced a major new enterprise offering called VMware Private AI Foundation with NVIDIA, a fully integrated platform featuring AI software and accelerated computing from NVIDIA with multi-cloud software for enterprises running VMware. VMware's hundreds of thousands of enterprise customers will have access to the infrastructure, AI and cloud management software needed to customize models and run generative AI applications such as intelligent chatbots, assistants, search and summarization.
We also announced new NVIDIA AI Enterprise-ready servers featuring the new NVIDIA L40S GPU, built for the industry-standard data center server ecosystem, and the BlueField-3 DPU data center infrastructure processor. L40S is not limited by CoWoS supply and is shipping to the world's leading server system makers. L40S is a universal data center processor designed for high-volume data centers, standing out in accelerating the most compute-intensive applications, including AI training and inferencing, 3D design, visualization, video processing and NVIDIA Omniverse industrial digitalization.
NVIDIA AI Enterprise-ready servers are fully optimized for VMware Cloud Foundation and Private AI Foundation. Nearly 100 configurations of NVIDIA AI Enterprise-ready servers will soon be available from the world's leading enterprise IT computing companies, including Dell, HP and Lenovo.
The GH200 Grace Hopper Superchip, which combines our ARM-based Grace CPU with the Hopper GPU, entered full production and will be available this quarter in OEM servers. It is also shipping to multiple supercomputing customers, including Atos, national labs and the Swiss National Supercomputing Centre.
And NVIDIA and SoftBank are collaborating on a platform based on GH200 for generative AI and 5G/6G applications. The second generation version of our Grace Hopper Superchip with the latest HBM3e memory will be available in Q2 of calendar 2024.
We announced the DGX GH200, a new class of large-memory AI supercomputer for giant AI language models, recommender systems and data analytics. This is the first use of the new NVIDIA NVLink Switch System, enabling all of its 256 Grace Hopper Superchips to work together as one, a huge jump compared to our prior generation connecting just 8 GPUs over NVLink. DGX GH200 systems are expected to be available by the end of the year, with Google Cloud, Meta and Microsoft among the first to gain access.
Strong networking growth was driven primarily by InfiniBand infrastructure to connect HGX GPU systems. Thanks to its end-to-end optimization and in-network computing capabilities, InfiniBand delivers more than double the performance of traditional Ethernet for AI. For billion-dollar AI infrastructures, the value from the increased throughput of InfiniBand is worth hundreds of (inaudible) for the network. In addition, only InfiniBand can scale to hundreds of thousands of GPUs. It is the network of choice for leading AI practitioners.
For Ethernet-based cloud data centers that seek to optimize their AI performance, we announced NVIDIA Spectrum-X, an accelerated networking platform designed to optimize Ethernet for AI workloads. Spectrum-X couples the Spectrum or Ethernet switch with the BlueField-3 DPU, achieving 1.5x better overall AI performance and power efficiency versus traditional Ethernet. BlueField-3 DPU is a major success. It is in qualification with major OEMs and ramping across multiple CSPs and consumer companies.
Now moving to Gaming. Gaming revenue of $2.49 billion was up 11% sequentially and 22% year-on-year. Growth was fueled by GeForce RTX 40 Series GPUs for laptops and desktops. End customer demand was solid and consistent with seasonality. We believe global end demand has returned to growth after last year's slowdown. We have a large upgrade opportunity ahead of us: just 47% of our installed base have upgraded to RTX, and about 20% have a GPU with an RTX 3060 or higher performance.
Laptop GPUs posted strong growth in the key back-to-school season, led by RTX 4060 GPUs. NVIDIA's GPU-powered laptops have gained in popularity, and their shipments are now outpacing desktop GPUs in several regions around the world. This is likely to shift the seasonality of our overall gaming revenue a bit, with Q2 and Q3 as the stronger quarters of the year, reflecting the back-to-school and holiday build schedules for laptops.
In desktop, we launched the GeForce RTX 4060 and the GeForce RTX 4060 Ti GPUs, bringing the Ada Lovelace architecture down to price points as low as $299. The ecosystem of RTX and DLSS games continues to expand, with 35 new games adding DLSS support, including blockbusters such as Diablo IV and Baldur's Gate 3. There are now over 330 RTX-accelerated games and apps.
We are bringing generative AI to Gaming. At Computex, we announced NVIDIA Avatar Cloud Engine, or ACE, for games, a custom AI model and foundry service. Developers can use this service to bring intelligence to nonplayer characters. And it harnesses a number of NVIDIA Omniverse and AI technologies, including NeMo, Riva and Audio2Face.
Now moving to Professional Visualization. Revenue of $375 million was up 28% sequentially and down 24% year-on-year. The Ada architecture ramp drove strong growth in Q2, rolling out initially in laptop workstations with a refresh of desktop workstations coming in Q3. These will include the powerful new RTX systems with up to 4 NVIDIA RTX 6000 GPUs, providing more than 5,800 teraflops of AI performance and 192 gigabytes of GPU memory.
They can be configured with NVIDIA AI Enterprise or NVIDIA Omniverse Enterprise. We also announced 3 new desktop workstation GPUs based on the Ada generation, the NVIDIA RTX 5000, 4500 and 4000, offering up to 2x the RT core throughput and up to 2x faster AI training performance compared to the previous generation.
In addition to traditional workloads such as 3D design and content creation, new workloads in generative AI, large language model development and data science are expanding the opportunity in pro visualization for our RTX technology. One of the key themes in Jensen's keynote at SIGGRAPH earlier this month was the convergence of graphics and AI.
This is where NVIDIA Omniverse is positioned. Omniverse is OpenUSD's native platform. OpenUSD is a universal interchange that is quickly becoming the standard for the 3D world, much like HTML is the universal language for the 2D internet. Together, Adobe, Apple, Autodesk, Pixar and NVIDIA formed the Alliance for OpenUSD. Our mission is to accelerate OpenUSD's development and adoption. We announced new and upcoming Omniverse Cloud APIs, including RunUSD and ChatUSD, to bring generative AI to OpenUSD workloads.
Moving to Automotive. Revenue was $253 million, down 15% sequentially and up 15% year-on-year. Solid year-on-year growth was driven by the ramp of self-driving platforms based on NVIDIA DRIVE Orin with a number of new energy vehicle makers. The sequential decline reflects lower overall automotive demand, particularly in China.
We announced a partnership with MediaTek to bring drivers and passengers new experiences inside the car. MediaTek will develop automotive SoCs and integrate a new product line of NVIDIA's GPU chiplets. The partnership covers a wide range of vehicle segments from luxury to entry level.
Moving to the rest of the P&L. GAAP gross margins expanded to 70.1% and non-GAAP gross margin to 71.2%, driven by higher Data Center sales. Our Data Center products include a significant amount of software and complexity, which is also helping drive our gross margin.
Sequential GAAP operating expenses were up 6% and non-GAAP operating expenses were up 5%, primarily reflecting increased compensation and benefits. We returned approximately $3.4 billion to shareholders in the form of share repurchases and cash dividends. Our Board of Directors has just approved an additional $25 billion in stock repurchases to add to our remaining $4 billion of authorization as of the end of Q2.
Let me turn to the outlook for the third quarter of fiscal 2024. Demand for our Data Center platform for AI is tremendous and broad-based across industries and customers. Our demand visibility extends into next year. Our supply over the next several quarters will continue to ramp as we lower cycle times and work with our supply partners to add capacity.
Additionally, the new L40S GPU will help address the growing demand for many types of workloads from cloud to enterprise. For Q3, total revenue is expected to be $16 billion, plus or minus 2%. We expect sequential growth to be driven largely by Data Center, with Gaming and Pro Vis also contributing.
GAAP and non-GAAP gross margins are expected to be 71.5% and 72.5%, respectively, plus or minus 50 basis points. GAAP and non-GAAP operating expenses are expected to be approximately $2.95 billion and $2 billion, respectively. GAAP and non-GAAP other income and expenses are expected to be an income of approximately $100 million, excluding gains and losses from non-affiliated investments. GAAP and non-GAAP tax rates are expected to be 14.5%, plus or minus 1%, excluding any discrete items. Further financial details are included in the CFO commentary and other information available on our IR website.
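For reference, the guided ranges above can be worked out mechanically. A quick illustrative calculation (the helper names are ours, not NVIDIA's; revenue tolerance is a percentage of the midpoint, while margin tolerance is an absolute number of basis points):

```python
def pct_range(midpoint, pct):
    # Midpoint plus or minus a fractional tolerance, e.g. $16B +/- 2%.
    return midpoint * (1 - pct), midpoint * (1 + pct)

def bp_range(midpoint, basis_points):
    # Midpoint plus or minus an absolute tolerance in basis points
    # (1 bp = 0.01 percentage point), as used for the margin guidance.
    delta = basis_points / 100.0
    return midpoint - delta, midpoint + delta

# Q3 FY2024 total revenue outlook: $16 billion, plus or minus 2%
rev_low, rev_high = pct_range(16.0, 0.02)   # in $ billions

# Non-GAAP gross margin outlook: 72.5%, plus or minus 50 basis points
gm_low, gm_high = bp_range(72.5, 50)        # 72.0% to 73.0%
```

So the guidance implies total revenue of roughly $15.68 billion to $16.32 billion and a non-GAAP gross margin between 72.0% and 73.0%.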
In closing, let me highlight some upcoming events for the financial community. We will attend the Jefferies Tech Summit on August 30 in Chicago, the Goldman Sachs Conference on September 5 in San Francisco, the Evercore Semiconductor Conference on September 6 as well as the Citi Tech Conference on September 7, both in New York, and BofA Virtual AI Conference on September 11. Our earnings call to discuss the results of our third quarter of fiscal 2024 is scheduled for Tuesday, November 21.
Operator, we will now open the call for questions. Could you please poll for questions for us? Thank you.
Operator
(Operator Instructions) We'll take our first question from Matt Ramsay with TD Cowen.
Matthew D. Ramsay - MD & Senior Research Analyst
Obviously, remarkable results. Jensen, I wanted to ask a question of you regarding the really quickly emerging application of large model inference. So I think it's pretty well understood by the majority of investors that you guys have very much a lockdown share of the training market. A lot of the smaller market -- smaller model inference workloads have been done on ASICs or CPUs in the past.
And with many of these GPT and other really large models, there's this new workload that's accelerating super-duper quickly on large model inference. And I think your Grace Hopper Superchip products and others are pretty well aligned for that. But could you maybe talk to us about how you're seeing the inference market segment between small model inference and large model inference and how your product portfolio is positioned for that?
Jen-Hsun Huang - Founder, CEO, President and Director
Yes, thanks a lot. So let's take a quick step back. These large language models are fairly -- are pretty phenomenal. It does several things, of course. It has the ability to understand unstructured language. But at its core, what it has learned is the structure of human language. And it has encoded -- or within it -- compressed within it a large amount of human knowledge that it has learned by the corpuses that it studied.
What happens is you create these large language models and you create as large as you can, and then you derive from it smaller versions of the model, essentially teacher-student models. It's a process called distillation. And so when you see these smaller models, it's very likely the case that they were derived from or distilled from or learned from larger models, just as you have professors and teachers and students and so on and so forth.
And you're going to see this going forward. And so you start from a very large model, and it has a large amount of generality and generalization and what's called zero-shot capability. And so for a lot of applications and questions or skills that you haven't trained it specifically on, these large language models miraculously has the capability to perform them. That's what makes it so magical.
On the other hand, you would like to have these capabilities in all kinds of computing devices, and so what you do is you distill them down. These smaller models might have excellent capabilities on a particular skill, but they don't generalize as well. They don't have what are called good zero-shot capabilities. And so they all have their own unique capabilities, but you start from very large models.
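The teacher-student process Jensen describes here is standard knowledge distillation, whose core objective can be sketched in a few lines. A minimal illustration (the logits and temperature are hypothetical; production pipelines use a framework such as PyTorch and typically blend this loss with a hard-label cross-entropy term):

```python
import math

def softmax(logits, temperature=1.0):
    # Higher temperature softens the distribution, exposing the teacher's
    # learned similarities between outputs ("dark knowledge").
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the teacher's and student's softened outputs:
    # the student is trained to reproduce the teacher's full distribution,
    # not just its top prediction.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

A student whose logits match the teacher's incurs zero loss; any disagreement yields a positive loss that training drives down, which is how the smaller model inherits the larger model's behavior on the skills it is distilled for.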
Operator
Next, we'll go to Vivek Arya with BofA Securities.
Vivek Arya - MD in Equity Research & Senior Semiconductor Analyst
Just had a quick clarification and a question. Colette, if you could please clarify how much incremental supply do you expect to come online in the next year? Do you think it's up 20%, 30%, 40%, 50%? So just any sense of how much supply because you said it's growing every quarter.
And then, Jensen, the question for you is when we look at the overall hyperscaler spending, that buy is not really growing that much. So what is giving you the confidence that they can continue to carve out more of that pie for generative AI? Just give us your sense of how sustainable is this demand as we look over the next 1 to 2 years?
So if I take your implied Q3 outlook of Data Center, $12 billion, $13 billion, what does that say about how many servers are already AI accelerated? Where is that going? So just give us some confidence that the growth that you are seeing is sustainable into the next 1 to 2 years.
Colette M. Kress - Executive VP & CFO
So thanks for that question regarding our supply. Yes, we do expect to continue increasing our supply over the next quarters as well as into next fiscal year. In terms of percent, it's not something that we have here. It is a work across so many different suppliers, so many different parts of building an HGX and many of our other new products that are coming to market. But we are very pleased with both the support that we have with our suppliers and the long time that we have spent with them improving their supply.
Jen-Hsun Huang - Founder, CEO, President and Director
The world has something along the lines of about $1 trillion worth of data centers installed in the cloud, in enterprise and otherwise. And that $1 trillion of data centers is in the process of transitioning into accelerated computing and generative AI. We're seeing 2 simultaneous platform shifts at the same time.
One is accelerated computing. And the reason for that is because it's the most cost-effective, most energy-effective and the most performant way of doing computing now. So what you're seeing, and then all of a sudden, enabled by generative AI -- enabled by accelerated compute and generative AI came along.
And this incredible application now gives everyone 2 reasons to transition, to do a platform shift from general purpose computing, the classical way of doing computing, to this new way of doing computing, accelerated computing. It's about $1 trillion worth of data centers, call it, $0.25 trillion of capital spend each year.
這個令人難以置信的應用程序現在為每個人提供了兩個轉型的理由,即從通用計算(傳統的計算方式)到這種新的計算方式(加速計算)的平台轉變。數據中心每年的資本支出約為 1 萬億美元,即 0.25 萬億美元。
You're seeing the data centers around the world are taking that capital spend and focusing it on the 2 most important trends of computing today, accelerated computing and generative AI. And so I think this is not a near-term thing. This is a long-term industry transition, and we're seeing these 2 platform shifts happening at the same time.
您會看到世界各地的數據中心正在將資本支出集中在當今計算的兩個最重要的趨勢上,即加速計算和生成人工智能。所以我認為這不是近期的事情。這是一個長期的行業轉型,我們看到這兩個平台的轉變同時發生。
Operator
Operator
Next, we go to Stacy Rasgon with Bernstein Research.
接下來,我們將採訪伯恩斯坦研究中心的史黛西·拉斯貢 (Stacy Rasgon)。
Stacy Aaron Rasgon - Senior Analyst
Stacy Aaron Rasgon - Senior Analyst
I was wondering, Colette, if you could tell me like how much of Data Center in the quarter, maybe even the guide, is like systems versus GPU, like DGX versus just the H100. What I'm really trying to get at is how much is like pricing or content or however you want to define that versus units actually driving the growth going forward. Can you give us any color around that?
Colette,我想知道您能否告訴我本季度的數據中心有多少,甚至可能是指南,就像系統與 GPU,就像 DGX 與 H100 一樣。我真正想要了解的是定價或內容有多少,或者你想如何定義它,與實際推動未來增長的單位相比。你能給我們一些關於它的顏色嗎?
Colette M. Kress - Executive VP & CFO
Colette M. Kress - Executive VP & CFO
Sure, Stacy. Let me help. Within the quarter, our HGX systems were a very significant part of our Data Center as well as our Data Center growth that we had seen. Those systems include our HGX of our Hopper architecture but also our Ampere architecture. Yes, we are still selling both of these architectures in the market.
當然,史黛西。讓我來幫忙。在本季度內,我們的 HGX 系統是我們數據中心以及我們所看到的數據中心增長的非常重要的一部分。這些系統包括我們的 Hopper 架構的 HGX 以及我們的 Ampere 架構。是的,我們仍在市場上銷售這兩種架構。
Now when you think about that, both of these systems as a unit, of course, are growing quite substantially, and that is driving the revenue increases. So both of these things are the drivers of the revenue inside Data Center.
現在,當您考慮這一點時,這對於兩個系統作為一個整體意味著什麼,當然,這兩個系統都在大幅增長。這推動了收入的增長。因此,這兩件事都是數據中心內部收入的驅動因素。
Our DGXs are always a portion of additional systems that we will sell. Those are great opportunities for enterprise customers and many other different types of customers that we're seeing even in our consumer Internet companies.
我們的 DGX 始終是我們將銷售的附加系統的一部分。對於企業客戶和我們在消費者互聯網公司中看到的許多其他不同類型的客戶來說,這些都是巨大的機會。
The importance there is also coming together with software that we sell with our DGXs, but that's a portion of our sales that we're doing. The rest of the GPUs, we have new GPUs coming to market that we talked about, the L40S, and they will add continued growth going forward. But again, the largest driver of our revenue within this last quarter was definitely the HGX systems.
與我們隨 DGX 一起銷售的軟件相結合也很重要,但這是我們正在進行的銷售的一部分。其餘的 GPU,我們討論過的新 GPU L40S 即將上市,它們將在未來帶來持續增長。但同樣,上個季度我們收入的最大推動力無疑是 HGX 系統。
Jen-Hsun Huang - Founder, CEO, President and Director
Jen-Hsun Huang - Founder, CEO, President and Director
And Stacy, if I could just add something. You say it's H100, and I know the mental image you have in your mind. But the H100 is 35,000 parts, 70 pounds, nearly 1 trillion transistors in combination. It takes a robot to build -- well, many robots to build, because it's 70 pounds to lift. And it takes a supercomputer to test a supercomputer.
史黛西,如果我可以補充一些東西的話。你說它是H100,我知道你知道你心目中的形像是什麼。但 H100 由 35,000 個零件、70 磅、近 1 萬億個晶體管組合而成。這需要一個機器人來建造——嗯,需要很多機器人來建造,因為它要舉起 70 磅。測試超級計算機需要超級計算機。
And so these things are technology marvels and the manufacturing of them is really intensive. And so I think we call it H100 as if it's a chip that comes off of a fab, but H100s go out really as HGXs sent to the world's hyperscalers and they're really, really quite large system components, if you will.
因此,這些東西都是技術奇蹟,而且它們的製造確實非常密集。因此,我認為我們將其稱為H100,就好像它是從晶圓廠生產的芯片一樣,但H100 實際上是作為HGX 發送給世界各地的超大規模企業而推出的,如果你願意的話,它們確實是非常非常大的系統組件。
Operator
Operator
Next, we go to Mark Lipacis with Jefferies.
接下來,我們將與 Jefferies 一起拜訪馬克·利帕西斯 (Mark Lipacis)。
Mark John Lipacis - MD & Senior Equity Research Analyst
Mark John Lipacis - MD & Senior Equity Research Analyst
Congrats on the success. Jensen, it seems like a key part of the success -- your success in the market is delivering the software ecosystem, along with the chip and the hardware platform. And I had a 2-part question on this. I was wondering if you could just help us understand the evolution of your software ecosystem, the critical elements. And is there a way to quantify your lead on this dimension, like how many person years you've invested in building it? And then part 2, I was wondering if you would care to share with us your view on the -- what percentage of the value of the NVIDIA platform is hardware differentiation versus software differentiation?
恭喜你成功了。 Jensen,這似乎是成功的關鍵部分——您在市場上的成功在於提供軟件生態系統以及芯片和硬件平台。我對此有一個由兩部分組成的問題。我想知道您是否可以幫助我們了解您的軟件生態系統的演變以及關鍵要素。有沒有一種方法可以量化您在這個維度上的領先地位,例如您投入了多少人年來構建它?然後是第 2 部分,我想知道您是否願意與我們分享您對以下問題的看法:硬件差異化與軟件差異化在 NVIDIA 平台的價值中所佔的百分比是多少?
Jen-Hsun Huang - Founder, CEO, President and Director
Jen-Hsun Huang - Founder, CEO, President and Director
Yes, Mark, I really appreciate the question. Let me see if I could use some metrics. So we have a run time called NVIDIA AI Enterprise. This is one part of our software stack. And this is, if you will, the run time that just about every company uses for the end-to-end of machine learning, from data processing, the training of any model that you like to do on any framework you'd like to do, the inference and the deployment, the scaling it out into a data center. It could be a scale-out for a hyperscale data center. It could be a scale-out for enterprise data center, for example, on VMware.
是的,馬克,我真的很感激這個問題。讓我看看是否可以使用一些指標。所以我們有一個名為 NVIDIA AI Enterprise 的運行時。這是我們軟件堆棧的一部分。如果你願意的話,這就是幾乎每個公司用於端到端機器學習的運行時間,從數據處理到你想要在任何你想要的框架上進行的任何模型的訓練執行、推理和部署,將其擴展到數據中心。它可能是超大規模數據中心的橫向擴展。它可以是企業數據中心的橫向擴展,例如在 VMware 上。
You can do this on any of our GPUs. We have hundreds of millions of GPUs in the field and millions of GPUs in the cloud, in just about every single cloud. And it runs in a single GPU configuration as well as multi-GPU per compute or multinode. It also has multiple sessions or multiple computing instances per GPU.
您可以在我們的任何 GPU 上執行此操作。我們在現場擁有數億個 GPU,在雲端擁有數百萬個 GPU,幾乎在每個雲中都有。它在單 GPU 配置以及每個計算或多節點的多 GPU 配置中運行。每個 GPU 還具有多個會話或多個計算實例。
So from multiple instances per GPU to multiple GPUs, multiple nodes to entire data center scale. So this run time called NVIDIA AI Enterprise has something like 4,500 software packages, software libraries and has something like 10,000 dependencies among each other. And that run time is, as I mentioned, continuously updated and optimized for our installed base, for our stack.
因此,從每個 GPU 多個實例到多個 GPU、多個節點到整個數據中心規模。因此,這個名為 NVIDIA AI Enterprise 的運行時擁有大約 4,500 個軟件包、軟件庫,並且彼此之間具有大約 10,000 個依賴項。正如我所提到的,運行時間針對我們的安裝基礎和堆棧不斷更新和優化。
And that's just 1 example of what it would take to get accelerated computing to work. The number of code combinations and types of application combinations is really quite insane. And it's taken us 2 decades to get here. But the elements of our company, if you will, are several.
這只是加速計算發揮作用所需的其中一個示例。代碼組合的數量和應用程序組合的類型確實相當瘋狂。我們花了20年才走到這一步。但如果你願意的話,我認為我們公司的要素可能有幾個。
I would say, number 1 is architecture. The flexibility, the versatility and the performance of our architecture makes it possible for us to do all the things that I just said, from data processing to training to inference, for preprocessing of the data before you do the inference, to the post processing of the data, tokenizing of languages so that you could then train with it. The amount of -- the workflow is much more intense than just training or inference.
我想說,第一是建築。我們架構的靈活性、多功能性和性能使我們能夠完成我剛才所說的所有事情,從數據處理到訓練到推理,在推理之前對數據進行預處理,再到數據的後處理。數據,對語言進行標記,以便您可以用它進行訓練。工作流程的數量比訓練或推理要密集得多。
But anyways, that's where we'll focus, and it's fine. But when people actually use these computing systems, it requires a lot of applications. And so the combination of our architecture makes it possible for us to deliver the lowest cost of ownership. And the reason for that is because we accelerate so many different things.
但無論如何,這就是我們關注的焦點,這很好。但當人們實際使用這些計算系統時,需要大量的應用程序。因此,我們的架構組合使我們能夠提供最低的擁有成本。原因是我們加速了許多不同的事情。
The second characteristic of our company is the installed base. You have to ask yourself, why is it that all the software developers come to our platform? And the reason for that is because software developers seek a large installed base so that they can reach the largest number of end users, so that they could build a business or get a return on the investments that they make.
我們公司的第二個特點是安裝基礎。你要問自己,為什麼所有的軟件開發者都來到我們的平台?原因是軟件開發人員尋求龐大的安裝基礎,以便能夠接觸到最大數量的最終用戶,從而可以建立業務或獲得投資回報。
And then the third characteristic is reach. We're in the cloud today, both for public cloud, public-facing cloud because we have so many customers that use it -- so many developers and customers that use our platform. CSPs are delighted to put it up in the cloud. They use it for internal consumption to develop and train and to operate recommender systems or search or data processing engines and whatnot all the way to training and inference.
第三個特徵是覆蓋範圍。今天我們在雲中,無論是公共雲還是面向公眾的雲,因為我們有這麼多的客戶使用它——這麼多的開發人員和客戶使用我們的平台。通信服務提供商很高興將其放置在雲端。他們將其用於內部消費,以開發、培訓和操作推薦系統或搜索或數據處理引擎等等,直至培訓和推理。
And so we're in the cloud, we're in enterprise. Yesterday, we had a very big announcement. It's really worthwhile to take a look at that. VMware is the operating system of the world's enterprise. And we've been working together for several years now, and we're going to bring together -- together, we're going to bring generative AI to the world's enterprises all the way out to the edge.
所以我們在雲中,我們在企業中。昨天,我們宣布了一個非常重大的消息。確實值得一看。 VMware 是全球企業的操作系統。我們已經合作了好幾年了,我們將齊心協力,將生成式人工智能帶給世界各地的企業,直至邊緣。
And so reach is another reason. And because of reach, all of the world's system makers are anxious to put NVIDIA's platform in their systems. And so we have a very broad distribution from all of the world's OEMs and ODMs and so on and so forth because of our reach.
因此,影響力是另一個原因。由於影響範圍廣,全球所有系統製造商都渴望將 NVIDIA 的平台放入他們的系統中。因此,由於我們的影響力,我們擁有來自世界各地的 OEM 和 ODM 等的非常廣泛的分銷。
And then lastly, because of our scale and velocity, we were able to sustain this really complex stack of software and hardware, networking and compute and across all of these different usage models and different computing environments. And we're able to do all this while accelerating the velocity of our engineering.
最後,由於我們的規模和速度,我們能夠在所有這些不同的使用模型和不同的計算環境中維持這個非常複雜的軟件和硬件、網絡和計算堆棧。我們能夠在加快工程速度的同時完成這一切。
It seems like we're introducing a new architecture every 2 years. Now we're introducing a new architecture, a new product just about every 6 months. And so these properties make it possible for the ecosystem to build their company and their business on top of us. And so those, in combination, makes us special.
我們似乎每兩年就會引入一種新架構。現在,我們大約每 6 個月就會推出一種新架構、一款新產品。因此,這些特性使生態系統能夠在我們之上建立他們的公司和業務。因此,這些因素結合在一起,使我們變得與眾不同。
Operator
Operator
Next, we'll go to Atif Malik with Citi.
接下來,我們將與 Citi 一起前往 Atif Malik。
Atif Malik - Director & Semiconductor Capital Equipment and Specialty Semiconductor Analyst
Atif Malik - Director & Semiconductor Capital Equipment and Specialty Semiconductor Analyst
Great job on the results and outlook. Colette, I have a question on the CoWoS-less L40S that you guys talked about. Any idea how much of the supply tightness can L40S help with? And if you can talk about the incremental profitability or gross margin contribution from this product?
在結果和前景方面做得很好。 Colette,我有一個關於你們談到的無 CoWoS L40S 的問題。您知道 L40S 能在多大程度上緩解供應緊張問題嗎?您能否談談該產品的增量盈利能力或毛利率貢獻?
Jen-Hsun Huang - Founder, CEO, President and Director
Jen-Hsun Huang - Founder, CEO, President and Director
Yes. Atif, let me take that for you. The L40S is really designed for a different type of application. H100 is designed for large-scale language models and processing just very large models and a great deal of data. And so that's not L40S' focus. L40S' focus is to be able to fine-tune models, fine-tune pretrained models, and it'll do that incredibly well. It has a transform engine. It's got a lot of performance.
是的。阿蒂夫,讓我幫您拿一下。 L40S 確實是為不同類型的應用而設計的。 H100 專為大規模語言模型而設計,僅處理非常大的模型和大量數據。所以這不是 L40S 的重點。 L40S 的重點是能夠微調模型,微調預訓練模型,它會做得非常好。它有一個變換引擎。它有很多性能。
You can get multiple GPUs in a server. It's designed for hyperscale scale-out, meaning it's easy to install L40S servers into the world's hyperscale data centers. It comes in a standard rack, standard server, and everything about it is standard. And so it's easy to install. L40S also is with the software stack around it.
您可以在一台服務器中獲得多個 GPU。它專為超大規模橫向擴展而設計,這意味著可以輕鬆地將 L40S 服務器安裝到世界超大規模數據中心。它配備標準機架、標準服務器,一切都是標準的。因此它很容易安裝。 L40S 還具有圍繞它的軟件堆棧。
And along with BlueField-3 and all the work that we did with VMware and the work that we did with Snowflake and ServiceNow and so many other enterprise partners, L40S is designed for the world's enterprise IT systems. And that's the reason why HPE, Dell, and Lenovo and some 20 other system makers building about 100 different configurations of enterprise servers are going to work with us to take generative AI to the world's enterprise.
與 BlueField-3 以及我們與 VMware 所做的所有工作以及我們與 Snowflake 和 ServiceNow 以及許多其他企業合作夥伴所做的工作一起,L40S 專為全球企業 IT 系統而設計。這就是為什麼 HPE、戴爾、聯想以及其他大約 20 家系統製造商構建了大約 100 種不同配置的企業服務器,將與我們合作,將生成式 AI 引入全球企業。
And so L40S is really designed for a different type of scale-out, if you will. It's, of course, large language models. It's, of course, generative AI, but it's a different use case. And so the L40S is going to -- is off to a great start. And the world's enterprise and hyperscalers are really clamoring to get L40S deployed.
因此,如果您願意的話,L40S 確實是為不同類型的橫向擴展而設計的。當然,這是大型語言模型。當然,它是生成式人工智能,但它是一個不同的用例。因此,L40S 將是一個良好的開端。全球企業和超大規模企業都迫切希望部署 L40S。
Operator
Operator
Next, we'll go to Joe Moore with Morgan Stanley.
接下來,我們將採訪摩根士丹利的喬·摩爾。
Joseph Lawrence Moore - Executive Director
Joseph Lawrence Moore - Executive Director
I guess the thing about these numbers that's so remarkable to me is the amount of demand that remains unfulfilled, talking to some of your customers. As good as these numbers are, you sort of more than tripled your revenue in a couple of quarters. There's a demand, in some cases, for multiples of what people are getting.
我想這些數字對我來說如此引人注目的是與您的一些客戶交談後仍未滿足的需求量。儘管這些數字不錯,但您的收入在幾個季度內增加了兩倍多。在某些情況下,人們所獲得的需求是數倍的。
So can you talk about that? How much unfulfilled demand do you think there is? And you talked about visibility extending into next year. Do you have line of sight into when you get to see supply-demand equilibrium here?
那麼你能談談這個嗎?您認為還有多少未滿足的需求?您談到了明年的可見性。當你看到這裡的供需平衡時,你能看到嗎?
Jen-Hsun Huang - Founder, CEO, President and Director
Jen-Hsun Huang - Founder, CEO, President and Director
Yes. We have excellent visibility through the year and into next year. And we're already planning the next-generation infrastructure with the leading CSPs and data center builders. The demand -- the easiest way to think about the demand is -- the world is transitioning from general-purpose computing to accelerated computing. That's the easiest way to think about the demand.
是的。我們全年和明年都有良好的能見度。我們已經與領先的通信服務提供商和數據中心建設者一起規劃下一代基礎設施。需求——思考需求的最簡單的方式是——世界正在從通用計算過渡到加速計算。這是考慮需求的最簡單方法。
The best way for companies to increase their throughput, improve their energy efficiency, improve their cost efficiency is to divert their capital budget to accelerated computing and generative AI. Because by doing that, you're going to offload so much workload off of the CPUs that the available CPU capacity in your data center will get boosted. And so what you're seeing companies do now is recognize the tipping point here, recognize the beginning of this transition, and divert their capital investment to accelerated computing and generative AI.
企業提高吞吐量、提高能源效率、提高成本效率的最佳方式是將資本預算轉移到加速計算和生成式人工智能上。因為通過這樣做,您將從 CPU 上卸載大量工作負載,但數據中心內的可用 CPU 將會得到提升。因此,你現在看到的公司所做的就是認識到這一點——這裡是轉折點,認識到這一轉變的開始,並將其資本投資轉向加速計算和生成人工智能。
And so that's probably the easiest way to think about the opportunity ahead of us. This isn't a singular application that is driving the demand, but this is a new computing platform, if you will, a new computing transition that's happening. And data centers all over the world are responding to this and shifting in a broad-based way.
因此,這可能是思考我們面前的機會的最簡單的方法。這不是一個推動需求的單一應用程序,但這是一個新的計算平台,如果你願意的話,這是一個正在發生的新的計算轉變。世界各地的數據中心正在對此做出反應並進行廣泛的轉變。
Operator
Operator
Next, we go to Toshiya Hari with Goldman Sachs.
接下來,我們和高盛一起去Toshiya Hari。
Toshiya Hari - MD
Toshiya Hari - MD
I had 1 quick clarification question for Colette and then another 1 for Jensen. Colette, I think last quarter, you had said CSPs were about 40% of your Data Center revenue, Consumer Internet at 30%, Enterprise, 30%. Based on your remarks, it sounded like CSPs and Consumer Internet may have been a larger percentage of your business. If you can kind of clarify that or confirm that, that would be super helpful.
我向 Colette 提出了 1 個快速澄清問題,然後向 Jensen 提出了另外 1 個問題。 Colette,我記得上個季度,您曾說過 CSP 約佔數據中心收入的 40%,消費互聯網佔 30%,企業佔 30%。根據您的評論,聽起來 CSP 和消費者互聯網可能在您的業務中所佔的比例更大。如果您能澄清或確認這一點,那將非常有幫助。
And then Jensen, a question for you. Given your position as the key enabler of AI, the breadth of engagements and the visibility you have into customer projects, I'm curious how confident you are that there will be enough applications or use cases for your customers to generate a reasonable return on their investments. I guess I ask the question because there is a concern out there that there could be a bit of a pause in your demand profile in the out years. Curious if there's enough breadth and depth there to support a sustained increase in your Data Center business going forward.
然後是詹森,問你一個問題。考慮到您作為人工智能關鍵推動者的地位、參與的廣度以及您對客戶項目的可見性,我很好奇您對有足夠的應用程序或用例為您的客戶帶來合理的回報有多大信心?投資。我想我問這個問題是因為有人擔心未來幾年您的需求狀況可能會有所停頓。很好奇是否有足夠的廣度和深度來支持您的數據中心業務的持續增長。
Colette M. Kress - Executive VP & CFO
Colette M. Kress - Executive VP & CFO
Okay. So thank you, Toshiya, on the question regarding our types of customers that we have in our Data Center business. And we look at it in terms of combining our compute as well as our networking together. Our CSPs, our large CSPs, are contributing a little bit more than 50% of our revenue within Q2. And the next largest category will be our consumer Internet companies. And then the last piece of that will be our enterprise and high-performance computing.
好的。謝謝你,Toshiya,關於我們數據中心業務中的客戶類型的問題。我們從將計算和網絡結合在一起的角度來看待它。我們的 CSP,我們的大型 CSP,在第二季度貢獻了我們收入的 50% 以上。下一個最大的類別將是我們的消費互聯網公司。最後一部分將是我們的企業和高性能計算。
Jen-Hsun Huang - Founder, CEO, President and Director
Jen-Hsun Huang - Founder, CEO, President and Director
Toshiya, I'm reluctant to guess about the future, and so I'll answer the question from a first-principles-of-computer-science perspective. It has been recognized for some time now that brute forcing general purpose computing -- using general purpose computing at scale -- is no longer the best way to go forward. It's too energy costly, it's too expensive, and the performance of the applications is too slow.
Toshiya,我不願意猜測未來,所以我會從計算機科學第一原理的角度來回答這個問題。一段時間以來,人們認識到通用計算並不是強制通用計算。大規模使用通用計算不再是前進的最佳方式。它的能源成本太高,太昂貴,而且應用程序的性能太慢。
And finally, the world has a new way of doing it. It's called accelerated computing, and what kicked it into turbocharge is generative AI. But accelerated computing could be used for all kinds of different applications that are already in the data center. And by using it, you offload the CPUs. You save a ton of money -- an order of magnitude in cost and an order of magnitude in energy -- and the throughput is higher. And that's what the industry is really responding to.
最後,世界有了一種新的實現方式。這就是所謂的加速計算,而推動它加速發展的是生成式人工智能。但加速計算可用於數據中心已有的各種不同應用程序。通過使用它,您可以減輕 CPU 的負擔。您可以節省大量資金和能源,並且吞吐量更高。這就是該行業真正做出的反應。
Going forward, the best way to invest in the data center is to divert the capital investment from general purpose computing and focus it on generative AI and accelerated computing. Generative AI provides a new way of generating productivity, a new way of generating new services to offer to your customers, and accelerated computing helps you save money and save power. And the number of applications is, well, tons. Lots of developers, lots of applications, lots of libraries. It's ready to be deployed.
展望未來,投資數據中心的最佳方式是將資本投資從通用計算上轉移出來,集中到生成式人工智能和加速計算上。生成式 AI 提供了一種提高生產力的新方式、一種生成為客戶提供的新服務的新方式,而加速計算可幫助您節省資金和電力。申請的數量是成噸的。很多開發人員、很多應用程序、很多庫。它已準備好部署。
And so I think the data centers around the world recognize this, that this is the best way to deploy resources, deploy capital going forward for data centers. This is true for the world's clouds, and you're seeing a whole crop of new GPU specialty -- GPU specialized cloud service providers. One of the famous ones is CoreWeave, and they're doing incredibly well.
因此,我認為世界各地的數據中心都認識到這一點,這是部署資源、為數據中心部署資本的最佳方式。對於全球雲來說都是如此,您會看到一大批新的 GPU 專業人士——GPU 專業雲服務提供商。 CoreWeave 是其中著名的公司之一,他們做得非常好。
But you're seeing the regional GPU specialist service providers all over the world now. And it's because they all recognize the same thing, that the best way to invest their capital going forward is to put it into accelerated computing and generative AI.
但您現在看到的區域 GPU 專業服務提供商遍布世界各地。正是因為他們都認識到同一件事,未來投資資本的最佳方式是將其投入加速計算和生成人工智能。
But we're also seeing that enterprises want to do that. But in order for enterprises to do it, you have to support the management system, the operating system, the security and software-defined data center approach of enterprises, and that's called VMware. And we've been working several years with VMware to make it possible for VMware to support not just the virtualization of CPUs but a virtualization of GPUs as well as the distributed computing capabilities of GPUs, supporting NVIDIA's BlueField for high-performance networking.
但我們也看到企業希望這樣做。但為了讓企業做到這一點,你必須支持企業的管理系統、操作系統、安全性和軟件定義的數據中心方法,這就是VMware。我們多年來一直與VMware合作,使VMware不僅支持CPU虛擬化,還支持GPU虛擬化以及GPU的分佈式計算能力,支持NVIDIA的BlueField高性能網絡。
And all of the generative AI libraries that we've been working on is now going to be offered as a special SKU by VMware's sales force, which is, as we all know, quite large because they reach some several hundred thousand VMware customers around the world. And this new SKU is going to be called VMware Private AI Foundation. And this will be a new SKU that makes it possible for enterprises.
我們一直在開發的所有生成式 AI 庫現在都將由 VMware 銷售人員作為特殊 SKU 提供,眾所周知,該銷售人員規模相當大,因為它們覆蓋了全球數十萬 VMware 客戶。世界。這個新的 SKU 將被稱為 VMware Private AI Foundation。而這將是一個讓企業成為可能的新SKU。
And in combination with HP, Dell, and Lenovo's new server offerings based on L40S, any enterprise could have a state-of-the-art AI data center and be able to engage generative AI. And so I think the answer to that question is hard to predict exactly what's going to happen quarter-to-quarter. But I think the trend is very, very clear now that we're seeing a platform shift.
結合惠普、戴爾和聯想基於 L40S 的新服務器產品,任何企業都可以擁有最先進的人工智能數據中心,並能夠參與生成式人工智能。因此,我認為這個問題的答案很難準確預測每個季度會發生什麼。但我認為現在我們看到了平台的轉變,趨勢非常非常明顯。
Operator
Operator
Next, we'll go to Timothy Arcuri with UBS.
接下來,我們將邀請瑞銀集團的蒂莫西·阿庫裡 (Timothy Arcuri)。
Timothy Michael Arcuri - MD and Head of Semiconductors & Semiconductor Equipment
Timothy Michael Arcuri - MD and Head of Semiconductors & Semiconductor Equipment
Can you talk about the attach rate of your networking solutions to your -- to the compute that you're shipping? In other words, is half of your compute shipping with your networking solutions -- more than half, less than half? And is this something that maybe you can use to prioritize allocation of the GPUs?
您能談談您的網絡解決方案與您正在運輸的計算的連接率嗎?換句話說,您的網絡解決方案的計算量的一半是多於一半,還是少於一半?您是否可以使用它來確定 GPU 分配的優先級?
Jen-Hsun Huang - Founder, CEO, President and Director
Jen-Hsun Huang - Founder, CEO, President and Director
Well, working backwards, we don't use that to prioritize the allocation of our GPUs. We let customers decide what networking they would like to use. And for the customers that are building very large infrastructure, InfiniBand is, I hate to say it, kind of a no-brainer. And the reason for that is because the efficiency of InfiniBand is so significant: some 10%, 15%, 20% higher throughput for a $1 billion infrastructure translates to enormous savings.
好吧,反過來看,我們不會用它來確定 GPU 分配的優先級。我們讓客戶決定他們想要使用什麼網絡。對於正在構建超大型基礎設施的客戶來說,InfiniBand(我不想這麼說)是理所當然的選擇。其原因在於 InfiniBand 的效率非常顯著,對於 10 億美元的基礎設施而言,吞吐量提高約 10%、15%、20% 可以轉化為巨大的節省。
Basically, the networking is free. And so if you have a single application, if you will, infrastructure where it's largely dedicated to large language models or large AI systems, InfiniBand is really, really a terrific choice.
基本上,網絡是免費的。因此,如果您有一個單一應用程序(如果您願意的話),其基礎設施主要致力於大型語言模型或大型人工智能係統,那麼 InfiniBand 確實是一個絕佳的選擇。
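To illustrate the "networking is free" point above, here is a hypothetical back-of-the-envelope sketch. The 10%, 15%, 20% throughput figures are the ones cited on the call; the assumption that networking gear accounts for roughly 10% of the cluster's cost is purely illustrative and not from the transcript.

```python
# Hypothetical illustration of "the networking is free".
# Assumption (not from the call): networking is ~10% of a $1B cluster's cost.

infra_cost = 1_000_000_000          # total infrastructure spend, USD
network_share = 0.10                # assumed fraction spent on networking
network_cost = infra_cost * network_share

for uplift in (0.10, 0.15, 0.20):   # cited InfiniBand throughput advantage
    # Value the extra effective throughput as if you had bought that much
    # more general infrastructure instead.
    extra_value = infra_cost * uplift
    print(f"{uplift:.0%} uplift -> ${extra_value / 1e6:.0f}M of extra "
          f"throughput vs ${network_cost / 1e6:.0f}M networking cost")
```

Under these assumptions, even the low end of the cited efficiency gain ($100M of extra effective throughput) matches the entire assumed networking spend, which is the sense in which the fabric pays for itself.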
However, if you're hosting for a lot of different users and Ethernet is really core to the way you manage your data center, we have an excellent solution there that we had just recently announced, and it's called Spectrum-X. Well, we're going to bring the capabilities, if you will, not all of it, but some of it, of the capabilities of InfiniBand to Ethernet so that we can also, within the environment of Ethernet, allow you to -- enable you to get excellent generative AI capabilities.
然而,如果您為許多不同的用戶託管,並且以太網確實是您管理數據中心方式的核心,那麼我們最近剛剛宣布了一個出色的解決方案,它被稱為 Spectrum-X。好吧,如果您願意的話,我們將把 InfiniBand 的功能(不是全部,而是其中的一部分)引入以太網,這樣我們也可以在以太網環境中,讓您能夠——您將獲得卓越的生成人工智能能力。
So Spectrum-X is just ramping now. It requires BlueField-3, and it supports both our Spectrum-2 and Spectrum-3 Ethernet switches. And the additional performance is really spectacular. BlueField-3 makes it possible and a whole bunch of software that goes along with it.
所以 Spectrum-X 現在才剛剛起步。它需要 BlueField-3,並且支持我們的 Spectrum-2 和 Spectrum-3 以太網交換機。而且附加的性能確實非常驚人。 BlueField-3 以及與之配套的一整套軟件使之成為可能。
BlueField, as all of you know, is a project really dear to my heart, and it's off to just a tremendous start. I think it's a home run. And this is the concept of in-network computing and putting a lot of software in the computing fabric is being realized with BlueField-3, and it is going to be a home run.
正如你們所知,BlueField 是一個我非常珍視的項目,而且它只是一個巨大的開始。我認為這是一個本壘打。這就是網絡內計算的概念,將大量軟件放入計算結構中正在通過 BlueField-3 實現,這將是一個全壘打。
Operator
Operator
Our final question comes from the line of Ben Reitzes with Melius.
我們的最後一個問題來自 Ben Reitzes 和 Melius。
Benjamin Alexander Reitzes - MD & Head of Technology Research
Benjamin Alexander Reitzes - MD & Head of Technology Research
My question is with regard to DGX Cloud. Can you talk about the reception that you're seeing and how the momentum is going? And then Colette, can you also talk about your software business? What is the run rate right now and the materiality of that business? And it does seem like it's already helping margins a bit.
我的問題是關於 DGX Cloud 的。您能談談您所看到的反響以及勢頭如何嗎? Colette,您能談談您的軟件業務嗎?目前的運行率以及該業務的重要性是多少?看起來它確實已經對利潤率有所幫助。
Jen-Hsun Huang - Founder, CEO, President and Director
Jen-Hsun Huang - Founder, CEO, President and Director
DGX Cloud's strategy, let me start there. DGX Cloud's strategy is to achieve several things: number one, to enable a really close partnership between us and the world's CSPs. We recognize that many of our -- we work with some 30,000 companies around the world, 15,000 of them are startups, thousands of them are generative AI companies. And the fastest-growing segment, of course, is generative AI.
DGX Cloud 的策略,讓我從這裡開始。 DGX Cloud 的戰略是實現幾件事:第一,使我們與世界各地的 CSP 之間建立真正密切的合作夥伴關係。我們認識到,我們與世界各地約 30,000 家公司合作,其中 15,000 家是初創公司,數千家是生成人工智能公司。當然,增長最快的領域是生成人工智能。
We're working with all of the world's AI start-ups. And ultimately, they would like to be able to land in one of the world's leading clouds. And so we built DGX Cloud as a footprint inside the world's leading clouds so that we could simultaneously work with all of our AI partners and help land them easily in one of our cloud partners.
我們正在與世界上所有的人工智能初創企業合作。最終,他們希望能夠降落在世界領先的雲之一上。因此,我們將 DGX Cloud 作為世界領先雲中的一個足跡,以便我們可以同時與所有 AI 合作夥伴合作,並幫助他們輕鬆地登陸我們的雲合作夥伴之一。
The second benefit is that it allows our CSPs and ourselves to work really closely together to improve the performance of hyperscale clouds, which is historically designed for multi-tenancy and not designed for high-performance distributed computing like generative AI. And so to be able to work closely architecturally, to have our engineers work hand in hand to improve the networking performance and the computing performance has been really powerful, really terrific.
第二個好處是,它允許我們的CSP 和我們自己真正緊密地合作,以提高超大規模雲的性能,超大規模雲歷來是為多租戶設計的,而不是為生成式AI 等高性能分佈式計算而設計的。因此,能夠在架構上緊密合作,讓我們的工程師攜手合作,提高網絡性能和計算性能,這真的非常強大,非常棒。
And then thirdly, of course, NVIDIA uses very large infrastructures ourselves. And our self-driving car team, our NVIDIA research team, our generative AI team, our language model team, the amount of infrastructure that we need is quite significant. And none of our optimizing compilers are possible without our DGX systems. Even compilers these days require AI, and optimizing software and infrastructure software requires AI to even develop. It's been well publicized that our engineering uses AI to design our chips.
第三,當然,NVIDIA 自己也使用非常龐大的基礎設施。而我們的自動駕駛汽車團隊、我們的 NVIDIA 研究團隊、我們的生成式 AI 團隊、我們的語言模型團隊,我們需要的基礎設施數量是相當可觀的。如果沒有我們的 DGX 系統,我們的優化編譯器就不可能實現。如今,甚至編譯器也需要人工智能,優化軟件和基礎設施軟件甚至需要人工智能來開發。我們的工程部門使用人工智能來設計我們的芯片,這一點已經廣為人知。
And so the internal -- our own consumption of AI, our robotics team, so on and so forth, Omniverse teams, so on and so forth, all needs AI. And so our internal consumption is quite large as well, and we land that in DGX Cloud. And so DGX Cloud has multiple use cases, multiple drivers, and it's been off to just an enormous success. And our CSPs love it, the developers love it and our own internal engineers are clamoring to have more of it. And it's a great way for us to engage and work closely with all of the AI ecosystem around the world.
因此,內部——我們自己對人工智能的消費、我們的機器人團隊等等、Omniverse 團隊等等,都需要人工智能。所以我們的內部消耗也相當大,我們把它放在DGX Cloud上。因此,DGX Cloud 擁有多個用例、多個驅動程序,並且取得了巨大的成功。我們的 CSP 喜歡它,開發人員喜歡它,我們自己的內部工程師也強烈要求擁有更多它。這是我們與世界各地的人工智能生態系統密切接觸和合作的好方法。
Colette M. Kress - Executive VP & CFO
Colette M. Kress - Executive VP & CFO
And let's see if I can answer your question regarding our software revenue. As part of our opening remarks that we made as well, remember, software is a part of almost all of our products, whether they're our Data Center products, GPU systems or any of our products within Gaming and our future Automotive products. You're correct, we're also selling it as a standalone business. And that standalone software continues to grow, where we are providing both the software services and upgrades there as well.
讓我們看看我是否可以回答您有關我們軟件收入的問題。在我們的開場白中,請記住,軟件是我們幾乎所有產品的一部分,無論是我們的數據中心產品、GPU 系統還是我們遊戲領域的任何產品以及我們未來的汽車產品。你是對的,我們也在獨立業務中銷售它。在我們提供軟件服務和升級的地方,獨立軟件繼續增長。
Now we're seeing, at this point, probably hundreds of millions of dollars annually for our software business, and we are looking at NVIDIA AI Enterprise to be included with many of the products that we're selling, such as our DGX, such as our PCIe versions of our H100. And I think we're going to see more availability even with our CSP marketplaces. So we're off to a great start, and I do believe we'll see this continue to grow going forward.
現在我們看到,我們的軟件業務每年可能有數億美元,我們正在考慮將 NVIDIA AI Enterprise 納入我們銷售的許多產品中,例如我們的 DGX,例如作為 H100 的 PCIe 版本。我認為即使在我們的 CSP 市場中,我們也會看到更多的可用性。所以我們有了一個良好的開端,我相信我們會看到這種情況繼續發展。
Operator
Operator
And that does conclude today's question-and-answer session. I'll turn the call back over to Jensen Huang for any additional or closing remarks.
今天的問答環節到此結束。我會將電話轉回黃仁勳以獲取任何補充或結束語。
Jen-Hsun Huang - Founder, CEO, President and Director
Jen-Hsun Huang - Founder, CEO, President and Director
A new computing era has begun. The industry is simultaneously going through 2 platform transitions, accelerated computing and generative AI. Data centers are making a platform shift from general purpose to accelerated computing. The $1 trillion of global data centers will transition to accelerated computing to achieve an order of magnitude better performance, energy efficiency and cost. Accelerated computing enabled generative AI, which is now driving a platform shift in software and enabling new, never-before possible applications.
Together, accelerated computing and generative AI are driving a broad-based computer industry platform shift. Our demand is tremendous. We are significantly expanding our production capacity. Supply will substantially increase for the rest of this year and next year. NVIDIA has been preparing for this for over 2 decades and has created a new computing platform that the world's industries can build upon.
What makes NVIDIA special is: one, architecture. NVIDIA accelerates everything from data processing, training, inference, every AI model, real-time speech to computer vision, and giant recommenders to vector databases. The performance and versatility of our architecture translate to the lowest data center TCO and best energy efficiency.
Two, installed base. NVIDIA has hundreds of millions of CUDA-compatible GPUs worldwide. Developers need a large installed base to reach end users and grow their business. NVIDIA is the developer's preferred platform. More developers create more applications that make NVIDIA more valuable for customers.
Three, reach. NVIDIA is in clouds, enterprise data centers, industrial edge, PCs, workstations, instruments and robotics. Each has fundamentally unique computing models and ecosystems. System suppliers like computer OEMs can confidently invest in NVIDIA because we offer significant market demand and reach.
Four, scale and velocity. NVIDIA has achieved significant scale and is 100% invested in accelerated computing and generative AI. Our ecosystem partners can trust that we have the expertise, focus and scale to deliver a strong road map and reach to help them grow. We are accelerating because of the additive results of these capabilities. We're upgrading and adding new products about every 6 months, versus every 2 years, to address the expanding universe of generative AI.
While we increased the output of H100 for training and inference of large language models, we're ramping up our new L40S universal GPU for cloud scale-out and enterprise servers. Spectrum-X, which consists of our Ethernet switch, BlueField-3 Super NIC and software, helps customers who want the best possible AI performance on Ethernet infrastructures. Customers are already working on next-generation accelerated computing and generative AI with our Grace Hopper.
We're extending NVIDIA AI to the world's enterprises that demand generative AI but with model privacy, security and sovereignty. Together with the world's leading enterprise IT companies, Accenture, Adobe, Getty, Hugging Face, Snowflake, ServiceNow, VMware and WPP, and our enterprise system partners, Dell, HPE and Lenovo, we are bringing generative AI to the world's enterprises. We're building NVIDIA Omniverse to digitalize the world's multitrillion-dollar heavy industries and enable them to use generative AI to automate how they build and operate physical assets and achieve greater productivity.
Generative AI starts in the cloud, but the most significant opportunities are in the world's largest industries, where companies can realize trillions of dollars of productivity gains. It is an exciting time for NVIDIA, our customers, partners and the entire ecosystem to drive this generational shift in computing. We look forward to updating you on our progress next quarter.
Operator
This concludes today's conference call. You may now disconnect.