NVIDIA reported second-quarter revenue of $13.51 billion, driven by strong demand for its Data Center platform. The company expects supply to increase each quarter through next year.
NVIDIA 報告稱,由於對其資料中心平台的強勁需求,第二季營收達到 135.1 億美元。該公司預計到明年每個季度的供應量都會增加。
NVIDIA highlighted the potential impact of increased regulations on exports to China. The company's Gaming revenue grew 11% sequentially, while Automotive revenue declined 15%. NVIDIA announced partnerships with multiple companies.
NVIDIA 強調了加強對中國出口監管的潛在影響。該公司的遊戲收入季增 11%,而汽車收入季減 15%。NVIDIA 宣布與多家公司建立合作夥伴關係。
The company's outlook for the third quarter includes expected total revenue of $16 billion. Speakers discussed the emerging application of large-model inference and the sustainability of generative AI demand.
該公司第三季的展望包括預計總收入為 160 億美元。與會者討論了大型模型推理的新興應用以及生成式人工智慧需求的可持續性。
The global data center industry is transitioning to accelerated computing and generative AI. NVIDIA's software ecosystem and architecture make it the platform of choice for developers. The company is planning next-generation infrastructure together with leading customers.
全球資料中心產業正在向加速運算和生成式人工智慧轉型。NVIDIA 的軟體生態系統和架構使其成為開發者的首選。該公司正在與領先客戶一起規劃下一代基礎設施。
NVIDIA prioritizes customer choice when allocating GPUs and offers networking solutions. The company's DGX Cloud strategy aims to build partnerships with CSPs and to advance high-performance computing.
NVIDIA 在分配 GPU 時優先考慮客戶的選擇並提供網路解決方案。該公司的 DGX Cloud 戰略旨在與 CSP 建立合作夥伴關係並提升高效能運算的表現。
NVIDIA expects continued growth in its software business. The company is expanding capacity and partnering with enterprise IT companies. NVIDIA is excited to drive a generational shift in computing.
NVIDIA 預計其軟體業務將持續成長。該公司正在擴大產能並與企業 IT 公司合作。NVIDIA 對於推動運算領域的世代轉變感到非常興奮。
Disclaimer: The Chinese translation is produced by Google Translate and is provided for reference only; please rely on the English original.
使用警語:中文譯文來源為 Google 翻譯,僅供參考,實際內容請以英文原文為主
Operator
Operator
Good afternoon. My name is David, and I'll be your conference operator today. At this time, I'd like to welcome everyone to NVIDIA's second quarter earnings call. Today's conference is being recorded. (Operator Instructions)
午安。我叫 David,今天我將擔任各位的會議接線員。此時此刻,我謹歡迎大家參加 NVIDIA 第二季財報電話會議。今天的會議正在錄製中。(操作員說明)
Thank you. Simona Jankowski, you may begin your conference.
謝謝。 Simona Jankowski,您可以開始會議了。
Simona Jankowski - VP of IR
Simona Jankowski - VP of IR
Thank you. Good afternoon, everyone, and welcome to NVIDIA's Conference Call for the Second Quarter of Fiscal 2024. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer.
謝謝。大家午安,歡迎參加 NVIDIA 2024 財年第二季電話會議。今天與我一同出席的有 NVIDIA 總裁兼執行長黃仁勳 (Jensen Huang),以及執行副總裁兼財務長 Colette Kress。
I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the third quarter of fiscal 2024. The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without our prior written consent.
我想提醒您,我們的電話會議正在 NVIDIA 投資者關係網站上進行網路直播。此網路直播的重播將持續提供至我們討論 2024 財年第三季財務業績的電話會議為止。今天電話會議的內容屬於 NVIDIA 所有,未經我們事先書面同意,不得複製或轉錄。
During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission.
在這次電話會議中,我們可能會根據目前的預期做出前瞻性陳述。這些陳述受到許多重大風險和不確定性的影響,我們的實際結果可能會有重大差異。有關可能影響我們未來財務業績和業務的因素的討論,請參閱今天的財報新聞稿、我們最新的 10-K 和 10-Q 表格,以及我們可能向美國證券交易委員會提交的 8-K 表格報告中的揭露內容。
All our statements are made as of today, August 23, 2023, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website.
我們的所有聲明均根據我們目前掌握的資訊於今天(2023 年 8 月 23 日)作出。除法律要求外,我們不承擔更新任何此類聲明的義務。在本次電話會議中,我們將討論非公認會計準則財務指標。您可以在我們網站上發布的財務長評論中找到這些非 GAAP 財務指標與 GAAP 財務指標的調整表。
And with that, let me turn the call over to Colette.
接下來,讓我把電話轉給 Colette。
Colette M. Kress - Executive VP & CFO
Colette M. Kress - Executive VP & CFO
Thanks, Simona. We had an exceptional quarter. Record Q2 revenue of $13.51 billion was up 88% sequentially and up 101% year-on-year and above our outlook of $11 billion. Let me first start with Data Center. Record revenue of $10.32 billion was up 141% sequentially and up 171% year-on-year.
謝謝,Simona。我們度過了一個出色的季度。第二季營收創歷史新高,達 135.1 億美元,季增 88%,年增 101%,高於我們 110 億美元的預期。讓我先從資料中心開始。資料中心營收創紀錄達到 103.2 億美元,季增 141%,年增 171%。
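As a quick sanity check, the sequential and year-on-year growth rates quoted above can be recomputed from the absolute revenue figures. The prior-period revenues used below (roughly $7.19 billion for Q1 FY2024 and $6.70 billion for Q2 FY2023) are not stated in this call; they are assumptions taken from NVIDIA's earlier quarterly releases:

```python
# Recompute the growth rates quoted on the call from absolute revenue figures.
q2_fy24 = 13.51  # $ billions, stated on the call

# Assumed prior-period figures (from NVIDIA's earlier releases, approximate):
q1_fy24 = 7.19   # prior quarter
q2_fy23 = 6.70   # year-ago quarter

sequential_growth = (q2_fy24 / q1_fy24 - 1) * 100
yoy_growth = (q2_fy24 / q2_fy23 - 1) * 100

print(f"sequential: {sequential_growth:.1f}%")   # ~88%, matching the call
print(f"year-on-year: {yoy_growth:.1f}%")        # ~101-102%, matching the call
```

The small rounding differences against the quoted 88% and 101% come from using rounded prior-period figures.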
Data Center compute revenue nearly tripled year-on-year, driven primarily by accelerating demand from cloud service providers and large consumer Internet companies for our HGX platform, the engine of generative AI and large language models. Major companies, including AWS, Google Cloud, Meta, Microsoft Azure and Oracle Cloud, as well as a growing number of GPU cloud providers, are deploying, in volume, HGX systems based on our Hopper and Ampere architecture Tensor Core GPUs.
資料中心運算收入較去年同期成長近兩倍,主要受到雲端服務供應商和大型消費網路公司對我們 HGX 平台(生成式人工智慧和大型語言模型的引擎)需求加速成長的推動。AWS、Google Cloud、Meta、Microsoft Azure 和 Oracle Cloud 等主要公司以及越來越多的 GPU 雲端供應商,正在大量部署基於我們 Hopper 和 Ampere 架構 Tensor Core GPU 的 HGX 系統。
Networking revenue almost doubled year-on-year, driven by our end-to-end InfiniBand networking platform, the gold standard for AI. There is tremendous demand for NVIDIA accelerated computing and AI platforms. Our supply partners have been exceptional in ramping capacity to support our needs.
在我們的端到端 InfiniBand 網路平台(人工智慧的黃金標準)的推動下,網路收入比去年同期幾乎翻了一番。對 NVIDIA 加速運算和 AI 平台的需求龐大。我們的供應夥伴在提高產能以滿足我們的需求方面表現出色。
Our data center supply chain, including HGX, with 35,000 parts and highly complex networking, has been built up over the past decade. We have also developed and qualified additional capacity and suppliers for key steps in the manufacturing process, such as CoWoS packaging. We expect supply to increase each quarter through next year.
我們的資料中心供應鏈(包括擁有 35,000 個零件和高度複雜網路的 HGX)是在過去十年中建立起來的。我們也為製造過程中的關鍵步驟(例如 CoWoS 封裝)開發並認證了額外的產能和供應商。我們預計到明年每季供應量都會增加。
By geography, Data Center growth was strongest in the U.S., as customers direct their capital investments to AI and accelerated computing. China demand was within the historical range of 20% to 25% of our Data Center revenue, including compute and networking solutions. At this time, let me take a moment to address recent reports on the potential for increased regulations on our exports to China.
從地理位置來看,資料中心的成長在美國最為強勁,因為客戶將資本投資轉向人工智慧和加速運算。中國需求占我們資料中心收入(包括運算和網路解決方案)的 20% 至 25%,處於歷史範圍內。現在,讓我花點時間談談最近有關我們對中國出口可能加強監管的報道。
We believe the current regulation is achieving the intended results. Given the strength of demand for our products worldwide, we do not anticipate that additional export restrictions on our Data Center GPUs, if adopted, would have an immediate material impact to our financial results. However, over the long term, restrictions prohibiting the sale of our Data Center GPUs to China, if implemented, will result in a permanent loss of an opportunity for the U.S. industry to compete and lead in one of the world's largest markets.
我們相信目前的監管正在達到預期效果。鑑於全球對我們產品的需求強勁,我們預計針對資料中心 GPU 的額外出口限制即使實施,也不會對我們的財務業績產生立即的重大影響。然而,從長遠來看,禁止向中國銷售資料中心 GPU 的限制一旦實施,將導致美國產業永久失去在全球最大市場之一競爭和領先的機會。
Our cloud service providers drove exceptionally strong demand for HGX systems in the quarter as they undertake a generational transition to upgrade their data center infrastructure for the new era of accelerated computing and AI. The NVIDIA HGX platform is the culmination of nearly 2 decades of full-stack innovation across silicon, systems, interconnects, networking, software and algorithms. Instances powered by the NVIDIA H100 Tensor Core GPUs are now generally available at AWS, Microsoft Azure and several GPU cloud providers, with others on the way shortly.
我們的雲端服務供應商在本季度推動了對 HGX 系統的異常強勁需求,因為他們正在進行世代轉型,升級其資料中心基礎設施,以迎接加速運算和人工智慧的新時代。NVIDIA HGX 平台是近 20 年來橫跨晶片、系統、互連、網路、軟體和演算法的全端創新的巔峰之作。由 NVIDIA H100 Tensor Core GPU 提供支援的執行個體現已在 AWS、Microsoft Azure 和多家 GPU 雲端供應商處全面上市,其他供應商也將很快推出。
Consumer Internet companies also drove the very strong demand. Their investments in data center infrastructure purpose-built for AI are already generating significant returns. For example, Meta recently highlighted that since launching Reels, AI recommendations have driven a more than 24% increase in time spent on Instagram.
消費互聯網公司也帶動了非常強勁的需求。他們對專門為人工智慧建構的資料中心基礎設施的投資已經產生了可觀的回報。例如,Meta 最近強調,自從推出 Reels 以來,人工智慧推薦使 Instagram 上花費的時間增加了 24% 以上。
Enterprises are also racing to deploy generative AI, driving strong consumption of NVIDIA-powered instances in the cloud as well as demand for on-premise infrastructure. Whether we serve customers in the cloud or on-prem through partners or direct, their applications can run seamlessly on NVIDIA AI Enterprise software, with access to our acceleration libraries, pretrained models and APIs.
企業也在競相部署生成式 AI,推動雲端中 NVIDIA 支援的實例的強勁消費以及對本地基礎設施的需求。無論我們透過合作夥伴或直接在雲端或本地為客戶提供服務,他們的應用程式都可以在 NVIDIA AI Enterprise 軟體上無縫運行,並可以存取我們的加速程式庫、預訓練模型和 API。
We announced a partnership with Snowflake to provide enterprises with accelerated clouds to create customized generative AI applications using their own proprietary data, all securely within the Snowflake Data Cloud. With the NVIDIA NeMo platform for developing large language models, enterprises will be able to make custom LLMs for advanced AI services, including chatbots, search and summarization, right from the Snowflake Data Cloud.
我們宣布與 Snowflake 建立合作夥伴關係,為企業提供加速運算,讓他們能使用自己的專有資料建立客製化的生成式 AI 應用程式,且全程安全地在 Snowflake Data Cloud 內進行。藉助用於開發大型語言模型的 NVIDIA NeMo 平台,企業將能直接在 Snowflake Data Cloud 中,為聊天機器人、搜尋和摘要等進階 AI 服務打造客製化 LLM。
Virtually every industry can benefit from generative AI. For example, AI copilots, such as those just announced by Microsoft, can boost the productivity of over 1 billion office workers and tens of millions of software engineers. Billions of professionals in legal services, sales, customer support and education will be available to leverage AI systems trained in their field. AI copilots and assistants are set to create new multi-hundred billion dollar market opportunities for our customers.
幾乎每個產業都可以從生成式人工智慧中受益。例如,微軟剛宣布的人工智慧副駕駛可以提高超過 10 億辦公室職員和數千萬軟體工程師的工作效率。法律服務、銷售、客戶支援和教育領域的數十億專業人士將可以利用在其領域接受過培訓的人工智慧系統。人工智慧副駕駛和助理將為我們的客戶創造新的數千億美元的市場機會。
We are seeing some of the earliest applications of generative AI in marketing, media and entertainment. WPP, the world's largest marketing and communication services organization, is developing a content engine using NVIDIA Omniverse to enable artists and designers to integrate generative AI into 3D content creation.
我們看到了生成式人工智慧在行銷、媒體和娛樂領域的一些最早的應用。全球最大的行銷和傳播服務組織 WPP 正在使用 NVIDIA Omniverse 開發內容引擎,使藝術家和設計師能夠將生成式 AI 整合到 3D 內容創作中。
WPP designers can create images from text prompts while using responsibly trained generative AI tools and content from NVIDIA partners such as Adobe and Getty Images, built with NVIDIA Picasso, a foundry for custom generative AI models for visual design. Visual content provider Shutterstock is also using NVIDIA Picasso to build tools and services that enable users to create 3D scene backgrounds with the help of generative AI.
WPP 設計師可以根據文字提示創建圖像,同時使用經過負責任訓練的生成式 AI 工具,以及來自 Adobe 和 Getty Images 等 NVIDIA 合作夥伴的內容,這些都建立在 NVIDIA Picasso(一個為視覺設計打造客製化生成式 AI 模型的代工平台)之上。視覺內容提供者 Shutterstock 也使用 NVIDIA Picasso 建立工具和服務,使用戶能夠在生成式 AI 的幫助下創建 3D 場景背景。
We've partnered with ServiceNow and Accenture to launch the AI Lighthouse program, fast-tracking the development of enterprise AI capabilities. AI Lighthouse unites the ServiceNow enterprise automation platform and engine with NVIDIA-accelerated computing and with Accenture's consulting and deployment services.
我們與ServiceNow、埃森哲合作推出AI Lighthouse計劃,加速企業AI能力發展。 AI Lighthouse 將 ServiceNow 企業自動化平台和引擎與 NVIDIA 加速運算以及埃森哲的諮詢和部署服務結合在一起。
We are collaborating also with Hugging Face to simplify the creation of new and custom AI models for enterprises. Hugging Face will offer a new service for enterprises to train and tune advanced AI models, powered by NVIDIA DGX Cloud.
我們也與 Hugging Face 合作,簡化企業建立全新客製化 AI 模型的流程。Hugging Face 將提供一項由 NVIDIA DGX Cloud 提供支援的新服務,供企業訓練和調整進階 AI 模型。
And just yesterday, VMware and NVIDIA announced a major new enterprise offering called VMware Private AI Foundation with NVIDIA, a fully integrated platform featuring AI software and accelerated computing from NVIDIA with multi-cloud software for enterprises running VMware. VMware's hundreds of thousands of enterprise customers will have access to the infrastructure, AI and cloud management software needed to customize models and run generative AI applications such as intelligent chatbots, assistants, search and summarization.
就在昨天,VMware 和NVIDIA 宣布了一項名為VMware Private AI Foundation with NVIDIA 的重要新企業產品,這是一個完全整合的平台,採用NVIDIA 的AI 軟體和加速運算以及面向運行VMware 的企業的多雲軟體。 VMware 的數十萬企業客戶將能夠存取自訂模型和運行智慧聊天機器人、助理、搜尋和摘要等生成式 AI 應用程式所需的基礎設施、AI 和雲端管理軟體。
We also announced new NVIDIA AI Enterprise-ready servers featuring the new NVIDIA L40S GPU built for the industry-standard data center server ecosystem and BlueField-3 DPU data center infrastructure processor. L40S is not limited by CoWoS supply and is shipping to the world's leading server system makers. L40S is a universal data center processor designed for high-volume data center scaling out to accelerate the most compute-intensive applications, including AI training and inferencing, 3D design and visualization, video processing and NVIDIA Omniverse industrial digitalization.
我們也發表了全新 NVIDIA AI Enterprise-ready 伺服器,配備專為業界標準資料中心伺服器生態系統打造的全新 NVIDIA L40S GPU,以及 BlueField-3 DPU 資料中心基礎架構處理器。L40S 不受 CoWoS 供應限制,並且正在向全球領先的伺服器系統製造商出貨。L40S 是一款通用資料中心處理器,專為大規模資料中心橫向擴展而設計,可加速最具運算密集度的應用,包括 AI 訓練和推理、3D 設計與視覺化、視訊處理以及 NVIDIA Omniverse 工業數位化。
NVIDIA AI Enterprise-ready servers are fully optimized for VMware Cloud Foundation and Private AI Foundation. Nearly 100 configurations of NVIDIA AI Enterprise-ready servers will soon be available from the world's leading enterprise IT computing companies, including Dell, HPE and Lenovo.
NVIDIA AI Enterprise-ready 伺服器針對 VMware Cloud Foundation 和 Private AI Foundation 進行了全面最佳化。戴爾、HPE 和聯想等全球領先的企業 IT 運算公司即將推出近 100 種配置的 NVIDIA AI 企業級伺服器。
The GH200 Grace Hopper Superchip, which combines our ARM-based Grace CPU with Hopper GPU, entered full production and will be available this quarter in OEM servers. It is also shipping to multiple supercomputing customers, including Los Alamos National Labs and the Swiss National Computing Center.
GH200 Grace Hopper Superchip 將我們基於 ARM 的 Grace CPU 與 Hopper GPU 相結合,現已全面投入生產,並將於本季度在 OEM 伺服器中上市。它還向多個超級運算客戶發貨,包括洛斯阿拉莫斯國家實驗室和瑞士國家計算中心。
And NVIDIA and SoftBank are collaborating on a platform based on GH200 for generative AI and 5G/6G applications. The second generation version of our Grace Hopper Superchip with the latest HBM3e memory will be available in Q2 of calendar 2024.
NVIDIA 和軟銀正在合作開發基於 GH200 的平台,用於生成式 AI 和 5G/6G 應用。配備最新 HBM3e 內存的第二代 Grace Hopper Superchip 版本將於 2024 年第二季上市。
We announced the DGX GH200, a new class of large-memory AI supercomputer for giant AI language models, recommender systems and data analytics. This is the first use of the new NVIDIA NVLink Switch System, enabling all of its 256 Grace Hopper Superchips to work together as one, a huge jump compared to our prior generation connecting just 8 GPUs over NVLink. The DGX GH200 systems are expected to be available by the end of the year. Google Cloud, Meta and Microsoft are among the first to gain access.
我們推出了 DGX GH200,這是一種新型大記憶體 AI 超級電腦,適用於巨型 AI 語言模型、推薦系統和資料分析。這是全新 NVIDIA NVLink 交換器系統的首次應用,使其全部 256 個 Grace Hopper Superchip 能夠如同單一 GPU 般協同運作,與我們上一代僅透過 NVLink 連接 8 個 GPU 相比,這是一個巨大的飛躍。DGX GH200 系統預計今年底上市。Google Cloud、Meta 和微軟將是最早獲得使用權的公司。
Strong networking growth was driven primarily by InfiniBand infrastructure to connect HGX GPU systems. Thanks to its end-to-end optimization and in-network computing capabilities, InfiniBand delivers more than double the performance of traditional Ethernet for AI. For billions of dollar AI infrastructures, the value from the increased throughput of InfiniBand is worth hundreds of (inaudible) and pays for the network. In addition, only InfiniBand can scale to hundreds of thousands of GPUs. It is the network of choice for leading AI practitioners.
網路的強勁成長主要是由連接 HGX GPU 系統的 InfiniBand 基礎設施所推動。憑藉其端對端優化和網內運算能力,InfiniBand 為 AI 提供的效能是傳統乙太網路的兩倍以上。對於價值數十億美元的 AI 基礎設施而言,InfiniBand 吞吐量提升所帶來的價值高達數百(聽不清楚),足以支付網路的成本。此外,只有 InfiniBand 可以擴展到數十萬個 GPU,它是領先 AI 從業者的首選網路。
For Ethernet-based cloud data centers that seek to optimize their AI performance, we announced NVIDIA Spectrum-X, an accelerated networking platform designed to optimize Ethernet for AI workloads. Spectrum-X couples the Spectrum-4 Ethernet switch with the BlueField-3 DPU, achieving 1.5x better overall AI performance and power efficiency versus traditional Ethernet. BlueField-3 DPU is a major success. It is in qualification with major OEMs and ramping across multiple CSPs and consumer Internet companies.
對於尋求優化 AI 效能的乙太網路雲端資料中心,我們推出了 NVIDIA Spectrum-X,這是一個旨在針對 AI 工作負載優化乙太網路的加速網路平台。Spectrum-X 將 Spectrum-4 乙太網路交換器與 BlueField-3 DPU 結合起來,與傳統乙太網路相比,整體 AI 效能和能源效率提高了 1.5 倍。BlueField-3 DPU 取得了重大成功,已在主要 OEM 廠商進行認證,並在多個 CSP 和消費網路公司中放量成長。
Now moving to Gaming. Gaming revenue of $2.49 billion was up 11% sequentially and 22% year-on-year. Growth was fueled by GeForce RTX 40 Series GPUs for laptops and desktops. End customer demand was solid and consistent with seasonality. We believe global end demand has returned to growth after last year's slowdown. We have a large upgrade opportunity ahead of us. Just 47% of our installed base have upgraded to RTX and about 20% have a GPU with an RTX 3060 or higher performance.
現在轉向遊戲業務。遊戲收入為 24.9 億美元,季增 11%,年增 22%。用於筆記型電腦和桌上型電腦的 GeForce RTX 40 系列 GPU 推動了成長。終端客戶需求穩健且符合季節性。我們認為全球終端需求在去年放緩後已恢復成長。我們面前有一個巨大的升級機會:我們的安裝基礎中只有 47% 已升級到 RTX,大約 20% 擁有 RTX 3060 或更高效能的 GPU。
Laptop GPUs posted strong growth in the key back-to-school season, led by RTX 4060 GPUs. NVIDIA's GPU-powered laptops have gained in popularity, and their shipments are now outpacing desktop GPUs in several regions around the world. This is likely to shift the [seasonality] of our overall gaming revenue a bit, with Q2 and Q3 as the stronger quarters of the year, reflecting the back-to-school and holiday build schedules for laptops.
在關鍵的返校季,筆記型電腦 GPU 實現了強勁成長,其中 RTX 4060 GPU 領銜。 NVIDIA 的 GPU 驅動的筆記型電腦越來越受歡迎,其出貨量現已超過全球多個地區的桌上型電腦 GPU。這可能會稍微改變我們整體遊戲收入的[季節性],第二季度和第三季度是一年中表現最強勁的季度,反映了筆記型電腦的返校和假期構建時間表。
In desktop, we launched the GeForce RTX 4060 and the GeForce RTX 4060 Ti GPUs, bringing the Ada Lovelace Architecture down to price points as low as $299. The ecosystem of RTX and DLSS games continue to expand. 35 new games added to DLSS support, including blockbusters such as Diablo IV and Baldur's Gate 3. There's now over 330 RTX-accelerated games and apps.
在桌面領域,我們推出了 GeForce RTX 4060 和 GeForce RTX 4060 Ti GPU,將 Ada Lovelace 架構的價格降至 299 美元。RTX 和 DLSS 遊戲的生態系統不斷擴大,新增 35 款支援 DLSS 的遊戲,包括《暗黑破壞神 IV》和《博德之門 3》等大作。目前已有超過 330 款 RTX 加速遊戲和應用程式。
We are bringing generative AI to Gaming. At Computex, we announced NVIDIA Avatar Cloud Engine, or ACE, for games, a custom AI model and foundry service. Developers can use this service to bring intelligence to nonplayer characters. And it harnesses a number of NVIDIA Omniverse and AI technologies, including NeMo, Riva and Audio2Face.
我們正在將生成式人工智慧引入遊戲。在 Computex 上,我們發布了用於遊戲的 NVIDIA Avatar Cloud Engine(ACE)、客製化 AI 模型和代工服務。開發人員可以使用此服務為非玩家角色帶來智慧。它利用了多項 NVIDIA Omniverse 和 AI 技術,包括 NeMo、Riva 和 Audio2Face。
Now moving to Professional Visualization. Revenue of $375 million was up 28% sequentially and down 24% year-on-year. The Ada architecture ramp drove strong growth in Q2, rolling out initially in laptop workstations with a refresh of desktop workstations coming in Q3. These will include the powerful new RTX systems with up to 4 NVIDIA RTX 6000 GPUs, providing more than 5,800 teraflops of AI performance and 192 gigabytes of GPU memory.
現在轉向專業視覺化。營收為 3.75 億美元,季增 28%,年減 24%。 Ada 架構的提升推動了第二季的強勁成長,最初在筆記型電腦工作站中推出,第三季將更新桌面工作站。其中包括功能強大的全新 RTX 系統,配備多達 4 個 NVIDIA RTX 6000 GPU,可提供超過 5,800 teraflops 的 AI 效能和 192 GB 的 GPU 記憶體。
They can be configured with NVIDIA AI Enterprise or NVIDIA Omniverse Enterprise. We also announced 3 new desktop workstation GPUs based on the Ada generation -- the NVIDIA RTX 5000, 4500 and 4000 -- offering up to 2x the RT core throughput and up to 2x faster AI training performance compared to the previous generation.
它們可以配置 NVIDIA AI Enterprise 或 NVIDIA Omniverse Enterprise。我們也發表了 3 款基於 Ada 世代的全新桌上型工作站 GPU(NVIDIA RTX 5000、4500 和 4000),與上一代相比,RT 核心吞吐量最高可達 2 倍,AI 訓練效能最高提升 2 倍。
In addition to traditional workloads such as 3D design and content creation, new workloads in generative AI, large language model development and data science are expanding the opportunity in pro visualization for our RTX technology. One of the key themes in Jensen's keynote at SIGGRAPH earlier this month was the convergence of graphics and AI.
除了 3D 設計和內容創建等傳統工作負載之外,生成式 AI、大型語言模型開發和資料科學領域的新工作負載,正在為我們的 RTX 技術擴大專業視覺化的機會。Jensen 本月稍早在 SIGGRAPH 上發表的主題演講,其中一個關鍵主題就是圖形與人工智慧的融合。
This is where NVIDIA Omniverse is positioned. Omniverse is OpenUSD's native platform. OpenUSD is a universal interchange that is quickly becoming the standard for the 3D world, much like HTML is the universal language for the 2D (inaudible). Together, Adobe, Apple, Autodesk, Pixar and NVIDIA form the Alliance for OpenUSD. Our mission is to accelerate OpenUSD's development and adoption. We announced new and upcoming Omniverse Cloud APIs, including RunUSD and ChatUSD, to bring generative AI to OpenUSD workloads.
這就是 NVIDIA Omniverse 的定位。Omniverse 是 OpenUSD 的原生平台。OpenUSD 是一種通用交換格式,正迅速成為 3D 世界的標準,就像 HTML 是 2D 世界的通用語言一樣(聽不清楚)。Adobe、Apple、Autodesk、Pixar 和 NVIDIA 共同組成了 OpenUSD 聯盟。我們的使命是加速 OpenUSD 的開發和採用。我們宣布了新的和即將推出的 Omniverse Cloud API,包括 RunUSD 和 ChatUSD,將生成式 AI 帶入 OpenUSD 工作負載。
Moving to Automotive. Revenue was $253 million, down 15% sequentially and up 15% year-on-year. Solid year-on-year growth was driven by the ramp of self-driving platforms based on the NVIDIA DRIVE Orin SoC with a number of new energy vehicle makers. The sequential decline reflects lower overall automotive demand, particularly in China.
轉向汽車業務。營收為 2.53 億美元,季減 15%,年增 15%。穩健的年增成長得益於多家新能源汽車製造商基於 NVIDIA DRIVE Orin SoC 的自動駕駛平台放量。環比下降反映出整體汽車需求下降,尤其是在中國。
We announced a partnership with MediaTek to bring drivers and passengers new experiences inside the car. MediaTek will develop automotive SoCs and integrate a new product line of NVIDIA GPU chiplets. The partnership covers a wide range of vehicle segments from luxury to entry level.
我們宣布與聯發科技合作,為駕駛和乘客帶來全新的車內體驗。聯發科技將開發汽車 SoC 並整合 NVIDIA GPU 小晶片的新產品線。此次合作涵蓋從豪華車到入門級的廣泛汽車領域。
Moving to the rest of the P&L. GAAP gross margins expanded to 70.1% and non-GAAP gross margin to 71.2%, driven by higher Data Center sales. Our Data Center products include a significant amount of software and complexity, which is also helping drive our gross margin.
轉向損益表的其餘部分。在資料中心銷售額增加的推動下,GAAP 毛利率擴大至 70.1%,非 GAAP 毛利率擴大至 71.2%。我們的資料中心產品包含大量軟體和複雜性,這也有助於提高我們的毛利率。
Sequential GAAP operating expenses were up 6% and non-GAAP operating expenses were up 5%, primarily reflecting increased compensation and benefits. We returned approximately $3.4 billion to shareholders in the form of share repurchases and cash dividends. Our Board of Directors has just approved an additional $25 billion in stock repurchases to add to our remaining $4 billion of authorization as of the end of Q2.
以 GAAP 計算的營業費用連續增加 6%,非 GAAP 營業費用增加 5%,主要反映了薪資和福利的增加。我們以股票回購和現金股利的形式向股東返還約 34 億美元。我們的董事會剛剛批准了額外 250 億美元的股票回購,以增加我們截至第二季末剩餘的 40 億美元的授權。
Let me turn to the outlook for the third quarter of fiscal 2024. Demand for our Data Center platform for AI is tremendous and broad-based across industries and customers. Our demand visibility extends into next year. Our supply over the next several quarters will continue to ramp as we lower cycle times and work with our supply partners to add capacity.
讓我談談 2024 財年第三季的展望。市場對我們 AI 資料中心平台的需求龐大,且廣泛涵蓋各產業和客戶。我們的需求可見度延伸到明年。隨著我們縮短週期時間並與供應合作夥伴合作增加產能,我們在未來幾季的供應將持續增加。
Additionally, the new L40S GPU will help address the growing demand for many types of workloads from cloud to enterprise. For Q3, total revenue is expected to be $16 billion, plus or minus 2%. We expect sequential growth to be driven largely by Data Center, with Gaming and Pro Vis also contributing.
此外,新的 L40S GPU 將有助於滿足從雲端到企業的多種類型工作負載不斷增長的需求。第三季總收入預計為 160 億美元,上下浮動 2%。我們預計環比成長將主要由資料中心推動,遊戲和 Pro Vis 也將做出貢獻。
GAAP and non-GAAP gross margins are expected to be 71.5% and 72.5%, respectively, plus or minus 50 basis points. GAAP and non-GAAP operating expenses are expected to be approximately $2.95 billion and $2 billion, respectively. GAAP and non-GAAP other income and expenses are expected to be an income of approximately $100 million, excluding gains and losses from non-affiliated investments. GAAP and non-GAAP tax rates are expected to be 14.5%, plus or minus 1%, excluding any discrete items. Further financial details are included in the CFO commentary and other information available on our IR website.
GAAP 和非 GAAP 毛利率預計分別為 71.5% 和 72.5%,上下浮動 50 個基點。 GAAP 和非 GAAP 營運費用預計分別約為 29.5 億美元和 20 億美元。 GAAP 和非 GAAP 其他收入和支出預計約為 1 億美元,不包括非關聯投資的損益。 GAAP 和非 GAAP 稅率預計為 14.5%,上下浮動 1%(不包括任何離散項目)。更多財務細節包含在 CFO 評論和我們的 IR 網站上提供的其他資訊中。
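Since the guidance above is quoted as a midpoint plus or minus a tolerance (percent of the midpoint for revenue, basis points for margins), a small sketch can make the implied intervals explicit; the figures are as stated on the call:

```python
def pct_range(midpoint, tol_pct):
    """Range for guidance quoted as midpoint +/- tol_pct percent of itself."""
    delta = midpoint * tol_pct / 100
    return midpoint - delta, midpoint + delta

def bps_range(midpoint_pct, tol_bps):
    """Range for a margin quoted as midpoint +/- tol_bps basis points."""
    delta = tol_bps / 100.0  # 100 basis points = 1 percentage point
    return midpoint_pct - delta, midpoint_pct + delta

# Q3 FY2024 guidance as stated on the call:
revenue = pct_range(16.0, 2)       # total revenue, $ billions: ~(15.68, 16.32)
gaap_gm = bps_range(71.5, 50)      # GAAP gross margin, %: (71.0, 72.0)
non_gaap_gm = bps_range(72.5, 50)  # non-GAAP gross margin, %: (72.0, 73.0)

print(revenue, gaap_gm, non_gaap_gm)
```

Note the two conventions differ: a 2% tolerance on revenue scales with the midpoint, while 50 basis points on a margin is an absolute half-point band.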
In closing, let me highlight some upcoming events for the financial community. We will attend the Jefferies Tech Summit on August 30 in Chicago, the Goldman Sachs Conference on September 5 in San Francisco, the Evercore Semiconductor Conference on September 6 as well as the Citi Tech Conference on September 7, both in New York, and BofA Virtual AI Conference on September 11. Our earnings call to discuss the results of our third quarter of fiscal 2024 is scheduled for Tuesday, November 21.
最後,讓我介紹一下金融界即將舉行的幾場活動。我們將參加 8 月 30 日在芝加哥舉行的 Jefferies 科技高峰會、9 月 5 日在舊金山舉行的高盛會議、9 月 6 日在紐約舉行的 Evercore 半導體會議、9 月 7 日同樣在紐約舉行的花旗科技會議,以及 9 月 11 日的美國銀行虛擬 AI 會議。我們討論 2024 財年第三季業績的財報電話會議訂於 11 月 21 日(星期二)舉行。
Operator, we will now open the call for questions. Could you please poll for questions for us? Thank you.
接線員,我們現在開始開放提問。請您為我們徵集問題,謝謝。
Operator
Operator
(Operator Instructions) We'll take our first question from Matt Ramsay with TD Cowen.
(操作員說明)我們將接受 TD Cowen 的 Matt Ramsay 提出的第一個問題。
Matthew D. Ramsay - MD & Senior Research Analyst
Matthew D. Ramsay - MD & Senior Research Analyst
Obviously, remarkable results. Jensen, I wanted to ask a question of you regarding the really quickly emerging application of large model inference. So I think it's pretty well understood by the majority of investors that you guys have very much a lockdown share of the training market. A lot of the smaller market -- smaller model inference workloads have been done on ASICs or CPUs in the past.
顯然,成果非常出色。Jensen,我想問您一個關於大型模型推理這項快速興起的應用的問題。我認為大多數投資者都很清楚,你們幾乎完全掌握了訓練市場的份額。過去,許多較小的市場——較小的模型推理工作負載——是在 ASIC 或 CPU 上完成的。
And with many of these GPT and other really large models, there's this new workload that's accelerating super-duper quickly on large model inference. And I think your Grace Hopper Superchip products and others are pretty well aligned for that. But could you maybe talk to us about how you're seeing the inference market segment between small model inference and large model inference, and how your product portfolio is positioned for that?
而隨著許多這類 GPT 和其他超大型模型的出現,大型模型推理這項新的工作負載正以極快的速度成長。我認為你們的 Grace Hopper Superchip 等產品非常適合這一點。但您能否與我們談談,您如何看待小模型推理和大型模型推理之間的市場區隔,以及你們的產品組合如何針對此定位?
Jen-Hsun Huang - Founder, CEO, President and Director
Jen-Hsun Huang - Founder, CEO, President and Director
Yes, thanks a lot. So let's take a quick step back. These large language models are fairly -- are pretty phenomenal. It does several things, of course. It has the ability to understand unstructured language. But at its core, what it has learned is the structure of human language. And it has encoded -- or within it -- compressed within it a large amount of human knowledge that it has learned by the corpuses that it's studied.
是的,非常感謝。那麼讓我們快速退後一步。這些大型語言模型相當驚人。當然,它可以做幾件事。它具有理解非結構化語言的能力。但其核心是,它學到的是人類語言的結構。它編碼了——或者說在其中——壓縮了透過所研究的語料庫學到的大量人類知識。
What happens is you create these large language models and you create as large as you can, and then you derive from it smaller versions of the model, essentially teacher-student models. It's a process called distillation. And so when you see these smaller models, it's very likely the case that they were derived from or distilled from or learned from larger models, just as you have professors and teachers and students and so on and so forth.
所發生的情況是,您創建了這些大型語言模型,並且創建了盡可能大的語言模型,然後從中派生出模型的較小版本,本質上是師生模型。這是一個稱為蒸餾的過程。因此,當你看到這些較小的模型時,很可能它們是從較大的模型衍生、精煉或學習的,就像你有教授、老師和學生等等。
And you're going to see this going forward. And so you start from a very large model, and it has a large amount of generality and generalization and what's called zero-shot capability. And so for a lot of applications and questions or skills that you haven't trained it specifically on, these large language models miraculously has the capability to perform them. That's what makes it so magical.
未來你將會看到這一點。所以你從一個非常大的模型開始,它具有大量的通用性和泛化性以及所謂的零樣本能力。因此,對於許多您沒有專門訓練過的應用程式、問題或技能,這些大型語言模型奇蹟般地有能力執行它們。這就是它如此神奇的原因。
On the other hand, you would like to have these capabilities in all kinds of computing devices, and so what you do is you distill them down. These smaller models might have excellent capabilities on a particular skill, but they don't generalize as well. They don't have what is called as good zero-shot capabilities. And so they all have their own unique capabilities, but you start from very large models.
另一方面,您希望在各種運算裝置中都具備這些功能,因此您要做的就是將它們蒸餾出來。這些較小的模型可能在特定技能上具有出色的能力,但它們的泛化能力不如大型模型,不具備所謂良好的零樣本能力。因此,它們都有自己獨特的能力,但一切都從非常大的模型開始。
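The teacher-student distillation Jensen describes is commonly implemented by training the small model to match the large model's temperature-softened output distribution, for example by minimizing the KL divergence between the two. The sketch below is a generic illustration of that loss, not NVIDIA's specific recipe; the logits and temperature are made up:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities, optionally softened by a temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical logits over a 4-token vocabulary:
teacher = [3.0, 1.0, 0.2, -1.0]
student = [2.5, 1.2, 0.1, -0.8]

loss = distillation_loss(teacher, student)
# The loss is small when the student's distribution tracks the teacher's,
# and zero only when the two distributions match exactly.
assert loss >= 0.0
assert distillation_loss(teacher, teacher) < 1e-12
```

A higher temperature spreads probability mass over more tokens, which is what lets the student learn the teacher's "dark knowledge" about near-miss answers rather than only its top prediction.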
Operator
Operator
Next, we'll go to Vivek Arya with BofA Securities.
接下來,我們將採訪美國銀行證券公司的 Vivek Arya。
Vivek Arya - MD in Equity Research & Senior Semiconductor Analyst
Vivek Arya - MD in Equity Research & Senior Semiconductor Analyst
Just had a quick clarification and a question. Colette, if you could please clarify how much incremental supply do you expect to come online in the next year? Do you think it's up 20%, 30%, 40%, 50%? So just any sense of how much supply, because you said it's growing every quarter.
先做一個快速的澄清,然後提問。Colette,能否請您澄清一下,您預計明年上線的增量供應有多少?您認為是增加 20%、30%、40% 還是 50%?請給我們一些關於供應量的概念,因為您說它每個季度都在增長。
And then, Jensen, the question for you is when we look at the overall hyperscaler spending, that pie is not really growing that much. So what is giving you the confidence that they can continue to carve out more of that pie for generative AI? Just give us your sense of how sustainable is this demand as we look over the next 1 to 2 years?
然後,Jensen,想問您的問題是,當我們審視超大規模業者的整體支出時,這塊蛋糕其實並沒有成長那麼多。那麼,是什麼讓您有信心,認為他們可以繼續從這塊蛋糕中為生成式 AI 劃分出更多份額?請告訴我們,展望未來 1 到 2 年,您認為這種需求的可持續性如何?
So if I take your implied Q3 outlook of Data Center, $12 billion, $13 billion, what does that say about how many servers are already AI accelerated? Where is that going? So just give us some confidence that the growth that you are seeing is sustainable into the next 1 to 2 years.
因此,如果我採用您隱含的資料中心第三季展望,即 120 億至 130 億美元,這對於已經採用 AI 加速的伺服器數量意味著什麼?這個趨勢將走向何方?因此,請給我們一些信心,讓我們相信您所看到的成長在未來 1 到 2 年內是可持續的。
Colette M. Kress - Executive VP & CFO
Colette M. Kress - Executive VP & CFO
So thanks for that question regarding our supply. Yes, we do expect to continue increasing ramping our supply over the next quarters as well as into next fiscal year. In terms of percent, it's not something that we have here. It is a work across so many different suppliers, so many different parts, of building an HGX and many of our other new products that are coming to market. But we are very pleased with both the support that we have with our suppliers and the long time that we have spent with them improving the supply.
感謝您提出有關我們供應的問題。是的,我們確實預計在接下來的幾個季度以及下一財年繼續增加我們的供應。就百分比而言,我們這裡沒有。建造 HGX 以及我們即將推向市場的許多其他新產品需要眾多不同供應商、眾多不同部件的共同努力。但我們對供應商的支持以及與他們一起改善供應的長期合作感到非常滿意。
Jen-Hsun Huang - Founder, CEO, President and Director
Jen-Hsun Huang - Founder, CEO, President and Director
The world has something along the lines of about $1 trillion worth of data centers installed in the cloud, in enterprise and otherwise. And that $1 trillion of data centers is in the process of transitioning into accelerated computing and generative AI. We're seeing 2 simultaneous platform shifts at the same time.
全世界在雲端、企業和其他地方安裝了價值約 1 兆美元的資料中心。價值 1 兆美元的資料中心正在轉型為加速運算和產生人工智慧。我們同時看到兩個同步平台轉換。
One is accelerated computing. And the reason for that is because it's the most cost-effective, most energy-effective and the most performant way of doing computing now. So what you're seeing -- and then all of a sudden, enabled by generative AI -- enabled by accelerated computing, generative AI came along.
一是加速計算。原因是它是目前最具成本效益、最節能和最高效的計算方式。所以你所看到的——然後突然之間,在生成式人工智慧的支持下——在加速運算的支援下,生成式人工智慧出現了。
And this incredible application now gives everyone 2 reasons to transition, to do a platform shift from general purpose computing, the classical way of doing computing, to this new way of doing computing, accelerated computing. It's about $1 trillion worth of data centers, call it, $0.25 trillion of capital spend each year.
這個令人難以置信的應用程式現在為每個人提供了兩個轉型的理由,從通用運算(傳統的運算方式)到這種新的運算方式(加速運算)進行平台轉換。全球約有價值 1 兆美元的資料中心,每年的資本支出約為 0.25 兆美元。
You're seeing the data centers around the world are taking that capital spend and focusing it on the 2 most important trends of computing today, accelerated computing and generative AI. And so I think this is not a near-term thing. This is a long-term industry transition, and we're seeing these 2 platform shifts happening at the same time.
您將看到世界各地的資料中心正在將資本支出集中在當今計算的兩個最重要的趨勢上,即加速運算和產生人工智慧。所以我認為這不是近期的事。這是一個長期的產業轉型,我們看到這兩個平台的轉變同時發生。
Operator
Operator
Next, we go to Stacy Rasgon with Bernstein Research.
接下來,我們將採訪伯恩斯坦研究中心的史黛西‧拉斯貢 (Stacy Rasgon)。
Stacy Aaron Rasgon - Senior Analyst
Stacy Aaron Rasgon - Senior Analyst
I was wondering, Colette, if you could tell me like how much of Data Center in the quarter, maybe even the guide, is like systems versus GPU, like DGX versus just the H100. What I'm really trying to get at is how much is like pricing or content or however you want to define that versus units actually driving the growth going forward. Can you give us any color around that?
Colette,我想知道您能否告訴我本季度的資料中心有多少,甚至可能是指南,就像系統與 GPU,就像 DGX 與 H100 一樣。我真正想要了解的是定價或內容有多少,或者你想如何定義它,與實際推動未來成長的單位相比。你能給我們一些關於它的顏色嗎?
Colette M. Kress - Executive VP & CFO
Colette M. Kress - Executive VP & CFO
Sure, Stacy. Let me help. Within the quarter, our HGX systems were a very significant part of our Data Center revenue as well as the Data Center growth that we had seen. Those systems include the HGX with our Hopper architecture but also our Ampere architecture. Yes, we are still selling both of these architectures in the market.
當然,史黛西。讓我幫忙。在本季度內,我們的 HGX 系統是我們資料中心以及我們所看到的資料中心成長的非常重要的一部分。這些系統包括我們的 Hopper 架構的 HGX 以及我們的 Ampere 架構。是的,我們仍在市場上銷售這兩種架構。
Now when you think about what that means, the unit volume of both of these systems is, of course, growing quite substantially, and that is driving the revenue increases. So both of these things are the drivers of the revenue inside Data Center.
現在,當您考慮這一點時,這對於兩個系統作為一個整體意味著什麼,當然,這兩個系統都在大幅增長。這推動了收入的成長。因此,這兩件事都是資料中心內部收入的驅動因素。
Our DGXs are always a portion of additional systems that we will sell. Those are great opportunities for enterprise customers and many other different types of customers that we're seeing, even in our consumer Internet companies.
我們的 DGX 始終是我們將銷售的附加系統的一部分。對於企業客戶和我們看到的許多其他不同類型的客戶(甚至是我們的消費網路公司)來說,這些都是巨大的機會。
Also important is the software that we sell together with our DGXs, but that's a portion of our sales. As for the rest of the GPUs, we have new GPUs coming to market that we talked about, the L40S, and they will add continued growth going forward. But again, the largest driver of our revenue within this last quarter was definitely the HGX systems.
與我們隨 DGX 一起銷售的軟體相結合也很重要,但這是我們正在進行的銷售的一部分。其餘的 GPU,我們討論過的新 GPU L40S 即將上市,它們將在未來帶來持續成長。但同樣,上個季度我們營收的最大動力無疑是 HGX 系統。
Jen-Hsun Huang - Founder, CEO, President and Director
Jen-Hsun Huang - Founder, CEO, President and Director
And Stacy, if I could just add something. You say it's H100, and I know you have a mental image of it in your mind. But the H100 is 35,000 parts, 70 pounds, nearly 1 trillion transistors in combination. It takes a robot to build -- well, many robots to build, because it's 70 pounds to lift. And it takes a supercomputer to test a supercomputer.
史黛西,如果我可以補充一些東西的話。你說它是H100,我知道你知道你心目中的形像是什麼。但 H100 由 35,000 個零件、70 磅、近 1 兆個電晶體組合而成。這需要一個機器人來建造——嗯,需要很多機器人來建造,因為它需要 70 磅才能抬起。測試超級電腦需要超級電腦。
And so these things are technology marvels and the manufacturing of them is really intensive. And so I think we call it H100 as if it's a chip that comes off of a fab, but H100s go out really as HGXs sent to the world's hyperscalers and they're really, really quite large system components, if you will.
因此,這些東西都是科技奇蹟,而且它們的製造確實非常密集。因此,我認為我們將其稱為H100,就好像它是從晶圓廠生產的晶片一樣,但H100 確實是作為HGX 發送給世界各地的超大規模企業而推出的,如果你願意的話,它們確實是非常非常大的系統組件。
Operator
Operator
Next, we go to Mark Lipacis with Jefferies.
接下來,我們將與 Jefferies 一起拜訪 Mark Lipacis。
Mark John Lipacis - MD & Senior Equity Research Analyst
Mark John Lipacis - MD & Senior Equity Research Analyst
Congrats on the success. Jensen, it seems like a key part of the success -- your success in the market is delivering the software ecosystem along with the chip and the hardware platform. And I had a 2-part question on this. I was wondering if you could just help us understand the evolution of your software ecosystem, the critical elements. And is there a way to quantify your lead on this dimension, like how many person years you've invested in building it? And then part 2, I was wondering if you would care to share with us your view on the -- what percentage of the value of the NVIDIA platform is hardware differentiation versus software differentiation?
恭喜你成功了。 Jensen,這似乎是成功的關鍵部分——您在市場上的成功在於提供軟體生態系統以及晶片和硬體平台。我對此有一個由兩個部分組成的問題。我想知道您是否可以幫助我們了解您的軟體生態系統的演變以及關鍵要素。有沒有一種方法可以量化您在這個維度上的領先地位,例如您投入了多少人年來構建它?然後是第 2 部分,我想知道您是否願意與我們分享您對以下問題的看法:硬體差異化與軟體差異化在 NVIDIA 平台的價值中所佔的百分比是多少?
Jen-Hsun Huang - Founder, CEO, President and Director
Jen-Hsun Huang - Founder, CEO, President and Director
Yes, Mark, I really appreciate the question. Let me see if I could use some metrics. So we have a runtime called NVIDIA AI Enterprise. This is one part of our software stack. And this is, if you will, the runtime that just about every company uses for the end-to-end of machine learning, from data processing, to the training of any model you like on any framework you like, to inference and deployment, to scaling it out into a data center. It could be a scale-out for a hyperscale data center. It could be a scale-out for an enterprise data center, for example, on VMware.
是的,馬克,我真的很感激這個問題。讓我看看是否可以使用一些指標。因此,我們有一個名為 NVIDIA AI Enterprise 的運行時。這是我們軟體堆疊的一部分。如果你願意的話,這就是幾乎每個公司用於端到端機器學習的運行時,從數據處理,到你喜歡在任何你喜歡做的框架上訓練任何模型,推理和部署,將其擴展到數據中心。它可能是超大規模資料中心的橫向擴展。它可以是企業資料中心的橫向擴展,例如在 VMware 上。
You can do this on any of our GPUs. We have hundreds of millions of GPUs in the field and millions of GPUs in the cloud, in just about every single cloud. And it runs in a single GPU configuration as well as multi-GPU and multi-node configurations. It also supports multiple sessions or multiple computing instances per GPU.
您可以在我們的任何 GPU 上執行此操作。我們在現場擁有數億個 GPU,在雲端擁有數百萬個 GPU,幾乎在每個雲端中都有。它在單 GPU 配置以及每個計算或多節點的多 GPU 配置中運行。每個 GPU 還具有多個會話或多個運算實例。
So it scales from multiple instances per GPU, to multiple GPUs, to multiple nodes, to entire data center scale. This runtime called NVIDIA AI Enterprise has something like 4,500 software packages and software libraries, with something like 10,000 dependencies among each other. And that runtime is, as I mentioned, continuously updated and optimized for our installed base, for our stack.
因此,從每個 GPU 多個執行個體到多個 GPU、多個節點到整個資料中心規模。因此,這個名為 NVIDIA AI Enterprise 的運行時擁有大約 4,500 個軟體包、軟體庫,並且彼此之間具有大約 10,000 個依賴項。正如我所提到的,運行時間針對我們的安裝基礎和堆疊不斷更新和最佳化。
And that's just one example of what it would take to get accelerated computing to work. The number of code combinations and types of application combinations is really quite insane. And it's taken us 2 decades to get here. But the elements of our company, if you will, are several.
這只是加速計算發揮作用所需的範例之一。程式碼組合的數量和應用程式組合的類型確實相當瘋狂。我們花了20年才走到這一步。但如果你願意的話,我認為我們公司的要素可能有幾個。
I would say, number 1 is architecture. The flexibility, the versatility and the performance of our architecture make it possible for us to do all the things that I just said, from data processing to training to inference, from preprocessing of the data before you do the inference to postprocessing of the data, to tokenizing of languages so that you can then train with them. The workflow is much more intense than just training or inference.
我想說,第一是建築。我們架構的靈活性、多功能性和性能使我們能夠完成我剛才所說的所有事情,從數據處理到訓練到推理,在推理之前對數據進行預處理,到推理的後處理。進行標記,以便您可以用它進行訓練。工作流程的數量比訓練或推理密集得多。
But anyways, that's where we're focused, and it's fine. But when people actually use these computing systems, it really requires a lot of applications. And so the combination of capabilities in our architecture makes it possible for us to deliver the lowest cost of ownership. And the reason for that is because we accelerate so many different things.
但無論如何,這就是我們關注的焦點,這很好。但當人們實際使用這些計算系統時,需要大量的應用程式。因此,我們的架構組合使我們能夠提供最低的擁有成本。原因是我們加速了許多不同的事情。
The second characteristic of our company is the installed base. You have to ask yourself, why is it that all the software developers come to our platform? And the reason for that is because software developers seek a large installed base so that they can reach the largest number of end users, so that they could build a business or get a return on the investments that they make.
我們公司的第二個特點是安裝基礎。你要問自己,為什麼所有的軟體開發者都會來到我們的平台?原因是軟體開發人員尋求龐大的安裝基礎,以便能夠接觸到最大數量的最終用戶,從而可以建立業務或獲得投資回報。
And then the third characteristic is reach. We're in the cloud today, both for public cloud, public-facing cloud, because we have so many customers that use it -- so many developers and customers that use our platform. CSPs are delighted to put it up in the cloud. They use it for internal consumption to develop and train and to operate recommender systems or search or data processing engines and whatnot, all the way to training and inference.
第三個特徵是覆蓋範圍。今天我們在雲端中,無論是公有雲還是面向大眾的雲,因為我們有這麼多的客戶使用它——這麼多的開發人員和客戶使用我們的平台。通訊服務提供者很高興將其放置在雲端。他們將其用於內部消費來開發、訓練和操作推薦系統或搜尋或資料處理引擎等等,一直到訓練和推理。
And so we're in the cloud, we're in enterprise. Yesterday, we had a very big announcement. It's really worthwhile to take a look at that. VMware is the operating system of the world's enterprise. And we've been working together for several years now, and together, we're going to bring generative AI to the world's enterprises all the way out to the edge.
所以我們在雲端中,我們在企業中。昨天,我們宣布了一個非常重大的消息。確實值得一看。 VMware 是全球企業的作業系統。我們已經合作了好幾年了,我們將齊心協力,將生成式人工智慧帶給世界各地的企業,直到邊緣。
And so reach is another reason. And because of reach, all of the world's system makers are anxious to put NVIDIA's platform in their systems. And so we have a very broad distribution from all of the world's OEMs and ODMs and so on and so forth because of our reach.
因此,影響力是另一個原因。由於影響範圍廣,全球所有系統製造商都渴望將 NVIDIA 的平台放入他們的系統中。因此,由於我們的影響力,我們擁有來自世界各地的 OEM 和 ODM 等的非常廣泛的分銷。
And then lastly, because of our scale and velocity, we're able to sustain this really complex stack of software and hardware, networking and compute and across all of these different usage models and different computing environments. And we're able to do all this while accelerating the velocity of our engineering.
最後,由於我們的規模和速度,我們能夠在所有這些不同的使用模型和不同的計算環境中維持這個非常複雜的軟體和硬體、網路和計算堆疊。我們能夠在加快工程速度的同時完成這一切。
We used to introduce a new architecture about every 2 years. Now we're introducing a new architecture, a new product, just about every 6 months. And so these properties make it possible for the ecosystem to build their company and their business on top of us. And so those, in combination, make us special.
我們似乎每兩年就會引入一種新架構。現在我們大約每 6 個月就會推出一種新架構、一款新產品。因此,這些特性使生態系統能夠在我們之上建立他們的公司和業務。因此,這些因素結合在一起,使我們變得與眾不同。
Operator
Operator
Next, we'll go to Atif Malik with Citi.
接下來,我們將與 Citi 一起前往 Atif Malik。
Atif Malik - Director and Semiconductor Capital Equipment & Specialty Semiconductor Analyst
Atif Malik - Director and Semiconductor Capital Equipment & Specialty Semiconductor Analyst
Great job on the results and outlook. Colette, I have a question on the CoWoS-less L40S that you guys talked about. Any idea how much of the supply tightness can L40S help with? And if you can talk about the incremental profitability or gross margin contribution from this product?
在結果和前景方面做得很好。 Colette,我有一個關於你們談到的無 CoWoS L40S 的問題。您知道 L40S 能在多大程度上緩解供應緊張問題嗎?您能否談談該產品的增量獲利能力或毛利率貢獻?
Jen-Hsun Huang - Founder, CEO, President and Director
Jen-Hsun Huang - Founder, CEO, President and Director
Yes. Atif, let me take that for you. The L40S is really designed for a different type of application. H100 is designed for large-scale language models and processing very large models and a great deal of data. And so that's not L40S' focus. L40S' focus is to be able to fine-tune models, fine-tune pretrained models, and it'll do that incredibly well. It has a Transformer Engine, it's got a lot of performance, and you can get multiple GPUs in a server.
是的。阿蒂夫,讓我幫您拿。 L40S 確實是為不同類型的應用而設計的。 H100 專為大規模語言模型而設計,僅處理非常大的模型和大量資料。所以這不是 L40S 的重點。 L40S 的重點是能夠微調模型,微調預訓練模型,它會做得非常好。它有一個轉換引擎,具有很高的效能,您可以在伺服器中獲得多個 GPU。
It's designed for hyperscale scale-out, meaning it's easy to install L40S servers into the world's hyperscale data centers. It comes in a standard rack, standard server, and everything about it is standard. And so it's easy to install. L40S also is, with the software stack around it, and along with BlueField-3 and all the work that we did with VMware and the work that we did with Snowflake and ServiceNow and so many other enterprise partners, L40S is designed for the world's enterprise IT systems. And that's the reason why HPE, Dell, and Lenovo and some 20 other system makers building about 100 different configurations of enterprise servers are going to work with us to take generative AI to the world's enterprise.
它專為超大規模橫向擴展而設計,這意味著可以輕鬆地將 L40S 伺服器安裝到世界超大規模資料中心。它配備標準機架、標準伺服器,一切都是標準的。因此它很容易安裝。 L40S 及其周圍的軟體堆疊以及 BlueField-3 以及我們與 VMware 所做的所有工作以及我們與 Snowflake 和 ServiceNow 以及許多其他企業合作夥伴所做的工作一起,L40S 是為全球企業而設計的IT系統。這就是為什麼 HPE、戴爾、聯想以及其他大約 20 家系統製造商建立了大約 100 種不同配置的企業伺服器,將與我們合作,將生成式 AI 引入全球企業。
And so L40S is really designed for a different type of scale-out, if you will. It's, of course, large language models. It's, of course, generative AI, but it's a different use case. And so the L40S is off to a great start. And the world's enterprises and hyperscalers are really clamoring to get L40S deployed.
因此,如果您願意的話,L40S 確實是為不同類型的橫向擴展而設計的。當然,這是大型語言模型。當然,它是生成式人工智慧,但它是一個不同的用例。因此,L40S 將是一個好的開始。全球企業和超大規模企業都迫切希望部署 L40S。
Operator
Operator
Next, we'll go to Joe Moore with Morgan Stanley.
接下來,我們將採訪摩根士丹利的喬摩爾。
Joseph Lawrence Moore - Executive Director
Joseph Lawrence Moore - Executive Director
I guess the thing about these numbers that's so remarkable to me, talking to some of your customers, is the amount of demand that remains unfulfilled. As good as these numbers are -- you more than tripled your revenue in a couple of quarters -- there's demand in some cases for multiples of what people are getting.
我想,透過與您的一些客戶交談,這些數字對我來說如此引人注目的是仍未滿足的需求量。儘管這些數字很好,但在幾個季度內您的收入增加了兩倍多,在某些情況下,人們對產品的需求是數倍以上。
So can you talk about that? How much unfulfilled demand do you think there is? And you talked about visibility extending into next year. Do you have line of sight into when you get to see supply/demand equilibrium here?
那你能談談這個嗎?您認為還有多少未被滿足的需求?您談到了明年的可見性。當您看到這裡的供需平衡時,您能看到嗎?
Jen-Hsun Huang - Founder, CEO, President and Director
Jen-Hsun Huang - Founder, CEO, President and Director
Yes. We have excellent visibility through the year and into next year. And we're already planning the next-generation infrastructure with the leading CSPs and data center builders. The demand -- the easiest way to think about the demand is -- the world is transitioning from general-purpose computing to accelerated computing. That's the easiest way to think about the demand.
是的。我們全年和明年都有良好的能見度。我們已經與領先的通訊服務供應商和資料中心建置者一起規劃下一代基礎設施。需求——思考需求的最簡單方法是——世界正在從通用計算向加速計算過渡。這是考慮需求最簡單的方法。
The best way for companies to increase their throughput, improve their energy efficiency and improve their cost efficiency is to divert their capital budget to accelerated computing and generative AI. Because by doing that, you're going to offload so much workload off of the CPUs that the available CPU capacity in your data center will get boosted. And so what you're seeing companies do now is recognizing this tipping point, recognizing the beginning of this transition, and diverting their capital investment to accelerated computing and generative AI.
公司提高吞吐量、提高能源效率、提高成本效率的最佳方法是將資本預算轉移到加速運算和生成式人工智慧。因為透過這樣做,您將從 CPU 上卸載如此多的工作負載,從而提高資料中心的可用 CPU 數量。因此,你現在看到的公司所做的就是認識到這一點——這裡是轉折點,認識到這一轉變的開始,並將其資本投資轉向加速計算和生成人工智慧。
And so that's probably the easiest way to think about the opportunity ahead of us. This isn't a singular application that is driving the demand, but this is a new computing platform, if you will, a new computing transition that's happening. And data centers all over the world are responding to this and shifting in a broad-based way.
因此,這可能是思考我們面前的機會的最簡單的方法。這不是一個推動需求的單一應用程序,但這是一個新的計算平台,如果你願意的話,這是一個正在發生的新的計算轉變。世界各地的資料中心正在對此做出反應並進行廣泛的轉變。
Operator
Operator
Next, we go to Toshiya Hari with Goldman Sachs.
接下來,我們和高盛一起去Toshiya Hari。
Toshiya Hari - MD
Toshiya Hari - MD
I had one quick clarification question for Colette and then another one for Jensen. Colette, I think last quarter, you had said CSPs were about 40% of your Data Center revenue, Consumer Internet at 30%, Enterprise, 30%. Based on your remarks, it sounded like CSPs and Consumer Internet may have been a larger percentage of your business. If you can kind of clarify that or confirm that, that would be super helpful.
我向 Colette 提出了一個快速澄清的問題,然後向 Jensen 提出了另一個問題。 Colette,我記得上個季度,您曾說過 CSP 約佔資料中心收入的 40%,消費網路佔 30%,企業佔 30%。根據您的評論,聽起來 CSP 和消費者互聯網可能在您的業務中所佔的比例更大。如果您能澄清或確認這一點,那將非常有幫助。
And then Jensen, a question for you. Given your position as the key enabler of AI, the breadth of engagements and the visibility you have into customer projects, I'm curious how confident you are that there will be enough applications or use cases for your customers to generate a reasonable return on their investments. I guess I ask the question because there is a concern out there that there could be a bit of a pause in your demand profile in the out years. Curious if there's enough breadth and depth there to support a sustained increase in your Data Center business going forward.
然後是詹森,問你一個問題。考慮到您作為人工智慧關鍵推動者的地位、參與的廣度以及您對客戶專案的可見性,我很好奇您對有足夠的應用程式或用例為您的客戶帶來合理的回報有多大信心? 。我想我問這個問題是因為有人擔心未來幾年您的需求狀況可能會停頓。很好奇是否有足夠的廣度和深度來支援您的資料中心業務的持續成長。
Colette M. Kress - Executive VP & CFO
Colette M. Kress - Executive VP & CFO
Okay. So thank you, Toshiya, on the question regarding our types of customers that we have in our Data Center business. And we look at it in terms of combining our compute as well as our networking together. Our CSPs, our large CSPs, are contributing a little bit more than 50% of our revenue within Q2. And the next largest category will be our consumer Internet companies. And then the last piece of that will be our enterprise and high-performance computing.
好的。謝謝你,Toshiya,關於我們資料中心業務中的客戶類型的問題。我們從將計算和網路結合在一起的角度來看待它。我們的 CSP,我們的大型 CSP,在第二季貢獻了我們收入的 50% 以上。下一個最大的類別將是我們的消費網路公司。最後一部分將是我們的企業和高效能運算。
Jen-Hsun Huang - Founder, CEO, President and Director
Jen-Hsun Huang - Founder, CEO, President and Director
Toshiya, I'm reluctant to guess about the future, and so I'll answer the question from a first-principles computer science perspective. It has been recognized for some time now that brute-forcing general purpose computing -- using general purpose computing at scale -- is no longer the best way to go forward. It's too energy costly, it's too expensive, and the performance of the applications is too slow.
Toshiya,我不願意猜測未來,所以我會從電腦科學第一原理的角度來回答這個問題。一段時間以來,人們已經認識到通用計算並非如此,而強制通用計算、大規模使用通用計算也不再是前進的最佳方式。它的能源成本太高,太昂貴,而且應用程式的效能太慢。
And finally, the world has a new way of doing it. It's called accelerated computing, and what kicked it into turbocharge is generative AI. But accelerated computing can be used for all kinds of different applications that are already in the data center. And by using it, you offload the CPUs. You save a ton of money -- an order of magnitude in cost and an order of magnitude in energy -- and the throughput is higher. And that's what the industry is really responding to.
最後,世界有了一種新的實現方式。這就是所謂的加速運算,而推動它加速發展的是生成式人工智慧。但加速運算可用於資料中心已有的各種不同應用程式。透過使用它,您可以減輕 CPU 的負擔。您可以節省大量資金、成本、能源和吞吐量提高一個數量級。這就是該行業真正做出的反應。
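The order-of-magnitude framing above can be made concrete with a toy calculation. All numbers below are hypothetical, chosen only for the sketch; they are not figures from the call:

```python
# Toy comparison of the same workload on a general-purpose (CPU-only)
# cluster versus an accelerated cluster. Numbers are hypothetical.
cpu_cost_per_job_usd = 100.0    # assumed cost per job on CPUs
cpu_energy_per_job_kwh = 50.0   # assumed energy per job on CPUs
speedup = 10.0                  # assume a 10x (order of magnitude) gain

# Offloading to accelerators divides both cost and energy per job:
gpu_cost_per_job_usd = cpu_cost_per_job_usd / speedup
gpu_energy_per_job_kwh = cpu_energy_per_job_kwh / speedup
print(gpu_cost_per_job_usd, gpu_energy_per_job_kwh)  # -> 10.0 5.0
```

The same budget then buys roughly 10x the throughput, which is the sense in which diverting capital spend to accelerated computing "boosts" the effective capacity of a data center.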
Going forward, the best way to invest in the data center is to divert the capital investment from general purpose computing and focus it on generative AI and accelerated computing. Generative AI provides a new way of generating productivity, a new way of generating new services to offer to your customers, and accelerated computing helps you save money and save power. And the number of applications is, well, tons. Lots of developers, lots of applications, lots of libraries. It's ready to be deployed.
展望未來,投資資料中心的最佳方式是將資本投資從通用運算轉移到生成式人工智慧和加速運算。生成式 AI 提供了一種提高生產力的新方式、一種產生為客戶提供的新服務的新方式,而加速運算可幫助您節省金錢和電力。申請的數量是成噸的。很多開發人員、很多應用程式、很多函式庫。它已準備好部署。
And so I think the data centers around the world recognize this, that this is the best way to deploy resources, deploy capital going forward for data centers. This is true for the world's clouds, and you're seeing a whole crop of new GPU-specialized cloud service providers. One of the famous ones is CoreWeave, and they're doing incredibly well.
因此,我認為世界各地的資料中心都認識到這一點,這是部署資源、為資料中心部署資本的最佳方式。對於全球雲端來說都是如此,您會看到大量的新 GPU 專業人士—GPU 專業雲端服務供應商。 CoreWeave 是其中著名的公司之一,他們做得非常好。
But you're seeing regional GPU specialist service providers all over the world now. And it's because they all recognize the same thing, that the best way to invest their capital going forward is to put it into accelerated computing and generative AI.
但現在您會看到世界各地都有區域 GPU 專家服務提供者。正是因為他們都認識到同一件事,未來投資資本的最佳方式是將其投入加速運算和產生人工智慧。
But we're also seeing that enterprises want to do that. But in order for enterprises to do it, you have to support the management system, the operating system, the security and software-defined data center approach of enterprises, and that's called VMware. And we've been working several years with VMware to make it possible for VMware to support not just the virtualization of CPUs but a virtualization of GPUs as well as the distributed computing capabilities of GPUs, supporting NVIDIA's BlueField for high-performance networking.
但我們也看到企業希望這樣做。但為了讓企業做到這一點,你必須支援企業的管理系統、作業系統、安全性和軟體定義的資料中心方法,這就是VMware。我們多年來一直與VMware合作,讓VMware不僅支援CPU虛擬化,還支援GPU虛擬化以及GPU的分散式運算能力,支援NVIDIA的BlueField高效能網路。
And all of the generative AI libraries that we've been working on are now going to be offered as a special SKU by VMware's sales force, which is, as we all know, quite large because they reach several hundred thousand VMware customers around the world. And this new SKU is going to be called VMware Private AI Foundation, and it will be what makes this possible for enterprises.
我們一直在開發的所有生成式 AI 庫現在都將由 VMware 銷售人員作為特殊 SKU 提供,眾所周知,該銷售人員規模相當大,因為它們覆蓋了全球數十萬 VMware 客戶。這個新的 SKU 將被稱為 VMware Private AI Foundation。而這將會是一個讓企業成為可能的新SKU。
And in combination with HP, Dell and Lenovo's new server offerings based on L40S, any enterprise could have a state-of-the-art AI data center and be able to engage generative AI. And so I think the answer to that question is that it's hard to predict exactly what's going to happen quarter-to-quarter. But I think the trend is very, very clear now that we're seeing a platform shift.
結合惠普、戴爾和聯想基於 L40S 的新伺服器產品,任何企業都可以擁有最先進的人工智慧資料中心,並且能夠參與生成式人工智慧。因此,我認為這個問題的答案很難準確預測每季會發生什麼。但我認為現在我們看到了平台的轉變,趨勢非常非常明顯。
Operator
Operator
Next, we'll go to Timothy Arcuri with UBS.
接下來,我們將邀請瑞銀集團的 Timothy Arcuri。
Timothy Michael Arcuri - MD and Head of Semiconductors & Semiconductor Equipment
Timothy Michael Arcuri - MD and Head of Semiconductors & Semiconductor Equipment
Can you talk about the attach rate of your networking solutions to your -- to the compute that you're shipping? In other words, is like half of your compute shipping with your networking solutions? More than half, less than half? And is this something that maybe you can use to prioritize allocation of the GPUs?
您能談談您的網路解決方案與您正在運送的計算的連接率嗎?換句話說,您的一半計算是否與網路解決方案一起運輸?超過一半還是少於一半?您是否可以使用它來確定 GPU 分配的優先順序?
Jen-Hsun Huang - Founder, CEO, President and Director
Jen-Hsun Huang - Founder, CEO, President and Director
Well, working backwards, we don't use that to prioritize the allocation of our GPUs. We let customers decide what networking they would like to use. And for the customers that are building very large infrastructure, InfiniBand is, I hate to say it, kind of a no-brainer. And the reason for that is because the efficiency of InfiniBand is so significant. Some 10%, 15%, 20% higher throughput on a $1 billion infrastructure translates to enormous savings.
好吧,反過來看,我們不會用它來決定 GPU 分配的優先權。我們讓客戶決定他們想要使用什麼網路。對於正在建造超大型基礎設施的客戶來說,InfiniBand(我不想這麼說)是理所當然的選擇。原因是 InfiniBand 的效率非常顯著。價值 10 億美元的基礎設施吞吐量提高約 10%、15%、20% 意味著巨大的節省。
Basically, the networking is free. And so if you have a single application, if you will, infrastructure, or it's largely dedicated to large language models or large AI systems, InfiniBand is really, really a terrific choice.
基本上,網路是免費的。因此,如果您有一個應用程式(如果您願意的話)基礎設施,或者它主要致力於大型語言模型或大型人工智慧系統,那麼 InfiniBand 確實是一個絕佳的選擇。
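The "networking is basically free" argument reduces to simple arithmetic. The $1 billion cluster and the 20% gain below are the hypothetical figures from the example above, not reported numbers:

```python
# Illustrative arithmetic only, using the hypothetical example above.
infrastructure_cost_usd = 1_000_000_000  # a $1 billion cluster
throughput_gain_pct = 20                 # high end of the 10%-20% range cited

# Extra throughput from a better fabric is worth roughly this much in
# equivalent additional compute -- which can exceed the fabric's own cost,
# hence "the networking is basically free."
equivalent_compute_value_usd = infrastructure_cost_usd * throughput_gain_pct // 100
print(equivalent_compute_value_usd)  # -> 200000000
```

If the fabric itself costs less than that equivalent compute value, the buyer comes out ahead even before counting energy savings.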
However, if you're hosting a lot of different users and Ethernet is really core to the way you manage your data center, we have an excellent solution there that we just recently announced, called Spectrum-X. We're going to bring some of the capabilities of InfiniBand -- not all of them, but some of them -- to Ethernet, so that within the environment of Ethernet, we can enable you to get excellent generative AI capabilities.
然而,如果您為許多不同的用戶託管,而乙太網路確實是您管理資料中心方式的核心,那麼我們最近剛剛宣布了一個出色的解決方案,它被稱為 Spectrum-X。好吧,如果您願意的話,我們將把 InfiniBand 的功能引入以太網,不是全部,而是部分,這樣我們也可以在以太網環境中,讓您- - 使您獲得卓越的生成式AI能力。
So Spectrum-X is just ramping now. It requires BlueField-3, and it supports our Spectrum-4 Ethernet switch. And the additional performance is really spectacular. BlueField-3, and a whole bunch of software that goes along with it, makes it possible.
所以 Spectrum-X 現在才剛起步。它需要 BlueField-3,並且支援我們的 Spectrum-4 乙太網路交換器。而且附加的性能確實非常驚人。BlueField-3 以及與之配套的一整套軟體使之成為可能。
BlueField, as all of you know, is a project really dear to my heart, and it's off to a tremendous start. I think it's a home run. The concept of in-network computing, of putting a lot of software in the computing fabric, is being realized with BlueField-3, and it is going to be a home run.
如你所知,BlueField 是一個我非常珍惜的項目,而且它只是一個巨大的開始。我認為這是一個全壘打。這就是——網路內計算和將大量軟體放入計算結構中的概念正在透過 BlueField-3 實現,這將是一次全壘打。
Operator
Operator
Our final question comes from the line of Ben Reitzes with Melius.
我們的最後一個問題來自 Ben Reitzes 和 Melius。
Benjamin Alexander Reitzes - MD & Head of Technology Research
Benjamin Alexander Reitzes - MD & Head of Technology Research
My question is with regard to DGX Cloud. Can you talk about the reception that you're seeing and how the momentum is going? And then Colette, can you also talk about your software business? What is the run rate right now and the materiality of that business? And it does seem like it's already helping margins a bit.
我的問題是關於 DGX Cloud 的。您能談談您所看到的反響以及勢頭如何嗎? Colette,您能談談您的軟體業務嗎?目前的運作率以及該業務的重要性是多少?看起來它確實已經對利潤率有幫助。
Jen-Hsun Huang - Founder, CEO, President and Director
Jen-Hsun Huang - Founder, CEO, President and Director
DGX Cloud's strategy, let me start there. DGX Cloud's strategy is to achieve several things. Number one, to enable a really close partnership between us and the world's CSPs. We work with some 30,000 companies around the world; 15,000 of them are start-ups, and thousands of them are generative AI companies. And the fastest-growing segment, of course, is generative AI.
DGX Cloud 的策略,讓我從這裡開始。 DGX Cloud 的策略是實現幾件事。第一,使我們與世界各地的通訊服務提供者之間建立真正密切的合作夥伴關係。我們意識到,我們與世界各地約 30,000 家公司合作,其中 15,000 家是新創公司,數千家是生成人工智慧公司。當然,成長最快的領域是產生人工智慧。
We're working with all of the world's AI start-ups. And ultimately, they would like to be able to land in one of the world's leading clouds. And so we built DGX Cloud as a footprint inside the world's leading clouds so that we could simultaneously work with all of our AI partners and help land them easily in one of our cloud partners.
我們正在與世界上所有的人工智慧新創公司合作。最終,他們希望能夠降落在世界領先的雲端之一上。因此,我們將 DGX Cloud 作為世界領先雲端中的一個足跡,以便我們可以同時與所有 AI 合作夥伴合作,並幫助他們輕鬆登陸我們的雲端合作夥伴之一。
The second benefit is that it allows our CSPs and ourselves to work really closely together to improve the performance of hyperscale clouds, which are historically designed for multi-tenancy and not for high-performance distributed computing like generative AI. And so to be able to work closely architecturally, to have our engineers work hand in hand to improve the networking performance and the computing performance, has been really powerful, really terrific.
第二個好處是,它允許我們的CSP 和我們自己真正緊密地合作,以提高超大規模雲端的效能,超大規模雲端歷來是為多租戶設計的,而不是為生成式AI 等高效能分散式運算而設計的。因此,能夠在架構上緊密合作,讓我們的工程師攜手合作,提高網路效能和運算效能,這真的非常強大,非常棒。
And then thirdly, of course, NVIDIA uses very large infrastructures ourselves. And our self-driving car team, our NVIDIA research team, our generative AI team, our language model team, the amount of infrastructure that we need is quite significant. And none of our optimizing compilers are possible without our DGX systems. Even compilers these days require AI, and optimizing software and infrastructure software requires AI to even develop. It's been well publicized that our engineering uses AI to design our chips.
第三,當然,NVIDIA 自己也使用非常龐大的基礎架構。而我們的自動駕駛汽車團隊、我們的 NVIDIA 研究團隊、我們的生成式 AI 團隊、我們的語言模型團隊,我們需要的基礎設施數量是相當可觀的。如果沒有我們的 DGX 系統,我們的最佳化編譯器就不可能實現。如今,連編譯器也需要人工智慧,優化軟體和基礎設施軟體甚至需要人工智慧來開發。我們的工程部門使用人工智慧來設計我們的晶片,這一點已經廣為人知。
And so our own internal consumption of AI -- our robotics teams, our Omniverse teams, and so on -- all needs AI. And so our internal consumption is quite large as well, and we land that in DGX Cloud. And so DGX Cloud has multiple use cases, multiple drivers, and it's been off to an enormous success. Our CSPs love it, the developers love it and our own internal engineers are clamoring to have more of it. And it's a great way for us to engage and work closely with all of the AI ecosystem around the world.
因此,內部——我們自己對人工智慧的消費、我們的機器人團隊等等、Omniverse 團隊等等,都需要人工智慧。所以我們的內部消耗也相當大,我們把它放在DGX Cloud上。因此,DGX Cloud 擁有多個用例、多個驅動程序,並且取得了巨大的成功。我們的 CSP 喜歡它,開發人員喜歡它,我們自己的內部工程師也強烈要求擁有更多它。這是我們與世界各地的人工智慧生態系統密切接觸和合作的好方法。
Colette M. Kress - Executive VP & CFO
Colette M. Kress - Executive VP & CFO
And let's see if I can answer your question regarding our software revenue. As we noted in our opening remarks, remember, software is a part of almost all of our products, whether they're our Data Center products, GPU systems or any of our products within Gaming and our future Automotive products. You're correct, we're also selling it as a standalone business. And that standalone software continues to grow, where we are providing both the software services and upgrades across there as well.
讓我們看看我是否可以回答您有關我們軟體收入的問題。在我們的開場白中,請記住,軟體是我們幾乎所有產品的一部分,無論是我們的資料中心產品、GPU 系統還是我們遊戲和未來汽車產品中的任何產品。你是對的,我們也在獨立的業務中銷售它。獨立軟體不斷成長,我們提供軟體服務和升級。
Now we're seeing, at this point, probably hundreds of millions of dollars annually for our software business, and we are looking at NVIDIA AI Enterprise to be included with many of the products that we're selling, such as our DGX, such as the PCIe versions of our H100. And I think we're going to see more availability even with our CSP marketplaces. So we're off to a great start, and I do believe we'll see this continue to grow going forward.
現在我們看到,我們的軟體業務每年可能有數億美元的收入,我們正在考慮將 NVIDIA AI Enterprise 納入我們正在銷售的許多產品中,例如我們的 DGX,以及 H100 的 PCIe 版本。我認為即使在我們的 CSP 市場中,我們也會看到更多的可用性。所以我們有了一個好的開端,我相信我們會看到這種情況繼續發展。
Operator
Operator
And that does conclude today's question-and-answer session. I'll turn the call back over to Jensen Huang for any additional or closing remarks.
今天的問答環節到此結束。我會將電話轉回黃仁勳以獲取任何補充或結束語。
Jen-Hsun Huang - Founder, CEO, President and Director
Jen-Hsun Huang - Founder, CEO, President and Director
A new computing era has begun. The industry is simultaneously going through 2 platform transitions, accelerated computing and generative AI. Data centers are making a platform shift from general purpose to accelerated computing. The $1 trillion of global data centers will transition to accelerated computing to achieve an order of magnitude better performance, energy efficiency and cost. Accelerated computing enabled generative AI, which is now driving a platform shift in software and enabling new, never-before-possible applications.
新的計算時代已經開始。該產業正在同時經歷兩個平台轉型:加速運算和生成式人工智慧。資料中心正在將平台從通用運算轉向加速運算。價值 1 兆美元的全球資料中心將過渡到加速運算,以實現更高數量級的效能、能源效率和成本。加速運算的生成式人工智慧正在推動軟體平台的轉變,並實現前所未有的新應用。
Together, accelerated computing and generative AI are driving a broad-based computer industry platform shift. Our demand is tremendous. We are significantly expanding our production capacity. Supply will substantially increase for the rest of this year and next year. NVIDIA has been preparing for this for over 2 decades and has created a new computing platform that the world's industries can build upon.
加速運算和生成式人工智慧共同推動了廣泛的電腦產業平台轉變。我們的需求是巨大的。我們正在大幅擴大生產能力。今年剩餘時間和明年的供應量將大幅增加。 NVIDIA 已經為此準備了 20 多年,並創建了一個新的運算平台,全世界的工業都可以在此基礎上進行建置。
What makes NVIDIA special are, one, architecture. NVIDIA accelerates everything from data processing, training, inference, every AI model, real-time speech to computer vision and giant recommenders to vector databases. The performance and versatility of our architecture translates to the lowest data center TCO and best energy efficiency.
NVIDIA 的獨特之處在於,其一是架構。 NVIDIA 加速了從資料處理、訓練、推理、每個 AI 模型、即時語音到電腦視覺、從巨型推薦到向量資料庫的一切。我們架構的效能和多功能性可轉換為最低的資料中心整體擁有成本和最佳的能源效率。
Two, installed base. NVIDIA has hundreds of millions of CUDA-compatible GPUs worldwide. Developers need a large installed base to reach end users and grow their business. NVIDIA is the developer's preferred platform. More developers create more applications that make NVIDIA more valuable for customers.
二、安裝基礎。 NVIDIA 在全球擁有數億個相容於 CUDA 的 GPU。開發人員需要龐大的安裝基礎來接觸最終用戶並發展業務。 NVIDIA 是開發人員的首選平台。更多開發人員創建更多應用程序,使 NVIDIA 對客戶更有價值。
Three, reach. NVIDIA is in clouds, enterprise data centers, industrial edge, PCs, workstations, instruments and robotics. Each has fundamentally unique computing models and ecosystems. System suppliers like OEMs, computer OEMs, can confidently invest in NVIDIA because we offer significant market demand and reach.
三、到達。 NVIDIA 涉足雲端、企業資料中心、工業邊緣、PC、工作站、儀器和機器人領域。每個都有根本上獨特的計算模型和生態系統。 OEM、電腦 OEM 等系統供應商可以放心地投資 NVIDIA,因為我們提供了龐大的市場需求和覆蓋範圍。
Four, scale and velocity. NVIDIA has achieved significant scale and is 100% invested in accelerated computing and generative AI. Our ecosystem partners can trust that we have the expertise, focus and scale to deliver a strong road map and reach to help them grow. We are accelerating because of the additive results of these capabilities. We're upgrading and adding new products about every 6 months versus every 2 years to address the expanding universe of generative AI.
四、規模和速度。 NVIDIA 已實現顯著規模,並 100% 投資於加速運算和生成式 AI。我們的生態系統合作夥伴可以相信,我們擁有專業知識、專注力和規模,可以提供強大的路線圖並幫助他們成長。由於這些能力的疊加結果,我們正在加速。我們大約每 6 個月(而不是每 2 年)升級和添加新產品,以應對不斷擴大的生成人工智慧領域。
While we increase the output of H100 for training and inference of large language models, we're ramping up our new L40S universal GPU for cloud scale-out and enterprise servers. Spectrum-X, which consists of our Ethernet switch, BlueField-3 Super NIC and software, helps customers who want the best possible AI performance on Ethernet infrastructures. Customers are already working on next-generation accelerated computing and generative AI with our Grace Hopper.
在我們增加 H100 的輸出以用於大型語言模型的訓練和推理的同時,我們正在加大新的 L40S 通用 GPU 的規模、雲橫向擴展和企業伺服器的規模。 Spectrum-X 由我們的乙太網路交換器、BlueField-3 Super NIC 和軟體組成,可協助希望在乙太網路基礎架構上獲得最佳 AI 效能的客戶。客戶已經在使用我們的 Grace Hopper 開發下一代加速運算和產生人工智慧。
We're extending NVIDIA AI to the world's enterprises that demand generative AI, but with model privacy, security and sovereignty. Together with the world's leading enterprise IT companies, Accenture, Adobe, Getty, Hugging Face, Snowflake, ServiceNow, VMware and WPP, and our enterprise system partners, Dell, HPE, and Lenovo, we are bringing generative AI to the world's enterprises. We're building NVIDIA Omniverse to digitalize and enable the world's multitrillion-dollar heavy industries to use generative AI to automate how they build and operate physical assets and achieve greater productivity.
我們正在將 NVIDIA AI 擴展到需要生成式 AI 的全球企業,但同時具有隱私、安全性和主權模式。我們與全球領先的企業 IT 公司 Accenture、Adobe、Getty、Hugging Face、Snowflake、ServiceNow、VMware 和 WPP 以及我們的企業系統合作夥伴 Dell、HPE 和 Lenovo 一起,為全球企業帶來生成式 AI。我們正在建立 NVIDIA Omniverse,以實現數位化,並使世界上價值數兆美元的重工業能夠利用生成式 AI 來自動化建構和營運實體資產的方式,並實現更高的生產力。
Generative AI starts in the cloud, but the most significant opportunities are in the world's largest industries, where companies can realize trillions of dollars of productivity gains. It is an exciting time for NVIDIA, our customers, partners and the entire ecosystem to drive this generational shift in computing. We look forward to updating you on our progress next quarter.
生成式人工智慧始於雲端,但最重要的機會出現在世界上最大的產業中,企業可以在這些產業中實現數兆美元的生產力提升。對於 NVIDIA、我們的客戶、合作夥伴以及整個生態系統來說,推動這項運算的世代轉變是一個令人興奮的時刻。我們期待向您通報下季的最新進展。
Operator
Operator
This concludes today's conference call. You may now disconnect.
今天的電話會議到此結束。您現在可以斷開連線。