NVIDIA (NVDA) Q4 Fiscal 2023 Earnings Call Transcript

Summary

GeForce NOW, NVIDIA's cloud gaming service, continued to expand across users, titles and performance, and now has more than 25 million members in over 100 countries. Last month it enabled RTX 4080 graphics horsepower in a new high-performance Ultimate membership tier; Ultimate members can stream at up to 240 frames per second from the cloud with full ray tracing and DLSS 3. NVIDIA also announced a 10-year partnership with Microsoft to bring Microsoft's lineup of Xbox PC games, including blockbusters like Minecraft, Halo and Flight Simulator, to GeForce NOW; upon the close of Microsoft's Activision acquisition, titles such as Call of Duty and Overwatch will be added. This will make GeForce NOW the only cloud gaming service with games from all three major console ecosystems.

In Q4 of fiscal 2023, NVIDIA brought its 90-class GPUs to laptops for the first time, thanks to the power efficiency of its fifth-generation Max-Q technology, and RTX 40 Series GPUs will power over 170 gaming and creator laptops. More than 400 games and applications now support NVIDIA's RTX technology for real-time ray tracing and AI-powered graphics, including DLSS 3, the company's third generation of AI-powered graphics, which massively boosts performance; Cyberpunk 2077, one of the most advanced games, recently added DLSS 3, enabling a 3 to 4x boost in frame rate at 4K resolution.

NVIDIA reported Q4 revenue of $6.05 billion, up 2% sequentially and down 21% year-on-year, with full-year revenue of $27 billion roughly flat. Within that, Professional Visualization revenue was $226 million, up 13% sequentially and down 65% year-on-year, with fiscal-year revenue of $1.54 billion down 27%. The sequential growth was driven by desktop workstations with strength in the automotive and manufacturing verticals, while the year-on-year decline reflects a channel inventory correction the company expects to end in the first half of the year.

The company also announced a strategic partnership with Foxconn to develop automated and autonomous vehicle platforms. The partnership will provide scale for volume manufacturing to meet growing demand for the NVIDIA DRIVE platform, and Foxconn will use the NVIDIA DRIVE Hyperion compute and sensor architecture in its electric vehicles.

Asked what new workloads or applications might drive demand for NVIDIA's products, CEO Jensen Huang said the company has not yet publicly shared some of the new applications and workloads, and that attendees of the upcoming NVIDIA GTC conference would be very surprised and delighted by the applications to be discussed.

Asked whether the H100 or A100 architecture was selling better, NVIDIA noted a particularly strong quarter for H100 sales, attributing this to the many cloud service providers (CSPs) wanting to get up and running on the new architecture.

Stacy Rasgon asked about software revenue from the Mercedes deal. NVIDIA said it expects the deal to generate low-single-digit billions of euros in software revenue by mid-decade, growing to mid-single-digit billions by the end of the decade.

In Q4, NVIDIA's Quantum-2 400 Gb/s InfiniBand platform grew on demand from cloud, enterprise and supercomputing customers, while on the Ethernet side the Spectrum-4 networking platform is gaining momentum as customers transition to higher speeds. NVIDIA is focused on expanding its software and services: it released version 3.0 of NVIDIA AI Enterprise, with support for more than 50 NVIDIA AI frameworks and pretrained models and new workflows for contact centers, intelligent virtual assistants, audio transcription and cybersecurity. Upcoming offerings include the NeMo and BioNeMo large language model services, currently in early access with customers.

Adoption of NVIDIA's H100 data center GPU is strong: in just the second quarter of its ramp, H100 revenue already exceeded that of A100, a testament to the H100's performance, which is up to 9x faster than A100 for training and up to 30x faster for inference of transformer-based large language models. The H100's transformer engine arrived just in time to serve the development and scale-out inference of large language models. These new types of neural network models can improve productivity across a wide range of tasks, whether generating text such as marketing copy, summarizing documents, creating images for ads or video games, or answering customer questions. Generative large language models with over 100 billion parameters are the most advanced neural networks in today's world, and NVIDIA's expertise spans the AI supercomputers, algorithms, data processing and training methods that can bring these capabilities to enterprises. The company looks forward to helping customers pursue generative AI opportunities.

Full Transcript

Note: The Chinese translation below was produced by Google Translate and is provided for reference only; please rely on the English original for the actual content.

  • Operator

    Operator

  • Good afternoon. My name is Emma, and I will be your conference operator today. At this time, I would like to welcome everyone to the NVIDIA's Fourth Quarter Earnings Call. (Operator Instructions)

    下午好。我叫艾瑪,今天我將擔任你們的會議接線員。在這個時候,我想歡迎大家參加 NVIDIA 的第四季度財報電話會議。 (操作員說明)

  • Simona Jankowski, you may begin your conference.

    Simona Jankowski,你可以開始你的會議了。

  • Simona Jankowski - VP of IR

    Simona Jankowski - VP of IR

  • Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the fourth quarter of fiscal 2023. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer.

    謝謝。大家下午好,歡迎參加 NVIDIA 2023 財年第四季度電話會議。今天來自 NVIDIA 的有總裁兼首席執行官黃仁勳;執行副總裁兼首席財務官 Colette Kress。

  • I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the first quarter of fiscal 2024. The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without our prior written consent.

    我想提醒您,我們的電話會議正在 NVIDIA 的投資者關係網站上進行網絡直播。在討論我們 2024 財年第一季度財務業績的電話會議之前,可以重播網絡廣播。今天電話會議的內容是 NVIDIA 的財產。未經我們事先書面同意,不得複製或轉錄。

  • During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, February 22, 2023, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements.

    在此次電話會議中,我們可能會根據當前預期做出前瞻性陳述。這些陳述受到許多重大風險和不確定性的影響,我們的實際結果可能存在重大差異。有關可能影響我們未來財務業績和業務的因素的討論,請參閱今天的收益發布中的披露、我們最近的 10-K 和 10-Q 表格,以及我們可能向證券交易委員會提交的 8-K 表格報告。我們的所有聲明均基於我們目前可獲得的信息,截至今天,即 2023 年 2 月 22 日作出。除非法律要求,否則我們不承擔更新任何此類聲明的義務。

  • During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website.

    在這次電話會議中,我們將討論非 GAAP 財務指標。您可以在我們網站上發布的 CFO 評論中找到這些非 GAAP 財務指標與 GAAP 財務指標的對賬。

  • With that, let me turn the call over to Colette.

    接下來,讓我把電話交給 Colette。

  • Colette M. Kress - Executive VP & CFO

    Colette M. Kress - Executive VP & CFO

  • Thank you, Simona. Q4 revenue was $6.05 billion, up 2% sequentially, and down 21% year-on-year. Full year revenue was $27 billion, flat from the prior year.

    謝謝你,西蒙娜。第四季度收入為 60.5 億美元,環比增長 2%,同比下降 21%。全年收入為 270 億美元,與上年持平。
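As a quick editorial check (not part of the call), the growth rates Colette quotes imply the prior-period revenue levels. A back-of-envelope sketch:

```python
# Infer the implied prior-period revenues from the stated Q4 FY23 figures:
# $6.05B, up 2% sequentially and down 21% year-on-year.
q4_revenue = 6.05e9

implied_q3 = q4_revenue / 1.02          # prior quarter implied by +2% sequential
implied_q4_prior = q4_revenue / (1 - 0.21)  # year-ago quarter implied by -21% y/y

print(f"Implied Q3 FY23 revenue: ${implied_q3 / 1e9:.2f}B")
print(f"Implied Q4 FY22 revenue: ${implied_q4_prior / 1e9:.2f}B")
```

This yields roughly $5.93B and $7.66B, consistent with NVIDIA's reported Q3 FY23 revenue and (after rounding of the -21%) the year-ago quarter.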

  • Starting with data center. Revenue of $3.62 billion was down 6% sequentially and up 11% year-on-year. Fiscal year revenue was $15 billion, up 41%. Hyperscale customer revenue posted strong sequential growth, though short of our expectations as some cloud service providers paused at the end of the year to recalibrate their build plans. Though we generally see tightening that reflects overall macroeconomic uncertainty, we believe this is a timing issue, as end-market demand for GPUs and AI infrastructure is strong.

    從數據中心開始。收入為 36.2 億美元,環比下降 6%,同比增長 11%。財年收入為 150 億美元,增長 41%。超大規模客戶收入實現了強勁的環比增長,但由於一些雲服務提供商在年底暫停以重新調整其建設計劃,因此低於我們的預期。儘管我們普遍看到反映整體宏觀經濟不確定性的收緊,但我們認為這只是時機問題,因為終端市場對 GPU 和人工智能基礎設施的需求依然強勁。

  • Networking grew, but a bit less than we expected, on softer demand for general-purpose CPU infrastructure. The total data center sequential revenue decline was driven by lower sales in China, which was largely in line with our expectations, reflecting COVID and other domestic issues.

    由於對通用 CPU 基礎設施的需求疲軟,網絡增長但略低於我們的預期。數據中心總收入連續下降是由於中國銷售額下降,這在很大程度上符合我們的預期,反映了 COVID 和其他國內問題。

  • With cloud adoption continuing to grow, we are serving an expanding list of fast-growing cloud service providers, including Oracle and GPU-specialized CSPs. Revenue growth from CSP customers last year significantly outpaced that of Data Center as a whole as more enterprise customers moved to a cloud-first approach. On a trailing 4-quarter basis, CSP customers drove about 40% of our Data Center revenue.

    隨著雲採用率的持續增長,我們正在為越來越多的快速增長的雲服務提供商提供服務,包括 Oracle 和 GPU 專用 CSP。隨著越來越多的企業客戶轉向雲優先方法,去年 CSP 客戶的收入增長顯著超過了整個數據中心的收入增長。在連續 4 個季度的基礎上,CSP 客戶推動了我們數據中心收入的約 40%。

  • Adoption of our new flagship H100 Data Center GPU is strong. In just the second quarter of its ramp, H100 revenue was already much higher than that of A100, which declined sequentially. This is a testament to the exceptional performance of the H100, which is as much as 9x faster than the A100 for training and up to 30x faster in inferencing of transformer-based large language models. The transformer engine of H100 arrived just in time to serve the development and scale-out of inference of large language models.

    我們的新旗艦 H100 數據中心 GPU 的採用率很高。僅在其量產爬坡的第二個季度,H100 的收入就已經遠高於 A100,而 A100 的收入則環比下降。這證明了 H100 的卓越性能:它在訓練方面比 A100 快 9 倍,在基於 Transformer 的大型語言模型推理方面快達 30 倍。H100 的 Transformer 引擎來得正是時候,正好服務於大型語言模型的開發和推理的橫向擴展。

  • AI adoption is at an inflection point. OpenAI's ChatGPT has captured interest worldwide, allowing people to experience AI firsthand and showing what's possible with Generative AI. These new types of neural network models can improve productivity in a wide range of tasks, whether generating text like marketing copy, summarizing documents like (inaudible), creating images for ads or video games or answering customer questions. Generative AI applications will help almost every industry do more faster.

    人工智能的採用正處於一個拐點。 OpenAI 的 ChatGPT 引起了全世界的關注,讓人們親身體驗人工智能並展示生成人工智能的可能性。這些新型神經網絡模型可以提高各種任務的生產力,無論是生成營銷文案等文本、總結(聽不清)等文檔、為廣告或視頻遊戲創建圖像還是回答客戶問題。生成式 AI 應用程序將幫助幾乎每個行業做得更快。

  • Generative large language models with over 100 billion parameters are the most advanced neural networks in today's world. NVIDIA's expertise spans across the AI supercomputers, algorithms, data processing and training methods that can bring these capabilities to enterprise. We look forward to helping customers with Generative AI opportunities.

    具有超過 1000 億個參數的生成式大型語言模型是當今世界最先進的神經網絡。 NVIDIA 的專業知識涵蓋 AI 超級計算機、算法、數據處理和培訓方法,可為企業帶來這些功能。我們期待通過生成人工智能機會幫助客戶。

  • In addition to working with every major hyperscale cloud provider, we are engaged with many consumer Internet companies, enterprises and start-ups. The opportunity is significant and driving strong growth in the Data Center that will accelerate through the year.

    除了與每個主要的超大規模雲提供商合作外,我們還與許多消費互聯網公司、企業和初創企業進行合作。這個機會意義重大,並推動數據中心的強勁增長,這一增長將在今年加速。

  • During the quarter, we made notable announcements in the financial services sector, one of our largest industry verticals. We announced a partnership with Deutsche Bank to accelerate the use of AI and machine learning in financial services. Together, we are developing a range of applications, including virtual customer service agents, speech AI, fraud detection and bank process automation, leveraging NVIDIA's full computing stack, both on-premise and in the cloud, including NVIDIA AI Enterprise software. We also announced that NVIDIA captured leading results for AI inference in a key financial services industry benchmark for applications such as asset price discovery. In networking, we see growing demand for our latest generation InfiniBand and HPC optimized Ethernet platforms fueled by AI.

    本季度,我們在金融服務領域發布了重要公告,這是我們最大的垂直行業之一。我們宣布與德意志銀行建立合作夥伴關係,以加速人工智能和機器學習在金融服務中的應用。我們正在共同開發一系列應用程序,包括虛擬客戶服務代理、語音 AI、欺詐檢測和銀行流程自動化,利用 NVIDIA 在本地和雲端的完整計算堆棧,包括 NVIDIA AI 企業軟件。我們還宣布,NVIDIA 在資產價格發現等應用的關鍵金融服務行業基準測試中獲得了 AI 推理的領先結果。在網絡方面,我們看到對由 AI 推動的最新一代 InfiniBand 和 HPC 優化以太網平台的需求不斷增長。

  • Generative AI foundation model sizes continue to grow at exponential rates, driving the need for high-performance networking to scale out multi-node accelerated workloads. Delivering unmatched performance, latency and in-network computing capabilities, InfiniBand is the clear choice for power-efficient cloud scale, Generative AI.

    生成式 AI 基礎模型的大小繼續以指數級速度增長,推動了對高性能網絡的需求,以擴展多節點加速工作負載。 InfiniBand 提供無與倫比的性能、延遲和網絡內計算功能,是高能效雲規模生成 AI 的明智選擇。

  • For smaller scale deployments, NVIDIA is bringing its full accelerated stack expertise and integrating it with the world's most advanced high-performance Ethernet fabrics. In the quarter, InfiniBand led our growth as our Quantum-2 400 gigabit per second platform is off to a great start, driven by demand across cloud, enterprise and supercomputing customers. In Ethernet, our 400 gigabit per second Spectrum-4 networking platform is gaining momentum as customers transition to higher speeds and next-generation adapters and switches.

    對於較小規模的部署,NVIDIA 帶來了其完整的加速堆棧專業知識,並將其與世界上最先進的高性能以太網結構相集成。在本季度,InfiniBand 引領了我們的增長,我們的 Quantum-2 400 Gb/s 平台在雲、企業和超級計算客戶的需求推動下開局良好。在以太網方面,隨著客戶轉向更高速度以及下一代適配器和交換機,我們的 400 Gb/s Spectrum-4 網絡平台正在獲得發展勢頭。

  • We remain focused on expanding our software and services. We released version 3.0 of NVIDIA AI Enterprise with support for more than 50 NVIDIA AI frameworks and pretrained models, and new workflows for contact center, intelligent virtual assistance, audio transcription and cybersecurity. Upcoming offerings include our NeMo and BioNeMo large language model services, which are currently in early access with customers.

    我們仍然專注於擴展我們的軟件和服務。我們發布了 NVIDIA AI Enterprise 3.0 版本,支持 50 多個 NVIDIA AI 框架和預訓練模型以及聯絡中心、智能虛擬協助、音頻轉錄和網絡安全的新工作流。即將推出的產品包括我們的 NeMo 和 BioNeMo 大型語言模型服務,目前客戶可以提前使用這些服務。

  • Now to Jensen to talk a bit more about our software and cloud business (inaudible).

    現在讓詹森多談談我們的軟件和雲業務(聽不清)。

  • Jensen Huang - Co-Founder, CEO, President & Director

    Jensen Huang - Co-Founder, CEO, President & Director

  • Thanks, Colette. The accumulation of technology breakthroughs has brought AI to an inflection point. Generative AI's versatility and capability has triggered a sense of urgency at enterprises around the world to develop and deploy AI strategies. Yet, the AI supercomputer infrastructure, model algorithms, data processing and training techniques remain an insurmountable obstacle for most. Today, I want to share with you the next level of our business model to help put AI within reach of every enterprise customer.

    謝謝,科萊特。技術突破的積累,讓人工智能迎來了拐點。生成式人工智能的多功能性和能力引發了全球企業開發和部署人工智能戰略的緊迫感。然而,人工智能超級計算機基礎設施、模型算法、數據處理和訓練技術仍然是大多數人無法逾越的障礙。今天,我想與大家分享我們商業模式的下一個層次,以幫助每個企業客戶觸及 AI。

  • We are partnering with major service -- cloud service providers to offer NVIDIA AI cloud services, offered directly by NVIDIA and through our network of go-to-market partners, and hosted within the world's largest clouds. NVIDIA AI as a service offers enterprises easy access to the world's most advanced AI platform, while remaining close to the storage, networking, security and cloud services offered by the world's most advanced clouds.

    我們正在與主要服務 - 雲服務提供商合作,提供 NVIDIA AI 雲服務,這些服務由 NVIDIA 直接提供並通過我們的上市合作夥伴網絡提供,並託管在全球最大的雲中。 NVIDIA AI 即服務讓企業可以輕鬆訪問世界上最先進的 AI 平台,同時與世界上最先進的雲提供的存儲、網絡、安全和雲服務保持緊密聯繫。

  • Customers can engage NVIDIA AI cloud services at the AI supercomputer, acceleration library software or pretrained AI model layers. NVIDIA DGX is an AI supercomputer, and the blueprint of AI factories being built around the world. AI supercomputers are hard and time-consuming to build. Today, we are announcing the NVIDIA DGX Cloud, the fastest and easiest way to have your own DGX AI supercomputer, just open your browser. NVIDIA DGX Cloud is already available through Oracle Cloud Infrastructure and Microsoft Azure, Google GCP and others on the way.

    客戶可以在 AI 超級計算機、加速庫軟件或預訓練的 AI 模型層上使用 NVIDIA AI 雲服務。 NVIDIA DGX 是一台 AI 超級計算機,也是全球正在建設的 AI 工廠的藍圖。構建人工智能超級計算機既困難又耗時。今天,我們宣布推出 NVIDIA DGX Cloud,這是擁有自己的 DGX AI 超級計算機的最快、最簡單的方式,只需打開瀏覽器即可。 NVIDIA DGX Cloud 已經可以通過 Oracle Cloud Infrastructure 和 Microsoft Azure、Google GCP 以及其他即將推出的公司獲得。

  • At the AI platform software layer, customers can access NVIDIA AI enterprise for training and deploying large language models or other AI workloads. And at the pretrained Generative AI model layer, we will be offering NeMo and BioNeMo, customizable AI models, to enterprise customers who want to build proprietary Generative AI models and services for their businesses. With our new business model, customers can engage NVIDIA's full scale of AI computing across their private to any public cloud. We will share more details about NVIDIA AI cloud services at our upcoming GTC so be sure to tune in.

    在AI平台軟件層,客戶可以接入NVIDIA AI enterprise,用於訓練和部署大型語言模型或其他AI工作負載。在預訓練生成 AI 模型層,我們將為希望為其業務構建專有生成 AI 模型和服務的企業客戶提供 NeMo 和 BioNeMo 可定制的 AI 模型。借助我們的新商業模式,客戶可以在他們的私有云和任何公共雲中使用 NVIDIA 的全面人工智能計算。我們將在即將舉行的 GTC 上分享有關 NVIDIA AI 雲服務的更多詳細信息,敬請關注。

  • Now let me turn it back to Colette on gaming.

    現在讓我把它轉回到科萊特的遊戲上。

  • Colette M. Kress - Executive VP & CFO

    Colette M. Kress - Executive VP & CFO

  • Thanks, Jensen. Gaming revenue of $1.83 billion was up 16% sequentially and down 46% from a year ago. Fiscal year revenue of $9.07 billion is down 27%. Sequential growth was driven by the strong reception of our 40 Series GeForce RTX GPUs based on the Ada Lovelace architecture. The year-on-year decline reflects the impact of channel inventory correction, which is largely behind us. And demand in the seasonally strong fourth quarter was solid in most regions. While China was somewhat impacted by disruptions related to COVID, we are encouraged by the early signs of recovery in that market.

    謝謝,Jensen。遊戲業務收入為 18.3 億美元,環比增長 16%,同比下降 46%。本財年收入為 90.7 億美元,下降 27%。我們基於 Ada Lovelace 架構的 40 系列 GeForce RTX GPU 受到熱烈歡迎,推動了環比增長。同比下降反映了渠道庫存調整的影響,這在很大程度上已經過去。在季節性強勁的第四季度,大多數地區的需求都很穩定。儘管中國在一定程度上受到 COVID 相關中斷的影響,但我們對該市場復甦的早期跡象感到鼓舞。

  • Gamers are responding enthusiastically to the new RTX 4090, 4080, 4070 Ti desktop GPUs, with many retail and online outlets quickly selling out of stock. The flagship RTX 4090 has quickly shot up in popularity on Steam to claim the top spot for the Ada architecture, reflecting gamers' desire for high-performance graphics.

    遊戲玩家對新的 RTX 4090、4080、4070 Ti 台式機 GPU 反應熱烈,許多零售店和在線商店很快售罄。旗艦級 RTX 4090 在 Steam 上的人氣迅速飆升,登上 Ada 架構的榜首,反映出遊戲玩家對高性能圖形的渴望。

  • Earlier this month, the first phase of gaming laptops based on the Ada architecture reached retail shelves, delivering NVIDIA's largest-ever generational leap in performance and power efficiency. For the first time, we are bringing enthusiast-class GPU performance to laptops as slim as 14 inches, a fast-growing segment, previously limited to basic tasks and apps.

    本月早些時候,第一代基於 Ada 架構的遊戲筆記本電腦上架零售,實現了 NVIDIA 有史以來最大的性能和能效飛躍。我們首次將發燒級 GPU 性能帶到 14 英寸的纖薄筆記本電腦上,這是一個快速增長的細分市場,以前僅限於基本任務和應用程序。

  • In another first, we are bringing the 90 class GPUs, our highest-performing models, to laptops, thanks to the power efficiency of our fifth-generation Max-Q technology. All in, RTX 40 Series GPUs will power over 170 gaming and creator laptops, setting up for a great back-to-school season.

    另一項創新是,我們將性能最佳的 90 級 GPU 引入筆記本電腦,這要歸功於我們第五代 Max-Q 技術的能效。總而言之,RTX 40 系列 GPU 將為 170 多款遊戲和創作筆記本電腦提供動力,為精彩的返校季做好準備。

  • There are now over 400 games and applications supporting NVIDIA's RTX technology for real-time ray tracing and AI-powered graphics. The Ada architecture features DLSS 3, our third-generation AI-powered graphics, which massively boosts performance. One of the most advanced games, Cyberpunk 2077, recently added DLSS 3, enabling a 3 to 4x boost in frame rate performance at 4K resolution.

    現在有超過 400 款遊戲和應用程序支持 NVIDIA 的 RTX 技術,用於實時光線追踪和 AI 驅動的圖形。Ada 架構採用我們的第三代 AI 驅動圖形技術 DLSS 3,可大幅提升性能。最先進的遊戲之一《賽博朋克 2077》最近加入了 DLSS 3,可在 4K 分辨率下將幀率性能提升 3 到 4 倍。

  • Our GeForce NOW cloud gaming service continued to expand in multiple dimensions, users, titles and performance. It now has more than 25 million members in over 100 countries. Last month, it enabled RTX 4080 graphics horsepower in the new high-performance ultimate membership tier. Ultimate members can stream at up to 240 frames per second from a cloud with full ray tracing and DLSS 3.

    我們的 GeForce NOW 雲遊戲服務在多個維度、用戶、遊戲和性能方面繼續擴展。它現在在 100 多個國家/地區擁有超過 2500 萬會員。上個月,它在新的高性能終極會員等級中啟用了 RTX 4080 圖形馬力。 Ultimate 會員可以通過全光線追踪和 DLSS 3 從雲端以高達每秒 240 幀的速度進行流式傳輸。

  • And just yesterday, we made an important announcement with Microsoft. We agreed to a 10-year partnership to bring to GeForce NOW Microsoft's lineup of Xbox PC games, which includes blockbusters like Minecraft, Halo and Flight Simulator. And upon the close of Microsoft's Activision acquisition, it will add titles like Call of Duty and Overwatch.

    就在昨天,我們與微軟發布了一個重要公告。我們同意建立為期 10 年的合作夥伴關係,將 Microsoft 的 Xbox PC 遊戲系列引入 GeForce NOW,其中包括 Minecraft、Halo 和 Flight Simulator 等大作。在微軟對 Activision 的收購結束後,它將添加諸如使命召喚和守望先鋒之類的遊戲。

  • Moving to Pro Visualization. Revenue of $226 million was up 13% sequentially and down 65% from a year ago. Fiscal year revenue of $1.54 billion was down 27%. Sequential growth was driven by desktop workstations with strengths in the automotive and manufacturing industrial verticals. The year-on-year decline reflects the impact of the channel inventory correction, which we expect to end in the first half of the year.

    轉向專業可視化。營收為 2.26 億美元,環比增長 13%,同比下降 65%。本財年收入為 15.4 億美元,下降了 27%。連續增長是由在汽車和製造業垂直領域具有優勢的桌面工作站推動的。同比下降反映了渠道庫存調整的影響,我們預計該調整將在今年上半年結束。

  • Interest in NVIDIA's Omniverse continues to build, with almost 300,000 downloads so far and 185 connectors to third-party design applications. The latest release of Omniverse has a number of features and enhancements, including support for 4K, real-time path tracing, Omniverse Search for AI-powered search through large untagged 3D databases, and Omniverse Cloud containers for AWS.

    對 NVIDIA Omniverse 的興趣持續增加,迄今為止已下載近 300,000 次,連接到第三方設計應用程序的 185 個連接器。最新發布的 Omniverse 具有許多功能和增強功能,包括支持 4K、實時路徑跟踪、通過大型無標記 3D 數據庫進行 AI 驅動搜索的 Omniverse Search,以及適用於 AWS 的 Omniverse Cloud 容器。

  • Let's move to automotive. Revenue was a record $294 million, up 17% from (inaudible) and up 135% from a year ago. Sequential growth was driven primarily by AI automotive solutions. New program ramps at both electric vehicle and traditional OEM customers helped drive this growth. Fiscal year revenue of $903 million was up 60%.

    讓我們轉向汽車。收入達到創紀錄的 2.94 億美元,比(聽不清)增長 17%,比一年前增長 135%。連續增長主要由人工智能汽車解決方案推動。電動汽車和傳統原始設備製造商客戶的新計劃幫助推動了這一增長。本財年收入為 9.03 億美元,增長 60%。

  • At CES, we announced a strategic partnership with Foxconn to develop automated and autonomous vehicle platforms. This partnership will provide scale for volume manufacturing to meet growing demand for the NVIDIA DRIVE platform. Foxconn will use NVIDIA DRIVE, Hyperion compute and sensor architecture for its electric vehicles. Foxconn will be a Tier 1 manufacturer producing electronic control units based on NVIDIA DRIVE Orin for the global [automotive market].

    在 CES 上,我們宣布與富士康建立戰略合作夥伴關係,以開發自動化和自主車輛平台。這種夥伴關係將提供規模化製造,以滿足對 NVIDIA DRIVE 平台不斷增長的需求。富士康將在其電動汽車中使用 NVIDIA DRIVE、Hyperion 計算和傳感器架構。富士康將成為一級製造商,為全球 [汽車市場] 生產基於 NVIDIA DRIVE Orin 的電子控制單元。

  • We also reached an important milestone this quarter. The NVIDIA DRIVE operating system received safety certification from TÜV SÜD, one of the most experienced and rigorous assessment bodies in the automotive industry. With industry-leading performance and functional safety, our platform meets the higher standards required for autonomous transportation.

    本季度我們也達到了一個重要的里程碑。 NVIDIA DRIVE 操作系統獲得了 TÜV SÜD 的安全認證,TÜV SÜD 是汽車行業最有經驗和最嚴格的評估機構之一。憑藉行業領先的性能和功能安全性,我們的平台滿足自主運輸所需的更高標準。

  • Moving to the rest of the P&L. GAAP gross margin was 63.3%, and non-GAAP gross margin was 66.1%. Fiscal year GAAP gross margin was 56.9%, and non-GAAP gross margin was 59.2%. Year-on-year, Q4 GAAP operating expenses were up 21%, and non-GAAP operating expenses were up 23%, primarily due to the higher compensation and data center infrastructure expenses.

    轉到損益表的其餘部分。 GAAP 毛利率為 63.3%,非 GAAP 毛利率為 66.1%。本財年 GAAP 毛利率為 56.9%,非 GAAP 毛利率為 59.2%。與去年同期相比,第四季度 GAAP 運營支出增長 21%,非 GAAP 運營支出增長 23%,這主要是由於薪酬和數據中心基礎設施支出增加。

  • Sequentially, GAAP operating expenses were flat, and non-GAAP operating expenses were down 1%. We plan to keep them relatively flat at this level over the coming quarters. Full year GAAP operating expenses were up 50%, and non-GAAP operating expenses were up 31%.

    按美國通用會計準則計算的營業費用環比持平,非美國通用會計準則營業費用下降 1%。我們計劃在未來幾個季度將它們保持在這個水平上相對平穩。全年 GAAP 運營費用增長 50%,非 GAAP 運營費用增長 31%。

  • We returned $1.15 billion to shareholders in the form of share repurchases and cash dividends. At the end of Q4, we had approximately $7 billion remaining under our share repurchase authorization through December 2023.

    我們以股票回購和現金股息的形式向股東返還了 11.5 億美元。截至第四季度末,在有效期至 2023 年 12 月的股票回購授權下,我們仍有約 70 億美元的額度。

  • Let me look to the outlook for the first quarter of fiscal '24. We expect sequential growth to be driven by each of our 4 major market platforms led by strong growth in data center and gaming. Revenue is expected to be $6.5 billion, plus or minus 2%. GAAP and non-GAAP gross margins are expected to be 64.1% and 66.5%, respectively, plus or minus 50 basis points.

    讓我看看 24 財年第一季度的前景。我們預計,在數據中心和遊戲的強勁增長帶動下,我們的 4 個主要市場平台中的每一個都將推動連續增長。收入預計為 65 億美元,上下浮動 2%。 GAAP 和非 GAAP 毛利率預計分別為 64.1% 和 66.5%,上下浮動 50 個基點。
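An editorial sketch (not from the call) translating the guidance into explicit ranges, assuming "plus or minus 2%" and "plus or minus 50 basis points" apply symmetrically to the stated midpoints:

```python
# Q1 FY24 revenue guidance: $6.5B midpoint, +/- 2%.
midpoint = 6.5e9
low, high = midpoint * 0.98, midpoint * 1.02
print(f"Revenue guidance range: ${low / 1e9:.2f}B to ${high / 1e9:.2f}B")

# Non-GAAP gross margin guidance: 66.5% midpoint, +/- 50 basis points.
gm_mid = 66.5
print(f"Non-GAAP gross margin range: {gm_mid - 0.5:.1f}% to {gm_mid + 0.5:.1f}%")
```

This puts the revenue guide at roughly $6.37B to $6.63B and the non-GAAP gross margin guide at 66.0% to 67.0%.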

  • GAAP operating expenses are expected to be approximately $2.53 billion. Non-GAAP operating expenses are expected to be approximately $1.78 billion. GAAP and non-GAAP other income and expenses are expected to be an income of approximately $50 million, excluding gains and losses of nonaffiliated divestments. GAAP and non-GAAP tax rates are expected to be 13%, plus or minus 1%, excluding any discrete items.

    GAAP 運營費用預計約為 25.3 億美元。非 GAAP 運營費用預計約為 17.8 億美元。 GAAP 和非 GAAP 其他收入和支出預計約為 5000 萬美元,不包括非附屬資產剝離的收益和損失。 GAAP 和非 GAAP 稅率預計為 13%,上下浮動 1%,不包括任何離散項目。

  • Capital expenditures are expected to be approximately $350 million to $400 million for the first quarter and in the range of $1.1 billion to $1.3 billion for the full fiscal year 2024. Further financial details are included in the CFO commentary and other information available on our IR website.

    第一季度的資本支出預計約為 3.5 億至 4 億美元,2024 財年全年的資本支出預計在 11 億至 13 億美元之間。更多財務細節包含在首席財務官評論以及我們 IR 網站上提供的其他信息中。

  • In closing, let me highlight upcoming events for the financial community. We will be attending the Morgan Stanley Technology Conference on March 6 in San Francisco and the Cowen Healthcare Conference on March 7 in Boston. We will also host GTC virtually with Jensen's keynote kicking off on March 21. Our earnings call to discuss the results of our first quarter of fiscal year '24 is scheduled for Wednesday, May 24.

    最後,讓我強調一下金融界即將發生的事件。我們將參加 3 月 6 日在舊金山舉行的摩根士丹利技術會議和 3 月 7 日在波士頓舉行的 Cowen Healthcare 會議。我們還將以虛擬方式主持 GTC,Jensen 的主題演講將於 3 月 21 日開始。我們的財報電話會議定於 5 月 24 日星期三舉行,討論我們 24 財年第一季度的結果。

  • Now we will open up the call for questions. Operator, would you please poll for questions?

    現在我們開始接受提問。接線員,請開始徵集問題好嗎?

  • Operator

    Operator

  • (Operator Instructions) Your first question comes from the line of Aaron Rakers with Wells Fargo.

    (操作員說明)您的第一個問題來自 Wells Fargo 的 Aaron Rakers。

  • Aaron Christopher Rakers - MD of IT Hardware & Networking Equipment and Senior Equity Analyst

    Aaron Christopher Rakers - MD of IT Hardware & Networking Equipment and Senior Equity Analyst

  • Clearly, on this call, a key focal point is going to be the monetization effect of your software and cloud strategy. I think as we look at it, I think, straight up, the enterprise AI software suite, I think, is priced at around $6,000 per CPU socket. I think you've got pricing metrics a little bit higher for the cloud consumption model. I'm just curious, Colette, how do we start to think about that monetization contribution to the company's business model over the next couple of quarters relative to, I think, in the past, you've talked like a couple of hundred million or so? Just curious if you can unpack that a little bit.

    顯然,在這次電話會議上,一個關鍵焦點將是您的軟件和雲戰略的變現效果。我的理解是,企業 AI 軟件套件的定價約為每個 CPU 插槽 6,000 美元,而云消費模式的定價指標還要略高一些。我只是好奇,Colette,相對於過去您提到的大約幾億美元的水平,我們應該如何看待未來幾個季度這種變現對公司商業模式的貢獻?希望您能稍微展開說明一下。

  • Colette M. Kress - Executive VP & CFO

    Colette M. Kress - Executive VP & CFO

  • So I'll start and turn it over to Jensen to talk more because I believe this will be a great topic and discussion also at our GTC.

    因此,我將開始並將其轉交給 Jensen 進行更多討論,因為我相信這將是一個很好的話題,並且在我們的 GTC 上也會進行討論。

  • Our plans in terms of software, we continue to see growth. Even in our Q4 results, we're making quite good progress in both working with our partners, onboarding more partners and increasing our software. You are correct. We've talked about our software revenues being in the hundreds of millions. And we're getting even stronger each day as Q4 was probably a record level in terms of our software levels. But there's more to unpack in terms of there, and I'm going to turn it to Jensen.

    我們在軟件方面的計劃,我們繼續看到增長。即使在我們第四季度的業績中,我們在與合作夥伴合作、加入更多合作夥伴和增加我們的軟件方面也取得了相當大的進展。你是對的。我們已經談到我們的軟件收入有數億美元。而且我們每天都變得更加強大,因為就我們的軟件水平而言,第四季度可能是創紀錄的水平。但就此而言,還有更多內容需要展開,我將把它交給 Jensen。

  • Jensen Huang - Co-Founder, CEO, President & Director

    Jensen Huang - Co-Founder, CEO, President & Director

  • Yes, first of all, taking a step back, NVIDIA AI is essentially the operating system of AI systems today. It starts from data processing to learning, training, to validations, to inference. And so this body of software is completely accelerated. It runs in every cloud. It runs on-prem. And it supports every framework, every model that we know of, and it's accelerated everywhere.

    是的,首先退一步講,NVIDIA AI本質上就是今天AI系統的操作系統。它從數據處理開始,到學習、訓練、驗證、推理。因此,該軟件主體得到了完全加速。它在每一片雲中運行。它在本地運行。它支持我們所知道的每一個框架、每一個模型,並且它在任何地方都得到了加速。

  • By using NVIDIA AI, your entire machine learning operations is more efficient, and it is more cost effective. You save money by using accelerated software. Our announcement today of putting NVIDIA's infrastructure and have it be hosted from within the world's leading cloud service providers accelerates the enterprise's ability to utilize NVIDIA AI enterprise. It accelerates people's adoption of this machine learning pipeline, which is not for the faint of heart. It is a very extensive body of software. It is not deployed in enterprises broadly, but we believe that by hosting everything in the cloud, from the infrastructure through the operating system software, all the way through pretrained models, we can accelerate the adoption of Generative AI in enterprises. And so we're excited about this new extended part of our business model. We really believe that it will accelerate the adoption of software.

    通過使用 NVIDIA AI,您的整個機器學習操作將更加高效,並且更具成本效益。您可以通過使用加速軟件來節省資金。我們今天宣布部署 NVIDIA 的基礎架構並由世界領先的雲服務提供商託管,這將加速企業利用 NVIDIA AI 企業的能力。它加速了人們對這種機器學習管道的採用,這不適合膽小的人。它是一個非常廣泛的軟件體系。它並沒有廣泛部署在企業中,但我們相信,通過在雲中託管所有內容,從基礎設施到操作系統軟件,一直到預訓練模型,我們可以加速企業對生成人工智能的採用。因此,我們對我們商業模式的這個新的擴展部分感到興奮。我們真的相信它將加速軟件的採用。

  • Operator

    Operator

  • Your next question comes from the line of Vivek Arya with Bank of America.

    你的下一個問題來自美國銀行的 Vivek Arya。

  • Vivek Arya - MD in Equity Research & Research Analyst

    Vivek Arya - MD in Equity Research & Research Analyst

  • Just wanted to clarify, Colette, if you meant data center could grow on a year-on-year basis also in Q1?

    只是想確認一下,Colette,您是不是說數據中心在第一季度也能實現同比增長?

  • And then Jensen, my main question kind of relate to 2 small related ones. The computing intensity for Generative AI, if it is very high, does it limit the market size to just a handful of hyperscalers? And on the other extreme, if the market gets very large, then doesn't it attract more competition for NVIDIA from cloud ASICs or other accelerator options that are out there in the market?

    然後 Jensen,我的主要問題涉及兩個相關的小問題。生成式 AI 的計算強度如果非常高,是否會把市場規模限制在少數幾家超大規模雲服務商?而在另一個極端,如果市場變得非常大,是否又會吸引雲 ASIC 或市場上其他加速器方案對 NVIDIA 展開更多競爭?

  • Colette M. Kress - Executive VP & CFO

    Colette M. Kress - Executive VP & CFO

  • Thanks for the question. First, talking about our data center guidance that we provided for Q1. We do expect a sequential growth in terms of our data center, strong sequential growth. And we are also expecting a growth year-over-year for our data center. We actually expect a great year, with our year-over-year growth in data center probably accelerating past Q1.

    謝謝你的問題。首先,談談我們為第一季度提供的數據中心指南。我們確實預計我們的數據中心會出現連續增長,強勁的連續增長。我們還預計我們的數據中心會同比增長。我們實際上期待一個偉大的一年,我們在數據中心的同比增長可能會在第一季度之後加速。

  • Jensen Huang - Co-Founder, CEO, President & Director

    Jensen Huang - Co-Founder, CEO, President & Director

  • Large language models are called large because they are quite large. However, remember that we've accelerated and advanced AI processing by a million x over the last decade. Moore's Law, in its best days, would have delivered 100x in a decade. By coming up with new processors, new systems, new interconnects, new frameworks and algorithms and working with data scientists, AI researchers on new models, across that entire span, we've made large language model processing a million times faster, a million times faster.

    大型語言模型之所以稱為大型,是因為它們相當大。但是,請記住,在過去十年中,我們已經將 AI 處理速度提高了一百萬倍。摩爾定律在其最好的時期,十年也只能帶來 100 倍的提升。通過推出新的處理器、新系統、新互連、新框架和算法,並與數據科學家、AI 研究人員合作開發新模型,在這整個範圍內,我們已經將大型語言模型的處理速度提高了一百萬倍。
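The rough arithmetic behind this comparison (an editorial sketch, not from the call): a Moore's Law cadence of ~2x every 18 months compounds to only about 100x per decade, against the ~1,000,000x AI-processing speedup Jensen cites.

```python
# Moore's Law at an optimistic 2x every 1.5 years, compounded over a decade.
moores_law_decade = 2 ** (10 / 1.5)
ai_speedup = 1_000_000  # the full-stack speedup figure cited on the call

print(f"Moore's Law, best case over 10 years: ~{moores_law_decade:.0f}x")
print(f"Remaining gap from chips + systems + algorithms: "
      f"~{ai_speedup / moores_law_decade:.0f}x beyond Moore's Law")
```

The point of the comparison is that the bulk of the million-x came from work above the transistor level: new systems, interconnects, frameworks and algorithms.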

  • What would have taken a couple of months in the beginning, now it happens in about 10 days. And of course, you still need a large infrastructure. And even the large infrastructure, we're introducing Hopper, which, with its transformer engine, its new NVLink switches and its new InfiniBand 400 gigabits per second data rates, we're able to take another leap in the processing of large language models.

    一開始需要幾個月的時間,現在只需大約 10 天。當然,您仍然需要大型基礎設施。即便是在大型基礎設施層面,我們也推出了 Hopper,憑藉其 Transformer 引擎、新的 NVLink 交換機和新的 400 Gb/s InfiniBand 數據速率,我們能夠在處理大型語言模型方面實現又一次飛躍。

  • And so I think the -- by putting NVIDIA's DGX supercomputers into the cloud with NVIDIA DGX cloud, we're going to democratize the access of this infrastructure, and with accelerated training capabilities, really make this technology and this capability quite accessible. So that's one thought.

  • The second is that the number of large language models or foundation models that have to be developed is quite large. Different countries have different cultures, and their bodies of knowledge are different. Different fields, different domains, whether it's imaging or it's biology or it's physics, each one of them needs its own foundation models. With large language models, of course, we now have a prior that could be used to accelerate the development of all these other fields, which is really quite exciting.

  • The other thing to remember is that any number of companies in the world have their own proprietary data. The most valuable data in the world is proprietary. And it belongs to the company. It's inside their company. It will never leave the company. And that body of data will also be harnessed to train new AI models for the very first time. And so we -- our strategy and our goal is to put the DGX infrastructure in the cloud so that we can make this capability available to every enterprise, every company in the world who would like to create, from their proprietary data, proprietary models.

  • The second thing about competition. We've had competition for a long time. Our approach, our computing architecture, as you know, is quite different on several dimensions. Number one, it is universal, meaning you could use it for training, you can use it for inference, you can use it for models of all different types. It supports every framework. It supports every cloud. It's everywhere. It's cloud to private cloud, cloud to on-prem. It's all the way out to the edge. It could be an autonomous system. This 1 architecture allows developers to develop their AI models and deploy it everywhere.

  • The second very large idea is that no AI in itself is an application. There's a preprocessing part of it and a post-processing part of it to turn it into an application or service. Most people don't talk about the pre- and post-processing because it's maybe not as sexy and not as interesting. However, it turns out that preprocessing and post-processing oftentimes consume half or 2/3 of the overall workload. And so by accelerating the entire end-to-end pipeline, from preprocessing, from data ingestion, data processing, all the way through to post-processing, we're able to accelerate the entire pipeline versus just accelerating half of the pipeline.

  • The limit to the speed-up, even if you're infinitely fast, if you only accelerate half of the workload, is twice as fast. Whereas if you accelerate the entire workload, you could accelerate the workload maybe 10, 20, 50x faster, which is the reason why when you hear about NVIDIA accelerating applications, you routinely hear 10x, 20x, 50x speed-ups. And the reason for that is because we accelerate things end to end, not just the deep learning part of it, but using CUDA to accelerate everything from end to end.

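The 2x ceiling described above is Amdahl's law: if only a fraction p of a workload is accelerated, the overall speed-up is bounded by 1/(1-p) no matter how fast the accelerated part runs. A minimal sketch (illustrative numbers, not figures from the call):

```python
# Amdahl's law: overall speed-up when a fraction p of the work is
# accelerated by a factor s, while the remaining (1 - p) runs as before.

def overall_speedup(p: float, s: float) -> float:
    """Overall speed-up for fraction p accelerated by factor s."""
    return 1.0 / ((1.0 - p) + p / s)

# Accelerating half the workload: capped at 2x even with an effectively
# infinite accelerator speed (approximated here with a very large s).
half = overall_speedup(p=0.5, s=1e9)

# Accelerating the whole end-to-end pipeline (p ~ 1) by 50x yields ~50x.
end_to_end = overall_speedup(p=1.0, s=50.0)

print(f"half the workload:   ~{half:.2f}x")
print(f"end-to-end pipeline: ~{end_to_end:.2f}x")
```

With p = 0.5 the bound is 2x regardless of accelerator speed, while accelerating the whole pipeline lets the full factor through, which is the 10x, 20x, 50x pattern described above.
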
  • And so I think the universality of our computing -- accelerated computing platform, the fact that we're in every cloud, the fact that we're from cloud to edge, makes our architecture really quite accessible and very differentiated in this way. And most importantly, to all the service providers, because the utilization is so high, because you can use it to accelerate the end-to-end workload and get such a good throughput, our architecture is the lowest operating cost. It's not -- the comparison is not even close. So -- anyhow, those are the 2 answers.

  • Operator

  • Your next question comes from the line of C.J. Muse with Evercore.

  • Christopher James Muse - Senior MD, Head of Global Semiconductor Research & Senior Equity Research Analyst

  • I guess, Jensen, you talked about ChatGPT as an inflection point, kind of like the iPhone. And so, curious, part A: how have your conversations evolved post-ChatGPT with hyperscale and large-scale enterprises? And then secondly, as you think about Hopper with the transformer engine and Grace with high-bandwidth memory, how has your outlook for growth for those 2 product cycles evolved in the last few months?

  • Jensen Huang - Co-Founder, CEO, President & Director

  • ChatGPT is a wonderful piece of work, and the team did a great job, OpenAI did a great job with it. They stuck with it. And the accumulation of all of the breakthroughs led to a service with a model inside that surprised everybody with its versatility and its capability.

  • What people were surprised by -- and this is well understood within the industry -- is the surprising capability of a single AI model that can perform tasks and skills that it was never trained to do. And for this language model to not just speak English, or translate, of course -- not just speak human language -- it can be prompted in human language and output Python, output COBOL, a language that very few people even remember, output Python for Blender, a 3D program. So it's a program that writes a program for another program.

  • We now realize -- the world now realizes -- that maybe human language is a perfectly good computer programming language, and that we've democratized computer programming for everyone, almost anyone who could explain in human language a particular task to be performed. This new computer -- when I say new era of computing, this new computing platform -- this new computer could take whatever your prompt is, whatever your human-explained request is, and translate it into a sequence of instructions that it processes directly, or it waits for you to decide whether you want to process it or not.

  • And so this type of computer is utterly revolutionary in its application, because it's democratized programming for so many people, and that has really excited enterprises all over the world. Every single CSP, every single Internet service provider, and, frankly, every single software company -- because of what I just explained, this is an AI model that can write a program for any program. Because of that reason, everybody who develops software is either alerted, or shocked into alert, or actively working on something like ChatGPT to be integrated into their application or integrated into their service. And so this is, as you can imagine, utterly worldwide.

  • The activity around the AI infrastructure that we built Hopper for, and the activity around inferencing using Hopper and Ampere to inference large language models, has just gone through the roof in the last 60 days. And so there's no question that whatever our views of this year were as we entered the year have been fairly dramatically changed as a result of the last 60, 90 days.

  • Operator

  • Your next question comes from the line of Matt Ramsay with Cowen and Company.

  • Matthew D. Ramsay - MD & Senior Research Analyst

  • Jensen, I wanted to ask a couple of questions on the DGX Cloud. And I guess, we're all talking about the drivers of the services and the compute that you're going to host on top of these services with the different hyperscalers. But I think we've been kind of watching and wondering when your data center business might transition to more of a systems level business, meaning pairing NVLink and InfiniBand with your Hopper product, with your Grace product and selling things more on a systems level. I wonder if you could step back, over the next 2 or 3 years, how do you think the mix of business in your data center segment evolves from maybe selling cards to systems and software? And what can that mean for the margins of that business over time?

  • Jensen Huang - Co-Founder, CEO, President & Director

  • Yes, I appreciate the question. First of all, as you know, our Data Center business is a GPU business only in the context of a conceptual GPU, because what we actually sell to the cloud service providers is a panel, a fairly large computing panel, of 8 Hoppers or 8 Amperes connected with NVLink switches. And so this board represents essentially 1 GPU. It's 8 chips connected together into 1 GPU with a very high-speed chip-to-chip interconnect. And so we've been working on, if you will, multi-die computers for quite some time. And that is 1 GPU.

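A small sketch of why a switched NVLink fabric matters on an 8-GPU board like the one described above: with 8 GPUs there are 28 distinct GPU pairs, and a switch lets every pair communicate without dedicating a point-to-point link to each pair. (The pair count is simple combinatorics; the topology framing is illustrative, not NVIDIA's spec.)

```python
# Count the distinct GPU pairs on an 8-GPU board: C(8, 2) = 28.
from itertools import combinations

gpus = list(range(8))
pairs = list(combinations(gpus, 2))

print(f"GPUs: {len(gpus)}, distinct GPU pairs: {len(pairs)}")
```
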
  • So when we think about a GPU, we actually think about an HGX GPU, and that's 8 GPUs. We're going to continue to do that. And the thing that the cloud service providers are really excited about is hosting our infrastructure for NVIDIA to offer, because we have so many companies that we work directly with. We're working directly with 10,000 AI start-ups around the world, with enterprises in every industry. And all of those relationships today would really love to be able to deploy, at least into the cloud, or into the cloud and on-prem, and oftentimes multi-cloud.

  • And so by having NVIDIA DGX and NVIDIA's infrastructure full stack in their cloud, we're effectively attracting customers to the CSPs. This is a very, very exciting model for them. And they welcomed us with open arms. And we're going to be the best AI salespeople for the world's clouds. And for the customers, they now have an instantaneous infrastructure that is the most advanced. They have a team of people who are extremely good, from the infrastructure to the acceleration software, the NVIDIA AI open -- operating system, all the way up to AI models. Within 1 entity, they have access to expertise across that entire span.

  • And so this is a great model for customers. It's a great model for CSPs. And it's a great model for us. It lets us really run like the wind. As much as we will continue to advance DGX AI supercomputers, it does take time to build AI supercomputers on-prem. It's hard no matter how you look at it. It takes time no matter how you look at it. And so now we have the ability to really prefetch a lot of that and get customers up and running as fast as possible.

  • Operator

  • Your next question comes from the line of Timothy Arcuri with UBS.

  • Timothy Michael Arcuri - MD and Head of Semiconductors & Semiconductor Equipment

  • Jensen, I had a question about what this all does to your TAM. Most of the focus right now is on text, but obviously, there are companies doing a lot of training on video and music. They're working on models there. And it seems like somebody who's training these big models has maybe, on the high end, at least 10,000 GPUs in the cloud that they've contracted and maybe tens of thousands of more to inference a widely deployed model. So it seems like the incremental TAM is easily in the several hundred thousands of GPUs and easily in the tens of billions of dollars. But I'm kind of wondering what this does to the TAM numbers you gave last year. I think you said $300 billion hardware TAM and $300 billion software TAM. So how do you kind of think about what the new TAM would be?

  • Jensen Huang - Co-Founder, CEO, President & Director

  • I think those numbers are really good anchor still. The difference is because of the, if you will, incredible capabilities and versatility of Generative AI and all of the converging breakthroughs that happened towards the middle and the end of last year, we're probably going to arrive at that TAM sooner than later. There's no question that this is a very big moment for the computer industry. Every single platform change, every inflection point in the way that people develop computers happened because it was easier to use, easier to program and more accessible. This happened with the PC revolution. This happened with the Internet revolution. This happened with mobile cloud.

  • Remember, mobile cloud, because of the iPhone and the App Store, 5 million applications and counting emerged. There weren't 5 million mainframe applications. There weren't 5 million workstation applications. There weren't 5 million PC applications. And because it was so easy to develop and deploy amazing applications part cloud, part on the mobile device and so easy to distribute because of app stores, the same exact thing is now happening to AI.

  • In no computing era did 1 computing platform reach 150 million people in 60, 90 days the way ChatGPT has. I mean, this is quite an extraordinary thing. And people are using it to create all kinds of things. And so I think that what you're seeing now is just a torrent of new companies and new applications that are emerging. There's no question this is, in every way, a new computing era. And so I think the TAM that we explained and expressed, it really is even more realizable today, and sooner than before.

  • Operator

  • Your next question comes from the line of Stacy Rasgon with Bernstein.

  • Stacy Aaron Rasgon - Senior Analyst

  • I have a clarification and then a question both for Colette. The clarification, you said H100 revenue's higher than A100. Was that an overall statement? Or was that at the same point in time like after 2 quarters of shipments?

  • And then for my actual question. I wanted to ask about auto, specifically the Mercedes opportunity. The Mercedes had an event today, and they were talking about software revenues for their MB Drive that could be single digit or low billion euros by mid-decade and mid-billion euros by the end of the decade. And I know you guys were supposedly splitting the software revenues 50-50. Is that kind of the order of magnitude of software revenues from the Mercedes deal that you guys are thinking of and over that similar time frame? Is that how we should be modeling that?

  • Colette M. Kress - Executive VP & CFO

  • Great. Thanks, Stacy, for the question. Let me first start with your question about H100 and A100. We began initial shipments of H100 back in Q3. It was a great start. Many of our customers began that process many quarters ago, and Q3 was the time for us to get production-level product to them. So Q4 was an important time for us to see the great ramp of H100 that we saw. What that means is our H100 was the focus of many of our CSPs within Q4, and they were all wanting to get up and running in cloud instances. And so we actually saw less of A100 in Q4 than what we saw of H100, which was at a larger amount. We intend to continue to sell both architectures going forward, but just in Q4, it was a strong quarter for [H100].

  • Your additional question that you had was on Mercedes-Benz. I'm very pleased with the joint connection that we have with them and the work. We've been working very diligently on getting ready to come to market. And you're right. They did talk about the software opportunity. They talked about their software opportunity in 2 phases, about what they can do with DRIVE as well as what they can also do with Connect. They extended out to a position of probably about 10 years, looking at the opportunity that they see in front of us. So it aligns with what our thoughts are with a long-term partner and sharing that revenue over time.

  • Jensen Huang - Co-Founder, CEO, President & Director

  • If I could add one thing, Stacy, to say something about the wisdom of what Mercedes is doing: this is the only large luxury brand that has decided, across the board, from the entry all the way to the highest end of their luxury cars, to install every single one of them with a rich sensor set and every single one of them with an AI supercomputer, so that every future car in the Mercedes fleet will contribute to an installed base that could be upgradable and forever renewed for customers going forward. If you could just imagine what it looks like if the entire Mercedes fleet that is on the road today were completely programmable, that you could update OTA, it would represent tens of millions of Mercedes that would represent revenue-generating opportunity. And that's the vision that Ola has. And what they're building for, I think, is going to be extraordinary: the large installed base of luxury cars that will continue to renew for customers' benefits and also for revenue-generating benefits.

  • Operator

  • Your next question comes from the line of Mark Lipacis with Jefferies.

  • Mark John Lipacis - MD & Senior Equity Research Analyst

  • I think for you, Jensen, it seems like every year a new workload comes out and drives demand for your processors or your ecosystem's cycles. And if I think back to facial recognition and then recommendation engines, natural language processing, Omniverse and now Generative AI engines, can you share with us your view? Is this what we should expect going forward, like a brand-new workload that drives demand to the next level for your products?

  • And the reason I ask is because I found your comments in your script interesting, where you mentioned that your view about the demand that Generative AI is going to drive for your products and now services seems to be a lot better than what you thought just over the last 90 days. So, to the extent that there are new workloads that you're working on or new applications that can drive next levels of demand, would you care to share with us a little bit of what you think could drive it past what you're seeing today?

  • Jensen Huang - Co-Founder, CEO, President & Director

  • Yes, Mark, I really appreciate the question. First of all, I have new applications that you don't know about and new workloads that we've never shared that I would like to share with you at GTC. And so that's my hook to come to GTC, and I think you're going to be very surprised and quite delighted by the applications that we're going to talk about.

  • Now there's a reason why it is the case that you're constantly hearing about new applications. The reason for that is, number one, NVIDIA is a multi-domain accelerated computing platform. It is not completely general purpose like a CPU, because a CPU is 95%, 98% control functions and only 2% mathematics, which makes it completely flexible. We're not that way. We're an accelerated computing platform that works with the CPU and offloads the really heavy computing units, things that could be highly, highly parallelized, to offload them. But we're multi-domain. We could do particle systems. We could do fluids. We could do neurons. And we can do computer graphics. We can do [rays]. There are all kinds of different applications that we can accelerate, number one.

  • Number two, our installed base is so large. This is the only accelerated computing platform, the only platform -- literally, the only one -- that is architecturally compatible across every single cloud, from PCs to workstations, gamers to cars to on-prem. Every single computer is architecturally compatible, which means that a developer who developed something special would seek out our platform because they like the reach. They like the universal reach. They like the acceleration, number one. They like the ecosystem of programming tools and the ease of using it, and the fact that they have so many people they can reach out to, to help them. There are millions of CUDA experts around the world, software all accelerated, tools all accelerated. And then very importantly, they like the reach. They like the fact that they can reach so many users after they develop the software. And it is the reason why we just keep attracting new applications.

  • And then finally, this is a very important point. Remember, the rate of CPU computing advance has slowed tremendously. Whereas back in the first 30 years of my career, performance improved 10x at about the same power every 5 years, that rate of continued advance has slowed. And this at a time when people still have really, really urgent applications that they would like to bring to the world, and they can't afford to do that with power consumption continuing to go up. Everybody needs to be sustainable. You can't continue to consume power. By accelerating it, we can decrease the amount of power you use for any workload. And so all of these multitude of reasons is really driving people to use accelerated computing, and we keep discovering new exciting applications.

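One way to see the sustainability point above is that energy is power multiplied by time, so a large cut in runtime can outweigh a higher instantaneous power draw. A small sketch with made-up numbers (illustrative only, not NVIDIA measurements):

```python
# Energy = power x time: an accelerated machine can draw more power yet
# consume far less total energy if it finishes the workload much sooner.

def energy_kwh(power_kw: float, hours: float) -> float:
    return power_kw * hours

# Assumed figures for one workload, for illustration only.
cpu_only = energy_kwh(power_kw=1.0, hours=100.0)   # 100 kWh
accelerated = energy_kwh(power_kw=3.0, hours=5.0)  # 20x faster at 3x power

print(f"CPU-only:    {cpu_only:.0f} kWh")
print(f"Accelerated: {accelerated:.0f} kWh")
```

Here the accelerated machine draws 3x the power but finishes 20x sooner, so the workload consumes 15 kWh instead of 100 kWh.
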
  • Operator

  • Your next question comes from the line of Atif Malik with Citi.

  • Atif Malik - Director & Semiconductor Capital Equipment and Specialty Semiconductor Analyst

  • Colette, I have a question on data center. You saw some weakness on build plan in the January quarter, but you're guiding to year-over-year acceleration in April and through the year. So if you can just rank order for us the confidence in the acceleration, is that based on your H100 ramp or Generative AI sales coming through or the new AI services model? And also, if you can talk about what you're seeing on the enterprise vertical?

  • Colette M. Kress - Executive VP & CFO

  • Sure. Thanks for the question. When we think about our growth, yes, we're going to grow sequentially in Q1 and do expect year-over-year growth in Q1 as well. It will likely accelerate there going forward. So what do we see as the drivers of that? Yes, we have multiple product cycles coming to market. We have H100 in market now. We are continuing with our new launches as well, which are sometimes fueled with our GPU computing together with our networking. And then we have Grace coming, likely in the second half of the year.

  • Additionally, Generative AI has definitely sparked interest among our customers, whether those be CSPs, whether those be enterprises, whether those be start-ups. We expect that to be a part of our revenue growth this year. And then lastly, let's just not forget that, given the end of Moore's Law, there's an era here of focusing on AI, focusing on accelerated computing. So as the economy improves, this is probably very important to the enterprises, and it can be fueled by the existence of cloud first for the enterprises as they (inaudible). I'm going to turn it to Jensen to see if he has any additional things he'd like to add.

  • Jensen Huang - Co-Founder, CEO, President & Director

  • No, you did great. That was great.

  • Operator

  • Your last question today comes from the line of Joseph Moore with Morgan Stanley.

  • Joseph Lawrence Moore - Executive Director

  • Jensen, you talked about the sort of 1 million times improvement in your ability to train these models over the last decade. Can you give us some insight into what that looks like in the next few years, to the extent that some of your customers with these large language models are talking about 100x the complexity over that kind of time frame? I know Hopper has 6x better transformer performance. But what can you do to scale that up? And how much of that just reflects that it's going to be a much larger hardware expense down the road?

  • Jensen Huang - Co-Founder, CEO, President & Director

  • First, I'll start backwards. I believe the number of AI infrastructures are going to grow all over the world. And the reason for that is AI, the production of intelligence, is going to be manufacturing. There was a time when people manufacture just physical goods. In the future, there will be -- almost every company will manufacture soft goods. It just happens to be in the form of intelligence. Data comes in. That data center does exactly 1 thing and 1 thing only. It cranks on that data and it produces a new updated model. Where raw material comes in, a building or an infrastructure cranks on it, and something refined or improved comes out that is of great value, that's called the factory. And so I expect to see AI factories all over the world. Some of it will be hosted in cloud. Some of it will be on-prem. There will be some that are large, and there are some that will be mega large, and then there'll be some that are smaller. And so I fully expect that to happen, number one.

  • Number two. Over the course of the next 10 years, I hope, through new chips, new interconnects, new systems, new operating systems, new distributed computing algorithms and new AI algorithms, and working with developers coming up with new models, I believe we're going to accelerate AI by another million x. There are a lot of ways for us to do that. And that's one of the reasons why NVIDIA is not just a chip company: the problem we're trying to solve is just too complex. You have to think across the entire stack, all the way from the chip, through the data center, across the network, through the software. And within one single company, we can think across that entire stack. It's really quite a great playground for computer scientists for that reason, because we can innovate across that entire stack. So my expectation is that you're going to see really gigantic breakthroughs in AI models -- in AI platforms -- in the coming decade. But simultaneously, because of the incredible growth and adoption of this, you're going to see these AI factories everywhere.

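The "another million x" claim above rests on gains compounding multiplicatively across every layer of the stack rather than coming from any single layer. The sketch below illustrates that arithmetic with hypothetical round-number factors (these are not NVIDIA's figures; the per-layer gains are invented for illustration only):

```python
# Illustrative only: how independent speedups at each layer of the stack
# can compound to a ~1,000,000x total gain over a decade.
# Every per-layer factor below is a hypothetical round number.

layer_gains = {
    "chips": 10.0,              # hypothetical 10x from silicon
    "interconnects": 4.0,       # hypothetical 4x from networking
    "systems_software": 5.0,    # hypothetical 5x from OS / scheduling
    "distributed_algorithms": 10.0,  # hypothetical 10x from scale-out
    "ai_algorithms": 500.0,     # hypothetical 500x from model advances
}

# Total speedup is the product of the per-layer gains.
total = 1.0
for layer, gain in layer_gains.items():
    total *= gain

print(f"compounded speedup: {total:,.0f}x")  # 10*4*5*10*500 = 1,000,000x
```

The point of the sketch is that no single layer needs a million-fold gain; modest multipliers at five layers are enough, which is why the answer stresses innovating "across that entire stack."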

  • Operator

  • This concludes our Q&A session. I will now turn the call back over to Jensen Huang for closing remarks.

  • Jensen Huang - Co-Founder, CEO, President & Director

  • Thank you. The accumulation of breakthroughs from transformers, large language models and Generative AI has elevated the capability and versatility of AI to a remarkable level. A new computing platform has emerged. New companies, new applications and new solutions to long-standing challenges are being invented at an astounding rate. Enterprises in just about every industry are activating to apply Generative AI to reimagine their products and businesses.

  • The level of activity around AI, which was already high, has accelerated significantly. This is the moment we've been working towards for over a decade. And we are ready. Our Hopper AI supercomputer with the new transformer engine and Quantum InfiniBand fabric is in full production, and CSPs are racing to open their Hopper cloud services. As we work to meet the strong demand for our GPUs, we look forward to accelerating growth through the year.

  • Don't miss the upcoming GTC. We have much to tell you about new chips, systems and software, new CUDA applications and customers, new ecosystem partners and a lot more on NVIDIA AI and Omniverse. This will be our best GTC yet. See you there.

  • Operator

  • This concludes today's conference. You may now disconnect.
