NVIDIA (NVDA) Q1 FY2025 Earnings Call Transcript

Summary

Driven by strong growth in its Data Center business, NVIDIA reported record revenue for the first quarter of fiscal 2025. The company introduced new products such as the H200 GPU and the Blackwell platform, which are expected to drive further growth. NVIDIA is expanding its footprint in sovereign AI initiatives around the world and anticipates strong demand for its products.

The outlook for the second quarter is upbeat, with revenue expected to be $28 billion. The company is leading the shift to accelerated computing and AI factories, with a focus on generative AI technologies. NVIDIA's accelerated computing architecture can handle every stage of the data pipeline, giving data centers the lowest total cost of ownership.

The company is seeing strong demand for its GB200 systems and continues to advance its networking technologies.

Full Transcript

  • Operator

  • Good afternoon. My name is Regina, and I will be your conference operator today. At this time, I would like to welcome everyone to NVIDIA's First Quarter Earnings Call. (Operator Instructions)

  • Simona Jankowski, you may begin your conference.

  • Simona Jankowski - VP of IR

  • Thank you. Good afternoon, everyone, and welcome to NVIDIA's Conference Call for the First Quarter of Fiscal 2025. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer.

  • I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the second quarter of fiscal 2025.

  • The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without our prior written consent.

  • During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, May 22, 2024, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements.

  • During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website.

  • Let me highlight some upcoming events. On Sunday, June 2, ahead of the Computex Technology trade show in Taiwan, Jensen will deliver a keynote which will be held in person in Taipei as well as streamed live. And on June 5, we will present at the Bank of America Technology Conference in San Francisco.

  • With that, let me turn the call over to Colette.

  • Colette M. Kress - Executive VP & CFO

  • Thanks, Simona. Q1 was another record quarter. Revenue of $26 billion was up 18% sequentially and up 262% year-on-year and well above our outlook of $24 billion.

  • Starting with Data Center. Data Center revenue of $22.6 billion was a record, up 23% sequentially and up 427% year-on-year, driven by continued strong demand for the NVIDIA Hopper GPU computing platform. Compute revenue grew more than 5x and networking revenue more than 3x from last year.

  • Strong sequential Data Center growth was driven by all customer types, led by enterprise and consumer Internet companies. Large cloud providers continue to drive strong growth as they deploy and ramp NVIDIA AI infrastructure at scale and represented the mid-40s as a percentage of our Data Center revenue.

  • Training and inferencing AI on NVIDIA CUDA is driving meaningful acceleration in cloud rental revenue growth, delivering an immediate and strong return on cloud providers' investment. For every $1 spent on NVIDIA AI infrastructure, cloud providers have an opportunity to earn $5 in GPU instance hosting revenue over 4 years.

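  • To put the hosting economics above in concrete terms, here is a rough back-of-the-envelope sketch; only the 5-to-1 revenue-to-spend ratio over 4 years comes from the remarks, and the $10 million outlay is a hypothetical figure chosen purely for illustration:

```python
# Back-of-the-envelope sketch of the cloud-provider hosting economics above.
# Only the 5:1 ratio over 4 years is from the call; the outlay is hypothetical.
infra_spend = 10_000_000     # hypothetical spend on NVIDIA AI infrastructure ($)
revenue_per_dollar = 5       # $5 of GPU instance hosting revenue per $1 spent
years = 4

total_revenue = infra_spend * revenue_per_dollar
print(f"4-year hosting revenue: ${total_revenue:,}")     # $50,000,000
print(f"Average per year: ${total_revenue // years:,}")  # $12,500,000
```
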
  • NVIDIA's rich software stack and ecosystem and tight integration with cloud providers make it easy for end customers to get up and running on NVIDIA GPU instances in the public cloud.

  • For cloud rental customers, NVIDIA GPUs offer the best time-to-train models, the lowest cost to train models and the lowest cost to inference large language models. For public cloud providers, NVIDIA brings customers to their cloud, driving revenue growth and returns on their infrastructure investments. Leading LLM companies such as OpenAI, Adept, Anthropic, Character.ai, Cohere, Databricks, DeepMind, Meta, Mistral, xAI, and many others are building on NVIDIA AI in the cloud.

  • Enterprises drove strong sequential growth in Data Center this quarter. We supported Tesla's expansion of their training AI cluster to 35,000 H100 GPUs. Their use of NVIDIA AI infrastructure paved the way for the breakthrough performance of FSD version 12, their latest autonomous driving software based on Vision.

  • Video transformers, while consuming significantly more computing, are enabling dramatically better autonomous driving capabilities and propelling significant growth for NVIDIA AI infrastructure across the automotive industry. We expect automotive to be our largest enterprise vertical within Data Center this year, driving a multibillion-dollar revenue opportunity across on-prem and cloud consumption.

  • Consumer Internet companies are also a strong growth vertical. A big highlight this quarter was Meta's announcement of Llama 3, their latest large language model, which was trained on a cluster of 24,000 H100 GPUs. Llama 3 powers Meta AI, a new AI assistant available on Facebook, Instagram, WhatsApp, and Messenger. Llama 3 is openly available and has kickstarted a wave of AI development across industries.

  • As generative AI makes its way into more consumer Internet applications, we expect to see continued growth opportunities as inference scales both with model complexity as well as with the number of users and number of queries per user, driving much more demand for AI compute.

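  • As a sketch of how the scaling just described compounds, the figures below are hypothetical; the point is only that token demand multiplies across users, queries per user, and tokens per query:

```python
# Hypothetical illustration of inference-demand scaling: token volume grows
# multiplicatively with users, queries per user, and tokens per query.
users = 10_000_000       # hypothetical daily active users
queries_per_user = 20    # hypothetical queries per user per day
tokens_per_query = 500   # hypothetical tokens generated per query

tokens_per_day = users * queries_per_user * tokens_per_query
print(f"Tokens per day: {tokens_per_day:,}")  # 100,000,000,000
print(f"Average tokens per second: {tokens_per_day / 86_400:,.0f}")
```
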
  • In our trailing 4 quarters, we estimate that inference drove about 40% of our Data Center revenue. Both training and inference are growing significantly.

  • Large clusters like the ones built by Meta and Tesla are examples of the essential infrastructure for AI production, what we refer to as AI factories. These next-generation data centers host advanced full-stack accelerated computing platforms where the data comes in and intelligence comes out. In Q1, we worked with over 100 customers building AI factories ranging in size from hundreds to tens of thousands of GPUs, with some reaching 100,000 GPUs.

  • From a geographic perspective, Data Center revenue continues to diversify as countries around the world invest in sovereign AI. Sovereign AI refers to a nation's capabilities to produce artificial intelligence using its own infrastructure, data, workforce, and business networks.

  • Nations are building up domestic computing capacity through various models. Some are procuring and operating sovereign AI clouds in collaboration with state-owned telecommunication providers or utilities. Others are sponsoring local cloud partners to provide a shared AI computing platform for public and private sector use. For example, Japan plans to invest more than $740 million in key digital infrastructure providers, including KDDI, Sakura Internet, and SoftBank to build out the nation's sovereign AI infrastructure. France-based Scaleway, a subsidiary of the Iliad Group, is building Europe's most powerful cloud native AI supercomputer. In Italy, Swisscom Group will build the nation's first and most powerful NVIDIA DGX-powered supercomputer to develop the first LLM natively trained in the Italian language. And in Singapore, the National Supercomputer Centre is getting upgraded with NVIDIA Hopper GPUs, while Singtel is building NVIDIA's accelerated AI factories across Southeast Asia.

  • NVIDIA's ability to offer end-to-end compute and networking technologies, full-stack software, AI expertise, and a rich ecosystem of partners and customers allows sovereign AI and regional cloud providers to jumpstart their countries' AI ambitions. From nothing the previous year, we believe sovereign AI revenue can approach the high single-digit billions this year. The importance of AI has caught the attention of every nation.

  • We ramped new products designed specifically for China that don't require export control license. Our Data Center revenue in China is down significantly from the level prior to the imposition of the new export control restrictions in October. We expect the market in China to remain very competitive going forward.

  • From a product perspective, the vast majority of compute revenue was driven by our Hopper GPU architecture. Demand for Hopper during the quarter continues to increase. Thanks to CUDA algorithm innovations, we've been able to accelerate LLM inference on H100 by up to 3x, which can translate to a 3x cost reduction for serving popular models like Llama 3.

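  • The 3x cost claim follows directly from throughput: if the same server generates 3x the tokens per second, cost per token falls by the same factor. A minimal sketch, where the hourly server rate and baseline throughput are hypothetical and only the 3x speedup is from the remarks:

```python
# Why a 3x throughput gain is a 3x serving-cost reduction.
server_cost_per_hour = 40.0   # hypothetical $/hour for an H100 server
baseline_tps = 1_000.0        # hypothetical tokens/second before optimization
speedup = 3.0                 # CUDA algorithm speedup cited above

def cost_per_million_tokens(tps: float) -> float:
    return server_cost_per_hour / (tps * 3_600) * 1_000_000

print(f"Before: ${cost_per_million_tokens(baseline_tps):.2f} per 1M tokens")
print(f"After:  ${cost_per_million_tokens(baseline_tps * speedup):.2f} per 1M tokens")
```
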
  • We started sampling the H200 in Q1 and are currently in production with shipments on track for Q2. The first H200 system was delivered by Jensen to Sam Altman and the team at OpenAI and powered their amazing GPT-4o demos last week.

  • H200 nearly doubles the inference performance of H100, delivering significant value for production deployments. For example, using Llama 3 with 70 billion parameters, a single NVIDIA HGX H200 server can deliver 24,000 tokens per second, supporting more than 2,400 users at the same time. That means for every $1 spent on NVIDIA HGX H200 servers at current prices per token, an API provider serving Llama 3 tokens can generate $7 in revenue over 4 years.

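  • Working through the H200 figures above: the throughput, concurrent users, and $7-per-$1 multiple come from the remarks, while the server price below is a hypothetical placeholder used only to back out the implied token price:

```python
# Worked version of the HGX H200 serving example above.
tokens_per_second = 24_000   # single HGX H200 server on Llama 3 70B (from call)
concurrent_users = 2_400     # from the call
revenue_multiple = 7         # $7 of revenue per $1 of server spend over 4 years
server_price = 300_000.0     # hypothetical HGX H200 server price ($)

per_user_rate = tokens_per_second / concurrent_users        # 10 tokens/s per user
lifetime_tokens = tokens_per_second * 4 * 365 * 24 * 3_600  # ~3.0e12 over 4 years
implied_price = server_price * revenue_multiple / (lifetime_tokens / 1e6)
print(f"{per_user_rate:.0f} tokens/s per user")
print(f"Implied price: ${implied_price:.2f} per million tokens")
```
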
  • With ongoing software optimizations, we continue to improve the performance of NVIDIA AI infrastructure for serving AI models.

  • While supply for H100 grew, we are still constrained on H200. At the same time, Blackwell is in full production. We are working to bring up our system and cloud partners for global availability later this year. Demand for H200 and Blackwell is well ahead of supply, and we expect demand may exceed supply well into next year.

  • Grace Hopper Superchip is shipping in volume. Last week at the International Supercomputing Conference, we announced that 9 new supercomputers worldwide are using Grace Hopper for a combined 200 exaflops of energy-efficient AI processing power delivered this year. These include the Alps Supercomputer at the Swiss National Supercomputing Centre, the fastest AI supercomputer in Europe; Isambard-AI at the University of Bristol in the U.K.; and JUPITER in the Jülich Supercomputing Centre in Germany. We are seeing an 80% attach rate of Grace Hopper in supercomputing due to its high energy efficiency and performance. We are also proud to see supercomputers powered with Grace Hopper take the #1, the #2, and the #3 spots of the most energy-efficient supercomputers in the world.

  • Strong networking year-on-year growth was driven by InfiniBand. We experienced a modest sequential decline, which was largely due to the timing of supply, with demand well ahead of what we were able to ship. We expect networking to return to sequential growth in Q2.

  • In the first quarter, we started shipping our new Spectrum-X Ethernet networking solution optimized for AI from the ground up. It includes our Spectrum-4 switch, BlueField-3 DPU, and new software technologies to overcome the challenges of AI on Ethernet to deliver 1.6x higher networking performance for AI processing compared with traditional Ethernet. Spectrum-X is ramping in volume with multiple customers, including a massive 100,000 GPU cluster. Spectrum-X opens a brand-new market to NVIDIA networking and enables Ethernet-only data centers to accommodate large-scale AI. We expect Spectrum-X to jump to a multibillion-dollar product line within a year.

  • At GTC in March, we launched our next-generation AI factory platform, Blackwell. The Blackwell GPU architecture delivers up to 4x faster training and 30x faster inference than the H100 and enables real-time generative AI on trillion-parameter large language models. Blackwell is a giant leap with up to 25x lower TCO and energy consumption than Hopper. The Blackwell platform includes the fifth-generation NVLink with a multi-GPU spine and new InfiniBand and Ethernet switches, the X800 series designed for a trillion-parameter scale AI.

  • Blackwell is designed to support data centers universally, from hyperscale to enterprise, training to inference, x86 to Grace CPUs, Ethernet to InfiniBand networking, and air cooling to liquid cooling. Blackwell will be available in over 100 OEM and ODM systems at launch, more than double the number at Hopper's launch and representing every major computer maker in the world. This will support fast and broad adoption across customer types, workloads, and data center environments in first-year shipments. Blackwell time-to-market customers include Amazon, Google, Meta, Microsoft, OpenAI, Oracle, Tesla, and xAI.

  • We announced a new software product with the introduction of NVIDIA Inference Microservices, or NIM. NIM provides secure and performance-optimized containers powered by NVIDIA CUDA acceleration in network computing and inference software, including Triton Inference Server and TensorRT-LLM, with industry-standard APIs for a broad range of use cases, including large language models for text, speech, imaging, vision, robotics, genomics, and digital biology. They enable developers to quickly build and deploy generative AI applications using leading models from NVIDIA, AI21, Adept, Cohere, Getty Images, and Shutterstock, and open models from Google, Hugging Face, Meta, Microsoft, Mistral AI, Snowflake, and Stability AI. NIMs will be offered as part of our NVIDIA AI Enterprise software platform for production deployment in the cloud or on-prem.

  • Moving to gaming and AI PCs. Gaming revenue of $2.65 billion was down 8% sequentially and up 18% year-on-year, consistent with our outlook for a seasonal decline. Market reception of the GeForce RTX SUPER GPUs is strong, and end demand and channel inventory remained healthy across the product range.

  • From the very start of our AI journey, we equipped GeForce RTX GPUs with CUDA Tensor cores. Now with an installed base of over 100 million, GeForce RTX GPUs are perfect for gamers, creators, and AI enthusiasts, and offer unmatched performance for running generative AI applications on PCs.

  • NVIDIA has a full technology stack for deploying and running fast and efficient generative AI inference on GeForce RTX PCs. TensorRT-LLM now accelerates Microsoft's Phi-3 Mini model and Google's Gemma 2B and 7B models as well as popular AI frameworks, including LangChain and LlamaIndex.

  • Yesterday, NVIDIA and Microsoft announced AI performance optimizations for Windows to help run LLMs up to 3x faster on NVIDIA GeForce RTX AI PCs. And top game developers, including NetEase Games, Tencent and Ubisoft are embracing NVIDIA Avatar Character Engine (sic) [Avatar Cloud Engine] to create lifelike avatars to transform interactions between gamers and non-playable characters.

  • Moving to ProViz. Revenue of $427 million was down 8% sequentially and up 45% year-on-year. We believe generative AI and Omniverse industrial digitalization will drive the next wave of professional visualization growth. At GTC, we announced new Omniverse Cloud APIs to enable developers to integrate Omniverse industrial digital twin and simulation technologies into their applications. Some of the world's largest industrial software makers are adopting these APIs, including ANSYS, Cadence, 3DXCITE (a Dassault Systèmes brand), and Siemens. And developers can use them to stream industrial digital twins with spatial computing devices such as Apple Vision Pro. Omniverse Cloud APIs will be available on Microsoft Azure later this year.

  • Companies are using Omniverse to digitalize their workflows. Omniverse-powered digital twins enabled Wistron, one of our manufacturing partners, to reduce end-to-end production cycle times by 50% and defect rates by 40%. And BYD, the world's largest electric vehicle maker, is adopting Omniverse for virtual factory planning and retail configurations.

  • Moving to automotive. Revenue was $329 million, up 17% sequentially and up 11% year-on-year. Sequential growth was driven by the ramp of AI Cockpit solutions with global OEM customers and strength in our self-driving platforms. Year-on-year growth was driven primarily by self-driving.

  • We supported Xiaomi in the successful launch of its first electric vehicle, the SU7 sedan built on the NVIDIA DRIVE Orin, our AI car computer for software-defined AV fleets.

  • We also announced a number of new design wins for NVIDIA DRIVE Thor, the successor to Orin, powered by the new NVIDIA Blackwell architecture, with several leading EV makers, including BYD, XPeng, GAC's Aion Hyper, and Nuro. DRIVE Thor is slated for production vehicles starting next year.

  • Okay, moving to the rest of the P&L. GAAP gross margin expanded sequentially to 78.4% and non-GAAP gross margin to 78.9%, driven by lower inventory charges. As noted last quarter, both Q4 and Q1 benefited from favorable component costs.

  • Sequentially, GAAP operating expenses were up 10% and non-GAAP operating expenses were up 13%, primarily reflecting higher compensation-related costs and increased compute and infrastructure investments.

  • In Q1, we returned $7.8 billion to shareholders in the form of share repurchases and cash dividends. Today, we announced a 10-for-1 split of our shares with June 10 as the first day of trading on a split-adjusted basis. We are also increasing our dividend by 150%.

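  • The mechanics of the split and the dividend increase work out as follows; the pre-split price and per-share dividend below are hypothetical placeholders, with only the 10-for-1 ratio and the 150% increase taken from the remarks:

```python
# Mechanics of a 10-for-1 split combined with a 150% dividend increase.
split_ratio = 10
pre_split_price = 1_000.00   # hypothetical pre-split share price ($)
pre_split_dividend = 0.04    # hypothetical quarterly dividend per share ($)

post_split_price = pre_split_price / split_ratio
# The 150% increase applies per pre-split share; the split then divides by 10.
post_split_dividend = pre_split_dividend * 2.5 / split_ratio
print(f"Post-split price: ${post_split_price:.2f}")  # $100.00
print(f"Post-split quarterly dividend: ${post_split_dividend:.3f} per share")
```
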
  • Let me turn to the outlook for the second quarter. Total revenue is expected to be $28 billion, plus or minus 2%. We expect sequential growth in all market platforms.

  • GAAP and non-GAAP gross margins are expected to be 74.8% and 75.5%, respectively, plus or minus 50 basis points, consistent with our discussion last quarter. For the full year, we expect gross margins to be in the mid-70s percent range.

  • GAAP and non-GAAP operating expenses are expected to be approximately $4 billion and $2.8 billion, respectively. Full year OpEx is expected to grow in the low 40% range.

  • GAAP and non-GAAP other income and expenses are expected to be an income of approximately $300 million, excluding gains and losses from nonaffiliated investments.

  • GAAP and non-GAAP tax rates are expected to be 17%, plus or minus 1%, excluding any discrete items.

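  • Expanding the guidance above into explicit low and high bounds:

```python
# Q2 FY25 guidance ranges implied by the figures above.
revenue_mid = 28e9                                  # $28B, plus or minus 2%
gross_margins = {"GAAP": 0.748, "non-GAAP": 0.755}  # plus or minus 50 bps

low, high = revenue_mid * 0.98, revenue_mid * 1.02
print(f"Revenue: ${low / 1e9:.2f}B to ${high / 1e9:.2f}B")
for basis, mid in gross_margins.items():
    print(f"{basis} gross margin: {mid - 0.005:.1%} to {mid + 0.005:.1%}")
```
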
  • Further financial details are included in the CFO commentary and other information available on our IR website.

  • I would like to now turn it over to Jensen as he would like to make a few comments.

  • Jensen Huang

  • Thanks, Colette. The industry is going through a major change. Before we start Q&A, let me give you some perspective on the importance of the transformation.

  • The next industrial revolution has begun. Companies and countries are partnering with NVIDIA to shift the trillion-dollar installed base of traditional data centers to accelerated computing and build a new type of data center, AI factories, to produce a new commodity, artificial intelligence. AI will bring significant productivity gains to nearly every industry and help companies be more cost and energy efficient while expanding revenue opportunities.

  • CSPs were the first generative AI movers. With NVIDIA, CSPs accelerated workloads to save money and power. The tokens generated by NVIDIA Hopper drive revenues for their AI services. And NVIDIA cloud instances attract rental customers from our rich ecosystem of developers.

  • Strong and accelerating demand for generative AI training and inference on the Hopper platform propels our Data Center growth. Training continues to scale as models learn to be multimodal, understanding text, speech, images, video, and 3D and learn to reason and plan.

  • Our inference workloads are growing incredibly. With generative AI, inference, which is now about fast token generation at massive scale, has become incredibly complex. Generative AI is driving a from-foundation-up full stack computing platform shift that will transform every computer interaction.

  • From today's information retrieval model, we are shifting to an answers and skills generation model of computing. AI will understand context and our intentions, be knowledgeable, reason, plan and perform tasks. We are fundamentally changing how computing works and what computers can do, from general purpose CPU to GPU accelerated computing, from instruction-driven software to intention-understanding models, from retrieving information to performing skills and, at the industrial level, from producing software to generating tokens, manufacturing digital intelligence. Token generation will drive a multiyear build-out of AI factories.

  • Beyond cloud service providers, generative AI has expanded to consumer Internet companies and enterprise, sovereign AI, automotive, and health care customers, creating multiple multibillion-dollar vertical markets.

  • The Blackwell platform is in full production and forms the foundation for trillion-parameter scale generative AI. The combination of Grace CPU, Blackwell GPUs, NVLink, Quantum and Spectrum networking switches, high-speed interconnects and a rich ecosystem of software and partners lets us expand and offer a richer and more complete solution for AI factories than previous generations. Spectrum-X opens a brand-new market for us to bring large-scale AI to Ethernet-only data centers. And NVIDIA NIMs is our new software offering that delivers enterprise-grade optimized generative AI to run on CUDA everywhere from the cloud to on-prem data centers to RTX AI PCs through our expansive network of ecosystem partners.

  • From Blackwell to Spectrum-X to NIMs, we are poised for the next wave of growth. Thank you.

  • Simona Jankowski - VP of IR

  • Thank you, Jensen. We will now open the call for questions. Operator, could you please poll for questions?

  • Operator

  • (Operator Instructions) Your first question comes from the line of Stacy Rasgon with Bernstein.

  • Stacy Aaron Rasgon - Senior Analyst

  • My first one, I wanted to drill a little bit into the Blackwell comment that it's in full production now. What does that suggest with regard to shipments and delivery timing if that product is -- it doesn't sound like it's sampling anymore. What does that mean when that's actually in customers' hands if it's in production now?

  • Jensen Huang

  • We will be shipping -- well, we've been in production for a little bit of time. But our production shipments will start in Q2 and ramp in Q3, and customers should have data centers stood up in Q4.

  • Stacy Aaron Rasgon - Senior Analyst

  • Got it. So this year, we will see Blackwell revenue, it sounds like.

  • Jensen Huang

  • We will see a lot of Blackwell revenue this year.

  • Operator

  • Our next question will come from the line of Timothy Arcuri with UBS.

  • Timothy Michael Arcuri - MD and Head of Semiconductors & Semiconductor Equipment

  • I wanted to ask, Jensen, about the deployment of Blackwell versus Hopper just given the system's nature and all the demand for GB that you have. How does the deployment of this stuff differ from Hopper? I guess I ask because liquid cooling at scale hasn't been done before, and there's some engineering challenges both at the node level and within the data center. So do these complexities sort of elongate the transition? And how do you sort of think about how that's all going?

  • Jensen Huang

  • Yes. Blackwell comes in many configurations. Blackwell is a platform, not a GPU. And the platform includes support for air cooled, liquid cooled, x86 and Grace, InfiniBand, now Spectrum-X, and very large NVLink domain that I demonstrated at GTC -- that I showed at GTC. And so for some customers, they will ramp into their existing installed base of data centers that are already shipping Hoppers. They will easily transition from H100 to H200 to B100. And so Blackwell systems have been designed to be backwards compatible, if you will, electrically, mechanically. And of course, the software stack that runs on Hopper will run fantastically on Blackwell.

  • We also have been priming the pump, if you will, with the entire ecosystem, getting them ready for liquid cooling. We've been talking to the ecosystem about Blackwell for quite some time. And the CSPs, the data centers, the ODMs, the system makers, our supply chain; beyond them, the cooling supply chain base, liquid cooling supply chain base, data center supply chain base, no one is going to be surprised with Blackwell coming and the capabilities that we would like to deliver with Grace Blackwell 200. GB200 is going to be exceptional.

  • Operator

  • Our next question will come from the line of Vivek Arya with Bank of America Securities.

  • Vivek Arya - MD in Equity Research & Senior Semiconductor Analyst

  • Jensen, how are you ensuring that there is enough utilization of your products and that there isn't a pull-ahead or a holding behavior because of tight supply, competition or other factors? Basically, what checks have you built in the system to give us confidence that monetization is keeping pace with your really very strong shipment growth?

  • Jensen Huang

  • Well, I guess there's the big picture view that I'll come to, but I'll answer your question directly. The demand for GPUs in all the data centers is incredible. We're racing every single day. And the reason for that is because applications like ChatGPT and GPT-4o, and now it's going to be multi-modality, Gemini and its ramp and Anthropic, and all of the work that's being done at all the CSPs are consuming every GPU that's out there.

  • There's also a long line of generative AI startups, some 15,000, 20,000 startups that are in all different fields, from multimedia to digital characters, of course, all kinds of design tool application, productivity applications, digital biology, the moving of the AV industry to video so that they can train end-to-end models to expand the operating domain of self-driving cars, the list is just quite extraordinary. We're racing actually. Customers are putting a lot of pressure on us to deliver the systems and stand those up as quickly as possible. And of course, I haven't even mentioned all of the sovereign AIs who would like to train all of their regional natural resource of their country, which is their data, to train their regional models. And there's a lot of pressure to stand those systems up. So anyhow, the demand, I think, is really, really high and it outstrips our supply. That's the reason why I jumped in to make a few comments.

  • Longer term, we're completely redesigning how computers work. And this is a platform shift. Of course, it's been compared to other platform shifts in the past. But time will clearly tell that this is much, much more profound than previous platform shifts. And the reason for that is because the computer is no longer an instruction-driven only computer. It's an intention-understanding computer. And it understands, of course, the way we interact with it, but it also understands our meaning, what we intend that we asked it to do. And it has the ability to reason, inference iteratively to process a plan and come back with a solution. And so every aspect of the computer is changing in such a way that instead of retrieving prerecorded files, it is now generating contextually relevant intelligent answers. And so that's going to change computing stacks all over the world.

  • And you can see that, in fact, even the PC computing stack is going to get revolutionized. And this is just the beginning of all the things -- what people see today are the beginning of the things that we're working on in our labs and the things that we're doing with all the startups and large companies and developers all over the world. It's going to be quite extraordinary.

  • Operator

  • Our next question will come from the line of Joe Moore with Morgan Stanley.

  • Joseph Lawrence Moore - Executive Director

  • I understand what you just said about how strong demand is. You have a lot of demand for H200 and for Blackwell products. Do you anticipate any kind of pause with Hopper and H100 as you sort of migrate to those products? Will people wait for those new products, which would be a good product to have? Or do you think there's enough demand for H100 to sustain growth?

  • Jensen Huang

  • We see increasing demand for Hopper through this quarter. And we expect demand to outstrip supply for some time as we now transition to H200, as we transition to Blackwell. Everybody is anxious to get their infrastructure online. And the reason for that is because they're saving money and making money, and they would like to do that as soon as possible.

  • Operator

  • Our next question will come from the line of Toshiya Hari with Goldman Sachs.

  • Toshiya Hari - MD

  • Jensen, I wanted to ask about competition. I think many of your cloud customers have announced new or updates to their existing internal programs, right, in parallel to what they're working on with you guys. To what extent do you consider them competitors, medium to long term? And in your view, do you think they're limited to addressing mostly internal workloads? Or could they be broader in what they address going forward?

  • Jensen Huang

  • We're different in several ways. First, NVIDIA's accelerated computing architecture allows customers to process every aspect of their pipeline, from unstructured data processing to prepare it for training, to structured data processing, data frame processing like SQL to prepare for training, to training to inference. And as I was mentioning in my remarks, inference has really fundamentally changed; it's now generation. It's not trying to just detect the cat, which was plenty hard in itself, but it has to generate every pixel of a cat. And so the generation process is a fundamentally different processing architecture. And it's one of the reasons why TensorRT-LLM was so well received. We improved the performance of the same chips on our architecture by a factor of 3. That kind of tells you something about the richness of our architecture and the richness of our software.

  • So one, you could use NVIDIA for everything, from computer vision to image processing, to computer graphics, to all modalities of computing. And as the world is now suffering from computing cost and computing energy inflation because general-purpose computing has run its course, accelerated computing is really the sustainable way of going forward. So accelerated computing is how you're going to save money in computing, is how you're going to save energy in computing. And so the versatility of our platform results in the lowest TCO for their data centers.

  • Second, we're in every cloud. And so for developers that are looking for a platform to develop on, starting with NVIDIA is always a great choice. And we're on-prem. We're in the cloud. We're in computers of any size and shape. We're practically everywhere. And so that's the second reason.

  • The third reason has to do with the fact that we build AI factories. And this is becoming more apparent to people that AI is not a chip problem only. It starts, of course, with very good chips and we build a whole bunch of chips for our AI factories, but it's a systems problem. In fact, even AI is now a systems problem. It's not just one large language model. It's a complex system of a whole bunch of large language models that are working together. And so the fact that NVIDIA builds this system causes us to optimize all of our chips to work together as a system, to be able to have software that operates as a system, and to be able to optimize across the system.

  • And just to put it in perspective, in simple numbers, if you had a $5 billion infrastructure and you improved the performance by a factor of 2, which we routinely do, when you improve the infrastructure by a factor of 2, the value to you is $5 billion. That's worth more than all the chips in that data center cost. And so the value of it is really quite extraordinary. And this is the reason why today, performance matters in everything. This is at a time when the highest performance is also the lowest cost, because the infrastructure cost of carrying all of these chips costs a lot of money. And it takes a lot of money to fund the data center, to operate the data center, the people that go along with it, the power that goes along with it, the real estate that goes along with it, and all of it adds up. And so the highest performance is also the lowest TCO.

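  • In simple numbers, the example reads as follows; the $5 billion figure and the 2x improvement are from the answer above, and the framing is that doubling throughput on the same infrastructure is worth another full infrastructure's output:

```python
# The 2x-performance arithmetic from the answer above.
infrastructure_cost = 5e9   # $5B infrastructure (from the call)
speedup = 2                 # routine generational performance improvement

incremental_value = infrastructure_cost * (speedup - 1)
print(f"Value of the 2x gain: ${incremental_value / 1e9:.0f}B")  # $5B
```
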
  • Operator

  • Our next question will come from the line of Matt Ramsay with TD Cowen.

  • Matthew D. Ramsay - MD & Senior Research Analyst

  • Jensen, I've been in the data center industry my whole career. I've never seen the velocity at which you guys are introducing new platforms, combined with the performance jumps that you're getting, I mean, 5x in training and, for some of the stuff you talked about at GTC, up to 30x in inference. And it's an amazing thing to watch, but it also creates an interesting juxtaposition, where the current generation of product that your customers are spending billions of dollars on is going to become less competitive with your new stuff much more quickly than the depreciation cycle of that product.

  • So I'd like you to, if you wouldn't mind, speak a little bit about how you're seeing that situation evolve itself with customers. As you move to Blackwell, they're going to have very large installed bases, obviously software compatible, but large installed bases of product that's not nearly as performant as your new generation stuff. And it'd be interesting to hear what you see happening with customers along that path.

  • Jensen Huang

  • Yes. I really appreciate it. Three points that I'd like to make. If you're 5% into the build-out versus if you're 95% into the build-out, you're going to feel very differently. And because you're only 5% into the build-out anyhow, you build as fast as you can. And when Blackwell comes, it's going to be terrific. And then after Blackwell, as you mentioned, we have other Blackwells coming. And then there's a short -- we're in a 1-year rhythm as we've explained to the world. And we want our customers to see our road map for as far as they like, but they're early in their build-out anyways and so they had to just keep on building, okay? And so there's going to be a whole bunch of chips coming at them, and they just got to keep on building and just, if you will, performance-average your way into it. So that's the smart thing to do. They need to make money today. They want to save money today. And time is really, really valuable to them.

  • Let me give you an example of time being really valuable, why this idea of standing up a data center instantaneously is so valuable and getting this thing called time-to-train is so valuable. The reason for that is because the next company who reaches the next major plateau gets to announce a groundbreaking AI. And the second one after that gets to announce something that's 0.3% better. And so the question is, do you want to be repeatedly the company delivering groundbreaking AI or the company delivering 0.3% better? And that's the reason why this race, as in all technology races, the race is so important. And you're seeing this race across multiple companies because this is so vital to have technology leadership, for companies to trust the leadership that want to build on your platform and know that the platform that they're building on is going to get better and better. And so leadership matters a great deal. Time-to-train matters a great deal.

  • Think about the difference time-to-train makes: on a 3-month project, getting started 3 months earlier is everything. And so it's the reason why we're standing up Hopper systems like mad right now, because the next plateau is just around the corner. And so that's the second reason.

  • The first comment that you made is really a great comment, which is how is it that we're moving so fast and advancing them quickly, because we have all the stacks here. We literally build the entire data center and we can monitor everything, measure everything, optimize across everything. We know where all the bottlenecks are. We're not guessing about it. We're not putting up PowerPoint slides that look good. We're actually -- we also like our PowerPoint slides to look good, but we're delivering systems that perform at scale. And the reason why we know they perform at scale is because we built it all here.

  • Now one of the things that we do that's a bit of a miracle is that we build entire AI infrastructure here but then we disaggregate it and integrate it into our customers' data centers however they liked. But we know how it's going to perform and we know where the bottlenecks are. We know where we need to optimize with them, and we know where we have to help them improve their infrastructure to achieve the most performance. This deep intimate knowledge at the entire data center scale is fundamentally what sets us apart today. We build every single chip from the ground up. We know exactly how processing is done across the entire system. And so we understand exactly how it's going to perform and how to get the most out of it with every single generation. So I appreciate it. Those are the 3 points.

  • Operator

  • Your next question will come from the line of Mark Lipacis with Evercore ISI.

  • Mark John Lipacis - Senior MD

  • Jensen, in the past, you've made the observation that general-purpose computing ecosystems typically dominated each computing era. And I believe the argument was that they could adapt to different workloads, get higher utilization and drive the cost of a compute cycle down. And this is a motivation for why you were driving to a general-purpose GPU CUDA ecosystem for accelerated computing. And if I mischaracterized that observation, please do let me know.

  • So the question is, given that the workloads driving demand for your solutions are driven by neural network training and inferencing, which on the surface seem like a limited number of workloads, it might also seem that they lend themselves to custom solutions. And so the question is, does the general-purpose computing framework become more at risk, or is there enough variability or a rapid enough evolution in these workloads to support that historical general-purpose framework?

  • Jensen Huang

  • Yes. NVIDIA's accelerated computing is versatile, but I wouldn't call it general purpose. Like, for example, we wouldn't be very good at running the spreadsheet. That was really designed for general-purpose computing. And the control loop of operating system code is probably fine for general-purpose computing but isn't fantastic for accelerated computing. And so I would say that we're versatile, and that's usually the way I describe it.

  • There's a rich domain of applications that we're able to accelerate over the years, but they all have a lot of commonalities, maybe some deep differences, but commonalities. They're all things that I can run in parallel, they're all heavily threaded. 5% of the code represents 99% of the run-time, for example. Those are all properties of accelerated computing.

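  • The "5% of the code, 99% of the run-time" property is exactly the precondition Amdahl's law sets for acceleration to pay off; a quick check, where invoking the law and the 50x hot-path speedup are editorial assumptions and only the 99% figure is from the answer above:

```python
# Amdahl's-law check on the "5% of the code is 99% of the run-time" property.
hot_fraction = 0.99   # share of run-time in the accelerable hot path (from call)
hot_speedup = 50.0    # hypothetical GPU speedup of the hot path

overall = 1.0 / ((1.0 - hot_fraction) + hot_fraction / hot_speedup)
print(f"Overall speedup: {overall:.1f}x")  # ~33.6x
# Even with an infinite hot-path speedup, the serial 1% caps the gain at 100x,
# which is why workloads without this property gain little from accelerators.
```
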
  • The versatility of our platform and the fact that we design entire systems is the reason why over the course of the last 10 years or so, the number of start-ups that you guys have asked me about in these conference calls is fairly large. And every single one of them struggled, because of the brittleness of their architecture, the moment generative AI came along, or the moment diffusion models came along, or the moment the next models are coming along now. And now all of a sudden, look at this: large language models with memory, because the large language model needs to have memory so it can carry on a conversation with you, understand the context.

  • All of a sudden, the versatility of the Grace memory became super important. And so each one of these advances in generative AI and the advancement of AI really begs for not having a widget that's designed for one model, but having something that is really good for this entire domain, for the properties of this entire domain, but that obeys the first principles of software: that software is going to continue to evolve, that software is going to keep getting better and bigger. We believe in the scaling of these models. There are a lot of good reasons why we're going to scale by easily 1 million times in the coming few years, and we're looking forward to it and we're ready for it.

  • And so the versatility of our platform is really quite key. And if you're too brittle and too specific, you might as well just build an FPGA or you build an ASIC or something like that, but that's hardly a computer.

  • Operator

  • Our next question will come from the line of Blayne Curtis with Jefferies.

  • Blayne Peter Curtis - Equity Analyst

  • I'm actually kind of curious, I mean, being supply constrained, how do you think about -- I mean, you came out with a product for China, H20. I'm assuming there'd be a ton of demand for it, but obviously, you're trying to serve your customers with the other Hopper products. Just kind of curious how you're thinking about that in the second half, if you could elaborate, any impact, what you're thinking for sales as well as gross margin.

    我實際上有點好奇,我的意思是,在供應受限的情況下,你怎麼看——我是說,你們推出了一款針對中國的產品 H20,我想它的需求應該很大,但顯然你們也在努力用其他 Hopper 產品服務客戶。只是有點好奇你們對下半年的想法,如果可以的話,請詳細說明對銷售和毛利率的影響與預期。

  • Jensen Huang

    Jensen Huang

  • I didn't hear your question. Something bleeped out.

    我沒有聽到您的問題,中間有一段聲音被切掉了。

  • Simona Jankowski - VP of IR

    Simona Jankowski - VP of IR

  • H20 and how you're thinking about allocating supply between the different Hopper products.

    H20 以及您如何考慮在不同的 Hopper 產品之間分配供應。

  • Jensen Huang

    Jensen Huang

  • Well, we have customers that we honor, and we do our best for every customer. It is the case that our business in China is substantially lower than the levels of the past, and it's a lot more competitive in China now because of the limitations on our technology. Those things are true. However, we continue to do our best to serve the customers and the markets there, to the best of our ability. But I think overall, the comments that we made about demand outstripping supply are for the entire market, and particularly so for H200 and Blackwell towards the end of the year.

    嗯,我們尊重我們的客戶,也為每一位客戶竭盡全力。確實,我們在中國的業務大幅低於過去的水準,而且由於技術上的限制,現在在中國的競爭更加激烈。這些都是事實。不過,我們會繼續盡最大努力服務當地市場的客戶。但我認為整體而言,我們所說的需求超過供給,指的是整個市場,尤其是年底前的 H200 和 Blackwell。

  • Operator

    Operator

  • Our next question will come from the line of Srini Pajjuri with Raymond James.

    我們的下一個問題來自 Raymond James 的 Srini Pajjuri。

  • Srinivas Reddy Pajjuri - MD

    Srinivas Reddy Pajjuri - MD

  • Jensen, actually, more of a clarification on what you said. On GB200 systems, it looks like there is significant demand. Historically, I think you've sold a lot of HGX boards and some GPUs, and the systems business was relatively small. So I'm just curious, why is it that now you are seeing such strong demand for systems going forward? Is it just the TCO? Or is it something else? Or is it just the architecture?

    Jensen,實際上更多是想請你澄清剛才所說的。GB200 系統,看起來系統的需求非常大。從歷史上看,我認為你們賣出了很多 HGX 板卡和一些 GPU,而系統業務相對較小。所以我很好奇,為什麼現在你們看到對系統如此強勁的需求?只是因為 TCO 嗎?還是別的原因?還是單純因為架構?

  • Jensen Huang

    Jensen Huang

  • Yes. I appreciate that. In fact, the way we sell GB200 is the same. We disaggregate all of the components that make sense, and computer makers integrate them. We have 100 different computer system configurations coming this year for Blackwell, and that is off the charts. Hopper, frankly, had only half that at its peak, and it started out with way less than that even. And so you're going to see liquid-cooled versions, air-cooled versions, x86 versions, Grace versions, so on and so forth. There's a whole bunch of systems being designed, and they're offered by our whole ecosystem of great partners. Nothing has really changed.

    是的,謝謝這個問題。其實我們銷售 GB200 的方式是一樣的。我們把合理的元件拆分開來,由電腦製造商進行整合。今年 Blackwell 將有 100 種不同的電腦系統配置推出,這是破紀錄的。坦白說,Hopper 在巔峰時期也只有一半,而且一開始遠少於此。所以你會看到液冷版本、風冷版本、x86 版本、Grace 版本等等。一大批系統正在設計中,並由我們整個優秀的合作夥伴生態系統提供。一切都沒有真正改變。

  • Now of course, the Blackwell platform has expanded our offering tremendously: the integration of CPUs and a much more compressed density of computing. Liquid cooling is going to save data centers a lot of money in provisioning power, not to mention being more energy efficient. And so it's a much better solution. It's more expansive, meaning that we offer a lot more of the components of a data center, and everybody wins. The data center gets much higher performance: networking from networking switches and, of course, NICs. We have Ethernet now, so that we can bring NVIDIA AI at large scale to customers who only know how to operate Ethernet, because of the ecosystem that they have. And so Blackwell is much more expansive. We have a lot more to offer our customers this generation around. [A rough sketch of the provisioning-power arithmetic follows below.]

    當然,現在 Blackwell 平台大大擴展了我們的產品線:整合了 CPU,運算密度也大幅壓縮。液冷將為資料中心在供電配置上節省大量資金,更不用說更加節能。所以這是一個好得多的解決方案。它的涵蓋面更廣,意味著我們提供資料中心更多的組成元件,每個人都是贏家。資料中心獲得更高的效能:來自網路交換器的網路連線,當然還有網路卡。我們現在有了乙太網路,因此可以把大規模的 NVIDIA AI 帶給那些因既有生態系而只熟悉乙太網路運作的客戶。因此 Blackwell 的涵蓋面要廣泛得多,這一代我們能提供給客戶的東西多得多。
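
    [Illustrative aside, not from the call: one way to see the provisioning-power point is through power usage effectiveness (PUE). Every number below is hypothetical; the shape of the arithmetic is what matters:]

        # Sketch: compute power recovered by denser, liquid-cooled racks under
        # a fixed facility power budget. All numbers hypothetical.

        FACILITY_POWER_MW = 20.0           # total provisioned utility power
        PUE_AIR, PUE_LIQUID = 1.5, 1.15    # assumed power usage effectiveness

        def it_power_mw(pue: float) -> float:
            """IT power left after cooling and facility overhead, in MW."""
            return FACILITY_POWER_MW / pue

        print(round(it_power_mw(PUE_AIR), 1))      # ~13.3 MW for compute
        print(round(it_power_mw(PUE_LIQUID), 1))   # ~17.4 MW for compute
        # Same building, same utility contract: lower cooling overhead yields
        # roughly 30% more compute from the power already provisioned.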

  • Operator

    Operator

  • Our next question will come from the line of William Stein with Truist Securities.

    我們的下一個問題將來自 Truist Securities 的 William Stein。

  • William Stein - MD

    William Stein - MD

  • Jensen, at some point, NVIDIA decided that while there were reasonably good CPUs available for data center operations, your ARM-based Grace CPU provides some real advantage that made that technology worth bringing to customers, perhaps related to cost or power consumption, or to technical synergies between Grace and Hopper or Grace and Blackwell. Can you address whether a similar dynamic might emerge on the client side? While there are very good solutions there, and you've highlighted that Intel and AMD are very good partners and deliver great products in x86, might there be some advantage, especially in emerging AI workloads, that NVIDIA can deliver where others have more of a challenge?

    Jensen,在某個時間點,NVIDIA 判斷雖然市面上已有相當不錯的 CPU 可用於資料中心,但你們基於 ARM 的 Grace CPU 能提供一些實質優勢,值得把這項技術帶給客戶,也許與成本、功耗,或是 Grace 與 Hopper、Grace 與 Blackwell 之間的技術綜效有關。您能否談談客戶端是否也可能出現類似的態勢?雖然那裡已有非常好的解決方案,而且您也強調英特爾和 AMD 是非常好的合作夥伴,在 x86 上提供出色的產品,但在新興的 AI 工作負載上,NVIDIA 是否能提供別人較難做到的優勢?

  • Jensen Huang

    Jensen Huang

  • Well, you mentioned some really good reasons. It is true that for many of the applications, our partnership with our x86 partners is really terrific, and we build excellent systems together. But Grace allows us to do something that isn't possible with today's system configurations. The memory system between Grace and Hopper is coherent and connected. The interconnect between the 2 chips -- calling it 2 chips is almost weird, because it's like a superchip -- connects the two of them over an interface that runs at terabytes per second. It's off the charts. And the memory that's used by Grace is LPDDR, the first data center-grade low-power memory. And so we save a lot of power on every single node. [A rough bandwidth comparison is sketched below.]

    嗯,你提到了一些非常好的理由。確實,在許多應用上,我們與 x86 合作夥伴的關係非常好,我們一起打造了出色的系統。但 Grace 讓我們能做到當今系統配置無法實現的事情。Grace 和 Hopper 之間的記憶體系統是一致性且相連的。兩顆晶片之間的互連——說是兩顆晶片其實有點奇怪,因為它更像一顆超級晶片——讓兩者透過每秒 TB 等級的介面相連,這是破紀錄的。而 Grace 使用的記憶體是 LPDDR,是首款資料中心等級的低功耗記憶體。因此我們在每個節點上都節省了大量電力。
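
    [Illustrative aside, not from the call: the publicly stated NVLink-C2C figure for the Grace Hopper superchip is on the order of 900 GB/s, versus roughly 128 GB/s aggregate for a PCIe Gen5 x16 link; treat both as ballpark numbers. A small sketch of what that difference means for moving data between CPU and GPU memory:]

        # Rough transfer-time comparison across the CPU-GPU boundary.
        # Bandwidth figures are approximate public ballpark numbers.

        NVLINK_C2C_GBPS = 900.0      # Grace-Hopper chip-to-chip (approx.)
        PCIE_GEN5_X16_GBPS = 128.0   # bidirectional aggregate (approx.)

        def transfer_seconds(gigabytes: float, link_gbps: float) -> float:
            return gigabytes / link_gbps

        payload_gb = 480.0  # e.g., draining a full LPDDR pool to the GPU
        print(round(transfer_seconds(payload_gb, PCIE_GEN5_X16_GBPS), 2))  # ~3.75 s
        print(round(transfer_seconds(payload_gb, NVLINK_C2C_GBPS), 2))     # ~0.53 s
        # Coherence matters too: the GPU can read CPU memory directly rather
        # than staging explicit copies across a PCIe device boundary.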

  • And then finally, because of the architecture -- because we can now create our own architecture for the entire system -- we could create something that has a really large NVLink domain, which is vitally important to next-generation large language models for inferencing. And so you saw that GB200 has a 72-GPU NVLink domain. That's like 72 Blackwells connected together into 1 giant GPU. And so we needed Grace Blackwell to be able to do that. So there are architectural reasons, there are software programming reasons, and there are system reasons that make it essential for us to build them that way. And if we see opportunities like that, we'll explore them. [A sketch of why the NVLink domain size matters is below.]

    最後,由於架構的關係——因為我們現在可以為整個系統打造自己的架構——我們得以創造出擁有非常大的 NVLink 網域的產品,這對下一代大型語言模型的推理至關重要。所以你看到 GB200 擁有一個 72 顆 GPU 的 NVLink 網域,就像 72 顆 Blackwell 連接在一起,形成一顆巨大的 GPU。因此我們需要 Grace Blackwell 才能做到這一點。所以,有架構上的原因、軟體程式設計上的原因,也有系統上的原因,使我們必須以這種方式來打造。如果我們看到這樣的機會,我們就會去探索。
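
    [Illustrative aside, not from the call: a sketch of why the size of a single NVLink domain matters for inference. The per-GPU memory figure below is an assumption for illustration, not a product spec:]

        # Sketch: capacity of one NVLink domain treated as "1 giant GPU".

        GPUS_PER_DOMAIN = 72     # the GB200 NVLink domain discussed above
        HBM_PER_GPU_GB = 186     # assumed round number; real SKUs vary
        BYTES_PER_PARAM = 2      # FP16/BF16 weights

        pooled_hbm_gb = GPUS_PER_DOMAIN * HBM_PER_GPU_GB
        max_params_billions = pooled_hbm_gb / BYTES_PER_PARAM
        print(pooled_hbm_gb)          # 13392 GB of pooled HBM
        print(max_params_billions)    # ~6696B parameters, weights alone
        # Inside the domain, tensor-parallel all-reduces run at NVLink speed;
        # split the same model across domains and that traffic falls back to
        # the slower network fabric -- hence the value of a 72-GPU domain.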

  • And today, as you saw at Microsoft Build yesterday, which I thought was really excellent, Satya announced the next-generation PCs, the Copilot+ PC, which runs fantastically on NVIDIA's RTX GPUs that are shipping in laptops. But it also supports ARM beautifully. And so it opens up opportunities for system innovation, even for PCs.

    今天,正如你們昨天在 Microsoft Build 大會上所看到的(我認為非常出色),Satya 發表了次世代 PC——Copilot+ PC,它在筆記型電腦搭載的 NVIDIA RTX GPU 上運行得非常出色,同時也完美支援 ARM。因此,它甚至為 PC 開啟了系統創新的機會。

  • Operator

    Operator

  • Our last question comes from the line of C.J. Muse with Cantor Fitzgerald.

    我們的最後一個問題來自 Cantor Fitzgerald 的 C.J. Muse。

  • Christopher James Muse - Senior MD & Semiconductor Research Analyst

    Christopher James Muse - Senior MD & Semiconductor Research Analyst

  • I guess, Jensen, a bit of a longer-term question. I know Blackwell hasn't even launched yet, but obviously, investors are forward-looking. And amidst rising potential competition from GPUs and custom ASICs, how are you thinking about NVIDIA's pace of innovation? Your million-fold scaling over the last decade has been truly impressive: CUDA, precision, Grace, coherence and connectivity. When you look forward, what frictions need to be solved in the coming decade? And I guess, maybe more importantly, what are you willing to share with us today?

    我想,Jensen,這是一個比較長期的問題。我知道 Blackwell 都還沒推出,但顯然投資人是前瞻的。在來自 GPU 和客製化 ASIC 的潛在競爭日益升溫之下,您如何看待 NVIDIA 的創新步伐?過去十年你們百萬倍的擴展確實令人印象深刻:CUDA、精度、Grace、一致性與互連。展望未來,未來十年有哪些阻力需要解決?我想,也許更重要的是,您今天願意跟我們分享什麼?

  • Jensen Huang

    Jensen Huang

  • Well, I can announce that after Blackwell, there's another chip, and we are on a 1-year rhythm. You can also count on us delivering new networking technology on a very fast rhythm. We're announcing Spectrum-X for Ethernet. We're all in on Ethernet, and we have a really exciting road map coming for it. We have a rich ecosystem of partners; Dell announced that they're taking Spectrum-X to market. And we have a rich ecosystem of customers and partners who are going to announce taking our entire AI factory architecture to market. For companies that want the ultimate performance, we have the InfiniBand computing fabric. InfiniBand is a computing fabric; Ethernet is a network. InfiniBand, over the years, started out as a computing fabric and became a better and better network. Ethernet is a network, and with Spectrum-X, we're going to make it a much better computing fabric.

    好,我可以宣布,在 Blackwell 之後還有另一顆晶片,而且我們的節奏是一年一代。你們也可以指望我們以非常快的節奏推出新的網路技術。我們宣布了適用於乙太網路的 Spectrum-X。我們全力投入乙太網路,並且有一個非常令人興奮的乙太網路路線圖。我們擁有豐富的合作夥伴生態系;戴爾已宣布將 Spectrum-X 推向市場。我們還有豐富的客戶與合作夥伴生態系,他們將宣布把我們完整的 AI 工廠架構推向市場。對於想要極致效能的公司,我們有 InfiniBand 運算結構。InfiniBand 是一種運算結構;乙太網路則是一種網路。多年來,InfiniBand 從運算結構起家,逐漸成為越來越好的網路。乙太網路是一種網路,而透過 Spectrum-X,我們將讓它成為好得多的運算結構。

  • And we're committed, fully committed, to all 3 links: the NVLink computing fabric for a single computing domain, the InfiniBand computing fabric, and the Ethernet networking computing fabric. We're going to take all 3 of them forward at a very fast clip. And so you're going to see new switches coming, new NICs coming, new capability, new software stacks that run on all 3 of them. New CPUs, new GPUs, new networking NICs, new switches, a mound of chips coming. And the beautiful thing is, all of it runs CUDA, and all of it runs our entire software stack. So if you invest today in our software stack, without doing anything at all, it's just going to get faster and faster and faster. And if you invest in our architecture today, without doing anything, it will go to more and more clouds and more and more data centers, and everything just runs.

    我們完全致力於這 3 種連結:用於單一運算網域的 NVLink 運算結構、InfiniBand 運算結構,以及乙太網路運算結構。我們將以非常快的速度同時推進這三者。因此,你將看到新的交換器、新的網路卡、新的功能,以及在這三者上運行的新軟體堆疊。新的 CPU、新的 GPU、新的網路卡、新的交換器,一大批晶片即將到來。最美妙的是,這一切都運行 CUDA,都運行我們完整的軟體堆疊。因此,如果你今天投資我們的軟體堆疊,什麼都不用做,它就會變得越來越快。如果你今天投資我們的架構,什麼都不用做,它就會進入越來越多的雲端與資料中心,一切照常運行。

  • And so I think the pace of innovation that we're bringing will drive up the capability, on the one hand, and drive down the TCO on the other hand. And so we should be able to scale out with the NVIDIA architecture for this new era of computing and start this new industrial revolution where we manufacture not just software anymore, but we manufacture artificial intelligence tokens, and we're going to do that at scale. Thank you.

    因此,我認為我們帶來的創新步伐一方面會提高能力,另一方面會降低整體擁有成本。因此,我們應該能夠利用NVIDIA 架構在這個新的運算時代進行擴展,並開始這場新的工業革命,我們不再只是製造軟體,而是製造人工智慧代幣,而且我們將大規模地做到這一點。謝謝。

  • Operator

    Operator

  • That will conclude our question-and-answer session and our call for today. We thank you all for joining, and you may now disconnect.

    我們的問答環節和今天的電話會議到此結束。感謝各位的參與,現在可以掛斷了。