儘管面臨中國新的出口管制挑戰,NVIDIA 仍報告季度收入強勁,達到 440 億美元,這得益於資料中心收入和 AI 工作負載的成長。本公司專注於AI推理需求,推出GB300 GPU等新產品,並擴展到自主AI、企業AI、工業AI等各個領域。
NVIDIA 對全年的持續成長持樂觀態度,並準備擴大規模並與主要 IT 供應商合作。
使用警語:中文譯文來源為 Google 翻譯,僅供參考,實際內容請以英文原文為主
Operator
Operator
Good afternoon. My name is Sarah, and I will be your conference operator today. At this time, I would like to welcome everyone to NVIDIA's first quarter fiscal 2026 financial results conference call. (Operator Instructions) Thank you. Toshiya Hari, you may begin your conference.
午安.我叫莎拉,今天我將擔任您的會議主持人。現在,我歡迎大家參加NVIDIA 2026財年第一季財務業績電話會議。(操作員指示)謝謝。Toshiya Hari,您可以開始您的會議了。
Toshiya Hari - Investor Relations
Toshiya Hari - Investor Relations
Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the first quarter of fiscal 2026. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President, and Chief Financial Officer. I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website.
謝謝。大家下午好,歡迎參加NVIDIA 2026財年第一季電話會議。今天與我一起出席的還有 NVIDIA 總裁兼執行長 Jensen Huang 和執行副總裁兼財務長 Colette Kress。我想提醒您,我們的電話會議正在 NVIDIA 的投資者關係網站上進行網路直播。
The webcast will be available for replay until the conference call to discuss our financial results for the second quarter of fiscal 2026. The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without our prior written consent.
網路直播將可重播,直至電話會議討論 2026 財年第二季的財務業績。今天電話會議的內容屬於 NVIDIA 的財產。未經我們事先書面同意,不得複製或轉錄。
During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q and the reports that we may file on Form 8-K with the Securities and Exchange Commission.
在本次電話會議中,我們可能會根據目前預期做出前瞻性陳述。這些都受到許多重大風險和不確定性的影響,我們的實際結果可能會有重大差異。有關可能影響我們未來業務財務表現的因素的討論,請參閱今天的收益報告中的披露內容、我們最新的 10-K 和 10-Q 表格以及我們可能向美國證券交易委員會提交的 8-K 表格報告。
All our statements are made as of today, May 28, 2025, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website.
我們所有的聲明都是根據我們目前掌握的資訊截至 2025 年 5 月 28 日做出的。除法律要求外,我們不承擔更新任何此類聲明的義務。在本次電話會議中,我們將討論非公認會計準則財務指標。您可以在我們網站上發布的 CFO 評論中找到這些非 GAAP 財務指標與 GAAP 財務指標的對帳表。
With that, let me turn the call over to Colette.
說完這些,讓我把電話轉給科萊特。
Colette Kress - Chief Financial Officer, Executive Vice President
Colette Kress - Chief Financial Officer, Executive Vice President
Thank you, Toshiya. We delivered another strong quarter with revenue of $44 billion, up 69% year-over-year, exceeding our outlook in what proved to be a challenging operating environment. Data Center revenue of $39 billion grew 73% year-on-year. AI workloads have transitioned strongly to inference, and AI factory build-outs are driving significant revenue. Our customers' commitments are firm.
謝謝你,Toshiya。我們又度過了一個強勁的季度,營收達到 440 億美元,年成長 69%,在充滿挑戰的經營環境中超越了我們的預期。資料中心營收390億美元,年增73%。AI 工作負載已強勢轉向推理,而 AI 工廠的建設正在帶來可觀的收入。客戶的承諾十分堅定。
On April 9, the US government issued new export controls on H20, our data center GPU designed specifically for the China market. We sold H20 with the approval of the previous administration. Although our H20 has been in the market for over a year and does not have a market outside of China, the new export controls on H20 did not provide a grace period to allow us to sell through our inventory.
4月9日,美國政府對我們專為中國市場設計的資料中心GPU H20發布了新的出口管制。我們在前政府的批准下出售了 H20。儘管我們的 H20 已上市一年多,並且在中國以外沒有市場,但新的 H20 出口管制並沒有提供寬限期讓我們銷售庫存。
In Q1, we recognized $4.6 billion in H20 revenue, which occurred prior to April 9, but also recognized a $4.5 billion charge as we wrote down inventory and purchase obligations tied to orders we had received prior to April 9. We were unable to ship $2.5 billion in H20 revenue in the first quarter due to the new export controls. The $4.5 billion charge was less than what we initially anticipated as we were able to reuse certain materials.
在第一季度,我們確認了 4 月 9 日之前發生的 46 億美元的 H20 收入,但也確認了 45 億美元的費用,因為我們減記了與 4 月 9 日之前收到的訂單相關的庫存和採購義務。由於新的出口管制,我們第一季無法交付價值 25 億美元的 H20 產品。由於我們能夠重複使用某些材料,因此 45 億美元的費用低於我們最初的預期。
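For readers reconciling the H20 figures above, a minimal back-of-the-envelope sketch in Python (illustrative only; the inputs are simply the amounts quoted in this paragraph and later in the Q&A) shows how they relate:

```python
# Illustrative arithmetic only, using figures quoted on the call (amounts in USD billions).
h20_recognized_q1 = 4.6   # H20 revenue recognized before the April 9 controls
h20_unshipped_q1 = 2.5    # H20 revenue that could not be shipped in Q1
h20_charge_q1 = 4.5       # write-down of inventory and purchase obligations

# Without the new export controls, Q1 H20 revenue would have been roughly:
q1_h20_potential = h20_recognized_q1 + h20_unshipped_q1
print(f"Q1 H20 revenue absent controls: ~${q1_h20_potential:.1f}B")
# ~$7.1B, consistent with the "about $7 billion" figure cited later in the Q&A.
```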
We are still evaluating our limited options to supply data center compute products compliant with the US government's revised export control rules. Losing access to the China AI accelerator market, which we believe will grow to nearly $50 billion, would have a material adverse impact on our business going forward and benefit our foreign competitors in China and worldwide.
我們仍在評估有限的選擇,以提供符合美國政府修訂的出口管制規則的資料中心計算產品。我們認為,失去中國人工智慧加速器市場(其規模將成長至近 500 億美元)將對我們未來的業務產生重大不利影響,並使我們在中國和全球的外國競爭對手受益。
Our Blackwell ramp, the fastest in our company's history, drove a 73% year-on-year increase in Data Center revenue. Blackwell contributed nearly 70% of Data Center compute revenue in the quarter with the transition from Hopper nearly complete. The introduction of GB200 NVL was a fundamental architectural change to enable data center-scale workloads and to achieve the lowest cost per inference token.
我們的 Blackwell 產能提升是公司史上最快的,推動資料中心營收年增 73%。隨著 Hopper 的轉型接近完成,Blackwell 在本季貢獻了近 70% 的資料中心計算收入。GB200 NVL 的引入是一項根本性的架構變革,旨在實現資料中心規模的工作負載並實現每個推理令牌的最低成本。
While these systems are complex to build, we have seen a significant improvement in manufacturing yields, and rack shipments to end customers are ramping at a strong rate. GB200 NVL racks are now generally available for model builders, enterprises, and sovereign customers to develop and deploy AI.
雖然這些系統的建置很複雜,但我們的製造良率已有顯著提升,面向最終客戶的機架出貨量正在強勁成長。GB200 NVL 機架現已普遍可供模型開發商、企業和主權客戶用於開發和部署 AI。
On average, major hyperscalers are each deploying nearly 1,000 NVL72 racks or 72,000 Blackwell GPUs per week and are on track to further ramp output this quarter. Microsoft, for example, has already deployed tens of thousands of Blackwell GPUs and is expected to ramp to hundreds of thousands of GB200s with OpenAI as one of its key customers.
平均而言,主要的超大規模資料中心營運商每週部署近 1,000 個 NVL72 機架或 72,000 個 Blackwell GPU,並且預計在本季進一步提高產量。例如,微軟已經部署了數萬個 Blackwell GPU,並預計將增加到數十萬個 GB200,OpenAI 是其主要客戶之一。
Key learnings from the GB200 ramp will allow for a smooth transition to the next phase of our product road map, Blackwell Ultra. Sampling of GB300 systems began earlier this month at the major CSPs, and we expect production shipments to commence later this quarter. GB300 will leverage the same architecture, same physical footprint, and the same electrical and mechanical specifications as GB200.
從 GB200 產能爬坡中獲得的關鍵經驗將使我們順利過渡到產品路線圖的下一階段,即 Blackwell Ultra。GB300 系統的樣品已於本月初在各大 CSP 開始提供,我們預計生產發貨將於本季晚些時候開始。GB300 將採用與 GB200 相同的架構、相同的實體佔用空間以及相同的電氣和機械規格。
The GB300 drop-in design will allow CSPs to seamlessly transition their systems and manufacturing used for GB200 while maintaining high yields. GB300 GPUs with 50% more HBM will deliver another 50% increase in dense FP4 inference compute performance compared to the B200. We remain committed to our annual product cadence with our road map extending through 2028, tightly aligned with the multiple year planning cycles of our customers. We are witnessing a sharp jump in inference demand. OpenAI, Microsoft and Google are seeing a step function leap in token generation.
GB300 的直接替換(drop-in)設計將允許 CSP 無縫沿用其用於 GB200 的系統和製造流程,同時保持高良率。與 B200 相比,HBM 增加 50% 的 GB300 GPU 將使密集 FP4 推理計算效能再提高 50%。我們仍然致力於我們的年度產品節奏,我們的路線圖延伸至 2028 年,與客戶的多年規劃週期緊密結合。我們正在見證推理需求的急劇增長。OpenAI、微軟和谷歌的代幣生成量出現階躍式飛躍。
Microsoft processed over 100 trillion tokens in Q1, a fivefold increase on a year-over-year basis. This exponential growth in Azure OpenAI is representative of strong demand for Azure AI Foundry as well as other AI services across Microsoft's platform. Inference serving startups are now serving models using B200, tripling their token generation rate and corresponding revenues for high-value reasoning models such as DeepSeek-R1, as reported by Artificial Analysis.
微軟第一季處理了超過 100 兆個令牌,年增了五倍。Azure OpenAI 的這種指數級成長代表了對 Azure AI Foundry 以及微軟平台上其他 AI 服務的強勁需求。推理服務新創公司現在正在使用 B200 為模型提供服務,根據 Artificial Analysis 的報告,對於 DeepSeek-R1 等高價值推理模型,它們的代幣生成率和相應收入提高至原來的三倍。
NVIDIA Dynamo on Blackwell NVL72 turbocharges AI inference throughput by 30x for the new reasoning models sweeping the industry. Developer engagements increased, with adoption ranging from LLM providers such as Perplexity to financial services institutions such as Capital One, which reduced agentic chatbot latency by 5x with Dynamo.
Blackwell NVL72 上的 NVIDIA Dynamo 為席捲整個產業的新推理模型將 AI 推理吞吐量提高了 30 倍。開發人員的參與度持續提升,採用者從 Perplexity 等 LLM 供應商到 Capital One 等金融服務機構,後者使用 Dynamo 將代理式聊天機器人的延遲降低了 5 倍。
In the latest MLPerf inference results, we submitted our first results using GB200 NVL72, delivering up to 30x higher inference throughput compared to our 8-GPU H200 submission on the challenging Llama 3.1 benchmark. This feat was achieved through a combination of tripling the performance per GPU as well as nine times the number of GPUs, all connected on a single NVLink domain.
在最新的 MLPerf 推理結果中,我們首次提交了使用 GB200 NVL72 的結果,在具有挑戰性的 Llama 3.1 基準測試中,與我們提交的 8-GPU H200 結果相比,推理吞吐量提高了多達 30 倍。這項成就是透過將每顆 GPU 的效能提升至三倍,並將 GPU 數量增加到 9 倍且全部連接在單一 NVLink 網域上來實現的。
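The headline multiple is roughly the product of the two factors just mentioned. A hedged sketch of that arithmetic, assuming the comparison is a 72-GPU GB200 NVL72 system against the 8-GPU H200 submission:

```python
# Rough decomposition of the reported "up to 30x" MLPerf inference gain (illustrative only).
per_gpu_speedup = 3          # "tripling the performance per GPU"
gpu_count_ratio = 72 / 8     # GB200 NVL72 (72 GPUs) vs. the 8-GPU H200 submission = 9x
combined = per_gpu_speedup * gpu_count_ratio
print(combined)              # 27.0 -> on the order of the "up to 30x" headline figure
```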
And while Blackwell is still early in its life cycle, software optimizations have already improved its performance by 1.5x in the last month alone. We expect to continue improving the performance of Blackwell through its operational life as we have done with Hopper and Ampere. For example, we increased the inference performance of Hopper by four times over two years. This is the benefit of NVIDIA's programmable CUDA architecture and rich ecosystem.
儘管 Blackwell 仍處於生命週期的早期階段,但僅在上個月,軟體優化就已經將其效能提高了 1.5 倍。我們預計在 Blackwell 的整個使用壽命期間持續提高其效能,就像我們對 Hopper 和 Ampere 所做的那樣。例如,我們在兩年內將 Hopper 的推理表現提高了四倍。這就是NVIDIA可程式CUDA架構和豐富生態系統帶來的好處。
The pace and scale of AI factory deployments are accelerating, with nearly 100 NVIDIA-powered AI factories in flight this quarter, a twofold increase year-over-year, and with the average number of GPUs powering each factory also doubling in the same period. And more AI factory projects are starting across industries and geographies. NVIDIA's full stack architecture is underpinning AI factory deployments by industry leaders like AT&T, BYD, Capital One, Foxconn, MediaTek, and Telenor, as well as strategically vital sovereign clouds like those recently announced in Saudi Arabia, Taiwan, and the UAE.
AI 工廠部署的速度和規模正在加快,本季有近 100 家由 NVIDIA 支持的 AI 工廠正在推進,較去年同期成長一倍,同時每家工廠所用的平均 GPU 數量在同期也翻了一番。越來越多的人工智慧工廠計畫正在跨產業、跨地區啟動。NVIDIA 的全端架構正在支撐 AT&T、比亞迪、Capital One、富士康、聯發科和 Telenor 等行業領導者的 AI 工廠部署,以及最近在沙烏地阿拉伯、台灣和阿聯酋宣布的具有戰略重要性的主權雲。
We have a line of sight to projects requiring tens of gigawatts of NVIDIA AI infrastructure in the not-too-distant future. The transition from generative to agentic AI, AI capable of perceiving, reasoning, planning, and acting, will transform every industry, every company and country.
我們預計在不久的將來會出現需要數十 GW NVIDIA AI 基礎設施的項目。從生成式人工智慧到代理式人工智慧的轉變,即具備感知、推理、規劃和行動能力的人工智慧,將改變每個產業、每家公司和每個國家。
We envision AI agents as a new digital workforce capable of handling tasks ranging from customer service to complex decision-making processes. We introduced the Llama Nemotron family of open reasoning models designed to supercharge agentic AI platforms for enterprises. Built on the Llama architecture, these models are available as NIMs, or NVIDIA inference microservices, in multiple sizes to meet diverse deployment needs.
我們將人工智慧代理設想為一種新型數位化勞動力,能夠處理從客戶服務到複雜決策過程等各種任務。我們推出了 Llama Nemotron 系列開放推理模型,旨在為企業增強代理 AI 平台。這些模型基於 Llama 架構構建,可作為 NIM 或 NVIDIA 推理微服務使用,具有多種尺寸,可滿足不同的部署需求。
Our post-training enhancements have yielded a 20% accuracy boost and a 5x increase in inference speed. Leading platform companies, including Accenture, Cadence, Deloitte, and Microsoft, are transforming work with our reasoning models.
我們的訓練後增強使準確率提高了 20%,推理速度提高了 5 倍。包括埃森哲、Cadence、德勤和微軟在內的領先平台公司正在利用我們的推理模型變革工作方式。
NVIDIA NeMo microservices are now generally available and are being leveraged by leading enterprises across industries to build, optimize and scale AI applications. With NeMo, Cisco increased model accuracy by 40% and improved response time by 10x in its code assistant. NASDAQ realized a 30% improvement in accuracy and response time in its AI platform's search capabilities. And Shell's custom LLM achieved a 30% increase in accuracy when trained with NVIDIA NeMo. NeMo's parallelism techniques accelerated model training time by 20% when compared to other frameworks.
NVIDIA NeMo 微服務現已全面推出,各行業的領先企業正在利用它來建立、優化和擴展 AI 應用程式。借助 NeMo,思科將其程式碼助手的模型準確率提高了 40%,回應時間提高了 10 倍。納斯達克人工智慧平台搜尋功能的準確率和回應時間提高了30%。而 Shell 的客製化 LLM 在使用 NVIDIA NeMo 進行訓練後,準確率提高了 30%。與其他框架相比,NeMo 的平行化技術將模型訓練時間加快了 20%。
We also announced a partnership with Yum! Brands, the world's largest restaurant company, to bring NVIDIA AI to 500 of its restaurants this year, expanding to 61,000 restaurants over time to streamline order-taking, optimize operations and enhance service across its restaurants. For AI-powered cybersecurity, leading companies like Check Point, CrowdStrike and Palo Alto Networks are using NVIDIA's AI security and software stack to build, optimize and secure agentic workflows, with CrowdStrike realizing 2x faster detection triage with 50% less compute cost.
我們也宣布與全球最大的餐飲公司 Yum! Brands 建立合作關係,今年將把 NVIDIA AI 引入其 500 家餐廳,並計劃逐步擴展到 61,000 家餐廳,以簡化訂單處理流程、優化營運並提升其餐廳的服務。在人工智慧驅動的網路安全領域,Check Point、CrowdStrike 和 Palo Alto Networks 等領先公司正在使用 NVIDIA 的人工智慧安全和軟體堆疊來建立、優化和保護代理程式工作流程,其中 CrowdStrike 實現了 2 倍更快的檢測分類速度,同時降低了 50% 的運算成本。
Moving to networking. Sequential growth in networking resumed in Q1 with revenue up 64% quarter-over-quarter to $5 billion. Our customers continue to leverage our platform to efficiently scale up and scale out AI factory workloads. We created the world's fastest switch, NVLink. For scale-up, our NVLink compute fabric, now in its fifth generation, offers 14x the bandwidth of PCIe Gen 5.
轉向網路。第一季網路業務恢復環比成長,營收季增 64%,達到 50 億美元。我們的客戶繼續利用我們的平台來有效地擴大(scale up)和擴展(scale out)人工智慧工廠的工作負載。我們打造了世界上最快的交換器 NVLink。在縱向擴展方面,我們的第五代 NVLink 運算互連架構提供了 PCIe Gen 5 的 14 倍頻寬。
NVLink 72 carries 130 terabytes per second of bandwidth in a single rack, equivalent to the entirety of the world's peak Internet traffic. NVLink is a new growth vector and is off to a great start with Q1 shipments exceeding $1 billion. At Computex, we announced NVLink Fusion.
NVLink 72 在單一機架中承載每秒 130 TB 的頻寬,相當於全球網路尖峰流量的總和。NVLink 是一個新的成長載體,開局良好,第一季出貨量超過 10 億美元。在 Computex 上,我們發布了 NVLink Fusion。
Hyperscale customers can now build semi-custom CPUs and accelerators that connect directly to the NVIDIA platform with NVLink. We are now enabling key partners, including ASIC providers such as MediaTek, Marvell, Alchip Technologies and Astera Labs as well as CPU suppliers such as Fujitsu and Qualcomm, to leverage NVLink Fusion to connect our respective ecosystems. For scale-out, our enhanced Ethernet offerings deliver the highest-throughput, lowest-latency networking for AI.
超大規模客戶現在可以建立半客製化 CPU 和加速器,並透過 NVLink 直接連接到 NVIDIA 平台。我們現在正在支援主要合作夥伴,包括聯發科、Marvell、Alchip Technologies 和 Astera Labs 等 ASIC 供應商以及富士通和高通等 CPU 供應商,利用 NVLink Fusion 來連接我們各自的生態系統。在橫向擴展方面,我們增強的乙太網路產品為 AI 提供了最高吞吐量、最低延遲的網路。
Spectrum-X posted strong sequential and year-on-year growth and is now annualizing over $8 billion in revenue. Adoption is widespread across major CSPs and consumer Internet companies, including CoreWeave, Microsoft Azure and Oracle Cloud and xAI.
Spectrum-X 實現了強勁的連續和同比增長,目前年收入超過 80 億美元。主要的 CSP 和消費者互聯網公司都廣泛採用了該技術,包括 CoreWeave、Microsoft Azure、Oracle Cloud 和 xAI。
This quarter, we added Google Cloud and Meta to the growing list of Spectrum-X customers. We introduced Spectrum-X and Quantum-X silicon photonics switches featuring the world's most advanced co-packaged optics. These platforms will enable next-level AI factory scaling to millions of GPUs by increasing power efficiency by 3.5x and network resiliency by 10x, while accelerating customer time to market by 1.3x.
本季度,我們將 Google Cloud 和 Meta 加入不斷成長的 Spectrum-X 客戶名單中。我們推出了採用世界上最先進的共封裝光學技術的 Spectrum-X 和 Quantum-X 矽光子交換器。這些平台將能源效率提高 3.5 倍、網路韌性提高 10 倍,從而使下一代 AI 工廠得以擴展到數百萬個 GPU,同時將客戶的產品上市時間加快 1.3 倍。
Transitioning to a quick summary of our revenue by geography. China as a percentage of our Data Center revenue was slightly below our expectations and down sequentially due to H20 export licensing controls. For Q2, we expect a meaningful decrease in China data center revenue.
過渡到按地區對我們的收入進行快速總結。中國占我們資料中心收入的百分比略低於我們的預期,並且由於 H20 出口許可管製而環比下降。對於第二季度,我們預計中國資料中心收入將大幅下降。
As a reminder, while Singapore represented nearly 20% of our Q1 billed revenue, as many of our large customers use Singapore for centralized invoicing, our products are almost always shipped elsewhere. Note that over 99% of H100, H200, and Blackwell data center compute revenue billed to Singapore was for orders from US-based customers.
提醒一下,雖然新加坡占我們第一季開票收入的近 20%,但這是因為我們的許多大客戶使用新加坡進行集中開票,我們的產品幾乎總是運往其他地方。請注意,開票至新加坡的 H100、H200 和 Blackwell 資料中心計算收入中,99% 以上來自美國客戶的訂單。
Moving to gaming and AI PCs. Gaming revenue was a record $3.8 billion, increasing 48% sequentially and 42% year-on-year. Strong adoption by gamers, creatives and AI enthusiasts has made Blackwell our fastest ramp ever. Against a backdrop of robust demand, we greatly improved our supply and availability in Q1 and expect to continue these efforts in Q2. AI is transforming the PC for creators and gamers. Our 100 million user installed base represents the largest footprint for PC developers. This quarter, we added to our AI PC laptop offerings, including models capable of running Microsoft's Copilot+.
轉向遊戲和 AI PC。遊戲收入達到創紀錄的 38 億美元,季增 48%,年增 42%。遊戲玩家、創作者和人工智慧愛好者的踴躍採用使 Blackwell 成為我們有史以來成長最快的產品。在強勁需求的背景下,我們在第一季大大改善了供應和可用性,並預計在第二季度繼續這些努力。人工智慧正在為創作者和遊戲玩家改變 PC。我們 1 億用戶的安裝基礎是 PC 開發者最大的覆蓋範圍。本季度,我們增加了 AI PC 筆記型電腦產品,包括能夠運行 Microsoft Copilot+ 的型號。
This past quarter, we brought Blackwell architecture to mainstream gaming with the launch of GeForce RTX 5060 and 5060 Ti, starting at just $299. The RTX 5060 also debuted in laptops starting at $1,099. These systems deliver double the frame rate at lower latency. The GeForce RTX 5060 and 5060 Ti desktop GPUs and laptops are now available. In console gaming, the recently unveiled Nintendo Switch 2 leverages NVIDIA's neural rendering and AI technologies, including next-generation custom RTX GPUs with DLSS technology, to deliver a giant leap in gaming performance to millions of players worldwide.
上個季度,我們推出了起價僅 299 美元的 GeForce RTX 5060 和 5060 Ti,將 Blackwell 架構帶入主流遊戲領域。RTX 5060 也首次搭載於筆記型電腦,起價 1,099 美元。這些系統在降低延遲的同時將幀率提高了一倍。GeForce RTX 5060 和 5060 Ti 桌上型 GPU 及筆記型電腦現已上市。在主機遊戲方面,最近發布的 Nintendo Switch 2 利用 NVIDIA 的神經渲染和 AI 技術,包括採用 DLSS 技術的下一代客製化 RTX GPU,為全球數百萬玩家帶來遊戲性能的巨大飛躍。
Nintendo has shipped over 150 million Switch consoles to date, making it one of the most successful gaming systems in history. Moving to Pro Visualization. Revenue of $509 million was flat sequentially and up 19% year-on-year. Tariff-related uncertainty temporarily impacted Q1 systems demand. Demand for our AI workstations is strong, and we expect sequential revenue growth to resume in Q2. NVIDIA DGX Spark and DGX Station revolutionize personal computing.
迄今為止,任天堂的 Switch 遊戲機出貨量已超過 1.5 億台,使其成為史上最成功的遊戲系統之一。轉向專業視覺化。營收 5.09 億美元,與上一季持平,年增 19%。與關稅相關的不確定性暫時影響了第一季的系統需求;對我們 AI 工作站的需求依然強勁,我們預計第二季營收將恢復環比成長。NVIDIA DGX Spark 和 DGX Station 正在徹底改變個人運算。
By putting the power of an AI supercomputer in a desktop form factor, DGX Spark delivers up to one petaflop of AI compute, while DGX Station offers an incredible 20 petaflops and is powered by the GB300 Super Chip. DGX Spark will be available in calendar Q3 and DGX Station later this year.
透過將人工智慧超級電腦的強大功能融入桌上型機身,DGX Spark 可提供高達 1 petaflop 的 AI 運算能力,而 DGX Station 則可提供驚人的 20 petaflops,並由 GB300 超級晶片提供支援。DGX Spark 將於曆年第三季上市,DGX Station 將於今年稍後上市。
We have deepened Omniverse integration and adoption into some of the world's leading software platforms, including Databricks, SAP and Schneider Electric. New Omniverse blueprints, such as Mega for at-scale robotic fleet management, are being leveraged by Kion Group, Pegatron, Accenture and other leading companies to enhance industrial operations. At Computex, we showcased Omniverse's great traction with technology manufacturing leaders, including TSMC, Quanta, Foxconn, and Pegatron.
我們深化了 Omniverse 與一些世界領先軟體平台的整合與採用,包括 Databricks、SAP 和施耐德電氣;凱傲集團、和碩、埃森哲等領先公司正在利用 Mega 等新的 Omniverse 藍圖進行大規模機器人車隊管理,以增強工業運營。在台北國際電腦展上,我們展示了 Omniverse 在台積電、廣達、富士康、和碩等科技製造領導者中的強大吸引力。
Using Omniverse, TSMC saves months of work by designing fabs virtually. Foxconn accelerates thermal simulations by 150x, and Pegatron reduced assembly line defect rates by 67%. Lastly, with our automotive group, revenue was $567 million, down 1% sequentially but up 72% year-on-year. Year-on-year growth was driven by the ramp of self-driving across a number of customers and robust end demand for NEVs.
利用 Omniverse,台積電透過虛擬設計晶圓廠節省了數月的工作時間。富士康將熱模擬速度提高了 150 倍,和碩將組裝線缺陷率降低了 67%。最後是我們的汽車業務。營收為 5.67 億美元,季減 1%,但年增 72%。年增長的動力來自多家客戶自動駕駛業務的放量,以及新能源車(NEV)的強勁終端需求。
We are partnering with GM to build the next-gen vehicles, factories and robots using NVIDIA AI, simulation, and accelerated computing. And we are now in production with our full stack solution for Mercedes-Benz starting with the new CLA hitting roads in the next few months.
我們正在與通用汽車合作,利用 NVIDIA AI、模擬和加速運算打造下一代汽車、工廠和機器人。目前,我們正在為梅賽德斯-奔馳生產全端解決方案,新款 CLA 將在未來幾個月上市。
We announced Isaac GR00T N1, the world's first open, fully customizable foundation model for humanoid robots, enabling generalized reasoning and skill development. We also launched new open NVIDIA Cosmos world foundation models. Leading companies adopting them include Onyx, Agility Robotics, Figure AI, Uber and Wobi.
我們宣布推出 Isaac GR00T N1,這是世界上第一個開放、完全可自訂的人形機器人基礎模型,可實現通用推理和技能開發。我們也推出了新的開放式 NVIDIA Cosmos 世界基礎模型。採用這些模型的領先公司包括 Onyx、Agility Robotics、Figure AI、Uber 和 Wobi。
They have begun integrating Cosmos into their operations for synthetic data generation, while Agility Robotics, Boston Dynamics and others are harnessing Isaac simulation to advance their humanoid efforts. GE Healthcare is using the new NVIDIA Isaac platform for health care simulation, built on NVIDIA Omniverse and using NVIDIA Cosmos, to speed the development of robotic imaging and surgery systems. The era of robotics is here: billions of robots, hundreds of millions of autonomous vehicles and hundreds of thousands of robotic factories and warehouses will be developed.
這些公司已開始將 Cosmos 整合到其營運中以產生合成數據,而 Agility Robotics、Boston Dynamics 等公司則正在利用 Isaac 模擬來推進其人形機器人研發。GE Healthcare 正在使用基於 NVIDIA Omniverse 建置並採用 NVIDIA Cosmos 的全新 NVIDIA Isaac 醫療保健模擬平台,以加速機器人成像和手術系統的開發。機器人時代已經到來:數十億台機器人、數億輛自動駕駛汽車以及數十萬個機器人工廠和倉庫將被開發出來。
All right. Moving to the rest of the P&L. GAAP gross margins and non-GAAP gross margins were 60.5% and 61%, respectively. Excluding the $4.5 billion charge, Q1 non-GAAP gross margins would have been 71.3%, slightly above our outlook at the beginning of the quarter.
好的。轉到損益表的其餘部分。GAAP毛利率和非GAAP毛利率分別為60.5%和61%。不計入 45 億美元的費用,第一季非 GAAP 毛利率將達到 71.3%,略高於我們在本季初的預期。
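As a sanity check on the margin figures, a small sketch (illustrative; the exact revenue base the company uses may differ slightly from the rounded $44 billion) backs out how excluding the $4.5 billion charge moves non-GAAP gross margin from roughly 61% to roughly 71%:

```python
# Approximate reconstruction of the ex-charge gross margin (USD billions; illustrative only).
revenue = 44.0                 # total Q1 revenue, ~$44 billion
gm_reported = 0.61             # non-GAAP gross margin as reported
charge = 4.5                   # H20-related charge included in cost of revenue

gross_profit_reported = revenue * gm_reported
gm_ex_charge = (gross_profit_reported + charge) / revenue
print(f"{gm_ex_charge:.1%}")   # ~71.2%, in line with the ~71.3% cited above
```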
Sequentially, GAAP operating expenses were up 7% and non-GAAP operating expenses were up 6%, reflecting higher compensation and employee growth. Our investments include expanding our infrastructure capabilities and AI solutions, and we plan to grow these investments throughout the fiscal year.
與上一季相比,GAAP 營運費用上漲 7%,非 GAAP 營運費用上漲 6%,反映了更高的薪資和員工成長。我們的投資包括擴展我們的基礎設施能力和人工智慧解決方案,我們計劃在整個財政年度增加這些投資。
In Q1, we returned a record $14.3 billion to shareholders in the form of share repurchases and cash dividends. Our capital return program continues to be a key element of our capital allocation strategy. Let me turn to the outlook for the second quarter. Total revenue is expected to be $45 billion, plus or minus 2%. We expect modest sequential growth across all of our platforms.
第一季度,我們以股票回購和現金分紅的形式向股東返還了創紀錄的 143 億美元。我們的資本回報計畫仍然是我們資本配置策略的關鍵要素。讓我來談談第二季的展望。預計總收入為 450 億美元,上下浮動 2%。我們預計所有平台都將實現適度的連續成長。
In Data Center, we anticipate the continued ramp of Blackwell to be partially offset by a decline in China revenue. Note, our outlook reflects a loss in H20 revenue of approximately $8 billion for the second quarter. GAAP and non-GAAP gross margins are expected to be 71.8% and 72%, respectively, plus or minus 50 basis points. We expect our Blackwell profitability to drive modest sequential improvement in gross margins. We are continuing to work towards achieving gross margins in the mid-70s range late this year.
在資料中心業務方面,我們預計 Blackwell 的持續成長將被中國營收的下降部分抵消。請注意,我們的預測反映出第二季 H20 收入損失約 80 億美元。預計 GAAP 和非 GAAP 毛利率分別為 71.8% 和 72%,上下浮動 50 個基點。我們預計 Blackwell 的獲利能力將推動毛利率環比小幅提升。我們將繼續努力,力爭在今年稍晚實現 75% 左右(mid-70s)的毛利率。
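To translate the outlook ranges into concrete bounds, a brief sketch using only the guidance figures quoted above:

```python
# Q2 FY2026 outlook ranges implied by the guidance above (illustrative arithmetic only).
revenue_mid, revenue_tol = 45.0, 0.02    # $45B plus or minus 2% (USD billions)
gm_nongaap_mid, gm_tol = 0.72, 0.005     # 72% plus or minus 50 basis points

revenue_range = (revenue_mid * (1 - revenue_tol), revenue_mid * (1 + revenue_tol))
gm_range = (gm_nongaap_mid - gm_tol, gm_nongaap_mid + gm_tol)
print(f"Revenue: ${revenue_range[0]:.1f}B - ${revenue_range[1]:.1f}B")          # $44.1B - $45.9B
print(f"Non-GAAP gross margin: {gm_range[0]:.1%} - {gm_range[1]:.1%}")          # 71.5% - 72.5%
```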
GAAP and non-GAAP operating expenses are expected to be approximately $5.7 billion and $4 billion, respectively, and we continue to expect full-year fiscal 2026 operating expense growth to be in the mid-30% range. GAAP and non-GAAP other income and expenses are expected to be an income of approximately $450 million, excluding gains and losses from nonmarketable and publicly held equity securities. GAAP and non-GAAP tax rates are expected to be 16.5%, plus or minus 1%, excluding any discrete items.
預計 GAAP 和非 GAAP 營運費用分別約為 57 億美元和 40 億美元,我們仍預期 2026 財年全年營運費用成長率在 35% 左右(mid-30%)。預計 GAAP 和非 GAAP 其他收入和支出約為 4.5 億美元的收入,不包括非流通及公開持有的股權證券的損益。預計 GAAP 和非 GAAP 稅率為 16.5%,上下浮動 1%,不包括任何一次性項目。
Further financial details are included in the CFO commentary and other information available on our IR website, including a new financial information AI agent. Let me highlight upcoming events for the financial community.
進一步的財務細節包含在財務長評論和我們 IR 網站上提供的其他資訊中,包括一個新的財務資訊 AI 代理。讓我重點介紹一下金融界即將發生的事件。
We will be at the BofA Global Technology Conference in San Francisco on June 4. The Rosenblatt Virtual AI Summit and NASDAQ Investor Conference in London on June 10, and GTC Paris at VivaTech on June 11 in Paris. We look forward to seeing you at these events. Our earnings call to discuss the results of our second quarter of fiscal 2026 is scheduled for August 27. Well, now let me turn it over to Jensen to make some remarks.
我們將於 6 月 4 日參加在舊金山舉行的美國銀行全球技術會議。6 月 10 日在倫敦舉行的 Rosenblatt 虛擬 AI 高峰會和納斯達克投資者會議,以及 6 月 11 日在巴黎 VivaTech 舉行的 GTC Paris。我們期待在這些活動中見到您。我們的財報電話會議定於 8 月 27 日舉行,討論 2026 財年第二季的業績。好吧,現在讓我把時間交給 Jensen 來發表一些評論。
Jensen Huang - Founder, President, Chief Executive Officer
Jensen Huang - Founder, President, Chief Executive Officer
Thanks, Colette. We've had a busy and productive year. Let me share my perspective on some topics we're frequently asked. On export control. China is one of the world's largest AI markets and a springboard to global success.
謝謝,科萊特。我們度過了忙碌而富有成果的一年。讓我分享一下我對一些我們常被問到的話題的看法。關於出口管制。中國是世界上最大的人工智慧市場之一,也是全球成功的跳板。
With half of the world's AI researchers based there, the platform that wins China is positioned to lead globally. Today, however, the $50 billion China market is effectively closed to US industry. The H20 export ban ended our Hopper data center business in China. We cannot reduce Hopper further to comply.
全球一半的人工智慧研究人員都在中國,贏得中國的平台將在全球處於領先地位。然而,如今價值 500 億美元的中國市場實際上已對美國產業關閉。H20 出口禁令終止了我們 Hopper 資料中心產品在中國的業務。我們無法再進一步降低 Hopper 的規格來符合規定。
As a result, we are taking a multibillion-dollar write-off on inventory that cannot be sold or repurposed. We are exploring limited ways to compete, but Hopper is no longer an option. China's AI moves on with or without US chips. It has to compute to train and deploy advanced models.
結果,我們對無法出售或重新利用的庫存進行了數十億美元的註銷。我們正在探索有限的競爭方式,但 Hopper 不再是一種選擇。無論有沒有美國晶片,中國的人工智慧都會繼續發展。它必須計算來訓練和部署高級模型。
The question is not whether China will have AI, it already does. The question is whether one of the world's largest AI markets will run on American platforms. Shielding Chinese chipmakers from US competition only strengthens them abroad and weakens America's position. Export restrictions have spurred China's innovation and scale.
問題不在於中國是否會擁有人工智慧,而是它已經擁有了。問題在於,世界上最大的 AI 市場之一是否會在美國平台上運作。保護中國晶片製造商免受美國競爭只會增強它們的海外實力並削弱美國的地位。出口限制刺激了中國的創新和規模。
The AI race is not just about chips. It's about which stack the world runs on. As that stack grows to include 6G and quantum, US global infrastructure leadership is at stake. The US has based its policy on the assumption that China cannot make AI chips. That assumption was always questionable and now it's clearly wrong. China has enormous manufacturing capability.
人工智慧的競賽不僅僅是關乎晶片。這與世界運作在哪個堆疊上有關。隨著該堆疊擴展到包括 6G 和量子,美國的全球基礎設施領導地位岌岌可危。美國的政策建立在中國無法製造人工智慧晶片的假設之上。這個假設一直受到質疑,現在顯然是錯的。中國擁有巨大的製造能力。
In the end, the platform that wins the AI developers wins AI. Export controls should strengthen US platforms, not drive half of the world's AI talent to rivals. On DeepSeek: DeepSeek and Qwen from China are among the best open source models. Released freely, they've gained traction across the US, Europe and beyond. DeepSeek R1, like ChatGPT, introduced reasoning AI that produces better answers the longer it thinks. Reasoning AI enables step-by-step problem solving, planning and tool use, turning models into intelligent agents.
最終,贏得人工智慧開發者的平台將贏得人工智慧。出口管制應該加強美國的平台,而不是將全球一半的人工智慧人才推向競爭對手。關於 DeepSeek:來自中國的 DeepSeek 和 Qwen 是最好的開源模型之一。這些模型免費發布後,在美國、歐洲及其他地區都獲得了關注。DeepSeek R1 與 ChatGPT 類似,引入了推理 AI,思考時間越長,給出的答案越好。推理人工智慧能夠逐步解決問題、規劃和使用工具,將模型轉化為智慧代理。
Reasoning is compute-intensive, requiring hundreds to thousands of times more tokens per task than previous one-shot inference. Reasoning models are driving a step-function surge in inference demand. AI scaling laws remain firmly intact, not only for training; inference, too, now requires massive-scale compute. DeepSeek also underscores the strategic value of open source AI.
推理是計算密集型的,每項任務所需的標記數量是以往一次性推理的數百到數千倍。推理模型正在推動推理需求的階躍式成長。人工智慧的擴展定律依然穩固,不僅適用於訓練;推理現在同樣需要大規模運算。DeepSeek 也強調了開源 AI 的策略價值。
When popular models are trained and optimized on US platforms, it drives usage, feedback, and continuous improvement, reinforcing American leadership across the stack. US platforms must remain the preferred platform for open source AI. That means supporting collaboration with top developers globally, including in China. America wins when models like DeepSeek and Qwen run best on American infrastructure.
當流行模型在美國平台上進行訓練和優化時,它會推動使用、回饋和持續改進,從而加強美國在整個技術堆疊上的領導地位。美國平台必須繼續成為開源人工智慧的首選平台。這意味著支持與包括中國在內的全球頂尖開發者的合作。當 DeepSeek 和 Qwen 等模型在美國基礎設施上運行得最好時,美國就獲勝了。
Regarding onshore manufacturing, President Trump has outlined a bold vision to reshore advanced manufacturing, create jobs and strengthen national security. Future plants will be highly computerized with robotics. We share this vision. TSMC is building six fabs and two advanced packaging plants in Arizona to make chips for NVIDIA. Process qualification is underway with volume production expected by year-end.
關於國內製造業,川普總統提出了一個大膽的願景:讓先進製造業回流本土、創造就業機會並加強國家安全。未來的工廠將高度電腦化並採用機器人技術。我們認同這個願景。台積電正在亞利桑那州建造六座晶圓廠和兩座先進封裝廠,為 NVIDIA 生產晶片。目前製程驗證正在進行中,預計年底實現量產。
SPIL and Amkor are also investing in Arizona, constructing packaging, assembly, and test facilities. In Houston, we're partnering with Foxconn to construct a 1 million square foot factory to build AI supercomputers. Wistron is building a similar plant in Fort Worth, Texas. To encourage and support these investments, we've made substantial long-term purchase commitments, a deep investment in America's AI manufacturing future. Our goal: from chip to supercomputer, built in America, within a year.
SPIL 和 Amkor 也在亞利桑那州投資建造封裝、組裝和測試設施。在休士頓,我們正在與富士康合作建造一座佔地 100 萬平方英尺的工廠來製造人工智慧超級電腦。緯創資通正在德州沃斯堡建造一座類似的工廠。為了鼓勵和支持這些投資,我們做出了大量長期採購承諾,這是對美國人工智慧製造業未來的深度投資。我們的目標是:在一年之內,在美國完成從晶片到超級電腦的製造。
Each GB200 NVLink72 racks contains 1.2 million components and weighs nearly 2 tons. No one has produced supercomputers on this scale. Our partners are doing an extraordinary job. On AI diffusion rule, President Trump rescinded the AI diffusion rule, calling it counterproductive, and proposed a new policy to promote US AI tech with trusted partners.
每個GB200 NVLink72機架包含120萬個組件,重量近2噸。還沒有人生產過如此規模的超級電腦。我們的合作夥伴正在做著非凡的工作。關於人工智慧擴散規則,川普總統撤銷了人工智慧擴散規則,稱其適得其反,並提出了一項新政策,與值得信賴的合作夥伴一起推廣美國人工智慧技術。
On his Middle East tour, he announced historic investments. I was honored to join him in announcing a 500-megawatt AI infrastructure project in Saudi Arabia and a 5-gigawatt AI campus in the UAE. President Trump wants US tech to lead. The deals he announced are wins for America, creating jobs, advancing infrastructure, generating tax revenue, and reducing the US trade deficit.
在中東之行中,他宣布了具有歷史意義的投資。我很榮幸與他一起宣佈在沙烏地阿拉伯建立一個500兆瓦的人工智慧基礎設施項目,並在阿聯酋建立一個5千兆瓦的人工智慧園區。川普總統希望美國科技發揮主導作用。他宣布的協議對美國來說是雙贏的,創造了就業機會,推進了基礎設施建設,增加了稅收,並減少了美國的貿易逆差。
The US will always be NVIDIA's largest market and home to the largest installed base of our infrastructure. Every nation now sees AI as core to the next industrial revolution, a new industry that produces intelligence and essential infrastructure for every economy. Countries are racing to build national AI platforms to elevate their digital capabilities.
美國將永遠是 NVIDIA 最大的市場,也是我們基礎設施最大安裝基地的所在地。現在每個國家都將人工智慧視為下一次工業革命的核心,這是一個為每個經濟體提供智慧和必要基礎設施的新興產業。各國競相建置國家人工智慧平台,提升數位化能力。
At Computex, we announced Taiwan's first AI factory in partnership with Foxconn and the Taiwan government. Last week, I was in Sweden to launch its first national AI infrastructure. Japan, Korea, India, Canada, France, the UK, Germany, Italy, Spain, and more are now building national AI factories to empower startups, industries, and societies. Sovereign AI is a new growth engine for NVIDIA. Toshiya, back to you.
在台北國際電腦展上,我們與富士康和台灣政府合作宣佈建立台灣首座人工智慧工廠。上週,我前往瑞典,啟動了該國首個國家人工智慧基礎設施。日本、韓國、印度、加拿大、法國、英國、德國、義大利、西班牙等國家正在興建國家人工智慧工廠,為新創企業、產業和社會賦能。主權 AI(Sovereign AI)是 NVIDIA 的新成長引擎。Toshiya,交還給你。
Toshiya Hari - Investor Relations
Toshiya Hari - Investor Relations
Operator, we will now open the call for questions. Would you please poll for questions?
接線員,我們現在開始開放提問。請開始收集提問。
Operator
Operator
(Operator Instructions) Joe Moore, Morgan Stanley.
(操作員指示)摩根士丹利的喬·摩爾。
Joe Moore - Analyst
Joe Moore - Analyst
You guys have talked about this scaling up of inference around reasoning models for at least a year now. And we've really seen that come to fruition as you talked about. We've heard it from your customers. Can you give us a sense for how much of that demand you're able to serve and give us a sense for maybe how big the inference business is for you guys? And do we need full-on NVL72 rack-scale solutions for reasoning inference going forward?
你們已經討論了圍繞推理模型的推理擴展至少一年了。正如您所說的,我們確實看到了這一成果。我們從您的客戶那裡聽說了這一點。您能否告訴我們你們能夠滿足多少需求,並告訴我們推理業務對您們來說有多大?我們未來是否需要完整的 NVL72 機架規模解決方案來進行推理?
Jensen Huang - Founder, President, Chief Executive Officer
Jensen Huang - Founder, President, Chief Executive Officer
Well, we would like to serve all of it, and I think we're on track to serve most of it. Grace Blackwell NVLink72 is the ideal engine today, the ideal computer thinking machine, if you will, for reasoning AI. There's a couple of reasons for that. The first reason is that the token generation amount, the number of tokens reasoning goes through, is 100, 1,000 times more than a one-shot chatbot. It's essentially thinking to itself, breaking down a problem step-by-step.
嗯,我們希望滿足所有這些需求,而且我認為我們有望滿足其中的大部分。Grace Blackwell NVLink72 是當今推理 AI 的理想引擎,可以說是理想的電腦思維機器。造成這種情況的原因有幾個。第一個原因是,代幣生成量,也就是推理所經過的代幣數量,比一次性聊天機器人多 100 倍、1,000 倍。它本質上是一種自我思考,逐步分解問題。
It might be planning multiple paths to an answer. It could be using tools, reading PDFs, reading web pages, watching videos, and then producing a result, an answer. The longer it thinks, the better the answer, the smarter the answer is. And so what we would like to do, and the reason why Grace Blackwell was designed to give such a giant step-up in inference performance, is so that you could do all this and still get a response as quickly as possible. Compared to Hopper, Grace Blackwell delivers some 40 times higher speed and throughput.
它可能正在規劃多條通往答案的路徑。它可以是使用工具、閱讀 PDF、閱讀網頁、觀看視頻,然後產生結果、答案。它思考的時間越長,答案就越好,答案就越聰明。因此,我們想要做的,以及 Grace Blackwell 設計為在推理性能上實現如此巨大提升的原因,就是讓您可以完成所有這些工作,並且仍然可以盡快得到回應。與 Hopper 相比,Grace Blackwell 的速度和吞吐量高出約 40 倍。
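A hedged illustration of why that throughput gain matters for reasoning workloads: if a reasoning query generates on the order of 100 to 1,000 times more tokens than a one-shot answer, per-query serving time scales with that token count unless system throughput rises accordingly. The numbers below are placeholders for illustration, not measured figures:

```python
# Toy model of reasoning-query serving time vs. throughput (all inputs are hypothetical placeholders).
one_shot_tokens = 500                 # assumed tokens for a one-shot chatbot answer
reasoning_multiplier = 100            # reasoning can use ~100-1,000x more tokens per task
throughput_gain = 40                  # the ~40x Grace Blackwell vs. Hopper figure cited above

reasoning_tokens = one_shot_tokens * reasoning_multiplier
# Time to serve one reasoning query, normalized to a one-shot query on the older system:
relative_time_old = reasoning_tokens / one_shot_tokens                       # 100x longer
relative_time_new = reasoning_tokens / (one_shot_tokens * throughput_gain)   # 2.5x longer
print(relative_time_old, relative_time_new)
```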
And so this is going to be a huge benefit in driving down the cost while improving the quality of response with excellent quality of service at the same time. So that's the fundamental reason. That was the core driving reason for Grace Blackwell NVLink 72. Of course, in order to do that, we had to reinvent, literally redesign, the entire -- a way that these supercomputers are built. But now we're in full production. It's going to be exciting. It's going to be incredibly exciting.
因此,這將對降低成本、提高回應品質和提供優質服務產生巨大的好處。這就是根本原因。這是 Grace Blackwell NVLink 72 的核心驅動原因。當然,為了做到這一點,我們必須徹底改造,確切地說是重新設計整個超級電腦的建造方式。但現在我們已全面投入生產。這會很令人興奮。這將會非常令人興奮。
Operator
Operator
Vivek Arya, Bank of America Securities.
美國銀行證券的 Vivek Arya。
Vivek Arya - Analyst
Vivek Arya - Analyst
Just a clarification for Colette first. So on the China impact, I think previously, it was mentioned at about $15 billion, so you had the $8 billion in Q2. So is there still some left as a headwind for the remaining quarters just Colette, how to model that? And then a question, Jensen, for you. Back at GTC, you had outlined a path towards almost $1 trillion of AI spending over the next few years.
先向 Colette 澄清一下。因此,關於中國的影響,我認為之前提到的約為 150 億美元,所以第二季的影響是 80 億美元。那麼,對於 Colette 來說,剩下的幾季是否還會有一些不利因素,如何建模呢?接下來我要問你一個問題,詹森。在 GTC 大會上,您曾概述了未來幾年近 1 兆美元的人工智慧支出路線。
Where are we in that build-out? And do you think it's going to be uniform, that you will see every spender, whether it's CSPs, sovereigns, or enterprises, build out, or should we expect some periods of digestion in between? Just what are your customer discussions telling you about how to model growth for next year?
我們目前處於建設的哪個階段?您是否認為,所有支出者(無論是 CSP、主權國家還是企業)都會均勻地建設,還是我們應該預期其間會有一些消化期?您與客戶的討論究竟告訴您應如何規劃明年的成長?
Colette Kress - Chief Financial Officer, Executive Vice President
Colette Kress - Chief Financial Officer, Executive Vice President
Yes, Vivek. Thanks so much for the question regarding H20. Yes, we recognized $4.6 billion H20 in Q1. We were unable to ship $2.5 billion so the total for Q1 should have been $7 billion. When we look at our Q2, our Q2 is going to be meaningfully down in terms of China data center revenue.
是的,維韋克。非常感謝您提出有關 H20 的問題。是的,我們在第一季確認了 46 億美元的 H20。我們無法運送 25 億美元,因此第一季的總額應該是 70 億美元。當我們展望第二季時,我們發現中國資料中心的營收將大幅下降。
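Putting Colette's figures together with the Q2 outlook, a short sketch of the H20 revenue affected across the two quarters (illustrative only; it simply restates the numbers quoted on the call and maps to the roughly $15 billion framing in the analyst's question):

```python
# H20 revenue affected by the April 9 export controls, per figures quoted on the call (USD billions).
q1_recognized = 4.6         # shipped and recognized before April 9
q1_unshipped = 2.5          # could not be shipped in Q1
q2_foregone = 8.0           # orders planned for H20 in Q2 that will not ship

q1_total_potential = q1_recognized + q1_unshipped    # ~7.1 ("about $7 billion" per Colette)
not_shipped_q1_q2 = q1_unshipped + q2_foregone       # ~10.5 not shipped across Q1 and Q2
print(q1_total_potential, not_shipped_q1_q2)
# The ~$15B figure in the question roughly corresponds to q1_total_potential + q2_foregone.
```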
And we had highlighted in terms of the amount of orders that we had planned for H20 in Q2, and that was $8 billion. Now going forward, we did have other orders going forward that we will not be able to fulfill. That is what was incorporated, therefore, in the amount that we wrote down of the $4.5 billion.
我們強調了第二季為 H20 計畫的訂單金額,即 80 億美元。現在展望未來,我們確實還有其他無法完成的訂單。因此,這就是我們減記的 45 億美元金額所包含的內容。
That write-down was about inventory and purchase commitments, and our purchase commitments were about what we expected regarding the orders that we had received. Going forward, though, it's a bigger issue regarding the amount of the market that we will not be able to serve. We assess that TAM to be close to about $50 billion in the future as we don't have a product to enable for China.
那次減記是關於庫存和採購承諾的,而我們的採購承諾是關於我們對收到的訂單的預期。但展望未來,更大的問題是我們無法服務的市場規模。由於我們目前沒有適合中國市場的產品,我們估計未來 TAM 將接近 500 億美元。
Jensen Huang - Founder, President, Chief Executive Officer
Jensen Huang - Founder, President, Chief Executive Officer
In fact, the -- probably the best way to think through it is that AI is several things. Of course, we know that AI is this incredible technology that's going to transform every industry from, of course, the way we do software to health care and financial services to retail to, I guess, every industry, transportation, manufacturing. And we're at the beginning of that. But maybe another way to think about that is where do we need intelligence, where do we need digital intelligence? And it's in every country, it's in every industry.
事實上,思考這個問題的最佳方式可能是將人工智慧視為幾件事。當然,我們知道人工智慧是一項令人難以置信的技術,它將改變每個行業,當然,從我們做軟體的方式到醫療保健和金融服務到零售業,我想,每個行業,交通運輸,製造業。我們正處於這一進程的開始階段。但也許我們可以從另一個角度來思考這個問題:我們在哪裡需要智能,我們在哪裡需要數位智能?它存在於每個國家、每個產業。
And we know because of that, we recognize that AI is also an infrastructure. It's a way of developing a technology -- delivering a technology that requires factories and these factories produce tokens. And they, as I mentioned, are important to every single industry and every single country. And so on that basis, we're really at the very beginning of it because the adoption of this technology is really kind of in its early stages. Now we've reached an extraordinary milestone with AIs that are reasoning or thinking, what people call inference time scaling.
正因為如此,我們認識到人工智慧也是一種基礎設施。這是一種開發技術的方式——交付需要工廠的技術,而這些工廠會生產代幣。正如我所提到的,它們對每個行業和每個國家都很重要。因此,從這個基礎上來說,我們實際上還處於起步階段,因為這項技術的採用實際上還處於早期階段。現在,我們在推理或思考的人工智慧方面已經達到了一個非凡的里程碑,人們稱之為推理時間縮放。
Of course, it created a whole new -- we've entered an era where inference is going to be a significant part of the compute workload. But anyhow, it's going to be a new infrastructure, and we're building it out in the cloud. The United States is really the early starter and available in US clouds. And this is our largest market, our largest installed base and we continue to see that happening.
當然,它創造了一個全新的時代——我們已經進入了一個推理將成為計算工作負載重要組成部分的時代。但無論如何,這將是一個新的基礎設施,我們正在雲端建置它。美國確實是早期起步者,並且已在美國雲端中可用。這是我們最大的市場、最大的安裝基礎,我們將繼續看到這種情況發生。
But beyond that, we're going to have to -- we're going to see AI go into enterprise, which is on-prem because so much of the data is still on-prem. Access control is really important. It's really hard to move all of every company's data into the cloud. And so we're going to move AI into the enterprise. And you saw that we announced a couple of really exciting new products, our RTX Pro Enterprise AI server that runs everything enterprise and AI, our DGX Spark and DGX Station, which is designed for developers who want to work on-prem.
但除此之外,我們還必須——我們將看到人工智慧進入企業,而企業是在本地部署的,因為許多資料仍然在本地部署。存取控制確實很重要。將每家公司的所有資料遷移到雲端確實很困難。因此我們將把人工智慧引入企業。您看到我們發布了幾款非常令人興奮的新產品,我們的 RTX Pro Enterprise AI 伺服器可以運行所有企業和 AI 功能,我們的 DGX Spark 和 DGX Station 專為想要在本地工作的開發人員而設計。
And so enterprise AI is just taking off. Telcos: a lot of the telco infrastructure will in the future be software-defined and built on AI, and so 6G is going to be built on AI, and that infrastructure needs to be built out. And as I said, it's very early stages. And then, of course, every factory today that makes things will have an AI factory that sits with it.
因此,企業人工智慧才剛起步。再來是電信公司。許多電信基礎設施未來將由軟體定義並基於人工智慧構建,因此 6G 將基於人工智慧構建,而這些基礎設施需要建設起來。我說過,現在還處於非常早期的階段。當然,如今每一家生產產品的工廠都會擁有一個與之配套的人工智慧工廠。
And the AI factory is going to be -- drive creating AI and operating AI for the factory itself but also to power the products and the things that are made by the factory. So it's very clear that every company will have AI factories. And very soon, there'll be robotics companies, robot companies and those companies will be also building AIs to drive the robots. And so we're at the beginning of all of this build-out.
人工智慧工廠不僅將推動工廠本身人工智慧的創造和運行,還將為工廠生產的產品和物品提供動力。所以很明顯,每家公司都會有人工智慧工廠。很快,就會出現機器人公司、機器人公司,這些公司也將開發人工智慧來驅動機器人。因此,我們正處於整個建設的開始階段。
Operator
Operator
C.J. Muse, Cantor Fitzgerald.
Cantor Fitzgerald 的 C.J. Muse。
CJ Muse - Analyst
CJ Muse - Analyst
There have been many large GPU cluster investment announcements in the last month, and you alluded to a few of them with Saudi Arabia, the UAE. And then also we heard from Oracle and xAI, just to name a few. So my question, are there others that have yet to be announced of the same kind of scale and magnitude? And perhaps more importantly, how are these orders impacting your lead times for Blackwell and your current visibility sitting here today almost halfway through 2025?
上個月有很多大型 GPU 集群投資公告,您提到了其中的幾個,包括沙烏地阿拉伯和阿聯酋。然後我們也聽取了 Oracle 和 xAI 的意見,僅舉幾例。所以我的問題是,是否有其他同等規模和等級的事件尚未宣布?或許更重要的是,這些訂單如何影響 Blackwell 的交貨時間以及您目前在 2025 年即將過去一半時的可見度?
Jensen Huang - Founder, President, Chief Executive Officer
Jensen Huang - Founder, President, Chief Executive Officer
Well, we have more orders today than we did at the last time I spoke about orders at GTC. However, we're also increasing our supply chain and building out our supply chain. They're doing a fantastic job. We're building it here onshore in the United States. But we're going to keep our supply chain quite busy for several -- many more years coming.
嗯,我們今天的訂單比我上次在 GTC 談論訂單時還要多。然而,我們也在增加我們的供應鏈並建立我們的供應鏈。他們做得非常出色。我們正在美國本土建造它。但在未來幾年甚至很多年裡,我們的供應鏈仍將保持相當繁忙的狀態。
And with respect to further announcements, I'm going to be on the road next week through Europe. And it's -- just about every country needs to build out AI infrastructure and their umpteenth AI factories being planned. We're -- I think in the remarks, Colette mentioned there's some 100 AI factories being built. There's a whole bunch that haven't been announced. And I think the important concept here which makes it easier to understand is that like other technologies that impact literally every single industry, of course, electricity was one and it became infrastructure.
關於進一步的公告,我下週將前往歐洲。幾乎每個國家都需要建造人工智慧基礎設施,並且正在規劃無數個人工智慧工廠。我們——我想在評論中,科萊特提到正在建造大約 100 家人工智慧工廠。還有很多尚未公佈。我認為這裡的一個重要概念是,它更容易理解,就像其他影響每個行業的技術一樣,電力當然是其中之一,它成為了基礎設施。
Of course, the information infrastructure, which we now know as the Internet affects every single industry, every country, every society. Intelligence is surely one of those things. I don't know any company, industry, country who thinks that intelligence is optional. It's essential infrastructure. And so we've now digitalized intelligence.
當然,我們現在所知的網路資訊基礎設施影響著每一個產業、每一個國家、每一個社會。智力肯定是其中之一。我不知道有哪個公司、產業或國家認為智慧是可選的。這是至關重要的基礎設施。我們現在已將智慧數位化。
And so I think we're clearly in the beginning of the build-out of this infrastructure. And every country will have it, I'm certain of that. Every industry will use it, that I'm certain of. And what's unique about this infrastructure is that it needs factories. It's a little bit like the energy infrastructure, electricity.
因此我認為我們顯然正處於這項基礎設施建設的開始階段。我確信,每個國家都會有它。我確信每個行業都會使用它。這種基礎設施的獨特之處在於它需要工廠。這有點像能源基礎建設、電力。
It needs factories. We need factories to produce this intelligence, and the intelligence is getting more sophisticated. We were talking about earlier that we had a huge breakthrough in the last couple of years with reasoning AI. And now there are agents that reason and there are super-agents that use a whole bunch of tools and then there's clusters of super agents where agents are working with agents, solving problems.
它需要工廠。我們需要工廠來生產這種智能,而這種智能正在變得越來越複雜。我們之前談到,過去幾年我們在推理人工智慧方面取得了巨大突破。現在,有能夠推理的代理,有能夠使用大量工具的超級代理,還有超級代理集群,代理與代理協作,解決問題。
And so you could just imagine, compared to one-shot chatbots and the agents that are now using AI built on these large language models, how much more compute-intensive they really need to be and are. So I think we're in the beginning of the build-out, and there should be many, many more announcements in the future.
因此,你可以想像,與一次性聊天機器人和現在使用基於這些大型語言模型構建的人工智慧的代理相比,它們真正需要的計算密集程度要高得多。所以我認為我們正處於建設的開始階段,未來應該會有更多公告。
Operator
Operator
Ben Reitzes, Melius.
本·雷澤斯(Ben Reitzes),梅利厄斯(Melius)。
Ben Reitzes - Analyst
Ben Reitzes - Analyst
I wanted to ask, first to Colette, just a little clarification around the guidance and maybe putting it in a different way. The $8 billion for H20 just seems like it's roughly $3 billion more than most people thought with regard to what you'd be foregoing in the second quarter. So that would mean that with regard to your guidance, the rest of the business in order to hit $45 billion is doing $2 billion to $3 billion or so better. So I was wondering if that math made sense to you. And then in terms of the guidance, that would imply the non-China business is doing a bit better than the Street expected. So wondering what the primary driver was there in your view.
首先我想問 Colette,請她對指引做出一些澄清,或者換個說法。就第二季所放棄的部分而言,80 億美元的 H20 似乎比大多數人想像的多出約 30 億美元。因此,這意味著,就你們的指引而言,其餘業務要達到 450 億美元,其表現要好上 20 億至 30 億美元左右。所以我想知道這個算法對你來說是否合理。從指引來看,這意味著非中國業務的表現比華爾街預期好一些。所以想知道在您看來主要的驅動因素是什麼。
And then this second part of my question, Jensen, I know you guide one quarter at a time, but with regard to the AI diffusion rule being lifted and this momentum with sovereign, there's been times in your history where you guys have said on calls like this, where you have more conviction and sequential growth throughout the year, et cetera.
然後是我問題的第二部分,詹森,我知道你每次都會指導一個季度,但關於人工智能擴散規則的取消以及主權的這種勢頭,你們歷史上曾多次在這樣的電話會議上說過,你們對全年的連續增長更有信心,等等。
And given the unleashing of demand with AI diffusion being revoked and the supply chain increasing, does the environment give you more conviction and sequential growth as we go throughout the year? So first one for Colette and then next one for Jensen.
並且考慮到隨著人工智慧的普及和供應鏈的增加而釋放的需求,環境是否會讓您對全年的連續成長更有信心?第一個是 Colette,第二個是 Jensen。
Colette Kress - Chief Financial Officer, Executive Vice President
Colette Kress - Chief Financial Officer, Executive Vice President
Thanks, Ben, for the question. When we look at our Q2 guidance and our commentary that we provided, that had the export controls not occurred, we would have had orders of about $8 billion for H20, that's correct. That was a possibility for what we would have had in our outlook for this quarter in Q2.
謝謝本提出這個問題。當我們查看第二季度的指導和我們提供的評論時,如果沒有發生出口管制,我們將獲得約 80 億美元的 H20 訂單,這是正確的。這是我們在第二季的展望中所預見的可能性。
So what we also have talked about here is the growth that we've seen in Blackwell across many of our customers as well as the growth that we continue to have in terms of supply that we need for our customers. So putting those together, that's where we came through with the guidance that we provided. I'm going to turn the rest over to Jensen to see how he wants to.
因此,我們在這裡也討論了我們在眾多客戶中看到的 Blackwell 成長,以及我們為滿足客戶需求而持續擴大的供應。因此,將這些結合起來,這就是我們提供的指引的由來。我將把剩下的交給 Jensen,看看他想怎麼說。
Jensen Huang - Founder, President, Chief Executive Officer
Jensen Huang - Founder, President, Chief Executive Officer
Yes. Thanks. Thanks, Ben. I would say compared to the beginning of the year, compared to GTC time frame, there are four positive surprises. The first positive surprise is the step function demand increase of reasoning AI. I think it is fairly clear now that AI is going through an exponential growth, and reasoning AI really busted through.
是的。謝謝。謝謝,本。我想說,與年初相比,與 GTC 時間框架相比,有四個積極的驚喜。第一個正面的驚喜是推理人工智慧的階躍函數需求增加。我認為現在很明顯,人工智慧正在經歷指數級成長,推理人工智慧真正取得了突破。
There were concerns about hallucination and about its ability to really solve problems, and I think a lot of people are crossing that barrier and realizing how incredibly effective agentic AI and reasoning AI are. So number one is inference reasoning and the exponential growth there, demand growth. The second one, you mentioned AI diffusion. It's really terrific to see that the AI diffusion rule was rescinded. President Trump wants America to win, and he also realizes that we're not the only country in the race.
此前人們擔心幻覺問題,以及它能否真正解決問題,我認為許多人正在跨越這一障礙,並意識到代理式 AI 和推理 AI 是多麼有效。因此,第一是推理及其需求的指數式成長。第二,你提到了人工智慧擴散規則。看到人工智慧擴散規則被廢除真是太好了。川普總統希望美國獲勝,他也意識到我們並不是這場競賽中唯一的國家。
And he wants the United States to win and recognizes that we have to get the American stack out to the world and have the world build on top of American stacks instead of alternatives. And so AI diffusion happened, the rescinding of it happened at almost precisely the time that countries around the world are awakening to the importance of AI as an infrastructure, not just as a technology of great curiosity and great importance, but infrastructure for their industries and start-ups and society.
他希望美國獲勝,並認識到我們必須將美國體系推向世界,讓世界建立在美國體系之上,而不是其他選擇之上。因此,人工智慧的傳播和廢除幾乎恰逢世界各國開始意識到人工智慧作為基礎設施的重要性之時,人工智慧不僅是一項令人好奇和非常重要的技術,而且是其行業、新創公司和社會的基礎設施。
Just as they had to build out infrastructure for electricity and Internet, you got to build out an infrastructure for AI. I think that's an awakening, and that creates a lot of opportunity. The third is enterprise AI. Agents work and agents are doing -- these agents are really quite successful, much more than generative AI. Agentic AI is game-changing.
就像他們必須建造電力和網路基礎設施一樣,你也必須建造人工智慧基礎設施。我認為這是一種覺醒,它創造了很多機會。第三是企業AI。代理工作並且代理正在做——這些代理確實相當成功,遠遠超過生成式人工智慧。Agentic AI 正在改變遊戲規則。
Agents can understand ambiguous and rather implicit instructions and able to problem solve and use tools and have memory and so on. And so I think this is -- enterprise AI is ready to take off. And it's taken us a few years to build a computing system that is able to integrate and run enterprise AI stacks, run enterprise IT stacks but add AI to it.
代理人可以理解模糊和隱含的指令,能夠解決問題、使用工具、有記憶等等。所以我認為企業人工智慧已經準備好起飛了。我們花了幾年時間建立一個能夠整合和運行企業 AI 堆疊、運行企業 IT 堆疊並添加 AI 的運算系統。
And this is the RTX Pro Enterprise server that we announced at Computex just last week. And just about every major IT company has joined us, super excited about that. And so computing is one stack, one part of it. But remember, enterprise IT is really three pillars: it's compute, storage, and networking. And we've now finally put all three of them together, and we're going to market with that.
這是我們上週在 Computex 上發布的 RTX Pro Enterprise 伺服器。幾乎所有大型 IT 公司都加入了我們,對此我們感到非常興奮。因此,計算是一個堆疊,是其中的一部分。但請記住,企業 IT 實際上有三大支柱:運算、儲存和網路。現在我們終於將這三者整合在一起,並將其推向市場。
And then lastly, industrial AI. Remember, one of the implications of the world reordering, if you will, is regions onshoring manufacturing and building plants everywhere. In addition to AI factories, of course, there is new electronics manufacturing and chip manufacturing being built around the world. And all of these new plants and new factories are coming at exactly the right time, as Omniverse, AI, and all the work that we're doing with robotics are emerging. And so this fourth pillar is quite important.
最後是工業人工智慧。請記住,如果你願意這麼說的話,世界秩序重組的影響之一就是各地區將製造業回流本土,在各地興建工廠。當然,除了 AI 工廠之外,世界各地還在興建新的電子製造和晶片製造產能。所有這些新工廠的出現恰逢其時,正值 Omniverse、AI 以及我們在機器人領域所做的一切工作興起之際。所以這第四個支柱非常重要。
Every factory will have an AI factory associated with it. And in order to create these physical AI systems, you really have to train a vast amount of data. So back to more data, more training, more AIs to be created, more computers. And so these four drivers are really kicking into turbocharge.
每個工廠都會有一個與之相關的AI工廠。為了創建這些實體 AI 系統,您必須訓練大量資料。所以回到更多的數據、更多的訓練、更多的人工智慧、更多的電腦。因此,這四個驅動因素確實發揮了重要作用。
Operator
Operator
Timothy Arcuri, UBS.
瑞銀的提摩西·阿庫裡。
Timothy Arcuri - Analyst
Timothy Arcuri - Analyst
Jensen, I wanted to ask about China. It sounds like the July guidance assumes there's no SKU replacement for the H20. But if the President wants the US to win, it seems like you're going to have to be allowed to ship something into China. So I guess I had two points on that.
詹森,我想問有關中國的問題。聽起來 7 月的指引假設沒有替代 H20 的 SKU。但如果總統希望美國獲勝,那麼似乎就必須允許你們將某些產品運往中國。所以我想我對此有兩點看法。
First of all, have you been approved to ship a new modified version into China? And you're currently building it but you just can't ship it in fiscal Q2? And then you were sort of run rating $7 billion to $8 billion a quarter into China. Can we get back to those sorts of quarterly run rates once you get something that you're allowed to ship back into China? I think we're all trying to figure out how much to add back to our models and when. So whatever you can say there would be great.
首先,你們是否已獲准向中國出貨新的修改版產品?你們目前是否正在打造它,只是無法在第二財季出貨?另外,你們先前在中國的營收運行率大約是每季 70 億至 80 億美元。一旦有了獲准運往中國的產品,我們能否恢復到那種季度運行率?我想我們都在試圖弄清楚應該在模型中加回多少、以及何時加回。所以無論您能透露什麼都很好。
Jensen Huang - Founder, President, Chief Executive Officer
Jensen Huang - Founder, President, Chief Executive Officer
The President has a plan. He has a vision and I trust him. With respect to our export controls, it's a set of limits. And the new set of limits pretty much make it impossible for us to reduce Hopper any further for any productive use. And so the new limits, it's kind of the end of the road for Hopper.
總統有一個計劃。他有遠見,我相信他。關於我們的出口管制,這是一套限制。而新的限制幾乎使我們無法進一步減少 Hopper 以用於任何生產用途。所以新的限制對霍珀來說就意味著道路的終點。
We have some -- we have limited options. And so we just -- the key is to understand the limits. The key is to understand the limits and see if we can come up with interesting products that could continue to serve the Chinese market. We don't have anything at the moment and but we're considering it. We're thinking about it.
我們有一些——我們的選擇有限。所以我們只是——關鍵是要了解限制。關鍵是要了解局限性,看看我們是否能推出有趣的產品,繼續服務中國市場。我們目前還沒有任何消息,但我們正在考慮。我們正在考慮。
Obviously, the limits are quite stringent at the moment. And we have nothing to announce today. And when the time comes, we'll engage the administration and discuss that.
顯然,目前的限制相當嚴格。今天我們沒有什麼要宣布的。時機成熟時,我們將與政府接觸並討論此事。
Operator
Operator
Aaron Rakers, Wells Fargo.
富國銀行的 Aaron Rakers。
Jacob Wilhelm - Analyst
Jacob Wilhelm - Analyst
This is Jake on for Aaron. I was wondering if you could give some additional color around the strength you saw within the Networking business, particularly around the adoption of your Ethernet solutions at CSPs as well as any change you're seeing in network attach rates.
這是傑克 (Jake) 代替亞倫 (Aaron)。我想知道您是否可以進一步說明您在網路業務中看到的優勢,特別是關於 CSP 採用乙太網路解決方案的情況,以及您看到的網路連線率的變化。
Jensen Huang - Founder, President, Chief Executive Officer
Jensen Huang - Founder, President, Chief Executive Officer
Yes, thank you for that. We now have three networking platforms, maybe four. The first one is the scale-up platform to turn a computer into a much larger computer. Scaling up is incredibly hard to do. Scaling out is easier to do but scaling up is hard to do.
是的,謝謝你。我們現在有三個網路平台,也許四個。第一個是擴大規模的平台,將一台電腦變成一台更大的電腦。擴大規模極為困難。擴展比較容易,但擴大卻很難。
And that platform is called NVLink. NVLink comes with chips and switches and NVLink spines, and it's really complicated. But anyway, that's our new platform, the scale-up platform. In addition to InfiniBand, we also have Spectrum-X. We've been fairly consistent that Ethernet was designed for a lot of traffic that is independent.
這個平台叫做 NVLink。NVLink 配有晶片、交換器和 NVLink 主幹,非常複雜。但無論如何,這是我們的新平台,擴展平台。除了 InfiniBand,我們還有 Spectrum-X。我們一直一致認為乙太網路是為大量獨立流量而設計的。
But in the case of AI, you have a lot of computers working together. And the traffic of AI is insanely bursty. Latency matters a lot because the AI is thinking and it wants to get work on as quickly as possible, and you got a whole bunch of nodes working together. And so we enhanced Ethernet, added capabilities like extremely low latency, congestion control, adaptive routing, the type of technologies that were available only in InfiniBand to Ethernet. And as a result, we improved the utilization of Ethernet in these clusters.
但在人工智慧方面,需要很多台電腦協同工作。而且 AI 的流量突發性極強。延遲非常重要,因為人工智慧正在思考,它希望盡快完成工作,並且需要一大堆節點協同工作。因此,我們增強了乙太網路,加入了極低延遲、擁塞控制、自適應路由等原本只有 InfiniBand 才具備的技術。因此,我們提高了這些集群中乙太網路的利用率。
These clusters are gigantic, and utilization ranges from as low as 50% to as high as 85%, 90%. And so the difference is, if you had a cluster that's $10 billion and you improve its effectiveness by 40%, that's worth $4 billion. It's incredible. And so Spectrum-X has been, quite frankly, a home run. And this last quarter, as we said in the prepared remarks, we added two very significant CSPs to the Spectrum-X adoption.
這些集群非常龐大,其利用率從最低的 50% 到最高的 85%、90% 不等。所以差別在於,如果你擁有一個價值 100 億美元的集群,並且將其效率提高 40%,那麼這就價值 40 億美元。這真是令人難以置信。因此,坦白說,Spectrum-X 確實是一記全壘打。正如我們在準備好的發言中所說,上個季度我們在 Spectrum-X 的採用者中增加了兩個非常重要的 CSP。
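Jensen's utilization example can be made concrete with a small sketch; the cluster cost and improvement figures are the illustrative ones he quotes, not NVIDIA data:

```python
# Illustrative value of higher network utilization on a large AI cluster (numbers from the example above).
cluster_cost_billion = 10.0          # "$10 billion" cluster
effectiveness_gain = 0.40            # "improve its effectiveness by 40%"

effective_value_unlocked = cluster_cost_billion * effectiveness_gain
print(f"~${effective_value_unlocked:.0f}B of additional effective capacity")  # ~$4B
```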
And then the last one is BlueField, which is our control plane. And so in those four, there's the control plane network, which is used for storage. It's used for security, and for many of these clusters that want to achieve isolation among their users, multi-tenant clusters that still need extremely high bare-metal performance, BlueField is ideal and is used in a lot of these cases. And so we have these four networking platforms that are all growing, and we're doing really well. I'm very proud of the team.
最後一個是 BlueField,它是我們的控制平面。在這四個平台中,控制平面網路用於儲存,也用於安全;對於許多希望在使用者之間實現隔離、同時仍需保持極高裸機效能的多租戶叢集而言,BlueField 是理想的選擇,並在許多此類場景中被採用。因此,我們有四個網路平台,它們都在持續成長,而且我們做得非常好。我為這個團隊感到非常自豪。
Operator
Operator
That is all the time we have for questions. Jensen, I will turn the call back to you.
我們回答問題的時間就這麼多了。詹森,我會把電話轉給你。
Jensen Huang - Founder, President, Chief Executive Officer
Jensen Huang - Founder, President, Chief Executive Officer
Thank you. This is the start of a powerful new wave of growth. Grace Blackwell is in full production. We're off to the races. We now have multiple significant growth engines. Inference, once a light workload, is surging with revenue-generating AI services. AI is growing faster and will be larger than any platform shift before, including the Internet, mobile and cloud.
謝謝。這是新一輪強勁成長浪潮的開始。Grace Blackwell 已全面投產。我們已整裝待發。我們現在擁有多個重要的成長引擎。推理曾經是較輕的工作負載,如今正隨著創造營收的 AI 服務而激增。人工智慧的成長速度比以往任何平台轉變都要快,而且規模也將超過包括網際網路、行動和雲端在內的任何一次平台轉變。
Blackwell is built to power the full AI life cycle from training frontier models to running complex inference and reasoning agents at scale. Training demand continues to rise with breakthroughs in post-training like reinforcement learning and synthetic data generation. But inference is exploding.
Blackwell 旨在為整個 AI 生命週期提供支持,從訓練前沿模型到大規模運行複雜的推理和推理代理。隨著後期訓練以及強化學習和合成資料生成等方面的突破,訓練需求持續上升。但推理正在爆炸式增長。
Reasoning AI agents require orders of magnitude more compute. The foundations of our next growth platforms are in place and ready to scale. Sovereign AI: nations are investing in AI infrastructure just as they did for electricity and the Internet. Enterprise AI: AI must be deployable on-prem and integrated with existing IT.
推理人工智慧代理需要高出幾個數量級的運算能力。我們下一批成長平台的基礎已經到位,並已準備好擴大規模。主權人工智慧:各國正在投資人工智慧基礎設施,就像他們當年投資電力和網際網路一樣。企業人工智慧:AI 必須可在本地部署並與現有 IT 整合。
Our RTX Pro, DGX Spark and DGX Station enterprise AI systems are ready to modernize the $500 billion IT infrastructure on-prem or in the cloud. Every major IT provider is partnering with us. Industrial AI: from training to digital twin simulation to deployment, NVIDIA Omniverse and Isaac are powering next-generation factories and humanoid robotic systems worldwide.
我們的 RTX Pro、DGX Spark 和 DGX Station 企業 AI 系統已準備好對本地或雲端價值 5,000 億美元的 IT 基礎架構進行現代化改造。每個主要的 IT 提供者都與我們合作。工業人工智慧:從訓練到數位孿生模擬再到部署,NVIDIA Omniverse 和 Isaac 正在為全球下一代工廠和人形機器人系統提供支援。
The age of AI is here. From AI infrastructure and inference at scale to sovereign AI, enterprise AI, and industrial AI, NVIDIA is ready. Join us at GTC Paris, our keynote at VivaTech on June 11, talking about quantum GPU computing, robotic factories and robots, and celebrate our partnerships building AI factories across the region. The NVIDIA band will tour France, the UK, Germany, and Belgium.
人工智慧時代已經到來。從人工智慧基礎設施和大規模推理,到主權人工智慧、企業人工智慧和工業人工智慧,NVIDIA 已經做好準備。歡迎參加 6 月 11 日在 VivaTech 舉行的 GTC Paris 主題演講,屆時將討論量子 GPU 運算、機器人工廠和機器人,並慶祝我們在整個地區建立 AI 工廠的合作夥伴關係。NVIDIA 樂團將在法國、英國、德國和比利時巡迴演出。
Thank you for joining us at the earnings call today. See you in Paris.
感謝您參加今天的收益電話會議。巴黎見。
Operator
Operator
This concludes today's conference call. You may now disconnect.
今天的電話會議到此結束。您現在可以斷開連線。