使用警語:中文譯文來源為 Google 翻譯,僅供參考,實際內容請以英文原文為主
Operator
Operator
Good afternoon. My name is Sarah and I will be your conference operator today. At this time, I would like to welcome everyone to NVIDIA's fourth-quarter earnings call. (Operator Instructions)
午安.我叫莎拉,今天將由我擔任你們的會議接線生。此時此刻,我謹代表英偉達歡迎各位參加第四季財報電話會議。(操作說明)
Toshiya Hari, you may begin your conference.
Toshiya Hari,你可以開始你的會議了。
Toshiya Hari - Vice President of Investor Relations & Strategic Finance
Toshiya Hari - Vice President of Investor Relations & Strategic Finance
Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the fourth quarter of fiscal 2026. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer.
謝謝。各位下午好,歡迎參加英偉達2026財年第四季電話會議。今天與我一同出席的還有英偉達總裁兼執行長黃仁勳,以及執行副總裁兼財務長科萊特·克雷斯。
Our call is being webcast live on NVIDIA's investor relations website. The webcast will be available for replay until the conference call to discuss our financial results for the first quarter of fiscal 2027. The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without our prior written consent.
我們的電話會議將進行網路直播,並在英偉達投資者關係網站上同步播出。網路直播將提供回放,直至召開電話會議討論我們 2027 財年第一季的財務業績。本次電話會議的內容歸英偉達所有。未經我們事先書面同意,不得複製或轉錄。
During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q and the reports that we may file on Form 8-K with the Securities and Exchange Commission.
在本次電話會議中,我們可能會根據目前的預期發表一些前瞻性聲明。這些都存在著許多重大風險和不確定性,我們的實際結果可能與預期有重大差異。有關可能影響我們未來財務表現的因素的討論,請參閱今天發布的收益報告中的披露資訊、我們最新的 10-K 表格和 10-Q 表格以及我們可能向美國證券交易委員會提交的 8-K 表格報告。
All our statements are made as of today, February 25, 2026, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements.
我們所有聲明均截至今日(2026 年 2 月 25 日),並基於我們目前掌握的資訊。除法律另有規定外,我們不承擔更新任何此類聲明的義務。
During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website.
在本次電話會議中,我們將討論非GAAP財務指標。您可以在我們的財務長評論中找到這些非GAAP財務指標與GAAP財務指標的調節表,該評論已發佈在我們的網站上。
With that, let me turn the call over to Colette.
那麼,現在讓我把電話交給科萊特。
Colette Kress - Executive Vice President, Chief Financial Officer
Colette Kress - Executive Vice President, Chief Financial Officer
Thanks, Toshiya. We delivered another outstanding quarter with record revenue, operating income, and free cash flow. Total revenue of $68 billion was up 73% year over year, accelerating from Q3. Growth on a sequential basis was also a record as we added $11 billion in Data Center revenue across a diverse and expanding set of customers, including cloud providers, hyperscalers, AI model makers, enterprises, and sovereign nations.
謝謝你,Toshiya。我們又迎來了一個出色的季度,營收、營業收入和自由現金流均創歷史新高。總營收達 680 億美元,年增 73%,較第三季增速加快。與上一季相比,我們的成長也創下了紀錄,資料中心收入增加了 110 億美元,客戶群日益多元化且不斷擴大,包括雲端供應商、超大規模資料中心、人工智慧模型製造商、企業和主權國家。
Demand for our Blackwell architecture, extreme co-designed at data center scale, continues to strengthen as inference deployments grow in addition to training. The transition to accelerated computing and the infusion of AI across existing hyperscale workloads continue to fuel our growth. Agentic and physical AI applications built on increasingly smarter and multimodal models are beginning to drive our financial performance.
隨著推理部署和訓練的增加,對我們專為資料中心規模設計的 Blackwell 架構的需求持續增強。向加速運算的轉型以及人工智慧在現有超大規模工作負載中的應用,持續推動著我們的成長。基於日益智慧和多模態模型的智能體和實體人工智慧應用正在開始推動我們的財務表現。
On a full-year basis, Data Center generated revenue of $194 billion, up 68% year over year. We have now scaled our Data Center business by nearly 13x since the emergence of ChatGPT in fiscal 2023.
全年來看,資料中心業務收入達 1,940 億美元,較去年同期成長 68%。自 2023 財年 ChatGPT 推出以來,我們的資料中心業務規模已擴大了近 13 倍。
As we look ahead, we expect sequential revenue growth throughout calendar 2026, exceeding what was included in the $500 billion Blackwell and Rubin revenue opportunity we shared last year. We believe we have inventory and supply commitments in place to address future demand, including shipments extending into calendar 2027.
展望未來,我們預計 2026 年全年營收將持續成長,超過去年我們分享的 Blackwell 和 Rubin 5,000 億美元營收機會所包含的金額。我們相信,我們已製定庫存和供應承諾,以滿足未來的需求,包括 2027 年的發貨。
Every data center is power-constrained. Customers make critical architectural decisions based on performance per watt given these constraints and the need to maximize AI factory revenue. SemiAnalysis declared NVIDIA the Inference King as recent results from InferenceX reinforced our inference leadership, with GB300 NVL72 achieving up to 50x performance per watt and 35x lower cost per token compared with Hopper. And continuous optimization of CUDA software helped deliver up to 5 times better performance on GB200 NVL72 just within four months.
每個資料中心都面臨電力限制。在這些限制條件下,客戶會根據每瓦效能做出關鍵的架構決策,同時也要最大限度地提高人工智慧工廠的收入。SemiAnalysis 宣布 NVIDIA 為推理之王,因為 InferenceX 的最新結果鞏固了我們在推理領域的領先地位,GB300 NVL72 的每瓦性能比 Hopper 提高了 50 倍,每個代幣的成本降低了 35 倍。CUDA 軟體的持續優化使得 GB200 NVL72 的效能在短短四個月內提升了 5 倍。
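To make the performance-per-watt economics above concrete, here is a minimal back-of-the-envelope sketch; every rate and price in it is a hypothetical placeholder for illustration, not a figure from the call:

```python
# Back-of-the-envelope AI-factory economics under a fixed power budget.
# All numbers below are hypothetical placeholders, not NVIDIA data.

def factory_revenue(power_mw, tokens_per_sec_per_mw, price_per_m_tokens):
    """Annual token revenue for a power-constrained data center."""
    seconds_per_year = 3600 * 24 * 365
    tokens_per_year = power_mw * tokens_per_sec_per_mw * seconds_per_year
    return tokens_per_year / 1e6 * price_per_m_tokens

# Same 100 MW power budget; a platform with higher perf/watt mints more tokens.
baseline = factory_revenue(100, tokens_per_sec_per_mw=1e6, price_per_m_tokens=0.50)
improved = factory_revenue(100, tokens_per_sec_per_mw=5e6, price_per_m_tokens=0.50)

print(f"baseline platform:    ${baseline:,.0f}")
print(f"5x perf/watt platform: ${improved:,.0f}")
print(f"revenue multiple:      {improved / baseline:.1f}x")
```

Under a fixed power budget, token output, and therefore factory revenue, scales directly with performance per watt, which is why power-constrained customers anchor architectural decisions on that metric.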
NVIDIA produces the lowest cost per token, and data centers running on NVIDIA generate the highest revenues. Our pace of innovation, particularly at our scale, is unmatched. Fueled by an annual R&D budget approaching $20 billion and our ability to extreme co-design across compute and networking, across chips, systems, algorithms, and software, we intend to deliver X-factor leaps in performance per watt every generation and extend our leadership position over the long term.
NVIDIA 的代幣單價最低,而採用 NVIDIA 技術的資料中心所產生的收入最高。我們的創新速度,尤其是在我們這樣的規模下,是無與倫比的。憑藉每年近 200 億美元的研發預算,以及我們在晶片、系統、演算法和軟體等運算和網路領域進行極端協同設計的能力,我們計劃每一代產品都能在每瓦性能方面實現 X 倍的飛躍,並長期鞏固我們的領先地位。
Q4 data center revenue of $62 billion increased 75% year over year and 22% sequentially, driven primarily by sustained strength in Blackwell and the Blackwell Ultra ramp. With NVIDIA infrastructure in high demand, even Hopper and much of the six-year-old Ampere-based products are sold out in the cloud.
第四季資料中心營收達 620 億美元,年成長 75%,季增 22%,主要得益於 Blackwell 的持續強勁表現以及 Blackwell Ultra 的快速擴張。由於 NVIDIA 基礎設施需求旺盛,即使是 Hopper 和許多基於 Ampere 架構、已有六年歷史的產品在雲端也已售罄。
Nearly a year has passed since the release of our Grace Blackwell NVL72 systems. Today, nearly 9 gigawatts of infrastructure on Blackwell are deployed and consumed by the major cloud service providers, hyperscalers, AI model makers, and enterprises.
自從我們發布 Grace Blackwell NVL72 系統以來,已經過了將近一年。目前,Blackwell 上部署和使用的基礎設施容量接近 9 吉瓦,主要雲端服務供應商、超大規模資料中心、人工智慧模型製造商和企業都在使用這些資源。
Networking, a cornerstone of our data center scale infrastructure offering, was a standout this quarter, generating $11 billion in revenue, up more than 3.5x year over year. Demand for our scale-up and scale-out technologies reached record levels, both growing double-digit sequentially, driven by strong adoption of NVLink, Spectrum-X Ethernet, and InfiniBand.
網路是我們資料中心規模基礎設施產品的核心,本季表現突出,創造了 110 億美元的收入,較去年同期成長超過 3.5 倍。在 NVLink、Spectrum-X 乙太網路和 InfiniBand 獲得廣泛採用的推動下,我們的縱向擴展(scale-up)與橫向擴展(scale-out)技術需求達到創紀錄水平,兩者均實現了兩位數的環比增長。
On a year-over-year basis, growth was driven primarily by NVLink72 scale-up switches as Grace Blackwell Systems accounted for roughly two-thirds of Data Center revenue in the quarter. NVLink scale-up fabric has revolutionized computing and demonstrates the power of extreme co-design across all of the chips of the supercomputer and the full stack.
與去年同期相比,成長主要由 NVLink72 擴展交換機推動,Grace Blackwell Systems 在該季度貢獻了資料中心營收的約三分之二。NVLink 可擴展架構徹底改變了運算方式,並展示了超級電腦所有晶片和整個堆疊的極致協同設計的強大功能。
In Q4, we announced that we will enable AWS to integrate NVLink with their custom silicon. Momentum is strong with our Spectrum-X Ethernet scale-out and scale-across networking as customers work to unify distributed data centers into integrated gigascale AI factories. For the full year, our networking business exceeded $31 billion in revenue, up more than 10x compared to fiscal 2021, the year we acquired Mellanox.
第四季度,我們宣布將支援 AWS 把 NVLink 整合到其客製化晶片中。隨著客戶努力將分散式資料中心統一為整合的千兆級人工智慧工廠,我們的 Spectrum-X 乙太網路橫向擴展與跨資料中心擴展技術勢頭強勁。全年來看,我們的網路業務營收超過 310 億美元,比 2021 財年(我們收購 Mellanox 的那一年)成長了 10 倍以上。
Our demand profile is broad, diverse, and expanding beyond just chatbots. First, there is a fundamental platform shift from classical machine learning to generative AI. Strong evidence of ROI as hyperscalers upgrade massive traditional workloads to generative AI, including search, ad generation, and content recommender systems, is encouraging our largest customers to accelerate their capital spending.
我們的需求範圍廣泛、多元化,並且正在擴展到聊天機器人之外。首先,平台發生了根本性的轉變,從傳統的機器學習轉向了生成式人工智慧。隨著超大規模資料中心將龐大的傳統工作負載升級到生成式人工智慧(包括搜尋、廣告生成和內容推薦系統),投資報酬率的顯著提升正鼓勵我們最大的客戶加快資本支出。
For example, at Meta, advancements in their GEM model drove a 3.5% increase in ad clicks on Facebook and a more than 1% gain in conversions on Instagram, translating into meaningful revenue growth. With the same NVIDIA infrastructure, Meta Superintelligence Labs can train and deploy their frontier agentic AI systems.
例如,Meta 公司 GEM 模型的改進使其在 Facebook 上的廣告點擊量增加了 3.5%,在 Instagram 上的轉換量增加了 1% 以上,從而轉化為可觀的收入成長。借助相同的 NVIDIA 基礎設施,Meta Superintelligence 實驗室可以訓練和部署其前沿的智慧體 AI 系統。
Frontier agentic systems have reached an inflection point. Claude Code, Claude Cowork, and OpenAI Codex have achieved useful intelligence. Adoption is skyrocketing and tokens are profitable, driving extreme urgency to scale up compute.
前沿智能體系統已經達到一個轉折點。Claude Code、Claude Cowork 和 OpenAI Codex 都實現了有用的智慧。採用率飆升,代幣有利可圖,這使得擴大計算規模變得極為迫切。
Compute directly translates to intelligence and revenue growth. Analyst expectations for 2026 CapEx across the top five cloud providers and hyperscalers, who collectively account for a little over 50% of our Data Center revenue, are up nearly $120 billion since the start of the year and approaching $700 billion. We continue to expect the transition of classic data center workloads to GPU-accelerated computing and the use of AI to enhance today's hyperscale workloads to contribute toward roughly half of our long-term opportunity.
運算能力可以直接轉化為智慧和收入成長。分析師預計,占我們資料中心收入略高於 50% 的前五名雲端供應商和超大規模營運商的 2026 年資本支出將比年初增加近 1,200 億美元,並接近 7,000 億美元。我們繼續期待傳統資料中心工作負載向 GPU 加速運算的過渡,以及人工智慧在增強當今超大規模工作負載方面的應用,並為我們長期發展機會貢獻約一半的份額。
Every country will build and operate some parts of its AI infrastructure, just like with electricity and internet today. In fiscal year 2026, our sovereign AI business more than tripled year over year to over $30 billion, driven primarily by customers based in Canada, France, the Netherlands, Singapore, and the UK. Over the long run, we expect our sovereign opportunity to grow at least in line with the AI infrastructure market as countries spend on AI proportional to their GDP.
每個國家都會建造和運營其人工智慧基礎設施的某些部分,就像今天的電力和互聯網一樣。在 2026 財年,我們的主權人工智慧業務年增超過三倍,達到 300 億美元以上,這主要得益於加拿大、法國、荷蘭、新加坡和英國的客戶。從長遠來看,我們預期隨著各國在人工智慧領域的支出與其GDP成正比,我們的主權機會至少會與人工智慧基礎設施市場保持同步成長。
While small amounts of H200 products for China-based customers were approved by the US government, we have yet to generate any revenue, and we do not know whether any imports will be allowed into China. Our competitors in China, bolstered by recent IPOs, are making progress and have the potential to disrupt the structure of the global AI industry over the long term.
雖然美國政府已批准向中國客戶少量進口 H200 產品,但我們尚未產生任何收入,也不知道是否會允許任何進口產品進入中國。我們的中國競爭對手,在近期IPO的推動下,正在取得進展,並且有可能在長期內顛覆全球人工智慧產業的結構。
To sustain its leadership position in AI compute, America must engage every developer and be the platform of choice for every commercial business, including those in China. We will continue to engage with the US and China governments and advocate for America's ability to compete around the world.
為了保持其在人工智慧運算領域的領先地位,美國必須吸引每位開發者,並成為包括中國企業在內的所有商業企業的首選平台。我們將繼續與美國和中國政府溝通,並倡導美國在全球競爭中保持競爭力。
We unveiled the Rubin platform last month at CES, comprised of six new chips: the Vera CPU, Rubin GPU, NVLink 6 Switch, ConnectX-9 SuperNIC, BlueField-4 DPU, and Spectrum-6 Ethernet Switch. The platform will train MoE models with one-fourth the number of GPUs and reduce inference token costs by up to 10x compared to Blackwell.
我們上個月在 CES 上推出了 Rubin 平台,該平台由六款新晶片組成:Vera CPU、Rubin GPU、NVLink 6 交換器、ConnectX-9 SuperNIC、BlueField-4 DPU 和 Spectrum-6 乙太網路交換器。與 Blackwell 相比,該平台將以四分之一數量的 GPU 訓練 MoE 模型,並將推理代幣成本降低至多 10 倍。
We shipped our first Vera Rubin samples to customers earlier this week and we remain on track to commence production shipments in the second half of the year. Based on its modular, cable-free tray design, Rubin will deliver improved resiliency and serviceability relative to Blackwell. We expect every cloud model builder to deploy Vera Rubin.
本週早些時候,我們向客戶發送了第一批 Vera Rubin 樣品,我們仍按計劃在今年下半年開始生產發貨。基於模組化、免線纜的托盤設計,Rubin 的韌性和可維護性將優於 Blackwell。我們預期每家雲端模型建構者都會部署 Vera Rubin。
Moving to gaming. Gaming revenue of $3.7 billion increased 47% year on year, driven by strong Blackwell demand and improved supply. GeForce RTX is the leading platform for PC gamers, creators, and developers.
接下來談談遊戲業務。受 Blackwell 強勁需求和供應改善的推動,遊戲收入達到 37 億美元,年增 47%。GeForce RTX 是 PC 遊戲玩家、內容創作者和開發者的領先平台。
In Q4, we added several new technologies and advancements, including DLSS 4.5, which uses AI to bring game visuals to a new level; G-SYNC Pulsar, bringing incredibly clear graphics even in motion; and 35% faster LLM inference across leading AI PC frameworks. Looking ahead, while end demand for our products remains strong and channel inventory levels are healthy, we expect supply constraints to be a headwind for gaming in Q1 and beyond.
第四季度,我們新增了多項新技術和改進,包括 DLSS 4.5,它利用人工智慧將遊戲畫面提升到一個新的水平;G-SYNC Pulsar,即使在運動中也能帶來令人難以置信的清晰畫面;以及在領先的 AI PC 框架中,LLM 推理速度提升了 35%。展望未來,雖然我們產品的終端需求依然強勁,通路庫存水準也較為健康,但我們預期供應限制將成為第一季及以後遊戲產業的阻力。
Professional visualization revenue crossed the $1 billion mark for the first time, reaching $1.3 billion, up 159% year over year and 74% sequentially. During the quarter, we launched the RTX PRO 5000 Blackwell workstation with 72 gigabytes of fast memory for AI developers running LLMs and agentic workflows.
專業視覺化領域首次突破 10 億美元大關,營收達 13 億美元,年增 159%,季增 74%。本季度,我們推出了配備 72 GB 高速記憶體的 RTX PRO 5000 Blackwell 工作站,專為運行 LLM 和代理程式工作流程的 AI 開發人員而設計。
Automotive revenue of $604 million was up 6% year over year, driven by robust demand for self-driving solutions. At CES, we introduced Alpamayo, the world's first open portfolio of reasoning vision-language-action models, simulation blueprints, and data sets, enabling vehicles that can think. The first passenger car featuring Alpamayo built on NVIDIA DRIVE will be on the road soon in the new Mercedes-Benz CLA.
汽車業務收入達 6.04 億美元,年增 6%,這主要得益於市場對自動駕駛解決方案的強勁需求。在 CES 上,我們推出了 Alpamayo,這是世界上第一個開放的推理、視覺、語言、動作模型、模擬藍圖和資料集組合,使車輛能夠思考。首款搭載基於 NVIDIA Drive 的 Alpamayo 技術的乘用車即將上市,它將是新款梅賽德斯-奔馳 CLA。
Physical AI is here, having already contributed north of $6 billion in NVIDIA revenue in fiscal year 2026. Robotaxi rides are growing exponentially, with commercial fleets from Waymo, Tesla, Uber, WeRide, Zoox, and many others expected to scale from thousands of vehicles in 2025 to millions over the next decade, creating a market poised to generate hundreds of billions of dollars of revenue. This expansion will demand orders of magnitude more compute, with every major OEM and service provider developing on NVIDIA's platform.
實體人工智慧已經到來,並已在 2026 財年為 NVIDIA 貢獻超過 60 億美元的收入。機器人計程車服務正呈指數級增長,Waymo、特斯拉、Uber、WeRide 和 Zoox 等眾多公司的商業車隊預計將從 2025 年的數千輛汽車擴展到未來十年的數百萬輛,從而創造一個有望產生數千億美元收入的市場。隨著各大 OEM 廠商和服務供應商都在 NVIDIA 平台上進行開發,此次擴張將需要高出數個數量級的運算能力。
We continue to advance robotics development with the new NVIDIA Cosmos and Isaac GR00T open models and frameworks, and NVIDIA-powered robots and autonomous machines for leading companies including Boston Dynamics, Caterpillar, Franka Robotics, LG Electronics, and Neura Robotics. To accelerate industrial physical AI adoption, we also announced new and expanding partnerships with Dassault Systèmes, Siemens, and Synopsys to bring NVIDIA AI infrastructure, Omniverse digital twins, world models, and CUDA-X libraries to millions of researchers, designers, and engineers building the world's industries.
我們持續推進機器人技術的發展,推出了全新的 NVIDIA Cosmos 和 Isaac GR00T 開放模型與框架,並為 Boston Dynamics、Caterpillar、Franka Robotics、LG Electronics 和 Neura Robotics 等領先公司提供由 NVIDIA 驅動的機器人和自主機器。為了加速工業實體 AI 的應用,我們也宣布與 Dassault Systèmes、Siemens 和 Synopsys 建立新的、不斷擴大的合作夥伴關係,將 NVIDIA AI 基礎設施、Omniverse 數位孿生、世界模型和 CUDA-X 庫帶給數百萬名建構世界各行各業的研究人員、設計師和工程師。
Let's move to the rest of the P&L. GAAP gross margin was 75% and non-GAAP gross margin was 75.2%, increasing sequentially as Blackwell continued to ramp. GAAP operating expenses were up 16% sequentially and up 21% on a non-GAAP basis related to new product introductions and compute and infrastructure costs.
接下來我們來看損益表的其餘部分。GAAP毛利率為75%,非GAAP毛利率為75.2%,隨著Blackwell持續擴張,毛利率較上季成長。GAAP 營運費用較上季成長 16%,非 GAAP 營運費用較上季成長 21%,主要與新產品推出以及運算和基礎設施成本有關。
Non-GAAP effective tax rate for the fourth quarter was 15.4%, below our outlook for the quarter, primarily due to the impact of a one-time tax benefit. Inventory grew 8% quarter over quarter, while purchase commitments also increased significantly, as we have strategically secured inventory and capacity to meet demand beyond the next several quarters. This is further out in time than usual and reflects the longer demand visibility we have.
第四季非 GAAP 有效稅率為 15.4%,低於我們對該季度的預期,主要是由於一次性稅收優惠的影響。庫存季增 8%,採購承諾也大幅增加,因為我們已策略性地鎖定庫存和產能,以滿足未來數個季度乃至更久之後的需求。這比往常的時間跨度更長,反映了我們擁有更長的需求能見度。
While we expect tightness in the supply for our advanced architectures to persist, we remain confident in our ability to capitalize on the growth opportunity ahead with our scale, expansive supply chain, and the long-standing partnerships continuing to serve us well.
儘管我們預計先進架構的供應緊張狀況仍將持續,但我們仍然有信心憑藉我們的規模、廣泛的供應鏈以及長期以來為我們帶來良好效益的合作夥伴關係,抓住未來的成長機會。
We generated free cash flow of $35 billion in Q4 and $97 billion in fiscal year 2026. For the year, we returned $41 billion, or 43% of free cash flow, to our shareholders in the form of share repurchases and dividends.
我們在第四季創造了 350 億美元的自由現金流,2026 財年全年則達到 970 億美元。本財年,我們以股票回購和股利的形式,向股東返還了 410 億美元,即自由現金流的 43%。
We continue to invest in our technology and our ecosystem to cultivate market development, drive long-term growth, and ultimately yield total shareholder returns superior to the market or a peer group. Importantly, we will continue to run a strategic and disciplined process as it relates to our investments and we remain committed to returning capital to our shareholders.
我們將繼續投資於我們的技術和生態系統,以促進市場發展,推動長期成長,並最終為股東帶來優於市場或同業集團的總回報。重要的是,我們將繼續在投資方面實行策略性和紀律性的工作流程,並繼續致力於為股東帶來資本回報。
Let me turn to the outlook for the first quarter. Starting this quarter, we will be including stock-based compensation expense in our non-GAAP results. Stock-based compensation is a foundational component of our compensation program to attract and retain world-class talent.
接下來,我將展望一下第一季的情況。從本季開始,我們將把股票薪酬費用計入非 GAAP 財務業績中。股票薪酬是我們薪酬體系中吸引和留住世界級人才的基礎組成部分。
Let me first start with revenue. Total revenue is expected to be $78 billion, plus or minus 2%. We expect most of our growth to be driven by Data Center. Consistent with last quarter, we are not assuming any Data Center compute revenue from China in our outlook. GAAP and non-GAAP gross margins are expected to be 74.9% and 75%, respectively, plus or minus 50 basis points. For the full year, we continue to see gross margins in the mid-70s. We will keep you updated on our progress as we prepare for the Vera Rubin transition.
首先,我想從收入方面談起。預計總收入為 780 億美元,上下浮動 2%。我們預計大部分成長將由資料中心驅動。與上季一致,我們的展望中沒有假設來自中國的資料中心計算收入。GAAP 和非 GAAP 毛利率預計分別為 74.9% 和 75%,上下浮動 50 個基點。全年來看,我們仍預期毛利率維持在 75% 左右(mid-70s)。在為 Vera Rubin 過渡做準備的過程中,我們將隨時向您通報最新進展。
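The outlook above implies explicit ranges via simple arithmetic; this sketch uses only the figures stated in the outlook ($78 billion plus or minus 2%, and 75.0% non-GAAP gross margin plus or minus 50 basis points):

```python
# Translate the Q1 outlook into explicit ranges.
# Inputs are the figures stated in the outlook; nothing else is assumed.

revenue_mid = 78e9                                   # $78 billion midpoint
rev_low, rev_high = revenue_mid * 0.98, revenue_mid * 1.02   # +/- 2%

gm_mid_bps = 7500                                    # 75.0% in basis points
gm_low = (gm_mid_bps - 50) / 100                     # -50 bps
gm_high = (gm_mid_bps + 50) / 100                    # +50 bps

print(f"revenue range: ${rev_low / 1e9:.2f}B to ${rev_high / 1e9:.2f}B")
print(f"non-GAAP gross margin range: {gm_low:.1f}% to {gm_high:.1f}%")
```

The plus-or-minus conventions here are the standard ones for guidance: a percentage band around the revenue midpoint, and a basis-point band around the margin midpoint.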
GAAP and non-GAAP operating expenses are expected to be approximately $7.7 billion and $7.5 billion, respectively, including stock-based compensation expense of $1.9 billion. For the full year, we expect non-GAAP operating expenses to grow in the low-40s on a year-over-year basis as we continue to invest in our expanding opportunity set.
GAAP 和非 GAAP 營運費用預計分別約為 77 億美元和 75 億美元,其中包括 19 億美元的股票薪酬費用。全年來看,隨著我們持續投資於不斷擴大的業務機會,預計非 GAAP 營運費用的年增幅將在 40% 出頭(low-40s)。
For the full fiscal year 2027, we expect GAAP and non-GAAP tax rates to be between 17% and 19%, excluding any discrete items and material changes to our tax environment.
對於 2027 財年全年,我們預計 GAAP 和非 GAAP 稅率將在 17% 至 19% 之間,不包括任何特殊項目和我們稅務環境的重大變更。
With that, let me turn the call over to Jensen. I think he has a few words for us.
那麼,現在讓我把電話交給詹森。我想他有幾句話要對我們說。
Jensen Huang - President and Chief Executive Officer
Jensen Huang - President and Chief Executive Officer
This quarter, we significantly deepened and expanded our partnerships with leading frontier model makers. We recently celebrated OpenAI's launch of GPT-5.3-Codex, trained with and inferencing on Grace Blackwell NVLink72 systems. GPT-5.3-Codex can take on long-running tasks that involve research, tool use, and complex execution. 5.3-Codex is deployed broadly inside NVIDIA; our engineers love it.
本季度,我們顯著深化並擴大了與領先的前沿模型製造商的合作關係。我們最近慶祝了 OpenAI 發布 GPT-5.3-Codex,該版本使用 Grace Blackwell NVLink72 系統進行訓練和推理。GPT-5.3-Codex 可以承擔涉及研究、工具使用和複雜執行的長時間運行任務。5.3-Codex 在 NVIDIA 內部廣泛部署;我們的工程師非常喜歡它。
We continue to work with OpenAI toward a partnership agreement and believe we are close. We are thrilled with our ongoing partnership with OpenAI, a once-in-a-generation company we've had the pleasure of partnering with since their first days.
我們正與 OpenAI 繼續努力達成合作協議,並相信已接近達成。我們很高興能與 OpenAI 保持持續的合作關係,這是一家一代人難得一遇的公司,我們有幸從他們成立之初就與之合作。
Meta Super Intelligence Labs is scaling up at lightning speed. Last week, we announced that Meta is deploying millions of Blackwells and Rubin GPUs, NVIDIA CPUs, and Spectrum-X Ethernet for training and inference.
Meta 超級智慧實驗室正以閃電般的速度擴大規模。上週,我們宣布 Meta 正在部署數百萬個 Blackwell 和 Rubin GPU、NVIDIA CPU 和 Spectrum-X 乙太網路用於訓練和推理。
This quarter, we announced a partnership with Anthropic and a $10 billion investment in their company. Anthropic will train and run inference on Grace Blackwell and Vera Rubin systems. Anthropic's Claude Cowork agent platform is revolutionary and has opened the floodgates for enterprise AI adoption. Between Claude Cowork and OpenClaw, compute demand is skyrocketing, and the ChatGPT moment of agentic AI has arrived. With partnerships spanning Anthropic, Meta, OpenAI, and xAI, with NVIDIA deployed across every cloud, and with our ability to build full-stack AI infrastructure from the ground up or support it in the cloud, we are uniquely positioned to partner with frontier model builders at every stage: training, inference, and AI factory scale-out.
本季度,我們宣布與 Anthropic 建立合作夥伴關係,並向該公司投資 100 億美元。Anthropic 將在 Grace Blackwell 和 Vera Rubin 系統上進行訓練和推理。Anthropic 的 Claude Cowork 代理平台具有革命性意義,為企業採用人工智慧打開了閘門。在 Claude Cowork 和 OpenClaw 的推動下,運算需求正在飆升,智慧體 AI 的 ChatGPT 時刻已經到來。憑藉與 Anthropic、Meta、OpenAI 和 xAI 的合作關係、NVIDIA 在各大雲端平台的部署,以及我們從零開始建構全端 AI 基礎設施或在雲端提供支援的能力,我們擁有獨特的優勢,可以在訓練、推理和 AI 工廠擴展的各個階段與前沿模型建構者合作。
Finally, we recently entered into a non-exclusive licensing agreement with Groq for its low-latency inference technology, and welcomed a team of brilliant engineers to NVIDIA. As we did with Mellanox, we will extend NVIDIA's architecture with Groq's innovations to enable new levels of AI infrastructure performance and value. We look forward to sharing more at GTC next month. Okay, back to you.
最後,我們最近與 Groq 就其低延遲推理技術達成了一項非獨家許可協議,並歡迎一支傑出的工程師團隊加入 NVIDIA。就像我們與 Mellanox 合作一樣,我們將利用 Groq 的創新技術擴展 NVIDIA 的架構,從而實現更高水準的 AI 基礎設施效能和價值。我們期待下個月在GTC大會上分享更多資訊。好了,輪到你了。
Toshiya Hari - Vice President of Investor Relations & Strategic Finance
Toshiya Hari - Vice President of Investor Relations & Strategic Finance
We will now transition to Q&A. Operator, please poll for questions.
接下來進入問答環節。接線生,請開始徵集提問。
Operator
Operator
(Operator Instructions) Vivek Arya, Bank of America Securities.
(操作員說明)Vivek Arya,美國銀行證券。
Vivek Arya - Analyst
Vivek Arya - Analyst
I think you mentioned that you now have growth visibility into calendar '27 also, and I think your purchase commitments kind of reflect that confidence. But Jensen, I'm curious, you know, when you look at your top cloud customers, Cloud CapEx close to $700 billion this year, many investors are concerned that it would be harder for this level to grow into next year. And for several of them, their cash flow generation capability is also getting compressed.
我想你提到過,你現在對 2027 年的成長也有了清晰的認識,我認為你的採購承諾也反映了這種信心。但 Jensen,我很好奇,你知道,當你看看你們最大的雲端客戶,今年的雲端資本支出接近 7,000 億美元時,許多投資者擔心明年這個水準很難再繼續成長。而且,其中一些公司的現金流產生能力也正在受到擠壓。
So I know you're very confident about your roadmap, right, and your purchase commitments and whatnot. But how confident are you about your customers' ability to continue to grow their CapEx? And if their CapEx doesn't grow, can NVIDIA still find a way to grow in that envelope?
我知道你對你的發展路線圖、採購承諾等等都非常有信心,對吧?但您對客戶持續增加資本支出的能力有多大信心?如果他們的資本支出沒有成長,NVIDIA 還能在現有預算範圍內找到成長的方法嗎?
Jensen Huang - President and Chief Executive Officer
Jensen Huang - President and Chief Executive Officer
I am confident in their cash flow growing. And the reason for that is very simple. We have now seen the inflection of agentic AI and the usefulness of agents across the world and enterprises everywhere. You're seeing incredible compute demand because of it.
我對他們的現金流成長充滿信心。原因很簡單。我們現在已經看到了智慧體人工智慧的轉變,以及智慧體在世界各地和各行各業的實用性。因此,你會看到運算需求出現驚人的成長。
In this new world of AI, compute is revenue. Without compute, there's no way to generate tokens. Without tokens, there's no way to grow revenues. So in this new world of AI, compute equals revenues. And I am certain that at this point, with the productive use of Codex and Claude Code and the excitement around Claude Cowork and just the incredible enthusiasm about OpenClaw and the enterprise versions of them, all of the enterprise ISVs who are now working on agentic systems on top of their tools platforms, I'm certain at this point that we are at the inflection point -- we've reached the inflection point and we're generating profitable tokens that are productive for customers and profitable for the cloud service providers.
在人工智慧的新世界裡,運算能力就是收入。如果沒有運算能力,就無法產生代幣。如果沒有代幣,就無法增加收入。因此,在人工智慧的新世界裡,運算能力等於收入。我確信,隨著 Codex 和 Claude Code 投入實際生產、圍繞 Claude Cowork 的熱潮,以及人們對 OpenClaw 及其企業版的驚人熱情,加上所有企業級獨立軟體開發商 (ISV) 都在其工具平台之上開發智慧體系統,我確信我們已經到達轉折點,並且正在產生對客戶具生產力、對雲端服務供應商有利可圖的代幣。
And so the simple logic of it, the simple way to think about it is computing has changed. What used to be software running on computers -- modest amount of computers, you know, call it $300 billion or $400 billion worth of CapEx each year, has now gone into AI. And AI, in order to have -- in order to generate tokens, you need compute capacity. And that translates directly to growth and that translates directly to revenues.
因此,其簡單的邏輯,或者說簡單的思考方式是:電腦技術已經改變了。過去運行在計算機上的軟體——你知道,數量不多的計算機,每年資本支出大約為 3000 億或 4000 億美元——現在已經發展成人工智能了。而人工智慧,為了生成代幣,需要運算能力。而這會直接轉化為成長,進而直接轉化為收入。
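The compute-equals-revenue chain Jensen describes above can be expressed as a toy model; every fleet size, token rate, utilization figure, and price below is a hypothetical assumption for illustration only:

```python
# Toy model of the compute -> tokens -> revenue chain described above.
# All rates and prices are hypothetical assumptions, purely illustrative.

def annual_token_revenue(num_gpus, tokens_per_gpu_per_sec, utilization, price_per_m_tokens):
    """Revenue generated by a GPU fleet serving inference for one year."""
    seconds_per_year = 3600 * 24 * 365
    tokens = num_gpus * tokens_per_gpu_per_sec * utilization * seconds_per_year
    return tokens / 1e6 * price_per_m_tokens

# At a fixed token price, doubling deployed compute doubles token output,
# and therefore doubles revenue: compute capacity is the binding input.
r1 = annual_token_revenue(10_000, tokens_per_gpu_per_sec=500,
                          utilization=0.6, price_per_m_tokens=1.0)
r2 = annual_token_revenue(20_000, tokens_per_gpu_per_sec=500,
                          utilization=0.6, price_per_m_tokens=1.0)
print(f"revenue scales with compute: {r2 / r1:.1f}x")
```

The point of the sketch is only the proportionality: with demand for tokens outstripping supply, revenue in this model is bounded by deployed compute rather than by price or demand.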
Operator
Operator
Joe Moore, Morgan Stanley.
喬摩爾,摩根士丹利。
Joseph Moore - Analyst
Joseph Moore - Analyst
Great, thank you, and congratulations on the numbers. You talked about some of the strategic investments that you've made into Anthropic and potentially OpenAI, CoreWeave as well, but also partners, Intel, Nokia, Synopsys. You know, you're clearly at the center of everything. Can you talk about the role of those investments and kind of how do you view the balance sheet as a tool to kind of grow the NVIDIA's position in the ecosystem and participate in that growth?
太好了,謝謝,也恭喜你們取得了這樣的成績。您談到了您對 Anthropic 以及可能還有 OpenAI、CoreWeave 的一些策略投資,以及對合作夥伴 Intel、Nokia、Synopsys 的投資。你知道,你顯然是這一切的核心。您能否談談這些投資的作用,以及您如何看待資產負債表作為提升英偉達在生態系統中的地位並參與這一增長的工具?
Jensen Huang - President and Chief Executive Officer
Jensen Huang - President and Chief Executive Officer
As you know, fundamentally, at the core of everything NVIDIA is our ecosystem. That's what everybody loves about our business, the richness of our ecosystem. Just about every startup in the world is working on NVIDIA's platform. We're in every cloud. We're in every on-prem data center. We're all over the world's edge and robotic systems. Thousands of AI natives are built on top of NVIDIA.
如您所知,從根本上講,NVIDIA 一切的核心是我們的生態系統。這正是大家喜歡我們業務的原因:我們生態系統的豐富性。世界上幾乎所有新創公司都在 NVIDIA 平台上開發。我們身處每個雲端。我們進駐每個本地資料中心。我們的業務遍及全球的邊緣運算和機器人系統。數千家 AI 原生公司都建構在 NVIDIA 之上。
We want to take the great opportunity that we have as we're in the beginning of this new computing era, this new computing platform shift, to put everybody on NVIDIA. Everything is already built on CUDA, and so we're starting from a really terrific starting point. But as we build out the entire AI ecosystem, whether it's in AI for language, or physical AI, or AI physics, or biology, or robotics, or manufacturing, we want all of these ecosystems to be built on top of NVIDIA. And this is such a wonderful opportunity for us to invest into the ecosystem across the entire stack.
我們希望抓住這個絕佳的機會,在這個新的運算時代、新的運算平台變革的開端,讓每個人都使用 NVIDIA 的產品。一切都已基於 CUDA 構建,所以我們從一個非常好的起點出發。但是,隨著我們建構整個人工智慧生態系統,無論是語言人工智慧、物理人工智慧、人工智慧物理學、生物學、機器人技術或製造業,我們都希望所有這些生態系統都能建立在 NVIDIA 的基礎上。這對我們來說是一個絕佳的機會,可以投資整個技術堆疊的生態系統。
Our ecosystem is also richer today than it used to be. We used to be largely a computing platform on GPUs, but now we're a computing AI infrastructure company, and we have computing platforms on, well, every aspect of that. And everything from computing to AI models to networking, to our DPU, all of that has computing stacks on top of it.
如今我們的生態系統也比以往更豐富。我們過去主要是一個基於 GPU 的運算平台,但現在我們是一家 AI 運算基礎設施公司,在其中每個層面都擁有運算平台。從運算到 AI 模型、網路,再到我們的 DPU(資料處理器),所有這些之上都建有運算堆疊。
And as I mentioned before, whether it's in enterprise or in manufacturing, industrial or science or robotics, each one of these ecosystems has a different stack, and we want to make sure that we continue to invest into our ecosystem. So our investments are focused very squarely, strategically on expanding and deepening our ecosystem reach.
正如我之前提到的,無論是在企業、製造業、工業、科學或機器人領域,每個生態系統都有不同的技術棧,我們希望確保繼續投資於我們的生態系統。因此,我們的投資非常明確地、策略性地集中在擴大和深化我們的生態系統覆蓋範圍。
Operator
Operator
Harlan Sur, JPMorgan.
哈蘭‧蘇爾,摩根大通。
Harlan Sur - Analyst
Harlan Sur - Analyst
Networking continues to rise as a percentage of your overall data center profile, right? Through fiscal '26, your networking revenues accelerated on a year-over-year basis every single quarter, right? With 3.6x growth, as you guys mentioned, year-over-year growth in Q4, obviously on the strength of your scale-up and scale-out networking product portfolios.
網路在資料中心整體配置中所佔的比例持續上升,對嗎?在 2026 財年,您的網路業務收入每季都實現了同比增長,對嗎?正如你們所提到的,第四季同比增長了 3.6 倍,這顯然得益於你們的縱向擴展和橫向擴展網路產品組合的強勁表現。
I seem to remember that in the first half of last year, your annualized run rate on your Spectrum-X Ethernet switching platform was around $10 billion. It looks like that may have stepped up to around $11 billion to $12 billion in the second half of last year.
我記得去年上半年,你們Spectrum-X乙太網路交換平台的年化運作率約為100億美元。看起來去年下半年這個數字可能已經上升到 110 億美元到 120 億美元左右。
Jensen, looking at your order book, especially with Spectrum-XGS, upcoming 102T Spectrum-6 switching platforms launching soon, where is the Spectrum runway trending now and as you foresee exiting this calendar year?
Jensen,看看你的訂單簿,特別是Spectrum-XGS,以及即將推出的102T Spectrum-6交換平台,Spectrum目前的市場趨勢如何?你預計今年底前Spectrum的發展情況如何?
Jensen Huang - President and Chief Executive Officer
Jensen Huang - President and Chief Executive Officer
As you know, we see ourselves as an AI infrastructure company, and the AI computing infrastructure includes CPUs, GPUs, and we invented NVLink to scale up one computing node into a giant computing rack. We invented the idea of a rack-scale computer. We don't ship nodes of computers, we ship racks of computers.
如您所知,我們把自己定位為一家人工智慧基礎設施公司,而人工智慧運算基礎設施包括 CPU、GPU,我們發明了 NVLink,可以將一個運算節點擴展到一個巨大的電腦架。我們發明了機架式計算機的概念。我們不單獨運送電腦節點,我們運送的是電腦機架。
And that NVLink switch scale-up system is then scaled out using Spectrum-X and InfiniBand. We support both. And then further, we also scale across data centers using Spectrum-X scale-across. And so the way we think about networking is really an extension. We offer everything openly so that people could decide to mix and match in different scale and however they would like to integrate it into their bespoke data center. But in the final analysis, it's all one big part of our platform.
然後,使用 Spectrum-X 和 InfiniBand 對 NVLink 交換器擴展系統進行橫向擴展。我們兩者都支持。此外,我們也使用 Spectrum-X 跨資料中心擴充功能,實現跨資料中心擴充。因此,我們對人脈的理解其實是一種延伸。我們公開提供所有服務,以便人們可以根據自己的需要,以不同的規模進行組合搭配,並將其整合到他們客製化的資料中心中。但歸根結底,這一切都是我們平台的重要組成部分。
And the invention of NVLink really turbocharged our networking business. Every rack comes with nine switch trays, and each one of them has two chips in it. In the future, they'll have more. And so the amount of switching that we do per rack is really quite incredible.
NVLink 的發明確實極大地推動了我們的網路業務發展。每個機架配備 9 個交換器節點。它們每個裡面都有兩塊晶片。未來,他們還會擁有更多。因此,我們每個機架的切換量確實非常驚人。
We're also now the largest networking company in the world. And if you look at Ethernet, we came into the Ethernet switching market only a couple of years ago. I think that we're probably the largest Ethernet networking company in the world today, and if not, we surely will be soon. And so Spectrum-X Ethernet has been a home run for us.
我們現在也是全球最大的網路公司。再看看以太網,我們大約兩年前進入了以太網交換市場。我認為我們目前可能是世界上最大的乙太網路公司,而且肯定很快就會成為最大的乙太網路公司。因此,Spectrum-X乙太網路對我們來說是一項非常成功的選擇。
But, you know, we're open to however people want to do networking. Some people just really love the low latency and the scale-up capability of InfiniBand, and we will continue to support that, of course. And some people love to integrate the networking across their data center based on Ethernet. So we created a capability that extends Ethernet for artificial intelligence, a new way of processing in the data center, and we're incredibly good at that. Our Spectrum-X performance really shows it.
但是,你知道,我們對人們進行人脈拓展的任何方式都持開放態度。有些人非常喜歡 InfiniBand 的低延遲和可擴展性,我們當然會繼續支持這一點。有些人喜歡基於乙太網路整合資料中心的網路。我們開發了一種乙太網路功能,利用人工智慧擴展以太網,這是一種資料中心處理方式,我們在這方面非常擅長。我們的 Spectrum-X 性能也充分證明了這一點。
When you build a $10 billion or $20 billion AI factory, a difference of 10%, and it could easily be 20%, in the effectiveness and utilization of your data center network translates to real money. And so NVIDIA's networking business is really, really growing fast. And I think it's simply because we built the AI infrastructure so effectively, and the AI infrastructure business is growing incredibly fast.
當你建造一座價值 100 億美元或 200 億美元的 AI 工廠時,資料中心網路的效率和利用率可能會提高 10%,甚至很容易達到 20%,這會帶來實實在在的收益。因此,英偉達的網路業務發展速度非常非常快。我認為這是因為我們已經非常有效地建構了人工智慧基礎設施,而且企業的人工智慧基礎設施正在以驚人的速度成長。
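To make that arithmetic concrete, here is a minimal sketch of how a 10% to 20% difference in network effectiveness on a $10 billion to $20 billion AI factory translates to dollars. All figures are hypothetical examples drawn from the remark above, not NVIDIA disclosures:

```python
# Illustrative only: dollar value of network-driven utilization gains on an AI factory.
def utilization_value(factory_cost_usd: float, utilization_gain: float) -> float:
    """Infrastructure value effectively recovered by better networking (hypothetical)."""
    return factory_cost_usd * utilization_gain

value_low = utilization_value(10e9, 0.10)   # 10% gain on a $10B factory
value_high = utilization_value(20e9, 0.20)  # 20% gain on a $20B factory
print(f"${value_low / 1e9:.0f}B to ${value_high / 1e9:.0f}B of effective capacity")
```

Under these assumptions, better networking is worth on the order of $1 billion to $4 billion per factory, which is the "real money" referred to above.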
Operator
Operator
CJ Muse, Cantor Fitzgerald.
CJ Muse,坎托·菲茨杰拉德。
C.J. Muse - Analyst
C.J. Muse - Analyst
I guess with CPX for large context windows and Groq likely adding a decode-specific solution, I'm curious how we should think about your future roadmap. Should we be thinking about customized silicon, either by workload or by customer, as an increasing focus for NVIDIA, particularly helped by your move to a chiplet architecture?
我猜想,隨著 CPX 針對長上下文視窗,以及 Groq 可能增加專門針對解碼的解決方案,我很想知道我們應該如何看待你們未來的路線圖?我們是否應該認為,按工作負載或按客戶客製化晶片將成為 NVIDIA 日益關注的重點,特別是在你們轉向小晶片(chiplet)架構的幫助下?
Jensen Huang - President and Chief Executive Officer
Jensen Huang - President and Chief Executive Officer
We don't use -- we want to -- everybody should want to extend and push out chiplets as long as they can. And the reason for that is because every time you cross a chiplet boundary, you have to cross an interface. Every time you cross an interface, you add latency, you add power unnecessarily. We're not allergic to chiplets, we use chiplets already, but we try to use chiplets only when we absolutely have no choice but to do so.
我們不使用——我們想要——每個人都應該希望盡可能地推遲使用小晶片(chiplet)。原因在於,每次跨越小晶片邊界,就必須跨越一個介面。每次跨越介面,都會增加延遲,也會不必要地增加功耗。我們並不排斥小晶片,我們已經在使用小晶片了,但我們盡量只在萬不得已的情況下才使用。
And so if you look at the Grace Blackwell architecture and the Rubin architecture, we use two giant reticle-limited dies and we abut them, and that reduces the amount of die-to-die crossing. The chiplet tax shows up in the architecture effectiveness of our competitors. If you look at NVIDIA, people call it our software advantage, but where software starts and ends and where architecture starts and ends is kind of hard to tell.
因此,如果你看看 Grace Blackwell 架構和 Rubin 架構,我們使用了兩個巨大的、達到光罩極限的裸晶,並將它們並排放置,這減少了裸晶之間的跨越次數。小晶片稅則體現在競爭對手架構的有效性上。如果你看看英偉達,人們稱之為我們的軟體優勢,但軟體從哪裡開始到哪裡結束、架構從哪裡開始到哪裡結束,其實很難說清楚。
It's -- you know, our software is effective because our architecture is so good. And so the CUDA architecture is unquestionably more effective, more efficient, delivers more performance per flop, per watt than any computing architecture out there. And it's because of the way we architect.
你知道,我們的軟體之所以有效,是因為我們的架構非常出色。因此,CUDA 架構無疑比任何其他運算架構都更有效、更有效率,每浮點運算、每瓦特的效能都更高。這是因為我們的設計方式。
With respect to how we think about Groq and the low latency decoder, I've got some great ideas that I'd like to share with you at GTC. But the simple idea is that our infrastructure is incredibly versatile because of CUDA, and we're going to continue to do that.
關於我們如何看待 Groq 和低延遲解碼器,我有一些很棒的想法,想在 GTC 上與大家分享。但簡單來說,由於 CUDA 的存在,我們的基礎設施非常靈活,我們將繼續這樣做。
All of our GPUs are architecturally compatible, which means that when I'm working on optimizing models today for Blackwell, all of that work and all that dedication to optimizing software stacks and new models also benefit Hopper and also benefit Ampere. It's the reason why A100 continues to feel fresh and continues to stay performant years after we've deployed it into the world.
我們所有的 GPU 在架構上都是相容的,這意味著當我今天為 Blackwell 優化模型時,所有這些工作以及對優化軟體堆疊和新模型的投入,也同樣會使 Hopper 和 Ampere 受益。這就是為什麼 A100 在部署到世界各地多年後,仍然感覺新穎且性能卓越的原因。
Architecture compatibility allows us to do that. It allows us to invest enormously in software engineering and optimization, knowing that our entire install base, in the cloud, on-prem, everywhere, across generations of GPU architectures, will all benefit. And so we'll continue to do that. It allows us to extend the useful life of the install base, and it gives us innovation, flexibility, and velocity, which translates to performance, and very importantly, to performance per dollar and performance per watt for our customers.
架構相容性使我們能夠做到這一點。這使我們能夠對軟體工程和最佳化進行大量投資,因為我們知道,我們在雲端、本地以及任何地方的所有安裝基礎,從幾代 GPU 架構到所有架構,都將受益。因此,我們將繼續這樣做,這使我們能夠延長產品的使用壽命,使我們能夠進行創新、靈活和快速地改進,從而提高性能,更重要的是,提高每美元的性能和每瓦的性能,從而為我們的客戶帶來更好的產品。
And so as for what we'll do with Groq, you'll have to come see at GTC. But what we'll do is extend our architecture with Groq as an accelerator, in very much the way that we extended NVIDIA's architecture with Mellanox.
因此,我們將對 Groq 進行改進,您將在 GTC 上看到我們的改進,我們將以 Groq 作為加速器來擴展我們的架構,就像我們用 Mellanox 擴展 NVIDIA 的架構一樣。
Operator
Operator
Stacy Rasgon, Bernstein Research.
Stacy Rasgon,伯恩斯坦研究公司。
Stacy Rasgon - Analyst
Stacy Rasgon - Analyst
Colette, I wanted to dig a little bit into the call for sequential growth through the year. I mean, you grew this quarter more than $10 billion sequentially in Data Center, and the guide seems to imply, you know, that the bulk of another roughly $10 billion sequential increase is in Data Center. How do you see that trending as we go through the year, especially as Rubin ramps into the back half?
科萊特,我想深入探討全年持續成長的需求。我的意思是,本季資料中心業務較上季成長超過 100 億美元,而且該指南似乎暗示,你知道,這 100 億美元環比成長的大部分來自資料中心。隨著賽季的進行,尤其是魯賓在下半季逐漸進入狀態,您如何看待這一趨勢?
Blackwell has been a pretty massive acceleration for sequential growth. Should we expect something similar as we get to Rubin?
Blackwell 的業績實現了相當大的成長。我們是否應該預期在魯賓那裡也會遇到類似的情況?
And then I was also just hoping you could comment on your expectations for gaming. I understand the memory issues and everything else. Do you think gaming can still grow year over year in fiscal '27? Or will that be under more pressure given memory? So those two questions, please.
另外,我還希望您能談談您對遊戲的期望。我理解記憶體問題以及其他所有問題。你認為遊戲產業在2027財年還能維持年增嗎?或者,考慮到記憶力,它會面臨更大的壓力嗎?那麼,請回答這兩個問題。
Colette Kress - Executive Vice President, Chief Financial Officer
Colette Kress - Executive Vice President, Chief Financial Officer
Thank you. Thanks, Stacy. Let me start with the revenue going forward. Again, we're trying to look at revenue quarter by quarter. As you think about the full year, we are absolutely going to still be selling and providing Blackwell, probably at the same time that we're also seeing Vera Rubin come to market. It's a very great architecture that customers can stand up quickly, and many different orders across the different customers have already been planned to provide that.
謝謝。謝謝你,史黛西。首先,我想談談未來的收入狀況。我們再次嘗試逐季度查看收入狀況。展望全年,我們肯定會繼續銷售和供應 Blackwell 產品,同時 Vera Rubin 也將進入市場。這是一個非常棒的架構,它幫助他們今天迅速站穩腳跟,並且已經計劃為來自不同客戶的許多不同訂單提供服務。
It's still too early to determine how much of that beginning Vera Rubin ramp will land in the second half, and then we'll get through it. But there's no confusion in terms of the strong demand and the interest. We do expect pretty much every single customer to be purchasing Vera Rubin. The question is how soon we are in market and how soon they are able to stand that up in their data centers. That was your first part.
現在判斷維拉·魯賓(Vera Rubin)的起步坡度在下半場能有多大起步還為時過早,然後我們再慢慢來。但強勁的需求和濃厚的興趣毋庸置疑。我們預計幾乎所有顧客都會購買 Vera Rubin 產品。問題是,我們多久才能進入市場,以及他們多久才能在他們的資料中心建立起相應的設施。那是你的第一部分。
The second part was focusing on our gaming. As much as we would love to have additional supply, we do believe that for a couple of quarters it is going to be very tight. If things improve by the end of the year, there is an opportunity to think about what that means for year-over-year growth. But it's still too early for us to know at this time, and we'll get back to you as soon as we can.
第二部分主要關注我們的遊戲體驗。儘管我們非常希望能夠增加供應量,但我們認為未來幾季供應將會非常緊張。如果到年底情況有所好轉,就有機會思考一下年成長意味著什麼。但現在我們還無法確定,我們會盡快回覆您。
Operator
Operator
Atif Malik, Citi.
阿提夫‧馬利克,花旗銀行。
Atif Malik - Analyst
Atif Malik - Analyst
Jensen, I'm curious if you can touch on the importance of CUDA as now more of the investment dollars in AI are coming from inference workloads.
Jensen,我很想知道 CUDA 的重要性,因為現在人工智慧領域的投資越來越多來自推理工作負載。
Jensen Huang - President and Chief Executive Officer
Jensen Huang - President and Chief Executive Officer
Without CUDA, we wouldn't know what to do with inference. Take the entire stack from TensorRT-LLM, which we introduced a few years ago and which is still the most performant inference stack in the world: optimizing it for NVLink required us to discover and invent new parallelization algorithms that sit on top of CUDA to distribute the workload and the inferencing, to take advantage of the aggregate bandwidth across NVLink72.
如果沒有 CUDA,我們就不知道該如何處理推理。我們幾年前推出的 TensorRT LLM 的整個堆疊,仍然是世界上效能最高的推理堆疊,要針對 NVLink 進行最佳化,就需要我們發現和發明新的平行化演算法,這些演算法基於 CUDA,用於分配工作負載和推理,從而利用 NVLink72 的聚合頻寬。
NVLink72 has enabled us to deliver, generationally, 50 times more performance per watt. It's just an incredible lead. And it's sensible. NVLink72 is a great invention. It was hard to do. The creation of the switching technology, disaggregating the switches, building the system racks -- we did it all in plain sight, and everybody knew how hard it was for us to do.
NVLink72 使我們能夠在代際之間將每瓦性能提升 50 倍。這是令人難以置信的領先優勢。而且這是合理的。NVLink72 是一項偉大的發明。這很難做到。交換技術的創造、交換器的解構、系統機架的搭建,所有這些我們都是在眾目睽睽之下完成的,每個人都知道這對我們來說有多困難。
But the results are incredible. So performance per watt is 50 times; performance per dollar, 35 times, and so the leap in inference is incredible.
但結果令人難以置信。因此,每瓦性能提高了 50 倍;每美元性能提高了 35 倍,因此推理能力的飛躍是驚人的。
It's really important to realize that inference now equals revenues for our customers, because agents are generating so many tokens and the results are so effective. When the agents are coding, they're off generating thousands, tens of thousands, hundreds of thousands of tokens, because they're running for minutes to hours. And so these agentic systems are spawning off different agents working as a team.
真正重要的是要認識到,推理現在就等於我們客戶的收入,因為代理商正在產生大量的代幣,而且結果非常有效。當代理程式進行編碼時,它們會產生成千上萬、數萬、數十萬個數據,因為它們會持續運行幾分鐘到幾小時。因此,這些系統-這些智能體系統是由不同的智能體組成團隊共同協作而產生的。
The number of tokens being generated has really, really gone exponential. And so we need to inference at a much higher speed. And when you're inferencing at a much higher speed, and each one of those tokens is dollarized, it directly translates into revenues. And so inference performance equals revenues for our customers.
代幣的生成數量確實呈指數級增長。因此,我們需要以更高的速度進行推理。當你以更高的速度進行推理,並且每個代幣都以美元計價時,它就會直接轉化為收入。因此,推理表現直接關係到我們客戶的收入。
For the data centers, inference tokens per watt translates directly to the revenues of the CSPs. And the reason for that is because everybody is power limited. No matter how many data centers you have, each data center, you know, 100 megawatts or 1 gigawatt, has power limits. So the architecture with the best performance per watt wins, because each of those tokens per watt, each token, is dollarized.
對於資料中心而言,每瓦的推理代幣數直接轉化為 CSP 的收入。原因在於每個人的權力都是有限的。所以——我的意思是,無論你擁有多少個資料中心,每個資料中心,你知道,100兆瓦或1吉瓦,都有功率限制。因此,每瓦性能最佳的架構可以轉換,因為每個代幣——每瓦性能代幣,每個代幣都是美元化的。
Tokens per watt translates to dollars per watt, which at gigawatt scale translates directly to revenues. And so you can see that every CSP understands this now, every hyperscaler understands this: CapEx translates to compute, compute with the right architecture translates to maximizing revenues, and compute equals revenues.
每瓦代幣數等於每瓦美元數,每瓦美元數又等於每吉瓦的收入。因此,你可以看到,現在每個 CSP 都明白這一點,每個超大規模業者都明白這一點,資本支出轉化為運算能力,採用正確架構的運算能力轉化為收入最大化,運算能力等於收入。
Without investing in capacity today, without investing in compute, there cannot be revenue growth. And that, I think, everybody understands. Compute equals revenues. Choosing the right architecture is incredibly important. It's more than strategic now. It directly affects their earnings, and choosing the right architecture, the one with the best performance per watt, is literally everything.
今天如果不投資產能,不投資運算能力,就不可能實現收入成長。我想這一點大家都明白。計算結果等於收入。選擇合適的架構至關重要。現在這已經不只是戰略層面的問題了。它直接影響他們的收益,選擇合適的架構,也就是每瓦性能最佳的架構,至關重要。
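The tokens-per-watt arithmetic above can be sketched as follows. This is a hypothetical illustration: the power budget, throughput, and token price are invented numbers for the example, not disclosed figures:

```python
# Illustrative only: at a fixed power budget, revenue scales linearly with
# tokens per watt, because each generated token is "dollarized".
SECONDS_PER_YEAR = 365 * 24 * 3600

def annual_token_revenue(power_watts: float,
                         tokens_per_watt_sec: float,
                         usd_per_million_tokens: float) -> float:
    """Yearly revenue of a power-limited AI factory (all inputs hypothetical)."""
    tokens_per_year = power_watts * tokens_per_watt_sec * SECONDS_PER_YEAR
    return tokens_per_year / 1e6 * usd_per_million_tokens

# A 100 MW data center at 10 tokens/sec/watt and $0.10 per million tokens:
rev = annual_token_revenue(100e6, 10.0, 0.10)
# Doubling tokens per watt doubles revenue at the same power limit:
rev_2x = annual_token_revenue(100e6, 20.0, 0.10)
```

Since the power budget is the binding constraint, the only lever left in this model is tokens per watt, which is the point being made about architecture choice.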
Operator
Operator
Ben Reitzes, Melius Research.
本‧雷茨,梅利烏斯研究公司。
Ben Reitzes - Equity Analyst
Ben Reitzes - Equity Analyst
First, let me say kudos on including the stock comp in non-GAAP. I think that's a great move, but that isn't my question. My question's around gross margins and the long-term sustainability of the mid-70s. Should we read the visibility on supply being available into calendar '27 as meaning margins are sustainable until then?
首先,我要讚揚一下將股票補償納入非GAAP準則的做法。我認為這是一個很棒的舉措,但這不是我的問題。我的問題是關於毛利率以及70年代中期水準的長期可持續性。我們是否應該將供應可見性(直至 2027 年)解讀為這種可見性能夠持續到那時?
And then, Jensen, what about after that? Are there innovations in memory consumption you can unveil that make us feel better about the ability to keep margins at that level for a long time?
那麼,詹森,之後呢?記憶體消耗方面是否有創新技術可以公佈,讓我們更有信心長期維持目前的利潤率水準?
Jensen Huang - President and Chief Executive Officer
Jensen Huang - President and Chief Executive Officer
The single most important lever for our gross margins is actually delivering generational leaps to our customers. That is the single most important thing. If we can deliver, generation over generation, performance per watt that dramatically exceeds what Moore's Law can do, and performance per dollar that dramatically exceeds the price of our systems, then we can continue to sustain our gross margins. That's the simple, most important concept.
我們毛利率最重要的槓桿實際上是為我們的客戶帶來代際銷售線索。這是最重要的一件事。如果我們能夠實現每瓦性能遠超摩爾定律所能達到的水平,如果我們能夠實現每美元性能遠超系統成本的性能,那麼我們就能繼續維持我們的毛利率。這是最簡單、最重要的概念。
The reason why we're moving so fast is because, number one, the demand for tokens in the world, as a result of the inflection points that we've gone through, has gone completely exponential. I think we're all seeing that, to the point where even our six-year-old GPUs in the cloud are completely consumed and the pricing is going up.
我們發展如此迅速的原因在於,首先,由於我們經歷的轉折點,全球對代幣的需求現在已經呈指數級增長。我想我們都看到了這一點。甚至我們雲端使用了六年的 GPU 也完全被佔用,價格還在上漲。
And so we know that the amount of computation necessary for the modern way of doing software is growing exponentially. And so our strategy is to deliver an entire AI infrastructure every single year. This year, we introduced six new chips. Rubin, the next generation, will bring many new chips as well. And every single generation, we are committed to delivering many-X factors of performance per watt and performance per dollar.
因此我們知道,現代軟體開發方式所需的計算量正在呈指數級增長。因此,我們的策略是每年交付一套完整的AI基礎設施。今年,我們推出了六款新晶片。Rubin的下一代產品也將推出許多新的晶片。每一代產品,我們都致力於在每瓦性能和每美元性能方面實現許多卓越表現。
And that pace and our ability to do extreme co-design allows us to deliver that value and that benefit to the customers. And that is the single most vital thing as it relates to our value delivery.
正是這種速度以及我們進行深度協同設計的能力,使我們能夠為客戶帶來價值和利益。而這正是與我們價值交付息息相關的最關鍵的一點。
Operator
Operator
Antoine Chkaiban, New Street Research.
Antoine Chkaiban,新街研究公司。
Antoine Chkaiban - Analyst
Antoine Chkaiban - Analyst
I'd like to ask about space data centers, which some of your customers are considering. How feasible do you think that is, and on what kind of horizon? What do the economics look like today, and how do you think they could evolve over time?
我想諮詢太空資料中心的情況,你們的一些客戶正在考慮採用這種模式。你認為這有多可行?預期壽命有多長?那麼,當今的經濟情勢如何呢?你認為這種情況會如何隨著時間而演變?
Jensen Huang - President and Chief Executive Officer
Jensen Huang - President and Chief Executive Officer
Well, the economics are poor today, but they're going to improve over time. As you know, the way that space works is radically different from how it works down here. There's an abundance of energy, though the solar panels are large; fortunately, there's plenty of room in space. As for heat dissipation, it's cold in space, but there's no airflow, so the only way to dissipate heat is through radiation, and the radiators that you need to create are fairly large.
目前經濟形勢不佳,但隨著時間的推移會好轉。如你所知,太空的運作方式與地球上的運作方式截然不同。能源很豐富,但太陽能板體積龐大,太空空間很充足。熱散失,太空很冷。沒有空氣流通。因此,散熱的唯一途徑就是經由傳導。你需要製作的散熱器尺寸相當大。
Liquid cooling is obviously out of the question because it's heavy and freezes. And so the methods that we use here on Earth are a little different from the way we would do it in space. But there are many different computing problems that really want to be done in space. NVIDIA already has the world's first GPUs in space, Hoppers in space. And one of the best use cases of GPUs in space is imaging: being able to image at extremely high resolutions using, of course, optics and artificial intelligence; doing the computation of reprojection from different angles; up-resing and doing noise reduction; and being able to image at very high resolutions and extremely large scales, very, very fast.
液冷顯然不可行,因為它很重而且會結冰。因此,我們在地球上使用的方法與我們在太空中使用的方法略有不同。但有很多不同的計算問題確實需要在太空中解決。因此,NVIDIA 已經是世界上第一個進入太空的 GPU,也就是太空中的 Hoppers。 GPU 在太空中的最佳應用場景之一是成像,它能夠利用光學和人工智慧技術以極高的分辨率進行成像,並能夠計算不同角度的重投影,提高分辨率,進行降噪,從而能夠以非常大的分辨率、極大的尺度和非常快的速度進行成像。
It's hard to do that by sending petabytes and petabytes of imaging data back down to Earth and doing the work here. It's easier just to do it out in space, and then ignore all of the data collected and processed until you see something interesting. And so artificial intelligence in space will have very good, very interesting applications.
要透過向地球上發送數PB級的圖像資料並進行這項工作,是很難做到的。最簡單的辦法就是在太空中進行實驗,然後忽略收集和處理的所有數據,直到發現有趣的東西。因此,太空人工智慧將擁有非常好的、非常有趣的應用。
Operator
Operator
Mark Lipacis, Evercore ISI.
Mark Lipacis,Evercore ISI。
Mark Lipacis - Equity Analyst
Mark Lipacis - Equity Analyst
I want to pick up on the comment you made in the script about revenue diversification. I believe, Colette, you said that hyperscalers were over 50% of revenues, but growth was led by the rest of your Data Center customers. And as a clarification, I just want to make sure I understood that.
我想就您在劇本中關於收入多元化的評論做些補充。科萊特,我記得你說過超大規模資料中心客戶貢獻了超過 50% 的收入,但成長主要來自其他資料中心客戶。我再確認一下,確保我理解正確。
Does that imply your non-hyperscale customers grew faster? And if so, can you help us understand, what are the non-hyperscalers doing different? Are they doing different things than the hyperscalers or the same things on a different scale?
這是否意味著您的非超大規模客戶成長速度更快?如果是這樣,您能否幫助我們了解一下,非超大規模資料中心營運商有哪些不同的做法?它們和超大規模資料中心營運商做的事情是不同的,還是只是規模不同?
And does this -- do you expect this trend to continue? Do you expect your customer base to evolve to a point where non-hyperscalers become a bigger part of your -- the larger part of your business?
你認為這種趨勢會持續下去嗎?您是否預期您的客戶群會發展到非超大規模企業成為您業務中更大組成部分的地步?
Colette Kress - Executive Vice President, Chief Financial Officer
Colette Kress - Executive Vice President, Chief Financial Officer
Yes, let's see if we can help on this question. When you think about our top five, which we articulated as being our CSPs, our hyperscalers, they right now sit at about 50% of our total revenue. The rest, therefore, is a very diverse set of all the different other types of companies that we are working with: it goes through our AI model makers, through our enterprises, to supercomputing, to our sovereigns. There are a lot of other different factors in there.
是的,我們來看看能否幫上忙。所以,當我們想到我們排名前五名的客戶時,正如我們所說,他們是我們的 CSP(雲端服務提供者)、我們的超大規模資料中心供應商,他們目前約占我們總收入的 50%。因此,這是一個龐大的組織,涵蓋了我們正在合作的各種不同類型的公司,它透過我們的人工智慧模型製作者,透過我們的企業,透過超級計算,最終到達我們的主權國家。上面還有很多其他不同的事實。
But you are correct. It's a very fast-growing area as well. We have a strong position with all of the different cloud providers on our platform, and now we also see an extreme diversity of different customers all the way across the world. And we will really benefit from that diversity and from being able to serve all of those parts.
但你是對的。這是一個發展非常迅速的地區。我們在平台上與各種不同的雲端服務供應商都保持著強大的合作關係,現在我們的客戶也極為多元化,遍布世界各地。這將真正有利於看到這種多樣性,並能夠服務所有這些群體。
Let me see if Jensen wants to add a bit more.
我問詹森是否還想補充一點。
Jensen Huang - President and Chief Executive Officer
Jensen Huang - President and Chief Executive Officer
Yeah. This is one of the advantages that we have with our ecosystem, all built on top of CUDA. We're the only accelerated computing platform that is in every cloud, that's available through every single computer maker, available at the edge, and we're now cultivating telecommunications.
是的。這是我們的生態系統所具有的優勢之一,我們的生態系統完全建立在 CUDA 之上。我們是唯一一個在所有雲端平台上都可用、所有電腦製造商都可使用、可在邊緣端使用的加速運算平台,而且我們現在正在發展電信業務。
Obviously, the future radios will all be AI-driven radios, and the future wireless network will also be a computing platform. That is a foregone conclusion, but somebody has to go and invent the technologies to make that possible. And we created a platform called Aerial to go do that.
顯然,未來的無線電設備都將是人工智慧驅動的無線電設備,未來的無線網路也將是一個運算平台。這是必然的結果,但總得有人去發明實現這目標的技術。我們創建了一個名為 Aerial 的平台來實現這一點。
We're in just about every single robot, every single self-driving car. CUDA's ability to deliver, on the one hand, the performance benefit of specialized processors, with the Tensor Cores inside our GPUs, and on the other hand, the flexibility of CUDA, allows us to solve language problems, computer vision problems, robotics problems, biology problems, physics problems, and just about all kinds of AI and all kinds of computation algorithms. And so the diversity of our customer base is one of the greatest strengths that we have.
我們的技術幾乎應用在每一台機器人和每一輛自動駕駛汽車中。一方面,CUDA 能夠利用 GPU 內部的 Tensor 核心,發揮專用處理器的效能優勢;另一方面,CUDA 的靈活性使我們能夠解決語言問題、電腦視覺問題、機器人問題、生物學問題、物理學問題,以及幾乎所有類型的 AI 和所有類型的計算演算法。因此,我們客戶群的多樣性是我們最大的優勢之一。
The second thing, of course, is the ecosystem itself. Even if our processor were programmable, if we didn't cultivate our ecosystem, invest in our future ecosystem, and continue to enhance it, including some of the things we talked about doing today, it would be hard for us to grow beyond whatever design wins we capture in somebody else's ecosystem. Because of the platform we created, we can grow and expand our ecosystem very naturally.
當然,第二點是,如果沒有我們自己的生態系統,即使我們的處理器是可編程的,如果我們不培育自己的生態系統,不談論我們今天正在做的一些事情,不投資於我們未來的生態系統並繼續增強我們的生態系統,沒有我們自己的生態系統,我們就很難超越我們為別人的生態系統贏得的設計成果。因此,由於我們創建的平台,我們的生態系統得以非常自然地發展壯大。
And then lastly, one of the things that's really important is the partnerships that we have with OpenAI and Anthropic, with xAI, with Meta, and now with just about every single open-source model maker in the world. There are 1.5 million AI models on Hugging Face, and all of them run on NVIDIA CUDA. And open source, in totality, probably represents the second-largest model family in the world. OpenAI is the largest; second largest is probably the collection of all the open-source models.
最後,真正重要的一點是我們與 OpenAI、Anthropic、xAI、Meta 等公司的合作關係,現在這些合作關係使得——當然,也使得我們與世界上幾乎所有開源軟體公司建立了合作關係。Hugging Face 上有 150 萬個 AI 模型,全部都在 NVIDIA CUDA 上運行。而開源軟體整體而言,可能是世界上最大的——或者說是第二大模式。OpenAI是規模最大的公司。第二大可能就是所有開源軟體的集合。
And so NVIDIA's ability to run all of that makes our platform super fungible, super easy to use and really safe to invest into. And so that creates the diversity of customers and the diversity of the platforms available in every single country because we support the whole world's ecosystem.
因此,NVIDIA 能夠運行所有這些程序,這使得我們的平台具有極強的可替代性、極易使用性,並且非常值得投資。因此,我們支持全球生態系統,從而造就了每個國家客戶的多樣性以及可用平台的多樣性。
Operator
Operator
Aaron Rakers, Wells Fargo.
Aaron Rakers,富國銀行。
Aaron Rakers - Analyst
Aaron Rakers - Analyst
I guess sticking with the idea of the platform and extreme co-design, some of the news over this last quarter has obviously been around NVIDIA's ability, or push, to bring Vera CPUs to market on a standalone-solution basis. So I guess, Jensen, I'm curious: what importance does Vera play in this architecture evolution as we move forward?
我想,如果繼續沿用平台和極致協同設計的理念,那麼上個季度的一些新聞顯然是英偉達有能力或有動力將 Vera CPU 以獨立解決方案的形式推向市場。所以,Jensen,我很好奇,隨著我們不斷推進架構發展,Vera 在這種架構演進中扮演著怎樣的重要角色?
Is this being driven more by the proliferation or the heterogeneity of inference workloads? I'm just curious of how you see that evolving for NVIDIA, particularly on a standalone CPU basis.
這主要是由推理工作負荷的激增還是推理工作負荷的異質性所驅動的?我只是好奇您如何看待英偉達在這方面的發展,尤其是在獨立CPU方面。
Jensen Huang - President and Chief Executive Officer
Jensen Huang - President and Chief Executive Officer
Yeah, thanks. And I'll tell you some more about it at GTC. But at the highest level, we made fundamentally different architecture decisions about our CPUs compared to the rest of the world's CPUs. It's the only data center CPU that supports LPDDR5. It is designed to be focused on very high data processing capabilities. And the reason for that is because most of the computing problems that we're interested in are data driven, artificial intelligence being one.
嗯,謝謝。我會在GTC上詳細介紹。但從最高層次來看,我們針對CPU的架構決策與世界上其他CPU的架構決策有著根本的差異。它是唯一一款支援 LPDDR5 的資料中心 CPU。它的設計重點在於極高的資料處理能力。原因在於,我們感興趣的大多數計算問題都是數據驅動的,而人工智慧就是其中之一。
And the ratio of single-threaded performance to memory bandwidth is just off the charts. We made those architectural decisions because, across the different phases of AI, before you even do training, you have to do data processing. So you have data processing, pre-training, and post-training, and in post-training now, the AIs are learning how to use tools. Many of those tools run in CPU-only environments, or in CPU-plus-GPU-accelerated environments. And Vera was designed to be an excellent CPU for post-training. And so some of the use cases in the entire artificial intelligence pipeline involve using a lot of CPUs.
而且,在這個頻寬比例下,單執行緒效能簡直超乎想像。我們做出這些架構決策是因為在人工智慧的整個階段,從資料處理到訓練,在進行訓練之前,必須先進行資料處理。所以現在有了資料處理、預訓練,以及後訓練階段,人工智慧正在學習如何使用工具。而且,許多工具都是在純 CPU 環境下運作的,或是在 CPU 和 GPU 加速環境下運作的。Vera 被設計成一款優秀的後訓練 CPU。因此,人工智慧整個流程中的一些應用場景包括使用大量的 CPU。
We love CPUs as well as GPUs, and when you accelerate the algorithms to the limit as we have, Amdahl's Law would suggest that you need really, really fast single-threaded CPUs, and that's the reason why we built Grace to be extraordinarily great at single-threaded performance, and Vera is off the charts better than that.
我們既喜歡 CPU 也喜歡 GPU,當像我們這樣將演算法加速到極限時,根據阿姆達爾定律,你需要非常非常快的單線程 CPU,這就是為什麼我們將 Grace 構建得在單線程性能方面非常出色,而 Vera 的性能更是遠超 Grace。
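The Amdahl's Law point above can be made concrete with a short sketch; the parallel fraction and acceleration factor below are hypothetical numbers chosen for illustration:

```python
# Amdahl's law: overall speedup is capped by the serial (non-accelerated) fraction.
def amdahl_speedup(parallel_fraction: float, accel_factor: float) -> float:
    """Overall speedup when parallel_fraction of the work is sped up by accel_factor."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / accel_factor)

# Even with 95% of the workload GPU-accelerated by 100x, the serial 5% caps
# the overall speedup below 20x, so the CPU running that serial portion
# becomes the bottleneck -- hence the emphasis on single-threaded performance.
speedup = amdahl_speedup(0.95, 100.0)  # about 16.8x
```

The harder the parallel portion is accelerated, the more the residual serial work dominates, which is the stated rationale for pairing GPUs with very fast single-threaded CPUs.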
Operator
Operator
Tim Arcuri, UBS.
提姆‧阿庫裡,瑞銀集團。
Timothy Arcuri - Analyst
Timothy Arcuri - Analyst
Colette, I was wondering if you can talk about the deployment of capital. I know that you really jacked up the purchase commits, but it sounds like maybe you're over the hump on this, and you're going to probably generate about $100 billion in cash this year. And pretty much no matter how good the results have been, the stock hasn't really gone up much, so I think you probably feel like this is a pretty good price to be buying back a bunch of it here.
科萊特,我想請你談談資本部署的問題。我知道你們大幅提高了採購承諾額,但聽起來你們可能已經度過了難關,今年可能會獲得約 1000 億美元的現金。而且,無論業績多麼出色,股價實際上並沒有上漲多少,所以我認為你可能覺得現在是買回大量股票的好時機。
So I was wondering if you can talk about that. The question being: why not put a big stake in the ground and just do a huge share repurchase here?
所以我想知道您是否可以談談這個問題,問題是,為什麼不採取強硬措施,在這裡佔據很大的份額?
Colette Kress - Executive Vice President, Chief Financial Officer
Colette Kress - Executive Vice President, Chief Financial Officer
So thanks for the question. We look at our capital return very, very carefully, and we do believe that one of the most important things we can do is support the extraordinary ecosystem in front of us, everywhere from our suppliers, where we need to do the work to assure we can have the supply and capacity that's needed, all the way to the early developers of the AI solutions that will be on our platform. So we will continue to make these strategic investments a very important part of our process.
謝謝你的提問。我們非常非常仔細地審視我們的資本回報,我們相信,我們能做的最重要的事情之一就是真正支持我們面前的龐大生態系統,這個生態系統涵蓋了從我們的供應商到我們需要做的一切工作,以確保我們能夠獲得所需的供應,並儘我們所能地幫助他們,包括我們平台上人工智能解決方案的早期開發者。因此,我們將繼續把這當作我們流程和策略投資中非常重要的一部分。
But of course, we are still repurchasing our stock, we still have our dividend as well, and we will continue to find the right opportunities within the year for those different purchases.
當然,我們仍在回購股票,也仍在派發股息,我們將繼續在年內尋找合適的獨特機會來進行這些不同的收購。
Operator
Operator
Jim Schneider, Goldman Sachs.
吉姆‧施耐德,高盛集團。
James Schneider - Analyst
James Schneider - Analyst
Jensen, you've previously outlined the potential to get to $3 trillion to $4 trillion of Data Center CapEx by 2030, which implies a potential acceleration in growth rates, and you've sort of guided toward that pace for this next quarter.
Jensen,你之前曾概述過到 2030 年資料中心資本支出可能達到 3 兆至 4 兆美元,這意味著成長率可能會加快,而你也已經引導這一趨勢在下一個季度達到目前的水平。
The question is, what are some of the key application areas that you believe are most likely to drive that inflection? Is that physical AI, agentic, or something else? And do you still feel good about that $3 trillion to $4 trillion envelope?
問題是,您認為哪些關鍵應用領域最有可能推動這種轉變?這是實體人工智慧、智慧體人工智慧,還是其他什麼?你仍然對那3兆到4兆美元的預算感到滿意嗎?
Jensen Huang - President and Chief Executive Officer
Jensen Huang - President and Chief Executive Officer
Yeah. Let's back that up and just reason through it a few different ways. The first way is, on first principles, the way that software will be done in the future using AI is token driven. Everybody talks about tokenomics, about data centers generating tokens; inference is about generating tokens, and we generate tokens. We were just talking about how NVIDIA's NVLink72 enabled us to generate tokens at 50 times better performance per unit of energy than the previous generation. And so token generation is at the center of almost everything that relates to software, and to computing, in the future.
是的。讓我們回到正題,從幾個不同的角度來分析。所以第一種方法,從根本原理上講,未來使用人工智慧的軟體開發方式是代幣驅動的。我認為每個人都在談論代幣經濟學,談論資料中心產生代幣,推理也是為了產生代幣,而我們也在產生代幣。我們剛才還在討論代幣,NVIDIA 的 NVLink72 如何使我們能夠以比上一代產品高 50 倍的單位能耗性能生成代幣。因此,代幣生成幾乎是未來所有與軟體和計算相關的事物的核心。
If you look at the way we used computing in the past, however, the amount of computation demanded by software back then is a tiny fraction of what will be necessary in the future. And AI is here. AI is not going to go back. AI is only going to get better from here. So if you think about it, the world was investing about $300 billion to $400 billion a year in classical computing, and now AI is here, and the amount of computation necessary is a thousand times higher than the way we used to do computing.
然而,如果你回顧我們過去使用計算機的方式,你會發現過去軟體的計算需求量與未來所需的計算需求量相比只是很小的一部分。人工智慧時代已經來臨。人工智慧不會倒退。人工智慧只會越來越好。所以,如果你仔細想想,你可能會說,好吧,世界每年在傳統計算上投入大約 3000 億到 4000 億美元,而現在人工智慧來了,所需的計算量比我們過去進行計算的方式高出一千倍。
The computing demand is just a lot higher, and so if we continue to believe there's value in it, and we'll talk about that in a second, then the world will invest to produce those tokens. And so the amount of token generation capability that the world needs is a lot more than $700 billion. And I'm fairly confident that we're going to continue to generate tokens and to invest in compute capacity from this point out.
計算需求要高得多,所以如果我們繼續相信它有價值——我們稍後會談到這一點——那麼全世界就會投資來生產這種代幣。因此,世界所需的代幣生成能力遠遠超過 7,000 億美元。我相當有信心,我們會繼續發行代幣。從現在開始,我們將繼續投資運算能力。
And fundamentally, because every single company depends on software, every piece of software will depend on AI. And so every company will produce tokens. And that's the reason why I call them AI factories. If you're a company running cloud data centers, you have AI factories to generate tokens for your revenues. If you're an enterprise software company, you're going to generate tokens for the agentic systems that sit on top of your tools.
從根本上講,因為每一家公司都依賴軟體,所以每款軟體都將依賴人工智慧。因此,每家公司都會發行代幣。這就是我稱它們為人工智慧工廠的原因。無論你是位於雲端資料中心的公司,你都可以利用人工智慧工廠產生代幣以增加收入。如果你是一家企業軟體公司,你將為工具之上的代理系統產生令牌。
If you are a robotics company -- and the self-driving car is the first indication of that -- you have huge supercomputers, which are basically AI factories, to generate the tokens that go into your cars and become their AI. And then you also have to put computers inside the cars to continuously generate tokens. And so we're fairly sure now that this is the future of computing.
如果你是一家機器人工廠,而自動駕駛汽車是其第一個標誌,那麼你就會擁有巨大的超級計算機,它們本質上是人工智慧工廠,用來產生進入你汽車的代幣,這些代幣就成為了汽車的人工智慧。然後你還得在車上安裝電腦,以便不斷生成代幣。因此,我們現在相當肯定,這就是計算機的未來。
Now, why is it so certain that this is the future of computing? The reason is that the way we used to do software was pre-recorded. Everything was captured a priori. We pre-compiled the software. We pre-wrote the content. We pre-recorded the videos. But now, everything is generative, in real time. And when it's generated in real time, the context of the person, the situation, the query, and the intentions can all be taken into consideration to generate the outcome of this new software we call AI -- agentic AI.
那麼,為什麼可以如此肯定這就是電腦的未來呢?原因在於我們過去製作軟體的方式是預先錄製好的。一切都是事先安排好的。我們預先編譯了軟體。我們預先寫好內容。我們預先錄製了影片。但現在,一切都是即時生成的。當即時生成時,它可以考慮人的背景、情況、查詢和意圖,所有這些都可以納入考量,從而產生我們稱之為人工智慧(AI)的新軟體——智慧人工智慧(Agentic AI)的結果。
And so the amount of computation necessary is far, far greater than for pre-recorded software -- just as a computer has a lot more computation capability than a DVD player, which simply played back pre-recorded content. Artificial intelligence needs a lot more computing capability than the way we used to do software in the past.
因此,所需的計算量遠大於預先錄製的計算量。就像電腦的運算能力遠大於DVD錄影機或預先錄製的DVD播放器。人工智慧需要比我們過去開發軟體方式強大的運算能力。
Now, the question about computation, about sustainability: at the first level, the computer science level, this is simply the way computing is going to be done. At the industrial level, because all of our companies in the final analysis are powered by software, and the cloud companies are powered by software, if the new software requires tokens to be generated, and the tokens are monetized, then it stands to reason that their data center build-out directly drives their revenues. And so compute drives revenues. And I think they all understand that. I think people are increasingly starting to understand that as well.
現在,關於計算的問題,關於永續性的問題,首先是在電腦科學層面,這就是計算將要進行的方式。現在,從工業層面來看,因為歸根結底,我們所有的公司都是由軟體驅動的,雲端公司也是由軟體驅動的,如果新軟體需要產生代幣,而代幣又可以貨幣化,那麼他們的資料中心建設直接推動了他們的收入,這是合理的。因此,計算能力驅動了收入。我想他們都明白這一點。我認為人們也越來越開始理解這一點了。
And then lastly, the benefits that AI produces for the world ultimately have to generate revenues. And we're seeing it right in front of us, being developed as we stand here: agentic AI has hit an inflection point, and it literally happened in the last two or three months.
最後,人工智慧為世界帶來的好處最終必須產生收益。我們現在親眼目睹——就在我們眼前,正在發展中的——智能體人工智慧已經迎來了一個轉折點。而這一切,其實就發生在過去的兩三個月。
Of course, inside the industry, we've been seeing it for a while, probably six months or so, but the world has now awakened to the agentic AI inflection. The agents are super smart; they're solving real problems. Coding is obviously supported by agentic systems now. And all of our coders here at NVIDIA are using agentic systems enormously -- either Claude Code or OpenAI Codex, and oftentimes both, and Cursor, oftentimes all three, depending on the use case. So they have agents as co-design partners, engineering partners, to help them solve problems.
當然,在業界,我們已經看到這種情況一段時間了,大概有六個月左右,但現在全世界都意識到了人工智慧的智慧化趨勢。這些特工非常聰明,他們正在解決實際問題。如今,編碼顯然得到了智能體系統的支持。我們 NVIDIA 的所有程式設計師都在大量使用代理系統,要么是 Claude Code,要么是 OpenAI Codex,而且通常兩者都用,還有 Cursor,通常這三者都會用,具體取決於使用場景。但他們有代理商、聯合設計合作夥伴和工程合作夥伴來幫助他們解決問題。
And you can see their revenue skyrocketing. These companies -- in the case of Anthropic, I think their revenue is up 10x in a year, and they are severely capacity-constrained because demand is just incredible. And the token demand is incredible. The token generation rate is growing exponentially, and the same thing with, of course, OpenAI; their demand is incredible.
你可以看到他們的收入飆升。以 Anthropic 為例,我認為這些公司的營收一年成長了 10 倍,但由於需求量龐大,它們的產能嚴重受限。代幣需求量非常驚人。代幣生成速度呈指數級增長,OpenAI 的情況當然也是如此;他們的需求量非常大。
And so the more compute they can bring online, the faster their revenues will grow. And that goes back to the comment I was making: that inference is revenues, that compute equals revenues now in this new world.
因此,他們能夠投入使用的運算能力越多,他們的收入成長速度就越快。這又回到了我剛才說的評論,在這個新世界裡,推論就是收入,計算就等於收入。
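The "compute equals revenues" point above can be sketched as back-of-the-envelope arithmetic. Every number below -- racks, tokens per second per rack, price per million tokens, utilization -- is a hypothetical illustration for the shape of the calculation, not a figure from this call:

```python
# Hypothetical tokenomics sketch: all inputs are illustrative assumptions,
# not NVIDIA or customer figures.
def annual_token_revenue(racks, tokens_per_sec_per_rack,
                         price_per_million_tokens, utilization):
    """Yearly revenue for an AI factory that monetizes generated tokens."""
    seconds_per_year = 365 * 24 * 3600
    tokens_per_year = (racks * tokens_per_sec_per_rack
                       * seconds_per_year * utilization)
    return tokens_per_year / 1e6 * price_per_million_tokens

# Example: 100 racks, 1M tokens/sec each, $0.50 per million tokens,
# 60% average utilization -> roughly $946M per year.
rev = annual_token_revenue(100, 1_000_000, 0.50, 0.6)
print(f"${rev:,.0f} per year")
```

Under these assumed numbers, revenue scales linearly with deployed compute, which is the sense in which bringing more capacity online directly grows revenue.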
And in a lot of ways, that's the reason why we say it's a new industrial revolution. There are new factories, new infrastructure being built, and this new way of doing computing is not going to go back. And so to the extent that we believe producing tokens is going to be the future of computing -- which I believe, and I think the industry largely believes -- then we're going to be building out this capacity from this point forward and continue to expand from here.
從很多方面來說,這就是我們稱之為新工業革命的原因。新的工廠正在建設,新的基礎設施正在建設,這種新的計算方式不會再回來了。因此,如果我們相信代幣生成將是計算的未來(我相信,而且我認為業內大多數人也這麼認為),那麼我們將從現在開始建立這種能力,並繼續擴大規模。
Now, the wave that we're seeing now is the agentic AI inflection, and the next inflection beyond that is physical AI, where we take AI and these agentic systems into physical applications such as manufacturing and robotics. And so that's a giant opportunity ahead.
現在,我們看到的浪潮是智慧人工智慧的轉捩點,而下一個轉折點是物理人工智慧,我們將人工智慧和這些智慧系統應用到物理領域,例如製造業、機器人技術等等,這是一個巨大的機會。
Operator
Operator
This concludes the question-and-answer session. I'll turn the call to Toshiya Hari.
問答環節到此結束。我把電話轉給Toshiya Hari。
Toshiya Hari - Vice President of Investor Relations & Strategic Finance
Toshiya Hari - Vice President of Investor Relations & Strategic Finance
In closing, please note Jensen will be participating in a fireside chat at the Morgan Stanley TMT Conference in San Francisco on March 4. He'll also be giving a keynote at GTC in San Jose on March 16. Our earnings call to discuss the results of our first quarter of fiscal 2027 is scheduled for May 20.
最後,請注意,詹森將於 3 月 4 日在舊金山舉行的摩根士丹利 TMT 大會上參加爐邊談話。他還將於 3 月 16 日在聖何塞舉行的 GTC 大會上發表主題演講。我們將於 5 月 20 日召開財報電話會議,討論 2027 財年第一季的業績。
Thank you for joining us today. Operator, please go ahead and close the call.
感謝您今天蒞臨。接線員,請您結束通話。
Operator
Operator
Thank you. This concludes today's conference call. You may now disconnect.
謝謝。今天的電話會議到此結束。您現在可以斷開連線了。