Operator
Welcome to the fourth-quarter 2025 Arista Networks financial results earnings conference call. (Operator Instructions) As a reminder, this conference is being recorded and will be available for replay from the Investor Relations section on the Arista website following this call.
Mr. Rudolph Araujo, Arista's VP of Investor Advocacy, you may begin.
Rudolph Araujo - Head of Investor Advocacy
Thank you, Regina. Good afternoon, everyone, and thank you for joining us. With me on today's call are Jayshree Ullal, Arista Networks' Chairperson and Chief Executive Officer; and Chantelle Breithaupt, Arista's Chief Financial Officer. This afternoon, Arista Networks issued a press release announcing the results for its fiscal fourth quarter ending December 31, 2025. If you want a copy of the release, you can access it online on our website.
During the course of this conference call, Arista Networks management will make forward-looking statements, including those relating to our financial outlook for the first quarter of the 2026 fiscal year; our longer-term business model and financial outlook for 2026 and beyond; our total addressable market and strategy for addressing these market opportunities, including AI; customer demand trends; tariffs and trade restrictions; supply chain constraints; component costs; manufacturing output; inventory management and inflationary pressures on our business; lead times; product innovation; working capital optimization; and the benefit of acquisitions. These statements are subject to the risks and uncertainties that we discuss in detail in our documents filed with the SEC, specifically in our most recent Form 10-Q and Form 10-K, and which could cause actual results to differ materially from those anticipated by these statements.
These forward-looking statements apply as of today, and you should not rely on them as representing our views in the future. We undertake no obligation to update these statements after this call.
This analysis of our Q4 results and our guidance for Q1 2026 is based on non-GAAP measures and excludes all noncash stock-based compensation impacts, certain acquisition-related charges and other nonrecurring items. A full reconciliation of our selected GAAP to non-GAAP results is provided in our earnings release.
With that, I will turn the call over to Jayshree.
Jayshree Ullal - Chairman of the Board, Chief Executive Officer
Thank you, Rudy, and thank you, everyone, for joining us this afternoon for our fourth-quarter and full-year 2025 earnings call. Well, 2025 has been another defining year for Arista. With the momentum of generative AI, cloud and enterprise, we have achieved growth well beyond our goal, at 28.6%, driving a record revenue of $9 billion, coupled with a non-GAAP gross margin of 64.6% for the year and a non-GAAP operating margin of 48.2%.
The Arista 2.0 momentum is clear as we surpassed 150 million cumulative ports shipped in Q4 '25. International growth was a good milestone, with both Asia and Europe growing north of 40% annually. As expected, we have exceeded our strategic goals of $800 million in campus and branch expansion as well as $1.5 billion in AI center networking.
Shifting to annual customer sector revenue for 2025, cloud and AI titans contributed significantly at 48%. Enterprise and financials came in at 32%, while AI and specialty providers, which now include Apple, Oracle and their initiatives as well as emerging neoclouds, performed strongly at 20%. We had two greater-than-10% customer concentrations in 2025. Customers A and B drove 16% and 26% of our overall business, respectively.
We cherish our privileged partnerships that have spanned 10 to 15 years of collaborative engineering. With our ever-increasing AI momentum, we anticipate a diversified customer base in 2026, including one, maybe even two additional 10% customers.
In terms of annual 2025 product lines, our core cloud, AI and data center products, built upon a highly differentiated Arista EOS stack, are successfully deployed across 10 gigabit to 800 gigabit Ethernet speeds, with the 1.6 terabit migration imminent. This includes our portfolio of Etherlink AI and our 7000 series platforms for best-in-class performance, power efficiency, high availability, automation and agility for both the front-end and back-end compute, storage and all of the interconnect zones.
Of course, we interoperate with NVIDIA, the recognized worldwide market leader in GPUs, but we also realize our responsibility to broaden the open AI ecosystem, including leading companies such as AMD, Anthropic, Arm, Broadcom, OpenAI, Pure Storage and VAST Data, to name a few, that create the modern AI stack of the 21st century. Arista is clearly emerging as the gold-standard terabit network to run these intense training and inference models processing tokens at [tariff locks].
Arista's core sector drove approximately 65% of revenue. We are confident of our number one position in market share in high-performance switching, according to most major industry analysts. We launched our Blue Box initiative, offering enriched diagnostics of our hardware platforms, dubbed Netdi, that can run across both our flagship EOS and our open NOS platforms.
We saw an excellent uptick in 800-gig adoption in 2025, gaining greater than 100 customers cumulatively for our Etherlink products, and we are co-designing several AI rack systems, with 1.6T switching emerging this year. With our increased visibility, we now expect AI networking revenue to double from 2025 to 2026, to $3.25 billion.
Our network adjacencies market comprises routing, replacing traditional routers, and our cognitive AI-driven AVA campus. Our investments in cognitive wired and wireless, zero-touch operations, network identity, scale and segmentation have earned several accolades in the industry. Our open, modern stacking with SWAG, Switch Aggregation Group, and our recent VESPA for Layer 2 and Layer 3 wired and wireless scale are compelling campus differentiators. Together with our recent VeloCloud acquisition in July 2025, we are driving that homogeneous, secure, client-to-branch-to-campus solution with unified management domains.
Looking ahead, we are committed to our aggressive goal of $1.25 billion for '26 for the cognitive campus and branch. We have also successfully deployed in many routing edge, core, spine and peering use cases. In Q4 2025, Arista launched our flagship 7800R4 spine for many routing use cases, including DCI and AI spines, with a massive 460 terabits of capacity to meet the demanding needs of multiservice routing, AI workloads and switching use cases. The combined campus and routing adjacencies together contribute approximately 18% of revenue.
Our third and final category is network software and services based on subscription models such as A-Care, CloudVision, Observability, Advanced Security and even some branch edge services. We added another 350 CloudVision customers this year, almost one new customer a day, and have deployed an aggregate of 3,000 customers with CloudVision over the past decade. Arista's subscription-based network services and software revenue contributed approximately 17%, and please note that it does not include perpetual software licenses that are otherwise included in core or adjacent markets.
Arista 2.0 momentum is clear. We find ourselves at the epicenter of mission-critical network transactions. We are becoming the preferred network innovator of choice for client to cloud and AI networking with a highly differentiated software stack and a uniform CloudVision software foundation.
We are proud to power Warner Bros. Discovery's streaming distribution network for 47 markets in 21 languages in the pan-European Winter Olympics that is happening as I speak. We are now north of 10,000 cumulative customers, and I'm particularly impressed with our traction in the $5 million to $10 million customer category as well as the $1 million customer category in 2025.
Arista's 2.0 vision resonates with our customers who value us for leading that transformation from incongruent silos to reliable centers of data. The data can reside as campus centers, data centers, WAN centers or AI centers regardless of their location.
Networking for AI has achieved production scale with an all-Ethernet-based Arista AI center. In 2025, we were a founding member of the Ethernet-based standards, both for scale-up with ESUN and in completing the Ultra Ethernet Consortium 1.0 specification for scale-out AI networking. These AI centers seamlessly connect the back-end AI accelerators to the front end of compute, storage, WAN and classic cloud networking.
Our AI accelerated networking portfolio, consisting of three families of Etherlink spine-leaf fabrics, is successfully deployed in scale-up, scale-out and scale-across networks. Network architectures must handle both training and inference frontier models to mitigate congestion.
For training, the key metric is obviously job completion time, the amount of time between admitting a training job to an AI accelerator cluster and the end of the training run. For inference, the key metric is slightly different. It's the time to first token, basically the latency between a user submitting a query and receiving the first response. Arista has clearly developed a full AI suite of features to uniquely handle the fidelity of AI and cloud workloads in terms of diversity, duration, size of traffic flows and all the patterns associated with them.
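To make the two metrics just described concrete, here is a minimal illustrative sketch; the class and field names are hypothetical (they are not part of any Arista product or API) and simply show how job completion time and time to first token reduce to differences between timestamps.

```python
from dataclasses import dataclass

@dataclass
class TrainingJob:
    admitted_at: float    # seconds: when the job was admitted to the accelerator cluster
    completed_at: float   # seconds: when the training run finished

@dataclass
class InferenceQuery:
    submitted_at: float    # seconds: when the user submitted the query
    first_token_at: float  # seconds: when the first response token was returned

def job_completion_time(job: TrainingJob) -> float:
    """Training metric: elapsed time from job admission to the end of the run."""
    return job.completed_at - job.admitted_at

def time_to_first_token(query: InferenceQuery) -> float:
    """Inference metric: latency from query submission to the first token."""
    return query.first_token_at - query.submitted_at

# Example: a job admitted at t=0 s and finished at t=5400 s has a JCT of 90 minutes;
# a query submitted at t=10.00 s whose first token arrives at t=10.35 s has a TTFT of 350 ms.
```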
Our AI for networking strategy, based on AVA, Autonomous Virtual Assist, curates the data for higher-level functions. Together with our publish-subscribe state foundation in EOS and NetDL, our Network Data Lake, we instrument our customers' networks to deliver proactive, predictive and prescriptive features for enhanced security, observability and agentic AI operations. Coupled with Arista validated designs for network simulation, digital twin and validation functionality, Arista platforms are perfectly optimized and suited for network as a service.
Our global relevance with customers and channels is increasing. In 2025 alone, we conducted three large customer events across three continents: Asia, Europe, and United States and many other smaller ones, of course. We touched 4,000 to 5,000 strategic customers and partners in the enterprise.
While many customers are struggling with their legacy incumbents, Arista is deeply appreciated for redefining the future of networking. Customers have long appreciated our network innovation and quality, demonstrated by our highest Net Promoter Score of 93% and lowest security vulnerabilities in the industry. We now see the pace of acceptance and adoption accelerating in the enterprise customer base.
Our leadership team, including our newly appointed co-Presidents, Ken Duda and Todd Nightingale, have driven strategic and cohesive execution. Tyson Lamoreaux, our newest Senior Vice President, who joined us with deep cloud operator experience has ignited our hyper growth across our AI and cloud titan customers.
Exiting 2025, we are now at approximately 5,200 employees, which also includes the recent VeloCloud acquisition. I am incredibly proud of the entire Arista A team and thank you, all employees for your dedication and hard work. Of course, our top-notch engineering and leadership team has always steadfastly prioritized our core Arista way principles, of innovation, culture, and customer intimacy.
Well, I think you would agree that 2025 has indeed been a memorable year, and we expect 2026 to be a fantastic one as well. We are amid unprecedented networking demand, with a massive and growing TAM of $100-plus billion. And so despite all of the news on mounting supply chain allocation and the rising cost of memory and silicon fabrication, we increased our 2026 guidance to 25% annual growth, accelerating now to $11.25 billion.
And with that, happy news, I turn it over to Chantelle, our CFO.
Chantelle Breithaupt - Chief Financial Officer
Thank you, Jayshree, and congratulations to you and our employees on a terrific 2025. As you outlined, this was an outstanding year for the company, and that strength is clearly reflected in our financial results. Let me walk through the details.
To start off, total revenues in Q4 were $2.49 billion, up 28.9% year over year and above the upper end of our guidance of $2.3 billion to $2.4 billion. It was great to see that all geographies achieved strong growth within the quarter.
Services and subscription software contributed approximately 17.1% of revenue in the fourth quarter, down from 18.7% in Q3, which reflects the normalization following some nonrecurring VeloCloud service renewal in the prior quarter. International revenues for the quarter came in at $528.3 million or 21.2% of total revenue, up from 20.2% last quarter. This quarter-over-quarter increase was driven by a stronger contribution from our large global customers across our international markets.
The overall gross margin in Q4 was 63.4%, slightly above the guidance of 62% to 63% and down from 64.2% in the prior year. This year-over-year decrease is due to the higher mix of sales to our cloud and AI titan customers in the quarter.
Operating expenses for the quarter were $397.1 million or 16% of revenue, up from the last quarter at $383.3 million. R&D spending came in at $272.6 million or 11% of revenue, up from 10.9% last quarter. Arista continued to demonstrate its commitment and focus on networking innovation with a fiscal year '25 R&D spend at approximately 11% of revenue. Sales and marketing expense was $98.3 million or 4% of revenue, down from $109.5 million last quarter. FY25 closed the year with sales and marketing at 4.5%, representative of the highly efficient Arista go-to-market model.
Our G&A costs came in at $26.3 million or 1.1% of revenue up from $22.4 million last quarter, reflecting continued investment in systems and processes to scale Arista 2.0. For fiscal year '25, G&A expense held at 1% of revenue. Our operating income for the quarter was $1.2 billion or 47.5% of revenue. This strong Q4 finish contributed to an operating income result for fiscal year 2025 of $4.3 billion or 48.2% of revenue.
Other income and expense for the quarter was a favorable $102 million, and our effective tax rate was 18.4%. This lower-than-normal quarterly tax rate reflected the release of tax reserves due to the expiration of the statute of limitations. Overall, this resulted in net income for the quarter of $1.05 billion or 42% of revenue.
It is exciting to see Arista delivering over $1 billion in net income for the first time. Congratulations to the Arista team on this impressive achievement.
Our diluted share number was 1.276 billion shares, resulting in a diluted earnings per share for the quarter of $0.82, up 24.2% from the prior year. For fiscal year '25, we are pleased to have delivered a diluted earnings per share of $2.98, a 28.4% increase year over year.
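As a quick arithmetic check on the quarterly figure, using the net income and share count just cited (a back-of-the-envelope illustration, with rounding):

$$\text{Diluted EPS} \approx \frac{\$1.05\ \text{billion net income}}{1.276\ \text{billion diluted shares}} \approx \$0.82$$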
Now turning to the balance sheet. Cash, cash equivalents and marketable securities ended the quarter at approximately $10.74 billion. In the quarter, we repurchased $620.1 million of our common stock at an average price of $127.84 per share. Within fiscal 2025, we repurchased $1.6 billion of our common stock at an average price of $100.63 per share.
Of the $1.5 billion repurchase program approved in May 2025, $817.9 million remain available for repurchase in future quarters. The actual timing and amount of future repurchases will be dependent on market and business conditions, stock price and other factors.
Now turning to operating cash performance for the fourth quarter. We generated approximately $1.26 billion of cash from operations in the period. This result was an outcome of strong earnings performance with an increase in deferred revenue, offset by an increase in accounts receivable driven by higher shipments and end of quarter service renewals.
DSOs came in at 70 days, up from 59 days in Q3 driven by renewals and the timing of shipments in the quarter. Inventory turns were 1.5 times, up from 1.4 times last quarter. Inventory increased marginally to $2.25 billion, reflecting diligent inventory management across raw and finished goods.
Our purchase commitments at the end of the quarter were $6.8 billion, up from $4.8 billion at the end of Q3. As mentioned in prior quarters, this expected activity mostly represents purchases for chips related to new products and AI deployments. We will continue to have some variability in future quarters due to the combination of demand for our new products, component pricing, such as the supply constraint on DDR4 memory and the lead times from our key suppliers.
Our total deferred revenue balance was $5.4 billion, up from $4.7 billion in the prior quarter. In Q4, the majority of the deferred revenue balance is product-related. Our product deferred revenue increased approximately $469 million versus last quarter.
We remain in a period of ramping our new products, winning new customers, and expanding new use cases, including AI. These trends have resulted in increased customer-specific acceptance clauses and an increase in the volatility of our product deferred revenue balances. As mentioned in prior quarters, the deferred balance can move significantly on a quarterly basis, independent of underlying business drivers.
Accounts payable days were 66 days, up from 55 days in Q3, reflecting the timing of inventory receipts and payments. Capital expenditures for the quarter were $37 million. In October 2024, we began our initial construction work to build expanded facilities in Santa Clara and incurred approximately $100 million in CapEx during fiscal year 2025 for this project.
As we have moved through 2025, we have gained visibility and confidence for fiscal year 2026. As Jayshree mentioned, we are now pleased to raise our 2026 fiscal year outlook to 25% revenue growth, delivering approximately $11.25 billion. We maintain our 2026 campus revenue goal of $1.25 billion and raised our AI centers goal from $2.75 billion to $3.25 billion. For gross margin, we reiterate the range for the fiscal year of 62% to 64% inclusive of mix and anticipated supply chain cost increases for memory and silicon.
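For reference, the raised revenue outlook follows from simple arithmetic on the $9 billion 2025 revenue base cited earlier (an illustrative check):

$$\$9.0\ \text{billion} \times 1.25 = \$11.25\ \text{billion}$$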
In terms of spending, we expect to continue to invest in innovation, sales and scaling the business to ensure our status as a leading pure-play networking company. With our increased revenue guidance, we are now confident to raise the operating margin outlook to approximately 46% in 2026.
On the cash front, we will continue to work to optimize our working capital investments, with some expected variability in inventory due to the timing of component receipts on purchase commitments. Our structural tax rate is expected to return to the usual historical rate of approximately 21.5%, up from the lower 18.4% rate experienced last quarter, Q4 '25.
With all of this as a backdrop, our guidance for the first quarter is as follows: revenues of approximately $2.6 billion, gross margin between 62% and 63% and operating margin at approximately 46%. Our effective tax rate is expected to be approximately 21.5%, with approximately 1.275 billion diluted shares.
In closing, at our September Analyst Day, we had the theme of building momentum, and we are doing just that. In the campus WAN, data and AI centers, we are uniquely positioned to deliver what customers need. We will continue to deliver both our world-class customer experience and innovation. I am enthusiastic about our fiscal year ahead.
Now back to you, Rudy, for Q&A.
Rudolph Araujo - Head of Investor Advocacy
(Operator Instructions)
Operator
(Operator Instructions) Meta Marshall, Morgan Stanley.
Meta Marshall - Analyst
Great. And congratulations on the quarter. I guess in terms of kind of the commentary you had, Jayshree, on the one or two additional 10% customers. I guess just digging more into that, what are the puts and takes of -- is it bottlenecks in terms of their building? Is it -- like what would make or break kind of whether those become two new additional kind of 10% customers?
Jayshree Ullal - Chairman of the Board, Chief Executive Officer
Thank you, Meta, for the good wishes. So obviously, if I didn't have confidence, I wouldn't dare to say that, would I? But there's always variables. Some of it may be sitting in deferred. So there's an acceptance criteria that we have to meet and there's also timing associated with meeting the acceptance criteria.
Some of it is demand that is still underway. And in this age of all the supply chain allocation and inflation, we've got to be sure we can ship. So we don't know if it's exactly 10% or high-single digits or low-double digits, but a lot of variables will decide that final number. But certainly, the demand is there.
Operator
Samik Chatterjee, JPMorgan.
Samik Chatterjee - Analyst
Jayshree, congrats on the quarter and the outlook. I don't want to sort of say that the 25% growth is not impressive. But since you're doing 30% is what the guidance is for 1Q, maybe if I could understand what's maybe sort of leading to somewhat of a cautious in terms of visibility for the rest of the year?
Is it the sort of one to two new customers and their ramps that you're sort of more cautious about? Or is it availability of supply [indiscernible] some of the components or memory that's sort of giving you maybe a bit more cautiousness about the visibility for the remainder of the year? If you can understand the drivers there.
Jayshree Ullal - Chairman of the Board, Chief Executive Officer
Yeah. Thank you, Samik. First, I don't think I'm being cautious. I think I went all out to give you a high dose of reality. But I understand your views on caution, given all the CapEx numbers you see from customers.
That's an important thing to understand that we don't track the CapEx. The first thing that happens in the CapEx is they got to build the data centers and get the power and get all of the GPUs and accelerators and the network comes and lags a little. So demand is going to be very good. But whether the shipments exactly fall into '26 or '27, to add, you can clarify when they really fall in, but there's a lot of variables there. That's one issue.
The second, as I said, is a large amount of these are new products, new use cases, highly tied to AI where customers are still in their first innings. So again, I'm giving you the greatest visibility I can fairly early in the year on the reality of what we can ship not what the demand might be. It might be a multiyear demand that ships over multiple years.
So let's hope it continues. But of course, you must understand that we're also facing a lot of large numbers. So 25% on a base of now $9 billion when we started last year at $8.25 billion is a really, really early and good start.
Operator
David Vogt, UBS.
David Vogt - Analyst
Maybe Chantelle and Jayshree, can you help quantify sort of both the revenue impact and potential kind of gross margin impact embedded in your guide from the memory dynamics and the constraints? I know last quarter, and you had mentioned in this quarter, obviously, the supply chain does have some constraints.
When you think about -- I think, Jayshree, you said kind of the real outlook that you see, maybe you can help parameterize what you think could hold you back, if that's the way to phrase it? And just give us a sense for what upside could be in a perfect world effectively if you could share that?
Jayshree Ullal - Chairman of the Board, Chief Executive Officer
I'm going to give some general commentary and Chantelle, if you don't mind adding to it. Our peers in the industry have been facing this probably longer than we have because I think the server industry probably saw it first because they're more memory-intensive. Add to that, that we're expecting increases from the silicon fabrication that all the chips are made, as you know, centrally with one company, Taiwan Semiconductor.
So Arista has taken a very thoughtful approach being aware of this since 2025 and frankly, absorbed a lot of the costs in 2025 that we were incurring. However, in 2026, the situation has worsened significantly. We're having to smile and take it just about at any price we can get and the prices are horrendous. They're an order of magnitude, exponentially higher.
So clearly, with the situation worsening and also expected to last multiple years, we are experiencing shortages in memory. Thankfully, as you can see, reflected in our purchase commitments, we are planning for this. And I know that memory is now the new gold for the AI and automotive sector. But clearly, it's not going to be easy, but it's going to favor those who plan and those who can spend the money for it.
Chantelle Breithaupt - Chief Financial Officer
I think the only thing I'd add to your question, David, and thank you for that, is that -- so we're comfortable in the guide, and that's why we have the guide and why we raised the numbers that we did. So we're comfortable we have a path to there within the numbers we've provided.
The range of 62% to 64%. I think we are pleased to hold despite this kind of pressure coming into it. This has been our guide since September at our Analyst Day. So we're pleased to hold that guide and find ways to mitigate this journey. Now whether it ends up being 62.5% versus 63.5% in the guide in that range, that's where we'll continue to update you, but the range we're comfortable with.
Operator
Aaron Rakers, Wells Fargo.
Aaron Rakers - Analyst
Congrats as well on the quarter and the guide. I guess when we think about the $3.25 billion guide for the AI contribution this year, I'm curious, Jayshree, how much you're factoring if any, from scale-up networking opportunity, how do you see -- is that more still of a [27]?
And also, can you unpack like ex the AI and ex the campus contribution, it appears that you're guiding still pretty muted, low-single-digit growth on non-AI, just curious how you see the non-AI and non-campus growth.
Jayshree Ullal - Chairman of the Board, Chief Executive Officer
Okay. Yeah. Well, rising tide rises all boats but some go higher and some go lower. But to answer your specific question, what was it around?
Aaron Rakers - Analyst
The scale up.
Jayshree Ullal - Chairman of the Board, Chief Executive Officer
The scale up. We have consistently described that today's configurations are mostly a combination of scale out and scale up, largely based on 800 gig and smaller ratings, now that the ESUN specification is well underway. And Ken Duda, you can -- I think the spec will be done in a year or this year, for sure. So Ken and Hugh Holbrook are actively enrolled in that.
We need a good solid spec. Otherwise, we'll be shipping proprietary products like some people in the world do today. And so we will tie our scale-up commitments greatly to availability of new products and a new ESUN spec, which we expect the earliest to be Q4 this year. And therefore, majority of the -- we'll be in some trials where a lot of -- Andy Bechtolsheim and the team is working on a lot of active AI racks with scale up in mind, but the real production level will be in 2027 primarily centered around not just 800 gig but 1.6T.
Operator
Amit Daryanani, Evercore ISI.
Amit Daryanani - Equity Analyst
Congrats from my end as well for some really good numbers here. Jayshree, if I think some of these model builders like Anthropic that I think you folks have talked about, they're starting to build these multibillion-dollar clusters on their own now.
Can you just talk about your ability to participate in some of these build-outs as they happen, be that on the DCI side or maybe even beyond that? And by extension, does this give you an opportunity to ramp up with some of the larger cloud companies that these model builders are partnering with over time as well as you build out TP or premium clusters? I'd love to just understand how that kind of business scales up for you folks.
Jayshree Ullal - Chairman of the Board, Chief Executive Officer
Yeah. No. Amit, that's a very thoughtful question. And I think you're absolutely right. The network infrastructure is playing a critical role with these model builders in a number of ways. If you look at us initially, we were largely working with one or two models in there and one or two accelerators, NVIDIA and AMD, and OpenAI was primarily the dominant one.
But today, we see that there's really multiple layers in a cake where you've got the GPU accelerators, of course, you've got power as the most difficult thing to get. But Arista needs to deal with multiple domains and model builders and appropriately, whether it is Gemini or xAI or Anthropic Claude or OpenAI and many more coming, these models and the multiprotocol algorithm or nature of these models is something we have to make sure we build the network correctly for. So that's one.
And then to your second point, you're absolutely right. I think the biggest issue is not only the model builders, but they're no longer in silos in one data center, and you're going to see them across multiple colos and multiple locations and multiple partnerships with our cloud titan customers in ways we've historically not worked. So I think you'll see more Copilot versions of it, if you will, with a number of our cloud titans.
So we expect to work with them as AI specialty providers, but we also expect to work with our cloud titans and bringing the cloud and AI together.
Operator
George Notter, Wolfe Research.
George Notter - Analyst
I was just curious about the product deferred revenue and how you see that coming off the balance sheet ultimately. Obviously, it's just been stacking up here quarter after quarter after quarter. So a few questions here.
Does that come off in big chunks that we'll see different quarters in the future? Does it come off more gradually? Does it continue to build? Like what does the profile look like for that product deferred coming off the balance sheet and flowing through the P&L? And then also I'm curious about how much product deferred do you have in the full-year revenue guidance to 25%.
Jayshree Ullal - Chairman of the Board, Chief Executive Officer
George, thanks for the questions. Not much has changed in the sense of how we have this conversation. What goes into deferred is new product, new customers, new use cases, the great new use case is AI. The acceptance criteria for that, for the larger deployments is 12 to 18 months. Some can be as short as 6 months. So there's a wide variety that goes in.
Deferred has balances coming in and out every quarter. We don't guide deferred and we don't say product-specific. What I can tell you in your question is that there will be times where there are larger deployments, but it will feel a little lumpier as we go through. But again, it's a net release of a balance. So it depends what comes in within that same quarter's timing.
George Notter - Analyst
Got it. Okay. Any sense for what's in the full-year guidance? I assume not much? Is that fair to say?
Jayshree Ullal - Chairman of the Board, Chief Executive Officer
It's super hard, George. It's when the acceptance criteria happens. If it happens December [22], it's a different situation. If it all happens in Q2, Q3, Q4, that's a difference. So that's something we really have to work with the customer. So sorry that we're not able to be clairvoyant on that.
Operator
Ben Reitzes, Melius Research.
Ben Reitzes - Equity Analyst
I guess my congrats to you guys. This execution and guide is really something. So I wanted to ask about two things. I was just wondering if you could talk a little bit more about your neocloud momentum and what that is looking like in terms of materiality?
And then also, if you don't mind touching on AMD with the launch, we're kind of hearing about you getting a lot of networking attached to the 450-type product or their new chips? Wondering if that is a catalyst or not as you go throughout the year?
Jayshree Ullal - Chairman of the Board, Chief Executive Officer
Yeah. So Ben, as you can imagine, the specialty cloud providers have historically had a cacophony of many types of providers. We are definitely seeing AI as one of the clear -- in the past, it used to be content providers, Tier 2 cloud providers. But AI is clearly driving that section. And it's a suite of customers, some of who have real financial strength and are looking now to invest and increase and pivot to AI.
So the rate at which they pivot in AI will greatly define how well we do that. And they're not yet titans, but they want to be or could be titans just the way to look at it. So -- and we're going to invest with them, and these are healthy customers. It's nothing like the dot-com era, we feel good about that.
There are a set of neoclouds that we watch more carefully because some of them are oil money converted into AI or crypto money converted into AI. And over there, we are going to be much more careful because some of those neoclouds are looking at Arista as the preferred partner, but we would also be looking at the health of the customer or they may just be a one kind. We don't know the exact nature of the business. And those will be smaller and they don't contribute in large dollars, but they are becoming increasingly plentiful in quantity, even if they're not yet in numbers.
So I think you're seeing this dichotomy of two types in that category or three types. The classic CDN and security specialty providers, Tier 2 cloud, the AI specialty are going to lean in and invest and then the neoclouds in different geographies.
Ben Reitzes - Equity Analyst
And AMD?
Jayshree Ullal - Chairman of the Board, Chief Executive Officer
Yes, the AMD question. A year ago, I think I said this to you, but I'll repeat it. A year ago, it was pretty much 99% NVIDIA, right? Today, when we look at our deployments, we see about 20%, maybe a little more, 20% to 25%, where AMD is becoming the preferred accelerator of choice. And in those scenarios, Arista is clearly preferred because they're building best-of-breed building blocks for the [NIC], for the network, for the IO, and they want open standards as opposed to a full-on vertical stack from one vendor.
So you're right to point out that AMD and in particular, it's a joy to work with Lisa and Forrest and the whole team, and we do very well in that multi-vendor open considerations.
Operator
Tim Long, Barclays.
Tim Long - Equity Analyst
Yeah, appreciate a little color. Jayshree, maybe we could touch a little bit on scale across. It's obviously gotten a lot of attention, particularly on the optics layer from some others in the industry. Obviously, you guys have been in DCI, which is kind of a similar-type technology. But curious what you think as far as Arista's participation in more of these next-gen scale across networks?
And is this something that would be good for like a Blue Box-type of product? Or would that more be in the scale up? So if you could give a little color there, that would be great.
Jayshree Ullal - Chairman of the Board, Chief Executive Officer
Right. Okay. So most of our participation today, we thought, would be scale out. But what we are finding, due to the distributed nature of where and how they can get the power and the bisectional bandwidth growth, is that essentially the throughput of scale out or scale across is all about how much data you can move, right? As the workloads become more and more complex, you have to make them more and more distributed because you just can't fit them in one data center, both from a power and a bandwidth throughput capacity standpoint.
Also, these GPUs are trying to minimize the collective degradation. So as you scale up or out, the communication patterns become very, very much of a bottleneck. And one way to solve it is to extend this across data centers, both through fiber. And as you rightly pointed out, a very high injection bandwidth DCI routing. And then there's a sustained real-world utilization you need across all of these.
So for all these reasons, we are pleasantly surprised with the role of coherent long-haul optics, which we don't build but we have worked in the past very greatly with companies that do, and they're seeing the lift. And the 7800 Spine chassis as the flagship platform and preferred choice that has been designed by our engineering team now for several years for this robust configuration.
So less Blue Box there and much, much more of a full-on Arista flagship box with EOS and all of the virtual output queuing and buffering to interconnect regional data centers with extremely high levels of routing and high availability, too. So this really lends into everything Arista stands for coming all together in a universal AI spine.
Operator
Karl Ackerman, BNP Paribas.
Karl Ackerman - Equity Analyst
Yeah. Agentic AI should support an uptake in conventional server CPUs where you have -- where your switches have high share within data centers. And so given your upwardly revised outlook of 25% growth for this year, could you speak to the demand prospects you are seeing for front-end high-speed switching products that address agentic AI products?
Jayshree Ullal - Chairman of the Board, Chief Executive Officer
Yeah. Exactly, Karl. I think in the beginning -- well, let's just go back in time in history. It's not that long ago. Three years ago, we had no AI. We were staring at [indiscernible] being deployed everywhere in the back end. And we pretty much characterized our AI as only back end, just to be pure about it, right? Three years later, I'm actually telling you we might do north of $3 billion this year and growing, right? That number definitely includes the front end as it's tied to the back-end GPU clusters. And it's an all Ethernet, all AI system for agentic AI applications.
Now a lot of the agentic AI applications are mostly running with some of our largest cloud, AI and specialty providers. But I don't rule out the possibility. You could see this in our numbers, with loads of 800-gig customers, and much of that is going to feed into the enterprise as well, as agentic AI applications come for genomic sequencing, science, automation of software, I don't know.
I don't think any of us can believe that AI is eating software, but AI is definitely enabling better software, right? And we're certainly seeing that, and can see it as well in our adoption of it. So the rise of agentic AI will only increase not just the GPU, but all gradations of XPU that can be used in the back end and front end.
Operator
Simon Leopold, Raymond James.
Simon Leopold - Analyst
I wanted to come back on the issue around sort of what's going on with the memory market. So there are two aspects to this. One, I'm wondering how much of a tool price hikes have been, you raising your prices to customers, and/or whether, within the substantial amount of purchase commitments you have, there's a significant aspect of memory in there. So you've prepurchased memory effectively at much lower prices than in the spot market today.
Jayshree Ullal - Chairman of the Board, Chief Executive Officer
Thank you. Okay. I wish I could tell you we did purchase all that memory that we needed. No, we didn't. But while our peers in the industry have done multiple price hikes already, especially those in the server market or memory-intensive switches, we have clearly been absorbing it and memory is in our purchase commitments. But -- so as everything else, the entire silicon portfolio is in our purchase commitments.
Due to some of the supply chain reactions, Todd and I have been reviewing this, and we do believe there will be a onetime increase on selected especially memory-intensive SKUs to deal with it. And we cannot absorb it if the prices keep going up the way they have in January and February. And I would tell you that all the purchase commitments I have in my current and Chantelle's current commitments are not enough. We need more memory.
Operator
James Fish, Piper Sandler.
James Fish - Analyst
Ladies, great quarter, great end to the year. Jayshree, are hyperscalers getting nervous now at all in ordering ahead? What's your sense of pull-in of demand potentially here, including for your own Blue Box initiative?
And Chantelle, for you, just going back to George's question, are you -- I know it's difficult to answer, but are you anticipating that product deferred revenue is going to continue to grow through the year? Or is it just way too difficult to predict, and you've got customers that could just say, we accept, ship them all now, and so we end up with a big quarter, but product deferred down?
Jayshree Ullal - Chairman of the Board, Chief Executive Officer
I'm going to let Chantelle answer some difficult questions [over and] over again.
Chantelle Breithaupt - Chief Financial Officer
Happy to. Thank you, James. I appreciate it. So I think for deferred, generally -- so we don't guide deferred, but to try to give you more insight, there will be -- back to George's question, there will be certain deployments that get accepted and released. But the part that's difficult is what comes into the balance, right, James. So I can't guide that; it would be a wild guess on what's going to go in, which is not prudent, I think, from my perspective.
So we'll continue to mention what's in it. We'll continue to show you through the balances. We'll talk about it in the script in the sense of the movement. But that's probably as much as I can tell you with a responsible answer looking forward.
所以我們會繼續介紹其中的內容。我們將繼續透過餘額數據向您展示。我們將在劇本中從運動的角度來討論它。但就目前而言,我能負責任地回答你的問題大概也就這些了。
Jayshree Ullal - Chairman of the Board, Chief Executive Officer
James, this is one of those times where no matter how many times you ask this question in different ways, the answer doesn't change. Okay.
James Fish - Analyst
I mean we're all -- [and Chantelle] is doing the same thing over and over again.
Chantelle Breithaupt - Chief Financial Officer
Yeah.
Jayshree Ullal - Chairman of the Board, Chief Executive Officer
I know. So on the hyperscalers, I don't think they're getting nervous. You've seen what a strong business they have, how much cash they put out and how successful they are. But I do think they are working more closely with us. Typically, we had three to six months of visibility. We're getting [rid of it].
Operator
Tal Liani, Bank of America.
Tal Liani - Analyst
I almost have the same question I asked you last quarter, because you grew -- you increased the guidance.
Jayshree Ullal - Chairman of the Board, Chief Executive Officer
(inaudible - microphone inaccessible)
Tal Liani - Analyst
Yes. No, I'll explain. You increased the guidance, but the entire increase in the guidance is basically the cloud. And if I look at the sector numbers, it's very simple.
If I remove campus and I remove cloud, and you provide these two numbers for both '25 and '26, the rest of the business, which is 60% of the business, is guided to grow zero. And in previous years, by my estimate, it was anywhere from 10% to 30% growth.
So the question is, why are you guiding as if 60% of the business is not going to grow? Is it just conservatism?
Jayshree Ullal - Chairman of the Board, Chief Executive Officer
Now, can I pause you there? Because I know you like to dissect our math several different ways and come up with conclusions.
We're not guiding that our business is going to be flat, or that we're not going to grow here or there. But generally, when something is growing very fast, other things grow less. And as to exactly whether it would be flat or grow double digits or single digits, Tal, it's February. I don't know what the rest of the year will be, okay? So --
Tal Liani - Analyst
No. But that's the question. The question is, is there allocation here? Meaning, let's say you have only a set number of memory slots, so you allocate them to cloud and the rest of the business doesn't get them, or is it just conservatism and a lack of ability to -- [lack of visibility]?
Jayshree Ullal - Chairman of the Board, Chief Executive Officer
It's neither of the above. We don't allocate to our customers. It's first in, first served. And in fact, the enterprise customers get a very high sense of priority, as do our cloud customers. Customers come first.
But the allocation of memory may put us in a situation where demand is greater than our ability to supply. We don't know. It's too early in the year. We're confident that we could guide to a higher number six months after our Analyst Day, but we don't know what the next four quarters will look like to the precision you're asking for.
Operator
Atif Malik, Citi.
Adrienne Colby - Analyst
It's Adrienne Colby for Atif. I was hoping to ask for an update on Arista's four large AI customers. I know that fourth customer you talked about was a bit slower to ramp to 100,000 GPUs. Just wondering if you can update us on their progress there.
And perhaps what's next for the other customers that have already crossed that threshold? And lastly, is there any indication that the fifth customer, the one that ran into funding challenges, might come back to you?
Jayshree Ullal - Chairman of the Board, Chief Executive Officer
Okay. Adrienne, I'll give you some updates. I'm not sure I have precise updates, but in all four customers, we are deploying AI with Ethernet. So that's the good news. Three of them have already deployed a cumulative 100,000 GPUs and are now growing from there.
And they are clearly now migrating beyond pilots and production into other centers, with power being the biggest constraint. Our fourth customer is migrating from InfiniBand, so it's still below 100,000 GPUs at this time, but I fully expect them to get there this year, and then we shall see how they get beyond that.
Operator
Michael Ng, Goldman Sachs.
Michael Ng - Analyst
I just have one question and one follow-up. First, I was wondering if you could talk a little bit about the new customer segmentation you unveiled, with cloud and AI and AI specialty. What's the philosophy around that? And does that signal more opportunity in places like Oracle and the neoclouds?
And then second, with cloud and AI at 48% of revenue and M&M at a combined 36%, you have 12% left over. Is that a hyperscale customer? Because it does kind of imply that you have a new hyperscaler approaching 10%, because obviously, we thought the next biggest one would have been Oracle, but that's moved out of cloud now. So any thoughts there would be great.
Jayshree Ullal - Chairman of the Board, Chief Executive Officer
Yeah. Sure, Michael. Well, first of all, my math is 26% and 16%, so 42%; I don't have 12% left over unless you had 58%. It's really only 6%. So on the cloud and AI titans, the way we classify that is significantly large-scale customers with greater than 1 million servers, greater than 100,000 GPUs, an R&D focus on models and sometimes even their own XPUs. And this can, of course, change. Some others may come into it. But it's a very select few set of customers, roughly five, a little below or above; that's the way to think of it, right?
On the change to the specialty cloud, as I said, we're noticing that some customers are really focused on AI with some cloud, as opposed to cloud with some AI. So when it's roughly 70% AI-centric, especially with Oracle's AI [Acceleron] and the multi-tenant partnerships they've created, they naturally have a dual personality: some of it is OCI, the Oracle Cloud, but some of it is fully AI based. So the shift in their strategy made us shift the category and bifurcate the two.
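For reference, a worked version of the arithmetic behind Jayshree's correction above, assuming the analyst's two roughly 10%-plus customers contributed the 26% and 16% of revenue she cites:
\[
26\% + 16\% = 42\%, \qquad 48\% - 42\% = 6\%
\]
So the remainder within the cloud and AI segment is about 6%, not 12%.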
Rudolph Araujo - Head of Investor Advocacy
Regina, we have time for one last question.
Operator
Ryan Koontz, Needham & Company.
Ryan Koontz - Analyst
Jayshree, in your prepared remarks, you talked about your telemetry capabilities. I wondered if you could expand on that and discuss where you are seeing that key differentiation. In what sort of use cases are you able to really seize the upper hand competitively with your telemetry capabilities?
Jayshree Ullal - Chairman of the Board, Chief Executive Officer
Yeah. I'm going to say some of it, and I think Ken, who's been designing this and working on it, will say even more. Ken Duda, our President and CTO.
So telemetry is at the heart of both our EOS software stack and our CloudVision for enterprise customers. We have real-time streaming telemetry that has been with us since the beginning, and it's constantly keeping track of all of our state, so it isn't just a pretty management tool.
And at the same time, our cloud customers and AI customers are seeking some of that visibility too. And so we have developed some deeper AI capabilities for telemetry as well.
Over to you, Ken, for some more detail.
Kenneth Duda - President, Chief Technology Officer
Yeah. Thanks for that question. That's great. Look, the EOS architecture is based on state orientation. This is the idea that we capture the state of the network and then stream that state out from the system database on the switches into whatever telemetry system can then receive it. And we are extending that capability for AI with a combination of in-network data sources related to flow control, RDMA counters, buffering and congestion counters, and also host-level information, including what's going on in the RDMA stack on the host, what's going on with collectives and latencies, and any flow control or buffering problems in the host NIC. And we pull that information together in CloudVision and give the operator a unified view of what's happening in the network and what's happening on the host.
And this greatly aids our customers in building an overall working solution, because the interactions between the network and the host can be complicated and difficult to debug when different systems are collecting that data.
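To make Ken's description a bit more concrete, here is a minimal, purely illustrative sketch of the kind of correlation he is describing: joining switch-side congestion and flow-control counters with host-side RDMA and NIC statistics into one time-aligned view per port. This is not Arista code and does not use any EOS or CloudVision API; every record name, field and value below is a hypothetical stand-in.

# Illustrative sketch only: not Arista's EOS or CloudVision API. It mimics the idea of
# merging switch-side counters (flow control, congestion, buffering) with host-side
# RDMA/NIC statistics into a single time-aligned view; all names and fields are hypothetical.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class SwitchSample:                  # hypothetical in-network telemetry record
    ts: int                          # timestamp in seconds
    switch: str
    port: str
    pfc_pause_frames: int            # priority flow-control pauses seen on the port
    ecn_marks: int                   # congestion marks
    buffer_util_pct: float

@dataclass
class HostSample:                    # hypothetical host-level telemetry record
    ts: int
    host: str
    attached_port: str               # switch port this NIC connects to, e.g. "sw1:Ethernet1"
    rdma_retransmits: int
    collective_latency_us: float

def unified_view(switch_samples, host_samples, window_s=5):
    """Bucket both telemetry streams by (switch port, time window) so network congestion
    and host RDMA symptoms can be inspected side by side."""
    view = defaultdict(dict)
    for s in switch_samples:
        key = (f"{s.switch}:{s.port}", s.ts // window_s)
        view[key]["network"] = {
            "pfc_pauses": s.pfc_pause_frames,
            "ecn_marks": s.ecn_marks,
            "buffer_pct": s.buffer_util_pct,
        }
    for h in host_samples:
        key = (h.attached_port, h.ts // window_s)
        view[key].setdefault("hosts", []).append({
            "host": h.host,
            "rdma_retransmits": h.rdma_retransmits,
            "collective_latency_us": h.collective_latency_us,
        })
    return dict(view)

if __name__ == "__main__":
    switches = [SwitchSample(100, "sw1", "Ethernet1", 42, 7, 81.5)]
    hosts = [HostSample(101, "gpu-node-9", "sw1:Ethernet1", 3, 950.0)]
    for key, record in unified_view(switches, hosts).items():
        print(key, record)           # one merged record per port and time bucket

In a real deployment the network side would come from streamed switch state and the host side from NIC and RDMA-stack counters, but the join-by-port-and-time idea is the same.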
Jayshree Ullal - Chairman of the Board, Chief Executive Officer
Great job, Ken. I can't wait for that product.
Rudolph Araujo - Head of Investor Advocacy
This concludes Arista Networks' fourth-quarter 2025 earnings call. We have posted a presentation that provides additional information on our results, which you can access in the Investors section of our website. Thank you for joining us today and for your interest in Arista.
Operator
Thank you for joining, ladies and gentlemen. This concludes today's call. You may now disconnect.