Arista Networks Inc (ANET) 2024 Q1 Earnings Call Transcript

Summary

Arista Networks reported strong fiscal first-quarter results, with revenue of $1.57 billion and non-GAAP earnings per share of $1.99. The company is working to establish itself as the networking innovator for the data-driven AI networking market. COO Anshul Sadana is stepping down, and the company will not backfill the role.

Arista sees momentum across all of its sectors and continues to impress customers with quality and innovation. The company is focused on its Network-as-a-Service strategy, targeting $750 million in the enterprise campus market by 2025.

It anticipates a future server refresh cycle and is preparing network infrastructure for upcoming technologies.

Full Transcript

Usage note: the Chinese translations below were produced by Google Translate and are provided for reference only; please rely on the English original.

  • Operator

    Operator

  • Welcome to the First Quarter 2024 Arista Networks Financial Results Earnings Conference Call. (Operator Instructions) And as a reminder, this conference is being recorded and will be available for replay from the Investor Relations section at the Arista website following this call. Ms. Liz Stine, Arista's Director of Investor Relations, you may begin.

    歡迎參加 Arista Networks 2024 年第一季財務業績電話會議。 (操作員說明)謹此提醒,本次會議正在錄製中,並可在本次電話會議後在 Arista 網站的投資者關係部分重播。 Arista 投資者關係總監 Liz Stine 女士,您可以開始了。

  • Liz Stine - Director of IR Advocacy

    Liz Stine - Director of IR Advocacy

  • Thank you, operator. Good afternoon, everyone, and thank you for joining us. With me on today's call are Jayshree Ullal, Arista Networks' Chairperson and Chief Executive Officer; and Chantelle Breithaupt, Arista's Chief Financial Officer.

    謝謝你,接線生。大家下午好,感謝您加入我們。與我一起參加今天電話會議的有 Arista Networks 董事長兼執行長 Jayshree Ullal;以及 Arista 財務長 Chantelle Breithaupt。

  • This afternoon, Arista Networks issued a press release announcing the results for its fiscal first quarter ending March 31, 2024. If you would like a copy of this release, you can access it online at our website. During the course of this conference call, Arista Networks management will make forward-looking statements, including those relating to our financial outlook for the second quarter of the 2024 fiscal year, longer-term financial outlooks for 2024 and beyond, our total addressable market and strategy for addressing these market opportunities, including AI, customer demand trends, supply chain constraints, component costs, manufacturing output, inventory management and inflationary pressures on our business, lead times, product innovation, working capital optimization and the benefits of acquisitions, which are subject to the risks and uncertainties that we discuss in detail in our documents filed with the SEC, specifically in our most recent Form 10-Q and Form 10-K, and which could cause actual results to differ materially from those anticipated by these statements.

    今天下午,Arista Networks 發布了一份新聞稿,宣布截至 2024 年 3 月 31 日的第一財季業績。如果您想取得本新聞稿的副本,可以在我們的網站上線上查閱。在本次電話會議期間,Arista Networks 管理階層將做出前瞻性聲明,包括與 2024 財年第二季的財務前景、2024 年及以後的長期財務前景、我們的整體潛在市場以及把握這些市場機會的策略(包括人工智慧)、客戶需求趨勢、供應鏈限制、零件成本、製造產量、庫存管理和業務面臨的通膨壓力、交貨時間、產品創新、營運資本優化以及收購效益相關的聲明。這些聲明受到我們向 SEC 提交的文件(特別是最新的 10-Q 表格和 10-K 表格)中詳細討論的風險和不確定性的影響,並可能導致實際結果與這些聲明的預期存在重大差異。

  • These forward-looking statements apply as of today, and you should not rely on them as representing our views in the future. We undertake no obligation to update these statements after this call. Also, please note that certain financial measures we use on this call are expressed on a non-GAAP basis and have been adjusted to exclude certain charges. We have provided reconciliations of these non-GAAP financial measures to GAAP financial measures in our earnings press release.

    這些前瞻性陳述從今天起適用,您不應依賴它們來代表我們未來的觀點。我們不承擔在本次電話會議後更新這些聲明的義務。另請注意,我們在本次電話會議中使用的某些財務指標是在非公認會計原則的基礎上表示的,並已進行調整以排除某些費用。我們在收益新聞稿中提供了這些非公認會計原則財務指標與公認會計原則財務指標的調節表。

  • With that, I will turn the call over to Jayshree.

    這樣,我會將電話轉給 Jayshree。

  • Jayshree V. Ullal - President, CEO & Chairperson

    Jayshree V. Ullal - President, CEO & Chairperson

  • Thank you, Liz. Thank you, everyone, for joining us this afternoon for our First Quarter 2024 Earnings Call. Amidst all the network consolidation, Arista is looking to establish ourselves as the pure-play networking innovator, for the next era, addressing at least a $60 billion TAM in data-driven client-to-cloud AI networking.

    謝謝你,莉茲。感謝大家今天下午參加我們的 2024 年第一季財報電話會議。在所有網路整合過程中,Arista 希望將自己打造成下一個時代的純粹網路創新者,解決數據驅動的客戶端到雲端 AI 網路中至少 600 億美元的 TAM。

  • In terms of Q1 specifics, we delivered revenue of $1.57 billion for the quarter with non-GAAP earnings per share of $1.99. Services and Software Support Renewals contributed strongly at approximately 16.9% of revenue. Our non-GAAP gross margin of 64.2% was influenced by improved supply chain and inventory management, as well as a favorable mix of enterprise business.

    就第一季的具體情況而言,我們該季度的營收為 15.7 億美元,非 GAAP 每股收益為 1.99 美元。服務和軟體支援續約貢獻強勁,約佔營收的 16.9%。我們的非 GAAP 毛利率為 64.2%,這得益於供應鏈和庫存管理的改善以及企業的有利組合。

  • International contribution for the quarter registered at 20%, with the Americas strong at 80%. As we kick off 2024, I'm so proud of the Arista teamwork and our consistent execution. We have been fortunate to build a seasoned management team over the past 10 to 15 years. Our core founders have been deeply engaged in the company for the past 20 years. Ken is still actively programming and writing code, while Andy is our full-time chief architect for next-generation AI, silicon and optics initiatives. Hugh Holbrook, our recently promoted Chief Development Officer, is driving our major platform initiatives in tandem with John McCool and Alex on the hardware side.

    本季國際貢獻率為 20%,其中美洲貢獻高達 80%。在 2024 年伊始,我為 Arista 團隊的工作和我們一貫的執行力感到非常自豪。在過去的 10 到 15 年裡,我們很幸運地建立了一支經驗豐富的管理團隊。過去20年來,我們的核心創辦人對公司非常投入。 Ken 仍在積極編程和編寫程式碼,而 Andy 是我們下一代人工智慧、晶片和光學計畫的全職首席架構師。我們最近晉升的首席開發長 Hugh Holbrook 正在與硬體方面的 John McCool 和 Alex 一起推動我們的主要平台計劃。

  • This engineering team is one of the best in tech and networking that I have ever had the pleasure of working with. On behalf of Arista though, I would like to express our sincere gratitude for Anshul Sadana's 16-plus wonderful years of instrumental service to the company in a diverse set of roles. I know he will always remain a well-wisher and supporter of the company. But Anshul, I'd like to invite you to say a few words.

    這個工程團隊是我有幸合作過的技術和網路方面最好的團隊之一。不過,我謹代表 Arista 對 Anshul Sadana 16 多年來為公司提供的各種崗位的出色服務表示誠摯的謝意。我知道他將永遠是公司的祝福者和支持者。但是安舒爾,我想邀請你說幾句話。

  • Anshul Sadana - COO (Leave of Absence)

    Anshul Sadana - COO (Leave of Absence)

  • Thank you, Jayshree. The Arista journey has been a very special one. We've come a long way from our startup base to over an $80 billion company today. Every milestone, every event, the ups and downs are all etched in my mind.

    謝謝你,傑什裡。 Arista 之旅是一次非常特別的旅程。從新創公司到如今市值超過 800 億美元的公司,我們已經走過了漫長的道路。每一個里程碑,每一個事件,風風雨雨都銘刻在我的腦海裡。

  • I've had a multitude of roles and learned and grown more than what I could have ever imagined. I have decided to take a break and spend more time with family, especially when the kids are young. I'm also looking at exploring different areas in the future. I want to thank all of you on the call today, our customers, our investors, our partners and all the well wishes over these years.

    我扮演過多種角色,學到的東西和成長的東西超越了我的想像。我決定休息一下,花更多時間陪伴家人,尤其是在孩子還小的時候。我也正在考慮在未來探索不同的領域。我要感謝今天參加電話會議的所有人、我們的客戶、我們的投資者、我們的合作夥伴以及這些年來所有的良好祝愿。

  • Arista isn't just a workplace. It's family to me. It's the people around you that make life fun. Special thanks to Arista leadership, Chris, Ashwin, John McCool, Mark Foss, Ita and Chantelle, Marc Taxay, Hugh Holbrook, Ken Duda and many more. Above all, there are 2 very special people I want to thank. Andy Bechtolsheim for years of vision, passion, guidance and listening to me. And of course, Jayshree. She hasn't been just my manager, but also by mentor and coach for over 15 years. Thank you for believing in me. I will always continue to be an Arista well-wisher.

    Arista 不僅僅是一個工作場所。對我來說這是家人。正是你周圍的人讓生活變得有趣。特別感謝 Arista 的領導、Chris、Ashwin、John McCool、Mark Foss、Ita 和 Chantelle、Marc Taxay、Hugh Holbrook、Ken Duda 等。最重要的是,我要感謝兩個非常特別的人。安迪·貝托爾斯海姆(Andy Bechtolsheim)多年來的遠見、熱情、指導和傾聽。當然還有傑什裡。她不僅是我的經理,而且在超過 15 年的時間裡一直是我的導師和教練。謝謝你相信我。我將永遠繼續成為 Arista 的支持者。

  • Back to you, Jayshree.

    回到你身邊,傑什裡。

  • Jayshree V. Ullal - President, CEO & Chairperson

    Jayshree V. Ullal - President, CEO & Chairperson

  • Anshul, thank you for that very genuine and heartfelt expression of your huge contributions to Arista. It gives me goosebumps hearing your nostalgic memories. We will miss you and hope someday you will return back home.

    Anshul,感謝您如此真誠且發自內心地表達您對 Arista 的巨大貢獻。聽到你懷舊的回憶讓我起雞皮疙瘩。我們會想念你,希望有一天你能回家。

  • At this time, Arista will not be replacing the COO role and will instead flatten the organization. We will be leveraging the deep bench strength of our executives, who have stepped up to drive our new Arista 2.0 initiatives. In particular, John McCool, our Chief Platform Officer, and Ken Kiser, our Group Vice President, have taken on expanded responsibility for our cloud, AI, tech initiatives, operations and sales.

    目前,Arista 不會替補 COO 職位,而是將組織扁平化。我們將借助高階主管的深厚後備力量,他們已挺身而出,推動我們新的 Arista 2.0 計畫。特別是,我們的首席平台長 John McCool 和集團副總裁 Ken Kiser 已對我們的雲端、人工智慧、技術計畫、營運和銷售承擔了更大的責任。

  • On the noncloud side, 2 seasoned executives have been promoted: Ashwin Kohli, Chief Customer Officer, and Chris Schmidt, Chief Sales Officer, who will together address the global enterprise and provider opportunity. Our leaders have grown up in Arista, with long tenures of a decade or more.

    在非雲端方面,兩位經驗豐富的主管獲得晉升:首席客戶長 Ashwin Kohli 和首席銷售長 Chris Schmidt,他們將共同開拓全球企業與供應商市場機會。我們的領導者長期在 Arista 成長,任期長達十年或更久。

  • We are quite pleased with the momentum across all our 3 sectors: Cloud and AI Titans, Enterprise and Providers. Customer activity is high as Arista continues to impress our customers and prospects with our undeniable focus on quality and innovation. As we build our programmable network underlays based on our Universal Leaf/Spine topology, we are also constructing a suite of overlays such as zero-touch automation, security, telemetry and observability. I would like to invite Ken Duda, our Founder, CTO and recently elected member of the Arista Board, to describe our enterprise NaaS strategy, as we drive to our enterprise campus goal of $750 million in 2025.

    我們對雲端和 AI 巨頭、企業和供應商這三個領域的發展勢頭感到非常滿意。由於 Arista 持續以對品質和創新的堅定專注打動客戶和潛在客戶,客戶活動非常活躍。在基於通用葉/主幹拓撲構建可編程網路底層的同時,我們也在構建一整套覆蓋層,例如零接觸自動化、安全性、遙測和可觀測性。我想邀請我們的創辦人、技術長、最近當選為 Arista 董事會成員的 Ken Duda 來介紹我們的企業 NaaS 策略,以推動我們在 2025 年實現 7.5 億美元的企業園區目標。

  • Over to you, Ken.

    交給你了,肯。

  • Kenneth Duda - Co-Founder, CTO, Senior VP of Software Engineering and Director

    Kenneth Duda - Co-Founder, CTO, Senior VP of Software Engineering and Director

  • Thank you, Jayshree, and thanks, everyone, for being here. I'm Ken Duda, CTO of Arista Networks. I'm excited to talk to you today about NetDL, the Arista Network Data Lake, and how it supports our Network-as-a-Service strategy.

    謝謝你,Jayshree,也謝謝大家來到這裡。我是 Arista Networks 的技術長 Ken Duda。今天很高興與您討論 NetDL(Arista 網路資料湖)以及它如何支援我們的網路即服務策略。

  • From the inception of networking decades ago, networking has involved rapidly changing data: data about how the network is operating, which paths through the network are best and how the network is being used. But historically, most of this data was simply discarded as the network changed state, and that which was collected can be difficult to interpret because it lacks context. Network addresses and port numbers by themselves provide little insight into what users are doing or experiencing.

    從幾十年前網路誕生以來,網路就涉及快速變化的數據:有關網路如何運作、哪些網路路徑最佳以及網路如何被使用的資料。但從歷史上看,隨著網路狀態的變化,這些數據大多被直接丟棄,而收集到的數據也可能因缺乏上下文而難以解釋。網路位址和連接埠號碼本身幾乎無法讓您了解使用者正在做什麼或正在經歷什麼。

  • Recent developments in AI have proved the value of data. But to take advantage of these breakthroughs, you need to gather and store large data sets, labeled suitably for machine learning. Arista is solving this problem with NetDL: we continually monitor every device, not simply taking snapshots, but rather streaming every network event, every counter, every piece of data in real time, archiving a full history in NetDL.

    人工智慧的最新發展證明了數據的價值。但要利用這些突破,您需要收集和儲存大型資料集,並為機器學習進行適當標記。 Arista 正在透過 NetDL 解決這個問題,我們持續監控每個設備,不僅僅是拍攝快照,而是即時傳輸每個網路事件、每個計數器、每個數據,並在 NetDL 中存檔完整的歷史記錄。
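
To make the snapshot-versus-streaming distinction concrete, here is a minimal Python sketch of an append-only state archive in the spirit of what Ken describes. This is an illustrative toy, not Arista's actual NetDL API; every class, method and counter name here is hypothetical.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class StateArchive:
    """Toy stand-in for a network data lake: an append-only event log."""
    events: list = field(default_factory=list)

    def stream(self, device: str, ts: float, key: str, value: Any) -> None:
        # Every state change is recorded as it happens, not just periodic samples.
        self.events.append({"device": device, "ts": ts, "key": key, "value": value})

    def history(self, device: str, key: str) -> list:
        # The full history survives, so transient events between snapshots are not lost.
        return [e for e in self.events if e["device"] == device and e["key"] == key]

archive = StateArchive()
archive.stream("leaf1", 0.0, "ifInErrors", 0)
archive.stream("leaf1", 0.4, "ifInErrors", 12)   # transient error burst
archive.stream("leaf1", 0.9, "ifInErrors", 12)
```

A periodic snapshot taken at t=0 and t=1 would show only the before/after counter values; streaming preserves when the burst occurred, which is the context a later analysis needs.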

  • Alongside this device data, we also collect flow data and inbound network telemetry data gathered by our switches. Then we enrich this performance data further with user, service and application layer data from external sources outside the network, enabling us to understand not just how each part of the network is performing, but also which users are using the network for what purposes. And how the network behavior is influencing their experience.

    除了這些設備資料之外,我們還收集交換器收集的流量資料和入站網路遙測資料。然後,我們利用來自網路外部的外部來源的使用者、服務和應用程式層數據進一步豐富這些效能數據,使我們不僅能夠了解網路每個部分的效能,還能了解哪些使用者將網路用於什麼目的。以及網路行為如何影響他們的體驗。

  • NetDL is a foundational part of the EOS stack, enabling advanced functionality across all of our use cases. For example, in AI fabrics, NetDL enables fabric-wide visibility, integrating network data and NIC data to enable operators to identify misconfigurations or misbehaving hosts and pinpoint performance bottlenecks. But for this call, I want to focus on how NetDL enables Network-as-a-Service.

    NetDL 是 EOS 堆疊的基礎部分,可在我們的所有用例中實現進階功能。例如,在 AI 結構中,NetDL 可實現結構範圍內的可見性,整合網路數據和 NIC 數據,使操作員能夠識別錯誤配置或行為不當的主機,並找出效能瓶頸。但對於這次電話會議,我想重點討論 NetDL 如何實現網路即服務。

  • Network-as-a-Service or NaaS is Arista's strategy for up-leveling our relationship with our customers, taking us beyond simply providing network hardware and software by also providing customers or service provider partners with tools for building and operating services. The customer selects the service model, configure service instances and Arista's CV NaaS handles the rest, equipment selection, deployment, provisioning, building, monitoring and troubleshooting.

    網路即服務或 NaaS 是 Arista 提升與客戶關係的策略,使我們不僅提供網路硬體和軟體,還為客戶或服務供應商合作夥伴提供用於建置和營運服務的工具。客戶選擇服務模型、配置服務實例,Arista 的 CV NaaS 處理其餘的工作,包括設備選擇、部署、配置、建置、監控和故障排除。

  • In addition, CV NaaS provides end user self-service, enabling customers to manage their service systems, provision new endpoints, provision new virtual topologies, set traffic prioritization policies, set access rules and get visibility into their use of the service and its performance. One can think of NaaS as applying cloud computing principles to the physical network, reusable design patterns, scale autonomous operations, multi-tenant from top to bottom with cost-effective automated end user self service.

    此外,CV NaaS 提供最終用戶自助服務,使客戶能夠管理其服務系統、配置新端點、配置新虛擬拓撲、設定流量優先順序策略、設定存取規則並了解服務的使用情況及其效能。人們可以將 NaaS 視為將雲端運算原理應用於實體網路、可重用的設計模式、規模自治操作、自上而下的多租戶以及具有成本效益的自動化最終用戶自助服務。

  • And we couldn't get to the starting line without NetDL, as NetDL provides the database foundation for NaaS service deployment and monitoring. Now NaaS is not a separate SKU, but really refers to a collection of functions. For example, Arista Validated Designs, or AVD, is a provisioning system. It's an early version of our NaaS Service Instance Configuration tool.

    如果沒有 NetDL,我們就無法站上起跑線,因為 NetDL 為 NaaS 服務的部署和監控提供了資料庫基礎。NaaS 並不是一個單獨的 SKU,而是指一組功能的集合。例如,Arista Validated Designs(AVD)是一個配置系統,它是我們 NaaS 服務實例配置工具的早期版本。

  • Our AGNI services provide global location independent identity management needed to identify customers within NaaS. Our UNO product or Universal Network Observability, will ultimately become the service monitoring element of NaaS. And finally, our NaaS solution has security integrated through our ZTN or Zero Trust Networking product that we showcased at RSA this week.

    我們的 AGNI 服務提供在 NaaS 中識別客戶所需、與位置無關的全球身分管理。我們的 UNO 產品(通用網路可觀測性)最終將成為 NaaS 的服務監控元件。最後,我們的 NaaS 解決方案透過本週在 RSA 上展示的 ZTN(零信任網路)產品整合了安全性。

  • Thus, our NaaS vision simultaneously represents a strategic business opportunity for us, while also serving as a guiding principle for our immediate CloudVision development efforts. While we are really excited about the future here, our core promise to our investors and customers is unchanging and uncompromised: we will always put quality first. We are incredibly proud of the amount of success customers have had deploying our products because they really work. And as we push hard building sophisticated new functions in the NetDL and NaaS areas, we will never put our customers' networks at risk by cutting corners on quality. Thank you.

    因此,我們的 NaaS 願景既代表了我們的策略性商業機會,也是我們當前 CloudVision 開發工作的指導原則。雖然我們對未來感到非常興奮,但我們對投資者和客戶的核心承諾始終不變、絕不妥協:我們將永遠把品質放在第一位。我們對客戶部署我們的產品所取得的成功感到非常自豪,因為這些產品確實有效。在我們努力於 NetDL 和 NaaS 領域打造複雜新功能的同時,我們絕不會因在品質上偷工減料而讓客戶的網路面臨風險。謝謝。

  • Jayshree V. Ullal - President, CEO & Chairperson

    Jayshree V. Ullal - President, CEO & Chairperson

  • Thank you, Ken, for your tireless execution in the typical Arista way. In an era characterized by stringent cybersecurity, observability is an essential perimeter and imperative. We cannot secure what we cannot see. We launched CloudVision UNO in February 2024, based on the EOS Network Data Lake foundation that Ken just described, for universal network observability.

    謝謝你,Ken,以典型的 Arista 方式不知疲倦地執行。在一個以嚴格網路安全為特徵的時代,可觀測性是一道重要的防線,也是必要條件。我們無法保護我們看不到的東西。我們於 2024 年 2 月推出了 CloudVision UNO,它基於 Ken 剛才描述的 EOS 網路資料湖基礎,用於實現通用網路可觀測性。

  • CloudVision UNO delivers fault detection, correction and recovery. It also brings deep analysis to provide a composite picture of the entire network with improved discovery of applications, hosts, workloads and IT systems of record.

    CloudVision UNO 提供故障偵測、修正和復原功能。它還提供深入分析,呈現整個網路的綜合圖景,並改進對應用程式、主機、工作負載和 IT 記錄系統的發現。

  • Okay. Switching to AI. Of course, no call is complete without that. As generative AI training tasks evolve, they are made up of many thousands of individual iterations. Any slowdown due to the network can critically impact the application performance, creating inefficient wait states and idling away processor performance by 30% or more.

    好的。轉向人工智慧。當然,沒有這個話題,任何電話會議都不完整。隨著生成式人工智慧訓練任務的發展,它們由數千個單獨的迭代組成。任何由網路造成的減速都會嚴重影響應用程式效能,造成低效的等待狀態,並使處理器效能閒置 30% 或更多。

  • The time taken to reach coherence, known as job completion time, is an important benchmark, achieved by building proper scale-out AI networking to improve the utilization of these precious and expensive GPUs. Arista continues to have customer success across our innovative AI for networking platforms.

    達到一致性所需的時間(稱為作業完成時間)是一項重要基準,透過建立適當的橫向擴展 AI 網路來提高這些寶貴且昂貴的 GPU 的利用率即可達成。Arista 透過我們創新的 AI 網路平台持續為客戶帶來成功。

  • In a recent blog from one of our large Cloud and AI Titan customers, Arista was highlighted for building a 24,000-node GPU cluster based on our flagship 7800 AI Spine. This cluster tackles complex AI training tasks that involve a mix of model and data parallelization across thousands of processors, and Ethernet is proving to offer at least a 10% improvement in job completion performance across all packet sizes versus InfiniBand.

    在我們的一位大型雲端和 AI 巨頭客戶最近發布的部落格中,Arista 因基於我們的旗艦產品 7800 AI Spine 構建了 24,000 個節點的 GPU 叢集而受到關注。該叢集處理複雜的 AI 訓練任務,涉及跨數千個處理器的模型並行與資料並行的組合,而事實證明,與 InfiniBand 相比,乙太網路在所有封包大小下的作業完成效能至少提高了 10%。

  • We are witnessing an inflection in AI networking and expect this to continue throughout the year and the decade. Ethernet is emerging as a critical infrastructure across both front-end and back-end AI data centers. AI applications simply cannot work in isolation and demand seamless communication among the compute nodes, consisting of back-end GPUs and AI accelerators as well as front-end nodes like CPUs, alongside storage and IP/WAN systems.

    我們正在見證 AI 網路的轉折點,並預計這一趨勢將貫穿今年乃至整個十年。乙太網路正在成為前端和後端 AI 資料中心的關鍵基礎設施。AI 應用程式根本無法孤立運作,需要運算節點之間的無縫通信,其中包括後端 GPU 和 AI 加速器,以及 CPU 等前端節點,還有儲存和 IP/WAN 系統。

  • If you recall, in February, I shared with you that we are progressing well in 4 major AI Ethernet clusters that we won versus InfiniBand recently. In all 4 cases, we are now migrating from trials to pilots, connecting thousands of GPUs this year, and we expect production in the range of 10,000 to 100,000 GPUs in 2025. Ethernet at scale is becoming the de facto network of choice for scale-out AI training workloads.

    如果你還記得,我在二月份與大家分享過,我們在最近從 InfiniBand 手中贏得的 4 個主要 AI 乙太網路叢集上進展順利。在這 4 個案例中,我們現在正從試驗轉向試點,今年將連接數千個 GPU,並預計 2025 年投產規模達到 10,000 到 100,000 個 GPU。大規模乙太網路正在成為橫向擴展 AI 訓練工作負載事實上的首選網路。

  • A good AI network needs a good data strategy, delivered by our highly differentiated EOS and network data lake architecture. We are, therefore, becoming increasingly constructive about achieving our AI target of $750 million in 2025.

    良好的人工智慧網路需要良好的資料策略,由我們高度差異化的 EOS 和網路資料湖架構提供。因此,我們對於在 2025 年實現 7.5 億美元的人工智慧目標變得越來越有建設性。

  • In summary, as we continue to set the direction of Arista 2.0 networking, our visibility into new AI and cloud projects is improving, and our enterprise and provider activity continues to progress well. We are now projecting above our Analyst Day range of 10% to 12% annual growth in 2024. And with that, I'd like to turn it over to Chantelle, for the very first time as Arista's CFO, to review financial specifics and tell us more. A warm welcome to you, Chantelle.

    總而言之,隨著我們繼續確定 Arista 2.0 網路的方向,我們對新 AI 和雲端專案的能見度正在提高,我們的企業和供應商業務也持續進展順利。我們現在預計 2024 年的成長將高於分析師日 10% 至 12% 的年增長區間。接下來,我想首次把電話交給 Arista 的財務長 Chantelle,請她回顧財務細節並告訴我們更多。熱烈歡迎你,Chantelle。

  • Chantelle Breithaupt - Senior VP & CFO

    Chantelle Breithaupt - Senior VP & CFO

  • Thank you, Jayshree, and good afternoon. The analysis of our Q1 results and our guidance for Q2 2024 is based on non-GAAP and excludes all noncash stock-based compensation impacts, certain acquisition-related charges and other nonrecurring items. A full reconciliation of our selected GAAP to non-GAAP results is provided in our earnings release.

    謝謝你,Jayshree,下午好。我們對第一季業績的分析和 2024 年第二季的指導是基於非公認會計準則,不包括所有非現金股票薪酬影響、某些收購相關費用和其他非經常性項目。我們的收益報告中提供了我們選擇的 GAAP 與非 GAAP 績效的全面對帳。

  • Total revenues in Q1 were $1.571 billion, up 16.3% year-over-year and above the upper end of our guidance of $1.52 billion to $1.56 billion. This year-over-year growth was led by strength in the enterprise vertical with cloud doing well as expected.

    第一季總營收為 15.71 億美元,年增 16.3%,高於我們 15.2 億美元至 15.6 億美元指引上限。這一同比增長是由企業垂直領域的實力帶動的,其中雲端運算的表現正如預期的那樣好。

  • Services and subscription software contributed approximately 16.9% of revenue in the first quarter, down slightly from 17% in Q4. International revenues for the quarter came in at $316 million or 20.1% of total revenue, down from 22.3% in the last quarter. This quarter-over-quarter reduction reflects the quarterly volatility and includes the impact of an unusually high contribution from our EMEA in-region customers in the prior quarter.

    服務和訂閱軟體在第一季貢獻了約 16.9% 的收入,略低於第四季的 17%。本季國際營收為 3.16 億美元,佔總營收的 20.1%,低於上一季的 22.3%。這一環比下降反映了季度波動性,並包括上一季度歐洲、中東和非洲地區客戶異常高的貢獻的影響。

  • In addition, we continue to see strong revenue growth in the U.S. with solid contributions from our Cloud Titan and Enterprise customers. Gross margin in Q1 was 64.2%, above our guidance of approximately 62%. This is down from 65.4% last quarter and up from 60.3% in Q1 FY '23.

    此外,在我們的 Cloud Titan 和企業客戶的堅實貢獻下,我們在美國的營收持續強勁成長。第一季毛利率為 64.2%,高於我們約 62% 的指引值。比例低於上季的 65.4%,高於 23 財年第一季的 60.3%。

  • The year-over-year margin accretion was driven by 3 key factors: Supply chain productivity gains led by the efforts of John McCool, Mike Capes and his operational team, a stronger mix of Enterprise business and a favorable revenue mix between product, services and software.

    同比利潤率的提升由三個關鍵因素推動:在 John McCool、Mike Capes 及其營運團隊的努力下實現的供應鏈生產力提升、企業業務佔比的提高,以及產品、服務和軟體之間有利的收入組合。

  • Operating expenses for the quarter were $265 million or 16.9% of revenue, up from last quarter at $262.7 million. R&D spending came in at $164.6 million or 10.5% of revenue, down slightly from $165 million last quarter. This reflected increased head count offset by lower new product introduction costs in the period due to timing of prototypes and other costs associated with our next-generation products.

    本季營運費用為 2.65 億美元,佔營收的 16.9%,高於上季的 2.627 億美元。研發支出為 1.646 億美元,佔營收的 10.5%,略低於上季的 1.65 億美元。這反映出,由於原型設計的時間安排和與我們的下一代產品相關的其他成本,該期間新產品推出成本的降低抵消了員工數量的增加。

  • Sales and marketing expense was $83.7 million or 5.3% of revenue, compared to $83.4 million last quarter, with increased head count costs offset by discretionary spending that is delayed until later this year. Our G&A costs came in at $16.7 million or 1.1% of revenue, up from 0.9% of revenue in the prior quarter. Income from operations for the quarter was $744 million or 47.4% of revenue.

    銷售和行銷費用為 8,370 萬美元,佔收入的 5.3%,而上一季為 8,340 萬美元,增加的人員成本被推遲到今年稍後的可自由支配支出所抵消。我們的一般管理費用為 1,670 萬美元,佔營收的 1.1%,高於上一季佔營收的 0.9%。該季度營運收入為 7.44 億美元,佔營收的 47.4%。

  • Other income for the quarter was $62.6 million, and our effective tax rate was 20.9%. This resulted in net income for the quarter of $637.7 million or 40.6% of revenue. Our diluted share number was 319.9 million shares, resulting in a diluted earnings per share number for the quarter of $1.99, up 39% from the prior year.

    本季的其他收入為 6,260 萬美元,我們的有效稅率為 20.9%。這使得該季度的淨利潤達到 6.377 億美元,佔營收的 40.6%。我們的稀釋後股票數量為 3.199 億股,導致本季稀釋後每股收益為 1.99 美元,比上一年增長 39%。
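
As a quick sanity check on the per-share figures above, the math can be reproduced from the stated net income and share count. This is a back-of-the-envelope sketch; the implied prior-year EPS is derived from the quoted 39% growth rather than stated directly in the transcript.

```python
revenue = 1_571.0        # Q1 FY24 revenue, $M
net_income = 637.7       # non-GAAP net income, $M
diluted_shares = 319.9   # diluted share count, millions

eps = net_income / diluted_shares   # ≈ $1.99, matching the reported figure
net_margin = net_income / revenue   # ≈ 40.6% of revenue, as stated
implied_prior_eps = eps / 1.39      # backs out ≈ $1.43 from the 39% YoY growth
```

All three quantities line up with the figures quoted on the call, which is a useful cross-check that the transcript's numbers are internally consistent.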

  • Now turning to the balance sheet. Cash, cash equivalents and investments ended the quarter at approximately $5.45 billion. During the quarter, we repurchased $62.7 million of our common stock. And in April, we repurchased an additional $82 million, for a total of $144.7 million, at an average price of $269.80 per share. We have now completed share repurchases under our existing $1 billion Board authorization, whereby we repurchased 8.5 million shares at an average price of $117.20 per share.

    現在轉向資產負債表。本季末現金、現金等價物和投資約為 54.5 億美元。本季度,我們回購了價值 6,270 萬美元的普通股。4 月份,我們以每股 269.80 美元的平均價格額外回購了 8,200 萬美元的股票,使回購總額達到 1.447 億美元。我們現已完成現有 10 億美元董事會授權下的股票回購,共以每股 117.20 美元的平均價格回購了 850 萬股股票。
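
The repurchase figures can be verified directly. A simple check, reading the April figure as $82 million of stock (consistent with the stated $144.7 million total):

```python
q1_buyback = 62.7        # $M repurchased during Q1
april_buyback = 82.0     # $M repurchased in April
total_buyback = q1_buyback + april_buyback       # $144.7M, as stated

shares_repurchased = 8.5   # millions, under the completed authorization
avg_price = 117.20         # $ per share
program_spend = shares_repurchased * avg_price   # ≈ $996M of the $1B authorization
```

The ≈$996 million implied by 8.5 million shares at $117.20 also squares with the statement that the $1 billion authorization is now complete.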

  • In May 2024, our Board of Directors authorized a new $1.2 billion stock repurchase program, which commences in May 2024 and expires in May 2027. The actual timing and amount of future repurchases will be dependent upon market and business conditions, stock price and other factors.

    2024 年 5 月,我們的董事會批准了一項新的 12 億美元股票回購計劃,該計劃於 2024 年 5 月開始,2027 年 5 月到期。未來回購的實際時間和金額將取決於市場和業務狀況、股價及其他因素。

  • Now turning to operating cash performance for the first quarter. We generated approximately $513.8 million of cash from operations in the period, reflecting strong earnings performance, partially offset by ongoing investments in working capital. DSOs came in at 62 days, up from 61 days in Q4, driven by significant end-of-quarter service renewals. Inventory turns were 1, flat to last quarter. Inventory increased slightly to $2 billion in the quarter, up from $1.9 billion in the prior period, reflecting the receipt of components from our purchase commitments and an increase in switch-related finished goods.

    現在轉向第一季的營運現金表現。在此期間,我們從營運中產生了約 5.138 億美元的現金,反映出強勁的獲利表現,部分被持續的營運資本投資所抵消。應收帳款週轉天數(DSO)為 62 天,高於第四季的 61 天,這是由於季度末服務續約量較大所致。庫存週轉率為 1,與上季持平。本季庫存小幅增加至 20 億美元,高於上一期的 19 億美元,反映出我們依採購承諾收到的零件以及交換器相關製成品的增加。
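
The inventory-turns figure of roughly 1 can be approximated from the other stated numbers, assuming the standard definition of annualized cost of goods sold over average inventory. This is a rough sketch: COGS is inferred here from revenue and the 64.2% non-GAAP gross margin, which is an approximation rather than a disclosed COGS figure.

```python
revenue = 1_571.0                  # Q1 revenue, $M
gross_margin = 0.642               # non-GAAP gross margin
cogs_quarter = revenue * (1 - gross_margin)   # ≈ $562M quarterly COGS (inferred)
avg_inventory = (2_000.0 + 1_900.0) / 2       # $M, average of Q1 and prior quarter

turns = (cogs_quarter * 4) / avg_inventory    # annualized turns, a little above 1
```

The result of roughly 1.1 to 1.2 is consistent with the "turns were 1, flat to last quarter" comment on the call.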

  • Our purchase commitments at the end of the quarter were $1.5 billion, down from $1.6 billion at the end of Q4. We expect this number to level off as lead times continue to improve but will remain somewhat volatile as we ramp up new product introductions. Our total deferred revenue balance was $1.663 billion, up from $1.506 billion in Q4 of fiscal year 2023. The majority of the deferred revenue balance is services related and directly linked to the timing and term of service contracts, which can vary on a quarter-by-quarter basis.

    本季末我們的採購承諾為 15 億美元,低於第四季末的 16 億美元。我們預計,隨著交貨時間持續改善,這一數字將趨於平穩,但隨著我們加大新產品的推出力度,仍會保持一定波動。我們的遞延收入餘額總額為 16.63 億美元,高於 2023 財年第四季的 15.06 億美元。大部分遞延收入餘額與服務相關,並直接取決於服務合約的時間和期限,而這些可能逐季變化。

  • Our product deferred revenue balance decreased by approximately $25 million versus last quarter. We expect 2024 to be a year of significant new product introductions, new customers and expanded use cases. These trends may result in increased customer-specific acceptance clauses and increase the volatility of our product deferred revenue balances.

    我們的產品遞延收入餘額比上季減少了約 2,500 萬美元。我們預計 2024 年將是推出重要新產品、新客戶和擴大用例的一年。這些趨勢可能會導致客戶特定接受條款的增加,並增加我們產品遞延收入餘額的波動性。

  • As mentioned in prior quarters, the deferred balance can move significantly on a quarterly basis independent of underlying business drivers. Accounts payable days were 36 days, down from an unusually high 75 days in Q4, reflecting the timing of inventory receipts and payments. Capital expenditures for the quarter were $9.4 million.

    正如前幾季所提到的,遞延餘額可能會按季度大幅變動,而與基本業務驅動因素無關。應付帳款天數為 36 天,低於第四季異常高的 75 天,反映了庫存收付的時間。該季度的資本支出為 940 萬美元。

  • Now turning to our outlook for the second quarter and beyond. I have now had a quarter working with Jayshree, the leadership team and the broader Arista ecosystem, and I am excited about both our current and long-term opportunities in the markets that we serve. The passion for innovation, our agile business operating model and employee commitment to our customers' success are foundational. We are pleased with the momentum being demonstrated across the segments, Enterprise, Cloud and Providers.

    現在轉向我們對第二季及以後的展望。我已與 Jayshree、領導團隊和更廣泛的 Arista 生態系統合作了一個季度,我對我們所服務市場中當前和長期的機會感到興奮。對創新的熱情、靈活的業務營運模式以及員工對客戶成功的承諾是我們的根基。我們對企業、雲端和供應商各細分市場展現的動能感到高興。

  • With this, we are raising our revenue guidance to an outlook of 12% to 14% growth for fiscal year 2024. On the gross margin front, given the expected end customer mix combined with continued operational improvements, we remain with the fiscal year 2024 outlook of 62% to 64%.

    因此,我們將 2024 財年的營收指引上調至 12% 至 14% 的成長。在毛利率方面,考慮到預期的終端客戶組合以及持續的營運改善,我們維持 2024 財年 62% 至 64% 的展望。

  • Now turning to spending and investments. We continue to monitor both the overall macro environment and overall market opportunities, which will inform our investment prioritization as we move through the year. This will include a focus on targeted hires in leadership roles, R&D and the go-to-market team as we see opportunities to acquire strong talent.

    現在轉向支出和投資。我們持續關注整體宏觀環境和整體市場機會,這將為我們今年的投資優先順序提供依據。當我們看到吸納優秀人才的機會時,這將包括重點在領導職位、研發和市場推廣團隊進行有針對性的招募。

  • On the cash front, while we will continue to focus on supply chain and working capital optimization, we expect some continued growth in inventory on a quarter-by-quarter basis, as we receive components from our purchase commitments. With these sets of conditions and expectations, our guidance for the second quarter, which is based on non-GAAP results and excludes any noncash stock-based compensation impacts and other nonrecurring items is as follows: revenues of approximately $1.62 billion to $1.65 billion, gross margin of approximately 64% and operating margin at approximately 44%. Our effective tax rate is expected to be approximately 21.5% with diluted shares of approximately 320.5 million shares.

    在現金方面,雖然我們將繼續專注於供應鏈和營運資本優化,但由於我們會陸續依採購承諾收到零件,我們預計庫存將逐季持續成長。基於這些條件和預期,我們對第二季的指引(基於非 GAAP 業績,不包括任何非現金股票薪酬影響和其他非經常性項目)如下:營收約為 16.2 億至 16.5 億美元,毛利率約 64%,營業利益率約 44%。我們的有效稅率預計約為 21.5%,稀釋後股數約為 3.205 億股。

  • I will now turn the call back to Liz for Q&A. Liz?

    我現在將把電話轉回給莉茲進行問答。麗茲?

  • Liz Stine - Director of IR Advocacy

    Liz Stine - Director of IR Advocacy

  • Thank you, Chantelle. We will now move to the Q&A portion of the Arista earnings call. To allow for greater participation, I'd like to request that everyone please limit themselves to a single question. Thank you for your understanding. Operator, take it away.

    謝謝你,Chantelle。我們現在將進入 Arista 財報電話會議的問答部分。為了讓更多人有機會參與,我想請大家每人只提一個問題。感謝您的體諒。接線員,請開始。

  • Operator

    Operator

  • (Operator Instructions) And your first question comes from the line of Atif Malik with Citi.

    (操作員說明)您的第一個問題來自花旗集團的 Atif Malik 線路。

  • Unidentified Analyst

    Unidentified Analyst

  • It's Adrienne for Atif. I was hoping you could comment on your raised expectations for the full year with regards to customer mix it sounds like from your gross margin guidance, you're seeing a higher contribution from Enterprise, but I was hoping you could comment on the dynamics you're seeing with your Cloud Titans.

    我是代表 Atif 提問的 Adrienne。我希望您能談談你們上調全年預期背後的客戶組合因素。從你們的毛利率指引來看,企業客戶的貢獻似乎更高,但我也希望您能談談你們在雲端巨頭方面看到的動態。

  • Jayshree V. Ullal - President, CEO & Chairperson

    Jayshree V. Ullal - President, CEO & Chairperson

  • Yes. So as Chantelle and I described, when we gave our guidance in November, we didn't have much visibility beyond 3 to 6 months, and so we had to go with that. The activity in Q1 alone, and I believe it will continue in the first half, has been much beyond what we expected. And this is true across all 3 sectors, Cloud and AI Titans, Providers and Enterprise. So we're feeling good about all 3 and therefore, have raised our guidance earlier than we probably would have done in May. I think we would have ideally liked to look at 2 quarters, Chantelle, what do you think, but I think we felt good enough.

    是的。正如 Chantelle 和我所描述的,當我們在 11 月給出指導時,我們對 3 到 6 個月後的情況沒有太多的了解,因此我們必須遵循這一點。僅第一季的活動就遠遠超出了我們的預期,而且我相信這種活動將在上半年持續下去。這對於雲端和人工智慧巨頭、供應商和企業這三個領域都是如此。因此,我們對這三者都感覺良好,因此比 5 月可能更早地提高了我們的指導意見。我認為我們理想地希望看兩個季度,Chantelle,你覺得怎麼樣,但我認為我們感覺足夠好。

  • Chantelle Breithaupt - Senior VP & CFO

    Chantelle Breithaupt - Senior VP & CFO

  • Yes. No, I think it was the diversified momentum and the mix of that momentum that gave us confidence.

    是的。不,我認為我們看到的是多元化的動力和混合動力給了我們信心。

  • Operator

    Operator

  • And your next question comes from the line of Samik Chatterjee with JPMorgan.

    您的下一個問題來自摩根大通的 Samik Chatterjee。

  • Samik Chatterjee - Head of IT Hardware and Networking Equipment

    Samik Chatterjee - Head of IT Hardware and Networking Equipment

  • I guess, Jayshree and Chantelle, I appreciate the sort of raise in guidance for the full year here. But when I look at it on a half-over-half basis in terms of what you're implying. If I'm doing the math correct, you're implying about a sort of 5%, 6% half-over-half growth, which when I go back and look at previous years, you're -- probably there's only 1 year out of the last 5 or 6 that you've been in that sort of range or below that.

    我想,Jayshree 和 Chantelle,我很感謝你們在此上調全年指引。但當我以下半年對上半年的角度來看你們的暗示:如果我計算正確,你們暗示下半年環比增長約 5%、6%;而當我回顧過去 5、6 年,大概只有 1 年處於該範圍或更低。

  • Every other year it's been better than that. I'm just wondering you mentioned the Q1 activity that you've seen across the board, why are we not seeing a bit more of a half-over-half uptick than in sort of the momentum in the back half?

    每隔一年,情況都比這更好。我只是想知道您提到了您全面看到的第一季活動,為什麼我們沒有看到比後半段的勢頭更多的半過半的上升?

  • Jayshree V. Ullal - President, CEO & Chairperson

    Jayshree V. Ullal - President, CEO & Chairperson

  • It's like anything else. Our numbers are getting larger and larger. So activity has to translate to larger numbers. So of course, if we see it improve even more, we'll guide appropriately for the quarter.

    就像其他任何事情一樣。我們的數字越來越大。因此,活動必須轉化為更大的數字。當然,如果我們看到它有更多改善,我們將為本季提供適當的指導。

  • But at the moment, we're feeling very good just increasing our guide from 10%-to-12% up to 12%-to-14%. As you know, Arista doesn't traditionally do that so early in the year. So please read that as confidence -- cautiously confident or optimistically confident, but nevertheless confident.

    但目前,我們感覺非常好,只是將我們的指引從 10%-12% 上調至 12%-14%。如您所知,按照慣例,Arista 不會在年初這麼早這樣做。因此,請將其解讀為信心:謹慎的信心或樂觀的信心,但無論如何都是信心。

  • Operator

    Operator

  • And your next question comes from the line of Ben Reitzes with Melius Research.

    您的下一個問題來自 Melius Research 的 Ben Reitzes。

  • We will move on to the next question from George Notter with Jefferies.

    我們將繼續討論喬治·諾特和傑弗里斯提出的下一個問題。

  • George Charles Notter - MD & Equity Research Analyst

    George Charles Notter - MD & Equity Research Analyst

  • I want to key in on something I think you guys said earlier in the monologue. You mentioned that Ethernet was 10% better than InfiniBand. And I -- my notes are incomplete here. Could you just remind me exactly what you were talking about there? What is the comparison you're making to InfiniBand? And just anything, I'd love to learn more about that.

    我想強調你們之前在獨白中說過的一些話。您提到乙太網路比 InfiniBand 好 10%。而我——我的筆記在這裡並不完整。你能準確地提醒我你在那裡談論的是什麼嗎?您與 InfiniBand 進行的比較是什麼?任何事情,我都想了解更多。

  • Jayshree V. Ullal - President, CEO & Chairperson

    Jayshree V. Ullal - President, CEO & Chairperson

  • Certainly, George. Historically, as you know, when you look at InfiniBand and Ethernet in isolation, there are a lot of advantages of each technology. Traditionally, InfiniBand has been considered lossless and Ethernet is considered to have some loss properties.

    當然,喬治。如您所知,從歷史上看,當您單獨看待 InfiniBand 和乙太網路時,您會發現每種技術都有許多優點。傳統上,InfiniBand 被認為是無損的,而乙太網路被認為具有一些損耗特性。

  • However, when you actually put a full GPU cluster together along with the optics and everything, and you look at the coherence of the job completion time across all packet sizes, the data has shown -- and this is data that we have gotten from third parties, including Broadcom -- that for just about every packet size in a real-world environment, the job completion time of Ethernet was approximately 10% faster.

    然而,當您實際將完整的 GPU 叢集連同光學元件等所有組件組合在一起,並觀察所有封包大小下作業完成時間的一致性時,數據表明(這是我們從包括 Broadcom 在內的第三方獲得的數據),在真實環境中幾乎每種封包大小下,乙太網路的作業完成時間都大約快 10%。

  • So you can look at these things in silos. You can look at it in a practical cluster, and in a practical cluster we are already seeing improvements on Ethernet. Now don't forget, this is just Ethernet as we know it today. Once we have the Ultra Ethernet Consortium and some of the improvements you're going to see on packet spraying and dynamic load balancing and congestion control, I believe those numbers will get even better.

    所以你可以孤立地看待這些事情,也可以在實際叢集中觀察,而在實際叢集中我們已經看到了乙太網路的改進。現在不要忘記,這只是我們今天所知道的乙太網路。一旦我們有了超乙太網路聯盟(UEC),以及您將在封包噴灑(packet spraying)、動態負載平衡和擁塞控制方面看到的一些改進,我相信這些數字會變得更好。

  • George Charles Notter - MD & Equity Research Analyst

    George Charles Notter - MD & Equity Research Analyst

  • Got it. I assume you're talking about RoCE here as opposed to just straight up Ethernet, is that correct?

    知道了。我假設您在這裡談論的是 RoCE,而不是直接的以太網,對嗎?

  • Jayshree V. Ullal - President, CEO & Chairperson

    Jayshree V. Ullal - President, CEO & Chairperson

  • In all cases, right now, pre-UEC, we're talking about RDMA over Ethernet, exactly -- RoCE version 2, which is the most deployed NIC you have in most scenarios. But with (inaudible) RoCE, we're seeing 10% improvement. Imagine when we go to UEC.

    在所有情況下,現在,在 UEC 之前,我們談論的正是 RDMA over Ethernet,也就是 RoCE 第 2 版,這是大多數情境下部署最多的 NIC。但透過(聽不清楚)RoCE,我們已經看到了 10% 的改進,想像一下當我們採用 UEC 時會怎樣。

  • George Charles Notter - MD & Equity Research Analyst

    George Charles Notter - MD & Equity Research Analyst

  • I know you guys are also working on your own version of Ethernet, presumably, it blends into the UEC standard over time. But what do you think the differential might be there relative to InfiniBand? Do you have a sense on what that might look like?

    我知道你們也在開發自己的乙太網路版本,想必它會隨著時間的推移融入 UEC 標準。但您認為相對於 InfiniBand 可能會有什麼差異呢?你知道那會是什麼樣子嗎?

  • Jayshree V. Ullal - President, CEO & Chairperson

    Jayshree V. Ullal - President, CEO & Chairperson

  • We don't have metrics yet, but it's not like we're working on our own version of Ethernet; we are working on the UEC-compatible and compliant version of Ethernet. And there are 2 aspects of it: what we do on the switch and what others do on the NIC, right? So what we do on the switch, I think, will be -- we've already built an architecture, we call it the Etherlink architecture, that takes into consideration the buffering, the congestion control, the load balancing, and largely, we'll have to make some software improvements.

    我們還沒有指標,但我們並不是在開發自己的乙太網路版本,而是在開發與 UEC 相容且合規的乙太網路版本。它有兩個面向:我們在交換器上做什麼,以及其他人在網路卡上做什麼,對嗎?因此,我認為我們在交換器上所做的,將是我們已經建立的一個稱為 Etherlink 的架構,該架構考慮了緩衝、擁塞控制和負載平衡,而且在很大程度上,我們需要進行一些軟體改進。

  • The NICs, especially at 400 and 800, are where we are looking to see more improvements because that will give us additional performance from the server onto the switch. So we need both halves to work together. Thanks, George.

    NIC,尤其是 400 和 800 的 NIC,是我們希望看到更多改進的地方,因為這將為我們提供從伺服器到交換器的額外效能。所以我們需要雙方共同努力。謝謝,喬治。

  • Operator

    Operator

  • Your next question comes from the line of Ben Reitzes with Melius Research.

    您的下一個問題來自 Melius Research 的 Ben Reitzes。

  • Benjamin Alexander Reitzes - MD & Head of Technology Research

    Benjamin Alexander Reitzes - MD & Head of Technology Research

  • I was wondering if you can characterize how you're seeing NVIDIA in the market right now. And are you seeing yourselves go more head to head? How do you see that evolving? And if you don't mind also, I think NVIDIA moves to a more systems-based approach potentially with Blackwell. How you see that impacting your competitiveness with NVIDIA?

    我想知道您是否可以描述您目前在市場上如何看待 NVIDIA。你們是否看到自己更加正面交鋒?您如何看待這種演變?如果您也不介意的話,我認為 NVIDIA 可能會透過 Blackwell 轉向更基於系統化的方法。您認為這對您與 NVIDIA 的競爭力有何影響?

  • Jayshree V. Ullal - President, CEO & Chairperson

    Jayshree V. Ullal - President, CEO & Chairperson

  • Yes. Thanks, Ben, for a loaded question. First of all, I want to thank NVIDIA and Jensen. I think it's important to understand that we wouldn't have a massive AI networking opportunity if NVIDIA didn't build some fantastic GPUs. So yes, we see them in the market all the time, mostly connecting our networks to their GPUs, and NVIDIA is the market leader there, and I think they've created an incremental market opportunity for us that we are very, very appreciative of.

    是的。謝謝本,提出了一個很有份量的問題。首先,我要感謝 NVIDIA 和 Jensen。我認為重要的是要明白,如果 NVIDIA 沒有打造出如此出色的 GPU,我們就不會擁有巨大的人工智慧網路機會。所以,是的,我們一直在市場上看到他們,主要是將我們的網路連接到他們的 GPU,而 NVIDIA 是該領域的市場領導者,我認為他們為我們創造了一個增量市場機會,對此我們非常非常感激。

  • Now do we see them in the market? Of course, we do. I see them on GPUs. We also see them on the RoCE or RDMA Ethernet NIC side. And then sometimes we see them, obviously, when they're pushing InfiniBand, which has been, for the most part, the de facto network of choice. You might have heard me say, last year or the year before, I was outside looking into this AI networking.

    現在我們在市場上看到它們了嗎?當然,我們這樣做。我在 GPU 上看到它們。我們也在 RoCE 或 RDMA 乙太網路 NIC 端看到它們。有時我們會看到他們,顯然,當他們推動 InfiniBand 時,InfiniBand 在很大程度上已成為事實上的選擇網絡。你可能聽過我說,去年或前年,我在外面研究這個人工智慧網路。

  • But today, we feel very pleased that we are able to be the scale-out network for NVIDIA's GPUs and NICs based on Ethernet. We don't see NVIDIA as a direct competitor yet on the Ethernet side. I think it's 1% of their business. It's 100% of our business. So we don't worry about that overlap at all. And we think we've got 20 years of experience, from founding to now, to make our Ethernet switching better and better on both the front end and back end. So we're very confident that Arista can build the scale-out network and work with NVIDIA's scale-up GPUs. Thank you, Ben.

    但今天,我們感到非常高興,我們能夠基於乙太網路,成為 NVIDIA 的 GPU 和 NIC 的橫向擴展(scale-out)網路。我們還不認為 NVIDIA 在乙太網路方面是直接競爭對手。我認為這只占他們業務的 1%,卻是我們業務的 100%。所以我們根本不擔心這種重疊。我們認為,從創立至今 20 年的經驗,可以使我們的乙太網路交換在前端和後端都變得越來越好。因此,我們非常有信心 Arista 能夠建置橫向擴展網路,並與 NVIDIA 的縱向擴展(scale-up)GPU 配合使用。謝謝你,本。

  • Operator

    Operator

  • Your next question comes from the line of Amit Daryanani with Evercore ISI.

    您的下一個問題來自 Evercore ISI 的 Amit Daryanani。

  • Amit Jawaharlaz Daryanani - Senior MD & Fundamental Research Analyst

    Amit Jawaharlaz Daryanani - Senior MD & Fundamental Research Analyst

  • I guess, Jayshree, given some of the executive transitions you've seen at Arista, can you just perhaps talk about (inaudible) you can, the discussion you've had with the Board around your desire, your commitment to remain the CEO, does anything (inaudible) that would be really helpful.

    我想,Jayshree,考慮到您在Arista 看到的一些高管變動,您能否談談(聽不清楚)您可以,您與董事會圍繞您的願望、您對繼續擔任首席執行官的承諾進行的討論,做任何真正有幫助的事情(聽不清楚)。

  • And then if I just go back to the job completion data that you talked about, given what you just said and the expected improvement, what are the reasons a customer would still use InfiniBand versus Switch more aggressively with Ethernet?

    然後,如果我回到您談到的作業完成數據,考慮到您剛才所說的話以及預期的改進,與乙太網路相比,客戶仍然更積極地使用 InfiniBand 和 Switch 的原因是什麼?

  • Jayshree V. Ullal - President, CEO & Chairperson

    Jayshree V. Ullal - President, CEO & Chairperson

  • Well, first of all, you heard Anshul. I'm sorry to see Anshul decide to do other things. I hope he comes back. We've had a lot of executives make a U-turn over time, and we call them boomerangs. So I certainly hope that's true with Anshul. But we have a very strong bench. And we've had -- we've been blessed to have a very constant bench for the last 15 years, which is very rare in our industry and in Silicon Valley.

    好吧,首先,你聽到安舒爾了,我很遺憾看到安舒爾決定做其他事情。我希望他能回來。隨著時間的推移,我們有許多高階主管態度掉頭,我們稱他們為「迴力鏢」。所以我當然希望安舒爾也是如此。但我們有一個非常強大的替補席。我們很幸運在過去 15 年裡擁有一個非常穩定的替補席,這在我們的行業和矽谷都是非常罕見的。

  • So while we're sorry to see Anshul make a personal decision to take a break, we know he'll remain a well-wisher. And we know the bench strength below Anshul will now step up to do greater things. As for my commitment, I have committed to the Board for multiple years. I think it's the wrong order -- I wish Anshul had stayed and I had retired -- but I'm committed to staying here for a long time.

    因此,雖然我們很遺憾看到安舒爾做出休息的個人決定,但我們知道他仍然會是一個祝福者。我們知道板凳實力,下面的安舒爾現在將站出來做更大的事情。至於我對董事會的承諾,我已經承諾了好幾年。我認為這是錯誤的順序。我希望安舒爾留下來,而我已經退休了,但我決心要在這裡待很久。

  • Operator

    Operator

  • And your next question comes from the line of Antoine Chkaiban with New Street Research.

    您的下一個問題來自 New Street Research 的 Antoine Chkaiban。

  • Antoine Chkaiban - Research Analyst

    Antoine Chkaiban - Research Analyst

  • So as you can see NVIDIA introduced in-network computing capabilities with NVSwitch, performing some calculations inside the switch itself. Perhaps now is not the best time to announce new products, but I'm curious about whether this is something the broader merchant silicon and Ethernet ecosystem could introduce at some point?

    如您所看到的,NVIDIA 透過 NVSwitch 引入了網路內運算功能,在交換器本身內部執行一些運算。也許現在不是宣布新產品的最佳時機,但我很好奇這是否是更廣泛的商業晶片和以太網生態系統在某個時候可以引入的東西?

  • Jayshree V. Ullal - President, CEO & Chairperson

    Jayshree V. Ullal - President, CEO & Chairperson

  • Antoine, are you asking what is our new products for AI? Is that the question?

    Antoine,您是想問我們的人工智慧新產品是什麼嗎?這是問題嗎?

  • Antoine Chkaiban - Research Analyst

    Antoine Chkaiban - Research Analyst

  • No, I'm asking specifically about in-network computing capabilities, NVSwitch can do some matrix multiply and add inside the switch itself. And I was wondering if this is something that the broader merchant silicon ethernet ecosystem could introduce as well?

    不,我具體詢問的是網路內運算能力,NVSwitch 可以在交換器本身內部進行一些矩陣乘法和加法。我想知道更廣泛的商業矽乙太網路生態系統是否也可以引入這一點?

  • Jayshree V. Ullal - President, CEO & Chairperson

    Jayshree V. Ullal - President, CEO & Chairperson

  • Yes. So just for everyone else's benefit, a lot of the in-network compute is generally done as close to the compute layer as possible, where you're processing on the GPU. So that's a very natural place. I don't see any reason why we could not do those functions in the network and offload some of those compute functions onto the network.

    是的。因此,為了其他人的利益,許多網路內運算通常是在盡可能靠近運算層(即處理 GPU 的地方)進行的。所以那是一個非常自然的地方。我不明白為什麼我們不能在網路中執行這些功能並為其中一些計算功能卸載網路。

  • It would require a little more state and built-in processing power, et cetera, but it's certainly very doable. I think it's going to be six of one and half a dozen of the other. Some would prefer it closest to the compute layer and some would like it network-wide for network scale at the network layer. So the feasibility is very much there in both cases, Antoine.

    它需要更多的狀態和內建的處理能力等等,但這當然是非常可行的。我認為這是半斤八兩的事。有些人希望它最接近運算層,有些人則希望在網路層實現網路範圍的規模。因此,安托萬,這兩種情況的可行性都很高。

  • Operator

    Operator

  • And your next question comes from the line of James Fish with Piper Sandler.

    你的下一個問題來自詹姆斯·菲什和派珀·桑德勒的台詞。

  • James Edward Fish - MD & Senior Research Analyst

    James Edward Fish - MD & Senior Research Analyst

  • Anshul, we'll miss having you around. I echo the sentiments there, but I hope to see you soon. Jayshree, how are you guys thinking about timing of the 800 gig optics availability versus use in systems? And you keep alluding to next-gen product announcements for multiple quarters, not just this one, but -- should we expect this to be more around adjacent use cases, the core, including AI or Software? Kind of take us in the product road map direction, if you can?

    安舒爾,我們會想念你在身邊的。我在那裡表達了我的觀點,但我希望很快就能見到你。 Jayshree,你們如何考慮 800 gig 光學元件可用性的時間安排與系統中的使用類型?你不斷提到多個季度的下一代產品公告,不僅僅是這個,但是——我們是否應該期望這更多地圍繞相鄰的用例、核心,包括人工智慧或軟體,有點帶我們進入產品路線圖方向,可以嗎?

  • Jayshree V. Ullal - President, CEO & Chairperson

    Jayshree V. Ullal - President, CEO & Chairperson

  • Yes. James, you might remember like deja vu, we've had similar discussions on 400 gig too. And as you well know, to build a good switching system, you need an ecosystem around it, whether it's the NICs, the optics, the cables, the accessories. So I do believe you'll start seeing some early introduction of optical and switching products for 800 gig, but to actually build the entire ecosystem and take advantage, especially in the NICs, I think will take more than a year. So I think probably more into '25 or even '26.

    是的。 James,你可能記得似曾相識,我們也曾就 400 場演出進行過類似的討論。眾所周知,要建立一個良好的交換系統,您需要一個圍繞它的生態系統,無論是網路卡、光學器件、電纜還是配件。因此,我相信您會開始看到一些早期推出的 800 GB 光纖和交換產品,但要真正建立整個生態系統並利用這一優勢,特別是在 NIC 方面,我認為將需要一年多的時間。所以我認為可能更多的是 25 年甚至 26 年。

  • That being said, I think you're going to see a lot of systems -- I had this discussion earlier, it's six of one and half a dozen of the other. You're going to see a lot of systems where you can demonstrate high-radix scale with 400 gig, go east-west much wider, and build large clusters that are in the tens of thousands. And then once you need -- once you have GPUs that source 800 gig, which even some of the recent GPUs don't, then you'll need not just higher radix, but higher performance. So I don't see the ecosystem of 800 gig limiting the deployment of the AI networks. That's an important thing to remember.

    話雖這麼說,我想您會看到很多系統,我之前也討論過,這是半斤八兩的事。您將看到很多系統,您可以用 400 gig 展示高基數(radix)的規模,向東西向延伸得更寬,並建置數萬規模的大型叢集。然後,一旦您擁有可輸出 800 gig 的 GPU(甚至一些最新的 GPU 也還沒有),那麼您不僅需要更高的基數,還需要更高的效能。所以我不認為 800 gig 的生態系統會限制人工智慧網路的部署。這是一件需要記住的重要事情。

  • Operator

    Operator

  • And your next question comes from the line of Simon Leopold with Raymond James.

    你的下一個問題來自西蒙·利奧波德和雷蒙德·詹姆斯的台詞。

  • W. Chiu - Senior Research Associate

    W. Chiu - Senior Research Associate

  • This is Victor Chiu in for Simon Leopold. Do you expect Arista to see a knock-on effect from AI networking in the front end or at the edge as customers eventually deploy more AI workloads based -- I'm sorry, biased towards inferencing. And then maybe help us understand how we might be able to size this, if that's the case?

    我是替 Simon Leopold 發問的 Victor Chiu。隨著客戶最終部署更多偏向推理的人工智慧工作負載,您是否預期 Arista 會在前端或邊緣看到人工智慧網路的連鎖效應?如果是這樣,也許可以幫助我們了解如何估算其規模?

  • Jayshree V. Ullal - President, CEO & Chairperson

    Jayshree V. Ullal - President, CEO & Chairperson

  • Simon, just (inaudible) in your question. We haven't (inaudible) consideration; that's Phase 2 production. But you're absolutely right to say that as you have more back end, the back end has to connect to something, which typically, rather than reinventing IP and adaptive routing, you would connect to the front end of your compute and storage and WAN networks.

    西蒙,只是(聽不清楚)你的問題。我們沒有(聽不清楚)考慮,那是第二階段的生產。但你說得完全正確:當你有更多後端時,後端必須連接到某些東西,而通常與其重新發明 IP 和自適應路由,你會將其連接到你的運算、儲存和 WAN 網路的前端。

  • So while we do not take that into consideration in our $750 million projection for 2025, we naturally see the deployment of more back-end clusters resulting in a more uniform compute, storage, memory, overall front-end, back-end holistic network for AI coming in the next phase.

    因此,雖然我們沒有考慮到這一點,而且我們預計到2025 年將投入7.5 億美元,但我們自然會看到更多後端叢集的部署,從而帶來更統一的運算、儲存、記憶體、整體前端、後端整體網路。

  • So I think it makes a lot of sense. We -- but we first want to get the clusters deployed and then we'll do the -- a lot of our customers are fully expecting that holistic connection. And that's one -- by the way, one of the reasons they look so favorably at us. They don't want to build these disparate silos and islands of AI clusters. They really want to bring it in terms of a full uniform AI data center.

    所以我認為這很有意義。我們——但我們首先希望部署集群,然後我們會做——我們的許多客戶完全期待這種整體連接。順便說一句,這就是他們如此看好我們的原因之一。他們不想建立這些不同的人工智慧集群孤島。他們確實希望建立一個完全統一的人工智慧資料中心。

  • Operator

    Operator

  • And your next question comes from the line of Meta Marshall with Morgan Stanley.

    你的下一個問題來自摩根士丹利的梅塔馬歇爾。

  • Meta A. Marshall - VP

    Meta A. Marshall - VP

  • Maybe I'll flip James' question and just kind of ask what do you see as kind of some of the bottlenecks from going to -- from pilots to ultimate deployments? It sounds like it's not necessarily 800 gig. And so is it just a matter of time? Are there other pieces of the ecosystem that are -- that need to fall into place before some of those deployments can take place?

    也許我會翻轉詹姆斯的問題,只是問你認為從試點到最終部署的一些瓶頸是什麼?聽起來不一定是800G。那麼這只是時間問題嗎?在進行某些部署之前,生態系中是否還有其他部分需要落實到位?

  • Jayshree V. Ullal - President, CEO & Chairperson

    Jayshree V. Ullal - President, CEO & Chairperson

  • I wouldn't call them bottlenecks, Meta. I would definitely say it's a time-based and familiarity-based situation. The Cloud -- everybody knows how to deploy that; it's sort of plug and play in some ways. But even in the Cloud, if you may recall, there were many use cases that emerged.

    我不會稱它們為瓶頸,Meta。我肯定會說這是一個與時間和熟悉程度相關的情況。雲端的部署方式大家都知道,某種程度上是即插即用的。但即使在雲端,如果您還記得的話,也曾出現過許多用例。

  • The first use case that's emerging for AI networking is, let's just build the fastest training workloads and clusters. And they're looking at performance. Power is a huge consideration, the cooling of the GPUs is a huge part of it. You would be surprised to hear a lot of times, it's just waiting on the facilities and waiting for the infrastructure to be set up, right?

    人工智慧網路出現的第一個用例是,讓我們建立最快的訓練工作負載和叢集。他們正在關注性能。功耗是一個重要的考慮因素,GPU 的冷卻是其中的重要組成部分。很多時候你會驚訝地聽到,它只是在等待設施,等待基礎設施建立起來,對吧?

  • Then on the OS and operating side -- and Ken has been quiet here, I'd love for him to chime in -- there's a tremendous amount of foundational discovery that goes into what they need to do in the cluster. Do they need to do some hashing? Do they need to do load balancing? Do they need to do Layer 2, Layer 3? Do they need visibility features? Do they need to connect it across the WAN or interconnect?

    然後在作業系統和營運方面(Ken 在這裡一直保持沉默,我很希望他插話),在叢集中他們需要做什麼,還有大量的基礎性探索。他們需要做一些雜湊處理嗎?他們需要做負載平衡嗎?他們需要設定第 2 層、第 3 層嗎?他們需要可見性功能嗎?他們是否需要透過 WAN 或互連進行連線?

  • So -- and of course, as you rightly pointed out, there's the whole 400 to 800. But we're seeing less of that, because a lot of it is familiarity and understanding how to operate the cluster with the best job completion time and visibility, manageability and availability of the GPUs. Nobody can tolerate downtime.

    所以——當然,正如您正確指出的那樣,有 400 到 800 個。可見性、可管理性和可用性,沒有人可以容忍停機。

  • Ken, I'd love to hear your point of view on this.

    肯,我很想聽聽你對此的看法。

  • Kenneth Duda - Co-Founder, CTO, Senior VP of Software Engineering and Director

    Kenneth Duda - Co-Founder, CTO, Senior VP of Software Engineering and Director

  • Yes. Thanks, Jayshree. Look, I think that what's limiting people's deployments is the availability of all the pieces. And so there's a huge pent-up demand for this stuff, and we see these clusters getting built as fast as people can build the facilities, get the GPUs and get the networking they need.

    是的。謝謝,傑什裡。聽著,我認為人們缺乏的部署是所有部件的可用性。因此,對這些東西有巨大的被壓抑的需求,我們看到這些集群正在建立,人們可以以最快的速度建造設施、獲得 GPU 並獲得他們需要的網路。

  • I think that we're extraordinarily well positioned here because we've got years of experience building scaled storage clusters with some of the world's largest cloud players, and storage clusters are not identical to AI clusters, but they have some of the same issues with managing a massive-scale back-end network that needs to be properly load-balanced and needs a lot of buffer to manage bursts.

    我認為我們在這方面處於非常有利的位置,因為我們在一些世界上最大的雲廠商中擁有多年構建大規模存儲集群的經驗,並且存儲集群與人工智能集群並不相同,但它們在管理存儲集群方面存在一些相同的問題。

  • And so -- and then some of the congestion management stuff we've done there is also useful in AI networks. And in particular, this InfiniBand topic keeps coming up. And I'd just like to point out that Ethernet is about 50 years old. And over those 50 years, Ethernet has come head-to-head with a bunch of technologies like Token Ring, SONET, ATM, FDDI, HIPPI, Scalable Coherent Interconnect, [Myrinet]. And all of these battles have one thing in common: Ethernet won. And the reason why is because of Metcalfe's law -- the value of a network is quadratic in the number of nodes of the interconnect. And so anybody who tries to build something which is not Ethernet is starting off with a very large quadratic disadvantage. And any temporary advantage they have because of some detail of the tech cycle is going to be quickly overwhelmed by the connectivity advantage you have with Ethernet.

    因此,我們所做的一些擁塞管理工作在人工智慧網路中也很有用。尤其是 InfiniBand 話題不斷出現。我想指出乙太網路已有大約 50 年的歷史。在這 50 年裡,乙太網路與令牌環、SONET、ATM、FDDI、HIPPI、可擴展相干互連、[Mirrornet] 等一系列技術進行了正面交鋒。所有這些戰鬥都有一個共同點。乙太網路贏了。原因是根據梅特卡夫定律,網路的價值是互連節點數量的二次方。因此,任何試圖建造非乙太網路的東西的人都會從一個非常大的二次劣勢開始。他們因技術週期的某些細節而擁有的任何暫時優勢將很快被乙太網路的連接優勢所淹沒。

  • So exactly how many years it takes for InfiniBand to go the way of Fibre Channel, I'm not sure, but that's where it's all headed.

    因此,我認為 InfiniBand 需要多少年才能進入[搖擺]光纖通道,我不確定,但這就是一切的發展方向。

  • Operator

    Operator

  • And your next question comes from the line of Ben Bollin with Cleveland Research Company.

    您的下一個問題來自克利夫蘭研究公司的 Ben Bollin。

  • Benjamin James Bollin - Senior Research Analyst

    Benjamin James Bollin - Senior Research Analyst

  • Jayshree, you made a comment that back when we had guidance in November, you had about 3 to 6 months of visibility. Could you take us through what type of visibility you have today? And maybe compare and contrast the different subsets of customers and how they differ?

    Jayshree,您評論說,當我們在 11 月提供指導時,您將有大約 3 到 6 個月的可見度。您能為我們介紹一下您今天的可見度嗎?也許可以對不同的客戶子集進行比較和競爭,以及它們有何不同?

  • Jayshree V. Ullal - President, CEO & Chairperson

    Jayshree V. Ullal - President, CEO & Chairperson

  • Thank you, Ben. That's a good question. So let me take it by category, like you said. In the Cloud and AI Titans in November, we were really searching for even 3 months visibility, 6 would have been amazing. Today, I think after a year of tough situations for us where the Cloud Titans were pivoting rather rapidly to AI and not thinking about the Cloud as much.

    謝謝你,本。這是個好問題。所以讓我按類別來分類,就像你說的。在 11 月的雲端和人工智慧泰坦中,我們確實在尋找 3 個月的可見性,6 個月就已經很了不起了。今天,我認為在我們經歷了一年的艱難處境之後,雲泰坦們相當迅速地轉向人工智慧,而沒有過多地考慮雲端。

  • We're now seeing a more balanced approach where they're still doing AI, which is exciting, but they're also expanding their regions on the Cloud. So I would say our visibility has now improved to at least 6 months and maybe it gets longer as time goes by.

    我們現在看到了一種更平衡的方法,他們仍然在做人工智慧,這令人興奮,但他們也在雲端上擴展他們的區域。所以我想說,我們的可見度現在已經提高到至少 6 個月,而且隨著時間的推移,它可能會變得更長。

  • On the Enterprise, I don't know. I'm not a bellwether for macro -- everybody else is citing macro, but I'm not seeing macro. What we're seeing with Chris Schmidt and Ashwin and the entire team is a profound amount of activity in Q1, better than we normally see in Q1. Q1 is usually when they come back from the holidays, January is slow, there are some East Coast storms to deal with, winter is still strong. But we have had one of the strongest levels of activity in Q1, which leads us to believe that it can only get better for the rest of the year -- and hence, the guide increase from an otherwise conservative team of Chantelle and myself, right?

    至於企業,我不知道。我不是宏觀經濟的領頭羊,但其他人都在引用宏觀經濟,但我沒有看到宏觀經濟。我們在 Chris Schmidt 和 Ashwin 以及整個團隊中看到的是第一季的大量活動,比我們通常在第一季度看到的要好。 Q1 通常是他們放假回來。一月緩慢。東海岸有一些風暴需要應對,冬季仍然很猛烈。但我們在第一季進行了最強勁的活動之一,這讓我們相信今年剩下的時間裡它只會變得更好,因此,由Chantelle 和我自己組成的保守團隊的指導增加了,對嗎?

  • And then the Tier 2 cloud providers -- I want to speak to them for a moment because not only are they strong for us right now, but they are starting to pick up some AI initiatives as well. So they're not as large as the Cloud Titans, but the combination of the Service Providers and the Tier 2 Specialty Providers is also seeing some momentum. So overall, I would say our visibility has now improved from 3 months to over 6 months. And in the case of the Enterprise, obviously, our sales cycles can be even longer. So it takes time to convert into wins. But the activity has never been higher.

    然後是二級雲端供應商,我想和他們談談,因為他們現在不僅對我們來說很強大,而且他們也開始採取一些人工智慧計畫。因此,它們的規模不如雲端泰坦那麼大,但服務提供者和二級專業提供者的結合也看到了一些動力。總的來說,我想說我們的可見度現在已經從 3 個月提高到 6 個月以上。就企業而言,顯然我們的銷售週期可能更長。因此,轉化為勝利需要時間。但活動卻從未如此之高。

  • Operator

    Operator

  • And your next question comes from the line of Michael Ng with Goldman Sachs.

    你的下一個問題來自高盛的 Michael Ng。

  • Michael Ng - Research Analyst

    Michael Ng - Research Analyst

  • It was very encouraging to hear about the migration of trials to pilots with ANET's production rollout to support GPUs in the range of -- I think you said 10,000 to 100,000 GPUs for 2025. And first, I was just wondering if you could talk about some of the key determinants of where we -- how we end up in that range, high end versus low end?

    聽到 ANET 的生產部署從試驗轉向試點以支援 GPU(我想您說過 2025 年規模在 10,000 到 100,000 個 GPU 之間),非常令人鼓舞。首先,我想知道您能否談談決定我們最終落在該範圍內哪個位置(高端還是低端)的一些關鍵因素?

  • And then second, assuming $250,000 per GPU, that would imply about $25 billion of compute spending. ANET's target of $750 million would only be about 3% of the high end. And I think you've talked about 10% to 15% networking as a percentage of compute historically. So I was just wondering if you could talk about what I may be missing there, if there's anything to call out in those assumptions.

    其次,假設每個 GPU 花費 25 萬美元,這意味著大約 250 億美元的運算支出。 ANET 的 7.5 億美元目標僅佔高階目標的 3% 左右。我認為您曾經談論過網路在歷史上佔計算的百分比 10% 到 15%。所以我只是想知道你是否可以談談我可能缺少什麼,這些假設中是否有什麼需要指出的。

  • Jayshree V. Ullal - President, CEO & Chairperson

    Jayshree V. Ullal - President, CEO & Chairperson

  • Yes. Thank you, Michael. I think we could do better next year. But your point is well taken that in order to go from 10,000 GPUs to 30,000, 50,000, 100,000, a lot of things have to come together. First of all, let's talk about the data center or AI center facility itself.

    是的。謝謝你,麥可。我認為明年我們可以做得更好。但您的觀點很好理解:為了從 10,000 個 GPU 增加到 30,000、50,000、100,000 個,必須將很多事情結合在一起。首先,我們來談談資料中心或AI中心設施本身。

  • There's a tremendous amount of work and lead time that goes into the power, the cooling, the facilities. And so now when you're talking this kind of production as opposed to proving something in the lab, that's a key factor.

    電力、冷卻和設施需要大量的工作和交貨時間。因此,現在當你談論這種生產而不是在實驗室中證明某些東西時,這是一個關鍵因素。

  • The second one is the GPUs: the number of GPUs, the location of the GPUs, the scale of the GPUs, the locality of these GPUs, whether they should go with Blackwell, whether they should build with scale-up inside the server or scale out to the network. So the whole center of gravity -- what's nice to watch, which is why we're more constructive on the 2025 numbers, is that the GPU lead times have significantly improved, which means more and more of our customers will get more GPUs, which in turn means they can build out to scale our network. But again, a lot of work is going into that.

    第二個是 GPU:GPU 的數量、GPU 的位置、GPU 的規模、這些 GPU 的就近性(locality),它們是否採用 Blackwell,是否在伺服器內部縱向擴展(scale up)或橫向擴展(scale out)到網路。所以整個重心,值得一看的(這也是我們對 2025 年的數字更有把握的原因)是 GPU 的交貨時間顯著改善,這意味著越來越多的客戶將獲得更多的 GPU,這反過來又意味著他們可以擴建來擴展我們的網路。但同樣,這方面還有很多工作要做。

  • And the third thing I would say is the scale, the performance, how much radix they want to put in. And then I'll give a quick analogy here. We ran into something similar on the Cloud when we were talking about 4-way CMP or 8-way CMP, or these rail-optimized designs as they are often called, and the number of NICs you connect to go 8-way or 4-way or 12-way, or switch off and go to 800 gig -- the performance and scale will be the third metric. So I think power, GPU locality and performance of the network are the 3 major considerations that allow us to get more positive on the rate of production in 2025.

    我要說的第三件事是規模、效能,以及他們想要投入多少容量。我在這裡打個簡單的比方:當我們談論 4 路 CMP 或 8 路 CMP,或是常被稱為 rail-only 的設計,以及連接的 NIC 數量(採用 8 路、4 路或 12 路,還是改用 800 gig)時,我們在雲端也遇過類似的情況。效能和規模將是第三個指標。因此,我認為電力、GPU 佈局和網路效能是讓我們對 2025 年的生產速度更加樂觀的 3 個主要考量因素。

  • Operator

    Operator

  • And your next question comes from the line of Matthew Niknam with Deutsche Bank.

    您的下一個問題來自德意志銀行的 Matthew Niknam。

  • Matthew Niknam - Director

    Matthew Niknam - Director

  • I've got to ask one more on AI. Sorry to beat a dead horse. But as we think about the stronger start to the year and the migration from trials to pilots, specifically in relation to AI, is there a ramp toward getting to that $750 million next year? And I guess more importantly, is there any material contribution baked into this year's outlook? Or is there any contribution that may be driving the 2-percentage-point increase relative to the prior guide for '24?

    我得再問一個關於人工智慧的問題,抱歉又老調重彈。但當我們考慮到今年的強勁開局以及與人工智慧相關的從試驗到試點的轉變時,明年是否會逐步爬升至 7.5 億美元?我想更重要的是,今年的展望中是否包含任何實質貢獻?或者是否有任何貢獻可能推動了相對於 24 年先前指引的 2 個百分點的增幅?

  • Jayshree V. Ullal - President, CEO & Chairperson

    Jayshree V. Ullal - President, CEO & Chairperson

  • Chantelle, you want to take that? I've been talking of AI a lot. I think you should.

    Chantelle,你想回答這個問題嗎?我已經談了很多人工智慧,我想應該由你來回答。

  • Chantelle Breithaupt - Senior VP & CFO

    Chantelle Breithaupt - Senior VP & CFO

  • Yes, I can take this AI question. So when you think about the $750 million target that we have become more constructive on, per Jayshree's prepared remarks, that's a glide path. So it's not 0 in '24; it's a glide path to '25. So I would say there is some contribution assumed, in the sense that it's a glide path, but it will end in 2025 at the $750 million via a glide path, not a hockey stick. Yes.

    是的,我可以回答這個人工智慧問題。因此,正如 Jayshree 準備好的發言中所說,我們對 7.5 億美元的目標變得更有信心,那是一條漸進的路徑。所以 24 年不是 0,而是通往 25 年的漸進路徑。所以我想說,其中有一些假設的貢獻,因為它是一條漸進路徑,但它將在 2025 年以漸進的方式達到 7.5 億美元,而不是曲棍球棒式的成長。是的。

  • Jayshree V. Ullal - President, CEO & Chairperson

    Jayshree V. Ullal - President, CEO & Chairperson

  • It's not 0 this year, Matt, for sure.

    Matt,今年肯定不是 0。

  • Chantelle Breithaupt - Senior VP & CFO

    Chantelle Breithaupt - Senior VP & CFO

  • Yes.

    是的。

  • Operator

    Operator

  • And your next question comes from the line of Sebastien Naji with William Blair.

    你的下一個問題來自塞巴斯蒂安·納吉和威廉·布萊爾的對話。

  • Sebastien Cyrus Naji - Associate

    Sebastien Cyrus Naji - Associate

  • I've got a non-AI question here. So maybe you can talk a little bit about some of the incremental investments that you're making in your go-to-market this year, particularly as you look to grab some share from competitors. A lot of them are going through some type of disruption, one acquisition or another, et cetera. And then, what might you be doing with channel partners to land more of those mid-market customers as well?

    我這裡有一個非人工智慧問題。因此,也許您可以談談今年在上市過程中進行的一些增量投資,特別是當您希望從競爭對手那裡奪取一些份額時。他們中的許多人正在經歷某種類型的顛覆、一項或另一項收購等等。那麼您可能會與通路合作夥伴一起做些什麼來吸引更多的中階市場客戶呢?

  • Jayshree V. Ullal - President, CEO & Chairperson

    Jayshree V. Ullal - President, CEO & Chairperson

  • Yes. Sebastien, we're probably doing a little more on investment than we have made progress on channel partners, to be honest. But the last couple of years, we were very apologetic about our lead times. Our lead times have improved. So we have stepped up our investment in go-to-market, where I'm expecting Chris Schmidt and Ashwin's team to grow significantly, and judging from the activities they've had and the investments they've been making in '23 and '24, we're definitely going to keep the pedal to the metal on that.

    是的。塞巴斯蒂安,說實話,我們在投資方面所做的可能比在通路合作夥伴方面取得的進展要多一些。但過去幾年,我們對交貨時間感到非常抱歉。我們的交貨時間有所改善。因此,我們加大了進入市場的投資,從他們在 23 年和 24 年開展的活動和投資來看,我預計 Chris Schmidt 和 Ashwin 的團隊將顯著成長,我們肯定會繼續全力以赴。

  • I think our investments in AI and Cloud Titans remain about the same because, while there is a significant technical focus on the systems engineering and product side, we don't see a significant change on the go-to-market side. And on the channel partners, I would say, where this really comes into play, and this will play out over multiple years rather than happening this year, is on the campus.

    我認為我們對人工智慧和雲端泰坦的投資保持不變,因為雖然在系統工程和產品方面有重大的技術投入,但我們在進入市場方面沒有看到重大變化。至於通路合作夥伴,我想說,這真正發揮作用的地方是園區市場,而且這會在多年內逐步展開,不會在今年發生。

  • Today, our approach on the campus is really going after our larger Enterprise customers. We've got 9,000 customers, probably 2,500 of which we're really going to target. And so our mid-market is more targeted at specific verticals like health care, education and the public sector, and then we appropriately work with the channel partners in the region and in the country to deal with that.

    今天,我們在園區的做法其實是為了追求更大的企業客戶。我們有 9,000 名客戶,我們真正的目標客戶可能是 2,500 名。因此,我們的中端市場更針對特定的垂直產業,如醫療保健、教育、公共部門。然後我們與該地區、該國的通路合作夥伴適當合作來解決這個問題。

  • To get to the first billion, I think this will be a fine strategy. As we aim beyond $750 million toward $1 billion, and as we need to go after the second billion, absolutely, we need to do more work on channels. This is still a work in progress.

    為了達到第一個十億,我認為這將是一個很好的策略。由於我們的目標是超過 7.5 億至 10 億美元,而我們需要達到第二個 10 億美元,所以我們絕對需要在通路方面做更多的工作。這項工作仍在進行中。

  • Operator

    Operator

  • Your next question comes from the line of Aaron Rakers with Wells Fargo.

    你的下一個問題來自富國銀行的 Aaron Rakers。

  • Aaron Christopher Rakers - MD of IT Hardware & Networking Equipment and Senior Equity Analyst

    Aaron Christopher Rakers - MD of IT Hardware & Networking Equipment and Senior Equity Analyst

  • I'm going to shift gears away from AI, actually. Jayshree, if we look at the server market over the past handful of quarters, we've seen unit numbers down quite considerably. I'm curious, as you look at some of your larger Cloud customers, how you would characterize the traditional server side, whether you're seeing signs of them moving past this kind of optimization phase, and whether you think a server refresh cycle in front of you could be an incremental catalyst for the company?

    實際上我要把話題從人工智慧移開。Jayshree,如果我們觀察過去幾季的伺服器市場,會發現出貨量大幅下降。我很好奇,當您觀察一些較大的雲端客戶時,您會如何描述傳統伺服器端的情況,是否看到他們正在走出這種最佳化階段的跡象,以及您是否認為眼前的伺服器更新週期可能成為公司的增量催化劑?

  • Jayshree V. Ullal - President, CEO & Chairperson

    Jayshree V. Ullal - President, CEO & Chairperson

  • Yes. No, I think, if you remember, there was this one dreadful year where one of our customers skipped a server cycle. But generally speaking, on the front-end network now, we're going back to the cloud. And we do see server refreshes, and server cycles continue to be in the 3- to 5-year range.

    是的。不,我想如果您還記得的話,曾有可怕的一年,我們的一位客戶跳過了一個伺服器週期。但總的來說,現在在前端網路上,我們正在回歸雲端。我們確實看到伺服器更新,而且伺服器週期持續在 3 到 5 年之間。

  • For performance upgrades, they like 3 years, but occasionally, some of them may go a little longer. So absolutely, we believe there will be another cloud cycle because of the server refresh and the associated use cases, because once you do that on the server, there's, appropriately, the regional spine, then the data center interconnect and the storage, and so much ripple effect from that server use-case upgrade.

    對於效能升級,他們傾向於 3 年,但偶爾有些可能會更長一些。因此,我們絕對相信,由於伺服器更新及相關用例,將會出現另一個雲端週期,因為一旦在伺服器上進行更新,就會相應地帶動區域主幹、資料中心互連和儲存,伺服器用例升級會產生很大的連鎖反應。

  • That side of compute and CPU is not changing; it's continuing to happen. In addition, we're also seeing more and more regional expansion. New regions are being created, designed and outfitted for the cloud by our major Titans.

    計算和 CPU 的這一面並沒有改變。這種事還在繼續發生。除此之外,我們也看到越來越多的區域擴張。我們的主要巨頭正在為雲端創建、設計和裝備新的區域。

  • Operator

    Operator

  • And your next question comes from the line of Karl Ackerman with BNP Paribas.

    您的下一個問題來自法國巴黎銀行的卡爾‧阿克曼 (Karl Ackerman)。

  • Karl Ackerman - Research Analyst

    Karl Ackerman - Research Analyst

  • Jayshree, you spoke about how you are not seeing any slowness in Enterprise. I'm curious whether that is being driven by the growing mix of your software revenue? And do you think the deployment of AI Networks on-prem can be a more meaningful driver for your Enterprise and financial customers in the second half of fiscal '24? Or will that be more of a fiscal '25 event?

    Jayshree,您談到您在 Enterprise 中沒有看到任何緩慢的情況。我很好奇這是否是由您的軟體收入不斷增長所推動的?您認為在 2024 財年下半年,本地部署人工智慧網路是否能為您的企業和金融客戶帶來更有意義的推動力?還是這更像是 25 財年的活動?

  • Jayshree V. Ullal - President, CEO & Chairperson

    Jayshree V. Ullal - President, CEO & Chairperson

  • Well, that's a very good question. I'll have to analyze this some more. I would say our Enterprise activity is really driven by the fact that Ken has produced some amazing software quality and innovation. And we have a very high-quality, universal topology, where you don't have to buy 5 different OSs and 50 different images and operate the network with thousands of people.

    嗯,這是一個非常好的問題。我必須對此進行更多分析。我想說,我們的企業活動實際上是由肯創造了一些令人驚嘆的軟體品質和創新這一事實所驅動的。我們擁有非常高品質的通用拓撲,您無需購買 5 個不同的作業系統和 50 個不同的映像,也無需與數千人一起操作該網路。

  • It's a very elegant architecture that applies to the data center use case that you just outlined, for Leaf/Spine. The same Universal Spine can apply to the campus. It applies to the wide area. It applies to the branch. It applies to security. It applies to observability. And you bring up a good point that while the enterprise use cases for AI are small, we are seeing some activity there as well. Relative to the large AI Titans, they're still very small.

    這是一個非常優雅的架構,適用於您剛剛概述的 Leaf/Spine 資料中心用例。同樣的 Universal Spine 也可以應用在園區。它適用於廣域網路。它適用於分支機構。它適用於安全性。它適用於可觀察性。您提出了一個很好的觀點:雖然人工智慧的企業用例還很小,但我們也看到了一些活動。相對於大型人工智慧泰坦來說,它們仍然很小。

  • But think of them as being back in the phases I was describing earlier: trials, then pilots, then production. So a lot of our enterprise customers are starting to go into the trial phase of GPU clusters. So that's a nice use case as well. But the biggest ones are still in the data center, campus and the general-purpose Enterprise.

    但將它們視為我之前描述的試驗階段、試驗、試生產,因此我們的許多企業客戶開始進入 GPU 叢集的試驗階段。所以這也是一個很好的用例。但最大的仍然是資料中心園區和通用企業。

  • Liz Stine - Director of IR Advocacy

    Liz Stine - Director of IR Advocacy

  • Operator, we have time for one last question.

    接線員,我們還有時間回答最後一個問題。

  • Operator

    Operator

  • And your final question comes from the line of David Vogt with UBS.

    你的最後一個問題來自瑞銀集團的大衛‧沃格特。

  • David Vogt - Analyst

    David Vogt - Analyst

  • So Jayshree, I have a question about, well, I want to go back to AI, the road map and the deployment schedule for Blackwell. It sounds like it's a bit slower than maybe initially expected, with initial customer delivery late this year. How are you thinking about that in terms of your road map specifically, and, in a little more detail, how does that play into what you're thinking about '25?

    Jayshree,我有一個問題,我想回到人工智慧,以及 Blackwell 的路線圖和部署時間表。聽起來這比最初預期的要慢一些,首批客戶交付要到今年稍晚。您在路線圖方面具體是如何考慮這一點的,以及能否更詳細地談談這對您對 25 年的預期有何影響?

  • And does that late delivery maybe put a little bit of a pause on maybe some of the cloud spend in the fall of this year as there seems to be somewhat of a technology transition going on towards Blackwell away from the Legacy product?

    由於似乎正在從傳統產品向 Blackwell 進行某種技術過渡,因此延遲交付是否可能會導致今年秋季的部分雲端支出暫停?

  • Jayshree V. Ullal - President, CEO & Chairperson

    Jayshree V. Ullal - President, CEO & Chairperson

  • Yes. We're not seeing a pause yet. I don't think anybody is going to wait for Blackwell, necessarily, in 2024 because they're still bringing up their GPU clusters. And how a cluster is divided across multiple tenants; the choice of host, memory and storage architectures; optimizations on the GPU for collective communication, libraries and specific workloads; resilience; visibility: all of that has to be taken into consideration.

    是的。我們還沒有看到暫停。我認為沒有人會在 2024 年等待 Blackwell,因為他們仍在推出 GPU 叢集。叢集如何跨多個租戶劃分、主機、記憶體、儲存架構的選擇、用於集體通訊的 GPU 最佳化、庫、特定工作負載、彈性、可見性,所有這些都必須考慮在內。

  • All this is to say, a good scale-out network has to be built, no matter whether you're connecting to today's GPUs or future Blackwells. And so they're not going to pause the network because they're waiting for Blackwell. They're going to get the network ready, whether it connects to a Blackwell or a current H100.

    所有這些都表明,無論您是連接到今天的 GPU 還是未來的 Balckwell,都必須建立良好的橫向擴展網路。所以他們不會因為等待布萊克威爾而暫停網路。他們將為網路做好準備,無論是連接到 Blackwell 還是目前的 H100。

  • So as we see it, the training workloads and the urgency of getting the best job completion time are so important that they're not going to hold back any investment on the network side, and the network side can be ready no matter what the GPU is.

    因此,正如我們所看到的,訓練工作負載以及爭取最佳作業完成時間的緊迫性是如此重要,以至於他們不會在網路側的投資上有所保留,而且無論 GPU 是什麼,網路側都可以做好準備。

  • Liz Stine - Director of IR Advocacy

    Liz Stine - Director of IR Advocacy

  • Thanks, David. This concludes the Arista Networks First Quarter 2024 Earnings Call. We have posted a presentation, which provides additional information on our results, which you can access on the Investors section of our website. Thank you for joining us today, and thank you for your interest in Arista.

    謝謝,大衛。 Arista Networks 2024 年第一季財報電話會議到此結束。我們發布了一份演示文稿,其中提供了有關我們業績的更多信息,您可以在我們網站的投資者部分訪問這些信息。感謝您今天加入我們,並感謝您對 Arista 的興趣。

  • Operator

    Operator

  • Ladies and gentlemen, thank you for joining. This concludes today's call, and you may disconnect now.

    女士們先生們,感謝您的加入。今天的通話到此結束,您現在可以掛斷電話了。