Astera Labs Inc (ALAB) 2024 Q2 Earnings Call Transcript

Summary

Driven by strong demand from the AI industry for its intelligent connectivity solutions, Astera Labs delivered record revenue in the second quarter of fiscal 2024. The company is investing to scale its organization and broaden its product offerings, with a particular focus on its COSMOS software suite. Management expressed confidence in the company's ability to outpace industry growth rates and committed to continued innovation and growth.

They reported record second-quarter revenue of $76.9 million and guided third-quarter revenue to $95-100 million. Astera Labs is focused on five growth vectors intended to grow the business faster than industry growth rates, including collaborations with hyperscalers and AI platform providers. They also discussed the impact of AI ASIC accelerators on GPU sales, the growth potential of AI technology, and the market's transition to Gen 6 technology.

Full Transcript

Disclaimer: The Chinese translations below were produced by Google Translate and are provided for reference only; please refer to the English original as authoritative.

  • Operator

    Operator

  • Good afternoon. My name is Audra, and I will be your conference operator today. At this time, I would like to welcome everyone to the Astera Labs Q2 2024 earnings conference call. (Operator Instructions). Thank you.

    午安.我叫奧德拉,今天我將擔任你們的會議操作員。此時,我歡迎大家參加 Astera Labs 2024 年第二季財報電話會議。(操作員說明)。謝謝。

  • I will now turn the call over to Leslie Green, Investor Relations for Astera Labs. Leslie, you may begin.

    我現在將電話轉給 Astera Labs 投資者關係部門的 Leslie Green。萊斯利,你可以開始了。

  • Leslie Green - IR Contact

    Leslie Green - IR Contact

  • Good afternoon, everyone, and welcome to the Astera Labs second-quarter 2024 earnings conference call. Joining us on the call today are Jitendra Mohan, Chief Executive Officer and Co-Founder; and Sanjay Gajendra, President and Chief Operating Officer and Co-Founder; and Mike Tate, Chief Financial Officer.

    大家下午好,歡迎參加 Astera Labs 2024 年第二季財報電話會議。今天加入我們電話會議的還有執行長兼聯合創始人 Jitendra Mohan; Sanjay Gajendra,總裁兼營運長兼聯合創辦人;和財務長邁克·泰特。

  • Before we get started, I would like to remind everyone that certain comments made in this call today may include forward-looking statements regarding, among other things, expected future financial results, strategies, and plans; future operations; and the markets in which we operate.

    在我們開始之前,我想提醒大家,今天的電話會議中發表的某些評論可能包括有關預期未來財務業績、策略和計劃等的前瞻性陳述;未來的營運;以及我們經營所在的市場。

  • These forward-looking statements reflect management's current beliefs, expectations, and assumptions about future events which are inherently subject to risks and uncertainties that are discussed in detail in today's earnings release and the periodic reports and filings that we file from time to time with the SEC, including the risks set forth in the final prospectus relating to our IPO.

    這些前瞻性陳述反映了管理層當前對未來事件的信念、預期和假設,這些事件本質上受到風險和不確定性的影響,這些風險和不確定性在今天的收益發布以及我們不時向SEC 提交的定期報告和文件中詳細討論。

  • It is not possible for the company's management to predict all risks and uncertainties that could have an impact on these forward-looking statements or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statements.

    公司管理階層不可能預測可能影響這些前瞻性陳述的所有風險和不確定性,也不可能預測任何因素或因素組合可能導致實際結果與任何前瞻性陳述中包含的結果有重大差異的程度。

  • In light of these risks, uncertainties, and assumptions, the results, events or circumstances reflected in the forward-looking statements discussed during this call may not occur, and actual results could differ materially from those anticipated or implied.

    鑑於這些風險、不確定性和假設,本次電話會議期間討論的前瞻性陳述中反映的結果、事件或情況可能不會發生,實際結果可能與預期或暗示的結果有重大差異。

  • All of our statements are based on information available to management as of today, and the company undertakes no obligation to update such statements after the date of this call to conform to these as a result of new information, future events, or changes in our expectations, except as required by law.

    我們的所有聲明均基於管理層截至今日可獲得的資訊。除法律要求外,本公司不承擔在本次電話會議日期之後因新資訊、未來事件或我們預期的變化而更新此類聲明的義務。

  • Also during this call, we will refer to certain non-GAAP financial measures, which we consider to be an important measure of the company's performance. These non-GAAP financial measures are provided in addition to and not as a substitute for or superior to financial results prepared in accordance with US GAAP.

    此外,在這次電話會議中,我們也將提及某些非公認會計準則財務指標,我們認為這是衡量公司績效的重要指標。這些非公認會計原則財務指標是根據美國公認會計原則編制的財務結果的補充,而不是替代或優於根據美國公認會計原則編制的財務結果。

  • A discussion of why we use non-GAAP financial measures and reconciliations between our GAAP and non-GAAP financial measures is available in the earnings release we issued today, which can be accessed through the Investor Relations portion of our website and also be included in our filings with the SEC, which will also be accessible through the Investor Relations portion of our website.

    我們今天發布的收益報告中討論了我們為什麼使用非 GAAP 財務指標,以及我們的 GAAP 與非 GAAP 財務指標之間的調節。該報告可透過我們網站的投資者關係部分取得;相關內容也將包含在我們向 SEC 提交的文件中,同樣可透過我們網站的投資者關係部分查閱。

  • With that, I'd like to turn the call over to Jitendra Mohan, CEO of Astera Labs. Jitendra?

    說到這裡,我想將電話轉給 Astera Labs 執行長 Jitendra Mohan。吉騰德拉?

  • Jitendra Mohan - Chief Executive Officer, Director

    Jitendra Mohan - Chief Executive Officer, Director

  • Thank you, Leslie. Good afternoon, everyone, and thanks for joining our second-quarter conference call for fiscal 2024. AI continues to drive a strong investment cycle as entire industries look to expand their creative output and overall productivity. The velocity and dynamic nature of this investment in AI infrastructure is generating highly complex and diverse challenges for our customers. Astera Labs' intelligent and flexible connectivity solutions are developed ground up to navigate these fast-paced complicated deployments.

    謝謝你,萊斯利。大家下午好,感謝您參加我們的 2024 財年第二季電話會議。隨著整個產業尋求擴大創意產出和整體生產力,人工智慧繼續推動強勁的投資週期。人工智慧基礎設施投資的速度和動態性質為我們的客戶帶來了高度複雜和多樣化的挑戰。Astera Labs 智慧且靈活的連接解決方案經過專門開發,旨在應對這些快節奏的複雜部署。

  • We are working closely with our hyperscaler customers to help them solve these challenges across diverse AI platform architectures that feature both third-party and internally developed accelerators. In addition to these favorable secular trends, we are also benefiting from new company-specific product cycles across multiple technologies, which will also contribute to our growth in the form of higher average silicon content per AI platform.

    我們正在與超大規模客戶密切合作,幫助他們在採用第三方和內部開發的加速器的不同人工智慧平台架構中解決這些挑戰。除了這些有利的長期趨勢外,我們還受益於跨多種技術的新公司特定產品週期,這也將以每個人工智慧平台更高的平均矽含量的形式促進我們的成長。

  • A strong leadership position and great execution by our team resulted in record revenue for Astera Labs in the June quarter, supports our strong outlook for the third quarter, and gives us confidence in our ability to continue outperforming industry growth rates. Astera Labs delivered strong Q2 results, setting our first consecutive record for quarterly revenue, strong non-GAAP operating margin, and positive operating cash flows.

    我們團隊強大的領導地位和出色的執行力為 Astera Labs 在六月季度創造了創紀錄的收入,支持了我們對第三季度的強勁前景,並使我們對繼續超越行業增長率的能力充滿信心。Astera Labs 交付了強勁的第二季業績,連續創下季度收入、強勁的非 GAAP 營運利潤率和正營運現金流的記錄。

  • Our revenue in Q2 was $76.9 million up 18% from the previous quarter and up 619% from the same period in 2023. Non-GAAP operating margin was 24.4%, and we delivered $0.13 of non-GAAP diluted earnings per share. Operating cash flow generation was also strong during the quarter, coming in at $29.8 million. With continued business momentum and a broadening set of growth opportunities, we are investing in our customers by rapidly scaling the organization.

    我們第二季的營收為 7,690 萬美元,比上一季成長 18%,比 2023 年同期成長 619%。非 GAAP 營業利潤率為 24.4%,我們實現了 0.13 美元的非 GAAP 攤薄每股收益。本季營運現金流量也很強勁,達到 2,980 萬美元。憑藉持續的業務動能和不斷擴大的成長機會,我們正在透過快速擴展組織來投資我們的客戶。

  • During the quarter, we expanded our Cloud-Scale Interop Lab to Taiwan and announced the opening of a new R&D center in India. We also announced the appointment of Bethany Mayer to our Board of Directors, bringing additional strategic leadership to the company.

    本季度,我們將雲端規模互通實驗室擴展到台灣,並宣佈在印度開設新的研發中心。我們也宣布任命 Bethany Mayer 為董事會成員,為公司帶來額外的策略領導力。

  • Today, Astera Labs is focused on three core technology standards: PCI Express, Ethernet, and Compute Express Link. We are shipping three separate product families supporting these different connectivity protocols, all generating revenue and in various stages of adoption.

    如今,Astera Labs 專注於三個核心技術標準:PCI Express、乙太網路和 Compute Express Link。我們正在推出三個獨立的產品系列,支援這些不同的連接協議,所有這些都產生收入並處於不同的採用階段。

  • Let me touch upon our business with each of these product families and how we support them with our differentiated architecture and COSMOS software suite. Then I will turn the call over to Sanjay to dive deeper into our growth strategy. Finally, Mike will provide additional details on our Q2 results and our Q3 financial guidance.

    讓我談談我們與每個產品系列的業務,以及我們如何透過我們的差異化架構和 COSMOS 軟體套件來支援它們。然後我會將電話轉給 Sanjay,更深入地探討我們的成長策略。最後,麥克將提供有關我們第二季業績和第三季財務指導的更多詳細資訊。

  • First, let us talk about PCI Express. During the quarter, we saw continued strong demand for our Aries product family to drive reliable PCI Gen 5 connectivity in AI systems by delivering robust signal integrity and link stability. While merchant GPU suppliers drove early adoption of PCI Gen 5 into their systems over the past year, we are now also seeing our hyperscaler customers introduce and ramp new AI server programs based upon their internally developed accelerators utilizing PCI Gen 5.

    首先,我們來談談 PCI Express。本季度,我們看到對 Aries 產品系列的持續強勁需求,其透過提供強大的訊號完整性和鏈路穩定性,在 AI 系統中實現可靠的 PCI Gen 5 連接。雖然商用 GPU 供應商在過去一年率先在其系統中採用 PCI Gen 5,但我們現在也看到超大規模客戶基於其內部開發、採用 PCI Gen 5 的加速器,導入並量產新的 AI 伺服器專案。

  • Looking ahead, AI accelerator processing power is continuing to increase at an incredible pace. The next milestone for the AI technology evolution is the commercialization of PCI Gen 6, which doubles the connectivity bandwidth within AI servers, creating new challenges for link reach, reliability, and latency.

    展望未來,人工智慧加速器的處理能力將繼續以令人難以置信的速度成長。AI 技術發展的下一個里程碑是 PCI Gen 6 的商業化,它將 AI 伺服器內的連接頻寬增加了一倍,為鏈路覆蓋範圍、可靠性和延遲帶來了新的挑戰。

  • Our Aries 6 PCIe retimers family helps to solve these challenges with the next generation of our software-defined architecture, offering a seamless upgrade path to our widely deployed and field-tested Gen 5 solutions. We have started shipping initial quantities of pre-production orders of our PCIe Gen 6 solution, Aries 6.

    我們的 Aries 6 PCIe 重定時器系列有助於透過下一代軟體定義架構解決這些挑戰,為我們廣泛部署和現場測試的第 5 代解決方案提供無縫升級路徑。我們已開始運送 PCIe Gen 6 解決方案 Aries 6 的首批預生產訂單。

  • We ship and support our hyperscaler customers' initial program developments that are based on Nvidia's Blackwell platform, including GB200. We look forward to supporting more significant production ramps in the quarters to come.

    我們出貨並支援超大規模客戶基於 Nvidia Blackwell 平台(包括 GB200)的初始專案開發。我們期待在未來幾季支援更大規模的量產爬坡。

  • Next, let us talk about Ethernet. Our portfolio of Taurus Ethernet smart cable modules helps relieve connectivity bottlenecks by overcoming reach, signal integrity, and bandwidth issues by enabling robust 100-gig per lane connectivity over copper cables or AEC. Today, we are pleased to announce that our 400-gig Taurus Ethernet SCMs have shifted into volume production with an expected ramp through the back half of 2024.

    接下來,我們來談談乙太網路。我們的 Taurus 乙太網路智慧電纜模組產品組合透過銅纜(AEC)實現強大的每通道 100G 連接,克服距離、訊號完整性和頻寬問題,有助於緩解連接瓶頸。今天,我們很高興地宣布,我們的 400G Taurus 乙太網路 SCM 已進入量產,預計在 2024 年下半年持續放量。

  • This ramp is happening across multiple platforms in multiple cable configurations, and we are working with multiple cable partners to support the expected volumes. Taurus will be ramping across a multitude of 400-gig applications to scale out connectivity on both AI compute platforms, as well as general-purpose compute systems. We are excited about the breadth and diversity of our Taurus design wins and expect the product family to be accretive to our corporate growth rate going forward.

    這一放量正在多個平台、多種電纜配置上進行,我們正與多家電纜合作夥伴合作以支援預期的出貨量。Taurus 將在眾多 400G 應用中放量,以擴展 AI 運算平台以及通用運算系統上的連接。我們對 Taurus 設計勝出的廣度和多樣性感到興奮,並預計該產品系列將在未來提升我們的整體成長率。

  • Next is Compute Express Link or CXL. We continue to work closely with our hyperscaler customers on a variety of use cases and applications for CXL. In Q2, we shipped material volume of our Leo products for pre-production large-scale deployment in data centers.

    接下來是 Compute Express Link(CXL)。我們繼續與超大規模客戶就 CXL 的各種用例和應用密切合作。第二季度,我們為資料中心的預生產大規模部署出貨了可觀數量的 Leo 產品。

  • We expect to see data center platform architects utilize CXL technology to solve memory bandwidth and capacity bottlenecks using our Leo family of products. The initial deployments are targeting memory expansion use cases, with production ramps starting in 2025 when new CXL-capable CPUs are broadly available.

    我們預期資料中心平台架構師將利用 CXL 技術,透過我們的 Leo 產品系列解決記憶體頻寬和容量瓶頸。最初的部署針對記憶體擴展用例,量產將於 2025 年開始,屆時支援 CXL 的新 CPU 將廣泛上市。

  • Finally, I would like to spend a moment on COSMOS, which is a software platform that brings all of our product families together. We have discussed how COSMOS not only runs on our chips but also in our customers' operating stacks to deliver seamless customization, optimization, and monitoring.

    最後,我想花點時間談談 COSMOS,這是一個將我們所有產品系列整合在一起的軟體平台。我們討論了 COSMOS 如何不僅在我們的晶片上運行,而且在客戶的操作堆疊中運行,以提供無縫的客製化、優化和監控。

  • The combination of our semiconductor and hardware solutions with COSMOS software enables our product to become the eyes and ears of connectivity infrastructure, helping fleet managers to ensure their AI and cloud infrastructure is operating at peak utilization. By improving the efficiency of their data centers, our customers are able to generate higher ROI and reduce downtime.

    我們的半導體和硬體解決方案與 COSMOS 軟體相結合,使我們的產品成為連接基礎設施的眼睛和耳朵,幫助機群管理者確保其 AI 和雲端基礎設施以最高利用率運作。透過提高資料中心的效率,我們的客戶能夠獲得更高的投資報酬率並減少停機時間。

  • To summarize, sustained secular trends in AI adoption; design wins across diverse AI platforms at hyperscalers, featuring both third-party and internally developed accelerators; and increasing average dollar content in next-generation GPU-based AI platforms gives us confidence in our ability to outperform industry growth rates.

    總而言之,人工智慧採用的持續長期趨勢;在超大規模企業多樣化 AI 平台(涵蓋第三方和內部開發的加速器)上的設計勝出;以及下一代基於 GPU 的 AI 平台中平均美元含量的不斷增加,讓我們對超越產業成長率的能力充滿信心。

  • With that, let me turn the call over to our President and COO, Sanjay Gajendra, to discuss some of our recent product announcements and our long-term growth strategy.

    接下來,讓我將電話轉給我們的總裁兼營運長 Sanjay Gajendra,討論我們最近發布的一些產品和我們的長期成長策略。

  • Sanjay Gajendra - President, Chief Operating Officer, Director

    Sanjay Gajendra - President, Chief Operating Officer, Director

  • Thanks, Jitendra. And good afternoon, everyone. We are pleased with our robust Q2 results and strong top-line outlook for Q3. But we are even more excited about the volume and breadth of opportunities that lie ahead. Today, I will focus on five growth vectors that we believe will help us to grow our business faster than industry growth rates over the long term.

    謝謝,吉騰德拉。大家下午好。我們對第二季強勁的業績和第三季強勁的營收前景感到滿意。但我們對未來機會的數量和廣度更加感到興奮。今天,我將重點放在五個成長方向,我們相信這些成長方向將有助於我們的業務長期成長速度快於產業成長率。

  • First, Astera Labs is in a unique position with design wins across diverse AI platform architectures featuring both third-party and internally developed accelerators. This diversity gives us multiple paths to grow our business. This hybrid approach of using third-party and internally developed accelerators allows hyperscalers to optimize their fleet to support unique workload requirements and infrastructure limitations, while also improving capital investment efficiency.

    首先,Astera Labs 處於獨特的地位,在採用第三方和內部開發加速器的多種 AI 平台架構中取得了設計勝出。這種多樣性為我們提供了多種發展業務的途徑。這種同時使用第三方和內部開發加速器的混合方法,使超大規模企業能夠優化其伺服器機群,以支援獨特的工作負載需求和基礎設施限制,同時提高資本投資效率。

  • Our intelligent connectivity platform with its flexible software-based architecture enables portability and seamless reuse between platforms while creating growth opportunities for all our product families. In addition to the third-party GPU platforms, we also expect to see several large deployments based on internally developed AI accelerators hitting production volume over the next few quarters and driving incremental PCIe and Ethernet volumes for us.

    我們的智慧連接平台具有靈活的基於軟體的架構,可實現平台之間的可移植性和無縫重複使用,同時為我們所有產品系列創造成長機會。除了第三方 GPU 平台之外,我們還預計基於內部開發的 AI 加速器的多個大型部署將在未來幾季實現量產,並為我們推動 PCIe 和乙太網路銷售的成長。

  • Second, we see increasing content on next-generation AI platforms. Nvidia's Blackwell GPU architecture is particularly exciting for us as we expect to see strong growth opportunities based on our design wins as hyperscalers compose solutions based on Blackwell GPUs, including GB200, across their data center infrastructure.

    其次,我們看到下一代人工智慧平台上的內容不斷增加。Nvidia 的Blackwell GPU 架構對我們來說尤其令人興奮,因為我們預計,隨著超大規模企業在其資料中心基礎設施中基於Blackwell GPU(包括GB200)構建解決方案,我們將看到基於我們設計成果的強勁成長機會。

  • To support various AI workloads, infrastructure challenges, software, power and cooling requirements, we expect multiple deployment variants for this new GPU platform. For example, Nvidia cited 100 different configurations for Blackwell in their most recent earnings call.

    為了支援各種人工智慧工作負載、基礎設施挑戰、軟體、電源和冷卻需求,我們預計這個新的 GPU 平台有多種部署變體。例如,Nvidia 在最近的財報電話會議中引用了 Blackwell 的 100 種不同配置。

  • This growing trend of complexity and diversity presents an exciting opportunity for Astera Labs as our flexible silicon architecture and COSMOS software suite can be harnessed to customize the connectivity backbone for a diverse set of deployment scenarios. Overall, we expect our business to benefit from the Blackwell introduction with higher average dollar content of our products per GPU, driven by a combination of increasing volumes and higher ASPs.

    這種複雜性和多樣性不斷增長的趨勢為 Astera Labs 提供了令人興奮的機會,因為我們可以利用靈活的晶片架構和 COSMOS 軟體套件為各種部署場景定制連接主幹。總體而言,我們預計我們的業務將從 Blackwell 的推出中受益,在產量增加和平均售價提高的共同推動下,每個 GPU 的產品平均美元含量更高。

  • The next growth vector is the broadening applications and use cases for our Aries product family. Aries is in its third generation now and represents the gold standard for PCIe retimers in the industry. The introduction of the new Aries 6 retimers built upon the company's widely deployed and battle-tested PCIe 5 retimers and the industry transition to PCIe Gen 6 will be a catalyst for increasing PCIe retimer content for Astera. Our learnings from hundreds of design wins and production deployment over the last several years enables us to quickly deploy PCIe Gen 6 technology at scale.

    下一個成長方向是擴大我們 Aries 產品系列的應用和用例。Aries 目前已進入第三代,代表了業界 PCIe 重定時器的黃金標準。基於該公司廣泛部署和久經考驗的 PCIe 5 重定時器而推出的新型 Aries 6 重定時器以及行業向 PCIe Gen 6 的過渡將成為增加 Astera 的 PCIe 重定時器內容的催化劑。我們從過去幾年的數百個設計勝利和生產部署中汲取經驗教訓,使我們能夠快速大規模部署 PCIe Gen 6 技術。

  • As Jitendra noted, we are now shipping initial quantities of pre-production volume for Aries 6 and currently have meaningful backlog in place to support the initial deployment of hyperscaler AI servers featuring Nvidia's Blackwell GPUs, including GB200. We're also very excited about the incremental PCIe connectivity market expansion that will be driven by multi-rack GPU clustering.

    正如 Jitendra 所指出的,我們現在正在運送 Aries 6 的首批預生產量,目前有大量積壓訂單,以支援採用 Nvidia Blackwell GPU(包括 GB200)的超大規模 AI 伺服器的初始部署。我們也對多機架 GPU 叢集驅動的 PCIe 連線市場增量擴張感到非常興奮。

  • Similar to the dynamic within the Ethernet AEC business, the reach limitations of passive PCIe copper cables are a bottleneck for the number of GPUs that can be clustered together. Our purpose-built Aries smart cable modules solve these issues by providing robust signal integrity and link stability over materially longer distances, improving rack airflow while actively monitoring and optimizing link health.

    與乙太網路 AEC 業務中的動態類似,被動 PCIe 銅纜的範圍限制是可叢集在一起的 GPU 數量的瓶頸。我們專用的 Aries 智慧電纜模組透過在更長的距離上提供強大的訊號完整性和鏈路穩定性、改善機架氣流、同時主動監控和優化鏈路運行狀況來解決這些問題。

  • This PCIe AEC opportunity is in the early stages of adoption and deployment, and we view the multi-rack GPU clustering application as a new and growing market opportunity for our Aries product family. In June, we announced the industry's first demonstration of end-to-end PCIe optical connectivity to provide unprecedented reach for larger GPU clusters.

    這個 PCIe AEC 機會正處於採用和部署的早期階段,我們將多機架 GPU 叢集應用視為我們 Aries 產品系列的一個新的、不斷增長的市場機會。6 月,我們宣布業界首次展示端對端 PCIe 光纖連接,為更大的 GPU 叢集提供前所未有的覆蓋範圍。

  • We are proud to broaden our PCIe leadership once again by demonstrating robust PCIe links over optical interconnects between GPUs, CPUs, CXL memory devices and other PCIe endpoints. This breakthrough expands our intelligent connectivity platform to allow customers to seamlessly scale and extend high-bandwidth low-latency PCI interconnects over optics.

    我們很自豪能夠透過在 GPU、CPU、CXL 記憶體設備和其他 PCIe 端點之間的光學互連上展示強大的 PCIe 鏈路,再次擴大我們的 PCIe 領導地位。這項突破擴展了我們的智慧連接平台,使客戶能夠透過光學無縫擴展和擴展高頻寬低延遲 PCI 互連。

  • Overall, we expect our Aries PCIe retimer business to deliver strong growth as system complexity, platform diversity, and speeds continue to increase and on average, result in higher retimer content per GPU in next-generation AI platforms.

    總體而言,隨著系統複雜性、平台多樣性和速度持續增加,我們預計Aries PCIe 重定時器業務將實現強勁增長,平均而言,將導致下一代AI 平台中每個GPU 的重定時器內容更高。

  • Next, in addition to the strong growth prospect of our Aries product family across the PCIe ecosystem, we are also seeing our Taurus product family for Ethernet AEC application start to meaningfully contribute to the growth in the back half of 2024. What is exciting about these ramps is the diversity in application and use cases.

    接下來,除了我們的 Aries 產品系列在 PCIe 生態系統中的強勁成長前景之外,我們也看到用於乙太網路 AEC 應用的 Taurus 產品系列開始為 2024 年下半年的成長做出有意義的貢獻。這些放量令人興奮之處在於應用和用例的多樣性。

  • We are seeing demand for our Taurus product family for both AI and general compute platforms. We are supporting the market with multiple cable configurations, including straight, Y cables, and X cables. We will be shipping volume into hyperscaler build-outs, supporting multiple cable vendors to enable a diverse supply chain that is crucial for hyperscalers. Overall, we are very excited about Taurus becoming yet another engine of growth as we look to expand the top line while also diversifying our product family contributions.

    我們看到 AI 和通用運算平台對我們 Taurus 產品系列的需求。我們以多種電纜配置支援市場,包括直連電纜、Y 型電纜和 X 型電纜。我們將向超大規模企業的建置大量出貨,並支援多家電纜供應商,以建立對超大規模企業至關重要的多元化供應鏈。總體而言,我們對 Taurus 成為另一個成長引擎感到非常興奮,因為我們希望在擴大營收的同時,讓產品系列的貢獻更加多元化。

  • Last but not least, CXL is an important technology to solve memory bandwidth and capacity bottlenecks in compute platforms. We are working closely with our hyperscaler partners to demonstrate various use cases for this technology and starting to deploy our Leo CXL controllers in pre-production racks in data centers. We have incorporated the learnings, customization, and security requirements into our COSMOS software and have the most robust cloud-ready CXL solution in the industry.

    最後但並非最不重要的一點是,CXL 是解決運算平台記憶體頻寬和容量瓶頸的重要技術。我們正在與超大規模合作夥伴密切合作,演示該技術的各種用例,並開始在資料中心的預生產機架中部署我們的 Leo CXL 控制器。我們已將學習、客製化和安全要求納入我們的 COSMOS 軟體中,並擁有業內最強大的雲端就緒 CXL 解決方案。

  • We have demonstrated that our Leo CXL Smart Memory Controllers improve application performance and reduce TCO in compute platforms. Very importantly, we can accomplish many of these performance gains with zero application-level software changes or upgrades. Overall, we remain very excited about the potential of CXL in data center applications.

    我們已經證明,我們的 Leo CXL 智慧記憶體控制器可提高運算平台中的應用程式效能並降低 TCO。非常重要的是,我們可以透過零應用程式級軟體更改或升級來實現許多效能提升。總的來說,我們對 CXL 在資料中心應用中的潛力仍然感到非常興奮。

  • Finally, our close collaboration and front-row seat with hyperscalers and AI platform providers continues to yield valuable insights regarding the direction of compute technologies and the connectivity topologies that will be required to support them. This close collaboration is helping us identify new product and business opportunities and additional engagement models across our entire intelligent connectivity platform, which we believe will drive strong, long-term growth for Astera.

    最後,我們與超大規模企業和人工智慧平台提供商的密切合作和前排席位繼續就計算技術的方向以及支援它們所需的連接拓撲提供有價值的見解。這種密切合作正在幫助我們在整個智慧連接平台上發現新產品和商業機會以及其他參與模式,我們相信這將推動 Astera 的強勁、長期成長。

  • With that, I will turn the call over to our CFO, Mike Tate, who will discuss our Q2 financial results and our Q3 outlook.

    接下來,我將把電話轉給我們的財務長 Mike Tate,他將討論我們第二季的財務表現和第三季的前景。

  • Michael Tate - Chief Financial Officer

    Michael Tate - Chief Financial Officer

  • Thanks, Sanjay, and thanks to everyone for joining the call. This overview of our Q2 financial results and Q3 guidance will be on a non-GAAP basis. The primary difference in Astera Labs' non-GAAP metrics is stock-based compensation and its related income tax effects. Please refer to today's press release available on the Investor Relations section of our website for more details on both our GAAP and non-GAAP Q3 financial outlook, as well as a reconciliation of our GAAP to non-GAAP financial measures presented on this call.

    謝謝桑傑,也謝謝大家加入通話。我們第二季財務表現和第三季指導的概述將基於非公認會計原則。Astera Labs 的非 GAAP 指標的主要區別在於基於股票的薪酬及其相關的所得稅影響。請參閱我們網站投資者關係部分今天發布的新聞稿,以了解有關我們的GAAP 和非GAAP 第三季度財務前景的更多詳細信息,以及本次電話會議中提出的GAAP 與非GAAP 財務指標的調節表。

  • For Q2 of 2024, Astera Labs delivered record quarterly revenue of $76.9 million, which was up 18% from the previous quarter and 619% higher than the revenue in Q2 of 2023. During the quarter, we shipped products to all major hyperscalers and AI accelerator manufacturers. We recognized revenue across all three of our product families during the quarter, with the Aries product being the largest contributor that is seen from continued momentum in AI based platforms.

    2024 年第二季度,Astera Labs 實現創紀錄的季度收入 7,690 萬美元,比上一季成長 18%,比 2023 年第二季營收高出 619%。在本季度,我們向所有主要的超大規模供應商和人工智慧加速器製造商發貨了產品。我們在本季度確認了所有三個產品系列的收入,從基於人工智慧的平台的持續成長勢頭可以看出,Aries 產品是最大的貢獻者。

  • In Q2, Taurus revenues contributed to -- continued to primarily shift into 200-gig Ethernet-based systems, and we expect Taurus revenue to now diversify further as we begin to ship volume into 400-gig Ethernet-based systems in the third quarter. Q2 Leo revenues were largely from customers purchasing pre-production volumes for the development of their next-generation CXL-capable compute platforms, with our customers' production launch timing being dependent on the data center server CPU refresh cycle.

    第二季度,Taurus 的收入仍主要來自基於 200G 乙太網路的系統;隨著我們在第三季度開始向基於 400G 乙太網路的系統批量出貨,我們預計 Taurus 的收入將進一步多元化。第二季 Leo 的收入主要來自客戶為開發下一代支援 CXL 的運算平台而購買的預生產量,而客戶的量產啟動時間取決於資料中心伺服器 CPU 的更新週期。

  • Q2 non-GAAP gross margins was 78% and was down 20 basis points compared to 78.2% in Q1 of 2024 and better than our guidance of 77%. Non-GAAP operating expenses for Q2 were $41.2 million, up from $35.2 million in the previous quarter and consistent with our guidance. Within non-GAAP operating expenses, R&D expenses was $27.1 million, sales and marketing expense was $6.3 million, and general and administrative expenses was $7.8 million. Non-GAAP operating margin for Q2 was 24.4%.

    第二季非 GAAP 毛利率為 78%,比 2024 年第一季的 78.2% 下降 20 個基點,優於我們 77% 的指引。第二季非 GAAP 營運費用為 4,120 萬美元,高於上一季的 3,520 萬美元,與我們的指引一致。在非 GAAP 營運費用中,研發費用為 2,710 萬美元,銷售和行銷費用為 630 萬美元,一般和管理費用為 780 萬美元。第二季非 GAAP 營業利益率為 24.4%。

  • Interest income in Q2 was $10.3 million. Our non-GAAP tax provision was $6.8 million for the quarter, which represents a tax rate of 23% on a non-GAAP basis. Non-GAAP fully diluted share count for Q2 was 175.3 million shares, and our non-GAAP diluted earnings per share for the quarter was $0.13. Cash flow from operating activities for Q2 was $29.8 million, and we ended the quarter with cash, cash equivalents, and marketable securities of just over $830 million.

    第二季利息收入為 1,030 萬美元。本季我們的非 GAAP 稅項準備金為 680 萬美元,以非 GAAP 計算,稅率為 23%。第二季非 GAAP 完全稀釋後股票數量為 1.753 億股,本季非 GAAP 稀釋後每股收益為 0.13 美元。第二季經營活動產生的現金流量為 2,980 萬美元,本季末我們的現金、現金等價物及有價證券略高於 8.3 億美元。
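
    (For reference, the reported $0.13 non-GAAP diluted EPS can be reproduced from the other figures cited above; the sketch below simply restates that arithmetic, treating every input as the rounded figure stated on the call, so the result is approximate by construction.)

    ```python
    # Cross-check the reported non-GAAP diluted EPS from the other Q2 figures on the call.
    revenue_m = 76.9          # Q2 revenue, $M
    op_margin = 0.244         # non-GAAP operating margin
    interest_income_m = 10.3  # interest income, $M
    tax_provision_m = 6.8     # non-GAAP tax provision, $M
    diluted_shares_m = 175.3  # non-GAAP fully diluted shares, millions

    pretax_m = revenue_m * op_margin + interest_income_m  # operating income + interest
    net_income_m = pretax_m - tax_provision_m
    eps = net_income_m / diluted_shares_m

    print(f"${eps:.2f}")  # $0.13, matching the reported figure
    ```
    
    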

  • Now turning to our guidance for Q3 of fiscal 2024, we expect Q3 revenue to increase within a range of $95 million and $100 million, up roughly 24% to 30% sequentially from the prior quarter. We believe our Aries product family will continue to be the largest component of revenue and will be the primary driver of sequential growth in Q3, driven by growing volume deployment with our customers' AI servers.

    現在轉向我們對 2024 財年第三季的指導,我們預計第三季營收將成長在 9,500 萬美元至 1 億美元範圍內,比上一季連續成長約 24% 至 30%。我們相信,在客戶人工智慧伺服器部署量不斷增長的推動下,我們的 Aries 產品系列將繼續成為收入的最大組成部分,並將成為第三季度環比增長的主要驅動力。
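
    (For reference, the "roughly 24% to 30% sequentially" range follows directly from the Q2 revenue of $76.9 million reported earlier; the sketch below restates that arithmetic using the figures as stated on the call.)

    ```python
    # Derive the sequential growth range implied by the Q3 revenue guidance.
    q2_revenue_m = 76.9                       # Q2 revenue, $M
    q3_guide_low_m, q3_guide_high_m = 95.0, 100.0  # Q3 guidance range, $M

    growth_low = q3_guide_low_m / q2_revenue_m - 1
    growth_high = q3_guide_high_m / q2_revenue_m - 1

    print(f"{growth_low:.1%} to {growth_high:.1%}")  # 23.5% to 30.0%, i.e. roughly 24% to 30%
    ```
    
    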

  • We also expect our Taurus family to drive solid growth quarter over quarter as design wins within new 400-gig Ethernet-based systems ramp into volume production. We expect non-GAAP gross margins to be approximately 75%. The sequential decline in gross margin is being driven by an expected product mix shift towards hardware solutions during the quarter.

    我們也預計,隨著基於 400G 乙太網路的新系統中的設計勝出進入量產,我們的 Taurus 系列將逐季推動穩健成長。我們預計非 GAAP 毛利率約為 75%。毛利率環比下降是由於本季預期產品組合轉向硬體解決方案所致。

  • We expect non-GAAP operating expenses to be in the range of approximately $46 million to $47 million as we remain aggressive in expanding our R&D resource pool across headcount and intellectual property. Interest income is expected to be approximately $10 million. Our non-GAAP tax rate should be approximately 20%, and our non-GAAP fully diluted share count is expected to be approximately 177 million shares. Adding this all up, we're expecting non-GAAP fully diluted earnings per share of a range of approximately $0.16 to $0.17.

    我們預計非 GAAP 營運費用將在約 4,600 萬美元至 4,700 萬美元之間,因為我們將繼續積極擴大研發資源庫的人員數量和智慧財產權。利息收入預計約1000萬美元。我們的非 GAAP 稅率應約為 20%,我們的非 GAAP 完全稀釋股數預計約為 1.77 億股。將所有這些加起來,我們預計非 GAAP 完全稀釋每股收益約為 0.16 美元至 0.17 美元。

  • This concludes our prepared remarks. And once again, we are very much appreciative of everyone joining the call. And now we will open the line for questions. Operator?

    我們準備好的演講到此結束。我們再次非常感謝大家加入這次電話會議。現在我們將開通提問熱線。操作員?

  • Operator

    Operator

  • (Operator Instructions) Harlan Sur, JPMorgan.

    (操作員指令)Harlan Sur,摩根大通。

  • Harlan Sur - Analyst

    Harlan Sur - Analyst

  • Good afternoon. Thanks for taking my question and congratulations on the strong results. During the quarter, lots of concerns around your large GPU customer and one of their next-generation GPU SKUs, the GB200. Glad that the team could clarify that your dollar content across all Blackwell GPU SKUs is actually rising versus prior-generation Hopper.

    午安。感謝你們回答我的問題,並祝賀你們取得強勁的業績。在本季度,市場對你們的大型 GPU 客戶及其下一代 GPU SKU 之一 GB200 存在許多擔憂。很高興團隊能夠澄清,與上一代 Hopper 相比,你們在所有 Blackwell GPU SKU 中的美元含量實際上都在上升。

  • But as you guys mentioned, AI ASIC accelerator mix is rapidly rising, and actually, we believe outgrowing GPUs both this year and next year and accounting for 50% of the XPU mix sort of next year, right? And with ASIC, it's 100% PCIe based. And as you pointed out, right, many of these ASIC customers are still in the early stages of the ramp.

    但正如你們所提到的,AI ASIC 加速器的佔比正在迅速上升,實際上,我們相信其今年和明年的成長都將超過 GPU,並在明年佔 XPU 組合的約 50%,對嗎?而 ASIC 是 100% 基於 PCIe 的。正如您所指出的,許多 ASIC 客戶仍處於放量的早期階段。

  • So given all of this, given some of the new product ramps with your AEC solution, what's the team's visibility and confidence level on driving continued quarter-on-quarter growth from here maybe over the next several quarters?

    因此,考慮到所有這些,考慮到 AEC 解決方案帶來的一些新產品的推出,團隊在未來幾季推動持續季度環比增長的可見度和信心水平如何?

  • Jitendra Mohan - Chief Executive Officer, Director

    Jitendra Mohan - Chief Executive Officer, Director

  • Harlan, thank you so much for the question. It is great to be in the place that we are here today. We feel very confident about what is to come. Clearly, we don't guide more than one quarter out, so please don't take that as any guidance.

    哈倫,非常感謝你的提問。很高興我們能走到今天這一步。我們對未來充滿信心。顯然,我們不會提供超過一個季度的指引,所以請不要將此視為任何指引。

  • But we really believe that we are in the early innings of AI here. All of the hyperscalers are increasing their CapEx targets for the rest of the year. 2025 is expected to be even higher. We heard that the Llama model requires 10 times more compute in order to solve that. So all of these trends are basically driving a radical shift in technology.

    但我們確實相信我們正處於人工智慧的早期發展階段。所有超大規模企業都在提高今年剩餘時間的資本支出目標。 2025 年預計會更高。我們聽說 Llama 模型需要 10 倍的計算才能解決這個問題。因此,所有這些趨勢基本上都在推動技術的根本轉變。

  • We are seeing, as you correctly pointed out, a lot of our hyperscaler customers ramp their internally developed AI accelerators in addition to deploying third-party AI accelerators. And we are very pleased that we have design wins across all of these different platforms. Our customers are ramping their platforms, and we are ramping multiple product families.

    正如您正確指出的那樣,我們看到,除了部署第三方人工智慧加速器之外,我們的許多超大規模客戶還加強了內部開發的人工智慧加速器的力度。我們非常高興我們在所有這些不同平台上都獲得了設計勝利。我們的客戶正在擴大他們的平台,我們也在擴大多個產品系列。

  • As Sanjay mentioned, both Aries and Taurus are ramping into these new platforms. So we feel very good about what is in store for the future and feel that with the rising content on a per-GPU basis, we will be able to outpace the market growth in the long term.

    正如 Sanjay 所提到的,Aries 和 Taurus 都正在進入這些新平台。因此,我們對未來的前景感到非常樂觀,並認為隨著每個 GPU 的內容持續增加,我們長期將能夠超越市場成長。

  • Harlan Sur - Analyst

    Harlan Sur - Analyst

  • No, I appreciate that. And on top of the strong AI demand trend pulls, on top of the new product ramps that you guys articulated today, one thing I haven't baked into my model is the penetration of your retimer technology into general-purpose servers, right?

    不,我很欣賞這一點。除了強勁的人工智慧需求趨勢拉動之外,除了你們今天闡述的新產品升級之外,我還沒有納入我的模型中的一件事是你們的重定時器技術滲透到通用伺服器中,對吧?

  • And the good news is that we are finally starting to see the flash vendors aggressively bringing -- finally bringing Gen 5 PCIe SSDs to the market, which could potentially unlock the retimer opportunities in general-purpose servers where the Gen 5 retimer content today is still zero.

    好消息是,我們終於開始看到快閃記憶體供應商積極地(終於)將 Gen 5 PCIe SSD 推向市場,這有可能釋放通用伺服器中的重定時器機會;目前通用伺服器中的 Gen 5 重定時器內容仍然為零。

  • So what's the team's outlook? Do you see that there may be some penetration starting in 2025 of your retimer solutions into general-purpose server? And maybe if you could size that potential opportunity for us?

    那麼團隊的前景如何?您是否認為從 2025 年開始,您的重定時器解決方案可能會滲透到通用伺服器中?也許您能為我們估量一下潛在的機會?

  • Sanjay Gajendra - President, Chief Operating Officer, Director

    Sanjay Gajendra - President, Chief Operating Officer, Director

  • Yes, absolutely, Harlan. Sanjay here. Good to hear your voice. Yes. So in general, that's a correct statement. We have several design wins on the compute side. Just for reasons like you highlighted, either SSDs not being Gen 5 ready or for dollars being sort of factoring into the AI platforms, there has been a slower-than-expected growth on the general compute.

    是的,絕對是,哈倫。我是 Sanjay。很高興聽到你的聲音。是的。總的來說,這是一個正確的說法。我們在一般運算方面已取得多項設計訂單。正如您所強調的,無論是 SSD 尚未支援 Gen 5,還是資金被優先投入人工智慧平台,通用運算的成長速度都低於預期。

  • But at some point, like we keep saying, the servers are going to fall off the rack at some point, given how long they've been in the fleet. So we do expect the general compute to start picking up, especially as both AMD and Intel get to production with their Turin and the kind of Granite Rapids based CPUs.

    但就像我們一直說的,考慮到這些伺服器已在機群中服役多久,它們總有一天會汰換下架。因此,我們確實預期通用運算將開始回升,特別是當 AMD 和英特爾的 Turin 以及 Granite Rapids 系列 CPU 進入量產時。

  • So overall, 2025, we do expect that the compute platform will start figuring in terms of being meaningful revenue growth. Like I noted, we do have design wins already in these platforms for Aries retimers. But also, I would like to add that we do have design wins for our Taurus Ethernet module application as well in general compute.

    因此,總的來說,我們確實預期 2025 年運算平台將開始帶來有意義的營收成長。正如我所指出的,我們的 Aries 重定時器已經在這些平台上取得了設計訂單。此外,我想補充的是,我們的 Taurus 乙太網路模組應用在通用運算領域也已取得設計訂單。

  • So we should see sort of the two-engine growth story along the general compute to go along with all of the things we shared on AI, both for third-party or merchant GPUs, as well as the big change that we are seeing now as the ramp in the internally developed accelerators, those things being a meaningful and a significant driver for our growth.

    因此,除了我們在人工智慧方面分享的一切(無論是第三方/商用 GPU,還是我們現在看到的重大變化,也就是內部開發加速器的量產爬坡)之外,我們還應該會看到通用運算這條成長線,形成雙引擎的成長故事;這些都是我們成長的有意義且重要的驅動力。

  • Harlan Sur - Analyst

    Harlan Sur - Analyst

  • Thank you.

    謝謝。

  • Operator

    Operator

  • Joe Moore, Morgan Stanley.

    喬摩爾,摩根士丹利。

  • Joe Moore - Analyst

    Joe Moore - Analyst

  • Great. Thank you. I wonder if you could talk about the competitive dynamics within PCI Gen 5 retimers. Are you seeing any number of people who have qualified solutions in China and in the US? Are you seeing any encroachment there? And then in terms of PCI Gen 6, can you talk about the prospects for when you start to see volume there?

    太好了。謝謝。我想請您談談 PCIe Gen 5 重定時器的競爭動態。您是否看到中國和美國有一些廠商已具備合格的解決方案?是否出現任何蠶食的情況?另外,就 PCIe Gen 6 而言,您能談談預計何時開始看到量產出貨的前景嗎?

  • Sanjay Gajendra - President, Chief Operating Officer, Director

    Sanjay Gajendra - President, Chief Operating Officer, Director

  • Yes, absolutely. Let me take that, Joe. So overall, this is a big and growing market. I think that fact is clear. I mean, the fact that you have larger names jumping into the mix sort of validates the market that the retimer represents. Now a couple of points to keep in mind is that connectivity products, especially PCI Express, tends to have a certain nuance to it which is the fact that we are the device in the middle.

    是的,絕對可以。讓我來回答,喬。總的來說,這是一個龐大且不斷成長的市場。我想這個事實很清楚。我的意思是,有更大的廠商加入戰局,這本身就驗證了重定時器所代表的市場。現在有幾點要記住:連接產品,尤其是 PCI Express,往往有一個細微之處,就是我們是位於中間的裝置。

  • We are always in between GPU, storage, networking, and so on and interoperation, especially at high-volume cloud-scale deployment becomes critical. So what we have done in the last three years, four years is really work shoulder to shoulder with our hyperscaler and AI platform providers to ensure that the interoperation is met, the platform-level deployment, whether it is diagnostic, telemetry, firmware management is all addressed, including the COSMOS software that we provide from a management -- fleet management and diagnostic type of capability.

    我們始終處於 GPU、儲存、網路等之間,而互通性(尤其是在大批量雲端規模部署時)變得至關重要。因此,我們在過去三、四年所做的,其實是與超大規模企業和人工智慧平台供應商並肩合作,確保互通性得到滿足,並解決平台層級部署的各項需求,無論是診斷、遙測還是韌體管理,包括我們提供的 COSMOS 軟體所具備的機群管理與診斷功能。

  • Those all have been integrated into our customers' operating stack. So in general, the picture I'm trying to paint here is that the tribal knowledge that we have built, the penetration that we have not just with the silicon but also software does give us a significant advantage compared to our competitors.

    這些都已整合到客戶的運作堆疊中。因此,總的來說,我想描繪的是:我們累積的內部實務知識,以及我們不僅在晶片、也在軟體上的滲透,確實讓我們相較於競爭對手擁有顯著優勢。

  • Now having said that, we will continue to work hard. We have several design wins for PCIe Gen 6 like we shared in today's call that are all designed around the next-generation GPU platform, specifically the Blackwell-based GPUs from Nvidia, which I publicly noted to support Gen 6.

    話雖如此,我們仍將繼續努力。正如今天電話會議中分享的,我們在 PCIe Gen 6 方面已取得多項設計訂單,這些設計都圍繞下一代 GPU 平台,特別是 Nvidia 基於 Blackwell 的 GPU;該平台已公開宣布支援 Gen 6。

  • So we'll continue to work through them. We are currently shipping pre-production volume to support some of the initial ramps, including for GB200-based platforms. So overall, we feel good about the position that we are in, both in terms of Gen 5, as well as transitioning those designs into Gen 6 as the platforms develop and grow.

    因此,我們會繼續推進這些案子。我們目前正在出貨預量產數量,以支援一些初期的量產爬坡,包括基於 GB200 的平台。所以總的來說,無論是 Gen 5,還是隨著平台發展和成長將這些設計過渡到 Gen 6,我們都對自身所處的位置感到滿意。

  • Joe Moore - Analyst

    Joe Moore - Analyst

  • Great. Thank you.

    太好了。謝謝。

  • Operator

    Operator

  • Blayne Curtis, Jefferies LLC.

    布萊恩·柯蒂斯,傑弗里斯有限責任公司。

  • Blayne Curtis - Analyst

    Blayne Curtis - Analyst

  • Hey, thanks for taking my question. I just want to ask you in terms of the September outlook, you talked about meaningful revenue from AEC. I mean, I think the other point was that the gross margin was because of the mix, which I'm assuming is because of that ramp. But just trying to size it, I know you don't break out the segments, but if you can kind of just give us some broad strokes as to how much of the growth is coming from retimers versus Aries in September?

    嘿,謝謝你回答我的問題。我想問關於 9 月的展望:您談到了來自 AEC 的有意義的營收。我想另一點是毛利率受到產品組合的影響,我猜這與那個量產爬坡有關。我只是想估個規模。我知道你們不拆分各產品線,但能否大致描述一下,9 月的成長中重定時器與 Aries 各佔多少?

  • Michael Tate - Chief Financial Officer

    Michael Tate - Chief Financial Officer

  • Yes, Blayne. The margins will come down to the extent we sell more hardware versus silicon, so Taurus is definitely one of those drivers. Also, we do modules on the Aries side, and both -- we're seeing growth in both of those.

    是的,布萊恩。我們賣的硬體相對於晶片的比重越高,毛利率就會相應下降,所以 Taurus 絕對是其中一個因素。此外,我們在 Aries 這邊也有模組產品,而這兩者我們都看到了成長。

  • So when you look at the growth guidance we are giving in third quarter, you have the contribution from Taurus, you have the incremental modules on Aries. But also, we are seeing a lot of growth just from Aries Gen 5 going into AI servers. And a lot of new platforms -- and the platforms generally are getting more content per platform. So when you look at the growth, I think it is kind of balanced between those three drivers, largely.

    因此,當您看我們第三季的成長指引時,有來自 Taurus 的貢獻,也有 Aries 增量模組的貢獻。此外,我們也看到 Aries Gen 5 進入人工智慧伺服器帶來的大量成長。還有許多新平台,而且每個平台的內容通常也在增加。所以就成長來看,我認為大致上是這三個驅動因素之間的平衡。

  • Blayne Curtis - Analyst

    Blayne Curtis - Analyst

  • Got you, thanks. And then I want to ask on the Gen 6 adoption moving from pre-production to production. The main GPU in the market supports Gen 6. I think the CPUs that would talk Gen 6 are going to be a bit of a way -- over your way. So I'm just kind of curious, the catalyst there, do you expect Gen 6 to be in the market next or even if there is not CPUs that kind of speak Gen 6?

    明白了,謝謝。接著我想問 Gen 6 從預量產走向量產的採用情況。市場上主要的 GPU 已支援 Gen 6,但我認為支援 Gen 6 的 CPU 還要一段時間才會出現。所以我有點好奇那個催化劑:即使還沒有支援 Gen 6 的 CPU,您是否仍預期 Gen 6 接下來就會進入市場?

  • Jitendra Mohan - Chief Executive Officer, Director

    Jitendra Mohan - Chief Executive Officer, Director

  • Blayne, it is a great observation. Let me say that as these compute platform gets more and more powerful to address these growing AI models, the only way to keep them fed, to keep these GPUs utilized is to get more and more data in and out of these platforms.

    布萊恩,這是一個很好的觀察。我想說的是,隨著這些運算平台為了應對不斷成長的人工智慧模型而變得越來越強大,要讓它們持續獲得資料、讓這些 GPU 保持高利用率,唯一的方法就是讓越來越多的資料進出這些平台。

  • So in the past, the CPUs played a very central role in terms of navigating -- in terms of being the conduit for all of this information. But with the new accelerated compute architecture, CPU is largely an orchestration of a control engine. For the most part, it does do a few other things. But in general, you are trying to get the data in and out of the GPU using the scale-out and scale-up networks that are made up of either PCI Express, Ethernet or NVLink protocols.

    因此,在過去,CPU 在作為所有這些資訊的管道方面扮演著非常核心的角色。但在新的加速運算架構中,CPU 主要扮演編排與控制引擎的角色,大體上也還做一些其他事情。但一般來說,您會嘗試透過由 PCI Express、乙太網路或 NVLink 協定組成的橫向擴展(scale-out)與縱向擴展(scale-up)網路,將資料傳入和傳出 GPU。

  • And as these protocols go faster and faster, we end up seeing more and more demand for the product that we have. And as a result, as these new systems get deployed, we see higher content for us on a per-GPU basis, and it's largely to improve the GPU utilization through these increased data rates.

    隨著這些協議的發展越來越快,我們最終會看到對我們產品的需求越來越多。因此,隨著這些新系統的部署,我們在每個 GPU 的基礎上看到了更高的內容,這很大程度上是為了透過提高資料速率來提高 GPU 利用率。

  • Blayne Curtis - Analyst

    Blayne Curtis - Analyst

  • Thanks so much.

    非常感謝。

  • Sanjay Gajendra - President, Chief Operating Officer, Director

    Sanjay Gajendra - President, Chief Operating Officer, Director

  • Blayne, if I can add one more point, you didn't quite directly ask us for the September quarter growth. I do want to be abundantly clear on one point, which is the growth that we are forecasting for September quarter is based upon not just the power sampling but all of the additional production ramps that we are seeing for both the third-party platforms but also internally developed accelerators. That is what is modeling and driving the growth that we are highlighting for September, although there are maybe other things that you can look at the overall stuff.

    Blayne,如果我可以補充一點:您並沒有直接問到 9 月季度的成長。我想非常明確地說明一點:我們對 9 月季度的成長預測,不只是基於初期送樣,而是基於我們在第三方平台以及內部開發加速器上看到的所有額外量產爬坡。這就是我們為 9 月強調的成長背後的模型與驅動力,儘管您也可以從整體的其他面向來看。

  • Operator

    Operator

  • Tom O'Malley, Barclays.

    湯姆·奧馬利,巴克萊銀行。

  • Thomas O'Malley - Analyst

    Thomas O'Malley - Analyst

  • Hey, guys. Thanks for taking my questions. Congrats on the nice results. I just wanted to ask a broader network architecture question. You talked a little bit more about PCIe over optical. And when you look at the back end today, I think there's a lot of efforts to improve the Ethernet offering as it compares to kind of the leader in the market as they kind of expand NVLink.

    嘿,夥計們。感謝您回答我的問題。恭喜取得好成績。我想問一個更廣泛的網路架構問題。您稍微多談到了 PCIe over optical。看看今天的後端網路,我認為業界正投入許多努力改進乙太網路方案,以對抗市場領導者不斷擴展的 NVLink。

  • Could you talk about when you see the inflection point with PCIe over optical kind of being the majority of the back end? Is that something that's coming sooner, just kind of the timeframe there. And then just explain a little further, I think you mentioned that it comes with a lot of additional retiming content when you use those cables. Just anything additional there, and then I have a follow up.

    您能否談談,您認為 PCIe over optical 成為後端主流的轉折點會在何時出現?那會很快到來嗎?大概的時間框架是什麼?然後請再進一步說明:我記得您提過,使用這些光纜時會帶來許多額外的重定時內容。先請教這些,之後我還有一個後續問題。

  • Jitendra Mohan - Chief Executive Officer, Director

    Jitendra Mohan - Chief Executive Officer, Director

  • Yes. Let me take that. This is Jitendra, Tom. The architectures for AI systems are definitely evolving. And actually, I would say, they are evolving at a very rapid pace. Different customers use different architectures to craft their systems. If you look at Nvidia-based systems, they do use NVLink, which is, of course, a proprietary closed interface. The rest of the world largely uses protocols that are either PCI Express or Ethernet, or they are based on PCI Express and Ethernet.

    是的。讓我來吧。這是吉騰德拉,湯姆。人工智慧系統的架構無疑正在不斷發展。事實上,我想說,它們正在以非常快的速度發展。不同的客戶使用不同的架構來建構他們的系統。如果您查看基於 Nvidia 的系統,您會發現它們確實使用 NVLink,這當然是專有的封閉介面。世界其他地區主要使用 PCI Express 或乙太網路協議,或它們基於 PCI Express 和乙太網路。

  • And the choice of particular protocol is really dependent upon the infrastructure that the hyperscalers have and how they choose to deploy this technology. Clearly, we play in both. Our Taurus Ethernet Smart Cable Modules support Ethernet. And now with our Aries Smart Cable Modules, we are able to support our PCI Express as well.

    特定協議的選擇實際上取決於超大規模企業擁有的基礎設施以及他們選擇如何部署該技術。顯然,我們兩者都參與。我們的 Taurus 乙太網路智慧電纜模組支援乙太網路。現在,借助我們的 Aries 智慧電纜模組,我們也能夠支援 PCI Express。

  • And if you think about the evolution, we started with Aries retimers for driving mostly within the box connectivity and shorter distance connectivity over passive cables. As these networking architectures evolved and you needed to cluster more GPUs together, we went with the Aries Smart Cable Modules that allow you to connect multiple racks together, up to 7 meters of copper cables.

    如果你考慮演變,我們從 Aries 重定時器開始,主要用於驅動盒內連接和透過被動電纜的短距離連接。隨著這些網路架構的發展,您需要將更多 GPU 聚集在一起,我們採用了 Aries 智慧電纜模組,它允許您使用長達 7 公尺的銅電纜將多個機架連接在一起。

  • And as it expands into even further distances, we go into optical, where we demonstrated running a very popular GPU over 50 meters of optical fiber. So these are all of the tools that we are making available to our hyperscaler partners for them to craft their solutions and deploy AI at the data center scale.

    隨著它擴展到更遠的距離,我們進入了光纖領域,我們展示了在 50 公尺光纖上運行非常受歡迎的 GPU。因此,這些是我們向超大規模合作夥伴提供的所有工具,供他們制定解決方案並在資料中心規模部署人工智慧。

  • Thomas O'Malley - Analyst

    Thomas O'Malley - Analyst

  • Helpful. As a follow up, I know this is a bit of a tougher question, but I do think that there is a lot of confusion out there. And I just would appreciate your thoughts. You mentioned in the prepared remarks hundreds of different types of deployment styles for the GB200. Obviously, certain hyperscalers are going to do it their way. And then certain hyperscalers are going to take what's called the kind of entire system, so the 36 of the 72.

    有幫助。作為後續問題,我知道這個問題比較難,但我確實認為外界存在不少困惑,想聽聽您的看法。您在準備好的發言中提到 GB200 有數百種不同的部署方式。顯然,某些超大規模企業會按照自己的方式來做,而另一些超大規模企業則會採用所謂的完整系統,也就是 36 或 72 的機櫃配置。

  • Can you talk about your assumptions for what you think will be the percentage that goes towards the full system and then kind of towards the hyperscalers that use kind of their own methods and talk about the content opportunities if they would kind of play out in those two scenarios?

    您能否談談您的假設:您認為採用完整系統的比例會是多少、採用自有方式的超大規模企業又佔多少?並談談在這兩種情境下各自的內容機會?

  • I do think that Nvidia and others are talking potentially about more systems than historical, but just maybe the puts and takes upon how different hyperscalers or architect systems and what it means for your content. Thank you.

    我確實認為 Nvidia 和其他公司談論的系統數量可能比以往更多,但重點或許在於不同超大規模企業如何架構系統的各種利弊,以及這對你們的內容意味著什麼。謝謝。

  • Jitendra Mohan - Chief Executive Officer, Director

    Jitendra Mohan - Chief Executive Officer, Director

  • Yes, great questions. And as you pointed out, a lot of moving pieces obviously, right? But here is, I think, what we know and what we can comment on. First of all, all the hyperscalers are indeed deploying new AI platforms that are based on merchant silicon or third-party accelerators, as well as their own accelerators. And overall, we do expect our retimer content to go up.

    是的,很好的問題。正如您所指出的,顯然有很多變化,對吧?但我認為,這就是我們所知道的以及我們可以評論的內容。首先,所有超大規模企業確實都在部署基於商用晶片或第三方加速器以及自己的加速器的新人工智慧平台。總的來說,我們確實預期我們的重定時器內容會增加。

  • Now if you double-click specifically on Nvidia or the Blackwell system, it comes in many, many different flavors. If you think about the overall Blackwell platform, it is really pushing the technology boundaries. And what that is doing is it's creating more challenges for the hyperscalers, whether it is power delivery or thermals, software complexities. or connectivity.

    現在,如果具體深入看 Nvidia 的 Blackwell 系統,它有非常多不同的版本。如果您看整個 Blackwell 平台,它確實在突破技術的邊界。而這也為超大規模企業帶來了更多挑戰,無論是供電、散熱、軟體複雜性,還是連接性。

  • As these systems grow bigger, they run faster, become more complex. We absolutely think that the need for retimer goes up. And that drives our content higher on a per GPU basis. Now it is harder to predict which particular platform will have what kind of share. That's not really our business to predict. What we are doing is we are supporting our customers, our AI platform providers, as well as hyperscalers to make sure that these kinds of high-tech platforms can be deployed as easily as possible.

    隨著這些系統變得越來越大,它們運行得越來越快、越來越複雜,我們絕對認為對重定時器的需求會上升,這也推高了我們每個 GPU 的內容。至於哪個特定平台會拿到多少份額,現在很難預測,這也不是我們要預測的事。我們正在做的是支持我們的客戶、人工智慧平台供應商以及超大規模企業,確保這類高科技平台能夠盡可能輕鬆地部署。

  • And at the end of the day, what you will find is hyperscalers will have to either adapt their data centers to these new technologies or they'll have to adapt this new technology to their data centers. And that creates a great opportunity for our products. We already have design wins across multiple form factors of hyperscaler GPUs as well as the third-party GPUs. And overall, we expect our business to continue to grow strongly. Very exciting times for us.

    最終,您會發現超大規模企業要么必須使其資料中心適應這些新技術,要么必須使這種新技術適應其資料中心。這為我們的產品創造了絕佳的機會。我們已經在超大規模 GPU 以及第三方 GPU 的多種外形尺寸上取得了設計勝利。總體而言,我們預計我們的業務將繼續強勁成長。對我們來說非常令人興奮的時刻。

  • Thomas O'Malley - Analyst

    Thomas O'Malley - Analyst

  • Thank you very much.

    非常感謝。

  • Operator

    Operator

  • Tore Svanberg, Stifel, Nicolaus.

    Tore Svanberg,Stifel Nicolaus。

  • Jeremy Kwan - Analyst

    Jeremy Kwan - Analyst

  • Yes. Good afternoon. This is Jeremy calling for Tore. And let me also add my congratulations on a very strong quarter and outlook. A couple of questions: first, could you provide maybe a revenue breakout between the three product segments here? I'm not sure if that was covered at all.

    是的。午安。我是 Jeremy,代 Tore 提問。我也要祝賀你們非常強勁的季度表現和展望。有幾個問題:首先,您能否提供這三個產品線之間的營收細分?我不確定剛才是否有提到。

  • Michael Tate - Chief Financial Officer

    Michael Tate - Chief Financial Officer

  • Yes. We don't break out specifically the revenue by product. But like we said on the call, the Q2 revenues were driven heavily by the AI growth for Gen 5 and the broadening out of our design win portfolio. When you look into Q3, it's -- the three main drivers are the initial Taurus ramp into 400 gig, the broadening out of AI servers for Gen 5 in both merchant as well as internally developed accelerator programs, and then also we are doing back-end clustering with our Aries SCM modules. So when you look at that, those three drivers are mainly giving us the growth in Q3.

    是的。我們不會具體按產品細分營收。但正如我們在電話會議上所說,第二季的營收在很大程度上受到 Gen 5 人工智慧成長以及設計訂單組合擴大的推動。展望第三季,三個主要驅動因素是:Taurus 開始進入 400G 的初期量產、Gen 5 人工智慧伺服器在商用及內部開發加速器專案中的擴展,以及我們使用 Aries SCM 模組進行的後端叢集。所以就第三季的成長而言,主要就來自這三個驅動因素。

  • Jeremy Kwan - Analyst

    Jeremy Kwan - Analyst

  • Great. And then I guess maybe looking more into the Leo CXL, I understand you are shipping pre-production. Is the -- when are you expecting to see more of a material ramp for Leo?

    太好了。接著我想再多了解一下 Leo CXL,我知道你們正在出貨預量產產品。您預計什麼時候會看到 Leo 更實質的量產爬坡?

  • Sanjay Gajendra - President, Chief Operating Officer, Director

    Sanjay Gajendra - President, Chief Operating Officer, Director

  • Yes, so in terms of material ramp, it's a function of CPUs being available that supports CXL 2.0. So we are of course, tracking the announcements from AMD and Intel to essentially get to production in the second half of this year for Turin and Granite Rapids. So in general, these things will take a little time in terms of engineering those things into platforms. So what we are guiding is 2025 is when we expect production ramps to pick up on CXL.

    是的,就實質的量產爬坡而言,這取決於何時有支援 CXL 2.0 的 CPU 可用。因此,我們當然在追蹤 AMD 和英特爾的公告,Turin 和 Granite Rapids 基本上會在今年下半年投入生產。總的來說,把這些東西設計進平台需要一些時間。因此我們的指引是:我們預期 CXL 的量產爬坡將在 2025 年啟動。

  • Jeremy Kwan - Analyst

    Jeremy Kwan - Analyst

  • Great. Thank you. And if I could squeeze one last question in, can you give us maybe a sense of your revenue, how it might break out between modules and standalone retimers? Is there a way to kind of look at revenues in that way and how that can impact your SAM growth over time? Thank you.

    太好了。謝謝。如果我能再擠進最後一個問題:您能否讓我們大致了解營收在模組和獨立重定時器之間的分布?有沒有辦法這樣看待營收,以及這會如何隨時間影響你們 SAM 的成長?謝謝。

  • Michael Tate - Chief Financial Officer

    Michael Tate - Chief Financial Officer

  • Yes. Taurus predominantly is modules. The Aries, we are doing the back-end clustering of GPUs with the modules predominantly, but the bulk of the revenues is standalone retimers in that product family. Leo, once it ramps, we'll do add-in cards and silicon, but they'll be heavily skewed towards silicon.

    是的。Taurus 主要是模組。Aries 方面,我們主要用模組來做 GPU 的後端叢集,但該產品系列的大部分營收來自獨立重定時器。Leo 一旦放量,我們會同時做介面卡和晶片,但會非常偏重晶片。

  • Operator

    Operator

  • Quinn Bolton, Needham & company.

    Quinn Bolton,Needham & Company。

  • Quinn Bolton - Analyst

    Quinn Bolton - Analyst

  • Hey, guys. Thanks for taking my question. I guess, maybe a follow up just on the Blackwell question. It looks like there have been some recent architectural or system-level changes at Nvidia with sort of the introduction of the GB200A that looks like it uses PCI interconnect or PCI Express to connect the GPUs and the CPUs and perhaps a de-emphasis in the HGX platforms. Just wondering if you see any shifts in content, if that's favorable, if it's about a wash going from one platform to the other. And then I've got a follow up.

    嘿,夥計們。感謝您回答我的問題。我想,這算是對 Blackwell 問題的追問。Nvidia 最近似乎在架構或系統層面有些變化,例如推出了 GB200A,看起來它使用 PCI 互連或 PCI Express 來連接 GPU 和 CPU,而且 HGX 平台的重要性可能有所降低。我只是想知道,您是否看到內容上有任何變化?那是有利的嗎?還是從一個平台換到另一個平台大致相抵?然後我還有一個後續問題。

  • Jitendra Mohan - Chief Executive Officer, Director

    Jitendra Mohan - Chief Executive Officer, Director

  • Yes. Thank you, Quinn. Unfortunately, it will not be appropriate for us to kind of comment on rumors and third-party information that seems to be circulating around. What we will say is that we are committed to whatever platform our customers want to deploy. Whether it's a full rack or it's an HGX server or something in between, we are working with them very, very closely, shoulder to shoulder every day.

    是的。謝謝你,奎因。很遺憾,對於外界流傳的謠言和第三方資訊,我們不便評論。我們能說的是,無論客戶想部署什麼平台,我們都全力支持。無論是完整機櫃、HGX 伺服器,還是介於兩者之間的方案,我們每天都與他們非常緊密地並肩合作。

  • As Sanjay mentioned, we already have multiple design wins in the Blackwell family, including the GB200. We are shipping initial quantities of pre-production to the early adopters. And we do have backlog in place that serves the Blackwell platform, including GB200.

    正如 Sanjay 所提到的,我們已經在 Blackwell 系列中獲得了許多設計勝利,其中包括 GB200。我們正在向早期採用者運送首批預生產產品。我們確實有為 Blackwell 平台服務的積壓訂單,包括 GB200。

  • Quinn Bolton - Analyst

    Quinn Bolton - Analyst

  • Got it. Okay. Thank you for that. And just maybe a clarification on the Taurus 400-gig ramp as well as the Aries SCM ramps. Are those ramping across multiple hyperscalers? Are they driven by a lead hyperscaler initially, and then you would expect to broaden it out to other hyperscalers as we move into 2025?

    知道了。好的。謝謝。另外想請您澄清一下 Taurus 400G 以及 Aries SCM 的量產爬坡:這些是在多家超大規模企業同時放量嗎?還是最初由一家領先的超大規模企業帶動,然後您預期進入 2025 年後會擴展到其他超大規模企業?

  • Sanjay Gajendra - President, Chief Operating Officer, Director

    Sanjay Gajendra - President, Chief Operating Officer, Director

  • Yes. Good question. Let me take that. So if you think about AECs, in general, 800 gig, where you're running 100 gig per lane is the first broad use case that we believe for AEC applications. If you look at data rates lower than that, let's say, 400 gig and so on, it tends to be very, frankly, case by case. It depends on the topology, application, and so on.

    是的。好問題。讓我來回答。就 AEC 而言,一般來說,800G(每通道 100G)是我們認為 AEC 應用的第一個廣泛用例。如果看更低的資料速率,例如 400G 等,坦白說,那往往是逐案而定,取決於拓撲、應用等因素。

  • So the good thing about the design wins we have is that these scale across multiple platforms both from an AI and general compute standpoint and supporting various different topologies. And the revenue drivers that we are essentially highlighting for 3Q and beyond is based on supporting these applications. With 800 gig, it becomes much more broader with several different customers essentially requiring AECs.

    因此,我們取得的設計訂單的好處在於,無論從人工智慧還是通用運算的角度,它們都可以橫跨多個平台擴展,並支援各種不同的拓撲。我們為第三季及之後強調的營收驅動因素,本質上就是基於對這些應用的支援。到了 800G,應用會變得廣泛得多,有多家不同的客戶實際上都需要 AEC。

  • Quinn Bolton - Analyst

    Quinn Bolton - Analyst

  • Is it similar for the Aries SCMs for back-end clustering as well?

    用於後端叢集的 Aries SCM 是否也類似?

  • Sanjay Gajendra - President, Chief Operating Officer, Director

    Sanjay Gajendra - President, Chief Operating Officer, Director

  • Exactly. It depends on the topology for what it is in terms of how the back-end networks are designed for the AI subsystems. In general, all of this -- when it comes to active cabling type of technology, it becomes case by case depending on the infrastructure and how exactly systems are being put together compared to a component like a PCIe retimer that goes across a broad array of use cases across multiple different deployment scenarios. So that's the nuance to keep in mind when you look at AEC markets.

    確切地說,這取決於人工智慧子系統後端網路的設計拓撲。一般來說,涉及主動式纜線這類技術時,會依基礎設施以及系統實際的組裝方式而逐案而定;相較之下,像 PCIe 重定時器這樣的元件,則能橫跨多種不同部署場景的廣泛用例。這就是研究 AEC 市場時要記住的細微差別。

  • Quinn Bolton - Analyst

    Quinn Bolton - Analyst

  • Got it. Thank you.

    知道了。謝謝。

  • Sanjay Gajendra - President, Chief Operating Officer, Director

    Sanjay Gajendra - President, Chief Operating Officer, Director

  • Yes, and so the volume and the deployment scale tends to be very broad, right, if you are looking at how infrastructures are being put together. So it is one of those things where you look at case by case. But as long as you're able to address a wide variety of applications, it does very significantly add up.

    是的,而且如果您看基礎設施是如何組建的,部署的數量和規模往往非常廣泛。所以這確實是需要逐案來看的事情。但只要您能夠覆蓋各式各樣的應用,加總起來的量就會非常可觀。

  • Operator

    Operator

  • Ross Seymore, Deutsche Bank.

    羅斯·西莫爾,德意志銀行。

  • Ross Seymore - Analyst

    Ross Seymore - Analyst

  • Hi, guys. Thanks for taking the question. Apologies if I go back to one that's been hit on a couple of me. I want to do it nonetheless, and kind of the Blackwell topic and the content topic. You guys gave us the punchline that you believe your content, on average, will go up per GPU generation to generation. It also seems like you're getting across that the customization of it is still very broad based.

    嗨,大家好。感謝您回答問題。如果我回頭問一個已經被問過幾次的問題,先說聲抱歉,但我還是想問,算是 Blackwell 和內容這兩個話題。你們已經給出了結論:你們相信每個 GPU 的內容平均而言會一代比一代增加。你們似乎也在傳達,相關的客製化基礎仍然非常廣泛。

  • And so just looking at the vanilla system SKUs and reference design Nvidia itself has might be misleading. Two-part question to this. Are you of the belief that your content is equal across the board in the same way it was in Hopper?

    因此,僅看 Nvidia 自己的標準系統 SKU 和參考設計可能會產生誤導。這個問題分兩部分:您是否認為您的內容在各種配置間都一樣均等,就像 Hopper 世代那樣?

  • Or do things get more skewed where there will be places where you'll have a significant step up in content and some configurations and others where you would have a significant step down, and the difference between those two might be where investors are getting a little bit confused?

    還是情況會更加分化:在某些配置中您的內容會大幅增加,而在另一些配置中則會大幅減少?而這兩者之間的差異,可能正是讓投資人有些困惑的地方?

  • Sanjay Gajendra - President, Chief Operating Officer, Director

    Sanjay Gajendra - President, Chief Operating Officer, Director

  • Let me try to add a little bit more color on that. But before I do that, let me give you and remind two data points we've already covered in the Q&A so far. First point, let us be very clear that our PCIe retimer content per GPU, on average, will continue to grow as the AI systems scales across various different topologies. And this applies to both third party, like standard merchant GPUs, as well as internally developed GPUs.

    讓我試著再補充一些說明。但在此之前,先提醒兩個我們在問答中已經提過的資料點。第一點,我們要非常明確:隨著人工智慧系統在各種不同拓撲中擴展,我們每個 GPU 的 PCIe 重定時器內容平均將持續成長。這同時適用於第三方(例如標準商用 GPU)以及內部開發的 GPU。

  • The second reminder that I want to kind of note is that specifically for Blackwell, we expect our PCIe content per GPU to go up. Now what you are asking is specifically about the deployment scenarios, which right now is evolving, right? So we have design wins for several different topologies, including the GB200.

    我想提醒的第二點是,特別就 Blackwell 而言,我們預期每個 GPU 的 PCIe 內容將會增加。而您現在問的具體是部署情境,這部分目前仍在演變,對吧?我們已經在多種不同拓撲上取得設計訂單,包括 GB200。

  • But if you look at the various different options that Nvidia is offering and how those are being composed and considered by the hyperscalers, that situation is evolving at the moment.

    但如果你看看 Nvidia 提供的各種不同選項以及超大規模提供者如何組合和考慮這些選項,你會發現這種情況目前正在改變。

  • The key message that we want to deliver is that overall, our PCIe content is going to be higher than the Hopper generation. We expect that the design wins that we are starting to see and we're starting to ship from a pre-production standpoint are all meaningful that will essentially allow us to continue to have a robust growth engine, as far as our PCIe retimer business is concerned.

    我們想傳達的關鍵訊息是,整體而言,我們的 PCIe 內容將高於 Hopper 世代。我們認為,我們開始看到、並已開始從預量產角度出貨的這些設計訂單都是有意義的,就我們的 PCIe 重定時器業務而言,這將讓我們得以繼續保有強勁的成長引擎。

  • Ross Seymore - Analyst

    Ross Seymore - Analyst

  • Thanks for that. And I guess as a follow up, you guys have focused more on this call about the internally developed accelerators than you have in calls in the past. And I realize there haven't been too many since your IPO.

    謝謝。作為後續問題:與過去的電話會議相比,你們這次更著重談內部開發的加速器。我知道自你們 IPO 以來也沒開過幾次電話會議。

  • But are you trying to get across the key message that those are really growing as a percentage of your mix, that those are penetrating the market and kind of catching up and taking relative share from the GPU side of things? Or is your commentary meant to get across that Astera itself, with its retimers and other components, will take significant share in that kind of ASIC market relative to the GPU side?

    但您想傳達的關鍵訊息,是這些加速器在你們產品組合中的佔比確實在成長、正在滲透市場,並從 GPU 陣營迎頭趕上、奪取相對份額?還是您的意思是,Astera 本身憑藉其重定時器和其他元件,將在這類 ASIC 市場(相對於 GPU 陣營)取得可觀的份額?

  • Sanjay Gajendra - President, Chief Operating Officer, Director

    Sanjay Gajendra - President, Chief Operating Officer, Director

  • Yes. It's probably both, to be honest with you, in the sense that we do see it. It's no secret, right? I think many of the hyperscalers are doing their own accelerators, which are driven by the workloads or the business models that they pursue. I think that will continue as a macro trend in terms of internally developed accelerators going hand in hand with GPUs that are available from Nvidia or AMD or others. So that's the model that we believe will be here to stay, that hybrid approach.

    是的。老實說,從我們確實看到的意義上來說,可能兩者兼而有之。這不是什麼秘密,對吧?我認為許多超大規模企業正在開發自己的加速器,這是由他們追求的工作負載或業務模式所驅動的。我認為,就內部開發的加速器與 Nvidia、AMD 或其他公司提供的 GPU 並進而言,這種宏觀趨勢將持續下去。因此,我們相信這種混合方法將繼續存在。

  • And for us really, the reason we are highlighting is that of course, we have had a significant business that has grown in the last year or two years from the designs that we have been supporting with the merchant GPU deployments that have happened. But at the same time, now we're reaching a point where the accelerator volumes are also starting to ramp up.

對我們來說,我們之所以強調這一點,當然是因為在過去一兩年裡,隨著商用(merchant)GPU 部署的展開,我們所支援的那些設計為我們帶來了可觀的業務成長。但與此同時,我們現在也到了加速器出貨量開始放量的階段。

  • And for us, the good news is that we are on all the major AI accelerator platforms from a design win standpoint or at least all the major ones that are out there. And for us, we have multiple parts to grow our business, and that is a very positive thing that we believe will continue to allow us to keep delivering the kind of reserves that we're doing. And as new CPU/GPU architectures come about, just like the Nvidia's Blackwell platform, we do expect to gain from it both on the retimer content as well as other products that we can service to this space.

    對我們來說,好消息是,從設計獲勝的角度來看,我們已經進入了所有主要的人工智慧加速器平台,或者至少是現有的所有主要人工智慧加速器平台。對我們來說,我們有多個部分來發展我們的業務,這是一件非常積極的事情,我們相信這將繼續使我們能夠繼續提供我們正在做的那種儲備。隨著新的 CPU/GPU 架構的出現,就像 Nvidia 的 Blackwell 平台一樣,我們確實希望從重定時器內容以及我們可以服務該領域的其他產品中獲益。

  • Ross Seymore - Analyst

    Ross Seymore - Analyst

  • Thank you.

    謝謝。

  • Operator

    Operator

  • Richard Shannon, Craig-Hallum Capital Group.

    理查德·香農,克雷格-哈勒姆資本集團。

  • Richard Shannon - Analyst

    Richard Shannon - Analyst

  • Hey, guys. Thanks for taking my question. Maybe a question on PCI Express Gen 6 here. Last call, you talked about some of the wins -- the designs being decided in the next six months to nine months are obviously three to six months -- three more months farther forward here.

嘿,夥計們。感謝回答我的問題。也許問一個關於 PCI Express Gen 6 的問題。上次電話會議中,您談到了一些設計勝出——當時說未來六到九個月內將決定的設計,如今顯然只剩三到六個月,時間又往前推進了三個月。

  • Obviously, you've got some wins already on Gen 6, but I just want to get a sense of the share of the market kind of looking backwards. How much of that market has been decided versus up for grabs? Maybe you can help characterize what's left here to win in the next three months to six months.

顯然,你們已經在 Gen 6 上取得了一些設計勝出,但我只是想回頭了解一下市場份額的格局。這個市場有多少已經塵埃落定,又有多少仍可爭取?也許您可以幫忙描述一下未來三到六個月還有哪些機會可以爭取。

  • Sanjay Gajendra - President, Chief Operating Officer, Director

    Sanjay Gajendra - President, Chief Operating Officer, Director

  • I'm trying to see how best to answer that question. So you got to -- let me try to provide some color. The design win window is whatever -- for these platforms, it's -- you're looking at -- once GPUs become available, you're looking at 6 to 12 months before they go to production. So that's one thing to keep in mind.

我正在想如何最好地回答這個問題。讓我試著提供一些背景說明。對這些平台來說,設計勝出的窗口是——一旦 GPU 可用,距離投入生產大約還有 6 到 12 個月。所以這是需要記住的一件事。

  • But also -- please also think about how hyperscalers go about doing their stuff, right? Everyone is in an arms race right now, getting to production as quickly as possible. In many different situations, resources are also limited. Meaning for every 10 engineers that they may need, they might have two or three, just given the number of platforms and how quickly everyone is trying to move.

但也請想想超大規模業者是如何做事的,對吧?現在每個人都在進行軍備競賽,力求盡快投入生產。在許多不同情況下,資源也是有限的。也就是說,考慮到平台數量以及大家推進的速度,他們每需要 10 名工程師,手上可能只有 2 到 3 名。

  • And to that standpoint, what is happening is that many of these engineers are familiar with our Gen 5 retimers. They've designed it across multiple platforms. They've built software tools and capabilities around it. And now our Gen 6 retimers are essentially a seamless upgrade from a software standpoint, from a hardware standpoint.

    從這個角度來看,這些工程師中的許多人都熟悉我們的第 5 代重定時器。他們跨多個平台設計了它。他們圍繞著它建立了軟體工具和功能。現在,從軟體角度、硬體角度來看,我們的第六代重定時器本質上是無縫升級。

  • So it does offer the lowest risk and fastest path to our customers. And that plays well within their own objectives of trying to get something out quickly and dealing with resources that might not be available at the levels that are required. So overall, we're starting to gain from it, and we are essentially being the leader in the space, being the one that is getting the first crack at these opportunities. And we are doing everything we can to convert those things into design wins and revenue.

    因此,它確實為我們的客戶提供了最低風險和最快的路徑。這很好地實現了他們自己的目標,即試圖快速獲得某些東西並處理可能無法達到所需級別的資源。因此,總體而言,我們開始從中受益,並且我們本質上是該領域的領導者,是第一個抓住這些機會的人。我們正在盡一切努力將這些東西轉化為設計勝利和收入。

  • Richard Shannon - Analyst

    Richard Shannon - Analyst

  • Okay. Great. My follow-on question is a pretty simple one, just looking at the Taurus line here. Great to see the ramp. You're at 400 gig. And I don't want to kind of get too far ahead of what looks to be a pretty nice ramp here in the second half of the year, but I think you've talked about the 800-gig generation ramping later in 2025. Any update on that timing and how your design win is looking so far?

好的,太好了。我的後續問題很簡單,就看這裡的 Taurus 產品線。很高興看到放量。你們現在是 400G。我不想在今年下半年看起來相當不錯的放量之前說得太遠,但我記得你們談過 800G 世代會在 2025 年稍晚放量。關於這個時間點以及目前設計勝出的情況,有什麼最新消息嗎?

  • Jitendra Mohan - Chief Executive Officer, Director

    Jitendra Mohan - Chief Executive Officer, Director

  • Yes, good question. So the 800-gig timing, we believe, is going to be late in 2025. Right now, what we are seeing is 400-gig applications for some of the AI systems as well as actually, we are seeing them for general purpose compute as well where you are doing the traditional server to top-of-the-rack connection.

是的,好問題。我們認為 800G 的時間點會在 2025 年末。現在,我們看到的是一些 AI 系統的 400G 應用,實際上在通用運算中也看到了,也就是傳統伺服器到架頂(top-of-the-rack)的連接。

  • So that will continue on for the rest of this year for 400-gig deployments. And then as we get some of the newer mix that are capable of 100 gig and 200 gig per lane, et cetera, trying to get to 800 gig is where we see broadening of this market and more deployments across different hyperscalers across different platforms in the latter half of 2025.

因此,今年剩下的時間 400G 部署將持續下去。之後,隨著一些能夠支援每通道 100G 和 200G 等速率的新產品組合出現,朝 800G 邁進,我們預計在 2025 年下半年看到這個市場擴大,在不同超大規模業者的不同平台上有更多部署。

  • Richard Shannon - Analyst

    Richard Shannon - Analyst

  • Okay, great. Thanks, guys.

    好的,太好了。謝謝,夥計們。

  • Operator

    Operator

  • Suji Desilva, ROTH Capital.

    蘇吉·德西爾瓦,羅仕資本。

  • Unidentified Analyst

    Unidentified Analyst

  • Hi, Jitendra, it's [Andre]. And Mike, congrats on the progress here. This question maybe -- may not have been asked explicitly, but can you give us a relative content framework for internally developed versus third-party processors or accelerators? Is it higher for internally developed on average? Or is it hard to generalize like that?

嗨,Jitendra,我是[安德烈]。Mike,恭喜你們取得的進展。這個問題也許還沒有被明確問過:你們能否給我們一個內部開發與第三方處理器或加速器的相對含量框架?內部開發的平均含量是否更高?還是很難這樣一概而論?

  • Jitendra Mohan - Chief Executive Officer, Director

    Jitendra Mohan - Chief Executive Officer, Director

  • I would say it's a little bit hard to generalize. It varies quite a bit. Even one particular platform, you can have different form factors. Even if you look at, let's say, Blackwell, you have HGX, you have MGX, you have NVLs, you have custom racks that are getting deployed. And if you look at each one of them, you'll find different amount of content.

    我想說這有點難概括。它變化很大。即使是特定的平台,也可以有不同的外形尺寸。即使你看看 Blackwell,你有 HGX、你有 MGX、你有 NVL、你有正在部署的客製化機架。如果您查看其中每一項,您會發現不同數量的內容。

  • Number of retimers will vary where they get placed. It will vary -- but what is very consistent is that the overall content does go up for us. Now the other factor to consider is the choice of back-end network. Again, for example, if you look at the Blackwell family, they use NVLink, which is a closed proprietary standard, which we do not participate in.

重定時器的數量以及它們的放置位置會有所不同。情況各異——但非常一致的一點是,整體含量對我們來說確實在增加。現在要考慮的另一個因素是後端網路的選擇。再舉個例子,如果你看 Blackwell 系列,他們使用 NVLink,這是一個封閉的專有標準,我們並不參與其中。

  • But when somebody uses a PCI Express or PCI Express-based protocol at their back-end connectivity, then our content goes up pretty significantly because now we are shipping not only our retimers but also the smart cable -- Aries smart cable modules into that application. Similarly, if the back-end interconnected Ethernet, that will benefit our Taurus family of product lines. So it really varies greatly on what the architecture is of the platform and what form factor is getting deployed in.

但是,當有人在後端連接中使用 PCI Express 或基於 PCI Express 的協定時,我們的含量就會顯著增加,因為現在我們不僅向該應用出貨重定時器,還出貨智慧電纜——Aries 智慧電纜模組。同樣,如果後端互連採用乙太網路,將有利於我們的 Taurus 系列產品線。因此,這確實很大程度上取決於平台的架構以及部署的外形規格。

  • Unidentified Analyst

    Unidentified Analyst

  • Okay, great. That's very helpful color. Thanks. And then just a quick follow up here. Was there something inherent in the Blackwell transition from Hopper that made this much platform diversification and our ex-diversification possible? Or was it just the hyperscalers getting more sophisticated about what they are trying to do? Or was it availability of things like Astera's PCI products? Any color there would be helpful as to how this kind of proliferation of architectures kind of came about.

好的,太好了。這些補充說明非常有幫助。謝謝。然後在這裡快速跟進。從 Hopper 到 Blackwell 的轉變中,是否有某種固有因素,使得如此多的平台多元化成為可能?還是只是超大規模業者對他們想做的事情變得更成熟了?又或者是因為有了像 Astera 的 PCI 產品這樣的東西?任何關於這種架構多樣化如何產生的補充說明都會很有幫助。

  • Jitendra Mohan - Chief Executive Officer, Director

    Jitendra Mohan - Chief Executive Officer, Director

  • I mean if you look at the Blackwell family, it's like a marvel of technology. The amount of content that is being pushed into that platform is incredible. And as I mentioned earlier, that does create other problems, right? There is so much compute packed in such small space. They're delivering power to those racks -- to those GPUs themselves, and the CPUs is the challenge. How to cool them, it becomes a challenge, and the fact that modern data centers are just not equipped to handle many of these issues.

我的意思是,如果你看看 Blackwell 系列,它就像一個技術奇蹟。被推進該平台的內容數量令人難以置信。正如我之前提到的,這確實會產生其他問題,對吧?在這麼小的空間裡塞進了如此多的運算能力。向這些機架——向那些 GPU 本身以及 CPU 供電就是挑戰。如何冷卻它們也成為一個挑戰,而且現代資料中心根本沒有能力處理其中許多問題。

  • So what the hyperscalers are doing is they're taking these broad platforms in our technology and trying to adapt it so that it fits into their data centers. And that's where we see a lot of opportunity for our existing products, the ones that we have talked about, as well as some new products that we've been working on, again, shoulder to shoulder with our hyperscaler and AI platform customers.

    因此,超大規模企業正在做的就是他們在我們的技術中採用這些廣泛的平台,並嘗試對其進行調整,以使其適合他們的資料中心。這就是我們看到我們的現有產品有很多機會的地方,我們已經討論過的產品,以及我們一直在與我們的超大規模和人工智慧平台客戶並肩開發的一些新產品。

  • So very excited to see how these new platforms will get rolled out, including Blackwell, including the hyperscaler internal AI platforms, and the increased content that we have there.

    非常高興看到這些新平台將如何推出,包括 Blackwell,包括超大規模內部人工智慧平台,以及我們在那裡增加的內容。

  • Unidentified Analyst

    Unidentified Analyst

  • Okay, so Blackwell pushed the envelope. Great. Thanks for the color.

好的,Blackwell 確實突破了極限。太好了。感謝你的補充說明。

  • Operator

    Operator

  • Quinn Bolton, Needham & Company.

    奎因·博爾頓,李約瑟公司。

  • Quinn Bolton - Analyst

    Quinn Bolton - Analyst

  • Hey, guys, just a quick follow up. I know you had a potential for an early lockup expiring Thursday morning. Just wanted to see if you guys could confirm, are we still within the 10-day measuring period so that you could trigger that early lockup? Or does the release of second-quarter results sort of end that period and we're now looking at a September 16 lockup expiration? Thank you.

嘿,夥計們,快速跟進一下。我知道你們有一個可能在周四早上到期的提前解禁。只是想確認一下,我們是否仍在 10 天的衡量期內,因此你們可以觸發提前解禁?還是第二季業績的發布結束了該期間,我們現在要看的是 9 月 16 日的禁售期到期?謝謝。

  • Michael Tate - Chief Financial Officer

    Michael Tate - Chief Financial Officer

  • Yes. The release of our earnings today releases a lockup that opens up on Thursday.

是的。我們今天發布財報後,將觸發一次於周四生效的解禁。

  • Quinn Bolton - Analyst

    Quinn Bolton - Analyst

  • It opens Thursday. Okay, thank you.

週四解禁。好的,謝謝。

  • Jitendra Mohan - Chief Executive Officer, Director

    Jitendra Mohan - Chief Executive Officer, Director

  • The early lockup already expired long ago.

提前解禁的期限早已過了。

  • Operator

    Operator

  • There are no further questions at this time. I will turn the call back over to Leslie Green for closing remarks.

    目前沒有其他問題。我會將電話轉回給萊斯利·格林 (Leslie Green) 作結束語。

  • Leslie Green - IR Contact

    Leslie Green - IR Contact

  • Thank you, everyone, for your participation and questions. We look forward to updating you on our progress during our Q3 earnings conference call later this fall. Thank you.

    謝謝大家的參與與提問。我們期待在今年秋季稍晚的第三季財報電話會議上向您通報我們的最新進展。謝謝。

  • Operator

    Operator

  • This concludes today's conference call. Thank you for your participation. You may now disconnect.

    今天的電話會議到此結束。感謝您的參與。您現在可以斷開連線。