Operator
Thank you for standing by. My name is Regina, and I will be your conference operator today.
At this time, I would like to welcome everyone to the Astera Labs' first-quarter 2024 earnings conference call. (Operator Instructions)
I will now turn the call over to Leslie Green, Investor Relations for Astera Labs. Leslie, you may begin.
Leslie Green - Investor Relations
Thank you, Regina. And good afternoon, everyone, and welcome to the Astera Labs' first-quarter 2024 earnings call. Joining us today on the call are Jitendra Mohan, Chief Executive Officer and Co-Founder; Sanjay Gajendra, President, Chief Operating Officer, and Co-Founder; and Mike Tate, Chief Financial Officer.
Before we get started, I would like to remind everyone that certain comments made in this call today may include forward-looking statements regarding, among other things, expected future financial results, strategies and plans, future operations, and the markets in which we operate. These forward-looking statements reflect management's current beliefs, expectations, and assumptions about future events, which are inherently subject to risks and uncertainties that are discussed in detail in today's earnings release and in the periodic reports and filings we file from time to time with the SEC, including the risks set forth in the final prospectus relating to our IPO.
It is not possible for the company's management to predict all risks and uncertainties that could have an impact on these forward-looking statements or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward-looking statements. In light of these risks, uncertainties, and assumptions, the results, events, or circumstances reflected in the forward-looking statements discussed during this call may not occur and actual results could differ materially from those anticipated or implied.
All of our statements are made based on information available to management as of today, and the company undertakes no obligation to update such statements after the date of this call as a result of new information, future events, or changes in our expectations, except as required by law.
Also during this call, we will refer to certain non-GAAP financial measures, which we consider to be an important measure of the company's performance. These non-GAAP financial measures are provided in addition to and not as a substitute for or superior to financial results prepared in accordance with US GAAP. A discussion of why we use non-GAAP financial measures and reconciliations between our GAAP and non-GAAP financial measures is available in the earnings release we issued today, which can be accessed through the Investor Relations portion of our website and will also be included in our filings with the SEC, which will also be accessible through the Investor Relations portion of our website.
With that, I would like to turn the call over to Jitendra Mohan, CEO of Astera Labs. Jitendra?
Jitendra Mohan - Chief Executive Officer, Director
Thank you, Leslie. Good afternoon, everyone, and thanks for joining our first earnings conference call as a public company. This year is off to a great start with Astera Labs seeing strong and continued momentum along with the successful execution of our IPO in March.
First and foremost, I would like to thank our investors, customers, partners, suppliers, and employees for their steadfast support over the past six years. We have built Astera Labs from the ground up to address the connectivity bottlenecks to unlock the full potential of AI in the cloud. With your help, we've been able to scale the company and deliver innovative technology solutions to the leading hyperscalers and AI platform providers worldwide.
But our work is only just beginning. We're supporting the accelerated pace of AI infrastructure deployments with leading hyperscalers by developing new product categories while also exploring new market segments. Looking at industry reports over the past several weeks, it is clear that we remain in the early stages of a transformative investment cycle by our customers to build out the next generation of infrastructure that is needed to support their AI roadmaps.
According to recent earning reports, on a consolidated basis, CapEx spend during the first quarter for the four largest US hyperscalers grew by roughly 45% year on year to nearly $50 billion. Qualitative commentary implies continued quarterly growth in CapEx for this group through the balance of the year. This is truly an exciting time for technology innovators within the cloud and AI infrastructure market, and we believe Astera Labs is well positioned to benefit from these growing investment trends.
Against the strong industry backdrop, Astera Labs delivered strong Q1 results with record revenue, strong non-GAAP operating margin, positive operating cash flows while also introducing two new products. Our revenue in Q1 was $65.3 million, up 29% from the previous quarter and up 269% from the same period in 2023. Non-GAAP operating margin was 24.3%, and we delivered $0.10 of pro forma non-GAAP diluted earnings per share.
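The sequential and year-over-year growth rates quoted above imply the following prior-period revenue. This is a back-of-the-envelope check based only on the figures stated on the call, not reported numbers; rounding in the stated percentages makes the results approximate:

```python
# Implied prior-period revenue from the stated Q1 2024 growth rates.
# Figures from the call; percentages are rounded, so these are approximations.
q1_2024_rev_m = 65.3   # Q1 2024 revenue, $M
qoq_growth = 0.29      # up 29% vs. Q4 2023
yoy_growth = 2.69      # up 269% vs. Q1 2023

q4_2023_rev_m = q1_2024_rev_m / (1 + qoq_growth)  # back out Q4 2023
q1_2023_rev_m = q1_2024_rev_m / (1 + yoy_growth)  # back out Q1 2023

print(round(q4_2023_rev_m, 1))  # roughly $50.6M implied for Q4 2023
print(round(q1_2023_rev_m, 1))  # roughly $17.7M implied for Q1 2023
```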
I will now provide some commentary around our position in this rapidly evolving AI market. Then I will turn the call over to Sanjay to discuss new products and our growth strategy. Finally, Mike will provide additional details on our Q1 results and our Q2 financial guidance.
Complex AI model sizes continue doubling about every six months, fueling the demand for high-performance AI platforms running in the cloud. Modern GPUs and AI accelerators are phenomenally good at compute, but without equally fast connectivity, they remain highly underutilized. Technology innovation within the AI accelerator market has been moving forward at an incredible pace, and the number and variety of architectures continues to expand to handle trillion-parameter models by improving AI infrastructure utilization.
We continue to see our hyperscaler customers utilize the latest merchant GPUs and proprietary AI accelerators to compose unique data center-scale AI infrastructure. However, no two clouds are the same. The major hyperscalers are architecting their systems to deliver maximum AI performance based on the specific cloud infrastructure requirements from power and cooling to connectivity.
We are working alongside our customers to ensure these complex and varied architectures achieve maximum performance and operate reliably even as data rates continue to double. As these systems move data faster and grow in complexity, we expect to see our average dollar content per AI platform increase, and even more so with the new products we have in development.
Our conviction in maintaining and strengthening our leadership position in the market is rooted in our comprehensive intelligent connectivity platform and our deep customer partnerships. The foundation of our platform consists of semiconductor-based, software-defined connectivity ICs, modules, and boards, which all support our COSMOS software suite. We provide customers with a complete, customizable solution of chips, hardware, and software, which maximizes flexibility without performance penalties, delivers deep fleet management capabilities, and keeps pace with the ever-quickening product introduction cycles of our customers.
Not only does COSMOS software run on our entire product portfolio, but it is also integrated within our customers' operating stacks to deliver seamless customization, optimization, and monitoring. Today Astera Labs is focused on three core technology standards: PCI Express, Ethernet, and Compute Express Link. We're shipping three separate product families, all generating revenue and in various stages of adoption and deployment, supporting these different connectivity protocols.
Let me touch upon each of these critical data center connectivity standards and how we support them with our differentiated solutions. First, PCI Express. PCIe is a native interface in all AI accelerators, CPUs, and GPUs and is the most prevalent protocol for moving data at high bandwidth and low latency inside servers. Today, we see PCIe Gen 5 getting widely deployed in AI servers. PCIe servers are becoming increasingly complex. Faster signal speeds in combination with complex server topologies are driving significant signal integrity challenges.
We have solved these problems. Our hyperscaler and AI accelerator customers utilize our PCIe Smart DSP Retimers to extend the reach of PCIe Gen 5 between various components within a heterogeneous compute architecture. Our Aries product family represents the gold standard in the industry for performance, robustness, and flexibility and is the most widely deployed solution in the market today. Our leadership position, with millions of critical data links running through our Aries Retimers and our COSMOS software, enables us to do something more: become the eyes and ears that monitor the connectivity infrastructure and help fleet managers ensure their AI infrastructure is operating at peak utilization.
Deep diagnostics and monitoring capabilities in our chips and extensive fleet management features in our COSMOS software, which are deployed together in our customers' fleet, have become a material differentiator for us. Our COSMOS software provides the easiest and fastest path to deploy the next generation of our devices.
We see AI workloads and newer GPUs driving the transition from PCIe Gen 5, running at 32 gigabits per second per lane, to PCIe Gen 6, running at 64 gigabits per second per lane. Our customers are evaluating our Gen 6 solutions now, and we expect them to make design decisions in the next six to nine months. In addition, while Aries devices are heavily deployed today for interconnecting AI accelerators with CPUs and networking, we also expect our Aries devices to play an increasing role in backend fabrics, interconnecting AI accelerators to each other in AI clusters.
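As a rough illustration of the bandwidth step this transition implies, the raw lane math is sketched below. This ignores encoding and protocol overhead, so real effective throughput is somewhat lower than these raw figures:

```python
# Rough, illustrative lane math for the PCIe Gen 5 -> Gen 6 transition.
# Raw signaling rates only; effective throughput is lower once encoding
# and protocol overhead are accounted for.

PCIE_LANE_RATE_GBPS = {5: 32, 6: 64}  # gigabits per second per lane

def raw_link_bandwidth_gbs(gen: int, lanes: int = 16) -> float:
    """Raw one-direction bandwidth in gigabytes/second for a PCIe link."""
    return PCIE_LANE_RATE_GBPS[gen] * lanes / 8  # 8 bits per byte

print(raw_link_bandwidth_gbs(5))  # x16 Gen 5 -> 64.0 GB/s raw
print(raw_link_bandwidth_gbs(6))  # x16 Gen 6 -> 128.0 GB/s raw, a clean 2x
```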
Next, let's talk about Ethernet. The Ethernet protocol is extensively deployed to build large-scale networks within data centers. Today, Ethernet makes up the vast majority of connections between servers and top-of-rack switches. Driven by AI workloads' insatiable need for speed, Ethernet data rates are doubling roughly every two years, and we expect the transition from 400 gig Ethernet to 800 gig Ethernet to take place later in 2025. 800 gig Ethernet is based on a 100 gigabit per second per lane signaling rate, which is placing tremendous pressure on conventional passive cabling solutions.
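A sketch of the lane arithmetic behind that transition, at the 100 gigabit per second per-lane rate mentioned above (note that earlier 400 gig generations have also been built from 8 lanes of 50 gigabits per second):

```python
# Illustrative lane math for the 400G -> 800G Ethernet transition,
# assuming the 100 Gb/s-per-lane signaling rate cited on the call.

def lanes_required(total_gbps: int, per_lane_gbps: int = 100) -> int:
    """Number of electrical lanes needed for a given aggregate rate."""
    # Ceiling division so any partial lane rounds up to a whole lane.
    return -(-total_gbps // per_lane_gbps)

print(lanes_required(400))  # 4 lanes at 100G per lane
print(lanes_required(800))  # 8 lanes at 100G per lane
```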
Like our PCIe Retimers, our portfolio of Taurus Ethernet Retimers helps relieve these connectivity bottlenecks, overcoming reach, signal integrity, and bandwidth issues by enabling robust 100 gig per lane connectivity over copper. Unlike our Aries portfolio, which is largely sold in a chip format, we sell our Taurus portfolio largely in the form of smart cable modules that are assembled into active electrical cables by our cable partners.
This approach allows us to focus on our strengths and fully leverage our COSMOS software suite to offer customization, easy qualification, deep telemetry, and field upgrades to our customers. At the same time, this model enables our cable partners to continue to excel at bringing the best cabling technology to our common end customers.
We expect 400 gig deployments based on our Taurus Smart Cable Modules to begin to ramp in the back half of 2024. We see the transition to 800 gig Ethernet starting to happen in 2025, resulting in broad demand for AECs to both scale up and scale out AI infrastructure and strong growth for our Taurus Ethernet Smart Cable Module portfolio over the coming years.
Last is Compute Express Link, or CXL. CXL is a low-latency, cache-coherent protocol which runs on top of the PCIe protocol. CXL provides an open standard for disaggregating memory from compute. CXL allows you to balance memory bandwidth and capacity requirements independently from compute requirements, resulting in better utilization of compute infrastructure.
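The disaggregation idea above can be sketched in a few lines. The numbers here are hypothetical, purely to illustrate that CXL lets capacity scale independently of the CPU's directly attached DIMMs:

```python
# Illustrative sketch (hypothetical capacities) of CXL memory expansion:
# memory can be added to a server without changing the compute configuration.
from dataclasses import dataclass

@dataclass
class ServerMemory:
    direct_gb: int   # CPU-attached DRAM capacity
    cxl_gb: int = 0  # CXL-attached expansion capacity

    @property
    def total_gb(self) -> int:
        return self.direct_gb + self.cxl_gb

base = ServerMemory(direct_gb=512)
expanded = ServerMemory(direct_gb=512, cxl_gb=256)  # same CPU, more memory
print(expanded.total_gb - base.total_gb)  # capacity added independently of compute
```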
Over the next several years, data center platform architects plan to utilize CXL technology to solve memory bandwidth and capacity bottlenecks that are being exacerbated by the exponential increase in compute capability of CPUs and GPUs. Major hyperscalers are actively exploring different applications of CXL memory expansion.
While the adoption of CXL technology is currently in its infancy, we do expect to see increased deployments with the introduction of next-generation CXL-capable data center server CPUs, such as Granite Rapids, Turin, and others. Our first-to-market portfolio of Leo CXL memory connectivity controllers is very well positioned to enable our customers to overcome memory bottlenecks and deliver significant benefits to their end customers. We have worked closely with our hyperscaler customers and CPU partners to optimize our solution to seamlessly deliver these benefits without any application-level software changes.
Furthermore, we have updated our COSMOS software to incorporate the significant learnings we have gained over the last 18 months and to customize our Leo memory expansion solution to the different requirements of each hyperscaler. We anticipate memory expansion will be the first high-volume use case that will drive design wins into volume production in the 2025 timeframe. We remain very excited about the potential of CXL in data center applications and believe that most new CPUs will support CXL and that hyperscalers will increasingly deploy innovative solutions based on CXL.
With that, let me turn the call over to our President and COO, Sanjay Gajendra, to discuss some of our recent product announcements and our long-term growth strategy.
Sanjay Gajendra - President, Chief Operating Officer, Director
Thanks, Jitendra, and good afternoon, everyone. Astera Labs is well positioned to deliver long-term growth through a combination of three factors: one, we have strong secular tailwinds from increased AI infrastructure investment; two, the next generation of products within existing product lines is gaining traction; and three, we are introducing new product lines.
Over the past three months, we announced two new and significant products that play an important role in enabling next-generation AI platforms and provide incremental revenue opportunities as early as the second half of 2024. First, we expanded our widely deployed, field-proven Aries Smart DSP Retimer portfolio with the introduction and public demonstration of our Aries 6 PCIe Retimer, which delivers robust, low-power PCIe Gen 6 and CXL 3 connectivity between next-generation GPUs, AI accelerators, CPUs, NICs, and CXL memory controllers.
Aries 6 is the third generation of our PCIe Smart Retimer portfolio and provides the bandwidth required to support data-intensive AI workloads while maximizing utilization of next-generation GPUs operating at 64 gigabits per second per lane. Fully compatible with our field-deployed COSMOS software suite, Aries 6 incorporates the tribal knowledge we have acquired over the past four years by partnering with and enabling hyperscalers to deploy AI infrastructure in the cloud.
Aries 6 also enables a seamless upgrade path from current PCIe Gen 5 based platforms to next-generation PCIe Gen 6 based platforms for our customers. With Aries 6, we demonstrated the industry's lowest power at 11 watts at Gen 6 in a full 16-lane configuration running at 64 gigabits per second per lane, significantly lower than our competitors and even lower than our own Aries Gen 5 Retimer. Through collaboration with leading providers of GPUs and CPUs such as AMD, ARM, Intel, and NVIDIA, Aries 6 is being rigorously tested at Astera's cloud-scale interop lab and in customers' platforms to minimize interoperation risk, lower system development costs, and reduce time to market.
Aries 6 was demonstrated at NVIDIA's GTC event during the week of March 18. Aries 6 is currently sampling to leading AI and cloud infrastructure providers, and we expect initial volume ramps to begin in 2025. We also announced the introduction and sampling of our Aries PCIe and CXL Smart Cable Modules for active electrical cables, or AECs, to support robust, long-reach copper cable connectivity of up to 7 meters. This is 3x the standard reach defined in the PCIe spec.
Our new PCIe AEC solution is designed for GPU clustering applications by extending PCI backend fabric deployment to multiple racks. This new Aries product category expands our market opportunity from within the rack to across racks. As with our entire product portfolio, Aries Smart Cable modules support our COSMOS software suite to deliver a powerful yet familiar array of link monitoring, fleet management, and DRaaS tools, which are customizable for diverse needs of our hyperscaler customers.
We leveraged our expertise in silicon, hardware, and software to deliver a complete solution in record time, and we expect initial shipments to begin later this year for the PCIe AECs. We believe this new Aries product announcement represents another concrete example of Astera Labs driving the PCIe ecosystem with technology leadership with an intelligent connectivity platform that includes silicon chips, hardware modules, and COSMOS software suites.
Over the coming quarters, we anticipate ongoing generational product upgrades to existing product lines and introduction of new product categories developed from the ground up to fully utilize the performance and productivity capabilities of generative AI.
In summary, over the past few years, we have built a great team that is delivering technology that is foundational to deploying AI infrastructure that scales. We have gained the trust and support of our world-class customer base by executing, innovating, and delivering to our commitments. These tight relationships are resulting in new product developments and enhanced technology road map for Astera. We look forward to continued collaboration with our partners as a new era unfolds driven by AI applications.
With that, I will turn the call over to our CFO, Mike Tate, who will discuss our Q1 financial results and Q2 outlook.
Michael Tate - Chief Financial Officer
Thanks, Sanjay, and thanks to everyone for joining. This overview of our Q1 financial results and Q2 guidance will be on a non-GAAP basis. The primary difference in Astera's non-GAAP metrics is stock-based compensation and related income tax effects. Please refer to today's press release, available on the Investor Relations section of our website, for more details on both our GAAP and non-GAAP Q2 financial outlook as well as a reconciliation of the GAAP to non-GAAP financial measures presented on this call.
For Q1 of 2024, Astera Labs delivered record quarterly revenue of $65.3 million, which was up 29% versus the previous quarter and 269% higher than the revenue in Q1 of 2023. During the quarter, we shipped products to all the major hyperscalers and AI accelerator manufacturers.
We recognized revenues across all three of our product families during the quarter with Aries products being the largest contributor. Aries enjoyed solid momentum in AI-based platforms as customers continued to introduce and ramp their PCIe Gen 5 capable AI systems, along with overall strong unit growth with the industry's growing investment in generative AI. Also, we continue to make good progress with our Taurus and Leo product lines, which are in the early stages of revenue contribution.
In Q1, Taurus revenues were primarily shipping into 200 gig Ethernet-based systems, and we expect Taurus revenues to sequentially track higher as we progress through 2024 as we also begin to ship into 400 gig Ethernet-based systems. Q1 Leo revenues were largely from customers purchasing preproduction volumes for the development of their next-generation CXL capable compute platforms expected to launch late this year with the next server CPU refresh cycle.
Q1 non-GAAP gross margin was 78.2%, up 90 basis points compared with 77.3% in Q4 2023. The positive gross margin performance during the quarter was driven by a healthy product mix. Non-GAAP operating expenses for Q1 were $35.2 million, up from $27 million in the previous quarter. Within non-GAAP operating expenses, R&D expense was $22.9 million, sales and marketing expense was $6 million, and general and administrative expenses were $6.3 million.
Non-GAAP operating expenses during Q1 increased largely due to a combination of increased headcount and incremental costs associated with being a public company. The largest delta between non-GAAP and GAAP operating expenses in Q1 was stock-based compensation recognized in connection with our recent IPO and its associated employer payroll taxes, and to a lesser extent, our normal quarterly stock-based compensation expense.
Non-GAAP operating margin for Q1 was 24.3% as revenue scaled in proportion with our operating expenses on a sequential basis. Interest income in Q1 was $2.6 million. Our non-GAAP tax provision was $4.1 million for the quarter, which represents a tax rate of 22% on a non-GAAP basis. Pro forma non-GAAP fully diluted share count for Q1 was 147.5 million shares. Our pro forma non-GAAP diluted earnings per share for the quarter was $0.10.
The pro forma non-GAAP diluted shares includes the assumed conversion of our preferred stock for the entire quarter while our GAAP share count only includes the conversion of our preferred stock for the stub period following our March IPO. Going forward, given that all the preferred stock has now been converted to common stock upon our IPO, those preferred shares will be fully included in the share count for both GAAP and non-GAAP.
Cash flow from our operating activities for Q1 was $3.7 million, and we ended the quarter with cash, cash equivalents, and marketable securities of just over $800 million.
Now turning to our guidance for Q2 of fiscal 2024, we expect Q2 revenues to increase from Q1 levels within a range of 10% to 12% sequentially. We believe our Aries product family will continue to be the largest component of revenue and will be the primary driver of sequential growth in Q2.
Within the Aries product family, we expect the growth to be driven by increased unit demand for AI servers as well as the ramp of new product designs with our customers. We expect non-GAAP gross margins to be approximately 77%, given a modest increase in hardware shipments relative to standalone ICs.
We believe as our hardware solutions grow as a percentage of revenue over the coming quarters, our gross margins will begin to trend towards our long-term gross margin model of 70%. We expect non-GAAP operating expenses to be approximately $40 million as we remain aggressive in expanding our R&D resource pool across headcount and intellectual property while also scaling our back office functions.
Interest income is expected to be $9 million. Our non-GAAP tax rate should be approximately 23%, and our non-GAAP fully diluted share count is expected to be approximately 180 million shares. Adding this all up, we are expecting non-GAAP fully diluted earnings per share of approximately $0.11.
This concludes our prepared remarks. Once again, we very much appreciate everyone joining the call and now we open the line for questions. Operator?
Operator
(Operator Instructions) Harlan Sur, JPMorgan.
Harlan Sur - Analyst
Good afternoon, and congratulations on the strong results and guidance post your first quarter as a public company.
As you guys mentioned, many new AI XPU programs coming to the market, GPU, ASIC AI chip programs, accelerators. In terms of total XPU shipments this year, I think only half is going to be NVIDIA based, so it is starting to broaden out. The good news is, obviously, the Astera team has exposure to all of these XPU programs. It does seem that the pace of deploying these XPU platforms has accelerated even over the past few months.
正如你們所提到的,許多新的AI XPU程式即將上市,GPU、ASIC AI晶片程式、加速器。就今年 XPU 的總出貨量而言,我認為只有一半是基於 NVIDIA 的,因此它開始擴大。顯然,好消息是 Astera 團隊已經接觸過所有這些 XPU 程式。即使在過去的幾個月裡,部署這些 XPU 平台的步伐似乎確實有所加快。
So how much of the strong results and guidance is due to this acceleration and broadening in customer deployments? How much is just higher Retimer content versus your prior expectations? And then do you guys see the strong momentum continuing into the second half of this year?
那麼,強勁的業績和指導有多少是因為客戶部署的加速和擴大?與您先前的期望相比,重定時器的內容更高多少?那麼你們認為這種強勁勢頭會持續到今年下半年嗎?
Michael Tate - Chief Financial Officer
Michael Tate - Chief Financial Officer
Thanks, Harlan. This is Mike. We started shipping into AI servers really in Q3 of last year, so we're just in the early innings. A lot of our customers have not fully deployed their AI systems. So we're seeing incremental growth just from adding on to the different platforms that we have design wins in, but it's in a backdrop where there's clearly growing investment in AI as well. So overall unit growth is also playing out.
謝謝,哈倫。這是麥克。我們從去年第三季開始真正進入人工智慧伺服器,所以只是在早期階段。我們的許多客戶尚未完全部署他們的人工智慧系統。因此,我們看到,僅僅透過添加我們在設計上獲勝的不同平台就可以實現增量成長,但這是在人工智慧投資明顯增加的背景下進行的。因此,整體單位的成長也在發揮作用。
As we look at the balance of this year, there are still a lot of programs that have not ramped yet. So we have high confidence that the Gen 5 Aries platform has a lot of growth ahead of it, and that continues into 2025 and 2026 as well.
當我們回顧今年的剩餘時間時,仍然有許多計劃尚未啟動。因此,我們非常有信心第五代 Aries 平台將迎來巨大的成長,而這種成長也將持續到 2026 年至 2025 年。
Harlan Sur - Analyst
Harlan Sur - Analyst
I appreciate that. And as you mentioned, there's been a lot of focus on next-gen PCIe Gen 6 platforms, obviously, with the rollout of NVIDIA's Blackwell based platform. And, obviously, with any market that is viewed as fast growing, you are going to attract competitors. We have seen some announcements by competitors. We know most of the Gen 5 design wins have already been locked up by the Astera team. You've been working with customers, as you mentioned, on Gen 6 for some time now. Maybe how do you compare the customer engagement momentum on Gen 6 versus the same period back when you were working with customers on Gen 5?
我很感激。正如您所提到的,隨著 NVIDIA 基於 Blackwell 的平台的推出,下一代 PCIe Gen 6 平台受到了很多關注?而且,顯然,任何被認為快速成長的市場都會吸引競爭對手。我們看到了一些競爭對手的公告。我們知道大部分第五代設計勝利已經被 Astera 團隊鎖定。正如您所提到的,您已經在 Gen 6 上與客戶合作一段時間了。或許您如何比較第 6 代的客戶參與勢頭與您在第 5 代上與客戶合作時的同期情況?
Sanjay Gajendra - President, Chief Operating Officer, Director
Sanjay Gajendra - President, Chief Operating Officer, Director
Good question, Harlan. It's Sanjay here. Let me take that. So like you correctly said, Gen 5 still has a lot of legs on it. Let's be really clear on that. Like Mike noted, we do have platforms that are still ramping and still to come. So to that standpoint, we do expect Gen 5 to be with us for some time.
好問題,哈倫。我是桑傑。讓我來吧。正如您所說,第五代仍然有很多優勢。讓我們明確一點。正如麥克指出的那樣,我們確實擁有仍在不斷發展和未來的平台。因此,從長遠來看,我們確實預計 Gen 5 會陪伴我們一段時間。
And in terms of Gen 6, again, it's driven by the pace of innovation that's happening on the AI side. As you probably know, GPUs are not fully utilized; some reports put it at around 50%. So there's still a lot of growth in terms of connectivity, which is essentially holding it back, right? Meaning there is a pace and a need to adopt faster speeds and links.
就第六代而言,它再次受到人工智慧方面創新步伐的推動。您可能知道,GPU 並未充分利用。一些報告稱這一比例約為 50%。因此,連接性方面仍然有很大的增長,這基本上阻礙了它,對吧?這意味著採用更快的速度和連結是有節奏和需要的。
So with NVIDIA announcing their Blackwell platform, those are the first set of GPUs that have Gen 6 on them. And so to that standpoint, we do expect some of those deployments to happen in 2025. But in general, others are not far behind, based upon the public information that's out there. So we do expect the cycle time for Gen 6 adoption to perhaps be a little bit shorter than Gen 5, especially on the AI server application, more so than general-purpose compute, which is still going to be lagging when it comes to PCIe Gen 6 adoption.
因此,隨著 NVIDIA 宣布推出 Blackwell 平台,這些是第一批搭載第六代 GPU 的 GPU。因此,從長遠來看,我們確實預計其中一些部署將在 2025 年進行。但總的來說,根據現有的公開訊息,其他人也不落後。因此,我們確實預期第 6 代採用的周期時間可能會比第 5 代短一點,特別是在人工智慧伺服器應用程式上,或者比通用運算更是如此,而通用運算在 PCIe 方面仍然會落後第6代採用。
Operator
Operator
Joe Moore, Morgan Stanley.
喬摩爾,摩根士丹利。
Joe Moore - Analyst
Joe Moore - Analyst
Great, thank you. Following on from that, can you talk about PCI Gen 5 in general-purpose servers? It seems like if I look at the CPU penetration of Gen 5, we're still at a pretty early stage. Do you see growth from general purpose, and what are the applications driving that?
太好了謝謝。接下來,您能談談通用伺服器中的 PCI Gen 5 嗎?如果我看看第五代的 CPU 滲透率,我們似乎仍處於相當早期的階段。您是否看到了通用用途的成長?
Sanjay Gajendra - President, Chief Operating Officer, Director
Sanjay Gajendra - President, Chief Operating Officer, Director
Absolutely. Primarily on general-purpose compute, the main place where the PCIe Retimer gets used tends to be on the storage connectivity, where you have SSDs that are on the back of the server. So to that standpoint, there are two things, or perhaps three, that have been holding it back. One is just the focus on AI. I mean, most of the dollars are going to the AI server application compared to general compute.
絕對地。主要在通用運算中,使用 PCIe 計時器的主要位置往往是在伺服器背面有 SSD 的儲存連接上。因此,從這個角度來看,有兩件事或三件事阻礙了它的發展。一是只關注人工智慧。我的意思是,與通用運算相比,大部分資金都流向了人工智慧伺服器應用程式。
The second thing is just the ecosystem readiness for Gen 5, primarily on the SSD side, which is starting to evolve with many of the major SSD and NVMe players providing or ramping up Gen 5 based NVMe drives. The third one really has been the CPU platforms. If you think about it, both Intel and AMD are on the cusp of introducing their next significant platforms, whether it is Granite Rapids from Intel or Turin from AMD. So that is expected to drive the introduction of new platforms.
第二件事是第 5 代生態系統的準備情況,主要是在 SSD 方面,隨著許多主要 SSD 和 NVMe 廠商提供或增強基於第 5 代的驅動器和 NVME 驅動器,該生態系統已開始發展。第三個確實是 CPU 平台。如果你仔細想想,英特爾和 AMD 都處於推出下一個重要平台的風口浪尖,無論是英特爾的 Granite Rapids 還是 AMD 的 Turin。因此,預計這將推動新平台的推出。
And if you combine that with the SSDs being ready for Gen 5, and based on the design wins that we already have, you can expect that those things will be a contributing factor as the dollars start flowing back into the general-purpose compute side.
如果將其與為第 5 代做好準備的 SSD 結合起來,並基於我們已經取得的設計成果,您可以預期,隨著美元開始回流到計算端、通用計算,這些因素將成為一個促成因素。 。
Joe Moore - Analyst
Joe Moore - Analyst
Great, thank you. And for my follow-up, you just mentioned Granite Rapids and Turin, which are the first volume platforms supporting CXL 2.0. I know the CPUs will be out, but what will be the initial adoption, and how quickly do you think that technology can roll out in 2025?
太好了謝謝。對於我的後續內容,您剛才提到了 Grand Rapids 和 Turin,它們是第一個支援 CXL 2 的批量平台。您聽說 CPU 將會問世,但最初採用的時間是什麼時候?
Sanjay Gajendra - President, Chief Operating Officer, Director
Sanjay Gajendra - President, Chief Operating Officer, Director
Yes, let me start off by saying that every hyperscaler is, in some shape or form, evaluating and working with CXL. So it's alive and well. I think where the focus really has been in terms of CXL is on the memory expansion use case, specifically for CPUs. And the expansion could be for reasons like adding more memory capacity for large database applications.
是的,讓我先說,CXL。每個超大規模企業都以某種形式正在評估和使用該技術。所以它很好而且還活著。我認為 CXL 的真正焦點在於記憶體擴充用例,特別是 CPU。擴展可能是出於為大型資料庫應用程式添加更多記憶體、更多容量、記憶體等原因。
And the second use case, of course, is for more memory bandwidth, which is for HPC types of applications. So the thing that's been holding it back is the availability of CPUs that support CXL at a production quality level; that will change with Granite Rapids and Turin being available. So at this point, what we can say is that we've been providing chips for quite some time.
當然,第二個用例是用於 HPC 類型應用程式的更多記憶體頻寬。因此,一直阻礙的事情是支援 CXL 的 CPU 的可用性,其生產品質水準將隨著 Granite Rapids 和 Turin 的推出而改變。所以在這一點上,我們可以說的是,我們提供晶片已經有一段時間了。
We've been in preproduction and have supported the various evaluation and POC types of activities that have happened with our hyperscaler customers. So to that standpoint, we do expect revenue to start coming in in 2025 from the memory expansion use case for CXL.
我們一直在進行預生產,並支援與我們的超大規模客戶進行的各種不同的評估、POC 類型的活動。因此,從這個角度來看,我們確實預期 CXL 的記憶體擴充用例將於 2025 年開始產生收入。
Operator
Operator
Tore Svanberg, Stifel.
托雷‧思文凱 (Tore Svanberg)、史蒂菲爾 (Stifel)。
Tore Svanberg - Analyst
Tore Svanberg - Analyst
Yes, thank you. And let me add my congratulations. My first question is on Gen 6 PCIe. So Sanjay, you just mentioned that the design-in cycle is going to be shorter than Gen 5, since it's backwards compatible with your Gen 5, and especially given the COSMOS software platform. Should we assume that you will basically retain most of those sockets that you already had in Gen 5, and then, obviously, win some new ones as well for Gen 6?
是的,謝謝。讓我表達我的祝賀。我的第一個問題是關於第 6 代 TCI。Sanjay,您剛才提到,現在的設計週期將比 Gen 5 短,因為它向後兼容 Gen 5,特別是考慮到 COSMOS 軟體平台,我們是否應該假設您基本上會保留大部分插槽你已經在第五代了嗎?顯然,有人也想要第六代?
Sanjay Gajendra - President, Chief Operating Officer, Director
Sanjay Gajendra - President, Chief Operating Officer, Director
That's the goal for the company. We have the COSMOS software, and like I noted, PCI Express is one of those protocols which, unlike Ethernet, tends to be a little messy, meaning it's something that's been around for a long time and is great technology, but it also requires a lot of handholding. And for us, what has happened is that being in the customers' platforms, bringing up our systems that ramp up to millions of devices, has allowed us to understand what the nuances are, what works, what doesn't work, how you make the link perform at the highest rate.
這就是公司的目標。我們有COSMOS 軟體,就像我提到的PCI Express 一樣,它是這些協定之一,與乙太網路不同,它往往有點混亂,這意味著它已經存在很長時間了,是一項偉大的技術,但它也需要很多對我們來說,發生的事情是在客戶的平台中,在我們的系統中引入數百萬台設備,這使我們能夠了解其中的細微差別,什麼有效,什麼無效,如何做您使連結以最高速率執行。
So that tribal knowledge is something that we have captured within the COSMOS software that we've built, running both on our chips as well as on customers' platforms. So we do expect that as Gen 6 starts to materialize, a lot of those learnings will be carried over. You're right that there's been a lot of competition that has come in as well.
因此,我們在 COSMOS 軟體中捕獲了部落知識,該軟體在我們的晶片和客戶平台上運行。因此,我們確實預計,隨著第六代開始實現,其中的許多經驗將被傳承下去。你是對的,也出現了很多競爭。
But we believe that, when it comes to competition, they could have a similar product like ours, but no matter what, there is flight time that's essential when it comes to connectivity types of chips, just given the interoperation and getting the kinks out and so on. Meaning you could have a perfect chip yet have a failing system.
但我們相信,在競爭方面,他們可以擁有像我們一樣的類似產品,但無論什麼,在連接類型的晶片方面,全職時間都是必不可少的,只要考慮到互通性和讓國王出局等等在。這意味著您可能擁有完美的晶片,但係統卻出現故障。
The reason for that is the complexity of the system and how the PCI Express standard is defined. So to that standpoint, I agree with what you said, in the sense that we have the leading position now in the Retimer market for PCIe, and we expect to build on that, both with the new features we have added in the PCIe Gen 6, or Aries 6, product line and also with the tribal knowledge that we have built by working with our partners over the last three, four years.
原因是系統的複雜性以及 PCI Express 標準的定義方式 因此,從這個角度來看,我同意您所說的,因為我們現在在 PCIe 的重定時器市場中處於領先地位,並且我們希望建立這既包括我們在PCI Gen 6 或Aries 6 產品線中新增的功能,也包括我們在過去三、四年與合作夥伴合作建立的部落知識。
Tore Svanberg - Analyst
Tore Svanberg - Analyst
That's a great perspective. And as my follow-up, I had a question on AEC. It sounds like that business is going to start ramping late this year. First of all, is that with multiple cable partners? And then related to that, are you the only company today that has an AEC at 7 meters?
這是一個很棒的觀點。作為我的後續行動,我有一個關於 AEC 的問題。聽起來這項業務將於今年稍後開始成長。首先,是與多個有線電視合作夥伴的合作。與此相關的是,你們是當今唯一一家擁有 7 公尺 AEC 的公司嗎?
Sanjay Gajendra - President, Chief Operating Officer, Director
Sanjay Gajendra - President, Chief Operating Officer, Director
I don't know about being the only one; I would probably request that you do some research on where the competition is. But from a time-to-market standpoint, we do have a leading position. So based on that and the customer traction that we're seeing, I would imagine that we are the main provider here.
我不知道唯一的客戶,我可能會要求您可能需要對競爭的情況進行一些研究,但從這三個永恆的時間點來看,我們確實擁有領先地位。因此,基於此,我想我們是這裡的主要供應商,無論是基於這一點還是基於我們所看到的客戶吸引力。
So this one is an interesting use case. So far PCI Express, as you know, was defined to be inside the server, but what is happening now, and this is why we're excited about PCIe AECs, is that we are opening up a new front in terms of clustering GPUs, meaning interconnecting accelerators. That is where the AECs will play. And that is a new opportunity that goes along with the Ethernet AECs that we already provide, which are also used for interconnecting GPUs on the backend network.
所以這是一個有趣的用例。到目前為止,如您所知,PCI Express 被定義為在伺服器內部,但現在發生的事情,這就是我們對 PCIe 感到興奮的原因,AEC 是我們現在正在農場叢集方面開闢一個新的前沿,GPU,意思是互連加速器。這就是 AEC 將發揮作用的地方。這是一個新的機會,與我們已經提供的乙太網路 AEC 一起出現,它也用於互連後端網路上的 GPU。
So overall, we do believe that by combining our PCIe AEC solution and Ethernet AEC solution, we are well set for some of these evolving trends, and we expect revenue to start coming in in the latter half of this year.
因此,總的來說,我們確實相信,透過結合我們的PCIe AEC 解決方案和乙太網路AEC 解決方案,我們已經為其中一些不斷發展的趨勢和我們的收入做好了準備,我們預計將在今年下半年開始收入。
And on PCIe, we do believe we are the only one. Just to make sure I clarify what I initially said: I don't know if there is someone else working on it that's not yet in the public domain.
在 PCIe 上,我們是否可以相信我們是唯一的,只是為了確保我澄清我最初所說的只是我不知道是否有其他人在談論它,這還沒有進入公共領域。
Operator
Operator
Blayne Curtis, Jefferies.
布萊恩·柯蒂斯,杰弗里斯。
Blayne Curtis - Analyst
Blayne Curtis - Analyst
Hey, good afternoon. Thanks for taking my questions. Maybe first one for you, Jitendra. Just curious, you mentioned the AI architectures. I think Harlan asked on it; I was just kind of curious about, obviously, the lead customer and a lot of the CPU-to-GPU connections; that's the nature of the market and who has the volume. But I'm curious, you mentioned the backend fabrics a bunch. Kind of curious, is that still conceptual? Are you seeing designs for it? And maybe just talk about the widening out of the applications the Retimers are being used for?
嘿,下午好。感謝您回答我的問題。也許第一個適合你,Jitendra。只是好奇,你提到了正確的架構。我認為哈蘭問我只是對主要客戶和大量的 CPU、GPU 連接感到好奇,這就是擁有數量的市場的本質。但我很好奇,你剛剛提到了後端布料一堆。有點好奇,這還是概念性的嗎?你看到它的設計了嗎?也許只是談論零售商用途的應用範圍擴大?
Jitendra Mohan - Chief Executive Officer, Director
Jitendra Mohan - Chief Executive Officer, Director
Great question. And so there are many applications where we use Retimers. Of course, we are most known for the connectivity from the GPU to the head node. That is where a lot of the deployments are happening but these new applications also speak to how rapidly the AI systems are evolving. And every few months we see a new AI platform come up and that opens up additional opportunities for us. And one of those is to cluster GPUs together.
很好的問題。因此,我們在許多應用程式中都使用重定時器。當然,我們最熟悉的是從 GPU 到頭節點的連接。這是許多部署發生的地方,但這些新應用程式也說明了人工智慧系統的發展速度有多快。每隔幾個月,我們就會看到一個新的人工智慧平台出現,這為我們帶來了更多機會。其中之一是將 GPU 叢集在一起。
In addition to NVLink, of course, there are two main protocols that are used to cluster GPUs: PCI Express and Ethernet. And as Sanjay just mentioned, we now have solutions available to interconnect GPUs together, whether over PCI Express or Ethernet. But specifically in the case of PCI Express, some of our customers who want to use PCI Express for clustering GPUs together are now able to do so by using our PCI Express Retimers, which are offered in the form of active electrical cables.
當然,除了 NVLink 之外,還有兩種主要協定用於叢集、GPU,即 PCI Express 和乙太網路。正如 Sanjay 剛才提到的,我們現在擁有可將 GPU 互連在一起的解決方案,無論它們是用於 PCI Express 還是我們的乙太網路。但特別是在 PCI Express 的情況下,我們的一些想要使用 PCI Express 進行叢集和 GPU 的客戶現在可以透過使用我們的 PCI Express 重定時器(以有源電纜的形式提供)來實現這一點。
So this business is going to be in addition to the sustaining business that we have today in connecting GPUs to head nodes. Now we are connecting GPUs together in a cluster. And as you know, these are very intense, very dense mesh connections, so they can grow very, very rapidly. So we're very excited about where this will grow, starting with some revenue contributions late this year.
因此,這項業務將成為我們今天連接 GPU 和節點的維持業務的補充。現在我們將 GPU 連接到一個叢集中。如您所知,這些是非常密集、非常密集的網狀連接,因此它們可以非常非常快速地增長。因此,我們對這項業務的成長感到非常興奮,從今年稍後的一些收入貢獻開始。
Blayne Curtis - Analyst
Blayne Curtis - Analyst
And then maybe a question for Mike. The gross margin remains quite high; you said it was mix. I mean, maybe you're just being kind of conservative with the IPO, but I was just kind of curious, did the mix come in? I mean, I think it's mostly Retimers, I know, whereas when the other products start to ramp, that will be the headwind. How do you think about the rest of the year? Should margins just come down gradually with mix, as those new products ramp, off the [77%] that you're guiding to?
然後也許有個問題要問麥克。毛利率仍然很高,你說這是混合的。我的意思是,也許你只是對首次公開募股持保守態度,但我只是有點好奇,混合進來了嗎?我的意思是,我認為主要是零售商,我知道,而其他產品開始增加,這將是逆風。我只是想問一下,您如何看待今年剩下的時間,隨著這些新產品從您指導的 [77%] 逐步下降,我們是否應該逐漸進行混合?
Michael Tate - Chief Financial Officer
Michael Tate - Chief Financial Officer
Yes. So just to remind everybody, our standalone ICs carry a pretty high margin relative to our hardware solutions. So when the mix gets a little more balanced between hardware and standalone ICs, we're expecting our gross margins to trend to our long-term model of 70%. In Q1, we were heavily weighted to standalone ICs, a very favorable mix, and that's how we enjoyed strong gross margins.
是的。所以提醒大家,我們的獨立 IC 相對於我們的硬體解決方案具有相當高的利潤。因此,當硬體與獨立 IC 的組合更加平衡時,我們預計我們的長期毛利率將達到[10% 至 70%]。在第一季度,我們專注於獨立 IC,這是一個非常有利的組合,這就是我們享受強勁毛利率的原因。
As we go through the balance of this year and into next year, we will see an increasing mix of our modules and also add-in cards for CXL as well. So we think we'll have a gradual trend down towards our long-term model over time as that mix changes.
當我們度過今年和明年的餘下時間時,我們將看到我們的模組以及 CXL 附加卡的組合不斷增加。因此,我們認為,隨著時間的推移,隨著這種組合的變化,我們將逐漸向長期模型傾斜。
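Mike's mix-shift point, that the blended gross margin drifts from roughly 77% toward the 70% long-term model as hardware grows, is just a weighted average and can be sketched as below. The per-category margins here are hypothetical placeholders (the call states only the blended 77% and the 70% long-term model, not category-level margins), chosen purely to illustrate the mechanic.

```python
def blended_gross_margin(ic_revenue_share, ic_margin=0.80, hw_margin=0.55):
    """Weighted-average gross margin for a two-category revenue mix.

    ic_revenue_share: fraction of revenue from standalone ICs (0..1);
    the remainder is hardware (modules, add-in cards).
    The per-category margins are illustrative assumptions, not company figures.
    """
    hw_revenue_share = 1.0 - ic_revenue_share
    return ic_revenue_share * ic_margin + hw_revenue_share * hw_margin

# As the hardware share of revenue grows, the blend drifts down even though
# neither category's own margin changes.
for ic_share in (0.90, 0.75, 0.60):
    print(f"IC share {ic_share:.0%} -> blended margin {blended_gross_margin(ic_share):.1%}")
```

Under these assumed margins, a 90/10 IC-heavy mix blends to about 77.5%, while a 60/40 mix lands at the 70% long-term model, which is the dynamic Mike describes.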
Operator
Operator
Thomas O'Malley, Barclays.
托馬斯·奧馬利,巴克萊銀行。
Thomas O'Malley - Analyst
Thomas O'Malley - Analyst
Hey, guys. Thanks for taking my question. Mike, I just wanted to ask, I know you may not be giving segment detail specifically, but could you talk about what contributed to the revenue in the quarter?
大家好。感謝您提出我的問題。麥克,我只是想問一下,我知道您可能不會具體提供細分市場的詳細信息,但您能否談談您對本季度收入的貢獻。
And then looking out into June, could you talk about, from a revenue mix perspective, maybe some sequential help on what's growing? Obviously, the non-IC business is growing, just given the fact that gross margins are pressured a bit. But just any color on the segments would be helpful to start.
然後展望六月,您能否從收入組合的角度談談,也許對成長有一些連續的幫助?顯然,非IC業務在毛利率受到一定壓力的情況下仍在成長。但是片段上的任何顏色都會有助於開始嗎?
Michael Tate - Chief Financial Officer
Michael Tate - Chief Financial Officer
Sure. So as I mentioned, we started shipping into AI server platforms in volume in Q3, and a lot of our customers are still in ramp mode on the platforms we've been shipping into over the past couple of quarters. But there are still a lot of designs that have not yet begun to ramp. So we're still in the early phases, and as we look out in time, we see the Gen 5 piece of it in AI continuing to grow into next year as well.
當然。正如我所提到的,我們在第三季度開始大量向人工智慧伺服器平台發貨,而且我們的許多客戶仍處於斜坡模式,就我們過去幾個季度的發貨量而言。但仍然有很多設計讓你開始升級。因此,我們仍處於早期階段,隨著時間的推移,我們看到人工智慧中的第五代部分也將繼續發展到明年。
So as you look into Q2, the growth that we're guiding to is still largely driven by the Aries Gen 5 deployment in AI servers, both from existing platforms with increased volumes and from new customers beginning their ramps as well.
因此,當您展望第二季時,我們所引導的成長仍然主要由 AI 伺服器中的 Aries Gen 5 部署推動,既適用於數量增加的現有平台,也適用於新客戶開始的爬坡期。
Thomas O'Malley - Analyst
Thomas O'Malley - Analyst
Helpful. And then just a broader one. In talking with NVIDIA there, they're referencing the GB200 architecture becoming a bigger percent of the mix, NVLink 72 being more of the deployments that hyperscalers are taking. When you look at the Hopper architecture versus the Blackwell architecture and their NVL72 platform, where they're using NVLink amongst their GPUs.
有幫助。然後是更廣泛的一個。在與 NVIDIA 交談時,他們提到 GP-200 架構將在混合架構中佔據更大的比例,而 NVLink 72 則更成為超大規模企業正在採用的部署。當您查看 Hopper 架構與 Blackwell 架構及其 NV72 平台時,他們在 GPU 中使用 NVLink。
Can you talk about the puts and takes when it comes to your retiming products? Do you see an attach rate that's any different than the current generation?
您能談談您的重定時產品的投入和支出嗎?您認為附加率與當前世代有什麼不同嗎?
Jitendra Mohan - Chief Executive Officer, Director
Jitendra Mohan - Chief Executive Officer, Director
Let me take that question. First, let me say that we are just at the beginning phases; we will continue to see new architectures being produced by AI platform providers at a very rapid pace, just to match up with the growth in AI models. And on top of that, we'll see innovative ways that hyperscalers will deploy these platforms in the cloud.
讓我來回答這個問題。首先,我要說的是,我們正處於起步階段,我們將繼續看到人工智慧平台供應商以非常快的速度生產新架構,與人工智慧模型的成長相符。最重要的是,我們將看到超大規模企業在雲端部署這些平台的創新方式。
So as these architectures evolve, so do the connectivity challenges. Some challenges are going to be incremental, and some are going to be completely new. And so what we believe is, given the increasing speeds and complexities of these new platforms, we do expect our dollar content per AI platform to increase over time. We see these developments providing us good tailwinds going into the future.
因此,隨著這些架構的發展,連結挑戰也隨之增加。有些挑戰將是漸進的,有些挑戰將是全新的。因此,我們相信,鑑於這個新平台的給定速度增加了複雜性,我們確實預計每個人工智慧平台的美元含量會隨著時間的推移而增加。我們看到這些發展,為我們走向未來提供了良好的推動力。
So now to your question about the GB200 specifically. First of all, we cannot speak about specific customer architectures, but here is something that is very clear to see: as the AI platform providers produce new architectures, the hyperscalers will choose different form factors to deploy them.
現在具體回答您關於 GP-200 的問題。首先,我們不能談論特定的客戶架構,但從我們的廣告中可以清楚地看到廣告平台提供者產生新的架構,超大規模企業將選擇不同的外形尺寸來部署它們。
And in that way, no two clouds are the same. Each hyperscaler has unique requirements, unique constraints, to deploy these AI platforms, and we are working with all of them to enable these deployments. This combination of new platforms and very cloud-specific deployment strategies presents great opportunities for our PCIe connectivity portfolio.
這樣一來,就沒有兩朵雲是相同的了。每個超大規模企業都有獨特的要求,以及部署這些人工智慧平台的獨特約束,我們正在與他們所有人合作實現這些部署。這種新平台和非常特定於雲端的部署策略的組合。它為我們的 PCI 連接產品組合提供了巨大的機會。
And to that point, as Sanjay mentioned, we announced the sampling of our Gen 6 Retimer during GTC. If you look at our press release, you will see that we have broad support from AI platform providers. And to this day, to the best of our knowledge, we are still the only one sampling Gen 6 solutions. So on the whole, the speeds are increasing, complexity is increasing, and in fact, the pace of innovation is going up as well. These all play to our strengths, and we have customers coming to us for new approaches to solve these problems. So we feel very good about the potential to grow our PCIe connectivity business.
就這一點而言,正如 Sanjay 所提到的,我們在 GTC 期間宣布了第 6 代重定時器的樣品。如果您查看我們的新聞稿,您會看到我們得到了人工智慧平台提供者的廣泛支援。直到今天,據我們所知,我們仍然是唯一提供抽樣解決方案的機構。總的來說,鑑於速度不斷提高,複雜性不斷增加,事實上,創新的步伐也在加快。這些都發揮了我們的優勢,並且有客戶向我們尋求解決這些問題的新方法。因此,我們對 PCI 連接業務的成長潛力感到非常滿意。
Operator
Operator
Quinn Bolton, Needham.
奎因·博爾頓,李約瑟。
Quinn Bolton - Analyst
Quinn Bolton - Analyst
Hey. Let me offer my congratulations on the nice results and outlook. I just wanted to follow up on the use of PCI Express in the GPU back-end networks. I think that's something historically you had excluded from your TAM, but it looks like it's becoming an opportunity here and starting to ramp in the second half of this year. Wondering if you could just talk about the breadth of some of the custom AI accelerators that are choosing PCI Express as their interconnect over, say, Ethernet? And then I've got a follow-up.
嘿。讓我對良好的結果和前景表示祝賀,我只是想跟進 PCI Express 在 GPU(GPU 後端網路)中的使用。我認為這在歷史上是您從 TAM 中排除的東西,但現在看來它正在成為機遇,並在今年下半年開始增加。想知道您是否可以談談一些選擇 PCI Express 作為其互連(例如乙太網路)的客製化 AI 加速器的廣度?然後我有一個後續行動。
Jitendra Mohan - Chief Executive Officer, Director
Jitendra Mohan - Chief Executive Officer, Director
Again, good question. So just to follow up on the response that we provided before, there are a few key protocols that are used to cluster GPUs together. The one that's most well-known, of course, is NVLink, which is NVIDIA's proprietary interface. The other two are Ethernet and PCI Express.
再說一遍,好問題。只是為了跟進我們之前提供的回應。有兩個關鍵的主導協定用於將 GPU 叢集在一起。當然,最著名的是 NVLink,這是 NVIDIA 管理的專有介面。另外兩個是乙太網路和 PCI Express。
We do see some of our customers using PCI Express, though it would not be appropriate to say who. But certainly PCI Express is a fairly common protocol. It is the one that's natively found on GPUs and CPUs and other data center components; Ethernet is also very popular. And to the extent that a particular customer chooses Ethernet or PCI Express, we are able to support them both with our solutions, the Aries PCIe Retimer family as well as the Taurus Ethernet Retimer family. We do expect these to make meaningful contributions to our revenue.
我們確實看到一些客戶使用 PCI Express,但不方便透露是誰。但 PCI Express 無疑是相當常見的協定。它是我們的 GPU 和 CPU 以及其他資料中心元件中原生存在的一種,乙太網路也非常流行。如果特定客戶選擇了我們的 PCI Express,我們能夠透過我們的解決方案、Aries PCIe Retimer 系列以及 Taurus 乙太網路 Retimer 系列來支援它們。我們確實希望這些能為我們的收入做出有意義的貢獻。
As I mentioned, starting with the end of this year and then of course, continuing into next year.
正如我所提到的,從今年年底開始,然後當然會持續到明年。
Quinn Bolton - Analyst
Quinn Bolton - Analyst
Perfect. And my second question is, you guys have talked about the introduction of new products and new TAM expansion activity, and I'm not going to ask you to introduce them today. But just in terms of timing, as we think about these new products, is the timeline sort of an introduction later this year or in 2025, with a revenue ramp in 2026? Is that the general framework that investors should be thinking about for the new products that you've discussed?
完美的。我的第二個問題是,你們已經談到了新產品的推出,我們新的TAM擴展活動,我今天不會要求你們介紹它們。但就時間安排而言,正如我們對這些新產品的思考一樣,它的時間安排是在今年晚些時候或2025 年推出,並在2026 年實現收入增長,這是投資者應該考慮的新產品的總體框架。
Sanjay Gajendra - President, Chief Operating Officer, Director
Sanjay Gajendra - President, Chief Operating Officer, Director
Again, I think as a company, we don't talk about unreleased products or the timing of them. But what I can share with you is the following. First, we've been very fortunate to be in the front-row seat of AI deployment and to enjoy a great relationship with the hyperscalers and AI platform providers. So we get to see a lot, and we get to hear a lot in terms of some of the requirements.
再說一遍,我認為我們公司不會談論未發布的產品及其時機。但我可以與您分享的是以下內容。首先,我們非常幸運地處於人工智慧部署的前沿,並與超大規模企業和人工智慧平台供應商保持良好的關係。因此,我們在某些要求方面看到了很多,也聽到了很多。
So clearly we are going to be developing products that address the bottlenecks, whether on the data or network side or on the memory side. So we are working on the sorts of products that you can imagine would all be developed ground up for AI infrastructure, enabling connectivity solutions that will deploy AI applications sooner.
很明顯,我們將開發解決瓶頸的產品,無論是在資料側網路側還是在記憶體側。因此,我們正在開發一些你可以想像的產品,這些產品都是為人工智慧基礎設施而開發的,並支援更快部署人工智慧應用程式的連接解決方案。
There is a lot going on: a lot of new infrastructure, a lot of new GPU announcements, CPU announcements. So you can imagine, given the pace of this market and the changes that are upcoming, we do anticipate that this will all start having a meaningful impact and an incremental revenue impact on our business.
有很多事情正在發生,很多新的基礎設施,很多新的 GPU 公告,CPU 公告。因此,您可以想像,考慮到這個市場的步伐和即將發生的變化,我們確實預計這一切都將開始對我們的業務產生有意義的影響和增量收入影響。
Operator
Operator
Ross Seymore, Deutsche Bank.
羅斯·西莫爾,德意志銀行。
Ross Seymore - Analyst
Ross Seymore - Analyst
Hi guys. Thanks for [taking my] question. I wanted to go into the ASIC versus GPU side of things, as ASICs start to penetrate this market to a certain degree. How does that change, if at all, the Retimer TAM that you would have? And I guess even the competitive dynamic in that equation, considering one of the biggest ASIC suppliers is also an aspiring competitor of yours?
嗨,大家好。感謝您[提出]問題。我想深入了解 ASIC 與 GPU 方面的情況。隨著 ASIC 開始在一定程度上滲透到這個市場。如果有的話,這會如何改變您擁有的重定時器 TAM?我想即使是這個等式中的競爭動態,考慮到最大的 ASIC 供應商之一也是您有抱負的競爭對手?
Jitendra Mohan - Chief Executive Officer, Director
Jitendra Mohan - Chief Executive Officer, Director
So, great question again. Let me just refer back to what I said, which is that we will see more and more different solutions come to the market to address the evolving AI requirements. Some of them are going to be GPUs from the known AI providers like NVIDIA, AMD, and others. And some others will be custom-built ASICs that are typically built by hyperscalers, whether they are AWS or Microsoft or Google and others.
又是一個很好的問題。讓我回顧一下我所說的,我們將看到越來越多不同的解決方案進入市場,以滿足不斷變化的人工智慧需求。其中一些將是 NVIDIA、AMD 等知名人工智慧供應商的 GPU。其他一些將是客製化的 ASIC,通常由超大規模企業構建,無論是 AWS、微軟還是谷歌等。
And the requirements for these systems are common in some ways, but they do differ in, for example, what particular type of back-end connectivity is used and exactly what the ins and outs going to each of these chips are. The good news is the breadth of portfolio that we have and the close engagement with the several ASIC providers as well as the GPU providers.
這些系統的要求在某種程度上是共同的,但它們確實有所不同,例如,我們使用什麼特定類型的後端連接,每個晶片的詳細資訊到底是什麼。好消息是我們擁有廣泛的產品組合以及與多家資產提供者以及 GPU 供應商的密切合作。
We understand the challenges of these systems very well. And not only are we providing solutions that address those today with the current generation, we are engaged with them very closely on the next generation of upcoming platforms, whether they are GPU based or ASIC based, to provide these solutions. A great example was the Aries SCM, where, using our trusted solution for PCI Express Retimers, we enabled a new way of connecting some of these ASICs on the back-end network.
我們非常了解這些系統面臨的挑戰。我們不僅提供解決當前世代問題的解決方案,而且還在即將推出的下一代平台上與他們密切合作,無論它們是基於 GPU 還是基於 ASIC 來提供這些解決方案。一個很好的例子是基於 Aries SCM,我們能夠使用值得信賴的解決方案來實現 PCI Express 退役。我們啟用了一種在後端網路上連接某些 ASIC 的新方法。
Sanjay Gajendra - President, Chief Operating Officer, Director
Sanjay Gajendra - President, Chief Operating Officer, Director
And just maybe if I can add to that, one way to visualize the connectivity market or subsystem is as the nervous system within the human anatomy, right? It's one of those things where you don't want to mess with it once it works, or switch vendors. There are options off the shelf, but once the nervous system is built and tested, especially one like ours that has been built specifically for AI applications, you don't want to replace it.
也許我可以補充一點,可視化連接市場或子系統的一種方法是人體解剖學中的神經系統,對嗎?這是你不想搞亂的事情之一,因為會有生病的供應商。有現成的選項。一旦神經系統建成並進行測試,特別是像我們的雙倍系統,我們建造的神經系統是專門為人工智慧應用而設計的。
And there's a lot of qualification and a lot of software investment that the hyperscalers have done, and they want to reuse that across different kinds of topologies, whether ASIC based or merchant silicon based. And we do see a trend happening when we look at the customers that we've engaged with today, for protocols like PCI Express, Ethernet, and CXL, and especially where Taurus plays. These are standards based. So to that standpoint, whatever eventual architecture is being used, we believe that we will stand to gain from that.
超大規模廠商已經進行了大量的軟體投資,他們希望在不同類型的拓撲中重複使用這些拓撲,無論是基於 ASIC 還是基於商業矽。當我們觀察我們今天接觸的客戶以及 PCI Express、乙太網路和 CXL 等協議,尤其是 Taurus 遊戲時,我們確實看到了一種趨勢正在發生。這些是標準空間。因此,從這個角度來看,無論最終使用何種可能的架構,我們相信我們都會從中受益。
Ross Seymore - Analyst
Ross Seymore - Analyst
Thanks for that. I guess as my follow-up, one quick one for Mike. How should we think about OpEx beyond the second quarter? I know there's a bigger step-up there with a full quarter of being a publicly traded company, et cetera. But just walk us through your OpEx plans for the rest of the year, or even to the target?
感謝那。我想作為我的後續行動,為麥克做一個快速的行動。我們該如何看待第二季之後的營運支出?我知道那裡有一個更大的進步,四分之一是一家上市公司,等等。但請向我們介紹一下您今年剩餘時間的營運支出計劃,甚至是目標?
Michael Tate - Chief Financial Officer
Michael Tate - Chief Financial Officer
Yeah. I mean, thanks Ross. We are continuing to invest quite a bit in headcount, particularly in R&D. There are so many opportunities ahead of us that we would love to get a jump on those products and also improve the time to market. That being said, we're pretty selective about who we bring into the company, so that will meter our growth.
是的。謝謝羅斯。我們將繼續在員工隊伍方面進行大量投資,特別是在研發方面。我們面前有如此多的機會,我們希望能夠搶先開發這些產品,並縮短上市時間。話雖如此,我們對引進公司的人才非常挑剔,所以這會調節我們的成長節奏。
And we believe our OpEx, although it's going to be increasing, will probably not increase at the rate of revenue over the near and long term. And that's why we feel good about our long-term operating margin model of 40%. So over time, we do feel confident we can trend in that direction even with increasing investment in OpEx.
我們相信,我們的營運支出雖然會增加,但無論短期或長期,其成長速度可能都不會達到營收的成長速度。這就是為什麼我們對 40% 的長期營業利潤率模型充滿信心。因此,隨著時間的推移,即使營運支出投資不斷增加,我們也確實有信心朝著這個方向發展。
Operator
Operator
Suji Desilva, Roth MKM.
蘇吉·德西爾瓦,羅斯·MKM。
Suji Desilva - Analyst
Suji Desilva - Analyst
Hi, Jitendra, Sanjay, Mike. Congrats on the first quarter here. On the back-end, the addressable market here that's non-NVLink: I'm trying to understand whether the PCIe and Ethernet opportunities there will be adopted at a similar pace out of the gate, or whether PCIe would lead that adoption in the non-NVLink back-end opportunity.
嗨,Jitendra、Sanjay、Mike。恭喜第一季的表現。關於後端、也就是非 NVLink 的潛在市場,我想了解 PCIe 和乙太網路的機會在一開始是否會以相近的速度被採用,還是 PCIe 會在非 NVLink 後端機會中率先被採用。
Sanjay Gajendra - President, Chief Operating Officer, Director
Sanjay Gajendra - President, Chief Operating Officer, Director
It's hard to say at this point just because there is so much development going on here. I mean, you can imagine the non-NVIDIA ecosystem will rely on standard technologies, whether it is PCI Express or Ethernet. And the advantage of PCI Express is that it's low latency, right? Significantly lower latency compared to Ethernet. So there are some benefits to that. And there are certain extensions that people consider adding on top of PCI Express when it comes to proprietary implementations.
目前還很難說,因為這裡正在進行大量的開發工作。我的意思是,你可以想像,非 NVIDIA 生態系統將依賴標準技術,無論是 PCI Express 還是乙太網路。而 PCI Express 的優點是低延遲,對嗎?與乙太網路相比,延遲顯著更低。所以這是它的一些好處。此外,在專有實作方面,人們也會考慮在 PCI Express 之上添加某些擴充。
So overall, we do see this from a technology standpoint: PCI Express will have that advantage. Now, Ethernet has also been around. So we'll have to wait and see how all of this develops over the next, let's say, 6 to 18 months.
因此,總體而言,從技術角度來看,PCI Express 將擁有這一優勢。當然,乙太網路也已存在很久了。因此,我們必須等待,看看這一切在接下來的 6 到 18 個月內會如何發展。
Jitendra Mohan - Chief Executive Officer, Director
Jitendra Mohan - Chief Executive Officer, Director
Yes, to add to what Sanjay said. I think the good news for us, in some ways, is that we don't have to pick. We don't have to decide which one. We have chips, we have hardware, and we have software. So we have customers that come to us and say, I need this for my new AI platform, can you help me with that? And that's what we've been doing.
是的,補充一下 Sanjay 所說的。我認為在某些方面,對我們來說好消息是我們不必做出選擇。我們不必決定選擇哪一個。我們有晶片,有硬體,也有軟體。因此,有客戶來找我們說:我的新 AI 平台需要這個,你能幫我嗎?這就是我們一直在做的事情。
Suji Desilva - Analyst
Suji Desilva - Analyst
Okay, great. And just a question, perhaps for Mike. The initial AEC programs are ramping with maybe a few customers this year, a few more next year, or perhaps all of them this year. But do you perceive that those will be larger, lumpier, program-based ramps, Mike, or will those be steady kinds of server build-outs as they grow?
好的,太好了。也許是問 Mike 的一個問題。最初的 AEC 專案正在量產,今年可能有幾家客戶,明年再多幾家,也許今年就會全部開始。但 Mike,您認為這些會是規模更大、更集中的專案式量產,還是伺服器的穩定逐步建置與成長?
Michael Tate - Chief Financial Officer
Michael Tate - Chief Financial Officer
I think the product ramps will mirror our other product groups: they gradually build over a few quarters to hit steady state. As they layer on top of each other, that just continues to build a nice, growing revenue profile. So as you look at Taurus in 2024, we're shipping 200-gig right now, and then in the back half we start to ship 400-gig. If you look into 2025, 800-gig, which is ultimately the biggest opportunity with a much broader set of customers, will be when the market really becomes very large.
我認為這些產品的量產節奏會與我們其他產品線類似:在幾個季度內逐步成長,達到穩定狀態。當它們相互疊加時,就會持續構建出良好且不斷成長的營收結構。因此,就 2024 年的 Taurus 而言,我們目前正在出貨 200G,下半年開始出貨 400G。展望 2025 年,800G 最終將是最大的機會,客戶群也更廣泛,屆時市場會真正變得非常龐大。
Operator
Operator
Richard Shannon, Craig Hallum.
理查德·香農,克雷格·哈勒姆。
Richard Shannon - Analyst
Richard Shannon - Analyst
Well, hi, guys. Thanks for taking my questions, and congratulations on coming public here. I guess I wanted to follow up on a couple of topics that have been hit on, including Suji's question about the PCI Express AEC opportunity. Are these design wins, or are these the kind of pre-design-win ramps you're talking about this year? And I guess ultimately my question on this topic is: can this opportunity, these PCI Express AECs, become as big as your Taurus family in the foreseeable future?
嗨,各位。感謝回答我的問題,也恭喜你們上市。我想跟進幾個已經談到的主題,包括 Suji 關於 PCI Express AEC 機會的問題。你們今年談到的這些,是已經確定的設計中標,還是尚未確定設計中標之前的量產規劃?我想我對這個主題最終的問題是:在可預見的未來,這些 PCI Express AEC 的機會能否變得像你們的 Taurus 系列一樣大?
Sanjay Gajendra - President, Chief Operating Officer, Director
Sanjay Gajendra - President, Chief Operating Officer, Director
Yes. So these are design wins, to clarify. We have design wins that we have announced in public forums. So from that standpoint, it's an opportunity that we're excited about. And like we noted earlier, we do expect it to start contributing revenue in the later half of this year.
是的。需要澄清的是,這些都是設計中標。我們已在公開場合宣布過這些合作。因此,從這個角度來看,這是一個讓我們感到興奮的機會。正如我們之前指出的,我們確實預計它將在今年下半年開始貢獻營收。
Richard Shannon - Analyst
Richard Shannon - Analyst
Okay, perfect. Thank you. And then my second question is on CXL. I think you've mentioned a couple of applications here. Maybe you can express the breadth of interest across hyperscalers and other customers, both for the ones you mentioned and for the next ones that are a little bit more expansive in nature. How do you see the testing and spec'ing out of those? Are those coming to market in the timeframe you're hoping for, or is a little bit more development required to get those to market?
好的,非常好。謝謝。我的第二個問題是關於 CXL 的。我想您在這裡提到了幾個應用。也許您可以談談超大規模業者和其他客戶的興趣有多廣泛,包括您提到的那些應用,以及接下來那些範圍更廣的應用。您如何看待這些應用的測試與規格制定?它們是否會在您期望的時間內上市,還是需要更多的開發才能推向市場?
Sanjay Gajendra - President, Chief Operating Officer, Director
Sanjay Gajendra - President, Chief Operating Officer, Director
Yeah. So there are two questions there. Let me take the first one, which is the CXL side. For CXL, there are four main use cases to keep in mind: memory expansion; memory tiering, where you're trying to go for a TCO type of benefit; memory pooling; and what are called memory drives, which Samsung and others are providing. We believe memory drives are more suitable for enterprise customers, and that the first three are more suitable for cloud-scale deployment.
是的。這裡有兩個問題。讓我先回答第一個,即 CXL 方面。對於 CXL,需要記住四個主要用例:記憶體擴展;記憶體分層,即追求 TCO 方面的效益;記憶體池化;以及三星等公司提供的所謂記憶體驅動器(memory drive)。我們認為記憶體驅動器更適合企業客戶,而前三種用例更適合雲端規模的部署。
And there again, our belief is that memory pooling is something that's further out in time, just because it requires software changes; the ones that are more short-term to medium-term are memory expansion and memory tiering. And like I noted early on, all the major hyperscalers, at least in the US, are engaged on CXL technology, but it is going to be a matter of time, with both CPUs being available and dollars being available from a general-purpose compute standpoint.
再者,我們認為記憶體池化是時間上更靠後的用例,因為它需要軟體方面的改動;而較偏短中期的是記憶體擴展和記憶體分層。正如我之前指出的,所有主要的超大規模業者(至少在美國)都在投入 CXL 技術,但這將是一個時間問題,既取決於 CPU 的供應,也取決於通用運算方面的預算投入。
Okay. And then in terms of your second question, was that more on new products? Was that the context for it?
好的。至於你的第二個問題,是更偏向新產品方面嗎?是這個背景嗎?
Richard Shannon - Analyst
Richard Shannon - Analyst
Yes.
是的。
Sanjay Gajendra - President, Chief Operating Officer, Director
Sanjay Gajendra - President, Chief Operating Officer, Director
Yes. So again, we don't talk about exact timeframes, but you can imagine, the last product we announced was a little over a year ago, and our engineers have not been quiet; they've been working hard. So from that standpoint, we are working very diligently and hard, based upon a lot of interest and engagement from customers that we have already been working with.
是的。再說一次,我們不會談論確切的時間表,但你可以想像,我們上一次發布產品已經是一年多以前了,而我們的工程師並沒有閒著,他們一直在努力工作。因此,從這個角度來看,基於已有合作客戶的大量興趣和參與,我們正在非常勤奮努力地工作。
Operator
Operator
There are no further questions at this time. I'll turn the call back over to Leslie Green for closing remarks.
目前沒有其他問題。我會將電話轉回給萊斯利·格林 (Leslie Green) 作結束語。
Leslie Green - Investor Relations
Leslie Green - Investor Relations
Thank you, everyone, for your participation and questions. We look forward to seeing many of you at various financial conferences this summer, and updating you on our progress on our Q2 earnings conference call. Thank you.
謝謝大家的參與與提問。我們期待在今年夏天的各種財務會議上見到你們,並向你們介紹我們在第二季度財報電話會議上的最新進展。謝謝。
Jitendra Mohan - Chief Executive Officer, Director
Jitendra Mohan - Chief Executive Officer, Director
Thank you, guys.
感謝你們。
Operator
Operator
This concludes today's conference call. You may now disconnect.
今天的電話會議到此結束。您現在可以斷開連線。