Operator
Good morning, and thank you for standing by. My name is John, and I will be your conference operator today. At this time, I would like to welcome everyone to the DigitalOcean fourth-quarter earnings conference call. (Operator Instructions)
I would now like to turn the conference over to Melanie Strate, Head of Investor Relations.
Please go ahead.
Melanie Strate - Head of Investor Relations
Thank you, and good morning. Thank you all for joining us today to review DigitalOcean's fourth quarter and full-year 2025 financial results and an investor update.
Joining me on the call today are Paddy Srinivasan, our Chief Executive Officer; and Matt Steinfort, our Chief Financial Officer. Before we begin, let me remind you that certain statements made on the call today may be considered forward-looking statements, which reflect management's best judgment based on currently available information.
Our actual results may differ materially from those projected in these forward-looking statements, including our financial outlook. I direct your attention to the risk factors contained in our filings with the SEC as well as those referenced in today's press release that is posted on our website. DigitalOcean expressly disclaims any obligation or undertaking to release publicly any updates or revisions to any forward-looking statements made today.
Additionally, non-GAAP financial measures will be discussed on this conference call and reconciliations to the most directly comparable GAAP financial measures can be found in today's earnings press release as well as in our investor presentation that outlines the discussion on today's call. A webcast of today's call is also available in the IR section of our website.
And with that, I will turn the call over to Paddy.
Paddy Srinivasan - Chief Executive Officer, Director
Thank you, Melanie. Good morning, everyone, and thank you for joining us. We had a fantastic quarter and a very strong finish to the year, and I'm excited to share the details with all of you. We ended the year with 18% revenue growth in Q4, reaching $901 million for the full year. We delivered $51 million in incremental organic ARR, the highest in the company's history.
Our $1 million customers reached $133 million in ARR, growing at 123% year over year. We maintained financial discipline and strong profitability with 42% adjusted EBITDA margins and 19% adjusted free cash flow margins for the year. There is a lot to be excited about. And given the momentum that we are seeing and the progress we are making against our long-term strategy, we wanted to provide a more comprehensive update today rather than wait for a separate Investor Day. Our prepared remarks will be slightly longer than usual.
We'll advance slides from our earnings presentation on the webcast as we go, and we'll leave plenty of time for questions. AI is reshaping entire industries, and we are built for this shift. Software is being disrupted, not by incremental AI features, but by a structural shift to agentic systems operating at scale. Cloud and AI native disruptors are moving beyond AI experimentation at breakneck speed. They are deploying agents that reason, act, retain memory and run continuously.
In this structural shift, we see a secular, hyperscale-sized opportunity in serving the AI and cloud native companies driving this disruption. When markets are disrupted like this, there is typically a short window to take advantage of the opportunity, and let me tell you how we are seizing it.
First, our top customers are now our growth engine. We have turned what was once viewed as a weakness into a competitive strength. Our top digital native enterprise customers, or DNEs, which include cloud and AI native companies, are now our fastest-growing cohort and, in fact, are growing significantly faster than the market on DO. In a nutshell, scaling our top customers was once a constraint. Today, it's our growth engine.
Second, we are on the right side of software disruption driven by AI. Modern cloud and AI native companies are going after large markets with disruptive AI-centric software innovation. They are increasingly choosing DigitalOcean as their natural platform to build and scale their agentic AI software.
And when these companies disrupt and scale at unprecedented rates on our platform, we win. Third, we put the cloud in Neocloud. These AI natives need more than just GPU rentals or inference APIs. They need access to optimized AI models, both closed and open source, production-grade inferencing and a full stack cloud for their software, all working together at global scale. We deliver all of it in one integrated agentic inference cloud.
And finally, we are building a durable and profitable growth engine. We are investing responsibly while driving balanced growth. Without chasing the GPU training arms race, we expect to deliver 21% revenue growth in 2026, reaching 25%-plus growth by Q4 2026 and 30% growth in 2027. We are on a path to being a weighted rule of 50 company next year on the back of our existing committed data center capacity alone. Put simply, we are accelerating growth the DigitalOcean way.
In December, we crossed a major milestone, surpassing $1 billion revenue run rate. This is a remarkable achievement for a company that was founded through Techstars in 2012. This success is a testament to our passionate team and the vision of our original founders. I also extend my deepest gratitude to all our incredible customers who have supported us throughout this journey. But what matters more than this milestone is where we are going.
We exited 2025 at 18% year-over-year growth and are on a path to deliver 21% growth in 2026 with an exit growth rate of 25% plus in Q4 of 2026. We are picking up momentum, and we have outgrown the old narrative. Let me elaborate. Our top customers are now our growth engine. For our first decade, we built an iconic developer cloud.
That foundation still matters, and we have over 4 million active developers on our platform that absolutely love us. Over the last several quarters, we have deliberately shifted focus towards serving our top DNEs and eliminating any reason for them to leave DigitalOcean as they scale, and that focus is working.
In Q4, we delivered a record organic incremental ARR of $51 million and $150 million on a trailing 12-month basis, both surpassing even our peak COVID era quarters. This record trailing 12-month incremental ARR was balanced across AI and cloud customers. ARR from DNE reached $604 million in Q4, which is now 62% of total ARR, growing 30% year-over-year.
And our DNE NDR reached 102%, continuing to outperform developer NDR. And like I've been reporting for a while now, our largest customers in the DNE cohort are accelerating the fastest. Our $100,000 customers are growing at 58%, our $500,000 customers are growing at 97%, and our $1 million customers, who reached $133 million in ARR, are growing at 123% year over year, all well ahead of market growth rates. And NDR also increases meaningfully as these customers scale. Q4 NDR was 102% for our $100,000 customers, 106% for our $500,000 customers and 115% for our $1 million customers.
Churn for our $1 million customers was 0 in Q4 and has averaged 0% over the last 12 months, which clearly shows that our top customers are now scaling with us and becoming our growth engine. This should also effectively debunk any misconception that our most successful customers will outgrow our platform.
Recapping this section, we are accelerating past the $1 billion revenue run rate milestone and our top customers are driving this acceleration. We are no longer defined just by entry-level developers experimenting on our platform. We are defined by high-growth cloud and AI native companies running production workloads scaling revenue and building their businesses on DigitalOcean.
Said simply, scaling our top customers was once a constraint. Today, it's our growth engine. On to the next point. We are on the right side of software disruption. There is a structural shift happening in software, and DigitalOcean is emerging as a preferred platform for the cloud and AI native companies that are driving this disruption.
The last generation of Software as a Service, or SaaS, monetized per user, per seat; value scaled with headcount. This next generation of AI-centric software monetizes per token, per inference request; value scales with the intelligence delivered. As AI model capabilities accelerate, entire categories of horizontal and vertical software are being reinvented. Incumbents are reacting to transformational change by layering AI into their workflows, seeking to enhance their existing software. But AI native companies are starting from first principles. For them, AI isn't a feature.
It is the very engine that defines their product. Every time they deliver value, inference runs, tokens are consumed and intelligence is produced. DigitalOcean is uniquely positioned to serve these disruptors, and that is evident in the traction we are getting from leading AI native companies. We have signed and expanded production workloads with at-scale cloud and AI native companies like character.ai, Workato and Hippocratic AI, companies with product-market fit, real revenue and rapidly scaling demand. Our work with character.ai demonstrates this clearly.
We delivered a 100% throughput increase and roughly 50% lower cost per token for character.ai on our production inference cloud powered by AMD Instinct GPUs, at production scale. This is not a lab benchmark. This is on live traffic across tens of millions of customers. This demonstrates our ability to support production-scale inferencing for leading AI companies with our differentiated performance, cost efficiency and integrated AI and cloud platform built for inference-first production workloads.
Another AI native with a proven product-market fit is Hippocratic AI, who builds health care-focused conversational AI designed to support clinical workflows and patient engagement. Hippocratic AI selected DO's Agentic Inference Cloud to power HIPAA-compliant clinical AI workloads. This validates not just our performance but also our enterprise-grade security and compliance.
For Hippocratic AI, we optimize their multimodal deployment on NVIDIA hardware, reinforcing the importance of vertical innovation from GPUs to networking, kernel optimization, cloud integration and inference software. These AI natives also scale very differently.
While traditional cloud customers may take years to reach $1 million in ARR, AI natives can cross that threshold in months or even weeks. When inference is your product, demand compounds quickly. DigitalOcean is purpose-built for these disruptors.
As software becomes more intelligent and AI-centric, we are building the vertically integrated inferencing cloud designed to power the next generation of AI natives, putting us squarely on the right side of this AI-driven disruption, and our Agentic Inference Cloud is capitalizing on this disruption. Next, let me explain how we are enabling this.
We do this by putting the cloud in Neocloud. Over the last couple of years, a new category of Neoclouds has emerged that is largely optimized for one thing: large-scale AI model training, with dense GPU farms, high-performance networking and frontier AI model training workloads. This is an important layer of the AI stack, but serving inferencing is different. As AI diffuses into every software company, workloads shift from training a handful of frontier models to running millions of real-world applications. And real-world AI-centric software needs more than GPU farms.
They need compute, storage, databases, networking, observability and security, all working seamlessly together with predictable and transparent unit economics. Over the past 4 quarters, we have evolved our Agentic Inference Cloud to meet that reality. We have combined specialized inference infrastructure with our full stack cloud platform, purpose-built for production AI while staying true to what defines DigitalOcean: simplicity, open standards, enterprise-grade performance and SLAs, and predictable and transparent unit economics.
A good recent example of this in action is OpenClaw, which recently took the world by storm by demonstrating the power of agentic software, giving us a glimpse into what an AI-centric software future will look like. OpenClaw is an open source AI agent framework that allows developers to run real-world, task-driven agents.
When customers deploy OpenClaw on DigitalOcean, they need more than just GPUs, because AI agents are stateful. They reason, they take action, they retain memory. They interact with third-party APIs. All this requires more than just a GPU farm. It takes a full cloud and AI stack working together side by side.
Customers increasingly understand this, as inference is the heartbeat of modern AI natives. It is their primary operating cost, their performance lever and their competitive moat. Their production traction scales directly with model quality, inference performance and unit economics.
As they grow, they don't build their products around a single closed-source model, but rather orchestrate multiple models in real time, often leveraging open source and mixture-of-experts approaches to optimize both accuracy and unit economics. Our platform delivers flexibility at every layer, from serverless inference APIs to dedicated clusters and GPU droplets, allowing customers to precisely match performance and cost to their workload requirements.
We pair that with performance-optimized open source models, delivering high accuracy, strong throughput, low latency and compelling unit economics. And this isn't a stand-alone inference platform. It is deeply integrated with the full stack cloud that we have hardened over the last dozen years so that customers can build, deploy and scale their entire AI application in one integrated environment with enterprise SLAs.
Our agent development platform takes them from experimentation to production with real-world AI agents. Underpinning all of this is a deep lineup of GPUs from NVIDIA and AMD, supported by a rapidly expanding global data center footprint, built and operated with years of operational expertise supporting mission-critical workloads.
This integrated platform and flexibility of choice is precisely what makes DigitalOcean a natural platform for agentic software. Let me explain this again using OpenClaw as an example. Customers can build and deploy OpenClaw agents on DigitalOcean in two distinct ways, depending on their need for control, scale and operational complexity. The first path optimizes for simplicity and speed. Customers can launch a preconfigured one-click GPU droplet and have an OpenClaw agent running in minutes.
This model gives full control over the environment, ideal for experimentation, customization, performance tuning and for teams that want direct access to the infrastructure layer. The second path optimizes for global scale. Customers can deploy OpenClaw on DO's managed serverless platform, where DigitalOcean handles provisioning, scaling, security, container orchestration and operational management. This approach is ideal for teams that are scaling a global application.
Both approaches run on the same integrated cloud, with access to managed databases for agentic memory, object storage for artifacts, virtual private cloud networking, observability and GPU-backed inference. That's what vertical integration looks like in the inference economy: not just providing bare metal GPUs or even just generating inference tokens, but providing a secure, scalable and manageable foundation for intelligent, stateful systems.
Within days of launching OpenClaw, nearly 30,000 native DigitalOcean one-click OpenClaw droplets were created, and that was just the starting point. Thousands of other OpenClaw deployments were activated by customers, signaling the emergence of a new ecosystem almost overnight. The success of OpenClaw is an early view of how the AI market will continue to evolve and can serve as a blueprint for AI native businesses on how a new generation of software will be built around autonomous agents that orchestrate complex multistep workflows across systems, continuously reason with data and context, and execute tasks end-to-end with minimal human involvement.
As these AI native companies move from proof of concept to production agents, the richness of the underlying platform, the security posture, manageability, scalability and predictable unit economics become mission-critical. And that is exactly where DigitalOcean is fast emerging as the natural platform for building and scaling agentic AI software. The competitive landscape is crowded with companies speaking to their ability to address the inference market, but our differentiation from these competitors is very clear. Neoclouds rent out GPUs. Inference wrapper providers stop at inference APIs and model libraries.
We continue to effectively compete with hyperscalers, who bring scale but also come with complexity and cost structures that are aimed at traditional large enterprise companies. While each of these competitors addresses a component of the inference value chain, real-world agentic software requires a tightly integrated environment where inference, orchestration, persistence, networking and security are designed to work together with simplicity, global scale, enterprise SLAs and predictable unit economics. That is where DigitalOcean wins.
This differentiation is clear to our customers, but it's also very clear in our financial profile. As a full stack cloud provider that has operated mission-critical workloads for cloud and AI natives for over a decade, we look very different from a financial perspective than other players chasing the AI training market or components of the inference market.
Where Neoclouds have very high revenue concentration, with just a few very large customers making up the vast majority of their revenue, DigitalOcean Holdings, Inc.'s top 25 customers represent only 10% of our revenue. While GPU rental providers earn bare metal revenue and margins on their infrastructure, DigitalOcean drives higher revenue and margin from our full stack inference and cloud solutions. And when a growing number of Neoclouds are investing massive amounts of capital and burning near-term profits and cash for future returns, DigitalOcean is already profitable and generating cash. Our traction with cloud and AI natives is no accident. It is the result of relentless, focused investment and disciplined execution.
We recently strengthened our executive team by adding Vinay Kumar as our Chief Product and Technology Officer. As a founding member of Oracle Cloud Infrastructure, or OCI, Vinay brings deep hyperscale expertise and leads our product, platform, infrastructure and security teams. Having built a hyperscaler from the ground up at OCI, he looks forward to scaling up another one at DigitalOcean, one that is purpose-built to meet the complex needs of cloud and AI native workloads globally.
In the meantime, our R&D team has been very busy continuing to ship products and features that are helping our customers scale on our platform. On Core Cloud, we launched remote MCP support, embedding AI directly into the control plane and enabling secure, zero-setup infrastructure management. On our AI platform, we introduced the Agent Development Kit and enhanced agent evaluation tools to help customers move from experimentation to production with measurable performance and reliability.
With GPU observability, managed NFS and multi-node GPU support, we significantly expanded our ability to run large-scale, mission-critical inference in production. This is what vertical integration looks like: infrastructure, inference, observability and agent tooling, all built to seamlessly work and scale together. And we're just getting started. We'll share the next wave of innovation on our Agentic Inference Cloud at our next Deploy conference in San Francisco on April 28, as we continue building the platform purpose-built for the inference economy. Our differentiation is durable and will continue to grow as the market shifts from training to inference.
To give investors clearer visibility into this momentum, we are introducing a new metric: AI customer revenue. AI customer revenue includes all revenue from customers leveraging our AI products, including both inference and core cloud services.
Because AI natives don't just buy GPUs, they build, operate and scale applications which need a full stack inference cloud. In fact, 70% of our AI customer ARR in Q4 2025 was already coming from inference services or general-purpose cloud products rather than from bare metal GPU rentals. And these customers are growing rapidly with Q4 AI customer ARR reaching $120 million, growing 150% year-over-year, now making up 12% of total ARR.
In summary, we don't just rent GPUs. We run production AI. We are not a GPU landlord. We are an AI cloud platform. We deliver hyperscaler-grade infrastructure and reliability, with purpose-built inference services co-located and integrated with a full stack general-purpose cloud designed for the next generation of AI natives.
Or put simply, DigitalOcean puts the cloud in Neocloud. Now on to my final takeaway. We are building a durable and profitable growth engine. At our Investor Day last April, we laid out a plan to return the business to 18% to 20% growth by 2027. On our last earnings call, we pulled that growth projection forward by a full year guiding that we would reach that 18% to 20% growth range in 2026.
And just 9 months after setting that original plan, we've already reached the bottom end of the target range at 18% growth in Q4 of 2025, achieving it two full years ahead of our original target. And the momentum we are seeing gives us even greater confidence. We now expect to deliver 21% revenue growth for the full year 2026 with an exit growth rate of 25% plus by Q4 and reaching 30% growth in 2027.
As we ramp into our committed 31 megawatts of incremental capacity this year, there will be measured near-term pressure on gross margin and adjusted EBITDA, but we remain confident in our 18% to 20% unlevered adjusted free cash flow margin guide for the year. The near-term pressure is just a physics problem, given the start-up cost timing and revenue ramp characteristics of quickly adding new capacity.
It is the natural result of pursuing high-return growth opportunities, but we remain disciplined operators. Demand continues to far outstrip supply, and we will take advantage of opportunities to further accelerate growth when they present themselves. We will do so responsibly: we'll continue to pursue investments with attractive returns, match investments with revenue timing, maintain a strong balance sheet and allocate capital with rigor even as we accelerate. Growth and discipline are not trade-offs for us.
They are both operating principles. With that, I will turn it over to Matt to walk through the quarter and the year in more detail and to provide additional color on our updated outlook. Matt, over to you.
Matt Steinfort - Chief Financial Officer
Thanks, Paddy. Good morning, everyone, and thanks for joining us today. As Paddy just shared, we're a very different company today than we were just a few years ago. It's an exciting time at DigitalOcean. We are a rapidly growing and profitable company that is incredibly well positioned to take advantage of the hyperscale sized inference market opportunity.
謝謝你,帕迪。各位早安,感謝大家今天的參與。正如帕迪剛才所說,如今的我們與幾年前相比已經是一家截然不同的公司。對於 DigitalOcean 來說,這是一個令人興奮的時刻。我們是一家快速發展且獲利能力強的公司,擁有得天獨厚的優勢,能夠抓住超大規模推理市場的機會。
This excitement is clearly evident in both our recent financial performance, and in our higher near-term and long-term outlook. Revenue growth has reaccelerated. We've reversed declines from our top customers, turning them into a key driver of our growth. We have scaled our AI customer ARR to $120 million, growing 150% year-over-year. And we've done this profitably, growing adjusted EBITDA and adjusted free cash flow on both an absolute and a margin basis.
這種興奮情緒在我們近期的財務表現以及我們更樂觀的近期和長期前景中都得到了充分體現。營收成長再次加速。我們已經扭轉了主要客戶的下滑趨勢,使他們成為我們成長的關鍵驅動力。我們的 AI 客戶年度經常性收入已達 1.2 億美元,年增 150%。而且我們實現了盈利,調整後的 EBITDA 和調整後的自由現金流在絕對值和利潤率方面均實現了成長。
While we are pleased with our progress over the past several years, it is our recent momentum that gives us the confidence to further increase our near-term and long-term outlook. Fourth quarter revenue was $242 million, up 18% year over year and we closed '25 with full year revenue of $901 million.
雖然我們對過去幾年的進展感到滿意,但最近的良好勢頭讓我們更有信心進一步提升近期和長期前景。第四季營收為 2.42 億美元,年增 18%,2025 年全年營收為 9.01 億美元。
We delivered sustained acceleration through the back half of 2025, driving a 500 basis point increase in Q4 growth from the same period just a year ago. We delivered the accelerated revenue growth with strong margins and growing profits even as we increased our investments. Fourth quarter gross profit was $142 million, up 13% year over year, with a gross margin of 59%.
我們在 2025 年下半年實現了持續加速成長,第四季成長率比去年同期提高了 500 個基點。我們實現了營收加速成長,同時保持了強勁的利潤率和不斷增長的利潤,即便我們增加了投資。第四季毛利為 1.42 億美元,年增 13%,毛利率為 59%。
For the full year, gross profit was $540 million, up 16% year-over-year, with a gross margin of 60%. Adjusted EBITDA in the fourth quarter was $99 million, an adjusted EBITDA margin of 41%. Full year adjusted EBITDA was $375 million, a 42% adjusted EBITDA margin. Trailing 12-month adjusted free cash flow was $168 million in Q4 or 19% of revenue. We maintained our attractive free cash flow margins in '25, in part by expanding our financial toolkit to include equipment financing.
全年毛利為 5.4 億美元,年增 16%,毛利率為 60%。第四季調整後 EBITDA 為 9,900 萬美元,調整後 EBITDA 利潤率為 41%。全年調整後 EBITDA 為 3.75 億美元,調整後 EBITDA 利潤率為 42%。過去 12 個月的調整後自由現金流在第四季為 1.68 億美元,佔營收的 19%。2025 年,我們維持了可觀的自由現金流利潤率,部分原因是擴大了我們的金融工具組合,包括設備融資。
This better aligns infrastructure investment timing with the revenue that it supports. We will continue to utilize a combination of upfront asset purchases and equipment leasing as we invest to fuel our growth. We continue to be disciplined financial stewards for our investors. We prudently use stock-based compensation to attract and retain our critical talent while repurchasing shares to mitigate dilution. SBC declined to 9% of revenue in 2025, down from 12% in the prior year.
這樣可以更好地使基礎設施投資時間與它所支持的收入相符。我們將繼續採用前期資產購買和設備租賃相結合的方式進行投資,以推動我們的成長。我們將繼續秉持嚴謹的財務管理原則,為投資人創造價值。我們謹慎地運用股票激勵措施來吸引和留住關鍵人才,同時回購股票以減少股權稀釋。SBC 到 2025 年的營收佔比下降至 9%,低於前一年的 12%。
To put that number in context, we have a 33% margin if you subtract SBC from adjusted EBITDA. At 33% margin, we are just above the 80th percentile of a broad software comp set on an adjusted EBITDA less SBC basis. And we are well above the 13% median of that group. Non-GAAP weighted average shares outstanding increased slightly from 103 million to 105 million over the same period. To reduce dilution, we repurchased 2.4 million shares in 2025 for $82 million at an average price of approximately $35.
為了更好地理解這個數字,如果從調整後的 EBITDA 中減去 SBC,我們的利潤率為 33%。以調整後 EBITDA 減去 SBC 計算,我們的利潤率為 33%,略高於大範圍軟體競爭對手的 80% 分位點。我們遠高於該群體13%的中位數。同期,非GAAP加權平均流通股數從1.03億股略增至1.05億股。為了減少股權稀釋,我們在 2025 年以平均每股約 35 美元的價格回購了 240 萬股股票,總額為 8,200 萬美元。
Note that we ended 2025 with our full $100 million buyback authorization in place and that authorization continues through July 31, 2027. While we continue to view share repurchases as an important long-term tool, our near-term capital allocation priorities are squarely focused on organic growth and balance sheet flexibility. GAAP diluted net income per share in the quarter was $0.24 and $2.52 for the full year, a 183% year-over-year increase. Non-GAAP diluted net income per share in the quarter was $0.44.
請注意,截至 2025 年底,我們已獲得 1 億美元的全部股票回購授權,該授權將持續到 2027 年 7 月 31 日。雖然我們仍然將股票回購視為一項重要的長期工具,但我們近期的資本配置重點完全集中在內生成長和資產負債表靈活性。本季以美國通用會計準則計算的稀釋後每股淨收益為0.24美元,全年為2.52美元,較去年同期成長183%。本季非GAAP稀釋後每股淨收益為0.44美元。
For the full year, non-GAAP diluted net income per share was $2.12, a 10% year-over-year increase. As a quick reminder, recall that our 2025 net income per share metrics were impacted by the actions we took in '25 to strengthen our balance sheet. In 2025, we proactively addressed the upcoming maturity of our 2026 convertible notes. We did this through a series of successful financing transactions that have given us significant balance sheet flexibility. These transactions included the establishment of an $800 million bank facility, the issuance of $625 million of 2030 convertible notes and the repurchase of the majority of our then outstanding 2026 convertible notes.
全年非GAAP稀釋後每股淨收益為2.12美元,較去年同期成長10%。再次提醒大家,我們 2025 年的每股淨收入指標受到了我們在 2025 年為加強資產負債表而採取的措施的影響。2025年,我們積極回應了2026年可轉換債券即將到期的問題。我們透過一系列成功的融資交易實現了這一點,從而獲得了相當大的資產負債表彈性。這些交易包括設立 8 億美元的銀行信貸額度、發行 6.25 億美元的 2030 年可轉換票據以及回購我們當時大部分未償還的 2026 年可轉換票據。
Excluding the effects of these financing transactions, non-GAAP diluted net income per share would have been $2.29 for the year and $0.53 for the quarter. With our 2026 notes largely addressed, we ended the year with a strong balance sheet. We have sufficient liquidity and projected cash generation to address the remaining $312 million balance of our outstanding '26 convertible notes.
若排除這些融資交易的影響,非GAAP稀釋後每股淨收益全年為2.29美元,季為0.53美元。由於 2026 年到期的票據問題基本上得到解決,我們以強勁的資產負債表結束了這一年。我們擁有充足的流動資金和預計現金流,足以償還剩餘的 3.12 億美元 2026 年到期可轉換票據。
Having drawn down the remaining $120 million on our Term Loan A in February, we will repurchase or redeem the remaining '26 notes for cash before or at the maturity in December of '26. Beyond this, we have no other material maturity until 2030, and we entered 2026 with approximately 3.2 times net leverage.
2 月我們提取了 A 類定期貸款剩餘的 1.2 億美元後,我們將在 2026 年 12 月到期之前或到期時以現金回購或贖回剩餘的 2026 年到期票據。除此之外,我們在 2030 年之前沒有其他實質到期項目,而我們進入 2026 年時的淨槓桿率約為 3.2 倍。
Before I get into guidance, I want to highlight an action we are taking to further concentrate our investments on our key growth levers. We are sunsetting a small legacy dedicated Bare Metal CPU offering. We expect approximately $13 million of ARR to roll off by the end of Q1 2026. As this revenue is noncore, we have excluded this legacy product revenue from our customer-specific year-over-year growth metrics. Shifting back to guidance.
在給出具體指導意見之前,我想重點介紹我們正在採取的一項行動,即進一步將投資集中於我們的關鍵成長槓桿。我們將停止提供小規模的傳統專用裸機CPU產品。我們預計到 2026 年第一季末,約有 1,300 萬美元的年度經常性收入 (ARR) 將到期。由於這部分收入屬於非核心收入,我們已將這部分傳統產品收入從針對特定客戶的年成長指標中剔除。現在回到業績指引。
We entered 2026 with tremendous momentum and confidence. Paddy spoke of the material demand we're seeing for our Agentic Inference Cloud. We also continue to improve visibility on our near-term revenue growth as we increased RPO in Q4 to $134 million, up 121% sequentially, up close to 500% year over year. With this growing demand and visibility, we are again increasing our near-term growth outlook. For the first quarter of 2026, we expect revenue in the range of $249 million to $250 million, which is approximately 18% to 19% year-over-year growth.
我們帶著強勁的勢頭和十足的信心進入了2026年。Paddy談到了我們目前對智慧推理雲端平台的大量需求。隨著第四季 RPO 增至 1.34 億美元,季增 121%,年增近 500%,我們近期營收成長的可見度也持續提高。隨著需求和市場關注度的不斷提高,我們再次提高了近期成長預期。我們預計 2026 年第一季的營收將在 2.49 億美元至 2.5 億美元之間,年增約 18% 至 19%。
We expect first quarter adjusted EBITDA margins in the range of 36% to 37%. We expect non-GAAP diluted net income per share of $0.22 to $0.27 based on approximately 111 million to 112 million weighted average fully diluted shares outstanding. For the full year 2026, we expect revenue growth between 19% and 23%. This is 21% at the midpoint, beyond the 18% to 20% growth outlook that we shared just last quarter. And it is important to highlight that this would be 21% to 24% projected growth if we exclude the impact of our discontinued legacy Bare Metal CPU offering.
我們預計第一季調整後 EBITDA 利潤率在 36% 至 37% 之間。我們預計,根據約 1.11 億至 1.12 億股加權平均完全稀釋流通股計算,非 GAAP 稀釋後每股淨收益為 0.22 至 0.27 美元。我們預計 2026 年全年營收成長率在 19% 到 23% 之間。這相當於21%的中間值——高於我們上個季度分享的18%至20%的成長預期。值得強調的是,如果我們排除已停產的傳統裸金屬 CPU 產品的影響,預計成長率將達到 21% 至 24%。
We will deliver this accelerated growth while maintaining attractive margins. We project full year 36% to 38% adjusted EBITDA margin and 18% to 20% unlevered adjusted free cash flow margins, which is $207 million at the midpoint. We expect non-GAAP diluted net income per share of $0.75 to $1 on 111 million to 112 million weighted average fully diluted shares outstanding. This growth outlook is based on the incremental data center and GPU capacity investments that we have already committed that will come online over the course of 2026. As we look at the quarterly progression within 2026, it is important to understand the timing of this incremental capacity and how that timing impacts our financials.
我們將在保持可觀利潤率的同時,實現加速成長。我們預計全年調整後 EBITDA 利潤率為 36% 至 38%,未槓桿調整後自由現金流利潤率為 18% 至 20%,中間值為 2.07 億美元。我們預計,以加權平均完全稀釋後流通股數計算,非GAAP稀釋後每股淨收益為0.75美元至1美元,流通股數為1.11億股至1.12億股。這項成長預期是基於我們已經承諾的、將在 2026 年陸續上線的資料中心和 GPU 容量增量投資。當我們展望 2026 年的季度進展時,了解新增產能的時間安排以及該時間表如何影響我們的財務狀況至關重要。
We are bringing 31 megawatts of new data center capacity online across three new facilities in 2026. The smallest of our three new facilities will start ramping revenue in the second quarter. The remaining two start ramping revenue in the second half of 2026.
我們將於 2026 年推出 31 兆瓦的新資料中心容量和三個新設施。我們三座新工廠中最小的一座將於第二季開始逐步增加收入。其餘兩家公司將於 2026 年下半年開始實現營收成長。
Aligned with this capacity ramp, we expect second quarter revenue growth to remain around 18% to 19%, with revenue growth then ramping in Q3 before exiting the year at 25% plus in Q4. While there are always supply chain and implementation timing risk to manage, we believe our implementation time line is realistic.
與產能提升一致,我們預計第二季營收成長將維持在 18% 至 19% 左右,第三季營收成長將進一步提升,第四季營收成長將達到 25% 以上。雖然供應鏈和實施時間方面總是會存在風險需要管控,但我們相信我們的實施時間表是現實的。
Increased data center lease expense and equipment depreciation expense will both hit our financials several months before we generate our first revenue in these facilities. Given this lag between expenses and revenue, cost of goods sold from higher GPU-related depreciation and operating expenses from new data center operating leases will increase in the early part of the year as we ramp into the new capacity. These increased costs will cause the expected upfront drops in gross margin and net income that we have seen when we turned up our previous data centers.
資料中心租賃費用和設備折舊費用的增加,都會在我們首次從這些設施獲得收入之前的幾個月就對我們的財務狀況造成影響。鑑於支出和收入之間存在這種滯後,隨著我們逐步利用新的容量,年初時因 GPU 相關折舊增加和新的資料中心營運租賃的營運費用而導致的銷售成本將會增加。這些增加的成本將導致毛利率和淨收入出現預期的初期下降,就像我們之前改造資料中心時所看到的那樣。
The initial impact will just be larger, as we are turning up more capacity at one time than we've done in the past. Near-term adjusted EBITDA margins will also be impacted somewhat by these dynamics, although the impact is less, as adjusted EBITDA is only impacted by the higher data center operating lease expense.
由於我們一次性投入的產能比以往更大,因此初期影響也會更大。短期內,這些動態也會對調整後的 EBITDA 利潤率產生一定影響,儘管影響較小,因為調整後的 EBITDA 僅受資料中心營運成本增加的影響。
Net leverage is projected to be above 4 times in the short term as we add finance lease obligations to fund our GPU and CPU investments; this increases net debt several months ahead of the revenue and adjusted EBITDA ramp. We anticipate returning below 4 times net leverage over the medium to long term as we increase utilization in these data centers and ramp revenue and adjusted EBITDA.
預計淨槓桿率將超過 4 倍,短期內,由於我們增加了融資租賃義務來為 GPU 和 CPU 投資提供資金,這將導致淨債務在收入和調整後 EBITDA 增長之前幾個月就增加。我們預計,隨著這些資料中心的利用率提高以及收入和調整後 EBITDA 的成長,中長期內淨槓桿率將回落到 4 倍以下。
We will achieve these growth targets by focusing on our two primary growth levers: scaling our top DNE customers and expanding our base of AI native customers. We will focus our investments on meeting the needs of our top DNE customers so that they can continue to scale on DigitalOcean as they grow their own businesses. We will continue to invest both in our differentiated Agentic Inference Cloud and in the data center and GPU capacity required to support AI natives.
我們將透過專注於兩個主要的成長槓桿來實現這些成長目標,即擴大我們頂級 DNE 客戶的數量並擴大我們的 AI 原生客戶群。我們將集中投資,滿足我們頂級 DNE 客戶的需求,以便他們能夠在 DigitalOcean 上隨著自身業務的成長而不斷擴展。我們將繼續投資於我們差異化的智慧推理雲,以及支援原生人工智慧所需的資料中心和 GPU 容量。
While we are excited by our growth potential in 2026, we are just getting started. As we reach full utilization of our existing committed capacity, we expect to reach 30% revenue growth in 2027. We will drive this growth while delivering projected 20% plus unlevered adjusted free cash flow margins, which would make us a rule of 50-plus company in 2027. We will achieve this while making smart investments, earning attractive margins and maintaining a healthy balance sheet.
雖然我們對 2026 年的成長潛力感到興奮,但這只是個開始。隨著現有承諾產能的充分利用,我們預計 2027 年的營收成長將達到 30%。我們將在推動這一成長的同時,實現預計超過 20% 的未槓桿調整後自由現金流利潤率,這將使我們在 2027 年成為一家符合「50 法則以上」標準的公司。我們將透過明智的投資、獲得可觀的利潤和維持健康的資產負債表來實現這一目標。
We have both the tools and the discipline in place to continue to take advantage of opportunities as they arise. We will continue to share details on our leading indicators and our progress as we execute. We are increasingly confident in our ability to build a durable and profitable growth engine.
我們具備必要的工具和紀律,可以繼續把握機會。我們將繼續分享我們的領先指標和執行進展的詳細資訊。我們越來越有信心打造一個可持續且獲利的成長引擎。
With that, I'd like to turn it back over to Paddy to close this out before we get to Q&A.
接下來,我想把發言權交還給帕迪,讓他來總結一下,然後再進入問答環節。
Paddy Srinivasan - Chief Executive Officer, Director
Paddy Srinivasan - Chief Executive Officer, Director
Thank you, Matt. Before we move to Q&A, let me leave you with a few thoughts. We crossed a $1 billion revenue run rate in December, but that milestone is not the headline. The headline is where we are heading. We are no longer a niche developer cloud; we are a platform that high-growth cloud and AI natives are increasingly choosing to run production AI workloads at scale.
謝謝你,馬特。在進入問答環節之前,我想和大家分享幾點想法。我們在 12 月突破了 10 億美元的年化營收運轉率,但這個里程碑本身並非重點,重點在於我們的發展方向。我們不再是小眾的開發者雲端平台,越來越多高成長的雲端原生和人工智慧原生企業選擇我們的平台來大規模運行生產級人工智慧工作負載。
We are projecting to exit 2026 at 25% plus revenue growth with a clear path to 30% growth in 2027 with the existing committed data center capacity alone. Our top customers are accelerating and are growing significantly faster than the market on DO. We have outgrown the old DigitalOcean narrative. Scaling our top customers was once a constraint. Today, it's our growth engine.
我們預計到 2026 年底,營收成長率將達到 25% 以上,僅憑現有的資料中心容量,到 2027 年,成長速度將達到 30%。我們的頂級客戶正在加速發展,其成長速度明顯快於 DO 市場。我們已經超越了以往 DigitalOcean 的敘事模式。擴大我們頂級客戶的規模曾經是一個限制。如今,它是我們發展的引擎。
Our $1 million customers are at $133 million ARR, growing at 123% year-over-year. The world of software is shifting from seats to tokens, from experimentation to production, from model training to inferencing at scale. And in that shift, the winners in inference will be more than just GPU landlords. They will be vertically integrated AI cloud platforms that deliver performance, great unit economics and simplicity that embraces open source, exactly what we have and what we continue to build. Our AI customer ARR reached $120 million in Q4, growing 150% year-over-year with 70% of that coming from inference and core cloud products, not from Bare Metal.
我們年收入 100 萬美元的客戶,其年度經常性收入 (ARR) 達到 1.33 億美元,較去年同期成長 123%。軟體世界正從座位數轉向代幣數,從實驗轉向生產,從模型訓練轉向大規模推理。在這種轉變中,推理領域的贏家將不僅僅是 GPU 擁有者。它們將是垂直整合的 AI 雲端平台,提供高效能、良好的單位經濟效益和簡潔性,並擁抱開源,這正是我們擁有的,也是我們將繼續建構的。我們的 AI 客戶 ARR 在第四季度達到 1.2 億美元,年增 150%,其中 70% 來自推理和核心雲產品,而不是來自裸機。
And we're doing it without chasing the GPU training arms race. Without sacrificing discipline, without compromising profitability, we are building something durable. AI is reshaping entire industries, and we are built for this shift. I'm incredibly excited to be part of DigitalOcean at this critical inflection point where a new era of software is being ushered in. I take incredible pride in building a platform that AI pioneers are increasingly leveraging to disrupt software.
而且我們並沒有捲入GPU訓練的軍備競賽。在不犧牲紀律、不影響獲利能力的前提下,我們正在打造一個經久耐用的事業。人工智慧正在重塑整個產業,而我們正是為這種轉變而生的。我非常興奮能夠在這個軟體新時代即將到來的關鍵轉折點加入 DigitalOcean。我為能夠建立這樣一個平台而感到無比自豪,人工智慧先驅們正越來越多地利用這個平台來顛覆軟體產業。
I thank all of you for your partnership and support, and I hope you will join us in San Francisco on April 28 to learn about our platform, our innovation and our customers. With that, let's open it up for your questions.
感謝各位的合作與支持,希望各位能於 4 月 28 日蒞臨舊金山,了解我們的平台、創新與客戶。那麼,現在就讓我們進入問答環節吧。
Operator
Operator
(Operator Instructions) Raimo Lenschow, Barclays.
(操作說明)Raimo Lenschow,巴克萊銀行。
Raimo Lenschow - Analyst
Raimo Lenschow - Analyst
Congrats from me. That's amazing how a company is transforming right in front of my eyes. Paddy, can you talk a little bit about the customers that you're seeing? Like, the talk in the market is that a lot of that is just OpenAI, Anthropic, maybe Google, and they are basically doing everything and nobody else really comes up. When you talk, looking at your customers, looking at the pipeline of customers out there.
恭喜你們!一家公司在我眼前發生如此巨大的轉變,真是令人驚嘆。帕迪,你能簡單談談你們所看到的客戶嗎?市場上的說法是,很多需求都集中在 OpenAI、Anthropic,也許還有 Google,他們基本上包辦了一切,其他公司根本無人問津。就您所看到的客戶和潛在客戶而言——
How do you see that inference market evolving in terms of how broad that will be? Is it just OpenAI doing everything? Or what are you seeing out there in the field? And then I had one follow-up for that.
您認為推理市場的發展趨勢如何,它將擴展到什麼程度?是不是只有 OpenAI 一家在做所有事情?或者,你們在實際市場中看到了什麼?然後我還有一個後續問題。
Paddy Srinivasan - Chief Executive Officer, Director
Paddy Srinivasan - Chief Executive Officer, Director
Yes, Raimo, thank you for the question. It's a very thoughtful way to get started. Of course, OpenAI, Gemini and Anthropic get all the headlines in the mainstream news coverage. But as we talk to AI native companies — and even the examples that I was using in my prepared remarks, and you will hear a lot more about this at our Deploy conference with very specific benchmarks and data — what we are hearing from these AI native companies is that while these closed source models are really, really good, the open source alternatives are extraordinarily important for managing unit economics as these companies scale, because the cost per token for the open source models is about 90% cheaper, right?
是的,雷莫,謝謝你的提問。這是一個非常周到的開始方式。當然,OpenAI、Gemini 和 Anthropic 佔據了主流媒體報導的所有頭條。但是,當我們與人工智慧原生公司交流,甚至在我腳本中舉的例子,以及在我們的部署大會上您將聽到更多關於這方面的信息,其中包含非常具體的基準和數據。但我們從這些人工智慧原生公司那裡聽到的是,雖然這些閉源模型確實非常好,但開源替代方案對於管理單位經濟效益來說極為重要,因為開源模型的代幣成本要便宜約 90%,對吧?
And that's with very comparable accuracy as these open source models mature. So we have many AI native customers that are using, as I mentioned, a variety of open source models in real time when they're doing inferencing. They want us to manage a multitude of open source models and even route their requests intelligently to these open source models — and, of course, use expensive closed source models on a case-by-case basis. It could be that certain prompts are better served by these closed source models, and everything else gets routed to the open source models so that they can have balanced unit economics.
隨著這些開源模型的成熟,它們的準確度也越來越接近。因此,我們有許多人工智慧原生客戶,正如我所提到的,他們在進行推理時會即時使用各種開源模型。他們希望我們管理大量的開源模型,甚至能夠智慧地將他們的請求路由到這些開源模型。當然,他們也會根據具體情況使用閉源的昂貴模型。對於某些提示,閉源模型可能更適合,而將其他所有請求路由到這些開源模型,以便它們能夠獲得均衡的單位經濟效益。
So it is by no means — and if you look at data from OpenRouter, 30% of the traffic already today is served by open source. That is without a lot of optimization, and without companies like DigitalOcean really stepping up and taking full ownership and guardianship of these open source models. So we are doing a lot of work in this regard over the next couple of months, and you will see it at our Deploy conference.
所以情況絕非如此——如果你查看 Open Router 的數據,你會發現如今已有 30% 的流量由開源服務提供。也就是說,如果沒有大量的最佳化,沒有像 DigitalOcean 這樣的公司真正站出來,全面負責並維護這些開源模型。因此,在接下來的幾個月裡,我們將在這方面進行大量工作,您將在我們的部署大會上看到這些工作。
But this 30% is only going to grow as these real-world AI native workloads expand; we are going to see a lot of open-source adoption. Even in the deployments that we are seeing, there is a very healthy adoption of open source models serving these agentic — agent farms. So it is really interesting to see how this is evolving. And I want to say there is definitely a world beyond these closed source models. The open source ecosystem is thriving, and it is only going to grow in strength from here on.
但隨著這些現實世界的 AI 原生工作負載的探索,這 30% 的比例只會增長,我們將看到開源軟體的大量採用。即使在我們看到的開放式部署中,開源模型也得到了非常健康的採用,為這些開放類別代理(代理集群)提供服務。所以,觀察它的發展演變真的很有意思。我想說,在這些閉源模式之外,肯定還有更廣闊的世界。開源生態系統蓬勃發展,未來只會越來越強大。
Raimo Lenschow - Analyst
Raimo Lenschow - Analyst
Yes. Okay. Perfect. And Matt, one question that comes up a lot at the moment is on the weighted rule of 50 numbers. If you look at your weighting, and then there's a lot of questions about the free cash flow margins that you think about in 2027. Can you maybe kind of go a little bit deeper there because that comes up a lot here at the moment?
是的。好的。完美的。馬特,目前經常被問到的一個問題是關於 50 個數字的加權規則。如果你看一下你的權重,那麼就會有很多關於2027年自由現金流利潤率的問題需要考慮。您能不能再深入探討一下這個問題,因為最近這裡常提到它?
Matt Steinfort - Chief Financial Officer
Matt Steinfort - Chief Financial Officer
Yes. Thanks, Raimo. The weighted rule of 50 is pretty simple for us. We multiply revenue growth by 1.5 and add 0.5 times the free cash flow margin. And that's effectively saying that you're counting a point of revenue growth as 3x as valuable as a point of free cash flow margin.
是的。謝謝你,雷莫。對我們來說,加權 50 法則非常簡單。我們將營收成長率乘以 1.5,再加上自由現金流利潤率的 0.5 倍。這實際上意味著,每一個百分點的營收成長,其價值是每一個百分點自由現金流利潤率的 3 倍。
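As an editorial aside, the weighted score Matt describes is simple arithmetic; a minimal sketch, using the weights stated on the call and the call's projected 2027 figures as the example inputs:

```python
def weighted_rule_of_50(revenue_growth_pct: float, fcf_margin_pct: float) -> float:
    """Weighted 'rule of 50' score as described on the call: revenue growth
    is weighted 1.5x and free cash flow margin 0.5x, so each point of growth
    counts three times as much as a point of margin."""
    return 1.5 * revenue_growth_pct + 0.5 * fcf_margin_pct

# Projected 2027 figures from the call: 30% revenue growth and a 20%
# unlevered adjusted free cash flow margin.
score = weighted_rule_of_50(30.0, 20.0)
print(score)        # 55.0 = 1.5*30 + 0.5*20
print(score >= 50)  # True -- clears the weighted rule-of-50 bar
# Note that 30 + 20 = 50 also satisfies the unweighted rule of 50,
# which is the point Matt makes next.
```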
But the important thing to note is, while we talk about the weighted rule of 50, if you look at the growth projections we provided, we're actually a regular rule of 50 as well, with projected 30% revenue growth in '27 and 20% unlevered free cash flow margins. So that is, I think, a very big testament to the growth opportunity that we have in front of us.
但要注意的是,雖然我們談論的是加權 50 法則,但如果你看一下我們提供的成長預測,實際上也是常規的加權——常規的 50 法則,預計 2027 年收入增長 30%,未槓桿自由現金流利潤率為 20%。所以我認為這充分證明了我們面前蘊藏著巨大的發展機會。
But also, the financial discipline that we've been employing — the ability to accelerate revenue growth while still maintaining very attractive EBITDA margins and very attractive free cash flow margins — is part of the model, and it's the benefit of us not chasing the GPU training arms race. We believe that we'll differentiate based on software and a differentiated platform, and we see a tremendous opportunity to drive really attractive margins as we expand and invest appropriately.
但同時,我們一直採用的嚴謹的財務管理,能夠在保持極具吸引力的 EBITDA 利潤率和自由現金流利潤率的同時加速收入增長,也是我們模式的一部分,這是我們沒有參與 GPU 訓練軍備競賽的好處。我們相信,我們將憑藉軟體和差異化的平台實現差異化,並且我們看到了巨大的機會,可以透過適當的擴張和投資來獲得極具吸引力的利潤率。
Operator
Operator
Kingsley Crane, Canaccord Genuity.
金斯利·克萊恩,Canaccord Genuity。
Kingsley Crane - Analyst
Kingsley Crane - Analyst
Congrats to the whole team on the results. I think you've done an excellent job with the investor update. I actually want to circle back to the inference cloud dynamic with open source models. We've been looking at OpenRouter data as well. I mean, some of these models come and go pretty quickly — how many, max, can you cater to? How are you thinking about quickly providing support for those classes and models? Is there any operational tax to quickly providing support?
恭喜整個團隊取得如此佳績。我認為你們的投資人更新報告非常出色。我其實想再回到開源模型與推理雲的動態這個主題。我們也一直在查看 OpenRouter 的數據。我的意思是,有些模型來去得很快,你們最多能支援多少個?你們打算如何快速為這些類別和模型提供支援?快速提供支援是否會帶來營運負擔?
And then just how to think about them driving growth, both from a revenue and profit standpoint. Could there be more of a Jevons paradox dynamic there with the lower cost models?
然後,我們該如何看待它們如何從收入和利潤的角度推動成長?低成本車型是否會更明顯地體現傑文斯悖論的動態?
Paddy Srinivasan - Chief Executive Officer, Director
Paddy Srinivasan - Chief Executive Officer, Director
Yes. Thank you, Kingsley. That's a good question. So you asked two different questions. One, on the operational overhead of day-zero support for these models: obviously, we've been extending day-zero support to a majority of these open source models as they come out.
是的。謝謝你,金斯利。這是個好問題。你其實問了兩個不同的問題。第一,關於為這些模型提供首日(day-zero)支援的營運成本:顯然,大多數開源模型一發布,我們就會為其提供首日支援。
And there are a couple of things there. One is, obviously, there's a little bit of manual overhead in supporting these models. But a large portion of this test and readiness harness is automated. And it is only going to grow in automation, and you will see a lot more details around this at our Deploy conference.
那裡有兩件事。顯而易見,支援這些模型需要一些人工操作。但這項測試和準備工作的大部分都是自動化的。而且它在自動化領域只會不斷發展,您將在我們的部署大會上看到更多相關細節。
And the second part of your question was really around the Jevons paradox of as these open source models proliferate, how should we think about the growth profile of not just our platform but also these companies, I think it is only going to aid in the deployment of AI native software in pretty much every segment of the market.
你問題的第二部分實際上圍繞著傑文斯悖論,即隨著這些開源模型的普及,我們應該如何看待不僅是我們平台,還有這些公司的成長前景?我認為這只會促進人工智慧原生軟體在幾乎所有市場領域的部署。
And I think we should also not think about AI native workloads as open source or closed source. What we are seeing is a mixture of both for the same use case — for the same inference call, even, for some parts of the application stack — based on the prompts, we do intelligent routing.
而且我認為我們也不應該把 AI 原生工作負載簡單地劃分為開源或閉源。我們看到的是,對於同一個用例,甚至同一個推理調用、應用程式堆疊的某些部分,兩者是混合使用的——我們會根據提示詞(prompts)進行智慧路由。
Right now, it's fairly manual, but we are working on different types of algorithms to route it in a much more intelligent and smart fashion. So you will see a universe going into the future where prompts are going to get routed to different models all working together at the same time to deliver high throughput, low latency, acceptable accuracy with great unit economics of token throughput. So this is coming.
目前,路由過程相當依賴人工操作,但我們正在研究不同類型的演算法,以便以更智慧的方式進行路由。因此,你會看到一個面向未來的世界,其中提示將被路由到不同的模型,所有模型同時協同工作,以提供高吞吐量、低延遲、可接受的準確性以及良好的代幣吞吐量單位經濟效益。所以這件事即將發生。
We are already seeing it from many of our AI native workloads. And that is how I see the market evolving as open source models continue to catch up with these closed source systems. The closed source systems are really important for being on the bleeding edge of innovation, but a vast majority of these long-running agentic software systems like OpenClaw can very materially run on these open source systems.
我們已經從許多人工智慧原生工作負載中看到了這一點。我認為市場的發展趨勢是,隨著開源模式不斷趕上這些封閉源系統,市場將會朝著這個方向演變。閉源系統對於走在創新前沿固然重要,但像 OpenClaw 這樣的長期運作的智能體軟體絕大多數都可以在這些開源系統上實際運作。
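The cost-aware routing Paddy describes — defaulting prompts to much cheaper open source models and escalating only the prompts that need a frontier closed source model — can be sketched roughly as below. This is a hypothetical illustration, not DigitalOcean's implementation: the model names, per-token prices, and the escalation flag are invented for the example (a real router would use a classifier rather than a caller-supplied flag).

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    usd_per_million_tokens: float  # illustrative prices, not real quotes
    open_source: bool

# Hypothetical catalog: an open source default roughly 90% cheaper than
# the closed source fallback, mirroring the cost gap cited on the call.
OPEN_MODEL = Model("open-source-llm", 0.50, True)
CLOSED_MODEL = Model("frontier-closed-llm", 5.00, False)

def route(prompt: str, needs_frontier: bool = False) -> Model:
    """Toy router: default to the cheap open source model, and only
    escalate to the closed source model when the request is flagged as
    needing frontier capability."""
    return CLOSED_MODEL if needs_frontier else OPEN_MODEL

def blended_cost(requests: list[tuple[int, bool]]) -> float:
    """Blended USD cost for a batch of (token_count, needs_frontier) requests."""
    total = 0.0
    for tokens, needs_frontier in requests:
        model = route("", needs_frontier)
        total += tokens / 1_000_000 * model.usd_per_million_tokens
    return total

# Ten requests of 1M tokens each, one of which needs the frontier model:
# 9 * $0.50 + 1 * $5.00 = $9.50, versus $50.00 if everything went closed source.
print(blended_cost([(1_000_000, i == 0) for i in range(10)]))  # 9.5
```

The design point is the one made in the answer: even routing a minority of traffic to the expensive model leaves the blended cost per token far below an all-closed-source baseline.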
Kingsley Crane - Analyst
Kingsley Crane - Analyst
Thanks, Paddy. That's really helpful. And then for Matt: obviously, $22 million of ARR per megawatt is a clear differentiator. I'm curious, now that Atlanta is close to full utilization, whether you have any insights on what a fully utilized megawatt can look like from a revenue efficiency standpoint for AI.
謝謝你,帕迪。這真的很有幫助。然後想請教 Matt:顯然,每兆瓦 2,200 萬美元的 ARR 是一個明顯的差異化優勢。亞特蘭大資料中心目前已接近滿載,我很好奇,從 AI 營收效率的角度來看,一個被充分利用的兆瓦究竟能達到什麼水準?您有什麼見解?
Matt Steinfort - Chief Financial Officer
Matt Steinfort - Chief Financial Officer
Yes, that's a great question, Kingsley. If you look at the public data that's available for, like, a Neocloud, which is more of a bare metal model, they show, what, $9 million to $12 million, I think, in ARR per megawatt. Clearly, we believe we can deliver more than that. And if you look at the guidance that we've given, what you'll see is that while it's [22] now, that's, again, with AI at less than 10%, or right around 10%, of our ARR. So as we grow AI, it will come down.
是的,金斯利,你問得好。如果你查看像 Neocloud 這樣偏向裸機模式的公司的公開數據,他們每兆瓦的 ARR 大約是 900 萬到 1,200 萬美元。顯然,我們相信我們能做得更多。如果你看看我們給出的指引,你會發現,雖然現在是 [22],但這是在 AI 僅佔我們 ARR 不到 10% 或大約 10% 的情況下實現的。所以隨著 AI 業務成長,這個數字會有所下降。
We'll add incremental ARR per megawatt greater than what you're seeing from the Neoclouds, but the drop from a bigger mix of AI by the end of '27, once we're fully ramped with the incremental [31], will only be a couple of million. That will be around $20 million. And so think of us not as having AI investments on one side and core cloud investments on the other; we have more of an overall AI cloud platform that has GPUs, it's got CPUs.
我們每兆瓦新增的 ARR 將超過你在 Neocloud 領域看到的水平;而到 2027 年底,隨著新增的 [31] 兆瓦全面投產、AI 佔比提高,每兆瓦 ARR 也只會下降幾百萬美元,大約在 2,000 萬美元左右。因此,不要把我們看成一邊是 AI 投資、一邊是核心雲投資;我們更像是一個整體的 AI 雲端平台,既有 GPU,也有 CPU。
It's got core compute and bandwidth and all the capabilities that you need, we still expect to deliver materially higher ARR per megawatt than what you're seeing in the Neocloud space. So we feel really good about the returns that we're getting and the margin that we're able to drive.
它擁有核心運算能力、頻寬以及您所需的所有功能,我們仍然預計每兆瓦的 ARR 將比您在 Neocloud 領域看到的要高得多。因此,我們對獲得的收益和能夠實現的利潤率感到非常滿意。
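The revenue-efficiency comparison in this exchange reduces to per-megawatt division; a small sketch, where the Neocloud range and the ~$22M and ~$20M figures come from the call itself, and the $220M/10 MW facility used to exercise the function is purely hypothetical:

```python
def arr_per_megawatt(arr_usd_m: float, megawatts: float) -> float:
    """ARR per megawatt in $M/MW -- the revenue-efficiency metric discussed."""
    return arr_usd_m / megawatts

# Hypothetical facility: $220M of ARR across 10 MW gives the ~$22M/MW figure.
print(arr_per_megawatt(220.0, 10.0))  # 22.0

# Figures quoted on the call (in $M per MW):
neocloud_range = (9.0, 12.0)  # bare-metal Neocloud comps
do_end_2027 = 20.0            # projected after the incremental 31 MW ramps

# Even the lower projected blended figure sits well above the comp range.
print(do_end_2027 > max(neocloud_range))  # True
```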
And this is only going to increase. I mean, you saw the chart in the deck about how much of the AI customer revenue is coming from non-Bare Metal — that's 70%. That's only going to increase. And that smaller lever of core cloud is only going to increase as customers become entrenched on our platform and they start putting in database and storage and some of the other higher-margin capabilities that are sticky. We're very excited about our ability to serve the kind of full addressable wallet of the AI native.
而且這種情況只會持續增加。我的意思是,你在簡報中看到了那張圖表:AI 客戶收入中有 70% 來自非裸機產品,這個比例只會繼續提高。隨著客戶在我們的平台上紮根,開始使用資料庫、儲存和其他一些利潤更高、黏性更強的功能,核心雲這個較小的槓桿也只會不斷增強。我們非常興奮能夠服務 AI 原生客戶的全部潛在支出。
Operator
Operator
Josh Baer, Morgan Stanley.
喬許貝爾,摩根士丹利。
Josh Baer - Analyst
Josh Baer - Analyst
Congrats on the strong results and impressive targets. Just wanted to clarify: the incremental 31 megawatts all comes online by the end of '26, driving that 25% revenue growth exiting the year. But then, as utilization increases, the capacity is enough to reach the full 30% growth in 2027 revenue?
恭喜你們取得優異成績並達成令人矚目的目標。我只是想澄清一下,到 2026 年底全部併網的新增 31 兆瓦發電量將推動年底收入成長 25%。但隨著利用率的提高,產能是否足以實現 2027 年 30% 的營收成長目標?
Matt Steinfort - Chief Financial Officer
Matt Steinfort - Chief Financial Officer
That's absolutely right, Josh. You nailed it. As we said in the call, the smallest of the three facilities, which is 6 megawatts, is going to come — start ramping revenue in the second quarter. But the other two start ramping in the second half. And just with what we believe are appropriate assumptions around the timing and the ramp, we'll hit 25% in Q4 as an exit growth rate, 25% plus.
沒錯,喬許。你說得對。正如我們在電話會議中所說,三個設施中最小的一個(6 兆瓦)將在第二季開始貢獻收入,另外兩個則在下半年開始放量。僅憑我們認為合理的關於時間和爬坡節奏的假設,第四季的退出成長率就將達到 25% 以上。
And then if all we did was kind of fill those — continue to fill those up — we'd hit 30% for the full year in 2027. And we feel very good, again, about the returns that we would generate there and the growth trajectory that we would be on at that point.
如果我們只是把這些填滿——繼續填滿這些,那麼到 2027 年,我們就能達到全年 30% 的目標。我們再次感到非常樂觀,相信我們屆時將獲得的收益以及我們將達到的成長軌跡。
Josh Baer - Analyst
Josh Baer - Analyst
Okay. That's helpful. And I was just hoping you could sort of review some of what Vinay Kumar's top priorities are at this point. There have been so many positive changes from a product and innovation perspective over the last couple of years. What are his priorities? What changes should we expect going forward?
好的。那很有幫助。我希望您能回顧一下 Vinay Kumar 目前的首要任務是什麼,在過去幾年裡,從產品和創新角度來看,已經發生了許多積極的變化。他的首要任務是什麼?未來我們應該預期會發生哪些變化?
Paddy Srinivasan - Chief Executive Officer, Director
Paddy Srinivasan - Chief Executive Officer, Director
Yes. Thanks, Josh. So as I was mentioning in my prepared remarks, given his background at RF Cloud, he has really hit the ground running. His top one or two priorities are going to be to continue to build out the inference cloud. And you will see a lot of very detailed announcements on April 28 at our Deploy conference on what the next generation of these inference cloud capabilities is going to look like.
是的。謝謝你,喬希。正如我在準備好的演講稿中提到的那樣,鑑於他在 RF Cloud 的背景,他真的很快就進入了工作狀態。他最重要的幾項任務之一就是繼續建立推理雲。4 月 28 日,在我們的部署大會上,您將看到許多非常詳細的公告,介紹下一代推理雲端功能將是什麼樣子。
The team is super heads down and busy working on it now. We also will continue to raise the bar on our core cloud capabilities, because our cloud native, digital native enterprise companies are also scaling tremendously on our platform. They require continuous innovation from our side on advanced things like different types of databases, the scalability aspects of our database as a service, and various parts of our core cloud infrastructure like high-performance storage and network file systems.
團隊現在正埋頭苦幹,忙於此事。我們也將繼續提高核心雲端能力的標準,因為我們的雲端原生、數位原生企業客戶也在我們的平台上大規模擴張,他們需要我們在進階功能上不斷創新,例如不同類型的資料庫、資料庫即服務的各種可擴展性面向,以及核心雲端基礎設施的各個部分,如高效能儲存和網路檔案系統。
So one of the things that Vinay is working on is delivering innovation in our core infrastructure that is applicable to both AI native and cloud native. So there is a huge intersection: when you look at companies like the AI natives that are rapidly scaling up on our platform, they require very similar things -- say, high-performance storage, as an example. I don't want to preannounce stuff that we are working on, which we will come out with on April 28.
因此,Vinay 正在努力的方向之一是為我們的核心基礎設施帶來創新,使其既適用於 AI 原生應用,也適用於雲端原生應用程式。因此,當你觀察像我們正在平台上快速擴展的 AI 原生公司時,你會發現它們之間存在巨大的交集,例如,它們對高效能儲存的需求非常相似。我不想提前透露我們正在開發的項目,這些項目將於 4 月 28 日發布。
But a lot of those things are very similar to what our cloud-native companies can also benefit from. So there is quite a robust lineup of capabilities that we are working on, both for the inference cloud as well as some of the underlying infrastructure enhancements that will be applicable to digital-native enterprise companies. So that's what he's focused on delivering. And as I mentioned, given his background, he's almost hit the ground running in terms of ramping up the innovation on the core inference cloud.
但很多方面與我們的雲端原生公司也能從中受益的面向非常相似。因此,我們正在為推理雲端以及一些底層基礎設施增強功能開發一系列相當強大的功能,這些功能將適用於數位原生企業公司。所以這就是他一直努力的方向。正如我之前提到的,鑑於他的背景,他在核心推理雲的創新方面幾乎立刻就取得了進展。
Operator
Operator
Wamsi Mohan, Bank of America.
Wamsi Mohan,美國銀行。
Wamsi Mohan - Analyst
Wamsi Mohan - Analyst
Great to see this growth acceleration here. Firstly, maybe, Paddy, just visibility around the 30% growth. How should we think about that in terms of -- I mean, obviously, DigitalOcean is a very different company today. But historically, you really have not had long-term contracts, long-term visibility. You're talking about very meaningful acceleration as you go to 30% plus.
很高興看到這裡的成長速度加快。首先,或許,帕迪,只是想了解 30% 成長的可見性。我們該如何看待這個問題呢?我的意思是,顯然,DigitalOcean 如今已經是一家大不相同的公司了。但從歷史上看,你們確實沒有長期合約,也沒有長期的可見性。你指的是成長率達到 30% 以上的那種意義非常重大的加速。
Maybe if you could dissect some of the underlying drivers of what you're looking at, which gives you the confidence? And maybe just split that between Infrastructure-as-a-Service and Platform-as-a-Service, that would be maybe a different way to slice and give people a view over there. And I have a quick follow-up.
或許你能剖析一下你們所看到的、讓你們有信心的那些根本驅動因素?或許可以將其拆分為基礎設施即服務 (IaaS) 和平台即服務 (PaaS),這或許是一種不同的劃分方式,可以給大家提供不同的視角。我還有一個簡短的後續問題。
Paddy Srinivasan - Chief Executive Officer, Director
Paddy Srinivasan - Chief Executive Officer, Director
Thank you, Wamsi. The -- I think Matt broke down some of the physics of the acceleration, right? So we have new capacity that is ramping up throughout this year and going into next year as well. So that gives us a lot of visibility. So first of all, maybe I should take a step back and talk about the fact that our -- the demand that we are seeing now is very, very robust.
謝謝你,瓦姆西。我想 Matt 已經解釋了加速背後的一些基本機制,對吧?因此,我們新增的產能將在今年全年以及明年逐步提升。這讓我們有很高的可見性。首先,或許我應該退一步,談談我們現在看到的需求非常非常強勁。
And it far exceeds the supply that we currently have from an infrastructure point of view. So we are being super responsible in ramping up our capacity. We are being super aggressive on the timelines. We are working very closely together with the data center providers and the OEMs to get this capacity online at the fastest possible speed. So given the schedule that we are currently working on, we feel very confident that as we bring this capacity online, we have enough demand in the pipeline to be able to fill up this capacity with very responsible unit economics.
從基礎設施的角度來看,需求遠遠超過了我們目前的供應能力。因此,我們在提升產能方面非常負責。我們在時間安排上非常積極主動。我們正與資料中心供應商和原始設備製造商緊密合作,以最快的速度讓這些產能上線。因此,根據我們目前正在執行的計劃,我們非常有信心,隨著這些產能的上線,我們有足夠的潛在需求,能夠以非常合理的單位經濟效益來填滿這些產能。
So that's what is giving us the confidence to provide the outlook of 25% plus exiting this year and 30% for next year. And also, our RPO has been going up steadily, and that is one leading indicator. But also, I should add the fact that inferencing is very different, right? I mean, these are real-world workloads. As opposed to training, where a company can just raise venture capital money and commit to a two-year, three-year contract to burn dollars to build a frontier model, inferencing workloads are typically paid for by end customers.
因此,我們有信心預測今年的退出成長率將超過 25%,明年將達到 30%。此外,我們的 RPO(剩餘履約義務)一直在穩步上升,這是一個領先指標。但是,我還應該補充一點,推理是非常不同的,對吧?我的意思是,這些都是現實世界的工作負載。與訓練不同——在訓練領域,公司可以籌集創投資金並簽訂兩年、三年的合約來燒錢建立前沿模型——推理工作負載通常由最終客戶付費。
So for us, that is super exciting, because we are typically working with post product-market-fit companies that have real revenue, working with real consumers or business to business, like Hippocratic AI. They're deploying in some of the world's largest health care providers. So we know that as their demand picks up, they're going to need more and more inference capabilities.
所以對我們來說,這非常令人興奮,因為我們通常與找到產品市場契合點後的公司合作,這些公司擁有真正的收入,服務真正的消費者或企業客戶,例如 Hippocratic AI。他們正在一些全球最大的醫療保健機構部署這項技術。所以我們知道,隨著他們的需求增加,他們將需要越來越多的推理能力。
So our confidence really stems from the visibility we are getting into our customers and the real-world inference demand. So I feel if you look at it from a customer perspective or you look at it from a capacity point of view, those are the data points that we used to triangulate our guidance for exiting this year and next year.
因此,我們的信心真正源自於我們對客戶的了解以及真實世界的推理需求。所以我覺得,無論是從客戶的角度來看,還是從產能的角度來看,這些就是我們用來推算今年退出成長率和明年業績指引的數據點。
Wamsi Mohan - Analyst
Wamsi Mohan - Analyst
Okay. And then maybe one quick one for Matt. So can you just talk a little bit about the margin progression? I guess you mentioned some near-term margin compression given your capacity ramp. Should we expect that will persist through all of 2026, given the timing of the ramp? And then as you ramp into '27, we should be back to 2025 levels?
好的。然後也許再問馬特一個簡短的問題。那麼,您能簡單談談利潤率的變化嗎?我猜您提到了由於產能提升,短期內利潤率會受到壓縮。考慮到產能提升的時間安排,我們是否應該預期這種情況會持續到 2026 年全年?然後,隨著產能在 2027 年逐步投入使用,我們應該能夠恢復到 2025 年的水平?
Matt Steinfort - Chief Financial Officer
Matt Steinfort - Chief Financial Officer
Thanks, Wamsi. Yes, there's certainly going to be some near-term pressure, as we said, on the gross margin, for example, but the metrics that we think are the best indicators of profitability for us are continue to be adjusted EBITDA margin and free cash flow margin, both on an unlevered basis and on a levered basis. And if you look at the margin guidance that we provided for the full year '26 and the ranges for '27, you see exactly what you just described, which is we'll have a little bit more pressure this year as we ramp.
謝謝你,Wamsi。是的,正如我們所說,短期內肯定會面臨一些壓力,例如毛利率,但我們認為衡量盈利能力的最佳指標仍然是調整後的 EBITDA 利潤率和自由現金流利潤率,無論是在無槓桿基礎上還是在有槓桿基礎上。如果你看一下我們為 2026 年全年提供的利潤率指引和 2027 年的利潤率範圍,你會發現你剛才描述的情況,那就是隨著我們產能的提升,今年我們將面臨更大的壓力。
But then as we grow into that, the utilization increases, that kind of catches back up and you should see an upward trajectory on the margins. The mix of AI services versus the core cloud, that's certainly -- that's a longer duration impact because as we add more AI capabilities and more AI revenue, margins are lower than the core cloud margin.
但隨著我們發展壯大,利用率提高,這種情況就會迎頭趕上,你應該會看到邊際效益呈現上升趨勢。AI 服務與核心雲的組合,這肯定會產生更持久的影響,因為隨著我們增加更多的 AI 功能和更多的 AI 收入,其利潤率會低於核心雲的利潤率。
So you'll have a little bit of a mix impact in addition to the timing impact, but all of that is net out in the very, very strong adjusted EBITDA margins that we're projecting and the very strong adjusted free cash flow margins and unlevered adjusted free cash flow margins.
因此,除了時間上的影響外,還會有一些組合上的影響,但所有這些影響都會被抵消,我們預計的調整後 EBITDA 利潤率、調整後自由現金流利潤率和未槓桿調整後自由現金流利潤率都非常強勁。
Operator
Operator
Gabriela Borges, Goldman Sachs.
加布里埃拉·博爾赫斯,高盛。
Gabriela Borges - Analyst
Gabriela Borges - Analyst
Congratulations to the DigitalOcean team. Paddy, I have a little bit of a longer-term question for you. If I think about DigitalOcean's core value proposition, it's democratizing access to cloud. That has been true for many years now. My question for you is, what do you think is structurally different with the AI compute cycle that will allow DigitalOcean to essentially capture and hold on to a higher share of wallet in AI inference compute relative to the cloud cycle?
恭喜 DigitalOcean 團隊。帕迪,我有個比較長期的問題想問你。DigitalOcean 的核心價值主張是普及雲端服務的使用。多年來一直如此。我的問題是,您認為人工智慧運算週期在結構上與雲端運算週期有何不同,使得 DigitalOcean 能夠在人工智慧推理運算領域佔據並維持更高的錢包份額?
And the reason I'm asking is because there are 32 companies that show up in the SemiAnalysis cost-to-max benchmarking report. We know that demand is early. We know the inference cycle is early. How do you think about DigitalOcean's ability to durably capture higher share relative to the other 31 competitors in the long term?
之所以這麼問,是因為在 SemiAnalysis 的成本基準測試報告中出現了 32 家公司。我們知道需求還處於早期階段。我們知道推理週期還處於早期階段。您認為 DigitalOcean 是否有能力在長期內持續獲得比其他 31 家競爭對手更高的市佔率?
Paddy Srinivasan - Chief Executive Officer, Director
Paddy Srinivasan - Chief Executive Officer, Director
Thank you, Gabriela. And I'm sure if SemiAnalysis was around in 2011 or 2012 when cloud was taking off, there would have been 32 VPS providers as well. And we went from that to a $1 billion run rate in 12, 13 years. And if I take a step back and think about how durable our mission is in the world of AI, I think it hits on a few different things, right? I fundamentally believe that inference workloads are also real-world applications, and as the application scales, you need a variety of different things all working together, right?
謝謝你,加布里埃拉。我相信,如果 SemiAnalysis 在 2011 年或 2012 年雲端運算興起時就已經存在,當時也會有 32 家 VPS 供應商。而我們在 12、13 年間從那時發展到了 10 億美元的營收運行率。如果我退後一步,思考我們的使命在人工智慧世界中有多持久,我想這涉及幾個不同的方面,對吧?我從根本上認為,推理工作負載也是現實世界的應用,隨著應用程式規模的擴大,你需要各種不同的東西協同運作,對吧?
AI natives do not want to use one provider for token generation, go to another provider for database, go to a third provider for their application experience and go to a fourth provider for some of the other core storage and other artifacts. They want an integrated cloud that is co-located, with all of these primitives working hand-in-hand together, so that they can focus on building their business and not mess around with infrastructure.
AI 原生開發者不希望只使用一個供應商來產生令牌,然後使用另一個供應商的資料庫,再使用第三個供應商來承載應用程式體驗,最後使用第四個供應商來獲取其他一些核心儲存和其他元件。他們想要一個整合的、同地部署的雲端平台,讓所有這些基本元件協同運作,這樣他們就可以專注於發展業務,而不是在基礎設施上浪費時間。
The other part that I feel very confident about is something that we are going to be talking about and dealing with a lot at our Deploy conference on April 28, which is this emergence of a mixture of AI models that is required to run efficient unit economics in inferencing mode. So the difference in the unit economics between closed-source models and open-source models is significant -- open-source models are 90% more cost effective compared to closed-source models. And open source already has 30% market share with just a handful of open-source models on the market.
我非常有信心的另一部分,也是我們將在 4 月 28 日的 Deploy 大會上重點討論和處理的內容,那就是為了在推理模式下實現高效的單位經濟效益,需要混合使用各種人工智慧模型。閉源模型和開源模型在單位經濟效益上的差異很大——開源模型比閉源模型的成本效益高出 90%。而且,僅憑市場上寥寥幾個開源模型,開源就已經佔據了 30% 的市場份額。
So I feel this is only going to go from strength to strength. And that has been a big differentiator for DigitalOcean throughout the years as well. So we talk about 32 companies showing up in some of these market landscapes. But when OpenClaw became viral a couple of weeks ago or a month ago, we were one of the natural places where developers started deploying it. As I mentioned, we have more than 30,000 of these agents running, and we barely did anything from a marketing point of view.
所以我認為它只會越來越好。多年來,這也一直是 DigitalOcean 的一大差異化優勢。所以我們談到有 32 家公司出現在這些市場格局中。但當 OpenClaw 在幾週前或一個月前爆紅時,我們自然而然地成為了開發者開始部署它的地方之一。正如我之前提到的,我們有超過 30,000 個這樣的智能體在運行,而我們在行銷方面幾乎沒有做任何事情。
In fact, we did no marketing. All we did is scramble our jets to make sure that developers have first-class experience deploying these agents on our platform and we were such a natural choice for running these long-running agentic software because they need a lot more than just access to GPUs or just access to inference tokens.
事實上,我們沒有做任何市場推廣。我們所做的只是全力以赴,確保開發人員在我們的平台上部署這些智能體時擁有一流的體驗。對於運行這些長時間運行的智能體軟體來說,我們是一個自然而然的選擇,因為它們需要的不僅僅是 GPU 的存取權限或推理令牌的存取權限。
So I feel very good that we -- our product strategy is working, and we are able to serve the needs of inference workloads running in production. So we're already starting to see the proof points for where different parts of our inference cloud are getting lit up. And the slide that I walked through in terms of our AI customer revenue, 70% of our revenue already is from non-bare metal.
因此,我非常高興我們的產品策略正在奏效,我們能夠滿足生產環境中運行的推理工作負載的需求。所以我們已經開始看到推理雲的不同部分被啟動的跡象。我剛剛展示的幻燈片顯示,就我們的人工智慧客戶收入而言,我們70%的收入已經來自非裸機。
And that should give us a lot of confidence that our platform services, higher-margin services are resonating with our customers. They're increasingly coming on -- coming to us as they recognize that bare metal is not going to be sufficient for them.
這應該讓我們更有信心,我們的平台服務和高利潤服務能夠引起客戶的共鳴。他們越來越傾向於選擇我們,因為他們意識到裸金屬已經無法滿足他們的需求。
Gabriela Borges - Analyst
Gabriela Borges - Analyst
Yes. Really good color. I'll stay on this one -- the 70% non-bare-metal data point. And I'll ask the question to Matt. Payback period on GPUs. The last time we talked about this, I think you told us it was around three years, but that was before you all had focused on maximizing or improving the ARR per megawatt of capacity. So my question is, how are payback periods on GPUs trending?
是的。說得很好。我會繼續關注這一點——70% 來自非裸機的數據點。我想把這個問題問給馬特。GPU 的投資回收期。上次我們談到這個問題時,我想你告訴我們投資回收期大約是三年,但那是在你們專注於最大化或提高每兆瓦容量的 ARR 之前。所以我的問題是,GPU 的投資回收期現在趨勢如何?
Matt Steinfort - Chief Financial Officer
Matt Steinfort - Chief Financial Officer
Well, that's a great question, Gabriela. And one of the things that I want to make sure everybody understand is if you think about why did we lease gear, like why are we doing equipment leasing, it's to address exactly this challenge. If you said, okay, well, you're going to spend hundreds of millions of dollars on GPUs and you're going to have to wait three, four years to pay them back. That's a model. That's not the model that we're pursuing.
嗯,這是一個很好的問題,加布里埃拉。我想確保每個人都明白的一點是,如果你想想我們為什麼要租賃設備,為什麼要進行設備租賃,那就是為了應對這項挑戰。如果你說,好吧,那你打算在GPU上花費數億美元,然後你得等三四年才能把錢賺回來。那是一個模型。這不是我們想要推行的模式。
Our model is we're leasing the gear, which means we're earning more ARR per megawatt and per associated GPU investment than what a Neocloud would earn. But we're also earning cash on that within months of actually deploying it, right? As soon as we deploy that and we start earning revenue and it ramps, we're paying on a monthly basis for that gear over four or five years, and we're earning more than 2 times that in revenue. So from a payback period, we still have the same kind of payback hurdles that we've had before.
我們的模式是租賃設備,這意味著我們每兆瓦的 ARR 和相關的 GPU 投資獲得的收益比 Neocloud 獲得的收益要高。但我們在實際部署後的幾個月內就從中獲得了收益,對吧?一旦我們部署好這些設備,開始獲利並逐步擴大規模,我們將在四到五年內按月支付這些設備的費用,而我們獲得的收入將是設備成本的兩倍多。因此,從投資回收期來看,我們仍然面臨與先前相同的投資回收障礙。
You'd like to see three-year paybacks on most of your investments. You might be willing to extend that to win some early customers. But if you actually think about the mechanics, that's a little bit of an intellectual exercise, because we're already paying our gear back within a month or two -- we're earning more cash than we've spent on that gear. And that's the reason you align your investment with revenue.
你希望大部分投資都能在三年內回收成本。為了贏得一些早期客戶,你或許願意延長這一期限。但如果你仔細想想其中的原理,你會發現其實並沒有——這有點像是在進行智力遊戲,因為我們是在付費——我們已經在一兩個月內把設備的錢賺回來了,因為我們賺的錢比花在設備上的錢要多。這就是為什麼你要讓投資與收入一致的原因。
Operator
Operator
Param Singh, Oppenheimer.
Param Singh,奧本海默。
Paramveer Singh - Analyst
Paramveer Singh - Analyst
First of all, Paddy, I wanted to get a sense of your (inaudible) AI platform; obviously, that's driving a lot of growth. But where do you think some of the missing pieces are in terms of your technology, given that the Neoclouds are starting to get a little bit more aggressive? Do you think you have a sustainable competitive advantage? And how do you plan to sustain that?
首先,帕迪,我想了解你的(聽不清楚)人工智慧平台,顯然,它正在推動公司快速成長。但考慮到Neoclouds公司開始變得更加咄咄逼人,您認為您的技術方面還缺少哪些環節?您認為您擁有可持續的競爭優勢嗎?你打算如何維持這種情況?
Paddy Srinivasan - Chief Executive Officer, Director
Paddy Srinivasan - Chief Executive Officer, Director
Yes. Param, not only do I think we have an advantage now, but our lead is increasing compared to other Neoclouds, because they are coming from a training world, which is totally different, right? The needs, all the way from the way GPUs are networked to the cluster sizes -- everything is so different.
是的。Param,我不僅認為我們現在有優勢,而且與其他 Neocloud 相比,我們的領先優勢還在不斷擴大,因為他們來自一個完全不同的訓練環境,對吧?從 GPU 的網路方式到叢集規模,所有方面的需求都截然不同。
Inferencing is very different, as I explained, and if you look at the slide that shows the richness of our inference cloud stack, each one has taken us years and years to perfect. And as we work very closely with cloud AI native companies, we are -- we are understanding and getting an appreciation for their real challenges, right?
正如我解釋的那樣,推理非常不同,如果你看一下展示我們推理雲堆疊豐富性的幻燈片,你會發現每一項都花了我們多年的時間才得以完善。隨著我們與雲端人工智慧原生公司密切合作,我們正在了解並體會他們面臨的真正挑戰,對吧?
Take the example that I was talking to you about, where customers need orchestration across different AI models in real time when they are trying to parse out a prompt and serve that query or make real-time decisions. So we are getting so much intelligence just working hand-in-hand with our customers. I feel like our lead is only going to increase from here on.
我之前跟你提到的例子是,當客戶嘗試解析提示資訊並進行查詢或做出即時決策時,他們需要在不同的 AI 模型之間進行即時協調。因此,我們透過與客戶攜手合作,獲得了大量有價值的資訊。我覺得我們的領先優勢只會越來越大。
And it's not to say that we won't have competition, but I feel very confident in our ability to out-invent these other companies in terms of our inference cloud. And the durability is there for you to see: we have 0% churn in our $1 million-plus customer cohort. So something is working, and that is our agentic inference cloud.
這並不是說我們不會面臨競爭,但我對我們在推理雲端領域的創新能力非常有信心,我們一定能夠超越其他公司。至於持久性,大家有目共睹:我們收入超過 100 萬美元的客戶群流失率為 0%。所以有些東西正在發揮作用,那就是我們的智能體推理雲。
Paramveer Singh - Analyst
Paramveer Singh - Analyst
And as my follow-up, do you feel you're constrained by the availability of power and physical location at this point? Or, put conversely, given the opportunity to invest even heavier and grow faster, given the demand from the AI natives, what would you prefer at this time? Or would you rather have a slower pace of investment? Any insight you could give would be really appreciated.
我的後續問題是,您目前是否覺得受到電力供應和地理位置的限制?或者反過來說,如果現在有機會加強投資力度,實現更快成長,考慮到人工智慧原住民的需求,你此時會選擇什麼?或者您更傾向於放慢投資步伐?如果您能給我一些建議,我將不勝感激。
Matt Steinfort - Chief Financial Officer
Matt Steinfort - Chief Financial Officer
Yes. As Paddy said, we have more demand than we have supply. But we're also making, I think, very prudent and appropriate investment decisions. We don't want to go all in with like a single customer. We don't want to go all in on a single generation of GPU technology.
是的。正如帕迪所說,我們的需求量大於供應量。但我認為,我們也做出了非常謹慎和適當的投資決策。我們不想把所有資源都投入到單一客戶身上。我們不想把所有資源都投入到單一的GPU技術中。
We believe that building a diverse set of customers that are very heavy in the inferencing workloads and not chasing training, we'll build a durable model for us. And so we'll continue to evaluate opportunities to continue to accelerate our growth and we'll make good appropriate financial decisions, and we're doing it in a very balanced way across a diverse set of customers. But we're very, very highly concentrated on what we're good at and where we're differentiated and where we can earn a good return, and that's what's driving our investment decisions.
我們相信,透過建立一批在推理工作負荷方面投入巨大而非追求訓練的多元化客戶群,我們將建立一個持久的商業模式。因此,我們將繼續評估各種機會,以繼續加速成長,並做出適當的財務決策,同時以非常平衡的方式服務多元化的客戶群。但我們非常非常專注於我們擅長的領域、我們的差異化優勢以及我們可以獲得良好回報的領域,而這正是我們投資決策的驅動力。
Operator
Operator
Radi Sultan, UBS.
Radi Sultan,瑞銀集團。
Radi Sultan - Analyst
Radi Sultan - Analyst
First one for Paddy, kind of on a similar line of questioning, just sort of that longer-term capacity add framework. As you guys think about how much capacity you want to procure, and maybe stretch it out over the next several years -- what are you looking at specifically to inform that decision? And then what gives you confidence in being able to fill that capacity over the next several years?
首先是問帕迪的問題,和之前的問題有點類似,是關於長期產能增加框架的問題。你們在考慮要採購多少產能、並可能將其分攤到未來幾年時,具體會參考哪些因素來做出決定?那麼,是什麼讓您有信心在未來幾年內填滿這些產能呢?
Paddy Srinivasan - Chief Executive Officer, Director
Paddy Srinivasan - Chief Executive Officer, Director
Yes. Thank you. We look at many, many factors, but the dominant one is customer demand -- we look at what they're dealing with and how they are projecting their needs. So that is a big, big input factor for us. The second one is we look at the footprint from the perspective of inferencing: obviously, we need to have a really good geographic spread, and for all of our new data centers, we have both core cloud as well as AI capacity co-located, all running on the same server stack.
是的。謝謝。我們會考慮很多很多因素,但最主要的是客戶需求——我們會了解他們面臨的問題,以及他們如何預測自己的需求。所以這對我們來說是一個非常非常重要的輸入因素。第二點是,我們從推理的角度來看佈局,顯然,我們需要非常好的地理分佈,對於我們所有的新資料中心,我們將核心雲容量和 AI 容量同地部署,所有這些都運行在同一個伺服器堆疊上。
So that's an important aspect for us -- to have all of these things co-located. The third thing we always look at is how we are going to keep up with the generational leapfrogs of OEMs, including AMD and NVIDIA and perhaps others in the future.
因此,將所有這些東西同地部署對我們來說是一個重要的方面。我們始終關注的第三件事是,我們將如何跟上 OEM 廠商(包括 AMD 和 NVIDIA,以及未來可能出現的其他廠商)的世代飛躍式發展。
So these are all important factors that we take into account as we consider how our footprint is going to look like over the next several years. And we are always making this evaluation, we are looking at various options as we build out our long-term plan. And as I said, primary driver is always looking at our customer needs, customer demand, what kind of workloads are they ramping up.
因此,這些都是我們在考慮未來幾年我們的業務足跡將如何發展時需要考慮的重要因素。我們一直在進行評估,在製定長期計畫的過程中,我們會考慮各種方案。正如我所說,首要驅動力始終是專注於客戶的需求、客戶的期望,以及他們正在增加的工作量。
The demand for their application is a big driver for us. So those are some of the input factors that we use to plan our capacity.
市場對他們應用程式的需求是我們前進的一大動力。以上是我們用來規劃產能的一些輸入因素。
Radi Sultan - Analyst
Radi Sultan - Analyst
Got it. Just a quick follow-up for Matt. Does the [27%] EBITDA margin and free cash flow guidance contemplate any additional capacity investments next year? Or is that just reflect some of the 31 megawatts you're bringing online this year?
知道了。給馬特一個簡短的後續問題。27% 的 EBITDA 利潤率和自由現金流預期是否考慮了明年的任何額外產能投資?或者這僅僅反映了您今年新增的 31 兆瓦電力中的一部分?
Matt Steinfort - Chief Financial Officer
Matt Steinfort - Chief Financial Officer
It's just reflective of the 31 megawatts that we're bringing on this year.
這正好反映了我們今年新增的 31 兆瓦裝置容量。
Operator
Operator
James Fish, Piper Sandler.
詹姆斯·菲什,派珀·桑德勒。
James Fish - Analyst
James Fish - Analyst
Maybe just following up on that. If AI is growing as fast as it is, and you guys are needing to bring on capacity now to meet all this demand, aren't you going to need more capacity then? And Matt, additionally, it looks like you're excluding finance leases in your free cash flow metric. Why treat it like this? As if it weren't financed, you'd still have CapEx, and it does seem to imply -- I'm getting a lot of this question premarket here.
或許只是想跟進一下。如果人工智慧發展如此迅速,你們現在就需要增加產能來滿足所有這些需求,那麼到時候你們不就需要更多的產能嗎?另外,Matt,看起來你們在自由現金流指標中排除了融資租賃。為什麼要這樣處理?就算不是以融資方式取得,你們仍然會有資本支出,而且這似乎有所暗示——我在盤前經常被問到這個問題。
It seems like you're implying about 10% reported free cash flow in '27. So can you walk us through that? And I know this is a loaded question, but a lot of those that are providing leased servers are implementing memory cost increases. So I guess, how are you thinking about what commitments you actually have from them and potential pass-through of memory costs?
你似乎暗示2027年公佈的自由現金流約為10%。能給我們詳細解釋一下嗎?我知道這是一個棘手的問題,但很多提供租賃伺服器的公司都在提高記憶體成本。所以我想知道,您是如何考慮您實際從他們那裡得到的承諾以及內存成本可能轉嫁到您身上的問題的?
Matt Steinfort - Chief Financial Officer
Matt Steinfort - Chief Financial Officer
Yes. Just -- I'll take that in reverse. So yes, we've seen increased component costs, the same as others in the industry, and that's all reflected in our guidance. And again, it hasn't changed our return expectations or the economics that we'd see. It just means that there's more cost associated with some of the servers that we're bringing on.
是的。我倒過來回答。是的,我們和業內其他公司一樣,都看到了零件成本的上漲,而這些都反映在我們的業績指引中。而且,這並沒有改變我們的回報預期或我們所看到的經濟效益。這只是意味著我們新增的一些伺服器會帶來更多成本。
But this is -- I'm glad, Fish, you brought this up -- which is, you've got to think about our free cash flow in tiers, right? So you say, okay, well, you've got unlevered free cash flow, which, again, people should be using from a valuation standpoint, and we're talking about being in the 18% to 20% range. When you add the interest expense, you get the levered free cash flow, which is what we've historically reported -- that's our adjusted free cash flow margin. And you're only giving up a couple of percentage points there. And that interest right now is half the TLA and half equipment leasing.
但是——我很高興,Fish,你提出了這一點——我們必須分層次地考慮我們的自由現金流,對吧?所以你會說,好吧,有未槓桿自由現金流,這也是人們從估值角度應該使用的指標,我們說的是在 18% 到 20% 的範圍內。加上利息支出後,就得到了槓桿自由現金流,也就是我們歷來報告的——我們調整後的自由現金流利潤率。你只不過是在這裡損失幾個百分點而已。目前的利息支出一半來自定期貸款 A(TLA),一半來自設備租賃。
And then as you point out, you have the principal payments that are more of a financing transaction. That's why they're not captured in either the adjusted free cash flow or leverage adjusted free cash flow. But if you take those financing transactions and if you're going to lump everything in it and you say, what about the mandatory prepayment of $25 million a year on your term loan, okay?
正如你所指出的,還有本金支付,這更像是一種融資交易。這就是為什麼它們沒有反映在調整後的自由現金流或槓桿調整後的自由現金流中的原因。但是,如果你把這些融資交易都算進去,如果你要把所有事情都歸到其中,然後你說,那麼每年強制提前償還定期貸款 2500 萬美元又該怎麼算呢?
We'll throw that in there. If you take all of the cash payments, including the principal payments, including the prepayment of the Term Loan A, so that's all financing stuff. So again, you're mixing metaphors here. So if you throw that all in, we're still generating cash. So you're saying, hey, well, it's [10]. I'm like, hey, it's [10]. I think we're generating cash.
我們會把這個也加進去。如果把所有現金支付都算進去,包括本金支付,包括定期貸款 A 的提前還款,那麼這些都是融資方面的事情。所以,你又把比喻混用了。所以即使把這些都算進去,我們仍然在產生現金流。所以你的意思是,嘿,好吧,它是[10]我心想,嘿,是[10]我認為我們正在創造現金流。
While we're accelerating the growth of this business into the 30s, on an unlevered free cash flow basis, it's 18% to 20%. So it's a testament to our ability to dramatically accelerate growth. We've taken growth from 11%, 12%, 13% to guiding to 30%, and we're generating incredibly strong unlevered free cash flow. We're generating very strong levered free cash flow. And if you throw the kitchen sink in there and all the payments that we have to make, we're still generating cash.
在我們將這項業務的成長加速到 30% 以上的同時,以未槓桿自由現金流計算,利潤率為 18% 至 20%。這證明了我們大幅加速成長的能力。我們已將成長率從 11%、12%、13% 提高到 30% 的指引,同時我們正在產生非常強勁的無槓桿自由現金流。我們也產生了非常強勁的槓桿自由現金流。即使把所有支出都算進去,包括我們必須支付的所有款項,我們仍然在產生現金。
I mean that's an incredibly strong position to be in, and we have a very flexible balance sheet. So we feel very good about the cash generation that we're setting out while we're delivering this growth.
我的意思是,我們處於非常有利的地位,而且我們的資產負債表非常靈活。因此,我們對在實現成長的同時所設定的現金流目標感到非常滿意。
James Fish - Analyst
James Fish - Analyst
Yes. I mean, the growth acceleration looks good. And Paddy, for you, on slide 20 -- this got asked a couple of questions ago to a degree -- but you point out the difference between you guys and the Neoclouds and inference wrappers. And maybe being humble about it, you point out that you're about 75% of the way in the first three categories. And so is this something that we should be expecting to hear about at the April event? Or what do you guys need to do to get to that full 100%?
是的。我的意思是,成長加速看起來不錯。帕迪,關於你在第 20 張幻燈片上的問題,我之前也被問過幾個類似的問題。但您可能想指出您與 Neocloud 和推理封裝器之間的差異。或許你應該謙虛一點,指出你在前三個類別中已經完成了大約 75%。那麼,我們是否會在四月的活動中聽到相關消息呢?或者,你們需要做些什麼才能達到100%的差距?
Paddy Srinivasan - Chief Executive Officer, Director
Paddy Srinivasan - Chief Executive Officer, Director
Yes. Fish, I don't know if I will ever call myself 100% in those things because that market is changing so fast. Like if we ask five of our customers today, what they want versus what they thought they wanted three months ago is meaningfully different, right? Because as they are going into their customer base and deploying their solutions, new things come up all the time. The capability of AI models evolve all the time.
是的。Fish,我不知道我是否能百分之百稱自己精通這些領域,因為這個市場變化太快了。例如,如果我們今天詢問五位客戶,他們現在想要的和三個月前他們以為自己想要的相比,肯定有很大的不同,對吧?因為當他們進入客戶群並部署解決方案時,總是會有新的情況出現。人工智慧模型的能力不斷發展。
So this is going to be a moving target for the next couple of years. But the first part of your question, absolutely, that is where our R&D team is super heads down, inventing new technologies, inventing new parts of the stack. So you will hear a lot more about this on April 28. But I would say this is where I feel very confident that we already have a lead and that lead is only going to grow over the next few quarters.
所以,未來幾年,這將會是一個不斷變化的目標。但對於你問題的第一部分,答案是肯定的,我們的研發團隊正在全力以赴,發明新技術,發明技術堆疊的新部分。所以4月28日你會聽到更多相關消息。但我認為,我們在這方面已經取得了領先優勢,而且在接下來的幾個季度裡,這種領先優勢只會越來越大。
Operator
Operator
Thomas Blakey, Cantor Fitzgerald.
Thomas Blakey,坎托‧費茲傑拉。
Thomas Blakey - Analyst
Thomas Blakey - Analyst
Congratulations on a great quarter and a great outlook here. Maybe some follow-ups to my peers' questions. Paddy, you mentioned, I think it was on a previous question, demand outstripping supply and giving you great visibility, which you've alluded to on this call. I'm not expecting you to give calendar '28 commentary, but if you wanted to -- since you looked out two years like on the April '25 call -- that would be great. But in addition to that, I'm interested in what you're seeing in the pricing dynamic. If demand is outstripping supply as you're lining up these new AI natives, just maybe some commentary on pricing would be helpful.
恭喜你們本季業績出色,前景一片光明。或許是對前面幾位提問的一些跟進。帕迪,你之前提到過(我想是在回答上一個問題時)需求超過供應,這給了你很高的可見性,你在這次電話會議中也有所提及。我並不期望你對 2028 年給出評論,但如果你願意的話——就像你在 2025 年 4 月的電話會議上展望了兩年那樣——那就太好了。但除此之外,我還想了解你在價格動態方面觀察到的情況。如果需求超過供應,而你們正在簽下這些新的 AI 原生客戶,或許對定價做一些評論會很有幫助。
Paddy Srinivasan - Chief Executive Officer, Director
Paddy Srinivasan - Chief Executive Officer, Director
Yes. Thanks, Tom. So I think we have already talked about what we are going to talk about in -- for 2027. But in terms of the -- yes, the demand is clearly there, and we are moving as fast as we can to first deliver on these three data centers that Matt talked about. From a pricing point of view, it is -- we have competition from all kinds of different players and the pricing is holding.
是的。謝謝你,湯姆。所以我認為我們已經討論過我們將在 2027 年討論什麼了。但就目前而言——是的,需求顯然存在,我們正在盡最大努力加快速度,首先交付馬特提到的這三個資料中心。從定價角度來看,我們面臨來自各種不同參與者的競爭,但價格仍然保持穩定。
And in some cases, it has gone up. And we are very, very attuned to what is going on in the market. And there is a lot of scarcity of supply across the board. So we are also in a position where we work very closely with our customers to ensure that we are calibrating the prices that we have, both on demand as well as contractual prices to keep pace with what the market dynamics are at this point. But I would say nothing has materially changed.
在某些情況下,價格甚至上漲了。我們非常密切關注市場動態。而且各方面都存在著供應短缺的問題。因此,我們也與客戶密切合作,以確保我們能夠根據市場動態調整價格,包括按需定價和合約價格。但我認為,實質上並沒有發生任何變化。
And the pricing is also a function of the generation of the GPUs that we are talking about, right? At the lowest level, if a customer wants access to GPUs, it is priced GPU dollars per hour. And at that layer, it really depends on the generation of the GPU, whether it is Blackwell or the Hopper series from NVIDIA or the 350, 355 from AMD or the 300 or 325. So it really depends on the nature of the generation. There are also other dependencies like the cluster sizes, the cluster configuration, what kind of networking they want and so forth.
價格也取決於我們所討論的GPU的代數,對吧?最基本的層面上,如果客戶想要使用 GPU,則以 GPU 美元/小時計費。而在這個層面上,它實際上取決於 GPU 的代數,無論是 NVIDIA 的 Blackwell 或 Hopper 系列,還是 AMD 的 350、355,或是 300 或 325。所以這真的取決於這一代的特質。還有其他一些依賴項,例如叢集規模、叢集配置、他們想要的網路類型等等。
And as you move up the stack, if you look at my Slide 19 -- the one thing that I did not mention in Slide 19 is that customers can enter our stack at pretty much any layer, right? So the higher up you go in the stack, you're not pricing per GPU-hour, but pricing per token. And there, we have a lot more degrees of freedom in terms of how we price versus competition. Because there, you're doing dollars per token, but you also have the flexibility of running it on different types of hardware. You can also change up the AI model that is servicing this token request.
當你向上移動堆疊時,如果你看一下我的第 19 張投影片,你會發現——而且每一層——我在第 19 張投影片中沒有提到的一件事是,客戶幾乎可以從堆疊的任何一層進入我們的堆疊,對吧?因此,隨著堆疊層級的升高,定價不再是以 GPU 小時數計算,而是以代幣計算。因此,與競爭對手相比,我們在定價方面擁有更大的自由度。因為在那裡,你不僅可以按代幣價格收費(每枚代幣一美元),還可以靈活地在不同類型的硬體上運行它。您也可以變更處理此令牌請求的 AI 模型。
So we have more degrees of freedom with customers. Some customers need that flexibility, and they are willing to live at the higher layers of the stack rather than dictating which generation of hardware they want to run on.
因此,客戶擁有更大的自由度,有些客戶需要這種靈活性,他們願意接受更高階的技術棧,而不是指定他們想要在哪一代硬體上運行。
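As an editorial illustration of the two billing models Paddy describes -- raw capacity priced in dollars per GPU-hour by generation, versus a higher serverless layer priced per token -- here is a minimal sketch. All rates and figures are hypothetical, not DigitalOcean pricing:

```python
# Two billing models from the call: raw capacity billed in dollars per
# GPU-hour (rate varies by GPU generation), versus the higher inference
# layer billed in dollars per token, where the hardware generation is
# abstracted away. All rates below are hypothetical illustrations.
GPU_HOURLY_RATE = {"hopper": 2.50, "blackwell": 4.00}  # $/GPU-hour (assumed)
TOKEN_RATE = 0.60 / 1_000_000                          # $/token (assumed)

def gpu_hour_cost(generation: str, gpus: int, hours: float) -> float:
    """Infrastructure-layer price: depends on which generation you rent."""
    return GPU_HOURLY_RATE[generation] * gpus * hours

def token_cost(tokens: int) -> float:
    """Per-token price: the provider chooses the hardware and model behind it."""
    return TOKEN_RATE * tokens

print(gpu_hour_cost("hopper", gpus=8, hours=24))  # 8-GPU node for one day: 480.0
print(token_cost(50_000_000))                     # 50M tokens of inference
```

The per-token layer is where the hardware generation drops out of the price entirely, which is what gives the provider the extra degrees of freedom mentioned above.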
Thomas Blakey - Analyst
Thomas Blakey - Analyst
Right. That's super helpful, Paddy. And maybe as an extension of that flexibility, it was impressive to hear about the 0% churn in the large $1 million-plus cohort with 115% NRR. I'd love to know what the overlap there is with regard to the AI native exposure, if you could maybe talk about just those customers and how much of that is from AI. And for Matt, relatedly, are we finally including AI and ML revenue there? And if not, when can we expect that?
正確的。帕迪,這太有幫助了。或許正是這種靈活性的延伸,令人印象深刻的是,在收入超過 100 萬美元的大客戶群中,流失率為 0%,淨收入率 (NRR) 達到了 115%。我很想知道在 AI 原生應用程式方面,兩者的重疊情況如何。您能否談談這些客戶,以及其中有多少來自 AI?另外,對於 Matt 來說,您也正在改進,我們是否最終將 AI 和 ML 的收入也算進去了?如果沒有,我們什麼時候才能等到呢?
Matt Steinfort - Chief Financial Officer
Matt Steinfort - Chief Financial Officer
Yes. Thanks, Tom. So on a customer count basis, it's about half of the $1 million customers that are AI customers, and half are core cloud or general purpose cloud only. It's a little bit more AI on a revenue basis or an ARR basis, but not a lot.
是的。謝謝你,湯姆。所以,以客戶帳戶計算,大約有 100 萬美元客戶的一半左右。我們的人工智慧客戶有一半僅使用核心雲端或通用雲端。從收入或年度經常性收入(ARR)的角度來看,它稍微多一些,人工智慧也稍微多一些,但差異不大。
It's not too far off of 50-50. And as you saw in the materials, 48% of the trailing 12-month incremental ARR is coming from AI customers. So that's kind of how the split is. In terms of the NDR -- no, it's not in there yet. And that's why we disclosed the AI customer revenue, and we will continue to disclose that as a metric, along with the growth rate.
差不多是五五開吧。正如你在資料中看到的,過去 12 個月新增 ARR 的 3 或 48% 來自這些 AI 客戶。大致情況就是這樣。至於——不,它還沒有被加到裡面。我們之所以揭露人工智慧客戶收入,是因為我們將繼續將該指標作為成長率的衡量標準來揭露。
And also looking at the RPO -- again, a decent chunk of that, not all, but a decent chunk, is also AI. We're trying to give you better leading indicators of the performance of the AI customer base. On the NDR -- if you look at some of the charts that we showed with some of the bigger inferencing providers, they just got started on the platform in kind of the June, July time frame.
此外,RPO(剩餘履約義務)中也有相當一部分(雖然不是全部)是人工智慧貢獻的。我們正在努力為您提供更好的人工智慧客戶群績效領先指標。NDR,如果你看一下我們展示的一些大型推理提供者的圖表,你會發現他們只是在六、七月份才開始使用這個平台。
And there's a big difference in, I'd say, the size and caliber of the customers that we've been winning in the last 6 months -- what is now seven, eight months, I guess -- on the AI side. Those, we think, will have more of your traditional kind of NDR-like characteristics, where they grow and expand on the platform using inferencing, which is more of a production workload. A lot of our earlier customers were smaller customers doing experimentation, doing projects, and they just don't show that pattern -- revenue was growing like crazy because we would be adding a ton of those customers.
而且,我認為,過去 6 個月(現在應該是 7、8 個月了)我們在人工智慧方面贏得的客戶規模和品質有很大的不同。我們認為,這些客戶將更具有傳統 NDR 的特徵,他們會在平台上使用推理進行成長和擴展,這更像是生產工作負載,而我們早期的許多客戶都是規模較小的客戶,他們進行實驗、開展項目,他們的收入似乎並沒有瘋狂增長,因為我們沒有增加大量的這類客戶。
But if you look at any of the individual customers, it was hard to see a pattern. And what NDR, as a SaaS metric, looks for is patterns where you bring on a customer and you can expect them to do XYZ over the next 12 months. And we just didn't see that. There's noise in our AI customer revenue, a kind of lumpiness early on, that we now see changing. So we'll continue to evaluate that every quarter, and at the appropriate time, we'll contemplate rolling that in.
但如果你仔細觀察任何一個客戶,很難發現其中的規律。MDR 是一種 SaaS 指標,它尋找的是模式,即當你獲得一個客戶後,你可以預期他們在接下來的 12 個月內完成 XYZ 操作。而我們當時並沒有意識到這一點。我們沒有看到人工智慧客戶收入出現早期波動或出現變化的跡象。因此,我們將繼續每季對此進行評估,並在適當的時候考慮將其納入其中。
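For readers unfamiliar with the metric Matt is describing, net dollar retention (NDR/NRR) compares a fixed customer cohort's revenue today with the same cohort's revenue 12 months earlier: new customers are excluded, churned customers count as zero. A minimal sketch of the calculation, with made-up cohort figures:

```python
# Net dollar retention (NDR): ARR from a fixed customer cohort today
# divided by that same cohort's ARR 12 months ago. Customers added
# during the year are excluded; churned customers count as zero.
def net_dollar_retention(arr_start: dict, arr_end: dict) -> float:
    """arr_start/arr_end map customer id -> ARR; the cohort is arr_start's keys."""
    start_total = sum(arr_start.values())
    end_total = sum(arr_end.get(cust, 0.0) for cust in arr_start)  # churn -> 0
    return end_total / start_total

# Hypothetical cohort: one customer expands, one is flat, one churns.
start = {"a": 100.0, "b": 50.0, "c": 30.0}
end   = {"a": 150.0, "b": 50.0, "d": 999.0}  # "d" joined mid-year, so ignored
print(f"NDR = {net_dollar_retention(start, end):.0%}")  # prints NDR = 111%
```

This is why lumpy, project-based usage defeats the metric: a cohort of experimenters shows no stable 12-month expansion pattern even while total revenue grows from new adds.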
But it's probably still 12 months away.
但可能還要再等12個月。
Operator
Operator
Patrick Walravens, Citizens.
派崔克‧沃爾拉文斯,市民。
Patrick Walravens - Analyst
Patrick Walravens - Analyst
Congratulations on the quarter. And I have to say congratulations on the slide deck. It's fantastic, and I'm sure all of your investors are going to appreciate it. So Paddy, I was looking back at my note from 2 years ago when you joined, and at the time, one of the things you said was that our durable competitive differentiator for us long term is going to be in the software layer.
恭喜你本季取得佳績。我還要祝賀你的幻燈片做得非常棒。太棒了,我相信你們所有的投資人都會很欣賞的。帕迪,我回顧了兩年前你加入公司時我寫的筆記,當時你說過,我們長期的持久競爭優勢將體現在軟體層面。
And you said you were focused on bringing simple, easy-to-use AI ML capabilities on both hardware and software to developers. So what I'm wondering is, as you look back -- you were growing 11% when you joined and decelerating, right? So as you look back, which of the growth drivers that have caused you to accelerate -- now we're talking about 30% -- did you anticipate, and which were fortuitous -- probably the wrong word, fortune favors the prepared -- but which were sort of unexpected?
您曾表示,您致力於為開發者提供簡單易用的硬體和軟體人工智慧機器學習功能。所以我想知道的是,當你回顧過去——你加入時增長率為 11%,然後增速放緩,對嗎?所以回顧過去,哪些部分——哪些成長驅動因素促使你加速發展,現在我們談論的是 30%。你是否預料到了哪些事情?哪些事情是偶然的,用「偶然」這個詞可能不太恰當,因為有準備的人往往更有優勢,但哪些事情是意料之外的呢?
Paddy Srinivasan - Chief Executive Officer, Director
Paddy Srinivasan - Chief Executive Officer, Director
Yes. Thank you, Patrick. I would say -- and maybe I'll take some creative liberty in answering your question. So what took a few quarters for us to get right was, as I mentioned several times during this call, we had a constraint in keeping up with customers that were scaling rapidly and scaling big on our platform when I joined. So it took us a few quarters to really understand, get to the bottom of their needs -- and there was a lot of work that had to be done for us to get to the 0% churn that I was so proud to share with all of you this morning.
是的。謝謝你,派崔克。我會說價格是多少——也許我會發揮一些創意的視角來回答你的問題。所以,我們花了幾個季度才解決的問題是——正如我在這次電話會議中多次提到的——我們遇到了一個限制,那就是在我們平台上快速擴張和大規模擴張的客戶群中,我們無法滿足他們的需求。因此,我們花了幾個季度的時間才真正了解他們的需求——為了實現我今天早上非常自豪地與大家分享的 0% 客戶流失率,我們做了很多工作。
So that took a lot of engineering effort. And I'm super proud of my team and it's a lot of very complex technology work all the way from advanced networking to fortifying our storage to inventing new things in our database offering and so forth. So that took a tremendous amount of heavy lifting and that job is not done yet as we get to -- we started with 100,000 customers, then we focused on 500,000 customers. Now we are focused on million-dollar customers. And who knows in the next couple of years, we'll be talking about $5 million and $10 million customers.
所以這需要大量的工程投入。我為我的團隊感到無比自豪,他們做了很多非常複雜的技術工作,從先進的網路技術到加強存儲,再到資料庫產品中的創新等等。所以這需要付出巨大的努力,而且這項工作還沒有完成,因為我們最初只有 10 萬客戶,後來我們專注於 50 萬客戶。現在我們專注於百萬美元等級的客戶。誰知道呢,未來幾年,我們或許就會談到年收入 500 萬美元甚至 1,000 萬美元的客戶了。
So that bar raising is an ongoing endeavor for us. And the more fun side of things is literally participating from the starting point with the AI native ecosystem. So we are learning as they are learning, and we are inventing alongside them, and that is a great luxury to have because we feel like we can ride their growth curve as their needs increase, and they're learning the right way to do this from a workload perspective. We are just trying to keep pace, and they're super appreciative of us inventing on their behalf to make their lives easier so that they can focus on their domain and invent new things for their customers. So we'll share a lot more of this on April 28, but that's how I would answer your question, Patrick.
所以,不斷提高標準是我們持續進行的工作。更有趣的是,你可以從一開始就參與人工智慧原生生態系統。所以,我們是在和他們一起學習,和他們一起創新,這是一種莫大的榮幸,因為我們感覺我們可以順著他們的成長曲線一同成長,隨著他們需求的增加,他們也在學習如何從工作量的角度正確地完成這項工作。我們只是努力跟上步伐,他們非常感謝我們代表他們進行發明創造,讓他們的工作更輕鬆,這樣他們就可以專注於自己的領域,為他們的客戶創造新產品。所以我們將在4月28日分享更多內容,但這就是我對你問題的回答,派崔克。
Operator
Operator
Mike Cikos, Needham & Company.
Mike Cikos,Needham & Company。
Michael Cikos - Equity Analyst
Michael Cikos - Equity Analyst
Congrats on the strong growth guardrails you're providing us. Matt, if I could just come back, and I know that the free cash flow topic has come up a couple of times here, but you can see as well as anybody, just how sensitive the investors are in this market to the AI CapEx investments that are required or different financing vehicles that are out there.
恭喜你們為我們提供了強而有力的成長保障措施。馬特,如果我能回到剛才的話題,我知道自由現金流這個話題在這裡已經被提及過幾次了,但你和任何人都能看出,這個市場的投資者對所需的 AI 資本支出投資或現有的不同融資工具有多麼敏感。
Just to be clear, when we look at the calendar '26 versus the calendar '27 guide, the delta from unlevered free cash flow to just free cash flow -- that 3-point delta -- is expected to widen to about 10 points in calendar '27. If we take that one step further -- and I know that your guidance, or those guardrails, for '27 currently doesn't contemplate additional capacity coming online -- it seems fair that we should be assuming more capacity.
需要明確的是,當我們比較 2026 年日曆年和 2027 年日曆年指導值時,如果不考慮槓桿因素,僅考慮自由現金流或僅考慮自由現金流指導值,預計 3 個百分點的差距將在 2027 年擴大到約 10 個百分點。如果我們更進一步,我知道你們的指導意見——或者說目前針對 2027 年的那些指導方針——並沒有考慮到新增產能上線的情況。但假設產能較高似乎是合理的。
And if that's the case, would those different -- would that delta between the unlevered and the levered free cash flow margin widen further from there? Is that fair?
如果情況真是如此,那麼無槓桿自由現金流利潤率和有槓桿自由現金流利潤率之間的差距是否會進一步擴大?這樣公平嗎?
Matt Steinfort - Chief Financial Officer
Matt Steinfort - Chief Financial Officer
The way I think you got to think about it is, again, if you're looking at the levered free cash flow, it's got other stuff in it besides the equipment leases. It's got TLA interest. It's got other things. If you look at the -- as Fish was saying, if you look at the other cash, there's mandatory prepayments of the term loan A. So you got to be real careful about what you're using for what purpose, right?
我認為你應該這樣想:如果你查看槓桿自由現金流,其中除了設備租賃之外還有其他因素。它引起了TLA的興趣。它還有其他東西。正如菲什所說,如果你看看其他現金,你會發現其中有定期貸款 A 的強制提前還款。所以你必須非常謹慎地考慮你把錢用在了什麼用途上,對吧?
So if you said, hey, what's the steady state cash flow generation capability of this business. Again, because we lease equipment, we don't have an upfront capital requirement that makes it super lumpy. We can make that smoother and we can grow.
所以如果你問,嘿,這家企業的穩定現金流產生能力如何。再說一遍,因為我們是租賃設備,所以我們沒有前期資本投入,因此不會出現資金波動過大的情況。我們可以讓這個過程更順利,我們也能成長。
However, when you're growing a business even with that model and you're adding data center capacity, you have a couple of months where you're actually taking data center lease expense and you haven't generated any revenue, when you lease gear. Unlike if you buy gear -- you put it in your warehouse, and you actually don't expense it until you actually deploy it. When you lease gear, you start that lease expense as soon as it's shipped.
然而,即使採用這種模式,當你發展業務並增加資料中心容量時,你也會有幾個月的時間實際上承擔資料中心租賃費用,而租賃年度你還沒有產生任何收入。這與購買設備不同,購買設備後,將其放入倉庫,直到實際部署時才產生費用。租賃設備時,租賃費用從設備出貨之日起就開始計算。
So you have front-loaded costs that the revenue doesn't catch up with right away. But because you didn't have a big giant slug of capital, as soon as revenue starts generating, you're immediately generating cash and you're improving your margins with utilization.
所以前期投入的成本無法立即彌補收入的增加。但因為你沒有一大筆巨額資金,所以一旦開始產生收入,你就能立即產生現金流,並透過提高利用率來改善利潤率。
So, the steady state -- that's why we've been very crisp about what's included in the numbers. It's to give you a sense of what the margins look like on a steady-state basis. If we just continually assume, well, we're going to add incremental capacity -- I can't tell you how much incremental capacity we're going to have, because we haven't contracted it, and we haven't committed anything to incremental capacity.
所以,就像你說的,穩定狀態,這就是為什麼我們對數字中包含的內容一直非常明確的原因。這是為了讓您對穩定狀態下的利潤率有一個大致的了解。如果我們只是不斷地假設,我們會增加增量產能,但我無法告訴你我們會有多少增量產能,因為我們還沒有簽訂合同,也沒有對增量產能做出任何承諾。
So what we're showing is when we add 31 megawatts as an example, and you roll that forward a year, you have incredibly strong cash flow characteristics to that. And there's going to be a short-term impact on gross margins and net income because of the timing thing I described, but that works itself through relatively quickly.
所以,我們展示的是,以 31 兆瓦為例,如果將這一數字向前推算一年,就會產生非常強勁的現金流。由於我剛才描述的時間因素,短期內會對毛利率和淨收入產生影響,但這會相對迅速地緩解。
And so you would expect that as we saw other opportunities to accelerate our business with similar economics, that we would make similarly good decisions and that engine will keep going. And so it's -- I view it in a very different way than what you're describing. I view it as we're going to commit to more capacity.
因此,當我們看到其他以類似經濟效益加速業務發展的機會時,我們理應做出同樣明智的決定,而這股引擎也將繼續運作下去。所以——我的看法和你描述的非常不同。我認為我們將致力於擴大產能。
It's because we have more growth opportunities and the returns are incredibly compelling, and we're doing it in a way where we match the revenue and the costs, and we're not getting out over our skis and making massive commitments chasing the data center and GPU arms race. We're doing it methodically. We're doing it where we have an advantage, where we earn a good return and we're able to do it well. Again, taking 11%, 13% revenue growth to 30% while still maintaining really good margins. So we're really excited about the potential we have and the economics that we're delivering.
這是因為我們有更多的成長機會,而且回報非常誘人,我們以一種能夠平衡收入和成本的方式來實現這一點,我們不會脫離自身能力範圍,也不會為了追逐資料中心和GPU軍備競賽而做出巨大的承諾。我們正在有條不紊地進行。我們選擇在我們具有優勢、能夠獲得良好回報且有能力做到的地方去做這件事。再次強調,營收分別成長了 13%、11% 和 13%,達到 3,000 萬美元,同時也維持了非常好的利潤率。因此,我們對我們所擁有的潛力以及我們所取得的經濟效益感到非常興奮。
Michael Cikos - Equity Analyst
Michael Cikos - Equity Analyst
Maybe for a quick follow-up here. Understood on the accelerating growth you guys are looking at throughout calendar '26, just based on the megawatts coming online. One thing I wanted to ask -- I'm sure that you guys have your own models as you're looking at the AI customers ramping to drive that 25%-ish growth exiting calendar '26. Can you provide any additional color on what you're assuming in terms of ARR directly from those AI customers, if I'm thinking about the $120 million that we see today exiting '25?
或許這裡需要快速跟進。我明白你們所說的2026年全年加速成長的趨勢,這主要基於新增併網發電量。我還有一件事想問,我相信你們在觀察人工智慧客戶的成長情況時肯定有自己的模型,但是為了在 2026 年結束前實現 25% 左右的成長,你們是如何做到的?如果您指的是我們今天看到的 2025 年底的 1.2 億美元,您能否就您假設的直接來自 AI 客戶的 ARR 提供一些補充資訊?
Matt Steinfort - Chief Financial Officer
Matt Steinfort - Chief Financial Officer
The only thing I would say is what we said: AI customer ARR in Q4 was $120 million, growing at 150%. We have more demand than we have supply. We're bringing on supply. You should expect that it doesn't slow down.
我唯一想補充的是,正如我們之前所說,第四季度人工智慧客戶的年度經常性收入 (ARR) 為 1.2 億美元,成長了 150%。我們的需求大於供給。我們正在增加供應。你應該預料到它不會減速。
Operator
Operator
Mark Zhang, Citi.
馬克·張,花旗集團。
Mark Zhang - Analyst
Mark Zhang - Analyst
Just given the strong demand environment, should we see more capacity commitments coming, I guess, like you announced today? And if that's not the case, is there enough incremental capacity or megawatt capacity in your current footprint to support continued growth? Just any insights there would be appreciated.
鑑於目前強勁的需求環境,我想我們是否會看到更多關於產能的消息,就像您今天宣布的那樣?如果情況並非如此,那麼在你目前的業務範圍內是否有足夠的新增產能或兆瓦級產能來支援持續成長呢?任何見解都將不勝感激。
Matt Steinfort - Chief Financial Officer
Matt Steinfort - Chief Financial Officer
Sure. So Mark, as we said, there's enough growth potential in the committed capacity to get us to 30% growth in 2027. Clearly, we're very cognizant of the data center market and very active in terms of the evaluation of that. We haven't made any commitments at this juncture to share with the market.
當然。所以馬克,正如我們所說,已承諾的產能是足夠的。現有產能具有足夠的成長潛力,足以讓我們在 2027 年達到 30% 的成長。顯然,我們非常了解資料中心市場,並且在評估該市場方面非常積極。目前我們還沒有做出任何與市場分享的承諾。
And if we get to a point where we make a commitment, we'll certainly share that. But at this point, again, we thought it was incredibly important for people to understand how to digest capacity as we bring it on. And that's why we've guided to what we have based solely on the 31 megawatts we've already committed, and it gives you a good sense of how it ramps and what the economics are. And should we bring on incremental capacity, you'll have a good model to add on to the growth ramps that we've already articulated.
如果我們最終做出承諾,我們一定會分享的。但此時,我們再次認為,讓人們了解如何消化我們不斷增加的產能至關重要。因此,我們僅根據已承諾的 31 兆瓦發電量來製定指導方針,這可以讓你很好地了解發電量的增加情況以及經濟效益。如果我們降低增量產能,你們就能得到一個很好的模型,可以加入我們已經闡明的成長階梯。
Mark Zhang - Analyst
Mark Zhang - Analyst
Okay. Great. And then maybe related to that, is there a sense of utilization of your current estate? We know the current capacity -- could you maybe give any sense of the contracted capacity that you have on the books?
好的。偉大的。那麼,或許與此相關的是,您能否-對您目前的資產利用情況有所了解,例如,考慮到我們目前的產能,我們能否了解您帳面上已簽訂的合約產能?
Matt Steinfort - Chief Financial Officer
Matt Steinfort - Chief Financial Officer
Yes. So from a contracted capacity standpoint, again, if you're talking about data centers, we've got 31 megawatts that we're adding to our roughly, call it, 43 or 44, which will put us at just about 75 megawatts when we're done. So we're sitting at, call it, 43, and we're adding 6 that will come online and start generating revenue in the second quarter, and the balance of the incremental 31, which is about 25 megawatts, will come on and start ramping revenue in the second half.
是的。所以從合約容量的角度來看,如果你說的是資料中心,我們將增加 31 兆瓦,加上我們大約 43 或 44 兆瓦的容量,完工後我們的容量將達到 75 兆瓦左右。所以,我們目前的總裝置容量是 43 兆瓦,我們將在第二季度新增 6 兆瓦,這些新增裝置容量將投入使用並開始產生收入,而剩餘的約 25 兆瓦將在下半年投入使用並開始增加收入。
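The capacity figures Matt cites can be sanity-checked directly (roughly 43 MW in service, 31 MW committed, of which 6 MW comes online in Q2 and the balance in the second half):

```python
# Capacity arithmetic from the call: ~43 MW in service today, 31 MW of
# committed additions, 6 MW of that online in Q2, the balance in H2.
current_mw = 43
committed_mw = 31
q2_mw = 6

total_mw = current_mw + committed_mw  # "just about 75 megawatts when we're done"
h2_mw = committed_mw - q2_mw          # balance of the incremental 31
print(total_mw, h2_mw)                # prints 74 25
```

The 31 − 6 = 25 MW remainder is why the "about $25 million" in some transcriptions of this passage reads as a slip for 25 megawatts.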
And whether we are at full utilization is a function of whether we decide to fill them all with GPUs right away or we do it over time, because we'd like to stagger the generations of GPUs. We don't like to go all in on one generation of GPUs, but we'll be at a very healthy utilization at some point in 2027, which is enabling us to get to that 30% growth.
我們預計能夠達到——是否能充分利用,取決於我們是決定立即用 GP 全部填滿它們,還是隨著時間的推移逐步填滿,因為我們希望淘汰一代又一代的 GPU。我們不喜歡把所有資源都投入到某一代GPU上,但到2027年的某個時候,我們將達到非常健康的利用率,這將使我們能夠實現30%的成長。
Operator
Operator
At this time, we have no further questions. That concludes our Q&A session and today's conference call. We would like to thank you for your participation. You may now disconnect your lines. Have a pleasant day.
目前我們沒有其他問題了。我們的問答環節和今天的電話會議到此結束。感謝您的參與。現在您可以斷開線路了。祝您今天愉快。