Operator
Good afternoon.
My name is Jason, and I will be your conference operator today.
At this time, I would like to welcome everyone to NVIDIA's third quarter financial results conference call.
(Operator Instructions) Simona Jankowski, you may begin your conference.
Simona Jankowski - VP of IR
Thank you.
Good afternoon, everyone, and welcome to NVIDIA's conference call for the third quarter of fiscal 2021.
With me on the call today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer.
I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website.
The webcast will be available for replay until the conference call to discuss our financial results for the fourth quarter of fiscal 2021.
The content of today's call is NVIDIA's property.
It can't be reproduced or transcribed without our prior written consent.
During this call, we may make forward-looking statements based on current expectations.
These are subject to a number of significant risks and uncertainties, and our actual results may differ materially.
For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent forms 10-K and 10-Q and the reports that we may file on Form 8-K with the Securities and Exchange Commission.
All our statements are made as of today, November 18, 2020, based on information currently available to us.
Except as required by law, we assume no obligation to update any such statements.
During this call, we will discuss non-GAAP financial measures.
You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website.
With that, let me turn the call over to Colette.
Colette M. Kress - Executive VP & CFO
Thank you, Simona.
Q3 was another exceptional quarter with record revenues of $4.73 billion, up 57% year-on-year, up 22% sequentially and well above our outlook.
Our new NVIDIA Ampere GPU architecture is ramping with excellent demand across our major market platforms.
Q3 was also a landmark quarter both for us and the industry as a whole as we announced plans to acquire Arm from SoftBank for $40 billion.
We are incredibly excited about the combined company's opportunities, and we are working through the regulatory approval process.
For today, we will focus our remarks on our quarterly performance.
Starting with Gaming.
Revenue was a record $2.27 billion, up 37% year-on-year, up 37% sequentially and ahead of our high expectations.
Driving strong growth was our new NVIDIA Ampere architecture-based GeForce RTX 30 Series of gaming GPUs.
The GeForce RTX 3070, 3080 and 3090 GPUs offer up to 2x the performance and 2x the power efficiency over the previous Turing-based generation.
Our second-generation NVIDIA RTX combines ray tracing and AI to deliver the greatest ever generational leap in performance.
First announced on September 1 and ranging in price from $499 to $1,499, these GPUs have generated amazing reviews and overwhelming demand.
PC World called them "staggeringly powerful" while Newegg cited "more traffic than Black Friday." Many of our retail and Etail partners sold out instantly.
The RTX 30 series drove our biggest ever launch.
While we had anticipated strong demand, it exceeded even our bullish expectations.
Given industry-wide capacity constraints and long cycle times, it may take a few more months for product availability to catch up with demand.
In addition to the NVIDIA Ampere GPU architecture, we announced powerful new tools for gamers as well as for tens of millions of live streamers, broadcasters, eSports professionals, artists and creators.
NVIDIA Reflex is a new technology that improves reaction time in games, reducing system latency by up to 50%.
NVIDIA Reflex is being integrated into popular eSports games such as Apex Legends, Call of Duty: Warzone, Fortnite and Valorant.
NVIDIA Broadcast is a universal plug-in for videoconferencing and live streaming applications that enhances the quality of microphones, speakers and webcams with NVIDIA AI effects such as audio noise removal, virtual background effects and webcam auto frame.
With it, remote workers and live streamers can turn any room into a broadcast studio.
Blockbuster games continue to adopt NVIDIA's RTX ray tracing and AI technology.
Epic Games announced that Fortnite, which has more than 350 million players worldwide, is adding NVIDIA RTX real-time ray tracing, NVIDIA DLSS AI super-resolution and NVIDIA Reflex, making the game more beautiful and even more responsive.
Other major new titles featuring RTX this holiday season include Watch Dogs: Legion, Call of Duty: Black Ops Cold War and the much anticipated Cyberpunk 2077.
Gaming laptop demand was also strong with double-digit year-on-year growth for the 11th quarter in a row.
NVIDIA GeForce laptops support the most demanding applications for creators and designers, while doubling as a powerful gaming rig by night.
We also had record gaming console revenue on strong demand for the Nintendo Switch.
And we continue to grow our cloud gaming service, GeForce NOW, which has doubled in the past 7 months to reach over 5 million registered users.
GeForce NOW is unique as an open platform that connects to popular game stores including Steam, Epic Games and Ubisoft Connect, allowing gamers access to the titles they already own.
750 games are currently available on GFN, the most of any cloud gaming platform, including 75 free-to-play games, with more games added every Thursday.
GFN supports many popular clients including PCs, Macs and Chromebooks.
Stay tuned for more devices to come in the near future.
In addition, GFN's reach continues to expand through our telco partners in a growing list of countries including Japan, Korea, Taiwan, Russia and Saudi Arabia.
We are also providing technology that enables the cloud gaming services to an expanding number of partners.
Following our earlier announcement with Tencent, Amazon and Facebook are beginning to offer cloud gaming services powered by NVIDIA.
Moving to Pro Vis.
Q3 revenue was $236 million, down 27% year-on-year and up 16% sequentially, ahead of our expectations.
Sequential growth was driven by strength in notebooks, which posted record revenue, boosted by work-from-home initiatives and the shift to thin and light mobile workstations.
This was partially offset by a decline in desktop workstations, which continued to be impacted by the pandemic and drove the year-on-year decline.
From an industry demand perspective, stronger verticals include health care, public sector, higher education and research, and financial services, where we continue to win new business in a number of areas.
In health care, we added Medtronic for visual surgical applications and Philips for medical imaging.
In technology and media and entertainment, we gained wins for design, rendering and broadcast applications.
During the quarter, we announced that Omniverse, the world's first 3D collaboration and simulation platform, has entered open beta.
Omniverse enables the tens of millions of designers, architects and creators to collaborate in real time, on-premises or remotely.
Fusing the virtual and physical world, Omniverse brings together NVIDIA breakthroughs in graphics, simulation and AI.
It will help enterprises address evolving requirements as workforces become increasingly distributed.
Initial market response from this transformative platform has been phenomenal.
Over 400 individual creators and developers in diverse industries have been evaluating Omniverse, with early adopters including Ericsson, BMW, Foster + Partners and Lucasfilm.
The pandemic is accelerating development of AR, VR and mixed reality technologies, which will have a profound impact on how we work and play.
For example, our work with NASCAR to enable a variety of AR and VR services at the edge is revolutionizing the racing experience for millions of fans across the globe.
With our industry-leading real-time ray tracing graphics, AI and simulation hardware and software stacks, NVIDIA is in a unique position to enable the future of blending the physical and virtual worlds.
Moving to Automotive.
Q3 revenue was $125 million, down 23% year-on-year and up 13% sequentially.
Sequential growth was driven by a recovery in global automotive production volumes as well as continued growth in AI cockpit revenue.
The year-on-year decline was due to the expected ramp down of legacy infotainment revenue.
In September, Mercedes-Benz debuted its redesigned S-Class sedan featuring an all-new NVIDIA-powered MBUX AI cockpit system with an augmented reality heads-up display, AI voice assistant and rich interactive graphics to enable every passenger in the vehicle to enjoy personalized intelligent features.
Also in September, Li Auto, a leading electric car brand in China, announced that it will develop its next generation of vehicles using the software-defined NVIDIA Drive AGX Orin platform.
Orin delivers nearly 7x the performance and 3x the energy efficiency of our previous-generation SoC, making it uniquely capable of powering next-generation autonomous electric vehicles.
We have excellent traction with EV start-ups.
Finally, last week, NVIDIA and Hyundai Motor Group announced that the automaker's entire lineup of Hyundai, Kia and Genesis models will come standard with NVIDIA DRIVE in-vehicle infotainment systems starting in 2022.
This feature-rich software-defined computing platform will allow vehicles to be perpetually upgraded with the latest AI cockpit features.
Now moving to Data Center.
Revenue was a record $1.9 billion, up 162% year-over-year and up 8% sequentially.
Driving growth was the strong ramp of our A100-based platforms, continued growth with Mellanox and record T4 shipments for inference.
Let me give you a little bit of color on each.
Our new NVIDIA Ampere architecture gained further adoption by cloud and hyperscale customers and started ramping into vertical industries.
Over the past weeks, Amazon Web Services, Oracle Cloud Infrastructure and Alibaba Cloud announced general availability of the A100 following Google Cloud Platform and Microsoft Azure.
A100 adoption by vertical industries drove strong growth as we began shipments to server OEM partners, whose broad enterprise channels reach a large number of end customers.
We also ramped the DGX A100 server and began shipping NVIDIA DGX SuperPOD, the first turnkey AI infrastructure.
These range from 20 to 140 DGX A100 systems interconnected with Mellanox's HDR InfiniBand networking and enable customers to install incredibly powerful AI supercomputers in just a few weeks' time.
In fact, we have announced plans to build an 80-node DGX SuperPOD with 400 petaflops of AI performance called Cambridge-1, which will be the U.K.'s fastest AI supercomputer and will be used by NVIDIA researchers for collaborative research with the U.K.'s AI and health care community across academia, industry and start-ups.
It joins other systems in NVIDIA's complex of AI supercomputers powering our R&D in autonomous vehicles, conversational AI, robotics, graphics, HPC and other domains.
This includes Selene, now the world's fifth fastest supercomputer and fastest commercial supercomputer, and the new NVIDIA DGX SuperPOD, which ranks first on the Green500 list of the world's most energy-efficient supercomputers.
A great example of the tremendous opportunity for AI in health care is our new partnership with GSK, applying computational methods to the drug and vaccine discovery process.
GSK's London-based AI hub will utilize biomedical data, AI methods and advanced computing platforms to unlock genetic and clinical data with increased precision and scale.
In addition to this investment in NVIDIA's DGX A100 system, GSK will have access to NVIDIA's Cambridge-1, the NVIDIA Clara Discovery software and NVIDIA scientists.
In Q3, the A100 swept the industry standard MLPerf benchmark for AI inference performance following our sweep in the prior quarter's MLPerf benchmark for AI training.
Notably, our performance lead in AI inference actually extended compared with last year's benchmark.
For example, in the ResNet-50 test for image recognition, our A100 GPU beat CPU-only systems by 30x this year versus 6x last year.
Additionally, A100 outperformed CPUs by up to 237x in the newly added recommender test, which represents some of the most complex and widely used AI models on the Internet.
Our winning performance in AI inference is translating to continued strong revenue growth.
Alongside the continued ramp of the A100, T4 sales set a record as NVIDIA AI inference adoption is in full throttle.
We estimate that NVIDIA's installed GPU capacity for inference across the 7 largest public clouds now exceeds that of the aggregate CPU capacity in the cloud, a testament to the tremendous performance and TCO advantage of our GPUs.
Hundreds of companies now operate AI-enabled services on NVIDIA's inference platform, including the A100 or T4 GPU and our Triton Inference serving software.
For example, Tencent uses NVIDIA AI inference to recommend videos, music, news and apps, supporting billions of queries per day.
Microsoft uses NVIDIA AI inference for grammar correction in Microsoft Office, supporting 0.5 trillion queries a year.
And American Express uses it for real-time fraud detection.
We also saw tremendous traction in supercomputing.
We announced that NVIDIA technology, including Ampere architecture GPUs and HDR InfiniBand networking, will power 5 systems awarded by EuroHPC, a European initiative to build exascale supercomputers.
This includes Cineca, a university consortium in Italy and one of the world's most important supercomputing centers, which will use NVIDIA's accelerated computing platform to build the world's fastest AI supercomputer.
Cineca's supercomputer, named Leonardo, advances the age of exascale AI, delivering 10 exaflops of AI performance to enable converged AI and high-performance computing application use cases.
It is built with nearly 14,000 NVIDIA Ampere architecture-based GPUs and Mellanox HDR 200-gigabit per second InfiniBand networking.
And the just-released TOP500 list of supercomputers shows that NVIDIA GPUs or networking power nearly 70% of the systems on the list, including 8 of the top 10 supercomputers.
Mellanox had another record quarter with double-digit sequential growth well ahead of our expectations, contributing 13% of overall company revenue.
The upside reflected sales to a China OEM that will not recur in Q4.
As a result, we expect a meaningful sequential revenue decline for Mellanox in Q4, though still growing 30% from last year.
Mellanox reached record revenue in both InfiniBand and Ethernet driven by cloud, enterprise and supercomputing customers.
Strong demand for high-performance interconnects, where Mellanox is a leader, is being fueled by AI and increasingly complex applications, which demand faster, smarter and more scalable networks.
As the data center becomes the new unit of computing in the age of AI, Mellanox networking is foundational to modern scale-out architectures.
At GTC in October, we unveiled the BlueField-2 DPU, or data processing unit, a new kind of processor which offloads critical networking, storage and security tasks from the CPU.
A single BlueField-2 DPU can deliver the same data center services that consume up to 125 CPU cores.
This frees up valuable CPU cores to run a wide range of other enterprise applications.
In addition, it enables zero trust security features to prevent data breaches and cyberattacks and accelerates overall performance.
VMware announced that it will offload, accelerate and isolate its industry leading ESXi Hypervisor with NVIDIA's BlueField-2 DPU, boosting vSphere and data center performance and efficiency.
We also unveiled our 3-year DPU road map, unifying Mellanox's leading networking capabilities with NVIDIA's GPUs, along with NVIDIA DOCA, the data center infrastructure-on-a-chip architecture, a software development kit for building DPU-accelerated applications.
We believe that over time, DPUs will ship on millions of servers, unlocking a $10 billion total addressable market.
BlueField-2 is sampling now with major hyperscale customers and will be integrated into the enterprise server offerings of major OEMs.
This was a busy period for product launches.
Earlier this week at Supercomputing '20, we announced the new double-capacity A100 80-gigabyte GPUs and DGX systems for organizations to build, train and deploy massive AI models.
We also announced the new DGX Station A100, a powerful workgroup server with 4 A100 GPUs and a massive 320-gigabyte GPU memory for data scientists and AI researchers working in offices, research facilities, labs or at home.
All these additions to the NVIDIA Ampere architecture family of products will be available early next year.
At SC20, we also announced the next-generation NVIDIA Mellanox 400-gigabit per second InfiniBand architecture, giving AI developers and scientific researchers the fastest available networking performance.
This doubles data throughput and adds new in-network computing engines to provide additional acceleration.
Solutions based on this new architecture are expected to sample in the second quarter of calendar 2021.
Moving to the rest of the P&L.
Q3 GAAP gross margin was 62.6%, and non-GAAP gross margin was 65.5%.
GAAP gross margin declined year-on-year, primarily due to charges related to the Mellanox acquisition partially offset by product mix.
The sequential increase in GAAP gross margin was driven by the absence of the nonrecurring inventory step-up expense related to the Mellanox acquisition.
Non-GAAP gross margins increased by 140 basis points year-on-year, reflecting a shift in product mix with higher Data Center sales, including the contribution from Mellanox.
Non-GAAP gross margin was down 50 basis points sequentially, in line with our expectations, driven by product mix.
Q3 GAAP operating expenses were $1.56 billion, and non-GAAP operating expenses were $1.1 billion, up 6% and 42% from a year ago, respectively.
Q3 GAAP EPS was $2.12, up 46% from a year earlier.
And non-GAAP EPS was $2.91, up 63% from a year ago.
Q3 cash flow from operations was $1.28 billion.
With that, let me turn to the outlook for the fourth quarter of fiscal 2021.
As a reminder, Q4 includes a 14th week, which we expect to add incrementally to both revenue and operating expenses.
We expect Gaming to be up sequentially in what is typically a seasonally down quarter as we continue to ramp up our new RTX 30 Series products.
We expect Data Center to be down slightly versus Q3.
Within that, we expect computing products to grow in the mid-single digits sequentially, more than offset by a sequential decline in Mellanox.
We expect continued sequential growth in Auto and Pro Vis though not yet returning to year-on-year growth.
And we expect a seasonal decline in OEM.
Revenue is expected to be $4.8 billion, plus or minus 2%.
GAAP and non-GAAP gross margins are expected to be 62.8% and 65.5%, respectively, plus or minus 50 basis points.
GAAP and non-GAAP operating expenses are expected to be approximately $1.64 billion and $1.18 billion, respectively.
GAAP and non-GAAP other income and expenses are both expected to be an expense of approximately $55 million.
GAAP and non-GAAP tax rates are both expected to be 8%, plus or minus 1%, excluding discrete items.
Capital expenditures are expected to be approximately $300 million to $325 million.
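As a rough sanity check (not part of the company's stated guidance), the guided midpoints above imply non-GAAP operating income of roughly $1.96 billion, about a 41% operating margin. A minimal sketch of that arithmetic, using the midpoint figures and ignoring share count and discrete tax items:

```python
# Implied Q4 FY2021 non-GAAP profitability from the guided midpoints.
# All figures in billions of dollars; midpoints assumed where a range was given.
revenue = 4.80            # guided revenue, plus or minus 2%
gross_margin = 0.655      # non-GAAP gross margin midpoint
opex = 1.18               # non-GAAP operating expenses
other_expense = 0.055     # other income and expense (guided as an expense)
tax_rate = 0.08           # non-GAAP tax rate, excluding discrete items

gross_profit = revenue * gross_margin        # 4.80 * 0.655
operating_income = gross_profit - opex       # gross profit less opex
pretax_income = operating_income - other_expense
net_income = pretax_income * (1 - tax_rate)  # after-tax at the guided rate

print(f"gross profit:     ${gross_profit:.2f}B")
print(f"operating income: ${operating_income:.2f}B")
print(f"net income:       ${net_income:.2f}B")
```

On these assumptions, gross profit works out to about $3.14 billion and net income to about $1.76 billion; actual results would also reflect the revenue and margin ranges, discrete tax items and share count.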
Further financial details are included in the CFO commentary and other information available on our IR website.
In closing, let me highlight upcoming events for the financial community.
We'll be virtually attending the Crédit Suisse Technology Conference on November 30; Wells Fargo TMT Summit, December 1; and the UBS TMT Conference on December 7. Our earnings call to discuss the fourth quarter and full year results is scheduled for Wednesday, February 24.
We will now open the call for questions.
Operator, would you please poll for questions?
Operator
(Operator Instructions) Your first question comes from the line of John Pitzer from Crédit Suisse.
John William Pitzer - MD, Global Technology Strategist and Global Technology Sector Head
Sorry, can you hear me?
Jen-Hsun Huang - Co-Founder, CEO, President & Director
Yes.
John William Pitzer - MD, Global Technology Strategist and Global Technology Sector Head
Congratulations on the solid results.
Just Colette, going back to your commentary around Mellanox, it seems like you're guiding the January quarter to about $500 million, which means the core data center business is still growing nicely, call it, 6%, 7% sequentially.
I'm just kind of curious when you look at the core data center business, I know there's not a direct correlation to server business, but we're clearly going through a cloud digestion in server and core vertical markets enterprise for servers are weak.
When you look at your core data center business, do you feel as though that's having an impact, and is this sort of the digestion that you saw kind of in fiscal '19 into '20, but with you still growing significantly year-over-year?
Or how would you characterize the macro backdrop?
Colette M. Kress - Executive VP & CFO
Sure.
Let me make sure we clarify for those also on the call.
Yes, we expect our Data Center revenue in total to be down slightly quarter-over-quarter.
The computing products, NVIDIA computing products, is expected to grow in the mid-single digits quarter-over-quarter as we continue the NVIDIA AI adoption and particularly as A100 continues to ramp.
Our networking, our Mellanox networking, is expected to decline meaningfully quarter-over-quarter as sales to that China OEM will not recur in Q4, though we still expect growth of 30% or more year-over-year.
The timing of some of this business therefore shifted from Q4 to Q3.
But overall, H2 is quite strong.
So in referring to overall digestion, the hyperscale business remains extremely strong.
We expect hyperscales to grow quarter-over-quarter in computing products as A100 continues to ramp.
The A100 continues to gain adoption not only across those hyperscale customers, but again we're also receiving great momentum in inferencing with the A100 and the T4.
I'll turn it over here to Jensen to see if he has more that he would like to add.
Jen-Hsun Huang - Co-Founder, CEO, President & Director
Yes.
Colette captured it very well.
The only thing I would add is our inference initiative is really gaining great momentum.
Inference is one of the hardest computer science problems.
Compiling these gigantic neural network computational graphs onto a target device has proven to be really, really hard.
The models are diverse, ranging from vision to language to speech.
And there are so many different types of models being created.
The model sizes are doubling every couple of months.
Latency expectations are getting tighter all the time -- acceptable latency is decreasing all the time.
And so the pressure on inference is really great.
The technology pressure is really great.
And our leadership there is really pulling ahead.
We're in our seventh generation TensorRT.
We, over the course of the last couple of years, developed an inference server.
It's called Triton, and it has been adopted all over the place.
We have several hundred customers now using NVIDIA AI to deploy their AI services.
This is the early innings, and I think this is going to be our largest near-term growth opportunity.
So we're really firing on all cylinders there, between the A100s ramping in the cloud, A100s beginning to ramp in enterprise, and all of our inference initiatives are really doing great.
John William Pitzer - MD, Global Technology Strategist and Global Technology Sector Head
Jensen, maybe to follow on there, just on the vertical markets, clearly work from home and COVID this year kind of presented a headwind to new technology deployments on-prem.
I'm kind of curious, if we expect sort of an enterprise recovery in general next year, how do you think that will translate into your vertical market strategy?
And is there anything else above and beyond that you can do to help accelerate penetration of AI into that end market?
Jen-Hsun Huang - Co-Founder, CEO, President & Director
Yes.
John, that's a good point.
I mean the -- it's very clear that the inability to go to work is slowing down the adoption of new technology in some of the verticals.
Of course, we're seeing rapid adoption in certain verticals, like for example using AI in health care to rapidly discover new vaccines and early detection of outbreaks and robotic applications.
So in warehouses, digital retail and last mile delivery, we're seeing really, really great enthusiasm around adopting new AI and robotics technology.
But in some of the old -- some of the more traditional industries, new capabilities and new technologies are slower to deploy.
One of the areas that I'm really super excited about is the work that we're doing in the remote work and making it possible for people to collaborate remotely.
We have a platform called Omniverse.
It's in early beta.
The feedback from the marketplace has been really great.
And so I've got a lot more to report to you guys in the upcoming months around Omniverse.
And so -- but anyway, I think when the industries recover -- our fundamental purpose as a company is to solve the greatest challenges that impact the industries we serve, where ordinary computers can't.
And these challenges are -- serve some of the most important applications in the verticals that we address.
And they're not commodity applications.
They're really impactful, needle-moving applications.
So I have every confidence that when the industries recover, things will get designed.
Cars will be designed, and planes will be designed, and ships will be designed, and buildings will be designed.
And we're going to see a lot of design, and we're going to see a lot of simulation.
We're going to see a lot of robotics applications.
Operator
(Operator Instructions) Your next question comes from the line of C.J. Muse from Evercore.
Christopher James Muse - Senior MD, Head of Global Semiconductor Research & Senior Equity Research Analyst
You talked about in your prepared remarks limited availability of capacity and components.
You suggested perhaps a few months to catch up.
Curious if you can speak to the visibility that you have for both Gaming and Data Center into your April quarter.
Jen-Hsun Huang - Co-Founder, CEO, President & Director
Yes.
Colette, do you want me to take that real quick and maybe you can help me out?
Colette M. Kress - Executive VP & CFO
Yes.
Yes, absolutely.
Jen-Hsun Huang - Co-Founder, CEO, President & Director
So C.J., first off, we have a lot of visibility into the channel, as you know, especially for Gaming.
And we know how many weeks of inventory is in what parts of the channel.
We've been draining down the channel inventory for Turing for some time.
And meanwhile, we've also expected a very, very successful launch with Ampere.
And even with our bullish demand expectation and all of the Amperes that we built, which is one of the fastest ramps ever, the demand is still overwhelming.
And this, I guess in a lot of ways, is kind of expected.
The circumstances are -- it's been a decade since we've invented a new type of computer graphics.
10 years ago, we invented a programmable shader, and it set the industry on a course to create the type of images that we see today.
But it's very clear that the future is going to look something much, much more beautiful, and we invented NVIDIA RTX to do that.
And it has 2 capabilities, one based on ray tracing, and the other one's based on artificial intelligence image generation.
The combination of those 2 capabilities is creating images that people are pretty ecstatic about.
And at this point, it's defined the next-generation content.
And so when we -- it took us 10 years to invent it.
We launched it 2 years ago, and it took our second generation to really achieve the level of quality and performance that the industry really, really expects.
And now the demand is just overwhelming.
And so we're going to continue to ramp fast, and this is going to be one of our most successful ramps ever.
And it gives our installed base of some 200 million-plus GeForce gamers the best reason to upgrade in over a decade.
And so this is going to be a very large generation for us is my guess.
And then with respect to Data Center, we're ramping into A100.
A100 is our first generation of GPUs that does several things at the same time.
It's universal.
We position it as universal because it's able to do all of the applications that in the past we needed multiple GPUs to do.
It does training well.
It does inference incredibly well.
It does high-performance computing.
It does data analytics.
And so it's able -- the Ampere architecture is able to do all of this at the same time.
And so the utilization for data centers is -- and the utility is really, really fantastic.
And the reception has been great.
And so we're going to ramp into all of the world's clouds.
I think starting this quarter, we're now in every major cloud provider in the world, including Alibaba, Oracle, and of course the giants, the Amazons, the Azure and the Google Clouds.
And we're going to continue to ramp into that.
And then of course we're starting to ramp into enterprise, which in my estimation will long term still be the largest growth opportunity for us: turning every industry into an AI industry, turning every company into an AI company, augmenting them with AI and bringing the iPhone moment to all of the world's largest industries.
And so we're ramping into that, and we're seeing a great deal of enthusiasm.
Operator
Your next question comes from the line of Stacy Rasgon from Bernstein Research.
Stacy Aaron Rasgon - Senior Analyst
You said that the extra week was contributing incrementally to revenue and OpEx.
Can you give us some feeling for how much is contributing to revenue and OpEx in Q4?
And does that impact, at least on the revenue side, differ, say, between like Gaming and Data Center?
And then how should we think about it impacting seasonality into Q1 as that extra week rolls off?
Colette M. Kress - Executive VP & CFO
Sure.
Let me try this one, Jensen.
Yes, we've incorporated that 14th week into our guidance for both revenue and OpEx.
It will likely have an incrementally positive impact on revenue, although it is tough to quantify, okay?
Our outlook also reflects incremental OpEx for Q4 in primarily 2 different areas in terms of compensation and depreciation.
And given that our employees are such a material part of our OpEx, it will -- it can be close to 1/14 of the quarter.
Now when we look a little bit further, we should think about an incremental positive in both Gaming and Data Center from that extra week, as there hopefully will be extra supply, but likely not as much as 1/14 of the quarter's revenue, as enterprise demand is essentially project-based and gaming demand is tied to the number of gamers that might be shopping over the holiday.
So again, still very hard for us to determine at this time.
Normally, between Q4 and Q1, there is seasonality in Gaming, seasonality downward.
But we'll just have to see as we are still supply-constrained within this Q4 to see what that looks like.
From an OpEx standpoint, we'll probably expect our OpEx to be relatively flattish as we move from Q4 to Q1.
Operator
Your next question comes from the line of Vivek Arya from Bank of America.
Vivek Arya - Director
Congratulations on the strong growth.
Jensen, my question is on competition from internally designed products by some of your larger cloud customers, Amazon and Google and others.
We hear about competition from time to time, and I wanted to get your perspective.
Is this a manageable risk?
Is the right way to think that they are perhaps using more of your product in their public cloud, but they are moving to internal products for internal workloads?
Just how should we think about this risk going forward?
Jen-Hsun Huang - Co-Founder, CEO, President & Director
Thanks, Vivek.
Most of the cloud vendors, in fact I believe all of the cloud vendors, use the same infrastructures largely for their internal cloud and external cloud, or have the ability to or largely do.
And there's -- the competition, we find to be really good.
And the reason for that is this.
It just suggests that acceleration -- makes it very clear that acceleration is the right path forward for training and inference.
The vast majority of the world's training models are doubling in size every couple of months, and it's one of the reasons why our demand is so great.
The second is inference.
The vast majority of the world's inference is done on CPUs.
And nothing is better than the whole world recognizing that the best way forward is to do inference on accelerators.
And when that happens, our accelerator is the most versatile.
It's the highest performance.
We move the fastest.
Our rate of innovation is the fastest because we're also the most dedicated to it.
We're the most committed to it, and we have the largest team in the world dedicated to it.
Our stack is the most advanced, giving us the greatest versatility and performance.
And so we see spots of announcement here and there, but they're also our largest customers.
And as you know that we're ramping quite nicely at Google, we're ramping quite nicely at Amazon and Microsoft and Alibaba and Oracle and others.
And so I think the big takeaway is that -- and the great opportunity for us if you look at the vast amount of workload, AI workload in the world, the vast majority of it today is still on CPUs.
And it's very clear now this is going to be an accelerated workload, and we're the best accelerator in the world.
And this is going to be a really big growth opportunity for us in the near term.
In fact, we believe it's our largest growth opportunity in the near term, and we're in the early innings of it.
Operator
Your next question comes from the line of Harlan Sur from JPMorgan.
Harlan Sur - Senior Analyst
Great job on the quarterly execution.
The Mellanox networking connectivity business was up 80% year-over-year.
I think it was up about 13%, 14% sequentially.
And I know there was upside in October from one China customer.
But it did grow 70% year-over-year last quarter, and you're still expecting 30% year-over-year growth next quarter.
If I remember correctly, I think InfiniBand is about 40% of that business; ethernet cloud is about 60%.
Jensen, what are the big drivers, especially since we're in the midst of a cloud spending digestion cycle?
And I just saw that the team announced their next-gen 400-gig InfiniBand solution, which should drive another strong adoption cycle with your supercomputer customers.
When does this upgrade cycle start to fire?
Jen-Hsun Huang - Co-Founder, CEO, President & Director
Yes.
Let's see.
Our Data Center business consists of supercomputing centers, which is small, and high-performance computing, which is a much larger part, much larger than supercomputing.
And then hyperscale and enterprise, which -- about 50-50.
The -- of the Data Center business, the accelerated computing part is not very much associated with digestion and others.
It's much more associated with workloads and our new product cycles, the TCO that we bring and AI inference, the type of models that the cloud service providers are deploying, whether they're deploying new AI models based on deep learning, and how much of those workloads we've completed porting to our accelerators and readied for deployment.
And so those are the factors associated with accelerated computing.
It's really about the apps.
It's really about the workloads and really driven by AI.
On the other hand, the networking part of our business is more connected to CPU business because they're much more broad-based.
The networking part of our business is driven by this idea of new hyperscale data center architecture called disaggregation.
It's software disaggregation, not necessarily hardware disaggregation.
Software disaggregation, where this type of software called Kubernetes orchestrate micro services that are deployed across the data center.
So one service, one application, isn't monolithic running on one computer anymore.
It's distributed across multiple computers and multiple nodes, so that the hyperscale data centers can more easily scale up and scale out according to the workloads and according to the demand on the data center.
And so this disaggregation has caused the networking between the compute nodes to be of vital importance.
And because Mellanox is the lowest-latency, highest-performance, highest-bandwidth network that you can get, that the TCO benefit at the data center scale is really fantastic.
And so when they're building out data centers, Mellanox is going to be much more connected to that.
In the enterprise side of it, depending on new CPU cycles, it could affect them.
If a CPU cycle were to delay a little bit, it would affect them by a quarter.
But if it was a pull-in by a quarter, it would affect them by a pull-in of a quarter.
And so those are kind of the dynamics of it.
I think that the net-net of it is that it's a foregone conclusion at this point, that AI is going to be the future of the way software is written.
AI is the most powerful technology force of our time, and acceleration is the best path forward.
And so that's what drives our computing business.
And the networking business has everything to do with the way -- architecture of data centers, cloud data centers, which is architected with micro services now.
And that's what foundationally drives their -- our networking business demand.
And so we're really well positioned in these 2 fundamental dynamics because, as we know, AI is the future and cloud computing is the future.
Both of those dynamics are very favorable to us.
Operator
Your next question comes from the line of Timothy Arcuri from UBS.
Timothy Michael Arcuri - MD and Head of Semiconductors & Semiconductor Equipment
I wanted to ask a question that was asked before but in a different way.
If I look at the core business excluding Mellanox, the core data center business, it was up about 6% sequentially the past 2 quarters, and your guidance sort of implies up about that much again in January, which is certainly good under some cloud digestion, but of course you have Ampere still ramping as well, which should be a pretty good tailwind.
So there seems to be some offsetting factors.
So I guess I wonder if you feel like your core data center revenue is still being constrained right now by some market digestion and kind of how you sort of balance or handicap these 2 factors.
Jen-Hsun Huang - Co-Founder, CEO, President & Director
Our growth is -- in the near term is more affected by the cycle time of manufacturing and flexibility of supply.
We are in good shape -- all of our supply informs our guidance.
But we would appreciate shorter cycle times.
We would appreciate more agile supply chains.
But the world is constrained at the moment.
And so we just have to make the best of it.
But even in that condition, we've -- all of that is in our guidance, and we expect to grow.
Operator
Your next question comes from the line of Aaron Rakers from Wells Fargo.
Aaron Christopher Rakers - MD of IT Hardware & Networking Equipment and Senior Analyst
Congratulations on the quarter.
I wanted to go back to kind of the Mellanox question.
I know prior to the acquisition, Mellanox was growing maybe in the mid- to high 20% range.
These last 2 quarters, it's grown over 75%.
I guess the simple question is how do you think about the growth rate for Mellanox going forward?
And on that topic, we've started to hear you talk more about BlueField and data processing units.
I think in your commentary, you alluded to server OEM design wins incorporating these DPUs.
What are you looking at?
Or when should we think about the DPU business really starting to inflect and become a material driver for the business?
Jen-Hsun Huang - Co-Founder, CEO, President & Director
Long term, every computer in the world will be built like a data center, and every node of a data center is going to be a data center in itself.
And the reason for that is because we want the attack surface to be basically zero. And today, most data centers are only protected at the periphery.
But in the future, if you would like cloud computing to be the architecture for everything and every data center is multi-tenant, every data center is secure, then you're going to have to secure every single node.
And each one of those nodes are going to be a software-defined networking, software-defined storage, and it's going to have per application security.
And so the processing that is -- that it will need to offload the CPU is really quite significant.
In fact, we believe that somewhere between 20% and 40% of today's cloud data centers' capacity -- the throughput, the computational load -- is consumed running basically the infrastructure overhead.
And that's what the DPU is intended -- was designed to do.
We're going to offload that, number one.
And number two, we're going to make every single application secure.
And confidential computing, zero-trust computing, that will become a reality.
And so the importance is really quite tremendous.
And I believe therefore that every single server in the world will have a DPU inside someday just because we care so much about security and just because we care so much about throughput and TCO.
And it's really the most cost-effective way of building a data center.
And so I expect our DPU business to be quite large.
And so that's the reason why we're putting so much energy into it.
It's a programmable data center on a chip, think of it that way, a data center infrastructure on a chip.
It is the reason why we're working with VMware on taking the operating system in the data center, the software-defined operating system in the data center, putting it on the BlueField.
And so this is a very important initiative for us.
I'm very excited about it, as you can imagine.
Operator
Your next question comes from the line of Ambrish Srivastava from BMO Capital Markets.
Ambrish Srivastava - MD of Semiconductor Research & Senior Research Analyst
Colette, and I apologize if I missed it, but for Mellanox, do you expect it to get back to the growth trajectory on a sequential basis in the April quarter?
And I'm assuming that the shortfall in the current quarter is from a pull-in from Huawei.
Colette M. Kress - Executive VP & CFO
So our Q4 guidance for Mellanox, yes, is impacted by a sale to a China OEM for Mellanox that will not recur in Q4.
And as we look forward into Q1 of April, we're going to take this a quarter at a time and provide thoughts and guidance for that once we turn the corner to the new fiscal year.
Jen-Hsun Huang - Co-Founder, CEO, President & Director
At the highest level, Colette, I think the -- it's safe to say that high-speed networking is going to be one of the most important things in cloud data centers as we go forward.
And the vast majority of the world's data center is still built for the traditional hyper-converged architecture, which is all moving over to micro services-based disaggregate -- software-defined disaggregated architectures.
And that journey is still in its early days.
And so I fully expect future cloud data centers, all future data centers are going to be connected with high-speed networking inside.
They call it east-west traffic.
And all of the traffic will be secured.
And so imagine building firewalls into every single server.
And imagine every single transaction, every single transmission inside the data center to be high speed and fully encrypted.
And so pretty amazing amount of computation is going to have to be installed into future data centers.
But that's an accepted requirement now.
And I think our networking business, Mellanox, is in the early innings of growth.
Operator
Your final question today comes from the line of William Stein from Truist Securities.
William Stein - MD
You've given us some pieces of this puzzle, but I'm hoping maybe you can address directly the sort of SKU-by-SKU rollout of Ampere.
We know that we didn't have a ton of SKUs last quarter.
There were more in this quarter that you just announced.
Now you're doing sort of this refresh, it sounds like, with double the memory on the A100.
Is the T4 going to be refreshed?
And if so, when does that happen?
And are there other either systems or chips that are still waiting for the Ampere refresh that could potentially contribute to an extended cycle as we look at the next year?
Jen-Hsun Huang - Co-Founder, CEO, President & Director
Yes.
In terms of the total number of SKUs that we've ramped of Ampere, we're probably somewhere around 1/3 to 1/2 of the SKUs at this point, maybe a little bit less.
Yes, it's less.
The way that you could think through it, you could reverse engineer it is like this.
You know what our Gaming lineup looks like for desktops.
And so traditionally we try to have a new architecture in every single segment.
And we've not gone below $499 yet.
And so there's a very big part of the marketplace that we're still in the process of addressing.
And then the second thing is laptops.
None of those -- none of the Ampere architecture has launched for laptops.
And then there's workstations.
And you do the same thing with desktop workstations and laptop workstations.
And none of them -- none of those have gone out yet.
And then there's data center.
In our data center business for cloud, you've seen some of the early versions of it, A100.
But then there's cloud computing for graphics.
There's cloud gaming.
There's enterprise -- edge enterprise applications, enterprise data analytics applications.
And so there's a fair number of exciting new products we still have in front of us.
Operator
That concludes our Q&A for today.
I now turn the call back to Ms. Jankowski for closing remarks.
Simona Jankowski - VP of IR
Actually, that will be for Jensen.
Operator
My apologies.
Jen-Hsun Huang - Co-Founder, CEO, President & Director
Okay.
Thank you.
Thank you, Simona.
This was a terrific quarter.
NVIDIA is firing on all cylinders.
And the RTX has reinvented graphics and has made real-time ray tracing the standard of next-generation content, creating the best ever reason to upgrade for hundreds of millions of NVIDIA gamers.
AI, where software writes software that no humans can, is the most powerful technology force of our time and is impacting every industry.
NVIDIA AI again swept MLPerf training and now inference as well, extending our leadership in this important new way of doing computing.
NVIDIA AI's new Triton inference server, a platform that I will speak a lot more about in the future and a lot more frequently because it's important -- and our full stack optimized platform are gaining rapid adoption to operate many of the world's most popular AI-enhanced services, opening a major growth opportunity.
Data centers are the new unit of computing.
Someday, we believe there will be millions of autonomous data centers distributed all over the globe.
NVIDIA's BlueField DPU programmable data center on a chip and our rich software stack will help place AI data centers in factories, warehouses, 5G base stations and even on wheels.
And with our pending acquisition of ARM, the company that builds the most -- the world's most popular CPU, we will create the computing company for the age of AI, with computing extending from the cloud to trillions of devices.
Thank you for joining us today.
I wish all of you a happy holidays.
And please do stay safe, and I look forward to seeing you guys next time.
Operator
That concludes today's conference call.
You may now disconnect.