NVDA: More highlights from the Q2 call

“Demand remains strong for our DGX AI supercomputer, as organizations take on multiple systems to build out AI-enabled applications. Facebook disclosed a system incorporating 128 DGXs. We have shipped systems to more than 300 unique customers, with 1,000-plus in the pipeline.”

So 128 times $149K = $19M sold to Facebook (assuming no volume discount).

1,000 in the pipeline is $149M worth of revenue (assuming no discounting). For reference, total datacenter revenue in Q3 2017 was $240M. And datacenter products also include discrete V100 GPUs and older generations of product; datacenter customers will sometimes buy one-off GPUs rather than DGX-1 units.
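For what it’s worth, here is that arithmetic as a quick sketch (same assumptions as above: $149K list price, no discounting):

```python
# Back-of-envelope check on the DGX-1 figures above (a sketch, assuming the
# $149K list price and no volume discounts, as stated in the post).
DGX1_LIST_PRICE = 149_000  # USD per DGX-1

facebook_units = 128
pipeline_units = 1_000

facebook_order = facebook_units * DGX1_LIST_PRICE
pipeline_value = pipeline_units * DGX1_LIST_PRICE

print(f"Facebook order: ${facebook_order / 1e6:.1f}M")  # ~$19.1M
print(f"Pipeline value: ${pipeline_value / 1e6:.0f}M")  # $149M, vs. $240M total datacenter revenue in Q3 2017
```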

GPUs versus CPUs

Analyst asked: “And then the bigger question is, Jensen, it seems the data center market is bifurcating between your GPU approach on one side and ASICs on the other side. What are you doing to make sure that the balance stays in your favor as the market matures from here?”

Huang answered: "So first of all, Q2 was a transition quarter for our data center. I thought we did great. We almost tripled year over year, and we ramped Volta into volume production. And because Volta was so much better than our last generation processor – Volta is 100 times faster than Kepler, 100 times faster than Kepler just four years ago, and Kepler was already 10 times faster than CPUs. And so Volta was such a giant leap when we announced it in GTC right at the beginning of the quarter. I thought the team did fantastically transitioning the customer base to Volta, and now Volta is in high-volume production.

The application of data center – you asked a larger question about data center. Data center is a very large market, as you know, and the reason for that is because the vast majority of the world’s future computing will be largely done in data centers. And there’s a very well accepted notion now that GPU acceleration of servers delivers extraordinary value proposition. If you have a data-intensive application, and the vast majority of the future applications in data centers will be data intensive, a GPU could reduce the number of servers you require or increase the amount of throughput pretty substantially. Just adding one GPU to a server could reduce several hundred thousand dollars of reduction in number of servers. And so the value proposition and the cost savings of using GPUs is quite extraordinary.

There are several applications in data centers. First of all, there’s training and there’s high-performance computing. There’s cloud virtual PC, as what Amazon AWS G3 announcement was about this quarter. And then there are also new applications such as inferencing as these models are now going into production, and the new applications that are coming online, which is likely to overwhelm the Internet in the near future, which is live video, consumers taking live video on their phones and sharing it with their friends.

And there are going to be hundreds of millions of these happening all the time, and each one of these videos will have to be transcoded to a variety of formats to be shared with their friends and also has to be – you have to perform AI on it instantaneously so that you could avoid inappropriate video from being streamed to large audiences. And so the number of applications where GPUs are valuable, from training to high-performance computing to virtual PCs to new applications like inferencing and transcoding and AI, are starting to emerge.

The one area where you’re talking about ASICs and TPUs, TPU is basically an ASIC. The way to think about that is this. After four generations of evolution of our GPU, NVIDIA GPU is basically a TPU that does a lot more. We could perform deep learning applications, whether it’s in training or in inferencing now, starting with the Pascal P4 and the Volta generation. We can inference better than any known ASIC on the market that I’ve ever seen. And so the new generation of our GPUs is essentially a TPU that does a lot more. And we can do all the things that I just mentioned and the vast number of applications that are emerging in the cloud.

And so our belief is this. Our belief is that, number one, a GPU has to be versatile to handle the vast array of big data and data-intensive applications that are happening in the cloud, because the cloud is a computer. It’s not an appliance. It’s not a toaster. It’s not a lightbulb. It’s not a microphone. The cloud has a large number of applications that are data-intensive. And second, we have to be world-class at deep learning, and our GPUs have evolved into something that can be absolutely world-class at TPU, but it has to do all of the things that a data center needs to do."

So I see NVDA dominating the data center market, taking massive share from CPUs. And that datacenter market is also rapidly growing. This is NVDA’s biggest near-term opportunity; it’s growing like a weed and it’s going to be huge.

Huang on cryptocurrency: "Cryptocurrency and blockchain is here to stay. The market need for it is going to grow, and over time it will become quite large. It is very clear that new currencies will come to market, and it’s very clear that the GPU is just fantastic at cryptography. And as these new algorithms are being developed, the GPU is really quite ideal for it. And so this is a market that is not likely to go away anytime soon, and the only thing that we can probably expect is that there will be more currencies to come. It will come in a whole lot of different nations. It will emerge from time to time, and the GPU is really quite great for it.

What we’ve done, our strategy is to stay very, very close to the market. We understand its dynamics really well. And we offer the coin miners a special coin-mining SKU. And this product configuration – this GPU configuration is optimized for mining. We stay very close to the market. We know its every single move and we know its dynamics.

And then last thing that I can say is that the larger of a GPU company you are, the greater ability you could absorb the volatility. And so between the combination of the fact that we have GPUs at just about every single price point, we have such incredibly efficient designs that we’re so close to the marketplace. And because we have such large volumes, we have the ability to rock and roll with this market as it goes. But this is an important market that likely will continue to grow over time."

Huang is basically saying a lot of different things here.

  1. Cryptocurrency will be large, but it will have a lot of ups and downs in demand.

  2. GPUs are ideal for mining cryptocurrency.

  3. NVDA has a wide range of GPUs at different price points and configurations, so NVDA can tailor a product or products to meet this demand (give it the right features and price without affecting NVDA’s other business lines). AMD had problems because they ran out of GPU inventory for the gaming market when a lot of cards were sold to cryptocurrency miners. AMD didn’t offer a separate product for cryptocurrency and said they won’t. AMD is not well positioned to segment the markets and control inventory to different target markets. Note: NVDA also ran out of some supply of gaming GPUs but, apparently, NVDA handled it much better than AMD.

  4. NVDA can handle fluctuations in demand while AMD can’t.

More on cryptocurrency mining causing shortages of gaming GPUs:

“There are still small miners that buy GeForces here and there, and that probably also increased the demand of GeForces. There were a lot of shortages all over the world. And as we go into this quarter, there’s still cryptocurrency mining demand that we know is out there. And based on our analytics and understanding of the marketplace, there will be some amount of demand for the foreseeable future. But it’s also the case that there were gamers whose needs and demands were not filled last quarter.”

If both AMD and NVDA couldn’t meet all gaming demand last quarter, it means some gamers who wanted GPUs couldn’t buy them from anyone. Therefore, there could be spillover demand from Q2 to Q3.

Huang on the Volta launch and ramp: “First of all, it’s very difficult to reverse-engineer from the first ramp of Volta any impact on gross margins, and the reason for that is because the first ramps tend to be more costly, and you’re still trying to stabilize yield. There are a lot of complexities involved. But what I can tell you is that we shipped a lot of Voltas. We shipped a lot of Voltas, and Volta is fully ramped. Customers are clamoring for it. The leap generationally for deep learning is quite extraordinary. And so we’re expecting Volta to be very, very successful.

We will need to watch the datacenter market for NVDA over the next few quarters, but to me a 10x improvement over the prior generation suggests that demand could be enormous, especially when you add that GPUs can also do what CPUs can do. I wouldn’t want to own Intel because who knows how much business they will lose.

Huang says more about Volta: "So the first way to think about our ASP is to think about the value proposition that our GPUs provide. Whenever you include a Volta in your data center, in your server that is doing data-intensive processing, the number of commodity servers that it replaces and the number of just NICs [Network Interface Cards] and cables that it replaces is pretty extraordinary. Every single Volta allows you to save several hundred thousand dollars.

And so the price of Volta is driven by the fact that, of course, the manufacturing cost is quite extraordinary. These are expensive things to go and design. The manufacturing cost itself, you guys can estimate it, is probably in the several hundred dollars to close to $1,000. However, the software intensity of developing Volta, the architectural intensity of developing Volta, all of the software intensity associated with all the algorithms and optimizing all the algorithms of Volta is really where the value-add ultimately ends up. And so I guess the pricing – your question relates is pricing. We expect pricing to be quite favorable for Volta.

And then your second question I think is related to acceleration. The data center growth opportunity for us is quite significant, as you know. There are several applications that demand GPUs today. Almost every single data center in the world today recognizes that GPU is the path forward for data-intensive processing. Every single OEM and every single cloud service provider now supports NVIDIA GPUs and offers NVIDIA GPUs, and Volta is going be the engine that serves them. So I’m expecting a lot of good things from Volta."

I can’t see why a datacenter wouldn’t implement a lot of NVDA Voltas. They will spend less and get more.

Wow, a COGS under $1,000 and a list price close to $19,000. Looks like gross margins for Volta are around 95%. WOW!
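A minimal sketch of that implied margin math, assuming the ~$19,000 figure is roughly the $149K DGX-1 price spread over its eight Voltas, and taking the top of Huang’s “several hundred dollars to close to $1,000” cost range:

```python
# Implied gross margin per Volta (a rough sketch; both inputs are the
# approximate figures discussed above, not disclosed numbers).
price_per_volta = 19_000   # ~ $149K DGX-1 / 8 Voltas
estimated_cogs = 1_000     # top of Huang's quoted manufacturing-cost range

gross_margin = 1 - estimated_cogs / price_per_volta
print(f"Implied gross margin: {gross_margin:.1%}")  # ~94.7%, i.e. roughly 95%
```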

Huang is saying that all datacenters know they need to be using GPUs. Which ones do you think they will buy? The fastest and most powerful ones, of course. They will get the most value out of those.

Huang’s view on the autonomous car roadmap, i.e., when autonomous cars will hit the market/road:

"The roadmap for auto looks like this. For this year and next, what you should see is development partnerships that we have with a growing number of car companies, and they’re reflected in NRE projects, development systems, and purchasing of our AI supercomputers like DGX. And so for the next I would say this year and the vast majority of next year, that’s what you should expect from the autonomous driving perspective.

Starting next year, you’re going to start to see robot taxis start to come to the road. We’re working with a handful, maybe I guess about six or seven really exciting robot taxi projects around the world. And you could see them start to go into a prototype or beta testing starting now, and then next year you’ll see a lot more of them. And starting 2019, you’ll see them go into real commercial services. And so those are robot taxis, what some in the industry call Level 5s, basically driverless cars.

And then the fully autonomous drivered cars, driven cars, branded cars will start hitting the road around 2020 and 2021. So the way to think about it is this year and next is really about development. Starting next year and the following year is robot taxis. And then 2021 to forward you’re going to see a lot of Level 4s."

So for the most part, autonomous car projects are currently in development. This means small sales for NVDA because customers don’t need that many GPUs for training the cars. The opportunity should get big in a few years.

Huang on NVDA’s competitive position in its various markets, and then his comment about being able to meet any demand above what NVDA has guided for Q3:

"The first factor is our strategic position. Our competitive lineup is probably the best it’s ever been, better than last year even, which was incredibly strong, better than the year before that because it was incredibly strong. I think our strategic position and the value of our architecture is more powerful today than ever. And so I think number one is our strategic position.

The second, if the demand were there in the second half with respect to – from a perspective of gaming demand and if there’s any residual crypto demand, we will surely be able to serve it. And then lastly, the factors related to our guidance, our guidance is we’re comfortable with our guidance. We’re happy with our guidance, and we want to have an opportunity to come back and give you an update in Q3."

How powerful is Volta?

"Volta was a giant leap. It’s got 120 teraflops. Another way to think about that is eight of them in one node is essentially one petaflops, which puts it among the top 20 fastest supercomputers on the planet. And the entire world’s top 500 supercomputers are only 700 petaflops. And with eight Voltas in one box, we’re doing artificial intelligence that represents one of them. So Volta is just a gigantic leap for deep learning and it’s such a gigantic leap for processing that – and we announced it at GTC, if you recall, which is practically right at the beginning of the quarter.

Now looking forward, there’s a whole bunch of growth drivers for our data center business. Deep learning is – training is a growth driver. Cloud computing, high-performance computing is a growth driver, and we have new growth drivers with inferencing. And so I’m pretty excited about our prospects going into the age of – the generation of Volta."

Wow! One DGX-1 for $149,000 is among the top 20 supercomputers currently in the world. It’s hard to imagine that this won’t sell like gangbusters. It’s also hard to imagine that a competitor will catch NVDA on performance in the coming years. NVDA must already be working on their next generation high end GPU.
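The throughput arithmetic behind that is straightforward; here is a quick sketch taking the quoted figures at face value (note the 120 teraflops number is a deep-learning figure, not the double-precision metric the Top 500 list is ranked on):

```python
# Sanity check on the quoted node-level throughput.
volta_tflops = 120          # deep-learning TFLOPS per Volta, as quoted
voltas_per_dgx1 = 8

node_pflops = volta_tflops * voltas_per_dgx1 / 1_000
print(f"One 8-Volta DGX-1: ~{node_pflops:.2f} PFLOPS")  # ~0.96, "essentially one petaflops"

top500_aggregate_pflops = 700   # aggregate figure cited on the call
avg_top500_system = top500_aggregate_pflops / 500
print(f"Average Top 500 system: ~{avg_top500_system:.1f} PFLOPS")  # ~1.4
```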

More on Volta’s potential:

"The applications that we serve is really diverse now. It used to be just high-performance computing and supercomputing. But the number of applications we serve in Internet service providers, manufacturing, healthcare, financial services, transportation, the number of data-intensive applications and industries that need them is really growing very fast.

And so how fast does that – what does that imply in terms of long-term growth? It’s hard to say. But first principles would suggest that every single data center in the world will be GPU-accelerated someday. And I’ve always believed that, and I believe that even more today. Because I believe that in the future, this new computing model that we all finally call AI is going be a highly data-intensive business model, and the GPU is the ideal computing model for that.

So I’m not exactly sure if that completely answers your question, and partly because I’m not exactly sure. I just know that on first principles, the computing architecture is ideal. There’s every evidence that every single data center and every single OEM and every single Internet service provider is jumping on this architecture and jumping on Volta. And I believe that AI is going be the future of computing. And so somewhere between those beliefs and executing the business is the truth."

Sure makes it seem like every industry and every datacenter already wants, or is going to want, NVDA’s high-performance GPUs. And then consider that the world is moving to the cloud, which runs in datacenters. How can I not buy more NVDA???

Huang on NVDA’s software advantage, backward compatibility, and building different configurations tailored to different target markets:

"it’s not really possible because our GPUs are all architecturally compatible, which at some level is one of our strengths. There are hundreds of millions of NVIDIA GPUs in the world, and they’re all CUDA compatible, and they’re all 100% CUDA compatible. And we’re so rigorous and so disciplined about ensuring their compatibility that for developers it’s really a wonderful platform.

However, we’re thoughtful about how we configure the GPUs so that they’re best for the applications. Some applications would like to have the maximum amount of performance in a few nodes. Some would like to have the maximum amount of performance within 30 watts. Some would like to have the maximum amount of flexibility with all of the I/O and connectors and all the display connectors. Some people like to have multi-GPUs and that they have the ability to configure them together. And so every market has a slightly different need, and we have to understand the market needs and understand what it is that the customers are looking for, and configure something that is best for them."

NVDA and its GPUs are outpacing Moore’s Law, which is enabling AI:

"A neural net in terms of complexity is approximately – not quite, but approximately doubling every year. And this is one of the exciting things about artificial intelligence. In no time in my history of looking at computers in the last 35 years have we ever seen a double exponential where the GPU computing model, our GPUs are essentially increasing in performance by approximately three times each year. In order to be 100 times in just four years, we have to increase overall system performance by a factor of three, by over a factor of three every year.

And yet on the other hand, on top of it, the neural network architecture and the algorithms that are being developed are improving in accuracy by about twice each year. And so object recognition accuracy is improving by twice each year, or the error rate is decreasing by half each year. And speech recognition is improving by a factor of two each year. And so you’ve got these two exponentials that are happening, and it’s pretty exciting. That’s one of the reasons why AI is moving so fast."
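The “over a factor of three every year” figure checks out; here is a minimal sketch of the compounding, taking the quoted numbers at face value:

```python
# The "double exponential": hardware speedups compounding with algorithmic gains.
years = 4

# Hardware: 100x (Volta vs. Kepler) over four years requires slightly more
# than a 3x speedup per year, as Huang says.
target_speedup = 100
required_annual_factor = target_speedup ** (1 / years)
print(f"Required annual hardware speedup: ~{required_annual_factor:.2f}x")  # ~3.16x

# Algorithms: error rates roughly halving each year compound on top of that.
annual_error_reduction = 2
print(f"Error-rate improvement over {years} years: ~{annual_error_reduction ** years}x")  # 16x
```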

There’s some more good stuff, but you get the idea. NVDA seems to be synonymous with AI, and AI is a powerful force on par with electricity. It’s going to be ubiquitous, and NVDA is basically (currently) the sole supplier that will enable every company to become more intelligent. I have trouble putting a maximum upside on this opportunity because I can’t really predict AI’s implications. What I can predict is that NVDA is going to make a lot of money off of it.

Chris

49 Likes

Chris,
Based on your previous post, seems there would be a correlation between the massive improvements in NVDA’s data center GPU applications and use of ANET products/services. Would there be a significant correlation between growth in NVDA and growth in ANET’s offering? Please comment. I am no expert in this, and appreciate you weighing in and your phenomenal posts.

Sjo

1 Like

Based on your previous post, seems there would be a correlation between the massive improvements in NVDA’s data center GPU applications and use of ANET products/services. Would there be a significant correlation between growth in NVDA and growth in ANET’s offering? Please comment. I am no expert in this, and appreciate you weighing in and your phenomenal posts.

That’s a great question. I’m not an expert on datacenter networking so I don’t know the answer. The following are the questions I would ask:

  1. GPUs are much more powerful than CPUs, so more computing can be done within a single GPU. Does this mean that as more and more GPUs get adopted and replace CPUs, you will see a need for fewer and fewer network connections between processors?

  2. If #1 is true, then switch density would decrease, and you might expect ANET and other companies that focus on switching and routing solutions to be negatively affected.

  3. Even if #1 and #2 are true, will there be enough increase in overall global demand for processing power to allow ANET to still sell increasing numbers of products?

Again, I’m just guessing, but maybe some of the technical networking experts like Andy can weigh in.

Chris

4 Likes

Chris, just to quickly thank you for the CC excerpts which were so interesting.

1 Like

https://finance.yahoo.com/m/19301c7f-011c-324a-bf79-58091012…

The specific reason why NVDA may have sold off on earnings (not by that much in the scheme of things, but we all hate to see such things) is that the data center numbers were less than the whisper numbers: 2% sequential growth, and only 17% YoY growth for next quarter. The prior-year comparisons are making it more difficult to keep growing at the same rapid pace. Thus growth may come down to a more usual pace, at least for a while.

This ignores what may happen if the new Volta chip starts to take off. It is also short sighted, as it appears that every major business in the world will be using AI or not be in business. At least that is the rhetoric from every major business in the world. The market that NVDA is addressing in the data center is bigger than we have imagined, although it may be lumpy to get there.

Thus the short-term reason for the sell-off, other than North Korea. TTD had no such handicap in after-hours trading.

So there is reason to the madness. It could very well be that the doubling of the stock that has become the norm is over, at least for now. That won’t change the fact that NVDA is the best positioned AI company in the world, bar none.

Tinker

8 Likes

Based on your previous post, seems there would be a correlation between the massive improvements in NVDA’s data center GPU applications and use of ANET products/services. Would there be a significant correlation between growth in NVDA and growth in ANET’s offering? Please comment. I am no expert in this, and appreciate you weighing in and your phenomenal posts.

That’s a great question. I’m not an expert on datacenter networking so I don’t know the answer. The following are the questions I would ask:

1) GPUs are much more powerful than CPUs, so more computing can be done within a single GPU. Does this mean that as more and more GPUs get adopted and replace CPUs, you will see a need for fewer and fewer network connections between processors?

2) If #1 is true, then switch density would decrease, and you might expect ANET and other companies that focus on switching and routing solutions to be negatively affected.

3) Even if #1 and #2 are true, will there be enough increase in overall global demand for processing power to allow ANET to still sell increasing numbers of products?

Again, I’m just guessing, but maybe some of the technical networking experts like Andy can weigh in.

That is a great question. This is outside my skill set. I believe that these GPUs are going into servers. It is interesting that they said the customers would need fewer NICs with the new GPUs. I know we have some experts on this board who know how these go together. Do the servers connect directly to each other or do they need to go through a switch? If someone could expound on this it would be appreciated.

Andy

1 Like

The sheer growth of internet traffic, and the resulting increase in network traffic within datacenters, will support continued growth of ANET. I just read elsewhere that car-to-car and car-to-cloud (i.e., car-to-datacenter) data traffic is expected to increase by a factor of ten thousand by 2025. Certainly, NVDA will profit handsomely from the rise of autonomous vehicles, but so will ANET.

Having better, faster processors within servers will not mitigate the need to get data in and out of the server. NVDA and ANET can both continue to grow as more and more data flows into and across data centers, and more and more AI applications are put to work on those data.

Tiptree, happily watching, and profiting from, one of the greatest revolutions of all time

5 Likes

Gaucho,

There are many reasons that Nvidia is a compelling long term investment. One of those not discussed is how Nvidia goes out of its way to make its new products (100x faster or whatever) backward compatible with its old ones by rigidly sticking with CUDA.

If you run a data center you are going to want Nvidia because of its superiority, yes. And it is unlikely Nvidia will ever (for any material length of time) fall behind in performance to a competitor in any material manner.

However, even if a competitor might produce some better performance or cost, what they cannot do is guarantee backward compatibility with your existing data center that is filled with legacy Nvidia GPUs and Intel CPUs.

Does Intel have 99% (or very close to this) market share in data centers for CPUs because AMD cannot make CPUs that perform as well at better cost than Intel? No. The issue is maintaining compatibility and reliability of everything that you need to run on the data center.

Can you imagine the headaches if you mixed and matched AMD and Intel CPUs in a huge datacenter? I experienced it with one AMD-powered desktop computer. It simply was not as backward compatible with software as we are led to believe. For the most part it is not a problem, but when it is a problem, it is a problem.

Here, we have data centers dominated by Nvidia GPUs running CUDA. It is why Gorillas are made in the Tornado. Datacenter growth is happening so fast that datacenters need to just buy and deploy, and before they know it they are dominated by Nvidia products. They thereafter have to manage their data centers based on this fact.

This is why just having a little bit faster or little bit cheaper GPU will not displace Nvidia, anymore than Intel CPUs are displaced.

Tinker

23 Likes