NVDA: 2018 self driving safety report

https://www.nvidia.com/content/dam/en-zz/Solutions/self-driv…

The above link is to the report that NVDA released last week.

Here is a link to a summary of what’s in the report:

https://blogs.nvidia.com/blog/2018/10/23/introducing-self-dr…

Chris

3 Likes

Pillar 2: Development Infrastructure That Supports Deep Learning

A single test vehicle can generate petabytes of data annually. Capturing, managing and processing this massive amount of data for not just one car, but a fleet, requires an entirely new computing architecture and infrastructure.

From the report, this one stands out in regard to what we do on this and the NPI board: an entirely new architecture and infrastructure is needed to examine all this massive data, not just from this one car, but from the entire fleet of cars. After a while I would think the data capture would be reduced as the important stuff vs. the throwaway was identified, but as of now Nvidia is talking about capturing all this data, from every vehicle.

Anyone have any information as to what alternative architecture and framework Nvidia may be mentioning?

Not difficult to understand what Nvidia is talking about in regard to processing it - a whole lot of GPUs on the edge, in edge computing data centers (something Nutanix would love to have its software running). But what about the rest of the architecture and infrastructure?

Tinker

2 Likes

edge computing data centers

OK, I’ll bite: what is an “edge computing data center”?

Because my read is that it’s more than an oxymoron, it’s a self-contradictory, non-existent thing.

1 Like

Smorg, I hesitate to respond since you seem to think the information that I bring is that of a very naive idiot who could not understand the two sides of a coin. But I will anyways.

I am not a tech professional. That probably gives me investing advantages in tech because I do not get weighed down by the minutiae but instead am able to understand the core aspects as they relate to investing. We still talk about the surgeon who stated that da Vinci would go nowhere because it was no better than laparoscopic surgery, and he had used da Vinci, knew people who did, etc., and it was not worth investing in. He was simply too close to things and to the minutiae, so he focused on the one tree in great detail without looking at the forest.

I am surprised that you are not aware of edge data centers. Nutanix is not shy about talking about them. Within the next 5 years edge data centers are expected to collect and possibly process 50% or more of all data produced in the world. The reason for this is IoT. IoT can be a thermostat, or it can be the daily petabyte produced by one car. The data is so massive that it cannot be effectively analyzed through core data centers, and in many instances the latency is unacceptable, such as in autonomous machines.

Nutanix’s Sherlock project is 100% about this. Red Hat’s new HCI release is important in this regard because, for the first time, a completely open-sourced, simple HCI product has been released, and Red Hat is using it for ROBO offices. But such a product may be perfect for edge computing, if stable and inexpensive, because the number of edge data centers is expected to be very large.

These largely do not exist now, although there is at least one start-up that has a commercial server for this purpose. These will exist because data will need to be captured and analyzed near the source, to the extent possible, and then what is necessary and portable sent back to the core data center. You cannot expect a petabyte of data per car to all be sent through the World Wide Web back to core data centers to be analyzed in real time and acted upon.

Nutanix hopes to extend its products to also manage these millions, if not more, of edge data centers and make them one whole network. HCI is considered essential to edge data center management.

https://blog.schneider-electric.com/datacenter/2018/06/15/45…

This blog may be a good start for you. Good start for all of us, as I expect Saul will own at least one stock enabling this in due course, if he does not already (Nutanix).

Tinker

11 Likes

<<<With eyes toward the future of Internet of Things (IoT) devices plugging into the internet, enterprises are also shifting focus on how hyperconverged infrastructures (HCIs) may assist computing in the IoT era. In fact, Pivot3 boldly states that “combined with the IoT, it’s (HCI) going to change the way the world works.”

Pivot3 is not the only vendor proclaiming the benefits to IoT from HCI. Originally perceived as an ideal product to deliver cloud computing and remote office and branch office (ROBO) networking, vendors came to the epiphany that HCI will also deliver multi-access edge computing (MEC), which is a distributed cloudlet, and that the characteristics needed for both cloud and MEC are the same requisites to maintain IoT connectivity and monitoring. VMware announced at the Mobile World Congress in 2018 its intention to utilize HCI for edge computing and IoT. NetApp also released an HCI for IoT product that focuses on consolidating workloads and scalability to deliver the data center required to enable IoT latching onto networking sources.>>>

https://www.sdxcentral.com/cloud/converged-datacenter/defini…

Another succinct article, but more on point. What you can read into it, though, is that the HCI market for edge data centers will be very crowded and Nutanix is nowhere near the first mover here. Since things are so nascent it may not matter, but within 5 years, like electrical nodes or cellphone towers popping up everywhere, so will edge data centers. Unmanned, highly automated, and one suggested place to build them is at existing cell tower locations. But who knows.

Using the Tinker Ratio, Nutanix scores very high on growth rate/TAM to grow into, along with a very high score on relative valuation (RV), but on CAP…sigh.

Despite being number one in the market with high gross margins, Nutanix has to spend more and more to retain its position and expand its market, and has a plethora of competitors. In traditional HCI it is a duopoly, and Nutanix gets good CAP marks from me. But moving to the edge, which will be critical, HCI itself will become commoditized. Stability and price will be the primary selling factors, and it will need to be smaller and simpler than what is sold at the core. Thus low CAP marks as the data center expands to the edge, and thus why Nutanix is so tempting yet not confidence inspiring.

Compare that to Nvidia. They will sell GPUs into every edge data center with near-zero marginal marketing and R&D costs, i.e., 54% ROE. That is a CAP score that presently (a competitor may find its way into the market for this or that niche) has to rival what a younger Microsoft or Intel had. Hard to materially argue otherwise. Nvidia’s rate of revenue growth will be slower in the Tinker Ratio (than your SHOP or ZS or Nutanix - but not by as much as you might think), but its rate of earnings growth should make up for this in spades. Thus Nvidia gets top marks in CAP, top marks in RV, and mixed marks in growth, but in context with RV and earnings leverage, growth may not even be a factor once all these vertical-market Tornadoes hit.

Thus my Tinker Ratio analysis for NVDA and NTNX. It captures strengths and weaknesses and works very well for me to rank investment options.

Mongo…that is a good discussion for Tinker Ratio, but for a different thread and time. Mongo will be used on the edge as well.

Twilio…another interesting one…but another thread.

Tinker

4 Likes

Smorg, I hesitate to respond since you seem to think the information that I bring is that of a very naive idiot who could not understand the two sides of a coin.

I don’t think Tinker is naive nor an idiot. I think he has an instinctive view of technology that he doesn’t really understand. As a result, his explanations can mislead people. In this particular case, he’s confusing two different architectures, probably due to industry nomenclature.

Note he didn’t actually answer my question, which was simply: What is an “edge computing data center”? That’s because that thing doesn’t exist. Go ahead and google that term with the quotes.

I am surprised that you are not aware of edge data centers.

Ah, well, that’s not what he actually said. He’s apparently confusing “Edge Data Centers” with a typical IoT architecture that involves both “Edge Computing” and “Data Centers.” These are two different things. My guess is his confusion comes from the fact that they both use the term “edge.”

Here’s a good explanation of what “the edge” is: https://www.theverge.com/circuitbreaker/2018/5/7/17327584/ed…

The word edge in this context means literal geographic distribution. Edge computing is computing that’s done at or near the source of the data, instead of relying on the cloud at one of a dozen data centers to do all the work.

So you see why “edge computing data centers” is an odd phrase and caught my attention. And I didn’t think of “edge data centers” because those don’t apply to the Autonomous Vehicle case under discussion here.

Back to my article, here’s an example of edge computing: Amazon Alexa voice recognition. Your Echo has to process your speech and send a compressed representation of it to the cloud; the cloud has to uncompress that representation and process it, and then the cloud sends your Echo the answer.

If Alexa were “smarter” then it could do the voice recognition part itself and send the resulting query as text to the cloud for the answer. The voice recognition part would then be characterized as “edge computed.”
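
For anyone who likes to see this as code, here is a minimal Python sketch of the two flows just described. All the function names are made up for illustration (this is not Amazon’s actual API); the point is only where the heavy work happens.

def compress_audio(raw_audio: bytes) -> bytes:
    return raw_audio[::2]  # stand-in for real audio compression on the device

def cloud_recognize_and_answer(compressed: bytes) -> str:
    return "sunny"  # stand-in: cloud does speech recognition AND produces the answer

def transcribe_locally(raw_audio: bytes) -> str:
    return "what is the weather"  # stand-in: on-device speech recognition (edge compute)

def cloud_answer(query_text: str) -> str:
    return "sunny"  # stand-in: cloud only answers a small text query

def todays_flow(raw_audio: bytes) -> str:
    # The Echo compresses; the cloud does the heavy lifting.
    return cloud_recognize_and_answer(compress_audio(raw_audio))

def edge_computed_flow(raw_audio: bytes) -> str:
    # A "smarter" Echo recognizes speech itself; only small text crosses the network.
    return cloud_answer(transcribe_locally(raw_audio))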

OK, so that is what most of the world thinks of as the edge and edge computing.

Now, an “Edge Data Center” is quite different. It’s a data center that is set up near the “edge of the network” to reduce latency (how long it takes to send something and get something back), and to be less exposed to single point of failures. Here’s one article attempting to describe it: https://www.datafoundry.com/blog/5-ways-to-define-edge-data-…

So, let’s now examine this information in the context of this thread: Nvidia and Autonomous Vehicles. The Nvidia white paper has as Pillar #1 its “AI Design and Implementation Platform.” This is the in-vehicle compute, via AGX Xavier or AGX Pegasus hardware and software. Pillars #2 and #3 utilize Nvidia’s DGX AI Supercomputer hardware in a data center to provide training (deep learning) and simulation for testing.

There are two parts to this: a deep neural network implementation in the data center for training that is fed raw information from vehicles (the edges). Once learned and tested and downloaded to the vehicle, the vehicle then runs stand-alone. From the Nvidia white paper: “Deep neural networks (DNNs) can be trained on a GPU-based server in the data center, then fully tested and validated in simulation before seamlessly deployed to run on our AI computer in the vehicle. To safely operate, self-driving vehicles require supercomputers powerful enough to process all the sensor data in real time.”
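
To make the division of labor concrete, here is a toy PyTorch sketch of that data-center-trains / vehicle-runs-inference split. The network, data, and file name are all placeholders I made up; this is not Nvidia’s pipeline, just the shape of it.

import torch
import torch.nn as nn

# Data-center half: train a tiny stand-in "driving policy" on fake sensor data.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))  # 3 outputs: steer/brake/throttle
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    sensors = torch.randn(64, 16)   # placeholder for fleet sensor data
    targets = torch.randn(64, 3)    # placeholder for labeled driving outputs
    optimizer.zero_grad()
    loss_fn(model(sensors), targets).backward()
    optimizer.step()

# (Simulation-based validation would happen here.)  Then the frozen net ships to the car.
torch.jit.script(model).save("driving_policy.pt")

# In-vehicle half (edge compute): load the frozen net and run inference only; no training on board.
policy = torch.jit.load("driving_policy.pt")
with torch.no_grad():
    control = policy(torch.randn(1, 16))  # one frame of live sensor data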

So, there is no Edge Data Center in this picture. There is data center processing (the neural network for training and testing via simulators) and there is edge computing (processing data from the vehicle’s sensors to figure out how fast to go and in what direction). So not only was Tinker’s “edge computing data center” not a real thing, what he was actually referring to, “edge data centers,” is not an appropriate solution for this problem.

Since vehicles are mobile and since you want to get all the data from all the cars to teach your neural net, there is no opportunity for an edge data center to come into play. Perhaps not the best example, but if you think of a retail store, then your “cash registers” might be pretty dumb edge compute units that can scan an item’s barcode and determine the SKU, but then that SKU number is sent to a local data center either in the store, or at least within the corporate network, to determine the actual price. Then the edge (register) adds up the prices, subtracts returns, etc., to come up with the total. There’s then a communication to a typical central data center, like Visa, to approve and record the transaction.
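
A toy Python sketch of those three tiers (all names and prices invented, purely to illustrate which tier does what):

STORE_PRICE_DB = {"012345": 2.50, "067890": 4.00}   # lives in the local / in-store data center

def scan_barcode(barcode: str) -> str:
    return barcode.strip()                 # edge (register): dumb step, barcode -> SKU

def lookup_price(sku: str) -> float:
    return STORE_PRICE_DB[sku]             # local data center: SKU -> price

def authorize_payment(total: float) -> bool:
    return total < 10_000                  # central data center (e.g., card network): approve + record

basket = ["012345", "067890", "012345"]
total = sum(lookup_price(scan_barcode(code)) for code in basket)   # edge adds up the prices
approved = authorize_payment(total)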

This architecture makes no sense for Autonomous Vehicles. There is no opportunity or even advantage for an eventual Nutanix Sherlock to set up and run a local data center for some small set of vehicles but not others. So when Tinker writes:

Pillar 2: Development Infrastructure That Supports Deep Learning…
Not difficult to understand what Nvidia is talking about in regard to processing it - a whole lot of GPUs on the edge, in edge computing data centers.

I have to take exception with that statement at multiple levels:

  1. Nvidia’s “whole lot of GPUs” was explicitly described as being in a central data center: “Capturing, managing, and processing this massive amount of data for not just one car, but a fleet.”
  2. The edge in this case is the vehicle, and there is no discussion in Nvidia’s paper about doing deep learning wholly within a single vehicle.
  3. There is no data center within the vehicle (edge).
  4. Edge Data Center doesn’t apply to the Autonomous Vehicle learning case.

Tinker also writes: You cannot expect a petabyte of data per car to all be sent through the World Wide Web back to core data centers to be analyzed in real time and acted upon.

This is true. But, first, this is not what Nvidia is describing. Nvidia is not talking about deep learning “in real time”; no-one does. And Nvidia does not talk about the actual autonomous driving aspect requiring connection to a data center, especially not an edge data center. That all happens on board the edge (vehicle) using the on-board AGX Xavier or AGX Pegasus hardware. It’s Edge Computing, not “Edge Computing Data Center” (or whatever).

As for the size of the data transmitted per year, although not explicitly described, I suspect that in most cases the on-board vehicle hardware does some pre-processing (edge compute) to reduce the size of the data sent to the central data center for deep learning.
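
Purely as an illustration of what such edge pre-processing could look like (my own toy example, not anything from the Nvidia paper): drop most frames and halve the resolution before upload.

import numpy as np

def preprocess_for_upload(frames: np.ndarray, keep_every: int = 10) -> np.ndarray:
    kept = frames[::keep_every]    # temporal downsampling: keep every Nth frame
    return kept[:, ::2, ::2]       # spatial downsampling: half resolution

raw = np.random.randint(0, 255, size=(300, 720, 1280), dtype=np.uint8)  # fake 10 seconds of video
reduced = preprocess_for_upload(raw)
print(raw.nbytes / reduced.nbytes)  # ~40x smaller in this toy case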

The bottom line is that there is no “edge data center” described in the Nvidia paper, and given the mobile nature of vehicles there is little to no opportunity to incorporate an edge data center, so for any of us to interpret Tinker’s non-existent “edge computing data center” as what he now describes is illogical. There is “edge computing” and there is a “data center,” and any discussion of Nutanix or Sherlock or HCI is wholly off-base in this use case.

16 Likes

And how does any of that change what is relevant to an investment?

Tinker

2 Likes

Smorg and Tinker are both correct.

Smorg is technically correct about edge data centers.

Tinker is correct that as computing power gets so small, cheap, powerful, etc. it will proliferate to the edge and make NVDA a truckload of money.

Let’s simplify this a bit. A data center is like a brain, a center where information is sent to be analyzed, where models are built, and where decisions are made. After the numbers have been crunched the results are sent back out to those who are leveraging the cost benefits of this centralized sharing.

Now there are some markets and applications where data needs to be analyzed quickly and decisions need to be made quickly. These are customer requirements (local speed and reliability) for these applications.

Faster, smaller, less power hungry GPUs plus NVDA’s integration of multiple different functions into a single GPU now make it possible to address an increasing number of new and emerging applications for computing and decision making at the edge. The horsepower that was previously only possible in a large box due to cost, power, and other constraints can now be decentralized. As this decentralization happens NVDA’s addressable markets expand…I would argue tremendously. As NVDA improves the technology further, the number of applicable applications widens further.

Tinker is right again that for making good investment decisions one needs only to zero in on the pertinent information, even if it is not always 100% understood in a technical way. Saul does it with the businesses’ financials, and Tinker does it by examining capabilities, relative functional advantages/solutions, and customer pain points to predict the future adoption of various solutions to business problems and opportunities. What is more important is to have the vision to see what adoption will be enabled in various markets, how large those opportunities are, and what the impact of this adoption will be on NVDA’s financials. I think that we have the information available today to predict that there’s going to be a whole bunch of GPUs deployed.

Chris

14 Likes

Note he didn’t actually answer my question, which was simply: What is an “edge computing data center”? That’s because that thing doesn’t exist. Go ahead and google that term with the quotes.

OK, here is what I get:

https://blog.schneider-electric.com/datacenter/2018/04/03/da…

Adapting to Trends like Cloud Computing and Edge Data Centers

https://www.cbronline.com/opinion/edge-computing-future-data…

The huge data center model won’t become obsolete by any means and will still be used for a wide range of functions, but the rise of edge computing could see a larger number of smaller data centers built closer to population centers like cities and business parks.

http://www.datacentergurublog.com/2018/05/edge-computing-and…

An edge data facility is still a data center. The state of the art for data center design and construction with all of the considerations for facility and system availability, maintainability and resiliency remain in force. We anticipate that the disaggregation that edge poses will distribute compute and storage assets and may place those assets in non-data center facilities.

https://datacenterfrontier.com/tag/edge-computing/

Download the new Data Center Frontier Special Report, brought to you by BaseLayer, that explores the possibilities of the “edge data center” and how edge computing is changing the colocation landscape.

I could go on for quite some time, but I loathe these types of personal attacks against Tinker. You fancy yourself an expert…invest accordingly.

7 Likes

Anyone have any information as to what alternative architecture and framework Nvidia may be mentioning?

Tinker this is specifically what Nvidia is talking about here.

https://www.nvidia.com/en-us/self-driving-cars/data-center/

It takes a high-performance, energy-efficient AI computing infrastructure to create the future of autonomous vehicles. The key to success is optimizing the data load for training and operating these vehicles without compromising safety. The more information that cars can gather and process, the faster and better AI can learn and make decisions.
Scaling your data center with GPU-powered NVIDIA® DGX™ Systems is the best way to build an AI infrastructure that can deliver safe autonomous vehicles to consumers. NVIDIA DGX-1 is an AI supercomputer that makes training and management of deep learning algorithms effortless. It delivers 3X faster training speed than other GPU-based systems—and works right out of the box.

Experiment faster, train larger models, and gain insights starting on day one.
Improve AI innovation with an open, end-to-end platform that extends from your desk to the data center to the car.
Streamline and accelerate your workflow with today’s most popular deep learning frameworks and AI tools—on-location or in the NVIDIA GPU Cloud (NGC).

It would be DGX-1 or 2 supercomputers for sure.

If you recall, we had a discussion (in a PSTG thread) about Zenuity a while back. Zenuity is a tech company working with some of the big euro car companies to bring autonomous driving tech from NVDA to the field. They had data center bottleneck problems and had a contest where Pure blew out like 10 other competitors.

This is how AIRI was born.
http://images.nvidia.com/content/dgx/dgx-1-zenuity-case-stud…

So yeah it’s all about data collection, processing, training the new stuff, writing new neural networks, proving in simulation, and then deploying. It is a continuous loop.

NVDA’s edge platform is AGX. AGX comprises two sub-platforms, Drive and Jetson. Each of those sub-platforms has more specialized SOC chips with varying input capabilities and power consumption.

For autonomous cars, what the Waymos of the world do is an easier problem. They are driving typically along fixed routes over and over again and not doing more complicated things like entering into highways and such. Those will deploy first.

Nvidia’s only autonomous car platform competition is from Intel.

There is also a lot of autonomous, well, everything. That is Jetson.
Here are 3 examples of Jetson. These further need the training data center, either DGX or cloud compute like from AWS.

AI vision system to monitor traffic for cities such as Boston or for monitoring foot traffic and lingering time for effectiveness of promotional displays. Jetson on device, training on AWS GPUs. Deployed at 1500 locations.
https://blogs.nvidia.com/blog/2018/10/26/motionloft-ai-retai…

Agriculture robotics. Agrobot is an autonomous machine that moves along and uses AI to pick ripe strawberries perfectly at the stem. And a “Lettuce Robot” that gets towed behind a tractor and intelligently and precisely sprays weeds. Reduces herbicides by 90% and increases yields by 10%. In use on 10% of US lettuce fields. In the case of Agrobot they trained on a desktop in the demo video. Could have been cloud, or a DGX type in the field, or doing the actual training with the Jetson.
https://blogs.nvidia.com/blog/2018/10/24/how-robots-could-sa…

Or how about a handheld genomic sequencer. This company trains neural networks on Nvidia GPUs, and the Jetson handheld device can sequence DNA and detect anomalies in the field.
https://blogs.nvidia.com/blog/2018/10/10/oxford-nanopore-dna…

I’m not sure of any other company bringing this kind of edge AI platform to market. Maybe there is. But it’s a whole ecosystem from training to deployment. It seems like it’s spreading like a weed into so many industries.

Autonomous cars are where some really big money will be, some time in the future. But the ecosystem and lineup will produce a lot of revenue from the AGX platform along the way: data center, simulation, and other edge autonomous and AI-enabled devices.

AGX is also the platform for CLARA healthcare applications.
https://blogs.nvidia.com/blog/2018/09/12/nvidia-clara-platfo…

AGX is not really a GPU, though it has CUDA cores. It is an SOC, containing a CPU, GPU, tensor cores, and multiple specific accelerators for specific tasks.

Darth

9 Likes

Thanks Darth. There is so much going on with Nvidia that it is difficult to keep up with. One advantage Nvidia has that prior dominant companies just had to figure out along the way - if we know the process of innovation, disruptive innovation, and the innovator’s dilemma - is that Nvidia is determined to stay in front of everybody, everywhere that a high-end processing solution is needed. Same strategy they used with gaming, but this is on steroids. Even in the lower end that they do not want to address (like smart refrigerators or the like), Nvidia has given away their technology with the plan that other companies will run with it and further embed the Nvidia way and infrastructure as the standard to do everything AI (or as much as they can possibly get the world to do).

Nvidia really has little competition. CPUs will more and more give way as a greater portion of the work moves to GPUs, demanding fewer CPUs per GPU, because it is more power efficient and delivers more production per dollar.

One start-up is creating an AI chip (I do not know if they call it an “Nvidia killer”) that cuts out the CPU in its entirety, and that is the key to what the company says is its superior performance - thus the CPU has been identified as a system limiter, and even though CPUs will never not be necessary, minimizing their usage increases efficiency.

To go along with little competition (Intel is in autonomous; AMD is in gaming - well, they were - they now lack competitive gaming chips except in laptops and other lower-end niches, and probably not higher-end gaming laptops anymore), there is no one else who is attacking the whole product of AI, and creating and building and investing in each market vertical.

As an example, medical. Nvidia is focused on it as if medical were its only business vertical. The low-hanging fruit is imaging (and GE and Nvidia are partnered in this regard), but Nvidia is not satisfied with the low-hanging fruit. They are working with this enormous healthcare industry to bring AI everywhere it makes sense and to seed and create these market segments. AMD cannot even keep up with each product cycle from Nvidia, much less devote resources to attacking each vertical. How is AMD supposed to gain material traction in the medical AI field when Nvidia is creating the total product and all the relationships and infrastructure in the field?

Autonomous driving, same with AMD (they cannot gain any material traction). And pretty much the same thing with each and every market vertical that Nvidia is specifically focusing on, again, as if that was their total business. AMD is trying just to stay relevant in gaming, and attempting to get its GPUs to be used in AI in the data centers (to little success to date). AMD is making good progress with CPUs against Intel, thus they are not dead in the water, but I cannot figure out why AMD is trying to divide its resources between two product categories, when their competition in each market has 5 to 10x more money to invest solely in their segment, much less having to divide resources into two different tech areas.

There are all these start-ups out there; perhaps one of them will create a true competitive challenge. Absent this (and there is nothing commercially visible at this point in time), not only is Nvidia without effective competition in every market it is in (with Intel perhaps effective competition in autonomous driving, but trailing Nvidia except for the Google partnership and a few other nice partnerships - I believe Google chose Intel because Nvidia competes with Google in AI), Nvidia is also actively investing, as if it were a start-up focusing on one specific vertical, in each of these enormous verticals to create the whole product, and even more than this, to identify, seed, create, and expand the markets themselves.

It is truly amazing. I do not know if I have ever seen a company move so fast, into so many markets, with such proficiency and dominance. To add to this (similar to AYX, which uses its own tech to better maximize its selling strategy and thus achieves 90% gross margins), Nvidia is probably the #1 user of AI for product development in all the world. And the more data Nvidia gets, the more its lead in AI utilization will grow.

I am trying to hold back some enthusiasm here, but it is difficult. Really, Nvidia has but one FUD issue surrounding them, and that is the entrance into the market of some new tech that can effectively create real competition for Nvidia. I guess the other FUD issue is just the rule-of-thumb concerns dealing with the cyclical nature of the semiconductor industry.

If someone has actually seen effective competition anywhere in the world for Nvidia, let us know. I know China is investing heavily in AI technology. Fujitsu was said in 2017 to have an “Nvidia killer.” Supposed to be just awesome. Then no news for a year. Then we find out that Fujitsu created a chip based upon a Japanese government contract, with product not to be delivered for 2 or 3 more years, to specifications as requested by the Japanese government (in regard to capabilities). As you read into it, this chip will be antiquated by the time it is delivered, but it is a government contract, so they will still be able to sell it to the Japanese government.

Gaucho and Darth know more than I do regarding specifics. I just cannot look away from what is a worldwide effective monopoly in the greatest technological advance of our time and yet, if earnings go as I anticipate, this year a 23 P/E, next year a 15 P/E, and materially lower if one takes into account enterprise value and dividend payments.

Thus, if anyone has seen anything that refutes this, let us know. Thanks.

Tinker

4 Likes

And how does any of that change what is relevant to an investment?

You said: Nutanix’s Sherlock project is 100% about this. Red Hat’s new HCI release is important in this regard because, for the first time, a completely open-sourced, simple HCI product has been released, and Red Hat is using it for ROBO offices. But such a product may be perfect for edge computing, if stable and inexpensive, because the number of edge data centers is expected to be very large.

I do not see how Nutanix’s Sherlock or Red Hat’s HCI are anything at all about this Nvidia Autonomous Vehicle use case.

There is no edge data center in this AV use case that I can envision.

There is processing on the edge, but that doesn’t mean a cloud or a server will be on the edge (which in this case is a car). There is no use for HCI in the vehicle, and no use for Sherlock in the vehicle unless Nutanix expands the Sherlock offering to include embedded code for edges, which seems unlikely.

So while there is a need for edge computing, for this AV use case there appears to be no need for HCI-powered servers.

2 Likes

The obvious longer term challenger to the use of a general purpose GPU for edge applications is a special purpose chip. Not the first thing, since one has to prove feasibility, but potentially cost effective and possibly even better as the market matures.

ASICs are always a possibility. What we are talking about is the AI computer that makes all of the decisions based off the data the car is feeding it. To do that it would have hundreds of different trained neural networks running simultaneously: one to process and locate road signs, one to determine road hazards, one to determine safe paths, one to determine location and directional information, one to determine depth perception, and so on. Then within that you have to subdivide further, with one that works with images from cameras versus LIDAR images and all of the other sensors. Redundancy amongst the parts.
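
As a rough Python/PyTorch sketch of the “many specialized networks over the same sensor frame” idea (tiny made-up networks, not anything shipped by Nvidia):

import torch
import torch.nn as nn

def tiny_net(out_features: int) -> nn.Module:
    return nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, out_features))

specialists = {
    "road_signs": tiny_net(10),   # e.g., sign classes
    "hazards": tiny_net(5),       # e.g., hazard categories
    "free_path": tiny_net(2),     # e.g., drivable / not drivable
    "depth": tiny_net(1),         # e.g., coarse depth estimate
}

frame = torch.randn(1, 3, 32, 32)  # one stand-in camera frame
with torch.no_grad():
    outputs = {name: net(frame) for name, net in specialists.items()}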

With an ASIC for the core computing, everything has to be on a common framework, and I just don’t think that’s feasible for the general market. Google’s TPU, the most talked about AI ASIC, is a TensorFlow product. But increasingly, AI workloads make use of multiple frameworks. Going back to the real-world Zenuity example:

http://images.nvidia.com/content/dgx/dgx-1-zenuity-case-stud…

The primary workloads are neural network training jobs where GPUs are processing huge amounts of data gathered from test cars. The data consists of images from surround cameras, as well as from radar and LIDAR (a laser scanner). Data scientists at Zenuity utilize the most popular frameworks being TensorFlow and PyTorch.

So they are using at least two different frameworks. And with what they use of TensorFlow, who knows how much customization they do on the models. Customization of the TensorFlow model is a no-no for the TPU, according to Google. Data scientists tinker and tailor to fit their needs, and things develop far astray from the base model. It just seems that too-specialized hardware might place too many limitations on development for the general market.
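
A trivial illustration of the multiple-frameworks point: the same toy classifier written once in PyTorch and once in TensorFlow. A programmable GPU (via CUDA-backed builds of both libraries) runs either; hardware locked to a single framework does not.

import torch.nn as nn
import tensorflow as tf

# PyTorch version of a toy classifier
torch_model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))

# TensorFlow/Keras version of the same toy classifier
tf_model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(64,)),
    tf.keras.layers.Dense(10),
])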

And really the Nvidia AGX Drive solution is a specialized chip for the purpose of AV. It has six processors including the Tensor GPU portion with CUDA cores and Deep Learning Accelerator Cores and image processing accelerators, etc. So it has specific computing accelerators but retains programmability with CUDA to work with every framework there is.

That is the challenge to would-be competitors for the general market: not only performance but also retaining programmability.

3 Likes

OK, here is what I get:…

You didn’t search with the quotes as I stated. Searching any of the content in links you provided for that phrase comes up empty as well.

The fact remains that the Edge Data Center using HCI that Tinker was apparently discussing has no relevance to this Nvidia AV use case, and thus is not a valid reason to invest in any of these companies.

I could go on for quite some time, but I loathe these types of personal attacks against Tinker.

I am sorry that you and Tinker feel that my posts contain personal attacks. I honestly don’t intend my factual corrections on technical matters to be attacks of any kind. If Tinker also wishes, I’ll just keep my mouth shut in the future and let any potential future inaccuracies stand, since you both don’t feel they affect any investing strategy.

6 Likes

Smorg,

Rigid arrogance is not productive. When I say something is something, it is something, and as you can ask Duma, I am happy to be corrected when wrong (which happens). But when it comes to the big picture I do not recall being wrong, at least as things stand at the time and space. When I specified that serverless does what Pivotal does I was correct. Both are methods to abstract the infrastructure to increase developer productivity. And yes, I know there are vast technical differences between the two and methodology differences, but the relevant point remained as to why they both existed. As an example.

On the point of edge data centers:

Remember I said data centers located at cellular phone towers would be the edge:

https://www.isemag.com/2018/02/towers-are-the-edge/

Please let me know how I was wrong in this regard. Btw, you do know that autonomous cars are not really a thing yet either, as they are at best in beta. You do know that before 2012 HCI was not a thing either, but now, 6 years later, it is everywhere. Autonomous cars will not be in beta for much longer, and edge micro data centers will not be far behind. That is one reason a start-up is designing servers for exactly this purpose as we speak.

And yes, as long as Nvidia GPUs are the primary processing technology, they will be at each and every one of these micro data centers. And that is how the world stands today. There will also be Intel CPUs, but it will be the GPUs that are the enabling technology, with the CPU as an ancillary necessity - unless Nvidia, like the start-up I mentioned in another post that says it has an architecture that processes directly without having to parse through the CPU, develops similar capabilities itself, if such an architecture proves to be valuable.

Let me know if you have any questions I can answer for you further in regard to edge data centers. And yes, btw, I did answer your question, and I have answered it again.

Tinker

1 Like

In case you did not read the entire story, or look at the graphic of the prototype edge data center to be placed at cell phone towers: The latter approach is the only one that makes sense financially and technically. It is already happening. EdgeMicro, for example, is in the process of implementing hundreds of these micro data center deployments in markets where the pain points are the most intense for end users, MNOs, and content providers. This represents a major new era for computing that is just as significant as the wireless revolution, and, once again, cell towers are at the center of this tech transformation.

In fact, our belief is that the cell tower industry has THE key ingredients for opening the door to this next big wave of the Internet. It’s a very good time to be in the wireless industry, because we will be making the future a reality with projects that transform cell towers into the central nervous system for Mobile Edge Computing worldwide.

But perhaps as you say I still have not answered your question.

Tinker

1 Like

I am happy to be corrected when wrong

Clearly not, as you continually twist your statements around to mean whatever corrections I supply. Here goes one more time, very specifically then:

• You stated that edge data centers were needed to process “the daily petabyte produced by one car.” However, the Nvidia document states not a petabyte per car per day, but PER YEAR.

  • You stated: “The data is so massive that it cannot be effectively analyzed through core data centers, and in many instances the latency is unacceptable, such as in autonomous machines.” There are at least three problems with this statement:
  1. Massive data processing is not within the realm of edge data centers.
  2. Autonomous machines operate without the need for connection to any data center, whether “edge computing” or “edge” or “micro” or any combination of words you might put together.
  3. You’re conflating the training use case with the operational use case.

The test vehicle generating lots of data is part of the training use case. That data is fed into a neural net for deep learning. This is not where latency matters, and it’s unlikely that the test vehicles transmit their sensor data via cellular towers (yes, you’re wrong on that, too). More likely the test vehicles are brought back to some central location and the ~5 TB of data per day transferred via swappable hard drive or some such high speed mechanism. The neural net runs in a core data center, not edge data centers. That’s Nvidia’s whole point. Note that it’s likely that there’s some pre-processing of the raw vehicle data before being sent to the neural net. That could be done in vehicle, but it also could be done in a different part of the central data center.
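
Back-of-the-envelope arithmetic on those data sizes (my own rough numbers, only to show the scale of the day-vs-year difference):

TB_PER_PB = 1000

# "Petabytes per year" per test vehicle works out to a few TB per day:
print(1 * TB_PER_PB / 365)   # ~2.7 TB/day for 1 PB/year

# Conversely, ~5 TB/day is on the order of a couple of petabytes per year:
print(5 * 365 / TB_PER_PB)   # ~1.8 PB/year

# A full petabyte per DAY would instead be ~365 PB/year -- roughly two orders
# of magnitude more than "petabytes per year."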

The operating use case is different. Here the sensor data is processed wholly within the vehicle by the results of the neural net training. Latency does matter, but since the sensor data is processed locally, only the speed of the on board computers matter. In the AV operational case, the data must be wholly processed locally, as reliance on any off vehicle processing would be dangerous. There are cases where data from V2V and V2X systems can enter into autonomous vehicle operation, but the data being sent is relatively small.

When I specified that serverless does what Pivotal does I was correct. Both are methods to abstract the infrastructure to increase developer productivity.

Really, you want to bring up a discussion that I let go days ago in a different thread here? Kind of telling in my view.

But when it comes to the big picture I do not recall being wrong, at least as things stand at the time and space.

In this thread alone, you have been wrong on many aspects of the Autonomous Vehicle use case under discussion. You were wrong by two orders of magnitude on the size of the data, and you confused training data processing with operational data processing in terms of size, latency requirements, data transmission requirements, and where the processing is performed. You conflated “edge computing” with “edge data center” and used a term that no one else uses. You were wrong to bring edge data centers and HCI and Nutanix Sherlock and Red Hat HCI into this Nvidia AV discussion since they don’t apply to that use case, despite you claiming they were 100% about this.

Rigid arrogance is not productive.

Time to hold up a mirror, Tinker.

Don’t worry, this post will be the last post from me in reply to a post from you. I truly hope your lucky streak of being wrong about technology yet right about investing continues.

You can have the last word here, and forever more from me.

14 Likes

Smorg, good bye okay.

Tinker

3 Likes

Time to hold up a mirror, Tinker.

Don’t worry, this post will be the last post from me in reply to a post from you. I truly hope your lucky streak of being wrong about technology yet right about investing continues.

You can have the last word here, and forever more from me.

Smorg:

I have been around Tinker for 18+ years and if there is one personality trait that I can easily say he personifies…“passion”.

IMO, your posts became a test of wills rather than a collaborative interchange. As you probably know, most higher end institutions and successful corporations seek people who are less competitive (need to be right) and are instead more collaborative.

Likewise, collaboration is the power of these boards, the crowdsourcing phenomenon.

Obviously you are a technical guy and read each word carefully, but sometimes in collaborative environments, it isn’t necessary to correct punctuation because the main theme is perfectly intact. For example, nothing you argued has changed the investment thesis for NVDA…in the big scheme.

Believe me when I say that Tinker greatly appreciates (and requests) people to shoot holes in his investment thesis…we have mutually played the devil’s advocate with ideas over the years…too numerous times to count…trying to find holes in any investment thesis.

One paradox we have learned over the past 20 years is that the most technically adept folks often are the least investment adept. I know it doesn’t make sense and I am not sure I have an answer why it is so, but I could give you numerous examples over the years (ISRG perhaps the best example, where numerous physicians told us it was BS and useless…some 7-8 multiples ago).

Maybe it is because investment specialists make broad strokes in their investment thesis and technical guys get bogged down in minutia. The strange aspect of this is that often they are BOTH right! But IMO, the broad strokes often wins…perhaps because holders of the stock also don’t have your level of technical expertise or perhaps because the issue in contention really didn’t matter in the big picture.

You and Tinker are better off collaborating together in future discussions…you and he will be the better for it. But I hope they won’t be tests of wills, because the minutiae drawn from those competitive discussions often yield no actionable investment intelligence.

Thanks for moving on from this discussion but I hope this post properly puts perspective in how we can all get the most from these boards including both you and Tinker. Let’s bury this hatchet quickly and get back to working in each other’s best interest.

Best:

Duma

62 Likes