Cloudflare Partners with NVIDIA

https://www.cloudflare.net/news/news-details/2021/Cloudflare…

“Cloudflare Workers is one of the fastest and most widely adopted edge computing products with security built into its DNA,” said Matthew Prince, co-founder & CEO of Cloudflare. “Now, working with NVIDIA, we will be bringing developers powerful artificial intelligence tools to build the applications that will power the future.”

Long NET and NVDA

30 Likes

I don’t understand the tech about what this partnership means, but it must be important as Cloudflare is up 11% on the news. Apparently it will bring “machine learning” and “deep learning” to Cloudflare Workers at the Edge.
Saul

9 Likes

I don’t understand the tech about what this partnership means, but it must be important as Cloudflare is up 11% on the news. Apparently it will bring “machine learning” and “deep learning” to Cloudflare Workers at the Edge.
Saul,

While NVIDIA is not a “Saul” stock, and I bought it and sold it at a small loss when Bitcoin cratered, in AI it is the hardware that matters.

https://youtu.be/eAn_oiZwUXA

Jensen Huang, the CEO, makes the sales presentation for his new products in the 1-hour-and-45-minute keynote. I was a little disappointed that I would not be able to listen to the entire GTC conference, as there are something like 1,600 presentations. I will finish Mr. Huang’s message and attempt to hit the highlight presentations. Considering how much graphics processors make AI possible in the cloud, how much I have invested in cloud companies, and how much my career is tied to the cloud via telecommunications infrastructure, I consider it required education.

Cheers
Qazulight

6 Likes

NVDA announced a partnership with Cloudflare today. It looks like Cloudflare PoPs will maintain the AI models NVDA has developed, along with the Tensor Flow hardware Cloudflare has been using itself, and make all of it available to the developer as needed, milliseconds from the user!

Cloudflare is seamlessly solving for their security, performance, and reliability needs while NVIDIA provides developers with a broad range of AI-powered application frameworks including Jarvis for natural language processing, Clara for healthcare and life sciences, and Morpheus for cybersecurity.
The combination of NVIDIA accelerated computing technology and Cloudflare’s edge network will create a massive platform on which developers can deploy applications that use pre-trained or custom machine learning models in seconds. By leveraging the TensorFlow platform developers can use familiar tools to build and test machine learning models, and then deploy them globally onto Cloudflare’s edge network.
Previously machine learning models were deployed on expensive centralized servers or using cloud services that limited them to “regions” around the world. Together, Cloudflare and NVIDIA will put machine learning within milliseconds of the global online population, enabling high-performance, low-latency AI to be deployed by anyone. And, because the machine learning models themselves will remain in Cloudflare’s data centers, developers can deploy custom models without the risk of putting them on end-user devices where they might risk being stolen.
“As companies are increasingly data-driven, the demand for AI technology grows,” said Kevin Deierling, senior vice president of networking at NVIDIA. “NVIDIA offers developers AI frameworks to support applications ranging from robotics and healthcare to smart cities and now cybersecurity with the recently launched Morpheus.”
Internally, Cloudflare uses machine learning for a variety of needs including business intelligence, bot detection, anomaly identification, and more. Cloudflare uses NVIDIA accelerated computing to speed up training and inference tasks and will bring the same technology to any developer that uses Cloudflare Workers. :thinking::flushed:.
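
To picture the “familiar tools” workflow described above, here’s a minimal sketch: build and test a model with everyday TensorFlow, then export it as a SavedModel that a GPU-backed edge server could load. The model, data, and path are illustrative placeholders, not Cloudflare’s actual deployment API.

```python
# Minimal sketch of "build and test locally, deploy globally". Everything here
# is a placeholder; the announcement doesn't publish Cloudflare's actual
# deployment mechanism.
import numpy as np
import tensorflow as tf

# A tiny stand-in classifier; a real edge model would be trained on real data.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Train on random placeholder data just to exercise the pipeline.
x = np.random.rand(256, 8).astype("float32")
y = (x.sum(axis=1) > 4.0).astype("float32")
model.fit(x, y, epochs=3, batch_size=32, verbose=0)

# Export in the portable SavedModel format a GPU-backed server can serve.
tf.saved_model.save(model, "exported_edge_model")
```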

5 Likes

The main advantage of edge networks like the one NET operates is super-low latency (the delay in communication from user to network). This is particularly important in applications that require real-time communication (think multiplayer video games, adaptive autonomous driving technology that requires a network connection, and other connected AI applications). NVIDIA is big in AI. They like to think of themselves as providing the computing power behind AI applications. NVIDIA picking NET vs. somebody like FSLY is huge for NET. It not only validates the edge that edge networks have over traditional networks for these applications but specifically endorses Cloudflare’s approach. This is big.

https://www.nvidia.com/en-us/deep-learning-ai/products/solut…

https://heartbeat.fritz.ai/will-nvidia-gpus-push-ai-on-mobil…

39 Likes

This is particularly important in applications that require real-time communication (think:
- multiplayer video games,
- adaptive autonomous driving technology that requires a network connection
- and other connected AI applications).

Video games? Sure

  1. Autonomous driving – doubtful. The car has to be able to drive and be safe when there is no network, otherwise it is a failure, IMO.

  2. Adaptive autonomous driving? If you mean the car sees and learns something new and shares it back to the cloud – well, yes. But Teslas already do this, and you don’t need low latency for it. A new datapoint is generated, it goes back to the mother ship (perhaps at the end of the day), and over many days or more it is trained into the neural net model, tested, retested, and regression tested to make sure it does not break old test cases before it is released back to the fleet. Otherwise hackers could create all sorts of fake data points to attempt to cause havoc.

About the only things you’d want shared in real time from a car back to the cloud are local traffic conditions (speed), detours, etc. You don’t really need low-latency AI for this.

  3. Other. Yes, like voice recognition.

Mike

9 Likes

Hi Mike

Re autonomous driving. You’re right. The car should be able to decide on its own, but you can greatly enhance that decision making by feeding it data from cars driving ahead of it and other information in real time. It may seem like the distant future where cars communicate with each other, but Tesla is very much heading in that direction. To accomplish that, you absolutely need a super-low-latency network (both wired and wireless).

Below is a good write-up on that topic. Cloudflare alone may not be able to solve this problem. You need a fast 5G network to even get to Cloudflare’s network, but by making this move they are positioning themselves to benefit from these trends, which is something I very much love to see as an investor. Short term this will make them money in gaming, but in the not-too-distant future the applications are much broader and perhaps even more profitable (especially when combined with the high level of security that Cloudflare already provides, which addresses the security concerns you correctly raise). By the way, most modern cars you buy already have a wireless connection that can be hacked… big issue.

https://www.telekom.com/en/company/details/5g-network-as-fou…

8 Likes

V2V (vehicle-to-vehicle) and V2X (vehicle-to-everything) are the technologies for vehicles to communicate with each other and with local infrastructure. They have nothing to do with 5G cellular networks. Car to network to cloud and back to another car is much slower by definition.

V2V or V2X is still just optional “hint” info, such as letting someone know there is a smaller car behind a bigger car or a car approaching an intersection from behind a building. It isn’t doing AI inference processing, which I think is what the Cloudflare deal is about.

Mike

2 Likes

Sorry, that should have read: AI models, TensorFlow libraries, and hardware (GPUs), when I was bringing together what Nvidia was bringing to the Cloudflare points of presence. Making all of that available to developers milliseconds from the user is a big deal, IMO.

It’s kinda mind blowing for me to imagine what they might build :exploding_head:

Thanks,

Jason

2 Likes

When you want to crunch a lot of AI training data (think processing millions of images to teach a model to recognize any photo of a dog and not confuse it with a cat), GPU chips, usually from NVIDIA, are the go-to because they parallelize the crunching better than CPUs. I look at the use of edge networks as another level and way of parallelizing data collection and crunching, to train even bigger and better AI models in parallel.
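
To make the CPU-vs-GPU point concrete, here’s a quick, hypothetical benchmark (assuming TensorFlow and a machine with a GPU) that times the same large matrix multiply – the core operation in neural-net training – on both devices:

```python
# Time one large matrix multiply on CPU and (if present) GPU.
import time
import tensorflow as tf

a = tf.random.normal((4096, 4096))
b = tf.random.normal((4096, 4096))

def time_matmul(device: str) -> float:
    with tf.device(device):
        x, y = tf.identity(a), tf.identity(b)   # copy operands onto the device
        tf.linalg.matmul(x, y)                  # warm-up run (kernel launch, caching)
        start = time.perf_counter()
        tf.linalg.matmul(x, y).numpy()          # .numpy() forces the op to finish
        return time.perf_counter() - start

print(f"CPU: {time_matmul('/CPU:0'):.3f}s")
if tf.config.list_physical_devices('GPU'):
    print(f"GPU: {time_matmul('/GPU:0'):.3f}s")  # typically far faster, per the parallelism above
```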

This may be a really big deal, as training AI models takes massive amounts of data throughput and processing. The more data you can collect and provide when training, the better the result of the model will be (until diminishing returns set in, I suppose). I say “may” be a big deal because it doesn’t take a partnership for someone to make AI models. You can do it for free at home on any computer, depending on what you want to accomplish. If this can formalize a data pipeline by getting a lot of people testing and writing tools that make it easy to scale up with Cloudflare, making them a go-to choice as well for some set of applications, then that is a great thing. Time will tell.

7 Likes

I’m a bit confused about this whole announcement. It seems good for NVDA, but what it means for NET is unclear to me.

The applications for AI at the edge are limited as far as I can see. You need an app that requires:
a) extremely low latency, b) AI processing, and c) cannot be run on the user’s device (either no capacity, or data sovereignty issues).

I don’t think video games meet the grade. All multiplayer video games are limited by the latency of the communication between players, afaik.

The AI for autonomous vehicles is far off imo (in tech terms… 5+ years?) and in any case it’s unclear what edge AI processing would do for them. Why not just run the AI in the car with a bucketload more power?

IoT applications are perhaps more suitable, if you imagine a very small device (probably battery powered) that cannot run the AI but also needs super-low latency. Drone applications would be an example: you don’t want to waste your battery running an AI model when you need it to fly and respond quickly.

eg: https://www.youtube.com/watch?v=9CO6M2HsoIA

:wink:

Another example would be something like “Google Glass”, or Apple’s (not yet seen) AR/VR glasses, which cannot support AI processing on the device due to weight constraints.

Voice recognition, translation, and video filtering and processing could be suitable candidates, but Zoom has its own data centers. Security is the other obvious use case, although I’m not clear on the usefulness of NVDA’s GPUs for security apps.

The other use case that springs to mind is massively scaled IoT applications collecting enormous amounts of data that you need compressed (via the AI model) to save bandwidth to central servers.
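
One common way to implement that compression idea is an autoencoder, where only the encoder’s small output travels upstream. A minimal sketch, assuming TensorFlow and made-up sensor dimensions:

```python
# Sketch: compress 128-value sensor readings to 8 values before sending them on.
import numpy as np
import tensorflow as tf

SENSOR_DIM, LATENT_DIM = 128, 8   # 16x less data sent to central servers

encoder = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(SENSOR_DIM,)),
    tf.keras.layers.Dense(LATENT_DIM),
])
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(LATENT_DIM,)),
    tf.keras.layers.Dense(SENSOR_DIM),
])
autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")

# Placeholder readings; real training data would come from the device fleet.
readings = np.random.rand(1024, SENSOR_DIM).astype("float32")
autoencoder.fit(readings, readings, epochs=5, verbose=0)

# At the edge: encode a reading and ship only the compact latent vector.
compressed = encoder(readings[:1])
print(compressed.shape)  # (1, 8) instead of (1, 128)
```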

I don’t quite get the reaction, or how this will make any impact on NET’s business in the short to medium term. Failure of my imagination?

cheers
Greg

12 Likes

I don’t quite get the reaction, or how this will make any impact on NET’s business in the short to medium term. Failure of my imagination?

Nvidia simply has the fastest darn chips out there for processing tons of calculations to make software appear “as if” it is smart – hence the AI term, where coming to a conclusion based on tons of data that is updated regularly allows it to make good decisions. Nvidia is also buying the ARM processor business, and they are making a new chip that is several times faster than what is available today. That has not happened in a long time. Next to quantum computing this is a big step, and a reality, not a theoretical we’re-working-on-it concept.

As far as IoT, edge computing, AI, and all the other marketing terms thrown around go… A fast, affordable processor running in a car or airplane can make automated devices a reality without a connection. Such a device does not need a connection to a data center to work; however, it will most likely need the big data center so it can report the information it gathered, allowing the data center to update the devices doing these tasks and make them “appear smarter”. If the disconnected device does not get updates it will still function, though perhaps be more prone to mistakes or less efficient. I don’t think you will need a continuous high-speed connection to all automated devices for them to function.

https://venturebeat.com/2021/04/12/nvidia-unveils-grace-arm-…

“The new Grace is named after computing pioneer Grace Hopper, and it’s coming in 2023 to bring “10x the performance of today’s fastest servers on the most complex AI and high performance computing workloads,” according to Nvidia. That will make it attractive to research firms building supercomputers, of course, which the Swiss National Supercomputing Centre (CSCS) and Los Alamos National Laboratory are already signed up to build in 2023 as well.”

https://www.theverge.com/circuitbreaker/2021/4/12/22380065/n…

10 Likes

Hey Greg

Your skepticism, or confusion as you put it, seems valid to me.
Proponents of 5G have pushed this concept of low latency very hard to justify this particular dimension of 5G (there are other dimensions, e.g. higher bandwidth and a larger number of connections, but those are not relevant to the edge network argument), with some not-so-meaningful applications like remote surgery and self-driving cars as key drivers.

So I agree that these use cases for edge networking – and similarly drones and self-driving cars as use cases for AI on the edge network (not the device itself) – are easy to discredit.

However, I have seen a few other use cases, and also analogies for how technology and use cases evolve, that make me believe low-latency edge networks… and AI on that edge… will thrive, even if it takes a year or two from now to really see applications.

A couple of examples I have seen:

  1. AR for consumers… you are walking down a street in a new city and AR glasses show you some information about the building, cross street, or monument you are looking at… this is real time and latency sensitive and needs AI
  2. Similar use cases in a driving scenario… you are able to see a few cars ahead of you… or get audio commentary on a park you are passing by, as if a friend sitting in the passenger seat is looking things up and sharing interesting information… needs low latency and AI
  3. An industrial scenario… say on a construction site or a remote site, a technician is performing a task that’s aided by instant visualization of relevant information…

    And such a list can go on… some of these may fall by the wayside… while a few more emerge…

So while I would agree that this partnership may not be as meaningful in the near term as the share price rise suggests, on a long-term basis it has the potential to have a big impact on Cloudflare… and BTW, Nvidia choosing Cloudflare for this partnership is a strong statement in my opinion.

9 Likes

Hi Nilvest,

I hope you don’t mind a quick question :person_raising_hand:???

Wouldn’t the Nvidia/Cloudflare partnership, with Nvidia already having AI models for robotics and Cloudflare positioned at the edge where Nvidia can now provide context in near real time, create a leap in robotics performance?

I believe robotics is the main exponential grower that has been waiting for such a partnership.

I appreciate your expertise and consideration,

Thanks,

Jason

A Fun use case.

Bringing AI to the edge with NVIDIA GPUs
04/13/2021

…As a demonstration of a real AI-based application running on Cloudflare’s infrastructure, the team in the Cloudflare Lisbon office built a website: nataornot.com. Upload a picture of food and it’ll tell you whether it’s one of Portugal’s delicious pasteis de nata (an egg custard tart pastry dusted with cinnamon) or not.

The code used a TensorFlow model built from thousands of pictures of pasteis and other foods, which runs on a Cloudflare server with an NVIDIA A100 Tensor Core GPU in it. If you want to build your own pastel de nata recognizer, we’ve open sourced the TensorFlow model here.

https://blog.cloudflare.com/workers-ai/
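
For the curious, the server-side check behind a demo like that might look roughly like the sketch below. It assumes the open-sourced model is a standard Keras image classifier; the file name, input size, and output convention are guesses for illustration, not taken from the actual repo.

```python
# Hypothetical "is this a pastel de nata?" check on the server.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("nata_model")  # hypothetical local path

def is_pastel_de_nata(image_path: str, threshold: float = 0.5) -> bool:
    # Resize the uploaded photo to the input size the model expects (assumed 224x224).
    img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
    x = tf.keras.utils.img_to_array(img)[np.newaxis, ...] / 255.0
    score = float(model.predict(x, verbose=0)[0][0])  # assumed single sigmoid output
    return score >= threshold

print(is_pastel_de_nata("upload.jpg"))
```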

Best, kevin c
long of NET

1 Like

Greg,

How can one expect to run a full surveillance state without the fastest chips, AI, and edge computing?

I can’t think of many examples today that require low-latency access to high-powered AI capabilities, but one might be real-time natural language translation. But the point of building such capabilities isn’t limited to addressing today’s needs; it’s to spark imagination for tomorrow. If we as a society hadn’t continuously pushed the frontiers without the requirement of immediate payback, we might still be limited to a few dozen IBM mainframes in massive data centers…

NET is pushing capabilities now that will expand their TAM later. I kinda like that.

Tiptree, Fool One guide and Market Pass home Fool

3 Likes

Wouldn’t the Nvidia/Cloudflare partnership, with Nvidia already having AI models for robotics and Cloudflare positioned at the edge where Nvidia can now provide context in near real time, create a leap in robotics performance?

I believe robotics is the main exponential grower that has been waiting for such a partnership.

Hi Jason,

Anything that is autonomous (moving on its own) – i.e. self-driving cars, robots (in the field / outdoors), and drones – needs to be self-sufficient: it needs AI implemented right on the device, both to avoid dependence on the network and to avoid the power consumption of communicating with the network…

However, for sure there will be a few cases where robotics would benefit from additional low-latency AI in the edge network.

BTW – there are applications where AI is implemented in layers… some local decisions on the device, some more context-related smarts at the edge, and the high-performance smarts done in the cloud. This whole thing is emerging, and we will see exploding use cases in so many walks of life.

2 Likes

NET is pushing capabilities now that will expand their TAM later. I kinda like that.

The CEO said as much in his published remarks. He noted that he couldn’t wait to see what an army of developers might come up with now that they had access to low-latency cloud storage of their data and the use of higher-speed AI/ML.

IMHO this is the real value of the partnership between NVIDIA and NET. All the debate over self-driving cars, 5G, and so on seems to me not to focus on the main point.

cheers

draj

3 Likes

NET is pushing capabilities now that will expand their TAM later. I kinda like that.

Further to the point of the preceding post. I just happened upon this tweet.

Matthew Prince :sun_behind_large_cloud:
@eastdakota
· Apr 13
Thrilled to partner with NVIDIA to bring AI to the edge!! @Cloudflare Workers is the largest, fastest, most used edge computing platform. With NVIDIA’s hardware running at our edge we open a whole new class of applications for developers. #DeveloperWeek https://blog.cloudflare.com/workers-ai/

" a whole new class of applications" therein lies the promise and the opportunity.

cheers

draj

6 Likes

A couple of examples I have seen:
1. AR for consumers… you are walking down a street in a new city and AR glasses show you some information about the building, cross street, or monument you are looking at… this is real time and latency sensitive and needs AI

Huh?

https://www.acronymfinder.com/AR.html

AR Arabic
AR Administrative Record
AR Argentina
AR Army Regulation
AR Amateur Radio
AR Annual Report
AR Accelerated Reader
AR Adventure Racing

… and so forth for two more screens.

Us slow students need a little hand holding.

CNC