I’m a bit confused about this whole announcement. It seems good for NVDA, but what it means for NET is unclear to me.
The applications for AI at the edge are limited as far as I can see. You need an app that requires:
a) extremely low latency, b) AI processing, and c) something that cannot be run on the user's device (either no capacity, or data sovereignty issues).
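To make the latency criterion concrete, here's a back-of-envelope sketch. All the numbers (round-trip times, inference times) are illustrative assumptions, not measurements — the point is just that a device too weak to run a model quickly on its own could still beat a distant central cloud via a nearby edge node:

```python
# Illustrative round-trip budgets for where AI inference runs.
# All figures are assumptions for the sake of the comparison.
ASSUMED_RTT_MS = {
    "on_device": 0,       # no network hop at all
    "edge_pop": 10,       # nearby point of presence (~10 ms RTT, assumed)
    "central_cloud": 80,  # distant cloud region (~80 ms RTT, assumed)
}

def total_latency_ms(network_rtt_ms: float, inference_ms: float) -> float:
    """Total response time = network round trip + model inference time."""
    return network_rtt_ms + inference_ms

# Assume the weak local device needs 200 ms to run the model,
# while a GPU-equipped server needs 20 ms:
for tier, rtt in ASSUMED_RTT_MS.items():
    inference = 200 if tier == "on_device" else 20
    print(f"{tier}: {total_latency_ms(rtt, inference):.0f} ms")
# on_device comes out at 200 ms, edge at 30 ms, central cloud at 100 ms
```

Under these assumed numbers the edge wins only when the device itself can't run the model fast — which is exactly criterion (c).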
I don’t think video games meet the grade. Afaik all multiplayer games are latency-bound by player-to-player communication, so moving AI to the edge wouldn’t remove the bottleneck.
The AI for autonomous vehicles is far off imo (in tech terms… 5+ years?), and in any case it’s unclear what edge AI processing would do for them. Why not just run the AI in the car, which has a bucketload more power available?
IoT applications are perhaps more suitable, if you imagine a very small device (probably battery powered) that cannot run the AI but also needs super-low latency. Drones would be an example: you don’t want to waste battery running an AI model when you need it to fly, and it needs to respond quickly.
eg: https://www.youtube.com/watch?v=9CO6M2HsoIA
Another example would be something like “Google Glass”, or Apple’s (not yet seen) AR/VR glasses, which cannot support AI processing on the device due to weight constraints.
Voice recognition, translation, and video filtering and processing could be suitable candidates, but Zoom has its own data centers. Security is the other obvious use case, although I’m not clear on the usefulness of NVDA’s GPUs for security apps.
The other use case that springs to mind is massively scaled IoT applications collecting enormous amounts of data that you need compressed (via the AI model) to save bandwidth to central servers.
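The bandwidth argument above can also be sketched with rough arithmetic. The device counts, data volumes, and reduction ratio below are all hypothetical assumptions, just to show how quickly edge-side summarisation shrinks the traffic that has to reach central servers:

```python
# Hedged sketch: an edge-hosted model condenses raw sensor streams into
# compact summaries before anything crosses the backbone to central servers.
# All figures are illustrative assumptions.
def central_bandwidth_gb_per_day(devices: int,
                                 raw_mb_per_device: float,
                                 summary_ratio: float) -> float:
    """GB/day forwarded centrally after edge-side reduction.

    summary_ratio: fraction of raw data surviving summarisation
    (e.g. 0.01 means the model keeps ~1% as events/embeddings).
    """
    return devices * raw_mb_per_device * summary_ratio / 1024

# Assume 100k devices each producing 500 MB of raw data per day:
raw = central_bandwidth_gb_per_day(100_000, 500, 1.0)       # no edge model
reduced = central_bandwidth_gb_per_day(100_000, 500, 0.01)  # edge summaries
print(f"raw: {raw:,.0f} GB/day, with edge model: {reduced:,.0f} GB/day")
# raw: 48,828 GB/day, with edge model: 488 GB/day
```

A ~100x reduction under these assumed numbers — which is the kind of saving that might justify paying for GPU inference at the edge.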
I don’t quite get the reaction, or how this will make any impact on NET’s business in the short to medium term. Failure of my imagination?
cheers
Greg