NVDA: An analytics firm using GPUs

https://blogs.nvidia.com/blog/2018/10/03/omnisci-formerly-ma…

Companies that have the Ferrari (aka NVDA GPUs) rather than the VW (any other compute chip) have an advantage in data analytics. NVDA is ground zero for investing in many of the trends driving technology.

Chris

5 Likes

Interesting article. OmniSci seems to be a competitor to AYX (data analytics) and NEWR (infra monitoring).

“Lightning-fast analytics from OmniSci are powering faster, and financially impactful, business decisions. Verizon, for example, uses OmniSci to process billions of rows of communications data to monitor network performance in real time, improving customer service and optimizing field service decisions.”

3 Likes

Really interesting stuff.

“Analytics and data science professionals are realizing there’s this new architecture emerging into the mainstream,” said Mostak.

OmniSci can be installed on-premises, and it runs on AWS and Google Cloud, harnessing NVIDIA GPUs. In March, the startup launched its own GPU-accelerated analytics-as-a-service cloud at our GPU Technology Conference. OmniSci Cloud makes the world’s fastest open source SQL engine and visual analytics software available in under 60 seconds, from a web browser.
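
If you're curious what that looks like from the client side, here's a minimal sketch using pymapd, OmniSci's open source Python client. The table name, credentials, and query below are hypothetical placeholders, not from the article:

```python
# Minimal sketch: querying OmniSci's GPU-resident SQL engine from Python
# via pymapd (OmniSci's open source DB-API client). The table, schema,
# and credentials here are hypothetical placeholders.
from pymapd import connect

# 6274 is OmniSci's default binary-protocol port.
con = connect(user="admin", password="changeme",
              host="localhost", port=6274, dbname="omnisci")

cur = con.cursor()
# A scan-and-aggregate like this is the kind of query the GPU engine
# runs interactively over billions of rows.
cur.execute("""
    SELECT carrier,
           COUNT(*)        AS events,
           AVG(latency_ms) AS avg_latency
    FROM network_events        -- hypothetical table
    GROUP BY carrier
    ORDER BY events DESC
    LIMIT 10
""")

for row in cur:
    print(row)
```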

No wonder Nvidia’s data center business is booming. Look at what they’re enabling here for the hyperscale cloud titans. The build-out of all this compute has enabled so many companies to tap into resources previously unavailable at any cost, scale out in a second as needed, and adapt as new technology emerges. Amazing.

It’s really a revolution. An economy unto itself. You know, two years ago there was all this talk about how GPUs weren’t ideal for training and would be overtaken in short order by ASICs or FPGAs or some other moonbeam technology. With Volta, Nvidia has put the training question to bed and owns it now. I believe Turing and the T4 have done the same for inference.

Xilinx just released its next “GPU killer” FPGA, which is supposed to be the “fastest data center inference accelerator on the market”.

https://finance.yahoo.com/news/xilinx-launches-worlds-fastes…

But if you check out the chip’s specs, it’s not so rosy.

https://www.xilinx.com/support/documentation/selection-guide…

So the lower-end card does 18 TOPS at INT8 (the precision where the majority of inference happens); the top-end card does 33 TOPS at INT8. Both cards are rated at 225 W.

Sure, compared to a P4 that does 22 TOPS at INT8 you might be able to spin a favorable comparison, but put it up against Nvidia’s inference-specialized T4 and there is no comparison at all.

https://www.nvidia.com/en-us/data-center/tesla-t4/

130 TOPS of INT8, on 75 W. And it has really impressive throughput across a wide range of inference precisions: FP32, FP16, INT8, and INT4. Very versatile, and because of CUDA it works with every type of application, whereas an FPGA needs reprogramming for different workloads. Time is money to hyperscalers.
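
To put numbers on that efficiency gap, here's the back-of-the-envelope math. The INT8 TOPS figures are as quoted above; the P4's roughly 75 W TDP is my addition, not from the spec links:

```python
# Back-of-the-envelope INT8 efficiency comparison from the figures above.
# The P4's ~75 W TDP is an assumption on my part; the rest are as cited.
cards = {
    "Xilinx low-end": (18, 225),   # (peak INT8 TOPS, watts)
    "Xilinx top-end": (33, 225),
    "Nvidia P4":      (22, 75),
    "Nvidia T4":      (130, 75),
}

for name, (tops, watts) in cards.items():
    print(f"{name:14s} {tops / watts:5.2f} TOPS/W")

# Prints roughly 0.08, 0.15, 0.29, and 1.73 TOPS/W respectively:
# at peak INT8, the T4 is ~12x more power-efficient than Xilinx's top card.
```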

It’s hard to see how NVDA is not going to be a dominant force here in the next wave.

Darth

6 Likes