NVDA: VR full immersion improvements


Turing’s RT Cores can also simulate sound, using the NVIDIA VRWorks Audio SDK. Today’s VR experiences provide audio that is accurate in terms of location, but they can’t meet the computational demands of modeling an environment’s size, shape, and material properties, especially when those properties change dynamically.
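To give a rough idea of what ray-traced audio means in practice, here is a minimal sketch (plain Python, not the actual VRWorks Audio API): each ray path from a sound source to the listener contributes an echo whose delay comes from the path length and whose loudness falls off with distance and with the absorption of every surface the ray bounced off. The function name and the absorption value are illustrative assumptions, not values from the SDK.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature


def path_contribution(path_length_m, absorptions):
    """Delay and gain for one traced sound path.

    `absorptions` holds the absorption coefficient (0..1) of each
    surface the ray reflected off on its way to the listener.
    """
    delay_s = path_length_m / SPEED_OF_SOUND
    gain = 1.0 / max(path_length_m, 1.0)  # inverse-distance falloff
    for a in absorptions:
        gain *= (1.0 - a)  # energy lost at each bounce
    return delay_s, gain


# Direct path: 5 m, no bounces.
direct = path_contribution(5.0, [])
# One-bounce path: 12 m total, reflecting off concrete (absorption ~0.02,
# an assumed illustrative value).
reflected = path_contribution(12.0, [0.02])
print(direct, reflected)
```

A real implementation traces thousands of such paths per frame against the scene geometry, which is exactly the workload RT Cores accelerate.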

Deep learning, a method of GPU-accelerated AI, has the potential to address some of VR’s biggest visual and perceptual challenges: graphics can be further enhanced, positional and eye tracking can be improved, and character animations can be made more true to life.

The Turing architecture’s Tensor Cores deliver up to 500 trillion tensor operations per second, accelerating inferencing and enabling the use of AI in advanced rendering techniques to make virtual environments more realistic.
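For context on what a "tensor operation" is here: the building block a Tensor Core accelerates is a small fused matrix multiply-accumulate, D = A·B + C, with low-precision (FP16) inputs and a higher-precision accumulator. This NumPy sketch is purely illustrative of that operation's shape and precision mix, not CUDA code:

```python
import numpy as np

rng = np.random.default_rng(0)

# FP16 inputs, as a Tensor Core consumes them.
A = rng.standard_normal((4, 4)).astype(np.float16)
B = rng.standard_normal((4, 4)).astype(np.float16)
C = np.zeros((4, 4), dtype=np.float32)

# Multiply in FP16, accumulate in FP32 to limit rounding error --
# the same precision scheme Tensor Cores use for inference workloads.
D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D.dtype, D.shape)
```

Neural-network inference is dominated by exactly these matrix multiplies, which is why dedicated hardware for them makes AI-assisted rendering feasible in real time.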

Ray tracing can simulate sound waves to make sound more realistic in VR.


Apologies if this is almost off-topic, but it relates to sound in general and might be something from real life that NVDA will be trying to emulate with its VRWorks Audio SDK. It is also pretty interesting.

The Sound Traveler’s YouTube page (same guy who does the “Smarter Every Day” videos on YouTube):

I came across this page following the SpaceX Falcon Heavy launch.

Again, apologies for being only tangentially on-topic, but I recommend checking out at least a few of these videos sometime (with headphones on, to get the full experience). They should help give an idea of what NVIDIA will be trying to simulate with this offering.