https://blogs.nvidia.com/blog/2018/08/20/turing-vr-full-imme…
Turing’s RT Cores can also simulate sound using the NVIDIA VRWorks Audio SDK. Today’s VR experiences already deliver audio that is accurate in terms of location, but they lack the compute to adequately model how an environment’s size, shape and material properties, especially dynamic ones, affect that sound.
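Geometric acoustics of this kind traces sound along ray paths: a reflection adds a delay proportional to the path length and an attenuation from distance and surface absorption. The sketch below illustrates the general idea with the classic image-source method for a single first-order reflection; the function names, the flat absorption coefficient and the 1/r falloff are illustrative assumptions, not VRWorks Audio’s actual API or model.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def reflection_path_length(source, listener, wall_y):
    """Length of the first-order reflection path off a flat wall at
    y = wall_y, via the image-source method: mirror the source across
    the wall and take the straight-line distance to the listener."""
    sx, sy = source
    image = (sx, 2 * wall_y - sy)  # mirrored (image) source
    return math.dist(image, listener)

def echo(source, listener, wall_y, absorption=0.3):
    """Delay in seconds and relative amplitude of the reflected ray,
    assuming 1/r distance attenuation and a single flat absorption
    coefficient for the wall material (both simplifying assumptions)."""
    d = reflection_path_length(source, listener, wall_y)
    delay = d / SPEED_OF_SOUND
    amplitude = (1.0 - absorption) / d
    return delay, amplitude

# Example: source and listener 4 m apart, 1 m above a hard floor.
delay, amp = echo((0.0, 1.0), (4.0, 1.0), wall_y=0.0)
```

A full simulator would trace many such paths per frame, including higher-order bounces and diffraction, which is the workload the RT Cores accelerate.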
Deep learning, a method of GPU-accelerated AI, has the potential to address some of VR’s biggest visual and perceptual challenges: graphics can be further enhanced, positional and eye tracking can be improved, and character animations can be made more true to life.
The Turing architecture’s Tensor Cores deliver up to 500 trillion tensor operations per second, accelerating inference and enabling the use of AI in advanced rendering techniques to make virtual environments more realistic.
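A “tensor operation” in this context is essentially a fused matrix multiply-accumulate, D = A × B + C, computed on small matrix tiles; this is the primitive behind both neural-network inference and AI-assisted rendering. A minimal Python sketch of that operation (the tile size and function name are illustrative, not NVIDIA’s hardware interface):

```python
def matmul_accumulate(A, B, C):
    """D = A @ B + C on an n x n tile: the fused multiply-accumulate
    that a Tensor Core performs in hardware on small matrix tiles,
    here written out in plain Python for clarity."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) + C[i][j]
             for j in range(n)]
            for i in range(n)]

# Example: accumulate a product into an existing tile.
A = [[1, 0], [0, 1]]          # identity tile
B = [[2, 3], [4, 5]]
C = [[1, 1], [1, 1]]          # accumulator
D = matmul_accumulate(A, B, C)
```

Counting one multiply-accumulate per element per inner-loop step is how throughput figures like “trillions of tensor operations per second” are tallied.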
Ray tracing can simulate sound waves to make sound more realistic in VR.