When I started my professional career in 2012, at a small Berlin-based company called Pi-VR, I didn’t understand the importance of precision and accuracy in rendering 3D data. My background in 3D Animation and Game Design meant that, for me, the most important aspect of rendering was making it look “cool” rather than “precise” or “realistic.”
That changed when one of my colleagues at Pi-VR demoed VRED and showed me real-time caustics. The funny thing was that he seemed really excited about the grainy result, rendered at 1-2 frames per second. I didn’t yet understand how groundbreaking this was.
That was the start of my VRED journey. Today, 8 years later, I share the passion that my colleague had when talking about VRED and the visualization of digital prototypes—because I understand it. That is why I am extremely proud that we have reached the milestone this awesome team has been working towards for almost a decade: Noise-free Full Global Illumination in Real-time.
And I mean proper Real-time, with 30+ frames per second. That’s light years from the 1-2 frames per second that was revolutionary back in 2012.
Now, you might ask yourself, as I did back then: Why is this important?
Because reviewing digital prototypes has significant benefits over creating physical ones:
- Digital prototypes have the potential to save time and money.
- They enable more frequent iterations, allowing teams to generate more ideas.
- Because they are digital, it is quick and easy to review the product’s most recent state at any time.
We know that the two biggest challenges our customers face are 1) that digital representations need to look and behave as much like the physical product as possible, and 2) that the methods used to review physical prototypes need to carry over to the digital world and provide the same benefits.
Evaluating and reviewing a digital twin in motion may sound simple, but it was a significant problem in the past. When computers could not output enough high-quality frames in real time, you could not move the digital camera the way you would move your head in the real world, or watch the prototype move as it would in physical reality.
In the past, decisions stemming from evaluation and review were either based on pre-rendered still images of inadequate quality, or limited to physical prototypes. In the automotive world, this affects evaluation methods like gap analysis, perceived-quality reviews, reflection studies, interior ambient lighting, light design, and more.
Being able to evaluate digital prototypes in real time (in motion) has a huge impact on the design and engineering processes in use today. It gives designers and engineers immediate visual feedback and lets them study the interplay between a digital prototype and variables in its environment, like lights, materials, viewing angles, and shadows.
So today, I am delighted to announce that VRED 2021 supports GPU-accelerated raytracing.
We have been collaborating closely with NVIDIA to leverage their RTX GPUs in VRED. With dedicated RT Cores, RTX hardware is designed for real-time raytracing, and AI-accelerated denoising removes the last bit of grain in real-time scenarios. The result? A beautiful, clear image in real time.
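To illustrate why a denoiser is essential at these frame rates, here is a minimal, self-contained C++ sketch (purely illustrative, not VRED’s actual renderer). A raytracer is a Monte Carlo integrator, and its error shrinks only with the square root of the sample count; at 30 fps there is roughly a 33 ms budget per frame, enough for only a handful of samples per pixel. The toy integrand f(x) = x² below stands in for the rendering integral.

```cpp
// Minimal sketch: why real-time raytracing needs denoising.
// A path tracer is a Monte Carlo estimator, so its error falls as
// 1/sqrt(N). Here f(x) = x^2 stands in for the rendering integral.
// Purely illustrative -- not VRED's implementation.
#include <cmath>
#include <cstdio>
#include <initializer_list>
#include <random>

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    const double exact = 1.0 / 3.0;  // integral of x^2 over [0, 1]
    const int pixels = 10000;        // independent "pixels" to average over

    for (int spp : {1, 4, 16, 64, 256, 1024}) {
        double meanAbsError = 0.0;
        for (int p = 0; p < pixels; ++p) {
            double sum = 0.0;
            for (int s = 0; s < spp; ++s) {
                const double x = u(rng);
                sum += x * x;        // one Monte Carlo sample
            }
            meanAbsError += std::fabs(sum / spp - exact);
        }
        meanAbsError /= pixels;
        std::printf("%5d spp -> mean abs error %.5f\n", spp, meanAbsError);
    }
    // At 30 fps (~33 ms/frame) only a few spp fit into the budget,
    // so the residual error shows up as grain -- exactly what an
    // AI-accelerated denoiser is trained to remove.
    return 0;
}
```

Note how the error halves only when the sample count quadruples: brute-force accumulation alone cannot reach a clean image within the real-time budget, and that is the gap the AI-accelerated denoiser closes.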
This journey has not been easy, but it’s been worth it. Thanks go to our amazing team, partners, and all the contributors who have come together, determined to create the best visualization software the world has ever seen. I am so proud to be a part of this, and I’m already looking forward to reaching our next major milestone: Real-time Raytraced Accurate Virtual Reality.
Watch Michael Nikelsky, Sr. Principal Engineer, talk about “Bringing the Autodesk VRED Raytracer to the GPU” during NVIDIA’s free GTC on-demand session: https://www.nvidia.com/en-us/gtc/session-catalog/?search=s21344