We’re here in Bangkok, Thailand to attend NVIDIA’s briefing on the new RTX 20-series of graphics cards based on the Turing architecture. Plenty of insights were shared here, and we’re highlighting the ones we find important and intriguing. The upgrade from the previous generation to the RTX 20-series isn’t purely in raw performance, but in other aspects as well.
If we tried to compare what the new Turing architecture brings to the table against anything else, it wouldn’t work. NVIDIA is including some Quadro features in consumer-grade graphics cards – and no one else is doing this. There is no baseline to compare the RTX 20-series against – they are the pioneer.
For those who want to experience the entire briefing, you can watch the 3-part video on our partner’s YouTube channel. Part 1 is about the Turing architecture, part 2 is the GeForce Experience, and part 3 is a demo with Q&A.
We also sat down with Tech Critter and took a look at mesh shading for ourselves. It’s magnificent, to say the least.
The Turing platform
Let’s first take a look at the RTX platform to see which aspects NVIDIA is focusing on in this generation of consumer graphics cards.
From the image above, we can see that the Turing platform focuses on a few new things this time around – first, the RT cores for ray-tracing, then AI, which is handled by the Tensor cores. We still have the usual raster graphics pipeline and compute with CUDA.
With a combination of all these features, NVIDIA’s new RTX 20-series of graphics cards is doing something that’s now referred to as “hybrid rendering”. It still uses the traditional raster graphics pipeline, but mixed with real-time ray-tracing and enhanced with DLSS.
First off, ray-tracing isn’t new in the world of computing. The reason NVIDIA hyped ray-tracing so much that they rebranded from GTX to RTX is that real-time ray-tracing has never been done before. The keyword here is real-time.
The concept of ray-tracing is to trace where and what rays of light interact with (hence the name) and see where and how light reflects and refracts off surfaces. NVIDIA’s example is a room with differently colored walls and geometric shapes with different types of surfaces.
So how exactly does NVIDIA do ray-tracing in real time? In concept, NVIDIA backtracks each ray of light from the “camera” to the light source, then renders the traced ray’s interactions with the surfaces it hit.
You may have heard of big-budget animated movies taking weeks or months to render – and that’s true, since each frame has many, many rays to trace to create a photorealistic image. Each ray bounces multiple times as well.
NVIDIA’s magic sauce here is picking and choosing which rays to trace, with fewer bounces. With fewer rays to trace within a single frame, it can ray-trace a lot more frames per second. Since it can maintain a relatively high frame rate with ray-tracing enabled, it’s technically “real-time”.
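To make the idea concrete, here’s a toy sketch of backwards ray-tracing with a hard cap on bounces – the same budget trick (fewer rays, fewer bounces) described above. This is not NVIDIA’s actual RT-core pipeline; the scene, names, and numbers are purely illustrative.

```python
# Toy backwards ray tracer: a ray starts at the "camera" and is traced
# back into the scene, reflecting off a single sphere until it escapes
# or the bounce budget runs out.
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def hit_sphere(origin, direction, center, radius):
    """Distance t to the nearest intersection along the ray, or None."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 1e-6 else None

def trace(origin, direction, max_bounces):
    """Follow a ray backwards from the camera, reflecting off the sphere
    until the bounce budget is spent; returns the bounces used."""
    center, radius = (0.0, 0.0, -3.0), 1.0  # illustrative scene
    bounces = 0
    while bounces < max_bounces:
        t = hit_sphere(origin, direction, center, radius)
        if t is None:
            break  # ray escaped the scene (reached the "light")
        hit = tuple(o + t * d for o, d in zip(origin, direction))
        normal = normalize(tuple(h - c for h, c in zip(hit, center)))
        dot = sum(d * n for d, n in zip(direction, normal))
        # Mirror reflection: d - 2(d.n)n
        direction = tuple(d - 2 * dot * n for d, n in zip(direction, normal))
        origin = hit
        bounces += 1
    return bounces

ray = normalize((0.0, 0.0, -1.0))  # straight from the camera at the sphere
print(trace((0.0, 0.0, 0.0), ray, max_bounces=4))
```

Lowering `max_bounces` is exactly the trade-off in play: each frame gets cheaper to trace, at the cost of some lighting accuracy.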
However, as of the time of publication there are no games that support ray-tracing yet. That’s actually not NVIDIA’s or game developers’ fault. DirectX doesn’t offer ray-tracing yet, and we’ll have to wait for the next Windows 10 update, which will include DXR, or DirectX Raytracing.
Deep Learning AI with Tensor cores
AI is a big umbrella term – and NVIDIA knows that. From self-driving cars to videos and photos, AI is implemented to help enable and enhance the experience.
Another part of NVIDIA’s architecture is the new NGX. It creates a “ground truth”, puts it into the framework to train an AI model, then tests the model. Rinse and repeat the training and testing process and you have yourself a working AI model.
Any game that has the NGX API integrated into its engine can take advantage of the trained AI models as well.
But it’s not entirely limited to games. There’s super resolution as well, which upscales an image from its original resolution while retaining clarity and detail. Traditional methods result in a rather blurry image – but through deep learning super resolution, the image shown to us was much sharper.
We also asked if something like Adobe Photoshop will implement NVIDIA’s method of super resolution for upscaling photos – and we were told that something similar will definitely come to the market soon.
We also saw a demo where segmentation is applied to a picture: it can cut people out of an image and fill in the blanks with the background. Think of it as an advanced version of Photoshop’s content-aware fill.
NVIDIA also showed us that a slow-motion video can be made even slower by using deep learning to interpolate more frames in between. Perhaps we’ll see interpolation of 30FPS videos to 60FPS too.
Maybe we can even combine the two and convert a 360p @ 30FPS video to 4K @ 60FPS with deep learning.
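To illustrate what frame interpolation means, here’s a naive sketch that doubles a clip’s frame rate by inserting the average of each pair of neighbouring frames. NVIDIA’s deep-learning version synthesizes far more convincing in-between frames; this linear blend (with made-up tiny “frames”) only shows the 30FPS-to-60FPS idea.

```python
# Naive frame interpolator: inserts one blended frame between every
# pair of original frames, roughly doubling the frame rate.
def interpolate(frames):
    """frames: list of equal-length pixel rows (lists of floats)."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        # Synthesized in-between frame: per-pixel average of neighbours.
        out.append([(x + y) / 2 for x, y in zip(a, b)])
    out.append(frames[-1])
    return out

clip = [[0.0, 0.0], [1.0, 1.0], [0.0, 2.0]]  # three tiny toy "frames"
print(interpolate(clip))  # 3 frames in, 5 frames out
```

A learned interpolator would estimate motion instead of blending, which is why it avoids the ghosting this averaging approach produces on fast movement.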
Mesh Shading & Variable Rate Shading
In a game where the depth of each object is known, NVIDIA manipulates the level of detail of each object in the scene according to the distance between the “camera” and the object. NVIDIA is calling it mesh shading.
The idea mirrors real life. Looking at a piece of sandpaper from 10cm away, we can see the individual grains of sand. Looking at the same sandpaper from 1m away, we can’t make out the fine grains anymore – hence the drop in perceived level of detail. By using segmentation to perceive the depth of objects in a scene, the game can change each object’s level of detail according to that distance.
If we have two identical pieces of sandpaper in the scene – one at 10cm and another at 1m – NVIDIA can perform segmentation, determine the depth of each one, and render them at different levels of detail, enhancing performance.
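In the spirit of the sandpaper example, here’s a minimal level-of-detail picker: the further an object sits from the camera, the coarser the mesh we’d render. The thresholds, LOD names, and scene are all made up for illustration – real engines use many more levels and per-mesh metrics.

```python
# Pick a level of detail purely from camera distance (in metres).
def pick_lod(distance_m):
    if distance_m < 0.25:  # 10cm-ish: you can still see the grains
        return "high"
    if distance_m < 2.0:   # 1m-ish: grains are gone, shape remains
        return "medium"
    return "low"           # far away: a coarse mesh is enough

scene = {"sandpaper_near": 0.10, "sandpaper_far": 1.0, "wall": 8.0}
for name, dist in scene.items():
    print(name, "->", pick_lod(dist))
```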
There’s another similar implementation called Variable Rate Shading as well, where an image is segmented and different levels of detail are rendered within a single frame. Instead of using depth data, Variable Rate Shading picks out the parts of the frame that need more shading.
Deep Learning Super Sampling (DLSS)
DLSS is tightly integrated into NVIDIA’s ecosystem. To utilize DLSS, game developers first have to take a part of the game and send it to NVIDIA. Then, similar to NGX, it’s used to create a model that becomes the “ground truth”.
The “ground truth” is a super-high-quality image rendered with 64 jittered samples. The super sampling “formula” is then condensed and sent to users to be applied to the game. As of now, there are 15 more games confirmed to be supporting DLSS. Surprisingly, PUBG is one of them.
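To show what “64 jittered samples” buys you, here’s a sketch that renders the same pixel 64 times at tiny random sub-pixel offsets and averages the results. The toy scene function and numbers are our own stand-ins; DLSS trains a network against frames produced in this supersampled fashion.

```python
# Jittered supersampling: average many sub-pixel samples per pixel to
# get a smooth, anti-aliased "ground truth" value.
import random

def scene(x, y):
    """Toy 'renderer': a hard black/white edge at x = 0.5."""
    return 1.0 if x > 0.5 else 0.0

def supersample(px, py, samples=64, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        jx = px + rng.random()  # jittered position inside the pixel
        jy = py + rng.random()
        total += scene(jx, jy)
    return total / samples      # averaged, anti-aliased pixel value

# The pixel straddling the edge comes out grey rather than aliased.
print(round(supersample(0.0, 0.0), 2))
```

Doing this for every pixel of every frame is far too slow for games, which is exactly why the expensive supersampled images serve as offline training targets instead.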
DLSS is claimed to provide image quality comparable to TAA while being much more efficient. Think of it as a new method of anti-aliasing. NVIDIA also offered us an insight: DLSS can be implemented in any game that has TAA.
RTX Games That Are Coming Soon
Currently, we are seeing 28 games that will be categorized as RTX games. NVIDIA did clarify that any game that uses part of the new RTX 20-series feature set will be considered an RTX game. Be it just ray-tracing or only DLSS, they are RTX games.
New GeForce Experience
Yes – the GeForce Experience will be getting an upgrade as well. And it’s coming to hundreds of games.
The biggest upgrade is Ansel RTX. As the name suggests, it’s part of Ansel’s feature set, and it can now take high-resolution images with AI up-res and have the image completely ray-traced.
Ansel RTX’s ray-tracing is different and looks even more realistic – solely because capturing a screenshot via Ansel isn’t real-time, so ray-tracing can take its own sweet time.
By removing the time limit on ray-tracing, the RTX 20-series cards can ray-trace the entire scene with higher accuracy, tracing more rays with more bounces.
There’s also a new set of filters and features to play around with in Ansel. There’s a greenscreen filter that lets you superimpose a greenscreen at a selected depth within a game’s scene, stickers to make the scene funny, and letterbox aspect-ratio crops. We honestly expect more memes with the new greenscreen filter.
Ansel itself will also work with over 200 titles, as some features do not need the Ansel SDK to be implemented in the game. For games without the Ansel SDK, it relies heavily on AI for its super resolution and HUD-removal algorithms.
Is that all?
No, definitely not. What we talked about here today merely scratches the surface, as NVIDIA’s new Turing architecture brings many features from the Quadro series into consumer graphics cards. There are many things we can do with deep learning – not just games, photos, and videos. Perhaps noise removal for audio as well. And remember – there’s ray-tracing for audio too.
Speaking of ray-tracing, we do hope that the Adobe Creative Cloud suite of apps will utilize ray-tracing in a more newbie-friendly way as well. I have personally encountered problems with content-aware fill, and ray-tracing can definitely help out – if I can map the photo in 3D space, that is.
Perhaps one day, deep learning can do that. By mapping an object in 3D space, then superimposing it on a photograph and ray-trace to fix the imperfections. The possibility is there, and still unexplored.
To enjoy the latest games with the greatest tech, you’ll need NVIDIA’s RTX 20-series of graphics cards. Here are the prices and links to buy them over at Lazada and Amazon: