The march of technology is relentless, and nowhere is it more apparent than in graphics hardware. Every year, cards get significantly faster and bring with them a new set of acronyms for exotic rendering tricks.
Look through the visual settings of any PC game and you’ll find a word salad containing delicious nuggets like MSAA, FXAA, SMAA, and WWJD. Okay, maybe not that last one.
If you are the proud owner of a new Nvidia GeForce RTX card, you can now also enable something called DLSS, short for Deep Learning Super Sampling. It’s one of the headline next-gen hardware features of Nvidia’s RTX cards.
At the time of writing, only these cards have the necessary hardware for DLSS operation:
- RTX 2060
- RTX 2060 Super
- RTX 2070
- RTX 2070 Super
- RTX 2080
- RTX 2080 Super
- RTX 2080 Ti
The hardware in question is the “tensor” core, and each model has a different number of these specialized processors.
Tensor cores are designed to accelerate machine learning tasks, and DLSS is a prime example. If you are not using DLSS, that part of the card sits idle. In other words, you are not tapping the full power of your shiny new GPU if DLSS is available but switched off.
But that’s not all. To understand the value DLSS brings to the table, we need to briefly highlight a few related concepts.
A quick primer on internal resolution and upscaling
Modern TVs and monitors have a so-called “native” resolution, which simply means the screen has a fixed number of physical pixels. If the image you are displaying differs from that exact resolution, it must be “scaled” up or down to match.
So, for example, if you send an HD image to a 4K display without any processing, it will look blocky and jagged, as if you’ve zoomed in too far on a digital photo. In practice, however, HD video looks great on a 4K TV, albeit slightly less sharp than native 4K video. That is because the TV has hardware known as an “upscaler” that processes and filters the lower-resolution image to make it look acceptable.
The problem is that the quality of upscaling hardware varies greatly between display makes and models, which is why GPUs often ship with their own scaling technology. To get a feel for what an upscaler actually does, consider the toy sketch below.
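Here is a minimal Python illustration of the two simplest scaling filters. The function names are my own, and this is a toy sketch, not what any TV or GPU actually ships; real upscalers use far more sophisticated filtering. It simply shows why the choice of filter changes how the result looks.

```python
# Toy illustration of upscaling: the same low-res image, enlarged two ways.
# A simplified sketch; real display and GPU upscalers use far more
# sophisticated filters than either of these.
import numpy as np

def upscale_nearest(img: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbor: copy each pixel into a factor-by-factor block.
    This gives the hard-edged, 'blocky' look of naive zooming."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def upscale_bilinear(img: np.ndarray, factor: int) -> np.ndarray:
    """Bilinear: blend the four nearest pixels for a smoother (softer) result."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)   # sample positions in source space
    xs = np.linspace(0, w - 1, w * factor)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bottom = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bottom * wy

low_res = np.array([[0.0, 1.0],
                    [1.0, 0.0]])        # a tiny 2x2 grayscale "image"
print(upscale_nearest(low_res, 2))      # hard 4x4 blocks
print(upscale_bilinear(low_res, 2))     # smoothly blended 4x4 gradient
```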
“Pro” consoles designed for 4K output can deliver a native 4K image, in which case there is no display-side scaling at all. This means game developers have complete control over the quality of the final image.
However, most console games are not rendered at native 4K resolution. They use a lower “internal” resolution, which reduces the load on the GPU. That image is then scaled up by the console’s own scaling technology to look as good as possible on a high-resolution screen.
DLSS is essentially a sophisticated take on the same idea: the PC renders the game at a lower-than-native resolution, and DLSS then scales the image up for the connected display. In theory, this translates into significant performance gains.
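Some quick back-of-the-envelope math shows where those gains come from. The numbers below are illustrative assumptions (a 1440p internal resolution on a 4K display), and GPU load is treated as roughly proportional to pixels shaded, which is a simplification:

```python
# Back-of-the-envelope math for why a lower internal resolution helps.
# Assumption: GPU load scales roughly with pixels shaded per frame
# (a simplification; real workloads are not perfectly pixel-bound).
native = 3840 * 2160      # 4K output resolution
internal = 2560 * 1440    # a plausible internal resolution (example only)

print(f"Native pixels per frame:   {native:,}")    # 8,294,400
print(f"Internal pixels per frame: {internal:,}")  # 3,686,400
print(f"Pixels shaded: {internal / native:.0%} of native")  # 44%
# Shading well under half the pixels per frame is where the
# theoretical performance headroom comes from.
```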
While this is very similar to what happens on 4K consoles, there is something really special inside DLSS. And it’s all thanks to “deep learning”.
What is the “deep learning” part?
Deep learning is a machine learning technique that uses a simulated neural network. In other words, a digital approximation of how the neurons in your brain learn and create solutions to complex problems.
It is the technology that, among other things, allows computers to recognize faces and lets robots understand and navigate the world around them. It is also responsible for the recent wave of deepfakes. And it is the secret behind DLSS.
Neural networks require “training”, which basically means showing the network examples of what something should look like. If you want to teach a network to recognize faces, you show it millions of faces, letting it learn the features and patterns that make up a typical face. If it learns the lesson correctly, you can show it any image containing a face and it will pick it out instantly.
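To make “training by example” concrete, here is a deliberately tiny sketch in plain numpy. It trains a single linear layer to reproduce a simple upscaling rule from example pairs; everything in it is an illustrative stand-in, since DLSS itself uses deep convolutional networks trained on vast libraries of pristine game frames.

```python
# A deliberately tiny "learning by example" sketch in plain numpy:
# show a model many (low-res, high-res) pairs and let gradient descent
# figure out the mapping. A stand-in for illustration only; DLSS itself
# uses deep convolutional networks and vast libraries of game frames.
import numpy as np

rng = np.random.default_rng(0)

def make_pair():
    """One training example: a random 2x2 patch and its 4x4 'ground truth'
    (here the rule to be learned is simple pixel doubling)."""
    low = rng.random((2, 2))
    high = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)
    return low.ravel(), high.ravel()

X, Y = zip(*(make_pair() for _ in range(1000)))
X, Y = np.array(X), np.array(Y)          # shapes (1000, 4) and (1000, 16)

W = rng.normal(scale=0.1, size=(4, 16))  # the "network": one linear layer

for _ in range(500):                     # gradient descent on squared error
    grad = X.T @ (X @ W - Y) / len(X)
    W -= 0.5 * grad

# After training, the model reproduces the rule it was shown examples of.
low, high = make_pair()
print(np.abs(low @ W - high).max())      # near zero: the mapping was learned
```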
Nvidia trained its deep learning software on incredibly high-resolution images from DLSS-enabled games. The neural network learns how a game “should” look when rendered with supercomputer-level graphics.
It then takes a frame rendered at the lower internal resolution and, for lack of a better word, “imagines” what it would look like if the scene had been rendered by a much more powerful computer than yours. If this sounds a bit like black magic to you, you are not alone!
When to use DLSS
First of all, you can only use DLSS in games that support it, though that list is fortunately growing rapidly. Each title also has its own DLSS requirements, such as a minimum rendering resolution, because that is what the neural network was trained on.
However, the big brain at Nvidia never stops learning, and the DLSS feature on your card will continue to receive updates, expanding support for individual titles and improving quality.
The best way to know whether you should use DLSS in a given game is to look at the result. Compare it with traditional upscaling or anti-aliasing and see which you find more pleasing. Performance is also an important deciding factor: if you’re targeting 60fps and can’t reach it, DLSS is a good choice.
However, if you already run at a very high frame rate, DLSS can actually slow you down. This is because the tensor cores take a roughly fixed amount of time to process each frame; at very high frame rates, that fixed cost can outweigh the time saved by rendering at a lower resolution.
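A rough frame-time model makes the trade-off clear. All numbers here are illustrative assumptions (including the 1.5 ms DLSS cost and the 44% pixel scale), not measurements of any particular card or game:

```python
# Rough frame-time model of when DLSS helps and when it hurts.
# Assumptions (illustrative, not measured): tensor cores add a fixed
# cost per frame; render time scales with pixels shaded.
def fps(render_ms: float, internal_scale: float = 1.0, dlss_ms: float = 0.0) -> float:
    """Frames per second for a given per-frame cost."""
    return 1000.0 / (render_ms * internal_scale + dlss_ms)

# Heavy 4K scene: 25 ms per native frame (40 fps). Shading ~44% of the
# pixels (1440p internal) plus a hypothetical 1.5 ms of DLSS work:
print(f"{fps(25.0):.0f} -> {fps(25.0, 0.44, 1.5):.0f} fps")  # 40 -> 80: a clear win

# Very light scene: 2 ms per frame (500 fps). The fixed DLSS cost now
# outweighs the small rendering saving:
print(f"{fps(2.0):.0f} -> {fps(2.0, 0.44, 1.5):.0f} fps")    # 500 -> 420: a net loss
```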
In fact, DLSS is most useful when driving a high-resolution display (such as 4K, ultrawide, or 1440p) with a target frame rate of around 60 frames per second. It’s also incredibly useful when activating the other headline trick of RTX cards: ray tracing. DLSS can compensate quite well for ray tracing’s performance cost, and the end result can be genuinely impressive.
That’s the least you need to know before deciding whether to switch DLSS on or not. Just remember that this technology is evolving rapidly, so if you don’t like the results today, come back in a few months and you may well be amazed.