All you need to know about Nvidia in 2020

Along with the fastest graphics cards ever designed for consumers, Nvidia's Turing generation of GPUs has brought some exciting new features to players everywhere. Ray tracing is the easiest to wrap your head around, but deep learning super sampling, or DLSS, is more mysterious.

Even if it is harder to understand, DLSS has the potential to be the biggest advantage of Nvidia's RTX 20-series graphics cards, improving image quality and increasing performance at the same time. To help you understand how it works, here is our guide to everything you need to know about Nvidia's RTX DLSS technology, so you can decide whether it is reason enough to upgrade to a new RTX or even RTX Super GPU.

What is DLSS?

Deep learning super sampling uses artificial intelligence and machine learning to produce an image that looks like a higher-resolution image, without the rendering overhead. Nvidia's algorithm learns from tens of thousands of reference sequences of images rendered on a supercomputer. This trains the algorithm to produce similarly pretty images, but without requiring the graphics card to work as hard to do so.
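
To make the idea of training against high-resolution reference frames concrete, here is a minimal, generic super-resolution training loop in PyTorch. Nvidia's actual DLSS network, training data, and pipeline are proprietary and not public, so everything below (the toy model and the random tensors standing in for frames) is an illustrative assumption, not Nvidia's method.

```python
# Illustrative only: a generic single-image super-resolution training loop.
# The network learns to upscale low-resolution frames toward matching
# high-resolution "ground truth" frames.
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    """Toy 2x upscaler: a few conv layers followed by a pixel shuffle."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * 4, 3, padding=1),  # 4 = 2x2 upscale factor
            nn.PixelShuffle(2),                  # rearrange channels into 2x resolution
        )

    def forward(self, x):
        return self.body(x)

model = TinyUpscaler()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# Stand-ins for "tens of thousands of reference frames": random tensors here.
low_res = torch.rand(8, 3, 360, 640)    # frames rendered at a lower resolution
high_res = torch.rand(8, 3, 720, 1280)  # the matching high-resolution targets

for step in range(100):
    optimizer.zero_grad()
    prediction = model(low_res)
    loss = loss_fn(prediction, high_res)  # learn to reproduce the high-res frame
    loss.backward()
    optimizer.step()
```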

DLSS also includes more traditional anti-aliasing techniques, such as edge smoothing, to create a final image that looks as though it was rendered at a higher resolution and with much greater detail, without sacrificing frame rate.

It originally launched with little competition, although in 2020 other sharpening techniques from both AMD and Nvidia itself now compete with DLSS for mindshare and practical use.

What does DLSS actually do?

DLSS is the end result of Nvidia's deep learning work on making games look better. After rendering the game at a lower resolution, DLSS draws on its training against high-resolution images to create a frame that still appears to be rendered at the higher resolution. The idea is to make games rendered at 1440p look like they are running at 4K, or 1080p games look like 1440p.
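
The pixel arithmetic behind that idea shows where the headroom comes from. The snippet below is simple resolution math, not measured performance data.

```python
# Rough pixel-count arithmetic behind rendering at 1440p and presenting at 4K.
resolutions = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
}

pixels = {name: w * h for name, (w, h) in resolutions.items()}

for name, count in pixels.items():
    print(f"{name}: {count:,} pixels")

# Rendering at 1440p shades roughly 44% of the pixels a native 4K frame needs,
# which is where the room for a higher frame rate comes from.
print(f"1440p / 4K pixel ratio: {pixels['1440p'] / pixels['4K']:.2f}")
```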

Traditional upscaling techniques can introduce artifacts and errors into the final image, but DLSS is designed to work around those errors to create a better-looking image. It's still being improved, and NVIDIA claims that DLSS will continue to get better over the coming months and years, but in the right conditions it can deliver a significant performance boost without affecting the look and feel of a game.

This is very game-dependent, however. Early attempts to use DLSS in games like Final Fantasy XV increased the overall frame rate by between 5 and 15 fps, leading some players to disable anti-aliasing entirely in favor of a bigger increase in overall frame rate instead.

Similar results can be found in games like Metro Exodus. It supports both ray tracing and DLSS, but in our tests we found that both technologies required some sacrifice – frame rate and detail, respectively. Enabling both at the same time left behind a strangely indistinct image that detracted from the graphical gains achieved by ray tracing in the first place. Ultimately, turning them off led to a better overall gaming experience.

Arguably the biggest benefit of DLSS so far has come in synthetic benchmarks. Testing in UL's Port Royal benchmark saw up to a 50% increase for some RTX cards with DLSS enabled, although many fans have noted that the technology can lead to strange flickering effects and an over-sharpened look throughout the demo.

How does DLSS work?

DLSS has a game render at a lower resolution (typically 1440p) and then uses its trained AI algorithm to infer what the frame would look like if it were rendered at a higher resolution (typically 4K). It does this by using some anti-aliasing effects (likely Nvidia's own TAA) and some automated sharpening. Visual artifacts that wouldn't be present at higher resolutions are also detected and used to infer the detail that should be present in the image.
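
As a rough mental model of that flow (and explicitly not Nvidia's actual algorithm), the sketch below upscales a low-resolution frame with a conventional resampler and then applies a sharpening pass; DLSS replaces these simple steps with its trained network and artifact-aware reconstruction. The file names are hypothetical.

```python
# A naive stand-in for the overall flow: take a lower-resolution frame,
# upscale it spatially, then run a post-process pass over the result.
from PIL import Image, ImageFilter

def naive_upscale(frame_path: str, target_size=(3840, 2160)) -> Image.Image:
    frame = Image.open(frame_path)                        # e.g. a 2560x1440 render
    upscaled = frame.resize(target_size, Image.LANCZOS)   # simple spatial upscale
    # Sharpening pass standing in for the learned reconstruction step.
    return upscaled.filter(ImageFilter.UnsharpMask(radius=2, percent=80))

# Usage (hypothetical file names):
# final = naive_upscale("frame_1440p.png")
# final.save("frame_4k_upscaled.png")
```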

As Eurogamer explains, the AI algorithm is trained by looking at certain games at very high resolution (reportedly 64x supersampling) and is then distilled into something just a few megabytes in size, before being added to the latest versions of Nvidia's software and made available to gamers around the world. It is something that has to be done on a per-game basis.

In effect, DLSS is a real-time version of Nvidia's Ansel technology. It renders the image at a lower resolution to deliver improved performance and then applies various effects to produce a result broadly comparable to the higher resolution.

The end result can be a mixed bag, but in general it leads to higher frame rates without a significant loss of visual quality. NVIDIA claims that frame rates can improve by up to 75% in Remedy Entertainment's Control when using both DLSS and ray tracing. In practice the gains are usually smaller than that, and not everyone is impressed with the final DLSS look, but the option is certainly there for those who want to prettify their games without the cost of running at a higher resolution.
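
To see what a claim like "up to 75%" implies, here is some back-of-the-envelope frame-time arithmetic. The native frame time, the upscaling overhead, and the resulting split are assumed numbers for illustration, not benchmark results.

```python
# Back-of-the-envelope frame-time arithmetic for the kind of gains quoted above.
native_4k_ms = 25.0                            # assume a native 4K frame takes 25 ms (40 fps)
pixel_ratio = (2560 * 1440) / (3840 * 2160)    # ~0.44: fraction of pixels rendered at 1440p
dlss_overhead_ms = 3.0                         # assumed fixed cost of the upscaling pass

upscaled_ms = native_4k_ms * pixel_ratio + dlss_overhead_ms

print(f"Native 4K:  {1000 / native_4k_ms:.0f} fps")   # ~40 fps
print(f"DLSS path:  {1000 / upscaled_ms:.0f} fps")    # ~71 fps
print(f"Gain:       {native_4k_ms / upscaled_ms - 1:.0%}")  # ~77% under these assumptions
```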

Useful, but far from perfect

Deep learning super sampling has the potential to give players who cannot reach comfortable frame rates at resolutions beyond 1080p the ability to do so through inference. DLSS could end up being the most influential feature of Nvidia's RTX Turing cards. Ray tracing hasn't been as strong as we had hoped, with effects that are pretty but tend to carry a big performance cost, whereas DLSS can give us the best of both worlds: better-looking games that perform well too.

The best fit for this kind of technology could be lower-end cards, but unfortunately it is only supported on RTX graphics cards, the weakest of which is the RTX 2060, a $300 card. If Nvidia made the technology available on GTX GPUs, it might find more success.

The real problem, however, is that the list of supported games is still limited, totaling fewer than 30 in early 2020. Although that may change, there is little to suggest that DLSS will see widespread adoption. With DLSS support growing only slowly over the past two years, AMD's image-sharpening technologies may well prove more popular, since they carry no such hardware limitation.

NVIDIA RTX 3080 and 3070: All we know about the new AMPERE GPUs

The power of PC video cards is set to grow relentlessly over the course of 2020. After focusing on the low and mid-range, AMD is developing the RDNA 2 microarchitecture and new high-performance GPUs. NVIDIA, meanwhile, is working on the Ampere architecture, which will take the place of Turing.

Ampere video cards should offer a rather significant leap in performance compared to current models, thanks both to the new architecture and to an improved production process, with the transition to 7 nm. A lot of information has recently emerged about the top proposals of the next range, the RTX 3080 and 3070, which together with a hypothetical RTX 3080 Ti will compete in the high end of the market.

A lot of power, even for Ray Tracing

The RTX 2080 Ti uses the TU102 chip, while the RTX 2080 and 2070 Super are based on the TU104, the latter with some compute units disabled. The first two letters indicate the name of the architecture, in this case Turing. According to the rumors that have emerged so far, the RTX 3080 will be based on the GA103 chip, while the RTX 3070 will use the GA104. A possible GA102 would presumably equip the Ti variant of the RTX 3080, but for now there are no rumors about this GPU. To understand what the performance leap over the current generation could be, let's take the RTX 2080 Super and the RTX 2070 Super as examples.

The first offers 3072 CUDA cores and 8 GB of GDDR6 RAM at 15.5 Gbps; the second has 2560 CUDA cores and 8 GB of GDDR6 RAM at 14 Gbps. According to what has leaked so far, the next RTX 3080 should have 3840 CUDA cores and 10 GB of GDDR6 RAM on a 320-bit bus, a capacity that could also rise to 20 GB, probably reserved for Quadro variants aimed at professional users.
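
For context on what those memory specs mean, peak bandwidth can be estimated from bus width and data rate. The 2080 Super and 2070 Super figures follow from the specs above; the RTX 3080's memory speed has not been confirmed, so the 16 Gbps used below is purely an assumption.

```python
# Peak memory bandwidth in GB/s is roughly (bus width in bits / 8) * data rate in Gbps.
def memory_bandwidth_gb_s(bus_bits: int, rate_gbps: float) -> float:
    return bus_bits / 8 * rate_gbps

print("RTX 2080 Super:", memory_bandwidth_gb_s(256, 15.5), "GB/s")  # ~496 GB/s
print("RTX 2070 Super:", memory_bandwidth_gb_s(256, 14.0), "GB/s")  # ~448 GB/s
# The rumored RTX 3080's 320-bit bus, at an assumed 16 Gbps, would give:
print("Rumored RTX 3080:", memory_bandwidth_gb_s(320, 16.0), "GB/s")  # ~640 GB/s
```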

The RTX 3070, meanwhile, could be equipped with 3072 CUDA cores and 8 GB of GDDR6 RAM on a 256-bit bus. The two cards not only offer a greater number of CUDA cores but also higher efficiency, thanks to the new architecture. This means that, for the same number of CUDA cores, performance should still be higher on the Ampere GPUs than on Turing. In recent weeks, rumors have indicated a performance jump of up to 50% between Turing and Ampere, a figure that, if confirmed, would bring the power available in these video cards to much higher levels than today's.
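
A quick sanity check on that rumored 50% figure: the snippet below splits it into the part explained by the extra CUDA cores alone and the remainder that would have to come from per-core efficiency and clocks. Both the 50% figure and the split are illustrative, unverified numbers.

```python
# Rough scaling arithmetic for the rumored Turing-to-Ampere jump.
turing_cores = 3072   # RTX 2080 Super
ampere_cores = 3840   # rumored RTX 3080

core_scaling = ampere_cores / turing_cores - 1   # gain from core count alone
rumored_total_gain = 0.50                        # rumored overall jump, unverified

# Whatever is not explained by core count would have to come from per-core
# efficiency and clock improvements (purely illustrative split).
remaining = (1 + rumored_total_gain) / (ampere_cores / turing_cores) - 1

print(f"From extra cores:     {core_scaling:.0%}")  # +25%
print(f"From per-core gains:  {remaining:.0%}")     # +20%
```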

We must not forget ray tracing. The Turing GPUs were the first able to use this technology, but its cost in terms of frame rate remains very high, especially at higher resolutions. With the Ampere range, NVIDIA will improve performance not only in traditional rendering but also in ray tracing, increasing the number of dedicated computing units compared to the current ones.

Unfortunately, no concrete indications have arrived, but the hope is that the second generation of NVIDIA RTX cards will be able to offer a much higher frame rate than is obtainable today: at 4K in particular, many games fail to reach 60 fps even with an expensive RTX 2080 Ti. The numbers released so far must be taken with great caution: on the one hand they are not verifiable, and on the other we do not yet know the impact that Ampere's CUDA cores will have on performance compared to the current Turing ones. What is certain is that the fight between AMD and NVIDIA this year will also move to the high end, which will thus benefit from greater competition.
