I'm one of those people who uses DLSS, because I've got a large fancy 4K monitor that's big enough that it looks like shit at lower resolutions.
DLSS is better than nothing, but it's no replacement for native rendering. It introduces a heap of visual anomalies and inconsistencies, especially in games with constant motion (racing games look like shit with DLSS), so I tend to wait until I'm hitting lows of 50 fps on medium before I'll even think about DLSS.
I'm also pretty sure Nvidia is paying devs to have it on by default, because every time it's patched into a game, they clear all the current graphics settings to turn on DLSS, at least in my experience.
I hate how AI upscaling looks, and I really don't get why everyone seems to be gaga over it. In addition to the artifacts and other weirdness it can introduce, it generally looks to me like someone smeared Vaseline over the picture.
This is a big part of why I'm sticking to 1440p for as long as it's a viable option. Not like my imperfect vision with glasses on would benefit from more PPI anyway.
Because the largest gaming GPU manufacturer in the world says so. Unfortunately they have the clout to drive this narrative to devs, who will accommodate them because devs don't want their game to look like shit on an Nvidia GPU.
I think that these technologies are still very new. Nvidia aren't going to let us know what their skunkworks is up to or what the next generation of the tech is going to look like.
So long as games don't force it on, then whatever. Although I expect it to become a requirement for a usable framerate in next-gen games. Big developers don't want to optimize anymore, and upscaling/framegen technologies are a great crutch.
Of course nobody wants to optimize. It's boring. It messes up the code. It often requires cheating the player with illusions. And it's difficult. Not something just any junior developer can be put to work on.
You'd expect that when Raytracing/Pathtracing and the products that drive it have matured enough to be mainstream, devs will have more time for that.
Just place your light source and the tech does the rest. It's crazy how much less work that is.
But we all know the Publishers and shareholders will force them to use that time differently.
Who are you directing the comments at? The dev company or individuals? I disagree with the latter. On the former, I still think it's a mischaracterization of the situation. If the choice is to spend budget on scope and graphics at the expense of optimization, that doesn't seem a hard choice to make.
DLSS 3.5 for example comes with that new AI enhanced RT that makes RT features look better, respond to changes in lighting conditions faster, and still remain at pre-enhanced levels of performance or better.
And Reflex fixes a lot of the latency issue.
A lot of games don't use the latest version of DLSS though, so I don't blame you if you have a bad experience with it.
For some reason, Larian shipped an old version of DLSS with the game. It looks better if you swap out the DLL for a newer one. I use DLAA on my 3070 Ti and it looks good, but I did have to swap the DLL.
I upgraded the DLL file and tried it again last night. It was much improved. BG3 is the only game I'm playing at the moment, but I'm going to try it when the Cyberpunk DLC comes out.
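For anyone wanting to script the swap described above, here's a minimal sketch in Python. It assumes the game ships the upscaler as `nvngx_dlss.dll` in the game directory (true for most DLSS titles, but check your install) and that you've already downloaded a newer DLL from a source you trust; the exact paths are hypothetical examples.

```python
# Hedged sketch: back up the game's bundled nvngx_dlss.dll, then replace it
# with a newer copy. Paths below are examples, not real install locations.
import shutil
from pathlib import Path

def swap_dlss_dll(game_dir: str, new_dll: str) -> Path:
    """Back up the shipped DLSS DLL, then overwrite it with new_dll."""
    old = Path(game_dir) / "nvngx_dlss.dll"
    backup = old.parent / (old.name + ".bak")
    if old.exists() and not backup.exists():
        shutil.copy2(old, backup)  # keep the original for easy rollback
    shutil.copy2(new_dll, old)     # drop in the newer DLL
    return backup
```

Keeping the `.bak` copy means a broken or game-rejected DLL is a one-file rollback rather than a reinstall.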
I prefer native. If you can't render something, then just don't. Don't make everything else worse just so you can claim to use a feature, and then make up junk to fill in the gaps. Upscaling is upscaling. It will never be better than native.
They have to "guess" what to fill the missing data with. Or you could render natively and calculate it, so you don't have to guess. So you can't get it wrong.
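The "guessing" point can be shown with a toy example. This is a deliberately simplified sketch, not how DLSS works (DLSS uses motion vectors and a trained network, not plain interpolation): a hypothetical one-dimensional scanline rendered at half resolution and linearly upscaled. The interpolated samples are blends of their neighbours, so detail that was never rendered cannot be recovered.

```python
# Toy sketch: any upscaler must infer missing samples. Here a plain linear
# interpolation stands in for the "guess" -- far simpler than DLSS, but the
# information-loss problem is the same.

def upscale_linear(samples, factor):
    """Linearly interpolate a low-res scanline up by an integer factor."""
    out = []
    for i in range(len(samples) - 1):
        a, b = samples[i], samples[i + 1]
        for step in range(factor):
            t = step / factor
            out.append(a * (1 - t) + b * t)  # guessed in-between value
    out.append(samples[-1])
    return out

# Native scene: a one-pixel bright spike (say, a distant specular highlight).
native = [0.0, 1.0, 0.0, 0.0, 0.0]
# The half-res render samples every other pixel and misses the spike entirely.
low = [native[0], native[2], native[4]]   # [0.0, 0.0, 0.0]
up = upscale_linear(low, 2)               # [0.0, 0.0, 0.0, 0.0, 0.0]
# No amount of interpolation brings the highlight back: up[1] is 0.0.
```

Temporal upscalers reduce (but don't eliminate) this by reusing samples from previous frames, which is exactly where the motion artifacts people complain about come from.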
During their discussion with Digital Foundry's Alex Battaglia and PCMR's Pedro Valadas, Bryan Catanzaro — Nvidia's VP of Applied Deep Learning Research — stated that native resolution gaming is no longer the best solution for maximum graphical fidelity.
Catanzaro's statement was in response to a question from Valadas regarding DLSS and whether Nvidia planned to prioritize native resolution performance in its GPUs.
Catanzaro pointed out that improving graphics fidelity through sheer brute force is no longer an ideal solution, due to the fact that "Moore's Law is dead."
In the case of Cyberpunk 2077, both Valadas and CD Projekt Red's Jakub Knapik said that full path-tracing would have been impossible in that game without all of DLSS's technologies — especially in terms of image upscaling and frame generation.
Catanzaro continued by saying the industry has realized it can learn much more complicated functions by looking at large data sets (with AI) rather than by building algorithms from the ground up ("traditional rendering techniques").
With the alleged "death" of Moore's Law, AI manipulation may be the only thing that continues to drive 3D graphics forward for the foreseeable future.
I can agree, but with two conditions: benchmarks must always be done at native resolution, and hardware capability / system requirements must not take any upscaling into account.
For example, if a studio publishes the requirements for playing at 1080p, 60 FPS, High RT, it must be native 1080p and not 1080p with upscaling.
Benchmarks should not be disconnected from actual games. If games don't play at native resolution, then benchmarks should not be limited to native resolution. They should check both native and upscaled rendering, and rate the quality of the upscaling.
RT + DLSS is less cheating than most other graphics effects, especially any other approach to lighting. The entire graphics pipeline for anything 3D has always been fake shortcut stacked on top of fake shortcut.
This seems more like just a reality of LCD / LED display tech than anything. CRTs (remember those?) can do a lot of resolutions pretty well no problem, but new stuff not so much. I remember using a lower rez on early LCDs as a 'free AA' effect before AA got better/cheaper. This just seems like a response to folks getting ~4k or similar high rez displays and gfx card performance unable to keep up.
I was just playing around with gamescope that allows for this kind of scaling stuff (linux with AMD gfx). Seems kinda cool, but not exactly a killer feature type thing. It's very similar to the reprojection algos used for VR.
I don't get where this "raw pixels are the best pixels" sentiment comes from. Judging from the thread, everyone has their own opinion but hasn't actually looked at why people build upscalers in the first place. Well, bad news for you: games have been using virtual pixels for all kinds of effects for ages. Your TV runs broadcasts through upscalers too (4K broadcast isn't that popular yet).
I play Rocket League with FSR from 1440p to 2160p and it looks practically the same as native 2160p, AND it feels more visually pleasing, since the upscale also serves as an extra AA filter that smooths and sharpens "at the same time". Frame rate matters a lot for older upscaler tech (and features like distance-field AO), because much of it relies on information from previous frames.
Traditionally, when the GPU is more powerful than the engine demands, engines do it the stupid way: render at something like 4x resolution and downscale for AA. Sure, it looks nice and sharp, but it's a brute-force approach, and plenty of follow-up AA tech has proven more useful for gamedev. Upscaler tech is the same. It's not intended for you to render at 320x240 and upscale all the way to 4K or 8K; it paves the way for better post-processing and lighting tech like Lumen or raytracing/pathtracing to actually become usable in games with a decent "final output". (Remember the PS4 Pro's checkerboard 4K? That was genuinely good tech for getting more demanding games past the hardware's limits.)
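The brute-force supersampling mentioned above can be sketched in a few lines. This is a minimal toy (2x2 box-filter downsample of grayscale values, i.e. basic SSAA), assuming the hypothetical `downscale_2x` helper below; real engines use fancier filters, but the cost argument is the same: every output pixel requires shading four.

```python
# Sketch of brute-force supersampling AA: render at 2x the target resolution
# in each axis, then average every 2x2 block into one output pixel.

def downscale_2x(image):
    """Box-filter a high-res grayscale image (list of rows) down by 2x."""
    h, w = len(image), len(image[0])
    return [
        [
            (image[y][x] + image[y][x + 1]
             + image[y + 1][x] + image[y + 1][x + 1]) / 4.0  # 2x2 average
            for x in range(0, w, 2)
        ]
        for y in range(0, h, 2)
    ]

# A hard black/white edge "rendered" at 4x4 supersampled resolution...
hi_res = [
    [0.0, 0.0, 1.0, 1.0],
    [0.0, 1.0, 1.0, 1.0],
    [0.0, 0.0, 1.0, 1.0],
    [0.0, 1.0, 1.0, 1.0],
]
# ...downscales to 2x2 with a softened, anti-aliased edge.
lo_res = downscale_2x(hi_res)  # [[0.25, 1.0], [0.25, 1.0]]
```

Four shaded samples per displayed pixel is why this approach died out once cheaper AA (MSAA, then temporal methods) arrived; upscalers spend that budget the other way around.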
In the end, consumers vote with their wallets for nicer-looking games all the time, and that's what drives developers toward photoreal/feature-film-quality rendering. There are still plenty of studios going for stylized or pixel-art looks, and everyone flips their shit and praises them, but that tech mostly rides on the underlying hardware advances pushed by the photoreal approach. They use the same pipeline, just bent toward their desired look: Octopath Traveler II used Unreal Engine.
Game rendering is always about trade-offs. We've come a LONG way and will keep pushing boundaries. Will upscaler tech become obsolete somewhere down the road? I have no idea. Maybe AI can generate everything at native pixels, right?
I don't have anything against upscaling per se, in fact I am surprised at how good FSR 2 can look even at 1080p. (And FSR is open source, at least. I can happily try it on my GTX 970)
What I hate about it is how Nvidia uses it as a tool to price gouge harder than they've ever done.