
I turned 37, literally today (Happy birthday me). That sentence alone probably tells you more about where I sit in the gaming landscape than any hardware spec sheet ever could. My first console was a Sega Mega Drive: Sonic, Sonic & Knuckles, Streets of Rage, Street Fighter. I grew up in the era of Day of the Tentacle (I was a big fan of point and click adventure games in my youth), of trading Pokémon on a link cable with a mate’s Game Boy, of fiddling with autoexec.cfg files to squeeze an extra five frames out of a Voodoo 2. I remember watching those early Resident Evil cutscenes on the PlayStation and genuinely believing that was what games would actually look like one day.
Well. NVIDIA just showed us something at GTC 2026 that made me feel like that wide-eyed kid clutching a Mega Drive controller again. Photorealistic game engines are, genuinely, the future I desperately want us to get to. Not as some niche flex for people with expensive hardware, but in a way that mainstream gamers can actually experience. Wider adoption means more investment, which means better games for everyone. So when NVIDIA announced DLSS 5, I was paying attention.
But I’m not sure it’s there yet.
Right Then, What Is DLSS 5?
If you haven’t seen the news, NVIDIA announced something pretty significant at their GTC 2026 conference. DLSS 5 (Deep Learning Super Sampling, version five) introduces what they’re calling a “real-time neural rendering model.” Jensen Huang, leather jacket and all, described it as “the GPT moment for graphics.” That’s a big claim, so let me try to explain what he means by it.
Previous DLSS versions were clever performance tricks. They used AI to upscale lower-resolution images to look like higher-res ones, or to generate entirely new frames between real ones. It was like your GPU was doing a magic trick: rendering less, but making it look like more. Smart stuff, and it worked incredibly well.
DLSS 5 is doing something fundamentally different. Instead of just boosting performance, it’s adding visual fidelity that wasn’t there before. The neural rendering model takes a game’s existing frame (the colours, the geometry, the motion vectors) and infuses it with photorealistic lighting, materials, and surface detail. Subsurface scattering on skin. The sheen of fabric under different light conditions. The way hair catches a rim light.

In NVIDIA’s own words, it’s “bridging the divide between rendering and reality.” If this technology genuinely pushes us closer to a more realistic gaming experience in terms of visuals, then I’m excited. More realistic worlds mean deeper immersion, and that’s something I’ve wanted since I first saw a pre-rendered cutscene and wished the actual gameplay looked half as good.
Seeing Is Believing (Mostly)
NVIDIA showed DLSS 5 running across several games: Starfield, Hogwarts Legacy, Resident Evil Requiem, and their own Zorah tech demo. The before-and-after comparisons are worth looking at yourself.
Starfield


I know what a lot of people will say. “It’s just better lighting, Joe.” And yes, at a glance, that’s what it looks like. But look closer at the textures, the way light interacts with different surfaces, the subtle depth in materials that were previously quite flat. There’s a dimensionality to it that traditional real-time rendering struggles to achieve. It’s heading toward that Hollywood VFX feel that NVIDIA keeps banging on about, and you can see why.
Hogwarts Legacy


PCMag’s Michael Kan, who got hands-on with the demos at GTC, put it plainly: “DLSS 5 displayed the most realistic gaming graphics I’ve ever seen.” Strong words from someone who’s been covering hardware for years. He noted that in Hogwarts Legacy, the rocks looked like actual rocks. Trees felt organic. The interior of Hogwarts School gained a new sense of depth.
Resident Evil Requiem


Okay, I have to be honest here. This is the one that got me. I’ve been a Resident Evil fan since the original Director’s Cut on PlayStation. The first two games in that franchise hold a genuinely special place for me; there was something about the fixed camera angles, the inventory management, that door-opening animation that was secretly a loading screen. I’ve played most of the modern entries too, and CAPCOM has been on an incredible run. Seeing Requiem with DLSS 5 applied? The lighting on characters, the environmental detail. It’s the closest I’ve seen to that feeling of “this is what survival horror should look like.”
But this is also where my concerns crystallised. One observation that stuck with me from the community discussion was about this exact game: in the DLSS 5 version of certain scenes, the distance fog in the background appeared to be removed or significantly reduced. That fog wasn’t an accident. It was hand-crafted by CAPCOM’s artists to create atmosphere, to build tension, to make you feel like something could be lurking just out of sight. That’s the whole point of survival horror. If DLSS 5’s neural rendering strips that away in favour of “better” visuals, then we’re losing something of the experience. Shadows were reportedly changing completely, tone mapping shifting, contrast altered. These are deliberate artistic choices being overwritten by a model that doesn’t understand why they were made.
And then there’s the Zorah tech demo, which was built specifically to showcase what the neural rendering model can do at full tilt:


But Wait, There’s Drama
I’d be lying if I said the announcement was all standing ovations and slow claps. Within hours of the reveal, Reddit threads were on fire, and honestly, a lot of the criticism is valid.
The main concerns are worth taking seriously:
- “It’s just a fancy Instagram filter”: Some gamers looked at the character comparisons and saw what they described as an AI filter being slapped over carefully crafted game art. There’s a kernel of truth to the concern; some early comparison shots did look like they were adding a kind of AI-generated sheen to faces.
- “This is AI slop being forced onto art”: IGN published a piece calling DLSS 5 “a slap in the face to the art of video game design.” That’s a heated take, but I think the underlying point is sound. Game artists spend thousands of hours crafting specific looks. IGN’s Simon Cardy put it well when he wrote that the tech “threatens not only to make games visibly distracting, but also completely alter the emotions of a story.” When you think about it in those terms, it’s hard to dismiss the concern.
- “Uncanny valley vibes”: Some PCMag staff members noted that the hyper-real faces created by DLSS 5 can feel a bit off. Too real, in a way that breaks the stylistic contract a game has with its players.
Jensen Huang’s response to all of this was characteristically blunt. In a Q&A at GTC, when asked about DLSS 5 allegedly creating “worse, homogenous” imagery, he said: “First of all, they’re completely wrong.”
He went on to explain that DLSS 5 isn’t post-processing at the frame level; it’s “generative control at the geometry level.” It understands the 3D structure of characters, environments, and their movement. And crucially, game developers get fine-grained controls over intensity, colour grading, and masking. They choose where and how the enhancements apply.
“We created the technology, we didn’t create the art,” Huang said.
I want to believe that. But the demos they chose to lead with tell a different story. If the artistic controls are there, the GTC showcase didn’t demonstrate them very convincingly. Bethesda, to their credit, quickly committed to “further adjusting the lighting and final effect” of Starfield’s implementation, adding that “this will all be under our artists’ control, and totally optional for players.” That’s encouraging, but it also suggests the default output still needs work.
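To make the idea of those developer controls concrete, here’s a rough sketch of what per-pixel masking plus a global intensity slider could look like in principle. Everything here is invented for illustration (function and parameter names included); this is not NVIDIA’s actual API, just the general “blend the enhanced frame back through an artist-authored mask” idea.

```python
# Hypothetical sketch of artist-controlled neural enhancement blending.
# NOT NVIDIA's API; a minimal illustration of masking + intensity controls.

def blend_enhancement(original, enhanced, mask, intensity):
    """Blend a neurally 'enhanced' frame back into the artist's original.

    original, enhanced: flat lists of pixel values (0-255)
    mask:      per-pixel weights 0.0-1.0 (0.0 = keep the original pixel)
    intensity: global slider 0.0-1.0 set by the developer
    """
    out = []
    for o, e, m in zip(original, enhanced, mask):
        w = m * intensity                      # effective per-pixel weight
        out.append(round(o * (1 - w) + e * w))  # linear interpolation
    return out

# A hand-crafted fog region gets mask weight 0.0, so the model's
# "brighter is better" output can't strip the atmosphere away.
original = [10, 10, 200, 200]    # dark fog pixels, bright character pixels
enhanced = [90, 90, 230, 230]    # model brightens everything
mask     = [0.0, 0.0, 1.0, 1.0]  # protect the fog, allow the character
print(blend_enhancement(original, enhanced, mask, 0.5))  # [10, 10, 215, 215]
```

With controls like this, CAPCOM’s fog problem becomes a masking decision rather than an all-or-nothing toggle, which is presumably what Huang means by developers choosing “where and how the enhancements apply.”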
He also drew a parallel to ray tracing’s rocky debut back in 2018. “Everybody pooh-poohed it,” he recalled (on a side note, who even says “pooh-poohed” nowadays?). “Everybody said ray tracing was FUBAR. If we didn’t have RTX today, doing full scene path-tracing, computer graphics wouldn’t be what it is today.” He’s got a point there. I remember the discourse. “Ray tracing is a gimmick,” they said. “Nobody needs it,” they said. Now path tracing is a baseline feature in top-tier titles.
The Elephant in the Room: Two GPUs?
The thing nobody’s really talking about enough is that the DLSS 5 demo at GTC ran on two RTX 5090 graphics cards. Two. Each one starts at £1,174.99 (the best price I saw; it’s around $1,999 in the States), and thanks to the ongoing AI-driven memory shortage, actual street prices are significantly higher.
One card handled the game rendering. The other handled the neural rendering. That’s not exactly a consumer-ready setup.
NVIDIA says the plan is to optimise DLSS 5 to run on a single GPU by the autumn (fall) launch. I hope so, because if this technology is going to matter, it needs to be accessible to mainstream gamers, not just people running multi-thousand-pound setups. The whole promise of photorealistic gaming only works if enough people can actually experience it. Wider adoption drives investment, investment drives quality, and that’s the cycle we need. The memory shortage hasn’t just inflated GPU prices. It’s making the whole upgrade cycle feel hostile for anyone who isn’t made of money. And most of us aren’t.
Why do I still care?
I’m 37. I work in tech. I’ve got a mortgage, a career, and precisely zero interest in grinding competitive ranked ladders until 2 AM (~mostly…). But you know what I do still care about? A good story. A world that pulls me in and makes me forget I’ve got a Teams meeting at 9 AM.
I still play World of Warcraft. Not because the gameplay loop has fundamentally changed (Blizzard’s been iterating on the same formula for over two decades now) but because Azeroth feels like home. There’s comfort in those zones, those characters, those moments when a questline genuinely catches you off guard. I recently found myself reading every quest text in the new expansion (I was also trying out an add-on called Dialogue UI). Reading quest text.
I still fire up Call of Duty when the lads are online, and only when the lads are online, because I’m not about to solo-queue into a lobby of teenagers with reflexes I no longer possess. I’ve been playing Pokémon since the original Red and Blue on the Game Boy, and somehow I’m still playing Pokémon games (Although I can’t currently justify buying the Switch 2 JUST for Pokémon Pokopia). That’s nearly thirty years of catching the same creatures in progressively prettier grass.
For people like me, who grew up with gaming as a core part of our identity, who went from Sonic on the Mega Drive to Pokémon on the Game Boy to raiding in WoW to arguing about light levels in Destiny, visual fidelity isn’t about showing off hardware. It’s about immersion. It’s about the moment you forget you’re looking at a screen and just… exist in a world for a while.
When I saw DLSS 5 running in Hogwarts Legacy, I didn’t think “wow, nice technology.” I thought about what it would feel like to wander through the Forbidden Forest and have it look as convincing as a film. I thought about Resident Evil Requiem, a franchise I’ve followed since the Director’s Cut days, and how that neural rendering brings the horror closer to your face in a way that traditional lighting just can’t.
And then I thought about Fable. I cannot wait for the new Fable. Playground Games have been working on it for what feels like an eternity, and the art style they’ve shown so far is gorgeous. But imagine that world. Those rolling hills, the quirky characters, the slightly off-kilter British humour of it all, running with DLSS 5’s neural rendering on top. Albion looking like a real place you could walk into? Sign me up.
Bethesda’s already confirmed DLSS 5 support for Starfield and future titles. Todd Howard himself said “the artistic style and detail shine through without being held back by the traditional limits of real-time rendering.” The bloke from BGS who brought us Morrowind and Oblivion is excited about this. That means something.
It’s Not Just a Filter. But It’s Not Ready Either.
I should be measured here, because the internet has enough outrage and divisive commentary. DLSS 5 is impressive as a technology demonstration. The underlying concept of neural rendering is genuinely exciting, and if NVIDIA can get this right, it could be a meaningful step toward the photorealistic gaming experiences I think we’ve all been dreaming about since the Mega Drive days.
But the way DLSS 5 currently applies its generative AI framework to games leaves a lot to be desired. The fog removal in Resident Evil Requiem. The shadow and contrast shifts. The uncanny faces. These aren’t minor quibbles. They point to a fundamental tension: the model is optimising for what it thinks looks “better” without understanding the artistic intent behind what was already there.
Game designers don’t add fog to a horror scene by accident. They don’t choose a specific lighting tone on a whim. These are deliberate choices that serve the narrative, the atmosphere, the emotional experience. When DLSS 5 overwrites those choices, it doesn’t matter how impressive the pixel-level improvements are. You’re losing something that matters.
PCMag’s hands-on was limited to walking through single scenes: no combat, no spellcasting, no chaos. We don’t know the performance impact on a single GPU yet. We don’t know how it handles fast-paced action or genuinely dynamic environments. There’s still a lot of fine-tuning to be done before I feel like DLSS 5 is going to be welcomed with open arms by the gaming industry.
From what I’ve found so far, the list of confirmed games is encouraging, though:
- Assassin’s Creed Shadows
- Hogwarts Legacy
- Starfield
- Resident Evil Requiem
- Phantom Blade Zero
- The Elder Scrolls IV: Oblivion Remastered
- Delta Force
- Where Winds Meet
- And more from Bethesda, CAPCOM, Ubisoft, Tencent, and other major studios
That’s not a niche experiment. That’s a serious industry push.
The Bigger Picture
I keep coming back to this: DLSS started as a performance tool. DLSS 2 made upscaling actually good. DLSS 3 introduced frame generation. DLSS 4 gave us multi-frame generation, and DLSS 4.5 brought the second-gen Transformer-based super resolution. Each version moved the needle on what “AI in graphics” meant.
DLSS 5 is something else entirely. It’s not about making games faster; it’s about making them look better in ways that weren’t possible with traditional rendering techniques. NVIDIA’s betting that neural rendering is the next paradigm, the same way programmable shaders were in 2001 and ray tracing was in 2018.
Twenty-five years after they invented the programmable shader, they’re trying to reinvent computer graphics again. And whether you think that’s brilliant or terrifying probably depends on which side of the “AI in creative work” debate you land on.
Me? I’m cautiously excited about the direction, but honest about where we are today. Photorealistic gaming is the future I want. I want Azeroth to look a little bit more like home. I want Albion to feel like a real place when Fable finally lands. I want the next Resident Evil to make me genuinely afraid to open a door again. And if neural rendering can get us there, while respecting the artists who build these worlds and the atmosphere they painstakingly craft, then I’m absolutely here for it.
But we’re not there yet. Not with fog being stripped from horror scenes. Not with faces being smoothed into uncanny uniformity. There’s a lot of work between this GTC preview and something the gaming community will genuinely embrace.
Besides, I’ve been on this ride since Sonic was running through Green Hill Zone on a Mega Drive. I’ve watched graphics go from 16-bit sprites to whatever this is. And honestly? That kid who used to blow into cartridges to make them work would be absolutely losing his mind right now.
We’re getting close. Finally.