The game is rendered at a lower resolution, which saves a lot of resources. This isn’t a linear thing: lowering the resolution reduces the required performance by a lot more than you would think, not just in processing power but also in bandwidth and memory requirements. Then dedicated AI cores or even special AI scaler chips are used to upscale the image back to the requested resolution. This is a fixed cost and can be done with little power, since the components are designed specifically for this task.
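To put rough numbers on the resolution part, here’s a quick back-of-the-envelope sketch in Python. The resolutions and the per-pixel framing are just my own illustration, not tied to any particular upscaler:

```python
# Counting pixels alone shows why dropping the render resolution saves so much;
# the real savings are even bigger once bandwidth and memory are factored in.
resolutions = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
}

base = resolutions["1080p"][0] * resolutions["1080p"][1]
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels:,} pixels ({pixels / base:.2f}x the per-pixel work of 1080p)")
```

Running that shows 4K is 4x the pixels of 1080p, so rendering at 1080p and upscaling skips roughly three quarters of the per-pixel shading work before the scaler even gets involved.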
My TV, for example, has an AI scaler chip, which is pretty nice (especially after tuning) for showing old content on a large high-res screen. For games, applying AI upscaling to old textures also does wonders.
Now even though this gets the AI label slapped on, it is nothing like the LLMs such as ChatGPT. These are expert systems trained and designed to do exactly one thing. This is the good kind of AI that’s actually useful, instead of the BS AI like LLMs. These systems have their limitations, but for games the trade-off between detail and framerate can be worth it. Especially if our bad eyes and mediocre screens wouldn’t really show the difference anyway.
> The game is rendered at a lower resolution, which saves a lot of resources.
> Then dedicated AI cores or even special AI scaler chips are used to upscale the image back to the requested resolution.
I get that much. Or at least, I get that that’s the intention.
> This is a fixed cost and can be done with little power, since the components are designed specifically for this task.
This is the part I struggle to believe/understand. I’m roughly aware of how resource-intensive upscaling is on locally hosted models. The tech/resources necessary to do that at 4K+ in real time (120+ fps) seem at least equivalent to, if not more expensive than, just rendering it that way in the first place. Are these “scaler chips” really that much more advanced/efficient?
Further questions aside, I appreciate the explanation. Thanks!
Rendering a 3D scene is much more intensive and complicated than running a simple scaler. The scaler isn’t advanced at all; it’s actually very simple. And it can’t be compared with running a large model locally. These are expert systems, not large models: they are very good at one thing and can do only that thing.
Like I said, the cost is fixed, so if the scaler can handle upscaling 1080p to 2K at 120 fps, then it can always handle that. It doesn’t matter how complex or simple the image is; it will always use the same amount of power. It reads the image, does the calculation, and outputs the resulting image.
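To make that fixed-cost point concrete, here’s a toy (non-AI) nearest-neighbour scaler in Python. The function name and sizes are just my own sketch, nothing like the real hardware, but it shows that the work depends only on the output resolution, never on what’s in the frame:

```python
import numpy as np

def upscale_nearest(frame: np.ndarray, out_w: int, out_h: int) -> np.ndarray:
    """Nearest-neighbour upscale: the amount of work depends only on the
    output size, not on how complex the scene in the frame is."""
    in_h, in_w = frame.shape[:2]
    ys = np.arange(out_h) * in_h // out_h   # which source row feeds each output row
    xs = np.arange(out_w) * in_w // out_w   # which source column feeds each output column
    return frame[ys[:, None], xs]

# A 1080p frame -> 1440p costs the same whether it's a loading screen
# or an explosion-heavy battle.
frame_1080p = np.zeros((1080, 1920, 3), dtype=np.uint8)
frame_1440p = upscale_nearest(frame_1080p, 2560, 1440)
print(frame_1440p.shape)  # (1440, 2560, 3)
```

The AI versions replace the “copy the nearest pixel” step with a small trained network, but the per-frame cost is still fixed in the same way.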
Rendering a 3D scene is much, much more complex and power-intensive. The amount of power highly depends on the complexity of the scene, and there is a lot more involved. It needs the GPU, CPU, memory, and sometimes even storage, plus all the bandwidth and latency in between.
Upscaling isn’t like that; it’s a lot simpler. So if the hardware is there, like the AI cores on a GPU or a dedicated upscaler chip, it will always work. And since that hardware will normally not be heavily used, the rest of the components are still available for the game. A dedicated scaler is the most efficient, but the cores on the GPU aren’t bad either. That’s why something like DLSS doesn’t just work on any hardware; it needs specialized components. And different generations and parts have different limitations.
Say your system can render a game at 1080p at a good solid 120 fps. But you have a 2K monitor, so you want the game to run at 2K. This requires a lot more from the system, so the computer struggles to hold 60 fps and has annoying dips in demanding parts. With upscaling you run the game at 1080p at 120 fps, and the upscaler takes that image stream and converts it into 2K at a smooth 120 fps. Now the scaler may not get all the details right like native 2K would, and it may make some small mistakes. But our eyes are pretty bad, and when we’re playing games our brains aren’t looking for those details; they’re focused on gameplay. So the output is probably pretty good, and unless you compared it with native 2K side by side, you probably wouldn’t even notice the difference. So it’s a way of having that excellent performance without shelling out a thousand bucks for better hardware.
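Here’s roughly how that trades off in frame-time terms, taking “2K” to mean 1440p. These numbers are hypothetical, just scaling the per-pixel work linearly and assuming a small fixed cost for the upscale step, so treat it as a sketch rather than a benchmark:

```python
# Hypothetical frame-budget arithmetic, not measured on any real GPU.
render_1080p_ms = 1000 / 120                      # ~8.3 ms per frame at 1080p / 120 fps
pixel_ratio = (2560 * 1440) / (1920 * 1080)       # 1440p has ~1.78x the pixels of 1080p
native_1440p_ms = render_1080p_ms * pixel_ratio   # assume render time scales with pixel count
upscale_ms = 0.5                                  # assumed fixed cost on dedicated hardware

print(f"native 1440p: ~{1000 / native_1440p_ms:.0f} fps")                    # ~68 fps
print(f"1080p + upscale: ~{1000 / (render_1080p_ms + upscale_ms):.0f} fps")  # ~113 fps
```

That’s the “struggles to hold 60 with dips” situation versus a comfortable 100+ fps, which is the whole appeal.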
There are limitations, of course. Not all games conform to what the scaler is good at. It usually does well with realistic scenes but can struggle with more abstract stuff, and it can produce annoying halos and weird artifacts. There are also limits to how much bandwidth it can push, so for example not all GPUs can do 4K at a high framerate. If the game also uses the AI cores for other stuff, that can become an issue. If the difference in resolution is too big, it becomes very noticeable and unplayable. Often there’s also the option to use previous frames to generate intermediate frames, to boost the framerate at little cost. In my experience this doesn’t work well and just makes the game feel like it’s ghosting and smearing.
But when used properly, it can give a nice boost basically for free. I’ve even seen cases where the game could run at native resolution with a high framerate on a lower quality setting, but looked better rendered at a lower resolution with a higher quality setting and then upscaled. The extra effects outweighed the small loss of fidelity.
It started as good tech to make GPUs last longer, but now it’s a crutch that even top-notch hardware like a 4090 needs to achieve playable performance with ray tracing at high resolutions. And that hardware is already way overpriced; imagine the price of something that could do it natively.
Not really on topic, but AI upscaling is no joke. It’s actually very useful and saves a lot of processing power. Same with the extra fps, turning 30 fps into 60 fps with ease.
FSR doesn’t use AI hardware. The original comment is overselling it a bit, but something AI-driven like DLSS does offer substantial (if slightly blurry) framerate gains.
No game should be running at only 60 fps these days, especially with any sort of upscaling. Native performance should be the only measured metric; there’s no need for shortcuts when hardware is as good as it is.
I think he also did Doom 3, but I don’t think he was involved in Doom 2. Doom 1 was mostly just playing fast and loose with copyright law. The iconic E1M1 theme song is just a MIDI version of some song from Slayer.
He did not. Chris Vrenna, who was a NIN collaborator, did. Trent was involved early in development, but time commitments and mismanagement forced him to withdraw.
Oh man, NFTs and eSports? Yawning Boat Monkeys or whatever lost billions, and the guy who hyped crypto is going to prison for rug-pulling on a massive scale.
Who the fuck at this point thinks anyone is interested in them?? People have tried to force eSports, and anything made specifically for eSports sucks hot dog shit and fails. The only eSports that take off are games with homegrown audiences who enjoy the gameplay (like how League became an esport; it wasn’t created to BE an esport).
This isn’t a shock though; the IOC is openly corrupt and bribable. A brief glimpse at the 2016 Apocalympics in Rio shows they don’t give two fucks about the games, as long as their greasy hands get cash in them. Which totally tracks, since NFTs are just a way to exchange fake dollars and hide money in something that has perceived value - just like real art.
Twitch’s behaviour is mildly disturbing for this case, in that they were monitoring Beahm’s private DMs, and then tried to hide it with a settlement when wrongdoing was found (to hide that they were monitoring prominent users of the whisper system?)
Or… the whispers were reported by the other party, or detected by an automatic abuse detection system, and they paid off his contract because he was suing them over it and they weren’t confident it was a good investment to fight.
This must be pandering to shareholders, no company in their right mind would want to compete when Meta is selling their first party headset at a giant loss.
This one and SimCopter took way more of my time than any of the other Sim games by a mile. I was especially surprised because SimCopter was full 3D and my old hardware could run it.
I don’t know how well a remaster of either would sell, but I’d buy them. SimCopter could even be a mobile game at this point.
I really don’t care what launcher I have to start (funny thing: for most Epic games you don’t even need the launcher), and I don’t know what the problem with it is supposed to be. And regarding Alan Wake 2, don’t forget that Epic funded the game and made it possible for Remedy to develop it… they’re also quite happy with the partnership, so I guess it’s a win for Remedy.
I pirated it and will buy it on sale on Steam whenever that happens.
Under normal circumstances it’ll never happen, because it’s not a regular 1-year exclusivity deal; Epic is the publisher.
It’s more likely to have its console versions emulated before landing on Steam. And even if Epic puts the game on Steam, Epic will still get the money.
It has three expansions bundled with it. I’ve never played, but it’s my understanding that these three expansions represent the initial must-buy content.
Ah, of course. However, I read recently that someone had the base game and extra products, but Bungie decided it wasn’t making enough money, so it removed the base game and went a different route, removing players’ items and game data.
So I’d highly recommend nobody touch Destiny with a barge pole. Bad consumer practice. I’d prefer if they disappeared from the gaming industry.