Elden Ring really got this right by playing the cool cutscene once and then sending every attempt after that straight into the fight. Could’ve done with closer sites/shrines in a few of the fights, though
The game is rendered at a lower resolution, which saves a lot of GPU resources.
Then dedicated AI cores, or even special AI scaler chips, are used to upscale the image back to the requested resolution.
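For anyone skimming, the loop looks something like this toy numpy sketch. `render` and `upscale_nearest` are stand-ins I made up for illustration; the real techniques (DLSS/FSR/XeSS) also feed in motion vectors and previous frames rather than doing a dumb nearest-neighbor blow-up:

```python
import numpy as np

def render(width: int, height: int) -> np.ndarray:
    """Stand-in for the expensive part: shading one value per pixel.
    Real rendering cost grows roughly linearly with pixel count."""
    return np.random.rand(height, width, 3).astype(np.float32)

def upscale_nearest(frame: np.ndarray, factor: int) -> np.ndarray:
    """Naive stand-in for the upscaler. The real thing runs a small
    fixed network / hand-tuned filter on dedicated hardware."""
    return frame.repeat(factor, axis=0).repeat(factor, axis=1)

# Render at 1080p, present at 4K: the expensive step touches ~1/4 the pixels.
low = render(1920, 1080)
final = upscale_nearest(low, 2)
print(low.shape, "->", final.shape)  # (1080, 1920, 3) -> (2160, 3840, 3)
```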
I get that much. Or at least, I get that that’s the intention.
This is a roughly fixed cost per output frame and can be done with little power, since the components are purpose-built for this task.
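Rough back-of-envelope on why that works out in the upscaler’s favor. The pixel ratio here is exact; the millisecond figures are made-up illustrative assumptions, not measurements:

```python
# Shading cost scales roughly with pixel count.
native_4k = 3840 * 2160          # ~8.3M pixels shaded per frame
internal  = 1920 * 1080          # ~2.1M pixels shaded per frame
print(f"Shading work avoided: {1 - internal / native_4k:.0%}")  # 75%

# Hypothetical numbers: if shading a native 4K frame took ~8 ms,
# the 1080p frame takes ~2 ms. The upscale pass is a fixed-size
# workload on dedicated cores (call it ~1 ms), so 2 + 1 << 8,
# and the gap only grows as the scene gets heavier to shade.
```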
This is the part I struggle to believe/understand. I’m roughly aware of how resource intensive upscaling is on locally hosted models. The tech/resources necessary to do that at 4K+ in real time (120+ fps) seem at least as expensive as, if not more expensive than, just rendering it that way in the first place. Are these “scaler chips” really that much more advanced/efficient?
Further questions aside, I appreciate the explanation. Thanks!