That's got Phong shading for a start. It was pretty advanced for a PS1 game. Before that each poly had its own normal, so everything looked blockier. Think Tekken 3 vs Tekken 2.
Maybe it's not Phong. Possibly Gouraud? My memory is getting hazy since it's been like 25 years since any of this was current and actually spoken about in those terms.
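If it helps, here's a minimal toy sketch (my own example with made-up normals, not anything from the actual games) of the difference: flat shading lights the whole polygon with one normal, while Gouraud shading lights each vertex and interpolates the result across the face.

```python
# Toy comparison of flat vs. Gouraud shading with simple Lambert lighting.
# All values here are hypothetical; this is just to show the idea.
import numpy as np

light_dir = np.array([0.0, 0.0, 1.0])  # light pointing straight at the surface

def lambert(normal):
    """Diffuse intensity for a single normal."""
    n = normal / np.linalg.norm(normal)
    return max(0.0, float(n @ light_dir))

# One triangle with slightly different normals at its three vertices.
vertex_normals = [np.array([0.0, 0.2, 1.0]),
                  np.array([0.2, 0.0, 1.0]),
                  np.array([-0.2, -0.2, 1.0])]

# Flat shading: one normal for the whole face (averaged here as a stand-in
# for the true face normal), so the polygon gets a single brightness and
# neighbouring polygons show visible steps (the "blocky" look).
face_normal = np.mean(vertex_normals, axis=0)
flat_intensity = lambert(face_normal)

# Gouraud shading: light each vertex, then interpolate the intensities
# across the face, which smooths out the steps between polygons.
vertex_intensities = [lambert(n) for n in vertex_normals]
barycentric = np.array([0.3, 0.3, 0.4])  # some point inside the triangle
gouraud_intensity = float(barycentric @ vertex_intensities)

print(f"flat: {flat_intensity:.3f}, gouraud at this point: {gouraud_intensity:.3f}")
```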
It's Castlevania 64, on the N64, not a PS1 game at all. The N64 was a lot more powerful than the PS1; it was just held back by cartridge storage and a tiny texture cache limiting texture sizes.
Someone already mentioned those graphics were optimized for old CRT TVs, but also consider the fact that it was simply the best we'd seen, and it blew our minds.
Just imagine what top notch realism will be 20 years from now, assuming it’s not all DLC for the same old stuff, obviously.
It's still happening; there's just so much of it now, and we're so aware of what the improvements actually are on a technical level, that we've come to expect them even before the latest thing releases. And it's mostly disappointing now because we're chasing that same high.
The changes are more incremental now too. It's slightly better textures here, better lighting there, maybe a studio puts extra effort into motion capture and animations. But it's not leaps and bounds better every generation anymore like it used to be.
There was that video going around a couple days ago comparing Arkham Knight to Suicide Squad and that’s a great example of graphics not getting noticeably better if a studio doesn’t really try for it.
But I’ll bet games that start coming out with the latest Unreal Engine, like Senua’s Saga, are going to give some of that feeling of amazement again.
Honestly, a good CRT shader is a real game changer for emulation. Many emulators can lay a mesh grid over the top of the image, but this is just about the worst way to try to emulate a CRT: it doesn't actually emulate CRT pixels, and the black grid laid on top of everything simply reduces the overall image brightness.
For an example of a good CRT shader, consider looking into CRT Royale. The benefit of a shader is that it runs each frame through a calculation before it reaches your screen, so it can actually emulate a CRT properly. Shaders can emulate the individual red/green/blue phosphors of a CRT, the bloom around white text, the smearing that occurs with large color differences, etc. It really does make old games much more pleasant to look at.
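For a rough idea of the difference, here's a toy sketch (my own, nothing like CRT Royale's actual math): a grid overlay just multiplies pixels by black lines and throws brightness away, while a mask-style shader upscales the frame and redistributes each source pixel into phosphor-like RGB stripes plus scanlines.

```python
# Toy comparison of a "grid overlay" vs. a mask-style CRT shader pass.
# Purely illustrative; real shaders like CRT Royale do far more than this.
import numpy as np

frame = np.full((240, 320, 3), 0.8)  # dummy bright source frame, values 0..1

# Grid overlay: zero out every 3rd row/column. Nothing CRT-like is modelled;
# the image just gets darker.
grid = np.ones_like(frame)
grid[::3, :, :] = 0.0
grid[:, ::3, :] = 0.0
overlay_result = frame * grid

# Shader-style pass: upscale, then weight the upscaled pixels with an
# aperture-grille style pattern (R/G/B stripes) and a scanline falloff,
# so brightness is shaped into phosphor-like structure per output pixel.
scale = 3
up = frame.repeat(scale, axis=0).repeat(scale, axis=1)
h, w, _ = up.shape
mask = np.zeros((h, w, 3))
mask[:, 0::3, 0] = 1.0  # red stripes
mask[:, 1::3, 1] = 1.0  # green stripes
mask[:, 2::3, 2] = 1.0  # blue stripes
scanline = 1.0 - 0.5 * (np.arange(h) % scale == scale - 1)[:, None, None]
shader_result = up * mask * scanline

print(overlay_result.mean(), shader_result.mean())
```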
This is even earlier, the 80s, but I remember getting a not especially good game called The Halley Project for my Apple II, and I would load it over and over again because the intro had a song with real vocals and guitar, something basically unheard of on an Apple II, or virtually any other computer at the time.
We hit diminishing returns a while ago. It will be much harder to find improvements, both in terms of techniques and computation.
Consider that there are ten years between Pitfall! on the Atari 2600 and Wolfenstein 3D, ten years between that and Metroid Prime, ten years between that and Mass Effect 3, and then about ten years between that and now. There's definitely improvement between all of those, but once past Metroid Prime it becomes far less obvious.
We’ve hit the point where artistic style is more important than taking advantage of every clock cycle of the GPU.
Graphics 20 years from now will be incrementally better, but not mind-blowingly so. We’re rapidly approaching games that are 20 years old still looking pretty decent today.
Did games get any better though when the graphics got better? I remember being so hyped seeing PS3 game footage pre-2006, then after a few years it was like “oh shit, we have to go back!”
I saw some arguments about this over the last few years. It seems the gaming industry focused so hard on good graphics that they forgot how to make the rest of the game. Honestly, faithful re-releases of ancient 8- and 16-bit games with updated graphics would probably sell fairly well.
I messed up a little; indie and A are basically the same thing. Examples of AA are smaller publishers and developers that still have decent monetary backing, like Devolver Digital, Warhorse Studios, Obsidian (more so when they were contracting out to larger publishers like Bethesda, but also with their own titles), Bohemia Interactive, and PlatinumGames (who made Nier: Automata). Essentially, lower budget, generally less marketing, and smaller but still decent team sizes between 50 and 100 people is considered AA. Whereas larger companies like Rockstar, Blizzard, Activision etc. are AAA because they have the huge monetary backing of investors, many teams, and sub-companies that divvy up the work on multiple large-scale projects at a time.
Some did and some didn't. I'm pretty salty about the FF7 remake because, to me, it feels like it's missing the heart of the original game. And the chocobo shit, which I loved. I just wish they'd stop cheapening things when they remake them, ffs. They just make them look nice and it feels like they put no other effort into it. Which is idiotic because they already have the whole game mapped out. Just remake it how it fucking was goddammit >:(
Meanwhile, BG3, the new Spiderman games, and the new Zelda games were (to me) fantastic. The perfect mixes of gorgeous graphics and actually solid gameplay that felt like they had some love and soul put into them.
So it’s a mixed bag and at the end of the day pretty graphics can’t trick people into liking games that should have been better. We complain about Skyrim being ported all over the damn place but at least they don’t drop half the original content every time. That’s such a sad low bar but there it is.
PS2 graphics were pretty on point. Upscaled to a modern resolution, many of them still look decent now.
Xbox 360 era we got a lot of normal maps added (so models looked a lot more complex than they were).
PS4 added physically based rendering (the ability to make parts of a model look shiny without needing to split it into separate materials).
And the new shit is ray tracing, which PS5 isn’t really powerful enough to do, but honestly neither are most affordable PCs. We get nicer lighting at least, but we’ll still be on the old render paths for a while yet.
You still get improvements over time, but nothing is really going to compare to PS1 to PS2.
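Going back to the normal-map point above, here's a rough toy illustration (my own example, not any engine's code) of why a flat polygon can look bumpy: the mesh normal stays flat, but the lighting reads a perturbed normal per texel from a texture.

```python
# Toy illustration of normal mapping: same flat polygon, different normals
# per texel. The normal-map values below are made up for the example.
import numpy as np

light_dir = np.array([0.3, 0.3, 1.0])
light_dir = light_dir / np.linalg.norm(light_dir)

def shade(normal):
    """Simple Lambert diffuse term for one normal."""
    n = normal / np.linalg.norm(normal)
    return max(0.0, float(n @ light_dir))

flat_normal = np.array([0.0, 0.0, 1.0])  # the polygon's actual geometric normal

# A hypothetical 2x2 "normal map": per-texel normals encoding fake bumps.
normal_map = [np.array([ 0.4,  0.0, 0.9]), np.array([-0.4,  0.0, 0.9]),
              np.array([ 0.0,  0.4, 0.9]), np.array([ 0.0, -0.4, 0.9])]

print("flat polygon everywhere:", round(shade(flat_normal), 3))
print("normal-mapped texels:   ", [round(shade(n), 3) for n in normal_map])
```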
Games have gotten prettier, no argument, but I still feel like we’re playing the same games we were playing 20 years ago just with slight QOL improvements.
Yeah, I feel like everything we have now could have been done on the PS3 and Xbox 360. At least gameplay wise. Before that they were quite limited in terms of RAM. The big open world games probably couldn’t have been done prior to that gen. Stuff like Assassin’s Creed 2 or Far Cry 3 wouldn’t have been possible at all on PS2, I feel.
The closest they had was GTA SA which had huge nearly empty areas to hide the loading of the main city areas.
I'm just going to butt in and say that Far Cry 3 is the most ridiculously well-optimised game I've ever played. I managed to get it running on the integrated graphics of an old laptop at 800x600 with potato settings and it was genuinely still enjoyable. I think I played through about half of it like that before I got my PC back.
Childhood imagination and a CRT picture do wonders. While I agree with you that it wasn't peak performance, my childhood memories insist that picture looked way better than current AAA titles.
Actually, the early 3D didn't look 'great' even on CRT. The PS1 in particular had affine texture mapping and very "wobbly", low-precision geometry operations, in addition to the obvious limitations of polygon count and texture resolution. It was "neat" and "novel" to see it attempted, but in some ways it felt like a step back from where 2D games had gotten by that point, both visually and control-wise (very awkward control/camera schemes were attempted back then).
Much of the "but it looks great on CRT" applies to deliberately crafted pixel art made with knowledge of how NTSC or PAL feeding into a CRT behaved. The artists knew precisely how the image was going to be presented and used that for interesting tricks in how things got blurred (e.g. faux translucency by putting stripy sprites on top of each other and letting the blur fake the transparency). In 3D land, the textures and models were going to be distorted before presentation, so artists couldn't do a lot of "leaning into the CRT" in their designs. The consolation was that the hardware could now actually pull off effects they'd formerly relied on CRT blur for.
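For anyone curious, here's a toy sketch of that stripy-sprite trick (my own made-up numbers): draw the sprite only on alternating scanlines and let the CRT's blur average it with the background, which lands close to the 50% alpha blend the hardware couldn't do.

```python
# Toy model of faux translucency: alternating scanlines plus blur vs. real alpha.
import numpy as np

background = np.full((8, 8), 0.2)  # dark background brightness
sprite     = np.full((8, 8), 1.0)  # bright "ghost" sprite

# Composite with the sprite drawn only on even scanlines; no alpha blending.
striped = background.copy()
striped[0::2, :] = sprite[0::2, :]

# Crude stand-in for CRT blur: average each scanline with its neighbour.
blurred = 0.5 * (striped + np.roll(striped, 1, axis=0))

# What a true 50% alpha blend would look like.
true_blend = 0.5 * background + 0.5 * sprite

print(blurred.mean(), true_blend.mean())  # effectively the same brightness
```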
I’ll second that… I always found PS1 3D games to be pure eye-cancer even when played on a CRT TV back in the day. N64 was good-but-not-great by comparison.
The first time I thought I was seeing real life on the screen was NFS3 on PC, which… well, looking back, I was clearly wrong, but it’s decent-looking at least. The next time was when I briefly mistook my cousins playing NFL2K on Dreamcast for a Christmas day football game back in '99, and I feel like that generation of console (Dreamcast/PS2/Gamecube/OG XBox) is about where 3D games are, graphically at least, still palatable.
Yeah, pretty much right with you. The N64 had perspective-correct texture mapping and more precise geometry calculation, which did wonders for it. The low geometry counts and tiny textures still made it, like you said, 'good, not great', and I'll concur that the generation you cited is the point where 3D no longer felt like a step back from 2D games graphically.
My entire life, whenever I heard someone say that, it would get under my skin. The only ceiling has always been making it so realistic we can't tell the difference.
Graphics peaked with the original Lara Croft and her triangular bosom. It’s been a steady decline since then trying to make things look round. Just accept the triangles.
It is my opinion that we reached peak graphics 6 or 7 years ago, when the GTX 1080 was king. Why?
Games from that era look gorgeous (e.g. Shadow of the Tomb Raider), yet were well enough optimized to run high/ultra at FHD on an RX 570.
We didn't need to rely on fakery like DLSS and frame generation to get playable frame rates. If anything, people used to supersample for the ultimate picture quality. Even upping the rendering scale to 1.25 made everything so crisp.
MSAA and SMAA antialiasing look better, but somehow even the TAA from that era doesn't seem as blurry. Today you might as well use FXAA.
Graphics today seem ass-backward to me: render at 60...70% scale to have good framerates, FX are often rendered at even lower resolution, slap on overly blurry TAA to hide the jaggies, then use some upsample trickery to get to the native resolution. And it's still blurry, so squirt some sharpening and noise on top to create an illusion of detail. And still runs like crap, so throw in frame interpolation to get the illusion of higher frame rate.
I think it's high time we should be able to run non-raytraced graphics at 4K native and raytraced graphics at 2.5K native on 500€-MSRP GPUs with no trickery involved.
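Just to put numbers on the render-scale complaint above (my own arithmetic, not from any benchmark): the scale applies per axis, so the shading cost goes with its square.

```python
# Fraction of native pixels actually shaded at a given render scale.
def pixel_fraction(scale):
    return scale * scale

for scale in (0.6, 0.7, 1.0, 1.25):
    print(f"render scale {scale:>4}: {pixel_fraction(scale):5.0%} of native pixels shaded")

# 0.6-0.7 scale shades only 36-49% of native pixels, which is why the result
# needs upscaling and sharpening afterwards, while a 1.25x supersample shades
# ~156% and downscales to a crisper native image.
```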
GPUs are getting better, but the demand from the crypto and ML/AI markets means they can just jack up the price of every new card higher than the last, so prices have stopped dropping with each new generation.
> We didn't need to rely on fakery like DLSS and frame generation to get playable frame rates.
If you truly believe what you wrote, then you should never look into the details of how a game world is rendered. It's fakery stacked upon fakery that somehow looks great. If anything, the current move to ray tracing with upscaling is less fakery than what came before.
There's a saying in computer graphics: if it looks right, it is right. Meaning you shouldn't worry if the technique makes a mockery of how light actually works, as long as the viewer won't notice.
But there's a stark difference between optimizations like culling, occlusion planes, LODs, and half-res rendering of costly FX (like AO), and using a crutch like lowering the rendering resolution of the whole frame to try to make up for bad optimization or crap hardware. DLSS has its place for 150-200€ entry-level GPUs trying to drive a 2.5K monitor, not 700€ "midrange" cards.
> But there's a stark difference between optimizations like culling, occlusion planes, LODs, and half-res rendering of costly FX (like AO), and using a crutch like lowering the rendering resolution of the whole frame to try to make up for bad optimization or crap hardware.
There is not a stark difference if you describe the techniques objectively and don't twist them into what you feel they're like.
There are so many steps in the render pipeline where native resolution isn't used, yet I don't hear the crowd complaining about shadow map size or how reflections are half res. Upscaling is just another tool that allows us to create better-looking frames at playable refresh rates. Compare Alan Wake or Avatar with DLSS against any other game without DLSS and they will still come out on top.
> DLSS has its place for 150-200€ entry-level GPUs trying to drive a 2.5K monitor, not 700€ "midrange" cards.
Just because you’re unhappy with Nvidia’s pricing strategy doesn’t mean you should slander new render techniques. You’re mixing two different topics.
I used to have a subscription to Game Informer magazine. I very specifically remember the multi page preview for the upcoming game, Oblivion. The pictures they had in there, I swear to God, were actually pictures of trees and grass. The fidelity was unparalleled and it was the peak of what games could do. Idk why that article sticks out so much, but it felt like the top of the mountain.
Hah, I get that, but for me it was Half-Life 1 and I thought the graphics were amazing. Rainbow Six Rogue Spear was my first PC game and I thought that was the pinnacle of graphics… fuck I'm old.
For me it was reading in Playstation Magazine that there were melting ice cubes in the then upcoming Metal Gear Solid 2. I’m not even sure PS2 had been released yet at the time, so I was just awe struck thinking wow it’s getting so powerful and detailed that even ice cubes in a sink are accounted for.
Man, for me it was playing Halo CE on the original Xbox: you could see the individual blades of grass in the ground texture! I was absolutely blown away haha
I remember upgrading to a 3dfx Voodoo card around the time transparent water became possible in Quake. The graphics blew me away, and the ability to see players in the water gave a ridiculous advantage.
I remember the graphics "blue" me away too (I mean that lovingly): the colours looked so much cooler compared to on the Riva TNT. Actually, this is my memory of Quake 2, particularly Q2DM1.
I can relate, but by the time Oblivion came out I was already starting to get jaded about graphical fidelity. What I can tell you is that I ogled over a similar preview for Morrowind, and actually built my first PC specifically targeting the recommended specs to run it in all its glorious glory!
Christmas of probably '98 or '99, my older brother gave my younger brother and me his PlayStation. He had Final Fantasy VII, and that was probably when I popped my graphics cherry. I was astounded when I went back to play it years later.
I was little when the OG Ace Combat game came out on the PS1 right? Polygonal jet engines & everything lol
Until I was like 11, whenever I saw real pictures of actual aircraft that were in the game, I thought they were fake because their engines weren't polygonal enough 🤣🤣🤣
For me it wasn't a video game but adjacent - I saw Final Fantasy: The Spirits Within in 2001 and thought "well, that's it, computer graphics have achieved photorealism and nothing could possibly ever be better."