
Kolanaki, in gaming on Small, incremental improvements don't make shockwaves like the old massive tech leaps used to.

Has anyone ever really noticed how samey everything looks right now? It’s a bit hard to explain, because it’s not the aesthetics of any kind of art style used, but the tech employed and how it’s employed. Remember how a lot of early 3D in film just looked like it was plastic? It’s like that, but with a wider variety of materials than plastic. Yet every modern game kinda looks like it’s made using toys.

Like, 20 years from now, I think it would be possible to look at any given game that is contemporary right now and tell, by how it looks, when it was made. The way PS1-era games have a certain quality to them that marks when they were made, or how games of the early 2000s are denoted by their use of browns and grays.

soloner,

My guess is it's a lot of convergence onto a smaller set of known game engines: Godot, Unreal, Unity, plus a few others and some in-house ones like Valve's Source.

I could be wrong but I presume in the past almost every game was made with its own custom engine. Now a lot of them have the “unreal engine” look.

But I'm not complaining. It looks great to me and leads to better performance and fewer bugs in the long run. Of course, there are some caveats.

Kolanaki,

Oh yeah, this isn't a complaint, because I think it looks good. It's just that I notice it, and it probably is from almost everything being made on UE5 these days. However, I think MGSV was one of the first games to have this particular look, and that's on its own in-house engine (FOX Engine). It could just be how the lighting and shadowing are done. Those two things are getting so close to photorealism that it's the texturing and modeling work that puts things (usually human characters) into the uncanny valley. A scene of a forest can look so real… And then you put a person walking through it and the illusion is lost. lol

The_Picard_Maneuver,

Yes, definitely. It has to be that they’re all using the exact same engines and methods or something.

ysjet,

What I suspect he's talking about is everyone using UE-based mocap tools, which cause the hyperrealistic-yet-puffy faces, along with the same photogrammetry tools/libraries.

MudMan,

What do you mean, "everything"?

I wish this place was better for images, but... just pulling from my recently played list disproves this hard.

A_Union_of_Kobolds,

Horizon really shone in movement and how fluid the environment felt. It came out a long time ago now, though.

I thought it had pretty good art direction for what it was.

AdrianTheFrog,

Honestly, the biggest things missing in general lighting are usually rough specular reflections and small-scale global illumination, which are very hard to do consistently without raytracing (or huge light bakes).

Activision has a good technique for baking static light maps with rough specular reflections. It's fairly efficient; however, it's still a lot of data. Their recent games have apparently been in the 100-200 GB range. I'm sure light bakes make up a good portion of that. It's also not dynamic, of course.
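
A rough back-of-envelope suggests why bakes get that big (every number below is an illustrative assumption, not a figure from Activision):

    # Hypothetical baked-lighting storage estimate (all numbers assumed).
    atlases = 500            # assumed lightmap atlas count across all maps
    width = height = 4096    # assumed 4K atlas resolution
    channels = 8             # e.g. RGB radiance plus directional/specular terms
    bytes_per_channel = 2    # half-precision float

    total_bytes = atlases * width * height * channels * bytes_per_channel
    print(f"~{total_bytes / 1e9:.0f} GB of bake data")  # prints ~134 GB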

So, what I’m saying is, raytracing will help with this, hardware will advance, and everyone will get more realistic looking games hopefully.

gandalf_der_12te,

Games look samey because game studios don't have ideas anymore. They just try to sell 20 hours of playtime that is essentially empty. It's literally just a bunch of materials and "common techniques" squashed into a sellable product. In the early days of gaming, people had ideas before they had the techniques to implement them. Nowadays, we have the techniques and think the ideas are unimportant. It's uninspired and uninspiring. That's why.

RightHandOfIkaros, in gaming on Small, incremental improvements don't make shockwaves like the old massive tech leaps used to.

Ironically, Zelda: A Link to the Past ran at 60fps, and Ocarina of Time ran at 20fps.

Probably the same framerates as in the Horizon pictures below lol.

Now, Ocarina of Time had to run at 20fps because it had one of the biggest draw distances of any N64 game at the time. This was so the player could see to the other end of Hyrule Field, or other large spaces. They had to sacrifice framerate, but for the time it was totally worth the sacrifice.

Modern games sacrifice performance for an improvement so tiny that most people would not be able to tell unless they were sitting 2 feet from a large 4K screen.

JoYo,

when i was a smol i thought i needed to buy the memory expansion pack whenever OoT fps tanked.

Maalus,

Had to, as in “they didn’t have enough experience to optimize the games”. Same for Super Mario 64. Some programmers decompiled the code and made it run like a dream on original hardware.

RightHandOfIkaros,

The programming knowledge did not exist at the time. It's not that they lacked experience; it was impossible for them to have the knowledge, because it did not exist yet. You can't really count that against them.

Kaze optimizing Mario 64 is amazing, but it would have been impossible for Nintendo to have programmed the game like that, because Kaze is able to use programming techniques and knowledge that literally did not exist when the N64 was new. It's like saying the NASA engineers who designed the Atlas LV-3B launch vehicle were bad engineers, or incapable of a good rocket design, just because of what NASA engineers could design today with knowledge that did not exist in the '50s.

CancerMancer,

One of the reasons I skipped the other consoles but got a GameCube was because all the first party stuff was buttery smooth. Meanwhile trying to play shit like MechAssault on Xbox was painful.

RightHandOfIkaros,

I never had trouble with MechAssault, because the fun far outweighed infrequent performance drops.

I am a big proponent of 60fps minimum, but I make an exception for consoles from the 5th and 6th generations. The amount of technical leap and improvement, both in graphics technology and in gameplay innovation, far outweighs any performance dips as a cost of such improvement. The 7th generation is on a game-by-game basis, and personally the 8th generation (Xbox One, Switch, and PS4) is where it became completely unacceptable to run even a single frame below 60fps. There is no reason that target could not have been met by then, let alone now. The Switch was especially disappointing here, since Nintendo built what was basically a 2015 mid-range smartphone and then tried to make games for a real game console, with performance massively suffering as a result. 11fps, docked, in Breath of the Wild's Korok Forest or Age of Calamity (anywhere in the game, take your pick) is totally unacceptable, even if it only happened one time ever rather than consistently.

thisismyhaendel,

I’m usually tolerant of frame drops, especially when they make hard games easier (like on the N64), but I agree it has gotten much worse on recent consoles. Looking at you, Control on PS4 (seems like it should just have been a PS5 game with all the frame drops; even just unpausing freezes the game for multiple seconds).

Regrettable_incident, in games on Steam Deck Gaming News

Thanks for taking the time to put this together. I dunno if you're aware, but there's also a Steam Deck sub here, and they'd probably be happy if this was posted there too.

scrubbles, in gaming on Small, incremental improvements don't make shockwaves like the old massive tech leaps used to.

And they're shocked that no one bought the PS5 Pro for 800 dollars.

atomicbocks, in gaming on Small, incremental improvements don't make shockwaves like the old massive tech leaps used to.

The improvements are the same size they used to be. It's just that adding 100 MHz to a 100 MHz processor doubles your performance, while adding 100 MHz to a modern processor adds little in comparison, for instance.
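
A quick sketch of that arithmetic in Python:

    # Relative speedup from adding a fixed 100 MHz at different base clocks.
    for base_mhz in (100, 500, 1000, 3000, 5000):
        speedup = (base_mhz + 100) / base_mhz
        print(f"{base_mhz:>5} MHz -> {base_mhz + 100:>5} MHz: {speedup:.2f}x")
    # 100 -> 200 MHz doubles performance (2.00x);
    # 5000 -> 5100 MHz is a barely measurable 1.02x.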

bjoern_tantau,

Well, that's what Moore's Law was for. Processing power does increase massively with each generation. It's just that at this point better graphics are less noticeable. There is not much difference to the eye between 100,000 and a million or more polygons.
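
Rough pixel arithmetic backs that up (a sketch that naively assumes every triangle is visible and evenly sized):

    # Average screen pixels per triangle at 4K, under naive assumptions.
    pixels_4k = 3840 * 2160  # about 8.3 million pixels

    for triangles in (100_000, 1_000_000, 10_000_000):
        print(f"{triangles:>10,} triangles: ~{pixels_4k / triangles:.1f} px each")
    # Past a few million triangles, each one covers about a pixel or less,
    # so piling on more geometry stops being visible.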

We’ve basically reached the top. Graphics fidelity is just down to what the artists do with it.

Takumidesh,

I disagree (that we have reached the top).

Go watch a high-budget animated movie (think Pixar or Disney) and come back when real-time rendered graphics look like that.

Yeah, games look good, but real-time rendering is still not as good as pre-rendered (and likely never will be). Modern games are rife with clipping and fakery.

If you watch the Horizon Forbidden West intro scene (as an example) and look at the details, how hair falls on characters' shoulders, how clothing moves in relation to bodies, etc., and compare it to something like Inside Out 2, it's a world of difference.

If we can pre-render it, then in theory it's only a matter of time before we can render it in real time.

gandalf_der_12te,

If we can pre-render it, then in theory it's only a matter of time before we can render it in real time.

Not really, because pre-renders are often optimized to only look good from one side. If you try to make a 3D model out of it and render that in real time in the game world, it might look ugly or weird from another angle.

Takumidesh, (edited)

Any given frame is just looking at something from one side, though; this is the case for video games as well, and it's part of the reason why real-time rendering is so much slower. It's an art and game direction challenge to make things look good however you want, not a technical limitation (in the sense that you can make a video game look like a Pixar movie does today; it's just going to render at days per frame instead of frames per second).

There isn’t really a conceptual difference between rendering a frame with the intent to save it and later play it back, and rendering a frame with the intent to display it as soon as it’s ready and dispose of it.

Toy Story 1 took days to render a single frame; now it could be rendered on a single home GPU at 24 fps no problem, which would be real-time rendering.
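
The size of that gap is easy to ballpark (the hours-per-frame figure below is an assumption; reported Toy Story render times varied widely from frame to frame):

    # Speedup needed to go from an offline render to real time (assumed numbers).
    offline_hours_per_frame = 4   # assumed average; actual reports varied widely
    realtime_fps = 24

    offline_seconds = offline_hours_per_frame * 3600
    speedup = offline_seconds * realtime_fps
    print(f"required speedup: ~{speedup:,.0f}x")  # prints ~345,600x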

To clarify my first paragraph: the challenge is not that it is impossible to render a video game with movie-like graphics; it's that the level of effort is higher because you don't have the optimizations, and so art direction needs to account for that.

As far as considering unexpected behaviors, that is technically only a concern in pseudo-nondeterministic environments (e.g. dynamic physics rendering) where the complexity and number of potential outcomes is very high and hard to account for. This is a related issue but not really the same one, and it is effectively solved with more horsepower, the same as rendering.

I think the point you were making is that artistic choices that are deliberately made can't always be done in real time, which I could agree with. Something like 'oh, this character's hair looks weird the way it falls, let's try it again and tweak this or that.' That is afforded by the benefit of trial and error, and can only be replicated in real time by more robust physics systems.

Ultimately the medium is different, and while they are both technically deterministic, something like a game has potential for unwanted side effects. However, pseudo-nondeterminism isn't a prerequisite for a game. The example that comes to mind is real-time rendered cutscenes. They aren't fundamentally different from a movie in that regard, and most oddities in them are the result of bugs in the rendering engine rather than technical impossibilities. Similar bugs exist in 3D animation software; it's just that Hollywood movies have the budget and attention to detail to fix them, or the luxury to try again.

I'll end with: given sufficient hardware (say, the Pixar server farm), there is nothing that says they couldn't render Luca or whatever in real time, or even faster than real time.

kitnaht, in gaming on Small, incremental improvements don't make shockwaves like the old massive tech leaps used to.

Kind of like smartphones. They all kind of blew up into this rectangular slab, and…

Nothing. It’s all the same shit. I’m using a OnePlus 6T from 2018, and I think I’ll have it easily for another 3 years. Things eventually just stagnate.

paraphrand,

What do you expect next? Folding phones? That would be silly!

user224,

I was hoping that eventually smartphones would evolve to do everything. Especially when things like Samsung DeX were introduced, it looked to me like maybe in the future phones could replace desktops, running a full desktop OS when docked and a simplified mobile UI + power saving when in mobile mode.

But no, I only have a locked-down computer.

Trainguyrom,

Yeah, whatever happened to that? It was such a good idea and could have been absolutely game-changing if it had actually been marketed to the people who would benefit the most from it.

pufferfisherpowder,

I used it for a while when I worked two jobs. I'd clock out of job 1, and I had an agreement with them to be allowed to use the screen and input devices at my desk for job 2. Then I'd plug in my Tab S8 and get to work, instead of having to carry two chunky laptops.
So it still exists! What I noticed is that a Snapdragon 8 Gen 1 feels underpowered and that Android, and this is the bigger issue, does not have a single browser that works as a full-fledged desktop version. All the browsers I tested had some shortcomings, especially with drag and drop or context menus. Things work, but you're constantly reminded that you're running a mobile OS, with weird behavior or oversized context menus or whatever.

I wish you could launch into a Linux VM instead of the DeX UI. Or for Samsung to double down on the concept. The Motorola Atrix was so ahead of its time. Like your phone transforming into your tablet, into your laptop, into your desktop. How fucking cool is that?
Apple would be in a prime position; their entire ecosystem is now ARM-based and they have the chips with enough power. But it's not their style to do something cool that threatens their bottom line. Why sell one phone when you can sell a phone, laptop, tablet, and desktop separately?

Trainguyrom,

It’s super easy to forget but Ubuntu tried to do it back in the day with Convergence as well, and amusingly this article also compares it to Microsoft’s solution on Windows Phone. It’s a brilliant idea but apparently no corporation with the ecosystem to make it actually happen has the will to risk actually changing the world despite every company talking about wanting an “iPhone moment”

Apple would be in a prime position; their entire ecosystem is now ARM-based and they have the chips with enough power. But it's not their style to do something cool that threatens their bottom line. Why sell one phone when you can sell a phone, laptop, tablet, and desktop separately?

Let's be real, Apple's biggest risk would be losing the entire student and young professional market by actually demonstrating that they don't need a MacBook Pro to use the same 5 webapps that would work just as well on a decent Chromebook (if such a thing existed).

user224,

Linux VM

Or just something like Termux, a terminal emulator for Android. Example screenshot (XFCE desktop over VNC server), I didn’t know what to fit in there:
https://files.catbox.moe/zr7kem.png

Full desktop apps, running natively under Android. For better compatibility, Termux also has proot-distro (similar to chroot) where you can have… let me copy-paste:


Supported distributions (format: name < alias >):

  * Alpine Linux < alpine >
  * Arch Linux < archlinux >
  * Artix Linux < artix >
  * Chimera Linux < chimera >
  * Debian (bookworm) < debian >
  * deepin < deepin >
  * Fedora < fedora >
  * Manjaro < manjaro >
  * openKylin < openkylin >
  * OpenSUSE < opensuse >
  * Pardus < pardus >
  * Ubuntu (24.04) < ubuntu >
  * Void Linux < void >

Install selected one with: proot-distro install <alias>

Though there is apparently some performance hit. I just prefer Android, but maybe you could even run full LibreOffice under some distro this way.

If it can be done by Termux, then someone like Samsung could definitely make something like that too, but integrated with the system and with more software available in their repos.

What's missing from the picture but is interesting too: an NGINX server (reverse proxy, lazy file sharing, serving wget-mirrored static websites), kiwix-serve (serving ZIM files, including the entire Wikipedia, from the SD card), and Navidrome (a music server).
All brought to any internet-connected computer via Cloudflare QuickTunnel (because it needs neither an account nor a domain name). Your mobile data upload speed will finally matter, a lot.

You get the idea, GNU+Linux. And Android already has the Linux kernel part.

pufferfisherpowder,

Yeah, I remember trying it and while it works the performance hit was too big for my use case. But it’s been a while!

Fortunately I’m in a position where I don’t have to juggle two jobs anymore so I barely use Dex these days.
Which, in turn, is also why Samsung isn't investing a lot into it, I suppose - it's a niche use case. I would guess that generally people with a desktop setup would want something with more performance than a mobile chip.

mrvictory1,

Linux on DeX was a thing, but it was killed by Samsung.

AdrianTheFrog,

There is an official Android desktop mode. I tried it, and it isn't great ofc, but my phone manufacturer (OnePlus) has clearly put no work into making it functional.

MonkderVierte,

Maybe make the rectangular slab… smaller again?

CancerMancer,

I would love to have a smaller phone. Not thinner, smaller. I don’t care if it’s a bit thick, but I do care if the screen is so big I can’t reach across it with one hand.

secret300,

I miss physical keyboards on phones

starman2112,

One company put a stupid fucking notch in their screen and everyone bought that phone, so now every company has to put a stupid fucking notch in the screen

I just got my tax refund. If someone can show me a modern phone with a 9:16 aspect ratio and no notch, I will buy it right now

MonkderVierte,

You can easily keep a phone for 7 years.

mrvictory1,

The OnePlus 6 line of phones is one of the very few with good Linux support, I mean, GNU/Linux support. If custom ROMs no longer cut it, you can get even more years with Linux. I had an iPhone, was eventually fed up, got an Android, aaand I realized I am done with smartphones lol. Gimme a laptop with phone stuff (push notifications w/o killing battery, VoLTE) and my money is yours, but no such product exists.

inlandempire, in games on Steam Deck Gaming News

Nice, thanks for all the effort, keep them coming!

plant_based_monero, in games on Steam Deck Gaming News

People like you make lemmy great :)

nickwitha_k, in games on Steam Deck Gaming News

As a preface, I used to do this a lot on Reddit. My hobby (sounds odd) was to make a little old-school-blog-style post, detailing what I found interesting in gaming in the last week or so. I got a name for it, for a time, but having long-since abandoned reddit I thought I might try the same thing here, if you’ll indulge me!

Happy to have you posting!

VitoRobles,

Agreed! There’s apparently a huge ecosystem that I wasn’t aware of!

Harvey656, in games on Steam Deck Gaming News

This is a banger post, good work!

merthyr1831, in gaming on Small, incremental improvements don't make shockwaves like the old massive tech leaps used to.

yeah but the right-hand pic has twenty billion more triangles that are compressed down and upscaled with AI so the engine programmers don't have to design tools to optimise art assets.

amotio,

It just works™

Cethin,

I know you're joking, but these probably have the same poly count. The biggest noticeable difference to me is subsurface scattering on her skin. On the left her skin looks flat, but on the right it mostly looks like skin. I'm sure the lighting in general is better too, but it's hard to tell.
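
For illustration, one classic cheap stand-in for that effect is "wrap" diffuse lighting, which lets light bleed past the terminator instead of cutting to black the way plain Lambert shading does (a minimal sketch of the idea, not necessarily what this game uses):

    # Wrap diffuse: a cheap, classic approximation of subsurface scattering.
    def lambert(n_dot_l: float) -> float:
        # Standard Lambert term: clamps to black at grazing angles.
        return max(0.0, n_dot_l)

    def wrap_diffuse(n_dot_l: float, wrap: float = 0.5) -> float:
        # Shifts the falloff so light "wraps" past the terminator.
        return max(0.0, (n_dot_l + wrap) / (1.0 + wrap))

    for n_dot_l in (1.0, 0.5, 0.0, -0.25):
        print(f"N.L={n_dot_l:+.2f}  lambert={lambert(n_dot_l):.2f}  "
              f"wrapped={wrap_diffuse(n_dot_l):.2f}")
    # At N.L=0 Lambert is black, but the wrapped term is still 0.33,
    # softening the falloff the way light scattered through skin would.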

merthyr1831,

yeah they probably just upped internal resolution and effects for what I assume is an in-engine cutscene. Not that the quality of the screenshot helps lmao

jmcs, in gaming on Small, incremental improvements don't make shockwaves like the old massive tech leaps used to.

What big shift do you expect? Even regarding 3D realism we are way past the point of diminishing returns in terms of development costs.

The_Picard_Maneuver,

I can’t imagine what it would look like now. I just wish everyone could experience the same incredible growth.

UltraGiGaGigantic,

Bigger maps, more entities on screen at once, larger multiplayer server capacity (anyone play MAG?).

I don't care if the graphics have to go backwards to do it, too. I love Valheim and it isn't high-res.

Windex007, in gaming on Small, incremental improvements don't make shockwaves like the old massive tech leaps used to.

Link has two hookshots?

RightHandOfIkaros,

The Hookshot and the Longshot are different items internally, and via cheats or mods both can be equipped.

GraniteM, in gaming on Small, incremental improvements don't make shockwaves like the old massive tech leaps used to.

Don’t get me started on Horizon: Forbidden West. It was a beautiful game. It also had every gameplay problem the first one did, and added several more to boot. The last half of the game was fucking tedious, and I basically finished it out of spite.

inb4_FoundTheVegan,

Awww.

I enjoyed the heck out of the first one, especially the story. Haven’t gotten around to picking up the 2nd so that’s a bummer to read.

moody,

I’d say it’s still worth playing, but the story is way more predictable, and they made some things more grindy to upgrade than they were in the first one. Also they added robots that are even more of a slog to fight through.

Those giant turtles are bullshit and just not fun.

metaldream, (edited)

If you’re actually struggling with the turtle guys that is 100% a skill issue. Literally just break the shell off and they die very quickly, there’s nothing to “slog” through with them. Out of all the big enemies they are by far the easiest.

So sick of reading nothing but shitty hot takes when it comes to this game. It’s such a good game but gets unfairly nitpicked by reddit/lemmy and review bombed by fascists.

moody,

Lol git gut, my opinion is the only one that matters

marble,

If it helps, I loved both.

scops,

Very much same. I wish the Burning Shores expansion was a bit longer. It’s kinda hard to call it a must-play DLC, but it’s got some big stuff in terms of Aloy’s character development.

ShinkanTrain,

I enjoyed learning the backstory of the first one, but I was very uninterested in the story, as in, what is currently happening.

hOrni,

If you liked the stealth aspects of the first game, then there is no point in starting the second. The stealth is gone. It's also more difficult. The equipment is much more complicated.

metaldream,

Stealth is gone from HFW, since when? I stealth just about every encounter in HFW with no issues at all.

hOrni,

I agree. I loved the first game, considered it one of my favourites. Couldn’t wait for the sequel. I was so disappointed, I abandoned it after a couple of hours.

red_bull_of_juarez,

I loved both. Different strokes…

metaldream,

It’s so weird how reddit and Lemmy constantly shit on these games yet they always score well with players elsewhere.

I never get sick of the combat in these games, the world is absolutely gorgeous, and the story is a lot of fun.

Xanthrax, in gaming on Small, incremental improvements don't make shockwaves like the old massive tech leaps used to.

Have you played VR? You might get that feeling again.

The_Picard_Maneuver,

VR is the one thing that feels similar to the old generational leaps to me. It’s great, but I haven’t set mine up in a few years now.

Xanthrax,

Fair. I haven't played "No Man's Sky" yet, but apparently it's awesome in VR.

onlinepersona,

I'm waiting on an affordable VR setup that can let me run around at home without hitting a wall. Solutions exist, but they're as expensive as a car and I don't have that kind of money lying around.

Anti Commercial-AI license

Xanthrax, (edited)

If anyone can optimize Disney's omnidirectional walking pad, we'll be there. I'd give it 3 decades if it goes that way. I've heard it's not like real walking; it feels very slippery. All that being said, you don't have to wrap yourself in a harness and fight friction to simulate walking like with other walking pads. It also seems simple enough, hardware-wise, that it could be recreated using pre-existing parts / 3D printing. I'm honestly surprised I haven't seen a DIY project yet.

renegadespork,

VR definitely feels like the next 2D->3D paradigm shift, with similar challenges, except it hasn't taken off like 3D did, IMO for 2 reasons:

1. VR presents unique ergonomic challenges.

Like 3D, VR significantly increased graphics processing requirements and presented several gameplay design challenges. A lot of the early solutions were awkward, and felt more like proof-of-concepts than actual games. However, 3D graphics can be controlled (more or less) by the same human interface devices as 2D, so there weren’t many ergonomic/accessibility problems to solve. Interfacing VR with the human body requires a lot of rather clunky equipment, which presents all kinds of challenges like nausea, fatigue, glasses, face/head size/shape, etc.

2. The video game industry was significantly more mature when (modern) VR entered the scene.

Video games were still a relatively young industry when games jumped to 3D, so there was much more risk tolerance and experimentation even in the “AAA” space. When VR took off in 2016, studios were much bigger and had a lot more money involved. This usually results in risk aversion. Why risk losing millions on developing a AAA VR game that a small percentage of gamers even have the hardware for when we can spend half (and make 10x) on just making a proven sequel? Instead large game publishers all dipped their toes in with tech demos, half-assed ports, and then gave up when they didn’t sell that well (Valve, as usual, being the exception).

I honestly don't believe the complaints you hear about hardware costs and processing power are the primary reasons, because a lot of gaming tech, including 3D, had the exact same problem in its early stages. Enthusiasts bought the early stuff anyway because it was groundbreaking, and eventually costs come down and economies of scale kick in.

UltraGiGaGigantic,

Needs a couple more generations to cook.
