Slightly improved graphics while having worse enemy AI, Unreal Engine stutter, constant hand-holding in in-game puzzles, restricted character creation, all while having to wait for updates to fix issues that shouldn’t be there at launch.
These are Gaussian splats: you take a bunch of photos of a scene from different angles, recording camera position and orientation (usually in the metadata), and an algorithm matches points across these independent images to reconstruct a 3D virtual scene out of a dense cloud of fuzzy, translucent blobs that you can move through.
There are even plans to make it 4D, letting the scene change over time by reconstructing it from independent videos of the same object.
The reason I find this next-gen tech is that when you navigate these scenes yourself and rotate to angles that were never actually captured, the scene begins to “shard” apart, and it’s like reality itself falls apart, almost as if our own reality were a fleeting illusion we cannot see past.
I can imagine highly immersive video games being built like this, while always being just one wrong angle away from these sharding artefacts.
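A rough toy sketch of the idea described above, not the actual 3D Gaussian Splatting pipeline: every splat is a fuzzy blob (position, footprint, color, opacity), and a pixel’s color comes from alpha-compositing the blobs front to back. All names and numbers here are illustrative.

```python
import numpy as np

class Splat:
    """One fuzzy blob: screen-space center, footprint, color, opacity, depth."""
    def __init__(self, mean2d, cov2d, color, opacity, depth):
        self.mean2d = np.asarray(mean2d, dtype=float)   # projected 2D center
        self.cov2d = np.asarray(cov2d, dtype=float)     # 2x2 footprint covariance
        self.color = np.asarray(color, dtype=float)     # RGB in [0, 1]
        self.opacity = float(opacity)                   # peak alpha of the blob
        self.depth = float(depth)                       # for front-to-back sorting

def gaussian_falloff(splat, pixel):
    """exp(-0.5 * d^T Sigma^-1 d): how strongly the splat covers this pixel."""
    d = np.asarray(pixel, dtype=float) - splat.mean2d
    return float(np.exp(-0.5 * d @ np.linalg.inv(splat.cov2d) @ d))

def shade_pixel(splats, pixel):
    """Front-to-back alpha compositing over depth-sorted splats."""
    color = np.zeros(3)
    transmittance = 1.0
    for s in sorted(splats, key=lambda s: s.depth):     # nearest first
        alpha = min(0.999, s.opacity * gaussian_falloff(s, pixel))
        color += transmittance * alpha * s.color
        transmittance *= 1.0 - alpha
        if transmittance < 1e-3:                        # early exit once nearly opaque
            break
    return color

# Two overlapping blobs, red in front of blue:
splats = [
    Splat([10, 10], [[4, 0], [0, 4]], [1, 0, 0], 0.8, depth=1.0),
    Splat([11, 10], [[9, 0], [0, 9]], [0, 0, 1], 0.8, depth=2.0),
]
print(shade_pixel(splats, [10, 10]))
```

Roughly speaking, the “sharding” from uncaptured angles is what happens when the blobs were only ever constrained by photos from a narrow set of viewpoints, so from a new direction they stop lining up into a coherent surface.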
I’d say there’s more progress on scale than on visual fidelity. There’s a greater ability to render complexity at scale, whether that’s actors on screen or physics in motion. I agree that progress in still-frame detail has plateaued.
I’m waiting on an affordable VR setup that can let me run around at home without hitting a wall. Solutions exist, but they’re as expensive as a car and I don’t have that kind of money lying around.
If anyone can optimize Disney’s omnidirectional walking pad, we’ll be there. I’d give it three decades if it goes that way. I’ve heard it’s not like real walking; it feels very slippery. All that being said, you don’t have to wrap yourself in a harness and fight friction to simulate walking like other walking pads. It also seems simple enough, hardware-wise, that it could be recreated using preexisting parts and 3D printing. I’m honestly surprised I haven’t seen a DIY project yet.
VR definitely feels like the next 2D->3D paradigm shift, with similar challenges, except it hasn’t taken off like 3D did, IMO, for two reasons:
1. VR presents unique ergonomic challenges.
Like 3D, VR significantly increased graphics processing requirements and presented several gameplay design challenges. A lot of the early solutions were awkward, and felt more like proof-of-concepts than actual games. However, 3D graphics can be controlled (more or less) by the same human interface devices as 2D, so there weren’t many ergonomic/accessibility problems to solve. Interfacing VR with the human body requires a lot of rather clunky equipment, which presents all kinds of challenges like nausea, fatigue, glasses, face/head size/shape, etc.
2. The video game industry was significantly more mature when (modern) VR entered the scene.
Video games were still a relatively young industry when games jumped to 3D, so there was much more risk tolerance and experimentation even in the “AAA” space. When VR took off in 2016, studios were much bigger and had a lot more money involved. This usually results in risk aversion. Why risk losing millions on developing a AAA VR game that a small percentage of gamers even have the hardware for when we can spend half (and make 10x) on just making a proven sequel? Instead large game publishers all dipped their toes in with tech demos, half-assed ports, and then gave up when they didn’t sell that well (Valve, as usual, being the exception).
I honestly don’t believe the complaints you hear about hardware costs and processing power are the primary reasons, because a lot of gaming tech, including 3D, had the exact same problem in the early stages. Enthusiasts bought the early stuff anyway because it was groundbreaking, and eventually costs come down and economies of scale kick in.
Don’t get me started on Horizon: Forbidden West. It was a beautiful game. It also had every gameplay problem the first one did, and added several more to boot. The last half of the game was fucking tedious, and I basically finished it out of spite.
I’d say it’s still worth playing, but the story is way more predictable, and they made some things more grindy to upgrade than they were in the first one. Also they added robots that are even more of a slog to fight through.
Those giant turtles are bullshit and just not fun.
If you’re actually struggling with the turtle guys that is 100% a skill issue. Literally just break the shell off and they die very quickly, there’s nothing to “slog” through with them. Out of all the big enemies they are by far the easiest.
So sick of reading nothing but shitty hot takes when it comes to this game. It’s such a good game but gets unfairly nitpicked by reddit/lemmy and review bombed by fascists.
Very much same. I wish the Burning Shores expansion was a bit longer. It’s kinda hard to call it a must-play DLC, but it’s got some big stuff in terms of Aloy’s character development.
If you liked the stealth aspects of the first game, then there is no point in starting the second. The stealth is gone. It’s also more difficult, and the equipment is much more complicated.
I agree. I loved the first game, considered it one of my favourites. Couldn’t wait for the sequel. I was so disappointed, I abandoned it after a couple of hours.
Yeah, but the right-hand pic has twenty billion more triangles that are compressed down and upscaled with AI so the engine programmers don’t have to design tools to optimise art assets.
I know you’re joking, but these probably have the same poly count. The biggest noticeable difference to me is subsurface scattering on her skin. On the left her skin looks flat, but on the right it mostly looks like skin. I’m sure the lighting in general is better too, but it’s hard to tell.
yeah they probably just upped internal resolution and effects for what I assume is an in-engine cutscene. Not that the quality of the screenshot helps lmao
As a preface, I used to do this a lot on Reddit. My hobby (sounds odd) was to make a little old-school-blog-style post, detailing what I found interesting in gaming in the last week or so. I got a name for it, for a time, but having long-since abandoned reddit I thought I might try the same thing here, if you’ll indulge me!
Kind of like smartphones. They all kind of blew up into this rectangular slab, and…
Nothing. It’s all the same shit. I’m using a OnePlus 6T from 2018, and I think I’ll have it easily for another 3 years. Things eventually just stagnate.
I was hoping that eventually smartphones would evolve to do everything. Especially when things like Samsung DeX were introduced, it looked to me like maybe in the future phones could replace desktops, running a full desktop OS when docked and a simplified mobile UI plus power saving when in mobile mode.
Yeah, whatever happened to that? That was such a good idea, and it could have been absolutely game-changing if it had actually been marketed to the people who would benefit the most from it.
I used it for a while when I worked two jobs. I’d clock out of job 1, and I had an agreement with them to be allowed to use the screen and input devices at my desk for job 2. Then I’d plug in my Tab S8 and get to work, instead of having to carry two chunky laptops.
So it still exists! What I noticed is that a Snapdragon 8 Gen 1 feels underpowered and that Android, and this is the bigger issue, does not have a single browser that works as a full-fledged desktop version. All the browsers I tested had some shortcomings, especially with drag and drop or context menus or whatever. Things work, but you’re constantly reminded that you’re running a mobile OS: weird behavior, oversized context menus, or whatever.
I wish you could launch into a Linux VM instead of the DeX UI. Or for Samsung to double down on the concept. The Motorola Atrix was so ahead of its time. Like, your phone transforming into your tablet, into your laptop, into your desktop. How fucking cool is that?
Apple would be in a prime position: their entire ecosystem is now ARM-based and they have chips with enough power. But it’s not their style to do something cool that threatens their bottom line. Why sell one phone when you can sell a phone, laptop, tablet, and desktop separately?
It’s super easy to forget, but Ubuntu tried to do it back in the day with Convergence as well, and amusingly this article also compares it to Microsoft’s solution on Windows Phone. It’s a brilliant idea, but apparently no corporation with the ecosystem to make it happen has the will to risk actually changing the world, despite every company talking about wanting an “iPhone moment.”
Let’s be real, Apple’s biggest risk would be losing the entire student and young professional market by actually demonstrating that they don’t need a MacBook Pro to use the same 5 web apps that would work just as well on a decent Chromebook (if such a thing existed).
Or just something like Termux, a terminal emulator for Android. Example screenshot (XFCE desktop over VNC server), I didn’t know what to fit in there: https://files.catbox.moe/zr7kem.png
Full desktop apps, running natively under Android. For better compatibility Termux also has proot-distro (similar to chroot) where you can have… let me copy-paste
Though there is apparently some performance hit. I just prefer Android, but maybe you could even run full LibreOffice under some distro this way.
If it can be done by Termux, then someone like Samsung could definitely make something like that too, but integrated with the system and with more software available in their repos.
What’s missing from the screenshot but also interesting: an NGINX server (reverse proxy, lazy file sharing, serving wget-mirrored static websites), kiwix-serve (serving ZIM files, including the entire Wikipedia, from the SD card), and Navidrome (music server).
And all of it brought to any internet-connected computer via Cloudflare Quick Tunnel (because it doesn’t need an account or a domain name). Your mobile data upload speed will finally matter, a lot.
You get the idea, GNU+Linux. And Android already has the Linux kernel part.
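For just the “lazy file sharing” piece, there’s a minimal stand-in in Python’s standard library; this is not what the commenter is running (they use NGINX), just the simplest possible sketch, assuming Python is installed inside Termux.

```python
# Minimal stand-in for the "lazy file sharing" part only: serve the current
# directory over HTTP with a plain file listing. The commenter uses NGINX;
# this is just Python's stdlib doing the same trick inside Termux.
import os
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

PORT = 8080  # arbitrary choice; expose it through a tunnel to reach it remotely

if __name__ == "__main__":
    server = ThreadingHTTPServer(("0.0.0.0", PORT), SimpleHTTPRequestHandler)
    print(f"Sharing {os.getcwd()} on port {PORT}")
    server.serve_forever()
```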
Yeah, I remember trying it and while it works the performance hit was too big for my use case. But it’s been a while!
Fortunately I’m in a position where I don’t have to juggle two jobs anymore so I barely use Dex these days.
Which in reverse is also why Samsung isn’t investing a lot into it I suppose - it’s a niche use case. I would guess that generally people with a desktop setup would want something with more performance than a mobile chip.
There is an official Android desktop mode. I tried it, and it isn’t great ofc, but my phone manufacturer (OnePlus) has clearly put no work into making it functional.
I would love to have a smaller phone. Not thinner, smaller. I don’t care if it’s a bit thick, but I do care if the screen is so big I can’t reach across it with one hand.
One company put a stupid fucking notch in their screen and everyone bought that phone, so now every company has to put a stupid fucking notch in the screen
I just got my tax refund. If someone can show me a modern phone with a 9:16 aspect ratio and no notch, I will buy it right now
The OnePlus 6 line of phones is one of the very few with good Linux support, I mean, GNU/Linux support. If custom ROMs no longer cut it, you can get even more years out of it with Linux. I had an iPhone, was eventually fed up, got an Android, aaand I realized I am done with smartphones lol. Gimme a laptop with phone stuff (push notifications without killing the battery, VoLTE) and my money is yours, but no such product exists.
The improvements are the same size they used to be; it’s just that the baseline keeps growing. Adding 100 MHz to a 100 MHz processor doubles your performance, for instance, while adding 100 MHz to a modern multi-GHz processor adds only a couple of percent.
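To make that concrete (the 4 GHz baseline below is just an illustrative figure, not a benchmark of any particular chip):

```python
# Same absolute bump, very different relative gain.
def relative_gain(base_mhz, added_mhz):
    return added_mhz / base_mhz

print(relative_gain(100, 100))   # 1.0   -> +100%: performance doubles
print(relative_gain(4000, 100))  # 0.025 -> +2.5%: barely noticeable
```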
Well, that’s what Moore’s Law was for. The processing power does increase massively with each generation. It’s just that at this point better graphics are less noticeable. There is not much difference to the eye between 100,000 polygons and a million or more.
We’ve basically reached the top. Graphics fidelity is just down to what the artists do with it.
Go watch a high budget animated movie (think Pixar or Disney) and come back when real time rendered graphics look like that.
Yeah, games look good, but real-time rendering is still not as good as pre-rendered (and likely never will be). Modern games are rife with clipping and fakery.
If you watch the Horizon Forbidden West intro scene (as an example) and look at the details, how hair falls on characters’ shoulders, how clothing moves in relation to bodies, etc., and compare it to something like Inside Out 2, it’s a world of difference.
If we can pre render it, then in theory it’s only a matter of time before we can real time render it.
Not really, because pre-renders are often optimized to only look good from one side. If you try to make a 3D model out of it and render that in real time in the game world, it might look ugly or weird from another angle.
Any given frame is just looking at something from one side, though; that’s the case for video games as well, and it’s part of the reason why real-time rendering is so much slower. It’s an art and game-direction challenge to make things look good however you want, not a technical limitation (in the sense that you can make a video game look like a Pixar movie does today; it’s just going to render at days per frame instead of frames per second).
There isn’t really a conceptual difference between rendering a frame with the intent to save it and later play it back, and rendering a frame with the intent to display it as soon as it’s ready and dispose of it.
Toy Story 1 took days to render a single frame; now it could be rendered on a single home GPU at 24 fps no problem, which would be real-time rendering.
To clarify my first paragraph: the challenge is not that it’s impossible to render a video game with movie-like graphics, it’s that the level of effort is higher because you don’t have those optimizations, and so art direction needs to account for that.
As far as considering unexpected behaviors, that is technically only a concern in pseudo-nondeterministic environments (e.g. dynamic physics simulation) where the complexity and number of potential outcomes is very high and hard to account for. This is a related issue but not really the same one, and it is effectively solved with more horsepower, the same as rendering.
I think the point you were making is that deliberate artistic choices can’t always be made in real time, which I could agree with. Something like “oh, this character’s hair looks weird the way it falls, let’s try it again and tweak this or that.” That is afforded by the benefit of trial and error, and can only be replicated in real time by more robust physics systems.
Ultimately the medium is different, and while they are both technically deterministic, something like a game has potential for unwanted side effects. However, pseudo-nondeterminism isn’t a prerequisite for a game. The example that comes to mind is real-time rendered cutscenes. They aren’t fundamentally different from a movie in that regard, and most oddities in them are the result of bugs in the rendering engine rather than technical impossibilities. Similar bugs exist in 3D animation software; it’s just that Hollywood movies have the budget and attention to detail to fix them, or the luxury to try again.
I’ll end with this: given sufficient hardware, say the Pixar server farm, there is nothing that says they couldn’t render Luca or whatever in real time, or even faster than real time.
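To put rough numbers on that gap: the offline figure below is an assumption picked for illustration, not a real studio render time; the real-time budget is just 1/fps.

```python
# Frame-time budgets, offline vs real time (offline hours are an assumed,
# illustrative figure; the real-time budget is simply 1/fps).
OFFLINE_HOURS_PER_FRAME = 8
REALTIME_FPS = 60

offline_s = OFFLINE_HOURS_PER_FRAME * 3600      # 28,800 s per frame
realtime_s = 1.0 / REALTIME_FPS                 # ~0.0167 s per frame

print(f"offline budget  : {offline_s:,.0f} s/frame")
print(f"real-time budget: {realtime_s * 1000:.1f} ms/frame")
print(f"gap             : {offline_s / realtime_s:,.0f}x")  # ~1.7 million times more time per frame
```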