A 600 EUR console vs. a 2000-3000 EUR PC. I get it, a PC gives you more options for what to do and what to play, but getting a console is not a waste of money in my book.
First of all, who are you to decide what I need and what I don't need? Secondly, okay, 1500-2000 EUR for a mid-range PC. Even the lower bound is 2.5 times more expensive than a PS5. With the Xbox the difference is even bigger.
Why do people react so negatively to cloud options? (Emphasis on that last word)
It's dumb in a lot of cases, but there are plenty of niche occasions where it's very cool. I had an extended period when I was away from my gaming PC and was sad that I couldn't play my home games, but GFN let me do so easily.
Nobody working on this tech (with any sense) is claiming ALL games will come from the cloud in 10-20 years. Nobody will accept that level of lost control. But having it as an extra way to access games, in a situation where you’d be reliant on the internet more than hardware anyway, is very useful. It was even how I recommended people play Cyberpunk on release if they had a mediocre PC.
I get that there are constant worries about how close we are to EA-style dystopian control of a library, but I just don't see the logical sequence of events that leads there when it's an option on a generally open and consumer-friendly store.
You talked about console hardware but then mentioned distribution. I'm going to guess you mostly mean servers, since these days people don't really need any special local hardware aside from a controller.
The major cities generally already have those servers distributed and working. It's true that certain edges of the world don't have a good experience, but that sort of just falls into the 70% of scenarios where you wouldn't want a cloud game anyway.
There's still this weird expectation that it would replace your home den, where you have lots of space and disposable income for multiple consoles. It doesn't. It's really more for the convenience of getting your games from a web browser.
It’s really more for the convenience of getting your games from a web browser.
Exactly, it's a niche service that only appeals to a fraction of the folks who play games, but it also requires the operator to purchase servers with graphics cards and set them up in datacenters near everyone who has an account in order to minimize latency. It's not viable for people who have slow internet or live in a rural area, and that's especially painful for the operator when so much of their revenue goes to licensing game titles for use in the service.
It’s not going to be less powerful than the current systems.
It’s not like the headline implies that it could or should be less powerful. It’s a simple headline that conveys a simple message that a regular reader might find worth clicking.
Amazon has repeatedly let competitors use Amazon cloud services, and Amazon has repeatedly ripped off those competitors' ideas stored in its cloud services and then shut them down economically.
I get the impression Amazon just rightly avoided overselling it and growing too far too fast. I see it advertised for a few specific cases where people don't own consoles and might try it, but it isn't overhyped in showcases the way Google's was.
There were two heroes that were released as Xbox-exclusive DLC back during the original release: Kit Fisto and Asajj Ventress. The mod added them to the PC version by reskinning Ki-Adi-Mundi and Aayla Secura respectively. The Aspyr release uses the reskinned versions from the mod rather than the official models from Pandemic/LucasArts, which they presumably had permission to use but chose not to, despite the fact that the original models don't have graphical errors and do have the correct fighting styles/animations.
Seems like the Tomb Raider collection was the fluke, because that one is almost perfect. But I tried out the Dark Forces remaster and was met with a godawful AI-upscaled laser rifle with visible artifacts.
Just bought it on PS5 last night… Playing solo is VERY hard, at least at the start, as you get overrun by bugs/bots easily.
Today, while I was playing online, there was a young Spanish kid on the team who didn't know English and kept talking on the mic, killing us (his teammates), and then laughing hysterically.
I then killed him, and one of our teammates revived him for some reason.
After he came back he tried to hunt me down, but I killed him again; this time no one revived him, and he left the game and then blocked me, lol.
And this is not even beginning to touch on content and features from other released versions of these games from 20 years ago that are not present, like four-player splitscreen.
It’s so cool and amazing that we finally have home theatre systems in every fucking house, and that’s when devs decided we don’t get split screen anymore. Modern hardware is wasted on modern devs. Can we send them back in time to learn how to optimize, and bring back the ones that knew how to properly utilize hardware?
Even if you gave him a current-day computer to play with (otherwise, even supercomputers of the time would struggle to run UE5), he wouldn't achieve much; consumer-grade computers back then really struggled with 3D graphics. Quake, released in 1996, would usually run at around 10-20 FPS.
It's not a question of capability. It's a question of cost-benefit: spending developer time on a feature not many people would use.
Couch co-op was a thing because there was no way for you to play together from your own homes. Nowadays it's a nice-to-have, because you can jump online any time and play together, anywhere in the world, without having to organize everyone to show up at one house.
It's a question of cost-benefit: spending developer time on a feature not many people would use
Which is super ironic when you look at games that had an obviously tacked-on, rushed multiplayer component in the first place, such as Spec Ops: The Line, BioShock 2, and Mass Effect 3.
GoldenEye 007. Yeah, seriously. The most famous multiplayer game of its generation very nearly didn't have multiplayer. It was tacked on when they finished the game and had a little bit of extra time and ROM storage.
4x splitscreen needs approximately 4x VRAM with modern approaches to graphics: if you're looking at something sufficiently different from another player, there's going to be nearly zero data in common between the views, and you need VRAM for both sets. You go ahead and make a game run in a quarter of its original budget.
I can't do that, but you know who could? The people who originally made the game. Had they simply re-released the game they already made, it wouldn't be an issue. We fans of the old games didn't stop playing because the graphics got too bad. Even if we had, this weird half-step towards updating the graphics doesn't do anything for me. Low-poly models with textures that quadruple the game's size are the worst possible middle ground.
My flatmates and I actually played through a galactic conquest campaign on the OG battlefront 2 like 2 months ago. It holds up.
I can’t do that, but you know who could? The people who originally made the game.
How to tell me you're not a gamedev without telling me you're not a gamedev. You don't just turn a knob and the game uses less VRAM; a 4x budget difference means a completely new pipeline, including assets.
Low-poly models with textures that quadruple the game's size are the worst possible middle ground.
Speaking of redoing mesh assets: textures are easy, especially if they already exist at a higher resolution, which will be the case for a 2015 game, but producing slightly higher-res meshes from the original sculpts is manual work. Topology and weight-painting at minimum.
So, different proposal: don't do it yourself. Scrape together a couple of million to have someone do it for you.
The general point still stands, though: you can't do the same thing with a 2015 game. On the flip side, you should be able to run the 2004 game in different VMs on the same box, no native support required.
Output resolution has a negligible impact on VRAM use: about 32 MB for a 4-byte-per-pixel buffer at 4K, and about 8 MB at 1080p. It's texture and mesh data that eats VRAM, and that data is bound to differ between different cameras and thus, as I already said, can't be shared. You have to budget for 4x VRAM use because you need to cover the worst-case scenario.
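To put rough numbers on that, here's a quick back-of-the-envelope sketch in Python. The 2 GiB per-view asset budget is just an assumed example for illustration, not a figure from any particular game:

```python
# Back-of-the-envelope: framebuffer memory vs. per-view asset memory.
BYTES_PER_PIXEL = 4  # e.g. an RGBA8 color buffer

def framebuffer_mib(width, height):
    """Size of a single color buffer in MiB."""
    return width * height * BYTES_PER_PIXEL / (1024 ** 2)

print(f"4K color buffer:    {framebuffer_mib(3840, 2160):.1f} MiB")  # ~31.6 MiB
print(f"1080p color buffer: {framebuffer_mib(1920, 1080):.1f} MiB")  # ~7.9 MiB

# Assumed (hypothetical) texture + mesh streaming budget for one camera view.
ASSETS_PER_VIEW_GIB = 2.0
views = 4  # worst case: all four players looking at different parts of the map
print(f"{views} independent views: ~{views * ASSETS_PER_VIEW_GIB:.0f} GiB of asset data "
      f"vs. well under 0.2 GiB of extra framebuffers")
```

Even with generous framebuffers, the output buffers are a rounding error next to four independent sets of streamed textures and meshes, which is where the 4x worst-case budget comes from.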
Modern hardware is wasted on modern devs. Can we send them back in time to learn how to optimize, and bring back the ones that knew how to properly utilize hardware?
I think a lot of the blame is erroneously placed on devs, or it's used as a colloquialism. Anyone who has worked in a corporate environment as a developer knows that the developers are not the ones making the decisions. Do you really think that developers want to create a game that is bad, to have their name attached to something bad, and to know that they created something bad? No, developers want to make a good game, but time constraints and horrible management prioritizing the wrong things (mostly microtransactions, monetizing the hell out of games, etc.) result in bad games being created. Also, game development is more complex now: games are more complex, hardware is more complex, and developers are expected to produce results in less time than ever before. It's not exactly easy, either.
It's an annoyance of mine, and I'm sure you meant no harm by it, but as a developer (and as someone who has done game development on the side and knows a lot about the game development industry), it bothers me when people blame bad games solely on devs and not on the management who made the decisions that left the games in a bad state.
With that said, I agree with your sentiments about modern games not taking advantage of modern hardware for long-forgotten cool features like four-player splitscreen, offline modes (mostly in online games), arcade modes, etc. I really wish these features were prioritized.