The fact that Steam reviews were mostly negative on launch day and it suddenly flipped to mostly positive means Steam has once again tampered with the statistics. Fuck Valve as well.
Because the previous ones were great and this one has glowing reviews from critics. For me though, the system requirements are too high, so I’ll buy and play it sometime after a PC upgrade.
BL3 was just as much (if not more) of a mess - people shouldn’t be surprised, especially with Randy telling people they should be selling their souls to have the privilege of playing this game.
I played through BL3 on a 2016 PC and it was OK. Not perfect, but perfectly playable. Looking at that PC Gamer article, I don’t even understand the complaint of being unable to run the game at 120 FPS. Seems like an unreasonably high bar. I’d take 60.
If you asked me to recall the story of any of the games, I’d not be able to. I don’t think people play the games for the story. It’s just a fun looter shooter, especially in co-op, which is how I played BL3 around its Epic launch. Revisiting my technical review of the game from then, yeah, you’re right, and I documented various reports of issues, though there were quick fixes deployed or workarounds available for the biggest issues. That seems commonplace in the industry though.
Same as games like The Witcher 3 and Cyberpunk: fans playing it in release year vs 2025 are gonna have completely different experiences. And BL3’s release was also messy, albeit not as bad as this. I was there.
I mostly agree. I’d say they went uphill, but so did every other game, only faster. Each game improved some things, but the competition improved much more. They’ve been coasting off of name recognition ever since the first game.
I’m going to have to try to remember that everyone has different taste… But good God I can’t be the only one who despised the dialog and “so randumb holds up spork!?11!” type humor in BL2?
It seems like everyone else loves it. I just found it incredibly grating.
Did not read the article. I have a 3070 graphics card (Windows 10) and the game ran fine. I had a problem with not being able to select a different weapon until I messed with it quite a bit, and my friend had one crash. He has a 3050.
Frankly I expected much worse, but this is just not a good response. Is he using this as an excuse NOT to fix it?
While I have no desire to defend Randy, Twitter is as Twitter does, and unless you spend time looking at his whole timeline, it sounds like he’s saying only stupid shit like this. He did actually acknowledge the issues, and stated that they’re working on them but also that for now the best way to play is with FSR/DLSS and frame gen.
I disagree with this deeply. He makes arguments about the imperceptibility of latency in frame gen, but that’s only true when the base framerate is high enough. DLSS is probably fine, but it’s also pretty fair for those who are using an 80 or 90 class card to complain about struggling at 1440p native, let alone 4k.
I’ll say it here again, I have a 7900 XTX, I expect it to run silky smooth.
If it doesn’t, that’s on you brother.
I also play on 1440p and it doesn’t reach a good enough framerate at ultra settings, so I set it to high and adjusted some things to lower settings to get a better framerate.
B1 is a bit slow but quite fun, B2 is brilliant, BTPS is similar to B2 but the crafting stuff is annoying, B3 was too chaotic with an overly cluttered UI and a damn annoying story, and B4 I have no idea about yet.
I liked that one but weirdly there’s no NG+ and the DLC kind of sucked. I finished it with a friend and we were like, “that’s it?”. It’s not very long, and it ends shortly after your end of skill tree powers become available.
I think it was a really good game originally. The writing has gotten really fucking bad though, and the gameplay hasn’t really evolved with the times. (I can’t speak on the new game.)
The new one feels like progress so far. I’m not very deep in, but the story and dialogue are not nearly as annoying as 3 was. The biggest difference has to be the movement. In previous games it often felt like you were trudging forward until you found an enemy and then running backwards so they didn’t catch you before they die. Grappling hooks, double jumps, and gliding add a TON of movement and give you those John Wick moments where you’re bouncing around the area and blasting people from every direction.
I really don’t understand the open world though. I don’t think that’s the direction they needed to go. I think the best looter-shooter I’ve played recently is Roboquest. It has all the movement you said (and more), but it’s in tight rooms, so the devs have more control of the design. An open world means the devs have essentially zero control over encounters, and it becomes too easy. The only thing they can do is crank up enemy health so they don’t die as quickly.
I’m not far enough to have settled on an opinion on the open world yet. I did find it tedious in other BL games that I had to walk through the same areas in the same order over and over again to access the end game or start a new character.
That being said, I often don’t know where to go or what to do in BL4. Thank Torgue they added the Echo objective finder, that’s pretty much the only way I’ve been able to stay on track at all.
Yeah, I just have a bias against open world games at this point. Damn near every game thinks they need to be open world, and most of the time it just makes things more tedious and boring. It takes a ton of dev time to make just for players to run past 99% of it. There are some games it really works for, but most would be better off with a tighter design (and it’d also save time and money).
I understand your worries. I was also concerned about the open world at first, but so far they have nailed that part pretty well. Travelling has been fun. There has always been fast travel nearby when I wanted to use it. There are enough hidden jokes and easter eggs that I feel rewarded for looking around.
I don’t really understand your point. Devs still curate where you meet the enemies. It’s not like it’s a procedurally generated map where everything is random.
I can’t remember a single time in my 20 hours of gameplay where I thought that I hate fighting here, or that these enemies don’t fit here.
> I don’t really understand your point. Devs still curate where you meet the enemies. It’s not like it’s a procedurally generated map where everything is random.
I haven’t played it, so maybe they’ve done something to control it. I doubt it though. If you can come from any direction, that makes encounters much harder to design. Think about older Borderlands games when entering a compound. You’d come through one main gate, enemies would be set up with cover, and you’d have to fight your way through. With an open world you could do something like fly into the middle of the compound, and that has to be accounted for.
Check out Roboquest, for example. It has some really impressive movement options, but its choice of rooms lets them restrict how much you can abuse them. You’ll always be fighting through the enemies from an expected direction.
> I can’t remember a single time in my 20 hours of gameplay where I thought that I hate fighting here, or that these enemies don’t fit here.
This isn’t what I meant. There’s nuance between liking something and it being the best possible thing. It can be good and still be possible to be better. My biggest issue with open worlds is, like you mentioned at the beginning, fast travel. It takes so much time and resources to make an open world, just for players to fast travel past most of it. Is it really worth that? Did it add that much to the experience? We could have more, cheaper games with tightly designed experiences instead of games that cost hundreds of millions of dollars to make. (BL3 cost $140m, and 4 reportedly cost “more than twice” that, so a minimum of $280m.)
I don’t think people understand that everything is an opportunity cost. If you make an open world game, that comes at the expense of so much more. At minimum, it’s going to be less game to play (or a longer wait between games, and more expensive). Is getting a lot of space that you hardly interact with worth it?
The thing about open world is, you can make those smaller contained spaces you keep mentioning with Roboquest inside of some structure with a single entrance and boom, we have your preferred formula.
Sure. You can make those, but you have to spend a lot of money and time making the open world just to make places for the rooms to live. Is that worth it? Everything is opportunity cost. Did doubling the cost improve the game that much?
It depends on the game. Could a Sonic game be fun in open world? Yes, and it was. Would The Hunt? Or Super Meat Boy? Probably not. I’m just pointing out you can still design for your movement abilities in an open world.
For sure, you can. However, every modern game is trying to be an open world game. It’s stupid. We get ballooning budgets and dev cycles for games that don’t really get anything from being open world. I’d rather get three great less open games than one open world game that is sacrificing things to make the open world work.
Such a bad argument. There is no reason for the game not to support lower-end hardware except lazy development. Not a good sign for the future of Borderlands. This is also the type of game that really sucks if you don’t get a locked 60 FPS. Borderlands 3 is still a laggy mess on my Steam Deck. Sometimes it just stutters forever, constantly generating shaders or something. There is no reason for it to be that way, and lowering the settings has almost no effect on actual performance. This is 100% the fault of the devs pushing half-complete products to market, not the consumer.
Also, as many others have stated, not everyone has $4,000 to drop on a PC. My most powerful machine is a Skylake processor with a 1070. It runs most games fine; it’s just the handful of unoptimized Unreal Engine games that run badly. I have nearly limitless options to buy other games from devs who actually care about us poorer folk. A 4070 Ti is like $800-1,000 right now, and it probably won’t even run this game well. This is in an age where we’ve had nearly 100% inflation in a few years. Most people can barely afford to buy a car without spending half of their paycheck. They should be trying to make their games work on older hardware now more than ever.

RAM is cheap, so there are some things you can work around; it’s usually not worth it to target low-RAM devices. But there is really no good reason for your game not to scale well to lower-end GPUs. You don’t have to ship only 4K textures on disk: you can easily automate a process to create lower-poly models and lower-resolution textures, and implement modest lighting systems. It’s pretty easy relative to other things. CPUs are mid; you shouldn’t necessarily target 10-year-old CPUs if it’s going to make the game worse, but your game shouldn’t be so unoptimized that you need 5.5 GHz on a single core to get 60 FPS. At minimum you should have a proper LOD system in place, so you can support lower-end GPUs in the many cases where it’s not very difficult.
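To make the “automate lower-resolution textures” part concrete, here’s a minimal sketch of the kind of downscale pass a build pipeline could run over every texture. The `Image` struct and `half_res` name are my own illustration, not anyone’s actual tooling:

```cpp
// Sketch only: a 2x box-filter downscale over an RGBA8 image, the same
// operation an asset pipeline (or mipmap generator) would run offline.
#include <cstddef>
#include <cstdint>
#include <vector>

struct Image {
    int w, h;
    std::vector<std::uint8_t> px;   // RGBA8, row-major, 4 bytes per pixel
};

Image half_res(const Image& in) {
    Image out{in.w / 2, in.h / 2, {}};
    out.px.resize(static_cast<std::size_t>(out.w) * out.h * 4);
    for (int y = 0; y < out.h; ++y)
        for (int x = 0; x < out.w; ++x)
            for (int c = 0; c < 4; ++c) {   // average each channel of a 2x2 block
                int sum = 0;
                for (int dy = 0; dy < 2; ++dy)
                    for (int dx = 0; dx < 2; ++dx)
                        sum += in.px[(((y * 2 + dy) * in.w) + (x * 2 + dx)) * 4 + c];
                out.px[((y * out.w) + x) * 4 + c] =
                    static_cast<std::uint8_t>(sum / 4);
            }
    return out;
}
// Run half_res repeatedly at build time and ship e.g. 4K/2K/1K variants,
// letting a "texture quality" setting pick which file to load.
```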
The issue with many of these modern AAA games is that they are trying to avoid as much work as possible. They avoid targeting lower-end hardware because it’s a bit of work and they are already struggling to finish their games. They need to plan for these things from the start and work them into their process.
Hell, I don’t know many people willing to drop $4k on a gaming rig. Most people I know with a gaming computer are in the $1-2k range and miss when you could get decent performance under $1k. Like, if I can’t get playable performance out of a several-years-old midrange computer, I’m not buying your game, especially not for $70.
That was a reference to a video I saw, something like, trying to play borderlands at 60 FPS on a $4k computer.
$2k is about the minimum these days for a full system when you include taxes and shipping, and that will get you a midrange system. You can get lower-end stuff or buy a used graphics card. Personally I’m still rocking a 1070, and it’s excellent for like 99% of games. I’m lucky that I don’t care anyway about the handful of games that won’t run on it.
Also, a lesser-known fact: real inflation, not government-reported, is probably close to 100% over the past 10 years. So really, a $2,000 machine today is the same as a $1,000 computer 10-15 years ago. Our wages didn’t go up, of course. That’s the whole point of a fiat currency and inflation! It’s a clever and sneaky wealth tax, a way to cut your wages quickly in a way that 90% of people don’t understand. They just yell at the gas station clerk because their soda is nearly $5. Their poor little brains can’t conceive of a concept so sophisticated as “you are actually being paid less; stuff doesn’t cost more.” People aren’t going to make stuff for free and give it to them. It’s just not a simple number, so it confuses them. If businesses had to adjust your pay to match real inflation, guess what? There would be no inflation and no fiat currency. No reason for it to exist, because they couldn’t screw us out of our wages without telling us to our face. Just extra paperwork and time for managers, a little fake economic growth, and no unnatural bubbling of markets.
Kind of diverging from the larger point, but that’s true — RAM prices haven’t gone up as much as other things have over the years. I do kind of wonder if there are things that game engines could do to take advantage of more memory.
I think that some of this is making games that will run on both consoles and PCs, where consoles have a pretty hard cap on how much memory they can have, so any work that gets put into improving high-memory stuff is something that console players won’t see.
*checks Wikipedia*
The Xbox Series X has 16GB of unified memory.
The PlayStation 5 Pro has 16GB of unified memory and 2GB of system memory.
You can get a desktop with 256GB of memory today, about 14 times that.
Would have to be something that doesn’t require a lot of extra dev time or testing. Can’t do more geometry, I think, because that’d need memory on the GPU.
*considers*
Maybe something where the game can dynamically render something expensive at high resolution, and then move it into video memory.
Like, Fallout 76 uses, IIRC, statically-rendered billboards of the 3D world for distant terrain features, like, stuff in neighboring and further off cells. You’re gonna have a fixed-size set of those loaded into VRAM at any one time. But you could cut the size of a given area that uses one set of billboards, and keep them preloaded in system memory.
Or…I don’t know if game systems can generate simpler-geometry level-of-detail (LOD) objects in the distance, or if human modelers still have to do that by hand. But if they can do it procedurally, increasing the number of LOD levels should just increase storage space, and keeping more of them preloaded should just require more RAM. You only have one level in VRAM at a time, so it doesn’t increase demand for VRAM. That’d provide smoother transitions as distant objects come closer.
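As a rough illustration of that “all levels in RAM, one level in VRAM” idea — purely a sketch, with `Mesh`, `LodObject`, and the distance thresholds invented here — using plain OpenGL buffer uploads:

```cpp
// Sketch: several LOD meshes kept in system RAM, with only the currently
// selected level resident in a single GPU buffer. Assumes a GL context
// and a created buffer object already exist.
#include <GL/glew.h>
#include <vector>

struct Mesh {                      // one LOD level, stored in system RAM
    std::vector<float> vertices;   // interleaved vertex data
};

struct LodObject {
    std::vector<Mesh> lods;        // lods[0] = full detail, last = coarsest
    GLuint vbo = 0;                // the single VRAM copy
    int resident = -1;             // which LOD is currently uploaded
};

// Pick a LOD index from camera distance; thresholds are arbitrary here.
int pick_lod(const LodObject& obj, float distance) {
    int level = static_cast<int>(distance / 50.0f);   // new level every 50 units
    int max_level = static_cast<int>(obj.lods.size()) - 1;
    return level > max_level ? max_level : level;
}

// The upload only happens on a level change, so VRAM holds one level at a
// time while every level stays preloaded in cheap system RAM.
void update_lod(LodObject& obj, float distance) {
    int wanted = pick_lod(obj, distance);
    if (wanted == obj.resident) return;               // nothing to do
    const Mesh& m = obj.lods[wanted];
    glBindBuffer(GL_ARRAY_BUFFER, obj.vbo);
    glBufferData(GL_ARRAY_BUFFER,
                 m.vertices.size() * sizeof(float),
                 m.vertices.data(), GL_STATIC_DRAW);
    obj.resident = wanted;
}
```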
You can divide stuff up into memory however you want: objects, arrays, whatever. Generally speaking, GPU memory is used for things that will run fast on the streaming processors of the GPU. Those are small processors specialized for a limited set of tasks that involve 3D rendering. The types of things you’d have in GPU memory are textures, models, shader programs, various buffers created to store data for rendering passes like lighting and shadow, Z-buffers, the framebuffer, and so on.
Other things are kept in RAM and used by the CPU, which has many instruction sets and many optimizations for different types of tasks. CPUs are really good at running unpredictable code. They have very large and complex cores that do all kinds of things, like branch prediction and speculative execution (taking the likely paths through code ahead of time when there is free time available). The CPU also has direct access to the PCI bus and to things like the southbridge and northbridge, the storage controller, IO devices, etc.
Generally, in a game engine, most of the actual logic happens on the CPU, because it’s very complex and arbitrary code that is calculation-heavy. Things like level data, AI, collisions, physics, and streaming are handled by the CPU. The CPU prepares frames by batching many things into one call to the GPU, because the GPU is good at taking a command from the CPU and performing that task many times simultaneously (on pixels, for example). If the CPU had to send every instruction to the GPU in sequence, it would be very slow, both because of the physical distance between the GPU and CPU and because a script would only do one thing at a time in a loop. Shaders are different: they’re like running a function across a large data set, utilizing the 1000+ cores in an average modern GPU.
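A tiny OpenGL sketch of that batching point, since it’s easier to see in code; assume a shader, VAO, and per-instance attributes are already set up:

```cpp
// Sketch: per-object draw calls vs one instanced call.
#include <GL/glew.h>

const int INSTANCE_COUNT = 10000;
const int INDEX_COUNT = 36;   // e.g. a cube

// Naive: one CPU->GPU command per object. Per-call driver overhead
// dominates, and the GPU starves waiting on the CPU.
void draw_unbatched() {
    for (int i = 0; i < INSTANCE_COUNT; ++i) {
        // ...update a per-object uniform here...
        glDrawElements(GL_TRIANGLES, INDEX_COUNT, GL_UNSIGNED_INT, nullptr);
    }
}

// Batched: the CPU issues a single command, and the GPU's many cores
// process all instances in parallel, reading per-instance data
// (e.g. transforms) from an instanced vertex attribute.
void draw_batched() {
    glDrawElementsInstanced(GL_TRIANGLES, INDEX_COUNT, GL_UNSIGNED_INT,
                            nullptr, INSTANCE_COUNT);
}
```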
There are other differences as well. The CPU has access to low-latency memory, where the GPU prefers high-latency but high-bandwidth memory. This is because the types of operations the GPU does are much more predictable and consistent. CPU workloads are arbitrary: the CPU often ends up taking an unusual path, so the memory it has to access can be scattered all over.
So basically, most of the engine and game logic runs on the CPU out of system RAM, because it’s essentially a sequential program that is very linear and arbitrary, and because the CPU has many tools in its toolbox for different tasks, like AVX, SSE, and so on. Most of the visual work, like 3D transformation, shading, and sampling, takes place on the GPU, because it’s high-bandwidth and highly parallel: the cores are simple, but there are many of them and they can operate independently.
RAM is very useful but is always limited by console tech. It’s particularly important in more interactive, sandboxy games (stuff like voxels), and it also comes in handy in sim or RTS games. Engines are usually designed around console specs so they can release on those platforms. RAM can be used for anything, even rendering, but it is extremely slow compared to GPU memory in actual bandwidth. GPU memory usually sits less than an inch away from the GPU itself, on a large bus interface, something like 128-512 bits: that’s how many physical wires connect the memory chips to the GPU, and it limits how much data you can send in one chunk or cycle. With a 64-bit interface you can only send one 64-bit word at a time; a 256-bit bus moves four of those words at once for a 4x speedup, and a 512-bit bus eight of them for 8x.
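The bus-width arithmetic is easy to sanity-check. A small worked example, with illustrative clock numbers rather than the spec of any real card:

```cpp
// Back-of-the-envelope bus bandwidth: bytes/s = (bus_width_bits / 8) * rate.
#include <cstdio>

double bandwidth_gbs(int bus_width_bits, double transfers_per_sec) {
    return (bus_width_bits / 8.0) * transfers_per_sec / 1e9;
}

int main() {
    double gddr_rate = 20e9;   // 20 GT/s effective, typical of recent GDDR6
    double ddr_rate  = 4.8e9;  // 4.8 GT/s, a common DDR5 speed

    // A 256-bit GDDR6 bus is exactly 4x the throughput of a 64-bit one.
    std::printf("GPU 256-bit: %.0f GB/s\n", bandwidth_gbs(256, gddr_rate)); // 640
    std::printf("GPU 128-bit: %.0f GB/s\n", bandwidth_gbs(128, gddr_rate)); // 320

    // System RAM: dual-channel DDR5 is a 128-bit path at a far lower rate.
    std::printf("CPU 128-bit: %.0f GB/s\n", bandwidth_gbs(128, ddr_rate));  // ~77
    return 0;
}
```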
So you have high-bandwidth, high-latency memory on a wide bus feeding a very predictable set of many simple processors. When you want to load data into the GPU, you usually have to prepare it with the CPU and send it over the PCI bus. That is far too slow to actually use system RAM to augment GPU RAM: it’s slow in both latency and bandwidth, so if you tried, your GPU would sit idle like 80% of the time waiting on packets, and then it would only get a 64- or 128-bit packet from RAM at a time, not to mention the CPU overhead of constantly managing that memory in real time.
Having high RAM requirements wouldn’t be the worst thing in the world, because RAM is cheap and can really help some types of games that have large, complex worlds with lots of physics and things happening. Not optimizing for GPUs is pretty bad, though, especially with prices these days.

High RAM use won’t happen much, because games tend to be written in languages like C++ that manage memory in a very low-level way, so they tend to take only about as much as they need. One of the biggest reasons you use a language like C++ to write game engines is that you can decide how and when to allocate and free memory. This prevents stuttering: if the system is handling memory, you tend to get a good deal of stutter, because the CPU gets loaded for half a second here and there as the garbage collector tries to free 2GB of memory or something. This tends to make game engines very structured about the amount of memory they use. Since they are mostly trying to reuse code as much as possible, and are targeting consoles, they usually just aim for the amount of RAM they know they will have on consoles. Things like extra draw distance on PC can use more memory.
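That “allocate up front, never churn mid-frame” pattern is basically a pool allocator. A minimal sketch, with invented names and no relation to any particular engine:

```cpp
// Sketch: grab one big block at load time, hand out fixed-size slots,
// and never touch the system allocator (or a GC) during a frame.
#include <cstddef>
#include <new>
#include <vector>

template <typename T>
class Pool {
    union Slot { T value; Slot* next; };   // free slots form a linked list
    std::vector<Slot> storage;             // one allocation, made up front
    Slot* free_head = nullptr;

public:
    explicit Pool(std::size_t capacity) : storage(capacity) {
        for (std::size_t i = 0; i + 1 < capacity; ++i)
            storage[i].next = &storage[i + 1];
        storage[capacity - 1].next = nullptr;
        free_head = storage.data();
    }

    // O(1), no heap traffic: pop a slot off the free list.
    T* acquire() {
        if (!free_head) return nullptr;    // pool exhausted; budget it right
        Slot* s = free_head;
        free_head = s->next;
        return new (&s->value) T{};        // placement-new into the slot
    }

    // O(1): run the destructor and push the slot back.
    void release(T* p) {
        p->~T();
        Slot* s = reinterpret_cast<Slot*>(p);
        s->next = free_head;
        free_head = s;
    }
};

struct Projectile { float pos[3]; float vel[3]; };

int main() {
    Pool<Projectile> projectiles(4096);   // budgeted once, console-style
    Projectile* p = projectiles.acquire();
    projectiles.release(p);
}
```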
LODs can be generated in real time, but this is slow. You can do nearly anything with code; the question is whether it’s a good fit for your application. In a game engine every cycle is precious: you are updating the entire scene, moving all your data, preparing a frame, resolving all interactions, running scripts, and everything else in just over 16 ms for 60 FPS. The amount of data your PC processes in those 16 ms will blow your mind, with usually 3-12 passes in the renderer.

A very simple engine will draw a Z-buffer, determining the distance to the closest surface behind every pixel, and use that data to figure out what actually needs to be drawn. Then it resolves normals, basically figuring out whether each polygon faces towards or away from the player, which cuts out rendering a large share of polygons. Then the lighting data and everything else is combined with this and sent to the GPU, which goes through the list of polygons that need to be drawn, looks up the points to draw them, casts rays from light sources, and shades the scene. That’s a very simple pipeline, basically a Quake- or Doom-like game. Modern games are much more complex: they draw each frame many times with many different buffers, generating data in one pass and using it in the next.

Generating LODs in real time just isn’t done unless it’s needed for some reason, like dynamic or voxel terrain. In a game that is mostly static geometry, there’s no real reason to give up that compute time when you can just pregenerate them. Real-time LOD generation is quite a process: you have to load a region into memory, reduce its polygon count, downsize the texture, generate a new mesh and texture, and place it in the world, with back-and-forth between the GPU and CPU. Some games do it, though: 7 Days to Die, Space Engineers, Minecraft with a distant-terrain mod, and I’m sure many others generate LODs on another thread, but these are usually fairly low-quality meshes.
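For the threaded case, here’s a toy version of what those terrain games do: generate a half-resolution heightmap on a worker thread while the main loop keeps rendering. Names and structure are my own illustration:

```cpp
// Sketch: off-thread LOD generation for heightmap terrain.
// Each LOD halves the grid resolution by averaging 2x2 blocks of heights.
#include <cstddef>
#include <future>
#include <vector>

struct Heightmap {
    std::size_t size;            // grid is size x size
    std::vector<float> h;        // row-major heights
};

// Pure function: safe to run on a worker thread while the game renders
// the current LOD, since it only reads the input.
Heightmap downsample(const Heightmap& in) {
    Heightmap out{in.size / 2, {}};
    out.h.resize(out.size * out.size);
    for (std::size_t y = 0; y < out.size; ++y)
        for (std::size_t x = 0; x < out.size; ++x) {
            std::size_t sx = 2 * x, sy = 2 * y;
            out.h[y * out.size + x] =
                (in.h[sy * in.size + sx]       + in.h[sy * in.size + sx + 1] +
                 in.h[(sy + 1) * in.size + sx] + in.h[(sy + 1) * in.size + sx + 1]) / 4.0f;
        }
    return out;
}

int main() {
    Heightmap full{1024, std::vector<float>(1024 * 1024, 0.0f)};

    // Kick the work to another thread; a real main loop would poll with
    // wait_for() and swap the coarse mesh in when ready, instead of
    // stalling a 16 ms frame.
    std::future<Heightmap> coarse = std::async(std::launch::async,
                                               downsample, std::cref(full));
    Heightmap lod1 = coarse.get();
    return lod1.size == 512 ? 0 : 1;
}
```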
I’m still not convinced the engine is the problem. Maybe it’s not helping, sure, but heavy reliance on upscalers to achieve nominal performance is probably a bigger issue.
That, and shipping before proper optimization passes is probably more profitable in the short term, so publishers will push for that.
Yes, the engine could be used well, but it's used for its out-of-the-box "good" graphics, lighting and such. Which then, yes, devs slap shitty DLSS, frame generation or whatever on at the end to reach a somewhat playable framerate (or "framerate number" should I say, with the way things are going. Fuck you, Nvidia).
No developers are going to spend ages tweaking the engine to get good performance when people will just buy the game regardless. I've yet to see a well-performing UE5 game with good fidelity, and I probably never will, because the engine is entirely reliant on TAA since it uses deferred rendering as standard. I hate seeing developers abandon their own in-house engines just to swap to shitty UE5. I know, I know, it's all about the money...
The engine is a plague, as every developer is seemingly moving to it, chasing "upgraded" graphics that no one asked for. All games consolidating onto one engine is very bad.
It's good for movies, bad for games. Give us good raster performance back, no TAA, no upscaling, no frame gen.
I partly agree with you in that everyone using the same UE5 engine is bad
But I really don’t agree that deferred rendering techniques are inherently bad. Maybe they cause negative incentives for developers that lead to worse games in the long run, but you might as well blame capitalism at that point
I like techniques such as TAA because they work better as anti-aliasing in my experience. I’ve had the choice between TAA and traditional AA multiple times, and I think TAA simply does what it sets out to do better. Upscaling and frame generation are also nice-to-haves as optional features people can enable; sometimes I use them, sometimes not. It is bad that companies use these techniques as a crutch, indeed, but I don’t want to see them gone.
I really don't mind solutions like upscaling, but it should be for people with older hardware, so they can run newer games better.
Instead it is used as a crutch by developers to gain some "performance" out of their poorly made game (not blaming devs individually here; they are probably all overworked on titles like this and won't have much of a say in what tools or timeframe they get). You are right, it's a capitalism issue too.
TAA just looks like I have grease smeared over my monitor... The only acceptable AA for deferred rendering is SMAA, honestly, but I still think it's a misused technique in most cases; I have only seen a few games look good with it. Games with it usually have lots of visual flaws that they hope TAA smears over. But then you just get a blurry game.
I’m becoming more and more convinced it is the engine, honestly. It is probably harder to optimize, and devs don’t have enough time to do so, if I had to guess.
UE5 can run well, but all the defaults that Epic suggests devs use are really quite bad for performance. They improve performance on horribly unoptimized scenes, but actually optimizing the scene would allow a 10x performance improvement at no reduction in visual fidelity. But devs don’t tend to optimize much anymore because those Epic-suggested defaults “take care of optimization”.
It’s mostly not UE5 exactly. UE5 just lets devs easily turn on features that are performance hogs. Squad, for example, just upgraded from UE4 to UE5, but they took their time and did things in a smart way (like not using Lumen), and performance increased for a lot of people, with much higher detail too.
UE5 isn’t the issue. It’s devs who turn on all the features they can and ignore optimization because “the engine just handles it.” It’s got some really impressive technology, but it’ll ruin your game if you let it.
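For reference, most of those heavy defaults are just renderer settings. Something like the DefaultEngine.ini overrides below turns the big ones off; I’m going from cvar names I’ve seen in recent UE5 releases, so treat this as an assumption and verify names and values against your engine version:

```ini
; Illustrative overrides only: check each cvar against your UE5 version.
[/Script/Engine.RendererSettings]
; 0 = no dynamic GI, i.e. Lumen GI off
r.DynamicGlobalIlluminationMethod=0
; use screen-space reflections instead of Lumen reflections
r.ReflectionMethod=2
; classic shadow maps instead of virtual shadow maps
r.Shadow.Virtual.Enable=0
; FXAA instead of TAA/TSR, avoiding temporal accumulation
r.AntiAliasingMethod=1
```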
What? Some systems have worse performance, primarily if you don’t have enough VRAM, but artifacting and blur? What do you mean? Sure, there’s blur with TAA/FSR/DLSS, but that’s always true and can be toggled.
You can't toggle it: turn it off and you get loads of shimmering; leave it on and you get loads of blur. There's ghosting even without AA. This is the exact problem: there's no good implementation if you are relying on TAA and/or DLSS as anti-aliasing. Squad suffers from it, the same as any UE5 game.