I would expect that Gabe is trying to hedge his bets and make the company more of a co-op, where several key figures in the company, as well as himself, own the majority, so that there's accountability in what everyone decides.
That way, if someone's kid ends up inheriting stock in Valve, there's a way to block them from major decisions if there's a need to.
If that's indeed what's happening, then it's a very long-term play by Gabe. He's looking so far ahead that long after he's departed the company, the values that make Valve great (and successful) will endure.
I want to go back and play through the whole series again, but the nuances of the older games that were fixed in the more recent games always throw me.
I play on PC, and it's very, very obvious that kb/mouse was an afterthought for some of the games… I just hate doing FPS with a joystick/thumbstick.
Either way, I’ve redeemed this for all of my copies of Borderlands. So the next time I log in, I should have golden keys for days.
I was always planning to watch it when it went to streaming/home video kind of release. I rarely go to the theatre anymore.
I’m also waiting on the same for the new Deadpool + Wolverine movie…
Considering the little I’ve heard about it so far (trying to avoid spoilers), it seems like I should skip the Borderlands movie, but I’ll probably still watch it.
Well, previously, I had a mid tower on a large shelf above my TV, partly for this, but it was primarily built for VR. I still have my original Oculus CV1, from before Facebook bought them and then rebranded to Meta…
I ran away screaming when they started to force everyone to use their Facebook account to log into Oculus.
Since then it’s been on my mind to replace it. I recently moved house, and just didn’t set up the VR PC again. I want a cleaner look.
The old system was a full ATX board with an Intel Core i5 (4th gen, IIRC), 16 GB of RAM, and an 8 GB GTX 1080. Complete with some RGB and everything to be a bit more flashy, because it was in a windowed case above my TV… It was a bit of a showpiece.
Now, I just want something that's small, simple, and will do the job without too many compromises. I have an Xbox PC dongle (one of the old-school fat ones) and a small assortment of Xbox controllers. Recently, my partner and I wanted to do some couch gaming, so I dug it out and just plugged it into HDMI. It's sitting on the floor and we haven't played anything on it since. Wires everywhere.
When I replace it, I want something that won’t look out of place, and definitely little to no RGB stuff. It got annoying having it blinking and changing all the time, distracting from what I’m trying to watch.
The system is meant to be forgettable, just humming away in the background, ever ready to cater to whatever couch gaming whims I may have.
The TV is 4k, and I’d like to have enough power in it to play high res, especially for somewhat older or simpler titles.
Long term, I kind of want two (or at least a second system with similar performance), so for games that don’t do split screen, I can play on a smaller, closer screen, and my partner can take the TV. I’d have it wall mounted near my usual couch seat, with a display on an arm (also wall mounted). I imagine I’d use that for more than just games, since I’m not always a fan of juggling my laptop around trying to get comfortable on the couch.
The main problem I'm beating my head against is finding an adequate system that's not huge, doesn't look like shit in my living room, and has enough power to meet the demand.
I just want to install it and more or less forget about it until I want to play something. It should blend in, not stand out.
If I’m going that way, then I’m building a custom PC for the purpose and I’ve built enough PCs at this point that I’m not keen on building more… Especially when looks are a nontrivial point.
Finding a good looking small case that I can mount on the wall (or shallow shelf) seems like a gigantic headache. Plus building in such a case is probably a nightmare since it’s mostly built for looks, not for build friendliness.
I'm considering the Minisforum NUCXi7, to give you an idea. That whole thing is smaller than a GTX 1650 Super, and it has an RTX 3060/3070 built in, with a clean, minimal look. Not busy, but not boring.
It’s just… Basically impossible to buy, and/or afford.
I've been considering my options for a living room couch gaming/emulation system for a while. I want something with a mid-tier GPU for anything remotely modern that I might want to play there… My criteria for couch gaming vs desktop is whether it's easier/better to play the game with a controller… One example would be driving/flight sims, where the analog controls are generally better than mouse/kb… The only way desktop wins in that scenario is if you own a driving sim wheel/HOTAS for your desktop; otherwise, gamepad is generally better for the fine controls.
I don’t generally play a lot of driving/flight sims, but it’s a good example.
Anyways. The primary focus is on console-exclusive games via emulation, specifically retro stuff. SNES/N64/Genesis/etc. Maybe up to stuff as new as the Wii? IDK. But being able to play other PC titles would be helpful.
I'm thinking a fairly high clock speed, fairly recent CPU with a fair amount of memory. IMO, clock speed is more important, so the reaction time of the emulation is minimized. I don't think emulators can really take good advantage of multithreading.
My main issue is that any systems that fit the bill are super expensive. Something small/compact, with a high clock CPU, and something for graphics better than integrated… It’s not easy to find something like that for cheap.
It’s not something I would use all the time, so it’s not really very high on my priority list.
As stupid as it is, it doesn’t stop a creator from simply demonstrating issues, without commentary. Just show people the issues and don’t remark on them.
That being said, nobody should sign this. Trying to forbid people from making satirical remarks? What the crap?
The issue is the compression. There are hundreds of individual assets, and the process to compress, or more accurately, decompress the assets for use takes processor resources. Usually it only really needs to be done a few times when the game starts, when it loads the assets required. Basically, when you get to a loading screen, the game is unpacking the relevant assets from those dataset files. Every time the game opens one of those datasets, it takes time to create the connection to the dataset file on the host system, then unpack the index of the dataset, and finally go and retrieve the assets needed.
Two things about this process: first, securing access to the file and getting the index is a fairly slow process. Allocating anything takes significant time (relative to the other steps in the process) and accomplishes nothing except preparing to load the relevant assets. It’s basically just wasted time. The second thing is that compressed files are most efficient in making the total size smaller when there’s more data in the file.
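To make that load pattern a bit more concrete, here's a minimal sketch in Python, using the zipfile module as a stand-in for whatever dataset format a real game uses. The archive name and asset paths are made up for illustration; the point is that the expensive step is opening the file and reading its index, and once that's done, individual assets can be pulled out as needed.

```python
# Minimal sketch of "open once, read the index, pull what you need",
# with zipfile standing in for a game's asset dataset format.
# The archive name and asset paths are invented for illustration.
import zipfile

NEEDED_ASSETS = ["textures/wall_brick.dds", "textures/wall_concrete.dds"]

# The slow part: securing access to the file and unpacking its index.
with zipfile.ZipFile("level01_assets.pak") as dataset:
    index = set(dataset.namelist())           # the dataset's index of contents
    for name in NEEDED_ASSETS:
        if name in index:
            data = dataset.read(name)         # decompress just this one asset
            print(f"loaded {name}: {len(data)} bytes")
```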
Very basically, the simplest common compression, zip (aka "compressed folders" in Windows), looks through the files for repeating sections of data and replaces that repeated content with a reference to the original data. The reference is much smaller than the data it replaces. This can also be referred to as de-duplication. In this way, if you had a set of files that all contained mostly the same data, say text files with mostly the same repeating messages, the resulting compression would be very high (smaller size). This method works well for things like log files, since there are many repeating dates, times, and messages with only a few unique variances from line to line. This is an extremely basic description of one very common style of compression; it's certainly not the only way, and not necessarily the method being used here, or the only method being used.
If there's less content per compressed dataset file, there will be fewer opportunities for the compression to optimize the content to be smaller, so large datasets of similar data are preferable to smaller ones containing more diverse data.
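To make both of those points a little more concrete, here's a rough demo using Python's zlib (the same deflate family that zip uses); the data in it is obviously made up:

```python
# Rough demo of two points: repetitive data compresses far better than unique
# data, and one large blob of similar records compresses better than the same
# records split into many small, separately compressed pieces.
import os
import zlib

record = b"2024-01-01 12:00:00 INFO connection established from host\n"
log_blob = record * 10_000                 # highly repetitive "log file" style data
random_blob = os.urandom(len(log_blob))    # same size, but nothing repeats

print(len(zlib.compress(log_blob)))        # tiny: repeats get replaced by references
print(len(zlib.compress(random_blob)))     # roughly the original size: nothing to deduplicate

# Splitting the same repetitive data into many small pieces hurts the ratio,
# because each piece starts over with no earlier data to reference.
chunks = [log_blob[i:i + 1024] for i in range(0, len(log_blob), 1024)]
print(sum(len(zlib.compress(c)) for c in chunks))   # noticeably larger than the single blob
```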
That compression behavior, combined with the relatively long open times per file, means that programmers will want as few datasets as possible, both to keep the system from needing to open many files to retrieve the required data during load times, and to boost the efficiency of those compressed files to optimal levels.
If, for example, many smaller files were used, then yes, updates would be smaller. However, loading times could end up being doubled or tripled from their current timing. Given that you would, in theory, be loading data many times over (every time you load into a game or a map or something), compared to how frequently you perform updates, the right choice is to have updates take longer with more data required for download, so that when you get into the game, your intra-session loads may be much faster.
With the integration of solid state storage in most modern systems, loading times have also been dramatically reduced due to the sheer speed at which files can be locked, opened, and their data streamed into working memory, but it's still a trade-off that needs to be taken into account. This is especially true when considering releases on PC, since PCs can have wildly different hardware and not everyone is using SSDs or similar (fast) flash storage, whether because they're on older systems or because the end user simply prefers the less expensive space available from spinning-platter hard disks.
All of this must be counterbalanced to provide the best possible experience for the end user, and I assure you that all aspects of this process are heavily scrutinized by the people who designed the game. Often, these decisions are made early on so that the rest of the loading system can be designed around these concepts consistently and doesn't need to be reworked partway through the lifecycle of the game. It's very likely that even as systems and standards change, the loading system in the game will not, so if the game was designed with optimizations for hard disks (not SSDs) in mind, then that will not change until at least the next major release in that game's franchise.
What isn't really excusable is when the next game in a franchise gets a large overhaul, but the loading system (with all of its obsolete optimizations) is carried over into the more modern title, which is something I'm certain happens at most AAA studios. They reuse a lot of existing systems and code to reduce how much work is required to go from concept to release, and hopefully shorten the time (and effort) needed to get to launch. Such systems should be under scrutiny at all times whenever possible, to further streamline the process and optimize it for the majority of players. If that means the outlier customers trying to play the latest game on a WD Green spinning disk have a worse time because they haven't purchased an SSD, while the 90%+ of players who have at least a SATA SSD get the benefits of the newer load system, then so be it.
But I’m starting to cross over into my opinions on it a bit more than I intended to. So I’ll stop there. I hope that helps at least make sense of what’s happening and why such decisions are made. As always if anyone reads this and knows more than I do, please speak up and correct me. I’m just some guy on the internet, and I’m not perfect. I don’t make games, I’m not a developer. I am a systems administrator, so I see these issues constantly; I know how the subsystems work and I have a deep understanding of the underlying technology, but I haven’t done any serious coding work for a long long time. I may be wrong or inaccurate on a few points and I welcome any corrections that anyone may have that they can share.
I see stuff like this and I don't blame developers/coders for all the shit that's happening. If you objectively look at gameplay and such, most games are actually pretty decent on their own. The graphics are usually really nice, the story is adequate if not quite good, and the controls are sensible and responsive…
A lot of the major complaints about modern games aren't necessarily about what the devs are making; they're more about what the garbage company demands be done as part of the whole thing. Online-only single player is entirely about control: keeping you from pirating the game (or at least trying to), plus spying on you and serving you ads and such… Bad releases happen because stuff gets pushed out the door before it's ready, because the company needs more numbers for their profit reports, so things that haven't been given enough time and need more work get pushed onto paying customers. Day one patches are normal because between the time they seed the game to distributors like Valve and Microsoft and stuff, and the time the game unlocks for launch day, stuff is still being actively worked on and fixed.
The large game studios have turned the whole thing into a meat grinder to pump money out of their customers as much and as often as possible, and they've basically ruined a lot of the simple expectations for game releases, like having a game that works, performs adequately, doesn't crash, and doesn't need huge extras (like updates) to work on day 1…
Developers themselves aren’t the problem. Studios are the problem and they keep consolidating into a horrible mass of consumer hostile policies.
For modern games, from what I've seen, they've taken a more modular approach to how assets are saved. So you'll have large data files which are essentially full of compressed textures or something. Depending on how many textures you're using and how many versions of each texture are available (for different detail levels), it can be a lot of assets, even if all the assets in the file are wall textures, as an example.
So the problem becomes that the updaters/installers are not complex enough to update a single texture inside a single compressed texture dataset file. So the solution is to instead replace the entire dataset with one that contains the new information. So while you're adding an item or changing how something looks, you're basically sending not only the item, but also all similar items (everything in the same set) again, even though 90% of it didn't change. The files can easily reach into the tens of gigabytes in size due to how many assets are needed. Adding a map? The dataset file for all maps needs to be sent. Adding a weapon or changing the look/feel/animation of a weapon? Here's the entire weapon dataset again.
Though not nearly as horrible, the same can be said for the libraries and executable binaries of the game logic. This variable was added, well, here’s that entire binary file with the change (not just the change). Binaries tend to be a lot smaller than the assets so it’s less problematic.
The entirety of the game content is likely stored in a handful (maybe a few dozen at most) dataset files, so if any one of them change for any reason, end users now need to download 5-10% of the installed size of the game, to get the update.
Is there a better way? Probably. But it may be too complex to accomplish. Basically, write a small patching program to unpack the dataset, replace/insert the new assets, then repack it. It would reduce the download size, but increase the amount of work the end user's system needs to do for the update, which may or may not be viable depending on the system you've made the game for. PC games could support it, but what happens if you're shipping across PC, Xbox, PlayStation, and Nintendo Switch? Do those consoles allow your game the read/write access it needs to the storage to do the unpacking and repacking? Do they have the space for that?
It becomes a risk, and doing it the way they are now, if you have enough room to download the update, then no more space is needed, since the update manager will simply copy the updated dataset entirely over the old one.
It's a game of choices and variables, risks and rewards. Developers definitely don't want to get into the business of custom updates per platform based on capabilities, so they have to find a solution that works for everyone who might be running the game. The current solution wastes bandwidth, but has the merit of being cross-compatible and consistent. The process is the same for every platform.
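For what it's worth, here's a very rough sketch of the unpack/replace/repack idea from a few paragraphs up, again with Python's zipfile standing in for a real dataset format. The file names are made up, and a real patcher would also need integrity checks, versioning, and so on.

```python
# Very rough sketch of a client-side patcher: carry over the unchanged assets
# from the old dataset, then drop in the new/replaced assets from a small patch
# archive. zipfile stands in for whatever format the game actually uses,
# and the file names are invented.
import zipfile

def apply_patch(old_path: str, patch_path: str, out_path: str) -> None:
    with zipfile.ZipFile(old_path) as old, \
         zipfile.ZipFile(patch_path) as patch, \
         zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as out:
        changed = set(patch.namelist())
        # keep every asset the patch doesn't touch
        for name in old.namelist():
            if name not in changed:
                out.writestr(name, old.read(name))
        # then write the new or replaced assets from the small patch file
        for name in changed:
            out.writestr(name, patch.read(name))

apply_patch("old_assets.pak", "patch.pak", "new_assets.pak")
```

Note that this needs enough free space to hold the old and new datasets at the same time, plus the CPU time to recompress everything, which is exactly the kind of constraint on consoles I mentioned above.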
All I know is that I own both D3 and D4 (as well as others), and I’m not playing either.
I played through the story of D4, and started a seasonal character and everything and I just stopped playing.
My main gripe is that my character rarely feels powerful. With level scaling in D4, enemies are consistently at or above my level. I level up, and nothing changes, the enemies level up with me. I might as well not be leveling, just unlocking my more advanced abilities… the only time I feel the difference in power between me and my enemies is when they flatten me without effort. Then I realize they’re 3-5 levels above me, elite, and I’m like, oh yeah, that makes sense.
Basically, I'm almost never a higher level than my enemies. I'm always the same level or significantly lower, so I have to be some kind of expert to dodge everything they throw at me, and I'm just trying to play a dumb game.
I switched to something else where I can pick the difficulty, and I play on the easier modes. I'm not playing games to get clobbered all the time; I just want to kill some stuff, do the things that need doing, get my dopamine hit, and move on. D4 is a constant struggle. It gives me anxiety.