Headsets in the thousand-dollar range are plenty good and still not selling. Take the hint. Push costs down. Cut out everything that is not strictly necessary. Less Switch, more Game Boy.
6DOF inside-out tracking is required, but you can get that from one camera and an orientation sensor. Is it easy? Nope. Is it tractable for any of the companies already making headsets? Yes, obviously. People want pick-up-and-go immersion. Lighthouses were infrastructure and Cardboard was not immersive. Proper tracking in 3D space has to Just Work.
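For the flavor of it, here's a toy of the camera half: given a handful of landmarks whose 3D positions are already known (real inside-out tracking also has to discover and map those points, and lean on the IMU for orientation between frames), a single frame pins down all six degrees of freedom. Everything below is synthetic, the numbers are made up, and it just leans on OpenCV's stock solvePnP.

```python
# Toy demo: 6DOF pose from ONE camera frame, given known 3D landmarks.
# This is the easy half of inside-out tracking - the hard half is building
# the landmark map in the first place - and the IMU's job is orientation
# and smoothing between camera frames. All values here are made up.
import numpy as np
import cv2

rng = np.random.default_rng(0)

# A handful of known landmarks scattered around the "room", in meters.
landmarks = rng.uniform([-1, -1, 2], [1, 1, 4], size=(12, 3))

# Ground-truth headset pose we will try to recover.
rvec_true = np.array([0.05, -0.10, 0.02])   # axis-angle rotation
tvec_true = np.array([0.10, -0.05, 0.30])   # translation, meters

# A plausible 640x480 pinhole camera, no lens distortion.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# What the camera would actually see from that pose, plus detector noise.
pixels, _ = cv2.projectPoints(landmarks, rvec_true, tvec_true, K, dist)
pixels += rng.normal(scale=0.5, size=pixels.shape)

# Recover the full 6DOF pose from the single frame.
ok, rvec, tvec = cv2.solvePnP(landmarks, pixels, K, dist)
print("recovered translation:", tvec.ravel())
print("true translation:     ", tvec_true)
```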
Latency is intolerable. Visual quality, scene detail, shader complexity - these are nice back-of-the-box boasts. Instant response time is do-or-die. Some monocular 640x480 toy with rock-solid 1ms latency would feel more real than any ultrawide 4K pancake monstrosity that’s struggling to maintain 10ms.
Two innovations could make this painless.
One, complex lenses are a hack around flat lighting. Get rid of the LCD backlight and use one LED. That makes the ray diagram nearly trivial. Only the point light source needs to be far from the eye. The panel and its single lens can be right in your face. Or - each lens can be segmented. The pyramid of light from each point source gets smaller, and everything gets thinner. At some point the collection of tiny projectors looks like a lightfield, which is what we should pursue anyway.
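To put rough numbers on "everything gets thinner" - this is a bare similar-triangles model with made-up figures, nothing more - the depth a point source needs behind the panel scales with the width it has to cover, so splitting the panel into N little projectors cuts that depth by N:

```python
# Back-of-the-envelope for the shrinking-pyramid claim. A point source
# covering a panel of width w with cone half-angle theta sits roughly
# w / (2 * tan(theta)) behind it (similar triangles). Split the panel into
# N segments, each with its own source, and that depth drops by N.
# All numbers below are invented for illustration.
import math

panel_width_mm = 50.0     # hypothetical eyebox-sized panel
half_angle_deg = 10.0     # hypothetical illumination cone per source

def source_depth(width_mm, half_angle_deg):
    return width_mm / (2.0 * math.tan(math.radians(half_angle_deg)))

for segments in (1, 2, 5, 10):
    depth = source_depth(panel_width_mm / segments, half_angle_deg)
    print(f"{segments:2d} source(s): each needs ~{depth:5.1f} mm of depth")
```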
Two, an intermediate representation can guarantee high performance, even if the computer chokes. It is obviously trivial to throw a million colored dots at a screen. Dice up a finished frame into floating paint squares, and an absolute potato can still rotate, scale, and reproject that point-cloud, hundreds of times per second. But flat frames are meant for flat screens. Any movement at all reveals gaps behind everything. So: send point-cloud data, directly. Do “depth peeling.” Don’t do backface culling. Toss the headset a version of the scene that looks okay from anywhere inside a one-meter cube. If that takes longer for the computer to render and transmit… so what? The headset’s dinky chipset can show it more often than your godlike PC, because it’s just doing PS2-era rendering with microsecond-old head-tracking. The game could crash and you’d still be wandering through a frozen moment at 100, 200, 500 Hz.
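For a sense of how little the headset-side job is, here's a sketch of that reprojection loop - not anyone's actual spec, just the shape of it, with a placeholder resolution and a random cloud standing in for whatever the PC would stream:

```python
# Sketch of the headset-side work: reproject a colored point cloud with the
# freshest head pose. No shading, no geometry processing - just transform,
# project, and keep the nearest point per pixel. Resolution, focal length,
# and the cloud itself are placeholders.
import numpy as np

W, H, FOCAL = 640, 480, 400.0   # toy eyepiece resolution and focal length

def reproject(points, colors, head_rotation, head_position):
    """points: (N,3) world-space, colors: (N,3) uint8, pose: R (3,3), t (3,)."""
    # World -> eye space with the latest tracking sample.
    cam = (points - head_position) @ head_rotation.T
    in_front = cam[:, 2] > 0.05
    cam, col = cam[in_front], colors[in_front]

    # Pinhole projection to pixel coordinates.
    u = (FOCAL * cam[:, 0] / cam[:, 2] + W / 2).astype(np.int32)
    v = (FOCAL * cam[:, 1] / cam[:, 2] + H / 2).astype(np.int32)
    on_screen = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    u, v, z, col = u[on_screen], v[on_screen], cam[on_screen, 2], col[on_screen]

    # Painter's sort: splat far-to-near so nearer points overwrite farther ones.
    order = np.argsort(-z)
    frame = np.zeros((H, W, 3), dtype=np.uint8)
    frame[v[order], u[order]] = col[order]
    return frame

# A million random colored dots, viewed from an identity "head pose".
rng = np.random.default_rng(1)
pts = rng.uniform([-2, -2, 1], [2, 2, 6], size=(1_000_000, 3))
cols = rng.integers(0, 256, size=(1_000_000, 3), dtype=np.uint8)
frame = reproject(pts, cols, np.eye(3), np.zeros(3))
print(frame.shape, frame.dtype)
```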
Programs bounce around between a ton of different code segments, and it doesn’t really matter how they’re arranged within the binary. Some code even winds up repeated, when repetition is more efficient than jumping back and forth or checking a short loop. It doesn’t matter where the instructions are, so long as they do the right thing.
This machine code still tends to be clean, tight, and friendly toward reverse-engineering… relatively speaking. Anything more complex than addition is an inscrutable mess to people who aren’t warped by years of computer science, but it’s just a puzzle with a known answer, and there’s decades of tools for picking things apart and putting them back together. Scene groups don’t even need to unravel the whole program. They’re only looking for tricky details that will detect pirates and frustrate hackers. Eventually, they will find and defeat those checks.
So Denuvo does everything a hundred times over. Or a dozen. Or a thousand. Random chunks of code are decompiled, recompiled, transpiled, left incomplete, faked entirely, whatever. The whole thing is turned into a hot mess by a program that knows what each piece is supposed to be doing, and generally makes sure that’s what happens. The CPU takes a squiggly scribbled path hither and yon but does all the right things in the right order. And sprinkled throughout this eight-ton haystack are so many more needles, any of which might do slightly different things. The “attack surface” against pirates becomes enormous. They’ll still get through, eventually, but a crack delayed is a crack denied.
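Denuvo’s actual transforms are proprietary, so take this only as a cartoon of the general technique (control-flow flattening plus redundant copies): a dispatcher hops between shuffled, partially duplicated blocks and still lands on the same answer.

```python
# Toy illustration of the general idea - Denuvo's real transforms are
# proprietary and far nastier. A straight-line function gets rewritten as a
# dispatcher hopping between shuffled, partially duplicated blocks.
# Same answer, scribbly path.
import random

def plain(x):
    x = x + 7
    x = x * 3
    x = x ^ 0xBEEF
    return x

# Each block does one step and names its successor. Blocks 'a2' and 'b2'
# are redundant copies of the same multiply, chosen at random per run.
BLOCKS = {
    "a1": (lambda x: x + 7,      lambda: random.choice(["a2", "b2"])),
    "a2": (lambda x: x * 3,      lambda: "a3"),
    "b2": (lambda x: x * 3,      lambda: "a3"),   # twin of a2
    "a3": (lambda x: x ^ 0xBEEF, lambda: None),
}

def obfuscated(x):
    label = "a1"
    while label is not None:
        work, successor = BLOCKS[label]
        x = work(x)
        label = successor()
    return x

assert plain(123) == obfuscated(123)
print(plain(123), obfuscated(123))
```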
Unfortunately for us, this also fucks up the reason computers are fast now.
Back in the single-digit-megahertz era, this would’ve made no difference to anything, besides requiring more RAM for these bloated executables. 8- and 16-bit processors just go where they’re told and encounter each instruction by complete surprise. Intel won the 32-bit era by cranking up clock speeds, which quickly outpaced RAM response times, leading to hideously clever cache-memory use inside the CPU itself. Cache layers nowadays are a major part of CPU cost and an even larger part of CPU performance. Data that’s read early and kept nearby can make an instruction take one cycle instead of one thousand.
Sending the program counter on a wild goose chase across hundreds of megabytes guarantees you’re gonna hit those thousand-cycle instructions. The next instruction being X=N+1 might take literally no time, if it lands near a non-math instruction and the pipeline has room for it. But if you have to jump to that instruction and back, it’ll take ages. Maybe an entire microsecond! And if it never comes back - if it jumps to another copy of the whole function, and from there to parts unknown - those microseconds can become milliseconds. A few dozen of those in the wrong place and your water-cooled demigod of a PC will stutter like Porky Pig. That’s why Denuvo in practice just plain suuucks. It is a cache-defeat algorithm. At its pleasure, and without remedy, it will give paying customers a glimpse of the timeline where Motorola 68000s conquered the world. Hit a branch and watch those eight cores starve.
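The instruction-cache half of that is hard to demonstrate from a high-level language, but the data-cache version of the same principle is easy to see: identical work over identical bytes, visited in order versus in a scrambled order that defeats the cache and the prefetcher. A rough sketch, with made-up sizes:

```python
# Same principle, demoed on the data cache: the same elements get summed
# twice, but one pass walks memory sequentially and the other walks it in a
# random order that the cache hierarchy and prefetcher cannot help with.
import time
import numpy as np

N = 1 << 24  # ~16 million doubles, ~134 MB - far bigger than any cache
data = np.ones(N, dtype=np.float64)
in_order = np.arange(N)
scrambled = np.random.default_rng(0).permutation(N)

def timed_sum(indices):
    start = time.perf_counter()
    total = data[indices].sum()
    return total, time.perf_counter() - start

_, t_seq = timed_sum(in_order)
_, t_rand = timed_sum(scrambled)
print(f"sequential gather: {t_seq:.3f}s   random gather: {t_rand:.3f}s")
```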
And screw anyone going ‘but then how money?!’ while it infects billion-dollar business models. There’s no amount of money you can pay where greedy suits won’t imagine taking your money and selling your eyeballs.
If you could leave, you’d never be trapped in a long game. You would enjoy every long game. The ones that suck wouldn’t last.
Root problem: the game requires a fixed number of human players, from start to finish. If bots worked then you could just take the L and quit. Or safely eject someone who’s being a total cock. Or possibly even split the game in two, so both the “fuck this” and “fuck you” groups see everyone else replaced with bots.
Bots don’t have to be good with every character. Bots don’t even have to play by the same rules as humans. They just need to be balanced. Which you’d figure these developers are really really good at, after fifteen years of pouring new characters into these games.
Individual scoring would be almost as powerful. A high-level player with a low-level team should ideally be scored on their skill - not a binary win / lose condition. Especially if half the players are guaranteed to lose. Long matches provide oodles of time to evaluate. And if bots work at all, the game can quietly run simulations from snapshots of the ongoing match - checking if players did better or worse than a player-like script would, and by how much.
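Nothing ships with hooks like this, so the snippet below is only the shape of the idea, with hypothetical simulate and evaluate stand-ins: snapshot the match every so often, let a baseline bot play one seat forward from each snapshot, and score the human by how the real timeline compares.

```python
# The shape of the idea, nothing more. Snapshot the match periodically, let
# a baseline bot play forward from each snapshot in one player's seat, and
# credit or debit that player by how the real timeline compares. The
# `simulate` and `evaluate` hooks are hypothetical engine-provided callbacks.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Snapshot:
    time: float      # minutes into the match
    state: dict      # whatever the engine can serialize; here, seat -> score

def counterfactual_edge(snapshots, player_id, simulate, evaluate):
    """Average of (real outcome - bot outcome) across snapshot intervals.
    Positive means the player outplayed the baseline, win or lose."""
    edges = []
    for earlier, later in zip(snapshots, snapshots[1:]):
        bot_state = simulate(earlier.state, player_id, earlier.time, later.time)
        edges.append(evaluate(later.state, player_id)
                     - evaluate(bot_state, player_id))
    return mean(edges) if edges else 0.0

# Toy stand-ins so this runs: state is just seat -> score, the baseline bot
# earns a flat 10 points per minute, and evaluate reads the seat's score.
def toy_simulate(state, player_id, t_from, t_to):
    out = dict(state)
    out[player_id] += 10.0 * (t_to - t_from)
    return out

def toy_evaluate(state, player_id):
    return state[player_id]

history = [Snapshot(0.0, {"p1": 0.0}),
           Snapshot(10.0, {"p1": 130.0}),
           Snapshot(20.0, {"p1": 220.0})]
print(counterfactual_edge(history, "p1", toy_simulate, toy_evaluate))
```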
Compare sports. You have a regulation basketball game. On one side is the 2023 Miami Heat, minus Jimmy Butler. On the other side you have the AZ Compass Prep Dragons, plus Jimmy Butler. The Dragons’ chances of winning are approximately diddly over fuck. But a talent scout watching those high-schoolers get smoked 132-15 can still recognize which of them are doing especially well under the circumstances. And Erik Spoelstra can still give Tyler Herro side-eye for ever missing a free throw. Despite a blowout loss, every individual can be judged for how they played, both in terms of independent actions and productive teamwork. (This new kid at Arizona, Jimmy something-or-other, is really good.)
Yet in a video game - where every moment can be scrutinized frame-by-frame, and statistical analysis is so easy you’d think this was baseball - there is only total victory and utter defeat, and only for whole teams. Everything from Smash Bros to Overwatch has little trophies to hand out for leading performance in a bunch of arbitrary details. So why doesn’t a loss caused by one feeding troll count as 90% of a win for the players who almost eked it out in spite of them?
More importantly: why doesn’t the game make it feel like they were doing good, when they were doing great?
I propose a Red Faction retro spinoff. Cash in on the underused franchise and the modern boom-shoot glut by doing a voxel-based game where everything, and I mean everything, is destructible. Like if Teardown was a setpiece-heavy FPS pretending to be from the Delta Force / Outcast era. Low fidelity keeps costs down, the genre is weirdly underused for all its indie-demo examples, and if the immersive sim curse kills any sequels then they’re only back to square one.
As always: if leaving or sucking ruins a game for everyone else, your game is badly designed.
Only MOBAs have this level of toxicity. All MOBAs have this problem. Maybe lashing strangers together for forty-five minutes, in a zero-sum contest where half of them will lose, with so much interdependence and complexity that nobody feels responsible, is not great for the human psyche.
You can’t even kick someone. Losing them for any reason ruins the game. You have to tough it out, for most of an hour, after waiting however long just to start the game, and the inevitable loss will still count against you. No kidding people get wound up.
We already had that - it’s called “early access.” But people gunked that up, so they have to roll along the euphemism treadmill, and make a fancier name for paying extra to get an incomplete game.