Programs bounce around between a ton of different code segments, and it doesn’t really matter how they’re arranged within the binary. Some code even winds up repeated, when repetition is more efficient than jumping back and forth or checking a short loop. It doesn’t matter where the instructions are, so long as they do the right thing.
This machine code still tends to be clean, tight, and friendly toward reverse-engineering… relatively speaking. Anything more complex than addition is an inscrutable mess to people who aren’t warped by years of computer science, but it’s just a puzzle with a known answer, and there’s decades of tools for picking things apart and putting them back together. Scene groups don’t even need to unravel the whole program. They’re only looking for tricky details that will detect pirates and frustrate hackers. Eventually, they will find and defeat those checks.
So Denuvo does everything a hundred times over. Or a dozen. Or a thousand. Random chunks of code are decompiled, recompiled, transpiled, left incomplete, faked entirely, whatever. The whole thing is turned into a hot mess by a program that knows what each piece is supposed to be doing, and generally makes sure that’s what happens. The CPU takes a squiggly scribbled path hither and yon but does all the right things in the right order. And sprinkled throughout this eight-ton haystack are so many more needles, any of which might do slightly different things. The “attack surface” against pirates becomes enormous. They’ll still get through, eventually, but a crack delayed is a crack denied.
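To make that concrete, here’s a toy C sketch of the general idea (my own illustration, not Denuvo’s actual scheme, and the function names are made up): the same license check exists as several independently mangled copies, and the program picks one through an indirect call, so the result never changes but the path through the binary does.

```c
/* Toy illustration of "many transformed copies of the same check".
 * Not Denuvo's real scheme; just the general shape of the trick. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Three independently mangled copies of the same test. */
static int check_v0(uint32_t key) { return key == 0xC0FFEEu; }

static int check_v1(uint32_t key) {
    /* same test, hidden behind an XOR transformation */
    return (key ^ 0xDEADBEEFu) == (0xC0FFEEu ^ 0xDEADBEEFu);
}

static int check_v2(uint32_t key) {
    /* same test again, smeared across a pointless loop */
    uint32_t acc = 0;
    for (int i = 0; i < 32; i++) acc |= key & (1u << i);
    return acc == 0xC0FFEEu;
}

static int (*const checks[])(uint32_t) = { check_v0, check_v1, check_v2 };

int main(void) {
    srand((unsigned)time(NULL));
    uint32_t key = 0xC0FFEEu;      /* pretend this came from the license blob */
    int which = rand() % 3;        /* a real protector scatters hundreds of these */
    printf("variant %d says: %s\n", which,
           checks[which](key) ? "genuine" : "pirated");
    return 0;
}
```

A cracker now has to find and neuter every variant (or the dispatch itself), and the real thing does this at a vastly larger scale, to whole swaths of the game rather than one function.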
Unfortunately for us, this also fucks up the very thing that makes computers fast now.
Back in the single-digit-megahertz era, this would’ve made no difference to anything, besides requiring more RAM for these bloated executables. 8- and 16-bit processors just go where they’re told and encounter each instruction by complete surprise. Intel won the 32-bit era by cranking up clock speeds, which quickly outpaced RAM response times, leading to hideously clever use of cache memory inside the CPU itself. Cache layers nowadays are a major part of CPU cost and an even larger part of CPU performance. Data that’s read early and kept nearby can make an instruction take one cycle instead of one thousand.
Sending the program counter on a wild goose chase across hundreds of megabytes guarantees you’re gonna hit those thousand-cycle instructions. The next instruction being X=N+1 might take literally no time, if it happens near a non-math instruction and the pipeline has room for it. But if you have to jump to that instruction and back, it’ll take ages. Maybe an entire microsecond! And if it never comes back - if it jumps to another copy of the whole function, and from there to parts unknown - those microseconds can become milliseconds. A few dozen of those in the wrong place and your water-cooled demigod of a PC will stutter like Porky Pig. That’s why Denuvo in practice just plain suuucks. It is a cache defeat algorithm. At its pleasure, and without remedy, it will give paying customers a glimpse of the timeline where Motorola 68000s conquered the world. Hit a branch and watch those eight cores starve.
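If you want to see the gap for yourself, here’s a rough C sketch (buffer sizes and step counts are arbitrary picks of mine, nothing measured from Denuvo): chase pointers through a 32 KB working set that lives happily in L1, then through a 512 MB one that misses every cache level. On a typical desktop the cost per load jumps from around a nanosecond to somewhere north of a hundred.

```c
/* Rough demo of cache-miss cost: serial pointer chasing over a small
 * working set (fits in L1) vs a huge one (misses every cache level).
 * Absolute numbers vary by machine; the ratio is the point. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static uint64_t rng_state = 88172645463325252ull;
static uint64_t xorshift(void) {            /* tiny PRNG, good enough here */
    rng_state ^= rng_state << 13;
    rng_state ^= rng_state >> 7;
    rng_state ^= rng_state << 17;
    return rng_state;
}

static double ns_per_load(size_t n, size_t steps) {
    size_t *next = malloc(n * sizeof *next);
    for (size_t i = 0; i < n; i++) next[i] = i;
    /* Sattolo shuffle: one big cycle, so the prefetcher can't guess ahead */
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = xorshift() % i;
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }
    struct timespec a, b;
    clock_gettime(CLOCK_MONOTONIC, &a);
    size_t p = 0;
    for (size_t s = 0; s < steps; s++) p = next[p];   /* each load waits for the last */
    clock_gettime(CLOCK_MONOTONIC, &b);
    double ns = (b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec);
    free(next);
    return ns / (double)steps + (p == (size_t)-1);    /* keep p live */
}

int main(void) {
    printf("~%.1f ns/load, 32 KB working set\n",  ns_per_load(4096, 20000000));
    printf("~%.1f ns/load, 512 MB working set\n", ns_per_load(1u << 26, 20000000));
    return 0;
}
```

The second number is the world Denuvo drags you into every time a check lands on a cold cache line.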
Unfortunately, increasing cache seems to be the direction things are going, what with AMD’s 3D V-Cache and Apple moving RAM closer to the CPU.
So Denuvo could actually get away with it by just pushing the problem onto platforms. Ideally, this would discourage this type of DRM, but it’ll probably just encourage more PC upgrades.
I wouldn’t be surprised if we end up with RAM-less systems soon. A lot of programs don’t need much more memory than the cache sizes already available. Things like Electron bloat memory use through the roof, but even then it’s likely just a gigabyte or two. CPUs will have that much cache eventually. The few applications that really need tons of memory could be offloaded to a really fast SSD, which is already becoming the standard. I imagine we’ll see it in phones or tablets first, where multitasking isn’t as much of a thing and physical space is at a premium.
That’s just not true; here are a few off the top of my head:
video games
docker containers
web browsers
productivity software
RAM is actually the one resource I run out of in my day-to-day work as a software developer, and I get close on my gaming PC. I have a really fast SSD in my work computer (MacBook Pro) and my Linux gaming PC (some fast NVMe drive), and both grind to a halt when I start swapping (Linux seems to handle it better imo). So no, I don’t think SSDs are enough by any stretch of the imagination.
If anything, our need for high performance RAM is higher today than ever! My SIL just started a graphics program (graphic design or UI/UX or something), so I advised her to prioritize a high amount of RAM over a high number of CPU/GPU cores because that’s how important RAM is to the user experience when deadlines approach.
Large CPU caches are great, but I don’t think you can really compensate for low system memory by having large caches and a fast SSD. What is obvious, though, is that memory latency and bandwidth are an issue, so I could see more Apple-style soldered NAND next to the CPU in the coming board revisions, which isn’t great for DIY systems. NAND modules are just so much cheaper to manufacture than CPU cache, and they’re also sensitive to heat, so I don’t think embedding them on the CPU die is a great long-term solution. I would prefer to see GPU-style memory modules either around or behind the CPU, soldered onto the board, before we see on-die caches with multiple GB of capacity.
Well, you’re right that it’s not practical now. By “soon” I was thinking more like 10+ years from now. And as I said, it would likely start in systems that aren’t used for those applications anyway (aside from web browsers, which use way more RAM than necessary). By the time it takes over the applications you listed, we’ll have caches as big as our current RAM. And I’m using a loose definition of cache; I really just mean on-package memory of some kind. And we probably will see that GPU-style memory before it’s fully integrated.
It’s already sort of a thing in embedded processors, such as ARM SoCs where the RAM is glued to the top of the CPU package (I think the OG Raspberry Pi did that). But current desktop-class CPUs run way too hot for that to work, so the RAM is separate.
I could maybe see it be a thing in kiosks and other limited purpose devices (smart devices, smart watches, etc), but not for PCs, servers, or even smart phones, where we expect a lot higher memory load/multitasking.
“I actually believe Cyberpunk on launch was way better than it was received, and even the first reviews were positive,” he concludes. “Then it became a cool thing not to like it.”
How are you planning to fix your image when you’re still saying shit like this?
I don’t think he’s completely wrong. A lot of people felt similarly. I know SkillUp, for one, said that if you had a really good PC and could overlook the (admittedly unforgivable) bugs, it had a lot going for it.
I knew almost nothing about the game and went into it completely without assumptions or preconceptions. I played it immediately at launch on XBone and didn’t stop for a few hundred hours of total game time. I was completely blown away.
Did it crash here and there? Was it lurchy and buggy? Did bikes sometimes get stuck in the pavement like it was sand? Did you wind up smashing an unconscious person’s head through the earth a la “Rock Bottom” every 4 or 5 times you tried to be sneaky? Yeah.
Despite that, was it one of the greatest games I’ve ever played? Fuck yeah.
People had genuine problems and a game should never launch in the state that CP77 was in, but I completely agree with him that it became cool to rip on the game.
Same for me, basically played it for 100 hours straight with as little sleep as possible… yeah it was buggy and the story was rushed in some places, but the overall experience was great for me.
Somehow I was convinced that Ratchet and Clank: Rift Apart was Insomniac’s first always-on ray-tracing game and that non-raytraced graphics had been added to the PC port, but I was completely wrong.
Yeah, I feel the same about the events. But there are only 2 “pro” tracks (White Land and Port Town 2), and I think they’ve dropped the frequency of the full Grand Prix now with the Miniprix (which also includes White Land and Port Town 2 as its final race)… which means I still haven’t played on Silence (I tend to be a bit too aggressive and end up blowing up on the 3rd or 4th track of a Grand Prix).
That’s still a total rotation of 6 tracks (I’m not counting Silence because it’s hard to get there), and most of the time you’re playing on only 2 of them.
I mean even without inventing new F-Zero content, there are other Mode-7-style F-Zero games (that nobody played, but they still exist). It wouldn’t ruin my nostalgia to play tracks from BS F-Zero GP2 or the GBA F-Zero games. But yeah, you can’t tell me splatting some tiles down for new tracks would be that hard… I’m sure this is a game where you could even procedurally generate the tracks pretty well and eliminate the “memorization” aspect. I’d love that as an “event mode”.
If we’re talking stuff on F-Droid, the big one there for me is UnCiv. It’s an excellent fully-free reimplementation of Civ V… with all the nightmarish one-more-turn-oh-God-is-it-dawn addictive problems that implies. Only real flaw is that by adapting Civ V, it also adapts Civ V’s big flaw: traffic jams. UnCiv units neither stack nor combine, so waging war in an obstacle-rich landscape is hellishly tedious. Also the higher difficulties feel just abusively random and unfair because the hard-level AIs get free resources, but that’s normal for a Civ game.
“Okay, I know you weren’t too happy about my plan to retroactively charge you for every mile you drive in the car I already sold you… What if I told you, I’ll only charge you for a maximum of… uh… 30,000 miles per year?”
For what it’s worth, I am one of these people. I’d already watched a couple of streamers play random sidequests, but when I saw the early game I just couldn’t stomach playing any further.