mindbleach,

I admire the concept behind Denuvo.

Programs bounce around between a ton of different code segments, and it doesn’t really matter how they’re arranged within the binary. Some code even winds up repeated, when repetition is more efficient than jumping back and forth or checking a short loop. It doesn’t matter where the instructions are, so long as they do the right thing.
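A trivial example of that kind of duplication, written in C rather than machine code: a compiler is free to emit either of these for the same source, and often picks the unrolled one because skipping the loop bookkeeping is faster.

```c
// Two ways to zero four ints. A compiler will happily turn the loop into
// the unrolled version when repetition beats loop bookkeeping, so the same
// logic can show up duplicated in the binary. Both do exactly the same thing.
void zero_loop(int *p) {
    for (int i = 0; i < 4; i++)
        p[i] = 0;        // jump back and re-check the counter every pass
}

void zero_unrolled(int *p) {
    p[0] = 0;            // no counter, no branch,
    p[1] = 0;            // just four stores in a row
    p[2] = 0;
    p[3] = 0;
}
```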

This machine code still tends to be clean, tight, and friendly toward reverse-engineering… relatively speaking. Anything more complex than addition is an inscrutable mess to people who aren’t warped by years of computer science, but it’s just a puzzle with a known answer, and there’s decades of tools for picking things apart and putting them back together. Scene groups don’t even need to unravel the whole program. They’re only looking for tricky details that will detect pirates and frustrate hackers. Eventually, they will find and defeat those checks.

So Denuvo does everything a hundred times over. Or a dozen. Or a thousand. Random chunks of code are decompiled, recompiled, transpiled, left incomplete, faked entirely, whatever. The whole thing is turned into a hot mess by a program that knows what each piece is supposed to be doing, and generally makes sure that’s what happens. The CPU takes a squiggly scribbled path hither and yon but does all the right things in the right order. And sprinkled throughout this eight-ton haystack are so many more needles, any of which might do slightly different things. The “attack surface” against pirates becomes enormous. They’ll still get through, eventually, but a crack delayed is a crack denied.
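This is not Denuvo’s actual output, just a hand-written sketch of the flavor: one check written three equivalent ways, with the variant picked at run time so there’s no single instruction to patch. A real protector generates hundreds of machine-code variants and scatters them across the binary. The key constants and function names here are made up for illustration.

```c
#include <stdbool.h>
#include <stdlib.h>

// Hypothetical integrity check, written three different ways that all
// compute the same answer.
static bool check_a(unsigned v) { return (v ^ 0xDEADBEEFu) == 0x12345678u; }

static bool check_b(unsigned v) {
    unsigned acc = v;
    acc ^= 0xDEADBEEFu;                     // same test, split into extra steps
    return acc - 0x12345678u == 0;
}

static bool check_c(unsigned v) {
    // same test again, routed through a pointless table lookup
    static const unsigned key[2] = { 0xDEADBEEFu, 0x12345678u };
    return (v ^ key[0]) == key[1];
}

bool licensed(unsigned v) {
    // pick a variant at run time, so no single call site is worth patching
    switch (rand() % 3) {
        case 0:  return check_a(v);
        case 1:  return check_b(v);
        default: return check_c(v);
    }
}
```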

Unfortunately for us, this also fucks up the very thing that makes computers fast now.

Back in the single-digit-megahertz era, this would’ve made no difference to anything, besides requiring more RAM for these bloated executables. 8- and 16-bit processors just go where they’re told and encounter each instruction by complete surprise. Intel won the 32-bit era by cranking up clock speeds, which quickly outpaced RAM response times, leading to hideously clever cache memory built into the CPU itself. Cache layers nowadays are a major part of CPU cost and an even larger part of CPU performance. Data that’s read early and kept nearby can make an instruction take one cycle instead of one thousand.
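Here’s a toy demonstration of that gap, sketched in C with made-up sizes: summing the same 64 MB of data in order versus in a shuffled order. The work is identical; only the cache behavior differs, and on typical desktop hardware the shuffled pass comes out several times slower.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)              /* 16M ints, ~64 MB, far bigger than any cache */

// Sum the same data twice: once in order (cache-friendly), once by
// hopping through a shuffled index array (nearly every read is a miss).
int main(void) {
    int *data  = malloc(N * sizeof *data);
    int *order = malloc(N * sizeof *order);
    if (!data || !order) return 1;

    for (int i = 0; i < N; i++) { data[i] = 1; order[i] = i; }
    for (int i = N - 1; i > 0; i--) {          // Fisher-Yates shuffle
        int j = rand() % (i + 1);
        int t = order[i]; order[i] = order[j]; order[j] = t;
    }

    long sum = 0;
    clock_t t0 = clock();
    for (int i = 0; i < N; i++) sum += data[i];            // sequential walk
    clock_t t1 = clock();
    for (int i = 0; i < N; i++) sum += data[order[i]];     // random hops
    clock_t t2 = clock();

    printf("sequential: %.0f ms, shuffled: %.0f ms (sum %ld)\n",
           1000.0 * (t1 - t0) / CLOCKS_PER_SEC,
           1000.0 * (t2 - t1) / CLOCKS_PER_SEC, sum);
    free(data); free(order);
    return 0;
}
```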

Sending the program counter on a wild goose chase across hundreds of megabytes guarantees you’re gonna hit those thousand-cycle instructions. The next instruction being X=N+1 might take literally no time, if it lands next to a non-math instruction the CPU can run in the same cycle, and the pipeline has room for it. But if you have to jump to that instruction and back, it’ll take ages. Maybe an entire microsecond! And if it never comes back - if it jumps to another copy of the whole function, and from there to parts unknown - those microseconds can become milliseconds. A few dozen of those in the wrong place and your water-cooled demigod of a PC will stutter like Porky Pig. That’s why Denuvo in practice just plain suuucks. It is a cache defeat algorithm. At its pleasure, and without remedy, it will give paying customers a glimpse of the timeline where Motorola 68000s conquered the world. Hit a branch and watch those eight cores starve.
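Back of the envelope, with assumed round numbers rather than measurements (roughly 100 ns per trip to main memory, a 60 fps frame budget): it doesn’t take many extra cache misses per frame to eat a visible chunk of it.

```c
#include <stdio.h>

// All three constants are assumptions for illustration, not measurements.
int main(void) {
    const double miss_ns  = 100.0;          // one last-level-cache miss, ballpark
    const double frame_ms = 1000.0 / 60.0;  // ~16.7 ms per frame at 60 fps
    const double misses   = 50000.0;        // assumed extra misses from scattered code paths

    double wasted_ms = misses * miss_ns / 1e6;   // 50,000 * 100 ns = 5 ms
    printf("%.1f ms of a %.1f ms frame budget (%.0f%%) spent waiting on DRAM\n",
           wasted_ms, frame_ms, 100.0 * wasted_ms / frame_ms);
    return 0;
}
```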
