How about you don’t retroactively change the terms people agreed to. How about if you said “no royalties”, you stick with no royalties and don’t come up with some bullshit fee that works exactly like a royalty.
Dude, I put like 60 hours into Skyrim my first time before I thought “hey… where’s my shout powers and all the dragons?” Because as soon as Hadvar said “we should split up to avoid suspicion,” I unchecked the active quest, said “adios!” and vanished into the trees. I had to come back at like level 30 or something to do the entire MQ from Riverwood to the end.
That’s just how a lot of people play these. I don’t wanna follow their story; I wanna make my own.
Edit: Oh, and this is all besides the fact that not only do mods disable achievements, but console commands now do too in Starfield. I’ve had to noclip a few times to get unstuck while jumping around with low gravity and ending up places I shouldn’t be, so there are probably some achievements I didn’t get simply because that command likely disabled them (the game just gives a generic warning that some commands will disable achievements, but not which ones).
I guess I’m gonna have to actually check. I always play entirely in 1st person so I wouldn’t have noticed unless it was always in my windshield or something.
I can’t see how Unity increasing prices is anti-competitive. If anything, them going brainfuck with fees can only serve to open the market to new players. Also, I doubt they’ll be able to make the fees retroactive, at least for games that won’t get updated.
Love Insomniac so much. Not many companies have such a stellar track record. And there’s never been a single mtx in any of their games that I’ve played.
Humans using past work to improve, iterate, and further contribute themselves is not the same as a program throwing any and all art into the machine learning blender to regurgitate “art” whenever its button is pushed. Not only does it not add anything to the progress of art, it erases the identity of the past it consumed, all for the blind pursuit of profit.
Me not knowing everything doesn’t mean it isn’t known or knowable. Also, there’s a difference between things naturally falling into obscurity over time and context being removed forcefully.
And then there are cases where it’s too difficult to keep them up, exactly like how you can’t know everything.
We probably ain’t gonna stop innovation, so we might as well roll with it (especially when it’s doing a great job redistributing previously expensive assets).
If it’s “too difficult” to manage, that may be a sign it shouldn’t just be let loose without critique. Also, innovation is not inherently good and “rolling with it” is just negligent.
Where did the AI companies get their code from? It’s scraped from the likes of Stack Overflow and GitHub.
They don’t have the proprietary code that is used to run companies because it’s proprietary and it’s never been on a public forum available for download.
Stable Diffusion was trained on a dataset built from Common Crawl, which pulled art from public websites that allowed it to do so. DeviantArt and ArtStation allowed this, without exception, until recently.
Devil's advocate. It means that only large companies will have AI, as they would be the only ones capable of paying such a large number of people. AI is going to come anyway except now the playing field is even more unfair since you've removed the ability for an individual to use the technology.
Instituting these laws would just be the equivalent of companies pulling the ladder up behind them after taking the average artist's work to use as training data.
How would you even go about determining what percentage belongs to the AI vs the training data? You could argue all of the royalties should go to the creators of the training data, meaning no one could afford to do it.
How would you identify text or images generated by AI after they have been edited by a human? Even after that, how would you know what was used as the source for training data? People would simply avoid revealing any information and even if you did pass a law and solved all of those issues, it would still only affect the country in question.
Literally the definition of greed. They don’t deserve royalties for being an inspiration and moving a weight a fraction of a percentage in one direction…
If AI art is stolen data, then every artist on earth is a thief too.
Do you think artists just spontaneously conjure up art? No. Through their entire lives of looking at other people’s works, they learn how to do stuff; they emulate and they improve. That’s how human artists come to be. Do you think artists go around asking permission from millions of past artists before learning from their art? Would I track down whoever made the fediverse logo if I wanted to make similarly shaped art with it? Hell no. Consent in general is impossible anyway, because a whole lot of them are likely too dead to give consent, to be honest. It’s the exact same way AI is made.
Your argument holds no consistent logic.
Furthermore, you likely have a misunderstanding of how AI is trained and how it works. AI models do not store or copy the art they’re trained on. They study shapes, concepts, styles, etc., and encode those concepts into matrices of vectors. Billions of images and words are turned into a mere 2 gigabytes in something like the SD fp16 checkpoint. 2 GB is virtually nothing; there’s no compression capable of anywhere near that. So unless you actually took very few images and made a 2 GB model, it has no capability to store or copy another person’s art. It has no knowledge of any specific existing copyrighted work anymore. It only knows concepts, and concepts like a circle, a square, etc. are not copyrightable.
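To put rough numbers on that claim (the figures below are my own assumptions for illustration: a ~2 GB fp16 checkpoint and roughly 2 billion training images, not exact numbers for any particular model), the storage budget per image works out to about a byte:

```c
/* Back-of-the-envelope sketch of the "2 GB can't contain the training set"
 * argument. Both figures are assumptions chosen for illustration only. */
#include <stdio.h>

int main(void) {
    const double model_bytes  = 2.0e9;  /* ~2 GB fp16 checkpoint (assumed) */
    const double train_images = 2.0e9;  /* ~2 billion training images (assumed) */

    /* Average storage budget per image if the weights actually "stored" them. */
    printf("budget per image: ~%.2f bytes\n", model_bytes / train_images);

    /* Prints roughly 1 byte per image, versus tens or hundreds of kilobytes
     * for even a small JPEG, so the weights can only hold shared concepts. */
    return 0;
}
```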
If you think I’m just being pro-AI for the sake of it, well, it doesn’t matter. Copyright offices all over the world have started releasing their views on AI art, and they’re in unanimous agreement that it’s not stolen. Furthermore, the resulting AI artworks can be copyrighted (there’s a lot more complexity there, but that’s for another day).
It’s a tool that can be used to replicate other art, except it doesn’t actually replicate art, does it?
It creates works based on other works, which is exactly what humans do; whether or not it’s sapient is irrelevant. My work isn’t valuable just because it’s copyrightable. Only a sociopath thinks like that.
What gives a human right to learn off of another person without credit? There is no such inherent right.
Even if such a right existed, I, as a person who can train an AI, would then have the right to create a tool to assist me in learning, because I’m a person with the same rights as anyone else. If it’s just a tool, which it is, then it’s not the AI that has the right to learn; I have the right to learn, and I used that right to make the tool.
I can use Photoshop to replicate art a lot more easily than with AI. None of us are going around saying Photoshop is wrong (though we did say that before). The AI won’t know any specific art unless it’s an extremely repeated pattern like “mona lisa”. It literally does not have the capacity to contain other people’s art, and therefore it cannot replicate others’ art. I have already proven that mathematically.
I don’t have this game yet, but I know out of the box modding any of the Fallout or Elder Scrolls games disables achievements (though you can get around this with other mods), so I assume it’s the same here. Bethesda games being some of the most modded games of all time, I wouldn’t be surprised if even a lot of first-time players were using one or two mods and having their achievements disabled.
Programs bounce around between a ton of different code segments, and it doesn’t really matter how they’re arranged within the binary. Some code even winds up repeated, when repetition is more efficient than jumping back and forth or checking a short loop. It doesn’t matter where the instructions are, so long as they do the right thing.
This machine code still tends to be clean, tight, and friendly toward reverse-engineering… relatively speaking. Anything more complex than addition is an inscrutable mess to people who aren’t warped by years of computer science, but it’s just a puzzle with a known answer, and there’s decades of tools for picking things apart and putting them back together. Scene groups don’t even need to unravel the whole program. They’re only looking for tricky details that will detect pirates and frustrate hackers. Eventually, they will find and defeat those checks.
So Denuvo does everything a hundred times over. Or a dozen. Or a thousand. Random chunks of code are decompiled, recompiled, transpiled, left incomplete, faked entirely, whatever. The whole thing is turned into a hot mess by a program that knows what each piece is supposed to be doing, and generally makes sure that’s what happens. The CPU takes a squiggly scribbled path hither and yon but does all the right things in the right order. And sprinkled throughout this eight-ton haystack are so many more needles, any of which might do slightly different things. The “attack surface” against pirates becomes enormous. They’ll still get through, eventually, but a crack delayed is a crack denied.
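Here’s a toy sketch of the general idea (an illustration of duplicated, runtime-selected checks, not anything resembling Denuvo’s actual scheme; the functions and values are made up):

```c
/* Toy illustration of duplicated, runtime-selected checks. The same license
 * test exists in several differently-written copies, and which copy runs is
 * only decided at runtime, so patching one of them isn't enough. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static int check_a(int key) { return (key ^ 0x5A) == 0x13; }        /* copy 1 */
static int check_b(int key) { return ((key + 7) & 0xFF) == 0x50; }  /* copy 2: same test, written differently */

int main(void) {
    srand((unsigned)time(NULL));
    int key = 0x49;  /* pretend license value; both checks accept it */

    /* Runtime-chosen path: a cracker has to find and defeat every copy,
     * because patching check_a alone still leaves check_b live. */
    int ok = (rand() & 1) ? check_a(key) : check_b(key);
    printf("license %s\n", ok ? "accepted" : "rejected");
    return 0;
}
```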
Unfortunately for us, this also fucks up the very thing that makes computers fast now.
Back in the single-digit-megahertz era, this would’ve made no difference to anything, besides requiring more RAM for these bloated executables. 8- and 16-bit processors just go where they’re told and encounter each instruction by complete surprise. Intel won the 32-bit era by cranking up clock speeds, which quickly outpaced RAM response times, leading to hideously clever cache-memory use inside the CPU itself. Cache layers nowadays are a major part of CPU cost and an even larger part of CPU performance. Data that’s read early and kept nearby can make an instruction take one cycle instead of one thousand.
Sending the program counter on a wild goose chase across hundreds of megabytes guarantees you’re gonna hit those thousand-cycle instructions. The next instruction being X=N+1 might take literally no time, if it happens near a non-math instruction and the pipeline has room for it. But if you have to jump to that instruction and back, it’ll take ages. Maybe an entire microsecond! And if it never comes back - if it jumps to another copy of the whole function, and from there to parts unknown - those microseconds can become milliseconds. A few dozen of those in the wrong place and your water-cooled demigod of a PC will stutter like Porky Pig. That’s why Denuvo in practice just plain suuucks. It is a cache-defeat algorithm. At its pleasure, and without remedy, it will give paying customers a glimpse of the timeline where Motorola 68000s conquered the world. Hit a branch and watch those eight cores starve.
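If you want to see the effect for yourself, here’s a minimal sketch (the buffer size, the index-scrambling constant, and the use of clock() are my own choices, and it assumes a 64-bit build): the same number of additions, first walking memory in order, then chasing scattered indices across a buffer far bigger than a typical L3 cache:

```c
/* Minimal sketch of cache-friendly vs cache-hostile access: identical work,
 * wildly different runtime once most accesses miss cache. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)  /* 16M ints, ~64 MB: much larger than a typical L3 cache */

int main(void) {
    int *data = malloc(sizeof(int) * N);
    size_t *next = malloc(sizeof(size_t) * N);
    if (!data || !next) return 1;

    for (size_t i = 0; i < N; i++) {
        data[i] = (int)i;
        next[i] = (i * 2654435761u) % N;  /* scrambled but valid indices (assumes 64-bit size_t) */
    }

    long long sum = 0;
    clock_t t0 = clock();
    for (size_t i = 0; i < N; i++) sum += data[i];        /* sequential: prefetch-friendly */
    clock_t t1 = clock();
    for (size_t i = 0; i < N; i++) sum += data[next[i]];  /* scattered: mostly cache misses */
    clock_t t2 = clock();

    printf("sequential %.3fs, scattered %.3fs (sum=%lld)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, sum);

    free(data);
    free(next);
    return 0;
}
```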
Unfortunately, increasing cache seems to be the direction things are going, what with AMD’s 3D cache initiative and Apple moving RAM closer to the CPU.
So Denuvo could actually get away with it by just pushing the problem onto platforms. Ideally, this would discourage this type of DRM, but it’ll probably just encourage more PC upgrades.
I wouldn’t be surprised if we end up with RAM-less systems soon. A lot of programs don’t need much more memory than the cache sizes already available. Things like Electron bloat memory use through the roof, but even then it’s likely just a gigabyte or two, and CPUs will have that much cache eventually. The few applications that really need tons of memory could be offloaded to a really fast SSD, which is already becoming the standard. I imagine we’ll see it in phones or tablets first, where multitasking isn’t as much of a thing and physical space is at a premium.
That’s just not true, here are a few off the top of my head:
video games
docker containers
web browsers
productivity software
RAM is actually the one resource I run out of in my day to day work as a software developer, and I get close on my gaming PC. I have a really fast SSD in my work computer (MacBook Pro) and my Linux gaming PC (some fast NVME drive), and both grind to a halt when I start swapping (Linux seems to handle it better imo). So no, I don’t think SSDs are enough by any stretch of the imagination.
If anything, our need for high performance RAM is higher today than ever! My SIL just started a graphics program (graphic design or UI/UX or something), so I advised her to prioritize a high amount of RAM over a high number of CPU/GPU cores because that’s how important RAM is to the user experience when deadlines approach.
Large CPU caches are great, but I don’t think you can really compensate for low system memory by having large caches and a fast SSD. What is obvious, though, is that memory latency and bandwidth are an issue, so I could see more Apple-style soldered NAND next to the CPU in the coming board revisions, which isn’t great for DIY systems. NAND modules are just so much cheaper to manufacture than CPU cache, and they’re also sensitive to heat, so I don’t think embedding them on the CPU die is a great long-term solution. I would prefer to see GPU-style memory modules either around or behind the CPU, soldered onto the board, before we see on-die caches with multiple-GB capacity.
Well, you’re right that it’s not practical now. By “soon” I was thinking of like 10+ years from now. And as I said, it would likely start in systems that aren’t used for those applications anyway (aside from web browsers, which use way more RAM than necessary). By the time it takes over the applications you listed, we’ll have caches as big as our current RAM anyway. And I’m using a loose definition of cache; I really just mean on-package memory of some kind. And we probably will see that GPU-style memory before it’s fully integrated.
It’s already sort of a thing in embedded processors, such as ARM SOCs where RAM is glued to the top of the CPU package (I think the OG Raspberry Pi did that). But current iterations run the CPU way too hot for that to work, so the RAM is separate.
I could maybe see it be a thing in kiosks and other limited purpose devices (smart devices, smart watches, etc), but not for PCs, servers, or even smart phones, where we expect a lot higher memory load/multitasking.
“I actually believe Cyberpunk on launch was way better than it was received, and even the first reviews were positive,” he concludes. “Then it became a cool thing not to like it.”
How are you planning to fix your image when you’re still saying shit like this?
I don’t think he’s completely wrong. A lot of people felt similarly. I know SkillUp felt that if you had a really good PC and could overlook the (unforgivable, admittedly) bugs, it had a lot going for it.
I knew almost nothing about the game and went into it completely without assumptions or preconceptions. I played it immediately at launch on XBone and didn’t stop for a few hundred hours of total game time. I was completely blown away.
Did it crash here and there? Was it lurchy and buggy? Did bikes sometimes get stuck in the pavement like it was sand? Did you wind up smashing an unconscious person’s head through the earth a la “Rock Bottom” every 4 or 5 times you tried to be sneaky? Yeah.
Despite that, was it one of the greatest games I’ve ever played? Fuck yeah.
People had genuine problems and a game should never launch in the state that CP77 was in, but I completely agree with him that it became cool to rip on the game.
Same for me, basically played it for 100 hours straight with as little sleep as possible… yeah it was buggy and the story was rushed in some places, but the overall experience was great for me.