I’m not going to even consider playing again till more comes out about it. Diablo is grindy by design and I have no interest in going through the grind until I know that there is going to be a good return on that effort.
How would they credit the artists? Generative AI is trained on thousands and millions of images and data points from equally numerous artists. He might as well say, “I give credit to humanity.”
I only consume art from people born of mute mothers isolated from society during their pregnancy and then born into sensory deprivation chambers.
It is the only way to ensure proper pure art as all other artists are simply rehashing prior work.
Doubled down on the “yea were not gonna credit artist’s our AI stole from”. What a supreme douche
I don’t think it’s as simple as all that. Artists look at other artists’ work when they’re learning, for ideas, for methods of doing stuff, etc. Good artists probably have looked at a ton of other artwork, they don’t just form their skills in a vacuum. Do they need to credit all the artists they “stole from”?
In the article, the company made a point about not using AI models specifically trained on a smaller set of works (or some artist’s individual works). Doing something like that would make it a lot easier to argue that it’s stealing: but the same would be true if a human artist carefully studied another person’s work and tried to emulate their style/ideas. I think there’s a difference between that and “learning” (or training) on a large body of work without emulating any specific artist, company, individual works, etc.
Obviously it’s something that needs to be handled fairly carefully, but that can be true with human artists too.
I swear I’m old enough to remember this exact same fucking debate when digital tools started becoming popular.
It is, simply put, a new tool.
It’s also not the one and done magic button people who’ve never used shit think it is.
The knee-jerk reaction of hating on every piece of art made with AI is dangerous.
You’re free to like it or not, but it’s already out of the hat.
Big companies will have the resources to train their own models.
I for one would rather have it in the public domain rather than only available to big corps.
I wouldn't call myself a "good artist" at all, and I've never released anything, I just make music for myself. Most of the music I make starts with my shamelessly lifting a melody, chord progression, rhythm, sound, or something else, from some song I've heard. Then I'll modify it slightly, add my own elements elsewhere, modify the thing I "stole" again, etc, and by the time I've finished, you probably wouldn't even be able to tell where I "stole" from because I've iterated on it so much.
AI models are exactly the same. And, personally, I'm pretty good at separating the creative process from the end result when it comes to consuming/appreciating art. There are songs, paintings, films, etc, where the creative process is fascinating to me but I don't enjoy the art itself. There are pieces of art made by sex offenders, criminals and generally terrible people - people who I refuse to support financially in any way - but that doesn't mean my appreciation for the art is lessened. I'll lose respect for an artist as a person if I find out their work is ghostwritten, but I won't lose my appreciation for the work. So if AI can create art I find evocative, I'll appreciate that, too.
But ultimately, I don't expect to see much art created solely by AI that I enjoy. AI is a fantastic tool, and it can lead to some amazing results when someone gives it the right prompts and edits/curates its output in the right way. And it can be used for inspiration, and to create a foundation that artists can jump off, much like I do with my "stealing" when I'm writing music. But if someone gives an AI a simple prompt, they tend to get a fairly derivative result - one that'll feel especially derivative as we see "raw output" from AIs more often and become more accustomed to their artistic voice. I'm not concerned at all about people telling an AI to "write me a song about love" replacing the complex prog musicians I enjoy, and I'm not worried about crappy AI-generated games replacing the lovingly crafted experiences I enjoy either.
Artists who look at art are processing it in a relatable, human way. An AI doesn’t look at art. A human tells the AI to find art and plug it in, knowing that work is copyrighted and not available for someone else’s commercial project to develop an AI.
That’s not how AI art works. You can’t tell it to find art and plug it in. It doesn’t have the capability to store or copy existing artworks. It only contains the matrix of vectors which contain concepts. Concepts cannot be copyrighted.
Kind of. The AI doesn’t go out and find/do anything, people include images in its training data though. So it’s the human that’s finding the art and plugging it in — most likely through automated processes that just scrape massive amounts of images and add them to the corpus used for training.
It doesn’t have the capability to store or copy existing artworks. It only contains the matrix of vectors which contain concepts.
Sorry, this is wrong. You definitely can train an AI to produce works that are very nearly a direct copy. How “original” the works created by the AI are is going to depend on the size of the corpus it was trained on. If you train the AI on (or put a lot of training weight on) just a couple of works from one specific artist, it’s going to output stuff that’s very similar. If you train the AI on 1,000,000 images from all different artists, the output isn’t really going to resemble any specific artist’s style or work.
That’s why the company emphasized they weren’t training the AI to replicate a specific artist’s (or design company, etc) works.
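To illustrate that corpus-size point with a toy example (this is a hypothetical sketch, not how the company in the article or any real image generator actually trains its models): a tiny model overfit on one work simply memorizes it, which is the degenerate “copying” case, whereas the same architecture spread across a huge, varied corpus can’t reproduce any individual input.

```python
# Hypothetical toy sketch: a small autoencoder trained on a single "work"
# memorizes it almost perfectly, illustrating the degenerate case above.
import torch
import torch.nn as nn

torch.manual_seed(0)

work = torch.rand(3 * 64 * 64)          # stand-in for one artist's single image
model = nn.Sequential(                   # deliberately tiny model
    nn.Linear(3 * 64 * 64, 256),
    nn.ReLU(),
    nn.Linear(256, 3 * 64 * 64),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(2000):                    # train on just that one image
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(work), work)
    loss.backward()
    opt.step()

print(f"reconstruction error on the memorized work: {loss.item():.6f}")
# The error ends up tiny: the "new" output is effectively a copy. Spread the
# same capacity over millions of varied images and this memorization breaks down.
```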
As a general statement: No, I am not. You’re making an overly specific scenario to make it true. Sure, if I take one image and train a model on just that one image, it’ll make that exact same image. But that’s no different than me just pressing copy and paste on a single image file, and the latter does the job a whole lot better too. This entire counter argument is nothing more than being pedantic.
Furthermore, if I’m making such specific instructions to the AI, then I am the one who’s replicating the art. It doesn’t matter whether I use a pencil to trace the existing art, use Photoshop, or build a specific AI model. I am the one who’s doing that.
You didn’t qualify what you said originally. It either has the capability or not: you said it didn’t, it actually does.
You’re making an overly specific scenario to make it true.
Not really. It isn’t that far-fetched that a company would see an artist they’d like to use but also not want to pay that artist’s fees so they train an AI on the artist’s portfolio and can churn out very similar artwork. Training it on one or two images is obviously contrived, but a situation like what I just mentioned is very plausible.
This entire counter argument is nothing more than being pedantic.
So this isn’t true. What you said isn’t accurate with the literal interpretation and it doesn’t work with the more general interpretation either. The person higher in the thread called it stealing: in that case it wasn’t, but AI models do have the capability to do what most people would probably call “stealing” or infringing on the artist’s rights. I think recognizing that distinction is important.
Furthermore, if I’m making such specific instructions to the AI, then I am the one who’s replicating the art.
Yes, that’s kind of the point. A lot of people (me included) would be comfortable calling doing that sort of thing stealing or plagiarism. That’s why the company in OP took pains to say they weren’t doing that.
Artists who look at art are processing it in a relatable, human way.
Yeah, sure. But there’s nothing that says “it’s not stealing if you do it in a relatable, human way”. Stealing doesn’t have anything to do with that.
knowing that work is copyrighted and not available for someone else’s commercial project to develop an AI.
And it is available for someone else’s commercial project to develop a human artist? Basically, the “an AI” part is still irrelevant. If the works are out there where it’s possible to view them, then it’s possible for both humans and AIs to acquire them and use them for training. I don’t think “theft” is a good argument against it.
But there are probably others. I can think of a few.
I just want fucking humans paid for their work, why do you tech nerds have to innovate new ways to lick the boots of capital every few years? Let the capitalists make arguments why AI should own all of our work, for free, rights be damned, and then profit off of it, and sell that back to us as a product. Let them do that. They don’t need your help.
That’s a problem whether or not we’re talking about AI.
why do you tech nerds have to innovate new ways to lick the boots of capital every few years?
That’s really not how it works. “Tech nerds” aren’t licking the boots of capitalists, capitalists just try to exploit any tech for maximum advantage. What are the tech nerds supposed to do, just stop all scientific and technological progress?
why AI should own all of our work, for free, rights be damned,
AI doesn’t “own your work” any more than a human artist who learned from it does. You don’t like the end result, but you also don’t seem to know how to come up with a coherent argument against the process of getting there. Like I mentioned, there are better arguments against it than “it’s stealing”, “it’s violating our rights” because those have some serious issues.
That’s over. Just let it go. It’s never going back in the bottle and artists will never see a penny from AI that trained on their art. It’s not fair, but life isn’t fair.
I can hardly think of a better example of “the lady doth protest too much” than the responses that would get fired back at Anita. Completely unable to mask just how close to home the criticism hit them.
Or is it just the ‘humans fighting giant machines’ part that they’re likening to ~~Shadow of the Colossus~~ ~~Metal Gear Solid~~ Horizon Zero Dawn?
Jokes aside, the standard of “could confuse consumers into mistaking one for another” was meant to prevent things like essentially typo-squatting in product names, e.g. going and making Orao cookies, instead of Oreo (which is why Oreo was able to copy Hydrox).
It wasn’t meant to just be about aping a concept or art style. No one would actually mistake “Light of Motiram” for “Horizon: Zero Dawn”.
Even if it was ripping on the other two games crossed out: Sony owns (or partially owns) those, too. They’re not suing over those games because… They’re theirs.
One of the things I’ve been thinking about a lot lately is media literacy as it relates to gaming - specifically about the design conversations developers are often having amongst each other that players only vaguely feel. Let me elaborate:
A good example is the Castlevania series. From early on, Castlevania was always both refining and reinventing itself. Vampire Killer and Castlevania feel to me like a kind of A/B testing to see what hits. When Castlevania prevailed, they immediately began iterating on the formula with both Simon’s Quest and Dracula’s Curse figuring out different modes of gameplay through nonlinear level design and changing characters. Super Castlevania IV was already a remaster of sorts starring Simon Belmont. Of course followed by the all time greats Rondo of Blood and Symphony of the Night. It had trouble jumping to 3D with the N64 entry which was just called Castlevania again and eschewed the burgeoning Metroidvania/RPG elements of its predecessors.
This eventually leads us to Lords of Shadow which I can certainly respect as a good game with a dedicated following, but it never appealed to me and I had a hard time putting my finger on why. It’s because it’s not just a reboot, but one that kind of wholesale grabs the QTE/cinematic/rage mode game mechanics of the 2010’s and stuffs them into a Castlevania package. It’s difficult to say anything isn’t a “true” Castlevania game in a series that was already very loosely defined as “gothic action probably with Dracula somewhere?” but it had very firmly stepped away from the conversation of its own series.
Even if you’re new to the Castlevania series today, I think you can find great satisfaction in trawling through the depths of the franchise, playing them in chronological release order, and appreciating the various thematic and gameplay elements that each entry contributed to the series. I think gamedevs could learn a lot by looking at this evolution, too. Take a look at the release timeline and note the space in between early entries.
Nowadays, a big game will spend multiple years in development. Inspirations it may have taken from the gaming landscape are years in the past, assuming it even picked up on them at their peak. When that theoretical game exists, someone may then take inspiration from it and push it into their own years-long development. The needle moves sooooo … slowly …
And because of that, as we all know, they’re willing to take less of a risk on creating innovative games. There’s this prevailing notion that there are only “good” and “bad” game design concepts and that if you mash enough of the good concepts together in a package, you’ll have a good game. They’re all homogenizing because they’re no longer trying to deliver a product that entices you to play it; they’re trying to force a platform/market on you. Take a look at Concord or Marathon or MindsEye or any of the other monumental flops. Kind of like the DCU in my mind: you know the proper thing to do is take the time to build out the world and characters by giving satisfying entries that serve people the things they’re craving, but they keep jumping the gun. If you really wanted Marathon to succeed as a GaaS, why not create a single-player game first and allow players to get accustomed to the world, giving them something of value to pull them away? The eagerness with which they keep sacrificing projects to snap the trap shut early and make their money back should be a big clue.
Anyways, speaking of MindsEye, I was watching this video earlier which speculates the game was supposed to be another metaverse platform called Everywhere, akin to Epic’s Fortnite. Nobody wants an everything game. Nobody wants an everything app. I don’t want ONE game that I play for the rest of forever, that’s not a thing I ever wanted. They’re trying to forcefully dictate the market at us and everyone is just gagging. As consumers I don’t think we can put effective boycotts together anymore but the market is so utterly saturated and overwhelmed that you literally cannot get people to care. It stands at the complete opposite end of what the article discusses and I think that’s worth meditating on.
It’s not speculation with MindsEye. Everywhere was shown off first, and it’s still happening. That studio was funded with VC money, and VCs want “the next big thing”. That thing at the time was “metaverse”. MindsEye seems to be the smaller project they can get out in the meantime and, charitably, is one of a number of things they’ll churn out that all comes from a similar process flow and builds on each other (they hope).
As to boycotts, your individual purchases always matter; not just with what you don’t buy but also what you do buy.
As to boycotts, your individual purchases always matter; not just with what you don’t buy but also what you do buy.
Agreed. I’m having a bit of a hard time articulating my ideas properly.
I think my overall point is just that it’s really hard to organize purposeful and effective boycotts these days, especially since no matter what the issue is, there’s usually a counter movement dampening it. Whatever market forces are causing these companies to register the lack of interest and disdain the consumer market has, I’d like to identify them and capitalize on them, because when the market adapts, it most likely won’t be to the consumer’s benefit.
You could live quite happily off indies these days, but it’s hard to ignore the thrashing leviathans. I’m not sure how much I really care about them anymore, but they do take up a lot of the oxygen in the room. And they seem to control a lot of platforms/storefronts as well …
That oxygen is in a different room. The person who only plays Fortnite probably never heard of MindsEye or Concord. At some point, I wonder why games media even covers certain companies anymore. Sure, EA and Ubisoft made games we all liked 20-25 years ago, but by and large they don’t really make games for those same customers anymore.
I absolutely recommend it! Slope’s Game Room has an excellent, 2 hour retrospective you can put on while you work if you want a pretty good deep dive. Other than that, I recommend getting yourself set with some emulators so you can kind of dig through the series. A lot of the early games are difficult and I think it’s perfectly fine to kind of just pick through them a bit, get a taste, move on, return to the ones you like, etc.
You can absolutely feel the arc of design elements through the early series up to the pinnacle, Rondo of Blood. That’s because it was all being done by Konami teams, often ones who knew each other or were handing the projects off. Rondo hits this sweet spot where you can feel the inspiration of old vampire novels combined with dramatic stage plays (the stages have dynamic names like Feast of Flames instead of just area descriptors), told with 80’s anime cutscenes, wrapped into a videogame package. It’s truly a work of art that both wears its influences on its sleeve and couldn’t really exist the way that it does in any other medium. So where do you even go from there? Symphony of the Night! It takes everything that works about Rondo and kicks it to 11 while flipping the franchise on its head with an absolutely rocking soundtrack and sprawling castle. You can enjoy these games in a vacuum, sure. But playing the series up to that point gives you a real appreciation for what they were going for and how they accomplished it. I don’t even think you really need to play them in order because going back and returning to previous entries almost feels like fitting in missing pieces of a puzzle.
The series flounders a bit when it hits 3D, but it will always have a special place in my heart. Koji Igarashi takes the Symphony of the Night formula and basically owns the handheld world, especially from Aria of Sorrow into the DS trilogy, A++. Ultimately I think he developed that formula enough on his own that breaking it off into the Bloodstained series feels right and good; I think he’s better off this way, not weighed down by Konami and the Castlevania franchise, but even so, we still feel that arc of development. Bloodstained: Ritual of the Night actually took a bit to grow on me, but once it did, I saw it as the most Igavania game that ever existed; he has refined the formula.
All this to say that we just don’t get experiences like this anymore, where series have the proper time to cook and develop. Instead we get Concord where they pour millions into something and try and ram it down your throat, “You WILL enjoy this new franchise. You WILL pick one of these characters as your favorite to get invested in, even though we’ve given you no reason. You WILL make this your ONE game you play because … reasons?” Ditto Marathon. Ditto MindsEye (likely). Ditto all the other rubbish they keep pushing out.
EDIT: OH MY GOD! And the Castlevania DLC for Vampire Survivors, how could I even forget. It’s been a Castlevania wasteland for years and that DLC is some of the best I’ve ever played. Completing the Richter scenario and getting to the end of it legit made me cry; it was such a love letter to fans and felt like a huge, emotional, respectful sendoff for the series that Konami will never give us 😭 It’s so good, if you’re a Castlevania fan you should absolutely play it, and if not, save it til the end because it’s incredible and bittersweet.
It’s good to have a constant in the current world. Steam seems okay; I love what they’re doing for Linux gamers. I think they should reduce their share by at least 5%, but they do a good service and seem competent.
It was put out that everyone should change their passwords. That kind of info for like 90 million Steam accounts would fetch a much higher price or ransom than some personal info on a bunch of people, like names, phone numbers, and addresses.
They're NOT cheaper. There is exactly one cheaper PC handheld, and it's the base model of the LCD variant of the Deck.
And the reason for that is that Valve went out of its way to sign a console maker-style large scale deal with AMD. And even then, that model of the Deck has a much worse screen, worse CPU and GPU and presumably much cheaper controls (it does ship with twice as much storage, though).
They are, as the article says, competitive in price and specs, and I'm sure some next-gen iterations of PC handhelds will outperform the Switch 2 very clearly pretty soon, let alone by the end of its life. Right now I'd say the Switch 2 has a little bit of an edge, with dedicated ports selectively cherry picking visual features, instead of having to run full fat PC ports meant for current-gen GPUs at thumbnail resolutions in potato mode.
We don’t really know this. It is possible that the CPU will be trash. Nintendo’s devices don’t really support genres that require CPU power (4X, tycoon, city-builder, RTS, MMO etc.).
While we don’t have detailed info on the Switch 2 CPU, the original Switch CPU was three generations behind at the time of the console’s release.
Best we can tell this is an embedded Ampere GPU with some ARM CPU. The Switch had a slightly weird but very functional CPU for its time. It was a quad core thing with one core reserved for the OS, which was a bit weird in a landscape where every other console could do eight threads, but the cores were clocked pretty fast by comparison.
It's kinda weird to visualize it as a genre thing, though. I mean, Civ VII not only has a Switch 2 port, it has a Switch 1 port, too. CPU usage in gaming is a... weird and complicated thing. Unless one is a systems engineer working on the specific hardware I wouldn't make too many assumptions about how these things go.
If you primarily play CPU bound strategy games, you can very much make conclusive statements about CPU performance. For example, Cities in Motion 1 (from the studio that created Cities: Skylines), released in 2010, can bring a modern CPU to its knees if you use modded maps, free look and say a 1440p monitor (the graphics don’t actually matter). Even a simple looking game like The Final Earth 2 can bring your FPS to a crawl due to CPU bottlenecks (even modern CPUs) in the late game with large maps. I will note that The Final Earth 2 has an Android version, but that doesn’t mean the game (which I’ve played on Android) isn’t fundamentally limited by CPU performance.
It very much is a genre thing. Can you show me a game like Transport Fever 2 on the Switch? Cities: Skylines?
The OG Switch CPU was completely outdated when released and provided extremely poor performance.
The Switch was released in 2017. Its CPU, the Cortex-A57, was released in 2012. It was three generations behind the Cortex-A75 that was released in 2017.
The Switch CPU had very poor performance for 2017; it was three generations behind the then-current ARM Cortex releases.
It is very likely the CPU in the Switch 2 will also be subpar by modern standards.
I.e., you don’t know that the Steam Deck has a worse CPU, and considering Nintendo’s history with CPUs, it is not impossible for the Switch 2 CPU to be noticeably worse than the Steam Deck’s.
Nobody was complaining about the Switch CPU. It was a pretty solid choice for the time. It outperformed the Xbox 360 somewhat, which is really all it needed to do to support last-gen ports. Like I said, the big annoyance that was specifically CPU-related from a dev perspective was the low thread count, which made cramming previous-gen multithreaded stuff into a fraction of the threads a bit of a mess.
The point of a console CPU is to run games, not raw compute. The Switch had what it needed for the scope of games it was running. On a handheld you also want it to be power efficient, which it was. In fact, the Switch didn't overclock the CPU when docked, just the GPU, because it didn't need to. And we now know it did have some headroom to run faster; jailbroken Switches can be reliably clocked up a fair amount. Nintendo locked it that low because they found it was the right balance of power consumption and speed to support the rest of the components.
Memory bandwidth ended up being much more of a bottleneck on it. For a lot of the games you wanted to make on a Switch the CPU was not the limit you were bumping into. The memory and the GPU were more likely to be slowing you down before CPU cycles did.
The Switch CPU performs extremely poorly as far as gaming is concerned. Case in point: you cited Cities: Skylines; a quick web search suggests performance is terrible on the Switch and that it seems to have been abandoned shortly after release.
While I don’t doubt the Switch 2 CPU will be sufficient for games released by Nintendo, from a broader gaming perspective (gaming is not only Nintendo), it is likely the Switch 2 CPU will also be subpar and will perform worse than the Steam Deck (which is a handheld and its CPU is also subject to efficiency requirements). Whether Nintendo users know/care/don’t care about this is irrelevant. We are talking about objective facts.
What "standards" are you comparing it to? The Switch 1 was behind home consoles, but that's not really a fair comparison. There was nothing similar on the market to appropriately compare it to, no "standard".
Five years later the Steam Deck outperformed the Switch, because of course hardware from five years later would. But the gap between the 2017 Switch and 2022 Deck is not so vast that you can definitively claim in advance to know that the 2025 Switch 2 definitely has to be worse. You don't know that and can't go claiming it as fact.
All we know so far is that the Switch 2 does beat the Deck in at least one major attribute: it has a 1080p120 screen, in contrast to the Deck's 800p60. And it is not unreasonable to expect the rest of the hardware to reflect that.
OP claimed the Steam Deck’s CPU was definitely worse than the Switch 2 (this was an explicit, categorical statement).
Considering the Switch’s history (the Cortex-A57 used in the OG Switch being three generations behind in 2017), it’s not unreasonable to speculate that the Switch 2 CPU is likely to be extremely weak from a gaming perspective (I never brought up compute or synthetic benchmarks).
Exactly what hardware at a similarly competitive price point and form factor are you comparing it to when you say it's behind?
The Switch 1 didn't use the very best top of the line parts that money could buy, but if that's what you're fixating on then you're missing the fact that neither did the Steam Deck. The Switch made compromises to hit a $300 price point in 2017, and the Deck made compromises to hit a $400 price point in 2022.
Portable devices using ARM CPU cores, even ones for ~$350, like the Xiaomi F1 released in 2018. It came with a new Snapdragon 845 SoC that included an Adreno 630 GPU.
It didn’t have the form factor of the Switch, I will give you that. My point is that the Switch had a very weak CPU when compared to similar devices even in the same price band for its time.
So it's not a similar device. Comparing to phones is rather misleading, given that phones do not have active cooling and wouldn't actually be able to run the kinds of games the Switch hardware could without catching on fire in the process. They aren't gaming hardware.
It’s a portable gaming device. It is in the same market.
You can play complex strategy games that require strong CPUs like Project Highrise, The Final Earth 2, Mega Mall Story 2 on mobile.
You won’t be able to run The Final Earth 2 even with the standard mobile population limit on a Switch because it uses an ancient CPU and it’s a quad core.
Don’t limit yourself by Nintendo PR and marketing. The gaming world (portable or otherwise) is not limited to Nintendo.
I can’t tell if this is just Wizards of the Coast panicking and flailing because they are out of good ideas, or if they are actually carefully analyzing and re-evaluating older cards because the balance and synergy of the current cards allow for the use of these older cards without being game breaking.
Hearthstone was doing this about a year ago when I quit. It was actually great for the game and really shook things up in the Wild format where you could play any set of cards. But Blizzard shit the bed on that one like usual, oh well.
Ohhh dammit. Of course. Man that sucks. Thank you for clarifying!
I was hoping they would make gold cheaper because that’s really all that I need. But of course they tack on every other thing and then raise prices even more.
As a rental service Game Pass is great, but at regular price it is too much for me. I’m gonna miss converting 3 years of Gold to Ultimate, though, lol. I’ll be going back to the cheapest tier when I run out.