I have to admit that just the other day I asked an AI to write unit tests for a feature I’d just added. I didn’t trust it to write the feature itself, and I had to fix the tests afterwards, but it did save time.
I really don’t see any usefulness or good intent in the art world, though. So much of those models’ training data has been put together through copyright theft of people’s work. Disney made a pretty good case against them before deciding to team up for a shitty service feature instead.
It’s sad Clair Obscur lost that indie award, but hopefully the game dev world can take that as a bit of a lesson.
I often use it in programming to either lay out the unit tests or do something repetitive like creating entities or DTOs from schemas. These are tasks I can do myself easily, but they’re boring and I’ll also make mistakes. I always have to check every single line and correct things, plus write one or two detailed prompts to make sure the correct pattern and style is followed. It saves me a lot of time, but it always tries to do more than it should: if it writes tests it will try to run them, then try to fix them, and then try to change my code, which is annoying, so I always cancel all of that.
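To give a sense of the kind of boilerplate I mean, here’s a minimal sketch (the schema fields and names are made up, not from any real project): a DTO plus its mapping function is exactly the mechanical, typo-prone code I’d rather review than type out.

```typescript
// Hypothetical DTO mirroring a JSON schema for a "customer" record.
// The repetitive field list and trivial checks are the kind of code I hand off.
interface CustomerDto {
  id: string;
  email: string;
  displayName: string;
  createdAt: string; // ISO-8601 timestamp, as the (hypothetical) schema specifies
}

// Map a raw payload into the DTO, failing loudly on the required fields.
function toCustomerDto(raw: unknown): CustomerDto {
  const obj = (raw ?? {}) as Record<string, unknown>;
  const { id, email, displayName, createdAt } = obj;
  if (typeof id !== "string" || typeof email !== "string") {
    throw new Error("invalid customer payload");
  }
  return {
    id,
    email,
    displayName: typeof displayName === "string" ? displayName : "",
    createdAt: typeof createdAt === "string" ? createdAt : new Date().toISOString(),
  };
}
```

None of that is hard, it’s just the sort of thing where my attention drifts and a typo slips in, which is why I still read every generated line.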
I find AI art and creative writing boring, and I only really see these things as tools to support being more efficient where applicable. You also have to know what you’re doing, just like with any other tool.
There are, and I used to use them, but they aren’t error-free either, nor do they follow the style guides I need to adhere to, so it’s essentially the same outcome.
I don’t know what you mean, but as a designer I can’t imagine my work without AI anymore. I get the same response from everybody I know in my line of work.
I don’t get banning it. At most, the ethical prudes can limit themselves to the models that were legally trained. But I have no problem admitting I’m not one of those.
I still haven’t seen anything neat from any models certified as trained only on legally permitted content. That said, to my knowledge there are very few of that variety.
Training off of the work of current artists serves to starve them out by removing any chance that companies hire them, and results in circumstances where AI trains off of other AIs, producing terrible work and a complete lack of innovation.
People suggest a brilliant future where no one has to work and AI does everything, but the current generation of executives is so cut-throat and greedy about maximizing revenue at the top that this will never happen without extreme, rapid political and commercial reform.
Artists have always been starving. The future is such that if you can’t compete with AI, choose another profession where you can. That’s not something I want, but the world is changing and people have to change with it, either by taking up another profession or by voting in politicians who can redistribute the wealth back to them. There is no option where the progress stops, where the clock stops ticking.
Many artists do starve, and many others succeed. Not sure what your point is, or why you want to shift the needle more in the former direction.
AI can’t compete with artists unless artists keep generating content to feed the model. And even if the models could achieve consistent art, it would mean we get no new themes or ideas. The people who would normally invent those new styles will instead start by repeating what already exists, and be paid for that.
Many nations provide grants for art because they recognize it’s a world that doesn’t always generate immediate, quantifiable monetary return, but proves valuable in the long run. The base expectation is that companies recognize that value and uniqueness in fostered talent as well, rather than settling for the immediacy of AI prompts giving them “good enough” visuals.
Artists are always starving because that’s how it’s always been. I don’t think that can be an argument for or against anything.
I’ve worked with AI image generation professionally, and I can say the tools aren’t missing new ideas if the people using them aren’t. They’re great for brainstorming new ideas. They can’t make a design, but they’re a great tool for speeding up the process.
I love art. I go to galleries often. I don’t think AI can do that, and I don’t think it ever will be able to. Not true art, like capturing a moment in time with the original style of the artist and their life experience. I don’t think AI is a threat to that.
I saw an article about an artist who used AI just for overall composition, and who said that he couldn’t compete if he didn’t do this, because everyone in his field was doing it and it was significantly faster than what he used to do.
I suspect that when people say things like “AI cannot possibly help field X be more efficient like it does in field Y,” what they often really mean is, “I work in field Y and not field X.”
He’s right. You have to use the tools at your disposal. It’s not only a matter of survival but also of streamlining your work process: focusing on the main design decisions and letting the machine do at least some of the legwork when possible. It’s more pleasant that way.
I don’t mind people hating on AI. Everybody can not use it as much as they want.
Don’t. I think it honestly has a place. That place is vastly different from what the business bros think it is, but it does have a place. Writing tests is a great use case, and it’s a good double check. Writing documentation is good, and even writing some boilerplate code and models. The kicker is that you need to already be an engineer to use it and to understand what it’s doing. I would not trust it blindly, but I feel confident enough to catch it when it’s wrong.
It’s another tool in our belt, and it’s fine to use it that way. Management is insane, though, if they think you’ll 10x. Maybe 2x.
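To be concrete about the test-writing part, here’s a minimal sketch of the kind of generated test I’d still read line by line (the module, the function, and the pricing rules are hypothetical, purely for illustration):

```typescript
// Hypothetical Jest test for a price-calculation helper. The AI drafts the
// structure well; the expected values are what I double-check by hand.
import { calculateTotal } from "./pricing";

describe("calculateTotal", () => {
  it("applies the discount to the subtotal", () => {
    expect(calculateTotal({ subtotal: 100, discountPercent: 10 })).toBe(90);
  });

  it("never returns a negative total", () => {
    expect(calculateTotal({ subtotal: 10, discountPercent: 150 })).toBe(0);
  });
});
```

The scaffolding is the cheap part; knowing whether 90 and 0 are actually the right answers is still on the engineer.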
The entire problem with AI is really a legal one. All the moral outrage everyone has for it only ever traces back to legal arguments. Hell, even every philosophical argument being made all over the place still boils down to the legalities of it.
If you can find a single moral or philosophical argument that isn’t rooted in a bias toward the law, then you might have a reason to feel dirty. But realistically you only feel dirty because you’re being told to feel dirty by idiots all around you.
If you hold copyright in such high esteem that you feel disgraced and sullied for violating it even indirectly, then yeah, feel dirty. But I really doubt you hold the draconian laws of copyright to such a high moral standing as to let your self-worth be hurt by it.
But even still, beyond AI, every tool in your workflow is almost guaranteed to be built off the back of abuse, slave labor, theft, and exploitation at some level. If we threw away tools and progress just because they were built by assholes, we would have no tools at all.
Fight for better regulation and more care in the next step of advancement. But throwing away tools is just not realistic; we live in reality, unfortunately.
If the tool is genuinely useless to you then don’t use it. If it is genuinely useful then use it. If you can find a better tool then use that instead.
The copyright thing doesn’t bother me much, but the absurdly inflated hype and pushiness from the companies does, and using it at this moment only feeds into it. Probably after the bubble bursts I won’t feel bad about using it.
If you acknowledge the problem with theft from artists, do you not acknowledge there’s a problem with theft from coders? Code intended to be fully open source, with licenses requiring derivatives to be open source, is now being served up for closed-source uses at the press of a button with no acknowledgement.
For what it’s worth, I think AI would be much better in a post scarcity moneyless society, but so long as people need to be paid for their work I find it hard to use ethically. The time it might take individuals to do the things offloaded to AI might mean a company would need to hire an additional person if they were not using AI. If AI were not trained unethically then I’d view it as a productivity tool and so be it, but because it has stolen for its training data it’s hard for me to view it as a neutral tool.
If the models are in fact reading code that’s GPL licensed, I think that’s a fair concern. Lots of code on sites like Stack Overflow is shared with the default assumption that the author’s rights are not protected (that varies for some coding sites). That’s helpful when the whole point is for people to copy-paste those solutions into large enterprise apps, especially if there’s no feasible way to write it a different way.
The main reason I don’t pursue that issue is that, with so much public documentation, it becomes very hard to prove what was generated from code theft. I’ve worked with AI models that were able to make perfectly functional apps just off a project’s documentation, without even seeing examples.
I don’t think training on all public information is super ethical regardless, but to the extent that others may support it, I understand that SO may be seen as fair game. To my knowledge, though, all the big AIs I’m aware of have been trained on GitHub regardless of any individual project’s license.
It’s not about proving individual code theft, it’s about recognizing that the model itself is built from theft. Just because an AI image output might not resemble any preexisting piece of art doesn’t mean it isn’t based on theft. Can I ask what you used that was trained on just a project’s documentation? Considering the amount of data usually needed for coherent output, I’d be surprised if it didn’t need some additional data.
The example I gave was more about “context” than “model”: data related to the question, not the model’s training history. I would ask the AI to design a system that interacts with XYZ, and it would be thoroughly confused and have no idea what to do. Then I would ask again, linking it to the project’s documentation page and granting it explicit access to fetch relevant webpages, and it would give a detailed response. That suggests to me it’s only working off of the documentation.
That said, AIs are not strictly honest, so I think you have a point that the original model training may have grabbed data like that at some point regardless. If most AI models don’t track or cite the details of each source used for generation, be it artwork on DeviantArt or licensed GitHub repos, I think it’s fair to say any of those models should become legally liable; more so if there are ways of demonstrating “copying-like” actions from the original.
It’s pretty much guaranteed that many AAA games released over the past two years have AI-generated elements, though finding them isn’t plausible. Telling whether individual grass or tile textures are AI generated or taken from the asset store, or, god forbid, are AI-generated assets taken from the asset store, is basically impossible.
Alternatively, imagine an artist draws concept art of an in-game item and then uses image generation to create the actual game assets. How would anyone find out?
The developers write that “our studio was mistakenly accused of using AI-generated art in our games, and every attempt to clarify our work only escalated the situation”. They say they’ve received a lot of insults and threats as a consequence.
I sincerely hope that Grand Theft Auto 6 ships and people find generative AI elements in it. I hope it’s one of those games that’s such a blockbuster it tells you you’re either going to eat your morals or you’re not going to get that thing you want.
Multiple genres of games are about doing mass killings for fun.
You know that bit when you get bored playing some open world game, go around killing everyone, then reload? Postal is That: The Game. Just without the reloading.
Just to clarify a little bit (I was a little confused myself):
Postal was developed by the studio Goonswarm. The publisher Running With Scissors cancelled their game’s release because of the AI claims, and in response the developers have closed their studio, probably due to the financial strain of having your game completely cancelled by your publisher.
“After revealing POSTAL: Bullet Paradise, a title Running With Scissors was planning on publishing but not developing, we’ve been overwhelmed with negative responses from our concerned POSTAL Community,” reads a statement from Running With Scissors founder Vince Desi, emailed to RPS this afternoon. "The strong feedback from them is that elements of the game are very likely AI-generated and thus has caused extreme damage to our brand and our company reputation.
“We’ve always been, and will always be, transparent with our community,” Desi continues. "Our trust in the development team is broken; therefore, we’ve killed the project. We have a lot of good things coming (some you know and some you don’t).
“Since forming Running With Scissors in 1996, we’ve always said that our fans are part of the team,” it concludes. “Our priority is to always do right by the millions who support the POSTAL franchise. We are grateful for the opportunity to make the games we want to play, and will continue to focus on our new projects and updates coming in 2026 and beyond. We can’t wait to share more!”
Postal: Bullet Paradise was once “a timeline-hopping, dystopian bullet heaven first-person shooter with POSTAL’s signature darkly humorous personality”. The project is “no longer available” on Steam, though it still has a page as of writing.
Desi’s statement doesn’t mention which elements of the game may have been AI-generated, or whether they’ve taken any steps to confirm this with Goonswarm.
It seems like the publisher hasn’t done much to work with the devs and find the true story instead of reacting to knee-jerk public opinion, and has just pulled the rug out from under them to protect themselves.
The devs have adamantly insisted there is no AI in their work; and if true, this really really sucks.
This really comes off as a knee-jerk reaction by RWS. I get they’ve been burned in the past by shit like Postal 3, and Postal is about all they have, but this should have been handled much better.
Delay things, verify there is no generative AI used, at worst replace assets that are deemed questionable.
RWS saying they “don’t trust the developers” anymore is a bizarre thing to talk about in public so quickly.
If I were a part of the development team I’d be thinking “ok, don’t ever work with a company with a name like ‘running with scissors’ ever again,” they don’t make good decisions.
Even in that case, they were quick to cut ties and not mention it to the public in the announcement, which doesn’t make them look any better.
I say this as a longtime Postal fangirl; I can even get a slight kick out of 3, as terrible as it is. Either way, RWS doesn’t look good right now. They were either aware of generative AI being used and refused to declare it, or they decided a bit of public backlash was worth tanking a studio over without verification.
They could also have not liked the way the project was going and just used this as an excuse to drop the developer. Not defending RWS; it looks pretty shitty with the little context that we have.
Most modern roguelikes tend to only have the first two of these, though. But those are the four main elements of the original game, Rogue, from which the genre derives its name.
And rogue-lites tend to make progression persist after death, at least partially. Such as with the unlockable weapons and things in Hades, while the boons and other abilities are pickups you only keep until death, until you pick them up again on the next run.
The popularity is because they are easy to pick up and put down. If I want to go back to an RPG that I haven’t touched in months, I need to try to remember where I was going, what my build was doing, and how to deal with the things I was fighting. If I want to go back to FTL after not playing for years, I just start a new run anyway, and all my ship unlocks are there if I want them.
I would argue that a substantial reason for their popularity is also just that devs have fun when developing them.
With most other genres, you’ve seen the story a gazillion times, you’ve done each quest a thousand times, etc. It just gets boring to test the game, and it becomes really difficult to gauge whether it’s still fun to someone who isn’t tired of it.
Meanwhile with roguelikes, the random generation means that each run is fresh and interesting. And if you’re not having fun on your trillionth run, that’s a real indicator that something needs to be added or improved.
There are a thousand definitions and mine is just one among many, I’m aware. This is not a “right vs. wrong” matter, it’s how you cut things out.
For me, a roguelike has four rules:
Permadeath: you can’t reuse dead characters for new playthroughs.
Procedural generation: large parts of the game change from one playthrough to another.
Turn-based: game time is split into turns, and there’s no real-life time limit on how long each turn takes.
Simple elements: each action, event, item, stat, etc. is simple by itself. Complexity appears through their interaction.
People aware of other definitions (like the Berlin Interpretation) will notice my #4 is not “grid-based”. I think the grid is just a consequence of keeping individual elements simple, in this case movement.
Those rules are not random. They create gameplay where there are limits on how much better your character can get, but you, as the player, are consistently getting better. Not by having better reflexes, not by dumb memorisation, but by understanding the game better and thinking more deeply about how its elements interact.
I personally don’t consider games missing any of those elements a “roguelike”. Take The Binding of Isaac; don’t get me wrong, it’s a great game (I love it), but since it’s missing #3 (combat is real-time) and #4 (complex movement and attack patterns, not just for you but for your enemies), it relies way more on your reflexes and senses than a roguelike would.
Some might be tempted to use the label “roguelite” for games having at least a few of those features, but not all of them. Like… well, Isaac; it does feature permadeath and procedural generation, right? Frankly, I don’t think that definition is useful, and it’s bound to include things completely different from each other. It’s like saying carrots and limes are both “orange-like” (carrots due to colour, limes because they’re citrus); instead of letting those games shine as their own things, you’re dumping them into a “failed to be a roguelike” category.
Slay the Spire: yes. All four rules are there, especially in spirit. It’s also a deck-building game, but that’s fine; a game can belong to two or more genres at the same time.
I’m not sure on Balatro. I didn’t play it, so… maybe?
You ask an excellent question, one that I feel you already know the answer to. From my understanding, the term is unfortunately broadly overused for any procedurally generated game, to the point where the original meaning has been lost to time.
Man I wish we had better terminology for this type of game. Roguelike and roguelite give the same energy as “Doom-clone” for every fps in the 90s. Later we called them FPS games. That genre has since been refined into tactical shooters, arcade shooters, milsim, etc. Meanwhile, we’re still stuck calling all games that have randomized runs “rogue-likes”. Being pedantic about the definition doesn’t make this situation better.