The solution I’m talking about should already be standard practice for most devs (especially small studios), even before LLMs were a thing. See, small teams can’t afford QA, at least not to the same extent as big studios, so they need checks that catch large problems, and a placeholder making it into the final game is a big problem. Even before generated images were a thing, devs would just use any random image they had that more or less worked, and those images could be under copyright or problematic in some other way, so ensuring none of that made it into the final release has always been important.
Dude, naming the textures placeholder_<name> doesn’t take any more time and ensures you won’t ship a game with a placeholder. This is, or at least should be, common practice even without LLMs, and it only takes a couple of seconds, not enough to cause any inconvenience.
That’s not what a concept artist does. The concept artists (if the studio had any) did their work before, and the game artists are still doing their work while the generated placeholders are in place, so no one’s job was compromised by using generated placeholders. That being said, if any placeholder made it into the final game then fuck them.
I agree with almost everything here. I think using LLMs to generate placeholders is fair game and lets studios nail down the feeling of the game sooner. That being said, there’s one thing I disagree with:
However, it is obvious to see that occasionally you’ll forget to replace items with this technique
There are ways to ensure you don’t forget, like naming your placeholders placeholder_<name> or whatever, so you can verify there are no placeholders left when you make the final build. That’s the best approach, because even extremely obvious placeholders can slip through otherwise: even with a full QA team, nobody is replaying every little scene of the game daily looking for them, and a few blank/pink/checkered textures in small or weird areas are easy to miss.
I think it’s okay for studios to use generative AI for placeholders, but if one of them makes it into the release you screwed up big time. And like I said, there are ways to ensure you don’t: it’s trivial to write a plugin for any of the major engines (and it should be even easier if you’re building the engine yourself) that alerts you to placeholders still in use at compile time.
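For what it’s worth, the basic version of that check doesn’t even need a full engine plugin; a tiny script run as a pre-build step does the job. Here’s a minimal sketch, assuming assets live under an assets/ folder and placeholders follow the placeholder_<name> convention (the folder name and the CI wiring are my assumptions, not tied to any particular engine):

```python
#!/usr/bin/env python3
"""Fail the build if any asset still uses the placeholder_<name> convention.

Hypothetical pre-build / CI step: the assets/ path is an assumption,
point it at wherever your project keeps textures, models, audio, etc.
"""
import sys
from pathlib import Path

ASSET_DIR = Path("assets")           # assumed project asset root
PLACEHOLDER_PREFIX = "placeholder_"  # the naming convention described above

def find_placeholders(root: Path) -> list[Path]:
    # Recursively collect every file whose name marks it as a placeholder.
    return [p for p in root.rglob("*")
            if p.is_file() and p.name.startswith(PLACEHOLDER_PREFIX)]

if __name__ == "__main__":
    leftovers = find_placeholders(ASSET_DIR)
    if leftovers:
        print("Placeholder assets still present in this build:")
        for path in leftovers:
            print(f"  {path}")
        sys.exit(1)  # non-zero exit fails the release build / CI job
    print("No placeholder assets found.")
```

Hook something like that into the release pipeline and a leftover placeholder becomes a failed build instead of something QA has to happen to notice.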
Yeah, because businesses can’t simply ask employees or random people to buy the machines, buy them back, and still get them cheaper. Hell, they could even advertise that they’ll buy machines at a 10% higher price and let random people offer them up. It’s an open platform, you can’t prevent people from doing that. Selling the machines at a loss is a sure way to have Valve bleed money, just like what happened with the PlayStation 3 until they locked the system down. I would rather the hardware cost a bit more so that the platform can remain open.
Re-read my answer: if they were sold at a loss like you suggested, it would be worthwhile for companies to buy them as office machines, servers, or whatever else, costing Valve money without bringing in any profit afterwards, because those machines would be purchased without gaming in mind, purely because they were the cheapest available option (since all of the others carry some profit margin while Steam Machines would be sold at a loss).
Yes, but my whole point was that PCs have other uses, so if Valve sells a PC at a loss it can’t count on recovering the money through games, because people won’t necessarily play games on that machine. Saying “if you’re playing games” to that point is like someone explaining why seatbelts are needed in cars and you replying “if you never crash they’re useless”. OF COURSE everything works out in your hypothetical; the whole point is the disaster that would happen if that weren’t the case.
It’s not in the thread line I’m replying to; to get to that I would have had to read another reply, and all of the replies to that, to spot yours.
If the work you do can be fully specified in a Jira ticket, you’re a code monkey and not a software engineer, of course you can use LLMs to do your job since you can be replaced by an LLM.
And it’s not true that agents can’t help with edge cases, they can. If you know which points to look at, you can task it to analyze the specific interaction and watch which parts of the code get mentioned.
You’re missing my point entirely: it’s not that it can’t help with them, it’s that the solution it writes will not take them into account unless you tell it to, and explaining every edge case in enough detail to be unambiguous about all of them is essentially the same as writing the code directly. Not to mention that you can’t possibly know all of the edge cases of the solution it will write without seeing it, so you can’t tell it which edge cases to watch for without knowing what code it will write.
I do write far fewer symbols to the LLM than I would when writing the code myself.
Maybe, but then you have to review everything it wrote, so you waste more time. Give me one concrete example where you can prompt an LLM for code that is advanced enough to be worth it (i.e. writing the prompt and reviewing the generated code would be faster than writing the code myself) and not so generic that I could just find the answer on Stack Overflow.
Those symbols don’t have to be structured
If you don’t structure them, the LLM might misinterpret what you meant. Structure in a language is required to make things unambiguous. This reminds me of the stupid joke about “go to the store and bring 1L of milk, if they have eggs bring 6”, and the programmer coming back with 6L of milk because they had eggs. Of course that’s a silly example, but anything complex enough to be worth using an LLM on is hard to describe unambiguously, covering all edge cases, in normal human speak.
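To make the joke concrete: forced into a structured form, the “programmer” reading looks something like this (a toy sketch, obviously; the point is just that the ambiguous “bring 6” binds to the wrong thing when nothing pins it down):

```python
def go_to_store(they_have_eggs: bool) -> dict:
    # "Bring 1L of milk; if they have eggs, bring 6."
    litres_of_milk = 1
    if they_have_eggs:
        litres_of_milk = 6  # literal reading: "6" refers back to the milk
    return {"milk_litres": litres_of_milk, "eggs": 0}
```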
and they can even have typos, so I can focus my brain activity on things that actually matter.
Typos are very easy to correct: most editors will highlight them for you, some can even autocorrect them, and more likely you avoid most of them with tab completion anyway. I don’t waste any brain activity on that. I’m thinking about the solution and structuring it in an unambiguous way, because that is what writing code is; it’s not some cryptic art of writing the proper runes to make the machine do your will, like you seem to be implying, it’s just structured thought.
Plus, copilot is shit.
Might be. I wouldn’t know any other, as that’s the one I have available to use, but honestly I doubt the others are so much better that it makes a difference.
I rate your post as a skill issue.
Yup, I have absolutely no skill in using LLMs, nor will I waste my time on it. Don’t get me wrong, it’s a neat tool for auto-completing small snippets, like we used to do with an actual snippet library a couple of years ago, and it’s also a decent tool for navigating unknown codebases, asking it where certain parts are or how to achieve something in them. I would say 60% of the time it gives you some good pointers, but 90% of the time most of the code it writes is wrong; at least it points you in the right direction of where to start investigating.
I don’t expect you to understand this, since from what I’m reading here you’ve probably never worked on anything big enough, but a software engineer’s job is not to write code, that’s just a side effect. Our job is to solve problems, so either you’re trying to get the LLM to solve the problem for you, or you’re wasting lots of time explaining your solution in English, reading the generated code, understanding it, analyzing it, fixing any issues and testing it, possibly multiple times, instead of explaining your solution once, in code, and testing it.
To be fair, the first part of the game is by far the best. The unofficial patch adds back in a heckton of content in the late game, but even then, it feels sparse.
Maybe, I don’t know how far into the game I got since I never finished it. But I don’t think it ever felt empty… although the damn zombie mission is one I hate, and it has made me quit the game on more than one occasion.
I’m running Linux now
I have been running only Linux for over a decade, so I can confidently say the game runs, and on Steam it’s just hit play.
Not replying to you but to that statement: they’re absolutely wrong. I’ve never finished Bloodlines; life keeps getting in my way and I keep losing my save file (this isn’t unique to Bloodlines, there are several other games in the same bag). My point is, every few years I start a new save on the OG Bloodlines, and that game still holds up great. Sure, the graphics are outdated, but other than that it’s a great game even by today’s standards, and while I haven’t played Bloodlines 2, I’m fairly confident from everything I’ve seen that it’s a worse game by every metric that matters. These people think that graphics can overcome anything, but that’s one of the least important parts of a game.
Sorry, I won’t go through your post history to reply to a comment; be clearer in what you write.
I’m a software engineer, and if that’s how you code you’re either wasting time or producing garbage code, which might be acceptable wherever you work, but I guarantee you would not pass code review where I do. I do use Copilot, and it’s good at suggesting small snippets, maybe an if, maybe a function header, but even then 60% of the time I need to change what it suggested. Reviewing code is harder than writing it yourself: even if I could trust that the LLM would do exactly what I asked (which I can’t, not by a long shot), its code might still be open to bugs or special cases, so I would have to read the code, understand what it tried to do, figure out the edge cases of that solution and see whether it handled them. In short, it would take me much longer to do things via LLMs than to write them myself, because writing code is the easy part of programming; thinking about the solution and its limitations and edge cases is the hard part, and LLMs can’t understand that. The moment you describe your solution in sufficient detail that an LLM can possibly generate the right code, you’ve essentially written the code yourself, just in a more complicated and ambiguous format. This is what most non-technical managers fail to understand: code is just structured English, and we’re already writing something better than prompts to an LLM.
Tbf the AI tag should be about AI-generated assets. Cause there’s no problem keeping code quality while using AI, and that’s what the whole dev industry does now.
At no point did you mention someone approving it.
Also, you should read what I said: I said most large stuff generated by AI needs to be completely redone. You can generate a small function or maybe a small piece of an image, if you have a professional validating that small chunk, but if you think you can generate an entire program or image with LLMs you’re delusional.