Generating an AI voice to speak the lines increases that energy cost exponentially.
TTS models are tiny in comparison to LLMs. How does this track? The biggest I could find was Orpheus-TTS, which comes in 3B/1B/400M/150M parameter sizes. And they’re not using a 600-billion-parameter LLM to generate the text for Vader’s responses; that would likely be way too big. Next to generating the text, the speech step is barely a drop in the bucket.
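If you want to sanity check claims like this, rough napkin math helps. A decoder-only transformer costs roughly 2 × parameters FLOPs per generated token, and everything below (model sizes, token counts) is a made-up placeholder for illustration, not a measurement of whatever the Vader demo actually runs:

```python
# Back-of-the-envelope only: ~2 * parameters FLOPs per generated token.
# All numbers are illustrative placeholders, not measurements.

def flops_for_reply(params: float, tokens: int) -> float:
    """Rough FLOPs to generate `tokens` tokens with a model of `params` parameters."""
    return 2 * params * tokens

text_flops = flops_for_reply(params=70e9, tokens=200)      # hypothetical 70B LLM, ~200 text tokens
speech_flops = flops_for_reply(params=400e6, tokens=2000)  # hypothetical 400M TTS, ~2000 audio tokens

print(f"text  : {text_flops:.2e} FLOPs")
print(f"speech: {speech_flops:.2e} FLOPs")
print(f"speech / text: {speech_flops / text_flops:.1%}")
```

The point isn’t the exact ratio, it’s that the cost scales with parameters × tokens, so you can’t call the speech step “exponential” without plugging in parameter counts.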
You need to include parameter counts in your calculations. A lot of these assumptions are so wrong it borders on misinformation.
This doesn’t mean you can misrepresent facts like this though. The line I quoted is misinformation, and you don’t know what you’re talking about. I’m not trying to sound so aggressive, but it’s the only way I can phrase it.
So, started playing Freedom Unite recently and I have a massive skill issue. Any advice you can give? It’s my first Monster Hunter game. Especially in quests where you have to kill a big monster, I just get bodied. Tried out lance, longsword, and dual swords.
It helps to think of fights against monsters as a turn-based encounter. As long as you can dodge or the monster misses its attack, you should be able to land a hit. If you get hit or are too far away when the monster attacks, you probably won’t be able to land any meaningful offense or heal without getting punished for it.
Do you think it is likely that we will start to see Large Language Models integrated into major video games? If so, are there some examples within gaming already?
You can run language models on consumer cards right now. The only catch is that, depending on the size of the model and the amount of VRAM on your card, you might not be able to do much else.
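As a rough sketch of what “depending on the size of the model” means in practice: counting only the weights themselves (parameters × bits per weight), with example sizes and quantization levels, you get numbers like these. KV cache, activations, and whatever else your card is doing come on top:

```python
# Rough VRAM needed just to hold a model's weights, by size and quantization.
# Example values only; real usage is higher (KV cache, activations, overhead).

def weights_gib(params: float, bits_per_weight: int) -> float:
    return params * bits_per_weight / 8 / 1024**3

for label, params in [("7B", 7e9), ("13B", 13e9), ("70B", 70e9)]:
    line = ", ".join(f"{bits}-bit: ~{weights_gib(params, bits):.0f} GiB" for bits in (16, 8, 4))
    print(f"{label}: {line}")
```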
I mean, you’ve never seen a purple elephant with a tennis racket. None of that exists in the data set, since elephants are neither purple nor tennis players. Exposure to all the individual elements allows for generation of concepts outside the existing data, even though they don’t exist in reality or in the data set.
There are more forms of guidance than just raw words. Just off the top of my head, there’s inpainting, outpainting, controlnets, prompt editing, and embeddings. The researchers who pulled this off definitely didn’t do it with text prompts.
But at what point does that guidance just become the dataset you removed from the training data?
The whole point is that it didn’t know the concepts beforehand, and no, it doesn’t become the dataset. Observations made of the training data are encoded into the model’s weights during training; after that, the dataset is never relevant again because the model’s weights are locked in.
To get it to run Doom, they used Doom.
To realize a new genre, you’ll “just” have to make that game the old-fashioned way, first.
Or you could train a more general model. These things happen in steps, research is a process.
You keep moving the goal posts and putting words in my mouth. I never said you can do new things out of nothing. Nothing I mentioned is approaching, equaling, or exceeding the effort of training a model.
You haven’t answered a single one of my questions, and you are not arguing in good faith. We’re done here. I can’t say it’s been a pleasure.
If it’s three minutes for a boss, I think that’s reasonable. Do you have any examples you can show me? From what I’ve seen, the fights are pretty quick in this game.
Souls games are a whole different beast from this type of game. They don’t really prepare you for managing cooldowns and doing complex attack strings. They train you to be careful and win without taking big risks, which sounds like the strategy you executed. From what I can tell almost half of that fight is cutscenes, which isn’t ideal.
I will agree with you that the gameplay seems boring though. They could have done more. I’m told that it gets better when you unlock more moves and mechanics.
His work on Vagrant Story was phenomenal. Japanese scripts tend to be really boring and samey. Without the work of a good localizer, you’d hear the same twenty anime one-liners interspersed throughout the entire game.
Not only that, but Capcom seems to be adding this Enigma Protector bullshit to their back catalog as well, if this Steam forum discussion is to be believed. Bet this borks the games on Linux and Steam Deck now.
According to SAG AFTRA, the deal will “enable Replica to engage SAG-AFTRA members under a fair, ethical agreement to safely create and license a digital replica of their voice. Licensed voices can be used in video game development and other interactive media projects from pre-production to final release.”...
They’re not giving up though; what they’re doing is getting ahead of it. Assuming their deal is favorable for their members, they’re making it so that anyone who wants SAG-AFTRA synth voices has to go through their contracted company, with which they have collective bargaining power, or strike an equal or better deal. That’s on top of blacklisting companies that use non-union synth voices from SAG-AFTRA work.
This is way better than leaving actors on their own to bargain with companies, which would have definitely happened. Rather than have companies wear individuals down and drive pay down, they get to dictate the terms, together.
Pretty cool. I almost had to start liking Epic Store for not having such a dumb stance. The disclaimer on games using generative content is weird, but it’s a solid step forward.
AI generated content has a lot of unanswered legal questions around it, which can lead to a lot of headaches with moderation and the possibility of illegal content showing up (remember that not only “well meaning” devs will use these tools). It seems reasonable for a company to try to minimize the risk.
There were never any unanswered legal questions that would prevent you from using generated assets in a game. That’s why Valve’s old stance was so odd. I’m not sure what you mean by the possibility of illegal content; can you elaborate?
I remember there being a lot of uncertainty about the legality of what can (and can’t) be used to train models and how, especially when they’re used for commercial purposes - has that been settled in any way? I think there was also a case of not being able to copyright AI generated content due to lack of human authorship (I’d have to look for an article on this one as it’s been a while) - this obviously won’t be a problem if generated assets are used as a base to be worked upon.
In the United States, the Authors Guild v. Google case established that Google’s use of copyrighted material in its books search constituted fair use. Most people agree this will apply to generative models as well since the nature of the use is highly transformative.
I recommend reading this article from April last year by Kit Walsh, a senior staff attorney at the EFF, if you haven’t already. The EFF is a digital rights group that recently won a historic case: border guards now need a warrant to search your phone.
Works involving the use of AI are copyrightable, but just like everything else, it depends. It’s also important to remember the Copyright Office guidance isn’t law. Their guidance reflects only the office’s interpretation based on its experience; it isn’t binding on the courts or on other parties. Guidance from the office is not a substitute for legal advice, and it does not create any rights or obligations for anyone. They are the lowest rung on the ladder for deciding what the law means.
As for illegal content - Valve mentioned it in regards to live-generated stuff. I assume they’re worried about the possibility of plagiarism and things going against their ToS, which is why they ask about the guardrails used in such systems. On a more general note, there have also been cases of AI articles making up fake stories accusing real people of criminal behavior - this probably won’t be a problem with AI usage in games (I hope, anyway) but it’s another sensitive topic devs using such tools have to keep in mind.
I agree live generated stuff could get developers in trouble. With pre-generated assets you can make sure ahead of time everything is above board, but that’s not really possible when you have users influencing what content appears in your game. If they were going to ban anything, the original ban should have been limited to just this.
The one I kind of remembered (if only partially) was the Reuters article, which contains the quote I was referring to:
The office reiterated Wednesday that copyright protection depends on the amount of human creativity involved, and that the most popular AI systems likely do not create copyrightable work.
This was likely in reference to Midjourney, which was the system in question in that ruling. Midjourney, even for its time, had very rudimentary user controls, way behind the open-source alternatives, which likely didn’t impress the registrar.
There’s also a spectrum of involvement depending on what tool you’re using. I know web-based interfaces don’t allow for a lot of freedom, since they want to keep users from generating things outside their terms of use, but with open-source models based on Stable Diffusion you can get a lot more involved and have a lot more freedom. We’re in a completely different world from March 2023 as far as generative tools go.
Take a look at the difference between a Midjourney prompt and a Stable Diffusion prompt.
To break down a bit of what’s going on, I’d like to explain some of the elements found in the Stable Diffusion prompt.
sarasf is the activation token for the LoRA of the character in this image, and <lora:sarasf_V2-10:0.7> is the character LoRA for Sarah from Shining Force II. LoRA are like supplementary models you use on top of a base model to capture a style or concept, like a patch. Some LoRA don’t have activation tokens, and ones that do can sometimes be used without their token to get different results.
The 0.7 in <lora:sarasf_V2-10:0.7> refers to the strength at which the weights from the LoRA are applied to the output. Lowering the number makes the concept manifest more weakly in the output. You can blend styles this way with just the base model, or with multiple LoRA at the same time at different strengths. You can even take a monochrome LoRA and push the weight into the negative to get some crazy colors.
The Negative Prompt is where you include things you don’t want in your image. (worst quality, low quality:1.4) here has its attention set to 1.4; attention is sort of like weight, but for tokens. LoRA bring their own weights to add onto the model, whereas attention on tokens works completely inside the weights they’re given. In this negative prompt, FastNegativeV2 is an embedding known as a Textual Inversion. It’s sort of like a crystallized collection of tokens that tells the model something precise you want without you having to enter the tokens yourself or mess around with the attention manually. Embeddings you put in the negative prompt are known as Negative Embeddings.
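If you’d rather do this outside the webui, here’s a minimal sketch of the same ideas with the diffusers library: a LoRA loaded at 0.7 strength plus a negative embedding. The checkpoint path and file names are placeholders for whatever you have locally, plain diffusers doesn’t parse the (token:1.4) attention syntax (you’d need an add-on like compel for that), and depending on your diffusers version the LoRA scale is passed differently (e.g. `pipe.fuse_lora(lora_scale=0.7)`):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/sd15-checkpoint",  # placeholder: any SD 1.5-based model folder or Hub id
    torch_dtype=torch.float16,
).to("cuda")

# LoRA: extra weights layered on top of the base model.
pipe.load_lora_weights("path/to/loras", weight_name="sarasf_V2-10.safetensors")

# Negative embedding: a textual inversion loaded under its own token.
pipe.load_textual_inversion("path/to/FastNegativeV2.pt", token="FastNegativeV2")

image = pipe(
    prompt="sarasf, 1girl, portrait",
    negative_prompt="FastNegativeV2, worst quality, low quality",
    # 0.7 here plays the role of the :0.7 in <lora:sarasf_V2-10:0.7>
    cross_attention_kwargs={"scale": 0.7},
    num_inference_steps=25,
).images[0]
image.save("sarah.png")
```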
In the next part, Steps stands for how many steps you want the model to take to solve the starting noise into an image. More steps take longer.
VAE is the name of the Variational Autoencoder used in this generation. The VAE is responsible for decoding the latent the model works in into the final pixel image. A mismatch of VAE and model can yield blurry and desaturated images, so some models opt to have their VAE baked in.
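If you’re using diffusers, swapping in a standalone VAE is a couple of lines; sd-vae-ft-mse is a commonly used one, and the checkpoint path is a placeholder:

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

# Use a separate VAE when the checkpoint doesn't have a good one baked in.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/sd15-checkpoint",  # placeholder
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")
```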
Size is the dimensions in pixels the image will be generated at.
Seed is the number that determines the starting noise for the image. You need it to be able to reproduce a specific image.
Model is the name of the model used, and Sampler is the name of the algorithm that solves the noise into an image. There are a few different samplers, each with their own trade-offs for speed, quality, and memory usage.
CFG is basically how close you want the model to follow your prompt. Some models can’t handle high CFG values and flip out, giving over-exposed or nonsense output.
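To tie those together, here’s roughly how Steps, Size, Seed, Sampler, and CFG map onto a diffusers call. The values are example numbers and the checkpoint path is a placeholder; the guidance_scale comment is the usual classifier-free guidance formula:

```python
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/sd15-checkpoint",  # placeholder
    torch_dtype=torch.float16,
).to("cuda")

# Sampler: swap the scheduler on the same weights ("Euler a" here).
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

# Seed: the same seed with the same settings should reproduce the same image.
generator = torch.Generator(device="cuda").manual_seed(1234567890)

image = pipe(
    prompt="sarasf, 1girl, portrait",
    num_inference_steps=28,   # Steps
    width=512, height=768,    # Size
    guidance_scale=7.0,       # CFG: roughly uncond + CFG * (cond - uncond) each step
    generator=generator,      # Seed
).images[0]
```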
Hires steps is the number of steps you want to take on the second pass that upscales the output. This is necessary to get higher resolution images without visual artifacts. Hires upscaler is the name of the model that was used during the upscaling step, and again there are a ton of those, each with their own trade-offs and use cases.
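And a rough sketch of what that second pass amounts to: upscale the first image, then run an img2img pass over it for a handful of steps. The actual webui feature uses dedicated upscaler models (ESRGAN and friends); the plain resize and file names here are just placeholders to illustrate the idea:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

img2img = StableDiffusionImg2ImgPipeline.from_pretrained(
    "path/to/sd15-checkpoint",  # placeholder
    torch_dtype=torch.float16,
).to("cuda")

first_pass = Image.open("sarah.png")  # output of the first pass
upscaled = first_pass.resize((first_pass.width * 2, first_pass.height * 2))

final = img2img(
    prompt="sarasf, 1girl, portrait",
    image=upscaled,
    strength=0.4,             # how much of the upscaled image gets re-noised
    num_inference_steps=15,   # roughly the "Hires steps" second pass
).images[0]
final.save("sarah_hires.png")
```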
After ADetailer are the parameters for ADetailer, an extension that does a post-process pass to fix things like broken anatomy, faces, and hands. We’ll just leave it at that because I don’t feel like explaining all the different settings found there.
The rest of the soundtrack is great too. It was my favorite gacha I’ve played, probably because it was the least exploitative gacha I’ve ever played. Sadly, it wasn’t doing too well and shut down in 2020 as development for Final Fantasy VII Remake was ramping up and they needed all hands on deck in Business Division 1.
Anyone heard of this? I’ve been following it since the first few trailers looked fake, but now I’m more convinced this is going to be a real game (and actually looks kinda good).
There was never anything stopping them from doing that without AI. They don’t do it because their executives and investors want the large Return on Investment that they can only get with big blockbusters. They don’t care to take over the indie scene because it’s often focused on titles that are niche and risky.
There’s a possibility the profit margins could just get that juicy. You could have a skeleton crew work on a game for a shorter amount of time and get it out there making money.
Really, I’m not entirely opposed to AI, but the mindset here is definitely one I cannot gel with: that making more, larger, faster art is more worthwhile than making it yourself. Even if AI could make whole characters and settings in someone’s style, the people working on it often want to make it themselves. An AI can’t condense all your inspirations and personality and the meaning you would put into a work for you. AI doesn’t even truly understand what it does; it’s only providing a statistics-based output. Even the best, most complex, most truly intelligent AI imaginable is not a replacement for an artist, because it isn’t that artist.
AI can’t create anything itself; it’s a tool to help artists create, explore, expedite, and improve. An AI can’t condense all of your inspirations and personality and meaning, in the same way a drawing tablet can’t. It’s all in how you use it. You can infuse it with your learned experiences at training, guidance, inference, and post-processing to make it adhere more closely to your statistics.
Ultimately AI still seems better suited to expansive games that need to be filled with a lot of content than to small works of passion.
We’ve been talking about indie game devs this whole time, but we haven’t even touched on amateur game devs. At small scale, I think this is where we’ll see the biggest impact. People with fewer or no skills might get the helping hand they need to fill the gaps in their knowledge and get started.
This is pure speculation, and a very iffy one at that. Large game companies keep betting on larger and larger projects, distancing themselves from niche genres. It’s a huge leap to go from “maybe they will try to make smaller games with AI”, which is already speculation, to “indie devs won’t be able to survive if they don’t use AI too”.
Square Enix, one of the biggest game publishers in the world, has several divisions that make gacha games for mobile platforms. These games are very profitable, and almost every one of them is developed in house. These games don’t compete with or replace their AAA games, and they keep on making them, so it must be good enough. It’s almost a requirement for there to be a mobile game of the latest Square Enix game.
The tablet can be a neutral medium; an AI is trying to condense the outwardly obvious stylistic choices of countless other artists, without an understanding of the underlying ideas that guided them, while you are trying to wrestle something somewhat close to your vision out of it. I suppose that’s like being a director, but it inherently means the result is less personal. What decided the shapes and colors? What decided the wording and tone? Who can say.
I’d say today’s tools already make getting started fairly easy, but there’s some merit to that. Still… it bumps up against the uncomfortable possibility that, if AI is widely adopted, there will be fewer game developer and artist jobs available. Sure, more people could get their start, but could they actually get any further than that?
That I can’t say, but I hate that this tool, with boundless potential to revolutionize the way we communicate, inspire, create, and connect with each other, out of the gate has people attacking it with saws, trying to get it to fit into the curtain-rod-shaped box of capitalism. It’s a sorry state. Maybe more people will follow cottage creators with a vision they find appealing, like on OnlyFans and Patreon? We’re social creatures; we like having shared experiences in that way. Hell, maybe we’ll see more collaborative projects like SCP in the future.
Did you know that mobile freemium games have already surpassed console games in revenue? Sure, they may be cheaper to produce, but they are not niche or low in Return on Investment, much the opposite. This does not even vaguely correlate with a total indie market takeover.
You’re moving the goalposts here; your original comment asserted that large companies only bet on larger and larger games, and when you have this many mobile games out at once, a lot of them are going to be pretty niche. Currently, gacha is the go-to for small-scale development at large companies, and to me it’s not out of the realm of possibility that lower costs lead to more traditional games.
However many examples you may pick, it still doesn’t make the tech able to make works exactly as the user envisions, and it isn’t based on their own internalized inspirations and personality in the same way. If anything, using established popular characters and styles as examples indicates that you aren’t quite grasping what I’m getting at, about the unique characteristics that each artist puts in their work, sometimes even unwittingly. I don’t doubt that AI could perfectly make infinite Mickeys. This isn’t about making more Mickeys; it’s about making fewer Mickeys, so to speak, and more of something entirely new. If you tell me what you want to see, I can probably find it.
I’m not sure what you believe generative tools are supposed to do. This is just one tool in a chest of many; it isn’t going to pop out fully finished work. You need to work with what you make. It also isn’t a requirement to use established characters. I picked things with distinctive characteristics, and the characters are just a touchstone for people to evaluate how well those characteristics are transferred. This can work just as well for anyone; I’ve seen people fine-tune with just nine images.
I’m not usually this radical, but putting it bluntly, either AI or Capitalism has to go. If things weren’t like this, I wouldn’t see any issue with an easier way to get some form of guided aid for artistic expression, leaving aside its limitations and the matter of scraping for a moment. With both of them together, we’ll see artists and game developers driven out of their industries, not to mention all the other artistic, customer service, and intellectual jobs that will soon be replaced to optimize profits for executives and investors. None of this would be a concern if everyone could just work on their passion projects and have a guaranteed livelihood, but that’s not how it works as it is.
I never meant small in terms of profits, I only ever meant in terms of development resources, that’s what generative AI will impact. The most humble games can become huge hits, see: Stardew Valley. With a better cost proposition, we might just see those psychological surreal point-and-click adventure games.
Also do mind that Final Fantasy XV: Pocket Edition isn’t a gacha; it’s a scaled-down port of the game of the same name that’s divided into ten chapters; the first one’s free, but the other nine will cost you. Meanwhile, Final Fantasy VII: Ever Crisis, a free-to-play port of Final Fantasy VII, will also be episodic, but it will have a gacha for weapons and costumes.
I was never arguing that it would be effortless, just easier. I also feel like marketing budgets are kind of beside the point when we’re talking about development costs, but hey, generative AI might help with that too.
Even your examples of it being done differently are still the highest-profile releases from that company, not some quirky novel idea. They were betting big on FFXV when they released that, and they’re doing the same for FFVII this time.
I don’t know, they also released The DioField Chronicle, Triangle Strategy, and Octopath Traveler, smaller-budget games with no pre-existing IP that were also pretty experimental. What they make may not be your “psychological surreal point-and-click adventure game”, but it might be something just as adventurous.
Some people consider releasing new RPG IPs to be pitching your money right into the trash. That’s pretty adventurous to me. Even if it doesn’t cause a drastic branching out, more companies dipping their toes might make quite the ripple.
It’s wild, but these days this is adventurous, even for Square Enix. The trend with their AAA games has been away from turn-based RPGs for more than a decade. More big companies might decide to release more modestly sized games that play to their heritage and strengths.
MARVEL Tōkon: Fighting Souls | Announce Trailer (www.youtube.com)
need advice, how to get good at monster hunter
Large Language Models in Video Games?
Genshin Impact Game Developer Will be Banned from Selling Lootboxes to Teens Under 16 without Parental Consent, Pay a $20 Million Fine to Settle FTC Charges. (www.ftc.gov)
Homewrecker (lemmy.world)
Game Freak has been allegedly hacked, with source codes for Pokemon games reportedly leaked (gbatemp.net)
The Legend of Goose: The Honk Waker - Mod Release Showcase (youtu.be)
New AI model can hallucinate a game of 1993’s Doom in real time (arstechnica.com)
Games like punch out?
Been enjoying punch out and super punch out and wondering if there are similar games out there. Any recommendations?
Stellaris gets a DLC about AI that features AI-created voices, director insists it's 'ethical' and 'we're pretty good at exploring dystopian sci-fi and don't want to end up there ourselves' (www.pcgamer.com)
"PSN isn't supported in my country. What do I do?" Arrowhead CEO: "I don't know" (lemmy.world)
Crypt of the NecroDancer - Hatsune Miku DLC! (www.youtube.com)
omg miku…
Marvel Rivals | Official Announcement Trailer (www.youtube.com)
Final Fantasy XVI PC Version In 'Final Stages Of Optimization,' Expect A Demo Before Release (www.gameinformer.com)
Final Fantasy Tactics Creator Reacts to Unicorn Overlord Localization Debate and Shares His Own Stories (www.ign.com)
Capcom adds new DRM to old PC games, raising worries over mods (www.polygon.com)
Nothing but greed (lemmy.world)
Video game actors speak out after union announces AI voice deal (www.videogameschronicle.com)
Steamworks Development - AI Content on Steam (steamcommunity.com)
Key points:...
FFVII Rebirth Director Naoki Hamaguchi says that Cait Sith's name is pronounced "Cat Shee."
Bayonetta creator Hideki Kamiya says 'It would be a disaster' if he ever collaborated with Hideo Kojima or Yoko Taro: 'It doesn't work like in Dragon Ball' (www.pcgamer.com)
Volt Tackle, the first official Pokemon x Hatsune Miku crossover song has released (youtube.com)
cross-posted from: bookwormstory.social/post/231842...
Palworld | TGS 2023 Trailer | Pocketpair | Multiplayer | Character Customization (www.youtube.com)
Tim Sweeney says Epic Games Store is open to devs using generative AI (www.gamedeveloper.com)