We need the names of the Jagex executives shouted from the rooftops on the RuneScape forums, and the jeers should follow them to their next job, because people deserve to know when a shitty, product-ruining executive is about to ruin a second product.
Jagex isn’t doing a damn thing. It’s the executives.
I tend not to finish them either. I think I just like the exploration and learning part, then I get bored when it comes time to apply what I learned and power through content.
There’s a third-person mod for it if that style of gameplay would suit you better.
It’s an absolute masterpiece, but it requires your active attention. You have to put the pieces it gives you together, or you won’t get that expansive “more than the sum of its parts” effect that really good art can achieve.
I think the PC vs. console divide is relevant here. I’m not sure how advanced text entry on consoles is these days, but I imagine PCs have the advantage with keyboards. Maybe if they use voice recognition on the consoles? But AAA games usually target both, and if interacting with the model is clunky for a big chunk of your market, then the big developers might not use the technology.
Of course, indie devs that only target PC can go wild.
Whisper from OpenAI is pretty solid for speech recognition (at least for English), and it’s small enough to deploy on mobile devices. If I recall correctly, both PS and Xbox controllers have mics built in, so the input device is covered.
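For a sense of how little code that side takes, here’s a minimal sketch using the open-source openai-whisper package (the audio file name is a made-up placeholder, and ffmpeg needs to be installed for decoding):

```python
# Minimal sketch: transcribe a captured voice clip with openai-whisper.
import whisper

# "tiny" is the smallest checkpoint (~39M parameters), the kind of size
# you'd reach for on a console or mobile device.
model = whisper.load_model("tiny")

# "player_line.wav" is a hypothetical clip captured from the controller mic.
result = model.transcribe("player_line.wav", language="en")
print(result["text"])
```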
AI could also generate dialogue options for players, though. It could operate as traditional dialogue, with the AI generating responses and possible dialogue paths ahead of time, so you get a “normal” experience that just changes every playthrough.
A good fit would be random background NPCs, for example pedestrians in a GTA-like game. It could potentially increase the variety in what they say, and maybe even let them comment on things the player has just done.
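A minimal sketch of that pre-generation idea, applied to pedestrian barks: bake a handful of variants per event offline, then the shipped game just picks from the baked file. This assumes llama-cpp-python and a hypothetical local GGUF model file; the event strings and file names are made up for illustration.

```python
# Offline step: pre-generate a few pedestrian bark variants per event.
import json
import random
from llama_cpp import Llama

# Hypothetical local model file; any small instruct model would do here.
llm = Llama(model_path="models/npc-small.gguf", verbose=False)

EVENTS = [
    "the player just crashed a car nearby",
    "the player is blocking the sidewalk",
]

baked = {}
for event in EVENTS:
    prompt = (
        "You are a random pedestrian in a modern city. "
        f"React in one short spoken line: {event}.\nLine:"
    )
    baked[event] = [
        llm(prompt, max_tokens=32, temperature=0.9)["choices"][0]["text"].strip()
        for _ in range(5)
    ]

with open("pedestrian_barks.json", "w") as f:
    json.dump(baked, f, indent=2)

# Runtime step: the game only loads the baked file, no model needed on the client.
with open("pedestrian_barks.json") as f:
    barks = json.load(f)
print(random.choice(barks["the player just crashed a car nearby"]))
```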
We won’t see large language models. We’ll more likely see a stripped-down version, a small language model (or a “domain-specific model”, if you want the fancy marketing wank term), because an NPC in a fantasy game doesn’t need to know about 13th-century Europe or 19th-century Asia.
Yes, LLMs are too costly for this and require a cloud service; smaller models could run on the client. The main difficulty is getting the training data and preparing it for machine learning.
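On the data side, a rough sketch of what that prep might look like, assuming the game’s lore can be exported as a simple list and the target is the prompt/response JSONL layout most fine-tuning tooling accepts (file names and fields here are made up for illustration):

```python
# Minimal sketch: turn exported in-game lore into prompt/response pairs in
# a JSONL file, a common input format for fine-tuning a small model.
import json

# Hypothetical lore entries exported from the game's quest/dialogue database.
lore = [
    {"topic": "the Iron Keep", "fact": "The Iron Keep fell to the orc warbands twenty winters ago."},
    {"topic": "Mayor Elra", "fact": "Mayor Elra has banned open flames inside the city walls."},
]

with open("npc_finetune.jsonl", "w") as f:
    for entry in lore:
        pair = {
            "prompt": f"A traveller asks the innkeeper about {entry['topic']}.",
            "response": entry["fact"],
        }
        f.write(json.dumps(pair) + "\n")
```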
Not anytime soon. Nvidia tried, and nobody liked it. LLMs still suck at creative writing and need a ton of RAM/VRAM just to work. They also often get confused or trail off in any discussion/roleplay.
The only game that sort of made it work was Suck Up!, where you’re a vampire who has to convince an AI to let you into their house so you can suck their blood. It’s a fun concept, but even that game gets repetitive quickly, and the LLM is very stupid and random.
Tiny models only get stupid like that because you’re taking a general-purpose model that knows everything in the world and compressing all that knowledge too much. If you start with a model that only knows basic English and info about a few hundred things in the game, it can be much smaller.