I'm pretty sure we could have made game AI smarter than humans, or at least better, for a long time now. It's just not fun to play against. You need AI that you can win against. Instead of neural networks, I think the AI should gamble a bit more. A good example is EU4, where on hard difficulty the AI won't attack you until it's sure it can win… which actually makes it more predictable than the normal AI, because you can reasonably guess whether it will attack and try to outmaneuver it. On normal, it will sometimes just attack you if there's a reasonable (or sometimes even unreasonable) chance to win, which very, very rarely makes normal the harder difficulty. Hard is still generally (99.9% of the time) much harder due to AI cheats, but the effect is real.

Total War: Warhammer 3 in particular could use this to spice things up. Currently the attacking army always attacks and the defending army always defends, which makes attacking more advantageous, and the army always waits for reinforcements. They could, for example, make the defending army sometimes attack depending on its composition (or even just RNG) - say, when it has only melee combatants, so you don't have time to deal damage with your mage. Or the opposite: make the attacking army stay still, protect its artillery, and bombard you with cannons if it has lots of artillery. Just some basic strategies, so the fights aren't always so similar at the beginning.
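A minimal sketch of what that gamble could look like (the function, the `aggression` knob, and all the numbers are mine, not anything from EU4):

```python
import random

def should_attack(win_chance: float, aggression: float, rng: random.Random) -> bool:
    """Decide whether the AI declares war.

    win_chance - the AI's own estimate of winning, 0.0..1.0
    aggression - per-difficulty knob: a 'hard' AI might use 0.0
                 (only attack when victory is certain), a 'normal' AI
                 something like 0.5 so it gambles on shakier odds.
    """
    # The AI attacks when its estimate beats a randomly drawn threshold.
    # Higher aggression lowers the bar, so the AI sometimes takes
    # unreasonable fights - which is exactly what makes it less predictable.
    threshold = rng.uniform(1.0 - aggression, 1.0)
    return win_chance >= threshold

rng = random.Random(42)
# A 'hard' AI (aggression 0.0) never takes a 70% fight:
hard = should_attack(0.7, 0.0, rng)
# A 'normal' AI (aggression 0.5) takes that same fight roughly 40% of the time:
normal = sum(should_attack(0.7, 0.5, rng) for _ in range(1000))
```

The point of drawing the threshold per decision, instead of fixing it per difficulty, is that the player can never be fully sure which fights the AI will take.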
Yeah, the easiest thing to implement is an omniscient AI. The AI code executes within the game engine, so it has complete access to any information you want.
You can just query the player's position at any point in time, even if there's a wall between the NPC and the player. It takes extra logic to *not* use the player's position in such a case, or to only use a rough position after the player has made a noise, for example.
Of course, the decision-making is a whole separate story. Even an omniscient AI won't know how to use this information unless you provide it with rules.
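To make that concrete, here's a toy sketch (all names are hypothetical, not from any real engine) of gating the engine's perfect knowledge behind line of sight and noise:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Vec2:
    x: float
    y: float

def has_line_of_sight(npc: Vec2, player: Vec2, walls: list) -> bool:
    """True if no wall segment crosses the NPC-player line (2D segment test)."""
    def ccw(a, b, c):
        return (c.y - a.y) * (b.x - a.x) > (b.y - a.y) * (c.x - a.x)
    def intersects(p1, p2, p3, p4):
        return ccw(p1, p3, p4) != ccw(p2, p3, p4) and ccw(p1, p2, p3) != ccw(p1, p2, p4)
    return not any(intersects(npc, player, a, b) for a, b in walls)

class Guard:
    """The engine hands us the exact player position every frame, but we only
    *use* it when the guard could plausibly know it: direct sight, or a noise."""
    def __init__(self):
        self.last_known: Optional[Vec2] = None

    def update(self, npc: Vec2, player: Vec2, walls, heard_noise_at: Optional[Vec2]):
        if has_line_of_sight(npc, player, walls):
            self.last_known = player          # saw the player directly
        elif heard_noise_at is not None:
            self.last_known = heard_noise_at  # only the rough noise position
        # otherwise: keep acting on stale information, like a human would
```

The extra logic is entirely about *discarding* information the code already has, which is why a "fair" AI is more work than an omniscient one.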
I'm guessing what OP wants is:
limiting the knowledge of the AI by just feeding it a rendered image like humans see it, and
somehow train AI on this input, so it figures out such rules on its own.
The most advanced AI I've seen is in Hitman WoA and Zelda: Breath of the Wild.
Neither game has "learning" AI. They just have tons of rules that the player can reasonably expect and interact with, which make them seem lifelike. If a guard sees you throw a coin twice in Hitman, he doesn't get suspicious and investigate - he goes and picks it up just like the first one. Same for reactions to finding guns, briefcases, or your exploding rubber duck.
Finished Metro: Last Light last week. Have to say I didn't really like it. Spoiler warnings below. The good bits were good, to be sure: the populated stations of Bolshoi and Venice were phenomenal, and there were parts that harked back to the highlights of the first game - the early parts with Pavel, for instance, and some nice levels in the tunnels. Playing on Survival Hardcore, there were passages that were phenomenally immersive and enjoyable, and I do love the world building around the communities in the metro.
The story just didn’t land with me. The political war left me completely uninterested and the love story with Anna was so half-baked I almost wanted to stop playing right there when the sex scene happened. I also didn’t really like the overly supernatural stuff like the River of Fate. It was also kind of hard for me to follow the logic of the narrative at times as it felt like Artyom was just kind of drifting around and happened to end up where he needed to be regardless. He also should have died like a dozen times, but I guess he’s a superhero.
The moral system left me frustrated more than anything now that I knew about its existence (I played 2033 completely blind). Finally, the boss fights felt terrible and really out of place in a game that should be about tension, loneliness, and stealth. Artyom was too much of an action hero here for my taste. There wasn't really anything like the great Library level in 2033. When he picked up a gatling gun at the end like a Russian Rambo and fought off a horde of enemies, I was rolling my eyes.
Still, I’m glad to have gotten through it finally - this was my second attempt - and I am interested to see what they did in Exodus as I’ve heard nothing but good things.
For now I'm taking a breather and tackling Bioshock 2, another backlog game to get through before being able to play Infinite, which is the game I'm really looking forward to.
ECHO, the third-person action/puzzle game, had a fun concept: your machine doppelgangers are scripted to learn from you (repeating the set actions you perform) and reset every cycle.
I don't think it would work by itself without such limits.
I always got the impression it wasn't a learning AI but rather something very limited: "Has the player pressed the run button? If YES: the AI can use run next cycle."
Yes, it's 100% scripted. In an environment where you can do about 10 different actions, they run their routine, adding the actions you used in that cycle, before they get reset. In a sense, they act no more naturally than monsters from a tabletop game.
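That per-cycle unlocking can be sketched in a few lines (the action names and structure are my guess at the mechanic, not ECHO's actual code):

```python
# Hypothetical ECHO-style capability mirroring: clones may only use an
# action in this cycle if the player used it during the previous cycle.
PLAYER_ACTIONS = {"walk", "run", "crouch", "vault", "shoot", "open_door"}

class EchoCycle:
    def __init__(self):
        self.observed: set = set()   # what the player did this cycle
        self.unlocked: set = set()   # what clones may do this cycle

    def record(self, action: str):
        """Called whenever the player performs an action."""
        if action in PLAYER_ACTIONS:
            self.observed.add(action)

    def reset(self):
        """The blackout: clones forget, then 'learn' only last cycle's actions."""
        self.unlocked = self.observed
        self.observed = set()

    def clone_can(self, action: str) -> bool:
        return action in self.unlocked
```

No statistics, no training - just set membership that resets every cycle, which is exactly why it reads as "learning" to the player while staying completely predictable to the designer.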
But these do make me think that if we're talking game design with an LLM as an actor, it too should have a very tight set of options around it to learn effectively. The ideal situation is something simplistic, like Google's dino jumper, where the goal is to get as far as possible by recognising a barrier and jumping at the right time.
But when things get less trivial - like in CS 1.6, where you have the choice to plant the bomb or kill all the CTs - it needs a lot of learning to decide which of those two options is statistically right at any moment. And it needs to do this while having a choice of guns, a never-ending branching tree of routes to take, tactics to use, and ways to coexist with its teammates. With growing complexity, it's hard to make sure it's guided right.
Imagine you have thousands of parameters from it playing for a year straight, losing and winning. You need to add weight to the parameters that actually affect its chance to win while it keeps learning. That's more of a task than writing a believable bot, which is already difficult.
And the way ECHO fakes it… makes it less of a headache. Because if you limit the possible options to something close to Google's dino, you can keep a firm grasp on teaching the LLM how to behave in a bunch of pre-defined situations.
And if you don't, it's probably easier to "fake it" like ECHO or F.E.A.R. do, giving the player the impression of AI when it's really just a complicated script orchestrating the spectacle.