The AI of War

Hey folks, Tom here! Today I’m going to share with you my experience of working on the AI.

I came to the Wargroove team fairly late, switching over from Stardew Valley once work on the multiplayer update came to a conclusion. When I first joined the team, Wargroove already had AI, albeit AI that would commonly throw away a win, or miss an opportunity to escape defeat. My first major task was finding ways to make the AI smarter… a daunting introduction for someone who (at the time) couldn’t even beat the AI!

Luckily, I had help from Tiy (game design) and Armagon (level design), experts at handing the AI its ass. Together, they sent me dozens of situations where the AI was acting dumb. My job in each case was to step through the AI’s decision making using the debug output available in dev builds, and figure out why it thought a dumb order was a good idea.

This heatmap is one such debug tool. It visualises ‘threat’: the damage that the AI’s opponent could deal against a unit on any tile of the map. Also factored in is the potential threat against each tile on future turns, albeit reduced by a decay factor. The AI bases decisions on a collection of heatmaps like this.
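
To give a rough idea of what that means in practice, here’s a purely illustrative Python sketch (made-up unit stats, Manhattan distances, no terrain or pathing — not the game’s actual code): every tile accumulates the damage each enemy unit could deal there, with threat from future turns reduced by a decay factor.

```python
# Illustrative sketch only: made-up unit stats, Manhattan distances, and no
# terrain or pathing. Not Wargroove's actual implementation.
import math

DECAY = 0.5  # threat on future turns counts for less than threat right now

def turns_until_attack(unit, tile):
    """How many extra turns 'unit' needs before it can attack 'tile' (0 = next turn)."""
    distance = abs(unit["x"] - tile[0]) + abs(unit["y"] - tile[1])
    if distance <= unit["move"] + unit["range"]:
        return 0
    # Each additional turn of movement closes 'move' more tiles of the gap.
    return math.ceil((distance - unit["range"]) / unit["move"]) - 1

def build_threat_map(enemy_units, width, height):
    """For each tile, estimate the damage enemies could deal to a unit standing there."""
    threat = {(x, y): 0.0 for x in range(width) for y in range(height)}
    for tile in threat:
        for unit in enemy_units:
            # Threat from units that need future turns to reach the tile is decayed.
            threat[tile] += unit["damage"] * DECAY ** turns_until_attack(unit, tile)
    return threat

enemies = [
    {"x": 2, "y": 3, "move": 4, "range": 1, "damage": 55},  # melee-ish unit
    {"x": 7, "y": 0, "move": 3, "range": 3, "damage": 40},  # ranged-ish unit
]
threat_map = build_threat_map(enemies, width=10, height=10)
print(threat_map[(3, 3)])  # how dangerous it is to stand on tile (3, 3)
```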

The gradient of the objective map biases movement of the AI’s units towards areas of strategic value: capturable structures, and opposing commanders and strongholds, for instance.

The support map provides a generalised approximation of the AI player’s combat ability. The AI uses this to keep its units from straying into enemy territory without backup, and to decide where to retreat to.
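
In the same made-up style, those two could be sketched like this: the objective map spreads value outwards from strategic targets so it falls off with distance (that’s the gradient), and the support map is essentially the threat map calculation pointed at the AI’s own units. Again, none of this is the game’s real code or tuning.

```python
# Purely illustrative; shares the made-up unit/map shapes from the sketch above.

def build_objective_map(objectives, width, height):
    """Higher values near capturable structures, enemy strongholds, commanders, etc."""
    scores = {(x, y): 0.0 for x in range(width) for y in range(height)}
    for tile in scores:
        for obj in objectives:
            distance = abs(obj["x"] - tile[0]) + abs(obj["y"] - tile[1])
            # Value falls off with distance, producing a gradient the AI can walk up.
            scores[tile] += obj["value"] / (1 + distance)
    return scores

def build_support_map(friendly_units, width, height):
    """Rough measure of how much friendly firepower backs up each tile."""
    support = {(x, y): 0.0 for x in range(width) for y in range(height)}
    for tile in support:
        for unit in friendly_units:
            distance = abs(unit["x"] - tile[0]) + abs(unit["y"] - tile[1])
            # Nearby allies count for more than distant ones.
            support[tile] += unit["damage"] / (1 + distance)
    return support
```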

The AI uses values from these heatmaps to score every possible valid order it could give next, and then chooses the highest-scoring order to execute. When you see the AI do something dumb, such as sending its commander into certain doom, it’s almost always caused by a problem with the scoring method. Some specific examples I fixed were the threat map not factoring in crits, and the AI not considering the effect of the next turn’s change of weather on the opponent’s range.
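
The decision loop itself is the simple part; all of the difficulty lives in the scoring function. Something like this (hypothetical weights, and a made-up idea of what an ‘order’ looks like) captures the general shape:

```python
# The weights and the shape of an 'order' here are invented for illustration.

def score_order(order, threat_map, objective_map, support_map):
    dest = order["destination"]
    score = order.get("expected_damage", 0.0)        # value of the attack itself
    score += 0.5 * objective_map[dest]               # pull towards strategic targets
    exposure = threat_map[dest] - support_map[dest]  # danger not covered by backup
    score -= max(0.0, exposure)                      # punish unsupported exposure
    return score

def choose_order(valid_orders, threat_map, objective_map, support_map):
    """Score every valid order the AI could give next and pick the best one."""
    return max(
        valid_orders,
        key=lambda order: score_order(order, threat_map, objective_map, support_map),
    )
```

With a structure like this, both of the bugs above boil down to one of the inputs (the threat map, or the predicted opponent range) feeding the scoring function a wrong number.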

The best possible way to score orders would be to model hypothetical future states of the match, e.g. by simulating the order on a copy of the match, or by trying to predict the opponent’s next move. However, for Wargroove, we need the AI to be snappy even on low-end devices, which limits the amount of looking ahead the AI can do.
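
For the curious, deeper lookahead would look something like the toy below: simulate each candidate order on a copy of the match state, assume the opponent replies with their own best order, and recurse. The TinyState class and its numbers are entirely made up; the point is only to show why the branching gets expensive quickly.

```python
# Toy minimax-style lookahead; Wargroove's AI does NOT do this, because the
# branching factor makes it far too slow on low-end devices. TinyState is a
# made-up stand-in for a real match state.
import copy

class TinyState:
    def __init__(self, my_hp=30, enemy_hp=30):
        self.my_hp, self.enemy_hp = my_hp, enemy_hp

    def is_over(self):
        return self.my_hp <= 0 or self.enemy_hp <= 0

    def evaluate(self):
        # Heuristic fallback, analogous to scoring from heatmaps.
        return self.my_hp - self.enemy_hp

    def valid_orders(self):
        return ["attack", "wait"]

    def apply(self, order, my_turn):
        if order == "attack":
            if my_turn:
                self.enemy_hp -= 10
            else:
                self.my_hp -= 10
        # "wait" changes nothing in this toy

def lookahead_score(state, depth, my_turn=True):
    """Simulate each order on a copy of the match, assuming best play by both sides."""
    if depth == 0 or state.is_over():
        return state.evaluate()
    scores = []
    for order in state.valid_orders():
        next_state = copy.deepcopy(state)  # the "copy of the match" part
        next_state.apply(order, my_turn)
        scores.append(lookahead_score(next_state, depth - 1, not my_turn))
    return max(scores) if my_turn else min(scores)

print(lookahead_score(TinyState(), depth=4))  # cost grows roughly as (orders ** depth)
```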

One of the fun things about working on AI is that it can do really unexpected things. Even though Wargroove is a perfect-information game, the AI has to constantly deal with the uncertainty of not knowing what its opponent will do. One example is when the AI’s opponent can spawn a unit on their next turn and land a crit with it that same turn.

Recruiting a pikeman here would mean Nuru can be defeated this turn, even though the outcome relies on a unit that didn’t exist during Nuru’s turn.

Without the ability to actually simulate hypotheticals like these, the AI has to use cautious assumptions, such as assuming that any crit the opponent could set up will always land. However, in a situation where the AI is nearly defeated, overestimating the threat on a perfectly valid escape route can prevent the AI from taking it. The AI may just sit there, ignoring a way out of its defeat. So making the AI more cautious can counter-intuitively make it act less cautiously!
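
Here’s a toy illustration of that paradox (all the numbers are invented): if the threat estimate pessimistically treats every possible crit as guaranteed, an escape tile that’s actually survivable can look lethal, so the retreat order never wins the scoring contest.

```python
# Invented numbers showing how pessimism can backfire; nothing here is real game data.

COMMANDER_HEALTH = 30

def estimated_threat(base_damage, crit_possible, assume_crits_always_land):
    # A pessimistic AI treats every crit that could be set up as guaranteed to land.
    if crit_possible and assume_crits_always_land:
        return base_damage * 2  # hypothetical crit multiplier
    return base_damage

escape_tile_damage = 20   # what the opponent could realistically deal on the escape tile
crit_is_possible = True   # e.g. via a pikeman recruited next turn

for pessimistic in (False, True):
    threat = estimated_threat(escape_tile_damage, crit_is_possible, pessimistic)
    print(f"assume crits always land={pessimistic}: "
          f"estimated threat={threat}, escape looks survivable={threat < COMMANDER_HEALTH}")

# With pessimism on, the escape tile looks lethal (40 >= 30), so the AI stays put,
# even though the realistic 20 damage would have let the commander get away.
```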

A lot of issues with the current AI can be traced back to the sacrifices we made to keep the AI snappy. I could write several giant blog posts about its limitations, and the weird side-effects of the trade-offs we made… But to provide a balanced perspective for a moment, having smarter AI doesn’t necessarily make a game better: a good game provides a challenge that scales with the player’s skill level. That means relative beginners need a fair shot at beating AI opponents. For more advanced players, there are always optional extra rules (beating a match within a certain number of turns to achieve a rank, for example), and multiplayer.