Master Baboon: The sea of the simulation

11 Nov 2011

Warning: time sink! Or: the latest Google AI challenge

It's that time of the year again... the word spreads, I download it, and my nights fly by. It's time for the new Google AI Challenge: http://aichallenge.org/ ! This time we will be writing multi-agent systems, as we program cyber-ant-colonies fighting against each other for bread crumbs.

The colored dots are ants of different colonies. The black dots are obstacles. The barely visible brown-ish dots are food.

4 Jun 2011

Games with extensible AI

Some computer games offer the possibility to extend their Artificial Intelligence with external scripts, or are explicitly designed to be played by bots. Such games are a great resource to develop, test, and teach AI algorithms.

I have been looking for a list of this kind of games, but could find very little information, often outdated. This is my own, annotated list; let me know if I missed something, and I'll make amends. I included only decently active projects:

Games for bots:

  1. Robocode (Java, .NET): AI bots control tanks equipped with a cannon and radar in a square arena. A game with a long tradition, descending from the classic AI games RobotWar (1981) and crobots (1985). This is a very active project, with tournaments organized regularly all over the world. The strategic possibilities are quite limited, though.
  2. Tron challenge (any language): This is the first AI competition organized by the University of Waterloo Computer Science Club and sponsored by Google. The AIs control the "snakes" in a variant of the popular game Snake (or Nibbles, of QBasic fame) on a number of boards with symmetric walls. Although the competition is closed, the code is open source and perfectly usable. A nice game, but there seems to be one dominant strategy: minimax. Entries in the competition differed mainly in how cleverly they approximated and pruned the huge tree of possible moves, and in how they handled the endgame.
  3. Planet Wars challenge (any language): Second AI competition, again organized by the University of Waterloo CS Club. The inspiration is the galactic conquest game Galcon. Bots command fleets of spaceships to capture planets for resources. Unlike the first competition, the optimal strategy is far from obvious, and the game offers a lot of room for experimenting with different approaches. Not as visually fun as other games, though.
  4. PacMan capture the flag (Python): The CS department at Berkeley offers a very popular Artificial Intelligence course that requires students to code up classical AI algorithms to control agents in a PacMan-like environment. The course concludes with a tournament in which PacMan agents compete to eat each other's dots. The game is extremely fun as I wrote about in previous posts. The code is available online (although somewhat messy). Unfortunately (but understandably, since the game is used in a course), the authors don't want the code of developed agents to leak online.
  5. Grid-Soccer (Mono, .NET): This is a multi-agent soccer-like game played on a grid board. I haven't tried this one yet... According to the authors, it was originally conceived as a testbed for multi-agent reinforcement learning algorithms.

Scriptable games (games that expose an interface for external clients):

  1. Starcraft Brood War (any language): This is your one chance at writing AI bots for a commercial game! 😉 Starcraft has been hacked so that a user-written client can control every aspect of the game (building construction, unit movement, the full monty). Since 2010 there have been a few AI competitions open to the public (two of them still accepting entries!). Here's a video from last year's AIIDE tournament:

  2. XBlast (potentially any language): A free version of Bomberman. The game has not been used to develop AI agents... yet. It is built with a client-server architecture and is begging to have bots running on it.
5 Mar 2011

Fight the machine at NYT

Perhaps inspired by the victory of the artificial brain Watson against humanity, the New York Times is offering today an interactive feature that lets you play a series of Rock-Paper-Scissors games against the computer.

The prediction algorithm seems to be a simple Bayesian estimator like the one I implemented for the Karate A.I. game. The interesting twist is that in "Veteran mode" the AI makes use of its interactions with previous users to define a prior over possible move sequences. It surely works against me...

21 Jan 2011

Tracking down the enemy (2)

I never got the chance to show a working agent based on the Bayesian estimator for the enemy position in the PacMan capture-the-flag game. In the previous PacMan post, I wrote about merging a model of agent movements with the noisy measurements returned by the game to track the enemy agents across the maze. Clearly, this information can give you an edge when planning an attack (to avoid ghosts) or when defending (to intercept the PacMen).

For the traditional faculty-vs-students tournament at the G-Node scientific programming summer school this year, I wrote a PacMan team made up of a simple attacker and a more sophisticated defender that tries to intercept and devour enemy agents.

Both agents plan their movements using a shortest-path algorithm on a weighted graph: before the start of the game, the maze is transformed into a graph, where nodes are the maze tiles, and edges connect adjacent tiles. Weights along the edges are adjusted according to the estimated position of the agents:

  • Weights on edges close to an enemy ghost are increased (starting value is proportional to the probability of the enemy being there, and falls off exponentially with distance)
  • Weights on edges close to an enemy PacMan are decreased
  • Weights on edges close to a friendly agent are increased

An agent navigating such a maze will tend to avoid ghosts, chase PacMen, and cover parts of the maze far from other friendly agents. My attacker does little more than update the weights of the graph at every turn and move toward the closest food dot.
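
To make the idea more concrete, here is a rough Python sketch of the weighted-graph planning step. This is not the actual team code; the grid representation, base weight, and fall-off constant are placeholders for illustration:

import heapq
import math

def build_graph(walls):
    """Turn a boolean wall grid into an adjacency list of free tiles."""
    h, w = len(walls), len(walls[0])
    graph = {}
    for y in range(h):
        for x in range(w):
            if walls[y][x]:
                continue
            neighbors = []
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h and not walls[ny][nx]:
                    neighbors.append((nx, ny))
            graph[(x, y)] = neighbors
    return graph

def edge_weights(graph, ghost_prob, falloff=0.5):
    """Increase the weight of edges close to likely enemy ghost positions.

    ghost_prob maps tiles to the estimated probability that an enemy ghost
    occupies them; the penalty decays exponentially with Manhattan distance.
    """
    weights = {}
    for u in graph:
        for v in graph[u]:
            penalty = sum(p * math.exp(-falloff * (abs(v[0] - t[0]) + abs(v[1] - t[1])))
                          for t, p in ghost_prob.items())
            weights[(u, v)] = 1.0 + penalty
    return weights

def shortest_path(graph, weights, start, goal):
    """Plain Dijkstra on the weighted maze graph."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph[node]:
            if neighbor not in visited:
                heapq.heappush(queue, (cost + weights[(node, neighbor)], neighbor, path + [neighbor]))
    return None

The same machinery covers the other two bullet points above: lowering weights near enemy PacMen and raising them near friendly agents only changes the sign and source of the penalty term.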

On the other hand, defending is quite difficult in this game, so I needed a more sophisticated strategy. While the enemy is still a ghost in its own part of the maze, the defender moves toward the closest enemy agent (its estimated position, that is). When the enemy is a PacMan in the friendly half, the chase is on! Since ghosts and PacMen move at the same speed, it would be pointless to simply follow it around; one needs to catch it from the front... Once more, the solution was to modify the weights of the maze graph, making the weights behind the enemy (i.e., opposite to its direction of motion) very high, and lowering the weights of the edges in front of it.

The combination of estimator and the weighted graph strategy can be quite entertaining:

Sometimes the defender only needs to guard the border to scare the opponent shitless:

Another useful thing to keep in mind for the future: it is better to base strategies on soft constraints (weighted graphs, probabilities). Setting hard, deterministic rules tends to get you stuck in loops. Soft constraints and some randomness give you more flexibility when you need it but are otherwise just as good.

14 Sep 2010

Planet Wars – Google AI Challenge

The Computer Science Club of the University of Waterloo is organizing its second Google AI Challenge. The challenge is a competition between computer programs that control the artificial intelligence of the players in a video game.

This time, the game is set in space, and features a symmetric configuration of planets, each containing a fleet of defending spaceships. Each turn, new ships are created, with larger planets creating more ships. At the beginning of the game, each player controls one of the planets and a hundred ships; the AIs issue orders to the ships to send them to conquer planets by outnumbering the local defense. The goal of the game is, of course, to eliminate all of the enemy forces.

This video shows an example game between my first AI (green) playing against DualBot (red), one of the bots included in the starter package; the numbers floating around represent the fleets commanded by the AIs:

The competition is open to everybody, and it is possible to write the AI in basically any programming language. If you plan to use Python, I recommend not using the official starter package, which is un-pythonic and not very sophisticated, but rather this un-official client. In particular, the alternative client offers the possibility of logging debug information to a local file.

Also, it can be quite frustrating to wait for the official server to match you with some other program, which can take up to an hour. There is an alternative server that lets you play with another opponent straight away.

The submission period ends November 27, good luck!

10 Sep 2010

Tracking down the enemy

As another scientific Python course is approaching, I've been brushing up my PacMan skills. I decided to try out a strategy I had been thinking about, which relies on having a good estimate of the enemy's position. I should remind the reader that in the PacMan capture-the-flag game, one team does not know the exact position of the other agents unless they are within 5 squares of one's own agents. The game does, however, return a rough estimate of the opponent's distance. Our tracker will thus have to make its best guess blindly, keeping a probability distribution over the possible positions.

To estimate the position of the opponent agents we need to apply some probability theory:

P(x(t)) = sum_{x(t-1)}  P( x(t-1) ) P( x(t) | x(t-1) )

or, in other words, the probability that the agent is at position x(t) at time t is equal to the sum of the probability of it being at x(t-1) at time t-1, times the probability of transitioning from x(t-1) to x(t). The first term is given simply by the previous estimate, while the second term is our model of the behavior of our opponent (*).

For example, a very conservative model could assume that the opponent could take any legal move at random:

P( x(t) | x(t-1) ) = 1/N

if x(t-1)->x(t) is a legal move, where N is the total number of legal moves from x(t-1), and

P( x(t) | x(t-1) ) = 0

otherwise. This video shows how such a model performs when the opponent behaves exactly as assumed; the red agent, in the bottom left corner, is estimating the position of the blue agent in the opposite corner; the area of the red squares is proportional to the probability P(x(t)):

The tracker is doing a good job in this case, but fails miserably for a more realistic opponent:

We clearly need to improve the opponent's model... luckily another simple model results in a large improvement: we can safely assume that the opponent tends to explore new parts of the maze in search of food. We can formalize this as

P( x(t) | x(t-1) ) = 1/Z exp(-beta * v(x(t)))

if x(t-1)->x(t) is a legal move, and 0 otherwise. v(x) is the number of times the agent visited x in the past, and beta is a constant that controls how exploratory the opponent is. When beta=0, the model is equivalent to the previous random model. Z is a normalizing constant such that P( x(t) | x(t-1) ) sums to 1.
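
Here is a rough Python sketch of one prediction step under this model (beta = 10, as in the video below); the legal_moves helper and the visit-count dictionary are hypothetical stand-ins for the game's own data structures:

import math

def predict(prob, visits, legal_moves, beta=10.0):
    """One step of P(x(t)) = sum over x(t-1) of P(x(t-1)) * P(x(t) | x(t-1)).

    prob:        dict mapping tiles to P(x(t-1))
    visits:      dict mapping tiles to v(x), how often the opponent visited them
    legal_moves: function returning the tiles reachable in one move (assumed helper)
    """
    new_prob = {}
    for prev, p_prev in prob.items():
        moves = legal_moves(prev)
        if not moves:
            continue
        # unnormalized transition weights exp(-beta * v(x(t))), legal moves only
        w = {nxt: math.exp(-beta * visits.get(nxt, 0)) for nxt in moves}
        z = sum(w.values())
        for nxt, wk in w.items():
            new_prob[nxt] = new_prob.get(nxt, 0.0) + p_prev * wk / z
    return new_prob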

Let's see how this model does in practice (in the video, beta = 10):

Much better, isn't it? We can do even better by using two other sources of information: first, the game gives us a noisy estimate of the opponent's distance (actual distance +/- 6); second, we know that if the opponent is not visible, it must be at least 5 squares away. We can take this information into account by setting P(x(t)) to 0 for squares that lie outside the noisy distance range, and for those inside the visibility range.
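
Under the same hypothetical conventions, a sketch of the corresponding measurement update, which discards positions incompatible with the noisy distance reading or inside the visibility range and then renormalizes (the Manhattan-distance assumption is mine, not necessarily the game's own metric):

def observe(prob, my_pos, noisy_dist, noise=6, sight=5):
    """Keep only tiles consistent with the noisy distance and the lack of sighting."""
    new_prob = {}
    for tile, p in prob.items():
        d = abs(tile[0] - my_pos[0]) + abs(tile[1] - my_pos[1])
        if abs(d - noisy_dist) > noise:  # outside the noisy distance band
            continue
        if d <= sight:                   # the opponent would have been visible
            continue
        new_prob[tile] = p
    z = sum(new_prob.values())
    return {tile: p / z for tile, p in new_prob.items()} if z > 0 else new_prob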

The last video shows the complete tracker at work. The blue lines show the area in which the agent might be according to the noisy distance, and the green line shows the visibility range:

The biggest area for improvement here is the agent's model P(x(t)|x(t-1)). One possibility could be to simulate several common strategies, and to use the transition statistics for the simulated agents to estimate that probability...

Now, can we use the estimated position to program better AI agents? I'll give it a try, and report back soon!

(*) Strictly speaking, we are doing "filtering" here, i.e., we're estimating the current position assuming the past inferences are fixed. The alternative is to do "smoothing", where the full joint probability P(x(t), ..., x(1)) is estimated at each step. The information coming from the new observation is propagated back and forth at each step to improve the past inferences. For example, knowing that the agent is at a given position at time t might exclude another position at time t-2 because of too large a distance, which in turn could improve the estimate at time t.

12 Sep 2009

PacMan capture-the-flag: a fun game for artificial intelligence development and education

At the beginning of September I was invited to teach at a summer school about scientific programming. The whole experience was really rewarding, but it was the students' project that got me going: we had the students write artificial intelligence algorithms for the agents of a PacMan-like game, and organized a tournament for them to compete against each other.

The PacMan capture-the-flag game was originally written by John DeNero, and has been used by him to teach an artificial intelligence course at Berkeley and by Hal Daume III at the University of Utah. Very often, games of this kind have a single strategy that dominates all others, and once you find it the interest fizzles out. In this case, I was impressed by how rich this game is. The game offers a lot of opportunities to develop and test complex learning and planning algorithms, including cooperation strategies for games with multiple agents.


The rules of the game are quite simple: the board is a PacMan maze, divided into a red and a blue half. The two halves belong to two teams of agents, controlled by computer programs, that try to eat the opponent's food and protect their own. When in the opponent's half, the agents are PacMan (PacMen?), while in their own half, the agents are ghosts and can kill the opponent's PacMan agents, in which case these are returned to their initial position. The players get one point for each food dot they eat; no points are assigned for eating the other team's agents. The game ends when one of the two teams eats all of the opponent's food, or after 3000 moves; the team with the highest score wins.

To make the game more interesting, one can only observe the position of the other team's agents when they are very close to one's own agents (within 5 squares); otherwise, one can only obtain a noisy estimate of their distance.

The game is written in Python, my programming language of choice, which makes it possible to write even sophisticated algorithms quickly. I recommend the game to anyone wanting to organize an artificial intelligence course, or simply have fun writing AI agents. I plan to dedicate a couple of posts to the basic strategies for writing successful agents in this game.

Here's a video of the best students' agents (red team) playing against the best tutors' agents (blue team). The tutors won, saving our reputation!

Update: The authors of the PacMan capture-the-flag game decided to keep the game closed-source, and in particular would prefer not to publish the code of agents playing their game, fearing that it might interfere with their course. It's a shame because I was planning to write some Genetic Programming agents for the game, but of course I respect their decision. I guess there will be no series of posts re:PacMan...

12 Sep 2009

My AI reads your mind — Extensions (part 3)

In the previous two posts I showed how to make use of decision theory to write a game AI that forms a model of its opponent and adapts its strategy accordingly.

The AI could be improved in several ways:

  • The most obvious improvement would be to build a better model of the opponent. In the Karate game I used a 2nd order Markov model, i.e., I assumed that the next move of the player only depends on his previous two moves. It is of course possible to use a higher-order model that would keep track of three or more past moves. However, a longer history means a much larger number of parameters to estimate, so that it will take much longer for the AI to have a reasonable estimate of the player's behavior. An easy workaround would be to collect higher-order statistics, but only use them when enough data is available; this, however, would still fail if the player decides to adopt a new strategy. One could also use another class of models, such as a recurrent neural network. I prefer not to use neural networks, first of all to avoid the usual voodoo of choosing parameters like learning rate, network architecture, etc., and second because they work in a black-box fashion, which makes it difficult to extend them in principled ways, as for example suggested below.
  • When the game is started, the player statistics are blank, in the sense that all the player's moves are considered equally likely. This is our initial prior probability for the opponent's moves. However, it is probable that humans have common biases in the kind of action sequences they choose. One could adapt the game to let it learn these biases over many games with different players, and use them as the initial prior, thereby improving the initial phase of the game.
  • A related point is that at the moment I do not take into account the uncertainty in the estimate of the player's moves. At the start of the game the AI has only a few examples on which to base its prediction of the next move, and it should take this into account when making decisions. The formal way to capture this uncertainty is to define a hyperprior on the transition probabilities, and then integrate over it when predicting the next move: so far, the transition probability is given by a matrix, p(n(t+1) | n(t)) = N_{n(t),n(t+1)}, where N is a matrix and, for simplicity, I'm only considering a first-order Markov model. N is considered to be a constant at every time point; in reality, N is also a random variable that is being estimated from the player's moves. It is thus only natural to put a prior on N itself, P(N) (e.g., a Dirichlet distribution for every row of the matrix). The improved prediction, which takes into account our uncertainty about N, is given by p(n(t+1) | n(t)) = integral over N of p(n(t+1) | n(t), N) p(N).
  • In the karate game, the AI tries to maximize its score by picking the action with the highest expected score. Another strategy would be to choose every action with a probability related to its score, for example setting p(choosing action a) proportional to exp(beta * score of a), where beta is a constant that controls the "softness" of the decision: for beta=0, all actions are chosen with the same probability; for large beta, only the action with the largest score is chosen. This seems to be closer to the way humans make decisions, the so-called probability matching rule (see the sketch after this list). This strategy would be suboptimal if the second-order Markov model were the real model of the opponent, but since it is not, it appears to perform better in practice, as the AI becomes less predictable (I updated the Karate game in the previous post to do probability matching).
  • A major improvement to the AI would be to take into account the so-called theory of mind, i.e., the fact that while the AI is building a model of the player, the player is doing the same for the AI, and trying to maximize his own score. Taking this aspect into account is quite complex, as one falls rapidly into a deep I-know-that-you-know-that-I-know kind of reasoning. Managing to do so, however, is likely to be highly rewarding for the player's experience of the game: several studies have shown how humans activate areas of the brain that are associated with theory-of-mind when playing against a human opponent, but fail to do so against a computer (see Gallagher and Frith, Functional imaging of ‘theory of mind’, Trends in Cognitive Sciences, 7(2), 2003, for a review). It is thus possible that by writing computer programs that make use of theory of mind themselves, those centers would become engaged, giving the player the impression of playing against a human opponent.
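
Referring back to the probability-matching point above, here is a minimal Python sketch of that selection rule; the expected scores in the example are made-up numbers:

import math
import random

def probability_matching(expected_scores, beta=1.0):
    """Pick an action with probability proportional to exp(beta * expected score).

    beta = 0 gives a uniform random choice; a large beta approaches the greedy
    rule of always picking the action with the highest expected score.
    """
    actions = list(expected_scores)
    weights = [math.exp(beta * expected_scores[a]) for a in actions]
    return random.choices(actions, weights=weights, k=1)[0]

# hypothetical expected scores for the karate moves
print(probability_matching({"punch": 4.2, "low kick": 6.1, "block": 2.5}))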
3 May 2009

My AI reads your mind and kicks your ass (part 2)


In the last post I discussed how it is possible to program a game Artificial Intelligence to exploit a player's unconscious biases using a simple mathematical model. In the karate game above, the AI uses that model in order to do the largest amount of damage. Give it a try! You get 10 points if you hit your opponent with a punch or a kick, 0 points if you miss, and 5 points if you block your opponent's move. As you play, the AI learns your strategy and adapts to knock you down as often as possible.

How does it work? According to decision theory, we need to maximize the expected score. To compute the expected score for an action 'x' (e.g., 'punch'), one needs to consider all possible player moves 'y' and weight each outcome by the probability of the player making that move, i.e.

E[score for x] = sum_y P(y) * Score(y,x)

where P(y) is the probability of the player choosing action 'y' (obtained using last post's model), and Score(y,x) gives the score of responding 'x' to 'y'.

For example, in the karate game using a low kick has a priori the highest chance of success: you score in 3 out of 4 cases, and only lose 5 points if the opponent decides to block your kick. This is why, at the beginning, the AI tends to choose that move. However, if you know that the AI uses that move often, you will choose the kick-blocking move more often, increasing P(kick-block). This change will make the punch more likely to score points. As you play, the optimal strategy changes and the AI continues to adapt to your style.
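
To make the computation concrete, here is a minimal Python sketch of the expected-score rule; the payoff table below is a made-up illustration, not the actual score matrix of the Flash game:

# Score(y, x): points the AI earns when it answers the player's move y with x.
# These values are illustrative only.
SCORE = {
    ("punch", "punch"): 10, ("punch", "kick"): 10, ("punch", "block"): 5,
    ("kick",  "punch"): 10, ("kick",  "kick"): 10, ("kick",  "block"): 5,
    ("block", "punch"): 0,  ("block", "kick"): 0,  ("block", "block"): 0,
}

def expected_scores(player_probs):
    """E[score for x] = sum over y of P(y) * Score(y, x), for every AI move x."""
    ai_moves = {x for (_, x) in SCORE}
    return {x: sum(p_y * SCORE[(y, x)] for y, p_y in player_probs.items())
            for x in ai_moves}

# example: the player model predicts a block 60% of the time
print(expected_scores({"punch": 0.2, "kick": 0.2, "block": 0.6}))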

With a bit of practice, you'll notice that you can compete with the AI and sometimes even gain the upper hand over it. This shows that you are in turn forming an internal model of the computer's strategy. I think the game dynamics that result from this interaction make the game quite interesting, even though it is extremely simple. Unfortunately, it's very rare to see learning AIs in real-life video games...

As always, you can download the code here.

Update: Instead of always making the best move, the AI now selects the move with a probability related to its score, which makes it less predictable. More details in the next post...


23 Apr 2009

My AI reads your mind (part 1)

I regularly read about people complaining that AI in games should be improved. I definitely agree with them, but here's an argument why pushing it to the limits might not be such a good idea: computers can easily discover and exploit our unconscious biases.

Magic? ESP? More like a simple application of decision theory. In order to make an unbeatable AI one needs two steps: 1) build a model of a player's response in order to predict his next move, and 2) choose actions that maximize the expected score given the prediction of the model.

The basic idea behind 1) is that even if we try to be unpredictable, our actions contain hidden patterns that can be revealed using a pinch of statistics. Formally, the model takes the form of a probability distribution: P(next move | past observations).

Try it out: In the Flash example below, you can type in a sequence of numbers from 1 to 4, and the AI will try to predict your next choice. If your choices were completely random, the AI would only be able to guess correctly 25% of the time. In practice, it often guesses correctly 35-40% of the numbers! (It might take a few numbers before the AI starts doing a decent job.)


In this example I used a 2nd order Markov model, i.e., I assumed that the next number, n(t+1), only depends on the past 2 choices: P(n(t+1) | past observations) = P(n(t+1) | n(t), n(t-1)). The rest is just book-keeping: I used two arrays, one to remember the past 3 numbers, and one to keep track of how many times the player chose number 'k', given that his past two moves were 'i' and 'j':

// last 3 moves
public var history:Array = [1, 3, 2];
// transition table: transitions[i][j][k] stores the number
// of times the player pressed 'i' followed by 'j' followed by 'k'
public var transitions:Array;

When the player makes a new choice, I update the history, and increment the corresponding entry in the transition table:

/*
 * Update history and transition tables with player's move.
 */
public function update(move:int):void {
    history = [history[1], history[2], move];
    transitions[history[0]][history[1]][history[2]] += 1;
}

The probability that the next choice will be n(t+1) is given by the number of times the player pressed n(t+1) after n(t) and n(t-1), normalized by the number of times the sequence n(t-1), n(t) occurred in the past:

/*
 * Return probability distribution for next move.
 */
public function predict():Array {
    // probability distribution over next move
    var prob:Array = new Array(4);

    // look up previous transitions from moves at time t-2, t-1
    var tr:Array = transitions[history[1]][history[2]];

    // normalizing constant
    var sum:Number = 0;
    for (var k:int = 0; k < 4; k++) {
        sum += tr[k];
    }

    for (k = 0; k < 4; k++) {
        // if this history has never been observed, fall back to a uniform distribution
        prob[k] = (sum > 0) ? tr[k] / sum : 0.25;
    }

    return prob;
}

The best prediction is given by the choice with maximum probability. You're welcome to have a look at the code!

In the next post, I'll show how the AI can choose the best actions in order to maximize its expected score in a Virtual Karate game.
