Master Baboon: The sea of the simulation

11 Nov 2011

Warning: time sink! Or: the latest Google AI challenge

It's that time of the year again... the word spreads, I download it, and my nights fly by. It's time for the new Google AI Challenge: http://aichallenge.org/ ! This time we will be writing multi-agent systems, as we program cyber-ant-colonies fighting against each other for bread crumbs.

The colored dots are ants of different colonies. The black dots are obstacles. The barely visible brown-ish dots are food.

29 Jun 2011

The road to javascript

I wanted to learn JavaScript, so I wrote a small game with an old-school look using textareas, which I'm glad to share:

A screenshot of the game

Writing JavaScript is quite intuitive for a Python developer, even though I'm sure there are many idioms and tricks that I don't know about (and the code I wrote is of the stream-of-thought kind). I'm somewhat disappointed, though: I thought JavaScript would be more standard across browsers, but there remain important differences, for example in the way Firefox and IE handle ranges in a textarea.

I haven't tried the game on Safari or Opera, please let me know if it works!

4 Jun 2011

Games with extensible AI

Some computer games offer the possibility of extending their Artificial Intelligence with external scripts, or are explicitly designed to be played by bots. Such games are a great resource to develop, test, and teach AI algorithms.

I have been looking for a list of this kind of games, but could find very little information, often outdated. This is my own, annotated list; let me know if I missed something, and I'll make amends. I included only decently active projects:

Games for bots:

  1. Robocode (Java, .NET): AI bots control tanks equipped with cannon and radar in a square arena. A game with a long tradition, descending from the classic AI games RobotWar (1981) and crobots (1985). This is a very active project, with tournaments organized regularly all over the world. The strategic possibilities are quite limited, though.
  2. Tron challenge (any language): This is the first AI competition organized by the University of Waterloo Computer Science Club and sponsored by Google. The AIs control the "snakes" in a variant of the popular game Snake (or Nibbles, of QBasic fame) on a number of boards with symmetric walls. Although the competition is closed, the code is open source and perfectly usable. A nice game, but there seems to be one dominating strategy: using minimax. Entries in the competition differ in their cleverness in coming up with various approximations to prune down the huge tree of possible moves, and in handling the endgame.
  3. Planet Wars challenge (any language): The second AI competition, again organized by the University of Waterloo CS Club. The inspiration is the galactic conquest game Galcon. Bots control fleets of spaceships sent to conquer planets for resources. Unlike in the first competition, the optimal strategy is far from obvious, and the game offers a lot of room for experimenting with different approaches. Not as visually fun as other games, though.
  4. PacMan capture the flag (Python): The CS department at Berkeley offers a very popular Artificial Intelligence course that requires students to code up classical AI algorithms to control agents in a PacMan-like environment. The course concludes with a tournament in which PacMan agents compete to eat each other's dots. The game is extremely fun, as I wrote in previous posts. The code is available online (although somewhat messy). Unfortunately (but understandably, since the game is used in a course), the authors don't want the code of developed agents to leak online.
  5. Grid-Soccer (Mono, .NET): This is a multi-agent soccer-like game played on a grid board. I haven't tried this one yet... According to the authors, it was originally conceived as a testbed for multi-agent reinforcement learning algorithms.

Scriptable games (games that expose an interface for external clients):

  1. Starcraft Brood War (any language): This is your one chance at writing AI bots for a commercial game! 😉 Starcraft has been hacked to allow a user-written client to control every aspect of the game (building construction, unit movement, the full monty). Since 2010 there have been a few AI competitions open to the public (two of them still accepting entries!). Here's a video from last year's AIIDE tournament:

  2. XBlast (potentially any): A free version of Bomberman. The game has not been used to develop AI agents... yet. It is built with a server-client architecture and it's begging to get bots running on it.
21 Jan 2011

Tracking down the enemy (2)

I never got the chance to show a working agent based on the Bayesian estimator for the enemy position in the PacMan capture-the-flag game. In the previous PacMan post, I wrote about merging a model of agent movements with the noisy measurements returned by the game to track the enemy agents across the maze. Clearly, this information can give you an edge when planning an attack (to avoid ghosts) or when defending (to intercept the PacMen).

For the traditional faculty-vs-students tournament at the G-Node scientific programming summer school this year, I wrote a PacMan team made up of a simple attacker and a more sophisticated defender that tries to intercept and devour enemy agents.

Both agents plan their movements using a shortest-path algorithm on a weighted graph: before the start of the game, the maze is transformed into a graph whose nodes are the maze tiles and whose edges connect adjacent tiles. Weights along the edges are adjusted according to the estimated positions of the agents:

  • Weights on edges close to an enemy ghost are increased (starting value is proportional to the probability of the enemy being there, and falls off exponentially with distance)
  • Weights on edges close to an enemy PacMan are decreased
  • Weights on edges close to a friendly agent are increased

An agent navigating such a maze will tend to avoid ghosts, chase PacMen, and cover parts of the maze far from other friendly agents. My attacker does little more than update the weights of the graph at every turn and move toward the closest food dot.
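To give an idea in code, here is a minimal sketch of the weight-update scheme using networkx; the helper names, the Manhattan-distance approximation, and the fall-off constants are my own assumptions, not the tournament code:

import networkx as nx
import numpy as np

def maze_graph(free_tiles):
    """One node per walkable tile, edges between 4-neighbors."""
    graph = nx.Graph()
    for (x, y) in free_tiles:
        for neighbor in ((x + 1, y), (x, y + 1)):
            if neighbor in free_tiles:
                graph.add_edge((x, y), neighbor, weight=1.0)
    return graph

def update_weights(graph, ghost_prob, scale=10.0, decay=0.5):
    """Increase edge weights near likely enemy ghost positions.

    ghost_prob maps tiles to the estimated probability of an enemy ghost
    being there; the penalty is proportional to that probability and
    falls off exponentially with distance."""
    for u, v in graph.edges():
        penalty = 0.0
        for tile, prob in ghost_prob.items():
            dist = abs(u[0] - tile[0]) + abs(u[1] - tile[1])
            penalty += scale * prob * np.exp(-decay * dist)
        graph[u][v]['weight'] = 1.0 + penalty

The attacker then follows the cheapest path to its target, e.g. with nx.shortest_path(graph, my_pos, closest_food, weight='weight').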

On the other hand, defending is quite difficult in this game, so I needed a more sophisticated strategy. While the enemy is still a ghost in its own part of the maze, the defender moves toward the closest enemy agent (its estimated position, that is). When the enemy is a PacMan in the friendly half, the chase is on! Since ghosts and PacMen move at the same speed, it would be pointless to just follow the enemy around; one needs to catch it from the front... Once more, the solution was to modify the weights of the maze graph, making weights behind the enemy (i.e., opposite to its direction of motion) very high, and lowering those of the edges in front of it.

The combination of estimator and the weighted graph strategy can be quite entertaining:

Sometimes the defender only needs to guard the border to scare the opponent shitless:

Another useful thing to keep in mind for the future: it is better to base strategies on soft constraints (weighted graphs, probabilities). Setting hard, deterministic rules tends to get you stuck in loops. Soft constraints and some randomness give you more flexibility when you need it but are otherwise just as good.

15 Sep 2010

Solving the game Set®

Recently a friend of mine introduced me to a game called Set. Set is a logic card game for several players; 3-5 players is probably best, but there is no limit in principle, and it is also fun to play solitaire.

The game contains a deck of cards with symbols varying across 4 dimensions: color, number, shape, and texture. For each dimension, there are 3 possible features (e.g., there are 3 possible textures: full, empty, striped). A valid set is formed by three cards that, on each dimension, have either the same feature or three different features. For example, in the image below, the first three cards are a valid set, as they differ on all dimensions; the second three cards also form a valid set, because they share the same features for color and number, and are different in shape and texture; the cards on the bottom are not a set, because two cards have the "full" texture, while one is striped.

Example of valid (top) and invalid (bottom) sets.

In a Set game, 12 cards are put on the table, and the players have to find a valid set. The quickest among them collects the set and replaces the removed cards with new ones. If the players agree that there is no set on the table, 3 more cards are added. Once the deck is empty, the player who collected the most sets wins! Simple but fun...

I decided to write a solver for the game that finds all possible sets among the cards on the table, with the goal of collecting statistics over thousands of games. The solver should thus be as efficient as I can manage to write it.

I'm going to represent each card as a four-dimensional vector (color, shape, texture, number), each element containing either 0, 1, or 2 (representing the 3 possible values for each dimension). A function that checks whether three cards form a valid set would thus look like this:


import numpy
import itertools

ndims = 4      # the four dimensions: color, shape, texture, number
nfeatures = 3  # three possible features per dimension

def same(x):
    """Returns True if all elements are the same."""
    return numpy.all(x == x[0])

def different(x):
    """Returns True if all elements are different."""
    return len(numpy.unique(x)) == len(x)

def is_set(cards, indices):
    """Checks that the cards indexed by 'indices' form a valid set."""
    ndims = cards.shape[0]

    subset = cards[:, indices]
    for dim in range(ndims):
        # cards must be all the same or all different for all dimensions
        if not same(subset[dim, :]) and not different(subset[dim, :]):
            return False
    return True
A brute force solver could then try all possible combinations of three cards and check whether they form a valid set:

def find_sets(cards):
    """Brute-force Set solver."""
    return [indices
            for indices in itertools.combinations(range(cards.shape[1]), 3)
            if is_set(cards, indices)]

The brute-force approach is very inefficient, but it is also very useful for testing more efficient solutions (just check that they give the same response as the brute-force one).
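A quick consistency check might look like this (a minimal sketch reusing the card representation above; find_sets2 is the faster solver defined below):

# draw 12 random cards and verify that both solvers agree
deck = numpy.array(list(itertools.product(range(nfeatures), repeat=ndims))).T
cards = deck[:, numpy.random.permutation(deck.shape[1])[:12]]
assert sorted(find_sets(cards)) == sorted(tuple(s) for s in find_sets2(cards))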

The second solver improves on the first one with a simple observation. Once you have two cards, there is only one possible card that completes the set: for each dimension, if the two cards have the same feature, the missing one will also have to have the same feature; if the features are different, the missing card will have to have the missing feature.

Given two cards, there is only one possible card that forms a valid set.

A possible strategy is thus to consider all possible pairs of cards, derive the one completing the set, and check if it is present on the table. This solution is much more efficient, as there are 220 possible triplets out of 12 cards, but only 66 pairs. It runs about 6 times faster:


def find_sets2(cards):
    """Set solver based on completing pairs of cards."""
    ndims, ncards = cards.shape
    all_features = set([0, 1, 2])

    # solutions contain the indices of the cards forming sets
    solutions = []
    # iterate over all pairs; the third card is searched for after idx2,
    # so idx2 only needs to run up to ncards - 2
    for idx1, idx2 in itertools.combinations(range(ncards - 1), 2):
        c1, c2 = cards[:, idx1], cards[:, idx2]

        # compute the card that would complete the set
        missing = numpy.empty((ndims,), dtype='i')
        for d in range(ndims):
            if c1[d] == c2[d]:
                # same feature on this dimension -> missing card also has same
                missing[d] = c1[d]
            else:
                # different features -> find the third, missing feature
                missing[d] = list(all_features - set([c1[d], c2[d]]))[0]

        # look for the missing card among the cards after idx2
        where_idx = numpy.flatnonzero(numpy.all(cards[:, idx2 + 1:].T == missing,
                                                axis=1))
        # append to solutions if found
        if len(where_idx) > 0:
            solutions.append((idx1, idx2, where_idx[0] + idx2 + 1))

    return solutions

The trained eye will see at this point that the problem can be rewritten as a dynamic programming one. We start by looking at the first dimension and forming groups of cards with the same feature. We know by the reasoning above that a valid set will either contain cards from the same group, or one card from each group. First we consider the former case, and recursively apply the same procedure to all the remaining dimensions for the cards within each group. Second, we consider all triplets of cards that have one card per group and verify whether they form a set:


def find_sets3(cards, indices=None):
    """Recursive Set solver: group the cards by their feature on the
    first dimension, then recurse within groups and combine across them."""
    nd, n = cards.shape
    c0 = cards[0, :]
    if indices is None:
        indices = numpy.arange(n)

    groups = [(c0 == f).nonzero()[0] for f in range(nfeatures)]

    # same feature on this dimension: recurse over the remaining dimensions
    solequal = []
    for g in groups:
        if len(g) < 3: continue
        solequal += find_sets3(cards[1:nd, g], indices[g])
    # different features: try one card from each group
    soldiff = [(indices[i0], indices[i1], indices[i2])
               for i0 in groups[0] for i1 in groups[1] for i2 in groups[2]
               if is_set(cards, (i0, i1, i2))]
    return solequal + soldiff

It turns out that, although it is a very efficient strategy to use while playing with real cards, find_sets3 is slower than find_sets2, probably because the overhead of the recursive function calls outweighs the gain in efficiency for such a small number of cards.

Let's have a look at some statistics, then. While playing it can be quite frustrating to stare at the cards without being able to find any set. The instructions that ship with the official game say that such a situation should occur only once every 33 turns, but it certainly doesn't feel that way. Is that really so?

First, I drew 12 cards at random from a complete deck, and used the solver to compute the number of valid sets present on the table. I repeated this 10000 times.
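The experiment itself takes only a few lines (a minimal sketch based on the definitions above; the code in the repository may differ):

ndraws = 10000
nsets = numpy.zeros(ndraws, dtype=int)
deck = numpy.array(list(itertools.product(range(nfeatures), repeat=ndims))).T
for i in range(ndraws):
    # draw 12 cards at random from the full deck of 81
    cards = deck[:, numpy.random.permutation(deck.shape[1])[:12]]
    nsets[i] = len(find_sets2(cards))
print('P(no set) =', (nsets == 0).mean())

This produced the following distribution for the number of sets in a random draw: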


Probability of finding a given number of sets in 12 cards drawn at random.

As the instructions say, about 3% of the time (1 in 33) there is no set in the cards. However, this is misleading, as during a game the cards are not drawn independently each turn: the players remove one set from the cards on the table and replace it with new cards. I thus simulated complete games, where at every turn I removed one of the sets present on the table at random. The code to simulate a game looks like this:


def random_deck():
    # initialize the deck of all cards
    cards = numpy.array([card for card in itertools.product(range(nfeatures),
                                                            repeat=ndims)]).T
    n = cards.shape[1]
    # shuffle
    return cards[:, numpy.random.permutation(n)]

ncards = 12

def onegame():
    nsolutions = []
    deck = random_deck()
    pos = ncards
    cards = deck[:, :pos]
    while True:
        # find all sets
        sets = find_sets2(cards)
        nsets = len(sets)
        if nsets > 0:
            # choose a random set
            chosen = sets[numpy.random.randint(len(sets))]
            # remove the cards of the chosen set
            idx = [i for i in range(cards.shape[1]) if i not in chosen]
            cards = cards[:, idx]
            # add new cards
            if cards.shape[1] < ncards:
                nadd = ncards - cards.shape[1]
                cards = numpy.concatenate((cards, deck[:, pos:pos + nadd]), axis=1)
                pos += nadd
        else:
            if pos >= deck.shape[1]:
                break  # game is over
            # no set on the table: add 3 more cards
            cards = numpy.concatenate((cards, deck[:, pos:pos + 3]), axis=1)
            pos += 3
        nsolutions.append(nsets)
    return nsolutions
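Collecting the statistics is then just a matter of running many games and pooling the per-turn set counts (a two-line sketch matching the 5000 games used for the updated figure below):

nsets_per_turn = numpy.concatenate([onegame() for _ in range(5000)])
print('P(no set on the table) =', (nsets_per_turn == 0).mean())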

The distribution of the number of sets on the table at any point in time looks quite different now (after simulating 5000 random games):


Probability of there being a given number of sets on the table at any point in a Set game.

As you can see, the probability of there being no set on the table tripled, becoming about 1 in 10! Now that's a relief: it is not that weird to be unable to find a set... or is it? The distribution also tells us that about half of the time there are 3 or more sets on the table! Now that one I didn't expect... (43 percent of the time, to be precise.)

The code and other material is available on the git repository at http://github.com/pberkes/masterbaboon/tree/master/projects/setgame/ .

Update 09/21/10: I updated the histogram of the number of sets during a game with more games, so that the result is more accurate. The probability of not having a set at any point is even higher than reported before.

Cards and game © 1988, 1991 Cannei, LLC. All rights reserved. SET® is a registered trademark of Cannei, LLC. Used with permission from Set Enterprises, Inc.

14 Sep 2010

Planet Wars – Google AI Challenge

The Computer Science Club of the University of Waterloo is organizing its second Google AI Challenge. The challenge is a competition between computer programs that control the artificial intelligence of the players in a video game.

This time, the game is set in space, and features a symmetric configuration of planets, each containing a fleet of defending spaceships. Each turn, new ships are created, with larger planets creating more ships. At the beginning of the game, each player controls one of the planets and a hundred ships; the AIs issue orders to the ships to send them to conquer planets by outnumbering the local defense. The goal of the game is, of course, to eliminate all of the enemy forces.

This video shows an example game of my first AI (green) playing against DualBot (red), one of the bots included in the starter package; the numbers floating around represent the fleets commanded by the AIs:

The competition is open to everybody, and it is possible to write the AI in basically any programming language. If you plan to use Python, I recommend not using the official starter package, which is un-pythonic and not very sophisticated, but rather this unofficial client. The alternative client offers, in particular, the possibility of logging debug information to a local file.

Also, it can be quite frustrating to wait for the official server to match you with another program, which can take up to an hour. There is an alternative server that lets you play against another opponent straight away.

The submission period ends November 27, good luck!

10 Sep 2010

Tracking down the enemy

As another scientific Python course approaches, I've been brushing up my PacMan skills. I decided to try out a strategy I had been thinking about, which relies on having a good estimate of the enemy's position. I should remind the reader that in the PacMan capture-the-flag game, one team does not know the exact position of the other team's agents unless they are within 5 squares of one's own agents. The game does, however, return a rough estimate of the opponents' distance. Our tracker will thus have to make its best guess blindly, keeping a probability distribution over possible positions.

To estimate the position of the opponent agents we need to apply some probability theory:

P(x(t)) = sum_{x(t-1)}  P( x(t-1) ) P( x(t) | x(t-1) )

or, in other words, the probability that the agent is at position x(t) at time t is equal to the sum of the probability of it being at x(t-1) at time t-1, times the probability of transitioning from x(t-1) to x(t). The first term is given simply by the previous estimate, while the second term is our model of the behavior of our opponent (*).

For example, a very conservative model could assume that the opponent could take any legal move at random:

P( x(t) | x(t-1) ) = 1/N

if x(t-1)->x(t) is a legal move, where N is the total number of legal moves from x(t-1), and

P( x(t) | x(t-1) ) = 0

otherwise. This video shows how such a model performs when the opponent behaves exactly as assumed; the red agent, in the bottom left corner, is estimating the position of the blue agent in the opposite corner; the area of the red squares is proportional to the probability P(x(t)):

The tracker is doing a good job in this case, but fails miserably for a more realistic opponent:

We clearly need to improve the opponent's model... luckily another simple model results in a large improvement: we can safely assume that the opponent tends to explore new parts of the maze in search of food. We can formalize this as

P( x(t) | x(t-1) ) = 1/Z exp(-beta * v(x(t)))

if x(t-1)->x(t) is a legal move, and 0 otherwise. v(x) is the number of times the agent has visited x in the past, and beta is a constant that controls how exploratory the opponent is. When beta=0, the model is equivalent to the previous random model. Z is a normalizing constant such that P( x(t) | x(t-1) ) sums to 1.

Let's see how this model does in practice (in the video, beta = 10):

Much better, isn't it? We can do even better by using two other sources of information: first, the game gives us a noisy estimate of the distance of the opponent (actual distance +/- 6); second, we know that if the opponent is not visible, it must be at least 5 squares away. We can take this information into account by setting P(x(t)) to 0 for squares that lie outside the noisy distance range, and for those inside the visibility range.
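One update step of the tracker could look like the sketch below; the helper names (legal_moves, visit_counts, maze_dist) are made up for illustration, and this is not the actual tournament code:

import numpy

def update_belief(belief, legal_moves, visit_counts, beta=10.0):
    """Prediction step: P(x(t)) = sum_{x(t-1)} P(x(t-1)) P(x(t) | x(t-1)).

    belief maps tiles to probabilities, legal_moves maps a tile to the
    tiles reachable from it, and visit_counts[tile] counts how often the
    opponent has visited that tile."""
    new_belief = {}
    for prev, p_prev in belief.items():
        moves = legal_moves[prev]
        # exploratory transition model: less-visited tiles are more likely
        weights = numpy.array([numpy.exp(-beta * visit_counts[m]) for m in moves])
        weights /= weights.sum()
        for move, w in zip(moves, weights):
            new_belief[move] = new_belief.get(move, 0.0) + p_prev * w
    return new_belief

def apply_observations(belief, my_pos, noisy_dist, maze_dist, visibility=5):
    """Zero out tiles incompatible with the noisy distance reading
    (actual distance +/- 6) or that would make the opponent visible."""
    for tile in belief:
        d = maze_dist(my_pos, tile)
        if not (noisy_dist - 6 <= d <= noisy_dist + 6) or d <= visibility:
            belief[tile] = 0.0
    total = sum(belief.values())
    # renormalize (assumes at least one tile remains possible)
    return dict((tile, p / total) for tile, p in belief.items())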

The last video shows the complete tracker at work. The blue lines show the area in which the agent might be according to the noisy distance, and the green line shows the visibility range:

The biggest area for improvement here is the agent's model P(x(t)|x(t-1)). One possibility could be to simulate several common strategies, and to use the transition statistics for the simulated agents to estimate that probability...

Now, can we use the estimated position to program better AI agents? I'll give it a try, and report back soon!

(*) Strictly speaking, we are doing "filtering" here, i.e., we're estimating the current position assuming the past inferences are fixed. The alternative is to do "smoothing", where the full joint probability P(x(t), ..., x(1)) is estimated at each step. The information coming from the new observation is propagated back and forth at each step to improve the past inferences. For example, knowing that the agent is at a given position at time t might exclude another position at time t-2 because of too large a distance, which in turn could improve the estimate at time t.

12 Sep 2009

PacMan capture-the-flag: a fun game for artificial intelligence development and education

At the beginning of September I was invited to teach at a summer school about scientific programming. The whole experience was really rewarding, but it was the students' project that got me going: we had the students write artificial intelligence algorithms for the agents of a PacMan-like game, and organized a tournament for them to compete against each other.

The PacMan capture-the-flag game was originally written by John DeNero, and has been used to teach an artificial intelligence course by him at Berkeley and by Hal Daume III at the University of Utah. Very often, this kind of game has a single strategy that dominates all others, and once you find it the interest fizzles out. In this case, I was impressed by how rich the game is. It offers a lot of opportunities to develop and test complex learning and planning algorithms, including cooperation strategies for games with multiple agents.


The rules of the game are quite simple: the board is a PacMan maze, divided into a red and a blue half. The two halves belong to two teams of agents, controlled by computer programs, whose goal is to eat the opponent's food and protect their own. When in the opponent's half, the agents are PacMan (PacMen?), while in their own half, the agents are ghosts and can kill the opponent's PacMan agents, in which case these are returned to their initial position. The players get one point for each food dot they eat; no points are assigned for eating the other team's agents. The game ends when one of the two teams eats all of the opponent's food, or after 3000 moves; the team with the highest score wins.

To make the game more interesting, one can only observe the position of the other team's agents when they are very close to one's own agents (5 squares away); otherwise, one can only obtain a noisy estimate of their distance.

The game is written in Python, my programming language of choice, which makes it possible to write even sophisticated algorithms rapidly. I recommend the game to anyone wanting to organize an artificial intelligence course, or simply to have fun writing AI agents. I plan to dedicate a couple of posts to the basic strategies for writing successful agents in this game.

Here's a video of the best students' agents (red team) playing against the best tutors' agents (blue team). The tutors won, saving our reputation!

Update: The authors of the PacMan capture-the-flag game decided to keep the game closed-source, and in particular would prefer not to publish the code of agents playing their game, fearing that it might interfere with their course. It's a shame, because I was planning to write some Genetic Programming agents for the game, but of course I respect their decision. I guess there will be no series of posts re: PacMan...

12 Sep 2009

My AI reads your mind — Extensions (part 3)

In the previous two posts I showed how to make use of decision theory to write a game AI that forms a model of its opponent and adapts its strategy accordingly.

The AI could be improved in several ways:

  • The most obvious improvement would be to build a better model of the opponent. In the Karate game I used a 2nd-order Markov model, i.e., I assumed that the next move of the player only depends on his previous two moves. It is of course possible to use a higher-order model that keeps track of three or more past moves. However, a long history means a much larger number of parameters to estimate, so it will take much longer for the AI to have a reasonable estimate of the player's behavior. An easy workaround would be to collect higher-order statistics but only use them when enough data is available; this, however, would still fail if the player decides to adopt a new strategy. One could also use another class of models, such as a recurrent neural network. I prefer not to use neural networks, first of all to avoid the usual voodoo of choosing parameters like learning rate and network architecture, and second because they work in a black-box fashion, which makes it difficult to extend them in principled ways, as, for example, suggested below.
  • When the game starts, the player statistics are blank, in the sense that all the player's moves are considered equally likely. This is our initial prior probability for the opponent's moves. However, it is probable that humans have common biases in the kind of action sequences they choose. One could adapt the game to learn these biases over many games with different players, and use them as the initial prior, thereby improving the initial phase of the game.
  • A related point is that at the moment I do not take into account the uncertainty in the estimate of the player's moves. At the start of the game the AI has only a few examples on which to base its prediction of the next move, and it should take this into account when making decisions. The formal way to capture this uncertainty is to define a hyperprior on the transition probabilities, and then integrate over it when predicting the next move. So far, the transition probability is given by a matrix, p(n(t+1) | n(t)) = N_{n(t),n(t+1)}, where, for simplicity, I'm only considering a first-order Markov model. N is treated as a constant at every time point; in reality, N is also a random variable that is being estimated from the player's moves. It is thus only natural to put a prior on N itself, P(N) (e.g., a Dirichlet distribution for every row of the matrix). The improved prediction, which takes into account our uncertainty about N, is given by p(n(t+1) | n(t)) = integral over N of p(n(t+1) | n(t), N) P(N) (see the sketch after this list).
  • In the karate game, the AI tries to maximize its score by picking the action with the highest expected score. Another strategy would be to choose every action with a probability related to its score, for example with p(choosing action a) proportional to exp(beta * score of a), where beta is a constant that controls the "softness" of the decision: for beta=0, all actions are chosen with the same probability; for large betas, only the action with the largest score is chosen. This seems to be closer to the way humans make decisions, the so-called probability matching rule. This strategy is suboptimal if the second-order Markov model is the real model of the opponent, but since it is not, it appears to perform better in practice, as the AI becomes less predictable (I updated the Karate game in the previous post to do probability matching).
  • A major improvement to the AI would be to take into account the so-called theory of mind, i.e., the fact that while the AI is building a model of the player, the player is doing the same for the AI, and trying to maximize his own score. Taking this aspect into account is quite complex, as one falls rapidly into a deep I-know-that-you-know-that-I-know kind of reasoning. Managing to do so, however, is likely to be highly rewarding for the player's experience of the game: several studies have shown how humans activate areas of the brain that are associated with theory-of-mind when playing against a human opponent, but fail to do so against a computer (see Gallagher and Frith, Functional imaging of ‘theory of mind’, Trends in Cognitive Sciences, 7(2), 2003, for a review). It is thus possible that by writing computer programs that make use of theory of mind themselves, those centers would become engaged, giving the player the impression of playing against a human opponent.
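To make the hyperprior and the soft-decision ideas above more concrete, here is a minimal sketch; the move count, the variable names, and the value of alpha are my own choices:

import numpy

NMOVES = 3  # e.g., punch, kick, block

# transition counts: counts[i, j] = how often move j followed move i
counts = numpy.zeros((NMOVES, NMOVES))

def predict_next(prev_move, alpha=1.0):
    """Posterior predictive of a first-order Markov model with a
    symmetric Dirichlet(alpha) prior on each row of the transition
    matrix: this integrates over N instead of using a point estimate."""
    row = counts[prev_move]
    return (row + alpha) / (row.sum() + alpha * NMOVES)

def choose_action(expected_scores, beta=1.0):
    """Probability matching: choose each action with probability
    proportional to exp(beta * expected score), instead of the argmax."""
    p = numpy.exp(beta * (expected_scores - expected_scores.max()))
    p /= p.sum()
    return numpy.random.choice(len(p), p=p)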
3 May 2009

My AI reads your mind and kicks your ass (part 2)


In the last post I discussed how it is possible to program a game Artificial Intelligence to exploit a player's unconscious biases using a simple mathematical model. In the karate game above, the AI uses that model to deal the largest amount of damage. Give it a try! You get 10 points if you hit your opponent with a punch or a kick, 0 points if you miss, and 5 points if you block your opponent's move. As you play, the AI learns your strategy and adapts to knock you down as often as possible.

How does it work? According to decision theory, we need to maximize the expected score. To compute the expected score for an action 'x' (e.g., 'punch'), one needs to consider all possible player moves 'y', and weight each outcome by the probability of the player making that move, i.e.

E[score for x] = sum_y P(y) * Score(y,x)

where P(y) is the probability of the player choosing action 'y' (obtained using last post's model), and Score(y,x) gives the score of responding 'x' to 'y'.

For example, in the karate game using a low kick has a priori the highest chance of success: you score in 3 out of 4 cases, and only lose 5 points if the opponent decides to block your kick. This is why, at the beginning, the AI tends to choose that move. However, if you know that the AI uses that move often, you will choose the kick-blocking move more often, increasing P(kick-block). This change will make the punch more likely to score points. As you play, the optimal strategy changes and the AI continues to adapt to your style.
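In code, the decision rule is short once the payoff matrix is written down; this is a sketch with a made-up 2x2 payoff, not the game's actual move set:

import numpy

def expected_scores(p_player, score):
    """E[score for x] = sum_y P(y) * Score(y, x).

    p_player[y] is the model's probability of the player choosing move y;
    score[y, x] is the AI's payoff when the player plays y and it plays x."""
    return p_player @ score

# toy payoff: rows are player moves (kick, kick-block), columns are AI moves
# (punch, low kick); 10 = hit, 0 = miss, -5 = blocked (values illustrative)
score = numpy.array([[10.,  0.],
                     [ 0., -5.]])
p_player = numpy.array([0.3, 0.7])  # the model expects a kick-block
best = numpy.argmax(expected_scores(p_player, score))  # -> 0, i.e., punch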

With a bit of practice, you'll notice that you can compete with the AI and sometimes even gain the upper hand. This shows that you are in turn forming an internal model of the computer's strategy. I think the game dynamics that results from this interaction makes the game quite interesting, even though it is extremely simple. Unfortunately, it's very rare to see learning AIs in real-life video games...

As always, you can download the code here.

Update: Instead of always making the best move, the AI now selects the move with a probability related to its score, which makes it less predictable. More details in the next post...
