Master Baboon: The sea of the simulation

21 Jan 2011

Tracking down the enemy (2)

I never got the chance to show a working agent based on the Bayesian estimator for the enemy position in the PacMan capture-the-flag game. In the previous PacMan post, I wrote about merging a model of agent movements with the noisy measurements returned by the game to track the enemy agents across the maze. Clearly, this information can give you an edge when planning an attack (to avoid ghosts) or when defending (to intercept the PacMen).

For the traditional faculty-vs-students tournament at the G-Node scientific programming summer school this year, I wrote a PacMan team made up of a simple attacker and a more sophisticated defender that tries to intercept and devour enemy agents.

Both agents plan their movements using a shortest-path algorithm on a weighted graph: before the start of the game, the maze is transformed into a graph whose nodes are the maze tiles and whose edges connect adjacent tiles. The weights along the edges are adjusted according to the estimated position of the agents:

  • Weights on edges close to an enemy ghost are increased (the increase is proportional to the probability of the enemy being there, and falls off exponentially with distance)
  • Weights on edges close to an enemy PacMan are decreased
  • Weights on edges close to a friendly agent are increased

An agent navigating such a maze will tend to avoid ghosts, chase PacMen, and cover parts of the maze far from other friendly agents. My attacker does little more than update the weights of the graph at every turn and move toward the closest food dot.
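To make this concrete, here is a minimal sketch of the weight update, not the actual code of my team: it assumes the maze is stored as a networkx graph whose nodes are tile coordinates, that enemy_prob (a hypothetical name) maps each tile to the estimated probability of an enemy ghost being there, and that tile-to-tile distances have been precomputed.

import math
import networkx as nx

def update_weights(graph, enemy_prob, dist, scale=10.0, decay=0.5):
    """Increase edge weights near likely enemy ghost positions."""
    for u, v in graph.edges():
        penalty = 0.0
        for tile, p in enemy_prob.items():
            # starting value proportional to the probability of the enemy
            # being there, falling off exponentially with distance
            penalty += scale * p * math.exp(-decay * dist[tile][v])
        graph[u][v]["weight"] = 1.0 + penalty

# dist can be computed once from the unweighted maze graph:
# dist = dict(nx.all_pairs_shortest_path_length(graph))
# the attacker then follows the cheapest path to the closest food dot:
# path = nx.shortest_path(graph, my_tile, food_tile, weight="weight")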

On the other hand, defending is quite difficult in this game, so I needed a more sophisticated strategy. While the enemy is still a ghost in its own half of the maze, the defender moves toward the closest enemy agent (its estimated position, that is). When the enemy is a PacMan in the friendly half, the chase is on! Since ghosts and PacMen move at the same speed, it would be pointless to just follow it around; one needs to catch it from the front... Once more, the solution was to modify the weights of the maze graph, making the weights behind the enemy (i.e., opposite to its direction of motion) very high, and lowering those of the edges in front of it.
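A sketch of this interception bias, again with hypothetical names: tiles are assumed to be (x, y) coordinates, and enemy_dir is the enemy's current direction of motion as a unit step such as (1, 0).

def bias_for_interception(graph, enemy_tile, enemy_dir, high=100.0, low=0.1):
    """Make paths that approach the enemy from the front cheap, and paths
    that trail behind it expensive."""
    ex, ey = enemy_tile
    dx, dy = enemy_dir
    for u, v in graph.edges():
        vx, vy = v
        # project the edge's end tile onto the enemy's direction of motion
        along = (vx - ex) * dx + (vy - ey) * dy
        if along < 0:
            graph[u][v]["weight"] = high   # behind the enemy
        elif along > 0:
            graph[u][v]["weight"] = low    # in front of the enemy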

The combination of the estimator and the weighted-graph strategy can be quite entertaining:

Sometimes the defender only needs to guard the border to scare the opponent shitless:

Another useful thing to keep in mind for the future: it is better to base strategies on soft constraints (weighted graphs, probabilities) than on hard, deterministic rules, which tend to get you stuck in loops. Soft constraints and some randomness give you more flexibility when you need it, but are otherwise just as good.

15 Sep 2010

Solving the game Set®

Recently a friend of mine introduced me to a game called Set. Set is a logic card game for several players; 3-5 players is probably best, but in principle there is no limit, and it is also fun to play solitaire.

The game contains a deck of cards with symbols varying along 4 dimensions: color, number, shape, and texture. For each dimension there are 3 possible features (e.g., the 3 possible textures are full, empty, and striped). A valid set is formed by three cards that, on each dimension, have either the same feature or three different features. For example, in the image below, the first three cards are a valid set, as their features differ on all dimensions; the second three cards also form a valid set, because they share the same features for color and number, and differ in shape and texture; the cards at the bottom are not a set, because two cards have the "full" texture while one is striped.

Example of valid (top) and invalid (bottom) sets.

In a Set game, 12 cards are put on the table, and the players have to find a valid set. The quickest among them collects the set and replaces the removed cards with new ones. If the players agree that there is no set on the table, 3 more cards are added. Once the deck is empty, the player who collected the most sets wins! Simple but fun...

I decided to write a solver that finds all valid sets among a given group of cards, with the goal of collecting statistics over thousands of games. The solver should thus be as efficient as I can manage.

I'm going to represent each card as a four-dimensional vector (color, shape, texture, number), with each element containing 0, 1, or 2 (the 3 possible features of each dimension). A function that checks whether three cards form a valid set looks like this:


import numpy
import itertools

def same(x):
    """Returns True if all elements are the same."""
    return numpy.all(x == x[0])

def different(x):
    """Returns True if all elements are different."""
    return len(numpy.unique(x)) == len(x)

def is_set(cards, indices):
    """Checks that the cards indexed by 'indices' form a valid set."""
    ndims = cards.shape[0]

    subset = cards[:, indices]
    for dim in range(ndims):
        # cards must be all the same or all different for all dimensions
        if not same(subset[dim, :]) and not different(subset[dim, :]):
            return False
    return True
A brute force solver could then try all possible combinations of three cards and check whether they form a valid set:

def find_sets(cards):
    """Brute-force Sets solver."""
    return [indices
            for indices in itertools.combinations(range(cards.shape[1]), 3)
            if is_set(cards, indices)]

The brute-force approach is very inefficient, but it is very useful for testing more efficient solutions (just check that they give the same answer as the brute-force one).

The second solver improves on the first one with a simple observation: once you have two cards, there is only one card that can complete the set. For each dimension, if the two cards share a feature, the missing card must have that same feature; if their features differ, the missing card must carry the third, missing feature.

Given two cards, there is only one possible card that forms a valid set.

A possible strategy is thus to consider all possible pairs of cards, derive the card that completes the set, and check whether it is present on the table. This solution is much more efficient, as there are 220 possible triplets of 12 cards, but only 66 pairs. It runs about 6 times faster:


def find_sets2(cards):
    ndims, ncards = cards.shape
    all_features = set([0, 1, 2])

    # solutions contain the indices of the cards forming sets
    solutions = []
    # iterate over all pairs
    for idx1, idx2 in itertools.combinations(range(ncards - 1), 2):
        c1, c2 = cards[:, idx1], cards[:, idx2]

        # compute card that would complete the set
        missing = numpy.empty((ndims,), dtype='i')
        for d in range(ndims):
            if c1[d] == c2[d]:
                # same feature on this dimension -> the missing card has it too
                missing[d] = c1[d]
            else:
                # different features -> find third missing feature
                missing[d] = list(all_features - set([c1[d], c2[d]]))[0]

        # look for missing card in the cards array
        where_idx = numpy.flatnonzero(numpy.all(cards[:, idx2 + 1:].T == missing,
                                                axis=1))
        # append to solutions if found
        if len(where_idx) > 0:
            solutions.append((idx1, idx2, where_idx[0] + idx2 + 1))

    return solutions
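As an aside: since the features are coded as 0, 1, and 2, three features are all equal or all different exactly when their sum is 0 modulo 3. The loop computing the missing card above can therefore be replaced by a single vectorized expression (a small refactoring, not the version used in the benchmarks below):

# the card that completes the set of c1 and c2, computed in one step
missing = (-(c1 + c2)) % 3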

The trained eye will see at this point that the problem can be solved recursively, divide-and-conquer style. We start by looking at the first dimension and form groups of cards with the same feature. We know by the reasoning above that a valid set will either consist of cards from a single group, or contain one card from each group. First we consider the former case, and recursively apply the same procedure to the remaining dimensions for the cards within each group. Second, we consider all triplets of cards that have one card per group and check whether they form a set:


nfeatures = 3  # number of possible features per dimension

def find_sets3(cards, indices=None):
    nd, n = cards.shape
    c0 = cards[0, :]
    if indices is None:
        indices = numpy.arange(n)

    # group the cards by their feature on the first dimension
    groups = [(c0 == f).nonzero()[0] for f in range(nfeatures)]

    # equals
    solequal = []
    for g in groups:
        if len(g) < 3: continue
        solequal += find_sets3(cards[1:nd, g], indices[g])
    # different
    soldiff = [(indices[i0], indices[i1], indices[i2])
               for i0 in groups[0] for i1 in groups[1] for i2 in groups[2]
               if is_set(cards, (i0, i1, i2))]
    return solequal + soldiff

It turns out that, although it is a very efficient strategy when playing with real cards, find_sets3 is slower than find_sets2, probably because the overhead of the recursive calls outweighs the gains for such a small number of cards.
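For the record, the comparison is easy to reproduce with timeit; this is a sketch (the exact numbers depend on the machine, and random_deck is defined further down):

import timeit

cards = random_deck()[:, :12]   # deal 12 random cards
for solver in (find_sets, find_sets2, find_sets3):
    t = timeit.timeit(lambda: solver(cards), number=1000)
    print(solver.__name__, t)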

Let's have a look at some statistics, then. While playing it can be quite frustrating to stare at the cards without being able to find any set. The instructions that ship with the official game say that such a situation should occur only once every 33 turns, but it certainly doesn't feel that way. Is that really so?

First, I drew 12 cards at random from a complete deck, and used the solver to count the valid sets present on the table. I repeated this 10000 times, and ended up with the distribution of the number of sets shown below.
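The sampling loop itself is just a few lines (a sketch; random_deck, which returns a shuffled deck, is defined further down):

nsets = []
for _ in range(10000):
    cards = random_deck()[:, :12]   # deal 12 random cards
    nsets.append(len(find_sets2(cards)))
# fraction of draws without a valid set
print(sum(1 for n in nsets if n == 0) / float(len(nsets)))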

Probability of finding a given number of sets in 12 cards drawn at random.

As the instructions say, about 3% of the time (1 in 33) there is no set among the cards. However, this is misleading, as during a game the cards are not drawn independently each turn: the players remove one set from the cards on the table, and replace it with new ones. I thus simulated complete games, at every turn removing at random one of the sets present on the table. The code to simulate a game looks like this:


ndims = 4  # color, shape, texture, number

def random_deck():
    # initialize the full deck of cards (3**4 = 81 cards)
    cards = numpy.array([card for card in itertools.product(range(nfeatures),
                                                            repeat=ndims)]).T
    n = cards.shape[1]
    # shuffle
    return cards[:, numpy.random.permutation(n)]

ncards = 12

def onegame():
    nsolutions = []
    deck = random_deck()
    pos = ncards
    cards = deck[:, :pos]
    while True:
        # find all sets
        sets = find_sets2(cards)
        nsets = len(sets)
        if nsets > 0:
            # choose a random set
            chosen = sets[numpy.random.randint(len(sets))]
            # remove cards from chosen set
            idx = [i for i in range(cards.shape[1]) if i not in chosen]
            cards = cards[:, idx]
            # add new cards
            if cards.shape[1] < ncards:
                nadd = ncards - cards.shape[1]
                cards = numpy.concatenate((cards, deck[:, pos:pos + nadd]), axis=1)
                pos += nadd
        else:
            if pos >= deck.shape[1]:
                break # game is over
            # add additional cards
            cards = numpy.concatenate((cards, deck[:, pos:pos + 3]), axis=1)
            pos += 3
        nsolutions.append(nsets)
    return nsolutions
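Pooling the per-turn counts over many simulated games then gives the in-game distribution (a sketch):

counts = []
for _ in range(5000):
    counts.extend(onegame())
# fraction of turns without a valid set on the table
print(sum(1 for n in counts if n == 0) / float(len(counts)))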

The distribution of the number of sets on the table at any point in time looks quite different now (after simulating 5000 random games):

Probability of there being a given number of sets on the table at any point in a Set game.

As you can see, the probability of there being no set on the table tripled, to about 1 in 10! Now that's a relief: it is not that weird to be unable to find a set... or is it? The distribution also tells us that almost half of the time (43%, to be precise) there are 3 or more sets on the table! Now that one, I didn't expect...

The code and other material are available in the git repository at http://github.com/pberkes/masterbaboon/tree/master/projects/setgame/ .

Update 09/21/10: I updated the histogram of the number of sets during a game with more simulated games, so that the result is more accurate. The probability of there being no set at any given point is even higher than reported before.

Cards and game © 1988, 1991 Cannei, LLC. All rights reserved. SET® is a registered trademark of Cannei, LLC. Used with permission from Set Enterprises, Inc.

14 Sep 2010

Planet Wars – Google AI Challenge

The Computer Science Club of the University of Waterloo is organizing its second Google AI Challenge, a competition between computer programs that control the artificial intelligence of the players in a video game.

This time the game is set in space, and features a symmetric configuration of planets, each hosting a fleet of defending spaceships. Each turn, new ships are created, with larger planets creating more ships. At the beginning of the game, each player controls one planet and a hundred ships; the AIs issue orders that send fleets to conquer planets by outnumbering the local defense. The goal of the game is, of course, to eliminate all of the enemy's forces.

This video shows an example game between my first AI (green) and DualBot (red), one of the bots included in the starter package; the numbers floating around represent the fleets commanded by the AIs:

The competition is open to everybody, and it is possible to write the AI in basically any programming language. If you plan to use Python, I recommend not using the official starter package, which is un-pythonic and not very sophisticated, but rather this unofficial client. The alternative client offers, in particular, the possibility of logging debug information to a local file.

Also, it can be quite frustrating to wait for the official server to match you with some other program, which can take up to an hour. There is an alternative server that lets you play against another opponent straight away.

The submission period ends on November 27. Good luck!

10 Sep 2010

Tracking down the enemy

As another scientific Python course approaches, I've been brushing up my PacMan skills. I decided to try out a strategy I had been thinking about, which relies on having a good estimate of the enemy's position. I should remind the reader that in the PacMan capture-the-flag game, one team does not know the exact position of the other team's agents unless they are within 5 squares of one of its own agents. The game does, however, return a rough estimate of each opponent's distance. Our tracker will thus have to make its best guess blindly, and keep a probability distribution over possible positions.

To estimate the position of the opponent agents we need to apply some probability theory:

P(x(t)) = sum_{x(t-1)}  P( x(t-1) ) P( x(t) | x(t-1) )

or, in other words: the probability that the agent is at position x(t) at time t is the sum, over all positions x(t-1), of the probability of it being at x(t-1) at time t-1, times the probability of transitioning from x(t-1) to x(t). The first term is simply given by the previous estimate, while the second term is our model of the opponent's behavior (*).
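In code, this prediction step takes only a few lines. Here is a minimal sketch with hypothetical names, not the exact code of my tracker: the belief is a dictionary mapping maze tiles to probabilities, and trans(x_prev) returns a dictionary of move probabilities.

def propagate(belief, trans):
    """One prediction step: P(x(t)) is the sum over x(t-1) of
    P(x(t-1)) * P(x(t) | x(t-1))."""
    new_belief = {}
    for x_prev, p_prev in belief.items():
        for x_next, p_move in trans(x_prev).items():
            new_belief[x_next] = new_belief.get(x_next, 0.0) + p_prev * p_move
    return new_belief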

For example, a very conservative model could assume that the opponent could take any legal move at random:

P( x(t) | x(t-1) ) = 1/N

if x(t-1)->x(t) is a legal move, where N is the total number of legal moves from x(t-1), and

P( x(t) | x(t-1) ) = 0

otherwise. This video shows how such a model performs when the opponent behaves exactly as assumed; the red agent, in the bottom left corner, is estimating the position of the blue agent in the opposite corner; the area of the red squares is proportional to the probability P(x(t)):

The tracker is doing a good job in this case, but fails miserably for a more realistic opponent:

We clearly need to improve the opponent model... Luckily, another simple model results in a large improvement: we can safely assume that the opponent tends to explore new parts of the maze in search of food. We can formalize this as

P( x(t) | x(t-1) ) = 1/Z exp(-beta * v(x(t)))

if x(t-1)->x(t) is a legal move, and 0 otherwise. Here v(x) is the number of times the agent has visited x in the past, and beta is a constant that controls how exploratory the opponent is assumed to be. When beta=0, the model is equivalent to the previous random model. Z is a normalizing constant such that P( x(t) | x(t-1) ) sums to 1.
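This model plugs directly into the prediction step sketched above. Again a sketch with hypothetical names, assuming legal_moves(x) returns the tiles reachable from x and visits maps each tile to its visit count:

import math

def trans_exploratory(x_prev, legal_moves, visits, beta=10.0):
    """P(x(t) | x(t-1)) proportional to exp(-beta * v(x(t))) over legal moves."""
    weights = dict((x, math.exp(-beta * visits.get(x, 0)))
                   for x in legal_moves(x_prev))
    Z = sum(weights.values())
    return dict((x, w / Z) for x, w in weights.items())

# to use it with propagate:
# trans = lambda x: trans_exploratory(x, legal_moves, visits)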

Let's see how this model does in practice (in the video, beta = 10):

Much better, isn't it? We can do even better by using two other sources of information: first, the game gives us a noisy estimate of the opponent's distance (actual distance +/- 6); second, we know that if the opponent is not visible, it must be at least 5 squares away. We can take this information into account by setting P(x(t)) to 0 for squares that lie outside the noisy distance range, and for those inside the visibility range.
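A sketch of this measurement step, once more with hypothetical names: maze_dist holds precomputed tile-to-tile distances, noisy_dist is the reading returned by the game, and the tolerance and sight radius follow the game rules described above.

def apply_measurement(belief, my_tile, noisy_dist, maze_dist, noise=6, sight=5):
    """Zero out tiles inconsistent with the noisy distance reading and with
    the fact that the opponent is not currently visible, then renormalize."""
    new_belief = {}
    for x, p in belief.items():
        d = maze_dist[my_tile][x]
        if abs(d - noisy_dist) <= noise and d > sight:
            new_belief[x] = p
    Z = sum(new_belief.values())
    if Z == 0:
        return belief   # the reading ruled everything out; keep the old estimate
    return dict((x, p / Z) for x, p in new_belief.items())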

The last video shows the complete tracker at work. The blue lines show the area in which the agent might be according to the noisy distance, and the green line shows the visibility range:

The biggest area for improvement here is the opponent model P(x(t)|x(t-1)). One possibility could be to simulate several common strategies, and use the transition statistics of the simulated agents to estimate that probability...

Now, can we use the estimated position to program better AI agents? I'll give it a try, and report back soon!

(*) Strictly speaking, we are doing "filtering" here, i.e., we are estimating the current position assuming the past inferences are fixed. The alternative is to do "smoothing", where the full joint probability P(x(t), ..., x(1)) is estimated at each step. The information coming from each new observation is then propagated back and forth at every step to improve the past inferences. For example, knowing that the agent is at a given position at time t might exclude another position at time t-2 because of too large a distance, which in turn could improve the estimate at time t.

12 Sep 2009

PacMan capture-the-flag: a fun game for artificial intelligence development and education

At the beginning of September I was invited to teach at a summer school on scientific programming. The whole experience was really rewarding, but it was the students' project that got me going: we had the students write artificial intelligence algorithms for the agents of a PacMan-like game, and organized a tournament for them to compete against each other.

The PacMan capture-the-flag game was originally written by John DeNero, and has been used to teach artificial intelligence courses by him at Berkeley and by Hal Daume III at the University of Utah. Very often this kind of game has a single strategy that dominates all others, and once you find it the interest fizzles out. In this case, I was impressed by how rich the game is. It offers a lot of opportunities to develop and test complex learning and planning algorithms, including cooperation strategies for games with multiple agents.


The rules of the game are quite simple: the board is a PacMan maze, divided into a red and a blue half. The two halves belong to two teams of agents, controlled by computer programs, whose job is to eat the opponent's food and protect their own. In the opponent's half, the agents are PacMan (PacMen?); in their own half, they are ghosts and can kill the opponent's PacMan agents, which are then returned to their initial position. The players get one point for each food dot they eat; no points are assigned for eating the other team's agents. The game ends when one of the two teams eats all of the opponent's food, or after 3000 moves; the team with the highest score wins.

To make the game more interesting, one can only observe the position of the other team's agents when they are very close to one's own agents (5 squares away); otherwise, one can only obtain a noisy estimate of their distance.

The game is written in Python, my programming language of choice, which makes it possible to write even sophisticated algorithms rapidly. I recommend the game to anyone wanting to organize an artificial intelligence course, or simply wanting to have fun writing AI agents. I plan to dedicate a couple of posts to the basic strategies for writing successful agents in this game.

Here's a video of the best students' agents (red team) playing against the best tutors' agents (blue team). The tutors won, saving our reputation!

Update: The authors of the PacMan capture-the-flag game have decided to keep the game closed-source, and in particular would prefer that the code of agents playing their game not be published, fearing that it might interfere with their course. It's a shame, because I was planning to write some Genetic Programming agents for the game, but of course I respect their decision. I guess there will be no series of posts re: PacMan...
