The concept of "open-ended evolution" in artificial life is often misunderstood. In many cases, "open-ended" is taken to mean that the final solution can take any shape, however complicated. A stronger definition of "open-ended" would instead require that the fitness function itself changes over time. For example, Genetic Programming is open-ended in the first sense, since the evolved programs can solve any computable problem (provided that the instruction set is Turing-complete), but not in the second, as the programs are selected according to a single, immutable fitness function. This is more akin to adaptation than evolution.
In my opinion, an important but neglected goal of ALife should be that of identifying a minimal set of conditions that is able to sustain this stronger kind of open-ended evolution.
What would it take for an artificial system to display "real" open-ended evolution? There may be multiple answers to this question, but co-evolution of agents seems to be a good candidate. A minimal example might involve populations of agents trying to predict each other's output. The output of a population should be tied to the process of predicting the response of the others. To initiate a runaway evolutionary process, predicting a simple output should require a simple algorithm, which in turn would produce a slightly more complex output by the laws of the artificial world. In such a scenario, the fitness function is given by the combination of prediction algorithms in the population, and each evolutionary step changes it in a way that requires increasingly complex behaviors.
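To make the mutual-prediction idea a bit more concrete, here is a purely hypothetical toy in Python (this is not an existing system; every name, genome encoding, and parameter value below is my own invention). Agents are tiny lookup tables, one population is rewarded for matching the other's output, and the other for escaping prediction, so each population's fitness function is literally the other population:

```python
import random

random.seed(0)

# Each agent is a lookup table mapping the opponent's last two output bits
# to the agent's own next output bit. An agent's output *is* its guess of the
# opponent's next move, so what must be predicted changes as the opponent evolves.

def random_agent():
    return {(a, b): random.randint(0, 1) for a in (0, 1) for b in (0, 1)}

def score(agent, opponent, rounds=30):
    """Fraction of rounds in which `agent` matches `opponent`'s output."""
    hist_agent, hist_opp = (0, 0), (0, 0)
    hits = 0
    for _ in range(rounds):
        out_a = agent[hist_opp]       # agent reacts to the opponent's history
        out_o = opponent[hist_agent]  # and vice versa
        hits += (out_a == out_o)
        hist_opp = (hist_opp[1], out_o)
        hist_agent = (hist_agent[1], out_a)
    return hits / rounds

def next_gen(pop, fit):
    # keep the better half, fill up with single-bit mutants of the survivors
    ranked = [p for _, p in sorted(zip(fit, pop), key=lambda t: -t[0])]
    survivors = ranked[:len(pop) // 2]
    children = []
    for p in survivors:
        child = dict(p)
        k = random.choice(list(child))
        child[k] ^= 1  # flip one table entry
        children.append(child)
    return survivors + children

pop_a = [random_agent() for _ in range(20)]  # rewarded for predicting B
pop_b = [random_agent() for _ in range(20)]  # rewarded for being unpredictable
for gen in range(40):
    # each population *is* the other's fitness function, so the target moves
    fit_a = [sum(score(a, b) for b in pop_b) for a in pop_a]
    fit_b = [sum(1 - score(a, b) for a in pop_a) for b in pop_b]
    pop_a = next_gen(pop_a, fit_a)
    pop_b = next_gen(pop_b, fit_b)
```

Of course, such tiny lookup tables cannot grow in complexity, so this toy only illustrates the moving-target aspect, not the runaway growth that a genuinely open-ended system would need.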
Natural selection is the algorithm Nature uses to maximize the probability of reproduction of organisms. With Genetic Algorithms (GAs), engineers imitate this process in order to optimize a set of parameters in an engineering problem... or a dune buggy! The great application at boxcar2d.com uses simulated physics to evolve 2D cars that are optimally fast and stable.
GAs tie the ability to solve a problem to the likelihood of reproduction of a set of parameters: first, one creates a population of possible solutions to a problem and evaluates them; then one forms a new generation by allowing the most successful solutions to pass their parameters on, after small mutations and parameter swapping (crossover).
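The loop described above can be sketched in a few lines of Python (a generic toy implementation, not the code behind boxcar2d.com; the selection scheme, operators, and parameter values are all arbitrary choices of mine):

```python
import random

random.seed(0)

def evolve(fitness, pop_size=50, n_params=8, generations=100,
           mutation_rate=0.1, mutation_scale=0.5):
    # random initial population of parameter vectors
    pop = [[random.uniform(-1, 1) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]

        def pick():
            # binary tournament: the better of two random individuals
            a, b = random.randrange(pop_size), random.randrange(pop_size)
            return pop[a] if scores[a] >= scores[b] else pop[b]

        new_pop = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            # uniform crossover ("parameter swapping")
            child = [random.choice(pair) for pair in zip(p1, p2)]
            # small Gaussian mutations
            child = [g + random.gauss(0, mutation_scale)
                     if random.random() < mutation_rate else g
                     for g in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# toy problem: maximize -sum(x_i^2), whose optimum is at the origin
best = evolve(lambda x: -sum(g * g for g in x))
```

The fitness function is evaluated pop_size times per generation, which is exactly why its cost usually dominates everything else.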
My general feeling about GAs is that in most cases other optimization algorithms will give similar or better results with fewer evaluations of the fitness function (which is typically the bottleneck). In Chapter 30.2 of his classic book on information theory, David MacKay gives a very interesting interpretation of GAs as Monte Carlo sampling in the space of parameters, and discusses the relation with efficient sampling methods, of which we have a better formal understanding.
Here's another classic from the 80's: an artificial life simulation, where bugs move on a virtual Petri dish, hunting for bacteria. If they manage to survive until adulthood, and accumulate enough energy from bacteria, the bugs reproduce and generate two copies of themselves. In the reproduction process, the genetic code undergoes small mutations, so that the baby-bugs are not exact copies of their mother.
The genetic code of a bug determines the way it moves around. It consists of six numbers giving the probability that the bug will move in one of six directions (forward, soft/hard right, backwards, soft/hard left) at any point in time. For example, a bug with code [5, 0, 5, 0, 0, 0] would move forward 50% of the time (relative to its current direction), and for the rest of the time it would take a hard turn right (120 degrees to its right) and move. Mutations change one of the numbers in the code by +/- 2.
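In Python, the sampling and mutation rules could look like the sketch below (my own reconstruction from the description above, not the ActionScript code of the simulation; in particular, the exact ordering of the left-turn genes is an assumption):

```python
import random

random.seed(0)

# Assumed gene order (my reconstruction): forward, soft right, hard right,
# backward, hard left, soft left -- i.e. turn angles in multiples of 60 degrees.
TURN_ANGLES = [0, 60, 120, 180, 240, 300]

def pick_turn(genes):
    """Sample a turn angle with probability proportional to the gene weights."""
    return random.choices(TURN_ANGLES, weights=genes)[0]

def mutate(genes):
    """Change one randomly chosen gene by +/- 2 (never below zero)."""
    child = list(genes)
    i = random.randrange(len(child))
    child[i] = max(0, child[i] + random.choice([-2, 2]))
    return child

# A bug with code [5, 0, 5, 0, 0, 0] moves straight or turns 120 degrees
# right, each half of the time; it can never turn left.
```

Note that the genes act as relative weights, so they need not sum to any particular total.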
Individuals with a genetic code unfit to deal with the competition for food eventually die out, and by the law of natural selection the population of bugs adapts to navigate its environment and collect bacteria efficiently. The optimal strategy depends on the environment: if the bacteria are randomly scattered around, the optimal behavior is to "slide" forward for some time before taking a turn. If instead the bacteria are concentrated in a small patch, a surer way for a bug to survive is to rotate in place, so as not to stray too far.
This idea was described by A.K. Dewdney in the article "Simulated evolution: wherein bugs learn to hunt bacteria" in 1989 in Scientific American (May, pp. 138-141). The flash application above is my version of Dewdney's simulation, implemented in ActionScript 3 (click here to download the code). It is based on the SpatialDatabase class described in the previous post, so you might want to have a look at the code if you're curious about how it can be used in practice.
As you might have guessed, the yellow circles are bacteria, while the green ones are bugs. Bugs start to fade when their energy is low; if they don't find food fast enough, they eventually disappear into nothing. The button "Garden of Eden" activates a small region with high bacterial growth; you can switch it on to see how fast the bugs adapt to the new environment. It usually takes around 20 generations for them to show a highly specialized behavior.
I decided to get my feet wet with Flash + ActionScript programming with a classic of the 80's: 2-dimensional Cellular Automata!
A 2D CA is a grid of cells, each of which can be in either an "alive" or "dead" state. The state of each cell evolves in time according to simple update rules based on the number of alive neighbors (each cell has 8 neighbors). Briefly:
- The update rule defines the overall behavior of the CA, and is given by two lists of numbers, S for "Survival" and B for "Birth"
- If the cell is alive and the number of active neighbors is not on the S list, the cell dies
- If the cell is dead and the number of active neighbors is on the B list, the cell becomes alive
The standard notation for the rules is S/B. For example, 23/3 (S=[2,3], B=[3]) corresponds to the celebrated Game of Life by Conway, which produces ever-changing patterns of activity. The reason CAs are so famous is that they are a perfect example of how simple, local rules can produce complex, global behavior.
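To make the update rules concrete, here is a minimal sketch in Python (a generic toy version with wraparound edges, not the ActionScript code behind the application; the default arguments encode the 23/3 Game of Life rule):

```python
def step(grid, survival=(2, 3), birth=(3,)):
    """One synchronous update of a binary 2D CA with wraparound edges."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # count alive cells among the 8 neighbors (toroidal topology)
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            if grid[r][c]:
                new[r][c] = 1 if n in survival else 0  # S list
            else:
                new[r][c] = 1 if n in birth else 0     # B list
    return new

# Under the default 23/3 rule, a "blinker" oscillates with period 2:
blinker = [[0, 0, 0, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 0, 0, 0]]
after_two = step(step(blinker))
```

Representing S and B as collections (rather than, say, bitmasks) keeps the membership tests readable, which matters more than speed in a toy like this.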
In the Flash application below, you can experiment with different rules by (un)checking the checkboxes on the right side. This Wikipedia page has a list of rules known to produce interesting behavior.
Short instructions: click on the grid elements to switch them between dead and alive states. On the right side, you can add numbers to the Survival and Birth list, clear the CA to a blank state, or set the cells to a random state.
You can download the AS3 and the Flash library here. This is my first ActionScript project, so everything was new to me. The hardest part for me was to figure out how to link the interface I designed in the Flash IDE with the AS3 classes. I think I found a decent solution in the end, but let me know if you have suggestions to improve the code.
The BinaryCA class is an independent class to manage 2D CAs. To store the CA cells I based the core on the Array2 class from polygonal labs' AS3DS data structures library.
Cellular Automata... so retro!