L-Brain publications

Palmer, M.E. "Evolved Neurogenesis and Synaptogenesis for Robotic Control: The L-brain Model," in Genetic and Evolutionary Computation Conference. Dublin, Ireland, 2011. (PDF) (videos)

Abstract: We have developed a novel method to “grow” neural networks according to an inherited set of production rules (the genotype), inspired by Lindenmayer systems. In the first phase (neurogenesis), the neurons proliferate in three-dimensional space by cell division, and differentiate in function, according to the production rules. In the second phase (synaptogenesis), axons emerge from the neurons and seek out connection targets. Part of each production rule is an augmented Reverse Polish Notation expression; this permits regulation of the applicable rules, as well as introduction of spatial and temporal context to the developmental process. We connect each network to a (fixed) robotic body with a set of input sensors and muscle actuators. The robot is placed in a physically simulated environment and controlled by its network for a certain time, receiving a fitness score according to its behavior (the phenotype). Mutations are introduced into offspring by making changes to their sets of production rules. This paper introduces the “L-brain” developmental method, and describes our first experiments with it, which produced controllers for robotic “spiders” with the ability to gallop, and to follow a compass heading.
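The two mechanisms in this abstract can be illustrated with a minimal sketch: a parallel rewriting step over a string of cell symbols (the L-system-inspired "growth"), and a tiny Reverse Polish Notation evaluator of the kind that could gate which rules apply based on spatial or temporal context. All names and rule contents below are invented for illustration and are not taken from the paper.

```python
def rewrite(symbols, rules, steps):
    """Apply production rules to every symbol in parallel, `steps` times.

    Symbols with no matching rule are carried through unchanged.
    """
    for _ in range(steps):
        symbols = [s for sym in symbols for s in rules.get(sym, [sym])]
    return symbols

def eval_rpn(tokens, context):
    """Evaluate a Reverse Polish Notation expression.

    Tokens are operators, variable names (looked up in `context`),
    or numeric literals.
    """
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b,
           ">": lambda a, b: float(a > b)}
    stack = []
    for tok in tokens:
        if tok in ops:
            b, a = stack.pop(), stack.pop()
            stack.append(ops[tok](a, b))
        elif tok in context:
            stack.append(context[tok])
        else:
            stack.append(float(tok))
    return stack.pop()

# A hypothetical rule set: cell "A" divides into an "A" and a
# differentiated daughter "B"; "B" is terminal.
cells = rewrite(["A"], {"A": ["A", "B"]}, 3)   # → ['A', 'B', 'B', 'B']

# A rule could be gated on spatial context, e.g. "apply only where x > 2":
applicable = eval_rpn(["x", "2", ">"], {"x": 3.0})  # → 1.0 (true)
```

In the paper's scheme the expressions also regulate which rules are applicable and carry temporal context; this sketch shows only the evaluation mechanics.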

Palmer, M.E., and Chou, A. "Evolved neural network controllers for physically simulated robots that hunt with an artificial visual cortex," in Artificial Life XIII. East Lansing, MI, 2012. (PDF) (videos)

Abstract: Using a rule-based system for growing artificial neural networks, we have evolved controllers for physically simulated robotic “spiders”. The controllers take their input from an “artificial retina” that senses other spiders and inanimate barrier objects in the environment, and must provide output to dynamically control the 18 degrees of freedom of the six legs of the robot every time step. We perform evolutionary runs with two species of spider that interact in simulation with each other and with inanimate barrier objects. One species (the “predator”) is selectively rewarded for “eating” (by physically colliding with) the other species, and the other (the “prey”) is selectively penalized for being caught, and rewarded for “eating” the barriers. The two species evolve complex running gaits, with control inputs coming from their retinas that produce hunting or avoidance behavior. We suggest that predator-prey frequency-dependent selection can provide a relatively long-term genetic memory of previously searched regions of phenotype space, enforcing a form of novelty search that may reduce duplicated evolutionary search effort.
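The asymmetric selective scheme described here can be sketched in a few lines. This is a hypothetical scoring function over an episode's collision log, not the authors' code; the event format and reward values are invented for illustration.

```python
def score_episode(collisions, reward_catch=1.0, penalty_caught=1.0,
                  reward_barrier=0.5):
    """Score one predator and one prey from an episode's collision events.

    `collisions` is a list of (agent, target) events, e.g.
    ("predator", "prey") for a capture, or ("prey", "barrier")
    for the prey "eating" a barrier.
    """
    predator_fitness = 0.0
    prey_fitness = 0.0
    for agent, target in collisions:
        if agent == "predator" and target == "prey":
            predator_fitness += reward_catch   # predator rewarded for eating prey
            prey_fitness -= penalty_caught     # prey penalized for being caught
        elif agent == "prey" and target == "barrier":
            prey_fitness += reward_barrier     # prey rewarded for eating barriers
    return predator_fitness, prey_fitness

events = [("predator", "prey"), ("prey", "barrier"), ("prey", "barrier")]
# → (1.0, 0.0): one capture for the predator; for the prey, two barriers
#   at 0.5 each minus one capture penalty of 1.0.
```

Because each species' fitness depends on the other's current behavior, strategies the opponent has already learned to counter stay penalized, which is the frequency-dependent "genetic memory" the abstract argues for.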