Quick Tour

A conversational introduction to Darwin2k
What follows is slightly stream-of-consciousness, but hopefully it fills in most of the big picture about what Darwin2K is, how it's organized, and how to use it.

Overview

There are several ways of using Darwin2K, depending on what you're doing with it: you can just do simulation, or you can do simulation and synthesis. In either case, the first step is to figure out what exactly you want the robot to do, how to represent the task in simulation, and how to control the robot as it completes the task. The next step is to implement the necessary special-purpose classes for your task and combine them with Darwin2K's existing classes to create a simulation. If you're doing synthesis, you'll also need to specify a number of performance metrics and the set of modules from which configurations will be built, and then start running the synthesizer.

There are four general groups of binaries:

All of the demos are obviously standalone simulations, some using specialized programs (such as 'ctestGL', which is tailored to a specific demo), and some using the general-purpose simulation shell program (d2kSimGL). The pm and pmStandalone programs take as input a description of the metrics used to measure performance and either one or more 'kernel' configurations or an entire population of configurations (including performance measurements from a previous run). The process of creating a simulation for use with the d2kSimGL/evStandaloneGL (which are very similar) and evaluator programs is described first, followed by a general overview of using the PM.

Simulating and evaluating configurations

d2kSimGL and evStandaloneGL are the programs to use when creating a new task for which a robot will be evolved. d2kSimGL offers general simulation capabilities and can simulate multiple configurations simultaneously, whereas evStandaloneGL incorporates metrics and is meant for preparing simulations for use with the synthesizer. So, if you're just interested in simulation, you will probably use d2kSimGL, and if you're interested in synthesis, you'll probably use evStandaloneGL.

Since Darwin2K's synthesis capabilities are focused on mechanical configuration, not controller or task evolution, you need to have an idea of what the task will be and how the robot will be controlled, and d2kSimGL and evStandaloneGL let you refine the task and controller before starting an actual synthesis run. There's a bit of a chicken-and-egg problem: for many tasks, you'll need to create a configuration which can complete the task to some degree so that you can make sure the task representation (trajectories, etc.) and controller are correct. However, the configuration you use for this doesn't have to meet all the performance requirements of the application you're designing for.

The generality of the controller you're using plays a big part in the amount of pre-synthesis work you have to do: if you're just making a manipulator that follows a trajectory, the sriController does a good job with a huge variety of manipulators, and it's thus usually quite easy to specify the trajectory (or trajectories) and obstacles, set up your performance metrics, and turn the synthesizer loose. For other cases, you might need a specialized controller, and in order to create the controller you've got to have some kind of nominal configuration to work with while writing and debugging it. This is where d2kSimGL and evStandaloneGL come in.

You can give evStandaloneGL (or evStandalone, which does the same simulation but without a graphical display) three file arguments: an eval.p file containing the simulation parameters, a pm.p file describing the performance metrics, and a file describing the configuration to be simulated.

d2kSimGL's arguments are similar, but allow multiple configurations and there's no pm.p file:

d2kSimGL <eval.p file> <config file> [<config file> ...]
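For example, assuming the argument order shown above (the file names here are purely illustrative), a run with two candidate configurations might look like:

d2kSimGL eval.p arm1.cfg arm2.cfg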

The parameter files (*.p) look basically like C variable declarations, with the construct

#section
int var1 = 1;
double var2 = 0.02 * 2;
...

denoting a primitive sort of scoping: different parts of Darwin2K will look first in appropriately named sections for their parameters, and will then look in the global scope (all variables before the first #section declaration) if the variable isn't in the local scope. Each 'evaluator', 'd2kSimulator', or 'd2kComponent' derived class has a method called readParams which reads class-dependent variables from a section in the p-file. The d2kSimulator (from which the evaluator class is derived) serves as a central coordinator for the simulation and instantiates the basic simulation classes (e.g. a dynamicSystem) needed for simulation. d2kComponents and derived classes are more specific, implementing a certain type of controller or simulation capability (e.g. representing terrain geometry and soil properties).
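To make the lookup order concrete, here is a minimal, self-contained C++ sketch of that two-level scoping. It is not Darwin2K's actual parameter-reading code, and the variable and section names are made up; it only illustrates the section-first, then-global resolution described above.

#include <iostream>
#include <map>
#include <string>

// Illustrative only: a section-scoped parameter table with fallback to globals.
struct ParamFile {
  std::map<std::string, double> globals;                          // variables before the first #section
  std::map<std::string, std::map<std::string, double>> sections;  // variables inside each #section

  // Look in the named section first, then fall back to the global scope.
  bool lookup(const std::string& section, const std::string& name, double& value) const {
    auto s = sections.find(section);
    if (s != sections.end()) {
      auto v = s->second.find(name);
      if (v != s->second.end()) { value = v->second; return true; }
    }
    auto g = globals.find(name);
    if (g != globals.end()) { value = g->second; return true; }
    return false;  // not found in either scope
  }
};

int main() {
  ParamFile p;
  p.globals["timestep"] = 0.01;                   // hypothetical global default
  p.sections["myEvaluator"]["timestep"] = 0.005;  // hypothetical section-local override
  double dt;
  if (p.lookup("myEvaluator", "timestep", dt))
    std::cout << "myEvaluator sees timestep = " << dt << "\n";  // prints 0.005
  return 0;
}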

For most applications, you'll need some sort of application-specific robot modules or simulation components. If you need a specialized task representation, you derive a class from the 'evaluator' class which performs high-level flow control for your simulation: for example, the roverEvaluator class is set up to generate different terrains and run a rover over each one, while the pathEvaluator makes one or more manipulators follow a series of trajectories. Other components (e.g. a new controller) are derived from within the d2kComponent class hierarchy. New classes that you create are packaged into a plugin, which is just a dynamic library.
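Once loaded, those new classes have to be findable by name at runtime (this is what "adding classes to the runtime database of simulation components", described below, amounts to). As a rough mental model only (not Darwin2K's actual API, and every name here is hypothetical), the pattern is a registry mapping class names to factory functions:

#include <functional>
#include <iostream>
#include <map>
#include <memory>
#include <string>
#include <utility>

// Illustrative only: a minimal name-to-factory registry, standing in for the
// runtime database of simulation components described in the text.
struct Component {
  virtual ~Component() = default;
  virtual const char* name() const = 0;
};

using Factory = std::function<std::unique_ptr<Component>()>;

static std::map<std::string, Factory>& registry() {
  static std::map<std::string, Factory> r;
  return r;
}

// A plugin's init function would call something like this for each class it defines.
void registerComponent(const std::string& classname, Factory f) {
  registry()[classname] = std::move(f);
}

// The simulator can then create components by name, e.g. from '#componentN' sections.
std::unique_ptr<Component> createComponent(const std::string& classname) {
  auto it = registry().find(classname);
  if (it == registry().end()) return nullptr;
  return it->second();
}

struct myController : Component {
  const char* name() const override { return "myController"; }
};

int main() {
  registerComponent("myController", [] { return std::make_unique<myController>(); });
  auto c = createComponent("myController");
  std::cout << (c ? c->name() : "not found") << "\n";  // prints "myController"
  return 0;
}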

So, let's say that you've created a new controller (of type "myController") and evaluator (of type "myEvaluator") for your specific application. These would be compiled into a separate dynamic library (called libMyEval.so), which is specified in the eval.p file and loaded at runtime. Let's say your evaluator is derived from the roverEvaluator, so you need that plugin too. Finally, let's say there's a function you need to call ("glDisplayInit") that is in the main program, not one of the libraries you're loading. In the eval.p file, you'd have the following sections:


char simType[80] = "myEvaluator";
char componentDBFilename[200] = "software/src/myEval/componentDB";
...

#libraries
char libraryName0[80] = "libsynEvalRover.so";
char libraryName1[80] = "libMyEval.so";
char initFunc1[80] = "myInitFunc";
char auxFunc0[80] = "glDisplayInit";
...

#myEvaluator
int foo = 1;
double bar = 2.0;
...

#component0
char classname[80] = "myController";
int enabled = 1;
char label[80] = "main controller";
...

#component1
...

This would cause both the rover library and your library to be loaded, and the function 'int myInitFunc(void)' would be called after loading your library, which would add the myEvaluator and myController classes to Darwin2K's runtime database of simulation components. Then, a myEvaluator object would be created, which will (through the inherited d2kSimulator methods) create all of the components listed in the '#componentN' sections, including a myController object. If myController needs to interact with some other simulation component, it can ask myEvaluator to return a list of all components with a certain label or class. myEvaluator would need to define an evaluateConfiguration method (or use one from its parent class), which contains the main simulation loop. In the simulation loop, you'd frequently call the applyMetrics() function, which will evaluate all of the metrics specified in the pm.p file. Some metrics only get evaluated once (such as the massMetric, which records the mass of a configuration), while some get evaluated at regular intervals during simulation (such as the powerMetric).
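The library-loading step at the start of this process is ordinary dynamic linking. As an illustration of the mechanism only (not Darwin2K's actual loading code), the following self-contained sketch uses POSIX dlopen/dlsym to open a shared library named in the #libraries section and call its init function by name:

#include <dlfcn.h>
#include <cstdio>

// Illustrative only: load a plugin and call its 'int initFunc(void)' by name,
// mirroring the libraryName1/initFunc1 entries shown in the eval.p example above.
int loadPlugin(const char* libName, const char* initFuncName) {
  void* handle = dlopen(libName, RTLD_NOW | RTLD_GLOBAL);
  if (!handle) {
    std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
    return -1;
  }
  using InitFunc = int (*)(void);
  InitFunc init = reinterpret_cast<InitFunc>(dlsym(handle, initFuncName));
  if (!init) {
    std::fprintf(stderr, "dlsym failed: %s\n", dlerror());
    return -1;
  }
  return init();  // e.g. myInitFunc() registers myEvaluator and myController
}

int main() {
  return loadPlugin("libMyEval.so", "myInitFunc");
}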

Synthesis

The PM relies on simulation to determine the performance of each configuration it creates, so you'll need to have a working simulation before you can use the PM. Additionally, it usually takes multiple runs of the PM to arrive at a satisfactory solution: GAs are notorious for finding loopholes in the problem specification, and it's hard to predict what constraints must be imposed on the problem in order to get realistic solutions.

The pm.p file specifies which metrics are used, the ranges and thresholds for each, and the grouping of the metrics. The general framework for metrics is covered in some detail in Chapter 3 of my thesis (or the book, if you've got that), but briefly, the designer should organize the metrics into groups based on priority. Some metrics are treated as requirements and have thresholds that must be met (e.g. having zero collisions with obstacles), while others are open-ended (e.g. minimize power as much as possible). The metrics that are requirements are optimized before the open-ended ones, since, for instance, a robot that doesn't do anything is obviously useless, but will have lower power consumption than one that successfully accomplishes a task. For reference, I've used between 5 and 13 metrics in the synthesis problems I've used Darwin2K for. You'll have to decide which metrics are appropriate for your application, as well as what the rough prioritization of metrics is. You don't have to specify detailed weights for the metrics, but you do have to specify which metrics go into each group. Typically, there are between 2 and 4 metrics in each group, with the most important requirements in group 0, the next most important requirements in group 1, and so on, with the open-ended metrics in the final group.
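As a conceptual sketch of what the grouping implies (this is not the PM's actual selection code, and the metric names and values below are made up), later groups only matter once every threshold in the earlier groups has been met:

#include <cstddef>
#include <cstdio>
#include <vector>

// Illustrative only: metrics grouped by priority, each with a threshold that
// must be met before later groups are considered. Values assume "smaller is
// better" (e.g. number of collisions, trajectory-following error).
struct Metric {
  double value;
  double threshold;
};

// Returns the index of the first group containing an unmet requirement,
// or groups.size() if every requirement in every group is satisfied.
std::size_t firstUnsatisfiedGroup(const std::vector<std::vector<Metric>>& groups) {
  for (std::size_t g = 0; g < groups.size(); ++g)
    for (const Metric& m : groups[g])
      if (m.value > m.threshold)
        return g;
  return groups.size();
}

int main() {
  std::vector<std::vector<Metric>> groups = {
    {{0.0, 0.0}},    // group 0: collisions (must be zero) -- satisfied
    {{0.12, 0.05}},  // group 1: trajectory error -- not yet satisfied
  };
  std::printf("optimize group %zu next\n", firstUnsatisfiedGroup(groups));  // prints 1
  return 0;
}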

The pm.p file also indicates where to get the starting configurations. For some problems, you can start with a trivial configuration, such as a manipulator end-effector attached to a manipulator base. In other cases, you might have a good idea what the design should look like and will thus specify a fairly complete solution and just let the PM modify certain parameters or modules. Finally, you may be starting with the final population from a previous run, if you add or change metrics or modify some aspect of the simulation but don't want to lose all the progress made by the PM on the previous run. For the first two cases (partially-specified solutions), the initial configurations are called the 'kernel' configurations, and the initial population is created by randomly applying genetic operators to the kernels. For the case where you're starting from a previous population, you can either re-evaluate the initial population if the metrics or simulation have changed, or you can just pick up where you left off if, say, the previous run ended because your computer crashed. Depending on how much freedom you're giving the PM in generating new configurations, you may also need to specify a module database, which indicates the set of modules that the PM is allowed to insert into new configurations. If you've fixed the topology of the kernels and are only allowing parametric variation, then you don't need a module database.
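For the kernel case described above, the seeding step can be pictured roughly like this (a sketch only; 'Config' and the operator below are placeholders, not Darwin2K types or operators): keep the kernels and fill out the rest of the initial population by applying randomly chosen genetic operators to them.

#include <cstddef>
#include <functional>
#include <random>
#include <string>
#include <vector>

// Illustrative only: build an initial population from one or more kernel
// configurations by applying randomly chosen operators.
using Config = std::string;
using Operator = std::function<Config(const Config&, std::mt19937&)>;

std::vector<Config> seedPopulation(const std::vector<Config>& kernels,
                                   const std::vector<Operator>& operators,
                                   std::size_t populationSize, unsigned seed) {
  std::mt19937 rng(seed);
  std::uniform_int_distribution<std::size_t> pickKernel(0, kernels.size() - 1);
  std::uniform_int_distribution<std::size_t> pickOp(0, operators.size() - 1);
  std::vector<Config> population = kernels;  // keep the kernels themselves
  while (population.size() < populationSize) {
    const Config& parent = kernels[pickKernel(rng)];
    population.push_back(operators[pickOp(rng)](parent, rng));
  }
  return population;
}

int main() {
  std::vector<Config> kernels = {"base+end-effector"};  // a trivial kernel, as in the text
  std::vector<Operator> ops = {
    [](const Config& c, std::mt19937&) { return c + "+link"; }  // stand-in for a real operator
  };
  std::vector<Config> pop = seedPopulation(kernels, ops, 8, 42);
  return pop.size() == 8 ? 0 : 1;
}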

As the PM crunches away, it periodically saves the population (including performance measurements), as well as separately saving the best configuration with respect to each metric. After a certain number of configurations have satisfied all of the performance thresholds within a group of requirements, the PM will move on to the next set of requirements, which entails beginning to use the new requirements in selecting configurations for reproduction. When the final set of metrics is reached, the PM continues to run until a timeout (in wall-clock time) is reached, or until a pre-set number of configurations have been created. Sometimes, the PM will not be able to satisfy all of the requirements in a group; this usually indicates one of two things: either the requirements themselves can't be met for the task as specified, or the space of configurations the PM is searching is too large and unfocused for it to find a good solution.

In the first case, you might have specified a mass requirement that just can't be met given the loads you want the robot to move or carry. In the second case, perhaps the only feasible solution is a four-wheeled rover, but you're letting the PM look at enormous manipulators, flying machines, and sixteen-legged walkers, so the search isn't focused enough to result in a good solution. There's a balance between the two extremes, and I don't have any hard-and-fast rules about how much prior expectation to encode in the metrics, module sets, and parameters. If you are certain that a particular feature (e.g. having four wheels and a manipulator) is required for success, then by all means include it; however, be careful not to be too restrictive with the allowable range of parameters or modules. Also keep in mind that using a bad controller in simulation can make even the best robot look like junk as far as the performance metrics are concerned. The need to devise an effective controller is probably the biggest single limitation in applying Darwin2K to new design problems, and I don't see any way around it without more computing power, which would make it possible to incorporate either seriously evolved controllers (rather than mere parametric optimization of a controller) or deliberative planners that can devise effective motion plans for a wide range of configurations and tasks.

So, let's say you've created the simulation and are using a reasonable controller, set of metrics and modules, and all the other stuff. The final output consists of three main data products: the final population (with its performance measurements), the best configuration found for each metric, and the set of Pareto-optimal configurations with respect to the final group of metrics.

evStandaloneGL can be used to observe the final configurations in simulation. It's up to the designer to choose the most appropriate configuration out of the Pareto-optimal set; all of the configurations in it are feasible, but have different trade-offs among the final set of metrics. Usually there are other metrics (e.g. cost or complexity) that aren't included in the synthesis process, but which are reasonably obvious to the designer and which can be used to make the decision about which of the Pareto-optimal configurations is most appropriate.
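For reference, "Pareto-optimal" has its usual multi-objective meaning here: a configuration is in the set if no other configuration is at least as good on every metric and strictly better on one. A minimal sketch of that dominance test, assuming every metric is something to be minimized (an assumption for illustration, not a statement about Darwin2K's conventions):

#include <cstdio>
#include <vector>

// Illustrative only: configuration A dominates B if A is no worse on every
// metric and strictly better on at least one (all metrics minimized here).
bool dominates(const std::vector<double>& a, const std::vector<double>& b) {
  bool strictlyBetterSomewhere = false;
  for (std::size_t i = 0; i < a.size(); ++i) {
    if (a[i] > b[i]) return false;  // worse on some metric: no domination
    if (a[i] < b[i]) strictlyBetterSomewhere = true;
  }
  return strictlyBetterSomewhere;
}

// The Pareto-optimal set is simply the configurations not dominated by any other.
int main() {
  std::vector<double> a = {1.0, 2.0}, b = {1.5, 2.0};
  std::printf("a dominates b: %d\n", dominates(a, b));  // prints 1
  return 0;
}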

So, that's the basic process. You create a simulation and any application-specific modules or components you need, then test things with a prototype configuration using evStandaloneGL, then specify the metrics and run pm (most likely multiple times). Pretty simple, eh? :)

As for what types of synthesis problems you're likely to be able to address on just one or two PCs, it will depend heavily on the complexity of the simulation and how much free rein you give the PM. I normally boil the simulation down to the bare minimum set of sub-simulations (e.g. different worst-case obstacles or terrain for a rover) to keep simulation time down, and supply as much a priori design knowledge to the PM as possible in the form of partially-specified configurations or parameter ranges. While you can have Darwin2K add or remove limbs entirely, or swap feet for wheels or grippers, the limiting factor is often one of control: using a wheeled rather than legged robot will likely require a vastly different controller, and Darwin2K isn't smart enough to create the controller from scratch. So, if you want to look at both wheels and legs (or two other disparate classes of designs), then you'll need to do two separate synthesis runs and compare the final configurations from each to determine which is more appropriate.

So, that's a five-minute intro to using Darwin2K. Chances are, you'll want something specific that hasn't been implemented already, but it's also likely that a lot of what you need is already there. After you've gotten a feel for the demos and what classes are already there, take a look at Making Plugins. While you can add your code straight into Darwin2K, this makes upgrading to a newer version much more difficult. Plugins let you add your own code in an encapsulated manner so you can upgrade the core Darwin2K distribution without worrying about wiping out your changes or having to patch them into the updated files. Good luck!

See Also:
Making Plugins
d2kComponent
d2kSimulator
