CEC99 SPECIAL SESSIONS

 

Session: Theory and Foundation of Evolutionary Computation – 5 Sessions

 

Organizer:

David Fogel

Natural Selection Inc., La Jolla, CA 92037

fogel@ece.ucsd.edu

 

1. "Characterizations of Trajectory Structure of Fitness Landscapes Based on Pairwise Transition Probabilities of Solutions"

 

Mark Jelasity, Boglarka Toth, and Tamas Vinko

 

Characterizing the trajectory structure of fitness landscapes is a major problem in EC theory. In this paper a hardness measure of fitness landscapes is introduced which is based on statistical properties of trajectories. These properties are approximated with the help of a heuristic based on the transition probabilities between the elements of the search space. This makes it possible to compute the measure for some well-known functions (a ridge function, a long path function, and a fully deceptive function) and for a combinatorial problem, the subset sum problem. Using the same transition probabilities, the expected number of evaluations needed to reach the global optimum from any point in the space is approximated and examined for the above problems.
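The expected-evaluations computation described in the abstract can be illustrated with a standard Markov-chain calculation. The sketch below is a toy example, not the paper's heuristic: the 3-state transition matrix is invented for illustration, and expected first-hitting times of the optimum are obtained by solving a small linear system.

```python
import numpy as np

def expected_hitting_times(P, optimum):
    """Expected number of transitions to first reach `optimum` from
    every state, given a row-stochastic transition matrix P.
    Solves (I - Q) E = 1 over the non-optimal states."""
    n = P.shape[0]
    others = [i for i in range(n) if i != optimum]
    Q = P[np.ix_(others, others)]  # transitions among non-optimal states
    E = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
    full = np.zeros(n)
    for k, i in enumerate(others):
        full[i] = E[k]
    return full

# Toy 3-state chain; state 2 is the (absorbing) global optimum.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])
times = expected_hitting_times(P, optimum=2)
```

In the setting of the paper, the transition matrix would be derived from the search operator and the landscape rather than written down by hand.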

 

2. "Aggregating Models of Evolutionary Algorithms"

 

William M. Spears

 

The standard difficulty in modeling evolutionary algorithms (EAs) lies in finding the correct level of granularity. If the granularity is too coarse, the model has poor predictive ability. If the granularity is too fine, the model is generally intractable. A solution to this dilemma is to find methods for aggregating fine-grained models of EAs -- the aggregation removes unnecessary detail from the models, producing simpler models with good predictive accuracy. This paper summarizes two useful aggregation techniques for EA models that introduce little or no loss of accuracy.

 

3. "Some Observations on the Interaction of Recombination and Self-Adaptation in Evolution Strategies"

 

Lothar Grunz and Hans-Georg Beyer

 

The performance of the multirecombinant (mu/mu,lambda)-ES with sigma-self-adaptation (sSA) is investigated on the sphere model. The investigation includes the computation of the maximal performance an ES using recombination can reach when the mutation strength is optimally adjusted during the whole evolution. The comparison between the strategies with and without sSA shows that self-adaptation is not always able to drive the ES into its optimal working regime, although it still guarantees linear convergence order. The static and dynamic aspects of self-adaptation are discussed and it is shown that the learning parameter has an appreciable influence on the progress rate.
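For readers unfamiliar with the setup, a minimal (mu/mu,lambda)-ES with log-normal sigma-self-adaptation on the sphere model might look as follows. This is a generic sketch with assumed parameter values (`tau` is the learning parameter discussed above), not the paper's experimental code.

```python
import numpy as np

def sphere(x):
    return float(np.dot(x, x))

def sa_es(n=10, mu=3, lam=12, gens=300, tau=None, seed=0):
    """Minimal (mu/mu, lambda)-ES with intermediate recombination and
    log-normal sigma-self-adaptation, minimizing the sphere function."""
    rng = np.random.default_rng(seed)
    tau = tau if tau is not None else 1.0 / np.sqrt(n)  # learning parameter
    x, sigma = rng.normal(size=n), 1.0
    for _ in range(gens):
        # Mutate the strategy parameter first, then the object variables.
        sigmas = sigma * np.exp(tau * rng.normal(size=lam))
        xs = x + sigmas[:, None] * rng.normal(size=(lam, n))
        fits = np.array([sphere(xi) for xi in xs])
        best = np.argsort(fits)[:mu]                    # comma selection
        # Intermediate recombination of both x and sigma.
        x, sigma = xs[best].mean(axis=0), sigmas[best].mean()
    return x, sigma

x, sigma = sa_es()
```

With these settings the strategy converges linearly toward the optimum at the origin while sigma shrinks along with the residual distance.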

 

4. "Self-Adaptation and Global Convergence: A Counter-Example"

 

Guenter Rudolph

 

The self-adaptation of the mutation distribution is a distinguishing feature of evolutionary algorithms that optimize over continuous variables. It is widely recognized that self-adaptation accelerates the search for optima and enhances the ability to locate optima accurately, but it is generally unclear whether these optima are global or not. Here, it is proven that the probability of convergence to the global optimum is less than one in general -- even if the objective function is continuous.

 

5. "Emphasizing Extinction in Evolutionary Programming"

 

Garrison Greenwood, Gary B. Fogel, and Manuel Ciobanu

 

Evolutionary programming uses tournament selection to choose parents for reproduction. Tournaments naturally emphasize survival. However, a natural opposite of survival is extinction, and a study of the fossil record shows extinction plays a key role in the evolutionary process. This paper presents a new evolutionary algorithm that emphasizes extinction to conduct search operations over a fitness landscape.
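As a rough illustration of the two selection pressures being contrasted, the sketch below implements a standard EP-style stochastic tournament (survival emphasis) alongside a naive "extinction" step that simply eliminates the worst fraction of the population. The extinction step is an invented stand-in for illustration, not the authors' algorithm.

```python
import random

def ep_tournament_scores(fitnesses, q=5, rng=random):
    """EP-style stochastic tournament: each individual meets q random
    opponents and scores a win for each opponent it beats
    (minimization: lower fitness wins)."""
    scores = []
    for f in fitnesses:
        opponents = [rng.choice(fitnesses) for _ in range(q)]
        scores.append(sum(f <= o for o in opponents))
    return scores

def extinction_step(population, fitnesses, kill_fraction=0.25):
    """Illustrative 'extinction' counterpart: instead of selecting
    survivors by wins, remove the worst-scoring fraction outright."""
    ranked = sorted(zip(fitnesses, population))
    keep = ranked[: int(len(ranked) * (1 - kill_fraction))]
    return [p for _, p in keep], [f for f, _ in keep]

random.seed(1)
pop = list(range(8))
fits = [1.0, 0.5, 2.0, 0.1, 3.0, 0.9, 1.5, 0.2]
survivors, surv_fits = extinction_step(pop, fits, kill_fraction=0.25)
```

Tournament wins reward the fittest individuals; the extinction step instead removes the least fit, which is the complementary pressure the paper builds on.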

 

6. "A Performance Analysis of Evolutionary Pattern Search with Generalized Mutation Steps"

 

William Hart and Keith Hunter

 

Evolutionary pattern search algorithms (EPSAs) are a class of evolutionary algorithms (EAs) that have convergence guarantees on a broad class of nonconvex continuous problems. In previous work we have analyzed the empirical performance of EPSAs. This paper revisits that analysis and extends it to a more general model of mutation. We experimentally evaluate how the choice of the set of mutation offsets affects optimization performance for EPSAs. Additionally, we compare EPSAs to self-adaptive EAs with respect to robustness and rate of optimization. All experiments employ a suite of test functions representing a range of modalities and numbers of minima.
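The mutation-offset idea can be sketched with an ordinary deterministic pattern search over a fixed generating set of offsets. This is a simplification for illustration only; the EPSAs of the paper are stochastic and population-based.

```python
import numpy as np

def pattern_search(f, x0, offsets, step=1.0, shrink=0.5, tol=1e-6,
                   max_iter=1000):
    """Minimal pattern search: try the current point plus each offset in
    a fixed generating set; accept the first improvement, otherwise
    contract the step size."""
    x, fx = np.asarray(x0, float), f(x0)
    for _ in range(max_iter):
        improved = False
        for d in offsets:
            trial = x + step * np.asarray(d, float)
            ft = f(trial)
            if ft < fx:
                x, fx, improved = trial, ft, True
                break
        if not improved:
            step *= shrink              # no offset helped: contract
            if step < tol:
                break
    return x, fx

# Coordinate offsets +-e_i form the classic generating set in R^2.
offsets = [(1, 0), (-1, 0), (0, 1), (0, -1)]
x, fx = pattern_search(lambda v: float(np.dot(v, v)), [3.0, -2.0], offsets)
```

Changing `offsets` to a different generating set is the direct analogue of the "generalized mutation steps" studied in the paper.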

 

7. "On the Real Arity of Multiparent Recombination"

 

I. G. Sprinkhuizen-Kuyper, C. A. Schippers, and A. E. Eiben

 

In recent years several papers have reported experimental results for multi-parent recombination operators, looking at the effects of using more parents. Tacitly, these studies assume that the number of parents (the arity of the given recombination operator) indicates how many old individuals contribute to a new one by passing their genetic information to it. In this article we point out that this assumption is not valid for a number of well-known recombination operators and distinguish parents from donors, the latter being those parents that actually deliver information to the offspring. We perform a mainly theoretical analysis of the number of donors. Experimental results are provided to support the theoretical estimates and predictions.

 

8. "Learning Landscapes: Regression on Discrete Spaces"

 

William G. Macready and Bennett S. Levitan

 

It is often useful to be able to reconstruct landscapes from a set of data points sampled from the landscape. Neural networks and other supervised learning techniques can accomplish this task but typically do not exploit the metric structure of discrete input spaces. In this paper we propose a new method based on Gaussian processes which reconstructs landscapes over discrete spaces from data sampled from the landscape and, optionally, prior beliefs about the correlation structure of the landscape. In addition to speeding up costly fitness evaluations, the method can be used to characterize landscapes in terms of a small set of easily interpretable quantities.
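A minimal version of the idea, assuming an exponential kernel on Hamming distance over bit strings (an illustrative choice, not necessarily the covariance structure the authors use), can be written as:

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def gp_predict(X, y, X_star, corr_len=1.0, noise=1e-6):
    """Gaussian-process posterior mean with kernel
    k(a, b) = exp(-d_H(a, b) / corr_len), exploiting the metric
    structure of a discrete (bit-string) input space."""
    k = lambda a, b: np.exp(-hamming(a, b) / corr_len)
    K = np.array([[k(a, b) for b in X] for a in X]) + noise * np.eye(len(X))
    Ks = np.array([[k(a, b) for b in X] for a in X_star])
    alpha = np.linalg.solve(K, y)
    return Ks @ alpha                   # posterior mean at X_star

# Fit to samples of a simple landscape: f(x) = number of ones (ONEMAX).
X = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]
y = np.array([0.0, 1.0, 2.0, 3.0])
mean = gp_predict(X, y, X_star=[(0, 1, 1)])
```

The correlation length `corr_len` plays the role of a prior belief about how quickly fitness decorrelates with Hamming distance, which is the kind of interpretable quantity the abstract alludes to.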

 

9. "The Deterministic Genetic Algorithm: Implementation Details and Some Results"

 

Ralf Salomon

 

Recent literature on genetic algorithms provides a controversial discussion of the efficiency of this particular class of randomized optimization procedures; despite several encouraging empirical results, recent theoretical analyses have argued that in most cases, the runtime of genetic algorithms is increased by at least a factor of ln(n), with n denoting the number of parameters to be optimized. It has been argued that these inefficiencies are due to intrinsic resampling effects. As a result of these theoretical considerations, a deterministic genetic algorithm has been suggested as a theoretical concept. Since it was proposed, informal discussions have arisen concerning some implementation details as well as efficacy issues. Since some implementation details are a bit tricky, this paper discusses some of them in a pseudo programming language similar to PASCAL or C. In addition, this paper presents two possible variants in detail and compares their runtime behavior with another fairly established procedure, the breeder genetic algorithm. It turns out that on widely used test functions, the deterministic variants scale strictly better. Furthermore, this paper discusses some specific fitness functions on which randomized algorithms yield better worst-case expectations than deterministic algorithms; both types, however, require constant time on average, i.e., one function evaluation.

 

10. "Effective Fitness Landscapes for Evolutionary Systems"

 

Chris Stephens

 

In evolution theory the concept of a fitness landscape has played an important role, evolution itself being portrayed as a hill-climbing process on a rugged landscape. In this article it is shown that in general, in the presence of other genetic operators such as mutation and recombination, hill-climbing is the exception rather than the rule. This discrepancy can be traced to the different ways that the concept of fitness appears -- as a measure of the number of fit offspring, or as a measure of the probability to reach reproductive age. Effective fitness models the former, not the latter, and gives an intuitive way to understand population dynamics as flows on an effective fitness landscape when genetic operators other than selection play an important role. The efficacy of the concept is shown using several simple analytic examples and also some more complicated cases illustrated by simulations.

 

11. "An Examination of Building Block Dynamics in Different Representations"

 

Annie S. Wu and Kenneth A. De Jong

 

We compare the traditional, fixed genetic algorithm (GA) representation scheme with a floating representation scheme and examine the differences in building block dynamics and how these differences affect a GA's ability to balance exploration and exploitation of building blocks. This study examines both the overall performance of a GA and the detailed events that contribute to overall behavior. Results indicate that the floating representation allows a GA to maintain a higher level of construction which results in a more diverse population from which to build solutions.

 

12. "The Futility of Programming Computers by Means of Natural Selection"

 

Terry Fogarty

 

Abstract Forthcoming

 

13. "The Time Complexity of Maximum Matching by Evolutionary Programming"

 

Xin Yao

 

Abstract Forthcoming

 

14. "Adaptive Genetic Algorithms -- Modeling and Convergence"

 

Alexandru Agapie

Head of Computational Intelligence Laboratory

Institute of Microtechnology

Bucharest, 72225, P.O. Box 38-160, Romania

e-mail: agapie@oblio.imt.pub.ro

agapie_a@hotmail.com

http://www.imt.ro/

 

The paper presents a new mathematical analysis of GAs; we propose the use of random systems with complete connections (RSCC), a non-trivial extension of the Markovian dependence, accounting for the complete, rather than recent, history of a stochastic evolution. As far as we know, this is the first theoretical modeling of an adaptive GA. First we introduce the RSCC model of a p_m-adaptive GA, then we prove that a "classification of states" is still valid for our model, and finally we derive a convergence condition for the algorithm.

 

15. Colin Reeves -- Title and Abstract Forthcoming

 

16. "Problem Perturbation: Implications on Fitness Landscapes"

 

Worthy Martin

 

Abstract Forthcoming

 

17. "FDA - An Evolutionary Algorithm for Additively Decomposed Functions"

 

Heinz Muehlenbein and Thilo Mahnig

 

FDA - the Factorized Distribution Algorithm - is an evolutionary algorithm which combines mutation and recombination by using a distribution instead. First the distribution is estimated from a set of selected points; it is then used to generate new points for the next generation. In general a distribution defined over n binary variables has 2^n parameters and is therefore too expensive to compute. For additively decomposed discrete functions (ADFs) there exists an algorithm which factorizes the distribution into conditional and marginal distributions, each of which can be computed in polynomial time. The scaling of FDA is investigated theoretically and numerically. The scaling depends on the ADF structure and the specific assignment of function values. Difficult functions on a chain or a tree structure are optimized in about O(n*sqrt(n)) function evaluations. Standard genetic algorithms are unable to optimize these functions. FDA is not restricted to exact factorizations; it also works for approximate factorizations.
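For a chain-structured ADF, the factorization p(x) = p(x_1) * prod_i p(x_i | x_{i-1}) can be estimated from the selected points and then sampled from directly. The sketch below is a toy illustration with Laplace smoothing, not the FDA implementation.

```python
import random

def fit_chain_model(selected):
    """Estimate a chain factorization p(x) = p(x1) * prod p(x_i | x_{i-1})
    from a set of selected binary vectors (Laplace-smoothed counts)."""
    n = len(selected[0])
    p1 = (sum(s[0] for s in selected) + 1) / (len(selected) + 2)
    cond = []  # cond[i-1][prev] = p(x_{i+1} = 1 | x_i = prev)
    for i in range(1, n):
        table = {}
        for prev in (0, 1):
            rows = [s for s in selected if s[i - 1] == prev]
            table[prev] = (sum(s[i] for s in rows) + 1) / (len(rows) + 2)
        cond.append(table)
    return p1, cond

def sample_chain(p1, cond, rng=random):
    """Generate one new point by sampling the chain left to right."""
    x = [int(rng.random() < p1)]
    for table in cond:
        x.append(int(rng.random() < table[x[-1]]))
    return x

random.seed(0)
selected = [[1, 1, 1], [1, 1, 0], [1, 1, 1], [0, 1, 1]]
p1, cond = fit_chain_model(selected)
new_point = sample_chain(p1, cond)
```

Each factor involves only a constant number of variables, which is why the whole model can be estimated and sampled in polynomial time, in contrast to the 2^n-parameter joint distribution.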

 

18. Emanuel Falkenauer -- Title and Abstract Forthcoming

 

19. "On Some Difficulties in Local Evolutionary Search"

 

Hans-Michael Voigt

 

In this paper we consider the very simple problem of optimizing a stationary unimodal function over R^n without using analytical gradient information. There exist numerous algorithms for this problem, from mathematical programming to evolutionary algorithms. Here we have a closer look at advanced evolution strategies (GSA, CMA), the evolutionary gradient search algorithm (EGS), local search enhancement by random memorizing (LSERM), and the simple (1+1)-evolution strategy. These approaches show different problem-solving capabilities for different test functions. We introduce different measures which reflect certain aspects of what might be seen as the problem difficulty. Based on these measures it is possible to characterize the weak and strong points of the mentioned approaches, which may lead to even more advanced algorithms.

 

20. "Some Information Theoretic Results on Evolutionary Optimization"

 

Thomas M. English

Tom English Computing Innovations

2401 45th Street, Apt. 30

Lubbock, Texas 79412 USA

 

The body of theoretical results regarding conservation of information ("no free lunch") in optimization has not been related directly to evolutionary computation. Prior work has assumed that an optimizer traverses a sequence of points in the domain of a function without revisiting points. The present work reduces the gap between theory and practice by a) allowing points to be revisited, b) reasoning about the set of visited points instead of the sequence, and c) considering the impact of bounded memory and revisited points upon optimizer performance. Fortuitously, this leads to clarification of the fundamental results in conservation of information. Although most work in this area emphasizes the futility of attempting to design a generally superior optimizer, the present work highlights possible constructive use of the theory in restricted problem domains.