Differential evolution

In evolutionary computation, differential evolution (DE) is a method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. Such methods are commonly known as metaheuristics as they make few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. However, metaheuristics such as DE do not guarantee an optimal solution is ever found.

DE is used for multidimensional real-valued functions but does not use the gradient of the problem being optimized, which means DE does not require the optimization problem to be differentiable, as is required by classic optimization methods such as gradient descent and quasi-Newton methods. DE can therefore also be used on optimization problems that are not even continuous, are noisy, change over time, etc.[1]

DE optimizes a problem by maintaining a population of candidate solutions and creating new candidate solutions by combining existing ones according to its simple formulae, and then keeping whichever candidate solution has the best score or fitness on the optimization problem at hand. In this way the optimization problem is treated as a black box that merely provides a measure of quality given a candidate solution and the gradient is therefore not needed.
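
In code, this black-box view amounts to nothing more than a mapping from a candidate solution to a score. The following Java fragment is a minimal sketch of that idea; the interface and method names are illustrative and not taken from any particular DE library.

//The optimizer only ever calls cost(candidate); no gradient information is exposed.
interface BlackBoxProblem {
        //lower values are better when minimizing
        double cost(double[] candidate);
}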

DE is originally due to Storn and Price.[2][3] Books have been published on theoretical and practical aspects of using DE in parallel computing, multiobjective optimization, and constrained optimization; these books also contain surveys of application areas.[4][5][6]

Algorithm

A basic variant of the DE algorithm works by having a population of candidate solutions (called agents). These agents are moved around in the search-space by using simple mathematical formulae to combine the positions of existing agents from the population. If the new position of an agent is an improvement, it is accepted and forms part of the population; otherwise the new position is simply discarded. The process is repeated and by doing so it is hoped, but not guaranteed, that a satisfactory solution will eventually be discovered.

Formally, let f: \Bbb{R}^n \to \Bbb{R} be the cost function which must be minimized or fitness function which must be maximized. The function takes a candidate solution as argument in the form of a vector of real numbers and produces a real number as output which indicates the fitness of the given candidate solution. The gradient of f is not known. The goal is to find a solution m for which f(m) \leq f(p) for all p in the search-space, which would mean m is the global minimum. Maximization can be performed by considering the function h := -f instead.
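
As a concrete example, the sphere benchmark (which also appears in the performance-landscape figure below) is one such cost function. The Java sketch below is illustrative only; the class and method names are chosen here rather than taken from the article.

//Sphere benchmark: f(x) = x_1^2 + ... + x_n^2, minimized at the origin where f = 0.
class SphereCost {
        static double f(double[] x) {
                double sum = 0.0;
                for (double xi : x) {
                        sum += xi * xi;
                }
                return sum;
        }
        //Maximizing a fitness function g is equivalent to minimizing h := -g.
}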

Let \mathbf{x} \in \Bbb{R}^n designate a candidate solution (agent) in the population. The basic DE algorithm can then be described as follows:

  • Initialize all agents \mathbf{x} with random positions in the search-space.
  • Until a termination criterion is met (e.g. number of iterations performed, or adequate fitness reached), repeat the following:
    • For each agent \mathbf{x} in the population do:
      • Pick three agents \mathbf{a},\mathbf{b}, and \mathbf{c} from the population at random; they must be distinct from each other as well as from agent \mathbf{x}.
      • Pick a random index R \in \{1, \ldots, n\} (n being the dimensionality of the problem to be optimized).
      • Compute the agent's potentially new position \mathbf{y} = [y_1, \ldots, y_n] as follows:
        • For each i, pick a uniformly distributed number r_i \sim U(0,1)
        • If r_i < \text{CR} or i = R then set y_i = a_i + F \times (b_i-c_i) otherwise set y_i = x_i
        • (In essence, the new position is the outcome of binary crossover of agent \mathbf{x} with the intermediate agent \mathbf{z} = \mathbf{a} + F \times (\mathbf{b}-\mathbf{c}).)
      • If f(\mathbf{y}) < f(\mathbf{x}) then replace the agent in the population with the improved candidate solution, that is, replace \mathbf{x} with \mathbf{y} in the population.
  • Pick the agent from the population that has the highest fitness or lowest cost and return it as the best found candidate solution.

Note that F \in [0,2] is called the differential weight and \text{CR} \in [0,1] is called the crossover probability; both of these parameters, along with the population size \text{NP} \geq 4, are selectable by the practitioner (see below).
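
A compact Java sketch of this per-agent update step is given below. It assumes candidate solutions are stored as double arrays and that a, b and c have already been chosen distinct from x; the class and method names are illustrative rather than taken from the cited works.

import java.util.Random;

class DEStep {
        //One DE update for agent x, given three distinct other agents a, b and c.
        //F is the differential weight and CR the crossover probability.
        static double[] candidate(double[] x, double[] a, double[] b, double[] c,
                                  double F, double CR, Random rng) {
                int n = x.length;
                double[] y = new double[n];
                int R = rng.nextInt(n);                //index whose component always comes from the donor
                for (int i = 0; i < n; i++) {
                        double r = rng.nextDouble();   //r_i ~ U(0,1)
                        if (r < CR || i == R) {
                                y[i] = a[i] + F * (b[i] - c[i]);   //donor component
                        } else {
                                y[i] = x[i];                       //keep the current component
                        }
                }
                return y;   //y replaces x in the population only if f(y) < f(x)
        }
}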

Parameter selection

Performance landscape showing how the basic DE performs in aggregate on the Sphere and Rosenbrock benchmark problems when varying the two DE parameters \text{NP} and F, while keeping \text{CR}=0.9 fixed.

The choice of DE parameters F, \text{CR} and \text{NP} can have a large impact on optimization performance. Selecting the DE parameters that yield good performance has therefore been the subject of much research. Rules of thumb for parameter selection were devised by Storn et al.[3][4] and Liu and Lampinen.[7] Mathematical convergence analysis regarding parameter selection was done by Zaharie.[8] Meta-optimization of the DE parameters was done by Pedersen [9][10] and Zhang et al.[11]
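
As a rough illustration only (good settings are problem-dependent and the cited studies should be consulted), starting values in the neighbourhood of the following are often quoted. The class below is a hypothetical configuration sketch, not a recommendation drawn from the references.

//Illustrative starting values only; they are not guaranteed to work well on a given problem.
class DEConfig {
        int    n  = 10;        //problem dimensionality (example value)
        int    NP = 10 * n;    //population size, often chosen proportional to n
        double F  = 0.8;       //differential weight, commonly taken from [0.4, 1]
        double CR = 0.9;       //crossover probability
}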

Variants

Variants of the DE algorithm are continually being developed in an effort to improve optimization performance. Many different schemes for performing crossover and mutation of agents are possible in the basic algorithm given above; see e.g.[3] More advanced DE variants are also being developed, with a popular research trend being to perturb or adapt the DE parameters during optimization; see e.g. Price et al.,[4] Liu and Lampinen,[12] Qin and Suganthan,[13] Civicioglu,[14] and Brest et al.[15] There is also some work on hybrid optimization methods combining DE with other optimizers.[16]

Sample code

The following is a specific pseudocode implementation of differential evolution, written in a style similar to the Java language. For more generalized pseudocode, please see the listing in the Algorithm section above.

//definition of one individual in population
class Individual {
        //normally DifferentialEvolution uses floating point variables
        var float data1, data2
        //but using integers is possible too  
        var integer data3
}

class DifferentialEvolution {
        //Variables
        //linked list that has our population inside
        var LinkedList population=new LinkedList()
        //New instance of Random number generator
        var Random random=new Random()
        var integer PopulationSize=20

        //differential weight [0,2]
        var float F=1
        //crossover probability [0,1]
        var float CR=0.5
        //dimensionality of the problem, i.e. how many variables the problem has; in this case 3 (data1, data2, data3)
        var integer N=3

        //This function tells how well a given individual performs on the given problem.
        function float fitnessFunction(Individual in) {
                ...
                return  fitness 
        }

        //this is main function of program
        //this is the main function of the program
        function void Main() {
                //Initialize population with individuals that have been initialized with uniform random noise
                //uniform noise means a random value inside your search space
                var i=0
                while(i<PopulationSize) {
                        var Individual individual=new Individual()
                        individual.data1=random.UniformNoise()
                        individual.data2=random.UniformNoise()
                        individual.data3=Math.round(random.UniformNoise())
                        population.add(individual)
                        i++
                }
                //main loop: repeat for a fixed number of generations (simple termination criterion)
                var integer generation=0
                while(generation<1000) {
                        i=0
                        while(i<PopulationSize) {
                                var Individual x=population.get(i)
                                //pick three random indexes, distinct from each other and from i
                                var integer ia,ib,ic
                                do { ia=random.nextInt(PopulationSize) } while(ia==i)
                                do { ib=random.nextInt(PopulationSize) } while(ib==i || ib==ia)
                                do { ic=random.nextInt(PopulationSize) } while(ic==i || ic==ia || ic==ib)
                                var Individual a=population.get(ia), b=population.get(ib), c=population.get(ic)
                                var integer R=random.nextInt(N)+1   //random index in {1,...,N}
                                //binary crossover: each variable of y comes from a+F*(b-c) with probability CR, and always at index R
                                var Individual y=new Individual()
                                y.data1 = (random.nextDouble()<CR || R==1) ? a.data1+F*(b.data1-c.data1) : x.data1
                                y.data2 = (random.nextDouble()<CR || R==2) ? a.data2+F*(b.data2-c.data2) : x.data2
                                y.data3 = (random.nextDouble()<CR || R==3) ? Math.round(a.data3+F*(b.data3-c.data3)) : x.data3
                                //greedy selection: keep y only if it scores better than x
                                if(fitnessFunction(y)>fitnessFunction(x)) {
                                        population.set(i,y)
                                }
                                i++
                        }
                        generation++
                }
                //the individual in the population with the highest fitness is the best found solution
        }
}

References

  1. ^
  2. ^
  3. ^ a b c
  4. ^ a b c
  5. ^
  6. ^
  7. ^
  8. ^
  9. ^
  10. ^
  11. ^
  12. ^
  13. ^
  14. ^ a b
  15. ^
  16. ^ Zhang, Wen-Jun; Xie, Xiao-Feng (2003). DEPSO: hybrid particle swarm with differential evolution operator. IEEE International Conference on Systems, Man, and Cybernetics (SMCC), Washington, DC, USA: 3816-3821.

External links

  • Storn's Homepage on DE featuring source-code for several programming languages.
  • Fast DE Algorithm A Fast Differential Evolution Algorithm using k-Nearest Neighbour Predictor.
  • MODE Application Parameter Estimation of a Pressure Swing Adsorption Model for Air Separation Using Multi-objective Optimisation and Support Vector Regression Model.