Terminating evolutionary algorithms at their steady state


Abstract

Assessing the reliability of termination conditions for evolutionary algorithms (EAs) is of prime importance. An erroneous or weak stop criterion can negatively affect both the computational effort and the final result. We introduce a statistical framework for assessing whether a termination condition is able to stop an EA at its steady state, so that its results cannot be improved any further. We use a regression model to determine the requirements ensuring that a measure derived from the EA's evolving population is related to the distance to the optimum in decision variable space. Our framework is analyzed across 24 benchmark test functions and two standard termination criteria, based on the fitness value in objective function space and on the population distribution in decision variable space, for the differential evolution (DE) paradigm. Results validate our framework as a powerful tool for determining whether a measure is capable of terminating an EA, and identify the decision variable space distribution as the best suited for accurately terminating DE in real-world applications.
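
As a rough, self-contained illustration of the two families of criteria compared above (these are not the authors' exact definitions, which appear in the full text; the DE settings, tolerances, and all names such as de_with_stopping are assumptions made for the sketch), the following toy DE run stops either when the spread of fitness values in objective space or the spread of the population in decision variable space falls below a tolerance:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    return float(np.sum(x ** 2))

def de_with_stopping(f, dim=5, pop_size=30, F=0.5, CR=0.9,
                     max_gen=2000, tol=1e-8, criterion="distribution"):
    """Toy DE/rand/1/bin with two illustrative termination criteria:
    "fitness"      -- spread of objective values below tol
    "distribution" -- spread of the population in decision space below tol
    """
    pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for gen in range(max_gen):
        for i in range(pop_size):
            # mutation: three distinct individuals other than i
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     size=3, replace=False)]
            mutant = a + F * (b - c)
            # binomial crossover, keeping at least one mutant coordinate
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:                  # greedy selection
                pop[i], fit[i] = trial, f_trial
        if criterion == "fitness":
            stop = fit.max() - fit.min() < tol     # objective-space criterion
        else:
            stop = pop.std(axis=0).max() < tol     # decision-space criterion
        if stop:
            break
    return pop[fit.argmin()], fit.min(), gen + 1

best_x, best_f, gens = de_with_stopping(sphere)
print(f"stopped after {gens} generations, best f = {best_f:.3e}")
```

Note the qualitative difference between the two checks: a small fitness spread can occur on flat regions of the objective while the population is still dispersed, whereas a small decision-space spread requires the population to have actually collapsed around a point.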

Acknowledgments

Work supported by Spanish projects TIN2009-13618, TIN2012-33116, SAF2010-19257 and CSD2007-00018, Fundacio La Marato de TV3 (110230) and RecerCaixa 2010ACUP 00378.

Author information

Corresponding author

Correspondence to Debora Gil.

Appendix

1.1 Proof of Proposition 1

Proof

The Cauchy condition given by Definition 1 is satisfied if \(\forall \epsilon \), \(\exists k_0\) such that \(\forall k_1,k_2 \ge k_0\), we have that:

$$\begin{aligned} |Q_{k_1}-Q_{k_2}| < \epsilon \Rightarrow |Ref_{k_1}-Ref_{k_2}| < \epsilon \end{aligned}$$

By the regression model (4), we have that:

$$\begin{aligned} |Ref_{k_1}-Ref_{k_2}| =e^{\beta _0} |Q_{k_1}^{\beta _1} e^{\varepsilon _{k_1}}-Q_{k_2}^{\beta _1} e^{\varepsilon _{k_2}}| \end{aligned}$$

for \(\varepsilon _{k_1}\), \(\varepsilon _{k_2}\) following a Gaussian distribution \(N(0,\sigma ^2)\). Given that \(99.7\,\%\) of the values of a Gaussian lie within three standard deviations, \([-3 \sigma , 3 \sigma ]\), bounding each exponential \(e^{\varepsilon _{k_i}}\) by its extreme values \(e^{\pm 3 \sigma }\) we have that:

$$\begin{aligned}&|Ref_{k_1}-Ref_{k_2}| \le e^{\beta _0} |Q_{k_1}^{\beta _1} e^{3 \sigma }-Q_{k_2}^{\beta _1} e^{-3 \sigma }|\nonumber \\&\quad =e^{\beta _0} |Q_{k_1}^{\beta _1} e^{3 \sigma }-Q_{k_1}^{\beta _1} e^{-3 \sigma }+Q_{k_1}^{\beta _1} e^{-3 \sigma }-Q_{k_2}^{\beta _1} e^{-3 \sigma }|\nonumber \\&\quad \le e^{\beta _0} (Q_{k_1}^{\beta _1} |e^{3 \sigma }- e^{-3 \sigma }|+ e^{-3 \sigma } |Q_{k_1}^{\beta _1}-Q_{k_2}^{\beta _1}|) \nonumber \\&\quad = e^{\beta _0}(T_1+T_2) \end{aligned}$$
(16)

Since the EA has converged to the minimum, \(Ref_k \rightarrow 0\) and, by the linear regression, also \(Q_k \rightarrow 0\). If \(\beta _1>0\), then \(Q_{k}^{\beta _1}\) also converges to zero and, in particular, satisfies the Cauchy condition. Therefore, \(T_1\) and \(T_2\) are bounded by:

$$\begin{aligned}&T_1 \le \epsilon |e^{3 \sigma }- e^{-3 \sigma }| \\&T_2 \le \epsilon e^{-3 \sigma } \end{aligned}$$

for \(k_1\), \(k_2\) large enough. Substituting in (16) and taking into account that \(e^{3 \sigma }- e^{-3 \sigma }>0\), we have that:

$$\begin{aligned} |Ref_{k_1}-Ref_{k_2}| \le \epsilon e^{\beta _0} e^{3 \sigma } \end{aligned}$$

Therefore \(|Ref_{k_1}-Ref_{k_2}| \le \epsilon \) holds provided that:

$$\begin{aligned} \epsilon e^{\beta _0} e^{3 \sigma } \le \epsilon \end{aligned}$$

Finally, taking logarithms we have that:

$$\begin{aligned} \beta _0 + 3 \sigma \le 0 \end{aligned}$$

This relation between the regression intercept \(\beta _0\) and the model accuracy (\(\sigma \)) proves the Proposition; for instance, a model accuracy of \(\sigma =0.1\) requires \(\beta _0 \le -0.3\). \(\square \)

We note that Proposition 1 is proven using a chain of inequalities \(\le \) that are strict in some cases; thus it provides a sufficient, but not necessary, condition. Any measure satisfying the inequality is Cauchy, but this by no means implies that every Cauchy measure must fulfill it. In this context, Proposition 1 is a way of verifying that a measure is Cauchy rather than a means of discarding non-Cauchy measures.
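
Proposition 1 also suggests a direct numerical check. Taking logarithms of the regression model (4) gives the linear form \(\log Ref_k = \beta _0 + \beta _1 \log Q_k + \varepsilon _k\), with \(\varepsilon _k \sim N(0,\sigma ^2)\), so \(\beta _0\), \(\beta _1\) and \(\sigma \) can be estimated by ordinary least squares. The sketch below is not the authors' implementation; the paired samples q and ref, the function name, and the estimator choice are assumptions:

```python
import numpy as np

def check_proposition_1(q, ref):
    """Test the sufficient condition of Proposition 1 on paired samples.

    q   : values of the candidate termination measure Q_k per generation
    ref : reference distance to the optimum Ref_k (known on benchmarks)

    Fits log(ref) = b0 + b1 * log(q) + eps by least squares, estimates
    sigma from the residuals, and checks b1 > 0 and b0 + 3 * sigma <= 0.
    """
    x = np.log(np.asarray(q, dtype=float))
    y = np.log(np.asarray(ref, dtype=float))
    b1, b0 = np.polyfit(x, y, 1)                   # slope, intercept
    sigma = np.std(y - (b0 + b1 * x), ddof=2)      # residual std (2 dof fitted)
    return (b1 > 0) and (b0 + 3 * sigma <= 0)
```

Since the condition is sufficient but not necessary, a False result does not disqualify a measure; it only means this particular certificate of Cauchy behaviour cannot be issued.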

Cite this article

Gil, D., Roche, D., Borràs, A. et al. Terminating evolutionary algorithms at their steady state. Comput Optim Appl 61, 489–515 (2015). https://doi.org/10.1007/s10589-014-9722-4
