Stochastic optimization
From Wikipedia, the free encyclopedia
Stochastic optimization (SO) methods are optimization algorithms that incorporate probabilistic (random) elements, either in the problem data (the objective function, the constraints, etc.), in the algorithm itself (through random parameter values, random choices, etc.), or in both [1]. The concept contrasts with deterministic optimization methods, where the values of the objective function are assumed to be exact and the computation is completely determined by the values sampled so far.
Methods for stochastic functions
Partly random input data arise in such areas as real-time estimation and control, simulation-based optimization where Monte Carlo simulations are run as estimates of an actual system [2], and problems where there is experimental (random) error in the measurements of the criterion. In such cases, knowledge that the function values are contaminated by random "noise" leads naturally to algorithms that use statistical inference tools to estimate the "true" values of the function and/or make statistically optimal decisions about the next steps. Methods of this class include:
- stochastic approximation (SA), by Robbins and Monro (1951) [3]
- stochastic gradient descent
- finite-difference SA by Kiefer and Wolfowitz (1952) [4]
- simultaneous perturbation SA by Spall (1992) [5]
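The Robbins–Monro procedure above can be sketched in a few lines. The following is a minimal illustrative implementation (the function name, step-size constant, and toy objective are assumptions, not part of any reference cited here): it seeks a root of a regression function that can only be observed with additive noise, using the recursion θ_{n+1} = θ_n − a_n g(θ_n) with step sizes a_n = a/(n+1), which satisfy the classical conditions Σ a_n = ∞ and Σ a_n² < ∞.

```python
import random

def robbins_monro(noisy_g, theta0=0.0, steps=5000, a=1.0, seed=0):
    """Robbins-Monro stochastic approximation: estimate theta* with
    E[noisy_g(theta*)] = 0 from noisy observations of g alone.

    Uses the recursion theta_{n+1} = theta_n - a_n * noisy_g(theta_n)
    with a_n = a / (n + 1), so the steps are large enough to reach the
    root (sum a_n diverges) but shrink fast enough to average out the
    noise (sum a_n^2 converges).
    """
    rng = random.Random(seed)
    theta = theta0
    for n in range(steps):
        theta -= (a / (n + 1)) * noisy_g(theta, rng)
    return theta

# Toy regression function: true root at theta = 2, observed through
# additive Gaussian noise, so no single observation reveals the root.
root = robbins_monro(lambda th, rng: (th - 2.0) + rng.gauss(0.0, 1.0))
```

With this linear toy objective the recursion reduces to a running average of the noisy observations, so the iterate settles near the true root at rate roughly 1/√n.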
Randomized search methods
On the other hand, even when the data are exact, it is sometimes beneficial to deliberately introduce randomness into the search process as a means of speeding convergence and making the algorithm less sensitive to modeling errors. Further, the injected randomness may provide the necessary impetus to move away from a local solution when searching for a global optimum. Indeed, this randomization principle is known to be a simple and effective way to obtain algorithms with almost-certain good performance uniformly across all data sets, for all sorts of problems. Stochastic optimization methods of this kind include:
- simulated annealing by S. Kirkpatrick, C. D. Gelatt and M. P. Vecchi (1983) [6]
- evolutionary algorithms
- genetic algorithms by Goldberg (1989) [7]
- Cross-entropy method by Rubinstein and Kroese (2004) [8]
- random search by Zhigljavsky (1991) [9]
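As an illustration of injected randomness, here is a minimal sketch of simulated annealing on a one-dimensional multimodal objective (the function name, cooling rate, step size, and test objective are assumptions chosen for the example, not taken from the cited papers). A random perturbation is proposed at each step; moves that worsen the objective are still accepted with probability exp(−Δ/T), which is what lets the search climb out of local minima while the temperature T is high.

```python
import math
import random

def simulated_annealing(f, x0, steps=20000, t0=20.0, step=1.0, seed=0):
    """Simulated annealing on a 1-D objective f.

    Each iteration proposes a uniform random perturbation of the current
    point.  Improving moves are always accepted; worsening moves are
    accepted with probability exp(-delta / T), and the temperature T is
    lowered geometrically so the search gradually becomes greedy.
    """
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        cand = x + rng.uniform(-step, step)
        fc = f(cand)
        # Metropolis acceptance rule: always take downhill moves,
        # take uphill moves with probability exp(-(fc - fx) / t).
        if fc <= fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= 0.9995  # geometric cooling schedule
    return best, fbest

# Rastrigin-style test objective: global minimum at x = 0 surrounded
# by many local minima that trap a purely greedy descent.
f = lambda x: x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))
xmin, fmin = simulated_annealing(f, x0=4.3)
```

A deterministic descent started at x = 4.3 would stall in the nearest local basin; the annealed search escapes it with positive probability, though how close it gets to the global minimum depends on the cooling schedule.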
References
- Spall, J. C. (2003). Introduction to Stochastic Search and Optimization. Wiley.
- Fu, M. C. (2002). "Optimization for Simulation: Theory vs. Practice". INFORMS Journal on Computing 14: 192–227.
- Robbins, H.; Monro, S. (1951). "A Stochastic Approximation Method". Annals of Mathematical Statistics 22: 400–407.
- Kiefer, J.; Wolfowitz, J. (1952). "Stochastic Estimation of the Maximum of a Regression Function". Annals of Mathematical Statistics 23: 462–466.
- Spall, J. C. (1992). "Multivariate Stochastic Approximation Using a Simultaneous Perturbation Gradient Approximation". IEEE Transactions on Automatic Control 37: 332–341.
- Kirkpatrick, S.; Gelatt, C. D.; Vecchi, M. P. (1983). "Optimization by Simulated Annealing". Science 220: 671–680.
- Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley.
- Rubinstein, R. Y.; Kroese, D. P. (2004). The Cross-Entropy Method. Springer-Verlag.
- Zhigljavsky, A. A. (1991). Theory of Global Random Search. Kluwer Academic.
- Michalewicz, Z.; Fogel, D. B. (2000). How to Solve It: Modern Heuristics. Springer-Verlag, New York.