20 results for "Deutz, André"
Search Results
2. The Hypervolume Indicator Hessian Matrix: Analytical Expression, Computational Time Complexity, and Sparsity
- Author
-
Deutz, André H., Emmerich, Michael T. M., and Wang, Hao
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Optimization and Control (math.OC), FOS: Mathematics, Mathematics - Optimization and Control, Machine Learning (cs.LG)
- Abstract
The problem of approximating the Pareto front of a multiobjective optimization problem can be reformulated as the problem of finding a set that maximizes the hypervolume indicator. This paper establishes the analytical expression of the Hessian matrix of the mapping from a (fixed size) collection of $n$ points in the $d$-dimensional decision space (or $m$-dimensional objective space) to the scalar hypervolume indicator value. To define the Hessian matrix, the input set is vectorized, and the matrix is derived by analytical differentiation of the mapping from a vectorized set to the hypervolume indicator. The Hessian matrix plays a crucial role in second-order methods, such as the Newton-Raphson optimization method, and it can be used for the verification of locally optimal sets. So far, the full analytical expression had only been established and analyzed for the relatively simple bi-objective case. This paper derives the full expression for arbitrary dimensions ($m \geq 2$ objective functions). For the practically important three-dimensional case, we also provide an asymptotically efficient algorithm with time complexity in $O(n\log n)$ for the exact computation of the Hessian matrix's non-zero entries, and we establish a sharp bound of $12m-6$ on the number of non-zero entries. For the general $m$-dimensional case, a compact recursive analytical expression is established, and its algorithmic implementation is discussed; this recursive expression also implies some sparsity results. To validate and illustrate the analytically derived algorithms and results, we provide a few numerical examples using Python and Mathematica implementations. Open-source implementations of the algorithms and testing data are made available as a supplement to this paper.
- Published
- 2022
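The analytical Hessian structure described in entry 2 can be cross-checked numerically. Below is a minimal Python sketch (ours, not the paper's $O(n\log n)$ algorithm): it computes the 2-D hypervolume of a vectorized point set and approximates the Hessian by central finite differences; the helper names and the toy point set are illustrative only.

```python
def hv2d(points, ref):
    """Area dominated by a set of 2-D points w.r.t. a reference point
    (both objectives to be minimized)."""
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:                       # skip dominated points
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

def hv_of_vector(v, ref=(1.0, 1.0)):
    """Vectorized set (y1_1, y2_1, y1_2, y2_2, ...) -> hypervolume."""
    return hv2d([(v[2 * i], v[2 * i + 1]) for i in range(len(v) // 2)], ref)

def num_hessian(f, x, h=1e-4):
    """Central finite-difference approximation of the Hessian of f at x."""
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            xpp, xpm, xmp, xmm = x[:], x[:], x[:], x[:]
            xpp[i] += h; xpp[j] += h
            xpm[i] += h; xpm[j] -= h
            xmp[i] -= h; xmp[j] += h
            xmm[i] -= h; xmm[j] -= h
            H[i][j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4 * h * h)
    return H

# Two mutually non-dominated points (0.2, 0.6) and (0.5, 0.3):
H = num_hessian(hv_of_vector, [0.2, 0.6, 0.5, 0.3])
```

For this configuration the hypervolume is piecewise bilinear, so e.g. the diagonal entry for a point's first coordinate is zero while mixed partials within and across neighbouring points are nonzero, matching the sparsity the paper analyses.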
3. Parallelized Bayesian optimization for expensive robot controller evolution
- Author
-
Rebolledo, Margarita, Rehbach, Frederik, Eiben, A. E., Bartz-Beielstein, Thomas, Bäck, Thomas, Preuss, Mike, Deutz, André, Emmerich, Michael, Wang, Hao, Doerr, Carola, and Trautmann, Heike
- Subjects
Optimization problem, Computer science, Sample (statistics), Machine learning, Robot learning, BBOB benchmarking, Test suite, CMA-ES, Bayesian optimization, Parallelization, Robotics, Test case, Artificial intelligence
- Abstract
An important class of black-box optimization problems relies on using simulations to assess the quality of a given candidate solution. Solving such problems can be computationally expensive because each simulation is very time-consuming. We present an approach to mitigate this problem by distinguishing two factors of computational cost: the number of trials and the time needed to execute the trials. Our approach tries to keep down the number of trials by using Bayesian optimization (BO), known to be sample-efficient, and reduces wall-clock time by parallel execution of trials. We compare the performance of four parallelization methods and two model-free alternatives. Each method is evaluated on all 24 objective functions of the Black-Box Optimization Benchmarking (BBOB) test suite in their five-, ten-, and 20-dimensional versions. Additionally, their performance is investigated on six test cases in robot learning. The results show that parallelized BO outperforms the state-of-the-art CMA-ES on the BBOB test functions, especially in higher dimensions. On the robot learning tasks, the differences are less clear, but the data do support parallelized BO as the 'best guess', winning in some cases and never losing.
- Published
- 2020
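The two cost factors distinguished in entry 3 (number of trials vs. wall-clock time per trial) can be illustrated with a toy batch evaluator. This is a generic sketch, not the paper's BO variants; `simulate` is a hypothetical stand-in for an expensive robot simulation.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(controller):
    """Hypothetical stand-in for a time-consuming robot simulation that
    scores a candidate controller parameter vector."""
    return -sum(p * p for p in controller)   # toy quality, optimum at 0

def evaluate_batch(candidates, workers=4):
    """Run independent trials in parallel: the trial count (sample cost)
    is unchanged, but wall-clock time shrinks with the number of workers."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(simulate, candidates))

scores = evaluate_batch([[0.0, 0.0], [1.0, 0.0], [0.5, 0.5]])
```

In practice the workers would be separate simulator processes or machines; the sample-efficiency side is what the BO surrogate model contributes.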
4. Ensuring smoothly navigable approximation sets by Bézier curve parameterizations in evolutionary bi-objective optimization
- Author
-
Maree, Stefanus C., Alderliesten, Tanja, Bosman, Peter A. N., Bäck, Thomas, Preuss, Mike, Deutz, André, Emmerich, Michael, Wang, Hao, Doerr, Carola, and Trautmann, Heike
- Subjects
Smoothness, Mathematical optimization, Optimization problem, Computer science, Approximation set navigation, Evolutionary algorithm, Bézier curve estimation, Parameterized complexity, Hypervolume, Multi-objective optimization, Set (abstract data type), Mixing (mathematics), Trajectory
- Abstract
The aim of bi-objective optimization is to obtain an approximation set of (near) Pareto optimal solutions. A decision maker then navigates this set to select a final desired solution, often using a visualization of the approximation front. The front provides a navigational ordering of solutions to traverse, but this ordering does not necessarily map to a smooth trajectory through decision space. This forces the decision maker to inspect the decision variables of each solution individually, potentially making navigation of the approximation set unintuitive. In this work, we aim to improve approximation set navigability by enforcing a form of smoothness or continuity between solutions in terms of their decision variables. Imposing smoothness as a restriction upon common domination-based multi-objective evolutionary algorithms is not straightforward. Therefore, we use the recently introduced uncrowded hypervolume (UHV) to reformulate the multi-objective optimization problem as a single-objective problem in which parameterized approximation sets are directly optimized. We study here the case of parameterizing approximation sets as smooth Bézier curves in decision space. We approach the resulting single-objective problem with the gene-pool optimal mixing evolutionary algorithm (GOMEA), and we call the resulting algorithm BezEA. We analyze the behavior of BezEA and compare it to optimization of the UHV with GOMEA as well as the domination-based multi-objective GOMEA. We show that high-quality approximation sets can be obtained with BezEA, sometimes even outperforming the domination- and UHV-based algorithms, while smoothness of the navigation trajectory through decision space is guaranteed.
- Published
- 2020
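The curve parameterization in entry 4 can be sketched with De Casteljau's algorithm: an approximation set becomes a sample of points along a Bézier curve whose control points live in decision space. The control points below are illustrative, not from the paper.

```python
def bezier(control_points, t):
    """Evaluate a Bézier curve in decision space at t in [0, 1] with
    De Casteljau's algorithm (repeated linear interpolation)."""
    pts = [list(p) for p in control_points]
    while len(pts) > 1:
        pts = [[(1.0 - t) * a + t * b for a, b in zip(p, q)]
               for p, q in zip(pts, pts[1:])]
    return pts[0]

def approximation_set(control_points, n):
    """Sample n solutions along the curve: neighbouring solutions are
    close in decision space, so the set is smooth to navigate."""
    return [bezier(control_points, i / (n - 1)) for i in range(n)]

sols = approximation_set([[0.0, 0.0], [0.5, 1.0], [1.0, 0.0]], 5)
```

Optimizing the control points (rather than every solution independently) is what guarantees the smooth navigation trajectory the abstract describes.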
5. Parallel Problem Solving from Nature - PPSN XVI - 16th International Conference, PPSN 2020, Leiden, The Netherlands, September 5-9, 2020, Proceedings, Part I
- Author
-
Bäck, Thomas, Preuss, Mike, Deutz, André, Wang, Hao, Doerr, Carola, Emmerich, Michael, and Trautmann, Heike
- Subjects
Computer science, Library science, [INFO.INFO-NE] Computer Science [cs]/Neural and Evolutionary Computing [cs.NE]
- Abstract
International audience
- Published
- 2020
6. A Search for Additional Structure: The Case of Cryptographic S-boxes
- Author
-
Carlet, Claude, Djurasevic, Marko, Jakobovic, Domagoj, Picek, S., Bäck, Thomas, Preuss, Mike, Deutz, André, Emmerich, Michael, Wang, Hao, Doerr, Carola, and Trautmann, Heike
- Subjects
Theoretical computer science, Computer science, Diagonal, Structure (category theory), Cryptography, Benchmark (computing), S-boxes, Evolutionary Algorithms, Resilience (network), Hamming code, Row
- Abstract
We investigate whether it is possible to evolve cryptographically strong S-boxes that have additional constraints on their structure. We investigate two scenarios: where S-boxes additionally have a specific sum of values in rows, columns, or diagonals and the scenario where we check that the difference between the Hamming weights of inputs and outputs is minimal. The first case represents an interesting benchmark problem, while the second one has practical ramifications as such S-boxes could offer better resilience against side-channel attacks. We explore three solution representations by using the permutation, integer, and cellular automata-based encoding. Our results show that it is possible to find S-boxes with excellent cryptographic properties (even optimal ones) and reach the required sums when representing the S-box as a square matrix. On the other hand, for the most promising S-box representation based on trees and cellular automata rules, we did not succeed in finding S-boxes with small differences in the Hamming weights between the inputs and outputs, which opens an interesting future research direction. Our results for this scenario and different encodings inspired a mathematical proof that the values reached by evolutionary algorithms are the best possible ones.
- Published
- 2020
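The structural constraints studied in entry 6 are easy to state in code. A minimal sketch with a toy 2-bit permutation (real S-boxes are typically 8-bit; the helper names are ours):

```python
def is_bijective(sbox):
    """An n-bit-to-n-bit S-box must be a permutation of 0..2^n - 1."""
    return sorted(sbox) == list(range(len(sbox)))

def row_col_sums(sbox, width):
    """View the S-box as a square matrix (first scenario: constraining
    the sums of rows and columns)."""
    rows = [sbox[i * width:(i + 1) * width] for i in range(width)]
    return [sum(r) for r in rows], [sum(c) for c in zip(*rows)]

def max_hamming_weight_diff(sbox):
    """Largest |HW(input) - HW(output)| (second scenario: small values
    could help against side-channel attacks)."""
    return max(abs(bin(x).count("1") - bin(sbox[x]).count("1"))
               for x in range(len(sbox)))

sbox = [0, 1, 3, 2]     # toy 2-bit permutation, for illustration only
```

An evolutionary search would use checks like these inside its fitness function, alongside the usual cryptographic criteria (nonlinearity, differential uniformity, ...).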
7. Parallel Problem Solving from Nature - PPSN XVI - 16th International Conference, PPSN 2020, Leiden, The Netherlands, September 5-9, 2020, Proceedings, Part II
- Author
-
Bäck, Thomas, Preuss, Mike, Deutz, André, Wang, Hao, Doerr, Carola, Emmerich, Michael, and Trautmann, Heike
- Subjects
[INFO.INFO-NE] Computer Science [cs]/Neural and Evolutionary Computing [cs.NE]
- Abstract
International audience
- Published
- 2020
8. Multi-objective Optimization by Uncrowded Hypervolume Gradient Ascent
- Author
-
Deist, Timo M., Maree, Stefanus C., Alderliesten, Tanja, Bosman, Peter A. N., Bäck, Thomas, Preuss, Mike, Deutz, André, Emmerich, Michael, Wang, Hao, Doerr, Carola, and Trautmann, Heike
- Subjects
Mathematical optimization, Optimization problem, Computer science, Finite difference, Evolutionary algorithm, Pareto principle, Contrast (statistics), Uncrowded hypervolume, Multi-objective optimization, Multiple objective, Gradient descent, Gradient search
- Abstract
Evolutionary algorithms (EAs) are the preferred method for solving black-box multi-objective optimization problems, but when gradients of the objective functions are available, it is not straightforward to exploit these efficiently. By contrast, gradient-based optimization is well-established for single-objective optimization. A single-objective reformulation of the multi-objective problem could therefore offer a solution. Of particular interest to this end is the recently introduced uncrowded hypervolume (UHV) indicator, which is Pareto compliant and also takes into account dominated solutions. In this work, we show that the gradient of the UHV can often be computed, which allows for a direct application of gradient ascent algorithms. We compare this new approach with two EAs for UHV optimization as well as with one gradient-based algorithm for optimizing the well-established hypervolume. On several bi-objective benchmarks, we find that gradient-based algorithms outperform the tested EAs by obtaining a better hypervolume with fewer evaluations whenever exact gradients of the multiple objective functions are available and in case of small evaluation budgets. For larger budgets, however, EAs perform similarly or better. We further find that, when finite differences are used to approximate the gradients of the multiple objectives, our new gradient-based algorithm is still competitive with EAs in most considered benchmarks. Implementations are available at https://github.com/scmaree/uncrowded-hypervolume.
- Published
- 2020
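The set-gradient idea in entry 8 can be sketched numerically: treat the whole solution set as one search point and ascend the hypervolume of its image. This toy uses finite differences on a classic convex bi-objective problem and covers only the plain hypervolume part of the UHV (not its uncrowded-distance term for dominated solutions); all constants and names are illustrative.

```python
def objectives(x):
    """Classic bi-objective test problem with Pareto set x in [0, 1]."""
    return (x * x, (x - 1.0) * (x - 1.0))

def hv2d(points, ref=(2.0, 2.0)):
    """Dominated area w.r.t. the reference point (minimization)."""
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

def set_fitness(xs):
    """Map a set of decision variables to one scalar set fitness."""
    return hv2d([objectives(x) for x in xs])

def gradient_ascent(xs, steps=200, lr=0.05, h=1e-6):
    """Finite-difference gradient ascent on the set fitness."""
    for _ in range(steps):
        g = []
        for i in range(len(xs)):
            xp, xm = xs[:], xs[:]
            xp[i] += h
            xm[i] -= h
            g.append((set_fitness(xp) - set_fitness(xm)) / (2.0 * h))
        xs = [x + lr * gi for x, gi in zip(xs, g)]
    return xs

start = [0.1, 0.9]
hv_before = set_fitness(start)
solution_set = gradient_ascent(start)
hv_after = set_fitness(solution_set)
```

The paper uses exact analytical gradients where available; the finite-difference variant above mirrors its fallback experiments.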
9. Robust evolutionary bi-objective optimization for prostate cancer treatment with high-dose-rate brachytherapy
- Author
-
van der Meer, Marjolein C., Bel, Arjan, Niatsetski, Yury, Alderliesten, Tanja, Pieters, Bradley R., Bosman, Peter A. N., Bäck, Thomas, Preuss, Mike, Deutz, André, Emmerich, Michael, Wang, Hao, Doerr, Carola, and Trautmann, Heike
- Subjects
Mathematical optimization, Bi-objective optimization, Computer science, Computation, Brachytherapy, Evolutionary algorithm, Robust optimization, Multi-objective optimization, Radiation oncology, High-dose-rate brachytherapy, Empirical study, Robustness (computer science), Evolutionary Algorithms
- Abstract
We address the real-world problem of automating the design of high-quality prostate cancer treatment plans in case of high-dose-rate brachytherapy, a form of internal radiotherapy. For this, recently a bi-objective real-valued problem formulation was introduced. With a GPU parallelization of the Multi-Objective Real-Valued Gene-pool Optimal Mixing Evolutionary Algorithm (MO-RV-GOMEA), good treatment plans were found in clinically acceptable running times. However, optimizing a treatment plan and delivering it to the patient in practice is a two-stage decision process and involves a number of uncertainties. Firstly, there is uncertainty in the identified organ boundaries due to the limited resolution of the medical images. Secondly, the treatment involves placing catheters inside the patient, which always end up (slightly) different from what was optimized. An important factor is therefore the robustness of the final treatment plan to these uncertainties. In this work, we show how we can extend the evolutionary optimization approach to find robust plans using multiple scenarios without a linear increase in the required computational effort, as well as how to deal with these uncertainties efficiently when taking into account the sequential decision-making moments. The performance is tested on three real-world patient cases. We find that MO-RV-GOMEA is equally capable of solving the more complex robust problem formulation, resulting in a more realistic reflection of the treatment plan qualities.
- Published
- 2020
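The robustness notion in entry 9 boils down to scoring a plan under several perturbation scenarios and aggregating pessimistically. A toy sketch (the quality function and scenarios are hypothetical; the paper's actual method is far more elaborate and avoids a linear cost increase):

```python
def plan_quality(plan, scenario):
    """Hypothetical plan quality under one uncertainty scenario (e.g.
    shifted organ boundaries or catheter positions)."""
    return -sum((p - s) ** 2 for p, s in zip(plan, scenario))

def robust_quality(plan, scenarios):
    """Worst-case aggregation: a plan is only as good as its quality
    under the least favourable scenario."""
    return min(plan_quality(plan, s) for s in scenarios)

scenarios = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]]
nominal = robust_quality([0.0, 0.0], scenarios)   # optimal for scenario 1 only
hedged = robust_quality([0.05, 0.05], scenarios)  # hedges between scenarios
```

Note that the hedged plan beats the nominally optimal one in the worst case, which is exactly why robust formulations can change the optimizer's answer.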
10. Fitness landscape analysis of dimensionally-aware genetic programming featuring Feynman equations
- Author
-
Đurasević, M., Jakobovic, Domagoj, Martins, Marcella Scoczynski Ribeiro, Picek, S., Wagner, Markus, Bäck, Thomas, Preuss, Mike, Deutz, André, Emmerich, Michael, Wang, Hao, Doerr, Carola, and Trautmann, Heike
- Subjects
FOS: Computer and information sciences, Theoretical computer science, Fitness landscape, Computer science, Computer Science - Neural and Evolutionary Computing, Genetic programming, Set (abstract data type), Variable (computer science), Dimensionally-aware GP, Local optima network, Search algorithm, Local search (optimization), Neural and Evolutionary Computing (cs.NE), Symbolic regression, Curse of dimensionality
- Abstract
Genetic programming is an often-used technique for symbolic regression: finding symbolic expressions that match data from an unknown function. To make symbolic regression more efficient, one can also use dimensionally-aware genetic programming that constrains the physical units of the equation. Nevertheless, there is no formal analysis of how much dimensionality awareness helps in the regression process. In this paper, we conduct a fitness landscape analysis of dimensionally-aware genetic programming search spaces on a subset of equations from Richard Feynman's well-known lectures. We define an initialisation procedure and an accompanying set of neighbourhood operators for conducting the local search within the physical unit constraints. Our experiments show that the added information about the variable dimensionality can efficiently guide the search algorithm. Still, further analysis of the differences between the dimensionally-aware and standard genetic programming landscapes is needed to help in the design of efficient evolutionary operators to be used in dimensionally-aware regression.
Comment: 14 pages. Submitted to PPSN 2020
- Published
- 2020
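The unit constraints that entry 10 builds its landscape analysis on are commonly encoded as integer exponent vectors over base dimensions; a minimal sketch of that idea (our assumed encoding, not necessarily the paper's exact one):

```python
# Units as integer exponent vectors over base dimensions (m, s, kg).
M, S, KG = (1, 0, 0), (0, 1, 0), (0, 0, 1)

def mul_units(u, v):
    """Multiplying quantities adds unit exponents."""
    return tuple(a + b for a, b in zip(u, v))

def inv_units(u):
    """Dividing by a quantity negates its unit exponents."""
    return tuple(-a for a in u)

def can_add(u, v):
    """a + b is only a legal subtree if both sides carry the same unit:
    the constraint that prunes the dimensionally-aware GP search space."""
    return u == v

velocity = mul_units(M, inv_units(S))                    # m / s
energy = mul_units(KG, mul_units(velocity, velocity))    # kg * m^2 / s^2
```

Initialisation and neighbourhood operators then only ever generate trees in which every `+` and `-` node satisfies `can_add`.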
11. Faster Computation of Expected Hypervolume Improvement
- Author
-
Hupkens, Iris, Emmerich, Michael, and Deutz, André
- Subjects
FOS: Computer and information sciences, G.1.6, Computer Science - Data Structures and Algorithms, Data Structures and Algorithms (cs.DS), F.2.1
- Abstract
The expected improvement algorithm (or efficient global optimization) aims for global continuous optimization with a limited budget of black-box function evaluations. It is based on a statistical model of the function learned from previous evaluations and an infill criterion - the expected improvement - used to find a promising point for a new evaluation. The 'expected improvement' infill criterion takes into account the mean and variance of a predictive multivariate Gaussian distribution. The expected improvement algorithm has recently been generalized to multiobjective optimization. In order to measure the improvement of a Pareto front quantitatively, the gain in dominated (hyper-)volume is used. The computation of the expected hypervolume improvement (EHVI) is a multidimensional integration of a step-wise defined non-linear function related to the Gaussian probability density function over an intersection of boxes. This paper provides a new algorithm for the exact computation of the expected improvement for more than two objective functions. For the bicriteria case it has a time complexity in $O(n^2)$ with $n$ denoting the number of points in the current best Pareto front approximation. It improves previously known algorithms with time complexity $O(n^3 \log n)$. For tricriteria optimization we devise an algorithm with time complexity of $O(n^3)$. Besides discussing the new time complexity bounds, the speed of the new algorithm is also tested empirically on test data. It is shown that further improvements in speed can be achieved by reusing data structures built up in previous iterations. The resulting numerical algorithms can be readily used in existing implementations of hypervolume-based expected improvement algorithms.
Comment: LIACS Technical Report
- Published
- 2014
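The scalar building block behind the EHVI of entry 11 is the classic expected improvement under a Gaussian predictive distribution; the EHVI replaces "improvement over the best value" with the gain in dominated hypervolume. A sketch of the scalar closed form for minimization:

```python
import math

def expected_improvement(mu, sigma, best):
    """Closed-form E[max(best - Y, 0)] for Y ~ N(mu, sigma^2): the
    expected improvement over the current best value (minimization)."""
    if sigma <= 0.0:
        return max(best - mu, 0.0)           # degenerate (noise-free) case
    z = (best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (best - mu) * cdf + sigma * pdf

ei_at_best = expected_improvement(mu=0.0, sigma=1.0, best=0.0)
```

The multidimensional EHVI integrates the analogous expression over the boxes of the non-dominated region, which is where the paper's $O(n^2)$ and $O(n^3)$ decompositions come in.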
12. EVOLVE - A bridge between Probability, Set Oriented Numerics, and Evolutionary Computation IV
- Author
-
Emmerich, Michael, Deutz, André, Schuetze, Oliver, Back, Thomas, Tantar, Emilia, Tantar, Alexandru-Adrian, del Moral, Pierre, Legrand, Pierrick, Bouvry, Pascal, and Coello Coello, Carlos
- Subjects
[MATH.MATH-PR] Mathematics [math]/Probability [math.PR], [INFO.INFO-AI] Computer Science [cs]/Artificial Intelligence [cs.AI]
- Abstract
International audience
- Published
- 2013
13. Gradient-based/evolutionary relay hybrid for computing Pareto front approximations maximizing the S-metric
- Author
-
Beume, Nicola, Deutz, André, and Emmerich, Michael
- Subjects
Mathematics of Computing: Numerical Analysis
- Abstract
The computation of a good approximation set of the Pareto front of a multiobjective optimization problem can be recast as the maximization of its S-metric value. A high-precision method for computing approximation sets of a Pareto front with maximal S-metric is presented in this paper as a high-level relay hybrid of an evolutionary algorithm and a gradient method, both guided by the S-metric. First, an evolutionary multiobjective optimizer moves the initial population close to the Pareto front. The gradient-based method takes this population as its starting point for computing a locally maximal approximation set with respect to the S-metric. As opposed to existing work on gradient-based multicriteria optimization, the new gradient approach computes gradients based on a set of points rather than for single points. We term this approach the indicator-based gradient method and exemplify it for the S-metric. We derive expressions for computing the gradient of a set of points with respect to its S-metric based on gradients of the objective functions. To deal with the problem of vanishing gradient components in the case of dominated points in an approximation set, a penalty approach is introduced. We present a gradient-based method for the aforementioned hybridization scheme and report first results on artificial test problems.
Reihe CI; 232-07
- Published
- 2007
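The set gradient in entry 13 has a simple closed form for two objectives: with the non-dominated front sorted by the first objective, each point's partials follow from the rectangle of volume it exclusively dominates, while dominated points get a zero gradient (the vanishing-gradient problem the penalty addresses). An illustrative sketch in objective space (minimization; the paper chains this with the objective functions' own gradients):

```python
def s_metric_gradient(points, ref):
    """Gradient of the 2-D S-metric (hypervolume) w.r.t. each point's
    objective values; dominated points receive a zero gradient."""
    order = sorted(range(len(points)), key=lambda i: points[i])
    grad = [[0.0, 0.0] for _ in points]
    front, best_y = [], ref[1]
    for i in order:                          # extract the non-dominated front
        if points[i][1] < best_y:
            front.append(i)
            best_y = points[i][1]
    for k, i in enumerate(front):
        y_prev = ref[1] if k == 0 else points[front[k - 1]][1]
        x_next = ref[0] if k == len(front) - 1 else points[front[k + 1]][0]
        grad[i][0] = -(y_prev - points[i][1])    # d HV / d f1  (negative)
        grad[i][1] = -(x_next - points[i][0])    # d HV / d f2  (negative)
    return grad

# Two non-dominated points and one dominated point:
g = s_metric_gradient([(0.2, 0.6), (0.5, 0.3), (0.6, 0.7)], ref=(1.0, 1.0))
```

Negative partials mean that decreasing either objective of a front point increases the S-metric, which is the ascent direction the hybrid's gradient phase follows.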
14. Feature-based benchmarking of distance-based multi/many-objective optimisation problems: A machine learning perspective
- Author
-
Arnaud Liefooghe, Sébastien Verel, Tinkle Chugh, Jonathan Fieldsend, Richard Allmendinger, Kaisa Miettinen, Emmerich, Michael, Deutz, André, Wang, Hao, Kononova, Anna V., Naujoks, Boris, Li, Ke, Miettinen, Kaisa, and Yevseyeva, Iryna
- Subjects
Feature-based performance prediction, Automated algorithm selection, Multi/many-objective distance problems, [INFO] Computer Science [cs], [INFO.INFO-RO] Computer Science [cs]/Operations Research [cs.RO], [INFO.INFO-AI] Computer Science [cs]/Artificial Intelligence [cs.AI]
- Abstract
We consider the application of machine learning techniques to gain insights into the effect of problem features on algorithm performance, and to automate the task of algorithm selection for distance-based multi- and many-objective optimisation problems. This is the most extensive benchmark study of such problems to date. The problem features can be set directly by the problem generator, and include e.g. the number of variables, objectives, local fronts, and disconnected Pareto sets. Using 945 problem configurations (leading to 28,350 instances) of varying complexity, we find that the problem features and the available optimisation budget (i) affect the considered algorithms (NSGA-II, IBEA, MOEA/D, and random search) in different ways and that (ii) it is possible to recommend a relevant algorithm based on problem features.
- Published
- 2023
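The "recommend an algorithm from problem features" task in entry 14 is, at its core, a supervised prediction problem. A deliberately minimal 1-nearest-neighbour stand-in for the study's learned selectors (the feature data below is hypothetical):

```python
def select_algorithm(train, features):
    """1-nearest-neighbour recommendation: return the algorithm that won
    on the training instance with the closest feature vector."""
    def sqdist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    best = min(train, key=lambda fv: sqdist(fv, features))
    return train[best]

# Hypothetical training data: (n_variables, n_local_fronts) -> best solver
train = {
    (2.0, 1.0): "NSGA-II",
    (10.0, 8.0): "IBEA",
}
recommended = select_algorithm(train, (9.0, 7.0))
```

Real selectors would use many more features, per-budget labels, and stronger models, but the interface (feature vector in, algorithm name out) is the same.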
15. A Systematic Way of Structuring Real-World Multiobjective Optimization Problems
- Author
-
Bekir Afsar, Johanna Silvennoinen, Kaisa Miettinen, Emmerich, Michael, Deutz, André, Wang, Hao, Kononova, Anna V., Naujoks, Boris, Li, Ke, Miettinen, Kaisa, and Yevseyeva, Iryna
- Subjects
identifying objectives, stakeholder interviews, problem structuring, applications, eliciting expert knowledge, decision making, MOO problem formulation, problem solving, stakeholders, multiobjective optimization
- Abstract
In recent decades, the benefits of applying multiobjective optimization (MOO) methods in real-world applications have rapidly increased. The MOO literature mostly focuses on problem-solving, typically assuming the problem has already been correctly formulated. The necessity of verifying the MOO problem and the potential impacts of having an incorrect problem formulation on the optimization results are not emphasized enough in the literature. However, verification is crucial since the optimization results will not be meaningful without an accurate problem formulation, not to mention that the resources spent in the optimization process are wasted. In this paper, we focus on MOO problem structuring, which we believe deserves more attention. The novel contribution is the proposed systematic way of structuring MOO problems that leverages problem structuring approaches from the literature on multiple criteria decision analysis (MCDA). They are not directly applicable to the formulation of MOO problems since the objective functions in the MOO problem depend on decision variables and constraint functions, whereas MCDA problems have a given set of solution alternatives characterized by criterion values. Therefore, we propose to elicit expert knowledge to identify decision variables and constraint functions, in addition to the objective functions, to construct a MOO problem appropriately. Our approach also enables the verification and validation of the problem before the actual decision making process.
- Published
- 2023
16. Evolutionary algorithms with self-adjusting asymmetric mutation
- Author
-
Rajabi, Amirhossein, Witt, Carsten, Bäck, Thomas, Preuss, Mike, Deutz, André, Emmerich, Michael, Wang, Hao, Doerr, Carola, and Trautmann, Heike
- Subjects
Self-adjusting algorithms, Parameter control, Asymmetric mutations, Runtime analysis, Evolutionary algorithms
- Abstract
Evolutionary Algorithms (EAs) and other randomized search heuristics are often considered as unbiased algorithms that are invariant with respect to different transformations of the underlying search space. However, if a certain amount of domain knowledge is available the use of biased search operators in EAs becomes viable. We consider a simple (1+1) EA for binary search spaces and analyze an asymmetric mutation operator that can treat zero- and one-bits differently. This operator extends previous work by Jansen and Sudholt (ECJ 18(1), 2010) by allowing the operator asymmetry to vary according to the success rate of the algorithm. Using a self-adjusting scheme that learns an appropriate degree of asymmetry, we show improved runtime results on the class of functions $\mathrm{OneMax}_a$ describing the number of matching bits with a fixed target $a \in \{0,1\}^n$.
- Published
- 2020
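The mechanism of entry 16 can be sketched for OneMax: zero-bits and one-bits are flipped at separately controlled rates, and a simple success rule shifts the asymmetry towards the helpful side. The scheme and constants below are illustrative only, not the exact self-adjusting rule analysed in the paper.

```python
import random

def asymmetric_one_plus_one_ea(n, budget=20000, seed=1):
    """(1+1) EA maximizing OneMax with asymmetric, self-adjusting mutation."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = sum(x)
    r0, r1 = 1.0, 1.0            # expected flips among zero-/one-bits
    for _ in range(budget):
        if fx == n:
            break
        zeros = [i for i, b in enumerate(x) if b == 0]
        ones = [i for i, b in enumerate(x) if b == 1]
        y = x[:]
        for i in zeros:
            if rng.random() < r0 / len(zeros):
                y[i] = 1
        for i in ones:
            if rng.random() < r1 / len(ones):
                y[i] = 0
        fy = sum(y)
        if fy >= fx:
            if fy > fx:          # success: bias mutation towards zero-bits
                r0 = min(2.0, r0 * 1.2)
                r1 = max(0.5, r1 / 1.2)
            x, fx = y, fy
    return fx

best = asymmetric_one_plus_one_ea(20)
```

With a symmetric operator both rates would stay equal; here successes steadily shift probability mass to the bit value that still needs flipping.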
17. The Hessian estimation evolution strategy
- Author
-
Glasmachers, Tobias, Krause, Oswin, Bäck, Thomas, Preuss, Mike, Deutz, André, Emmerich, Michael, Wang, Hao, Doerr, Carola, and Trautmann, Heike
- Subjects
Hessian matrix, Covariance matrix adaptation, Evolution strategy, Mathematics of Computing: Numerical Analysis
- Abstract
We present a novel black-box optimization algorithm called the Hessian Estimation Evolution Strategy. The algorithm updates the covariance matrix of its sampling distribution by directly estimating the curvature of the objective function. This algorithm design is targeted at twice continuously differentiable problems. For this, we extend the cumulative step-size adaptation algorithm of the CMA-ES to mirrored sampling. We demonstrate that our approach to covariance matrix adaptation is efficient by evaluating it on the BBOB/COCO testbed. We also show that the algorithm is surprisingly robust when its core assumption of a twice continuously differentiable objective function is violated. The approach yields a new evolution strategy with competitive performance, and at the same time it also offers an interesting alternative to the usual covariance matrix update mechanism.
- Published
- 2020
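The curvature information exploited in entry 17 comes from mirrored evaluation pairs: evaluating at x + d and x - d gives a finite-difference estimate of the directional curvature d^T H d. A minimal sketch of that quantity (helper names ours; on an exact quadratic the estimate is exact):

```python
def directional_curvature(f, x, d, step=1.0):
    """Mirrored evaluations f(x + d) and f(x - d) yield a finite-difference
    estimate of the curvature d^T H d along direction d, the quantity a
    Hessian-estimating covariance update can be based on."""
    fp = f([xi + step * di for xi, di in zip(x, d)])
    fm = f([xi - step * di for xi, di in zip(x, d)])
    return (fp + fm - 2.0 * f(x)) / (step * step)

quad = lambda v: 2.0 * v[0] ** 2 + 0.5 * v[1] ** 2    # Hessian = diag(4, 1)
c1 = directional_curvature(quad, [0.3, -0.2], [1.0, 0.0])
c2 = directional_curvature(quad, [0.3, -0.2], [0.0, 1.0])
```

Shrinking the sampling covariance along directions of high estimated curvature is what aligns the search distribution with the level sets of the objective.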
18. A New Paradigm in Interactive Evolutionary Multiobjective Optimization
- Author
-
Jussi Hakanen, Kaisa Miettinen, Bhupinder Singh Saini, Bäck, Thomas, Preuss, Mike, Deutz, André, Wang, Hao, Doerr, Carola, Emmerich, Michael, and Trautmann, Heike
- Subjects
Mathematical optimization, Computer science, Decision maker, Evolutionary algorithms, decision support systems, evolutionary computation, preference information, Space (commercial competition), Multi-objective optimization, optimization, achievement scalarizing functions, algorithms, Quality (business), Function (engineering), interactive methods, Modular design, Preference
- Abstract
Over the years, scalarization functions have been used to solve multiobjective optimization problems by converting them to one or more single-objective optimization problems. This study proposes a novel idea for solving multiobjective optimization problems in an interactive manner: multiple scalarization functions are used to map vectors in the objective space to a new, so-called preference incorporated space (PIS). In this way, the original problem is converted into a new multiobjective optimization problem with typically fewer objectives in the PIS. This mapping enables a modular incorporation of a decision maker's preferences, converting any evolutionary algorithm into an interactive one in which preference information directs the solution process. Advantages of optimizing in this new space are discussed, and the idea is demonstrated with two interactive evolutionary algorithms: IOPIS/RVEA and IOPIS/NSGA-III. According to the experiments conducted, the new algorithms provide solutions of better quality than those of state-of-the-art evolutionary algorithms and their variants in which preference information is incorporated in the original objective space. Furthermore, the new algorithms obtain these results with fewer function evaluations.
- Published
- 2020
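The PIS mapping described above can be sketched with achievement scalarizing functions (ASFs): each scalarization collapses an m-dimensional objective vector to one coordinate of the new space, so two scalarizations yield a 2-D PIS. The reference point and the second weighting scheme below are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def asf(z, ref, w):
    """Achievement scalarizing function: smaller values indicate
    objective vectors that improve more on the reference point ref."""
    return np.max(w * (z - ref))

def to_pis(Z, ref):
    """Map m-dimensional objective vectors to a 2-D preference
    incorporated space via two scalarizations (illustrative choice:
    uniform weights and a hypothetical reference-scaled weighting)."""
    w1 = np.ones(Z.shape[1])
    w2 = 1.0 / (np.abs(ref) + 1.0)   # hypothetical second weight vector
    return np.column_stack([
        [asf(z, ref, w1) for z in Z],
        [asf(z, ref, w2) for z in Z],
    ])

# Three points in a 3-objective space mapped to the 2-D PIS; an EA can
# then run on these two surrogate objectives instead of the original three.
Z = np.array([[1.0, 2.0, 3.0],
              [2.0, 2.0, 2.0],
              [0.5, 1.0, 4.0]])
ref = np.array([1.0, 1.0, 1.0])
P = to_pis(Z, ref)   # shape (3, 2)
```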
19. Improved fixed-budget results via drift analysis
- Author
-
Kötzing, Timo, Witt, Carsten, Bäck, Thomas, Preuss, Mike, Deutz, André, Emmerich, Michael, Wang, Hao, Doerr, Carola, and Trautmann, Heike
- Abstract
Fixed-budget theory is concerned with computing or bounding the fitness value achievable by randomized search heuristics within a given budget of fitness function evaluations. Despite recent progress in fixed-budget theory, there is a lack of general tools for deriving such results. We transfer drift theory, the key tool for deriving expected optimization times, to the fixed-budget perspective. A first and easy-to-use statement, concerned with iterating drift in so-called greed-admitting scenarios, immediately translates into bounds on the expected function value. Afterwards, we consider a more general tool based on the well-known variable drift theorem. Applications of this technique to the LeadingOnes benchmark function yield statements that are more precise than the previous state of the art.
- Published
- 2020
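The "iterating drift" idea can be illustrated with a textbook case rather than the paper's LeadingOnes analysis: for randomized local search (one-bit flips, improvements accepted) on OneMax, the expected one-step fitness gain is (n - f)/n. Because this drift is linear in f, iterating it for the given budget yields the exact expected fitness, matching the closed form n(1 - (1 - 1/n)^t). The benchmark and heuristic here are assumptions for the example.

```python
def iterated_drift_bound(n, budget):
    """Iterate the expected one-step gain of RLS on OneMax
    (drift h(f) = (n - f) / n: a uniformly random bit flip improves
    with probability (n - f)/n, gaining exactly 1) to get an
    expected-fitness statement after `budget` evaluations."""
    f = 0.0                      # pessimistic start: all n bits wrong
    for _ in range(budget):
        f += (n - f) / n         # E[f_{t+1} | f_t] = f_t + (n - f_t) / n
    return f

n, budget = 100, 150
estimate = iterated_drift_bound(n, budget)
closed_form = n * (1 - (1 - 1 / n) ** budget)   # same recursion, solved
```

For nonlinear drift functions the iteration only yields a bound (via concavity and Jensen's inequality), which is where the variable-drift machinery mentioned in the abstract comes in.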
20. Exact extension of the DIRECT algorithm to multiple objectives
- Author
-
Kaisa Miettinen, Alberto Lovison, Emmerich, Michael T. M., Deutz, André H., Hille, Sander C., and Sergeyev, Yaroslav D.
- Subjects
Computer science, DIRECT method, Multi-objective optimization, Nonlinear systems, Global optimization, Vector-valued functions, Algorithms - Abstract
The DIRECT algorithm has been recognized as an efficient global optimization method with few regularity requirements, and it has been proven globally convergent in general cases. DIRECT has served as an inspiration for, or as a component of, many multiobjective optimization algorithms. We propose an exact and as faithful as possible extension of the DIRECT method to multiple objectives, providing a proof of global convergence (i.e., a guarantee that, given infinite time, the set of sampled points becomes everywhere dense). We test the efficiency of the algorithm on a nonlinear and nonconvex vector function.
- Published
- 2019
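The density argument in the abstract above rests on DIRECT's trisection of boxes. The following is a greedy one-dimensional, single-objective caricature of that step, not the paper's multiobjective method: real DIRECT selects all potentially optimal rectangles via a Lipschitz lower-bound criterion, whereas this sketch simply refines the interval with the best centre (exploitation) together with the widest interval, the latter being what keeps the sampled points everywhere dense.

```python
def direct_1d(f, lo, hi, iters=30):
    """Greedy 1-D caricature of DIRECT-style trisection. Intervals are
    stored as (centre_value, lo, hi) tuples; each iteration trisects
    the interval with the best centre and the widest interval."""
    intervals = [(f((lo + hi) / 2), lo, hi)]

    def trisect(iv):
        _, a, b = iv
        w = (b - a) / 3
        return [(f(a + i * w + w / 2), a + i * w, a + (i + 1) * w)
                for i in range(3)]

    for _ in range(iters):
        best = min(intervals)
        widest = max(intervals, key=lambda iv: iv[2] - iv[1])
        for iv in ([best] if best is widest else [best, widest]):
            intervals.remove(iv)
            intervals.extend(trisect(iv))
    return min(intervals)[0]     # best sampled function value

# f attains its minimum -1 at x = 0.7 on [0, 1].
best = direct_1d(lambda x: (x - 0.7) ** 2 - 1.0, 0.0, 1.0)
```

Replacing the scalar comparison `min(intervals)` with a Pareto-dominance-based selection over vector-valued centres is, loosely speaking, the direction the multiobjective extension must take.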