106 results for "Bäck, T.H.W."
Search Results
2. Machine learning for automated EEG-based biomarkers of cognitive impairment during Deep Brain Stimulation screening in patients with Parkinson’s Disease
- Author
-
Geraedts, V.J., Koch, M., Contarino, M.F., Middelkoop, H.A.M., Wang, H., van Hilten, J.J., Bäck, T.H.W., and Tannemaat, M.R.
- Published
- 2021
- Full Text
- View/download PDF
3. Optimizing stimulus energy for cochlear implants with a machine learning model of the auditory nerve
- Author
-
Nobel, J.P. de, Kononova, A.V., Briaire, J.J., Frijns, J.H.M., and Bäck, T.H.W.
- Subjects
Optimization, Auditory nerve, Machine learning, Cochlear implants, Neural model, Evolutionary algorithms, Sensory Systems
- Abstract
Performing simulations with a realistic biophysical auditory nerve fiber model can be very time-consuming, due to the complex nature of the calculations involved. Here, a surrogate (approximate) model of such an auditory nerve fiber model was developed using machine learning methods, to perform simulations more efficiently. Several machine learning models were compared, of which a Convolutional Neural Network showed the best performance. In fact, the Convolutional Neural Network was able to emulate the behavior of the auditory nerve fiber model with extremely high similarity (R² > 0.99), tested under a wide range of experimental conditions, whilst reducing the simulation time by five orders of magnitude. In addition, a method for randomly generating charge-balanced waveforms using hyperplane projection is introduced (see the sketch after this record). In the second part of this paper, the Convolutional Neural Network surrogate model was used by an Evolutionary Algorithm to optimize the shape of the stimulus waveform in terms of energy efficiency. The resulting waveforms resemble a positive Gaussian-like peak, preceded by an elongated negative phase. When comparing the energy of the waveforms generated by the Evolutionary Algorithm with the commonly used square wave, energy decreases of 8%-45% were observed for different pulse durations. These results were validated with the original auditory nerve fiber model, which demonstrates that the proposed surrogate model can be used as its accurate and efficient replacement.
- Published
- 2023
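The hyperplane-projection idea mentioned in the abstract above is simple to illustrate: a charge-balanced waveform is one whose samples sum to zero, and projecting an arbitrary waveform onto the hyperplane defined by that constraint amounts to subtracting its mean. Below is a minimal NumPy sketch of this projection (an illustration under that reading, not the authors' implementation; a plain random vector stands in for the waveform model):

```python
import numpy as np

def random_charge_balanced_waveform(n_samples, seed=None):
    """Draw a random waveform and project it onto the hyperplane
    sum(w) = 0, so positive and negative charge cancel out."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n_samples)
    # Projecting onto {w : 1.w = 0} removes the component along the
    # all-ones normal vector, which is exactly the sample mean.
    return w - w.mean()

wave = random_charge_balanced_waveform(64, seed=42)
print(abs(wave.sum()) < 1e-12)  # True: charge-balanced up to round-off
```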
4. A systematic approach to analyze the computational cost of robustness in model-assisted robust optimization
- Author
-
Ullah, S., Wang, H., Menzel, S., Sendhoff, B., Bäck, T.H.W., Rudolph, G., Kononova, A.V., Aguirre, H., Kerschke, P., Ochoa, G., and Tušar, T.
- Subjects
Model-Assisted Optimization, Robust Optimization, Numerical Optimization
- Abstract
Real-world optimization scenarios under uncertainty and noise are typically handled with robust optimization techniques, which re-formulate the original optimization problem into a robust counterpart, e.g., by taking an average of the function values over different perturbations to a specific input. Solving the robust counterpart instead of the original problem can significantly increase the associated computational cost, which, to the best of our knowledge, is often overlooked in the literature. Such an extra cost brought by robust optimization might depend on the problem landscape, the dimensionality, the severity of the uncertainty, and the formulation of the robust counterpart. This paper targets an empirical approach that evaluates and compares the computational cost brought by different robustness formulations in Kriging-based optimization on a wide combination (300 test cases) of problems, uncertainty levels, and dimensions. We mainly focus on the CPU time taken to find robust solutions, and choose five commonly applied robustness formulations: "mini-max robustness", "mini-max regret robustness", "expectation-based robustness", "dispersion-based robustness", and "composite robustness". We assess the empirical performance of these robustness formulations in terms of a fixed-budget and a fixed-target analysis, from which we find that "mini-max robustness" is the most practical formulation w.r.t. the associated computational cost. (A toy sketch of two of these formulations follows this record.)
- Published
- 2022
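To make the computational-cost argument above concrete, here is a toy sketch of two of the five listed formulations, mini-max and expectation-based robustness. Note how each robust evaluation multiplies the number of underlying function calls by the size of the perturbation sample, which is precisely the overhead the paper measures (the function, uncertainty set, and grid search below are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def f(x):                                   # toy objective
    return np.sin(3 * x) + 0.1 * x ** 2

def minimax_robust(x, deltas):              # worst case over perturbations
    return max(f(x + d) for d in deltas)

def expectation_robust(x, deltas):          # average case over perturbations
    return np.mean([f(x + d) for d in deltas])

deltas = np.linspace(-0.1, 0.1, 21)         # assumed uncertainty set
xs = np.linspace(-2, 2, 401)                # brute-force search grid
x_mm = xs[np.argmin([minimax_robust(x, deltas) for x in xs])]
x_ex = xs[np.argmin([expectation_robust(x, deltas) for x in xs])]
print(x_mm, x_ex)  # robust optima; each evaluation cost 21 calls to f
```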
5. One-shot optimization for vehicle dynamics control systems
- Author
-
Thomaser, A.M., Kononova, A.V., Vogt, M., Bäck, T.H.W., and Fieldsend, J.E.
- Published
- 2022
- Full Text
- View/download PDF
6. Multi-point acquisition function for constraint parallel efficient multi-objective optimization
- Author
-
Winter, R. de, Stein, B. van, Bäck, T.H.W., and Fieldsend, J.E.
- Abstract
Bayesian optimization is often used to optimize expensive black-box optimization problems with long simulation times. Typically, Bayesian optimization algorithms propose one solution per iteration. The downside of this strategy is the sub-optimal use of available computing power. To efficiently use the available computing power (or a limited number of licenses, etc.), we introduce a multi-point acquisition function for parallel efficient multi-objective optimization algorithms. The multi-point acquisition function is based on the hypervolume contribution of multiple solutions simultaneously, leading to well-spread solutions along the Pareto frontier (see the sketch after this record). By combining this acquisition function with a constraint handling technique, multiple feasible solutions can be proposed and evaluated in parallel in every iteration. The hypervolume and feasibility of the solutions can easily be estimated by using multiple cheap radial basis functions as surrogates with different configurations. The acquisition function can be used with different population sizes and even for one-shot optimization. The strength and generalizability of the new acquisition function are demonstrated by optimizing a set of black-box constrained multi-objective problem instances. The experiments show a substantial time-saving factor when using our novel multi-point acquisition function, while only marginally worsening the hypervolume after the same number of function evaluations.
- Published
- 2022
- Full Text
- View/download PDF
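The core of the acquisition function above is scoring a whole batch of candidates by the hypervolume they jointly add to the current Pareto front. A self-contained 2-D sketch of that quantity, assuming minimization and a simple sweep-line hypervolume (the actual method works on cheap radial-basis-function predictions and adds constraint handling):

```python
def hypervolume_2d(points, ref):
    """Hypervolume dominated by a set of 2-D points (minimization)."""
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:            # sweep in increasing x
        if y < prev_y:          # skip dominated points
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

def joint_contribution(batch, archive, ref):
    """Hypervolume gained by evaluating several candidates at once."""
    return hypervolume_2d(archive + batch, ref) - hypervolume_2d(archive, ref)

archive = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]   # current front
batch = [(1.5, 3.0), (3.0, 1.5)]                 # proposed batch
print(joint_contribution(batch, archive, ref=(5.0, 5.0)))  # 1.0
```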
7. Learning the characteristics of engineering optimization problems with applications in automotive crash
- Author
-
Long, F.X., Stein, B. van, Frenzel, M., Krause, P., Gitterle, M., Bäck, T.H.W., and Fieldsend, J.E.
- Published
- 2022
- Full Text
- View/download PDF
8. Quantum-Enhanced Selection Operators for Evolutionary Algorithms
- Author
-
Von Dollen, D., Yarkoni, S., Weimer, D., Neukart, F., Bäck, T.H.W., and Fieldsend, J.E.
- Subjects
FOS: Computer and information sciences, Genetic Algorithm, Quantum Physics, Computer Science - Machine Learning, Maximum Diversity Problem, Quantum-Enhanced Algorithms, FOS: Physical sciences, Computer Science - Neural and Evolutionary Computing, Selection Operators, Machine Learning (cs.LG), Combinatorial Optimization, Quantum-Inspired Algorithms, Quantum Computing, Neural and Evolutionary Computing (cs.NE), Evolutionary Algorithms, Quantum Physics (quant-ph), Quantum Annealing
- Abstract
Genetic algorithms have unique properties which are useful when applied to black-box optimization. Using selection, crossover, and mutation operators, candidate solutions may be obtained without the need to calculate a gradient. In this work, we study results obtained from using quantum-enhanced operators within the selection mechanism of a genetic algorithm. Our approach frames the selection process as a minimization of a binary quadratic model with which we encode fitness and distance between members of a population, and we leverage a quantum annealing system to sample low-energy solutions for the selection mechanism (a simplified sketch of this encoding follows this record). We benchmark these quantum-enhanced algorithms against classical algorithms over various black-box objective functions, including the OneMax function, and functions from the IOHProfiler library for black-box optimization. We observe a performance gain in the average number of generations to convergence for the quantum-enhanced elitist selection operator in comparison to the classical one on the OneMax function. We also find that the quantum-enhanced selection operator with non-elitist selection outperforms benchmarks on functions with fitness perturbation from the IOHProfiler library. Additionally, we find that in the case of elitist selection, the quantum-enhanced operators outperform classical benchmarks on functions with varying degrees of dummy variables and neutrality.
- Published
- 2022
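The selection-as-QUBO idea above can be sketched compactly. The encoding below (diagonal terms reward fitness, off-diagonal terms reward pairwise distance, and a quadratic penalty enforces selecting exactly k individuals) is an assumed, simplified variant of the paper's binary quadratic model, and brute-force enumeration stands in for the quantum annealer:

```python
import itertools
import numpy as np

def build_qubo(fitness, dist, k, penalty=10.0):
    """Upper-triangular QUBO rewarding fitness and diversity, with a
    quadratic penalty for deviating from exactly k selections."""
    n = len(fitness)
    Q = np.zeros((n, n))
    for i in range(n):
        Q[i, i] = -fitness[i] + penalty * (1 - 2 * k)
        for j in range(i + 1, n):
            Q[i, j] = -dist[i, j] + 2 * penalty
    return Q

def lowest_energy_state(Q):
    """Stand-in for a quantum annealer: exhaustive search (small n only)."""
    n = len(Q)
    return min(itertools.product([0, 1], repeat=n),
               key=lambda x: np.array(x) @ Q @ np.array(x))

rng = np.random.default_rng(0)
pop = rng.random((6, 3))                            # 6 individuals, 3 genes
fitness = pop.sum(axis=1)                           # toy fitness
dist = np.linalg.norm(pop[:, None] - pop[None, :], axis=-1)
print(lowest_energy_state(build_qubo(fitness, dist, k=2)))
```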
9. Unsupervised strategies for identifying optimal parameters in Quantum Approximate Optimization Algorithm
- Author
-
Moussa, C., Wang, H., Bäck, T.H.W., and Dunjko, V.
- Subjects
Quantum Physics, Control and Systems Engineering, FOS: Physical sciences, combinatorial optimization, Electrical and Electronic Engineering, QAOA, Quantum Physics (quant-ph), Condensed Matter Physics, quantum approximate optimization algorithm, quantum computing, Atomic and Molecular Physics, and Optics, clustering
- Abstract
As combinatorial optimization is one of the main quantum computing applications, many methods based on parameterized quantum circuits are being developed. In general, a set of parameters is tweaked to optimize a cost function out of the quantum circuit output. One of these algorithms, the Quantum Approximate Optimization Algorithm (QAOA), stands out as a promising approach to tackling combinatorial problems. However, finding the appropriate parameters is a difficult task. Although QAOA exhibits concentration properties, they can depend on instance characteristics that may not be easy to identify, but may nonetheless offer useful information to find good parameters. In this work, we study unsupervised machine learning approaches for setting these parameters without optimization. We perform clustering with the angle values but also with instance encodings (using instance features or the output of a variational graph autoencoder), and compare different approaches. These angle-finding strategies can be used to reduce calls to quantum circuits when leveraging QAOA as a subroutine. We showcase them within Recursive-QAOA up to depth 3, where the number of QAOA parameters used per iteration is limited to 3, achieving a median approximation ratio of 0.94 for MaxCut over 200 Erdős–Rényi graphs. We obtain similar performances to the case where we extensively optimize the angles, hence saving numerous circuit calls.
- Published
- 2022
- Full Text
- View/download PDF
10. Optimally weighted ensembles for efficient multi-objective optimization
- Author
-
Hanse, G., Winter, R. de, Stein, B. van, Bäck, T.H.W., Nicosia, G., Ojha, V., Malfa, E. La, Malfa, G. La, Jansen, G., Pardalos, P.M., Giuffrida, G., and Umeton, R.
- Published
- 2022
11. Automated configuration of genetic algorithms by tuning for anytime performance
- Author
-
Ye, F., Doerr, C., Wang, H., Bäck, T.H.W., and Fieldsend, J.E.
- Published
- 2022
- Full Text
- View/download PDF
12. Auto-REP: an automated regression pipeline approach for high-efficiency earthquake prediction using LANL data
- Author
-
Yang, F., Kefalas, M., Koch, M., Kononova, A.V., Qiao, Y., and Bäck, T.H.W.
- Published
- 2022
13. Hyperparameter importance of quantum neural networks across small datasets
- Author
-
Moussa, C., Rijn, J.N. van, Bäck, T.H.W., Dunjko, V., Pascal, P., and Ienco, D.
- Abstract
As restricted quantum computers are slowly becoming a reality, the search for meaningful first applications intensifies. In this domain, one of the more investigated approaches is the use of a special type of quantum circuit – a so-called quantum neural network – to serve as a basis for a machine learning model. Roughly speaking, as the name suggests, a quantum neural network can play a similar role to a neural network. However, specifically for applications in machine learning contexts, very little is known about suitable circuit architectures, or model hyperparameters one should use to achieve good learning performance. In this work, we apply the functional ANOVA framework to quantum neural networks to analyze which of the hyperparameters were most influential for their predictive performance. We analyze one of the most typically used quantum neural network architectures and apply it to 7 open-source datasets from the OpenML-CC18 classification benchmark whose number of features is small enough to fit on quantum hardware with less than 20 qubits. Three main levels of importance were detected from the ranking of hyperparameters obtained with functional ANOVA. Our experiment both confirmed expected patterns and revealed new insights. For instance, a well-chosen learning rate is deemed the most critical hyperparameter in terms of marginal contribution on all datasets, whereas the particular choice of entangling gates used is considered the least important, except on one dataset. This work introduces new methodologies to study quantum machine learning models and provides new insights toward quantum model selection. (A crude stand-in for such an importance analysis follows this record.)
- Published
- 2022
- Full Text
- View/download PDF
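The paper's importance rankings come from the functional ANOVA framework. As a loose stand-in (explicitly not fANOVA), a crude ranking can already be read off a random forest fitted on logged (configuration, score) pairs; all names and data below are hypothetical:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical tuning log: three hyperparameters and a validation score
# that, by construction, depends mainly on the first one.
rng = np.random.default_rng(1)
configs = rng.random((200, 3))
scores = 1 - (configs[:, 0] - 0.3) ** 2 + 0.05 * rng.standard_normal(200)

forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(configs, scores)
for name, imp in zip(["learning_rate", "depth", "entangler"],
                     forest.feature_importances_):
    print(f"{name}: {imp:.2f}")   # learning_rate should dominate
```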
14. IOHanalyzer
- Author
-
Wang, H., Vermetten, D., Ye, F., Doerr, C., Bäck, T.H.W., and Fieldsend, J.E.
- Published
- 2022
15. Towards time-series feature engineering in automated machine learning for multi-step-ahead forecasting
- Author
-
Wang, C., Baratchi, M., Bäck, T.H.W., Hoos, H.H., Limmer, S., Olhofer, M., Rojas, I., Pomares, H., Valenzuela, O., Rojas, F., and Herrera, L.J.
- Abstract
8th International Conference on Time Series and Forecasting (ITISE 2022), Gran Canaria, Spain, 27–30 June 2022; Engineering Proceedings 18(1), 17 (2022). doi:10.3390/engproc2022018017. Special issue "The 8th International Conference on Time Series and Forecasting, Gran Canaria, Spain, 27–30 June 2022", volume editors: Ignacio Rojas, Hector Pomares, Olga Valenzuela, Fernando Rojas, and Luis Javier Herrera. Published by MDPI, Basel. (A minimal lag-feature sketch for multi-step-ahead forecasting follows this record.)
- Published
- 2022
- Full Text
- View/download PDF
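A standard building block behind the kind of feature engineering named in the title above is turning a series into a supervised table of lag features and multi-step targets. A minimal sketch (my illustration, not the paper's pipeline):

```python
import numpy as np

def make_lag_features(series, n_lags, horizon):
    """Build X of the last n_lags values and y of the next
    `horizon` values, for multi-step-ahead forecasting."""
    X, y = [], []
    for t in range(n_lags, len(series) - horizon + 1):
        X.append(series[t - n_lags:t])
        y.append(series[t:t + horizon])
    return np.array(X), np.array(y)

series = np.sin(np.linspace(0, 20, 200))
X, y = make_lag_features(series, n_lags=12, horizon=3)
print(X.shape, y.shape)   # (186, 12) (186, 3)
```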
16. The unreasonable effectiveness of the final batch normalization layer
- Author
-
Kocaman, V., Shir, O.M., Bäck, T.H.W., Bebis, G., and Athitsos, V.
- Subjects
Batch normalisation, Computer vision, Imbalanced classification
- Abstract
Early-stage disease indications are rarely recorded in real-world domains, such as Agriculture and Healthcare, and yet their accurate identification is critical at that point in time. In this type of highly imbalanced classification problem, which encompasses complex features, deep learning (DL) is much needed because of its strong detection capabilities. At the same time, DL is observed in practice to favor majority over minority classes and consequently suffer from inaccurate detection of the targeted early-stage indications. In this work, we extend the study done by [11], showing that the final BN layer, when placed before the softmax output layer, has a considerable impact on highly imbalanced image classification problems and undermines the role of the softmax outputs as an uncertainty measure (see the sketch after this record). This study addresses additional hypotheses and reports on the following findings: (i) the performance gain after adding the final BN layer in highly imbalanced settings could still be achieved after removing this additional BN layer in inference; (ii) there is a certain threshold for the imbalance ratio upon which the progress gained by the final BN layer reaches its peak; (iii) the batch size also plays a role and affects the outcome of the final BN application; (iv) the impact of the BN application is also reproducible on other datasets and when utilizing much simpler neural architectures; (v) the reported BN effect occurs only with a single majority class and multiple minority classes, i.e., no improvements are evident when there are two majority classes; and finally, (vi) utilizing this BN layer with sigmoid activation has almost no impact when dealing with strongly imbalanced image classification tasks.
- Published
- 2022
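The architectural change studied above is small: a batch normalization layer inserted between the final linear layer and the softmax output. A minimal PyTorch sketch with placeholder sizes (the paper's backbones, datasets, and training setup are not reproduced here):

```python
import torch
import torch.nn as nn

num_features, num_classes = 512, 10     # placeholder sizes

# Classifier head with the extra BatchNorm placed between the final
# linear layer and the softmax output.
head = nn.Sequential(
    nn.Linear(num_features, num_classes),
    nn.BatchNorm1d(num_classes),        # the "final BN layer"
)

x = torch.randn(8, num_features)        # dummy batch of backbone features
probs = torch.softmax(head(x), dim=1)   # softmax sits after the BN layer
print(probs.shape)                      # torch.Size([8, 10])
```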
17. Artificial Intelligence for the Design of Symmetric Cryptographic Primitives
- Author
-
Mariot, L., Jacobovic, D., Bäck, T.H.W., Hernandez-Castro, J., Batina, L., Buhan, I., and Picek, S.
- Published
- 2022
- Full Text
- View/download PDF
18. Improving imbalanced classification by anomaly detection
- Author
-
Kong, J., Kowalczyk, W.J., Menzel, S., and Bäck, T.H.W.
- Subjects
Pattern recognition
- Abstract
Although the anomaly detection problem can be considered as an extreme case of the class imbalance problem, very few studies consider improving class imbalance classification with anomaly detection ideas. Most data-level approaches in the imbalanced learning domain aim to introduce more information to the original dataset by generating synthetic samples. However, in this paper, we gain additional information in another way, by introducing additional attributes. We propose to introduce the outlier score and four types of samples (safe, borderline, rare, outlier) as additional attributes in order to gain more information on the data characteristics and improve the classification performance (a minimal prototype of the outlier-score attribute follows this record). According to our experimental results, introducing additional attributes can improve the imbalanced classification performance in most cases (6 out of 7 datasets). A further study shows that this performance improvement is mainly driven by a more accurate classification in the overlapping region of the two classes (majority and minority). The proposed idea of introducing additional attributes is simple to implement and can be combined with resampling techniques and other algorithmic-level approaches in the imbalanced learning domain.
- Published
- 2020
- Full Text
- View/download PDF
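The additional-attribute idea above is easy to prototype: compute an outlier score per sample and append it as an extra column before training the classifier. The sketch below uses scikit-learn's IsolationForest as the scorer, which is my assumption; the paper also derives the four sample-type categories, not shown here:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import IsolationForest

# A synthetic imbalanced dataset (95% majority class).
X, y = make_classification(n_samples=500, weights=[0.95], random_state=0)

# Append an anomaly score for every sample as one extra attribute.
score = IsolationForest(random_state=0).fit(X).score_samples(X)
X_aug = np.column_stack([X, score])
print(X.shape, X_aug.shape)   # (500, 20) (500, 21)
```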
19. Solving the shipment rerouting problem with quantum optimization techniques
- Author
-
Yarkoni, S., Huck, A., Schülldorf, H., Speitkamp, B., Tabrizi, M.S., Leib, M., Bäck, T.H.W., Neukart, F., Mes, M., Lalla-Ruiz, E., and Voss, S.
- Published
- 2021
20. Tuning as a Means of Assessing the Benefits of New Ideas in Interplay with Existing Algorithmic Modules
- Author
-
Nobel, J.P. de, Vermetten, D., Wang, H., Doerr, C., Bäck, T.H.W., and Krawiec, K.
- Subjects
FOS: Computer and information sciences, Hyperparameter, Computer science, Computer Science - Neural and Evolutionary Computing, Modular design, Machine learning, Neural and Evolutionary Computing (cs.NE), Artificial intelligence
- Abstract
Introducing new algorithmic ideas is a key part of the continuous improvement of existing optimization algorithms. However, when introducing a new component into an existing algorithm, assessing its potential benefits is a challenging task. Often, the component is added to a default implementation of the underlying algorithm and compared against a limited set of other variants. This assessment ignores any potential interplay with other algorithmic ideas that share the same base algorithm, which is critical in understanding the exact contributions being made. We explore a more extensive procedure, which uses hyperparameter tuning as a means of assessing the benefits of new algorithmic components. This allows for a more robust analysis by not only focusing on the impact on performance, but also by investigating how this performance is achieved. We implement our suggestion in the context of the Modular CMA-ES framework, which was redesigned and extended to include some new modules and several new options for existing modules, mostly focused on the step-size adaptation method. Our analysis highlights the differences between these new modules, and identifies the situations in which they have the largest contribution.
- Published
- 2021
- Full Text
- View/download PDF
21. A new acquisition function for robust Bayesian optimization of unconstrained problems
- Author
-
Ullah, S., Wang, H., Menzel, S., Sendhoff, B., Bäck, T.H.W., and Chicano, F.
- Subjects
Mathematical optimization, Bayesian Optimization, Black-Box Optimization, Robust Optimization, Computer science, Function (mathematics), Noise, Kriging, Scenario testing
- Abstract
A new acquisition function is proposed for solving robust optimization problems via Bayesian Optimization. The proposed acquisition function reflects the need for the robust instead of the nominal optimum, and is based on the intuition of utilizing the higher moments of the improvement. The efficacy of Bayesian Optimization based on this acquisition function is demonstrated on four test problems, each affected by three different levels of noise. Our findings suggest the promising nature of the proposed acquisition function as it yields a better robust optimal value of the function in 6/12 test scenarios when compared with the baseline.
- Published
- 2021
22. Cluster-based Kriging approximation algorithms for complexity reduction
- Author
-
Stein, B. van, Wang, H., Kowalczyk, W.J., Emmerich, M.T.M., and Bäck, T.H.W.
- Subjects
FOS: Computer and information sciences, Computer Science - Artificial Intelligence, Computer science, Machine Learning (stat.ML), Evolutionary computation, Machine Learning (cs.LG), Surrogate model, Statistics - Machine Learning, Artificial Intelligence, Kriging, Approximation algorithm, Regression analysis, Data set, Artificial Intelligence (cs.AI), Data point, Algorithm
- Abstract
Kriging or Gaussian Process Regression is applied in many fields as a non-linear regression model as well as a surrogate model in the field of evolutionary computation. However, the computational and space complexity of Kriging, which are cubic and quadratic in the number of data points respectively, become a major bottleneck with more and more data available nowadays. In this paper, we propose a general methodology for complexity reduction, called cluster Kriging, where the whole data set is partitioned into smaller clusters and multiple Kriging models are built on top of them (a simplified sketch follows this record). In addition, four Kriging approximation algorithms are proposed as candidate algorithms within the new framework. Each of these algorithms can be applied to much larger data sets while maintaining the advantages and power of Kriging. The proposed algorithms are explained in detail and compared empirically against a broad set of existing state-of-the-art Kriging approximation methods on a well-defined testing framework. According to the empirical study, the proposed algorithms consistently outperform the existing algorithms. Moreover, some practical suggestions are provided for using the proposed algorithms.
- Published
- 2019
- Full Text
- View/download PDF
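The cluster-Kriging recipe above (partition the data, fit one Kriging model per cluster, route each query to a local model) can be sketched with scikit-learn. This is a simplified stand-in for the four proposed algorithms, which combine the local models more carefully:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, (600, 2))
y = np.sin(X[:, 0]) + np.cos(X[:, 1])

# Fit one GP per cluster; each model sees only ~n/k points, which cuts
# the cubic training cost of a single global Kriging model.
k = 4
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
gps = [GaussianProcessRegressor().fit(X[km.labels_ == i], y[km.labels_ == i])
       for i in range(k)]

def predict(x):
    """Route the query to the model of its nearest cluster centre."""
    i = km.predict(x.reshape(1, -1))[0]
    return gps[i].predict(x.reshape(1, -1))[0]

print(predict(np.array([1.0, -2.0])))
```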
23. Towards an adaptable quality monitoring process for self-piercing riveting
- Author
-
Noller, Walther, U., Meschut, G., and Bäck, T.H.W.
- Subjects
Computer science, Process (computing), Rivet, Quality monitoring, Manufacturing engineering
- Abstract
In the automotive industry, self-piercing riveting (SPR) is a standard joining technology, especially for vehicles with a high mix of materials. The applied quality control system and the underlying quality decisions have hardly changed in recent years. The commonly used combination of process monitoring and rework strategies leads to a large number of false positives and a high amount of manual work. The collected process data is not used comprehensively and misses the potential to improve the SPR process. This article introduces a quality monitoring method for SPR to show a proof of concept by using machine learning to predict faulty joining points. Based on numerous technical and statistical features extracted from the force-displacement curve of SPR, the trained model categorizes the observations into two rivet head height (RHH) classes. An evolutionary algorithm is used for the feature selection and a random forest model for classification. The resulting model achieves an accuracy of up to 84.4% and shows the potential of the developed random forest approach. The potential application of this approach in the context of serial body-shop production improves the prediction of joining quality and the process availability significantly. This enables the adaptable rework of non-critical joining points depending on the classified RHH.
- Published
- 2021
- Full Text
- View/download PDF
24. Addressing the Multiplicity of Solutions in Optical Lens Design as a Niching Evolutionary Algorithms Computational Challenge
- Author
-
Kononova, A.V., Shir, O.M., Tukker, T., Frisco, P., Zeng, S., Bäck, T.H.W., and Chicano, F.
- Subjects
Hessian matrix, Continuous optimization, FOS: Computer and information sciences, Mathematical optimization, Heuristic (computer science), Computer science, Computer Science - Artificial Intelligence, Evolutionary algorithm, Domain (software engineering), Maxima and minima, Computational Engineering, Finance, and Science (cs.CE), Artificial Intelligence (cs.AI), Problem domain, Optical lens design, Computer Science - Computational Engineering, Finance, and Science
- Abstract
Optimal lens design constitutes a fundamental, long-standing real-world optimization challenge. A potentially large number of optima, a rich variety of critical points, as well as a solid understanding of certain optimal designs for simple problem instances, altogether provide the motivation to address it as a niching challenge. This study applies the established Niching-CMA-ES heuristic to tackle this design problem (the 6-dimensional Cooke triplet) in a simulation-based fashion. The outcome of employing Niching-CMA-ES 'out-of-the-box' proves successful, and yet it performs best when assisted by a local searcher which accurately drives the search into optima. The obtained search-points are corroborated based upon concrete knowledge of this problem instance, accompanied by gradient and Hessian calculations for validation. We extensively report on this computational campaign, which overall resulted in (i) the location of 19 out of 21 known minima within a single run, (ii) the discovery of 540 new optima; these are new minima similar in shape to the 21 theoretical solutions, but some of them have better merit function values (unknown heretofore), and (iii) the identification of numerous infeasibility pockets throughout the domain (also unknown heretofore). We conclude that the niching mechanism is well-suited to address this problem domain, and hypothesize on the apparent multidimensional structures formed by the attained new solutions.
- Published
- 2021
25. Is there Anisotropy in Structural Bias?
- Author
-
Vermetten, D., Kononova, A.V., Caraffini, F., Wang, H., Bäck, T.H.W., and Krawiec, K.
- Subjects
FOS: Computer and information sciences, algorithmic behaviour, Computer science, Isotropy, Computer Science - Neural and Evolutionary Computing, Structural bias, uniformity, Methodology (stat.ME), statistical testing, Neural and Evolutionary Computing (cs.NE), Heuristics, Anisotropy, Algorithm, Statistics - Methodology, Statistical hypothesis testing
- Abstract
Structural Bias (SB) is an important type of algorithmic deficiency within iterative optimisation heuristics. However, methods for detecting structural bias have not yet fully matured, and recent studies have uncovered many interesting questions. One of these is the question of how structural bias can be related to anisotropy. Intuitively, an algorithm that is not isotropic would be considered structurally biased. However, there have been cases where algorithms appear to show SB only in some dimensions. As such, we investigate whether these algorithms actually exhibit anisotropy, and how this impacts the detection of SB. We find that anisotropy is very rare, and even in cases where it is present, there are clear tests for SB which do not rely on any assumptions of isotropy, so we can safely expand the suite of SB tests to encompass these kinds of deficiencies not found by the original tests. We propose several additional testing procedures for SB detection and aim to motivate further research into the creation of a robust portfolio of tests. This is crucial since no single test will be able to work effectively with all types of SB we identify.
- Published
- 2021
26. Tabu-Driven Quantum Neighborhood Samplers
- Author
-
Moussa, C., Wang, H., Calandra, H., Bäck, T.H.W., Dunjko, V., Zarges, C., and Verel, S.
- Subjects
Quantum Physics, Mathematical optimization, Optimization problem, Combinatorial optimization, Computer science, FOS: Physical sciences, Sample (statistics), Quantum computing, Tabu search, Quantum algorithm, Heuristics, Quantum Physics (quant-ph), Quantum computer
- Abstract
Combinatorial optimization is an important application targeted by quantum computing. However, near-term hardware constraints make quantum algorithms unlikely to be competitive when compared to high-performing classical heuristics on large practical problems. One option to achieve advantages with near-term devices is to use them in combination with classical heuristics. In particular, we propose using quantum methods to sample from classically intractable distributions (the most probable approach to attain a true, provable quantum separation in the near term), and to use these samples to solve optimization problems faster. We numerically study this enhancement by an adaptation of Tabu Search using the Quantum Approximate Optimization Algorithm (QAOA) as a neighborhood sampler. We show that QAOA provides a flexible tool for exploration-exploitation in such hybrid settings and can provide evidence that it can help in solving problems faster by saving many tabu iterations and achieving better solutions.
- Published
- 2021
27. SAMO-COBRA: a fast surrogate assisted constrained multi-objective optimization algorithm
- Author
-
Winter, R. de, Stein, B. van, Bäck, T.H.W., Ishibuchi, H., Zhang, Q., Cheng, R., Li, K., Li, H., Wang, H., and Zhou, A.
- Subjects
Mathematical optimization, Optimization problem, Computer science, Pareto principle, Constrained optimization, Local search (optimization), Radial basis function, Function (mathematics), Multi-objective optimization
- Abstract
This paper proposes SAMO-COBRA, a novel Self-Adaptive algorithm for Multi-Objective Constrained Optimization by using Radial Basis Function Approximations. The algorithm automatically determines the best Radial Basis Function fit as surrogate for each objective as well as each constraint, to find new feasible Pareto-optimal solutions (a minimal surrogate-layer sketch follows this record). The algorithm also uses hyper-parameter tuning on the fly to improve its local search strategy. In every iteration one solution is added and evaluated, resulting in a strategy requiring only a small number of function evaluations for finding a set of feasible solutions on the Pareto frontier. The proposed algorithm is compared to a wide set of other state-of-the-art algorithms (NSGA-II, NSGA-III, CEGO, SMES-RBF) on 18 constrained multi-objective problems. In the experiments we show that our algorithm outperforms the other algorithms in terms of achieved hypervolume given a fixed small evaluation budget. These results suggest that SAMO-COBRA is a good choice for optimizing constrained multi-objective optimization problems with expensive function evaluations.
- Published
- 2021
- Full Text
- View/download PDF
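The surrogate layer described above is deliberately cheap: one RBF model per objective and per constraint. A minimal sketch with SciPy's RBFInterpolator on toy data (SAMO-COBRA additionally selects the best RBF kernel per output and tunes its local search on the fly):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (40, 2))                 # already-evaluated designs
f1, f2 = X.sum(axis=1), (X ** 2).sum(axis=1)   # two toy objectives
g = X[:, 0] - X[:, 1]                          # toy constraint (feasible <= 0)

# One cheap surrogate per objective and per constraint.
surrogates = {name: RBFInterpolator(X, vals)
              for name, vals in [("f1", f1), ("f2", f2), ("g", g)]}

x_new = np.array([[0.3, 0.7]])                 # candidate to pre-screen
print({name: float(s(x_new)) for name, s in surrogates.items()})
```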
28. Discovering outstanding subgroup lists for numeric targets using MDL
- Author
-
Manuel Proença, H., Grünwald, P.D., Bäck, T.H.W., Leeuwen, M. van, Hutter, F., Kersting, K., Lijffijt, J., and Valera, I.
- Abstract
The task of subgroup discovery (SD) is to find interpretable descriptions of subsets of a dataset that stand out with respect to a target attribute. To address the problem of mining large numbers of redundant subgroups, subgroup set discovery (SSD) has been proposed. State-of-the-art SSD methods have their limitations though, as they typically rely heavily on heuristics and/or user-chosen hyperparameters. We propose a dispersion-aware problem formulation for subgroup set discovery that is based on the minimum description length (MDL) principle and subgroup lists. We argue that the best subgroup list is the one that best summarizes the data given the overall distribution of the target. We restrict our focus to a single numeric target variable and show that our formalization coincides with an existing quality measure when finding a single subgroup, but that, in addition, it allows trading off subgroup quality against the complexity of the subgroup. We next propose SSD++, a heuristic algorithm for which we empirically demonstrate that it returns outstanding subgroup lists: non-redundant sets of compact subgroups that stand out by having strongly deviating means and small spread.
- Published
- 2021
- Full Text
- View/download PDF
29. Locating the local minima in lens design with machine learning
- Author
-
Kononova, A.V., Shir, O., Tukker, T., Frisco, P., Zeng, S., Bäck, T.H.W., Johnson, R.B., Mahajan, V.N., and Thibault, S.
- Subjects
Maxima and minima, Lens (optics), Single objective, Cooke triplet, Heuristic, Function (mathematics), CMA-ES, Evolution strategy, Algorithm, Mathematics
- Abstract
We applied an extended version of the Niching-CMA-ES heuristic to search for local minima of the Cooke triplet, a renowned photographic lens design, of which 21 local minima were already known. The considered problem is defined by 6 input (decision) variables, namely the curvatures of the three lenses present in the Cooke triplet, and is driven by a single objective function, that is the RMS spot size. The applied approach found: (i) 19 out of the 21 known minima in a single run; (ii) 540 new local minima with objective values lower/equal to those of the known 21 minima; (iii) a large number of infeasible designs.
- Published
- 2021
30. Efficient AutoML via combinational sampling
- Author
-
Nguyen, D.A., Kononova, A.V., Menzel, S., Sendhoff, B., and Bäck, T.H.W.
- Abstract
Automated machine learning (AutoML) aims to automatically produce the best machine learning pipeline, i.e., a sequence of operators and their optimized hyperparameter settings, to maximize the performance on an arbitrary machine learning problem. Typically, Bayesian optimization (BO) based AutoML approaches convert the AutoML optimization problem into a Hyperparameter Optimization (HPO) problem, where the choice of algorithms is modeled as an additional categorical hyperparameter. In this way, algorithms and their local hyperparameters are treated at the same level. Consequently, this approach makes the resulting initial sampling less robust. In this study, we describe a first attempt to formulate the AutoML optimization problem in its natural form instead of transferring it into an HPO problem. To take advantage of this paradigm, we propose a novel initial sampling approach that maximizes the coverage of the AutoML search space, to help BO construct a robust surrogate model. We experiment with 2 independent scenarios of AutoML, with 2 operators and 6 operators, over 117 benchmark datasets. The results of our experiments demonstrate that the performance of BO is significantly improved by using our sampling approach.
- Published
- 2021
31. Ship design performance and cost optimization with machine learning
- Author
-
Winter, R. de, Stein, B. van, Bäck, T.H.W., and Bertram, V.
- Abstract
This contribution shows how, in the preliminary design stage, naval architects can make more informed decisions by using machine learning. In this ship design phase, little information is available, and decisions need to be made in a limited amount of time. However, it is in the preliminary design phase that the most influential decisions are made regarding the global dimensions, the machinery, and therefore the performance and costs. In this paper it is shown that machine learning models trained with data from reference vessels are more accurate when estimating key performance indicators than existing empirical design formulas. Finally, the combination of the trained models with optimization algorithms proves to be a powerful tool for finding Pareto-optimal designs from which the naval architect can learn.
- Published
- 2021
32. Designing Air Flow with Surrogate-assisted Phenotypic Niching
- Author
-
Hagg, A., Wilde, D., Asteroth, A., Bäck, T.H.W., Preuss, M., Deutz, A., Wang, H., Doerr, C., Emmerich, M.T.M., and Trautmann, H.
- Subjects
FOS: Computer and information sciences, Optimization problem, Computer science, Machine learning, Domain (software engineering), Computational Engineering, Finance, and Science (cs.CE), Feature (machine learning), Fluid dynamics, FOS: Mathematics, Mathematics - Numerical Analysis, Neural and Evolutionary Computing (cs.NE), Computer Science - Computational Engineering, Finance, and Science, Sampling (statistics), Computer Science - Neural and Evolutionary Computing, Numerical Analysis (math.NA), Solver, Artificial intelligence
- Abstract
In complex, expensive optimization domains we often narrowly focus on finding high performing solutions, instead of expanding our understanding of the domain itself. But what if we could quickly understand the complex behaviors that can emerge in said domains instead? We introduce surrogate-assisted phenotypic niching, a quality diversity algorithm which makes it possible to discover a large, diverse set of behaviors by using computationally expensive phenotypic features. In this work we discover the types of air flow in a 2D fluid dynamics optimization problem. A fast GPU-based fluid dynamics solver is used in conjunction with surrogate models to accurately predict fluid characteristics from the shapes that produce the air flow. We show that these features can be modeled in a data-driven way while sampling to improve performance, rather than explicitly sampling to improve feature models. Our method can reduce the need to run an infeasibly large set of simulations while still being able to design a large diversity of air flows and the shapes that cause them. Discovering diverse behaviors helps engineers to better understand expensive domains and their solutions.
- Published
- 2021
- Full Text
- View/download PDF
33. Explorative data analysis of time series based algorithm features of CMA-ES variants
- Author
-
Nobel, J.P. de, Wang, H., Bäck, T.H.W., and Chicano, F.
- Subjects
Series (mathematics), Computer science, Feature selection, Function (mathematics), Hierarchical clustering, Feature (machine learning), Cutoff, CMA-ES, Cluster analysis, Algorithm
- Abstract
In this study, we analyze behaviours of the well-known CMA-ES by extracting time-series features from its dynamic strategy parameters. An extensive experiment was conducted on twelve CMA-ES variants and 24 test problems taken from the BBOB (Black-Box Optimization Benchmarking) testbed, where we used two different cutoff times to stop those variants. We utilized the tsfresh package for extracting the features and performed the feature selection procedure using the Boruta algorithm, resulting in 32 features to distinguish either CMA-ES variants or the problems. After measuring the number of predefined targets reached by those variants, we contrive to predict those measured values on each test problem using the features. From our analysis, we saw that the features can classify the CMA-ES variants or the function groups decently, and show potential for predicting the performance of those variants. We conducted a hierarchical clustering analysis on the test problems and noticed a drastic change in the clustering outcome when comparing the longer cutoff time to the shorter one, indicating a substantial change in the search behaviour of the algorithm. In general, we found that with longer time series, the predictive power of the time-series features increases.
- Published
- 2021
34. An Analysis of Phenotypic Diversity in Multi-Solution Optimization
- Author
-
Hagg, A., Preuss, M., Asteroth, A., Bäck, T.H.W., Filipič, B., Minisci, E., and Vasile, M.
- Subjects
FOS: Computer and information sciences ,Computer Science - Machine Learning ,Computer Science - Neural and Evolutionary Computing ,Neural and Evolutionary Computing (cs.NE) ,Machine Learning (cs.LG) - Abstract
More and more, optimization methods are used to find diverse solution sets. We compare solution diversity in multi-objective optimization, multimodal optimization, and quality diversity in a simple domain. We show that multiobjective optimization does not always produce much diversity, multimodal optimization produces higher fitness solutions, and quality diversity is not sensitive to genetic neutrality and creates the most diverse set of solutions. An autoencoder is used to discover phenotypic features automatically, producing an even more diverse solution set with quality diversity. Finally, we make recommendations about when to use which approach.
- Published
- 2021
- Full Text
- View/download PDF
35. A combination of Fourier transform and machine learning for fault detection and diagnosis of induction motors
- Author
-
Nguyen, D.V., Zwanenburg, E., Limmer, S., Luijben, W., Bäck, T.H.W., and Olhofer, M.
- Published
- 2021
36. Improving many-objective evolutionary algorithms by means of edge-rotated cones
- Author
-
Wang, Y., Deutz, A.H., Emmerich, M.T.M., Bäck, T.H.W., Preuss, M., Wang, H., Doerr, C., and Trautmann, H.
- Published
- 2020
37. Benchmarking a (μ+λ) genetic algorithm with configurable crossover probability
- Author
-
Ye, F., Wang, H., Doerr, C., Bäck, T.H.W., Preuss, M., Deutz, A., Emmerich, M.T.M., and Trautmann, H.
- Published
- 2020
38. Can compact optimisation algorithms be structurally biased?
- Author
-
Kononova, A.V., Caraffini, F., Wang, H., Bäck, T.H.W., Preuss, M., Deutz, A., Doerr, C., Emmerich, M.T.M., and Trautmann, H.
- Published
- 2020
39. Improving NSGA-III for flexible job shop scheduling using automatic configuration, smart initialization and local search
- Author
-
Wang, Y., Stein, N. van, Bäck, T.H.W., and Emmerich, M.T.M.
- Subjects
Mathematical optimization, Job shop scheduling, Computer science, Offspring, Crossover, Population, Evolutionary algorithm, Algorithm engineering, Initialization, Multi-objective optimization, Combinatorial optimization, Local search (optimization)
- Abstract
This paper provides a short summary of a novel algorithm tailored towards multi-objective flexible job shop scheduling problems (FJSP). The results show that for challenging real-world problems in combinatorial optimization, off-the-shelf implementations of multi-objective evolutionary algorithms (MOEAs) might not work, but by using various adaptations, these methods can be tailored to provide excellent results. This is demonstrated for a state-of-the-art MOEA, NSGA-III, and the following adaptations: (1) initialization approaches to enrich the first-generation population, (2) various crossover operators to create a better diversity of offspring, (3) parameter tuning, to determine the optimal mutation probabilities, using the MIP-EGO configurator, (4) local search strategies to explore the neighborhood for better solutions. Using these measures, NSGA-III has been enabled to solve benchmark multi-objective FJSPs, and experimental results show excellent performance.
- Published
- 2020
40. Can Single Solution Optimisation Methods Be Structurally Biased?
- Author
-
Kononova, A.V., Caraffini, F., Wang, H., and Bäck, T.H.W.
- Abstract
This paper investigates whether optimisation methods with a population made up of one solution can suffer from structural bias just like their multi-solution variants. Following recent results highlighting the importance of the choice of strategy for handling solutions generated outside the domain, a selection of single-solution methods is considered in conjunction with several such strategies. The obtained results are tested for the presence of structural bias by means of a traditional approach from the literature and a statistical approach newly proposed here. These two tests are demonstrated to be not fully consistent. All tested methods are found to be structurally biased with at least one of the tested strategies. Confirming the results for multi-solution methods, it is this strategy that is shown to control the emergence of structural bias in single-solution methods. Some of the tested methods exhibit a kind of structural bias that has not been observed before.
- Published
- 2020
41. A deep dive into exploring the preference hypervolume
- Author
-
Hagg, A., Asteroth, A., Bäck, T.H.W., Kutz, O., and Pinto, H.S.
- Published
- 2020
42. Computer-implemented land planning system and method with GIS integration
- Author
-
Detwiler, M.W., Reynolds Jr, J.W., Watts, A.H., Breukelaar, R., and Bäck, T.H.W.
- Published
- 2020
43. Towards dynamic algorithm selection for numerical black-box optimization
- Author
-
Vermetten, D.L., Wang, H., Bäck, T.H.W., and Doerr, C.
- Published
- 2020
44. Time series encodings with temporal convolutional networks
- Author
-
Thill, M., Konen, W., Bäck, T.H.W., Filipic, B., Minisci, E., and Vasile, M.
- Subjects
Series (mathematics), Computer science, Benchmark (computing), Unsupervised learning, Labeled data, Anomaly detection, Pattern recognition, Artificial intelligence, Benchmark data, Anomaly (physics), Autoencoder
- Abstract
The training of anomaly detection models usually requires labeled data. We present in this paper a novel approach for anomaly detection in time series which trains unsupervised, using a convolutional approach coupled with an autoencoder framework. After training, only a small amount of labeled data is needed to adjust the anomaly threshold (see the sketch after this record). We show that our new approach outperforms several other state-of-the-art anomaly detection algorithms on a Mackey-Glass (MG) anomaly benchmark. At the same time, our autoencoder is capable of learning interesting representations in latent space. Our new MG anomaly benchmark makes it possible to create an unlimited amount of anomaly benchmark data with steerable difficulty. In this benchmark, the anomalies are well-defined, yet difficult to spot for the human eye.
- Published
- 2020
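The thresholding step described above (fully unsupervised training, then a small labeled set used only to place the anomaly threshold) can be illustrated with plain NumPy; the reconstruction errors below are synthetic stand-ins for the autoencoder's actual output:

```python
import numpy as np

rng = np.random.default_rng(0)
labeled_errors = rng.gamma(2.0, 1.0, 50)   # errors on labeled normal data

# Calibrate the threshold on the small labeled set only.
threshold = np.quantile(labeled_errors, 0.99)

new_errors = np.array([1.5, 3.2, 14.0])    # errors on fresh windows
print(new_errors > threshold)              # flags the clear outlier
```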
45. On the performance of oversampling techniques for class imbalance problems
- Author
-
Kong, J., de Jesus de Araujo Rios, T., Kowalczyk, W.J., Menzel, S., Lauw, H., Wong, R.W., Ntoulas, A., Lim, E.P., Ng, S.K., Pan, S., and Bäck, T.H.W.
- Abstract
Although over 90 oversampling approaches have been developed in the imbalanced learning domain, most of the empirical studies and application work are still based on the "classical" resampling techniques. In this paper, several experiments on 19 benchmark datasets are set up to study the efficiency of six powerful oversampling approaches, including both "classical" and new ones (a minimal resampling example follows this record). According to our experimental results, oversampling techniques that consider the minority class distribution (the new ones) perform better in most cases, and RACOG gives the best performance among the six reviewed approaches. We further validate our conclusion on our real-world inspired vehicle datasets and also find that applying oversampling techniques can improve the performance by around 10%. In addition, seven data complexity measures are considered for the initial purpose of investigating the relationship between data complexity measures and the choice of resampling techniques. Although no obvious relationship can be abstracted from our experiments, we find that the F1v value, a measure of class overlap which most researchers ignore, has a strong negative correlation with the potential AUC value (after resampling).
- Published
- 2020
- Full Text
- View/download PDF
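For readers who want to reproduce this kind of comparison, applying one of the "classical" oversamplers is a two-liner with the imbalanced-learn package; RACOG and the other distribution-aware methods studied in the paper are not part of that package, so SMOTE serves as the example here:

```python
from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, weights=[0.9], random_state=0)
print(Counter(y))                         # imbalanced class counts

# Oversample the minority class with synthetic samples.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y_res))                     # balanced after resampling
```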
46. Neural Network Design: Learning from Neural Architecture Search
- Author
-
Stein, N. van, Wang, H., and Bäck, T.H.W.
- Subjects
FOS: Computer and information sciences, Computer Science - Machine Learning, Computer science, Feature extraction, Machine learning, Machine Learning (cs.LG), Neural and Evolutionary Computing (cs.NE), Network architecture, Artificial neural network, Contextual image classification, Computer Science - Neural and Evolutionary Computing, Benchmark (computing), Artificial intelligence, Problem set, MNIST database
- Abstract
Neural Architecture Search (NAS) aims to optimize deep neural networks' architecture for better accuracy or smaller computational cost and has recently gained more research interest. Despite the various successful approaches proposed to solve the NAS task, its landscape, along with its properties, is rarely investigated. In this paper, we argue for the necessity of studying the landscape property thereof and propose to use the so-called Exploratory Landscape Analysis (ELA) techniques for this goal. Taking a broad set of designs of the deep convolutional network, we conduct extensive experimentation to obtain their performance. Based on our analysis of the experimental results, we observed high similarities between well-performing architecture designs, which is then used to significantly narrow the search space and improve the efficiency of any NAS algorithm. Moreover, we extract the ELA features over the NAS landscapes on three common image classification data sets, MNIST, Fashion, and CIFAR-10, which shows that the NAS landscapes can be distinguished for those three data sets. Also, when comparing to the ELA features of the well-known Black-Box Optimization Benchmarking (BBOB) problem set, we found that the NAS landscapes surprisingly form a new problem class of their own, which can be separated from all 24 BBOB problems. Given this interesting observation, we therefore state the importance of further investigation into selecting an efficient optimizer for the NAS landscape, as well as the necessity of augmenting the current benchmark problem set.
- Published
- 2020
- Full Text
- View/download PDF
47. Exploring dimensionality reduction techniques for efficient surrogate-assisted optimization
- Author
-
Ullah, S., Nguyen, D.A., Wang, H., Menzel, S., Sendhoff, B., and Bäck, T.H.W.
- Published
- 2020
48. Sequential vs. Integrated Algorithm Selection and Configuration: A Case Study for the Modular CMA-ES
- Author
-
Vermetten, D.L., Wang, H., Doerr, C., and Bäck, T.H.W.
- Abstract
When faced with a specific optimization problem, choosing which algorithm to use is always a tough task. Not only is there a vast variety of algorithms to select from, but these algorithms often are controlled by many hyperparameters, which need to be tuned in order to achieve the best performance possible. Usually, this problem is separated into two parts: algorithm selection and algorithm configuration. With the significant advances made in Machine Learning, however, these problems can be integrated into a combined algorithm selection and hyperparameter optimization task, commonly known as the CASH problem. In this work we compare sequential and integrated algorithm selection and configuration approaches for the case of selecting and tuning the best out of 4608 variants of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) tested on the Black Box Optimization Benchmark (BBOB) suite. We first show that the ranking of the modular CMA-ES variants depends to a large extent on the quality of the hyperparameters. This implies that even a sequential approach based on complete enumeration of the algorithm space will likely result in sub-optimal solutions. In fact, we show that the integrated approach manages to provide competitive results at a much smaller computational cost. We also compare two different mixed-integer algorithm configuration techniques, called irace and Mixed-Integer Parallel Efficient Global Optimization (MIP-EGO). While we show that the two methods differ significantly in their treatment of the exploration-exploitation balance, their overall performances are very similar.
- Published
- 2020
49. Weighted ensembles in model-based global optimization
- Author
-
Friese, M.M., Bartz-Beielstein, T., Bäck, T.H.W., Naujoks, B., Emmerich, M.T.M., Deutz, H., Hille, C., and Sergeyev, Y.D.
- Subjects
Mathematical optimization, Surrogate model, Generalization, Computer science, Model selection, Regression analysis, Function (mathematics), Global optimization
- Abstract
It is a common technique in global optimization with expensive black-box functions to learn a regression model (or surrogate model) of the response function from past evaluations and to use this model to decide on the location of future evaluations. In surrogate-model-assisted optimization it can be difficult to select the right modeling technique. Without preliminary knowledge about the function, it might be beneficial if the algorithm trains as many different surrogate models as possible and selects the model with the smallest training error. This is known as model selection. Recently, a generalization of this approach was proposed: instead of selecting a single model, optimal convex combinations of model predictions are used (a minimal sketch follows this record). This approach, called model mixtures, is adopted and evaluated in the context of sequential parameter optimization. Besides discussing the general strategy, the optimal frequency of learning the convex weights is investigated. The feasibility of this approach is examined and its benefits are compared to those of simpler methods.
- Published
- 2019
- Full Text
- View/download PDF
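The model-mixture idea above boils down to a small constrained least-squares problem: find non-negative weights summing to one that minimize the validation error of the combined prediction. A sketch on synthetic data (the paper embeds this inside sequential parameter optimization and re-learns the weights periodically):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical validation targets and three surrogate models' predictions,
# with increasing noise so the first model is the most accurate.
rng = np.random.default_rng(0)
y_val = rng.standard_normal(50)
preds = np.vstack([y_val + s * rng.standard_normal(50) for s in (0.3, 0.8, 1.5)])

def loss(w):
    """Mean squared error of the convex combination of predictions."""
    return np.mean((w @ preds - y_val) ** 2)

res = minimize(loss, x0=np.ones(3) / 3,
               bounds=[(0, 1)] * 3,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}])
print(res.x.round(3))   # the most accurate model gets the largest weight
```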
50. Diversity-Indicator Based Multi-Objective Evolutionary Algorithm: DI-MOEA
- Author
-
Wang, Y., Emmerich, M.T.M., Deutz, A., Bäck, T.H.W., Deb, K., Goodman, E., Coello Coello, C.A., Klamroth, K., Miettinen, K., Mostaghim, S., and Reed, P.
- Published
- 2019
- Full Text
- View/download PDF