
Real-world problems such as computational fluid dynamics simulations and finite element analyses are computationally expensive. A standard approach to mitigating this expense is Surrogate-Based Optimization (SBO). Yet, due to the high dimensionality of many simulation problems, SBO is either not directly applicable or not efficient. Reducing the dimensionality of the search space is one way to overcome this limitation. Beyond restoring the applicability of SBO, dimensionality reduction enables easier data handling and improves data and model interpretability. Regularization is one state-of-the-art technique for dimensionality reduction. We propose a hybridization approach called Regularized-Surrogate-Optimization (RSO) aimed at overcoming difficulties related to high dimensionality. It couples standard Kriging-based SBO with regularization techniques. The employed regularization methods are based on three adaptations of the least absolute shrinkage and selection operator (LASSO). In addition, tree-based methods are analyzed as an alternative variable-selection technique. An extensive study is performed on a set of artificial test functions and two real-world applications: the electrostatic precipitator problem and a multilayered composite design problem. Experiments reveal that RSO requires significantly less time than standard SBO to obtain comparable results. The pros and cons of the RSO approach are discussed, and recommendations for practitioners are presented.
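
The core RSO coupling can be sketched in a few lines: a LASSO fit screens for active variables, and a Kriging surrogate is then built only in the reduced subspace. The Python sketch below is a minimal illustration of this idea, not the paper's implementation; the toy objective, the regularization strength `alpha`, and the selection threshold are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def expensive_objective(X):
    # Toy stand-in: only the first 3 of 20 variables influence the output.
    return 3 * X[:, 0] - 2 * X[:, 1] + 1.5 * X[:, 2]

X = rng.uniform(-1, 1, size=(40, 20))   # small initial design
y = expensive_objective(X)

# Step 1: regularized linear fit; nonzero coefficients mark active variables.
active = np.flatnonzero(np.abs(Lasso(alpha=0.05).fit(X, y).coef_) > 1e-6)

# Step 2: Kriging surrogate restricted to the selected subspace.
gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X[:, active], y)

print("selected variables:", active)   # this cheap surrogate can now drive SBO
```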

When designing or developing optimization algorithms, test functions are crucial for evaluating performance. Often, test functions are not sufficiently difficult, diverse, flexible, or relevant to real-world applications. Previously, test functions with real-world relevance were generated by training a machine learning model on real-world data; the model's estimation was then used as a test function. We propose a more principled approach that uses simulation instead of estimation. This creates relevant and varied test functions that represent the behavior of real-world fitness landscapes. Importantly, estimation can lead to excessively smooth test functions, while simulation may avoid this pitfall. Moreover, the simulation can be conditioned on the data, so that it reproduces the training data but features diverse behavior in unobserved regions of the search space. The proposed test function generator is illustrated with an intuitive, one-dimensional example. To demonstrate the utility of this approach, it is applied to a protein sequence optimization problem. This application demonstrates the advantages as well as the practical limits of simulation-based test functions.
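
The distinction between estimation and simulation can be made concrete with a Gaussian process in one dimension: the posterior mean is the (smooth) estimation, while a sample path drawn from the posterior is a conditioned simulation that reproduces the data yet varies elsewhere. The sketch below, with an assumed RBF kernel and made-up data, illustrates the idea; it is not the paper's generator.

```python
import numpy as np

def rbf(a, b, length=0.3):
    # Squared-exponential kernel on 1-D inputs (an assumed choice).
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

rng = np.random.default_rng(7)
x_obs = np.array([0.1, 0.4, 0.6, 0.9])   # training data (conditioning points)
y_obs = np.array([0.2, -0.5, 0.3, 0.1])
x_grid = np.linspace(0, 1, 200)

K = rbf(x_obs, x_obs) + 1e-8 * np.eye(len(x_obs))
K_s = rbf(x_grid, x_obs)

mean = K_s @ np.linalg.solve(K, y_obs)   # estimation: the smooth posterior mean
cov = rbf(x_grid, x_grid) - K_s @ np.linalg.solve(K, K_s.T)
L = np.linalg.cholesky(cov + 1e-6 * np.eye(len(x_grid)))
sample = mean + L @ rng.standard_normal(len(x_grid))   # conditioned simulation

# 'mean' interpolates the data but is overly smooth; 'sample' also reproduces
# y_obs at x_obs (up to jitter) yet shows varied behavior between the points.
```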

An essential task for the operation and planning of biogas plants is the optimization of substrate feed mixtures. Optimizing the monetary gain requires determining the exact amounts of maize, manure, grass silage, and other substrates. Accurate simulation models are mandatory for this optimization, because the underlying chemical processes are very slow. Since the simulation models themselves may be time-consuming to evaluate, we show how to use surrogate-model-based approaches to optimize biogas plants efficiently. Specifically, a Kriging surrogate is employed. To improve the quality of this surrogate, we integrate cheaply available data into the optimization process via multi-fidelity modeling methods such as Co-Kriging. Furthermore, a two-layered modeling approach avoids the deterioration of model quality caused by discontinuities in the search space. At the same time, the cheaply available data proves very useful for initializing the employed optimization algorithms. Overall, we show how biogas plants can be efficiently modeled using data-driven methods, avoiding discontinuities as well as including cheaply available data. Applying the derived surrogate models within an optimization process is shown to be challenging, yet successful for a lower problem dimension.
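
A common simplification of Co-Kriging (in the spirit of the Kennedy-O'Hagan scheme) treats the expensive response as a scaled cheap model plus a learned discrepancy. The Python sketch below illustrates this additive two-fidelity idea under assumed toy functions and a fixed scaling factor `rho`; a full Co-Kriging model estimates these quantities jointly.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)

def cheap(x):          # stand-in for a fast, low-fidelity simulation
    return np.sin(8 * x).ravel()

def costly(x):         # stand-in for the slow, high-fidelity simulation
    return (np.sin(8 * x) + 0.3 * x).ravel()

X_lo = rng.uniform(0, 1, (30, 1))   # many cheap evaluations
X_hi = rng.uniform(0, 1, (6, 1))    # few expensive evaluations

gp_lo = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X_lo, cheap(X_lo))

rho = 1.0  # fidelity scaling; estimated from data in full Co-Kriging
delta = costly(X_hi) - rho * gp_lo.predict(X_hi)
gp_delta = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X_hi, delta)

def predict_hi(x):
    # Multi-fidelity prediction: scaled cheap model plus learned discrepancy.
    return rho * gp_lo.predict(x) + gp_delta.predict(x)

print(predict_hi(np.array([[0.5]])))
```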

The performance of optimization algorithms relies crucially on their parameterizations. Finding good parameter settings is called algorithm tuning. Using a simple simulated annealing algorithm, we demonstrate how optimization algorithms can be tuned using the Sequential Parameter Optimization Toolbox (SPOT). SPOT provides several tools for automated and interactive tuning. The underlying concepts of the SPOT approach are explained, including key techniques such as exploratory fitness landscape analysis and response surface methodology. Many examples illustrate how SPOT can be used to understand the performance of algorithms and to gain insight into algorithm behavior. Furthermore, we demonstrate how SPOT can be used as an optimizer and how a sophisticated ensemble approach is able to combine several metamodels via stacking.
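
SPOT itself is an R toolbox; the following Python sketch only illustrates the sequential tuning loop it implements: evaluate the tuned algorithm on an initial design of parameter settings, fit a surrogate to the observed performance, and propose the next setting where the surrogate predicts the best value. The toy simulated annealing routine, its cooling schedule, and the predicted-value infill are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def run_sa(temperature, steps=200):
    # Toy simulated annealing on f(x) = x^2; returns the best value found.
    x, best = 5.0, 25.0
    for t in range(steps):
        cand = x + rng.normal(0, 1)
        temp_t = max(temperature * 0.99 ** t, 1e-9)       # geometric cooling
        if cand**2 < x**2 or rng.random() < np.exp(-(cand**2 - x**2) / temp_t):
            x = cand
        best = min(best, x**2)
    return best

temps = rng.uniform(0.1, 10, 8)                           # initial design
perf = np.array([run_sa(t) for t in temps])
for _ in range(10):                                       # sequential tuning loop
    gp = GaussianProcessRegressor(normalize_y=True).fit(temps.reshape(-1, 1), perf)
    grid = np.linspace(0.1, 10, 200).reshape(-1, 1)
    t_next = grid[np.argmin(gp.predict(grid))][0]         # predicted-value infill
    temps, perf = np.append(temps, t_next), np.append(perf, run_sa(t_next))

print("best temperature found:", temps[np.argmin(perf)])
```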

When researchers and practitioners in the field of computational intelligence are confronted with real-world problems, the question arises as to which method is best to apply. Nowadays, several well-established test suites and well-known artificial benchmark functions are available. However, the relevance and applicability of these methods to real-world problems remain an open question in many situations, and their generalizability cannot be taken for granted. This paper describes a data-driven approach for the generation of test instances, which is based on real-world data. The test instance generation comprises data preprocessing, feature extraction, modeling, and parameterization. We apply this methodology to a classical design-of-experiments real-world project and generate test instances for benchmarking, e.g., design methods, surrogate techniques, and optimization algorithms. While most published results of methods applied to real-world problems do not make the underlying data available for comparison, our future goal is to create a toolbox covering multiple data sets from real-world projects, providing a test function generator to the research community.
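
The described pipeline (preprocessing, feature extraction, modeling, parameterization) can be sketched as a small factory that turns measured data into a parameterized test function. The Python sketch below is a hedged illustration: the random forest model class, the synthetic stand-in data, and the noise parameter are assumptions, not the paper's concrete choices.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestRegressor

def make_test_instance(X_raw, y_raw, noise=0.0, seed=0):
    rng = np.random.default_rng(seed)
    scaler = StandardScaler().fit(X_raw)              # data preprocessing
    model = RandomForestRegressor(random_state=seed)  # modeling step
    model.fit(scaler.transform(X_raw), y_raw)

    def test_function(x):
        # Parameterization: an optional noise level varies instance difficulty.
        x = np.atleast_2d(x)
        return model.predict(scaler.transform(x)) + rng.normal(0, noise, len(x))

    return test_function

# Usage with synthetic stand-in data (a real project would use measured data):
X = np.random.default_rng(1).uniform(0, 1, (50, 4))
y = X[:, 0] + np.sin(3 * X[:, 1])
f = make_test_instance(X, y, noise=0.05)
print(f([0.5, 0.5, 0.5, 0.5]))
```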

We propose a hybridization approach called Regularized-Surrogate-Optimization (RSO) aimed at overcoming difficulties related to high dimensionality. It combines standard Kriging-based SMBO with regularization techniques. The employed regularization methods use the least absolute shrinkage and selection operator (LASSO). An extensive study is performed on a set of artificial test functions and two real-world applications: the electrostatic precipitator problem and a multilayered composite design problem. Experiments reveal that RSO requires significantly less time than Kriging to obtain comparable results. The pros and cons of the RSO approach are discussed and recommendations for practitioners are presented.

Surrogate-based optimization relies on so-called infill criteria (acquisition functions) to decide which point to evaluate next. When Kriging is used as the surrogate model of choice (also called Bayesian optimization), one of the most frequently chosen criteria is expected improvement. We argue that the popularity of expected improvement largely relies on its theoretical properties rather than empirically validated performance. A few results from the literature provide evidence that, under certain conditions, expected improvement may perform worse than something as simple as the predicted value of the surrogate model. We benchmark both infill criteria in an extensive empirical study on the ‘BBOB’ function set. This investigation includes a detailed study of the impact of problem dimensionality on algorithm performance. The results support the hypothesis that exploration loses importance with increasing problem dimensionality. A statistical analysis reveals that the purely exploitative search with the predicted-value criterion performs better on most problems of five or more dimensions. Possible reasons for these results are discussed. In addition, we give an in-depth guide for choosing the infill criterion based on prior knowledge about the problem at hand, its dimensionality, and the available budget.
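
The two criteria compared in the study have standard closed forms. For a Kriging prediction with mean mu(x), standard deviation sigma(x), and best observed value f_min (minimization), expected improvement is EI(x) = (f_min - mu(x)) Phi(z) + sigma(x) phi(z) with z = (f_min - mu(x)) / sigma(x), while the predicted-value criterion is simply mu(x). The sketch below spells this out; the example values are illustrative.

```python
import numpy as np
from scipy.stats import norm

def predicted_value(mu, sigma, f_min):
    # Purely exploitative criterion: the surrogate mean, uncertainty ignored.
    return mu

def expected_improvement(mu, sigma, f_min):
    # EI(x) = (f_min - mu) * Phi(z) + sigma * phi(z),  z = (f_min - mu) / sigma
    sigma = np.maximum(sigma, 1e-12)   # guard against zero predicted uncertainty
    z = (f_min - mu) / sigma
    return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Two candidates with equal mean but different uncertainty: EI prefers the
# uncertain one (exploration); the predicted value cannot distinguish them.
mu, sigma = np.array([0.5, 0.5]), np.array([0.01, 0.5])
print(expected_improvement(mu, sigma, f_min=0.6))
print(predicted_value(mu, sigma, f_min=0.6))
```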

The availability of several CPU cores on current computers enables parallelization and increases the computational power significantly. Optimization algorithms have to be adapted to exploit these highly parallelized systems and to evaluate multiple candidate solutions in each iteration. This issue is especially challenging for expensive optimization problems, where surrogate models are employed to reduce the load of objective function evaluations. This paper compares different approaches for surrogate model-based optimization in parallel environments. Additionally, an easy-to-use method, which was developed for an industrial project, is proposed. All described algorithms are tested on a variety of standard benchmark functions. Furthermore, they are applied to a real-world engineering problem, the electrostatic precipitator problem. Expensive computational fluid dynamics simulations are required to estimate the performance of the precipitator. The task is to optimize a gas-distribution system so that a desired velocity distribution is achieved for the gas flow throughout the precipitator. The vast number of possible configurations leads to a complex discrete-valued optimization problem. The experiments indicate that a hybrid approach works best: it proposes candidate solutions based on different surrogate model-based infill criteria and on evolutionary operators.
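
The hybrid batch idea can be sketched as follows: in each iteration, one candidate is taken from an exploratory infill criterion (expected improvement), one from pure exploitation (the predicted value), and one from an evolutionary operator applied to the current best point; the batch is then evaluated in parallel. The Python sketch below illustrates this under assumed details (random candidate grid, Gaussian mutation scale); it is not the paper's exact algorithm.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def propose_batch(gp, X, y, bounds, rng, n_grid=500):
    # Score a random candidate set under two infill criteria.
    cand = rng.uniform(bounds[0], bounds[1], (n_grid, X.shape[1]))
    mu, sigma = gp.predict(cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)
    z = (y.min() - mu) / sigma
    ei = (y.min() - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    x_ei = cand[np.argmax(ei)]                                 # exploratory point
    x_pv = cand[np.argmin(mu)]                                 # exploitative point
    x_evo = X[np.argmin(y)] + rng.normal(0, 0.1, X.shape[1])   # mutated best point
    return np.vstack([x_ei, x_pv, np.clip(x_evo, *bounds)])

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (10, 2))
y = np.sum(X**2, axis=1)
gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
print(propose_batch(gp, X, y, bounds=(-1, 1), rng=rng))  # evaluate in parallel
```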

Computational intelligence methods have gained importance in several real-world domains such as process optimization, system identification, data mining, or statistical quality control. However, tools that determine the applicability of computational intelligence methods in these application domains in an objective manner are missing. Statistics provides methods for comparing algorithms on certain data sets. In the past, several test suites were presented and considered state of the art. However, these test suites have several drawbacks: (i) problem instances are somewhat artificial and have no direct link to real-world settings; (ii) since there is a fixed number of test instances, algorithms can be fitted or tuned to this specific and very limited set of test functions; (iii) statistical tools for comparing several algorithms on several test problem instances are relatively complex and not easy to analyze. We propose a methodology to overcome these difficulties. It is based on standard ideas from statistics: analysis of variance and its extension to mixed models. This paper combines essential ideas from two approaches: problem generation and statistical analysis of computer experiments.
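
The mixed-model idea can be illustrated in a few lines of Python: the algorithm is treated as a fixed effect, while randomly generated problem instances enter as a random effect, so conclusions generalize beyond the specific instances used. The data below is synthetic and the effect sizes are assumptions; in the proposed methodology the performance values would come from actual runs on generated instances.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
rows = []
for inst in range(20):                       # 20 randomly generated instances
    inst_effect = rng.normal(0, 1)           # instance-specific difficulty
    for algo, algo_effect in [("A", 0.0), ("B", -0.5)]:
        for rep in range(5):                 # repeated runs per cell
            rows.append({"instance": f"i{inst}", "algorithm": algo,
                         "perf": 10 + inst_effect + algo_effect + rng.normal(0, 0.3)})
df = pd.DataFrame(rows)

# Fixed effect: algorithm; random intercept: problem instance.
fit = smf.mixedlm("perf ~ algorithm", df, groups=df["instance"]).fit()
print(fit.summary())
```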

This survey compiles ideas and recommendations from more than a dozen researchers with different backgrounds and from different institutes around the world. Promoting best practice in benchmarking is its main goal. The article discusses eight essential topics in benchmarking: clearly stated goals, well-specified problems, suitable algorithms, adequate performance measures, thoughtful analysis, effective and efficient designs, comprehensible presentations, and guaranteed reproducibility. The final goal is to provide well-accepted guidelines (rules) that might be useful for authors and reviewers. As benchmarking in optimization is an active and evolving field of research, this manuscript is meant to co-evolve over time by means of periodic updates.

Surrogate-based optimization and nature-inspired metaheuristics have become the state of the art in solving real-world optimization problems. Still, it is difficult for beginners and even experts to get an overview of the large number of available methods in continuous optimization and of their respective advantages. Available taxonomies lack the integration of surrogate-based approaches and thus their embedding in the larger context of this broad field.
This article presents a taxonomy of the field that further matches the idea of nature-inspired algorithms, as it is based on human path-finding behavior. Intuitive analogies make the most basic principles of the search algorithms easy to grasp, even for beginners and non-experts in this area of research. However, this scheme does not oversimplify the high complexity of the different algorithms, as the class identifier only defines a descriptive meta-level of the algorithms' search strategies. The taxonomy was established by exploring and matching algorithm schemes, extracting similarities and differences, and creating a set of classification indicators to distinguish between five distinct classes. In practice, this taxonomy allows recommendations on the applicability of the corresponding algorithms and helps developers trying to create or improve their own algorithms.

RGP is a genetic programming system based on, and fully integrated into, the R environment. The system implements classical tree-based genetic programming as well as other variants, including strongly typed genetic programming and Pareto genetic programming. It strives for high modularity through a consistent architecture that allows the customization and replacement of every algorithm component, while maintaining accessibility for new users by adhering to the "convention over configuration" principle.
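
RGP itself is implemented in R; purely as an illustration of the classical tree representation it builds on, the following Python sketch encodes expressions as nested tuples, generates them randomly, and evaluates them recursively. The function set, terminal set, and depth limit are arbitrary assumptions, not RGP's API.

```python
import random
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}  # function set
TERMINALS = ["x", 1.0, 2.0]                                      # terminal set

def random_tree(depth=3):
    # Grow a random expression tree: leaves are terminals, inner nodes operators.
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    # Recursively evaluate a nested-tuple expression at a given input x.
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

random.seed(4)
individual = random_tree()
print(individual, "->", evaluate(individual, x=2.0))
```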