Date: 11.09.2024, 15:30-17:00
Room: 26-25/105
Motivation
Algorithm selection aims to identify the best algorithm for a given problem instance, leveraging the strengths of different algorithms across various problems. However, selecting the optimal algorithm for an unseen instance is a complex challenge, attracting substantial research interest. This tutorial surveys key contributions to algorithm selection in single-objective continuous black-box optimization. It explores current efforts in using representation learning to derive meta-features for optimization problems, algorithms, and their interactions. Additionally, it examines the application of machine learning models for automated algorithm selection, configuration, and performance prediction. By analyzing these advancements, the tutorial highlights existing gaps in the field and proposes directions for enhancing meta-feature representations to improve algorithm selection efficacy.
The tutorial will provide an overview of recent trends in learning techniques for meta-features that describe problem characteristics, algorithm characteristics, and algorithm-problem interactions (i.e., trajectory-based features). All of them will be presented in the single-objective black-box optimization scenario, with opportunities for transfer to other learning tasks.

Problem landscape features are independent of any specific algorithm’s behavior on a problem instance. Various methods have been developed to capture the characteristics of single-objective continuous optimization problems, categorized into high-level features, such as those in Fitness Landscape Analysis (FLA), and low-level features, such as those in Exploratory Landscape Analysis (ELA), Topological Landscape Analysis (TLA), and deep learning-based approaches. High-level features are human-interpretable, while most low-level features are considered black-box features, which can enhance machine learning tasks.

Algorithm features describe the characteristics of specific algorithm instances and can be derived from various sources, including source code, performance metrics, graph embeddings, and configuration settings. Trajectory-based features, in turn, capture the interaction between a problem instance and an optimization algorithm by using samples generated during the algorithm’s execution, providing a dynamic representation of this relationship. Machine learning models trained on these features can address algorithm selection through multi-class classification, multi-label classification, regression, or ranking approaches. We will also provide a comprehensive comparison of machine learning techniques for algorithm selection using these features in single-objective optimization.
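To make the idea of sample-based landscape features concrete, here is a minimal, hypothetical sketch of two simple features computed from random samples of a black-box function. These are illustrative stand-ins chosen for brevity, not the actual ELA feature definitions used in the literature:

```python
import numpy as np

def simple_landscape_features(f, dim, n_samples=200, seed=0):
    """Toy sample-based landscape features for a black-box
    function f: [-5, 5]^dim -> R (illustrative, not real ELA)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, size=(n_samples, dim))
    y = np.array([f(x) for x in X])

    # Feature 1: fitness-distance correlation to the best sampled point.
    # Near +1 suggests a globally convex ("funnel") structure.
    best = X[np.argmin(y)]
    dists = np.linalg.norm(X - best, axis=1)
    fdc = np.corrcoef(dists, y)[0, 1]

    # Feature 2: dispersion ratio -- mean pairwise distance among the
    # best 10% of samples divided by that of the full sample.
    # Values well below 1 suggest good points cluster together.
    def mean_pairwise(P):
        d = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
        n = len(P)
        return d.sum() / (n * (n - 1))

    k = max(2, n_samples // 10)
    elite = X[np.argsort(y)[:k]]
    disp_ratio = mean_pairwise(elite) / mean_pairwise(X)
    return {"fdc": fdc, "dispersion_ratio": disp_ratio}

# On the (unimodal) sphere function, both features signal an easy landscape.
sphere = lambda x: float(np.sum(x ** 2))
feats = simple_landscape_features(sphere, dim=5)
```

Such per-problem feature vectors are what downstream algorithm selection models consume as input.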
The tutorial will follow our survey article on meta-features for automated algorithm selection in black-box single-objective continuous optimization (https://arxiv.org/abs/2406.06629).
Outline
- Introduction to the automated algorithm selection (AAS) pipeline
- Calculating/Learning problem landscape features
- Exploratory Landscape Analysis
- Topological Landscape Analysis
- Deep Learning-based features
- Features used by Convolutional Neural Networks
- Features learned using a point cloud transformer
- DoE2Vec features learned by using a variational autoencoder
- TransOpt features learned by using a transformer
- Deep-ELA features
- Calculating/Learning algorithm features
- Algorithm Features Based on Source Code
- Algorithm Features Based on Performance
- Algorithm Features Based on Problem Landscape Features used by Performance Prediction Models
- Algorithm Features Using Graph Embeddings
- Learning trajectory-based features
- Trajectory-Based Features Based on Internal Algorithm Parameters
- Trajectory-based ELA features
- DynamoRep features
- Opt2Vec features
- Iterative-based ELA features
- Local Optima Networks
- Search Trajectory Networks
- Probing trajectories
- Machine Learning techniques for AAS
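As a minimal illustration of the multi-class classification formulation of AAS, the sketch below selects an algorithm for an unseen problem via nearest-neighbour lookup over meta-features. The feature values and algorithm labels are hypothetical; in practice one would train a stronger model (e.g., a random forest) on real landscape features and benchmark performance data:

```python
import numpy as np

def select_algorithm(train_features, train_best_algo, new_features):
    """Toy 1-nearest-neighbour algorithm selector.
    train_features: (n_problems, n_features) meta-feature matrix.
    train_best_algo: best-performing algorithm label per problem.
    new_features:    meta-feature vector of an unseen instance."""
    dists = np.linalg.norm(train_features - new_features, axis=1)
    return train_best_algo[int(np.argmin(dists))]

# Hypothetical meta-features for four benchmark problems and the
# algorithm that performed best on each (labels are illustrative).
F = np.array([[0.9, 0.1],   # smooth, unimodal
              [0.8, 0.2],
              [0.2, 0.9],   # rugged, multimodal
              [0.1, 0.8]])
best = np.array(["CMA-ES", "CMA-ES", "DE", "DE"])

# An unseen instance whose features resemble the unimodal problems.
choice = select_algorithm(F, best, np.array([0.85, 0.15]))
```

The same training data supports the other formulations mentioned above: regressing per-algorithm performance, predicting a label set of acceptable algorithms, or learning a ranking over the portfolio.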
Speakers
Gjorgjina Cenikj
Gjorgjina Cenikj is a young researcher at the Computer Systems Department at the Jožef Stefan Institute in Ljubljana, Slovenia. She is currently pursuing a PhD degree at the Jožef Stefan Postgraduate School, targeting the development of representation learning methodologies for single-objective numerical optimization problems and algorithms, with the goal of improving algorithm selection. She has been a major contributor to several feature-based characterizations of optimization problems (Cenikj et al., 2023a,b, 2024; Petelin et al., 2022, 2024). Her research interests include machine learning, representation learning, natural language processing, and graph learning.
Ana Nikolikj
Ana Nikolikj is a young researcher at the Computer Systems Department at the Jožef Stefan Institute in Ljubljana, Slovenia. She is working towards her PhD at the Jožef Stefan Postgraduate School, focusing on inventing methodologies to understand the behavior of single-objective numerical optimization algorithms via meta-learning, with the aim of enhancing algorithm performance prediction and algorithm selection. Her areas of interest encompass machine learning, representation learning, and methods for explainability. During her master’s thesis, she explored algorithm features based on explainable performance prediction models (Nikolikj et al., 2022a,b).
Tome Eftimov
Tome Eftimov is a senior researcher at the Computer Systems Department at the Jožef Stefan Institute. He was a postdoctoral research fellow at Stanford University, USA, and a research associate at the University of California, San Francisco. His research interests include statistical data analysis, meta-learning, metaheuristics, natural language processing, representation learning, and machine learning. He has published 81 conference articles, 50 journal articles, and a Springer book (2022). He has been involved in courses on probability and statistics, and statistical data analysis. His work related to Deep Statistical Comparison has been presented as a tutorial (IJCCI 2018; IEEE SSCI 2019; GECCO 2020, 2021, 2022, and 2024; PPSN 2020 and 2022; IEEE CEC 2021, 2022, and 2023) and as invited lectures at several international conferences and universities. Currently, he is supervising three PhD students who are working on methodologies related to the proposed tutorial topic.