Date: 11.09.2024; 15:30-17:00
Room: Auditorium
Motivation
Performance prediction is an important part of the **Neural Architecture Search** process, as it allows avoiding resource-intensive network training. Zero-cost proxies have recently gained popularity, as they require no network training and provide a fast estimate of performance. A drawback of zero-cost proxies is that they lack a solid theoretical foundation: it is not clear why they correlate with validation accuracy, and they show inconsistent results across diverse search spaces and tasks. In addition, some zero-cost proxies tend to be biased toward certain network properties, for example the number of operations in the graph.
In this talk, we will present the advantages and disadvantages of zero-cost proxies. We will provide a systematic overview of performance predictors and zero-cost proxies across search spaces (including newer ones, such as transformer spaces). Finally, we will outline different use cases, including zero-cost proxies as network encodings for tabular predictors, their use in multi-objective settings, and their combination with graph neural predictors.
In the hands-on part of the tutorial, we aim to showcase the importance of analyzing the proxies beyond a simple correlation with the CIFAR-10 validation accuracy. We will also introduce the recently developed [GRAF](https://openreview.net/forum?id=EhPpZV6KLk) predictor – inspired by zero-cost proxy biases, it uses neural graph features such as operation counts or path lengths for interpretable prediction. Participants will learn how zero-cost proxies and network properties impact performance prediction, and how these proxies can be used to speed up their NAS methods.
Outline
Part 1: Presentation on Performance Predictors in NAS
The first part of this tutorial will be a 45-minute presentation providing participants with an overview of performance prediction methods, with a special focus on zero-cost proxies.
Presentation outline:
- Short overview of different performance prediction methods
- How predictors are used in different NAS settings
- Zero-cost proxies – basics, examples across search spaces (beyond CIFAR-10 and convnets)
- Integrating zero-cost proxies into tabular predictors and graph neural networks + multi-objective prediction
- Zero-cost proxy biases (continued in hands-on)
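To make the zero-cost proxy basics concrete, the sketch below computes a synflow-style score on a toy two-layer *linear* network written with plain Python lists. Real synflow implementations run one forward/backward pass of an actual model with an all-ones input and absolute-valued parameters; this is only a hand-derived illustration of that idea, with assumed names.

```python
def synflow_score(w1, w2):
    """Toy synflow-style score: sum over parameters p of p * dR/dp,
    where R = 1^T |W2| |W1| 1 (all-ones input, absolute weights)."""
    a1 = [[abs(v) for v in row] for row in w1]  # |W1|, shape (hidden, in)
    a2 = [[abs(v) for v in row] for row in w2]  # |W2|, shape (out, hidden)

    h = [sum(row) for row in a1]                # |W1| @ ones(in)
    # Column sums of |W2|: dR/d|W1|_{jk} for hidden unit j.
    col2 = [sum(a2[i][j] for i in range(len(a2))) for j in range(len(a2[0]))]

    s1 = sum(a1[j][k] * col2[j] for j in range(len(a1)) for k in range(len(a1[0])))
    s2 = sum(a2[i][j] * h[j] for i in range(len(a2)) for j in range(len(a2[0])))
    return s1 + s2

# Larger-magnitude weights yield a larger score -- without any training step.
small = synflow_score([[0.1, -0.1]], [[0.1]])  # 0.04
large = synflow_score([[1.0, -1.0]], [[1.0]])  # 4.0
```

The point of the example is the defining property of zero-cost proxies: the score is computed from the untrained network in a single pass, so ranking thousands of candidate architectures costs seconds rather than GPU-days.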
Part 2: Hands-on Tutorial
In the second part, we will hold a 30-minute hands-on session in which participants learn to analyze zero-cost proxies, use them for prediction, and combine them with other features of the neural graph.
Hands-on outline:
- Visualize zero-cost proxy biases.
- Use zero-cost proxies and GRAF in prediction on search spaces available in NASLib.
- Explore the interpretability of the GRAF predictor via SHAP (Lundberg and Lee, 2017) – which proxies and features influence the prediction?
- (advanced) How to implement novel zero-cost proxies and graph features, and integrate them into GRAF and NASLib.
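The hands-on session will use the shap library, but the underlying idea can be shown in a few lines: SHAP attributes a prediction to input features via Shapley values. The brute-force computation below (all feature coalitions, tiny hand-written predictor) illustrates what those attributions mean; it is a conceptual sketch, not the shap library's API.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values of f at instance x against a baseline input."""
    n = len(x)

    def v(coalition):
        # Features in the coalition take the instance value, the rest the baseline.
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return f(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for size in range(len(others) + 1):
            for S in combinations(others, size):
                # Classic Shapley weight for a coalition of this size.
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (v(set(S) | {i}) - v(set(S)))
        phi.append(total)
    return phi

# Toy "accuracy predictor" over two graph features (purely illustrative).
predict = lambda z: 2.0 * z[0] + 1.0 * z[1]
phi = shapley_values(predict, x=[1.0, 1.0], baseline=[0.0, 0.0])
print(phi)  # for this additive model the attributions are exactly [2.0, 1.0]
```

The attributions sum to the difference between the prediction and the baseline prediction, which is exactly the property that makes SHAP plots over GRAF features readable: each feature's contribution to the predicted accuracy is stated in accuracy units.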
Speakers
Gabi Kadlecová
Gabi Kadlecová is a PhD student at Charles University in Prague. She also works at the Institute of Computer Science of the Czech Academy of Sciences, where she is supervised by Roman Neruda. Her focus is Neural Architecture Search, specifically performance prediction and architecture embedding using graph neural networks. She presented a tutorial on performance prediction in NAS at the AutoML Fall School 2023, as well as a tutorial on survival analysis at a statistics workshop. Her teaching experience includes labs on nature-inspired algorithms (2024, two labs) and introductory Python programming labs (2021).
Jovita Lukasik
Jovita Lukasik is a postdoctoral researcher at the University of Siegen, specializing in architecture representations for efficient neural architecture search using performance prediction methods. Her current NAS research focuses on multi-objective prediction methods for computer vision, especially in the context of neural network robustness. In addition to her research, she co-organizes virtual seminars on Automated Machine Learning and serves as a publication chair for the European Conference on Computer Vision 2024. She also co-organized the NAS@ICLR Workshop in 2021, and was a panelist on the “AutoML, Trustworthiness and Alignment” panel at the AutoML conference.