A What makes a good AutoML application submission to this track?

We welcome both open-source AutoML software and applications in this category that help us bridge the gap between theory and practice. For software, we follow an approach similar to JMLR MLOSS, looking for papers about novel, well-engineered, well-established, and well-documented systems. To this end, a submission has to show that:

  1. It is a novel system offering features or application domains that were not available before.
  2. It already has an established user base (shown by stars on GitHub, an active commit history by several developers, an active issue tracker, etc.).
  3. It is an open-source software package with an open-source license that allows users to easily use and contribute to it.
  4. It achieves excellent performance on the addressed application domains.

For applications, we are specifically interested in real-world applications that have taught us valuable lessons about how to apply AutoML in practice. Each submission has to show that:

  1. It is an actual real-world application of AutoML that highlights aspects of AutoML often overlooked in the literature.
  2. New and/or surprising insights were obtained that are important and generally useful for AutoML practitioners.
  3. The real-world problem and the developed solutions are described in enough detail to allow others to verify the findings.

We particularly welcome high-profile applications of importance to humankind.

B What makes a good benchmark submission to this track?

Progress in the field of AutoML is often driven by empirical results. Although the community has made tremendous progress in defining best practices and benchmarks in recent years, we invite submissions that further enhance the quality of benchmarking in AutoML. Topics include (but are not strictly limited to):

  1. Demonstrating pitfalls and proposing solutions in benchmarking AutoML systems
  2. Proposing new benchmarks (e.g., similar to HPOBench for hyperparameter optimization or the NASBench series for neural architecture search) or substantial extensions of existing benchmarks
  3. Approaches for more efficient benchmarking

For all submissions, it is important that all benchmarking data and tools are easily accessible and that all benchmarking results are easily reproducible: all necessary datasets, code, and evaluation procedures must be available and well-documented.

C What makes a good challenge submission to this track?

In recent years, there have been many challenges on AutoML, AutoDL, HPO, and NAS, pushing the community to new heights. Since neither running a meaningful competition nor gaining thorough insights from it is trivial, we invite submissions on the following topics:

  1. Design and visions for future challenges on AutoML
  2. Post-challenge analysis, highlighting gained insights and future open tasks
  3. Methodology and best practices for organizing AutoML challenges

We note that post-challenge analyses and insights, in particular, can also be contributed by participants and not only by organizers.

D What makes a good dataset submission to this track?

We welcome all types of new datasets, collections of datasets, or meta-datasets that open up new avenues of research into better AutoML systems. These include new datasets covering underexplored areas of AutoML (e.g., imperfect, multi-modal, or multi-objective data), meta-datasets that contain hyperparameter configurations and their performance on many tasks, large datasets for pretraining AutoML methods, and more. Submissions have to show that they are:

  1. Impactful. It must be clear how the dataset benefits AutoML research and/or its (future) applicability in the real world.
  2. Well-documented. The content of the data, how it was collected, and how it is intended to be used must be well-described. We recommend using data documentation frameworks, such as datasheets for datasets.
  3. Easily accessible and easy to use. There must be a clear reference or URL to the website/platform where the dataset can be viewed and downloaded.
  4. Well-maintained. There should be sustainable hosting, licensing, and a maintenance plan to ensure that the dataset remains accessible for the foreseeable future.