AutoML 2024 – The third International Conference on Automated Machine Learning brings together researchers and users with the goal of developing automated methods for machine learning pipelines and applications. These automated methods can accelerate development, improve performance, and democratize machine learning for a wider audience.

See here for all deadlines; submission will be via OpenReview.

Topics

We welcome submissions on any topic touching upon automating any aspect of machine learning, broadly interpreted. If there is any question of fit, please feel free to contact the program chairs.

This year’s conference will have two parallel tracks: one on AutoML methods and one on applications, benchmarks, challenges, and datasets (ABCD) for AutoML. Papers accepted to either track will comprise the conference program on equal footing.

The following non-exhaustive list provides some examples of work in scope for these two tracks:

Methods Track

  • model selection (e.g., Neural Architecture Search, ensembling)
  • configuration/tuning (e.g., via evolutionary algorithms, Bayesian optimization)
  • AutoML methodologies (e.g., reinforcement learning, meta-learning, in-context learning, warmstarting, portfolios, multi-objective optimization, constrained optimization)
  • pipeline automation (e.g., automated data wrangling, feature engineering, pipeline synthesis, and configuration)
  • automated procedures for diverse data (e.g., tabular, relational, multimodal, etc.)
  • ensuring quality of results in AutoML (e.g., fairness, interpretability, trustworthiness, sustainability, robustness, reproducibility)
  • supporting analysis and insight from automated systems
  • etc.

ABCD Track

  • Applications: open-source AutoML software and applications that help bridge the gap between theory and practice
  • Benchmarks: submissions that further enhance the quality of benchmarking in AutoML
  • Challenges: designs, visions, analyses, methods, and best practices for past and future challenges
  • Datasets: new datasets, collections of datasets, or meta-datasets that open up new avenues of AutoML research

For more info regarding topics for the ABCD track, see here.

Submission Guidelines

Submissions that violate any of the following guidelines may be (desk) rejected.

Anonymity

Methods track: Double-blind reviewing

All submissions to the methods track will undergo double-blind review. That is, (i) the paper, code, and data submitted for review must be anonymized so that the authors cannot be deduced, and (ii) the reviewers will likewise be anonymous to the authors.

ABCD track: Optional single-blind reviewing

Since the authors and organizers of AutoML systems, benchmarks, challenges, and datasets are often easily identifiable (and revealing these identities is often required during the review process), submissions to this track will undergo single-blind review. That is, the authors of the submission should be listed on the front page. If there is a good reason for a submission to be treated as double-blind, authors may instead elect to submit double-blind (as long as this does not hinder the review process).

Broader Impact Statement

The main paper must include a broader impact statement regarding the approach, datasets, and applications proposed or used in the paper. It should reflect on the environmental, ethical, and societal implications of the work and discuss any limitations of the approach. For example, authors may consider whether the data or methods could be used to create or exacerbate unfair bias. The statement should be at most one page and must be included both at submission and at camera-ready time. If authors have reflected on their work and determined that there are no likely negative broader impacts, they may use the following statement: “After careful reflection, the authors have determined that this work presents no notable negative impacts to society or the environment.” A section with this name is included at the end of the paper body in the provided template, but you may place this discussion anywhere in the paper you see fit, e.g., in the introduction or future work.

The Centre for the Governance of AI has written an excellent guide for writing good broader impact statements (for the NeurIPS conference) that may be a useful resource for AutoML-Conf authors:  https://medium.com/@GovAI/a-guide-to-writing-the-neurips-impact-statement-4293b723f832 

Formatting Instructions

Papers must be formatted according to the LaTeX template available at https://github.com/automl-conf/LatexTemplate. The page limit for the main paper is 9 pages; this includes the broader impact statement but not the submission checklist, references, or appendix. The broader impact statement and submission checklist are mandatory (please see the LaTeX template for details) both at submission time and in the camera-ready version. References and supplemental materials are not limited in length. Accepted papers will be allowed an additional page of content in the main paper to react to reviewer feedback. This additional content may already be added during the rebuttal phase as authors interact with the reviewers, so that acceptance decisions can be made on near-camera-ready work.

Submission Platform

We will use OpenReview to manage submissions (links coming soon). Shortly after the acceptance/rejection notifications are sent out, the de-anonymized papers and anonymous reviews of all accepted papers will become public in OpenReview and open for non-anonymous public commenting. For two weeks following the notifications, we will also allow authors of rejected papers to opt in to have their de-anonymized papers (including the anonymous reviews) made public in OpenReview. Unless the authors of a rejected paper opt in, there will be no public record that the paper was submitted to the conference.

Ethics Review

We ask that authors think about the broader impact and ethical considerations of their work and discuss these issues in their broader impact section. Reviewers will not have the ability to directly reject papers based on ethical considerations, but they will be able to flag papers with perceived ethical concerns for further review by the conference organizers. The PC chairs will decide which action(s) may need to be taken in such a case and may decide to reject papers if any serious ethical concerns cannot be adequately addressed.

Dual Submissions

The goal of AutoML 2024 is to publish exciting new work for the first time while avoiding duplicating the efforts of reviewers. Papers that are substantially similar to papers previously published, accepted for publication, or submitted in parallel for publication may not be submitted to AutoML 2024. Here, we define a “publication” as a paper that (i) appears in an archival venue and (ii) is 5 or more pages long, excluding references. (This excludes non-archival workshops and papers of up to 4 pages.)

For example:

  • Allowed to submit: A manuscript on arXiv; a paper that appeared in a NeurIPS/ICML/ICLR workshop; a short CVPR/ICCV/ECCV workshop paper (<= 4 pages).
  • Not allowed to submit: A conference paper published at NeurIPS/ICML/ICLR; a published journal paper (although a published journal paper may be submitted to our separate journal track).

The dual submissions policy applies for the duration of the review process. We also discourage slicing contributions too thinly.

Posting non-anonymized submissions on arXiv, personal websites, or social media is allowed. However, if posting to arXiv prior to acceptance using the AutoML style, we ask that authors use the “preprint” rather than the “final” option when compiling their document.

Reproducibility

We strongly value reproducibility as an integral part of scientific quality assurance. Therefore, we require that all submissions with empirical results be accompanied by a link to an open-source repository providing an implementation. To abide by double-blind reviewing in the methods track, we host our own instance of anonymous GitHub, which supports anonymization and full download of repositories: https://anon-github.automl.cc/.

Author Responses

After the initial reviews, authors will be encouraged to discuss with the reviewers, via OpenReview, any questions that arise, and will be allowed to update their papers based on the reviewers’ feedback.

Commitment to Review

We ask that at least one author of each submission volunteers to review for AutoML 2024. 

Changing the Author List

New authors cannot be added after the abstract deadline, although changing the ordering and removing authors will be allowed at any time. This policy avoids any potential conflicts of interest during the reviewer bidding stage.

Publication of Accepted Submissions

Accepted submissions will be published via OpenReview and in a volume of PMLR. Reviewer feedback can be incorporated into the final version of the paper; other major changes are not allowed. Furthermore, each paper with empirical results must be accompanied by a link to an open-source implementation to ensure reproducibility. As mentioned above, accepted papers will be made available alongside their reviews and meta-reviews.

Attending the Conference

The conference is scheduled for September 09-12, 2024 in Paris, France. We are planning for a hybrid conference, with active integration of remote attendees. The conference’s main objective is to allow for in-depth discussions and networking. It is not mandatory for authors of accepted papers to attend the conference in person, but we strongly encourage it. Nevertheless, for all accepted papers, we will require at least online attendance and a short pre-recorded video. This allows attendees to watch videos ahead of time at their own pace and to plan ahead, so they can use the conference most effectively for in-depth discussions and networking.

Program Chairs

  • Katharina Eggensperger
  • Roman Garnett
  • Joaquin Vanschoren