Keynote Speakers

Chris Van Pelt

Co-founder of Weights & Biases

Johannes Hoffart & Marco Spinaci

SAP

Invited Speakers

Shubham Agarwal

ML Engineer at LinkedIn

Rahma Chaabouni

Research Scientist at Google DeepMind

Hideaki Imamura

Researcher at Preferred Networks, Inc. / Optuna

Martin Rapp

Research Scientist at Bosch AI Research

Victor Picheny & Hrvoje Stojic

at Secondmind

Nilesh Jain

Principal Engineer at Intel Labs

Chris Van Pelt

Chris Van Pelt is a co-founder of Weights & Biases, a developer MLOps platform. In 2009, Chris founded Figure Eight/CrowdFlower. Over the past 12 years, Chris has dedicated his career to optimizing ML workflows and teaching ML practitioners, making machine learning more accessible to all. Chris has worked as a studio artist, computer scientist, and web engineer. He studied both art and computer science at Hope College.

Shubham Agarwal

Shubham Agarwal is a Staff ML Engineer at LinkedIn with a focus on content moderation. He has spent the last five years at the intersection of machine learning and real-world applications. His work is driven by a deep passion for applying AutoML technologies to solve complex business problems at scale, affecting millions of lives. He is eager to share insights and strategies at AutoML 2024, demonstrating the transformative potential of AutoML in creating impactful, scalable solutions.

Hideaki Imamura

Title: Optuna: A black-box optimization framework

Optuna is an open-source black-box optimization framework. In this presentation, I will first discuss how the development of Optuna started, why a company is developing it as OSS, and how we have cultivated its community. Then, I will talk about the extensive application areas of Optuna, with specific examples in robotics and material science. Lastly, I will introduce OptunaHub, a new challenge in the Optuna ecosystem, which is a platform for sharing features.
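
To give a flavor of the framework's core API, here is a minimal sketch (assuming a standard installation via pip install optuna; the quadratic objective is only a placeholder for any expensive black-box function, such as a model-training run) that declares a one-dimensional search space inside an objective and optimizes it over a fixed number of trials:

    import optuna

    # The trial object samples parameters from the declared search space.
    def objective(trial: optuna.Trial) -> float:
        x = trial.suggest_float("x", -10.0, 10.0)
        return (x - 2.0) ** 2  # stand-in for an expensive evaluation

    study = optuna.create_study(direction="minimize")
    study.optimize(objective, n_trials=50)
    print(study.best_params)  # best value of x found over 50 trials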

Hideaki Imamura is a researcher at Preferred Networks, Inc., and one of the core developers of Optuna, involved in its development since 2020. He earned his Master's degree in Computer Science from The University of Tokyo. He was the project manager for Optuna v3.0, is one of the authors of a Japanese book on Optuna and a book on Bayesian optimization, and has been invited to give lectures and tutorials on Optuna and Bayesian optimization at ICIAM 2023 workshops and multiple domestic workshops in Japan.

Johannes Hoffart & Marco Spinaci

Title: Foundation Models on Linked Business Data

Expectations towards AI in business applications and processes are higher than ever. The key ingredients of data, compute and attention seem to be available in abundance, so why do we not see Business AI taking off more quickly? There are still many hurdles to infuse AI capabilities into daily usage at scale, most significantly the lack of domain knowledge to quickly realize new use cases and the ability to generalize across different domains. This presentation will give an idea on how to address both issues: Tapping into linked business data and using it to train foundation models with inherent understanding of business domains and their data.

Linked business data refers to interconnected datasets that include not just primary business data but also a wide array of contextual and semantic information surrounding that data. Together, these provide a comprehensive view of a business’s operational landscape. This extensive data network is a crucial yet untapped resource that many have overlooked.

Leveraging linked business data using foundation models will enable the development of AI systems that are not only tailored to the specifics of a business’s data but also capable of adapting to various contexts and customer needs, ultimately unleashing enormous potential in solving a broad range of business problems.

Johannes Hoffart heads the AI CTO office at SAP, a group of technology experts and scientists driving the development of business foundation models and knowledge graphs on SAP's structured data. Before joining SAP in 2021, Johannes led an AI research group on NLP and Knowledge Graphs at Goldman Sachs and co-founded a spin-off from the Max Planck Institute for Informatics aimed at enabling businesses to tap into the knowledge hidden in their text.

Marco Spinaci leads the “Self-Supervised Learning and Architecture” work stream in the development of SAP’s business foundation model for tabular data. Marco is a Data Science Expert at SAP, which he joined in 2018; he also holds a position as adjunct professor at UVSQ – Université Paris Saclay.

Rahma Chaabouni

Rahma is a research scientist at Google DeepMind. Before that, she was a Ph.D. candidate at Facebook AI Research and ENS Ulm, advised by Prof. Marco Baroni and Prof. Emmanuel Dupoux.

She is interested in understanding what made human language unique and how we can endow artificial models with such a communication protocol. Her recent work has centered on LLM pre- and post-training within the Gemini team, where she played a key role in the release of Gemini 1.5 Pro with its 1M-token context window.

Martin Rapp

Title: Hardware-Aware Neural Architecture Search at Bosch

Bringing deep learning models to various embedded devices for a multitude of applications is crucial for Bosch. Hardware-aware neural architecture search (HW-NAS) is one of the key methods for simultaneously optimizing the hardware efficiency and task performance of neural network models, allowing for the deployment of AI models on resource-constrained devices while maintaining high accuracy. We present several aspects of HW-NAS at Bosch: 1) HW-aware one-shot NAS for automotive workloads, where we present a case study to boost the performance of a video perception network by jointly applying HW-NAS, quantization, and deployment optimizations; 2) efficient knowledge distillation (KD) and NAS, which combines the benefits of KD and NAS to obtain highly accurate yet hardware-efficient models; and 3) a hardware surrogate model with in-context learning, reducing the need for expensive and time-consuming hardware measurements.
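
To make the hardware/accuracy trade-off concrete, the following small, self-contained sketch (not Bosch's pipeline; the candidate encoding and the toy accuracy and latency estimators are illustrative assumptions) shows the scoring idea behind hardware-aware search, rewarding task performance while penalizing latency beyond a budget:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Candidate:
        depth: int  # number of layers
        width: int  # channels per layer

    def estimate_accuracy(c: Candidate) -> float:
        # Toy proxy: deeper/wider models score higher, with diminishing returns.
        return 1.0 - 1.0 / (1.0 + 0.01 * c.depth * c.width)

    def estimate_latency_ms(c: Candidate) -> float:
        # Toy proxy for a hardware surrogate model (learned from measurements in practice).
        return 0.05 * c.depth * c.width

    def hw_aware_score(c: Candidate, budget_ms: float = 20.0, penalty: float = 0.05) -> float:
        overshoot = max(0.0, estimate_latency_ms(c) - budget_ms)
        return estimate_accuracy(c) - penalty * overshoot

    # Exhaustive search over a tiny grid stands in for a real NAS search strategy.
    candidates = [Candidate(d, w) for d in (4, 8, 16) for w in (16, 32, 64)]
    best = max(candidates, key=hw_aware_score)
    print(best, round(hw_aware_score(best), 3))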

Martin Rapp is a research scientist at Bosch AI Research. His focus is on optimizing deep learning models for efficient inference on resource-constrained hardware, leveraging techniques such as hardware-aware neural architecture search and knowledge distillation. Machine learning with limited computational resources has been his primary research interest for the past six years. Prior to joining Bosch in 2023, he was a researcher at KIT, where he specialized in machine learning for embedded systems.

Nilesh Jain

Title: Eliminating Friction in AI Optimization – An Industry Perspective

In the dynamic and rapidly evolving landscape of AI computation, optimizing AI models is essential to meet the escalating demands for performance and efficiency. This presentation delves into the shifting paradigms of AI computation, focusing on the developer challenges in AI optimization from an industry perspective. We will showcase how sophisticated (AI for AI) automation can dramatically reduce friction in AI optimization, from lowering exploration costs to achieving real-time optimization and ensuring multi-tenant deterministic AI performance. Our findings show substantial performance improvements and cost reductions, underscoring the importance of an end-to-end framework that integrates multiple optimization techniques.

Nilesh Jain is a Principal Engineer at Intel Labs and Director of the Emerging Visual-AI Systems Research Lab. He develops cutting-edge technologies for edge and cloud systems, driving advancements in visual-AI applications. His current research focuses on AI systems and infrastructure, algorithm-hardware co-design, and hardware-aware AutoML systems. As a Senior IEEE member, Nilesh has significantly contributed to the field with over 30 peer-reviewed publications and more than 45 patents, with many more pending.

Victor Picheny & Hrvoje Stojic

Victor is the Director of Research and Hrvoje is a Senior Research Scientist at Secondmind, a machine learning startup focused on helping automotive engineers design and develop complex products faster. Together they have a few decades of combined experience developing and applying Bayesian optimization and probabilistic modelling to a broad range of problems across disciplines, from aerospace engineering to cognitive sciences. At Secondmind they have focused on scaling up Bayesian optimization and overcoming the challenges of applying it to real-world problems faced by engineers in the automotive sector.