Keynote Speakers

Chris Van Pelt

Co-founder of Weights & Biases

Johannes Hoffart
& Marco Spinaci

SAP

Invited Speakers

Martin Rapp

Research Scientist at Bosch AI Research

Hideaki Imamura

Researcher at Preferred Networks, Inc. / Optuna

Nilesh Jain

Principal Engineer at Intel Labs

Chris Van Pelt

Title: Building an AI Application with Weave and AutoML

This talk will demonstrate lessons learned building a real-world application on top of LLMs. Weave, Weights & Biases’ new developer tool, will be used to debug and evaluate the application. We’ll also explore what AutoML can do in our new few-shot learning world.

Chris Van Pelt is a co-founder of Weights & Biases, a developer MLOps platform. In 2009, Chris founded Figure Eight/CrowdFlower. Over the past 12 years, Chris has dedicated his career to optimizing ML workflows and teaching ML practitioners, making machine learning more accessible to all. Chris has worked as a studio artist, computer scientist, and web engineer. He studied both art and computer science at Hope College.

Hideaki Imamura

Title: Optuna: A black-box optimization framework

Optuna is an open-source black-box optimization framework. In this presentation, I will first discuss how the development of Optuna started, why a company is developing it as OSS, and how we have cultivated its community. Then, I will talk about the extensive application areas of Optuna, with specific examples in robotics and material science. Lastly, I will introduce OptunaHub, a new challenge in the Optuna ecosystem, which is a platform for sharing features.
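The black-box setting Optuna targets can be sketched in a few lines: the optimizer only observes inputs and objective values, never gradients. The following is a conceptual random-search loop using only the standard library; the toy objective and all parameter values are illustrative, not Optuna’s actual API.

```python
import random

def objective(x: float) -> float:
    # Toy black-box function: the optimizer sees only inputs and outputs.
    return (x - 2.0) ** 2

def random_search(n_trials: int, low: float, high: float, seed: int = 0):
    """Minimize `objective` by sampling candidates uniformly at random."""
    rng = random.Random(seed)
    best_x, best_value = None, float("inf")
    for _ in range(n_trials):
        x = rng.uniform(low, high)
        value = objective(x)
        if value < best_value:
            best_x, best_value = x, value
    return best_x, best_value

best_x, best_value = random_search(n_trials=200, low=-10.0, high=10.0)
print(f"best x = {best_x:.3f}, objective = {best_value:.4f}")
```

Optuna builds on this loop by replacing the uniform sampler with smarter strategies (such as TPE), and by adding pruning, storage, and its `suggest_*`/`study.optimize` interface.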

Hideaki Imamura is a researcher at Preferred Networks, Inc., and one of the core developers of Optuna, involved in its development since 2020. He earned his Master’s degree in Computer Science from The University of Tokyo. He was the project manager for Optuna v3.0, is a co-author of Japanese books on Optuna and on Bayesian optimization, and has been invited to give lectures and tutorials on Optuna and Bayesian optimization at ICIAM 2023 workshops and at multiple domestic workshops in Japan.

Johannes Hoffart
& Marco Spinaci

Title: Foundation Models on Linked Business Data

Expectations towards AI in business applications and processes are higher than ever. The key ingredients of data, compute, and attention seem to be available in abundance, so why do we not see Business AI taking off more quickly? There are still many hurdles to infusing AI capabilities into daily usage at scale, most significantly the lack of domain knowledge to quickly realize new use cases and the ability to generalize across different domains. This presentation will give an idea of how to address both issues: tapping into linked business data and using it to train foundation models with an inherent understanding of business domains and their data.

Linked business data refers to interconnected datasets that include not just primary business data but also a wide array of contextual and semantic information surrounding that data. Together, these provide a comprehensive view of a business’s operational landscape. This extensive data network is a crucial yet untapped resource that many have overlooked.

Leveraging linked business data using foundation models will enable the development of AI systems that are not only tailored to the specifics of a business’s data but also capable of adapting to various contexts and customer needs, ultimately unleashing enormous potential in solving a broad range of business problems.

Johannes Hoffart heads the AI CTO office at SAP, a group of technology experts and scientists driving the development of business foundation models and knowledge graphs on SAP’s structured data. Before joining SAP in 2021, Johannes led an AI research group on NLP and Knowledge Graphs at Goldman Sachs and co-founded a spin-off from the Max Planck Institute for Informatics with the goal of enabling businesses to tap into the knowledge hidden in their text.

Marco Spinaci leads the “Self-Supervised Learning and Architecture” work stream in the development of SAP’s Business Foundation Model for tabular data. Marco is a Data Science Expert at SAP, which he joined in 2018; he also holds a position as an adjunct professor at UVSQ – Université Paris Saclay.



Martin Rapp

Title: Hardware-Aware Neural Architecture Search at Bosch

Bringing deep learning models to various embedded devices for a multitude of applications is crucial for Bosch. Hardware-aware neural architecture search (HW-NAS) is one of the key methods for simultaneously optimizing the hardware efficiency and task performance of neural network models, allowing for the deployment of AI models on resource-constrained devices while maintaining high accuracy. We present several aspects of HW-NAS at Bosch: 1) HW-aware one-shot NAS for automotive workloads, where we present a case study to boost the performance of a video perception network by jointly applying HW-NAS, quantization, and deployment optimizations; 2) efficient knowledge distillation (KD) and NAS, which combines the benefits of KD and NAS to obtain highly accurate yet hardware-efficient models; and 3) a hardware surrogate model with in-context learning, reducing the need for expensive and time-consuming hardware measurements.

Martin Rapp is a research scientist at Bosch AI Research. His focus is on optimizing deep learning models for efficient inference on resource-constrained hardware, leveraging techniques such as hardware-aware neural architecture search and knowledge distillation. Machine learning with limited computational resources has been his primary research interest for the past six years. Prior to joining Bosch in 2023, he was a researcher at KIT, where he specialized in machine learning for embedded systems.

Nilesh Jain

Title: Eliminating Friction in AI Optimization – An Industry Perspective

In the dynamic and rapidly evolving landscape of AI computation, optimizing AI models is essential to meet the escalating demands for performance and efficiency. This presentation delves into the shifting paradigms of AI computation, focusing on the developer challenges in AI optimization from an industry perspective. We will showcase how sophisticated (AI for AI) automation can dramatically reduce friction in AI optimization, from lowering exploration costs to achieving real-time optimization and ensuring multi-tenant deterministic AI performance. Our findings show substantial performance improvements and cost reductions, underscoring the importance of an end-to-end framework that integrates multiple optimization techniques.

Nilesh Jain is a Principal Engineer at Intel Labs and Director of the Emerging Visual-AI Systems Research Lab. He focuses on developing cutting-edge technologies for edge and cloud systems, driving advancements in visual-AI applications. His current research focuses on AI systems and infrastructure, algorithm-hardware co-design, and hardware-aware AutoML systems. An IEEE Senior Member, Nilesh has significantly contributed to the field with over 30 peer-reviewed publications and more than 45 patents, with many more pending.