Accelerated Machine Learning and Deep Learning with Intel

Europe/Berlin
Online/Zoom

Andreas Marek (MPCDF), Timoteo Colnaghi (MPCDF)
Description

During this workshop, we will showcase how to accelerate your classical Machine Learning and Deep Learning workloads on Intel architecture. We will present the Intel optimizations of commonly used data science libraries such as Pandas, of classical Machine Learning libraries such as Scikit-learn and XGBoost, and, of course, of Deep Learning libraries such as TensorFlow and PyTorch. We will also discuss hyperparameter optimization, AI on HPC systems, inference optimization with the OpenVINO toolkit, and model compression techniques using the Intel Neural Compressor.
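Most of the classical Machine Learning optimizations presented in the workshop follow a drop-in pattern: patch the library once, then keep your existing code unchanged. A minimal sketch of that pattern, assuming the `scikit-learn-intelex` package is installed (the snippet degrades gracefully if it, or scikit-learn itself, is missing):

```python
# Drop-in acceleration pattern used by the Intel(R) Extension for Scikit-learn:
# call patch_sklearn() BEFORE importing the estimators you want accelerated.
try:
    from sklearnex import patch_sklearn
    patch_sklearn()  # re-routes supported estimators to Intel oneDAL kernels
    patched = True
except ImportError:
    patched = False  # package not installed: stock scikit-learn is used

try:
    # User code stays identical whether or not the patch is active.
    from sklearn.cluster import KMeans  # noqa: F401
    have_sklearn = True
except ImportError:
    have_sklearn = False

print(f"Intel patch active: {patched}, scikit-learn available: {have_sklearn}")
```

The key design point, as documented for the extension, is import order: estimators imported before `patch_sklearn()` is called keep their stock implementation.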
 

  • Wednesday, 25 October
    • 09:30 09:35
      Welcome and Introduction 5m

      Agenda and speakers' presentation

    • 09:35 10:00
      Hardware acceleration for AI and Intel® oneAPI AI Analytics Toolkit 25m

      In this session, we will first introduce the hardware features that are powering AI on Intel, we will then get a first glance at the software stack harnessing them, namely the Intel® oneAPI AI Analytics Toolkit.

      Speaker: Dr Séverine Habert (Intel)
    • 10:00 10:30
      How to accelerate Classical Machine Learning on Intel Architecture 30m

      In this session, we will cover the Intel-optimized libraries for Machine Learning. Python is currently ranked as the most popular programming language and is widely used in Data Science and Machine Learning. We will begin by covering the Intel® Distribution for Python and its optimizations. We will then cover the optimizations for ML Python packages such as Modin, Intel® Extension for Scikit-learn and XGBoost. The presentations will be accompanied with demos to showcase the performance speedup.

      Speaker: Vladimir Kilyazov (Intel)
    • 10:30 10:50
      Hands-on 20m
    • 10:50 11:00
      Break 10m
    • 11:00 12:00
      Hands-on 1h
    • 12:00 13:00
      Lunch break 1h
    • 13:00 13:45
      A introduction to GenAI and its application to Science 45m

      This session offers an introductory exploration into Transformers and Large Language Models (LLMs). We will delve into the fundamental concepts of Transformers, shedding light on their architecture and capabilities. Following this introduction, the focus shifts to the exciting intersection of LLMs with scientific domains such as biology and physics.

      Speaker: Dr Séverine Habert (Intel)
    • 13:45 14:30
      Optimize Deep Learning on Intel ! 45m

      In this session, we take you behind the scenes of Deep Learning with the highly optimized Intel® oneDNN library, which delivers best-in-class performance on Intel hardware. We then show Intel® oneDNN in action in DL frameworks such as Intel-optimized TensorFlow, Intel-optimized PyTorch, and the Intel® Extension for PyTorch (IPEX) and for TensorFlow (ITEX).

      Speaker: Akash Dhamasia (Intel)
    • 14:30 14:40
      Break 10m
    • 14:40 15:40
      Hands-on 1h
    • 15:40 15:45
      Closure day 1 5m
  • Thursday, 26 October
    • 09:30 09:40
      Previously on Intel workshop: a recap of Deep Learning 10m
    • 09:40 10:40
      Deep Learning at Scale with Distributed Training 1h

      In this presentation, we discuss how distributed training addresses the need to efficiently train large and complex deep learning models, including LLMs. Join us as we break down the key ideas behind distributed training, data parallelism, and model parallelism, understand their advantages, and gain insight into how they are used to train LLMs and run inference on them.

      Speakers: Akash Dhamasia (Intel), Dr Nikolai Solmsdorf (Intel)
    • 10:40 10:50
      Break 10m
    • 10:50 11:30
      Latent Diffusion Model in practice: an example with Stable Diffusion 40m
    • 11:30 12:00
      TBA 30m
    • 12:00 13:00
      Lunch 1h
    • 13:00 13:35
      Introduction to Neural Network Compression Techniques 35m

      In this session, we will explain various network compression techniques in Deep Learning, such as quantization, pruning, and knowledge distillation, and their benefits in terms of performance speed-up. Finally, we will showcase the Intel tools that help you compress your model, such as the Intel® Neural Compressor.

      Speaker: Dr Nikolai Solmsdorf (Intel)
    • 13:35 14:20
      Hands-on 45m
    • 14:20 14:30
      Break 10m
    • 14:30 15:00
      Physics simulation using 3D-GAN 30m
      Speaker: Dr Massoud Rezavand (Intel)
    • 15:00 15:45
      Easily speed up Deep Learning inference – Write once deploy anywhere 45m

      In this session, we will showcase the Intel® Distribution of OpenVINO™ Toolkit, which lets you optimize models trained with TensorFlow or PyTorch for high-performance inference. We will demonstrate how to use it to write once and deploy across multiple Intel hardware targets.

      Speaker: Vladimir Kilyazov (Intel)
    • 15:45 16:00
      Closure day 2 15m
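To give a flavor of the quantization technique covered in the compression session, here is a minimal, framework-free sketch of affine int8 quantization. This is conceptual only, written in plain Python for illustration; the Intel® Neural Compressor automates this kind of conversion with accuracy-aware tuning rather than exposing it this way.

```python
# Conceptual sketch of affine (asymmetric) int8 quantization:
# map floats in [min, max] onto the integer range [0, 255].

def quantize(values, num_bits=8):
    """Quantize a list of floats to unsigned ints; return (ints, scale, zero_point)."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (qmax - qmin)
    if scale == 0.0:
        scale = 1.0  # degenerate case: all values identical
    zero_point = round(qmin - lo / scale)  # integer that represents 0.0
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the quantized integers."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.2, -0.4, 0.0, 0.7, 1.5]  # toy stand-in for a weight tensor
q, s, z = quantize(weights)
restored = dequantize(q, s, z)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(f"quantized: {q}, max reconstruction error: {max_err:.4f}")
```

The reconstruction error stays below one quantization step (the scale), which is why int8 inference can often match fp32 accuracy while cutting model size roughly 4x, the trade-off the session's tooling is designed to manage.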