Accelerated Machine Learning and Deep Learning with Intel
From Wednesday 8 March 2023 (09:30) to Thursday 9 March 2023 (16:30)
Wednesday 8 March 2023
09:30 - 09:35
Welcome and Introduction
Agenda and speakers' presentation
09:35 - 10:00
Hardware acceleration for AI and Intel® oneAPI AI Analytics Toolkit
Séverine Habert (Intel)
In this session, we will first introduce the hardware features that power AI on Intel platforms, and then take a first look at the software stack that harnesses them: the Intel® oneAPI AI Analytics Toolkit.
10:00 - 10:45
How to accelerate Classical Machine Learning on Intel Architecture
Vladimir Kilyazov (Intel)
In this session, we will cover the Intel-optimized libraries for Machine Learning. Python is currently ranked as the most popular programming language and is widely used in Data Science and Machine Learning. We will begin with the Intel® Distribution for Python and its optimizations, and then cover the optimizations for ML Python packages such as Modin, the Intel® Extension for Scikit-learn, and XGBoost. The presentations will be accompanied by demos to showcase the performance speed-up.
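As a taste of the drop-in acceleration pattern covered in this session, here is a minimal, hypothetical sketch using the Intel® Extension for Scikit-learn; the dataset, estimator, and parameters are illustrative and not part of the session material.

```python
# Hypothetical example: drop-in acceleration of scikit-learn via
# Intel(R) Extension for Scikit-learn (scikit-learn-intelex).
from sklearnex import patch_sklearn
patch_sklearn()  # must run before importing scikit-learn estimators

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)  # dispatched to the oneDAL-backed implementation
print("accuracy:", clf.score(X_test, y_test))

# Modin keeps the pandas API while parallelising DataFrame operations:
# import modin.pandas as pd
```

Calling patch_sklearn() before the scikit-learn imports is what makes the acceleration transparent to existing code.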
10:45 - 11:00
Break
11:00 - 11:45
Hands-on
11:45 - 12:00
Closure day 1
Thursday 9 March 2023
09:30 - 10:05
Optimize Deep Learning on Intel
Akash Dhamasia (Intel)
In this session, we look behind the scenes of Deep Learning with the highly optimized Intel® oneDNN library, which delivers best-in-class performance on Intel hardware. We then show Intel® oneDNN in action in DL frameworks such as Intel-optimized TensorFlow, Intel-optimized PyTorch, and the Intel® Extension for PyTorch (IPEX) and for TensorFlow (ITEX).
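To illustrate the IPEX workflow mentioned above, here is a minimal, hypothetical inference sketch; the model and input are placeholders, not the session demo.

```python
# Hypothetical inference sketch with Intel(R) Extension for PyTorch (IPEX).
import torch
import intel_extension_for_pytorch as ipex
import torchvision.models as models

model = models.resnet50(weights=None).eval()        # placeholder model
model = ipex.optimize(model, dtype=torch.bfloat16)  # apply oneDNN-friendly optimizations

x = torch.randn(1, 3, 224, 224)
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    y = model(x)
print(y.shape)

# For stock TensorFlow, oneDNN optimizations are controlled by the
# TF_ENABLE_ONEDNN_OPTS environment variable (enabled by default since TF 2.9).
```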
10:05 - 10:50
Hands-on
10:50 - 11:00
Break
11:00 - 12:00
AI-driven multiphysics HPC applications on Intel architecture: Bridging the gap between HPC and ML
Massoud Rezavand (Intel)
A major challenge in HPC is to make use of and understand the massive amounts of data produced by numerical simulations. For ML, on the other hand, the challenge is to have access to enough data to be confident that our models truly understand the world. Researchers are therefore looking to replace components of HPC applications with ML models in order to (a) reduce the need for data storage, (b) accelerate simulations so that longer timescales can be captured, and (c) achieve accurate simulations for problems where classical solvers are not applicable. In this session, we present this interdisciplinary field and highlight recent achievements on Intel® architectures.
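To make the idea of replacing a solver component with a learned model concrete, here is a minimal, hypothetical PyTorch sketch of a surrogate that advances a simulation state by one time step; the network size, data, and training loop are illustrative only and are not taken from the session.

```python
# Hypothetical surrogate-model sketch: learn state(t) -> state(t + dt)
# from stored simulation snapshots, so the learned step can stand in for
# an expensive solver component during long rollouts.
import torch
import torch.nn as nn

state_dim = 64  # assumed size of the (flattened) simulation state

surrogate = nn.Sequential(
    nn.Linear(state_dim, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, state_dim),
)

# Placeholder data standing in for pairs of consecutive snapshots.
states_t = torch.randn(1024, state_dim)
states_t1 = states_t + 0.01 * torch.randn(1024, state_dim)

opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(5):
    loss = loss_fn(surrogate(states_t), states_t1)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Rollout: repeatedly apply the learned step instead of the solver.
state = states_t[:1]
with torch.no_grad():
    for _ in range(100):
        state = surrogate(state)
```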
12:00 - 13:00
Lunch
13:00 - 13:35
Introduction to Neural Network Compression Techniques
Nikolai Solmsdorf (Intel)
In this session, we will explain various network compression techniques in Deep Learning, such as quantization, pruning, and knowledge distillation, and their benefits in terms of performance speed-up. Finally, we will showcase the Intel tools that help you compress your model, such as the Intel® Neural Compressor.
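As a rough sketch of post-training quantization with the Intel® Neural Compressor (the API shown follows the 2.x style and may differ between releases; the model and calibration data are placeholders):

```python
# Hypothetical post-training static quantization with Intel(R) Neural Compressor.
import torch
import torchvision.models as models
from torch.utils.data import DataLoader, TensorDataset
from neural_compressor import PostTrainingQuantConfig
from neural_compressor.quantization import fit

fp32_model = models.resnet18(weights=None).eval()  # placeholder FP32 model

# A small calibration set is enough for static quantization.
calib_data = TensorDataset(torch.randn(64, 3, 224, 224),
                           torch.zeros(64, dtype=torch.long))
calib_loader = DataLoader(calib_data, batch_size=8)

q_model = fit(
    model=fp32_model,
    conf=PostTrainingQuantConfig(),  # defaults to static INT8 quantization
    calib_dataloader=calib_loader,
)
q_model.save("./quantized_model")
```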
13:35 - 14:20
Hands-on
14:20 - 14:30
Break
14:30 - 15:15
Uncertainty estimation
Akash Dhamasia (Intel)
In this session, we will discuss the limitations of conventional deep learning techniques, such as lack of explainability, overconfidence, and susceptibility to adversarial attacks, and explain why, in safety-critical applications, it is important to incorporate reliable uncertainty estimation into DNNs for trustworthy and informed decision making. The session includes a demo with IPEX.
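One common way to obtain such uncertainty estimates, shown here purely for illustration and not necessarily the method covered in the session, is Monte Carlo dropout:

```python
# Hypothetical Monte Carlo dropout sketch: keep dropout active at inference
# time and use the spread of repeated predictions as an uncertainty signal.
import torch
import torch.nn as nn

model = nn.Sequential(  # placeholder classifier
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 3),
)

def mc_dropout_predict(model, x, n_samples=50):
    model.train()  # keeps dropout layers stochastic during inference
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0), probs.std(dim=0)  # prediction and uncertainty

x = torch.randn(4, 16)
mean, std = mc_dropout_predict(model, x)
print(mean.argmax(dim=-1), std.max(dim=-1).values)
```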
15:15 - 16:00
Easily speed up Deep Learning inference – Write once deploy anywhere
Anas Ahouzi (Intel)
In this session, we will showcase the Intel® Distribution of OpenVINO™ Toolkit, which lets you optimize models trained with TensorFlow* or PyTorch* for high-performance inference. We will demonstrate how to write your inference code once and deploy it on multiple Intel hardware targets.
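A minimal, hypothetical sketch of the write-once, deploy-anywhere flow with the OpenVINO™ Runtime (the model file, input shape, and device string are placeholders; converting the trained TensorFlow or PyTorch model to IR is a separate step):

```python
# Hypothetical OpenVINO(TM) Runtime inference sketch: the same model can be
# compiled for different targets ("CPU", "GPU", "AUTO", ...) without code changes.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")         # placeholder IR produced from TF/PyTorch
compiled = core.compile_model(model, "CPU")  # swap the device string to retarget

dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder input shape
result = compiled(dummy)[compiled.output(0)]
print(result.shape)
```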
15:45 - 16:00
Closure