Adversarial Machine Learning
Masterclass - 1 & 3 December 2021
How can you tell the difference between elephants and lions?
We humans know for sure when we encounter one in the wild, but machines can nowadays make an excellent guess too.
With more than 99.5% accuracy, modern machine learning algorithms have learned to exploit visual features extremely well.
But, alas, that is not the complete picture: machines are easily fooled too.
With some clever tricks, any picture of a lion can easily be manipulated in such a way that humans do not notice any difference,
yet the machine learning model sees a completely different animal. Such a manipulated image is called an adversarial example.
What is going on there? Have we built the most powerful machine learning models ever on brittle foundations?
Each class will be wrapped up by a speaker from industry (ML6 and IBM), where theory becomes practice.
The masterclasses will take place on 1 & 3 December 2021 from 9h till 12h30 (via livestream only).
This masterclass is a collaboration between UGain and VAIA.
We welcome participants from universities, colleges, industry, research centres, etc.
Beat Buesser is a Research Staff Member in the AI & Machine Learning group at the Dublin Research Laboratory of IBM Research. He is currently leading the development of the Adversarial Robustness 360 Toolbox (ART) and his research focuses on the security of machine learning and artificial intelligence. Before joining IBM, he worked as a postdoctoral associate at the Massachusetts Institute of Technology (MIT) and obtained his doctorate from ETH Zurich.
Cedric De Boom is a postdoctoral researcher at Ghent University working at the intersection of generative modelling and natural language processing. He teaches a master's course on Deep Generative Models and a bachelor's course on Discrete Mathematics. He also has a strong background in recommender systems, which he put into practice at both Spotify and DPG Media.
After graduating as a Computer Science Engineer from KU Leuven, Lucas Desard joined ML6 as an ML engineer. Alongside that role, he focuses part-time on building out gener8.ai, a new brand serving companies that want to make use of the latest and greatest generative AI techniques in their businesses.
Jonathan Peck received the B.Sc. degree in computer science and the M.Sc. degree in mathematical informatics from Ghent University, Belgium, in 2015 and 2017, respectively. He is currently pursuing the joint Ph.D. degree with the Department of Applied Mathematics, Computer Science and Statistics and the VIB Inflammation Research Center, Ghent, Belgium. His research is sponsored by a fellowship of the Research Foundation Flanders (FWO) and focuses on improving the robustness of machine learning models to adversarial manipulations.
Program
Part 1 (1 December, 9h – 11h): Cedric De Boom (Ghent University)
In this masterclass, you will learn everything there is to know about generative adversarial networks (GANs) and how they are trained. We will take a deep look at some of the training issues that arise and how they can be solved using some “black belt ninja tricks”. Finally, we will look at three interesting classes of GANs that have proven their merits: CycleGAN, Wasserstein GAN and StyleGAN. A minimal training-loop sketch follows the outline below.
- 1. Introduction to generative models
- 2. Introduction to generative adversarial networks (GANs)
- 3. “The game of GANs”: generator vs discriminator
- 4. How to train GANs
- 5. Training challenges: game saturation & mode collapse
- 6. Black belt ninja tricks for GANs
- 7. Performance evaluation for GANs
- 8. CycleGAN
- 9. Wasserstein GAN
- 10. StyleGAN
- 11. Application: adversarial autoencoders
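To make the generator-vs-discriminator game from items 3 and 4 concrete, here is a minimal sketch of a GAN training loop in PyTorch. The architectures, dimensions and hyperparameters are illustrative assumptions for this announcement, not material from the class itself:

```python
# Minimal GAN training loop (PyTorch). All architectures, dimensions and
# hyperparameters below are toy placeholders, chosen only for illustration.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32  # assumed toy sizes

G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(batch, data_dim)  # stand-in for a batch of real data

    # Discriminator step: push D(real) towards 1 and D(fake) towards 0.
    fake = G(torch.randn(batch, latent_dim)).detach()
    loss_d = bce(D(real), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: try to make D label fresh fakes as real
    # (the "non-saturating" generator loss).
    fake = G(torch.randn(batch, latent_dim))
    loss_g = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```

The training challenges of item 5 show up precisely in this loop: when `loss_g` stops moving or the generator keeps emitting near-identical samples, the tricks from item 6 come into play.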
Part 2 (1 December, 11h30 – 12h30): Lucas Desard (ML6)
Lucas Desard will take you on an exciting trip and show you what can be done in practice with generative adversarial networks in the present day. He will talk about deep fakes and how they can be detected, as well as face swaps, colorisation of historic footage, lip synchronization, etc.
- 1. Deep fakes: creation and detection
- 2. Colorisation of historical photographs
- 3. Face swaps and face transfers
- 4. Artificial lip synchronization
- 5. And much more
Part 1 (3 December, 9h – 11h): Jonathan Peck (Ghent University)
In this masterclass you will learn how to fool machine learning models by attacking them with adversarial techniques. You will also learn how to make your models more robust and how to protect them against such attacks. Should we worry, or is this all just a theoretical exercise? Jonathan will tell you everything about it. A minimal attack sketch follows the outline below.
- 1. Types of attacks on ML models
- 2. Adversarial examples, threat model and Kerckhoffs's principle
- 3. Attacks: L-BFGS, fast gradient sign, PGD, Carlini-Wagner, transfer attacks...
- 4. Real-life adversarial examples
- 5. Defenses: denoising, detection, hardening; certified, uncertified; robust optimization; randomized smoothing
- 6. Arms race, Schneier's law, no free lunch
- 7. Optimal transport
- 8. Robustness vs accuracy; brittle features
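As a taste of the attacks in item 3, here is a minimal sketch of the fast gradient sign method (FGSM) in PyTorch. The stand-in classifier, random data and epsilon value are illustrative assumptions:

```python
# Minimal fast gradient sign method (FGSM) sketch in PyTorch.
# The classifier, data and epsilon below are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=0.03):
    """Perturb x by one signed gradient step that increases the loss on label y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the sign of the gradient, then clamp to the valid pixel range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Toy usage: a stand-in linear "image" classifier on 28x28 inputs, 10 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(8, 1, 28, 28)        # fake batch of images in [0, 1]
y = torch.randint(0, 10, (8,))      # fake labels
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max())      # perturbation stays within epsilon
```

PGD, also covered in item 3, is essentially this single step applied iteratively, with a projection back into the allowed perturbation ball after each step.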
Part 2 (3 December, 11h30 – 12h30): Beat Buesser (IBM Dublin)
Beat Buesser is a researcher at IBM who works on adversarial machine learning. He leads the development team of the ART toolbox, the “Adversarial Robustness 360 Toolbox”, an industry standard for implementing and researching adversarial attacks and adversarial examples. Beat will give you an overview of the toolbox and will provide some tutorials on how to experiment with it yourself.
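To give a flavour of what such a tutorial might involve, the sketch below crafts adversarial examples with ART (`pip install adversarial-robustness-toolbox`). The model and data are placeholders; the class and argument names follow ART's documented API, but check them against the version you install:

```python
# Sketch: wrapping a (placeholder) PyTorch model in ART and attacking it with FGSM.
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

x = np.random.rand(8, 1, 28, 28).astype(np.float32)  # fake batch of images
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x)               # adversarial versions of x
preds = classifier.predict(x_adv)          # model outputs on the attacked inputs
```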
Practical info
Fee
1 Masterclass (1/2 day): € 35
Certificate of attendance
All participants will receive a certificate of attendance.
Cancellation policy
Cancellation must be done in writing. When cancelling up to 7 days before the start of the course, 25% of the participation fee will be charged. When cancelling less than 7 days before the start of the course, the full fee is due.