Adversarial Machine Learning

Masterclass - 1 & 3 December 2021

How can you tell the difference between elephants and lions? We humans know for sure when we encounter one in the wild, but machines can nowadays make an excellent guess too. With more than 99.5% accuracy, modern machine learning algorithms have learned to exploit visual features extremely well. But, alas, that is not the complete picture: machines are easily fooled too. With a few clever tricks, any picture of a lion can be manipulated in such a way that humans do not notice any difference, yet the machine learning model sees a completely different animal. Such a manipulated image is called an adversarial example.
What is going on there? Have we built the most powerful machine learning models ever on brittle foundations?
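The trick behind such adversarial examples can be sketched in a few lines. Below is a minimal illustration of the fast gradient sign method (FGSM), one of the simplest attacks covered later in this masterclass, applied to a toy linear classifier. All weights and inputs are made-up numbers, purely for illustration; real attacks target deep networks, not a three-feature linear model.

```python
import numpy as np

# Toy linear "animal classifier": score > 0 means "lion", score <= 0 means "elephant".
# The weights and the input are invented, purely for illustration.
w = np.array([1.0, -2.0, 0.5])      # model weights
x = np.array([0.3, -0.1, 0.2])      # "image" with three features; true class: lion

def predict(v):
    return "lion" if np.dot(w, v) > 0 else "elephant"

# FGSM: for the logistic loss with label "lion", the gradient of the loss
# w.r.t. the input points along -w, so one step of size eps in its sign
# direction increases the loss as much as possible per unit of L-inf budget.
eps = 0.25                           # perturbation budget (invisibly small, in spirit)
x_adv = x + eps * np.sign(-w)        # one FGSM step

print(predict(x))                    # lion
print(predict(x_adv))                # elephant: the prediction flips
print(np.max(np.abs(x_adv - x)))     # 0.25: no feature moved by more than eps
```

Note that every coordinate of the input changes by at most eps, yet the decision flips; scaled up to images, that is exactly the "no visible difference" phenomenon described above.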

The above is an example of adversarial learning, a subfield of machine learning concerned with how machine learning models can be fooled, and how we can prevent or exploit this. While adversarial examples expose a weakness, the same adversarial principle can also be exploited productively, most famously in the class of models called generative adversarial networks (GANs). Ever since the dawn of GANs, the field of generative machine learning has taken multiple leaps forward and has opened up application domains we could not even imagine before. When training a GAN, we let two machines play a game against each other: one machine gradually tries to become a master painter, while the other machine is the art critic that gets better and better at discerning genuine paintings from counterfeits. By making the painter try to fool the critic over and over again, the painter becomes more skilled and produces ever more realistic paintings.
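The painter-versus-critic game corresponds to a concrete pair of loss functions. The sketch below evaluates the standard GAN losses on made-up discriminator outputs (the probabilities are invented for illustration), showing what the game looks like when the critic is winning and at the theoretical equilibrium where the critic is maximally confused.

```python
import numpy as np

# The GAN "game" in one formula: the critic (discriminator) D maximizes
#   log D(real) + log(1 - D(fake)),
# while the painter (generator) tries to fool it; in practice the generator
# maximizes log D(fake), the so-called non-saturating form.

def d_loss(d_real, d_fake):
    """Discriminator loss: low when real looks real and fake looks fake."""
    return -(np.log(d_real) + np.log(1.0 - d_fake))

def g_loss(d_fake):
    """Non-saturating generator loss: low when the fake fools the critic."""
    return -np.log(d_fake)

# A sharp critic facing a clumsy painter:
print(d_loss(d_real=0.95, d_fake=0.05))  # small: the critic wins this round
print(g_loss(d_fake=0.05))               # large: the painter must improve

# At the game's equilibrium the critic is maximally confused (D = 0.5 everywhere):
print(d_loss(d_real=0.5, d_fake=0.5))    # log 4, about 1.386
print(g_loss(d_fake=0.5))                # log 2, about 0.693
```

Training a GAN alternates gradient steps on these two losses; the masterclass covers what can go wrong in that alternation (saturation, mode collapse) and how to fix it.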

In this series of masterclasses, two researchers from Ghent University will take you deep into the field of adversarial learning. On the first morning we will examine generative adversarial networks in full detail, while the second morning will all be about finding and protecting against adversarial examples.
Each class will be wrapped up by a speaker from industry (ML6 and IBM), turning theory into practice.

The masterclasses will take place on 1 & 3 December 2021 from 9h till 12h30 (via livestream only).

This masterclass is a collaboration between UGain and VAIA.

We welcome participants from universities, colleges, industry, research centres and beyond.

Prior knowledge: since this is an advanced masterclass, every participant should have a good understanding of machine learning and deep learning.

    Beat Buesser is a Research Staff Member in the AI & Machine Learning group at the Dublin Research Laboratory of IBM Research. He is currently leading the development of the Adversarial Robustness 360 Toolbox (ART) and his research focuses on the security of machine learning and artificial intelligence. Before joining IBM, he worked as a postdoctoral associate at the Massachusetts Institute of Technology (MIT) and obtained his doctorate degree from ETH Zurich.

    Cedric De Boom is a postdoctoral researcher at Ghent University working at the intersection of generative modelling and natural language processing. He teaches a master's course on Deep Generative Models and a bachelor's course on Discrete Mathematics. He also has a strong background in recommender systems, which he put into practice at both Spotify and DPG Media.

    After graduating as a Computer Science Engineer from KU Leuven, Lucas Desard joined ML6 as an ML engineer. Alongside his work at ML6, he focuses part-time on building out gener8.ai, a new brand serving companies that want to make use of the latest and greatest generative AI techniques for their businesses.

    Jonathan Peck received the B.Sc. degree in computer science and the M.Sc. degree in mathematical informatics from Ghent University, Belgium, in 2015 and 2017, respectively. He is currently pursuing a joint Ph.D. degree with the Department of Applied Mathematics, Computer Science and Statistics and the VIB Inflammation Research Center, Ghent, Belgium. His research is sponsored by a fellowship of the Research Foundation Flanders (FWO) and focuses on improving the robustness of machine learning models against adversarial manipulation.


Day 1, Part 1 (9h – 11h): Cedric De Boom (Ghent University)

In this masterclass, you will learn everything there is to know about generative adversarial networks (GANs) and how they are trained. We will look deep into some of the training issues that arise and how they can be solved using some “black belt ninja tricks”. Finally, we will look at three interesting classes of GANs that have proven their merits: CycleGAN, Wasserstein GAN and StyleGAN.

  • 1. Introduction to generative models
  • 2. Introduction to generative adversarial networks (GANs)
  • 3. “The game of GANs”: generator vs discriminator
  • 4. How to train GANs
  • 5. Training challenges: game saturation & mode collapse
  • 6. Black belt ninja tricks for GANs
  • 7. Performance evaluation for GANs
  • 8. CycleGAN
  • 9. Wasserstein GAN
  • 10. StyleGAN
  • 11. Application: adversarial autoencoders

Day 1, Part 2 (11h30 – 12h30): Lucas Desard (ML6)

Lucas Desard will take you on an exciting trip and show you what can be done in practice with generative adversarial networks in the present day. He will talk about deep fakes and how they can be detected, as well as face swaps, colorisation of historic footage, lip synchronization, etc.

  • 1. Deep fakes: creation and detection
  • 2. Colorisation of historical photographs
  • 3. Face swaps and face transfers
  • 4. Artificial lip synchronization
  • 5. And much more

Day 2, Part 1 (9h – 11h): Jonathan Peck (Ghent University)

In this masterclass you will learn how to fool machine learning models by attacking them with adversarial techniques. You will also learn how to make your models more robust and how to protect them against such attacks. Should we worry, or is this all just a theoretical exercise? Jonathan will tell you everything about it.

  • 1. Types of attacks on ML models
  • 2. Adversarial examples, threat model and Kerckhoffs's principle
  • 3. Attacks: L-BFGS, fast gradient sign (FGSM), PGD, Carlini-Wagner, transfer attacks...
  • 4. Real-life adversarial examples
  • 5. Defenses: denoising, detection, hardening (certified and uncertified); robust optimization; randomized smoothing
  • 6. Arms race, Schneier's law, no free lunch
  • 7. Optimal transport
  • 8. Robustness vs accuracy; brittle features
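Several of the attacks listed above build on the same FGSM idea. As a flavour of what will be covered, here is a minimal sketch of projected gradient descent (PGD): iterate small gradient-sign steps and project back onto the allowed perturbation ball. The linear model and all numbers are invented for illustration; real PGD attacks deep networks under the same recipe.

```python
import numpy as np

# PGD on a toy linear classifier: repeat small gradient-sign steps and
# project back onto the L-inf ball of radius eps around the clean input.
# Weights and input are made-up numbers for illustration.
w = np.array([1.0, -2.0, 0.5])       # toy model: class "A" if w.x > 0
x0 = np.array([0.3, -0.1, 0.2])      # clean input, classified "A" (w.x0 = 0.6)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

eps, alpha, steps = 0.25, 0.1, 10    # budget, step size, iterations
x = x0.copy()
for _ in range(steps):
    # Gradient of the logistic loss (label "A") w.r.t. the input is
    # -sigmoid(-w.x) * w; step along its sign to increase the loss.
    grad_sign = np.sign(-w * sigmoid(-np.dot(w, x)))
    x = x + alpha * grad_sign
    x = np.clip(x, x0 - eps, x0 + eps)   # project back onto the eps-ball

print(np.dot(w, x0))   # 0.6: clean input is class "A"
print(np.dot(w, x))    # negative: class flipped, within the eps budget
```

With many small projected steps, PGD finds stronger adversarial examples than a single FGSM step, which is why it is the standard benchmark attack for evaluating defenses.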

Day 2, Part 2 (11h30 – 12h30): Beat Buesser (IBM Dublin)

Beat Buesser is a researcher at IBM working on adversarial machine learning. He leads the development team of the Adversarial Robustness 360 Toolbox (ART), an industry standard for implementing and researching adversarial attacks and defenses. Beat will give you an overview of the toolbox and provide some tutorials on how to experiment with it yourself.

Practical info


1 masterclass (half day): € 35

Certificate of attendance

All participants will receive a certificate of attendance.

Cancellation policy

Cancellation must be done in writing.

When cancelling up to 7 days before the start of the course 25% of the participation fee will be charged. When cancelling less than 7 days before the start of the course, the full fee is due.

Personal data

Gender: M ♂ / F ♀

Last name* (fields marked * are mandatory)
First Name*  
E-mail participant*  
Billed privately or to the company?*

Private data

Street and number
Postal code

Company data

Company name
Street and number
Postal code


I subscribe to the Masterclass Adversarial Machine Learning:

Day 1: Online (€ 35)
Day 2: Online (€ 35)


I wish to stay informed about all future UGain-courses.

How were you informed about this course?

Via the UGain e-mailing


Cancellation policy

I have consulted the cancellation policy and I agree with it.*

  • Dates may change due to unforeseen reasons.


English is used in all presentations and documentation.


Universiteit Gent
UGent Academie voor Ingenieurs
Els Van Lierde
Technologiepark 60
9052 Zwijnaarde
Tel.: +32 9 264 55 82
fax: +32 9 264 56 05