Masterclass - 1 & 3 December 2021
How can you tell the difference between elephants and lions?
We humans will know for sure when we encounter one in the wild, but nowadays machines can make an excellent guess too: modern machine learning algorithms have learned to exploit visual features so well that they reach more than 99.5% accuracy.
But, alas, that is not the complete picture: machines are easily fooled too.
With some clever tricks, any picture of a lion can easily be manipulated in such a way that humans do not notice any difference, but the machine learning model sees a completely different animal. Such a manipulated image is called an adversarial example.
What is going on there? Have we built the most powerful machine learning models ever on brittle foundations?
The above is an example of adversarial learning, a subfield of machine learning that is concerned with what happens when we fool a machine learning model and how we can prevent and/or exploit this. While the first example is mostly concerned with vulnerabilities, an example of exploitation is found in the class of machine learning models called generative adversarial networks.
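The manipulation described above can be illustrated with a toy sketch. The snippet below applies a fast-gradient-sign-style perturbation to a linear two-class "classifier"; all numbers, class names, and the model itself are invented for illustration and are far simpler than the deep networks the masterclass discusses.

```python
import numpy as np

# Toy stand-in for an image classifier: two classes scored by a linear
# model, scores = W @ x. W, x, and the labels are invented here; real
# attacks target deep networks in the same spirit.
W = np.array([[1.0, 2.0, -1.0],    # class 0: "lion"
              [0.5, -1.0, 2.0]])   # class 1: "elephant"
x = np.array([1.0, 1.0, 1.0])      # the original "lion" image, flattened

def predict(W, x):
    return int(np.argmax(W @ x))

# FGSM-style perturbation: take a small signed step of size eps against
# the score margin between the true class and the rival class.
true, rival = 0, 1
grad = W[true] - W[rival]          # gradient of the margin w.r.t. x
eps = 0.5
x_adv = x - eps * np.sign(grad)    # small, bounded per-feature change

print(predict(W, x), predict(W, x_adv))  # 0 1: the "lion" became an "elephant"
```

The perturbation changes each feature by at most eps, yet it flips the predicted class: exactly the asymmetry between human and machine perception that adversarial examples exploit.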
Ever since the dawn of generative adversarial networks, the field of generative machine learning has taken multiple
leaps forward and has paved the way to application domains we couldn’t even think of before.
When training generative adversarial networks, we let two machines play a game against each other:
one machine gradually tries to become a master painter, while the other machine is the art critic that gets
better and better at discerning genuine paintings from counterfeits. By making the painter try to fool the
critic over and over again, the painter becomes more skilled and will produce more realistic paintings.
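The painter-and-critic game can be sketched in a few lines. Below is a deliberately minimal 1-D GAN: the "painter" is a single learnable offset that turns noise into samples, the "critic" is a logistic regression, and the gradient updates are written out by hand. The target distribution, learning rate, and all parameter names are illustrative assumptions, not material from the masterclass.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D "paintings": real data drawn from N(3, 0.5). The generator
# must learn to turn noise into samples near 3.
def real_batch(n):
    return rng.normal(3.0, 0.5, n)

# Generator (painter):      g(z) = b + z, with a single learnable offset b.
# Discriminator (critic):   d(x) = sigmoid(w*x + c), probability x is real.
b, w, c = 0.0, 0.1, 0.0
lr = 0.05

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

for step in range(2000):
    z = rng.normal(0.0, 0.5, 32)
    fake = b + z
    real = real_batch(32)

    # Critic step: ascend log d(real) + log(1 - d(fake)), i.e. get better
    # at telling genuine samples from counterfeits.
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - dr) * real - df * fake)
    c += lr * np.mean((1 - dr) - df)

    # Painter step: ascend log d(fake), i.e. nudge b so the critic is
    # fooled into rating the counterfeits as real.
    df = sigmoid(w * fake + c)
    b += lr * np.mean((1 - df) * w)

print(b)  # b typically drifts toward 3.0, the real-data mean
```

By repeatedly trying to fool the critic, the painter's offset b is pulled toward the real-data mean, which is the 1-D analogue of the painter producing ever more realistic paintings.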
In this series of masterclasses, two researchers from Ghent University will take you deep into the field of adversarial learning.
On the first morning we will examine generative adversarial networks in full detail,
while the second morning will all be about finding and protecting against adversarial examples.
Each class will be wrapped up by a speaker from industry (ML6 and IBM, respectively), showing how theory becomes practice.
The masterclasses will take place on 1 & 3 December 2021 from 9h till 12h30 (via livestream only).
This masterclass is a collaboration between UGain and VAIA.