Adversarial attacks
Zhengyu Zhao
CISPA Helmholtz Center for Information Security, Saarbrücken, Germany
https://zhengyuzhao.github.io/
Date(s): 03/03/2023
14h30 - 15h30
An adversarial attack is a method for generating adversarial examples: inputs to a machine learning model that are purposely designed to cause the model to make a mistake in its predictions, despite resembling valid inputs to a human.
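To make the idea concrete, here is a minimal sketch of one classic attack, the Fast Gradient Sign Method (FGSM), applied to a toy logistic-regression model. The model weights, input, and epsilon value are invented for illustration and are not taken from the talk; the technique perturbs the input in the direction that increases the model's loss, `x + eps * sign(grad_x L)`.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method on a logistic-regression model:
    move x by eps in the sign of the input gradient of the loss."""
    p = sigmoid(w @ x + b)      # model's predicted probability of class 1
    grad_x = (p - y) * w        # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Hypothetical model and input (illustrative values only)
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])        # clean input
y = 1.0                         # true label

x_adv = fgsm(x, y, w, b, eps=0.9)
p_clean = sigmoid(w @ x + b)    # > 0.5: correctly classified as 1
p_adv = sigmoid(w @ x_adv + b)  # < 0.5: the small perturbation flips it
```

The perturbation budget `eps` bounds how far the adversarial example may drift from the clean input; in image settings it is kept small enough that the two inputs look alike to a human.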
Adversarial machine learning is the study of attacks on machine learning algorithms, and of defenses against such attacks.
Séminaire Signal et Apprentissage
Location
I2M Chateau-Gombert - CMI