Commit d8d22714 authored by Jurgen Gaeremyn

tweaking event adversarial noise

parent f8512a01
Pipeline #368449408 passed with stages
in 2 minutes and 22 seconds
---
startdate: 2021-09-26
starttime: "14:00"
endtime: "17:30"
linktitle: "Adversarial Noise"
title: "Adversarial Noise"
price: "tip jar"
series: "adversarial-noise"
eventtype: "Workshop"
location: "HSBXL"
image: "javierlloret.jpeg"
hackeragenda: "true"
---
## Topic
Javier Lloret is a researcher, media artist and engineer based in Amsterdam.
He is conducting research on the vulnerabilities of AI-based Computer Vision systems to Adversarial Attacks. The project is subsidized by the Dutch fund Stimuleringsfonds for Digital Culture. A short description is available (in Dutch):
https://stimuleringsfonds.nl/nl/toekenningen/adversarial_noise/
This workshop on Artificial Intelligence (AI) and its vulnerability to 'adversarial attacks' explores how AI vision systems can be misled by adding visual noise to images, causing the system to malfunction. Javier will give an introduction to adversarial attacks on images, and we will explore code examples that implement different adversarial techniques. The workshop will also offer a space for dialogue and discussion around the subject.
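To give a flavour of the code examples, here is a minimal sketch of one common adversarial technique, the Fast Gradient Sign Method (FGSM). It attacks a toy logistic-regression "classifier" whose input gradient can be written in closed form; the weights and the 4-pixel "image" are illustrative values chosen for this sketch, not material from the workshop.

```python
import numpy as np

def fgsm_attack(w, b, x, y, epsilon=0.1):
    """Perturb input x by epsilon in the direction that increases
    the cross-entropy loss of a logistic-regression classifier.

    For p = sigmoid(w.x + b), the gradient of the loss w.r.t. x
    is (p - y) * w, so FGSM steps by epsilon * sign of that gradient.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # predicted probability of class 1
    grad_x = (p - y) * w                    # dLoss/dx in closed form
    # Clip so the adversarial image stays in the valid pixel range [0, 1].
    return np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)

# Toy usage: a 4-pixel "image" the classifier assigns to class 1
# with high confidence; the attack nudges each pixel by at most
# epsilon, lowering that confidence.
w = np.array([2.0, -1.0, 0.5, 1.5])
b = -0.5
x = np.array([0.9, 0.1, 0.8, 0.7])
adv = fgsm_attack(w, b, x, y=1)
```

Real attacks work the same way, but the input gradient comes from backpropagation through a deep vision model rather than a closed-form expression, which is what makes barely visible noise so effective against them.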
## Constraints
The workshop is limited to around 8 active participants. Mail contact@hsbxl.be if you want to join.
---
startdate: 2021-09-26
starttime: "14:00"
endtime: "17:30"
linktitle: "Adversarial Noise Page"
title: "Adversarial Noise Page"
price: "tip jar"
series: "adversarial-noise"
eventtype: "Workshop"
location: "HSBXL"
image: "javierlloret.jpeg"
hackeragenda: "true"
---
## Event
{{< series series="adversarial-noise" >}}