Adversarial Examples Aren’t Bugs, They’re Features | Synced



Source: Synced | AI Technology & Industry Review

In the new paper Adversarial Examples Are Not Bugs, They Are Features, a group of MIT researchers propose that the effectiveness of adversarial examples can be attributed to non-robust features rather than to flaws in the models: “Adversarial vulnerability is a direct result of our models’ sensitivity to well-generalizing features in the data.”
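To make the idea of adversarial vulnerability concrete, the sketch below generates an adversarial example with the fast gradient sign method (FGSM) against a toy logistic-regression model. FGSM is a standard illustration of adversarial attacks, not the MIT paper's own construction, and all weights and inputs here are made-up values for demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Shift x by eps in the sign of the loss gradient w.r.t. the input."""
    p = sigmoid(w @ x + b)   # model's predicted probability of class 1
    grad_x = (p - y) * w     # d(logistic loss)/dx for this linear model
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)       # hypothetical trained weights
b = 0.1
x = rng.normal(size=8)       # an input the model classifies confidently
y = 1.0 if sigmoid(w @ x + b) > 0.5 else 0.0  # model's own label for x

x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
# A small, sign-aligned perturbation moves the logit against the label,
# so the model's confidence in its original prediction drops.
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))
```

The attack works precisely because the model leans on input directions that correlate with the label; aligning a small perturbation against those directions is enough to change the output, which is the sensitivity the paper attributes to well-generalizing but non-robust features.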