A new paper from BKC affiliate Ram Shankar Siva Kumar, fellow Salomé Viljoen, senior researcher David O'Brien, and clinical instructional fellow Kendra Albert.
When machine learning systems fail because of adversarial manipulation, how should society expect the law to respond? Through scenarios grounded in the adversarial ML literature, we explore how aspects of computer crime, copyright, and tort law interface with perturbation, poisoning, model stealing, and model inversion attacks, showing that some attacks are more likely to result in liability than others. We end with a call to action for ML researchers: invest in transparent benchmarks of attacks and defenses, architect ML systems with forensics in mind, and think more about adversarial machine learning in the context of civil liberties.
The paper is aimed at ML researchers with no legal background.
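For readers unfamiliar with the attack classes named in the abstract, here is a minimal sketch of the simplest one, a perturbation (evasion) attack, using the well-known Fast Gradient Sign Method. The model, data, and epsilon value below are illustrative placeholders of our choosing, not anything drawn from the paper itself.

```python
# Illustrative sketch of a perturbation (evasion) attack via FGSM.
# The classifier and inputs are placeholders, not from the paper.
import torch
import torch.nn as nn

# Hypothetical stand-in classifier: 28x28 grayscale images, 10 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

def fgsm_perturb(x, label, epsilon=0.1):
    """Return x plus a small adversarial perturbation that raises the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that most increases the loss, bounded by epsilon.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

# Example: nudge a random "image" so the model is more likely to misread it.
x = torch.rand(1, 1, 28, 28)
label = torch.tensor([3])
x_adv = fgsm_perturb(x, label)
print("max pixel change:", (x_adv - x).abs().max().item())  # <= epsilon
```

The key point for the legal scenarios: the perturbation is bounded by epsilon, so the altered input can look unchanged to a human while still flipping the model's prediction.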