Machines Learning to Find Injustice
Featuring HLS Climenko Fellow and Lecturer on Law, Ryan Copus
Predictive algorithms can often outperform humans in making legal decisions. But when used to automate or guide decisions, predictions can embed biases, conflict with a "right to explanation," and be manipulated by litigants. HLS Climenko Fellow and Lecturer on Law Ryan Copus suggests we should instead use predictive algorithms to identify unjust decisions and subject them to secondary review.
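The review-triggering idea can be sketched in a few lines: train a model on past outcomes, then flag decisions where the actual ruling strongly disagrees with the model's prediction and route those to secondary review. This is a minimal illustration, not Copus's actual method; the case IDs, probabilities, and threshold below are all hypothetical.

```python
def flag_for_review(cases, threshold=0.8):
    """Flag decisions where the model's prediction strongly disagrees
    with the actual outcome, routing them to secondary review.

    cases: list of (case_id, predicted_prob_grant, actually_granted).
    threshold: how confident the model must be in the *opposite*
    outcome before a decision is flagged.
    """
    flagged = []
    for case_id, predicted_prob_grant, actually_granted in cases:
        # Model was confident the case would be granted, but it was denied.
        if not actually_granted and predicted_prob_grant > threshold:
            flagged.append(case_id)
        # Model was confident the case would be denied, but it was granted.
        elif actually_granted and predicted_prob_grant < 1 - threshold:
            flagged.append(case_id)
    return flagged

# Hypothetical data: (case_id, model's predicted probability of grant, outcome)
cases = [
    ("A-1", 0.95, False),  # model expected a grant; case was denied -> flag
    ("A-2", 0.10, False),  # model and decision-maker agree -> no flag
    ("A-3", 0.05, True),   # model expected a denial; case was granted -> flag
]
print(flag_for_review(cases))  # -> ['A-1', 'A-3']
```

Note that the model here only triages which decisions get a second look by humans; it never decides cases itself, which is what sidesteps the automation concerns (bias, explainability, manipulation) the blurb mentions.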
This event is supported by the Ethics and Governance of Artificial Intelligence Initiative at the Berkman Klein Center for Internet & Society. In conjunction with the MIT Media Lab, the Initiative is developing activities, research, and tools to ensure that fast-advancing AI serves the public good.