Tom Zick considers the near-term societal risks of reinforcement learning in a post for the BKC Medium Collection.
“The harder it is to describe what success looks like within a given domain, the more prone to bad outcomes it is. This is true of all ML systems, but even more crucial for RL systems that cannot be meaningfully validated ahead of use. As regulators, we need to think about which domains need more regulatory scaffolding to minimize the fallout from our intellectual debt, while allowing for the immense promise of algorithms that can learn from their mistakes,” Zick writes.