Five policy uses of algorithmic explainability

02/06/2023
by Matthew O'Shaughnessy, et al.

The notion that algorithmic systems should be "explainable" is common in the many statements of consensus principles developed by governments, companies, and advocacy organizations. But what exactly do these policy and legal actors want from explainability, and how do their desiderata compare with the explainability techniques developed in the machine learning literature? We explore this question in hopes of better connecting the policy and technical communities. We outline five settings in which policymakers seek to use explainability: complying with specific requirements for explanation; helping to obtain regulatory approval in highly regulated settings; enabling or interfacing with liability; flexibly managing risk as part of a self-regulatory process; and providing model and data transparency. We illustrate each setting with an in-depth case study contextualizing the purpose and role of explanation. Drawing on these case studies, we discuss common factors limiting policymakers' use of explanation and promising ways in which explanation can be used in policy. We conclude with recommendations for researchers and policymakers.
