The Theory of Artificial Immutability: Protecting Algorithmic Groups Under Anti-Discrimination Law

05/02/2022
by   Sandra Wachter, et al.

Artificial Intelligence (AI) is increasingly used to make important decisions about people. While issues of AI bias and proxy discrimination are well explored, less attention has been paid to the harms created by profiling based on groups that do not map to or correlate with legally protected groups such as sex or ethnicity. This raises a question: are existing equality laws able to protect against emergent AI-driven inequality? This article examines the legal status of algorithmic groups in North American and European non-discrimination doctrine, law, and jurisprudence, and shows that algorithmic groups are not comparable to traditional protected groups. Nonetheless, these new groups are worthy of protection. I propose a new theory of harm - "the theory of artificial immutability" - that aims to bring AI groups within the scope of the law. My theory describes how algorithmic groups act as de facto immutable characteristics that limit people's autonomy and prevent them from achieving important goals.
