Coping with Mistreatment in Fair Algorithms
Machine learning now shapes everyday life across domains such as healthcare, finance, and energy. As our dependence on machine learning grows, these algorithms will inevitably be used to make decisions with direct societal impact, at every scale from personal choices to worldwide policy. It is therefore crucial to ensure that (un)intentional bias does not affect machine learning algorithms, especially when their decisions may have unintended consequences. Algorithmic fairness has gained traction in the machine learning community, and many methods and metrics have been proposed to enforce and evaluate fairness in algorithms and data collection. In this paper, we study algorithmic fairness in a supervised learning setting and examine the effect of optimizing a classifier for the Equal Opportunity metric. We demonstrate that such a classifier exhibits an increased false positive rate across sensitive groups, and we propose a conceptually simple method to mitigate this bias. We rigorously analyze the proposed method and evaluate it on several real-world datasets, demonstrating its efficacy.
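For concreteness, the sketch below illustrates the two quantities the abstract contrasts: the Equal Opportunity gap (the difference in true positive rates between sensitive groups) and the corresponding false positive rate gap, whose disparity the paper refers to as mistreatment. The helper function, example arrays, and group encoding are hypothetical and for illustration only; they do not reproduce the paper's method.

```python
import numpy as np

def group_rates(y_true, y_pred, group):
    """Per-group true/false positive rates for a binary classifier.

    y_true, y_pred: arrays of {0, 1} labels and predictions.
    group: array of sensitive-group identifiers.
    Returns {group_id: (tpr, fpr)}.
    """
    rates = {}
    for g in np.unique(group):
        mask = group == g
        yt, yp = y_true[mask], y_pred[mask]
        tpr = yp[yt == 1].mean() if np.any(yt == 1) else np.nan
        fpr = yp[yt == 0].mean() if np.any(yt == 0) else np.nan
        rates[g] = (tpr, fpr)
    return rates

# Hypothetical toy data: Equal Opportunity only constrains the TPRs to
# match across groups, so the FPRs can still diverge -- the disparity
# this paper targets.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

rates   = group_rates(y_true, y_pred, group)
eo_gap  = abs(rates[0][0] - rates[1][0])  # TPR gap (Equal Opportunity)
fpr_gap = abs(rates[0][1] - rates[1][1])  # FPR gap across groups
print(rates, eo_gap, fpr_gap)
```

On this toy data the TPR gap is small relative to the FPR gap, mirroring the abstract's observation that equalizing opportunity alone can leave false positive rates unequal across sensitive groups.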