Who's responsible? Jointly quantifying the contribution of the learning algorithm and training data
A fancy learning algorithm A outperforms a baseline method B when both are trained on the same data. Should A get all of the credit for the improved performance, or does the training data also deserve some? When deployed in a new setting from a different domain, however, A makes more mistakes than B. How much of the blame should go to the learning algorithm, and how much to the training data? Such questions are becoming increasingly important and prevalent as we aim to make ML more accountable, and their answers would also help us allocate resources between algorithm design and data collection. In this paper, we formalize these questions and provide a principled Extended Shapley framework to jointly quantify the contributions of the learning algorithm and the training data. Extended Shapley uniquely satisfies several natural properties that ensure equitable treatment of the data and the algorithm. Through experiments and theoretical analysis, we demonstrate that Extended Shapley has several important applications: 1) it provides a new metric of ML performance improvement that disentangles the influence of the data regime from that of the algorithm; 2) it facilitates ML accountability by properly assigning responsibility for mistakes; and 3) it offers greater robustness to manipulation by the ML designer.
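The abstract does not spell out the value function used by Extended Shapley, but the core computation can be illustrated with a minimal sketch. The key assumption below (ours, not necessarily the paper's) is that the algorithm is treated as one extra player alongside individual training points: coalitions containing the "algorithm" player are scored by training A on the coalition's data, and all other coalitions fall back to the baseline B. All names (`shapley_values`, `make_value_fn`, the stub scoring functions) are hypothetical.

```python
# Hedged sketch: exact Shapley values over a small player set consisting of
# the learning algorithm (one player) plus individual training points.
# ASSUMPTION: coalitions that include the "algorithm" player are scored with
# algorithm A, all others with baseline B; the paper's actual Extended
# Shapley value function may differ.
from itertools import combinations
from math import factorial


def shapley_values(players, v):
    """Exact Shapley values:
    phi_i = sum over S not containing i of |S|!(n-|S|-1)!/n! * (v(S+{i}) - v(S))."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(len(others) + 1):
            # Weight of a coalition of size k in the Shapley formula.
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            for S in combinations(others, k):
                coalition = frozenset(S)
                phi[p] += w * (v(coalition | {p}) - v(coalition))
    return phi


def make_value_fn(train_and_score_A, train_and_score_B, data_ids):
    """Build a value function: A scores coalitions containing the algorithm
    player; the baseline B scores data-only coalitions (an assumption)."""
    def v(S):
        subset = frozenset(S) & frozenset(data_ids)
        if "algorithm" in S:
            return train_and_score_A(subset)
        return train_and_score_B(subset)
    return v


if __name__ == "__main__":
    # Toy demo with stubbed scores: pretend each data point adds 0.2 to test
    # accuracy and algorithm A contributes a flat 0.1 edge over baseline B.
    # In practice these stubs would train A or B on the subset and evaluate.
    data_ids = ["x1", "x2", "x3"]
    score_A = lambda subset: 0.1 + 0.2 * len(subset)
    score_B = lambda subset: 0.2 * len(subset)
    v = make_value_fn(score_A, score_B, data_ids)
    print(shapley_values(["algorithm"] + data_ids, v))
    # Efficiency check: attributions sum to v(all players) - v(empty) = 0.7,
    # with 0.1 credited to the algorithm and 0.2 to each data point.
```

Note that exact Shapley computation is exponential in the number of players, so any practical instantiation over real datasets would approximate the sum, e.g. by Monte Carlo sampling of permutations.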