IMPROVING MACHINE LEARNING FAIRNESS BY REPAIRING MISLABELED DATA
As machine learning (ML) and artificial intelligence (AI) become increasingly prevalent in high-stakes decision-making, fairness has emerged as a critical societal issue. Individuals belonging to different groups can receive different algorithmic outcomes, largely due to errors and biases inherent in the underlying training data, resulting in violations of group fairness.
This study investigates the problem of improving group fairness by detecting mislabeled instances in the training data and flipping their labels. Four solutions are proposed to obtain an ordering in which the labels of training instances should be flipped so as to reduce the bias in the predictions of a model trained on the modified data. Through experimental evaluation, we demonstrate the effectiveness of repairing mislabeled data with mislabel-detection techniques to improve the fairness of machine learning models.
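The sketch below is not the thesis's actual method; it only illustrates the general idea under simple assumptions. Instances are ranked by a basic mislabel score (disagreement between the observed label and an out-of-fold predicted probability), the most suspect labels are flipped first, and a model retrained on the repaired data is evaluated with the demographic parity difference. The dataset, the scoring heuristic, and the fairness metric are all illustrative choices.

```python
# Minimal sketch: rank instances by a simple mislabel score, flip labels in that
# order, retrain, and track a group-fairness metric (demographic parity difference).
# This is an assumed, simplified pipeline, not the four solutions from the thesis.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Synthetic data: column 0 is a binary sensitive attribute, plus three features.
n = 2000
sensitive = rng.integers(0, 2, n)
X = np.column_stack([sensitive, rng.normal(size=(n, 3))])
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)
# Inject label noise correlated with the sensitive group (a source of unfairness).
noise = (sensitive == 1) & (rng.random(n) < 0.15)
y_noisy = np.where(noise, 1 - y, y)

def demographic_parity_diff(model, X, sensitive):
    """Absolute difference in positive prediction rates between the two groups."""
    pred = model.predict(X)
    return abs(pred[sensitive == 1].mean() - pred[sensitive == 0].mean())

# Mislabel score: how strongly an out-of-fold prediction disagrees with the label.
proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y_noisy,
                          cv=5, method="predict_proba")[:, 1]
score = np.abs(y_noisy - proba)   # higher score = more likely mislabeled
order = np.argsort(-score)        # flip the most suspect labels first

for k in [0, 50, 100, 200]:       # flip the top-k suspected mislabels
    y_repaired = y_noisy.copy()
    y_repaired[order[:k]] = 1 - y_repaired[order[:k]]
    model = LogisticRegression(max_iter=1000).fit(X, y_repaired)
    print(f"flipped {k:4d} labels -> demographic parity diff: "
          f"{demographic_parity_diff(model, X, sensitive):.3f}")
```

With labels repaired in an informative order, the retrained model's group disparity typically shrinks as more suspected mislabels are flipped; the thesis's contribution lies in how that ordering is chosen.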
Degree Type
- Master of Science
Department
- Computer and Information Technology
Campus location
- West Lafayette