Purdue University Graduate School
Interactive Mitigation of Biases in Machine Learning Models

thesis
posted on 2024-09-03, 13:24, authored by Kelly M Van Busum

Bias and fairness issues in artificial intelligence algorithms are major concerns, as people are reluctant to use AI software they cannot trust. This work uses college admissions data as a case study to develop a methodology for defining and detecting bias, and then introduces a new method for interactive bias mitigation.

Admissions data spanning six years at a large urban research university was used to create machine learning-based predictive models that determine whether a given student would be directly admitted into the School of Science under various scenarios. During this period, submission of standardized test scores as part of a student's application became optional, which raised interesting questions about the impact of standardized test scores on admission decisions. We developed and analyzed predictive models to understand which variables are important in admissions decisions and how the decision to exclude test scores affects the demographics of the students who are admitted.

Then, using a variety of bias and fairness metrics, we analyzed these predictive models to detect biases they may carry with respect to three variables chosen to represent sensitive populations: gender, race, and whether a student was the first in their family to attend college. We found that high accuracy rates can mask underlying algorithmic bias against these sensitive groups.
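The abstract does not name the specific metrics used; as a hedged illustration (not code from the thesis), one widely used fairness metric, the demographic parity difference, can be computed from a model's predictions and group labels as follows. All data below is synthetic and purely illustrative.

```python
# Demographic parity difference: the gap in positive-prediction rates
# (e.g., predicted admissions) between two groups. A value near zero
# indicates parity; a large value signals potential bias.

def demographic_parity_difference(preds, groups, group_a, group_b):
    """Positive-prediction rate of group_a minus that of group_b."""
    def rate(g):
        members = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(members) / len(members)
    return rate(group_a) - rate(group_b)

# Toy predictions where group "A" is predicted admitted far more often
# than group "B" (4/5 vs. 1/5), regardless of overall accuracy.
preds  = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups, "A", "B"))  # ~0.6
```

This illustrates the abstract's point that overall accuracy and group-level fairness are separate questions: the metric above depends only on prediction rates per group, not on how many predictions are correct.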

Finally, we describe our method for bias mitigation which uses a combination of machine learning and user interaction. Because bias is intrinsically a subjective and context-dependent matter, it requires human input and feedback. Our approach allows the user to iteratively and incrementally adjust bias and fairness metrics to change the training dataset for an AI model to make the model more fair. This interactive bias mitigation approach was then used to successfully decrease the biases in three AI models in the context of undergraduate student admissions.
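The thesis's method is interactive and iterative, with a user in the loop; as a simpler, non-interactive stand-in for the idea of adjusting a training dataset toward fairness, a classic one-shot technique is Kamiran-Calders reweighing. The sketch below is an illustrative substitute, not the thesis's own algorithm, and uses synthetic data.

```python
# Kamiran-Calders reweighing: assign each training sample a weight equal to
# the expected joint frequency of its (group, outcome) pair divided by the
# observed joint frequency, so group membership and outcome become
# statistically independent in the weighted dataset.
from collections import Counter

def reweigh(labels, groups):
    """Return one weight per sample: expected / observed joint frequency."""
    n = len(labels)
    g = Counter(groups)                # marginal counts per group
    y = Counter(labels)                # marginal counts per outcome
    gy = Counter(zip(groups, labels))  # observed joint counts
    return [(g[grp] * y[lab]) / (n * gy[(grp, lab)])
            for grp, lab in zip(groups, labels)]

# Group A is admitted 3/4 of the time, group B only 1/4:
labels = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
weights = reweigh(labels, groups)
# Admitted A-samples are down-weighted (2/3); admitted B-samples up-weighted (2.0).
```

An interactive approach like the one described above would instead let the user inspect fairness metrics after each retraining and decide how far to push such adjustments, since acceptable trade-offs between fairness and accuracy are context-dependent.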

History

Degree Type

  • Doctor of Philosophy

Department

  • Computer Science

Campus location

  • Indianapolis

Advisor/Supervisor/Committee Chair

Dr. Shiaofen Fang

Additional Committee Member 2

Dr. Snehasis Mukhopadhyay

Additional Committee Member 3

Dr. Yuni Xia

Additional Committee Member 4

Dr. Mihran Tuceryan
