
FairHealthGrid: A Systematic Framework for Evaluating Bias Mitigation Strategies in Healthcare Machine Learning
Pedro Paiva, Farzaneh Dehghani, Fahim Anzum, Mansi Singhal, Ayah Metwali, Marina Gavrilova, Mariana Bento
The integration of machine learning (ML) into healthcare demands rigorous fairness assurance to prevent algorithmic biases from exacerbating disparities in treatment. This study introduces FairHealthGrid, a systematic framework for evaluating bias mitigation strategies across healthcare ML models. Our framework combines grid search with a composite fairness score that aggregates multiple fairness metrics, each weighted by stakeholder risk tolerances. As output, it produces a trade-off map that jointly evaluates accuracy and fairness, categorizing each solution (model + bias mitigation strategy) into one of five regions: Win-Win, Good, Poor, Inverted, or Lose-Lose. We apply the framework to three different healthcare datasets. Results reveal substantial variability across healthcare applications: the framework identifies model and bias-mitigation combinations that balance equity and accuracy, yet highlights the absence of a universal solution. By enabling systematic trade-off analysis, FairHealthGrid allows healthcare stakeholders to audit, compare, and select ethically aligned ML models for specific healthcare applications, advancing toward equitable AI in healthcare.
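The abstract does not give the exact formulas, but the two components it describes can be sketched in a few lines of Python. The following is a minimal sketch under assumed definitions: the metric names, the weights, the tolerance tol, and the region rules are illustrative assumptions reflecting one plausible reading of the five categories, not the paper's exact formulation.

# Minimal sketch, assuming a weighted-sum composite score and a
# threshold-based region classification; all names and values below
# are illustrative assumptions, not the paper's exact method.

def composite_fairness_score(metrics, weights):
    """Weighted sum of fairness-metric disparities (lower is fairer).
    Weights encode risk tolerances and are assumed to sum to 1."""
    return sum(weights[name] * metrics[name] for name in weights)

def trade_off_region(delta_accuracy, delta_fairness, tol=0.01):
    """Classify a (model + mitigation) candidate against the unmitigated
    baseline. delta_fairness > 0 means disparity dropped (fairer);
    delta_accuracy > 0 means accuracy improved."""
    if delta_fairness > tol and delta_accuracy > tol:
        return "Win-Win"    # both fairness and accuracy improve
    if delta_fairness > tol and abs(delta_accuracy) <= tol:
        return "Good"       # fairer at a negligible accuracy cost
    if delta_fairness > tol:
        return "Poor"       # fairness gained by sacrificing accuracy
    if delta_accuracy > tol:
        return "Inverted"   # accuracy improves while fairness does not
    return "Lose-Lose"      # no improvement on either axis

# Example: fairness improves slightly while accuracy drops -> "Poor".
metrics = {"demographic_parity_diff": 0.04, "equalized_odds_diff": 0.06}
weights = {"demographic_parity_diff": 0.5, "equalized_odds_diff": 0.5}
print(composite_fairness_score(metrics, weights))                    # 0.05
print(trade_off_region(delta_accuracy=-0.03, delta_fairness=0.02))   # Poor

In a grid search, such a classifier would be applied to every (model, mitigation strategy) pair to populate the trade-off map described above.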
