Judiciously Reducing Sub-group Comparisons for Learning Intersectional Fair Representations

This abstract is open access.
Abstract Summary
Ensuring fairness in ranking systems is critical to avoid discriminatory outcomes towards minority groups in high-stakes domains such as recruitment. Most fairness interventions address fairness only for one or more binary groups, without accounting for intersectional fairness. We study the problem of achieving intersectional fairness in ranking systems, where individuals may face compounded disadvantages. We adapt and extend existing pre-processing fairness intervention methods to optimize for intersectional group fairness. Importantly, because the number of intersectional sub-groups grows exponentially with the number of attributes, optimization becomes computationally expensive and possibly infeasible. To address this challenge, we propose reducing the number of sub-group comparisons when optimizing for intersectional fairness, focusing on the pairs with the highest disparities between sub-groups. Our results show that limiting sub-group comparisons achieves comparable or better intersectional fairness. We validate this on three real-world datasets and a simulated setup designed to test robustness to intersectional fairness challenges.
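The core idea of limiting comparisons to the most disparate sub-group pairs can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `top_k_subgroup_pairs`, the use of a scalar per-sub-group metric (e.g. average exposure), and the example attribute values are all assumptions for the sake of the sketch.

```python
from itertools import combinations

def top_k_subgroup_pairs(subgroup_scores, k):
    """Rank sub-group pairs by absolute disparity and keep the k largest.

    subgroup_scores: dict mapping an intersectional sub-group (here, a tuple
    of attribute values) to a scalar fairness metric such as average exposure.
    Returns the k pairs with the highest disparity; in the abstract's setting,
    these would be the only comparisons enforced during optimization.
    """
    pairs = combinations(subgroup_scores, 2)
    ranked = sorted(
        pairs,
        key=lambda p: abs(subgroup_scores[p[0]] - subgroup_scores[p[1]]),
        reverse=True,
    )
    return ranked[:k]

# Hypothetical example: 4 sub-groups from two binary attributes.
scores = {("f", "dom"): 0.50, ("f", "for"): 0.20,
          ("m", "dom"): 0.55, ("m", "for"): 0.35}
print(top_k_subgroup_pairs(scores, 2))
```

With `n` attributes of arity `a`, there are on the order of `a^n` sub-groups and `a^n (a^n - 1) / 2` pairwise comparisons, so keeping only the top-`k` pairs is what makes the optimization tractable as attributes are added.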
Abstract ID: NKDR149
PhD, University of Amsterdam
Johns Hopkins University, HLTCOE
Distinguished University Professor, University of Amsterdam