Ethics & Society

A New Framework Aims to Map AI Fairness into a Single Chart

Researchers propose a new human-centered framework intended to structure research on AI fairness and to ease the implementation of systems designed to be fair. The concern is that AI increasingly informs high-stakes decisions, for example in hiring, loan approvals, or social benefits, areas where unequal treatment based on race, gender, or socioeconomic status is especially problematic.

AI fairness is discussed in terms of many different, often mutually conflicting notions. Improving one metric can weaken another, and balancing predictive accuracy against fairness has proven difficult in practice. The new framework aims to make these tensions visible and understandable.
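As a rough illustration of why such tensions arise (this sketch is not taken from the paper), consider two hypothetical groups with different base rates of a positive outcome: a perfectly accurate classifier then necessarily produces unequal selection rates, so closing that gap would require sacrificing some accuracy.

```python
# Illustrative sketch (not from the paper): with unequal base rates,
# perfect accuracy and demographic parity cannot both hold.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: group A has a ~60% positive rate, group B ~30%.
group = np.array(["A"] * 100 + ["B"] * 100)
y_true = np.concatenate([rng.random(100) < 0.6, rng.random(100) < 0.3]).astype(int)

# A perfectly accurate classifier simply reproduces the true labels.
y_pred = y_true.copy()

def positive_rate(pred, grp, g):
    """Share of predicted positives within group g."""
    return pred[grp == g].mean()

gap = positive_rate(y_pred, group, "A") - positive_rate(y_pred, group, "B")
accuracy = (y_pred == y_true).mean()
print(f"accuracy={accuracy:.2f}, demographic parity gap={gap:.2f}")
# The gap stays large even though accuracy is 1.0, so reducing it
# means changing some predictions and giving up accuracy.
```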

The framework developed by the researchers systematically organizes eight fairness metrics. These arise from combining three dimensions: individual versus group perspectives on fairness, so-called inframarginal versus intersectional assumptions, and outcome-based approaches versus those emphasizing equality of opportunity.
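The article does not list the eight metrics by name, but the combinatorics can be sketched: three two-valued dimensions give 2 × 2 × 2 = 8 combinations. The dimension labels below only paraphrase the paragraph above; the framework's actual terms may differ.

```python
# Sketch of how eight fairness notions arise from three two-valued dimensions.
# Labels paraphrase the article and are placeholders, not the paper's terms.
from itertools import product

dimensions = {
    "scope": ["individual", "group"],
    "assumption": ["inframarginal", "intersectional"],
    "target": ["outcomes", "equality of opportunity"],
}

for i, combo in enumerate(product(*dimensions.values()), start=1):
    print(f"{i}. " + ", ".join(f"{k}={v}" for k, v in zip(dimensions, combo)))
# Prints eight combinations, one per candidate fairness notion.
```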

Individual fairness asks whether similar individuals are treated the same way, while group fairness examines differences between groups of people, such as genders or ethnic groups. Intersectionality adds the interaction of multiple characteristics, for example how gender and ethnic background together affect treatment. A small sketch of the difference follows below.
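The following minimal sketch (my own illustration, not the authors' code) contrasts single-attribute group comparisons with an intersectional one, assuming a hypothetical toy table of hiring decisions with made-up group labels.

```python
# Illustrative sketch over a toy table of hiring decisions.
# Not the paper's implementation; group labels are hypothetical.
from collections import defaultdict

records = [
    # (gender, ethnicity, hired)
    ("woman", "X", 1), ("woman", "X", 0), ("woman", "Y", 0), ("woman", "Y", 0),
    ("man",   "X", 1), ("man",   "X", 1), ("man",   "Y", 1), ("man",   "Y", 0),
]

def selection_rates(records, key):
    """Average positive-decision rate per subgroup defined by `key`."""
    totals, counts = defaultdict(float), defaultdict(int)
    for gender, ethnicity, hired in records:
        k = key(gender, ethnicity)
        totals[k] += hired
        counts[k] += 1
    return {k: totals[k] / counts[k] for k in totals}

# Single-attribute group fairness: compare genders or ethnicities separately.
print(selection_rates(records, lambda g, e: g))   # women 0.25 vs men 0.75
print(selection_rates(records, lambda g, e: e))   # X 0.75 vs Y 0.25

# Intersectional view: compare gender-by-ethnicity subgroups jointly.
print(selection_rates(records, lambda g, e: (g, e)))
# The (woman, Y) subgroup has a 0% rate, a sharper disparity than either
# attribute shows on its own, which is the point of the intersectional view.
```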

The authors suggest that such a unified, human-centered framework helps designers and decision-makers consciously choose fairness criteria appropriate to the situation and better understand the trade-offs associated with different choices.

Source: A Unifying Human-Centered AI Fairness Framework, arXiv (AI).

This text was generated with AI assistance and may contain errors. Please verify details from the original source.

Original research: A Unifying Human-Centered AI Fairness Framework
Publisher: arXiv (AI)
Authors: Munshi Mahbubur Rahman, Shimei Pan, James R. Foulds
December 25, 2025