Fairness in Automated Decision-Making
Project duration: 01.09.2020 to 31.08.2022
Short description
Artificial intelligence offers considerable potential for addressing complex societal problems. In the governmental context, it is increasingly used for automated decision-making (ADM) and promises to enhance government efficiency by automating bureaucratic processes. By eliminating human judgement, ADM promises to reach the right decisions in less time and to be neutral and objective. At the same time, however, concerns have been raised that ADM may foster discrimination or create new biases. So far, most findings on algorithmic fairness and discrimination stem from the U.S. context, with a strong focus on the technical aspects of the algorithms underlying the decision processes. Very little attention has been paid to the underlying societal mechanisms and the specific decision-making context when evaluating these algorithms. To close this research gap, the proposed project will investigate and systematically classify ADM practices in governmental contexts in Germany, integrating research on algorithmic fairness with sociology’s understanding of inequality and discrimination. To investigate fairness and discrimination in a real-world scenario, the project will develop an ADM system based on labor market data and evaluate both the system and various bias correction techniques with respect to different fairness criteria.
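To illustrate what evaluating an ADM system "regarding different fairness aspects" can look like in practice, the minimal sketch below computes two common group fairness metrics, the demographic parity gap and the equal opportunity gap, for binary decisions and a binary protected attribute. The synthetic data and all variable names are illustrative assumptions for this sketch, not the project's actual data, metrics, or code.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-decision rates between the two groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return abs(rates[0] - rates[1])

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates between the two groups."""
    tprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)  # members of group g with a positive outcome
        tprs.append(y_pred[mask].mean())
    return abs(tprs[0] - tprs[1])

# Illustrative synthetic data: model decisions (y_pred), observed outcomes (y_true),
# and a binary protected attribute (group). Real labor market data would replace this.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)

print("Demographic parity gap:", demographic_parity_gap(y_pred, group))
print("Equal opportunity gap:", equal_opportunity_gap(y_true, y_pred, group))
```

A gap of zero on either metric indicates parity between the two groups; bias correction techniques are typically judged by how much they shrink such gaps while preserving predictive performance.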
Objective
tba