Title: Using FALCES against bias in automated decisions by integrating fairness in dynamic model ensembles
Authors: Lässig, Nico; Oppold, Sarah; Herschel, Melanie
Editors: Sattler, Kai-Uwe; Herschel, Melanie; Lehner, Wolfgang
Date issued: 2021-03-16
Year: 2021
Language: en
ISBN: 978-3-88579-705-0
ISSN: 1617-5468
DOI: 10.18420/btw2021-08
URI: https://dl.gi.de/handle/20.500.12116/35813
Keywords: Model fairness; bias in machine learning; model ensembles
Abstract: As regularly reported in the media, automated classifications and decisions based on machine learning models can cause unfair treatment of certain groups of a general population. Classically, machine learning models are designed to make highly accurate decisions in general. When one machine learning model is not sufficient to define the possibly complex boundary between classes, multiple "specialized" models are used within a model ensemble to further boost accuracy. In particular