Title: Testing Balancedness of ML Algorithms
Authors: Sharma, Arnab; Wehrheim, Heike
Editors: Becker, Steffen; Bogicevic, Ivan; Herzwurm, Georg; Wagner, Stefan
Date issued: 2019
Date available: 2019-03-14
Type: Text/Conference Paper
Language: en
ISBN: 978-3-88579-686-2
ISSN: 1617-5468
DOI: 10.18420/se2019-48
URL: https://dl.gi.de/handle/20.500.12116/20909

Abstract: With the increased application of machine learning (ML) algorithms to decision-making processes, the question of the fairness of such algorithms has come into focus. Fairness testing aims at checking whether a classifier, as "learned" by an ML algorithm on some training data, is biased in the sense of discriminating against certain attributes (e.g., gender or age). Fairness testing thus targets the prediction phase of ML, not the learning phase. In our approach, we investigate fairness for the learning phase. Our definition of fairness is based on the idea that the learner should treat all data in the training set equally, disregarding incidental details such as the names or ordering of features or the ordering of data instances. We term this property balanced data usage. We have developed a (metamorphic) testing approach called TiLe for checking balanced data usage, and we report on experiments using TiLe to check classifiers from the scikit-learn library for balancedness.
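The core metamorphic relation behind balanced data usage can be illustrated with a small sketch: permuting the rows of the training set should not change the predictions of the learned classifier. The snippet below is an illustrative approximation, not the authors' TiLe tool; the toy nearest-centroid learner and all function names are assumptions made for the example.

```python
import numpy as np

def train_centroids(X, y):
    # Toy, deterministic learner: one mean centroid per class.
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict(model, X):
    # Assign each point to the class of the nearest centroid.
    classes, centroids = model
    dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[dists.argmin(axis=1)]

def balanced_under_row_permutation(train, X_train, y_train, X_test, rng):
    # Metamorphic relation: training on a row-permuted copy of the data
    # should yield the same predictions as training on the original.
    perm = rng.permutation(len(y_train))
    model_orig = train(X_train, y_train)
    model_perm = train(X_train[perm], y_train[perm])
    return np.array_equal(predict(model_orig, X_test),
                          predict(model_perm, X_test))
```

The same scheme extends to the other transformations mentioned in the abstract (renaming or reordering features): apply the transformation to the training data, retrain, and check that the predictions on a test set are unchanged.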