Authors: Szepannek, Gero; Lübke, Karsten
Title: Explaining Artificial Intelligence with Care
Type: Text/Journal Article
Year: 2022
Available online: 2023-01-18
DOI: 10.1007/s13218-022-00764-8 (http://dx.doi.org/10.1007/s13218-022-00764-8)
Handle: https://dl.gi.de/handle/20.500.12116/40041
ISSN: 1610-1987
Keywords: Black box algorithms; Explainability; Forensics; Hyperparameter tuning; Interpretable machine learning; Multiclass classification; Partial dependence plots

Abstract: In the recent past, several widely publicized failures of black box AI systems, together with new regulatory requirements, have increased research interest in explainable and interpretable machine learning. Among the available approaches to model explanation, partial dependence plots (PDPs) represent one of the best-known methods for model-agnostic assessment of a feature's effect on the model response. Although PDPs are commonly used and easy to apply, they only provide a simplified view of the model and thus risk being misleading. Relying on a model interpretation given by a PDP can have dramatic consequences in an application area such as forensics, where decisions may directly affect people's lives. For this reason, this paper investigates the degree of model explainability on a popular real-world data set from the field of forensics: the glass identification database. By means of this example, the paper aims to illustrate two important aspects of machine learning model development from a practical point of view in the context of forensics: (1) the importance of a proper process for model selection, hyperparameter tuning, and validation, and (2) the careful use of explainable artificial intelligence. For this purpose, the concept of explainability is extended to multiclass classification problems such as the one given by the glass data.
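The partial dependence plot referred to in the abstract can be sketched in a few lines: for a grid of values of one feature, the feature is overwritten in every observation and the model's predictions are averaged, which in the multiclass case yields one curve of average class probability per class. The following minimal Python sketch is not taken from the paper; the synthetic data set, the RandomForestClassifier model, and the chosen feature index are illustrative assumptions standing in for the glass identification data and the authors' models.

# Minimal sketch of a multiclass partial dependence computation
# (illustrative only; synthetic stand-in for the glass data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in: 6 classes, 9 numeric features (like the glass data layout).
X, y = make_classification(n_samples=500, n_features=9, n_informative=6,
                           n_classes=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

feature = 0  # index of the feature whose effect is inspected (assumed)
grid = np.linspace(X[:, feature].min(), X[:, feature].max(), 20)

# Partial dependence: overwrite the feature with each grid value in every row
# and average the predicted class probabilities -> one curve per class.
pdp = []
for v in grid:
    X_mod = X.copy()
    X_mod[:, feature] = v
    pdp.append(model.predict_proba(X_mod).mean(axis=0))
pdp = np.array(pdp)  # shape: (len(grid), n_classes)

for k in range(pdp.shape[1]):
    print(f"class {k}: PDP ranges from {pdp[:, k].min():.3f} to {pdp[:, k].max():.3f}")

Plotting each column of pdp against grid gives the class-wise partial dependence curves; the simplification the paper warns about is visible here, since the averaging step hides any interaction between the inspected feature and the remaining ones.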