Authors: Baur, Tobias; Heimerl, Alexander; Lingenfelser, Florian; Wagner, Johannes; Valstar, Michel F.; Schuller, Björn; André, Elisabeth
Title: eXplainable Cooperative Machine Learning with NOVA
Type: Text/Journal Article
Date issued: 2020
Date available: 2021-04-23
DOI: 10.1007/s13218-020-00632-3 (http://dx.doi.org/10.1007/s13218-020-00632-3)
URI: https://dl.gi.de/handle/20.500.12116/36291
ISSN: 1610-1987
Keywords: Annotation; Cooperative machine learning; Explainable AI

Abstract: In the following article, we introduce a novel workflow, which we subsume under the term "explainable cooperative machine learning", and show its practical application in a data annotation and model training tool called NOVA. The main idea of our approach is to interactively incorporate the 'human in the loop' when training classification models from annotated data. In particular, NOVA offers a collaborative annotation backend where multiple annotators can combine their efforts. A key feature is the option to apply semi-supervised active learning techniques already during the annotation process, allowing data to be pre-labelled automatically and thereby drastically accelerating annotation. Furthermore, the user interface implements recent eXplainable AI techniques to provide users with both a confidence value for the automatically predicted annotations and a visual explanation. We show in a use-case evaluation that our workflow speeds up the annotation process, and further argue that the additional visual explanations help annotators understand the decision-making process as well as the trustworthiness of their trained machine learning models.