Title: What Does My Classifier Learn? A Visual Approach to Understanding Natural Language Text Classifiers
Authors: Winkler, Jonas Paul; Vogelsang, Andreas
Editors: Tichy, Matthias; Bodden, Eric; Kuhrmann, Marco; Wagner, Stefan; Steghöfer, Jan-Philipp
Date of issue: 2018 (available online 2019-03-29)
ISBN: 978-3-88579-673-2
ISSN: 1617-5468
URI: https://dl.gi.de/handle/20.500.12116/21161
Language: en
Type: Text/Conference Paper
Keywords: visual feedback; neural networks; artificial intelligence; machine learning; natural language processing; explanations; requirements engineering

Abstract: Neural networks have been used to solve tasks such as image recognition, text classification, and machine translation, and have achieved exceptional results in many of them. However, understanding the inner workings of a neural network and explaining why it produces a certain output are not trivial. Especially in text classification, an approach that explains network decisions may greatly increase the acceptance of tools supported by neural networks. In this paper, we present an approach that visualizes the reasons for a classification outcome of a convolutional neural network by tracing the decisions made by the network back to the input. The approach is applied to several text classification problems, including our own classification problem from requirements engineering. We argue that providing these explanations in neural-network-supported tools will let users work with such tools with more confidence and may also allow the tools to perform certain tasks automatically.
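
The abstract describes tracing the network's decisions back to the input only at a high level. As a rough illustration of the general idea (a minimal sketch, not the paper's actual method), the code below attributes a toy text CNN's prediction to input tokens: with global max pooling, each convolution filter selects exactly one n-gram position, so each filter's contribution to the predicted class logit can be assigned back to that span. The architecture and all names (`TinyTextCNN`, `explain`) are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only: tracing a text CNN's prediction back to input
# tokens via the argmax positions of global max pooling. This is a generic
# attribution technique, not the method of Winkler & Vogelsang.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyTextCNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=50, num_filters=16,
                 kernel_size=3, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, num_filters, kernel_size)
        self.fc = nn.Linear(num_filters, num_classes)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)    # (batch, embed_dim, seq_len)
        feat = F.relu(self.conv(x))                  # (batch, filters, positions)
        pooled, argmax = feat.max(dim=2)             # global max pooling
        return self.fc(pooled), pooled, argmax

def explain(model, token_ids):
    """Assign a relevance score to each token position for the predicted class."""
    logits, pooled, argmax = model(token_ids)
    pred = logits.argmax(dim=1)                      # (batch,)
    # How much each filter contributes to the predicted class logit.
    contrib = pooled * model.fc.weight[pred]         # (batch, filters)
    relevance = torch.zeros_like(token_ids, dtype=torch.float)
    k = model.conv.kernel_size[0]
    for b in range(token_ids.size(0)):
        for f in range(contrib.size(1)):
            start = argmax[b, f].item()
            # Spread the contribution over the n-gram the filter matched.
            relevance[b, start:start + k] += contrib[b, f].item() / k
    return pred, relevance

model = TinyTextCNN(vocab_size=100)
tokens = torch.randint(0, 100, (1, 12))
pred, rel = explain(model, tokens)
print(pred.item(), rel)
```

Tokens with high relevance scores could then be highlighted in the input text as the classifier's apparent reasons for its decision, which is the kind of visual feedback the abstract argues for.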