Authors: Winkler, Jonas; Grönberg, Jannis; Vogelsang, Andreas
Editors: Felderer, Michael; Hasselbring, Wilhelm; Rabiser, Rick; Jung, Reiner
Date available: 2020-02-03
Date issued: 2020
ISBN: 978-3-88579-694-7
URI: https://dl.gi.de/handle/20.500.12116/31722

Abstract: An important task in requirements engineering is to identify and determine how to verify a requirement (e.g., by manual review, testing, or simulation; also called the "potential verification method"). This information is required to effectively create test cases and verification plans for requirements. In this paper, we propose an automatic approach to classify natural language requirements with respect to their potential verification methods (PVM). Our approach uses a convolutional neural network architecture to implement a multiclass and multilabel classifier that assigns probabilities to a predefined set of six possible verification methods, which we derived from an industrial guideline. Additionally, we implemented a backtracing approach to analyze and visualize the reasons for the network's decisions. In a 10-fold cross-validation on a set of about 27,000 industrial requirements, our approach achieved a macro-averaged F1 score of 0.79 across all labels. The results show that our approach might help to increase the quality of requirements specifications with respect to the PVM attribute and guide engineers in effectively deriving test cases and verification plans.

Language: en
Keywords: Requirements Engineering; Requirements Validation; Test Engineering; Machine Learning; Natural Language Processing; Neural Networks
Title: Predicting How to Test Requirements: An Automated Approach
Type: Text/Conference Paper
DOI: 10.18420/SE2020_43
ISSN: 1617-5468
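
Note: The abstract describes a convolutional neural network used as a multiclass, multilabel classifier over six verification-method labels. The paper itself does not appear in this record beyond the abstract, so the following is only a minimal illustrative sketch of that architecture class, not the authors' implementation; all names and sizes (RequirementCNN, VOCAB_SIZE, EMBED_DIM, the kernel widths) are assumptions for illustration.

```python
# Minimal sketch of a multilabel text CNN for requirements classification.
# Assumes requirements are tokenized and mapped to integer ids; hyperparameters
# below are hypothetical, not taken from the paper.
import torch
import torch.nn as nn

NUM_LABELS = 6        # six potential verification methods (per the abstract)
VOCAB_SIZE = 10_000   # assumed vocabulary size
EMBED_DIM = 100       # assumed embedding width

class RequirementCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        # Convolutions over n-gram windows of the token sequence.
        self.convs = nn.ModuleList(
            nn.Conv1d(EMBED_DIM, 64, kernel_size=k) for k in (3, 4, 5)
        )
        self.fc = nn.Linear(64 * 3, NUM_LABELS)

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)  # (batch, embed, seq_len)
        # Max-pool each convolution's feature map over the sequence dimension.
        feats = [conv(x).relu().max(dim=2).values for conv in self.convs]
        logits = self.fc(torch.cat(feats, dim=1))
        # Independent sigmoid per label: a requirement may carry several
        # verification methods at once (multilabel, probabilities per label).
        return torch.sigmoid(logits)
```

Such a model would typically be trained with a per-label binary cross-entropy loss; the macro-averaged F1 score reported in the abstract corresponds to computing F1 per label and averaging, e.g. sklearn.metrics.f1_score(y_true, y_pred, average="macro").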