Authors: Haug, Saskia; Rietz, Tim; Mädche, Alexander
Editors: Schneegass, Stefan; Pfleging, Bastian; Kern, Dagmar
Date: 2021-09-03 (2021)
URI: https://dl.gi.de/handle/20.500.12116/37252
Abstract: While qualitative research can produce a rich understanding of people's minds, it requires an essential and strenuous data annotation process known as coding. Coding can be repetitive and time-consuming, particularly for large datasets. Crowdsourcing provides flexible access to workers all around the world; however, researchers remain doubtful about its applicability for coding. In this study, we present an interactive coding system to support crowdsourced deductive coding of semi-structured qualitative data. Through an empirical evaluation on Amazon Mechanical Turk, we assess both the quality and the reliability of crowd support for coding. Our results show that non-expert coders provide reliable results using our system. The crowd reached a substantial agreement of up to 91% with the coding provided by experts. Our results indicate that crowdsourced coding is an applicable strategy for accelerating a strenuous task. Additionally, we present implications of crowdsourcing to reduce biases in the interpretation of qualitative data.
Language: en
Keywords: Crowdsourcing; Coding; Qualitative Data; Empirical Evaluation
Title: Accelerating Deductive Coding of Qualitative Data: An Experimental Study on the Applicability of Crowdsourcing
Type: Text/Conference Paper
DOI: 10.1145/3473856.3473873