Authors: Chamberlain, Jon; Kruschwitz, Udo; Poesio, Massimo
Date available: 2021-06-21
Date issued: 2018
URI: https://dl.gi.de/handle/20.500.12116/36590
Abstract: Crowdsourcing has revolutionised the way tasks can be completed, but the process is frequently inefficient, costing practitioners time and money. This research investigates whether crowdsourcing can be optimised with a validation process, as measured by four criteria: quality, cost, noise, and speed. A validation model is described, simulated, and tested on real data from an online crowdsourcing game for collecting data about human language. Results show that by adding an agreement validation (or like/upvote) step, fewer annotations are required, noise and collection time are reduced, and quality may be improved.
Language: en
Keywords: Crowdsourcing; Empirical studies in interaction design; Interactive games; Social networks; Natural language processing
Title: Optimising crowdsourcing efficiency: Amplifying human computation with validation
Type: Text/Journal Article
DOI: 10.1515/itit-2017-0020
ISSN: 2196-7032