Optimising crowdsourcing efficiency: Amplifying human computation with validation

dc.contributor.author: Chamberlain, Jon
dc.contributor.author: Kruschwitz, Udo
dc.contributor.author: Poesio, Massimo
dc.date.accessioned: 2021-06-21T10:07:06Z
dc.date.available: 2021-06-21T10:07:06Z
dc.date.issued: 2018
dc.description.abstract: Crowdsourcing has revolutionised the way tasks can be completed, but the process is frequently inefficient, costing practitioners time and money. This research investigates whether crowdsourcing can be optimised with a validation process, as measured by four criteria: quality, cost, noise, and speed. A validation model is described, simulated, and tested on real data from an online crowdsourcing game that collects data about human language. Results show that adding an agreement-validation (like/upvote) step reduces the number of annotations required, lowers noise and collection time, and may improve quality. (A toy simulation of such a validation step is sketched below the metadata record.)
dc.identifier.doi: 10.1515/itit-2017-0020
dc.identifier.pissn: 2196-7032
dc.identifier.uri: https://dl.gi.de/handle/20.500.12116/36590
dc.language.iso: en
dc.publisher: De Gruyter
dc.relation.ispartof: it - Information Technology: Vol. 60, No. 1
dc.subject: Crowdsourcing
dc.subject: Empirical studies in interaction design
dc.subject: Interactive games
dc.subject: Social networks
dc.subject: Natural language processing
dc.title: Optimising crowdsourcing efficiency: Amplifying human computation with validation
dc.type: Text/Journal Article
gi.citation.startPage: 41
gi.citation.endPage: 49
gi.citation.publisherPlace: Berlin
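
To make the abstract's claim concrete, below is a minimal Python sketch of an annotate-then-validate pipeline of the general kind described. Every parameter and rule here is an illustrative assumption, not the paper's actual model: binary labels, an assumed annotator accuracy P_CORRECT = 0.8, a baseline of five annotations aggregated by majority vote, and acceptance of a candidate label after two consecutive agreement upvotes.

import random

random.seed(0)

# Hypothetical parameters -- not taken from the paper.
P_CORRECT = 0.8       # probability an annotator labels an item correctly
N_ITEMS = 10_000      # simulated items, each with a binary gold label
N_ANNOTATIONS = 5     # full annotations per item in the baseline model
N_VALIDATIONS = 2     # consecutive upvotes needed to accept a candidate

def annotate(truth):
    """One full annotation: correct with probability P_CORRECT."""
    return truth if random.random() < P_CORRECT else 1 - truth

def plain(truth):
    """Annotation-only aggregation: fixed number of labels, majority vote."""
    labels = [annotate(truth) for _ in range(N_ANNOTATIONS)]
    return int(sum(labels) > N_ANNOTATIONS / 2), N_ANNOTATIONS, 0

def validated(truth):
    """Annotate-then-validate: one annotation, then cheap agree/upvote
    judgements; accept after N_VALIDATIONS agreements, else re-annotate."""
    n_ann = n_val = 0
    while True:
        candidate = annotate(truth)
        n_ann += 1
        agreed = 0
        for _ in range(N_VALIDATIONS):
            n_val += 1
            if annotate(truth) == candidate:  # validator upvotes on a match
                agreed += 1
            else:
                break                          # downvote: discard candidate
        if agreed == N_VALIDATIONS:
            return candidate, n_ann, n_val

for name, model in (("plain", plain), ("validated", validated)):
    correct = anns = vals = 0
    for _ in range(N_ITEMS):
        truth = random.randint(0, 1)
        label, a, v = model(truth)
        correct += label == truth
        anns += a
        vals += v
    print(f"{name:9s}  accuracy={correct / N_ITEMS:.3f}  "
          f"annotations/item={anns / N_ITEMS:.2f}  "
          f"validations/item={vals / N_ITEMS:.2f}")

Under these toy assumptions the validated pipeline reaches higher accuracy with roughly two full annotations per item instead of five, shifting the remaining effort onto quick agree/disagree judgements; that trade-off is the intuition behind the efficiency gains the abstract reports.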

Files