Optimising crowdsourcing efficiency: Amplifying human computation with validation
dc.contributor.author | Chamberlain, Jon
dc.contributor.author | Kruschwitz, Udo
dc.contributor.author | Poesio, Massimo
dc.date.accessioned | 2021-06-21T10:07:06Z
dc.date.available | 2021-06-21T10:07:06Z
dc.date.issued | 2018
dc.description.abstract | Crowdsourcing has revolutionised the way tasks can be completed, but the process is frequently inefficient, costing practitioners time and money. This research investigates whether crowdsourcing can be optimised with a validation process, as measured by four criteria: quality, cost, noise, and speed. A validation model is described, simulated and tested on real data from an online crowdsourcing game for collecting data about human language. Results show that by adding an agreement validation (or like/upvote) step, fewer annotations are required, noise and collection time are reduced, and quality may be improved. | en
dc.identifier.doi | 10.1515/itit-2017-0020
dc.identifier.pissn | 2196-7032
dc.identifier.uri | https://dl.gi.de/handle/20.500.12116/36590
dc.language.iso | en
dc.publisher | De Gruyter
dc.relation.ispartof | it - Information Technology: Vol. 60, No. 1
dc.subject | Crowdsourcing
dc.subject | Empirical studies in interaction design
dc.subject | Interactive games
dc.subject | Social networks
dc.subject | Natural language processing
dc.title | Optimising crowdsourcing efficiency: Amplifying human computation with validation | en |
dc.type | Text/Journal Article
gi.citation.endPage | 49
gi.citation.publisherPlace | Berlin
gi.citation.startPage | 41 |