Listing by keyword "Crowdsourcing"
1-10 of 22
- Conference paper: Accelerating Deductive Coding of Qualitative Data: An Experimental Study on the Applicability of Crowdsourcing (Mensch und Computer 2021 - Tagungsband, 2021). Haug, Saskia; Rietz, Tim; Mädche, Alexander. While qualitative research can produce a rich understanding of people's minds, it requires an essential and strenuous data-annotation process known as coding. Coding can be repetitive and time-consuming, particularly for large datasets. Crowdsourcing provides flexible access to workers all around the world; however, researchers remain doubtful about its applicability for coding. In this study, we present an interactive coding system to support crowdsourced deductive coding of semi-structured qualitative data. Through an empirical evaluation on Amazon Mechanical Turk, we assess both the quality and the reliability of crowd support for coding. Our results show that non-expert coders provide reliable results using our system. The crowd reached a substantial agreement of up to 91% with the coding provided by experts. Our results indicate that crowdsourced coding is an applicable strategy for accelerating a strenuous task. Additionally, we present implications of crowdsourcing to reduce biases in the interpretation of qualitative data. (A sketch of how such coder agreement can be computed follows after this list.)
- Conference paper: Analyse der Geschäftsmodellelemente von Crowdsourcing-Marktplätzen (Workshop Gemeinschaften in Neuen Medien 2011, 2011). Ickler, Henrik; Baumöl, Ulrike.
- Journal article: Crowd Work (Business & Information Systems Engineering: Vol. 58, No. 4, 2016). Durward, David; Blohm, Ivo; Leimeister, Jan Marco.
- Journal article: Crowd-basiertes Requirements-Engineering - Wie neue Technologien das klassische RE erweitern (Softwaretechnik-Trends Band 39, Heft 1, 2019). Rupp, Chris; Schüpferling, Dirk; Schlör, Pascal. Increasing globalization opens up new sales markets for companies around the world, and the market for software and applications is particularly well suited for this. Yet winning these customers is a major challenge, because the competition is enormous. Those with the best chances in the market are the ones who know their customers' requirements best and can implement them. But what can be done when the users of a system are scattered around the world or, worse, when it is not even known who the users actually are?
- Journal article: Crowd-Powered Systems (KI - Künstliche Intelligenz: Vol. 27, No. 1, 2013). Bernstein, Michael S. Crowd-powered systems combine computation with human intelligence, drawn from large groups of people connecting and coordinating online. These hybrid systems enable applications and experiences that neither crowds nor computation could support alone. Unfortunately, crowd work is error-prone and slow, making it difficult to incorporate crowds as first-order building blocks in software. We introduce computational techniques that decompose complex tasks into simpler, verifiable steps to improve quality, and optimize work to return results in seconds. Using these techniques, we prototype a set of interactive crowd-powered systems. The first, Soylent, is a word processor that uses paid micro-contributions to aid writing tasks such as text shortening and proofreading. Using Soylent is like having access to an entire editorial staff as you write. The second system, Adrenaline, is a camera that uses crowds to help amateur photographers capture exactly the right moment for a photo. It finds the best smile and catches subjects in mid-air jumps, all in real time. These systems point to a future where social and crowd intelligence are central elements of interaction, software, and computation. (A sketch of the decompose-and-verify pattern follows after this list.)
- Conference paper: Crowde: Individuelles Lernen durch individualisierte Klausuren (DeLFI 2018 - Die 16. E-Learning Fachtagung Informatik, 2018). Dieterle, Sean; Koschmider, Agnes; Rechenberger, Tristan; Schoder, Detlef. Following the concept of crowdsourcing, learners design tasks and solutions for the lecture material during the semester and rate and correct each other's tasks and solutions. Based on personalized access to the task pool, individually tailored exams can be generated. This teaching approach is being implemented in the tool Crowde. (A sketch of generating such an exam from a rated task pool follows after this list.)
- Workshop paper: Detecting a Crisis: Comparison of Self-Reported vs. Automated Internet Outage Measuring Methods (Mensch und Computer 2022 - Workshopband, 2022). Orlov, Denis; Möller, Simon; Düfer, Sven; Haesler, Steffen; Reuter, Christian. Every day, internet disruptions or outages around the world affect our daily lives. In this paper, we analyze these events in Germany in recent years, determine how they can be detected, and assess what impact they have on citizens, especially in crisis situations. For this purpose, we examine two different approaches to recording internet outages: self-reporting by citizens and automatic reporting through algorithmic examination of the availability of IP networks. We evaluate the data of six major events with regard to their meaningfulness in quality and quantity. We found that, due to the amount of data and the inherent imprecision of the methods used, it is difficult to detect outages through algorithmic examination alone. Once an event is publicly known through self-reporting, however, algorithmic methods have the advantage of capturing the temporal and spatial dimensions of the outage, owing to their objective measurements. As a result, we propose that crowdsourced user reports can enhance the detection of outages and should be seen as an important starting point for any analysis with algorithm-based techniques, but it is up to ISPs and regulatory authorities to support this. (A sketch of a simple report-threshold detector follows after this list.)
- Journal article: Hourly Wages in Crowdworking: A Meta-Analysis (Business & Information Systems Engineering: Vol. 64, No. 5, 2022). Hornuf, Lars; Vrankar, Daniel. In the past decade, crowdworking on online labor market platforms has become an important source of income for a growing number of people worldwide. This development has led to increasing political and scholarly interest in the wages people can earn on such platforms. This study extends the literature, which is often based on a single platform, region, or category of crowdworking, through a meta-analysis of prevalent hourly wages. After a systematic literature search, the paper considers 22 primary empirical studies, including 105 wages and 76,765 data points from 22 platforms, eight different countries, and 10 years. It is found that, on average, microtasks result in an hourly wage of less than $6. This wage is significantly lower than the mean wage of online freelancers, which is roughly three times higher when not factoring in unpaid work. Hourly wages accounting for unpaid work, such as searching for tasks and communicating with requesters, tend to be significantly lower than wages not considering unpaid work. Legislators and researchers evaluating wages in crowdworking need to be aware of this bias when assessing hourly wages, given that the majority of the literature does not account for the effect of unpaid work time on crowdworking wages. To foster the comparability of different research results, the article suggests that scholars consider a wage correction factor to account for unpaid work. Finally, researchers should be aware that remuneration and work processes on crowdworking platforms can systematically affect the data collection method and the inclusion of unpaid work. (A worked example of such a correction follows after this list.)
- Journal article: How (not) to Incent Crowd Workers (Business & Information Systems Engineering: Vol. 57, No. 3, 2015). Straub, Tim; Gimpel, Henner; Teschner, Florian; Weinhardt, Christof. Crowdsourcing gains momentum: in digital workplaces such as Amazon Mechanical Turk, oDesk, Clickworker, 99designs, or InnoCentive, it is easy to distribute human work to hundreds or thousands of freelancers. In these crowdsourcing settings, one challenge is to properly incent worker effort to create value. Common incentive schemes are piece-rate payments and rank-order tournaments among workers. Tournaments may or may not disclose a worker's current competitive position via a leaderboard. Following an exploratory approach, we derive a model of worker performance in rank-order tournaments and present a series of real-effort studies using experimental techniques on an online labor market to test the model and to compare dyadic tournaments to piece-rate payments. The data suggest that, on average, dyadic tournaments do not improve performance compared to a simple piece rate for simple and short crowdsourcing tasks. Furthermore, giving feedback on the competitive position in such tournaments tends to be negatively related to workers' performance. This relation is partially mediated by task completion and moderated by the provision of feedback: when playing against strong competitors, feedback is associated with workers quitting the task altogether and thus showing lower performance. When the competitors are weak, workers tend to complete the task but with reduced effort. Overall, individual piece-rate payments are the simplest to communicate and implement, while their incentive effect on performance is on par with more complex dyadic tournaments. (A sketch contrasting the two payment schemes follows after this list.)
- Conference paper: IA von Websites: asynchrone Remote-Tests und Laborstudien im Vergleich (Mensch & Computer 2012: interaktiv informiert - allgegenwärtig und allumfassend!?, 2012). Meier, Florian; Wolff, Christian. This paper shows how crowdsourcing methods can be used for asynchronous remote usability testing. The concrete scenario is the evaluation of the information architecture (IA) of websites by means of the tree test and the navigation stress test. For both methods, crowdsourcing-compatible online procedures were developed or adapted. In a comparative study, identical tasks were tested both via crowdsourcing platforms and in a laboratory setting. This empirical study shows that comparable results can be achieved, while the crowdsourcing-based evaluation recruits a sufficient number of test participants with comparatively little effort. In this sense, the paper can also be understood in terms of the network as the extension of the usability laboratory. (A sketch of scoring a tree test follows after this list.)
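The agreement figure in the Haug et al. entry above is an inter-coder agreement between crowd and expert codes. A minimal sketch of how such a value can be computed, assuming each item gets a majority vote over several crowd workers' codes plus a single expert gold code; the labels and votes below are invented:

```python
from collections import Counter

def majority_code(votes):
    """Majority vote over the codes assigned by several crowd workers."""
    return Counter(votes).most_common(1)[0][0]

def percent_agreement(crowd_codes, expert_codes):
    """Share of items where the crowd's majority code matches the expert code."""
    matches = sum(c == e for c, e in zip(crowd_codes, expert_codes))
    return matches / len(expert_codes)

# Hypothetical example: three workers code each answer, an expert provides gold codes.
crowd_votes = [["usability", "usability", "price"], ["price", "price", "price"]]
expert = ["usability", "price"]
crowd = [majority_code(v) for v in crowd_votes]
print(percent_agreement(crowd, expert))  # 1.0
```

Percent agreement is the simplest such measure; chance-corrected statistics such as Cohen's kappa are often reported alongside it.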
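The Bernstein entry above describes decomposing complex tasks into simpler, verifiable steps; in Soylent this takes the form of the Find-Fix-Verify pattern. A schematic sketch of that control flow, with ask_crowd() as a hypothetical stand-in for posting a microtask to a platform such as Amazon Mechanical Turk and collecting answers:

```python
def ask_crowd(question, options=None, n_workers=3):
    """Hypothetical stub: post a microtask and collect n_workers answers."""
    raise NotImplementedError("wire this up to a crowdsourcing platform")

def find_fix_verify(paragraph):
    # Find: independent workers mark spans that need shortening or proofreading.
    spans = ask_crowd(f"Mark problematic passages in: {paragraph}")
    results = []
    for span in spans:
        # Fix: a separate set of workers proposes rewrites for each span.
        candidates = ask_crowd(f"Rewrite this passage: {span}")
        # Verify: a third set of workers votes on the candidates,
        # filtering out lazy or erroneous fixes.
        best = ask_crowd("Pick the best rewrite", options=candidates)
        results.append((span, best))
    return results
```

Splitting the work across independent groups is what makes each step small and verifiable, which is the quality mechanism the abstract refers to.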
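The Crowde entry above generates individualized exams from a peer-rated task pool. A loose sketch of one way such a generator could work; the pool structure, rating threshold, and field names are all assumptions, not taken from the paper:

```python
import random

def generate_exam(pool, student_id, n_questions=10, min_rating=3.5, seed=None):
    """Draw an individualized exam from a peer-rated question pool.

    pool: list of dicts with keys 'id', 'author', 'rating', 'topic' (invented schema).
    Questions authored by the student are excluded; only well-rated
    questions are eligible. The threshold is an invented default.
    """
    eligible = [q for q in pool
                if q["author"] != student_id and q["rating"] >= min_rating]
    rng = random.Random(seed)
    return rng.sample(eligible, min(n_questions, len(eligible)))

pool = [
    {"id": 1, "author": "alice", "rating": 4.2, "topic": "BPMN"},
    {"id": 2, "author": "bob",   "rating": 2.9, "topic": "Petri nets"},
    {"id": 3, "author": "carol", "rating": 4.8, "topic": "BPMN"},
]
print(generate_exam(pool, student_id="alice", n_questions=2, seed=42))  # [question 3]
```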
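The Orlov et al. entry above proposes crowdsourced reports as a starting point for outage detection. A minimal sketch of a report-threshold detector over time windows; the window size, baseline, and spike factor are invented parameters, not values from the paper:

```python
def detect_outage(report_counts, baseline, factor=5.0):
    """Flag time windows whose self-report volume spikes above baseline.

    report_counts: user reports per time window (e.g., per 15 minutes);
    baseline: typical report volume per window.
    """
    return [i for i, n in enumerate(report_counts) if n > factor * baseline]

# Hypothetical counts: the spike in windows 3-4 suggests a publicly noticed outage,
# which could then trigger the finer-grained algorithmic measurements.
print(detect_outage([2, 3, 1, 40, 55, 4], baseline=3))  # [3, 4]
```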
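The Hornuf and Vrankar entry above recommends correcting hourly wages for unpaid work. One plausible reading of that correction is simply to include unpaid time in the denominator, as in this sketch; the dollar amounts are invented:

```python
def hourly_wage(pay_usd, paid_hours, unpaid_hours=0.0):
    """Effective hourly wage, optionally including unpaid work such as
    searching for tasks and communicating with requesters."""
    return pay_usd / (paid_hours + unpaid_hours)

# Hypothetical numbers: $6.00 earned in 1h of paid microtasks looks like $6/h,
# but 30 minutes of unpaid searching lowers the effective wage to $4/h.
print(hourly_wage(6.00, 1.0))       # 6.0
print(hourly_wage(6.00, 1.0, 0.5))  # 4.0
```

This is exactly the bias the meta-analysis warns about: the same earnings yield systematically lower hourly wages once unpaid time is counted.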
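The Straub et al. entry above compares piece-rate payments with dyadic rank-order tournaments. A toy sketch of the two payout rules; the rate and prize values are invented, and the paper's actual experimental parameters may differ:

```python
def piece_rate_payout(units, rate=0.05):
    """Pay a fixed amount per completed unit of work."""
    return units * rate

def dyadic_tournament_payout(own_units, opponent_units, prize=1.00):
    """Winner of the two-worker tournament takes the prize; a tie splits it."""
    if own_units > opponent_units:
        return prize
    if own_units == opponent_units:
        return prize / 2
    return 0.0

print(piece_rate_payout(10))            # 0.5
print(dyadic_tournament_payout(10, 9))  # 1.0
```

The study's finding is that, for simple short tasks, the added competitive structure of the second scheme does not buy extra performance over the first.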
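The Meier and Wolff entry above relies on the tree test, which is typically scored by how many participants reach the target node and how directly they get there. A sketch of such scoring under assumed field names; the trial data are invented:

```python
def tree_test_scores(trials):
    """Aggregate tree-test trials into success and directness rates.

    trials: list of dicts with 'reached_target' (bool) and 'path' /
    'optimal_path' (lists of node names); the schema is an assumption.
    'Direct' means the target was reached without detours.
    """
    n = len(trials)
    success = sum(t["reached_target"] for t in trials) / n
    direct = sum(t["reached_target"] and t["path"] == t["optimal_path"]
                 for t in trials) / n
    return success, direct

trials = [
    {"reached_target": True,  "path": ["Home", "Products"],         "optimal_path": ["Home", "Products"]},
    {"reached_target": True,  "path": ["Home", "News", "Products"], "optimal_path": ["Home", "Products"]},
    {"reached_target": False, "path": ["Home", "About"],            "optimal_path": ["Home", "Products"]},
]
print(tree_test_scores(trials))  # (0.666..., 0.333...)
```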