Listing by author "Rossner, Alexander"
- Conference paper: "Do Users Really Care? Evaluating the User Perception of Disclosing AI-Generated Content on Credibility in (Sports) Journalism" (Proceedings of Mensch und Computer 2024, 2024). Rossner, Alexander; Cassel, Marie; Huschens, Martin.
  AI-generated journalism (robot journalism) enables the automated creation of news articles through Artificial Intelligence (AI). In sports reporting especially, robot journalism enables providers to publish standardized match reports quickly after sporting events (e.g., soccer games). This study examines the influence of disclosing the type of origin (human or AI) on the perceived credibility of sports reporting. For this purpose, a quantitative online survey was conducted with 154 participants, in which two match reports about the same soccer game were compared: one was written by a journalist, while the other was AI-generated. The participants were divided into three groups with varying disclosures of the type of origin (no disclosure, correct disclosure, manipulated disclosure). The analysis showed that the origin disclosures had no significant influence on credibility; both expertise and trustworthiness were rated similarly. Since readers are indifferent to the source of information, this suggests that the use of AI in sports reporting can usefully increase efficiency. In a wider sense, however, this indifference poses challenges for policymakers trying to contain the spread of AI-based misinformation and fake news.
- Workshop paper: "User-Centered Evaluation of Machine Learning vs. Human Decisions – Identifying Emotional Highlights in Reality TV Formats" (Mensch und Computer 2023 - Workshopband, 2023). Rossner, Alexander; Pagel, Sven; Dörner, Ralf.
  This research paper examines a user-centered approach to evaluating artificial intelligence (AI) systems in the context of identifying emotional highlight scenes in reality TV formats. The study investigates the accuracy and reliability of AI compared to humans in identifying these highlights and explores viewers' ability to distinguish human from AI-assisted decisions. Internal user tests with media company employees (enterprise users) demonstrate that the AI algorithm developed in the AI4MediaData research project achieves a high level of accuracy, closely aligning with human editors' assessments. External user tests with viewers (consumers) reveal that participants are unable to distinguish whether highlight clips were identified by humans or by an AI. These findings emphasize the importance of user-centered evaluations that go beyond algorithm-centered evaluations to ensure useful AI-based systems. The research contributes to the advancement of Human-Centered Artificial Intelligence (HCAI) by considering both cognitive and emotional elements in AI-assisted decision-making.