Listing by keyword "explainability"
1 - 3 of 3
- Conference paper: Configurations of human-AI work in agriculture (43. GIL-Jahrestagung, Resiliente Agri-Food-Systeme, 2023). Hüllmann, Joschka Andreas; Precht, Hauke; Wübbe, Carolin.
  Agriculture is making leaps in digitalization and the development of artificial intelligence (AI) systems, e.g., decision support systems, sensors, or autonomous vehicles. However, adoption and widespread use of these technologies remain below expectations, with negative consequences for digitally advancing the agricultural industry. Therefore, this study investigates the configurations of human-AI work, in particular human-AI decision-making. Configurations describe the interactions between workers and intelligent systems, emphasizing the adoption and use of technologies in situ. The study targets agricultural farms in Germany, collecting qualitative data at small and medium-sized businesses. From this data, the paper examines how configurations of human-AI work emerge and how explanations influence these configurations in the context of agricultural work. Theoretical contributions include a new understanding of how agricultural workers adopt and work with AI to make decisions. Practical contributions include more accessible AI systems, easing transfer into practice, and improving agricultural workers’ interactions with AI.
- Conference paper: The Effect of Explanations on Trust in an Assistance System for Public Transport Users and the Role of the Propensity to Trust (Mensch und Computer 2021 - Tagungsband, 2021). Faulhaber, Anja K.; Ni, Ina; Schmidt, Ludger.
  The present study aimed to investigate whether explanations increase trust in an assistance system. Moreover, we wanted to take into account the role of the individual propensity to trust in technology. We conducted an empirical study in a virtual reality environment in which 40 participants interacted with a specific assistance system for public transport users. The study used a 2x2 mixed design with the within-subject factor assistance system feature (trip planner and connection request) and the between-subject factor explanation (with or without). We measured explicit trust via a questionnaire and implicit trust via an operationalization of the participants’ behavior. The results showed that trust propensity predicted explicit trust and that explanations increased explicit trust significantly. This was not the case for implicit trust, though, suggesting that explicit and implicit trust do not necessarily coincide. In conclusion, our results complement the literature on explainable artificial intelligence and trust in automation and provide topics for future research regarding the effect of explanations on trust in assistance systems or other technologies.
- Conference paper: Exploring explainability formats to aid decision-making in dairy farming systems (44. GIL-Jahrestagung, Biodiversität fördern durch digitale Landwirtschaft, 2024). Girmay, Mengisti Berihu; Möhrle, Felix.
  In this paper, we examine how different approaches to explaining decision support in herd management systems affect comprehensibility and trust. To this end, we present a hypothetical system for assessing the risk of mastitis, a common infectious disease in dairy cattle. For this system, we design four explanation formats for presenting risk assessments to farmers. We collect farmers’ feedback in a survey to derive suggestions for designing systems that are well accepted. We could not identify one explanation format that is preferable to all others. Rather, we found that herd management systems should ideally support multiple explanation formats and allow switching between them depending on the situation.