Listing by keyword "anthropomorphism"
1 - 2 of 2
- Conference paper: An Anthropomorphic Approach to Establish an Additional Layer of Trustworthiness of an AI Pilot (Software Engineering 2022 Workshops, 2022) Regli, Christoph; Annighoefer, Björn
  AI algorithms promise solutions for situations where conventional, rule-based algorithms reach their limits. They perform well on complex problems still unknown at design time, and highly efficient functions can be implemented without having to develop a precise algorithm for the problem at hand. Well-tried applications show AI's ability to learn from new data, extrapolate to unseen data, and adapt to a changing environment, a situation encountered in flight operations. In aviation, however, certification regulations impede the implementation of non-deterministic or probabilistic algorithms that adapt their behaviour with increasing experience. Regulatory initiatives aim at defining new development standards in a bottom-up approach, in which the suitability and integrity of the training data are addressed during the development process, thereby increasing trustworthiness. Methods to establish explainability and traceability of decisions made by AI algorithms are still under development, with the aim of reaching the required level of trustworthiness. This paper outlines an approach to independent, anthropomorphic software assurance for AI/ML systems as an additional layer of trustworthiness, encompassing top-down black-box testing while relying on a well-established regulatory framework.
- Conference paper: More human-likeness, more trust? The effect of anthropomorphism on self-reported and behavioral trust in continued and interdependent human-agent cooperation (Mensch und Computer 2019 - Tagungsband, 2019) Kulms, Philipp; Kopp, Stefan
  Computer agents are increasingly endowed with anthropomorphic characteristics and autonomous behavior to improve their problem-solving capabilities and to make interactions with humans more natural. This poses new challenges for human users, who need to make trust-based decisions in dynamic and complex environments. It remains unclear whether people trust agents as they trust other humans and thus apply the same social rules to human-computer interaction (HCI), or whether interactions with computers are characterized by idiosyncratic attributions and responses. We contribute to this ongoing and crucial debate with an experiment on the impact of anthropomorphic cues on trust and trust-related attributions in a cooperative human-agent setting, which permits the investigation of interdependent, continued, and coordinated decision-making toward a joint goal. Our results reveal an incongruence between self-reported and behavioral trust measures. First, the varying degree of agent anthropomorphism (computer vs. virtual vs. human agent) did not affect people's decision to behaviorally trust the agent by adopting task-specific advice; behavioral trust was affected by advice quality only. Second, subjective ratings indicate that anthropomorphism did increase self-reported trust.