Listing by author "Alizadeh, Fatemeh"
1 - 6 of 6
- Conference paper: A Consumer Perspective on Privacy Risk Awareness of Connected Car Data Use (Mensch und Computer 2021 - Tagungsband, 2021) Jakobi, Timo; Alizadeh, Fatemeh; Marburger, Martin; Stevens, Gunnar. New cars are increasingly "connected" by default. Since not having a car is not an option for many people, understanding the privacy implications of driving connected cars and using their data-based services is an even more pressing issue than for expendable consumer products. While risk-based approaches to privacy are well established in law, they have only begun to gain traction in HCI. These approaches are understood not only to increase acceptance but also to help consumers make choices that meet their needs. To the best of our knowledge, perceived risks in the context of connected cars have not been studied before. To address this gap, our study reports on the analysis of a survey with 18 open-ended questions distributed to 1,000 households in a medium-sized German city. Our findings provide qualitative insights into existing attitudes and use cases of connected car features and, most importantly, a list of the perceived risks themselves. Taking the perspective of consumers, we argue that these risks can help inform consumers about data use in connected cars in a user-friendly way. Finally, we show how these risks fit into and extend existing risk taxonomies from other contexts with a stronger social perspective on the risks of data use.
- Workshop paper: eXplainable AI: Take one Step Back, Move two Steps forward (Mensch und Computer 2020 - Workshopband, 2020) Alizadeh, Fatemeh; Esau, Margarita; Stevens, Gunnar; Cassens, Lena. In 1991, researchers at the Center for the Learning Sciences of Carnegie Mellon University were confronted with the confusing question "where is AI?" from users who were interacting with AI but did not realize it. Three decades of research later, we are still facing the same issue with users of AI technology. In the absence of users' awareness of AI-enabled systems and of mutual understanding between designers and users, users' informal theories about how a system works ("folk theories") become inevitable but can lead to misconceptions and ineffective interactions. To shape appropriate mental models of AI-based systems, AI practitioners have suggested explainable AI. However, a profound understanding of users' current perception of AI is still missing. In this study, we introduce the term "Perceived AI" (PAI) as "AI defined from the perspective of its users". We then present our preliminary results from in-depth interviews with 50 AI technology users, which provide a framework for our future research approach towards a better understanding of PAI and users' folk theories.
- Conference paper: GDPR Reality Check on the Right to Access Data (Mensch und Computer 2019 - Tagungsband, 2019) Alizadeh, Fatemeh; Jakobi, Timo; Boldt, Jens; Stevens, Gunnar. Loyalty programs are early examples of companies commercially collecting and processing personal data. Today, more than ever before, personal information is being used by companies of all types for a wide variety of purposes. To limit this, the General Data Protection Regulation (GDPR) aims to provide consumers with tools to control data collection and processing. What this right means in concrete terms, and which types of tools companies have to provide to their customers and in which way, is currently uncertain because precedents from case law are missing. Contributing to closing this gap, we turn to the example of loyalty cards to supplement current implementations of the right to claim data with a user perspective. In our hands-on approach, we had 13 households request their personal data from their respective loyalty programs. We investigate expectations of the GDPR in general and the right to access in particular, observe the process of claiming and receiving data, and discuss the provided data takeouts. One year after the GDPR came into force, our findings highlight consumers' expectations and knowledge of the GDPR, and in particular the right to access, to inform the design of more usable privacy-enhancing technologies.
- Journal article: I Don't Know, Is AI Also Used in Airbags? - An Empirical Study of Folk Concepts and People's Expectations of Current and Future Artificial Intelligence (i-com: Vol. 20, No. 1, 2021) Alizadeh, Fatemeh; Stevens, Gunnar; Esau, Margarita. In 1991, researchers at the Center for the Learning Sciences of Carnegie Mellon University were confronted with the confusing question "where is AI?" from users who were interacting with artificial intelligence (AI) but did not realize it. After three decades of research, we are still facing the same issue: people's understanding of AI remains unclear. The lack of mutual understanding and expectations between AI users and designers, and the ineffective interactions with AI that result, raise the question of how AI is generally perceived today. To address this gap, we conducted 50 semi-structured interviews on perceptions and expectations of AI. Our results revealed that for most people, AI is a dazzling concept that ranges from a simple automated device up to a fully controlling agent and a self-learning superpower. We explain how these folk concepts shape users' expectations when interacting with AI and envisioning its current and future state.
- Workshop paper: School Field Trips and Children's Safety: A Teacher Assistant System for School Field Trips (Mensch und Computer 2019 - Workshopband, 2019) Alizadeh, Fatemeh; Amirkhani, Sima; Schmittel, Elias. Worldwide, school field trips have been an important part of the educational curriculum for students of all ages. They are ideally a reinforcement and an extension of the taught curriculum and give students a more direct, real-world experience of the theoretical lessons. But for all their benefits, they also involve a huge amount of work and stress for the teachers, parents, and schools involved [3]. The trade-off between prioritizing student safety and the quality of school field trips, together with the inability of current communication devices to enhance children's security during trips, led us to the idea of a practical, safety-oriented communication system between teachers and students for field trips, particularly in critical situations. Following a user-centered design process [18], several methods were applied to specify teachers' requirements and expectations of an assistant system. Because of the sensitivity of children's data, the GDPR [7] was taken into consideration, and to ensure the usability of the system, a three-stage iterative design and evaluation process was applied. The iterative process included a scenario-based design [21], followed by a parallel design and H-form evaluation [9], and a participatory design (PD) workshop [16] combined with a feedback and discussion session, which led to the final design of the system. "SafeTrip" helps teachers maintain better oversight of students without restricting their free movement. It also reduces the risk of putting children in danger, as well as the teacher's anxiety, by decreasing the amount of preparation needed.
- Workshop paper: User-friendly Explanatory Dialogues (Mensch und Computer 2023 - Workshopband, 2023) Alizadeh, Fatemeh; Pins, Dominik; Stevens, Gunnar. When dialogues with voice assistants (VAs) fall apart, users often become confused or even frustrated. To address these issues and related privacy concerns, Amazon recently introduced a feature allowing Alexa users to ask why it behaved in a certain way. But how do users perceive this new feature? In this paper, we present preliminary results from research conducted as part of a three-year project involving 33 German households. The project used interviews, fieldwork, and co-design workshops to identify common unexpected behaviors of VAs, as well as users' needs and expectations for explanations. Our findings show that, contrary to its intended purpose, the new feature actually exacerbates user confusion and frustration instead of clarifying Alexa's behavior. We argue that such voice interactions should be characterized as explanatory dialogues that account for the VA's unexpected behavior by providing interpretable information and prompting users to take action to improve their current and future interactions.