BISE 63(1) - February 2021

Sorted by: Newest publications

1 - 7 of 7
  • Journal Article
    Understanding Collaboration with Virtual Assistants – The Role of Social Identity and the Extended Self
    (Business & Information Systems Engineering: Vol. 63, No. 1, 2021) Mirbabaie, Milad; Stieglitz, Stefan; Brünker, Felix; Hofeditz, Lennart; Ross, Björn; Frick, Nicholas R. J.
    Organizations introduce virtual assistants (VAs) to support employees with work-related tasks. VAs can increase the success of teamwork and thus become an integral part of daily work life. However, the effect of VAs on virtual teams remains unclear. While social identity theory describes the identification of employees with team members and the continued existence of a group identity, the concept of the extended self refers to the incorporation of possessions into one’s sense of self. This raises the question of which approach applies to VAs as teammates. The article extends the IS literature by examining the impact of VAs on individuals and teams and updates knowledge on social identity and the extended self by deploying VAs in a collaborative setting. In a laboratory experiment with N = 50, two groups were compared in solving a task: one group was assisted by a VA, while the other was supported by a person. Results highlight that employees who identify VAs as part of their extended self are more likely to identify with team members, and vice versa. The two aspects are thus combined into the proposed construct of virtually extended identification, which explains the relationships underlying collaboration with VAs. This study contributes to the understanding of the influence of the extended self and social identity on collaboration with VAs. Practitioners are able to assess how VAs improve collaboration and teamwork in mixed teams in organizations.
  • Journal Article
    City 5.0
    (Business & Information Systems Engineering: Vol. 63, No. 1, 2021) Rosemann, Michael; Becker, Jörg; Chasin, Friedrich
  • Journal Article
    Ready or Not, AI Comes—An Interview Study of Organizational AI Readiness Factors
    (Business & Information Systems Engineering: Vol. 63, No. 1, 2021) Jöhnk, Jan; Weißert, Malte; Wyrtki, Katrin
    Artificial intelligence (AI) offers organizations considerable potential. Considering the manifold application areas, AI’s inherent complexity, and new organizational necessities, companies encounter pitfalls when adopting AI. An informed decision regarding an organization’s readiness increases the probability of successful AI adoption and is important for leveraging AI’s business value. Thus, companies need to assess whether their assets, capabilities, and commitment are ready for the specific AI adoption purpose. Research on AI readiness and AI adoption is still in its infancy. Consequently, researchers and practitioners lack guidance on the adoption of AI. The paper presents five categories of AI readiness factors together with illustrative, actionable indicators. The AI readiness factors are derived from an in-depth interview study with 25 AI experts and triangulated with both scientific and practitioner literature. Thus, the paper provides a sound set of organizational AI readiness factors, derives corresponding indicators for AI readiness assessments, and discusses the general implications for AI adoption. This is a first step toward conceptualizing relevant organizational AI readiness factors and guiding purposeful decisions throughout the AI adoption process for both research and practice.
  • Journal Article
    Highly Accurate, But Still Discriminatory
    (Business & Information Systems Engineering: Vol. 63, No. 1, 2021) Köchling, Alina; Riazy, Shirin; Wehner, Marius Claus; Simbeck, Katharina
    The study aims to identify whether algorithmic decision making leads to unfair (i.e., unequal) treatment of certain protected groups in the recruitment context. Firms increasingly implement algorithmic decision making to save costs and increase efficiency. Moreover, algorithmic decision making is often considered fairer than human decisions, which are prone to social prejudices. Recent publications, however, imply that the fairness of algorithmic decision making is not necessarily given. Therefore, to investigate this further, highly accurate algorithms were used to analyze a pre-existing data set of 10,000 video clips of individuals in self-presentation settings. The analysis shows that the under-representation of certain genders and ethnicities in the training data set leads to an unpredictable over- or underestimation of the likelihood of inviting members of these groups to a job interview. Furthermore, algorithms replicate the existing inequalities in the data set. Firms have to be careful when implementing algorithmic video analysis during recruitment, as biases occur if the underlying training data set is unbalanced. (A minimal, illustrative per-group selection-rate check is sketched after this list.)
  • Journal Article
    AI-Based Information Systems
    (Business & Information Systems Engineering: Vol. 63, No. 1, 2021) Buxmann, Peter; Hess, Thomas; Thatcher, Jason Bennett
  • Journal Article
    Watch Me Improve—Algorithm Aversion and Demonstrating the Ability to Learn
    (Business & Information Systems Engineering: Vol. 63, No. 1, 2021) Berger, Benedikt; Adam, Martin; Rühr, Alexander; Benlian, Alexander
    Owing to advancements in artificial intelligence (AI) and specifically in machine learning, information technology (IT) systems can support humans in an increasing number of tasks. Yet, previous research indicates that people often prefer human support to support by an IT system, even if the latter provides superior performance – a phenomenon called algorithm aversion. A possible cause of algorithm aversion put forward in the literature is that users lose trust in IT systems they have become familiar with and perceive to err, for example by making forecasts that turn out to deviate from the actual value. Therefore, this paper evaluates the effectiveness of demonstrating an AI-based system’s ability to learn as a potential countermeasure against algorithm aversion in an incentive-compatible online experiment. The experiment examines how the nature of an erring advisor (i.e., human vs. algorithmic), its familiarity to the user (i.e., unfamiliar vs. familiar), and its ability to learn (i.e., non-learning vs. learning) influence a decision maker’s reliance on the advisor’s judgement for an objective and non-personal decision task. The results reveal no difference in reliance on unfamiliar human and algorithmic advisors, but differences in reliance on familiar human and algorithmic advisors that err. Demonstrating an advisor’s ability to learn, however, offsets the effect of familiarity. This study therefore contributes to an enhanced understanding of algorithm aversion and is one of the first to examine how users perceive whether an IT system is able to learn. The findings provide theoretical and practical implications for the employment and design of AI-based systems. (A minimal sketch of one common reliance measure is included after this list.)
  • Journal Article
    Interview with Karl-Heinz Streibich on "Artificial Intelligence"
    (Business & Information Systems Engineering: Vol. 63, No. 1, 2021) Buxmann, Peter
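
The disparities described in the abstract of "Highly Accurate, But Still Discriminatory" concern unequal invitation rates for under-represented groups. As a purely illustrative aside, the sketch below shows one simple way such a disparity can be surfaced: comparing per-group selection rates (a demographic-parity check). The data, field names, and choice of metric are assumptions made for illustration and are not taken from the paper.

```python
# Illustrative only: a minimal group-fairness audit of binary "invite to interview"
# decisions, in the spirit of the disparities described in the abstract above.
# All data and field names are hypothetical; the paper's data set and measures
# are not reproduced here.
from collections import defaultdict

def selection_rates(records, group_key="gender", decision_key="invited"):
    """Return the share of positive decisions (invitations) per group."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += int(r[decision_key])
    return {group: positives[group] / totals[group] for group in totals}

def demographic_parity_gap(rates):
    """Largest difference in selection rates between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

if __name__ == "__main__":
    # Hypothetical model decisions for a handful of applicants.
    predictions = [
        {"gender": "female", "invited": 1},
        {"gender": "female", "invited": 0},
        {"gender": "female", "invited": 0},
        {"gender": "male", "invited": 1},
        {"gender": "male", "invited": 1},
        {"gender": "male", "invited": 0},
    ]
    rates = selection_rates(predictions)
    print(rates)                          # e.g. {'female': 0.33..., 'male': 0.66...}
    print(demographic_parity_gap(rates))  # a large gap signals unequal treatment
```

In practice such a check would be run on the model's decisions for a real applicant pool and complemented by further fairness metrics; it only illustrates the kind of group-level disparity the study reports.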
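The abstract of "Watch Me Improve" describes measuring a decision maker's reliance on an (erring) advisor's judgement. The abstract does not state which measure the authors used; one common reliance measure in judge–advisor experiments is the weight of advice, sketched below with hypothetical numbers as an assumption rather than as the authors' method.

```python
# Illustrative only: "weight of advice" (WOA) is a common reliance measure in
# judge-advisor experiments. The abstract does not specify the paper's measure,
# so this sketch is an assumption, not the authors' method.

def weight_of_advice(initial_estimate: float, advice: float, final_estimate: float) -> float:
    """WOA = (final - initial) / (advice - initial); 0 = advice ignored, 1 = fully adopted."""
    if advice == initial_estimate:
        raise ValueError("WOA is undefined when the advice equals the initial estimate")
    return (final_estimate - initial_estimate) / (advice - initial_estimate)

if __name__ == "__main__":
    # Hypothetical forecasting task: a participant first estimates 100,
    # an (erring) algorithmic advisor suggests 140, and the participant revises to 120.
    print(weight_of_advice(100, 140, 120))  # 0.5 -> the advice was weighted halfway
```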