
Augmented Intelligence, Augmented Responsibility?

Journal Article

Intelligence Augmentation Systems (IAS) enable more efficient and effective corporate processes through explicit collaboration between artificial intelligence and human judgment. However, the higher degree of system autonomy, along with the enrichment of human capabilities, amplifies pre-existing issues in the distribution of moral responsibility: if an IAS has caused harm, firms that operated the system might argue that they lacked control over its actions, whereas firms that developed the system might argue that they lacked control over its actual use. When both parties reject responsibility and attribute it to the autonomous nature of the system, a variety of technologically induced responsibility gaps arises. Given the wide-ranging capabilities and applications of IAS, such responsibility gaps warrant grounding in an ethical theory, not least because a clear distribution of moral responsibility is an essential first step toward governing explicit morality in a firm through structures such as accountability mechanisms. This paper first details the necessary conditions for the distribution of responsibility for IAS. Second, it develops an ethical theory of Reason-Responsiveness for Intelligence Augmentation Systems (RRIAS) that allows responsibility to be distributed at the organizational level between operators and providers. RRIAS provides important guidance for firms in understanding who should be held responsible for developing suitable corporate practices for the development and usage of IAS.


Lüthi, Nick; Matt, Christian; Myrach, Thomas; Junglas, Iris (2023): Augmented Intelligence, Augmented Responsibility? Business & Information Systems Engineering, Vol. 65, No. 4. Springer. DOI: 10.1007/s12599-023-00789-9. ISSN: 1867-0202.