Title: Understanding Perceptual Bias in Machine Vision Systems
Authors: Offert, Fabian; Bell, Peter
Editors: Reussner, Ralf H.; Koziolek, Anne; Heinrich, Robert
Date available: 2021-01-27
Date issued: 2021
ISBN: 978-3-88579-701-2
ISSN: 1617-5468
DOI: 10.18420/inf2020_121
URI: https://dl.gi.de/handle/20.500.12116/34711
Language: en
Keywords: machine learning; visual analytics; computer vision; bias; interpretability; digital art history

Abstract: Machine vision systems based on deep convolutional neural networks are increasingly utilized in digital humanities projects, particularly in the context of art-historical and audiovisual data. As research has shown, such systems are highly susceptible to bias. We propose that this is not only due to their reliance on biased datasets but also because their perceptual topology, their specific way of representing the visual world, gives rise to a new class of bias that we call perceptual bias. Perceptual bias, we argue, affects almost all currently available "off-the-shelf" machine vision systems, and is thus especially relevant for digital humanities applications, which often rely on such systems for hypothesis building. We evaluate the nature and scope of perceptual bias by means of a close reading of a visual analytics technique called "feature visualization" and propose to understand the development of critical visual analytics techniques as an important (digital) humanities challenge, situated at the interface of computer science and visual studies.