Conference Paper

Compressed Models Decompress Race Biases: What Quantized Models Forget for Fair Face Recognition



Document Type

Text/Conference Paper

Additional Information

Date

2023


Publisher

Gesellschaft für Informatik e.V.

Abstract

With the ever-growing complexity of deep learning models for face recognition, deploying these systems in real-world settings becomes difficult. Researchers have two options: 1) use smaller models; 2) compress their current models. Since smaller models may exhibit concerning biases, compression gains relevance. However, compression itself may also increase the bias of the final model. We investigate the overall performance, the performance on each ethnicity subgroup, and the racial bias of a state-of-the-art quantization approach when used with synthetic and real data. This analysis sheds further light on the potential benefits of performing quantization with synthetic data, for instance, the reduction of bias in the majority of test scenarios. We tested five distinct architectures and three different training datasets. The models were evaluated on a fourth dataset, which was collected to infer and compare the performance of face recognition models across ethnicities.
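The paper's specific state-of-the-art quantization approach is not reproduced here, but the core idea behind weight quantization that the abstract refers to can be sketched as a toy 8-bit affine mapping. All function names and the sample weights below are illustrative assumptions, not the authors' method:

```python
# Toy sketch of 8-bit affine (asymmetric) quantization of model weights.
# Illustrative only -- the paper evaluates a state-of-the-art quantization
# approach, not this minimal routine.

def quantize_int8(weights):
    """Map float weights to int8 levels via a scale and zero-point."""
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / 255.0 or 1.0  # guard against constant weights
    zero_point = round(-w_min / scale) - 128
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the int8 representation."""
    return [(qi - zero_point) * scale for qi in q]

# Hypothetical weights for demonstration.
weights = [-0.51, -0.2, 0.0, 0.13, 0.49]
q, scale, zp = quantize_int8(weights)
recovered = dequantize(q, scale, zp)
max_err = max(abs(w, ) if False else abs(w - r) for w, r in zip(weights, recovered))
```

The rounding error introduced by this mapping (bounded by about half the scale per weight) is the information the compressed model "forgets"; the paper studies whether that loss falls unevenly across ethnicity subgroups.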

Description

Pedro C. Neto, Eduarda Caldeira (2023): Compressed Models Decompress Race Biases: What Quantized Models Forget for Fair Face Recognition. BIOSIG 2023. Gesellschaft für Informatik e.V. ISSN: 1617-5468. ISBN: 978-3-88579-733-3
