Using FALCES against bias in automated decisions by integrating fairness in dynamic model ensembles

dc.contributor.author: Lässig, Nico
dc.contributor.author: Oppold, Sarah
dc.contributor.author: Herschel, Melanie
dc.contributor.editor: Kai-Uwe Sattler
dc.contributor.editor: Melanie Herschel
dc.contributor.editor: Wolfgang Lehner
dc.date.accessioned: 2021-03-16T07:57:13Z
dc.date.available: 2021-03-16T07:57:13Z
dc.date.issued: 2021
dc.description.abstract: As regularly reported in the media, automated classifications and decisions based on machine learning models can cause unfair treatment of certain groups of a general population. Classically, machine learning models are designed to make highly accurate decisions in general. When a single machine learning model is not sufficient to define the possibly complex boundary between classes, multiple "specialized" models are used within a model ensemble to further boost accuracy. In particular …
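The abstract's core idea, choosing among specialized candidate models while weighing fairness alongside accuracy, can be illustrated with a minimal sketch. The Python snippet below is an assumption-laden illustration, not the paper's FALCES algorithm: the combined score, the demographic-parity gap, and all names (select_model, candidate_models, alpha) are hypothetical, and the candidate models are assumed to expose a scikit-learn-style predict method.

    import numpy as np

    def demographic_parity_gap(y_pred, groups):
        # Spread of positive-prediction rates across demographic groups
        # (0 means all groups receive positive predictions at equal rates).
        rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
        return max(rates) - min(rates)

    def select_model(candidate_models, X_val, y_val, groups, alpha=0.5):
        # Pick the candidate minimizing a combined error/fairness score:
        # score = (1 - alpha) * error_rate + alpha * fairness_gap.
        # This scalarization is an illustrative choice, not the paper's metric.
        best_model, best_score = None, float("inf")
        groups = np.asarray(groups)
        for model in candidate_models:
            y_pred = np.asarray(model.predict(X_val))
            error = float((y_pred != np.asarray(y_val)).mean())
            gap = demographic_parity_gap(y_pred, groups)
            score = (1 - alpha) * error + alpha * gap
            if score < best_score:
                best_model, best_score = model, score
        return best_model

For the dynamic, per-decision selection the title suggests, the same scoring could be applied over a local neighborhood of each test point rather than the full validation set; that refinement is omitted here.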
dc.identifier.doi: 10.18420/btw2021-08
dc.identifier.isbn: 978-3-88579-705-0
dc.identifier.pissn: 1617-5468
dc.identifier.uri: https://dl.gi.de/handle/20.500.12116/35813
dc.language.iso: en
dc.publisher: Gesellschaft für Informatik, Bonn
dc.relation.ispartof: BTW 2021
dc.relation.ispartofseries: Lecture Notes in Informatics (LNI) - Proceedings, Volume P-311
dc.subject: Model fairness
dc.subject: bias in machine learning
dc.subject: model ensembles
dc.title: Using FALCES against bias in automated decisions by integrating fairness in dynamic model ensembles
gi.citation.startPage: 155
gi.citation.endPage: 174
gi.conference.date: 13.-17. September 2021
gi.conference.location: Dresden
gi.conference.sessiontitle: ML & Data Science

Files

Original bundle
1 - 1 of 1
Name: A2-2.pdf
Size: 2.24 MB
Format: Adobe Portable Document Format