Using FALCES against bias in automated decisions by integrating fairness in dynamic model ensembles
dc.contributor.author | Lässig, Nico | |
dc.contributor.author | Oppold, Sarah | |
dc.contributor.author | Herschel, Melanie | |
dc.contributor.editor | Kai-Uwe Sattler | |
dc.contributor.editor | Melanie Herschel | |
dc.contributor.editor | Wolfgang Lehner | |
dc.date.accessioned | 2021-03-16T07:57:13Z | |
dc.date.available | 2021-03-16T07:57:13Z | |
dc.date.issued | 2021 | |
dc.description.abstract | As regularly reported in the media, automated classifications and decisions based on machine learning models can cause unfair treatment of certain groups of the general population. Classically, machine learning models are designed to make highly accurate decisions in general. When one machine learning model is not sufficient to define the possibly complex boundary between classes, multiple "specialized" models are used within a model ensemble to further boost accuracy. In particular … | en |
dc.identifier.doi | 10.18420/btw2021-08 | |
dc.identifier.isbn | 978-3-88579-705-0 | |
dc.identifier.pissn | 1617-5468 | |
dc.identifier.uri | https://dl.gi.de/handle/20.500.12116/35813 | |
dc.language.iso | en | |
dc.publisher | Gesellschaft für Informatik, Bonn | |
dc.relation.ispartof | BTW 2021 | |
dc.relation.ispartofseries | Lecture Notes in Informatics (LNI) - Proceedings, Volume P-311 | |
dc.subject | Model fairness | |
dc.subject | bias in machine learning | |
dc.subject | model ensembles | |
dc.title | Using FALCES against bias in automated decisions by integrating fairness in dynamic model ensembles | en |
gi.citation.endPage | 174 | |
gi.citation.startPage | 155 | |
gi.conference.date | 13–17 September 2021 |
gi.conference.location | Dresden | |
gi.conference.sessiontitle | ML & Data Science |
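The abstract above describes choosing among multiple "specialized" models in an ensemble while accounting for both accuracy and fairness. The following is a minimal, hypothetical Python sketch of that general idea only — generic fairness-aware model selection with a demographic-parity gap as the fairness measure. It is not the FALCES algorithm from the paper, and all names (demographic_parity_gap, the lam weight, the synthetic protected attribute) are invented for illustration.

# Hypothetical illustration, NOT the FALCES method: pick from candidate models
# using accuracy minus a fairness penalty.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

def demographic_parity_gap(y_pred, group):
    # Absolute difference in positive-prediction rates between the two groups.
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Toy data with a synthetic binary "protected" attribute.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
group = (X[:, 0] > 0).astype(int)  # stand-in for a sensitive attribute
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=0)

# Candidate ("specialized") models that could form an ensemble.
candidates = [LogisticRegression(max_iter=1000), DecisionTreeClassifier(max_depth=5)]
for m in candidates:
    m.fit(X_tr, y_tr)

# Score each candidate by accuracy minus a weighted fairness gap.
lam = 0.5  # free trade-off weight, chosen arbitrarily for the sketch
scores = []
for m in candidates:
    pred = m.predict(X_te)
    acc = (pred == y_te).mean()
    gap = demographic_parity_gap(pred, g_te)
    scores.append(acc - lam * gap)

best = candidates[int(np.argmax(scores))]
print("selected model:", type(best).__name__, "score:", max(scores))

The paper's dynamic model ensembles go beyond this single global selection; the sketch only illustrates the basic trade-off between accuracy and a group-fairness measure.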