Authors: Mejri, Oumayma; Waedt, Karl; Yatagha, Romarick; Edeh, Natasha; Sebastiao, Claudia Lemos
Editors: Klein, Maike; Krupka, Daniel; Winter, Cornelia; Gergeleit, Martin; Martin, Ludger
Date available: 2024-10-21
Date issued: 2024-10-21
Year: 2024
ISBN: 978-3-88579-746-3
URI: https://dl.gi.de/handle/20.500.12116/45144

Abstract: Artificial intelligence (AI) has become increasingly integrated into many areas of society, from healthcare and finance to law enforcement and hiring. More recently, operators of sensitive infrastructure such as nuclear power plants have begun to apply AI to aspects of safety. However, these systems are not immune to biases and ethical concerns. This paper explores the role of knowledge representation in addressing ethics and fairness in AI, examining how biased or incomplete representations can lead to unfair outcomes and unreliable decision-making, and it proposes strategies to mitigate these risks.

Language: en
Keywords: Artificial Intelligence; Bias in AI; Knowledge Representation; Trustworthy AI; Sensitive Infrastructure; Data Bias; Explainable AI; Algorithmic Fairness
Title: Ensuring trustworthy AI for sensitive infrastructure using Knowledge Representation
Type: Text/Conference Paper
DOI: 10.18420/inf2024_167
ISSN: 1617-5468