Title: Automated Reasoning for Conflict Solving in Knowledge Graphs
Authors/Editors: Fähndrich, Johannes; Wischow, Maik; Klein, Maike; Krupka, Daniel; Winter, Cornelia; Gergeleit, Martin; Martin, Ludger
Type: Text/Conference Paper
Date issued: 2024
Date available: 2024-10-21
ISBN: 978-3-88579-746-3
ISSN: 1617-5468; eISSN: 2944-7682
DOI: 10.18420/inf2024_24
Handle: https://dl.gi.de/handle/20.500.12116/45182
Language: en
Keywords: Strong AI; Reasoning; Conflict Resolution; Machine Learning; Knowledge Graph; Belief System

Abstract: The forensic application of AI methods depends on the level of trust placed in automated reasoning. Automated reasoning necessarily leads to conflicts, and with them to the need for adaptation. Knowledge graphs are an essential part of formalization in complex systems, e.g. as a representation of an AI's beliefs. Strong AI, one of the two main research areas of early 21st-century computer science, struggles with the representation of conflicting beliefs as well as with strategies for their resolution. We present a template-based approach, together with an implementation, for detecting and resolving conflicts in belief systems, leading to deeper insight into AI and its capacity for self-reflection. Without an understanding of how beliefs are handled in strong AI systems, the application to forensics is hindered.