Authors: Adelt, Julius; Liebrenz, Timm; Herber, Paula
Editors: Engels, Gregor; Hebig, Regina; Tichy, Matthias
Date: 2023-01-18
Year: 2023
ISBN: 978-3-88579-726-5
URI: https://dl.gi.de/handle/20.500.12116/40113
Abstract: Reinforcement Learning (RL) is a powerful technique for controlling intelligent hybrid systems (HS) in dynamic and uncertain environments. However, formally guaranteeing the safe behavior of intelligent HS is hard, because formal descriptions are often not available in industrial design processes and are hard to obtain for RL components. Furthermore, the intertwined discrete and continuous behavior of hybrid systems limits the scalability of automatic verification methods such as model checking. This makes deductive verification desirable. In this paper, we summarize our approach for the deductive verification of intelligent HS with embedded RL components that are modeled with Simulink and the RL Toolbox. This paper was originally published at the Formal Methods conference 2021 (FM21) [ALH21].
Language: en
Keywords: Formal Verification; Theorem Proving; Hybrid Systems; Safe Reinforcement Learning
Title: Formal Verification of Intelligent Hybrid Systems that are modeled with Simulink and the Reinforcement Learning Toolbox
Type: Text/Conference Paper
ISSN: 1617-5468