Title: How to Handle Health-Related Small Imbalanced Data in Machine Learning?
Authors: Rauschenberger, Maria; Baeza-Yates, Ricardo
Date: 2021 (record available 2021-01-17)
URL: https://dl.gi.de/handle/20.500.12116/34681
Type: Text/Journal Article
DOI: 10.1515/icom-2020-0018
ISSN: 2196-6826
Language: en
Keywords: Machine Learning; Human-Centered Design (HCD); interactive systems; health; small data; imbalanced data; over-fitting; variances; interpretable results; guidelines

Abstract: When discussing interpretable machine learning results, researchers need to compare them and check for reliability, especially for health-related data. The reason is the negative impact of wrong results on a person, such as an incorrect prediction of cancer, a flawed assessment of the COVID-19 pandemic situation, or a missed early screening of dyslexia. Often only small datasets exist for these complex interdisciplinary research projects. Hence, it is essential that researchers in this area understand different methodologies and mindsets, such as the <em>Design Science Methodology</em>, <em>Human-Centered Design</em>, or <em>Data Science</em> approaches, to ensure interpretable and reliable results. Therefore, we present various recommendations and design considerations for experiments that help to avoid over-fitting and biased interpretation of results when working with small, imbalanced health-related data. We also present two very different use cases: early screening of dyslexia and event prediction in multiple sclerosis.
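The paper's concrete recommendations are not reproduced in this record, but one standard safeguard the abstract alludes to, avoiding biased evaluation and over-fitting on small imbalanced data, is stratified cross-validation, which keeps the class ratio identical in every fold. A minimal standard-library sketch (the function name and the toy dyslexia-screening label set are illustrative assumptions, not taken from the paper):

```python
from collections import defaultdict
import random

def stratified_kfold(labels, k, seed=0):
    """Split sample indices into k folds while preserving the class
    ratio in each fold -- important when the minority class is tiny,
    so no fold is evaluated without minority samples."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)
        # deal each class's samples round-robin across the folds
        for i, idx in enumerate(idxs):
            folds[i % k].append(idx)
    return folds

# Hypothetical small imbalanced dataset: 20 negatives, 4 positives.
labels = [0] * 20 + [1] * 4
folds = stratified_kfold(labels, k=4)
for fold in folds:
    # every fold gets exactly one positive (4 positives / 4 folds)
    print(sum(labels[i] for i in fold), len(fold))
```

With a plain (unstratified) random split, a fold of six samples from this dataset could easily contain no positives at all, making any per-fold performance estimate meaningless; stratification rules that out by construction.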