Show simple item record

dc.contributor.author: Rauschenberger, Maria
dc.contributor.author: Baeza-Yates, Ricardo
dc.date.accessioned: 2021-01-17T20:04:29Z
dc.date.available: 2021-01-17T20:04:29Z
dc.date.issued: 2021
dc.identifier.issn: 2196-6826
dc.identifier.uri: http://dl.gi.de/handle/20.500.12116/34681
dc.description.abstract: When discussing interpretable machine learning results, researchers need to compare them and check for reliability, especially for health-related data. The reason is the negative impact of wrong results on a person, such as in wrong prediction of cancer, incorrect assessment of the COVID-19 pandemic situation, or missing early screening of dyslexia. Often only small data exists for these complex interdisciplinary research projects. Hence, it is essential that this type of research understands different methodologies and mindsets such as the Design Science Methodology, Human-Centered Design or Data Science approaches to ensure interpretable and reliable results. Therefore, we present various recommendations and design considerations for experiments that help to avoid over-fitting and biased interpretation of results when having small imbalanced data related to health. We also present two very different use cases: early screening of dyslexia and event prediction in multiple sclerosis.
dc.language.iso: en
dc.publisher: De Gruyter
dc.relation.ispartof: i-com: Vol. 19, No. 3
dc.subject: Machine Learning
dc.subject: Human-Centered Design
dc.subject: HCD
dc.subject: interactive systems
dc.subject: health
dc.subject: small data
dc.subject: imbalanced data
dc.subject: over-fitting
dc.subject: variances
dc.subject: interpretable results
dc.subject: guidelines
dc.title: How to Handle Health-Related Small Imbalanced Data in Machine Learning?
dc.type: Text/Journal Article
dc.pubPlace: Berlin
mci.reference.pages: 215-226
dc.identifier.doi: 10.1515/icom-2020-0018


Files in this item

There are no files associated with this item.