Learning for User Adaptive Systems: Likely Pitfalls and Daring Rescue

dc.contributor.author: Müller, Martin E. (de_DE)
dc.date.accessioned: 2017-11-15T15:01:41Z
dc.date.available: 2017-11-15T15:01:41Z
dc.date.issued: 2003
dc.description.abstract: Adaptive user interfaces adapt themselves to the user by reasoning about the user and refining their internal model of the user’s needs. In machine learning, artificial systems learn how to perform better through experience. By observing examples from a sample, the learning algorithm tries to induce a hypothesis which approximates the target function. It seems obvious that machine learning offers exactly what is desperately needed for intelligent adaptive behavior. But when trying to adapt by learning, one will sooner or later encounter one or more well-known problems, some of which have been discussed in [Webb et al., 2001]. We propose a framework for describing user modeling problems, identify several reasons for inherent noise, and discuss a few promising approaches which tackle these problems.
dc.identifier.uri: http://abis.l3s.uni-hannover.de/images/proceedings/abis2003/mueller.pdf (de_DE)
dc.identifier.uri: https://dl.gi.de/handle/20.500.12116/5125
dc.language.iso: en (de_DE)
dc.relation.ispartof: 11. GI-Workshop "Adaptivität und Benutzermodellierung in interaktiven Softwaresystemen" (de_DE)
dc.subject: Machine Learning for User Modeling
dc.subject: Sample size
dc.subject: Noise
dc.subject: Interpreting User Interactions
dc.title: Learning for User Adaptive Systems: Likely Pitfalls and Daring Rescue (de_DE)
dc.type: Text/Conference Paper
gi.document.quality: digidoc