Learning for User Adaptive Systems: Likely Pitfalls and Daring Rescue
dc.contributor.author | Müller, Martin E. | de_DE |
dc.date.accessioned | 2017-11-15T15:01:41Z | |
dc.date.available | 2017-11-15T15:01:41Z | |
dc.date.issued | 2003 | |
dc.description.abstract | Adaptive user interfaces adapt themselves to the user by reasoning about the user and refining their internal model of the user’s needs. In machine learning, artificial systems learn how to perform better through experience. By observing examples from a sample, the learning algorithm tries to induce a hypothesis which approximates the target function. It seems obvious that machine learning offers exactly what is desperately needed for intelligent adaptive behavior. But when trying to adapt by learning, one will sooner or later encounter one or more well-known problems, some of which have been discussed in [Webb et al., 2001]. We propose a framework for describing user modeling problems, identify several reasons for inherent noise, and discuss a few promising approaches which tackle these problems. | |
dc.identifier.uri | http://abis.l3s.uni-hannover.de/images/proceedings/abis2003/mueller.pdf | de_DE |
dc.identifier.uri | https://dl.gi.de/handle/20.500.12116/5125 | |
dc.language.iso | en | de_DE |
dc.relation.ispartof | 11. GI-Workshop "Adaptivität und Benutzermodellierung in interaktiven Softwaresystemen" | de_DE |
dc.subject | Machine Learning for User Modeling | |
dc.subject | Sample size | |
dc.subject | Noise | |
dc.subject | Interpreting User Interactions | |
dc.title | Learning for User Adaptive Systems: Likely Pitfalls and Daring Rescue | de_DE |
dc.type | Text/Conference Paper | |
gi.document.quality | digidoc |