Title: Towards Learning User-Adaptive State Models in a Conversational Recommender System
Authors: Mahmood, Tariq; Ricci, Francesco; Brunkhorst, Ingo; Krause, Daniel; Sitou, Wassiou
Date issued: 2007
Date available: 2017-11-15
Full text: http://abis.l3s.uni-hannover.de/images/proceedings/abis2007/abis2007_mahmood_ricci.pdf
Repository record: https://dl.gi.de/handle/20.500.12116/5039
Language: en
Type: Text/Conference Paper
Abstract: Typical conversational recommender systems support interactive strategies that are hard-coded in advance and followed rigidly during a recommendation session. Reinforcement Learning techniques can instead be used to autonomously learn an optimal (user-adaptive) strategy by exploiting information encoded as features of a state representation. It is therefore important to determine the set of relevant state features for a given recommendation task. In this paper, we address the issue of feature relevancy and assess the effect of adding four different features to a baseline representation. We show that adding a feature is not always beneficial, and that its relevancy can be influenced by user behavior. These results motivate applying our approach online, in order to acquire the right mixture of online user behavior for addressing the relevancy problem.
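As an illustration of the feature-relevancy question described in the abstract (not taken from the paper), the following Python sketch trains a tabular Q-learning agent on a toy conversational session, once with a baseline state (number of elicited preferences) and once with a hypothetical extra feature (interaction length), then compares the average return of the resulting greedy policies. The environment, the feature names, and the reward scheme are invented stand-ins for the paper's setup.

```python
# Minimal sketch, not the authors' code: it illustrates how one might test
# whether adding a state feature helps an RL-learned recommendation strategy.
# The toy environment, the "extra feature", and the reward scheme are all
# hypothetical stand-ins for the setup described in the abstract.
import random
from collections import defaultdict

class ToySession:
    """Toy conversational session: at each turn the agent either asks for one
    more user preference or makes a recommendation, which ends the session."""
    def __init__(self):
        self.prefs_known = 0   # number of elicited preferences (capped at 3)
        self.turns = 0

    def state(self, use_extra_feature):
        if use_extra_feature:
            # Baseline feature plus a hypothetical "interaction length" feature.
            return (self.prefs_known, min(self.turns, 3))
        return (self.prefs_known,)

    def step(self, action):
        self.turns += 1
        if action == "recommend" or self.turns >= 20:
            return 2 * self.prefs_known, True     # recommendation quality, end
        self.prefs_known = min(self.prefs_known + 1, 3)
        return -1, False                          # small cost per question

ACTIONS = ["ask", "recommend"]

def q_learning(use_extra_feature, episodes=5000, alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Q-learning over the chosen state representation."""
    Q = defaultdict(float)
    for _ in range(episodes):
        env, done = ToySession(), False
        while not done:
            s = env.state(use_extra_feature)
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[(s, x)])
            r, done = env.step(a)
            s2 = env.state(use_extra_feature)
            target = r if done else r + gamma * max(Q[(s2, x)] for x in ACTIONS)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
    return Q

def average_return(Q, use_extra_feature, trials=1000):
    """Average cumulative reward of the greedy policy derived from Q."""
    total = 0.0
    for _ in range(trials):
        env, done = ToySession(), False
        while not done:
            s = env.state(use_extra_feature)
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
            r, done = env.step(a)
            total += r
    return total / trials

if __name__ == "__main__":
    random.seed(0)
    for extra in (False, True):
        Q = q_learning(use_extra_feature=extra)
        print(f"extra feature={extra}: avg return {average_return(Q, extra):.2f}")
```

In such a toy setting the extra feature may or may not change the learned policy's return; the sketch is only meant to show the kind of comparison that would drive a relevancy judgement about a candidate state feature.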