Research Area:  Machine Learning
Recommender systems aim to maximize overall accuracy for long-term recommendation. However, most existing recommendation models adopt a static view and ignore the fact that recommendation is a dynamic, sequential decision-making process. As a result, they fail to adapt to new situations and suffer from the cold-start problem. Although sequential recommendation methods have recently been gaining attention, the objective of long-term recommendation has still not been explicitly addressed, since these methods are designed for short-term prediction. To overcome these problems, we propose a novel top-N interactive recommender system based on deep reinforcement learning. In our model, the recommendation process is viewed as a Markov decision process (MDP), in which the interactions between the agent (recommender system) and the environment (user) are simulated by a recurrent neural network (RNN). In addition, reinforcement learning is employed to optimize the proposed model so as to maximize long-term recommendation accuracy. Experimental results on several benchmarks show that our model significantly outperforms previous top-N methods in terms of Hit-Rate and NDCG for long-term recommendation, and can be applied to both cold-start and warm-start scenarios.
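To make the described setup concrete, below is a minimal sketch of an interactive top-N recommender treated as an MDP: a GRU encodes the user's interaction history into a state, a policy head maps that state to a distribution over catalog items (actions), and the policy is updated with a REINFORCE-style gradient against a toy reward. This is only an illustration under assumed choices (PyTorch, layer sizes, the stand-in user simulator, and the binary reward), not the authors' implementation.

```python
# Illustrative sketch only: GRU-based policy for interactive top-N recommendation
# trained with a REINFORCE update. Catalog size, layer sizes, the toy user
# simulator, and the reward are assumptions for demonstration purposes.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_ITEMS, EMB_DIM, HIDDEN = 1000, 32, 64   # hypothetical catalog / layer sizes

class RecPolicy(nn.Module):
    """Agent: encodes the interaction history with an RNN (GRU) and
    outputs a distribution over items, i.e. the MDP's action space."""
    def __init__(self):
        super().__init__()
        self.item_emb = nn.Embedding(NUM_ITEMS, EMB_DIM)
        self.gru = nn.GRU(EMB_DIM, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, NUM_ITEMS)

    def forward(self, history):                  # history: (batch, seq_len) item ids
        h, _ = self.gru(self.item_emb(history))  # state = RNN summary of interactions
        return F.log_softmax(self.head(h[:, -1]), dim=-1)

def toy_user_reward(action, target):
    """Stand-in environment: reward 1 if the recommended item matches the user's
    next item, else 0 (a real simulator would model richer user feedback)."""
    return (action == target).float()

policy = RecPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# One REINFORCE step on synthetic data: push up the log-probability of
# recommendations that received positive reward from the (simulated) user.
history = torch.randint(0, NUM_ITEMS, (16, 10))   # fake interaction sequences
target = torch.randint(0, NUM_ITEMS, (16,))       # fake "next item" the user accepts
log_probs = policy(history)
actions = torch.multinomial(log_probs.exp(), 1).squeeze(1)  # sample recommendations
reward = toy_user_reward(actions, target)
loss = -(reward * log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)).mean()
opt.zero_grad(); loss.backward(); opt.step()
```

For top-N evaluation, the same log-probabilities can be ranked and the top N items scored with Hit-Rate and NDCG against held-out interactions.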
Keywords:  
Author(s) Name:  Liwei Huang, Mingsheng Fu, Fan Li, Hong Qu, Yangjun Liu, Wenyu Chen
Journal name:  Knowledge-Based Systems
Conference name:  
Publisher name:  Elsevier
DOI:  https://doi.org/10.1016/j.knosys.2020.106706
Volume Information:  Volume 213, 15 February 2021, 106706
Paper Link:   https://www.sciencedirect.com/science/article/abs/pii/S0950705120308352