Self-attention networks (SANs) have been widely applied to sequential recommendation, but they suffer from two limitations: (1) the quadratic complexity and vulnerability to over-parameterization of self-attention; (2) inaccurate modeling of sequential relations between items due to implicit position encoding. In this work, we propose the low-rank decomposed self-attention networks (LightSANs) to overcome these problems. In particular, we introduce low-rank decomposed self-attention, which projects the user's historical items into a small constant number of latent interests and leverages item-to-interest interaction to generate context-aware representations. It scales linearly with the length of the user's historical sequence in both time and space, and is more resilient to over-parameterization. In addition, we design a decoupled position encoding, which models the sequential relations between items more precisely. Extensive experiments on three real-world datasets show that LightSANs outperform existing SAN-based recommenders in terms of both effectiveness and efficiency.
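The low-rank decomposed self-attention described in the abstract can be illustrated with a minimal PyTorch sketch. The module below is an assumption-laden illustration, not the paper's implementation: the names `LowRankSelfAttention`, `k_interests`, and `interest_proj` are hypothetical, and details such as multi-head structure and the decoupled position encoding are omitted. It shows the core idea of projecting the n historical items into k latent interests and letting each item attend to those k interests, giving O(n·k) rather than O(n²) cost.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LowRankSelfAttention(nn.Module):
    """Minimal sketch of low-rank decomposed self-attention.

    Items attend to a small, fixed number k of latent interests instead of to
    all n items, so attention cost grows linearly in the sequence length.
    All layer and parameter names here are illustrative assumptions.
    """

    def __init__(self, d_model: int, k_interests: int):
        super().__init__()
        self.query = nn.Linear(d_model, d_model)
        self.key = nn.Linear(d_model, d_model)
        self.value = nn.Linear(d_model, d_model)
        # Soft-assignment of the length-n item sequence to k latent interests.
        self.interest_proj = nn.Linear(d_model, k_interests)

    def forward(self, items: torch.Tensor) -> torch.Tensor:
        # items: (batch, n, d_model)
        q = self.query(items)                                   # (batch, n, d)
        k = self.key(items)                                     # (batch, n, d)
        v = self.value(items)                                   # (batch, n, d)

        # Low-rank decomposition: summarize the n items into k interests.
        assign = F.softmax(self.interest_proj(items), dim=1)    # (batch, n, k)
        interest_k = torch.einsum('bnk,bnd->bkd', assign, k)    # (batch, k, d)
        interest_v = torch.einsum('bnk,bnd->bkd', assign, v)    # (batch, k, d)

        # Item-to-interest attention: each item attends to k interests, not n items.
        scores = torch.einsum('bnd,bkd->bnk', q, interest_k) / (q.size(-1) ** 0.5)
        weights = F.softmax(scores, dim=-1)                      # (batch, n, k)
        return torch.einsum('bnk,bkd->bnd', weights, interest_v)  # (batch, n, d)


# Usage sketch: a batch of 2 sequences, each with 50 historical items.
if __name__ == "__main__":
    attn = LowRankSelfAttention(d_model=64, k_interests=5)
    out = attn(torch.randn(2, 50, 64))
    print(out.shape)  # torch.Size([2, 50, 64])
```

Since k is a small constant, the attention map is (n × k) instead of (n × n), which is where the linear time and space scaling claimed in the abstract comes from.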
Xinyan Fan
Zheng Liu
Jianxun Lian
Microsoft Research Asia (China)
Renmin University of China
www.synapsesocial.com/papers/69dab64daae38ff6ad8360dc — DOI: https://doi.org/10.1145/3404835.3462978