Lighter and Better: Low-Rank Decomposed Self-Attention Networks for Next-Item Recommendation
- Xinyan Fan,
- Zheng Liu,
- Jianxun Lian,
- Wayne Xin Zhao,
- Xing Xie,
- Ji-Rong Wen
SIGIR 2021 short paper | Published by ACM
Self-attention networks (SANs) have been intensively applied to sequential recommenders, but they are limited by: (1) the quadratic complexity and vulnerability to over-parameterization of self-attention; (2) inaccurate modeling of sequential relations between items due to implicit position encoding. In this work, we propose the low-rank decomposed self-attention networks (LightSANs) to overcome these problems. In particular, we introduce the low-rank decomposed self-attention, which projects the user's historical items into a small constant number of latent interests and leverages item-to-interest interaction to generate context-aware representations. It scales linearly w.r.t. the length of the user's historical sequence in both time and space, and is more resilient to over-parameterization. Besides, we design the decoupled position encoding, which models the sequential relations between items more precisely. Extensive experimental studies are carried out on three real-world datasets, where LightSANs outperform the existing SANs-based recommenders in terms of both effectiveness and efficiency.
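To make the core idea concrete, below is a minimal PyTorch sketch of item-to-interest attention as described in the abstract: the n historical item states are pooled into a small, fixed number k of latent interests, and each item then attends to those k interests instead of to all n items, giving O(n·k) rather than O(n²) cost. The class and parameter names (LowRankSelfAttention, k_interests, interest_proj) are illustrative assumptions, not the authors' implementation, and the decoupled position encoding is omitted.

```python
import torch
import torch.nn as nn


class LowRankSelfAttention(nn.Module):
    """Sketch: items attend to k latent interests instead of to each other."""

    def __init__(self, hidden_dim: int, k_interests: int = 8):
        super().__init__()
        self.w_q = nn.Linear(hidden_dim, hidden_dim)
        self.w_k = nn.Linear(hidden_dim, hidden_dim)
        self.w_v = nn.Linear(hidden_dim, hidden_dim)
        # Learned pooling that summarizes the n item states into k latent interests.
        self.interest_proj = nn.Linear(hidden_dim, k_interests)
        self.scale = hidden_dim ** -0.5

    def forward(self, item_states: torch.Tensor) -> torch.Tensor:
        # item_states: (batch, n, hidden_dim)
        q = self.w_q(item_states)                                     # (batch, n, d)
        k = self.w_k(item_states)                                     # (batch, n, d)
        v = self.w_v(item_states)                                     # (batch, n, d)

        # Pool keys/values into k latent interests -> (batch, k, d).
        pool = torch.softmax(self.interest_proj(item_states), dim=1)  # (batch, n, k)
        k_int = pool.transpose(1, 2) @ k
        v_int = pool.transpose(1, 2) @ v

        # Item-to-interest attention: weight matrix is (batch, n, k), not (batch, n, n).
        attn = torch.softmax(q @ k_int.transpose(1, 2) * self.scale, dim=-1)
        return attn @ v_int                                           # (batch, n, d)


# Usage: a batch of 4 user sequences, each with 50 historical items of dimension 64.
layer = LowRankSelfAttention(hidden_dim=64, k_interests=8)
out = layer(torch.randn(4, 50, 64))
print(out.shape)  # torch.Size([4, 50, 64])
```

Because k is a small constant, both the attention map and the pooled key/value tensors grow linearly with the sequence length, which is the efficiency property the abstract claims for LightSANs.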