Deep encoders with auxiliary parameters for extreme classification
- Kunal Dahiya,
- Sachin Yadav,
- Sushant Sondhi,
- Deepak Saini,
- Sonu Mehta,
- Jian Jiao,
- Sumeet Agarwal,
- Purushottam Kar,
- Manik Varma
In Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Long Beach, California, 2023
The task of annotating a data point with labels most relevant to it from a large universe of labels is referred to as Extreme Classification (XC). State-of-the-art XC methods have applications in ranking, recommendation, and tagging, and mostly employ a combination architecture comprising a deep encoder and a high-capacity classifier. These two components are often trained in a modular fashion to conserve compute. This paper shows that in XC settings where data paucity and semantic gap issues abound, this can lead to suboptimal encoder training, which negatively affects the performance of the overall architecture. The paper then proposes a lightweight alternative, DEXA, that augments encoder training with auxiliary parameters. Incorporating DEXA into existing XC architectures requires minimal modifications, and the method can scale to datasets with 40 million labels and offer predictions that are up to 6% and 15% more accurate than those based on embeddings from existing deep XC methods on benchmark and proprietary datasets, respectively. The paper also analyzes DEXA theoretically and shows that it offers provably better encoder training than existing Siamese training strategies in certain realizable settings. Code for DEXA is available at https://github.com/Extreme-classification/dexa.
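As a rough, hypothetical sketch of the core idea (the authors' actual implementation lives in the linked repository), the PyTorch snippet below pairs a shared text encoder with free per-cluster auxiliary vectors: a label's embedding is the encoder output plus the auxiliary vector of the cluster the label belongs to, and encoder and auxiliary parameters are trained jointly with an in-batch contrastive loss. All names here (`AuxiliaryAugmentedEncoder`, `contrastive_step`) and the assumption that labels are pre-clustered are illustrative, not DEXA's API.

```python
# Illustrative sketch only -- not the DEXA reference implementation.
import torch
import torch.nn.functional as F

class AuxiliaryAugmentedEncoder(torch.nn.Module):
    def __init__(self, encoder: torch.nn.Module, num_clusters: int, dim: int):
        super().__init__()
        self.encoder = encoder                      # shared deep text encoder
        # One free auxiliary vector per label cluster (assumption: labels are
        # pre-clustered, e.g. by balanced k-means over label features).
        self.aux = torch.nn.Embedding(num_clusters, dim)
        torch.nn.init.zeros_(self.aux.weight)       # start from plain Siamese training

    def embed_document(self, doc_tokens):
        return F.normalize(self.encoder(doc_tokens), dim=-1)

    def embed_label(self, label_tokens, cluster_ids):
        # Label representation = encoder output + cluster-level auxiliary vector.
        z = self.encoder(label_tokens) + self.aux(cluster_ids)
        return F.normalize(z, dim=-1)

def contrastive_step(model, doc_tokens, label_tokens, cluster_ids, temperature=0.1):
    """One training step with in-batch negatives (Siamese-style objective)."""
    d = model.embed_document(doc_tokens)              # (B, dim) document embeddings
    l = model.embed_label(label_tokens, cluster_ids)  # (B, dim) label embeddings
    logits = d @ l.t() / temperature                  # similarity of every doc-label pair
    targets = torch.arange(d.size(0), device=d.device)  # i-th label is positive for i-th doc
    return F.cross_entropy(logits, targets)
```

In this sketch, the zero-initialized auxiliary vectors mean training begins from the ordinary Siamese objective, and the per-cluster parameters only absorb whatever label-side information the encoder itself cannot capture.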