Domain-Specific Pretraining for Vertical Search: Case Study on Biomedical Literature
- Yu Wang,
- Jinchao Li,
- Tristan Naumann,
- Chenyan Xiong,
- Hao Cheng,
- Rob Tinn,
- Cliff Wong,
- Naoto Usuyama,
- Rick Rogahn,
- Zhihong Shen,
- Yang Qin,
- Eric Horvitz,
- Paul Bennett,
- Jianfeng Gao,
- Hoifung Poon
KDD 2021
Information overload is a prevalent challenge in many high-value domains. A prominent case in point is the explosion of the biomedical literature on COVID-19, which swelled to hundreds of thousands of papers in a matter of months. Overall, the biomedical literature expands at a rate of two papers per minute, totaling over a million new papers every year. Search in the biomedical realm, as in many other vertical domains, is challenging due to the scarcity of direct supervision from click logs. Self-supervised learning has emerged as a promising direction for overcoming this annotation bottleneck. We propose a general approach for vertical search based on domain-specific pretraining and present a case study in the biomedical domain. Despite being substantially simpler and using no relevance labels for training or development, our method performs comparably to or better than the best systems in the official TREC-COVID evaluation, a COVID-related biomedical search competition. Using distributed computing on modern cloud infrastructure, our system scales to tens of millions of articles on PubMed and has been deployed as Microsoft Biomedical Search, a new search experience for biomedical literature: this https URL.
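As a rough illustration of how domain-specific pretraining can support retrieval without relevance labels, the sketch below encodes queries and articles with a biomedical pretrained encoder and ranks candidates by embedding similarity. This is a minimal sketch, not the paper's exact pipeline: the model checkpoint, [CLS] pooling, and cosine scoring are all illustrative assumptions.

```python
# Minimal sketch: zero-label ranking with a domain-specific pretrained encoder.
# The checkpoint name, pooling choice, and scoring are assumptions for
# illustration, not the system described in the paper.
import torch
from transformers import AutoTokenizer, AutoModel

MODEL = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
encoder = AutoModel.from_pretrained(MODEL)

def embed(texts):
    # Tokenize a batch of texts and take the [CLS] vector as the embedding.
    batch = tokenizer(texts, padding=True, truncation=True,
                      max_length=256, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**batch)
    cls = out.last_hidden_state[:, 0]  # [CLS] token representation
    return torch.nn.functional.normalize(cls, dim=-1)

# Rank candidate articles by cosine similarity to the query.
docs = ["Remdesivir shows efficacy against SARS-CoV-2 in vitro.",
        "A survey of graph neural networks."]
query = ["antiviral treatments for COVID-19"]
scores = embed(query) @ embed(docs).T  # shape: [1, num_docs]
print(scores)  # higher score = more relevant under this encoder
```

In a deployed setting, the document embeddings would be precomputed offline and served from an approximate nearest-neighbor index, which is what makes scaling to tens of millions of PubMed articles feasible.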