About
Dr. Ming Gong (公明) is a Principal Applied Scientist Manager in the Microsoft STCA NLP Group, leading a team of 40+ applied scientists and engineers who develop NLP technologies and applications. Her research interests include Question Answering, Search Intelligence, Cross-Lingual Models, and Pre-trained Models.
Before joining Microsoft, she received her Ph.D. in Graphics and Visual Computing from the Institute of Computing Technology, Chinese Academy of Sciences, in 2013, under the supervision of Prof. Hua Li.
We are hiring NLP researchers, engineers, and interns! If you have strong publications and experience in the above areas and are willing to work in Microsoft Beijing or Suzhou, feel free to send me your resume ([email protected]).
Highlights:
- 2022-06-10: Our cross-lingual dense retrieval work QuiCK achieves a SOTA result on the XOR-Retrieve leaderboard.
- 2022-06-07: 1 paper accepted by IJCAI 2022.
- 2022-04-11: 1 paper accepted by NAACL 2022.
- 2021-11-01: Our work RIKD won the Microsoft 2021 MLADS Fall Distinguished Contribution Award (2/250+). Honored to receive the award at two consecutive conferences, a first in the conference's history.
- 2021-08-27: 2 papers accepted by EMNLP 2021 and Findings of EMNLP 2021.
- 2021-07-30: CodeXGLUE accepted by NeurIPS 2021.
- 2021-06-11: Our work CalibreNet won the Microsoft 2021 MLADS Distinguished Contribution Award (2/300+).
- 2021-05-18: 1 long paper and 1 tutorial accepted by KDD 2021.
- 2021-05-05: 4 long papers accepted by ACL / Findings of ACL 2021.
- 2021-04-10: Universal topic model work is shipped in a Windows release – News and Interests on the Windows taskbar in 50+ regions globally. (blog, media, video)
- 2021-03-05: Long document machine reading comprehension model (DocMRC) shipped in Azure Semantic Search (service, blog, video).
- 2021-02-01: 1 long paper accepted by ICASSP 2021.
- 2020-12-24: Lecture tutorial on Language Scaling accepted by TheWebConf 2021 (link).
- 2020-12-02: 1 long paper accepted by AAAI 2021.
- 2020-11-27: GLGE (github), a benchmark dataset for natural language generation, is released.
- 2020-10-16: 1 long paper accepted by WSDM 2021.
- 2020-10-01: Bing Blog (link) about Universal QnA is published.
- 2020-09-30: CodeXGLUE (github, blog, blog_zh) is released for code intelligence research.
- 2020-09-30: 2 long papers accepted by COLING 2020.
- 2020-09-16: 4 long papers accepted by EMNLP / Findings of EMNLP 2020.
- 2020-07-04: Bing Cross-Lingual QA in 100 Languages: Bing QnA powered by cross-lingual models has been shipped to 100+ languages and 200+ regions in total, serving millions of users on Bing.com. Example cases: Greek {γιατί το χρώμα του ουρανού είναι μπλε} ("why is the color of the sky blue"), Turkish {beyoğlu gezilecek yerler} ("places to visit in Beyoğlu"), Frisian {wat is winteroarloch} ("what is the Winter War"), Arabic {افضل الزيوت لنمو الشعر} ("the best oils for hair growth"), Russian {как сбросить хонор до заводских настроек} ("how to reset an Honor phone to factory settings"), Telugu {నేరేడు పండు తినడం వల్ల కలిగే ప్రయోజనాలు} ("the benefits of eating jamun fruit").
- 2020-06-03: The Promotion Video of our KDD 2020 paper: Mining Implicit Relevance Feedback from User Behavior for Web Question Answering.
- 2020-06-02: Unicoder model (code) is open sourced – a SOTA cross-lingual pre-trained model.
- 2020-05-28: XGLUE Leaderboard (link) is online now.
- 2020-05-01: Bing Cross-Lingual QA: Bing QnA powered by cross-lingual models shipped to 28+ markets/regions in total, covering five continents (America, Africa, Asia, Europe, and Australia) and serving millions of users on Bing.com. Example cases: ja-JP {ワーキングプア 年収} ("working poor annual income"), ru-RU {длина биссектрисы} ("length of a bisector"), nl-NL {postcode molenkampweg 3 vlagtwedde} ("postal code for Molenkampweg 3, Vlagtwedde").
- 2020-04-03: XGLUE (paper) is a new benchmark dataset for cross-lingual pre-training, understanding and generation.
- 2020-02-19: CodeBERT (paper) is a code-language pre-trained model, which achieves SOTA results on Code Retrieval and Code Generation tasks.
- 2020-02-10: ReflectionNet achieves a SOTA result on the Natural Questions leaderboard.
- 2019-09-05: Unicoder-VL for Images (paper) is an image-language pre-trained model, which achieves SOTA results on Image-Text Retrieval and Visual QA (GQA) tasks.
- 2019-07-19: Unicoder (paper) is a cross-lingual pre-trained model, which achieves SOTA results on XNLI and Cross-lingual QA tasks.
- 2019-04-20: NeuronBlocks (paper, code) is open sourced on GitHub.