SIGIR 2009: Seeking Better Search


By Rob Knies, Managing Editor, Microsoft Research

Organizing threaded discussions. Using reasoning to rank answers on community sites. Predicting click-through rates for news queries. Assessing how crawl policies affect the effectiveness of Web search. Taking context into consideration when classifying queries and predicting user interests.

Much remains to be solved in the field of information retrieval. That becomes obvious just by scanning the list of papers to be presented by Microsoft Research during SIGIR 2009, the 32nd annual conference sponsored by the Association for Computing Machinery’s Special Interest Group on Information Retrieval.


The event, to be held in Boston from July 19-23 at the Sheraton Boston Hotel and Northeastern University, will feature 74 accepted papers, in addition to a busy schedule packed with tutorials, workshops, posters, demonstrations, and talks. And of those accepted papers, no fewer than 21, or 28 percent, come from Microsoft Research.

That should come as little surprise. Microsoft Research has a history of strong participation in SIGIR. Over the past four years, 58 papers from the organization’s labs have been accepted for presentation during this prestigious conference, an indication of how seriously Microsoft is taking its goals of refining and advancing the state of the art in information retrieval.

Microsoft Research’s support for SIGIR is hardly limited to papers, either. Five workshops to be conducted July 23 feature Microsoft Research personnel. Four of the paper sessions will be chaired by researchers from the organization. Of 104 posters to be displayed during the gathering, 12 will be from Microsoft Research, which is also contributing one demonstration and a talk during the industry track of the conference. Microsoft Research is a gold sponsor for SIGIR 2009.

Three of Microsoft Research’s six labs worldwide are well represented in the paper presentations, with Microsoft Research Asia contributing eight, and both Microsoft Research Redmond and Microsoft Research Cambridge being responsible for six. Another is a joint effort between the latter lab and Microsoft Research Silicon Valley. In addition, two other papers have been submitted by authors from elsewhere in Microsoft.

Meng Wang and Xian-Sheng Hua of Microsoft Research Asia, along with Bo Liu of the University of Science and Technology of China, will conduct a demonstration on July 20 entitled Accommodating Colorblind Users in Image Search.

Two days later, danah boyd of Microsoft Research New England is to deliver an industry-track discussion called The Searchable Nature of Acts in Networked Publics.

Natasa Milic-Frayling of Microsoft Research Cambridge will serve as session chair for a July 20 collection of papers called Recommenders I. Tie-Yan Liu of Microsoft Research Asia chairs the Learning to Rank I session on the morning of July 21, and that afternoon, Susan Dumais of Microsoft Research Redmond will chair the Interactive Search session. The following morning, it’s Tetsuya Sakai’s turn, during a session entitled Evaluation and Measurement I.

Microsoft Research attendees will also be busy collaborating with academic partners during the July 23 workshops, participating in five of them.

Papers and posters accepted for SIGIR 2009 with at least one Microsoft Research co-author:

Papers

Click-Through Prediction for News Queries, by Arnd Christian König, Microsoft Research Redmond; Michael Gamon, Microsoft Research Redmond; and Qiang Wu, Microsoft Research Redmond.

Context-Aware Query Classification, by Huanhuan Cao, University of Science and Technology of China; Derek Hao Hu, Hong Kong University of Science and Technology; Dou Shen, Microsoft; Daxin Jiang, Microsoft Research Asia; Jian-Tao Sun, Microsoft Research Asia; Enhong Chen, University of Science and Technology of China; and Qiang Yang, Hong Kong University of Science and Technology.

CrowdReranking: Exploring Multiple Search Engines for Visual Search Reranking, by Yuan Liu, University of Science and Technology of China; Tao Mei, Microsoft Research Asia; and Xian-Sheng Hua, Microsoft Research Asia.

Document Selection Methodologies for Efficient and Effective Learning-to-Rank, by Javed A. Aslam, Northeastern University; Evangelos Kanoulas, Northeastern University; Virgil Pavlu, Northeastern University; Stefan Savev, Northeastern University; and Emine Yilmaz, Microsoft Research Cambridge.

Effective Query Expansion for Federated Search, by Milad Shokouhi, Microsoft Research Cambridge; Leif Azzopardi, University of Glasgow; and Paul Thomas, Commonwealth Scientific and Industrial Research Organisation (CSIRO).

Extracting Structured Information from User Queries with Semi-Supervised Conditional Random Fields, by Xiao Li, Microsoft Research Redmond; Ye-Yi Wang, Microsoft Research Redmond; and Alex Acero, Microsoft Research Redmond.

Named Entity Recognition in Query, by Jiafeng Guo, Chinese Academy of Sciences; Gu Xu, Microsoft Research Asia; Xueqi Cheng, Chinese Academy of Sciences; and Hang Li, Microsoft Research Asia.

On the Local Optimality of LambdaRank, by Pinar Donmez, Carnegie Mellon University; Krysta M. Svore, Microsoft Research Redmond; and Christopher J.C. Burges, Microsoft Research Redmond.

Optimizing Search Engine Revenue in Sponsored Search, by Yunzhang Zhu, Tsinghua University; Gang Wang, Microsoft Research Asia; Junli Yang, Nankai University; Dakan Wang, Shanghai Jiaotong University; Jun Yan, Microsoft Research Asia; Jian Hu, Microsoft Research Asia; and Zheng Chen, Microsoft Research Asia.

Predicting User Interests from Contextual Information, by Ryen W. White, Microsoft Research Redmond; Peter Bailey, Microsoft; and Liwei Chen, Microsoft.

Ranking Community Answers by Modeling Question-Answer Relationships via Analogical Reasoning, by Xin-Jing Wang, Microsoft Research Asia; Xudong Tu, Huazhong University of Science and Technology; Dan Feng, Huazhong University of Science and Technology; and Lei Zhang, Microsoft Research Asia.

Refined Experts: Improving Classification in Large Taxonomies, by Paul N. Bennett, Microsoft Research Redmond; and Nam Nguyen, Cornell University.

Risky Business: Modeling and Exploiting Uncertainty in Information Retrieval, by Jianhan Zhu, University College London; Jun Wang, University College London; Ingemar J. Cox, University College London; and Michael J. Taylor, Microsoft Research Cambridge.

Robust Sparse Rank Learning for Non-Smooth Ranking Measures, by Zhengya Sun, Chinese Academy of Sciences; Tao Qin, Microsoft Research Asia; Qing Tao, Chinese Academy of Sciences; and Jue Wang, Chinese Academy of Sciences.

Simultaneously Modeling Semantics and Structure of Threaded Discussions: A Sparse Coding Approach and Its Applications, by Chen Lin, Fudan University; Jiang-Ming Yang, Microsoft Research Asia; Rui Cai, Microsoft Research Asia; Xin-Jing Wang, Microsoft Research Asia; Wei Wang, Fudan University; and Lei Zhang, Microsoft Research Asia.

Smoothing Clickthrough Data for Web Search Ranking, by Jianfeng Gao, Microsoft Research Redmond; Wei Yuan, University of Montreal; Xiao Li, Microsoft Research Redmond; Kefeng Deng, Microsoft; and Jian-Yun Nie, University of Montreal.

SUSHI: Scoring Scaled Samples for Server Selection, by Paul Thomas, Commonwealth Scientific and Industrial Research Organisation (CSIRO); and Milad Shokouhi, Microsoft Research Cambridge.

The Impact of Crawl Policy on Web Search Effectiveness, by Dennis Fetterly, Microsoft Research Silicon Valley; Nick Craswell, Microsoft Research Cambridge; and Vishwa Vinay, Microsoft Research Cambridge.

Towards Methods for the Collective Gathering and Quality Control of Relevance Assessments, by Gabriella Kazai, Microsoft Research Cambridge; Natasa Milic-Frayling, Microsoft Research Cambridge; and Jamie Costello, Microsoft Research Cambridge.

Using Anchor Texts with Their Hyperlink Structure for Web Search, by Zhicheng Dou, Microsoft Research Asia; Ruihua Song, Microsoft Research Asia; Jian-Yun Nie, University of Montreal; and Ji-Rong Wen, Microsoft Research Asia.

Where to Stop Reading a Ranked List? Threshold Optimization Using Truncated Score Distributions, by Avi Arampatzis, University of Amsterdam; Jaap Kamps, University of Amsterdam; and Stephen Robertson, Microsoft Research Cambridge.

Posters

AdOn: An Intelligent Overlay Video Advertising System, by Jinlian Guo, University of Science and Technology of China; Tao Mei, Microsoft Research Asia; Falin Liu, University of Science and Technology of China; and Xian-Sheng Hua, Microsoft Research Asia.

Concept Representation Based Video Indexing, by Meng Wang, Microsoft Research Asia; Yan Song, University of Science and Technology of China; and Xian-Sheng Hua, Microsoft Research Asia.

Deep Versus Shallow Judgments in Learning to Rank, by Emine Yilmaz, Microsoft Research Cambridge; and Stephen Robertson, Microsoft Research Cambridge.

Estimating Query Performance Using Class Predictions, by Kevyn Collins-Thompson, Microsoft Research Redmond; and Paul N. Bennett, Microsoft Research Redmond.

Finding Advertising Keywords on Video Scripts, by Jung-Tae Lee, Korea University; Hyungdong Lee, Samsung Electronics; Hee-Seon Park, Samsung Electronics; Youngin Song, Microsoft Research Asia; and Hae-Chang Rim, Korea University.

Page Hunt: Improving Search Engines Using Human Computation Games, by Hao Ma, The Chinese University of Hong Kong; Raman Chandrasekar, Microsoft Research Redmond; Chris Quirk, Microsoft Research Redmond; and Abhishek Gupta, Georgia Institute of Technology.

Query Sampling for Ranking Learning in Web Search, by Linjun Yang, Microsoft Research Asia; Li Wang, University of Science and Technology of China; Bo Geng, Peking University; and Xian-Sheng Hua, Microsoft Research Asia.

A Ranking Approach to Keyphrase Extraction, by Xin Jiang, Peking University; Yunhua Hu, Microsoft Research Asia; and Hang Li, Microsoft Research Asia.

Serendipitous Search via Wikipedia: A Query Log Analysis, by Tetsuya Sakai, Microsoft Research Asia; and Kenichi Nogami, NewsWatch.

Temporal Query Substitution for Ad Search, by Wen Zhang, University of Science and Technology of China; Jun Yan, Microsoft Research Asia; Shuicheng Yan, National University of Singapore; Ning Liu, Microsoft Research Asia; and Zheng Chen, Microsoft Research Asia.

Topic (Query) Selection for IR Evaluation, by Jianhan Zhu, University College London; Jun Wang, University College London; Vishwa Vinay, Microsoft Research Cambridge; and Ingemar J. Cox, University College London.

Usefulness of Click-Through Data in Expert Search, by Craig Macdonald, University of Glasgow; and Ryen W. White, Microsoft Research Redmond.
