By Janie Chang, Writer, Microsoft Research
With its New Orleans location and a schedule that includes team competitions, an animation festival, musical performances, and a gallery of interactive art, the 36th annual International Conference and Exhibition on Computer Graphics and Interactive Techniques (SIGGRAPH 2009) might look suspiciously like a five-day party to outsiders. For attendees, however, this is the premier conference for computer graphics and interactive techniques, and it represents a valuable opportunity to listen to industry visionaries, get hands-on with technology, share their knowledge, and connect with other professionals. From Aug. 3-7, as many as 25,000 attendees are expected to flock to events at the Ernest N. Morial Convention Center in New Orleans.
The size of the conference is an indication not only of interest in this field, but also of the diversity and depth of topics that fall under the auspices of SIGGRAPH. It is often the details that make the difference when it comes to rendering realistic images, and the technical papers being presented during the conference reflect the various approaches researchers are taking to enable better accuracy and easier image manipulation. With 97 papers in 24 topic categories, it is clear that computer graphics and interactive techniques have found application in many aspects of art, animation, special effects, design, music, user interfaces, and engineering. Microsoft Research has been active in SIGGRAPH for almost 20 years, and this year is no different. SIGGRAPH 2009 has accepted seven technical papers from Microsoft Research for presentation at the conference, papers covering a wide range of technical disciplines and encompassing five of the topic categories: Light and Materials; Shape Editing and Deformation; Creating Natural Variations; Imaging and Rendering Pipeline; and Visual, Cut, Paste and Search.
In film production and image editing, both foreground and background details matter. One common editing task is replacement of a boring, featureless sky with one that is more dramatic or better-suited to the scene. Commercial image-search engines depend on keywords, so a search for “sunset” could turn up images of poor quality or images that were inaccurately labeled. If a user browses for images in a large collection page-by-page, it could be a time-consuming hunt through thousands of pages. This problem caught the interest of Jian Sun of Microsoft Research Asia, along with Litian Tao of Beihang University and Lu Yuan of the Hong Kong University of Science and Technology. Their research, described in the paper SkyFinder: Attribute-Based Sky Image Search and the associated video, aims to make searching for and replacing sky images a semantic process.
The SkyFinder system automatically extracts a set of sky attributes—such as category, layout, richness, horizon, or sun position—from each image in a collection. This enables users to search the collection interactively at the semantic level using text queries, like “a landscape at sunset with the sun at the bottom left,” a simple process that belies the complex, content-based image search the system accomplishes. SkyFinder takes functionality a step further by adapting the brightness and color of the image’s foreground to be visually plausible with the new sky, a finishing detail users are sure to appreciate.
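To give a sense of how attribute-based search can work, here is a minimal sketch in Python. The SkyImage record, the attribute names, and the search function are illustrative assumptions rather than the SkyFinder implementation; the real system extracts these attributes from each image automatically and supports free-text queries, neither of which is shown here.

```python
# Minimal sketch of attribute-based sky search (illustrative, not SkyFinder's code).
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class SkyImage:
    path: str
    category: str                                  # e.g. "sunset", "blue sky", "cloudy"
    sun_position: Optional[Tuple[float, float]]    # normalized (x, y); None if no visible sun
    horizon: float                                 # normalized height of the horizon line
    richness: float                                # 0..1 score for cloud/texture detail

def search(collection: List[SkyImage],
           category: str,
           sun_region: Optional[Tuple[float, float, float, float]] = None) -> List[SkyImage]:
    """Return images matching a category and, optionally, a sun-position box."""
    results = []
    for img in collection:
        if img.category != category:
            continue
        if sun_region is not None:
            if img.sun_position is None:
                continue
            x0, y0, x1, y1 = sun_region
            sx, sy = img.sun_position
            if not (x0 <= sx <= x1 and y0 <= sy <= y1):
                continue
        results.append(img)
    # Rank candidates so the most detailed skies come first.
    return sorted(results, key=lambda i: i.richness, reverse=True)

# "A sunset with the sun at the bottom left" becomes a category plus a region:
# matches = search(collection, "sunset", sun_region=(0.0, 0.6, 0.4, 1.0))
```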
Wholesale replacement of a sky is at the opposite end of the spectrum from work that another team has been doing. Paint Selection—a paper from Sun; Harry Shum, Microsoft corporate vice president of Search Development; and Jiangyu Liu of the University of Science and Technology of China—explores the challenges of editing highly detailed, multi-megapixel images. Local selection of an image area is one of the most powerful functions in graphics editing, whether the task is a cut and paste or a re-coloring. Today’s digital cameras capture images at extremely high resolutions, but with such high resolution come extremely large file sizes. When it comes to editing images of 10 to 24 megapixels, users encounter performance delays, which are especially frustrating during interactive image editing, when instant feedback is critical.
Current global optimization techniques are inadequate to the task of working with so many pixels. The researchers observed that interactive local selection is a progressive process in which users create the selection step-by-step. Thus, they could dispense with solving the global optimization problem from scratch for each user interaction; instead, they could progressively solve a series of local optimization problems to match users’ directions, without sacrificing usability and selection quality.
The resulting tool, Paint Selection, takes a progressive painting approach for local selection in images, making it easy for users to make a selection by roughly identifying the area of interest using a brush, as demonstrated in this video. The progressive-selection algorithm and new optimization techniques mean that Paint Selection is efficient enough to provide instant feedback as users drag the mouse, enabling high-quality selections to be quickly and effectively “painted” onto a variety of multi-megapixel images, even 110-megapixel panoramas.
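A toy sketch of the progressive idea follows, assuming NumPy arrays for the image and masks. The actual Paint Selection system solves a graph-cut optimization over a local region around each new brush stroke; here a simple color-distance rule stands in for that step, but the key point is the same: only a window around the stroke is recomputed, never the whole multi-megapixel image.

```python
# Toy sketch of progressive local selection (illustrative; not the paper's algorithm).
import numpy as np

def update_selection(image: np.ndarray,
                     selection: np.ndarray,
                     stroke_mask: np.ndarray,
                     window: int = 60,
                     threshold: float = 30.0) -> np.ndarray:
    """Grow `selection` using a new brush stroke, touching only a local window.

    image:       H x W x 3 uint8 array
    selection:   H x W bool array, the selection so far
    stroke_mask: H x W bool array, pixels covered by the new stroke
    """
    ys, xs = np.nonzero(stroke_mask)
    if len(ys) == 0:
        return selection
    # Bounding window around the stroke: the only region we recompute.
    y0, y1 = max(ys.min() - window, 0), min(ys.max() + window, image.shape[0])
    x0, x1 = max(xs.min() - window, 0), min(xs.max() + window, image.shape[1])
    local = image[y0:y1, x0:x1].astype(np.float32)
    # Mean color of the stroke pixels serves as a crude foreground model.
    fg_color = image[stroke_mask].astype(np.float32).mean(axis=0)
    dist = np.linalg.norm(local - fg_color, axis=2)
    local_sel = dist < threshold
    out = selection.copy()
    # Merge the local result back; pixels outside the window are untouched,
    # which is what keeps each interaction fast on very large images.
    out[y0:y1, x0:x1] |= local_sel
    out |= stroke_mask
    return out
```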
Working with light is a perennial challenge. Kernel Nyström Method for Light Transport, a paper co-authored by Jiaping Wang, Xin Tong, Zhouchen Lin, and Baining Guo of Microsoft Research Asia, along with Yue Dong, a student at Tsinghua University who is interning at Microsoft Research Asia this summer, studies the problem of image-based relighting.
The goal is to take real-world images of a scene and reconstruct it with different lighting by using the light-transport matrix derived from those images, rather than through arduous modeling. But in order to generate high-quality results for images with complex lighting, existing techniques often require tens of thousands of images to reconstruct the light-transport matrix accurately. The team’s objective was to achieve detailed, high-quality results using a small number of images; the research reduces the number of required images from tens of thousands to mere hundreds.
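The mathematical core here is the Nyström method, which reconstructs a low-rank matrix from a small set of sampled rows and columns. The sketch below shows that basic reconstruction step on a synthetic matrix; the kernel mapping the paper adds to handle the nonlinearity of real light transport, and everything about capturing the images themselves, is omitted, and the variable names are illustrative.

```python
# Plain Nyström reconstruction: given sampled columns C, sampled rows R, and
# their intersection block W of a large matrix T, estimate T ~ C @ pinv(W) @ R.
import numpy as np

def nystrom_reconstruct(C: np.ndarray, R: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Approximate the full matrix from sampled columns and rows.

    C: (m, c) sampled columns, R: (r, n) sampled rows,
    W: (r, c) block where the sampled rows and columns intersect.
    """
    return C @ np.linalg.pinv(W) @ R

# Tiny demonstration on a synthetic low-rank "transport" matrix.
rng = np.random.default_rng(0)
T = rng.random((200, 5)) @ rng.random((5, 300))     # rank-5 ground truth
cols = rng.choice(300, 20, replace=False)
rows = rng.choice(200, 20, replace=False)
T_hat = nystrom_reconstruct(T[:, cols], T[rows, :], T[np.ix_(rows, cols)])
print(np.abs(T - T_hat).max())                      # near zero for a truly low-rank matrix
```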
Making life easier for animators also poses interesting challenges. Weiwei Xu, associate researcher at Microsoft Research Asia, has taken an intriguing approach to improve direct manipulation of complex, jointed objects in Joint-Aware Manipulation of Deformable Models, a paper co-authored by Xu; his lab colleagues KangKang Yin and Guo; Jun Wang and Falai Chen of the University of Science and Technology of China; Kun Zhou of Zhejiang University; and Michiel van de Panne of the University of British Columbia.
Complex models of mechanical or biological objects often consist of multiple components connected by various types of joints. In the real world, an object’s movement is constrained by the natural range of motion of its joints; in animation, the objective is deformation that is naturalistic and physically plausible, a task that requires attention to every detail of movement. The project applies motion analysis to computer-aided-design models to determine automatically the location and constraints of joints; the results are used to construct a “joint-aware” deformation framework that supports the direct, interactive manipulation of models in a manner that respects their constraints.
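As a small illustration of what respecting a joint constraint can mean in practice, the sketch below clamps a user-requested rotation of one component to the limits of a hinge joint before applying it. The hinge axis, the angle limits, and the helper functions are assumptions for illustration only; the paper's framework goes much further, detecting joints automatically from the model and solving for all components together.

```python
# Illustrative hinge-joint clamp during direct manipulation (not the paper's method).
import numpy as np

def clamp_hinge_angle(desired_angle: float, lo: float, hi: float) -> float:
    """Clamp a requested rotation about a hinge axis to the joint's limits."""
    return float(np.clip(desired_angle, lo, hi))

def rotate_about_axis(points: np.ndarray, axis: np.ndarray,
                      pivot: np.ndarray, angle: float) -> np.ndarray:
    """Rotate points about an axis through a pivot (Rodrigues' formula)."""
    axis = axis / np.linalg.norm(axis)
    p = points - pivot
    cos, sin = np.cos(angle), np.sin(angle)
    rotated = (p * cos
               + np.cross(axis, p) * sin
               + axis * (p @ axis)[:, None] * (1.0 - cos))
    return rotated + pivot

# Dragging a part asks for 120 degrees, but the hinge only allows 0..90:
part = np.array([[1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
angle = clamp_hinge_angle(np.deg2rad(120), 0.0, np.deg2rad(90))
moved = rotate_about_axis(part, axis=np.array([0.0, 0.0, 1.0]),
                          pivot=np.zeros(3), angle=angle)
```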
Underscoring Microsoft Research’s active collaboration with academia, all seven of its SIGGRAPH 2009 papers are the results of research partnerships with universities:
Example-Based Hair Geometry Synthesis Lvdi Wang, Tsinghua University; Yizhou Yu, University of Illinois at Urbana-Champaign; Kun Zhou, Zhejiang University; Baining Guo, Microsoft Research Asia.
GRAMPS: A Programming Model for Graphics Pipelines Jeremy Sugerman, Kayvon Fatahalian, and Solomon Boulos, Stanford University; Kurt Akeley, Microsoft Research Silicon Valley; Pat Hanrahan, Stanford University.
Joint-Aware Manipulation of Deformable Models Weiwei Xu, Microsoft Research Asia; Jun Wang, University of Science and Technology of China; KangKang Yin, Microsoft Research Asia; Kun Zhou, Zhejiang University; Michiel van de Panne, University of British Columbia; Falai Chen, University of Science and Technology of China; Baining Guo, Microsoft Research Asia.
Kernel Nyström Method for Light Transport Jiaping Wang, Microsoft Research Asia; Yue Dong, Tsinghua University and Microsoft Research Asia; Xin Tong and Zhouchen Lin, Microsoft Research Asia; Baining Guo, Microsoft Research Asia and Tsinghua University.
Paint Selection Jiangyu Liu, University of Science and Technology of China; Jian Sun, Microsoft Research Asia; Heung-Yeung Shum, Microsoft.
SkyFinder: Attribute-Based Sky Image Search Litian Tao, Beihang University; Lu Yuan, Hong Kong University of Science and Technology; Jian Sun, Microsoft Research Asia.
SubEdit: A Representation for Editing Measured Heterogeneous Subsurface Scattering Ying Song, Zhejiang University; Xin Tong, Microsoft Research Asia; Fabio Pellacini, Dartmouth College; Pieter Peers, University of Southern California.
Images included in this article are © Copyright 2009 by ACM. The images included here are the author’s version of the work. They are posted here by permission of ACM for your personal use. Not for redistribution. To obtain the definitive version of these images, please go to http://www.acm.org/.