
Microsoft Drives Innovation in Video, Graphics and Photography at SIGGRAPH



Q&A: Richard Szeliski, principal researcher at Microsoft Research, discusses Microsoft’s participation at SIGGRAPH 2008 and the research group’s commitment to pushing the state of the art in graphics innovation.

LOS ANGELES — Computer graphics are everywhere: on the Web, at the movies, in video games and on our cell phones. This week, the computer graphics world has converged at the Los Angeles Convention Center for SIGGRAPH 2008, the annual international conference for presenting new scholarly work in visual effects, animation and interactive techniques. Microsoft researchers are presenting papers on such diverse topics as computational photography, image-based modeling and video editing.


PressPass spoke with Richard Szeliski, principal researcher in the Interactive Visual Media group at Microsoft Research, and co-author of two papers being presented at SIGGRAPH, to find out how cutting-edge research from Microsoft is fueling the future of the computer graphics and visual effects environment.

PressPass: What makes SIGGRAPH such an important event for the graphics industry and specifically for Microsoft?

Szeliski: Within the industry, SIGGRAPH is viewed as the launching pad for innovations in computer graphics, a space that is very important to Microsoft. Graphics are used extensively today, from computer games and data exploration to image-based modeling and photo editing. SIGGRAPH features a lot of new, experimental work in the graphics field, and breakthroughs unveiled at SIGGRAPH frequently end up in Microsoft’s image-editing products. Microsoft Research is dedicated to advancing the state of the art in graphics and has remained at the forefront of innovation in this space.

PressPass: Do you anticipate breakthroughs in any one area of computer graphics to emerge as a strong trend at SIGGRAPH this year?

Szeliski: Last year photography was a strong trend at SIGGRAPH, especially computational photography, and I expect it will continue to be a major trend at this year’s conference as well. Our group has two papers that address computational photography, an area where a great deal of innovation and experimentation is currently taking place.

PressPass: What is computational photography?

Szeliski: Computational photography is a rapidly expanding field at the convergence of photography, computer vision, image processing and computer graphics. It leverages the power of digital processing to overcome limitations of traditional photography and offers opportunities for the enhancement and enrichment of visual media.

PressPass: What can you tell us about your paper titled “Finding Paths Through the World’s Photos,” which is being presented at the conference?

Szeliski: This paper is a follow-up to an earlier project, Photo Tourism, a technology developed in collaboration with Steven Seitz and Noah Snavely of the University of Washington and presented two years ago at SIGGRAPH. Photo Tourism is a system for interactively browsing and exploring large collections of photographs of a scene using a 3-D interface. The system takes sets of images from either personal photograph collections or photograph-sharing Web sites, and automatically computes each photo’s viewpoint within a 3-D model of the scene. The photo explorer interface then lets the viewer move about the 3-D space by transitioning between photographs under the user’s control.

Our paper addresses better ways to navigate between the photos in a Photo Tour by displaying in-between images as the user moves through successive photos or rotates around an object. When a scene is photographed many times by different people, the viewpoints often cluster along certain paths that are specific to that scene. We discover such paths and turn them into controls for image-based navigation, so that the scene can then be browsed interactively in 3-D.
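To make the idea concrete, here is a minimal Python sketch (not taken from the paper) of one highly simplified way to order photos along a dominant viewing path. It assumes each photo’s camera position has already been recovered, as the Photo Tourism pipeline does via structure from motion, and the PCA-based ordering is only an illustrative stand-in for the paper’s far richer path discovery and rendering machinery.

```python
# Illustrative only: order photos along the dominant direction of their
# recovered camera positions. Assumes camera centers are already known
# (e.g., from a structure-from-motion step); the real system discovers and
# renders scene-specific paths rather than a single straight axis.
import numpy as np

def order_photos_along_path(camera_positions):
    """Return photo indices sorted along the principal viewpoint axis."""
    centers = np.asarray(camera_positions, dtype=float)
    centered = centers - centers.mean(axis=0)
    # Dominant direction of the viewpoint cluster via PCA (SVD).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    t = centered @ vt[0]          # 1-D coordinate along that direction
    return np.argsort(t)          # a rough browsing order for a path control

# Example: five photos taken while walking past a building facade.
positions = [[0, 0.0, 0.0], [2, 0.1, 0.0], [4, -0.2, 0.1],
             [6, 0.0, 0.0], [8, 0.3, -0.1]]
print(order_photos_along_path(positions))   # [0 1 2 3 4] or its reverse
```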

PressPass: What area of research do you address in your second paper, “Edge-Preserving Decompositions for Multi-Scale Tone and Detail Manipulation”?

Szeliski: This paper is more of a classical computational photography study and deals with multiscale tone and detail management. In this paper, we look at the possibilities of enhancing contrast in photographs in a way that lets the user control whether details or global properties are modified.

This study explores the potential of controlling visual qualities such as tonal balance and the amount of detail. Earlier techniques have generally lacked the ability to control the scale at which filtering effects are applied. The objective of our new algorithms is to achieve very fine-scale detail control in photographs without producing what we call “halo artifacts” near the edges of objects in the rendered image. We are working within the realm of photo editing and exploring ways to make photographs look better.
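As an illustration of that base-plus-detail idea, the Python sketch below builds a small multi-scale decomposition and amplifies the detail layers. It is not the paper’s algorithm: the paper constructs its decomposition with an edge-preserving weighted least squares filter, whereas this sketch substitutes OpenCV’s bilateral filter as a simpler edge-preserving smoother, and the filter parameters and boost factors are arbitrary demonstration values.

```python
# Minimal sketch of multi-scale tone/detail manipulation. A bilateral
# filter stands in for the paper's edge-preserving (weighted least
# squares) smoother; sigma values and boost factors are arbitrary.
import cv2
import numpy as np

def boost_detail(image_path, boosts=(1.5, 1.2)):
    img = cv2.imread(image_path).astype(np.float32) / 255.0

    # Progressively coarser edge-preserving smoothings of the image.
    levels = [img]
    for sigma_color, sigma_space in ((0.1, 10), (0.4, 40)):
        levels.append(cv2.bilateralFilter(levels[-1], 9,
                                          sigma_color, sigma_space))

    base = levels[-1]                     # coarsest layer: overall tone
    details = [levels[i] - levels[i + 1]  # detail lost between scales
               for i in range(len(levels) - 1)]

    # Recombine, amplifying each detail layer by its own factor so fine
    # and medium-scale detail can be controlled independently.
    out = base
    for detail, boost in zip(details, boosts):
        out = out + boost * detail
    return np.clip(out, 0.0, 1.0)

# result = boost_detail("photo.jpg")
# cv2.imwrite("photo_boosted.png", (result * 255).astype(np.uint8))
```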

PressPass: What are some of the real-world applications of this technology?

Szeliski: Advanced features such as detail manipulation and photo-quality enhancement can be incorporated into photo-editing products. Both home users and professionals actively use photo-editing software, and people are always trying to increase the contrast and visibility of details in the photos they take. These techniques let them do that without any distracting halo artifacts.

[Image caption] Editing a video of a talking head. The input images are automatically converted to an “unwrap mosaic” without an intervening 3-D reconstruction. Painting the mosaic and re-rendering the video allows us to add virtual make-up (eyebrows, moustache and rouge on the cheeks) to the actor, as if to a texture map on a deformable 3-D surface. From the paper “Unwrap Mosaics: A New Representation for Video Editing.”

PressPass: What are some of the significant technology launches that you’ve witnessed at past SIGGRAPH conferences?

Szeliski: There have been many important launches over the years. In recent years, computational photography has probably been the most significant area in the SIGGRAPH spotlight. We’ve also seen a lot of activity in areas such as image-based rendering and geometric modeling, which gives structure to 3-D computer graphics. At earlier SIGGRAPH conferences, my colleague Hugues Hoppe presented groundbreaking work on progressive meshes, which enable a single 3-D model to transition smoothly among different levels of complexity. And there is always a fair amount of innovation around more classical areas such as computer animation, which underlies the gaming industry.

PressPass: What is the value of industry-academia collaboration?

Szeliski: Microsoft Research has been actively building partnerships with academia over the years. Many of our researchers are visiting professors at universities. We regularly advise students and frequently recruit interns to come in and work with us on projects. Some of our researchers have colleagues at universities and have engaged in joint research ventures with them. Microsoft’s participation at SIGGRAPH also underscores this ongoing emphasis on collaboration with academia. Ten of the Microsoft-presented papers were co-authored with academic partners from around the globe, and both of my papers being presented at SIGGRAPH this year were co-written with university researchers from the University of Washington and the Hebrew University of Jerusalem.



Microsoft Research at SIGGRAPH
Microsoft Research papers being presented during SIGGRAPH 2008 address diverse aspects of photography, video editing and visual graphics. Among them:

• “Unwrap Mosaics: A New Representation for Video Editing” (ACM Transactions on Graphics) by A. Rav-Acha, P. Kohli, C. Rother and A. Fitzgibbon introduces a new video representation to simplify editing tasks. The findings enable both home users and professionals to easily deform 3-D objects and edit complex imagery using a 2-D representation.

• “Factoring Repeated Content Within and Among Images” by H. Wang, Y. Wexler, E. Ofek and H. Hoppe presents ways to reduce transmission bandwidth and memory space for images by factoring repeated content. The research shows that retaining only a portion of an image is sufficient for rendering the whole, reducing the amount of information that needs to be stored both on disk and in memory (a toy sketch of this idea appears after this list).

• “BSGP: Bulk-Synchronous GPU Programming” by Kun Zhou and Baining Guo presents a new programming language for general-purpose computing on the GPU. By adding a bare minimum of extra information to a sequential C program, programmers can describe parallel processing on the GPU. The most important advantage of BSGP is that it is easy to read, write and maintain.
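As a toy illustration of the factoring idea in the “Factoring Repeated Content” summary above, the Python sketch below keeps each exactly repeated 8-by-8 block of an image only once, together with an index map that says how to reassemble the picture. The published method is far more general, matching content approximately and across transformations; the function here is purely hypothetical.

```python
# Toy illustration of factoring repeated image content: store each unique
# 8x8 block once, plus an index map describing how to tile them back into
# the full image. The published method matches blocks approximately and
# across transformations; this sketch only catches exact repeats.
import numpy as np

def factor_exact_repeats(img, block=8):
    h, w = img.shape[:2]
    h, w = h - h % block, w - w % block            # crop to block multiples
    seen, unique_blocks = {}, []
    layout = np.zeros((h // block, w // block), dtype=np.int32)

    for by in range(0, h, block):
        for bx in range(0, w, block):
            patch = img[by:by + block, bx:bx + block]
            key = patch.tobytes()                  # hashable block signature
            if key not in seen:
                seen[key] = len(unique_blocks)
                unique_blocks.append(patch)
            layout[by // block, bx // block] = seen[key]

    return np.stack(unique_blocks), layout         # stored content + tiling

# A synthetic image that repeats one 8x8 pattern 256 times compresses to a
# single stored block plus a small index map.
img = np.tile(np.arange(64, dtype=np.uint8).reshape(8, 8), (16, 16))
blocks, layout = factor_exact_repeats(img)
print(blocks.shape[0], "unique block(s) for", layout.size, "block positions")
```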






