Stereo pairs from linear morphing


Proc. of SPIE Vol. 3295, Stereoscopic Displays and Virtual Reality Systems V, ed. M. T. Bolas, S. S. Fisher, J. O. Merritt (Apr. 1998). Copyright SPIE.

David F. McAllister
Multimedia Lab, Department of Computer Science
North Carolina State University
Raleigh, NC 27695-7534

ABSTRACT

Several authors have recently investigated the ability to compute intermediate views of a scene from given 2D images taken at arbitrary camera positions. The methods fall under the topic of image based rendering. In the case we treat here, linear morphing between two parallel views of a scene produces the intermediate views that would have been produced by parallel movement of a camera. Hence, the technique produces images computed in a way that is consistent with the standard off-axis perspective projection method for computing stereo pairs. Using available commercial 2D morphing software, linear morphing can be used to produce stereo pairs from a single image with bilateral symmetry, such as a human face; in this case the second image is produced by horizontal reflection. We describe morphing and show how it can be used to produce stereo pairs from single images.

Keywords: stereo, 3D imaging, linear morphing, 2D morph, plenoptic modeling, interpolation

1. INTRODUCTION

Stereo photographers taught us that the correct way to compute stereo pairs is to produce left and right images from cameras that have been displaced horizontally, so that the two images are parallel views of the same scene. In computer graphics we simulate parallel views using off-axis perspective projections: two centers of projection, each translated from the z axis by an amount less than one half the interocular distance (see Figure 1), with the view volume for each eye being a skewed frustum (truncated pyramid) whose extents are determined by the boundaries of the viewing window. Simple affine and perspective transformations can be used to convert each truncated frustum into a canonical rectangular view volume for rapid clipping and rendering [5].

Figure 1: Off-axis perspective projections.
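The off-axis setup just described can be made concrete with a short sketch. The Python fragment below is my own illustration and not code from the paper; the window size, eye separation, and near/far distances are assumed, illustrative values. It computes the asymmetric (skewed) frustum bounds for each eye, with zero parallax at the viewing window.

```python
# A minimal sketch (not from the paper) of off-axis stereo frusta: each eye's
# center of projection is shifted horizontally from the z axis, and its skewed
# frustum is bounded by the shared viewing window (the zero-parallax plane).

def off_axis_frustum(eye_offset, half_width, half_height, window_distance,
                     near, far):
    """Frustum bounds (left, right, bottom, top, near, far) at the near plane
    for a center of projection displaced by eye_offset along x. The viewing
    window spans [-half_width, half_width] x [-half_height, half_height] at
    window_distance in front of the eyes."""
    scale = near / window_distance
    left = (-half_width - eye_offset) * scale
    right = (half_width - eye_offset) * scale
    bottom = -half_height * scale
    top = half_height * scale
    return left, right, bottom, top, near, far

interocular = 0.065   # rough human eye separation in metres (assumed value)
# Here each center of projection sits half the interocular distance off the z axis.
print(off_axis_frustum(-interocular / 2, 0.2, 0.15, 1.0, 0.1, 100.0))  # left eye
print(off_axis_frustum(+interocular / 2, 0.2, 0.15, 1.0, 0.1, 100.0))  # right eye
```

The two frusta are mirror images of each other and share the same viewing window, which is exactly what makes their images a correctly registered stereo pair.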

2. IMAGE BASED RENDERING

There has been considerable interest recently in rapid rendering of scenes without having to know the underlying 3D geometry. Visualization and VRML have spawned research in how to produce new images from old, either by combining images or by warping images to reflect new camera positions. Here we show how one such technique can be used to produce an alternate view of a scene from a single view, yielding a stereo pair without knowledge of the underlying geometry. The technique does, however, require good commercial software and considerable labor on the part of the user.

Love [5] suggests a technique he calls pixel shifting to produce stereo images rapidly from a single image when the depth of the object projecting to each pixel is known. He originally proposed it for stereo animation. The geometry is known for one eye and is used to infer the geometry for the other. It is a scan-line based algorithm: it is very fast, can be very inaccurate, and ignores the hidden surface problem. Love suggests filling the holes produced by hidden surfaces by linear interpolation, which can obviously produce severe anomalies in the resulting image. The method is suggested in Figure 2, and a code sketch of the idea is given after Figure 3 below.

Figure 2: Pixel shifting.

Chen and Williams [2] study the case of producing parallel views of a scene from two images without requiring depth information. They were the first to argue that linear interpolation between identical features in the two images produces new perspective views when a camera moves parallel to the image plane. This is exactly the case that arises in producing stereo pairs, and we exploit it here. See Figure 3.

Figure 3: Linear interpolation of parallel views preserves shape.
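The following sketch is my reconstruction of Love's pixel-shifting idea, not his code. The disparity formula and its parameters (eye_sep, focal_px, zero_parallax_depth) are assumptions for illustration only.

```python
# Scan-line pixel shifting: given one eye's image and a per-pixel depth map,
# shift each pixel horizontally by a depth-dependent disparity to approximate
# the other eye's view, then fill the holes left by hidden surfaces by linear
# interpolation along the scan line.
import numpy as np

def pixel_shift(image, depth, eye_sep, focal_px, zero_parallax_depth):
    """image: (H, W, 3) floats; depth: (H, W) positive depths per pixel."""
    h, w, _ = image.shape
    out = np.zeros_like(image)
    zbuf = np.full((h, w), np.inf)        # crude visibility: nearer pixel wins
    disparity = eye_sep * focal_px * (1.0 / zero_parallax_depth - 1.0 / depth)
    for y in range(h):                    # scan-line algorithm
        for x in range(w):
            xs = int(round(x + disparity[y, x]))
            if 0 <= xs < w and depth[y, x] < zbuf[y, xs]:
                out[y, xs] = image[y, x]
                zbuf[y, xs] = depth[y, x]
        hole = np.isinf(zbuf[y])          # columns no source pixel landed on
        if hole.any() and not hole.all():
            known = np.nonzero(~hole)[0]
            for c in range(3):            # fill holes by linear interpolation
                out[y, hole, c] = np.interp(
                    np.nonzero(hole)[0], known, out[y, known, c])
    return out
```

Because nothing constrains what should appear in the holes, the interpolation simply smears neighboring colors across them, which is the source of the anomalies noted above.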

Steven M. Seitz and Charles R. Dyer [6] have extended the above concepts to handle the case in which the cameras do not necessarily move parallel to the image plane. They use pre- and postwarping, which involves projection to and from parallel images (Figure 4); intermediate views from the parallel images are computed using linear interpolation. They call their technique view morphing.

Figure 4: Pre/post warp in view morphing.

We note that linear interpolation of perspective warps does not necessarily preserve the proper depth relationships between objects in a scene; in particular, a linear morph does not necessarily preserve lines. We cannot, therefore, interpolate between two arbitrary camera views and expect to produce consistent intermediate images, as Figure 5 suggests: in that case the interpolation is not shape preserving. In addition, commercial morphing software normally requires the beginning and ending images to have the same dimensions.

Figure 5: Projection warps are not preserved under linear interpolation.

L. McMillan and G. Bishop [4] have introduced the concept of plenoptic modeling. They determine the flow of points in an image that would take place if the camera were to move along an arbitrary path. Points projected onto the film plane of the camera follow paths that depend on the motion of the camera relative to the original view; those paths are called epipolar lines and are projections of rays from the epipole, or center of projection (COP), of the original camera position. They show that visibility can be handled by dividing an image into quadrants that depend on the epipole of the new camera position. Hidden surfaces in the original scene that become visible after camera motion leave holes, because there is no way to determine what is hidden by a given surface without considering additional images that reveal such information.
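The property underlying Figure 3, and the failure illustrated by Figure 5, are easy to verify numerically. The check below is my own illustration using a simple pinhole model; none of the values come from the cited papers.

```python
# For cameras that differ only by a translation parallel to the image plane,
# linearly interpolating the two projections of a point reproduces exactly the
# projection from the interpolated camera position (Figure 3). Once a rotation
# is involved, the same interpolation no longer matches the intermediate
# pinhole view, which is what view morphing's pre/post-warping corrects for.
import numpy as np

def project(P, cam_pos, R=np.eye(3), f=1.0):
    """Pinhole projection of world point P seen from cam_pos with
    world-to-camera rotation R; returns image-plane coordinates (u, v)."""
    p = R @ (P - cam_pos)
    return f * p[:2] / p[2]

def rot_y(angle):
    ca, sa = np.cos(angle), np.sin(angle)
    return np.array([[ca, 0.0, sa], [0.0, 1.0, 0.0], [-sa, 0.0, ca]])

P  = np.array([0.3, -0.2, 4.0])           # a scene point
c1 = np.array([-0.033, 0.0, 0.0])         # one eye
c2 = np.array([+0.033, 0.0, 0.0])         # the other eye: parallel translation
s  = 0.4                                  # interpolation parameter

u_interp = s * project(P, c2) + (1 - s) * project(P, c1)
u_exact  = project(P, s * c2 + (1 - s) * c1)
print(np.allclose(u_interp, u_exact))     # True: parallel views interpolate exactly

# Rotate the second camera by 20 degrees about y and compare against the view
# from the interpolated position with a proportionally interpolated rotation.
angle = np.radians(20.0)
u_interp_rot = s * project(P, c2, rot_y(angle)) + (1 - s) * project(P, c1)
u_exact_rot  = project(P, s * c2 + (1 - s) * c1, rot_y(s * angle))
print(np.allclose(u_interp_rot, u_exact_rot))  # False: not shape preserving
```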

3. MORPHING

A morph is a gradual warping of one image or object into another. Many technical and subjective constraints can be placed on morphs, depending on the goal of the implementor. We restrict our attention to linear morphs, that is, morphs that use linear interpolation: a point P1 in image I1 (s = 0) is transformed linearly into the point P2 in image I2 (s = 1). The intermediate points P(s) in the intermediate images depend linearly on the parameter s as follows:

P(s) = s P2 + (1 - s) P1,  0 ≤ s ≤ 1.

The transformation is applied to position, color, and region shape and dimensions. Normally we specify regions or features in I1 and their matching or corresponding features in I2, and the morph technique ensures that the necessary region warping takes place and provides antialiasing if it is needed [1, 3, 7]. Features that are not present in both images may produce holes, folds or ghosting in the intermediate images. Figure 6 shows the initial and final images for Nancy and the region specification possible using Gryphon's Morph 2.5, which implements many of the techniques described in the previous references.

Figure 6 (a, b): Morph region specifications - Nancy.

4. STEREO PAIRS FROM SINGLE IMAGES

By reflecting horizontally an image that has bilateral symmetry, we can create what we may treat as two parallel views of the object. Using linear morphing, we can then create intermediate images that are themselves parallel views of the scene. The point is that the linear interpolation between matching features automatically produces the correct parallax for each feature in the intermediate images. By choosing values of the morphing parameter that are sufficiently close, we can generate a sequence of stereo pairs from a single image. As long as the two views have all visible surfaces in common, the technique will not produce intermediate images with anomalies such as holes. The examples of Nancy and the Mona Lisa below have this property (apologies to Leonardo da Vinci). We note that Nancy's hair arrangement is not perfectly symmetric, and in Figure 9-b some ghosting appears. The stereo pairs are arranged in threes (right-left-right) for both cross and parallel viewing.
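The single-image technique can be sketched numerically. The fragment below is my own illustration, not the Gryphon Morph workflow used for the figures: it reflects a set of hypothetical feature points about the vertical image axis and applies the linear morph P(s) = s P2 + (1 - s) P1 for two nearby values of s, showing that the interpolation automatically yields horizontal parallax that grows with a feature's distance from the symmetry axis. The actual intermediate images in the paper were produced by commercial morphing software, which also warps and cross-dissolves the image regions.

```python
# Reflect a bilaterally symmetric image's feature points to obtain the second
# "parallel view", then apply the paper's linear morph for two nearby s values.
# The feature coordinates below are hypothetical.
import numpy as np

def reflect_features(features, width):
    """Mirror feature points (x, y) about the vertical image axis."""
    mirrored = features.copy()
    mirrored[:, 0] = width - 1 - mirrored[:, 0]
    return mirrored

def linear_morph(P1, P2, s):
    """The paper's interpolation: P(s) = s*P2 + (1 - s)*P1, 0 <= s <= 1."""
    return s * P2 + (1.0 - s) * P1

width = 512
P1 = np.array([[180.0, 200.0],     # left eye of the face (hypothetical)
               [330.0, 202.0],     # right eye of the face
               [256.0, 330.0]])    # tip of the nose, near the symmetry axis
P2 = reflect_features(P1, width)   # matching features in the reflected image

s_a, s_b = 0.60, 0.76              # two nearby morph parameters (cf. Figure 9)
view_a = linear_morph(P1, P2, s_a)
view_b = linear_morph(P1, P2, s_b)

parallax = view_b[:, 0] - view_a[:, 0]
print(parallax)   # equals (s_b - s_a) * (width - 1 - 2 * x1) per feature
# Features far from the symmetry axis get large horizontal parallax while
# features on the axis get almost none, which is what makes the pair of
# intermediate images a stereo pair.
```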

Figure 7: Nancy - original view (s = 0).
Figure 8: Nancy - reflected view (s = 1).
Figure 9-a: Nancy (s = 0.6). Figure 9-b: Nancy (s = 0.76). Figure 9-c: Nancy (s = 0.6).


5. SUMMARY AND CONCLUSIONS

Research in image based rendering has made it possible to create stereo images from 2D images without having to produce a 3D model of the scene or the actual photographs. Predicting pixel flow based on camera movement has made this possible. This paper has shown how view morphing can be used to produce stereo images from a single image of an object that has bilateral symmetry.

ACKNOWLEDGMENTS

I wish to thank S. Seitz and C. Dyer for allowing me to use the animations appearing on their Web site.

REFERENCES

1. Beier, T. and Neely, S., "Feature-based image metamorphosis," Proc. ACM SIGGRAPH 92, pp. 35-42, 1992.
2. Chen, S. E. and Williams, L., "View interpolation for image synthesis," Proc. ACM SIGGRAPH 93, pp. 279-288, 1993.
3. Lee, S. Y., Chwa, K. Y., Shin, S. Y., and Wolberg, G., "Image metamorphosis using snakes and free-form deformations," Proc. ACM SIGGRAPH 95, pp. 439-448, 1995.
4. McMillan, L. and Bishop, G., "Plenoptic modeling," Proc. ACM SIGGRAPH 95, pp. 160-165, 1995.
5. McAllister, D. F., Ed., Stereo Computer Graphics and Other True 3D Technologies, Princeton University Press, Princeton, NJ, Oct. 1993.
6. Seitz, S. M. and Dyer, C. R., "View morphing," Proc. ACM SIGGRAPH 96, pp. 21-30, 1996.
7. Wolberg, G., Digital Image Warping, IEEE Computer Society Press, Los Alamitos, CA, 1990.

dfnz@adnz.csc.ncsu.edu
http://multimedia.csc.ncsu.edu/