A Review of Image-based Rendering Techniques
Nisha 1, Vijaya Goel 2
1 Department of Computer Science, University of Delhi, Delhi, India


2 Keshav Mahavidyalaya, University of Delhi, Delhi, India

Abstract
In this paper, we survey image-based rendering techniques and the fundamental principles behind each. Traditional rendering builds a geometric model of a scene in 3D and then projects it onto a 2D image; image-based rendering instead generates novel 2D views directly from a collection of input 2D images, without a manual modelling stage. Image-based rendering techniques are divided into three categories according to how much geometric information they use: rendering without geometry, rendering with implicit geometry, and rendering with explicit geometry. Image-based rendering has gained attention in graphics as a way to create realistic images.

I. INTRODUCTION
In recent years image-based rendering has gained a place in the graphics community for creating realistic images. One of its important benefits is the ability to reproduce real-world effects and details directly from a collection of images. Previous work on image-based rendering reveals a trade-off between how many images are needed and how much is known about the scene geometry. Image-based rendering techniques are classified into three categories: rendering with no geometry, rendering with implicit geometry, and rendering with explicit geometry, as shown in Figure 1.

Figure 1: Categories of rendering techniques, ordered from less to more geometry [1]
- Rendering with no geometry: light field, lumigraph, mosaicking, concentric mosaics
- Rendering with implicit geometry: transfer methods, view morphing, view interpolation
- Rendering with explicit geometry: LDIs, texture-mapped models, 3D warping, view-dependent geometry, view-dependent texture

Each category is further subdivided as shown in Figure 1. The remainder of this paper is organized as follows. The three categories of image-based rendering systems, using no geometry, implicit geometry, and explicit geometric information respectively, are presented in Sections II, III, and IV, and concluding remarks are given in Section V.

II. RENDERING WITH NO GEOMETRY
Techniques in this category depend on the plenoptic function.

2.1 Plenoptic modelling
Image-based rendering is a powerful approach to generating photorealistic computer graphics: it can produce animations without any external geometric representation. The plenoptic function, introduced by Adelson and Bergen, is a parametric function that describes everything that is visible from a given point in space.
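As an illustration of how the full plenoptic function relates to the lower-dimensional versions discussed in this section, the following minimal sketch treats it as a callable whose arguments can be fixed one by one. The scene function is a dummy stand-in, and all names are illustrative, not part of any real API:

```python
import math
from functools import partial

def P7(Vx, Vy, Vz, theta, phi, wavelength, t):
    """Full 7D plenoptic function: radiance seen from viewpoint
    (Vx, Vy, Vz) in direction (theta, phi), at a given wavelength
    and time.  Dummy scene: radiance varies with direction only."""
    return 0.5 * (1 + math.cos(theta) * math.sin(phi))

# Static scene + fixed lighting: fix t and wavelength -> 5D plenoptic modelling.
P5 = partial(P7, wavelength=550.0, t=0.0)

# Additionally fixing the viewpoint yields a 2D panorama.
def P2(theta, phi):
    return P5(0.0, 0.0, 0.0, theta, phi)

print(P2(0.0, math.pi / 2))  # 1.0: cos(0) * sin(pi/2)
```

Fixing arguments this way mirrors exactly how each specialization in Section 2 trades generality for a smaller representation.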

The plenoptic function is a 7D function describing the intensity of light passing through every viewpoint, in every direction, at every wavelength, and at every time instant. The original 7D plenoptic function is defined as the intensity of light rays passing through the camera center at every location (Vx, Vy, Vz), at every possible angle (θ, φ), for every wavelength λ, at every time t:

P7 = P(Vx, Vy, Vz, θ, φ, λ, t). (1)

Adelson and Bergen extracted a compact and useful description of the plenoptic function's local properties (e.g., low-order derivatives). It was also shown that light source directions can be incorporated into the plenoptic function for illumination control. McMillan and Bishop introduced a 5D plenoptic function for plenoptic modelling by eliminating two variables: time t (static environment) and wavelength λ (fixed lighting condition):

P5 = P(Vx, Vy, Vz, θ, φ). (2)

The simplest plenoptic function is a 2D panorama (cylindrical or spherical), obtained when the viewpoint is fixed:

P2 = P(θ, φ). (3)

Image-based rendering then becomes the problem of constructing a continuous representation of the plenoptic function from observed discrete samples.

2.2 Lumigraph and light field
If the viewpoint is constrained to lie outside the convex hull of an object, the 5D plenoptic function can be reduced to a 4D function for both the lumigraph and the light field:

P4 = P(u, v, s, t), (4)

where (u, v) and (s, t) parameterize two parallel planes of a bounding box (Figure 2).

Figure 2: Light field representations [1]

Aliasing effects in the light field can be reduced by applying pre-filtering before rendering. A vector quantization approach is used to reduce the amount of data in the light field. The lumigraph, on the other hand, can be constructed from a set of images taken from arbitrary viewpoints, after which a re-binning process is required.

2.3 Concentric mosaics
Concentric mosaics constrain the camera motion to planar concentric circles and are created by composing slit images taken at different locations along each circle. This yields a 3D parameterization of the plenoptic function: radius, rotation angle, and vertical elevation. In summary: capturing every possible viewpoint requires the 5D plenoptic function; staying inside the convex hull allows the 4D light field; a fixed viewpoint gives a 2D panorama; and the concentric mosaic is a 3D plenoptic function.
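The slit-image parameterization above can be sketched in code. The sketch below maps a viewing ray in the capture plane to the concentric circle it is tangent to (giving the radius) and to the rotation angle of the tangent point (giving the slit). All constants and names are illustrative assumptions, not from the original systems:

```python
import math

# Hypothetical capture setup: N_CIRCLES concentric circles of slit images,
# each with N_SLITS slits taken at evenly spaced rotation angles.
N_CIRCLES, N_SLITS, MAX_RADIUS = 8, 360, 1.0

def slit_index(px, py, dx, dy):
    """Map a viewing ray in the capture plane (origin (px, py),
    unit direction (dx, dy)) to a (circle index, slit index) pair.

    The ray is tangent to the circle whose radius equals the
    perpendicular distance from the rotation center to the ray;
    the slit is the one captured at the tangent point's angle.
    """
    # Perpendicular distance from the center (0, 0) to the ray (2D cross product).
    r = abs(px * dy - py * dx)
    # Foot of the perpendicular = tangent point on that circle.
    t = -(px * dx + py * dy)
    qx, qy = px + t * dx, py + t * dy
    angle = math.atan2(qy, qx) % (2 * math.pi)
    circle = min(int(round(r / MAX_RADIUS * (N_CIRCLES - 1))), N_CIRCLES - 1)
    slit = int(angle / (2 * math.pi) * N_SLITS) % N_SLITS
    return circle, slit

# A ray through the rotation center has radius 0 -> innermost circle.
print(slit_index(0.5, 0.0, -1.0, 0.0))  # (0, 0)
```

Rendering a novel view inside the capture region amounts to performing this lookup once per vertical slit of the output image.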

Figure 3: Rendering a lobby with rebinned concentric mosaics: (a) at the rotation center; (b) at the outermost circle; (c) at the outermost circle, looking in the opposite direction of (b); (d) a child is occluded in one view but not in the other; (e) parallax change between the plant and the poster [1]

A concentric mosaic represents a 3D plenoptic function: radius, rotation angle, and vertical elevation. Vertical distortion can exist in the rendered image; depth correction alleviates it. Rays in the capture plane pose no problem, but only a small subset of the rays off the plane is stored, so all off-plane rays must be approximated from the slit images alone, which may cause vertical distortion in the rendered images. Concentric mosaics are space-efficient: compared with a light field or lumigraph, a concentric mosaic has a much smaller file size. Concentric mosaics are also easy to capture, although they require a larger number of images, and they are useful for many virtual reality applications. Rendering of a lobby scene is shown in Figure 3: a rebinned concentric mosaic at the rotation center in 3(a), at the outermost circle in 3(b), and at the outermost circle looking in the opposite direction in 3(c). In 3(d), a child is occluded in one view but not in the other. In 3(e), strong parallax can be seen between the plant and the poster in the rendered images.

2.4 Image mosaicing

An incomplete sample of images can be used to construct a complete plenoptic function at a fixed point. A panoramic mosaic is constructed from many regular images. For arbitrary camera motion, the images are first registered by recovering the camera movement before being composited into a cylindrical or spherical map. Many systems have now been built that construct cylindrical panoramas from multiple overlapping images. Multiple slit images can also be used to construct a large panoramic mosaic, and with an omnidirectional camera it is easier to capture a large panorama. To construct a panorama, transformation operations are applied to the set of input images, with a transformation matrix associated with each image; a rotational mosaic representation associates a rotation matrix with each input image.

Figure 4: Tessellated spherical panorama covering the North Pole (constructed from 54 images). [1]

III. RENDERING WITH IMPLICIT GEOMETRY
Rendering with implicit geometry depends on positional correspondences across a small number of images to render new views. The geometry is not directly available; 3D information is computed implicitly through the usual projection calculations.

3.1 View interpolation
View interpolation, introduced by Chen and Williams, is one of the basic image-based rendering algorithms. It takes a set of depth images as input, builds an adjacency graph in which the images are nodes and the edges are mappings between them, and then interpolates pixels to construct in-between images. This method works well when all the input images share the same gaze direction and the output images are restricted to gaze angles of less than 90 degrees [1].

3.2 View morphing
Using view morphing, any viewpoint on the line linking the two optical centres of two input images can be reconstructed.
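For the special case where intermediate views are exact linear combinations of the two input views, the in-between images of Sections 3.1 and 3.2 reduce to blending corresponding points. A minimal sketch (function names are illustrative, and dense correspondences are assumed to be given):

```python
def interpolate_points(p0, p1, t):
    """Linearly interpolate one pair of corresponding image points.

    For parallel input views (camera motion perpendicular to the
    viewing direction), intermediate views are exact linear
    combinations of the two inputs, so each correspondence can be
    blended independently with parameter t in [0, 1].
    """
    return tuple((1 - t) * a + t * b for a, b in zip(p0, p1))

def morph_correspondences(pts0, pts1, t):
    """Blend a whole set of point correspondences at parameter t."""
    return [interpolate_points(p0, p1, t) for p0, p1 in zip(pts0, pts1)]

# Halfway between two corresponding points:
print(interpolate_points((100.0, 40.0), (120.0, 40.0), 0.5))  # (110.0, 40.0)
```

Pixel colours would be blended with the same weights; for non-parallel inputs this blend is only valid after the rectification step described next.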
If the camera motion between the views is perpendicular to the camera viewing direction, the intermediate views are exactly linear combinations of the two input views. If the two input images are not parallel, a pre-warp stage can be employed to rectify them so that corresponding scan lines are parallel [1]; a post-warp stage is then used to un-rectify the intermediate images.

3.3 Transfer methods
Transfer methods use a small number of images together with geometric constraints to reproject image pixels to a given virtual camera viewpoint. The geometric constraints can take the form of known depth values at image pixels. View interpolation and view morphing are specific instances of transfer methods. To generate a novel view from two or three reference images, the trilinear tensor (which links correspondences between triplets of images) is first computed from point correspondences between the reference images. In the case of two images, one of the images is replicated as the third image.

IV. RENDERING WITH EXPLICIT GEOMETRY
These techniques have 3D information encoded explicitly, either as depth along known lines of sight or as 3D coordinates.

4.1 3D warping
3D image warping generates a novel view of a scene from the depth information of one or more images. An image can be rendered from any nearby viewpoint by projecting the pixels of the original image to their 3D locations and then re-projecting them into the new image. The main problem is that warping an image generates many holes, caused by the difference in sampling resolution between the input and output images. A common hole-filling method is to splat each input pixel over several pixels in the output image. To improve rendering speed, 3D warping can be factored into a pre-warping step followed by a texture mapping step, where the texture mapping step can be performed by graphics hardware.
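The project/re-project step of 3D warping can be sketched as follows, assuming known shared intrinsics K and a relative pose (R, t) between the reference and novel cameras; names are illustrative, and splatting/hole filling is omitted:

```python
import numpy as np

def warp_image_points(depth, K, R, t):
    """Forward-map every pixel of a reference view into a novel view.

    Each pixel (u, v) with depth d is back-projected to the 3D point
    d * K^-1 [u, v, 1]^T and re-projected through the novel camera
    (rotation R, translation t, same intrinsics K for simplicity).
    Returns the re-projected (u', v') coordinates; in a full warper,
    holes appear wherever no source pixel lands on an output pixel.
    """
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    rays = np.linalg.inv(K) @ np.stack([u.ravel(), v.ravel(), np.ones(h * w)])
    X = rays * depth.ravel()        # 3D points in the reference camera frame
    Xn = R @ X + t[:, None]         # same points in the novel camera frame
    p = K @ Xn                      # homogeneous image coordinates
    return p[:2] / p[2]             # projected (u', v') per source pixel

# Sanity check: identity motion maps every pixel back to itself.
K = np.array([[500.0, 0.0, 160.0], [0.0, 500.0, 120.0], [0.0, 0.0, 1.0]])
depth = np.full((240, 320), 2.0)
uv = warp_image_points(depth, K, np.eye(3), np.zeros(3))
```

In the two-step factorization described above, this projection corresponds to the pre-warping step; the final resampling is what graphics hardware can take over as texture mapping.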
The 3D warping technique can also be applied to multi-perspective images.

4.2 Layered depth images
Disocclusion artifacts in 3D warping can be handled with layered depth images (LDIs), which store not only what is visible in the input image but also what lies behind the visible surface. In an LDI, each pixel of the input image stores depth and colour information for every intersection of its ray with the environment, while retaining the simplicity of warping a single image.

4.3 View-dependent texture maps

In computer graphics, texture mapping is used to generate photorealistic environments. Texture-mapped models of real environments can be generated using a 3D scanner, but vision techniques are not yet robust enough to recover accurate 3D models. It is also difficult to capture visual effects such as highlights, reflections, and transparency using a single texture-mapped model; view-dependent texture maps address this by blending multiple textures according to the viewpoint.

V. CONCLUDING REMARKS
We have surveyed recent developments in image-based rendering techniques and categorized them on the basis of how much geometric information they need. Geometric information serves as a compact representation for rendering. Image-based rendering representations have the advantage of photorealistic rendering, but at a high storage cost. As this survey makes clear, IBR and 3D model-based rendering have many complementary characteristics that can be exploited. In the future, rendering hardware should be customized to handle both 3D model-based rendering and IBR.

REFERENCES
[1] Heung-Yeung Shum and Sing Bing Kang. A Review of Image-based Rendering Techniques. Proc. SPIE 4067, Visual Communications and Image Processing 2000 (May 30, 2000).
[2] https://graphics.stanford.edu/papers/light
[3] https://www.wikipedia.org/
[4] Manuel M. Oliveira. Image-Based Modeling and Rendering Techniques: A Survey. RITA, 2002.
[5] John Amanatides. Realism in Computer Graphics: A Survey.