Image-based modeling (IBM) and image-based rendering (IBR)

Image-based modeling (IBM) and image-based rendering (IBR). CS 248 - Introduction to Computer Graphics, Autumn quarter 2005. Slides for the December 8 lecture.

The graphics pipeline: modeling, animation, rendering.

The graphics pipeline. The traditional pipeline: modeling, animation, rendering. The new pipeline? 3D scanning, motion capture, image-based rendering.

IBM / IBR. The study of image-based modeling and rendering is the study of sampled representations of geometry.

Image-based representations: the classics, ordered from more geometry to less geometry:
- 3D: model + texture/reflectance map [Blinn78]; model + displacement map [Cook84]
- 2D + Z (2.5D): volume rendering [Levoy87, Drebin88]; range images [Binford73]; disparity maps [vision literature]; sprites [vis-sim, games]
- 2D: epipolar plane images [Bolles87]; movie maps [Lippman78]; environment maps, a.k.a. panoramas [19th century]

Recent additions, ordered from more geometry to less geometry:
- full model: view-dependent textures [Debevec96]; surface light fields [Wood00]; Lumigraphs [Gortler96]
- sets of range images: view interpolation [Chen93, McMillan95, Mark97]; layered depth images [Shade98]; relief textures [Oliveira00]
- feature correspondences: plenoptic editing [Seitz98, Dorsey01]
- camera pose: image caching [Schaufler96, Shade96]
- less geometry: sprites + warps [Lengyel97]; light fields [Levoy96]
- no model: outward-looking QTVR [Chen95]

Rangefinding technologies. Passive: shape from stereo, shape from focus, shape from motion, etc. Active: texture-assisted shape-from-X, triangulation using structured light, time-of-flight.

Laser triangulation rangefinding
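
The slide shows only the triangulation setup; as a rough illustration, a minimal sketch of recovering depth from a triangulated laser spot, using the law of sines (the names baseline_m, laser_angle_rad, camera_angle_rad are assumptions for illustration, not the course's notation):

```python
import math

def triangulate_depth(baseline_m, laser_angle_rad, camera_angle_rad):
    """Depth of a laser spot observed by a camera, by the law of sines.

    baseline_m:        distance between the laser emitter and the camera center
    laser_angle_rad:   angle of the laser ray, measured from the baseline
    camera_angle_rad:  angle of the camera ray to the spot, measured from the baseline
    Returns the perpendicular distance of the spot from the baseline.
    """
    # Triangle: camera -- laser emitter -- lit surface point.
    range_from_camera = baseline_m * math.sin(laser_angle_rad) / math.sin(
        laser_angle_rad + camera_angle_rad)
    return range_from_camera * math.sin(camera_angle_rad)

# Example: 30 cm baseline, both rays at 60 degrees to the baseline -> ~0.26 m.
print(triangulate_depth(0.30, math.radians(60), math.radians(60)))
```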

Single scan of St. Matthew (1 mm).

Post-processing pipeline steps: 1. aligning the scans; 2. combining the aligned scans; 3. filling holes.
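
The slides only name the steps; as a sketch of what step 1 involves, one point-to-point ICP iteration (rigidly aligning one scan toward another) might look like the following, assuming numpy/scipy and hypothetical (N, 3) point arrays src and dst:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst):
    """One iteration of point-to-point ICP.

    src, dst: (N, 3) and (M, 3) arrays of scanned 3D points.
    Returns (R, t) rigidly moving src toward dst.
    """
    # 1. Closest-point correspondences from src into dst.
    nearest = dst[cKDTree(dst).query(src)[1]]

    # 2. Best rigid transform for these correspondences (Kabsch / SVD).
    src_c, dst_c = src.mean(axis=0), nearest.mean(axis=0)
    H = (src - src_c).T @ (nearest - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Iterate: src = src @ R.T + t, until the alignment error stops improving.
```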

Digitizing the statues of Michelangelo using laser scanners: 480 individually aimed scans; 2 billion polygons; 7,000 color images; 30 nights of scanning; 22 people.

Replica of Michelangelo's David (20 cm tall).

Solving the jigsaw puzzle of the Forma Urbis Romae

The puzzle as it now stands

Clues for solving the puzzle: incised lines; incision characteristics; marble veining; fragment thickness; shapes of fractured surfaces; rough/smooth bottom surface; straight sides, indicating slab boundaries; location and shapes of clamp holes; the wall (slab layout, clamp holes, stucco); archaeological evidence.

Matching incised lines: fragment 156, fragment 167, fragment 134.

Geometry-based versus image-based rendering (pipeline diagram). Geometry-based rendering: conceptual world → model construction → geometry → rendering → flythrough of scene. Image-based rendering: real world → image acquisition → images → image-based rendering → flythrough of scene. Computer vision connects the real world to geometry.

Shortcutting the vision/graphics pipeline (diagram, from M. Cohen): real world → vision pipeline → geometry → graphics pipeline → views. Image-based rendering shortcuts from the real world directly to views.

Apple QuickTime VR [Chen, SIGGRAPH 95]: outward-looking panoramic views taken at regularly spaced points; inward-looking views taken at points on the surface of a sphere.

View interpolation from a single view: 1. render the object; 2. convert the Z-buffer to a range image; 3. tessellate it to create a polygon mesh; 4. re-render from the new viewpoint; 5. use depths to resolve overlaps. Q: how to fill in holes? (A warping sketch follows below.)
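
A minimal sketch of steps 2-5, assuming numpy, a shared pinhole intrinsics matrix K, and world-from-camera poses (R, t) for the source and destination views; all of these names are assumptions for illustration. It splats depth-tested points rather than tessellating a mesh as the slide describes, which keeps the example short:

```python
import numpy as np

def warp_to_new_view(depth, color, K, R_src, t_src, R_dst, t_dst):
    """Forward-warp a rendered view (color + Z-buffer depth) to a new viewpoint.

    depth:   (H, W) per-pixel z-depth in the source view (the range image)
    color:   (H, W, 3) source image
    K:       (3, 3) camera intrinsics, shared by both views
    R_*, t_*: world-from-camera rotation (3, 3) and translation (3,)
    Returns the warped color image and its Z-buffer (holes stay empty).
    """
    H, W = depth.shape
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3)

    # Back-project source pixels to world space using the range image.
    rays_cam = (np.linalg.inv(K) @ pix.T).T * depth.reshape(-1, 1)
    pts_world = rays_cam @ R_src.T + t_src

    # Project into the destination view.
    pts_dst = (pts_world - t_dst) @ R_dst            # world -> dst camera
    proj = (K @ pts_dst.T).T
    z = proj[:, 2]
    z_safe = np.where(np.abs(z) < 1e-9, 1e-9, z)     # avoid divide-by-zero
    uv = np.round(proj[:, :2] / z_safe[:, None]).astype(int)

    # Z-buffered splat: nearer points overwrite farther ones (step 5).
    out = np.zeros_like(color)
    zbuf = np.full((H, W), np.inf)
    for (u, v), zn, c in zip(uv, z, color.reshape(-1, 3)):
        if 0 <= u < W and 0 <= v < H and 0 < zn < zbuf[v, u]:
            zbuf[v, u] = zn
            out[v, u] = c
    return out, zbuf
```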

View interpolation from multiple views: 1. render the object from multiple viewpoints; 2. convert the Z-buffers to range images; 3. tessellate to create multiple meshes; 4. re-render from the new viewpoint; 5. use depths to resolve overlaps; 6. use the multiple views to fill in holes.

Post-rendering 3D warping [Mark et al., I3D 97]: render at a low frame rate and interpolate to a real-time frame rate; interpolate the observer viewpoint using a B-spline; convert the reference images to polygon meshes; warp the meshes to the interpolated viewpoint; composite by Z-buffer comparison and conditional write. (A B-spline sketch follows below.)
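
The slide only names the B-spline step; a minimal sketch of interpolating the observer position with a uniform cubic B-spline over four recent tracked positions (the array name positions and parameter u are assumptions for illustration):

```python
import numpy as np

def bspline_viewpoint(p, u):
    """Uniform cubic B-spline interpolation of the observer position.

    p: (4, 3) array of the four most recent (tracked or predicted) positions
    u: parameter in [0, 1], sweeping the segment between p[1] and p[2]
    """
    B = np.array([[-1,  3, -3, 1],
                  [ 3, -6,  3, 0],
                  [-3,  0,  3, 0],
                  [ 1,  4,  1, 0]]) / 6.0
    U = np.array([u**3, u**2, u, 1.0])
    return U @ B @ p                      # (3,) interpolated position

# Example: smoothly slide between the 2nd and 3rd tracked positions.
positions = np.array([[0, 0, 0], [1, 0, 0], [2, 0.5, 0], [3, 0.5, 0]], float)
print(bspline_viewpoint(positions, 0.5))
```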

Results: rendered at 5 fps, interpolated to 30 fps. A live system requires reliable motion prediction; there is a tradeoff between accuracy and latency; it fails on specular objects.

Image caching [Shade et al., SIGGRAPH 1996]: precompute a BSP tree of the scene (2D in this case); for the first observer position, draw nearby nodes (yellow) as geometry and render distant nodes (red) to RGB images (black); composite the images together; as the observer moves, if the disparity exceeds a threshold, re-render the image. (A sketch of the disparity test follows below.)
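
A rough sketch of the "disparity exceeds a threshold" test, assuming we remember the observer position at which each cached image was rendered; the angular bound used here is one plausible choice for illustration, not the paper's exact error metric:

```python
import numpy as np

def needs_rerender(node_center, node_radius, cached_eye, current_eye,
                   max_error_rad=0.005):
    """Decide whether a cached image (impostor) of a BSP node is still valid.

    The cached image was rendered from cached_eye. As the observer moves to
    current_eye, points inside the node shift on screen by at most roughly
    |eye motion| / (distance to node - node radius) radians.
    """
    eye_motion = np.linalg.norm(current_eye - cached_eye)
    distance = np.linalg.norm(current_eye - node_center) - node_radius
    if distance <= 0:
        return True                       # observer entered the node's bounds
    worst_case_disparity = eye_motion / distance
    return worst_case_disparity > max_error_rad
```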

Light field rendering [Levoy & Hanrahan, SIGGRAPH 1996]: must stay outside the convex hull of the object; like rebinning in computed tomography.

The plenoptic function: radiance as a function of position and direction, in a static scene with fixed illumination. For general scenes it is the 5D function L(x, y, z, θ, φ). In free space, radiance is constant along a ray, so one dimension is redundant and it reduces to a 4D function: the (scalar) light field.

The free-space assumption. Applications for free-space light fields: flying around a compact object; flying through an uncluttered environment.

Some candidate parameterizations. Point-on-plane + direction: L(x, y, θ, φ); convenient for measuring luminaires.

More parameterizations. Chords of a sphere: L(θ1, φ1, θ2, φ2); convenient for a spherical gantry; facilitates uniform sampling.

Two planes ("light slab"): L(u, v, s, t), where (u, v) and (s, t) are the ray's intersections with the two planes; uses projective geometry; fast incremental display algorithms.

Creating a light field: off-axis (sheared) perspective views.

A light field is an array of images

Displaying a light field: for each screen pixel (x, y), compute (u, v, s, t) and set I(x, y) = L(u, v, s, t). (A sketch follows below.)
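
A minimal nearest-neighbor sketch of this loop for the two-plane parameterization, assuming the light field is stored as a numpy array lf[s, t, u, v] (an S x T grid of U x V images), with the uv plane at z = 0 and the st plane at z = 1; these storage and plane conventions are assumptions for illustration, and real renderers interpolate quadrilinearly and use texture hardware rather than a Python loop:

```python
import numpy as np

def render_view(lf, eye, pixel_dirs):
    """Render one view of a two-plane light field by per-pixel ray lookup.

    lf:         (S, T, U, V, 3) array: an S x T grid of U x V images
    eye:        (3,) eye position, outside the slab (free-space assumption)
    pixel_dirs: (H, W, 3) ray direction per output pixel (need not be normalized)
    The uv plane sits at z = 0 and the st plane at z = 1, both spanning [0, 1]^2.
    """
    S, T, U, V, _ = lf.shape
    H, W, _ = pixel_dirs.shape
    out = np.zeros((H, W, 3), dtype=lf.dtype)
    for y in range(H):
        for x in range(W):
            d = pixel_dirs[y, x]
            if d[2] == 0:
                continue                      # ray parallel to the planes
            # Intersect the ray eye + a*d with z = 0 (uv) and z = 1 (st).
            a_uv = (0.0 - eye[2]) / d[2]
            a_st = (1.0 - eye[2]) / d[2]
            u, v = (eye + a_uv * d)[:2]
            s, t = (eye + a_st * d)[:2]
            if not (0 <= u <= 1 and 0 <= v <= 1 and 0 <= s <= 1 and 0 <= t <= 1):
                continue                      # ray misses the light slab
            # Nearest-sample lookup (quadrilinear interpolation in practice).
            ui, vi = int(u * (U - 1) + 0.5), int(v * (V - 1) + 0.5)
            si, ti = int(s * (S - 1) + 0.5), int(t * (T - 1) + 0.5)
            out[y, x] = lf[si, ti, ui, vi]
    return out
```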

Devices for capturing light fields: the Stanford Multi-Camera Array. Cameras closely packed: high-X imaging, synthetic aperture photography. Cameras widely spaced: video light fields, new computer vision algorithms. (A shift-and-add sketch follows below.)
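
Synthetic aperture photography averages the camera images after shifting each one according to its position in the array, so that a chosen focal plane aligns across views; a minimal shift-and-add sketch, assuming integer pixel shifts and the hypothetical arrays images and offsets:

```python
import numpy as np

def synthetic_aperture(images, offsets, focus):
    """Shift-and-add refocusing for a planar camera array.

    images:  (N, H, W, 3) images from N cameras
    offsets: (N, 2) camera positions in the array plane, in arbitrary units
    focus:   pixels of image shift per unit of camera offset, chosen so the
             desired focal plane aligns across views
    Objects on the focal plane add coherently; everything else blurs out.
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, (ox, oy) in zip(images, offsets):
        dx, dy = int(round(ox * focus)), int(round(oy * focus))
        acc += np.roll(img, shift=(dy, dx), axis=(0, 1))
    return acc / len(images)
```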

The BRDF kaleidoscope [Han et al., SIGGRAPH 2003]: discrete number of views; hard to capture grazing angles; uniformity?

Light field morphing [Zhang et al., SIGGRAPH 2002]: UI for specifying feature polygons and their correspondences; sample morph; feature correspondences = 3D model.

Autostereoscopic display of light fields [Isaksen et al., SIGGRAPH 2000]: the image sits at the focal distance of the lenslet, so rays leave collimated; spatial resolution ~ # of lenslets in the array; angular resolution ~ # of pixels behind each lenslet; each eye sees a different set of pixels, giving stereo. (A small numeric example follows below.)
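
A toy numeric example of this resolution tradeoff; the panel size and lenslet pitch below are made-up numbers for illustration:

```python
# Hypothetical lenslet display: 3840 x 2160 panel, 12 x 12 pixels per lenslet.
panel_w, panel_h = 3840, 2160
pixels_per_lenslet = 12

spatial_x = panel_w // pixels_per_lenslet   # 320 lenslets across -> spatial resolution
spatial_y = panel_h // pixels_per_lenslet   # 180 lenslets down
angular_views = pixels_per_lenslet ** 2     # 144 distinct view directions

print(spatial_x, spatial_y, angular_views)  # 320 180 144
```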

End-to-end 3D television [Matusik et al., SIGGRAPH 2004]: 16 cameras, 16 video projectors, lenticular lens array; spatial resolution ~ # of pixels in a camera and projector; angular resolution ~ # of cameras and projectors; horizontal parallax only.

Why didn't IBR take over the world? Warping and rendering range images is slow: pixel-sized triangles are inefficient, and just as many pixels need to be touched as in normal rendering. It was also an arms race against improvements in 3D rendering: level of detail (LOD), culling techniques, hierarchical Z-buffer, etc. And its visual artifacts are objectionable, unlike 3D rendering artifacts, which are small and homogeneous.