Faces and Image-Based Lighting

Announcements: Project #3 artifact voting. Final project: demo on 6/25 (Wednesday), 13:30, in this room; reports and videos due on 6/26 (Thursday), 11:59pm.

Faces and Image-Based Lighting. Digital Visual Effects, Spring 2008. Yung-Yu Chuang, 2008/6/3. With slides by Richard Szeliski, Steve Seitz, Alex Efros, Li-Yi Wei and Paul Debevec.

Outline: image-based lighting; 3D acquisition for faces; statistical methods (with application to face super-resolution); 3D face models from single images; image-based faces; relighting for faces.

Image-based lighting

Rendering: rendering is a function of geometry, reflectance, lighting and viewing. To composite CGI into a real scene, we have to match all four factors. Viewing can be obtained from calibration or structure from motion; geometry can be captured using 3D photography or modeled by hand. How do we capture lighting and reflectance?

Reflectance: the bidirectional reflectance distribution function (BRDF). Given an incoming ray \omega_i and an outgoing ray \omega_o, what proportion of the incoming light is reflected along the outgoing ray? The answer is given by the BRDF f(p, \omega_o, \omega_i), defined with respect to the surface normal.

Rendering equation:

L_o(p, \omega_o) = L_e(p, \omega_o) + \int_{S^2} f(p, \omega_o, \omega_i)\, L_i(p, \omega_i)\, \cos\theta_i \, d\omega_i

The incident radiance L_i(p, \omega_i) is a 5D light field.

Complex illumination: the reflected radiance

B(p, \omega_o) = \int_{S^2} f(p, \omega_o, \omega_i)\, L_i(p, \omega_i)\, \cos\theta_i \, d\omega_i

couples the reflectance f with the lighting L_i.
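To make the integral concrete, here is a minimal Monte Carlo sketch of the reflected term in Python, assuming user-supplied brdf(p, w_o, w_i) and incident_radiance(p, w_i) callables (both hypothetical names, not from the slides):

import numpy as np

def sample_hemisphere(n, rng):
    """Draw a direction uniformly over the hemisphere around the unit normal n (numpy array)."""
    u1, u2 = rng.random(), rng.random()
    z = u1                                   # cos(theta) is uniform in [0, 1)
    r = np.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * np.pi * u2
    # Build an orthonormal frame (x, y, n) and rotate the local sample into it.
    t = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    x = np.cross(t, n); x /= np.linalg.norm(x)
    y = np.cross(n, x)
    return r * np.cos(phi) * x + r * np.sin(phi) * y + z * n

def reflected_radiance(p, n, w_o, brdf, incident_radiance, n_samples=256, seed=0):
    """Monte Carlo estimate of the integral term of L_o(p, w_o)."""
    rng = np.random.default_rng(seed)
    pdf = 1.0 / (2.0 * np.pi)                # uniform hemisphere pdf
    total = 0.0
    for _ in range(n_samples):
        w_i = sample_hemisphere(n, rng)
        cos_theta = max(0.0, float(np.dot(n, w_i)))
        total += brdf(p, w_o, w_i) * incident_radiance(p, w_i) * cos_theta / pdf
    return total / n_samples

The emission term L_e is simply added on top when the surface point is itself a light source.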

Natural illumination: people perceive materials more easily under natural illumination than under simplified illumination, but rendering with natural illumination is more expensive than rendering with a simplified light source. (Images courtesy of Ron Dror and Ted Adelson: directional source vs. natural illumination.)

Environment maps. Acquiring the light probe (Miller and Hoffman, 1984).

HDRI sky probe: clipped sky + sun source; lit by sun only; lit by sky only.

Lit by sun and sky. Illuminating a small scene: real scene example. Goal: place synthetic objects on the table.

Light probe / calibration grid. Modeling the scene: a light-based model of the real scene (the light-based room model). Rendering into the scene: background plate.

Rendering into the scene. Differential rendering: render the objects and local scene matched to the real scene, and render the local scene without the objects, both illuminated by the light-based model. The composite adds the difference between the two renderings to the background plate, so shadows and interreflections carry over while the rest of the photograph is untouched.
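A hedged numpy sketch of that composite (the function and argument names are mine, not Debevec's):

import numpy as np

def differential_render(background, with_objects, local_only, object_mask):
    """
    background   : photographed background plate, float array (H, W, 3)
    with_objects : rendering of local scene + synthetic objects under the light-based model
    local_only   : rendering of the local scene alone under the same model
    object_mask  : boolean (H, W) mask of pixels covered by the synthetic objects
    """
    # Outside the objects, add the rendered change (shadows, interreflections)
    # to the real photograph; inside, use the rendered objects directly.
    out = background + (with_objects - local_only)
    out[object_mask] = with_objects[object_mask]
    return np.clip(out, 0.0, None)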

Differential rendering. Environment map from a single image? Final result. The eye as a light probe (Nayar et al.). Results.

Application in Superman Returns. Capturing reflectance. Application in The Matrix Reloaded. 3D acquisition for faces.

Cyberware scanners: face & head scanner, whole body scanner.

Making facial expressions from photos: similar to Façade, use a generic face model and view-dependent texture mapping. Procedure: 1. take multiple photographs of a person; 2. establish corresponding feature points; 3. recover 3D points and camera parameters; 4. deform the generic face model to fit the points; 5. extract textures from the photos.

Reconstruct a 3D model: from the input photographs and the generic 3D face model, estimate the pose, then (using more features) produce the deformed model.

Mesh deformation: compute the displacement of the feature points, then apply scattered data interpolation to carry the displacement from the generic model to the deformed model, as sketched below.
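One common way to implement the scattered-data interpolation step is with radial basis functions; the kernel below is an illustrative choice, not necessarily the one used in the paper:

import numpy as np

def rbf_deform(feature_pts, feature_disp, vertices, phi=lambda r: np.exp(-r / 64.0)):
    """
    feature_pts  : (k, 3) positions of feature points on the generic model
    feature_disp : (k, 3) measured displacements of those points
    vertices     : (n, 3) all vertices of the generic model
    Returns the (n, 3) deformed vertex positions.
    """
    feature_pts = np.asarray(feature_pts, dtype=float)
    feature_disp = np.asarray(feature_disp, dtype=float)
    vertices = np.asarray(vertices, dtype=float)
    k = len(feature_pts)
    # Solve for RBF weights so the interpolant reproduces the known displacements.
    A = phi(np.linalg.norm(feature_pts[:, None, :] - feature_pts[None, :, :], axis=-1))
    weights = np.linalg.solve(A + 1e-8 * np.eye(k), feature_disp)      # (k, 3)
    # Evaluate the interpolant at every vertex and apply the displacement.
    B = phi(np.linalg.norm(vertices[:, None, :] - feature_pts[None, :, :], axis=-1))
    return vertices + B @ weights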

Texture extraction: the color at each surface point is a weighted combination of the colors in the photos. The texture can be view-independent or view-dependent. Considerations for the weighting: occlusion, smoothness, positional certainty, view similarity. (Examples: view-independent vs. view-dependent texture extraction.)
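A minimal sketch of the per-point weighted blend, with the weights standing in for occlusion, positional certainty, view similarity and so on:

import numpy as np

def blend_colors(colors, weights):
    """colors: (m, 3) samples of one surface point from m photos; weights: (m,) non-negative."""
    c = np.asarray(colors, dtype=float)
    w = np.asarray(weights, dtype=float)
    return (w[:, None] * c).sum(axis=0) / w.sum()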

Model reconstruction: use images to adapt a generic face model.

Creating new expressions: new expressions are created with 3D morphing, e.g. blending two expressions with weights 1/2 and 1/2. In addition to a global blend we can use regional blending or a painterly interface. (Examples: applying a global blend; applying a region-based blend.)

Creating new expressions: a "drunken smile" composed from several expressions using a painterly interface.

Animating between expressions (video): morphing over time creates animation, e.g. from neutral to joy.

Spacetime faces: the capture rig uses black & white cameras, color cameras and video projectors to record the face surface over time.

Spacetime stereo: compare stereo, active stereo and spacetime stereo for recovering a moving face surface over time (frames starting at time = 1).

Spacetime stereo (continued): the face surface and its motion tracked over successive frames (time = 2 through 5).

Spacetime stereo matching over the moving surface gives better spatial resolution and temporal stability. Video. Fitting.

FaceIK. Animation. 3D face applications: The One; Gladiator (extra 3M).

Statistical methods. The observed signal is y = f(z) + \epsilon. The MAP estimate of z is

z^* = \arg\max_z P(z \mid y) = \arg\max_z \frac{P(y \mid z)\, P(z)}{P(y)} = \arg\min_z L(y \mid z) + L(z)

where the data evidence L(y \mid z) is, for Gaussian noise, \|y - f(z)\|^2 / \sigma^2, and L(z) encodes a-priori knowledge about z. Examples: super-resolution, de-noising, de-blocking, inpainting.

"There are approximately 10^{240} possible 10x10 gray-level images. Even humankind has not seen them all yet. There must be a strong statistical bias." (Takeo Kanade) Approximately 8x10^{11} blocks per day per person.
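As a sketch, the corresponding objective for a quadratic data term can be written as follows; f, sigma and prior are placeholders for the forward model, the noise level and the prior of the specific problem:

import numpy as np

def map_objective(z, y, f, sigma, prior):
    """Negative log-posterior: data evidence ||y - f(z)||^2 / sigma^2 plus the prior term L(z)."""
    r = y - f(z)
    return np.sum(r ** 2) / sigma ** 2 + prior(z)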

Generic priors: "smooth images are good images."

L(z) = \sum_x \rho(\nabla z(x))

Gaussian MRF: \rho(d) = d^2. Huber MRF: \rho(d) = d^2 for |d| \le T, and \rho(d) = T^2 + 2T(|d| - T) for |d| > T.

Example-based priors: "existing images are good images." L(z) is learnt from examples; six 200x200 images already yield about 2,000,000 patch pairs.
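A small numpy sketch of this smoothness prior with the Gaussian and Huber penalties above (the threshold value is arbitrary):

import numpy as np

def gaussian_penalty(d):
    return d ** 2

def huber_penalty(d, T=0.05):
    """Quadratic near zero, linear in the tails, as in the Huber MRF above."""
    a = np.abs(d)
    return np.where(a <= T, a ** 2, T ** 2 + 2.0 * T * (a - T))

def smoothness_prior(z, rho=huber_penalty):
    """L(z): sum of rho over horizontal and vertical finite differences of the image z."""
    return rho(np.diff(z, axis=1)).sum() + rho(np.diff(z, axis=0)).sum()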

Example-based priors: pairs of high-resolution and low-resolution patches.

Model-based priors: "face images are good images when working on face images." Use a parametric model z = WX + \mu. Instead of solving z^* = \arg\min_z L(y \mid z) + L(z), solve

X^* = \arg\min_X L(y \mid WX + \mu) + L(X), \qquad z^* = WX^* + \mu

PCA: principal components analysis approximates a high-dimensional data set with a lower-dimensional subspace (data points, original axes, first and second principal components).

PCA on faces (eigenfaces): the average face, the first principal component, the second principal component, and further components; for all except the average, gray = 0, white > 0, black < 0.
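A compact illustration of PCA on face images (eigenfaces) with numpy; the data layout is an assumption for the sketch:

import numpy as np

def eigenfaces(faces, k):
    """faces: (n, h*w) matrix, one flattened face per row. Returns the mean face and top-k components."""
    mu = faces.mean(axis=0)
    X = faces - mu
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # rows of Vt are the principal directions
    return mu, Vt[:k]                                  # Vt[:k] plays the role of W^T in z = W X + mu

def encode(face, mu, W_t):
    """Coefficients X of a face in the subspace model z = W X + mu."""
    return W_t @ (face - mu)

def decode(X, mu, W_t):
    """Reconstruct a face from its PCA coefficients."""
    return mu + W_t.T @ X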

Model-based priors (continued): with the parametric model z = WX + \mu, solve X^* = \arg\min_X L(y \mid WX + \mu) + L(X) and set z^* = WX^* + \mu.

Super-resolution results: (a) input low-resolution 24x32, (b) our results, (c) cubic B-spline, (d) Freeman et al., (e) Baker et al., (f) original high-resolution 96x128.

Face models from single images. Morphable model of 3D faces: start with a catalogue of 200 aligned 3D Cyberware scans, then build a model of the average shape and texture, and of the principal variations, using PCA.

Morphable model: shape exemplars and texture exemplars; adding some variations.

Reconstruction from a single image: shape and texture priors are learnt from the database. If we guess the parameters right, the rendering must be similar to the input image. Here \rho is the set of rendering parameters, including camera pose, lighting and so on.
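In Blanz and Vetter's notation, shape and texture are written as the average plus PCA variations, and fitting balances an image-matching term against priors on the coefficients. A sketch of the model and of a simplified fitting energy (the weights \lambda_S, \lambda_T are placeholders, and the prior on the rendering parameters \rho is omitted):

S_{\text{model}} = \bar{S} + \sum_i \alpha_i\, s_i, \qquad
T_{\text{model}} = \bar{T} + \sum_i \beta_i\, t_i

E(\alpha, \beta, \rho) = \big\| I_{\text{input}} - I_{\text{model}}(\alpha, \beta, \rho) \big\|^2
 + \lambda_S \sum_i \frac{\alpha_i^2}{\sigma_{S,i}^2}
 + \lambda_T \sum_i \frac{\beta_i^2}{\sigma_{T,i}^2}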

Modifying a single image. Animating from a single image (video). Exchanging faces in images.

Exchange faces in images (example results).

Morphable model for the human body. Image-based faces (lip sync): Video Rewrite (analysis), Video Rewrite (synthesis).

Results. Morphable speech model. Video database: 2 minutes of JFK, only half of which is usable because of head rotation. Training video: "Read my lips. I never met Forrest Gump." Preprocessing: prototypes (PCA + k-means clustering); we find I_i and C_i for each prototype image.

Morphable model: an input image I is analyzed into coefficients (\alpha, \beta), and new images are synthesized from them. Synthesis. Results.

Results. Relighting faces: light is additive, so the image of a face lit by lamp #1 and lamp #2 together is the sum of the images lit by each lamp alone. Light Stage 1.0.

Light Stage 1.0: the input images cover 64x32 lighting directions, giving a per-pixel reflectance function. Relighting; occlusion; flare.
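Because the captured basis images are linear in the lighting, relighting under a new environment map reduces to a weighted sum of them; a small numpy sketch (the array shapes are assumptions):

import numpy as np

def relight(basis_images, light_weights):
    """
    basis_images : (d, h, w, 3) images, one per light-stage direction (e.g. d = 64*32)
    light_weights: (d, 3) RGB intensity of the novel environment sampled at those directions
    Returns the (h, w, 3) image of the face under the novel illumination.
    """
    return np.einsum('dc,dhwc->hwc', np.asarray(light_weights, dtype=float),
                     np.asarray(basis_images, dtype=float))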

Results. Changing viewpoints. Results (video).

3D face applications: Spider-Man 2 (real vs. synthetic). Light Stage 3 (video).

Light Stage 6. Application: The Matrix Reloaded.

References:
Paul Debevec, Rendering Synthetic Objects into Real Scenes: Bridging Traditional and Image-Based Graphics with Global Illumination and High Dynamic Range Photography, SIGGRAPH 1998.
F. Pighin, J. Hecker, D. Lischinski, D. H. Salesin, and R. Szeliski, Synthesizing Realistic Facial Expressions from Photographs, SIGGRAPH 1998, pp. 75-84.
Li Zhang, Noah Snavely, Brian Curless, Steven M. Seitz, Spacetime Faces: High Resolution Capture for Modeling and Animation, SIGGRAPH 2004.
V. Blanz and T. Vetter, A Morphable Model for the Synthesis of 3D Faces, SIGGRAPH 1999, pp. 187-194.
Paul Debevec, Tim Hawkins, Chris Tchou, Haarm-Pieter Duiker, Westley Sarokin, Mark Sagar, Acquiring the Reflectance Field of a Human Face, SIGGRAPH 2000.
Christoph Bregler, Malcolm Slaney, Michele Covell, Video Rewrite: Driving Visual Speech with Audio, SIGGRAPH 1997.
Tony Ezzat, Gadi Geiger, Tomaso Poggio, Trainable Videorealistic Speech Animation, SIGGRAPH 2002.