Free-viewpoint video renderer


Vol. [VOL], No. [ISS]: 1-13

J. Starck, J. Kilner, and A. Hilton
Centre for Vision, Speech and Signal Processing, University of Surrey, UK

Abstract. Free-viewpoint video allows interactive control of the camera viewpoint during video playback. This paper describes a state-of-the-art technique for rendering free-viewpoint video on the GPU. The algorithm requires video streams from a set of fixed or dynamic real-world video cameras. Arbitrary viewpoints are synthesised from the video streams using a three-dimensional (3D) proxy for the scene. The source code for the render system is made available as a resource to the computer graphics and computer vision community. This provides (i) the facility to simulate camera viewpoints from publicly available multiple-view video datasets and (ii) a baseline technique for free-viewpoint video synthesis in the development of interactive 3D video and 3DTV applications.

1. Introduction

In traditional video and film, events are recorded from a single fixed viewpoint. This confines the viewer to the fixed linear format dictated by the director and a flat two-dimensional (2D) viewing experience. Free-viewpoint video breaks this restriction by providing three-dimensional (3D) content with interactive control of the viewpoint during visualisation. Application areas range from on-line visualisation in mixed-reality environments [Allard et al. 06] and communications [Gross et al. 03] to production and pre-visualisation in television [Grau et al. 03], games [Starck and Hilton 07] and 3DTV [Matusik and Pfister 04].

Free-viewpoint video is synthesised using video streams from a set of real-world cameras that record a scene from different viewpoints. A novel view is rendered using a 3D proxy for the scene geometry. The proxy is rendered to a virtual viewpoint and surface texture is sampled from adjacent camera images, as illustrated in Figure 1. This approach, termed view-dependent rendering [Debevec et al. 98, Buehler et al. 01], can provide highly realistic digital images simply by resampling real-world content.

A technique is presented to synthesise free-viewpoint video from multiple-view video streams together with a time-varying geometric proxy for a scene. The technique is implemented on the GPU for video-rate view synthesis. The render system is based on developments in free-viewpoint video production of people [Starck and Hilton 07, Starck et al. 07] and incorporates state-of-the-art techniques in view-dependent rendering [Debevec et al. 96, Pulli et al. 97, Buehler et al. 01, Raskar and Low 02, Starck and Hilton 05, Eisemann et al. 08]. The source code for the renderer is released as an open-source project, providing a complete application for free-viewpoint video synthesis as a resource to the computer vision and computer graphics community. The software makes the following specific contributions:

1. A tool to synthesise camera viewpoints from publicly available multiple-view video data for the development of 3D video production technology.

2. The source for a state-of-the-art render technique as a baseline for the development of interactive 3D video techniques.

2. Free-viewpoint video

The synthesis of visually realistic digital images is a central goal in computer graphics. Research to date has seen a convergence of computer vision and computer graphics techniques to synthesise highly realistic digital content directly from video images.
Research has focused on the multiple-camera acquisition systems and the computer vision algorithms required to recover 3D scene geometry and perform virtual view synthesis, either in real time or as an off-line post-process [Starck et al. 07]. Recent advances have exploited image-based reconstruction and image-based rendering to produce free-viewpoint video at a quality comparable to captured video [Zitnick et al. 04], a process termed image-based modelling and rendering (IBMR).

Image-based reconstruction deals with the problem of deriving scene geometry from the appearance sampled in camera images. The Virtualized Reality system [Kanade et al. 97] first used 51 cameras distributed over a 5 m dome to capture and visualise the performance of an actor in a studio. Real-time systems for mixed-reality applications have since been developed using geometry derived from image silhouettes [Grau et al. 03, Allard et al. 06]. Off-line systems [Vedula et al. 05, Starck and Hilton 07] combine multiple image cues to recover accurate scene representations for view synthesis.

Figure 1. Image-based rendering: free-viewpoint visualisation is achieved by rendering a 3D scene model to a virtual viewpoint with the appearance sampled from adjacent camera images.

Image-based rendering is the process of synthesising novel views from camera images. Light-field techniques [Levoy and Hanrahan 96] perform view synthesis by directly resampling camera images, independent of scene geometry. This approach requires dense camera samples to avoid artefacts when interpolating between views, and has been applied in 3DTV applications [Matusik and Pfister 04] where the viewpoint is restricted. With sparse camera sets, scene geometry is used to provide the correspondence for image-based rendering [Debevec et al. 96]. Buehler et al. [Buehler et al. 01] provide a unified framework that extends light-field rendering to incorporate a geometric representation for virtual view synthesis.

In free-viewpoint video a 3D proxy for a scene is rendered with a view-dependent appearance derived from real-world images. The underlying problem in rendering is to composite a virtual viewpoint by blending the appearance sampled from different camera viewpoints. Debevec et al. [Debevec et al. 96] introduced the concept of view-dependent texturing, in which camera images are used as a set of view-dependent textures in rendering. Pulli et al. [Pulli et al. 97] applied view-dependent rendering to composite appearance and geometry from multiple viewpoints at the point of view synthesis. Buehler et al. [Buehler et al. 01] define a camera blend field to composite texture from camera images. Raskar and Low [Raskar and Low 02] compute global visibility constraints to feather image blending at depth discontinuities. Starck and Hilton [Starck and Hilton 05] pre-compute view-dependent shape and visibility for free-viewpoint video synthesis. Eisemann et al. [Eisemann et al. 08] synthesise free-viewpoint video in real time on the GPU. These approaches are combined here in a single state-of-the-art render system.

3. Rendering Algorithm

3.1. Overview

The input to the render technique is a set of real-world video images, a 3D proxy for the scene geometry, and the camera calibration defining the projective transformation from the scene coordinate system to each camera. Camera images are denoted I_c, c = {1...N}, where N is the total number of cameras, and the surface of the scene is denoted S. For simplicity we consider only a single frame of the time-varying data. Points in the scene x ∈ R^3 project to homogeneous image coordinates using the camera projection matrix P_c as u = P_c x. View synthesis entails rendering the surface S to a virtual viewpoint with a projective transformation P̂. The centres of projection of the real-world cameras are denoted o_c and the virtual viewpoint ô.

A virtual view is synthesised by compositing the appearance sampled from the cameras closest to the virtual camera. At the point of view synthesis, the algorithm first selects a subset of cameras to use in rendering. Surface visibility is then computed in each camera to prevent sampling appearance in the presence of occlusion.
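The projective mapping u = P_c x used throughout can be sketched in a few lines. This is a CPU illustration, not the released renderer code; the 3x4 matrix values below are hypothetical calibration numbers chosen for the example.

```python
# Project a 3D scene point into a camera image using a 3x4 projection
# matrix P_c, i.e. u = P_c x in homogeneous coordinates.
def project(P, x):
    """Map a 3D point x = (X, Y, Z) to pixel coordinates (u, v)."""
    X, Y, Z = x
    # Homogeneous image coordinates: (s*u, s*v, s) = P . (X, Y, Z, 1)
    su, sv, s = (
        sum(P[r][c] * v for c, v in enumerate((X, Y, Z, 1.0)))
        for r in range(3)
    )
    return su / s, sv / s  # perspective divide

# Toy calibration: focal length 500, principal point (320, 240),
# identity rotation, camera at the origin (hypothetical values).
P_c = [[500.0, 0.0, 320.0, 0.0],
       [0.0, 500.0, 240.0, 0.0],
       [0.0, 0.0, 1.0, 0.0]]

u, v = project(P_c, (0.1, -0.2, 2.0))  # point 2 m in front of the camera
```

In the renderer itself this mapping is performed on the GPU via projective texturing, but the arithmetic is the same.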
A camera blend field is then derived to define the relative contribution of each camera at each pixel in the virtual view. The scene is finally rendered using the camera images as view-dependent textures, compositing appearance using the camera blend fields. The successive stages in rendering are outlined in Figure 2.

Figure 2. Overview of the free-viewpoint video rendering technique.

3.2. Camera selection

A subset of M < N cameras is selected for rendering according to proximity to the virtual viewpoint ô. Typically only two or three cameras are used [Eisemann et al. 08], although with complex self-occlusions in a scene and irregular sampling of the scene appearance across camera viewpoints, a larger number of cameras can be required. Cameras are selected according to proximity in viewing direction. Given the centroid of the scene x_0, cameras are selected to minimise the angular difference between the virtual viewing direction (x_0 - ô) and the camera viewing direction (x_0 - o_c), as depicted in Figure 3.
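The camera-selection rule above can be sketched as follows. The helper names and the toy camera ring are illustrative, not taken from the released code:

```python
import math

def angle_between(a, b):
    """Angle in radians between two 3D direction vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def select_cameras(x0, o_virtual, camera_centres, M):
    """Pick the M cameras whose viewing direction (x0 - o_c) is closest
    in angle to the virtual viewing direction (x0 - o_hat)."""
    view_dir = [x0[i] - o_virtual[i] for i in range(3)]
    def score(c):
        cam_dir = [x0[i] - camera_centres[c][i] for i in range(3)]
        return angle_between(view_dir, cam_dir)
    return sorted(range(len(camera_centres)), key=score)[:M]

# Toy ring of four cameras around a scene centred at the origin.
centres = [(2, 0, 0), (0, 2, 0), (-2, 0, 0), (0, -2, 0)]
picked = select_cameras((0, 0, 0), (1.9, 0.3, 0.0), centres, 2)
# picked holds the indices of the two nearest cameras in viewing angle
```

Because the comparison uses viewing direction rather than camera position, the rule behaves sensibly even when cameras sit at different distances from the scene.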

Figure 3. Camera selection according to proximity in viewing direction.

3.3. Camera visibility

Surface visibility is computed using a soft z-buffer technique [Pulli et al. 97], also termed an ε-z-buffer [Pajarola et al. 04]. The surface S is rendered to a depth buffer D_c, c ∈ M. A fixed offset is applied in rendering to prevent z-fighting in subsequent depth tests. A conservative visibility test is required, as the scene proxy S is often inexact and errors in visibility will occur at occlusion boundaries, causing incorrect sampling of appearance across the scene surface [Starck and Hilton 05, Eisemann et al. 08]. The surface is therefore extended at occlusion boundaries, as proposed by Carranza et al. [Carranza et al. 03] and Eisemann et al. [Eisemann et al. 08]: the surface S is rendered multiple times to the depth buffer D_c with a fixed offset such that back-faces of the mesh extend the occlusion boundaries. An expected reprojection error e_c is defined and the surface is rendered up to the expected error e_c in each camera image.
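The soft depth test at the heart of this stage can be sketched on the CPU for clarity. The depth values and the eps tolerance, which stands in for the fixed offset applied in rendering, are hypothetical:

```python
def visible(depth_map, u, v, point_depth, eps):
    """Soft z-buffer test: a surface point is treated as visible in a
    camera if its depth lies within eps of the closest depth recorded
    at pixel (u, v), rather than requiring an exact match."""
    return point_depth <= depth_map[v][u] + eps

# Toy 2x2 depth buffer D_c (depths in metres, hypothetical values).
D_c = [[2.00, 2.05],
       [2.10, 9.99]]  # 9.99 ~ background / no surface

ok1 = visible(D_c, 0, 0, 2.02, eps=0.05)  # within tolerance -> visible
ok2 = visible(D_c, 0, 0, 2.50, eps=0.05)  # occluded by a nearer surface
```

The tolerance prevents a slightly inexact proxy from failing its own depth test, while points clearly behind a nearer surface are still rejected.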

Figure 4. Camera visibility: a conservative depth test is provided by rendering an ε-z-buffer and displacing back-face occluders to extend occlusion boundaries.

3.4. Camera blend fields

A blend field W_c is constructed for each camera in the virtual view to define its relative contribution in view-dependent texturing. The blend field consists of three components, as proposed by Pulli et al. [Pulli et al. 97]. The first component defines the surface visibility in the camera viewpoint; a binary weight is derived using a depth test against D_c. The second component defines the surface sampling density, such that the surface appearance is derived from the camera images with the greatest sampling rate. The third component defines the view-dependent contribution of the camera according to its proximity to the virtual viewpoint. An angular weighting scheme is adopted [Pulli et al. 97, Buehler et al. 01] in which the weights are derived from the cosine of the angle subtended at the surface, as illustrated in Figure 5.

Figure 5. View-dependent weighting: camera blend fields are constructed from three components: (i) visibility, (ii) surface sampling and (iii) view proximity.

The weight maps W_c, c ∈ M define the blend fields used to composite the appearance from the camera images in the virtual view. With an inexact surface geometry S for the scene, appearance will not necessarily be sampled consistently across camera viewpoints. Only a subset of B < M appearance samples is therefore blended at each output pixel: the weight maps W_c are thresholded against the B-th largest weight at each pixel. This enables a larger number of cameras M to be used in rendering, ensuring complete surface coverage in view synthesis with complex self-occlusions in a scene, while minimising blending artefacts by limiting the blend to B cameras at each output pixel. The thresholded weight maps W_c are finally feathered to smooth transitions in blending appearance towards boundaries [Pulli et al. 97, Raskar and Low 02, Eisemann et al. 08]. The feathered blend fields are derived using a distance filter on each weight map [Eisemann et al. 08]. Figure 6 illustrates the resulting blend fields.

3.5. Final composite

Once the blend fields are derived for the camera set M, the final view is composited. The camera images are treated as a set of projective texture maps [Debevec et al. 98] using the projective transformation P_c for each camera. The surface S is rendered to the virtual camera with texture modulated by the weight map W_c. The weights at each output pixel are normalised such that they sum to one. Surface visibility is not guaranteed in the subset M, and holes can result in the rendered surface appearance. A fill operation is

performed by rendering to texture and applying a Gaussian filter to propagate appearance into undefined regions. Holes in the final render pass are then filled from the composited texture. Figure 7 shows the resulting composite.

Figure 6. Camera blend fields: camera weighting in the virtual view is thresholded to blend a maximum of B cameras per pixel and feathered for smooth transitions at boundaries.

4. GPU Implementation

The free-viewpoint video rendering algorithm requires the camera images I_c, c = {1...N}, the camera calibration P_c and a geometric proxy for the scene S as a triangulated surface mesh at each time frame. The algorithm has several parameters: the number M < N of cameras to use in view-dependent

rendering, the maximum number B < M of samples to blend at each output pixel, and the reprojection error e_c at occlusion boundaries in the camera images. The sizes of the depth buffers D_c and blend fields W_c are predefined.

Figure 7. Final composite: camera images are used as projective texture maps and combined according to the camera blend fields, with missing surface appearance filled from surrounding regions.

The rendering technique is implemented in OpenGL/GLSL for real-time rendering. Initially, a frame buffer object is constructed to render depth, and a frame buffer plus render buffer is constructed to render to texture. Textures are also constructed for the fixed-size render targets D_c and W_c. At each time frame, a display list is built for the surface S to speed up multiple render passes. When the scene is drawn, the set of cameras M is selected according to the virtual viewpoint. The render technique then proceeds as follows.

Build textures. Texture maps for the camera images c ∈ N are built on demand. A camera image I_c is cropped to the bounding box of the scene S and copied to texture to ensure no resampling.

Camera visibility. The depth buffer D_c for each camera is rendered on demand. The texture for the buffer D_c is attached to the depth buffer object and a vertex shader is used to render the scene. A soft z-buffer is achieved

using GL_POLYGON_OFFSET_FILL to displace the surface, and a conservative visibility test is achieved using four render passes with front-face culling, displacing the surface in the camera image plane by ±e_c.

Camera blend field. The view-dependent blend field W_c for each camera is recomputed for each virtual view. A fragment shader is used to test visibility against the depth buffer D_c and to compute the view-dependent weighting at each pixel. The blend weight is thresholded to ensure only B cameras are used in blending. The weight map W_c is then feathered using ping-pong texture processing with a two-pass distance filter.

Render to texture. A texture target is bound to the render buffer and the scene is rendered using a fragment shader to combine the camera textures according to the blend fields W_c. Blend weights are normalised to sum to one at each pixel. A texture fill operation is then performed using ping-pong texture processing with a two-pass Gaussian filter sampling missing texels.

Final render. The surface is finally rendered using a fragment shader to combine the camera textures with the blend fields W_c and to fill missing fragments from the rendered texture.

5. Application

The open-source project for the free-viewpoint video renderer provides a complete code base for scene graph management and scene and image input/output, together with an OpenGL/GLSL implementation of the render technique and tools and applications to use the renderer. Example usage is presented for publicly available multiple-view datasets provided courtesy of the University of Surrey [Starck and Hilton 07, Starck et al. 07]. The data consist of 8 HD-resolution video streams recorded in a blue-screen studio, together with the reconstructed scene geometry and camera calibration.

6. View synthesis

Rendering is shown in Figure 8 for a variety of different motion sequences.
The technique was tested using an NVIDIA Quadro FX 1700 graphics card. Rendering achieved an average frame rate of 19 fps for a static frame with free-viewpoint interaction, and 1 fps while streaming image and geometry data from disk.
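As a summary of the compositing described in Sections 3.4 and 3.5, the per-pixel weighting can be sketched as follows. This is a CPU sketch rather than the GLSL implementation, and the weight components and colours are illustrative values:

```python
def composite_pixel(samples, B):
    """Blend per-camera colour samples at one output pixel.
    Each sample is (visibility, sampling, proximity, colour); the
    weight is the product of the three components, only the B largest
    weights are kept, and the survivors are normalised to sum to one."""
    weighted = [(vis * samp * prox, col) for vis, samp, prox, col in samples]
    weighted.sort(key=lambda wc: wc[0], reverse=True)
    kept = [wc for wc in weighted[:B] if wc[0] > 0.0]
    total = sum(w for w, _ in kept)
    if total == 0.0:               # hole: no camera sees this pixel
        return None                # to be filled from the composited texture
    return tuple(sum(w * c[i] for w, c in kept) / total for i in range(3))

# Three candidate cameras at one pixel (component values hypothetical).
samples = [
    (1.0, 0.9, 0.8, (1.0, 0.0, 0.0)),  # well-placed, visible camera
    (1.0, 0.5, 0.4, (0.0, 1.0, 0.0)),  # oblique camera
    (0.0, 1.0, 1.0, (0.0, 0.0, 1.0)),  # occluded: visibility weight 0
]
colour = composite_pixel(samples, B=2)
```

The occluded camera contributes nothing regardless of its other components, and the top-B threshold keeps the blend dominated by the best-placed views.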

Figure 8. The open-source free-viewpoint video renderer, demonstrating interactive visualisation of multiple-view video data.

7. Camera configuration synthesis

Rendering is shown in Figure 9 to simulate a novel camera configuration for the multiple-view video data. The synthesised data-set provides both camera calibration parameters and ground-truth geometry, in terms of the underlying 3D geometry used to synthesise the new viewpoints.

Figure 9. Application of the render technique to simulate a novel camera configuration from fixed multiple-view video data.

Acknowledgments. This work was supported by the DTI Technology Programme under "Free-viewpoint video for interactive entertainment production" TP/3/DSM/6/I/ and EPSRC Grant EP/D033926, the EU Framework 7 ICT project i3dpost, and the UK EPSRC Visual Media Platform Grant.

Web Information: further details are available on the iview and i3dpost project websites.

J. Starck, Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, UK. (J.Starck@surrey.ac.uk)

J. Kilner, Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, UK. (J.Kilner@surrey.ac.uk)

A. Hilton, Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, UK. (A.Hilton@surrey.ac.uk)

Received [DATE]; accepted [DATE].

References

[Allard et al. 06] J. Allard, J.-S. Franco, C. Menier, E. Boyer, and B. Raffin. The GrImage Platform: A Mixed Reality Environment for Interactions. IEEE International Conference on Computer Vision Systems (ICVS), p. 46, 2006.

[Buehler et al. 01] C. Buehler, M. Bosse, L. McMillan, S. Gortler, and M. Cohen. Unstructured Lumigraph Rendering. ACM Transactions on Graphics (SIGGRAPH), 2001.

[Carranza et al. 03] J. Carranza, C. Theobalt, M. Magnor, and H.-P. Seidel. Free-Viewpoint Video of Human Actors. ACM Transactions on Graphics (SIGGRAPH) 22(3), 2003.

[Debevec et al. 96] P. Debevec, C. Taylor, and J. Malik. Modeling and Rendering Architecture from Photographs: A Hybrid Geometry- and Image-Based Approach. ACM Transactions on Graphics (SIGGRAPH), 1996.

[Debevec et al. 98] P. Debevec, Y. Yu, and G. Borshukov. Efficient View-Dependent Image-Based Rendering with Projective Texture-Mapping. Proceedings of the Eurographics Workshop on Rendering, 1998.

[Eisemann et al. 08] M. Eisemann, B. Decker, M. Magnor, P. Bekaert, E. Aguiar, N. Ahmed, C. Theobalt, and A. Sellent. Floating Textures. Computer Graphics Forum (Eurographics) 27(2), 2008.

[Grau et al. 03] O. Grau, T. Pullen, and G. Thomas. A Combined Studio Production System for 3D Capturing of Live Action and Immersive Actor Feedback. IEEE Transactions on Circuits and Systems for Video Technology 14(3), 2003.

[Gross et al. 03] M. Gross, S. Würmlin, M. Naef, E. Lamboray, C. Spagno, A. Kunz, E. Koller-Meier, T. Svoboda, L. Van Gool, S. Lang, K. Strehlke, A. Vande Moere, and O. Staadt. blue-c: A Spatially Immersive Display and 3D Video Portal for Telepresence. ACM Transactions on Graphics (SIGGRAPH) 22(3), 2003.

[Kanade et al. 97] T. Kanade, P. W. Rander, and P. J. Narayanan. Virtualized Reality: Constructing Virtual Worlds from Real Scenes. IEEE Multimedia 4(1), 1997.

[Levoy and Hanrahan 96] M. Levoy and P. Hanrahan. Light Field Rendering. ACM Transactions on Graphics (SIGGRAPH) 30, 1996.

[Matusik and Pfister 04] W. Matusik and H. Pfister. 3D TV: A Scalable System for Real-Time Acquisition, Transmission, and Autostereoscopic Display of Dynamic Scenes. ACM Transactions on Graphics (SIGGRAPH), 2004.

[Pajarola et al. 04] R. Pajarola, M. Sainz, and Y. Meng. DMesh: Fast Depth Image Meshing and Warping. International Journal of Image and Graphics 4(4), 2004.

[Pulli et al. 97] K. Pulli, M. Cohen, T. Duchamp, H. Hoppe, L. G. Shapiro, and W. Stuetzle. View-Based Rendering: Visualizing Real Objects from Scanned Range and Color Data. Eurographics Workshop on Rendering (EGWR), 1997.

[Raskar and Low 02] R. Raskar and K.-L. Low. Blending Multiple Views. Pacific Conference on Computer Graphics and Applications, 2002.

[Starck and Hilton 05] J. Starck and A. Hilton. Virtual View Synthesis of People from Multiple View Video Sequences. Graphical Models 67(6), 2005.

[Starck and Hilton 07] J. Starck and A. Hilton. Surface Capture for Performance-Based Animation. IEEE Computer Graphics and Applications 27(3), 2007.

[Starck et al. 07] J. Starck, A. Maki, S. Nobuhara, A. Hilton, and T. Matsuyama. The 3D Production Studio. Technical Report VSSP-TR-4/2007, 2007.

[Vedula et al. 05] S. Vedula, S. Baker, and T. Kanade. Image-Based Spatio-Temporal Modeling and View Interpolation of Dynamic Events. ACM Transactions on Graphics 24(2), 2005.

[Zitnick et al. 04] C. L. Zitnick, S. B. Kang, M. Uyttendaele, S. A. J. Winder, and R. Szeliski. High-Quality Video View Interpolation Using a Layered Representation. ACM Transactions on Graphics (SIGGRAPH) 23(3), 2004.


Point Cloud Streaming for 3D Avatar Communication 16 Point Cloud Streaming for 3D Avatar Communication Masaharu Kajitani, Shinichiro Takahashi and Masahiro Okuda Faculty of Environmental Engineering, The University of Kitakyushu Japan 1. Introduction

More information

Scalable 3D Video of Dynamic Scenes

Scalable 3D Video of Dynamic Scenes Michael Waschbüsch Stephan Würmlin Daniel Cotting Filip Sadlo Markus Gross Scalable 3D Video of Dynamic Scenes Abstract In this paper we present a scalable 3D video framework for capturing and rendering

More information

DATA FORMAT AND CODING FOR FREE VIEWPOINT VIDEO

DATA FORMAT AND CODING FOR FREE VIEWPOINT VIDEO DATA FORMAT AND CODING FOR FREE VIEWPOINT VIDEO P. Kauff, A. Smolic, P. Eisert, C. Fehn. K. Müller, R. Schäfer Fraunhofer Institute for Telecommunications, Heinrich-Hertz-Institut (FhG/HHI), Berlin, Germany

More information

Hybrid Rendering for Collaborative, Immersive Virtual Environments

Hybrid Rendering for Collaborative, Immersive Virtual Environments Hybrid Rendering for Collaborative, Immersive Virtual Environments Stephan Würmlin wuermlin@inf.ethz.ch Outline! Rendering techniques GBR, IBR and HR! From images to models! Novel view generation! Putting

More information

ARTICLE IN PRESS. Signal Processing: Image Communication

ARTICLE IN PRESS. Signal Processing: Image Communication Signal Processing: Image Communication 24 (2009) 3 16 Contents lists available at ScienceDirect Signal Processing: Image Communication journal homepage: www.elsevier.com/locate/image Objective quality

More information

Shape and Appearance from Images and Range Data

Shape and Appearance from Images and Range Data SIGGRAPH 2000 Course on 3D Photography Shape and Appearance from Images and Range Data Brian Curless University of Washington Overview Range images vs. point clouds Registration Reconstruction from point

More information

3-D Shape Reconstruction from Light Fields Using Voxel Back-Projection

3-D Shape Reconstruction from Light Fields Using Voxel Back-Projection 3-D Shape Reconstruction from Light Fields Using Voxel Back-Projection Peter Eisert, Eckehard Steinbach, and Bernd Girod Telecommunications Laboratory, University of Erlangen-Nuremberg Cauerstrasse 7,

More information

Image-based rendering using plane-sweeping modelisation

Image-based rendering using plane-sweeping modelisation Author manuscript, published in "IAPR Machine Vision and Applications MVA2005, Japan (2005)" Image-based rendering using plane-sweeping modelisation Vincent Nozick, Sylvain Michelin and Didier Arquès Marne

More information

Synthesizing Realistic Facial Expressions from Photographs

Synthesizing Realistic Facial Expressions from Photographs Synthesizing Realistic Facial Expressions from Photographs 1998 F. Pighin, J Hecker, D. Lischinskiy, R. Szeliskiz and D. H. Salesin University of Washington, The Hebrew University Microsoft Research 1

More information

Gaze Correction for Home Video Conferencing

Gaze Correction for Home Video Conferencing Gaze Correction for Home Video Conferencing Claudia Kuster1 Tiberiu Popa1 ETH Zurich 1 Jean-Charles Bazin1 Craig Gotsman1,2 2 Technion - Israel Institute of Technology Markus Gross1 Figure 1: Top: frames

More information

Image Base Rendering: An Introduction

Image Base Rendering: An Introduction Image Base Rendering: An Introduction Cliff Lindsay CS563 Spring 03, WPI 1. Introduction Up to this point, we have focused on showing 3D objects in the form of polygons. This is not the only approach to

More information

Real-Time Free Viewpoint from Multiple Moving Cameras

Real-Time Free Viewpoint from Multiple Moving Cameras Real-Time Free Viewpoint from Multiple Moving Cameras Vincent Nozick 1,2 and Hideo Saito 2 1 Gaspard Monge Institute, UMR 8049, Marne-la-Vallée University, France 2 Graduate School of Science and Technology,

More information

View Synthesis for Multiview Video Compression

View Synthesis for Multiview Video Compression View Synthesis for Multiview Video Compression Emin Martinian, Alexander Behrens, Jun Xin, and Anthony Vetro email:{martinian,jxin,avetro}@merl.com, behrens@tnt.uni-hannover.de Mitsubishi Electric Research

More information

Image-Based Rendering

Image-Based Rendering Image-Based Rendering COS 526, Fall 2016 Thomas Funkhouser Acknowledgments: Dan Aliaga, Marc Levoy, Szymon Rusinkiewicz What is Image-Based Rendering? Definition 1: the use of photographic imagery to overcome

More information

Appearance-Based Virtual View Generation From Multicamera Videos Captured in the 3-D Room

Appearance-Based Virtual View Generation From Multicamera Videos Captured in the 3-D Room IEEE TRANSACTIONS ON MULTIMEDIA, VOL. 5, NO. 3, SEPTEMBER 2003 303 Appearance-Based Virtual View Generation From Multicamera Videos Captured in the 3-D Room Hideo Saito, Member, IEEE, Shigeyuki Baba, and

More information

Towards a Perceptual Method of Blending for Image-Based Models

Towards a Perceptual Method of Blending for Image-Based Models Towards a Perceptual Method of Blending for Image-Based Models Gordon Watson, Patrick O Brien and Mark Wright Edinburgh Virtual Environment Centre University of Edinburgh JCMB, Mayfield Road, Edinburgh

More information

View-based Rendering: Visualizing Real Objects from Scanned Range and Color Data

View-based Rendering: Visualizing Real Objects from Scanned Range and Color Data View-based Rendering: Visualizing Real Objects from Scanned Range and Color Data Kari Pulli Michael Cohen y Tom Duchamp Hugues Hoppe y Linda Shapiro Werner Stuetzle University of Washington, Seattle, WA

More information

Free Viewpoint Video Synthesis based on Visual Hull Reconstruction from Hand-Held Multiple Cameras

Free Viewpoint Video Synthesis based on Visual Hull Reconstruction from Hand-Held Multiple Cameras Free Viewpoint Video Synthesis based on Visual Hull Reconstruction from Hand-Held Multiple Cameras Songkran Jarusirisawad and Hideo Saito Department Information and Computer Science, Keio University 3-14-1

More information

4D Video Textures for Interactive Character Appearance

4D Video Textures for Interactive Character Appearance EUROGRAPHICS 2014 / B. Lévy and J. Kautz (Guest Editors) Volume 33 (2014), Number 2 4D Video Textures for Interactive Character Appearance Dan Casas, Marco Volino, John Collomosse and Adrian Hilton Centre

More information

FLY THROUGH VIEW VIDEO GENERATION OF SOCCER SCENE

FLY THROUGH VIEW VIDEO GENERATION OF SOCCER SCENE FLY THROUGH VIEW VIDEO GENERATION OF SOCCER SCENE Naho INAMOTO and Hideo SAITO Keio University, Yokohama, Japan {nahotty,saito}@ozawa.ics.keio.ac.jp Abstract Recently there has been great deal of interest

More information

Many rendering scenarios, such as battle scenes or urban environments, require rendering of large numbers of autonomous characters.

Many rendering scenarios, such as battle scenes or urban environments, require rendering of large numbers of autonomous characters. 1 2 Many rendering scenarios, such as battle scenes or urban environments, require rendering of large numbers of autonomous characters. Crowd rendering in large environments presents a number of challenges,

More information

Towards Space-Time Light Field Rendering

Towards Space-Time Light Field Rendering Towards Space-Time Light Field Rendering Huamin Wang Georgia Institute of Technology Ruigang Yang University of Kentucky Abstract So far extending light field rendering to dynamic scenes has been trivially

More information

Image-Based Deformation of Objects in Real Scenes

Image-Based Deformation of Objects in Real Scenes Image-Based Deformation of Objects in Real Scenes Han-Vit Chung and In-Kwon Lee Dept. of Computer Science, Yonsei University sharpguy@cs.yonsei.ac.kr, iklee@yonsei.ac.kr Abstract. We present a new method

More information

Geometric Modeling. Bing-Yu Chen National Taiwan University The University of Tokyo

Geometric Modeling. Bing-Yu Chen National Taiwan University The University of Tokyo Geometric Modeling Bing-Yu Chen National Taiwan University The University of Tokyo What are 3D Objects? 3D Object Representations What are 3D objects? The Graphics Process 3D Object Representations Raw

More information

Department of Computer Engineering, Middle East Technical University, Ankara, Turkey, TR-06531

Department of Computer Engineering, Middle East Technical University, Ankara, Turkey, TR-06531 INEXPENSIVE AND ROBUST 3D MODEL ACQUISITION SYSTEM FOR THREE-DIMENSIONAL MODELING OF SMALL ARTIFACTS Ulaş Yılmaz, Oğuz Özün, Burçak Otlu, Adem Mulayim, Volkan Atalay {ulas, oguz, burcak, adem, volkan}@ceng.metu.edu.tr

More information

Live Video Integration for High Presence Virtual World

Live Video Integration for High Presence Virtual World Live Video Integration for High Presence Virtual World Tetsuro OGI, Toshio YAMADA Gifu MVL Research Center, TAO IML, The University of Tokyo 2-11-16, Yayoi, Bunkyo-ku, Tokyo 113-8656, Japan Michitaka HIROSE

More information

Capturing 2½D Depth and Texture of Time-Varying Scenes Using Structured Infrared Light

Capturing 2½D Depth and Texture of Time-Varying Scenes Using Structured Infrared Light Capturing 2½D Depth and Texture of Time-Varying Scenes Using Structured Infrared Light Christian Frueh and Avideh Zakhor Department of Computer Science and Electrical Engineering University of California,

More information

3/1/2010. Acceleration Techniques V1.2. Goals. Overview. Based on slides from Celine Loscos (v1.0)

3/1/2010. Acceleration Techniques V1.2. Goals. Overview. Based on slides from Celine Loscos (v1.0) Acceleration Techniques V1.2 Anthony Steed Based on slides from Celine Loscos (v1.0) Goals Although processor can now deal with many polygons (millions), the size of the models for application keeps on

More information

On-line Free-viewpoint Video: From Single to Multiple View Rendering

On-line Free-viewpoint Video: From Single to Multiple View Rendering International Journal of Automation and Computing 05(3), July 2008, 257-267 DOI: 10.1007/s11633-008-0257-y On-line Free-viewpoint Video: From Single to Multiple View Rendering Vincent Nozick Hideo Saito

More information

Hardware-Assisted Relief Texture Mapping

Hardware-Assisted Relief Texture Mapping EUROGRAPHICS 0x / N.N. and N.N. Short Presentations Hardware-Assisted Relief Texture Mapping Masahiro Fujita and Takashi Kanai Keio University Shonan-Fujisawa Campus, Fujisawa, Kanagawa, Japan Abstract

More information

Player Viewpoint Video Synthesis Using Multiple Cameras

Player Viewpoint Video Synthesis Using Multiple Cameras Player Viewpoint Video Synthesis Using Multiple Cameras Kenji Kimura *, Hideo Saito Department of Information and Computer Science Keio University, Yokohama, Japan * k-kimura@ozawa.ics.keio.ac.jp, saito@ozawa.ics.keio.ac.jp

More information

Image-based modeling (IBM) and image-based rendering (IBR)

Image-based modeling (IBM) and image-based rendering (IBR) Image-based modeling (IBM) and image-based rendering (IBR) CS 248 - Introduction to Computer Graphics Autumn quarter, 2005 Slides for December 8 lecture The graphics pipeline modeling animation rendering

More information

VIDEO FOR VIRTUAL REALITY LIGHT FIELD BASICS JAMES TOMPKIN

VIDEO FOR VIRTUAL REALITY LIGHT FIELD BASICS JAMES TOMPKIN VIDEO FOR VIRTUAL REALITY LIGHT FIELD BASICS JAMES TOMPKIN WHAT IS A LIGHT FIELD? Light field seems to have turned into a catch-all term for many advanced camera/display technologies. WHAT IS A LIGHT FIELD?

More information

Hybrid Textons: Modeling Surfaces with Reflectance and Geometry

Hybrid Textons: Modeling Surfaces with Reflectance and Geometry Hybrid Textons: Modeling Surfaces with Reflectance and Geometry Jing Wang and Kristin J. Dana Electrical and Computer Engineering Department Rutgers University Piscataway, NJ, USA {jingwang,kdana}@caip.rutgers.edu

More information

Chapter IV Fragment Processing and Output Merging. 3D Graphics for Game Programming

Chapter IV Fragment Processing and Output Merging. 3D Graphics for Game Programming Chapter IV Fragment Processing and Output Merging Fragment Processing The per-fragment attributes may include a normal vector, a set of texture coordinates, a set of color values, a depth, etc. Using these

More information

Dynamic Point Cloud Compression for Free Viewpoint Video

Dynamic Point Cloud Compression for Free Viewpoint Video MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Dynamic Point Cloud Compression for Free Viewpoint Video Edouard Lamboray, Michael Waschbusch, Stephan Wurmlin, Hanspeter Pfister, Markus Gross

More information

CONVERSION OF FREE-VIEWPOINT 3D MULTI-VIEW VIDEO FOR STEREOSCOPIC DISPLAYS

CONVERSION OF FREE-VIEWPOINT 3D MULTI-VIEW VIDEO FOR STEREOSCOPIC DISPLAYS CONVERSION OF FREE-VIEWPOINT 3D MULTI-VIEW VIDEO FOR STEREOSCOPIC DISPLAYS Luat Do 1, Svitlana Zinger 1, and Peter H. N. de With 1,2 1 Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven,

More information

Projective Texture Mapping with Full Panorama

Projective Texture Mapping with Full Panorama EUROGRAPHICS 2002 / G. Drettakis and H.-P. Seidel Volume 21 (2002), Number 3 (Guest Editors) Projective Texture Mapping with Full Panorama Dongho Kim and James K. Hahn Department of Computer Science, The

More information

Historical Perspectives on 4D Virtualized Reality

Historical Perspectives on 4D Virtualized Reality Historical Perspectives on 4D Virtualized Reality Takeo Kanade and P. J. Narayanan Robotics Institute Carnegie Mellon University Pittsburgh, PA 15213. U. S. A. Centre for Visual Information Technology

More information

ARTICULATED 3-D MODELLING IN A WIDE-BASELINE DISPARITY SPACE

ARTICULATED 3-D MODELLING IN A WIDE-BASELINE DISPARITY SPACE ARTICULATED 3-D MODELLING IN A WIDE-BASELINE DISPARITY SPACE S. Ivekovic, E. Trucco School of Computing, University of Dundee, Dundee DD 4HN, Scotland, e-mail: {spelaivekovic,manueltrucco}@computing.dundee.ac.uk

More information

UNIVERSITÄTSBIBLIOTHEK BRAUNSCHWEIG

UNIVERSITÄTSBIBLIOTHEK BRAUNSCHWEIG UNIVERSITÄTSBIBLIOTHEK BRAUNSCHWEIG Martin Eisemann, Bernd De Decker, Anita Sellent, Edilson De Aguiar, Naveed Ahmed, Hans-Peter Seidel, Philippe Bekaert, Marcus Magnor Floating Textures Technical Report

More information

Approach to Minimize Errors in Synthesized. Abstract. A new paradigm, the minimization of errors in synthesized images, is

Approach to Minimize Errors in Synthesized. Abstract. A new paradigm, the minimization of errors in synthesized images, is VR Models from Epipolar Images: An Approach to Minimize Errors in Synthesized Images Mikio Shinya, Takafumi Saito, Takeaki Mori and Noriyoshi Osumi NTT Human Interface Laboratories Abstract. A new paradigm,

More information

International Conference on Communication, Media, Technology and Design. ICCMTD May 2012 Istanbul - Turkey

International Conference on Communication, Media, Technology and Design. ICCMTD May 2012 Istanbul - Turkey VISUALIZING TIME COHERENT THREE-DIMENSIONAL CONTENT USING ONE OR MORE MICROSOFT KINECT CAMERAS Naveed Ahmed University of Sharjah Sharjah, United Arab Emirates Abstract Visualizing or digitization of the

More information

Hardware-Accelerated Visual Hull Reconstruction and Rendering

Hardware-Accelerated Visual Hull Reconstruction and Rendering Hardware-Accelerated Visual Hull Reconstruction and Rendering Ming Li Marcus Magnor Hans-Peter Seidel Computer Graphics Group Max-Planck-Institut für Informatik Abstract We present a novel algorithm for

More information

Marker-less Real Time 3D Modeling for Virtual Reality

Marker-less Real Time 3D Modeling for Virtual Reality Marker-less Real Time 3D Modeling for Virtual Reality Jérémie Allard, Edmond Boyer, Jean-Sébastien Franco, Clément Ménier, Bruno Raffin To cite this version: Jérémie Allard, Edmond Boyer, Jean-Sébastien

More information

Hardware-accelerated Dynamic Light Field Rendering

Hardware-accelerated Dynamic Light Field Rendering Hardware-accelerated Dynamic Light Field Rendering Bastian Goldlücke, Marcus Magnor, Bennett Wilburn Max-Planck-Institut für Informatik Graphics - Optics - Vision Stuhlsatzenhausweg 85, 66123 Saarbrücken,

More information

View Synthesis for Multiview Video Compression

View Synthesis for Multiview Video Compression MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com View Synthesis for Multiview Video Compression Emin Martinian, Alexander Behrens, Jun Xin, and Anthony Vetro TR2006-035 April 2006 Abstract

More information

Daniel Vlasic Λ Hanspeter Pfister Sergey Molinov Radek Grzeszczuk Wojciech Matusik Λ

Daniel Vlasic Λ Hanspeter Pfister Sergey Molinov Radek Grzeszczuk Wojciech Matusik Λ Opacity Light Fields: Interactive Rendering of Surface Light Fields with View-Dependent Opacity Daniel Vlasic Λ Hanspeter Pfister Sergey Molinov Radek Grzeszczuk Wojciech Matusik Λ Abstract We present

More information

Real-Time Universal Capture Facial Animation with GPU Skin Rendering

Real-Time Universal Capture Facial Animation with GPU Skin Rendering Real-Time Universal Capture Facial Animation with GPU Skin Rendering Meng Yang mengyang@seas.upenn.edu PROJECT ABSTRACT The project implements the real-time skin rendering algorithm presented in [1], and

More information

Sashi Kumar Penta COMP Final Project Report Department of Computer Science, UNC at Chapel Hill 13 Dec, 2006

Sashi Kumar Penta COMP Final Project Report Department of Computer Science, UNC at Chapel Hill 13 Dec, 2006 Computer vision framework for adding CG Simulations Sashi Kumar Penta sashi@cs.unc.edu COMP 790-072 Final Project Report Department of Computer Science, UNC at Chapel Hill 13 Dec, 2006 Figure 1: (i) Top

More information

Real-time Free-Viewpoint Navigation from Compressed Multi-Video Recordings

Real-time Free-Viewpoint Navigation from Compressed Multi-Video Recordings Real-time Free-Viewpoint Navigation from Compressed Multi-Video Recordings Benjamin Meyer, Christian Lipski, Björn Scholz, Marcus Magnor Computer Graphics Lab TU Braunschweig, Germany {Meyer, Lipski, Scholz,

More information

High-Quality Interactive Lumigraph Rendering Through Warping

High-Quality Interactive Lumigraph Rendering Through Warping High-Quality Interactive Lumigraph Rendering Through Warping Hartmut Schirmacher, Wolfgang Heidrich, and Hans-Peter Seidel Max-Planck-Institut für Informatik Saarbrücken, Germany http://www.mpi-sb.mpg.de

More information

Reference Stream Selection for Multiple Depth Stream Encoding

Reference Stream Selection for Multiple Depth Stream Encoding Reference Stream Selection for Multiple Depth Stream Encoding Sang-Uok Kum Ketan Mayer-Patel kumsu@cs.unc.edu kmp@cs.unc.edu University of North Carolina at Chapel Hill CB #3175, Sitterson Hall Chapel

More information

Online Multiple View Computation for Autostereoscopic Display

Online Multiple View Computation for Autostereoscopic Display Online Multiple View Computation for Autostereoscopic Display Vincent Nozick and Hideo Saito Graduate School of Science and Technology, Keio University, Japan {nozick,saito}@ozawa.ics.keio.ac.jp Abstract.

More information

A Warping-based Refinement of Lumigraphs

A Warping-based Refinement of Lumigraphs A Warping-based Refinement of Lumigraphs Wolfgang Heidrich, Hartmut Schirmacher, Hendrik Kück, Hans-Peter Seidel Computer Graphics Group University of Erlangen heidrich,schirmacher,hkkueck,seidel@immd9.informatik.uni-erlangen.de

More information

Opacity Light Fields: Interactive Rendering of Surface Light Fields with View-Dependent Opacity

Opacity Light Fields: Interactive Rendering of Surface Light Fields with View-Dependent Opacity Opacity Light Fields: Interactive Rendering of Surface Light Fields with View-Dependent Opacity The Harvard community has made this article openly available. Please share how this access benefits you.

More information

Multi-View Image Coding in 3-D Space Based on 3-D Reconstruction

Multi-View Image Coding in 3-D Space Based on 3-D Reconstruction Multi-View Image Coding in 3-D Space Based on 3-D Reconstruction Yongying Gao and Hayder Radha Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48823 email:

More information

LAG CAMERA: A MOVING MULTI-CAMERA ARRAY

LAG CAMERA: A MOVING MULTI-CAMERA ARRAY LAG CAMERA: A MOVING MULTI-CAMERA ARRAY FOR SCENE ACQUISITION Daniel G. Aliaga Yi Xu Voicu Popescu {aliaga xu43 popescu}@cs.purdue.edu Department of Computer Science at Purdue University West Lafayette,

More information

More and More on Light Fields. Last Lecture

More and More on Light Fields. Last Lecture More and More on Light Fields Topics in Image-Based Modeling and Rendering CSE291 J00 Lecture 4 Last Lecture Re-review with emphasis on radiometry Mosaics & Quicktime VR The Plenoptic function The main

More information

Surface Modeling and Display from Range and Color Data

Surface Modeling and Display from Range and Color Data Surface Modeling and Display from Range and Color Data Kari Pulli 1, Michael Cohen 2, Tom Duchamp 1, Hugues Hoppe 2, John McDonald 1, Linda Shapiro 1, and Werner Stuetzle 1 1 University of Washington,

More information

Image-Based Modeling and Rendering. Image-Based Modeling and Rendering. Final projects IBMR. What we have learnt so far. What IBMR is about

Image-Based Modeling and Rendering. Image-Based Modeling and Rendering. Final projects IBMR. What we have learnt so far. What IBMR is about Image-Based Modeling and Rendering Image-Based Modeling and Rendering MIT EECS 6.837 Frédo Durand and Seth Teller 1 Some slides courtesy of Leonard McMillan, Wojciech Matusik, Byong Mok Oh, Max Chen 2

More information

2D/3D Freeview Video Generation for 3DTV System

2D/3D Freeview Video Generation for 3DTV System 2D/3D Freeview Video Generation for 3DTV System Dongbo Min, 1 Donghyun Kim, 1 SangUn Yun, 2 Kwanghoon Sohn,1 Yonsei University, Shinchon-dong, Seodaemun-gu, Seoul, South Korea. 1 Samsung Electronics, Suwon,

More information

Pipeline Operations. CS 4620 Lecture Steve Marschner. Cornell CS4620 Spring 2018 Lecture 11

Pipeline Operations. CS 4620 Lecture Steve Marschner. Cornell CS4620 Spring 2018 Lecture 11 Pipeline Operations CS 4620 Lecture 11 1 Pipeline you are here APPLICATION COMMAND STREAM 3D transformations; shading VERTEX PROCESSING TRANSFORMED GEOMETRY conversion of primitives to pixels RASTERIZATION

More information

Real-time Generation and Presentation of View-dependent Binocular Stereo Images Using a Sequence of Omnidirectional Images

Real-time Generation and Presentation of View-dependent Binocular Stereo Images Using a Sequence of Omnidirectional Images Real-time Generation and Presentation of View-dependent Binocular Stereo Images Using a Sequence of Omnidirectional Images Abstract This paper presents a new method to generate and present arbitrarily

More information

Image-Based Modeling and Rendering

Image-Based Modeling and Rendering Image-Based Modeling and Rendering Richard Szeliski Microsoft Research IPAM Graduate Summer School: Computer Vision July 26, 2013 How far have we come? Light Fields / Lumigraph - 1996 Richard Szeliski

More information

Real-Time Video- Based Modeling and Rendering of 3D Scenes

Real-Time Video- Based Modeling and Rendering of 3D Scenes Image-Based Modeling, Rendering, and Lighting Real-Time Video- Based Modeling and Rendering of 3D Scenes Takeshi Naemura Stanford University Junji Tago and Hiroshi Harashima University of Tokyo In research

More information

LAG CAMERA: A MOVING MULTI-CAMERA ARRAY

LAG CAMERA: A MOVING MULTI-CAMERA ARRAY LAG CAMERA: A MOVING MULTI-CAMERA ARRAY FOR SCENE ACQUISITION Daniel G. Aliaga Yi Xu Voicu Popescu {aliaga xu43 popescu}@cs.purdue.edu Department of Computer Science at Purdue University West Lafayette,

More information