Real-Time Projector Tracking on Complex Geometry Using Ordinary Imagery


Real-Time Projector Tracking on Complex Geometry Using Ordinary Imagery

Tyler Johnson and Henry Fuchs
Department of Computer Science, University of North Carolina at Chapel Hill

Abstract

Calibration techniques for projector-based displays typically require that the display configuration remain fixed, since they are unable to adapt to changes such as the movement of a projector. In this paper, we present a technique that is able to automatically recalibrate a projector in real time without interrupting the display of user imagery. In contrast to previous techniques, our approach can be used on surfaces of complex geometry without requiring the quality of the projected imagery to be degraded. By matching features between the projector and a stationary camera, we obtain a new pose estimate for the projector during each frame. Since matching features between a projector and camera can be difficult due to the nature of the images, we obtain these correspondences indirectly by first matching between the camera and an image rendered to predict what the camera will capture.

1. Introduction

Research in adaptive projector displays has enabled projection on display surfaces previously thought impractical. Raskar [16] and Bimber [3] describe general methods of correcting for the geometric distortions that occur when non-planar surfaces are used for display. Other work [2, 10, 13] has focused on eliminating the need for high-quality projection screens by compensating for the color and texture of the display surface. These techniques and others have greatly increased the versatility of the projector and brought us closer to a "project anywhere" display.

Recently, the focus has begun to shift towards robust automatic calibration methods [1, 15, 17] that require little or no interaction on the part of the user. Within this category are techniques that can perform calibration while user imagery is being projected. This allows display interruption to be avoided in the event that the display configuration changes, e.g. a projector is moved. These online auto-calibration techniques can be divided into two categories.

Figure 1. Our tracking process adapts to changes in projector pose, allowing on-the-fly display reconfiguration.

In the first are the active techniques, where calibration aids that are imperceptible to a human observer are injected into the user imagery. Cotting et al. [5, 6] embed these imperceptible calibration patterns in the projected imagery by taking advantage of the micromirror flip sequences used to form images in DLP projectors. Image intensity is modified slightly at each pixel so that a camera exposed for a small frame interval will capture the desired calibration pattern. This technique currently requires that a portion of the projector's dynamic range be sacrificed, leading to a slight degradation of the user imagery.

The second type of auto-calibration technique does not rely on the use of calibration aids and instead attempts to extract calibration information from the user-projected imagery itself. In Yang and Welch [20], features in the user imagery are matched between a pre-calibrated projector and camera and used to automatically estimate the geometry of the display surface. This technique leaves the user imagery unmodified, but relies on the presence of features in the user imagery that are suitable for matching.
To our knowledge, no existing work has demonstrated the ability to continuously track a projector in real time on complex display surface geometry without modifying the projected imagery or using fixed fiducials. In addition to allowing dynamic repositioning of the projector during display, such a system could also be used for projector-based augmentation. Here, a hand-held projector can be used to augment objects with information or to alter their appearance [17, 18]. This effectively allows a projector to be used as a graphical flashlight, where the projector can be controlled directly by the user to create imagery on previously unlit portions of a surface or to enhance image detail by moving the projector closer to the surface.

In this paper, we describe a technique that enables such a system. By matching features between the projector and a stationary camera, we re-estimate the pose of the projector continuously. Since matching features between a projector and camera directly can be difficult, we propose obtaining these correspondences indirectly by first matching between the camera and an image generated to predict what the camera will capture. We present a simple radiometric model for generating this predicted image. Our technique does not affect the quality of the projected imagery and can be performed continuously without interrupting the display.

2. Predictive Rendering for Feature Matching

Given a static camera of known calibration, where the depth at each camera pixel is also known, a set of image correspondences between the camera and a projector indirectly provides a set of 2D-3D correspondences in the projector, allowing the calibration of the projector to be determined. Unfortunately, a variety of factors complicate the process of matching between projector and camera images, as seen in Figure 4. Some of these complications include:

1. differences in resolution, aspect ratio and bit-depth
2. large camera-projector baselines
3. radiometric effects (e.g. the projector and camera transfer functions and the surface BRDF) present in the camera image, but not in the projector image

To avoid these problems, we instead perform matching between an actual image captured by the camera and a prediction of this captured image generated using graphics hardware. We will refer to the image sent to the projector as the projected image, the image captured by the camera as the captured image, and the prediction of the captured image as the predicted image. Since the displacement of a projector within a single frame is small, geometric differences between the predicted and captured images will be small as well. Also, by generating a predicted image that takes into account the radiometric properties of the display, the captured and predicted images will be similar in intensity, leading to better matching results. By reversing the process used to generate the predicted image, we can map features in the predicted image to their original locations in the projected image, yielding the 2D-3D correspondences that allow us to calibrate the projector.

Figure 2. Mapping projector pixels to camera pixels.

Geometric Prediction

The first step in generating the predicted image is to determine the mapping from projector pixels to camera pixels. This allows us to warp the projected image into the frame of the camera as if the camera had observed the projected imagery. This mapping is completely defined by the calibration of the projector and camera and the geometry of the display surface, and is the same process needed to determine the warping required to compensate for the display surface geometry when displaying a desired image to a viewer.
In the case of generating the predicted image, we map between the projector and the camera instead of between the projector and the viewer. We accomplish the warping between the camera and projector using the second pass of the two-pass rendering technique proposed by Raskar [16]. Using the projection matrix of the projector, we project the texture coordinates of the projected image onto the display surface geometry and render the result using the projection matrix of the camera. An illustration of this process is provided in Figure 2.
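To make the mapping concrete, below is a minimal CPU sketch of this geometric prediction step, assuming a per-camera-pixel geometry map of display-surface points (the same structure used later in Section 4) and a 3x4 projector projection matrix. The function name and array layouts are illustrative assumptions; the actual system performs this warp on the GPU with two-pass rendering.

```python
import numpy as np

def predict_geometric(geometry_map, P_proj, projected_img):
    """Warp the projected image into the camera frame (geometric prediction only).

    geometry_map  : (H, W, 3) 3D display-surface point seen by each camera pixel
    P_proj        : (3, 4) projector projection matrix
    projected_img : (h, w, 3) image currently being sent to the projector
    """
    H, W, _ = geometry_map.shape
    h, w, _ = projected_img.shape

    # Homogeneous surface points, one per camera pixel.
    X = np.hstack([geometry_map.reshape(-1, 3), np.ones((H * W, 1))])  # (H*W, 4)

    # Project every surface point into the projector image plane.
    x = (P_proj @ X.T).T                                               # (H*W, 3)
    u = x[:, 0] / x[:, 2]
    v = x[:, 1] / x[:, 2]

    # Nearest-neighbor lookup of the projector pixel; the GPU version instead
    # filters over the footprint of the camera pixel in the projected image.
    ui = np.clip(np.round(u).astype(int), 0, w - 1)
    vi = np.clip(np.round(v).astype(int), 0, h - 1)
    pix = projected_img[vi, ui].astype(float)                          # (H*W, 3)

    # Surface points outside the projector frustum receive no light.
    outside = (u < 0) | (u >= w) | (v < 0) | (v >= h) | (x[:, 2] <= 0)
    pix[outside] = 0.0
    return pix.reshape(H, W, 3)
```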

Radiometric Prediction

As Tables 1 and 2 show, incorporating a radiometric simulation into the generation of the predicted image can result in an immense improvement in feature matching performance, especially for rendered imagery. The radiometric model we use assumes that the projector can be reasonably approximated as a point light source, which implies that each point on the display surface receives illumination from only one direction. We will additionally assume a display surface with a uniform, diffuse BRDF, since this type of surface is commonly used for projective display. We also ignore any contribution to the captured image from indirect illumination.

For a point $X$ on the display surface, we denote its projection in the camera and projector images as $x_c$ and $x_p$. Given one of these points, the locations of the other two are easily determined using the calibration of the camera and projector and the model of the display surface.

The radiometric value measured by a camera sensor is irradiance, the amount of energy per unit area. The irradiance within a sensor pixel is integrated over time to produce the sensor's output. A pixel's intensity value in the final image may, however, be a non-linear function of the sensor output. The goal of our radiometric simulation is then to predict the irradiance arriving at each camera pixel and apply the transfer function of the camera, also called its input-output response function or simply its response. Let $R_c$ be the response function of the camera and $E(x_c)$ the irradiance at a camera pixel $x_c$. The intensity in the captured camera image $M_c(x_c)$ is then

$$M_c(x_c) = R_c(E(x_c)). \quad (1)$$

The irradiance at a point on the camera sensor is a function of the scene radiance, the energy per unit area per unit solid angle of the source. Assuming a small aperture and a thin lens, the irradiance $E(x)$ at a point $x$ on the camera sensor is directly proportional to the scene radiance $L$ arriving at $x$ [9]. Specifically, this proportionality is

$$E(x) = L\,\frac{\pi}{4}\,\frac{\cos^4\theta}{n^2}, \quad (2)$$

where $n$ is the f-number of the lens and $\theta$ is the angle between the normal of the sensor plane and the unit vector from $x$ to the center of the lens. This equation indicates a variation in the proportionality between scene radiance and sensor irradiance over the camera's field-of-view. We have found this variation to be negligible and instead use the simplification

$$E(x_c) = \alpha\, L(X, \overrightarrow{XC}), \quad (3)$$

where $L(X, \overrightarrow{XC})$ is the radiance at $X$ in the direction of the camera center $C$, and $\alpha$ is some constant.

The value of $L(X, \overrightarrow{XC})$ is the result of illumination from the projector being reflected by the display surface towards the camera and depends on the surface BRDF. Since we consider the projector to be a point light source, the point $X$ receives illumination only along the direction from the projector center, $\overrightarrow{XP}$. This, combined with the assumption of a uniform, diffuse surface, leads to

$$L(X, \overrightarrow{XC}) = \rho\, E(X, \overrightarrow{XP}), \quad (4)$$

where $E(X, \overrightarrow{XP})$ is the incident irradiance at $X$ due to the projector and $\rho$ is the surface BRDF.

The incident irradiance at a surface point due to a point light source depends on the radiant intensity of the light source in the direction of the surface point, the orientation of the surface normal with respect to the incoming light direction, and the distance to the light source. If $I(P, \overrightarrow{PX})$ is the radiant intensity of the projector in the direction of point $X$, then

$$E(X, \overrightarrow{XP}) = I(P, \overrightarrow{PX})\,\frac{\cos\theta}{r^2}, \quad (5)$$

where $\theta$ is the angle between the surface normal at $X$ and the direction $\overrightarrow{XP}$, and $r$ is the distance between $X$ and the projector. We model the radiant intensity $I(P, \overrightarrow{PX})$ as a function of the intensity of the projected image at pixel $x_p$, $M_p(x_p) = [r, g, b] \in [0,1]^3$, and the response function $R_p$ of the projector such that

$$I(P, \overrightarrow{PX}) = S(x_p)\,\big[R_p(M_p(x_p)) \cdot [I_r, I_g, I_b]\big], \quad (6)$$

where $I_{r..b}$ are the maximum radiant intensities of the red, green and blue color channels of the projector and $S \in [0,1]$ is a per-pixel attenuation map. The purpose of $S$, which we refer to as the projector intensity profile, is to model changes in brightness over a projector's field-of-view due to effects such as vignetting.
Combining the above equations, the final intensity value measured at each camera pixel as a function of the intensity at its corresponding location in the projected image is

$$M_c(x_c) = R_c\!\left(\frac{\cos\theta}{r^2}\, S(x_p)\,\big[R_p(M_p(x_p)) \cdot [I'_r, I'_g, I'_b]\big]\right), \quad (7)$$

where we have combined $\alpha$, $\rho$ and the $I_{r..b}$ into the terms $I'_{r..b}$. Since the field-of-view of each camera pixel may encompass more than a single projector pixel, we filter the projected image to obtain an average intensity value that we use to evaluate Equation (7).
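As a concrete illustration, the following sketch evaluates Equation (7) for a single camera pixel, assuming the geometric mapping above has already supplied the corresponding projector pixel, its filtered color, and the per-pixel geometric terms. The lookup-table representation of the response functions and the variable names are assumptions made only for this example.

```python
import numpy as np

def predict_intensity(mp_rgb, cos_theta, r2, S_xp, R_proj_lut, R_cam_lut, I_rgb):
    """Evaluate Equation (7) for one camera pixel.

    mp_rgb     : filtered projected-image color M_p(x_p), values in [0, 1]
    cos_theta  : cosine of the angle between surface normal and projector direction
    r2         : squared distance from the surface point to the projector
    S_xp       : projector intensity profile S sampled at x_p
    R_proj_lut : (256, 3) projector response lookup table, one column per channel
    R_cam_lut  : (N,) camera response lookup table, irradiance in [0, 1] -> intensity
    I_rgb      : (3,) combined brightness constants I'_{r..b} from Equation (7)
    """
    # Apply the projector response per channel: R_p(M_p(x_p)).
    idx = np.clip((np.asarray(mp_rgb) * 255).astype(int), 0, 255)
    linear_rgb = R_proj_lut[idx, np.arange(3)]

    # Irradiance at the camera pixel, up to the constants folded into I'_{r..b}.
    irradiance = (cos_theta / r2) * S_xp * float(np.dot(linear_rgb, I_rgb))

    # Apply the camera response to obtain the predicted intensity M_c(x_c).
    cam_idx = int(np.clip(irradiance, 0.0, 1.0) * (len(R_cam_lut) - 1))
    return R_cam_lut[cam_idx]
```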

3. Calibration

In this section, we describe our process of calibrating the geometric and radiometric parameters necessary to apply the technique we use to generate the predicted image.

Geometric Calibration

We accomplish initial geometric calibration in an up-front step by observing projected structured light patterns with a stereo camera pair. This process yields a set of correspondences between the projector and camera pair that allows us to reconstruct the geometry of the display surface and calibrate the projector. Raskar provides a thorough discussion of this process in [16]. To obtain an accurate model of our room-like display surface, seen in Figure 2, we use the RANSAC-based plane fitting technique introduced by Quirk [14]. Any other reconstruction method that gives a geometric description of the surface could also be used.

Radiometric Calibration

In addition to the geometric calibration, generation of the predicted image requires estimation of the projector and camera response, the projector intensity profile, and the terms $I_{r..b}$. Since these are intrinsic properties, they can be calibrated once in an up-front process and then used in arbitrary geometric configurations. Since we use the stereo camera pair for geometric calibration, out of convenience we also use one of these cameras for projector tracking. To prevent indirect scattering from affecting the results, we perform radiometric calibration by projecting on a portion of the display surface that produces low levels of these effects. In our case, we project onto a single plane and use our geometric calibration process to establish the geometric relationship between the camera, projector and display surface.

Projector Response

The first step in our radiometric calibration process is to calibrate the projector response. We accomplish this manually by projecting an image where half the image is filled with some intensity I and the other half with a dither pattern computed using error-diffusion dithering [8] to have a certain proportion of black and full-intensity pixels. By adjusting the proportion of black and white pixels until the intensity of the image appears consistent throughout, we can determine what proportion of the projector's maximum output intensity the intensity I represents. This is done for a number of intensities, and the resulting data points are interpolated with a spline to reconstruct the response function of the projector.

Intensity Profile

We recover the intensity profile of the projector by projecting a sequence of solid grayscale images of increasing intensity. To remove the effect of the projector response on the captured camera images, we linearize the output of the projector during this step using the inverse of the projector response. At each pixel in a captured camera image, we have

$$\upsilon(x_c) = \frac{r^2}{\cos\theta}\, R_c^{-1}(M_c(x_c)) = S(x_p)\,\big[M_p(x_p) \cdot [I_r, I_g, I_b]\big]. \quad (8)$$

The value of $\upsilon$ at each camera pixel can be evaluated given the response function of the camera. Since we have linearized the projector response when projecting the sequence of grayscale images, we can use the captured camera images to get a rough estimate of the camera response. We do this by finding the median intensity value in each camera image and using it as the camera response for the grayscale intensity of the projected image. In the next section, we describe our technique for refining the camera response estimate.

Next, we compute $\upsilon(x_c)$ at each pixel of the camera images. Let $x_m$ be the pixel in an image where the maximum value of $\upsilon$ occurs. Dividing the value of $\upsilon$ at each pixel by $\upsilon(x_m)$ gives the relation

$$\frac{S(x_p)}{S(x_{p_m})} = \frac{\cos\theta_m\, r^2\, R_c^{-1}(M_c(x_c))}{\cos\theta\, r_m^2\, R_c^{-1}(M_c(x_m))}, \quad (9)$$

with the subscript m indicating values for pixel $x_m$. The $M_p(x_p) \cdot [I_r, I_g, I_b]$ terms in both the numerator and denominator cancel, since the projected image was uniform in intensity. Because $\upsilon(x_m)$ is a maximum in the camera image, $S(x_{p_m}) = 1$, and we can compute an estimate of the intensity profile of the projector at each pixel using a single camera image. Since the computed intensity profile may vary between images, we use the average of the intensity profiles extracted from each image as our final estimate. It is important in this step to exclude camera images containing saturated pixels, which would affect the computation of the intensity profile. We also blur the extracted profile with a small Gaussian kernel to remove noise.
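A sketch of this intensity-profile recovery from a single linearized grayscale projection, following Equations (8) and (9), is given below. It assumes per-camera-pixel maps of cos(theta) and r^2 derived from the geometric calibration and a lookup table holding the current inverse camera response estimate; in practice the profiles from several images would be averaged and blurred as described above. All names are illustrative.

```python
import numpy as np

def intensity_profile_from_image(captured, cos_theta_map, r2_map, inv_cam_response):
    """Estimate the projector intensity profile S from one captured grayscale image.

    captured         : (H, W) 8-bit camera image of a solid, linearized projection
    cos_theta_map    : (H, W) cos(theta) at the surface point seen by each camera pixel
    r2_map           : (H, W) squared distance from that surface point to the projector
    inv_cam_response : (256,) lookup table for the current estimate of R_c^{-1}
    """
    # Equation (8): upsilon(x_c) = (r^2 / cos(theta)) * R_c^{-1}(M_c(x_c)).
    upsilon = (r2_map / cos_theta_map) * inv_cam_response[captured]

    # Saturated pixels would bias the maximum; mask them out.
    upsilon = np.where(captured < 255, upsilon, 0.0)

    # Equation (9): S = 1 at the pixel with maximal upsilon; divide through by it.
    profile = upsilon / upsilon.max()

    # Note: this profile is indexed by camera pixel; it would still need to be
    # resampled into the projector frame via the camera-to-projector mapping.
    return np.clip(profile, 0.0, 1.0)
```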
Projector Brightness and Camera Response

The remaining terms to be calibrated are the $I_{r..b}$ and the refinement of the camera response. The process we use for calibrating the camera response is based on the technique described in Debevec [7]. In addition to the grayscale images used to calibrate the intensity profile of the projector, we also project a series of solid color images of different intensities, linearizing the output of the projector as we did before. From Equation (7), at each pixel in a camera image taken of one of these solid color images, we have a linear equation in the $I_{r..b}$ and one of the 256 discrete values that form the domain of $R_c^{-1}$. If the projected intensity is $(r, g, b)$ and the intensity measured by the camera at some pixel is $i$, we have

$$s\,r\,I_r + s\,g\,I_g + s\,b\,I_b - R_c^{-1}(i) = 0, \quad (10)$$

where $s = \frac{\cos\theta}{r^2} S(x_p)$.

Let $g_i = R_c^{-1}(i)$ and let $z_k$ be a vector of length 256 composed of all zeros except that it contains the value 1 at the location corresponding to the intensity value measured by the camera in the kth equation. The system of equations can then be put into matrix form as

$$\begin{bmatrix} s_1 r_1 & s_1 g_1 & s_1 b_1 & z_1 \\ s_2 r_2 & s_2 g_2 & s_2 b_2 & z_2 \\ \vdots & \vdots & \vdots & \vdots \\ s_n r_n & s_n g_n & s_n b_n & z_n \end{bmatrix} \begin{bmatrix} I_r \\ I_g \\ I_b \\ g_1 \\ \vdots \\ g_{256} \end{bmatrix} = 0. \quad (11)$$

This system can be efficiently solved using SVD to produce the best solution vector in the least-squares sense. In practice, it may happen that no camera pixel of a certain intensity can be found in any of the camera images. In this case, the values of $R_c^{-1}$ for which data exist can be calibrated and then interpolated to produce the missing values. To enforce smoothness between the $g_i$'s, we also add second-derivative terms to Equation (11) of the form $\lambda(g_{i-1} - 2g_i + g_{i+1})$, where $\lambda$ can be used to control the smoothness. Since the solution vector is only defined up to an arbitrary scale and sign, we choose the sign such that the solution vector is positive and the scale such that the $g_i$ lie in the range [0,1]. The response function of the camera is easily obtained by inverting the $g_i$'s.
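The sketch below shows one way the system of Equations (10) and (11) could be assembled and solved with SVD, including the smoothness rows and the scale and sign normalization. The constraint-tuple format, the default value of lambda, and the helper name are assumptions made for illustration, not the paper's implementation.

```python
import numpy as np

def solve_brightness_and_response(constraints, lam=10.0, n_levels=256):
    """Solve the homogeneous system of Equation (11) for I_{r..b} and R_c^{-1}.

    constraints : iterable of (s, r, g, b, i) tuples, one per selected camera pixel,
                  where s = cos(theta) * S(x_p) / r^2, (r, g, b) is the projected
                  color and i is the measured camera intensity in 0..n_levels-1.
    """
    rows = []
    for s, r, g, b, i in constraints:
        row = np.zeros(3 + n_levels)
        row[0:3] = s * np.array([r, g, b])
        row[3 + int(i)] = -1.0        # encodes the -R_c^{-1}(i) term of Equation (10)
        rows.append(row)

    # Second-derivative smoothness terms lambda * (g_{i-1} - 2 g_i + g_{i+1}).
    for i in range(1, n_levels - 1):
        row = np.zeros(3 + n_levels)
        row[3 + i - 1: 3 + i + 2] = lam * np.array([1.0, -2.0, 1.0])
        rows.append(row)

    A = np.vstack(rows)
    _, _, vt = np.linalg.svd(A, full_matrices=False)
    x = vt[-1]                        # least-squares solution of Ax = 0

    # Fix the arbitrary sign and scale so that the g_i lie in [0, 1].
    if np.median(x[3:]) < 0:
        x = -x
    x = x / x[3:].max()
    return x[0:3], x[3:]              # (I_r, I_g, I_b) and the inverse camera response
```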
In our testing, we captured images of 17 different intensities each of red, green, blue, and yellow. In choosing the camera image pixels we use as constraints in Equation (11), we have created an automated process that selects pixels uniformly across color, intensity, and image location.

Error

To estimate the error in our calibration and validate our model, we used the calibration to predict the camera images used in calibrating the projector intensity profile. Comparing the predicted images to the actual camera images, we found the overall average error to be just under 4 camera intensity values.

4. Rendering

We implemented our radiometric model in a real-time GPU pixel shader that allows predicted images to be produced at interactive rates. The shader takes as input all of the radiometric parameters, with the projector and camera response stored as 1D textures and the intensity profile stored as a 2D texture. The geometric parameters are two 2D floating-point textures called the geometry map and the normal map, which store the (x, y, z) position and normal of the display surface at each camera pixel. The projection matrix of the projector and its center-of-projection are also input parameters.

To predict the intensity measured by the camera at each pixel, the shader program first estimates the average color of the projected image pixels that fall within the extent of each camera pixel. At each predicted image pixel, we look up the display surface vertex X in the geometry map and project it into the projected image using the projector calibration to obtain a texture coordinate. We repeat this process at three neighboring pixels, allowing us to compute two derivative vectors indicating the size of the region in the projected image that should be filtered. We pass this information to the texture mapping hardware, which filters the projected image and the intensity profile using a kernel of the appropriate size. We next apply the projector response to the filtered color by performing a per-channel look-up in the projector response texture. Using the center-of-projection of the projector and the value of X, the value of $r^2$ for the pixel is easily computed. To compute the value of $\cos\theta$, we look up the display surface normal in the normal map and take the dot product of the normal and the unit vector from X to the projector center. Composing the rest of the model terms together, we do a final look-up in the camera response texture to get the final predicted intensity for the pixel.

The shader program then returns the predicted intensity in one color channel of the predicted image, with the other two channels left black. This allows us to read back the predicted image to the CPU as a single-channel texture, greatly improving read-back efficiency. Figure 3 shows a captured camera image and a predicted image rendered using our technique. We computed the difference between these images to be 15.1 intensity levels on average per pixel, with a standard deviation of 3.3. The images appear very similar, with the most significant difference being the higher contrast in the rendered image. The rendered image is also slightly darker. We attribute both of these differences to indirect scattering effects caused by the complex display surface geometry, which are present in the camera image but not reproduced in the predicted image.

Figure 3. An actual image captured by a camera (left) and a rendered prediction of the camera image (right).

5. Continuous Projector Calibration

To track the projector motion in real time using the predicted image, we first detect a set of features in the captured camera image during each frame. As described previously, by matching features between the captured image and the predicted image, we indirectly obtain a set of 2D-3D projector point correspondences that allows us to estimate the pose of the projector. For feature detection in the captured image, we use Intel's OpenCV library implementation of the feature detection algorithm introduced by Shi and Tomasi [19].

To obtain correspondences for the detected features in the predicted image, we use pyramidal Lucas-Kanade tracking [12, 4], also part of OpenCV. To transform these correspondences into 2D-3D projector point correspondences, we use the same geometry map used in rendering the predicted image, which stores the mapping between camera pixels and 3D points on the display surface. To obtain the 3D correspondences, we perform a look-up in the geometry map for each feature in the captured image. To obtain the 2D correspondences for these points, we reverse the mapping from projected image pixels to predicted image pixels. This is done by consulting the geometry map using the feature locations in the predicted image and projecting the resulting points into the projected image using the projector calibration.

Pose Estimation

We assume that any motion of the projector does not affect its intrinsic calibration, requiring only the 6 parameters comprising the extrinsic calibration (position and orientation) to be re-estimated. To calculate the projector pose from the 2D-3D correspondences, we use a common technique from analytic photogrammetry described in Haralick [11]. The technique takes as input a set of 2D-3D correspondences as well as an initial estimate of the pose and minimizes the sum of squared reprojection errors using a nonlinear least-squares technique. While the need for an initial guess is a limitation of the technique, it has the advantage of being able to compute the correct pose in situations where a linear technique will fail, such as when all 3D correspondences are coplanar. We have found that using the previous pose of the projector as an initial guess for the current pose is adequate for convergence even when the projector pose is changing rapidly.

Outliers

We have found it essential to incorporate RANSAC into the pose estimation to prevent false correspondences from affecting the pose estimation results. In our RANSAC approach, we estimate the projector pose using three correspondences selected at random and record as inliers the correspondences whose resulting reprojection error is less than 10 projector pixels. We repeat this iteration until there is a 99% chance that at least one iteration has chosen a set of three correct correspondences. We then take the largest inlier set over all iterations and perform a final pose estimate.

Filtering

Since we do not synchronize the projector and camera, it is possible for the captured and predicted images to be out of step by a frame. This introduces error into the correspondences used to calibrate the projector and can lead to instability if the pose estimates are not filtered. In our experimentation, we have found that limiting the amount of change made to the pose estimate each frame removes any instability caused by an unsynchronized camera and projector. After the change in pose has been calculated, we add it to the current pose estimate at 10% of its original scale. We have found this to ensure tracking stability while still providing a responsive system.
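The per-frame flow of this section can be sketched with OpenCV primitives as follows. The helper name, argument layout, and damping value are illustrative assumptions; note also that the paper solves the pose with the photogrammetric technique of Haralick [11] inside its own RANSAC loop, whereas this sketch substitutes cv2.solvePnPRansac as a stand-in.

```python
import cv2
import numpy as np

def update_projector_pose(captured, predicted, geometry_map, K_proj, rvec, tvec,
                          blend=0.1, max_feats=75):
    """One tracking iteration in the spirit of Section 5, using OpenCV primitives.

    captured, predicted : single-channel images of identical size
    geometry_map        : (H, W, 3) 3D display-surface point per camera pixel
    K_proj              : (3, 3) projector intrinsics (assumed unchanged by motion)
    rvec, tvec          : current pose estimate as (3, 1) float64 arrays
    """
    H, W = geometry_map.shape[:2]

    # Shi-Tomasi features in the captured camera image.
    pts = cv2.goodFeaturesToTrack(captured, maxCorners=max_feats,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return rvec, tvec

    # Pyramidal Lucas-Kanade: find the same features in the predicted image.
    matched, status, _ = cv2.calcOpticalFlowPyrLK(captured, predicted, pts, None)
    good = status.ravel() == 1
    cam_pts = pts[good].reshape(-1, 2)
    pred_pts = matched[good].reshape(-1, 2)
    if len(cam_pts) < 4:
        return rvec, tvec

    # 3D points: geometry-map lookup at the captured-feature locations.
    ij_c = np.clip(np.round(cam_pts[:, ::-1]).astype(int), 0, [H - 1, W - 1])
    obj = geometry_map[ij_c[:, 0], ij_c[:, 1]].astype(np.float64)

    # 2D projector points: geometry-map lookup at the predicted-feature locations,
    # re-projected into the projector image with the current calibration.
    ij_p = np.clip(np.round(pred_pts[:, ::-1]).astype(int), 0, [H - 1, W - 1])
    obj_p = geometry_map[ij_p[:, 0], ij_p[:, 1]].astype(np.float64)
    img_p, _ = cv2.projectPoints(obj_p, rvec, tvec, K_proj, None)
    img_p = img_p.reshape(-1, 2)

    # Robust pose estimate from the resulting 2D-3D correspondences.
    ok, rvec_new, tvec_new, _ = cv2.solvePnPRansac(
        obj, img_p, K_proj, None, rvec.copy(), tvec.copy(),
        useExtrinsicGuess=True, reprojectionError=10.0)
    if not ok:
        return rvec, tvec

    # Damped update (Section 5): apply only a fraction of the change per frame.
    # (Linear blending of Rodrigues vectors is a simplification.)
    rvec = rvec + blend * (rvec_new - rvec)
    tvec = tvec + blend * (tvec_new - tvec)
    return rvec, tvec
```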
6. Results

We tested the tracking ability of our system using two dynamic applications. The first application displays a rotating panorama of a real environment and contains many strong features. The second application is a virtual flight simulator that displays rendered 3D geometry and is relatively sparse in strong features.

Figure 4. (a) An image captured by a camera. (b) An image rendered to predict the captured image. (c) The projected image that was captured. (d) A contrast-enhanced difference image of the captured and predicted images.

Table 1. Feature matching performance using both geometric and radiometric prediction of the captured image. (Columns: Imagery, Avg. Feature Count, Avg. % Inliers; rows: Panorama (front), Panorama (back), Flight Simulator.)

Table 2. Feature matching performance using only geometric prediction of the captured image. (Columns: Imagery, Avg. Feature Count, Avg. % Inliers; rows: Panorama (front), Panorama (back), Flight Simulator.)

To test the feature detection and matching capability of the system, we recorded the number of features successfully matched between the predicted and camera images and the number of these matches found to be correct by RANSAC in each frame. Table 1 shows the results we collected over 1000 frames for both applications. The maximum number of features to detect and match was limited to 200 for this experiment, which we have found to be more than adequate for tracking. While the results are considerably better for the panorama application, we were not able to notice a visible difference in the quality of the tracking results. This is because the lower number of matched features in the flight simulator is still sufficient to accurately calculate the projector pose and because RANSAC succeeds in removing the incorrect correspondences.

To estimate the improvement in feature matching gained by performing radiometric prediction, we ran the same tests using predicted images generated using only geometric prediction.

The predicted images were converted to grayscale using the NTSC coefficients for color-to-grayscale conversion. The results of this experiment are presented in Table 2. Note that during this experiment, tracking for the flight simulator was lost after only 6 frames.

The performance of the tracker is heavily dependent on the number of features that the system attempts to detect and match each frame. Setting the maximum number of features to detect and match to 75, we were able to obtain excellent tracking results and measured the performance of the tracker to be approximately 27 Hz for both applications.

7. Summary and Future Work

We have described a new technique that enables a projector to be tracked in real time without affecting the quality of the projected imagery. This technique allows dynamic repositioning of the projector without display interruption. By matching features between the projector and a static camera, the pose of the projector is estimated each frame. Since obtaining correspondences between the projector and camera can be difficult, we obtain these correspondences indirectly by matching between the camera image and an image rendered using the current calibration information to predict the image the camera will capture. We describe a simple radiometric model that can easily be implemented on graphics hardware to produce this predicted image.

Our current technique has certain limitations we would like to overcome in future work. Our radiometric calibration process currently requires a uniform display surface albedo and a portion of the display surface to project on that does not introduce substantial indirect scattering effects. A further limitation is the reliance on the presence of features in the user imagery that are suitable for matching. We believe, however, that exposing the camera for a short frame interval will expose additional features in the user imagery when using a DLP projector, since small differences in intensity can be magnified greatly by the mirror flips that occur.

8. Acknowledgements

The authors wish to thank Herman Towles and Rick Skarbez for their insights and suggestions on this paper.

References

[1] M. Ashdown, M. Flagg, R. Sukthankar, and J. Rehg. A flexible projector-camera system for multi-planar displays.
[2] O. Bimber, A. Emmerling, and T. Klemmer. Embedded entertainment with smart projectors. IEEE Computer, 38(1):56-63.
[3] O. Bimber, G. Wetzstein, A. Emmerling, and C. Nitschke. Enabling view-dependent stereoscopic projection in real environments. In 4th IEEE and ACM International Symposium on Mixed and Augmented Reality, pages 14-23.
[4] J.-Y. Bouguet. Pyramidal implementation of the Lucas-Kanade feature tracker: Description of the algorithm. Technical Report, Intel Corporation.
[5] D. Cotting, M. Naef, M. Gross, and H. Fuchs. Embedding imperceptible patterns into projected images for simultaneous acquisition and display. In International Symposium on Mixed and Augmented Reality.
[6] D. Cotting, R. Ziegler, M. Gross, and H. Fuchs. Adaptive instant displays: Continuously calibrated projections using per-pixel light control. In Eurographics.
[7] P. E. Debevec and J. Malik. Recovering high dynamic range radiance maps from photographs. Computer Graphics, 31 (Annual Conference Series).
[8] R. W. Floyd and L. Steinberg. An adaptive algorithm for spatial grayscale. Journal of the Society for Information Display, 17(2):75-77.
[9] D. A. Forsyth and J. Ponce. Computer Vision: A Modern Approach. Prentice Hall, 1st edition.
[10] M. D. Grossberg, H. Peri, S. K. Nayar, and P. N. Belhumeur. Making one object look like another: Controlling appearance using a projector-camera system. In IEEE Computer Vision and Pattern Recognition, volume 1.
[11] R. Haralick and L. Shapiro. Computer and Robot Vision.
[12] B. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. In Proceedings of the Imaging Understanding Workshop.
[13] S. K. Nayar, H. Peri, M. D. Grossberg, and P. N. Belhumeur. A projection system with radiometric compensation for screen imperfections. In ICCV Workshop on Projector-Camera Systems (PROCAMS), October.
[14] P. Quirk, T. Johnson, R. Skarbez, H. Towles, F. Gyarfas, and H. Fuchs. RANSAC-assisted display model reconstruction for projective display. In Emerging Display Technologies.
[15] A. Raij and M. Pollefeys. Auto-calibration of multi-projector display walls. In International Conference on Pattern Recognition.
[16] R. Raskar, M. S. Brown, R. Yang, W.-C. Chen, G. Welch, H. Towles, W. B. Seales, and H. Fuchs. Multi-projector displays using camera-based registration. In IEEE Visualization.
[17] R. Raskar, J. van Baar, P. Beardsley, T. Willwacher, S. Rao, and C. Forlines. iLamps: Geometrically aware and self-configuring projectors. In ACM SIGGRAPH.
[18] R. Raskar, G. Welch, K.-L. Low, and D. Bandyopadhyay. Shader lamps: Animating real objects with image-based illumination. In Eurographics Workshop on Rendering.
[19] J. Shi and C. Tomasi. Good features to track. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR 94), Seattle, June 1994.
[20] R. Yang and G. Welch. Automatic and continuous projector display surface estimation using every-day imagery. In 9th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, 2001.


More information

Video Mosaics for Virtual Environments, R. Szeliski. Review by: Christopher Rasmussen

Video Mosaics for Virtual Environments, R. Szeliski. Review by: Christopher Rasmussen Video Mosaics for Virtual Environments, R. Szeliski Review by: Christopher Rasmussen September 19, 2002 Announcements Homework due by midnight Next homework will be assigned Tuesday, due following Tuesday.

More information

Overview. Video. Overview 4/7/2008. Optical flow. Why estimate motion? Motion estimation: Optical flow. Motion Magnification Colorization.

Overview. Video. Overview 4/7/2008. Optical flow. Why estimate motion? Motion estimation: Optical flow. Motion Magnification Colorization. Overview Video Optical flow Motion Magnification Colorization Lecture 9 Optical flow Motion Magnification Colorization Overview Optical flow Combination of slides from Rick Szeliski, Steve Seitz, Alyosha

More information

Overview. Radiometry and Photometry. Foundations of Computer Graphics (Spring 2012)

Overview. Radiometry and Photometry. Foundations of Computer Graphics (Spring 2012) Foundations of Computer Graphics (Spring 2012) CS 184, Lecture 21: Radiometry http://inst.eecs.berkeley.edu/~cs184 Overview Lighting and shading key in computer graphics HW 2 etc. ad-hoc shading models,

More information

Today. Global illumination. Shading. Interactive applications. Rendering pipeline. Computergrafik. Shading Introduction Local shading models

Today. Global illumination. Shading. Interactive applications. Rendering pipeline. Computergrafik. Shading Introduction Local shading models Computergrafik Thomas Buchberger, Matthias Zwicker Universität Bern Herbst 2008 Today Introduction Local shading models Light sources strategies Compute interaction of light with surfaces Requires simulation

More information

Fast Natural Feature Tracking for Mobile Augmented Reality Applications

Fast Natural Feature Tracking for Mobile Augmented Reality Applications Fast Natural Feature Tracking for Mobile Augmented Reality Applications Jong-Seung Park 1, Byeong-Jo Bae 2, and Ramesh Jain 3 1 Dept. of Computer Science & Eng., University of Incheon, Korea 2 Hyundai

More information

INFOGR Computer Graphics. J. Bikker - April-July Lecture 10: Shading Models. Welcome!

INFOGR Computer Graphics. J. Bikker - April-July Lecture 10: Shading Models. Welcome! INFOGR Computer Graphics J. Bikker - April-July 2016 - Lecture 10: Shading Models Welcome! Today s Agenda: Introduction Light Transport Materials Sensors Shading INFOGR Lecture 10 Shading Models 3 Introduction

More information

CSE 167: Lecture #7: Color and Shading. Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2011

CSE 167: Lecture #7: Color and Shading. Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2011 CSE 167: Introduction to Computer Graphics Lecture #7: Color and Shading Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2011 Announcements Homework project #3 due this Friday,

More information

An introduction to 3D image reconstruction and understanding concepts and ideas

An introduction to 3D image reconstruction and understanding concepts and ideas Introduction to 3D image reconstruction An introduction to 3D image reconstruction and understanding concepts and ideas Samuele Carli Martin Hellmich 5 febbraio 2013 1 icsc2013 Carli S. Hellmich M. (CERN)

More information

General Principles of 3D Image Analysis

General Principles of 3D Image Analysis General Principles of 3D Image Analysis high-level interpretations objects scene elements Extraction of 3D information from an image (sequence) is important for - vision in general (= scene reconstruction)

More information

Application questions. Theoretical questions

Application questions. Theoretical questions The oral exam will last 30 minutes and will consist of one application question followed by two theoretical questions. Please find below a non exhaustive list of possible application questions. The list

More information

Augmented Reality VU. Computer Vision 3D Registration (2) Prof. Vincent Lepetit

Augmented Reality VU. Computer Vision 3D Registration (2) Prof. Vincent Lepetit Augmented Reality VU Computer Vision 3D Registration (2) Prof. Vincent Lepetit Feature Point-Based 3D Tracking Feature Points for 3D Tracking Much less ambiguous than edges; Point-to-point reprojection

More information

Practical Non-linear Photometric Projector Compensation

Practical Non-linear Photometric Projector Compensation Practical Non-linear Photometric Projector Compensation Anselm Grundhöfer Disney Research Zürich Clausiusstrasse 49, 8092 Zürich, Switzerland anselm@disneyresearch.com Abstract We propose a novel approach

More information

Homographies and RANSAC

Homographies and RANSAC Homographies and RANSAC Computer vision 6.869 Bill Freeman and Antonio Torralba March 30, 2011 Homographies and RANSAC Homographies RANSAC Building panoramas Phototourism 2 Depth-based ambiguity of position

More information

Photometric Stereo. Lighting and Photometric Stereo. Computer Vision I. Last lecture in a nutshell BRDF. CSE252A Lecture 7

Photometric Stereo. Lighting and Photometric Stereo. Computer Vision I. Last lecture in a nutshell BRDF. CSE252A Lecture 7 Lighting and Photometric Stereo Photometric Stereo HW will be on web later today CSE5A Lecture 7 Radiometry of thin lenses δa Last lecture in a nutshell δa δa'cosα δacos β δω = = ( z' / cosα ) ( z / cosα

More information

3D Environment Measurement Using Binocular Stereo and Motion Stereo by Mobile Robot with Omnidirectional Stereo Camera

3D Environment Measurement Using Binocular Stereo and Motion Stereo by Mobile Robot with Omnidirectional Stereo Camera 3D Environment Measurement Using Binocular Stereo and Motion Stereo by Mobile Robot with Omnidirectional Stereo Camera Shinichi GOTO Department of Mechanical Engineering Shizuoka University 3-5-1 Johoku,

More information

What is Computer Vision? Introduction. We all make mistakes. Why is this hard? What was happening. What do you see? Intro Computer Vision

What is Computer Vision? Introduction. We all make mistakes. Why is this hard? What was happening. What do you see? Intro Computer Vision What is Computer Vision? Trucco and Verri (Text): Computing properties of the 3-D world from one or more digital images Introduction Introduction to Computer Vision CSE 152 Lecture 1 Sockman and Shapiro:

More information

Introduction to Computer Vision. Introduction CMPSCI 591A/691A CMPSCI 570/670. Image Formation

Introduction to Computer Vision. Introduction CMPSCI 591A/691A CMPSCI 570/670. Image Formation Introduction CMPSCI 591A/691A CMPSCI 570/670 Image Formation Lecture Outline Light and Optics Pinhole camera model Perspective projection Thin lens model Fundamental equation Distortion: spherical & chromatic

More information

Computer Vision Lecture 17

Computer Vision Lecture 17 Computer Vision Lecture 17 Epipolar Geometry & Stereo Basics 13.01.2015 Bastian Leibe RWTH Aachen http://www.vision.rwth-aachen.de leibe@vision.rwth-aachen.de Announcements Seminar in the summer semester

More information

FOREGROUND DETECTION ON DEPTH MAPS USING SKELETAL REPRESENTATION OF OBJECT SILHOUETTES

FOREGROUND DETECTION ON DEPTH MAPS USING SKELETAL REPRESENTATION OF OBJECT SILHOUETTES FOREGROUND DETECTION ON DEPTH MAPS USING SKELETAL REPRESENTATION OF OBJECT SILHOUETTES D. Beloborodov a, L. Mestetskiy a a Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University,

More information

CSE 167: Introduction to Computer Graphics Lecture #6: Lights. Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2016

CSE 167: Introduction to Computer Graphics Lecture #6: Lights. Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2016 CSE 167: Introduction to Computer Graphics Lecture #6: Lights Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2016 Announcements Thursday in class: midterm #1 Closed book Material

More information

Outline. Introduction System Overview Camera Calibration Marker Tracking Pose Estimation of Markers Conclusion. Media IC & System Lab Po-Chen Wu 2

Outline. Introduction System Overview Camera Calibration Marker Tracking Pose Estimation of Markers Conclusion. Media IC & System Lab Po-Chen Wu 2 Outline Introduction System Overview Camera Calibration Marker Tracking Pose Estimation of Markers Conclusion Media IC & System Lab Po-Chen Wu 2 Outline Introduction System Overview Camera Calibration

More information

Computer Vision I. Dense Stereo Correspondences. Anita Sellent 1/15/16

Computer Vision I. Dense Stereo Correspondences. Anita Sellent 1/15/16 Computer Vision I Dense Stereo Correspondences Anita Sellent Stereo Two Cameras Overlapping field of view Known transformation between cameras From disparity compute depth [ Bradski, Kaehler: Learning

More information

Image Coding with Active Appearance Models

Image Coding with Active Appearance Models Image Coding with Active Appearance Models Simon Baker, Iain Matthews, and Jeff Schneider CMU-RI-TR-03-13 The Robotics Institute Carnegie Mellon University Abstract Image coding is the task of representing

More information