Accurate estimation of the boundaries of a structured light pattern


Sukhan Lee* and Lam Quang Bui

Intelligent Systems Research Center, Sungkyunkwan University, Suwon, South Korea
*Corresponding author: lsh@ece.skku.ac.kr

Received October 27, 2010; revised March 9, 2011; accepted March 16, 2011; posted March 16, 2011 (Doc. ID ); published May 5, 2011 [J. Opt. Soc. Am. A 28, 954-961 (2011)]

Depth recovery based on structured light using stripe patterns, especially for a region-based codec, demands accurate estimation of the true boundary of a light pattern captured on a camera image, because the accuracy of the estimated boundary has a direct impact on the accuracy of the depth recovery. However, recovering the true boundary of a light pattern is difficult due to the deformation incurred primarily by the texture-induced variation of the light reflectance at surface locales. For heavily textured surfaces in particular, the deformation of pattern boundaries becomes severe. We present here a novel (to the best of our knowledge) method to estimate the true boundaries of a light pattern that are severely deformed by the heavy textures involved. First, a general formula that models the deformation of the projected light pattern at the imaging end is presented, taking into account not only the light reflectance variation but also the blurring along the optical passages. The local reflectance indices are then estimated by applying the model to two specially chosen reference projections, all-bright and all-dark. The estimated reflectance indices are used to transform the edge-deformed, captured pattern signal into the edge-corrected, canonical pattern signal; a canonical pattern is the virtual pattern that would have resulted if there were neither reflectance variation nor blurring in the imaging optics. Finally, we estimate the boundaries of a light pattern by intersecting the canonical form of a light pattern with that of its inverse pattern. The experimental results show that the proposed method yields significant improvements in the accuracy of the estimated boundaries under various adverse conditions. © 2011 Optical Society of America

1. INTRODUCTION

The method of structured-light-based 3D reconstruction is well established, with a rich literature available to date [1]. The advantage of structured-light-based 3D reconstruction over other approaches lies in its capability of acquiring a high depth accuracy, as well as a high density, of reconstructed images within its dynamic range. For structured-light-based 3D reconstruction methods using stripe patterns as a region-based codec, pattern boundaries may serve as a definite feature for defining the projector-camera correspondence at the subpixel level. Therefore, a crucial step for ensuring high depth accuracy is the accurate estimation of the boundaries of a projected light pattern captured on an image. The conventional methods for estimating the boundaries of light stripes can be summarized as the following two (refer to Fig. 1 for illustration):

1. Threshold the captured image signal of a stripe pattern, using as the threshold the average of the two reference image signals obtained from projecting the two reference patterns, all-bright and all-dark [Fig. 1(a)].
2. Obtain the zero crossings, or intersections, of the two image signals captured, respectively, from two projected light stripe patterns: the original stripe pattern and its inverse stripe pattern [Fig. 1(b)]. Here, the inverse stripe pattern is the pattern obtained by reversing the bright and dark signals of the original stripe pattern to dark and bright, respectively. For more details, refer to Trobina [2].

These conventional methods can accurately estimate the boundaries on textureless surfaces, where only optical blurring is involved in the deformation of the captured stripe edge signals and, thus, no asymmetric deformation is incurred, as illustrated in Fig. 2(a). However, they fail to estimate the correct boundaries when the textures at or around the boundaries of the light stripes incur a large variation in surface reflectance, leading to severe asymmetric deformation of the captured stripe edge signals, as shown in Fig. 2(b). In Fig. 2(b), due to the abrupt variation of the surface reflectance at the boundary, the captured edge signal of the inverse stripe is deformed asymmetrically with respect to that of the original stripe. This displaces the zero crossing, or intersection, of the two signals far from where it would lie in the absence of asymmetric deformation, such as in Fig. 2(a), and leads to erroneous boundary estimation. In fact, the conventional methods are not firmly based on a theoretically well-founded, formal model of the errors involved in boundary estimation.

This paper presents a novel (to the best of our knowledge) method for estimating the true boundaries of light stripes under severe deformation due to heavily textured surfaces. Particular attention is given to the accurate estimation of boundaries even in the case of abrupt discontinuities in reflectance. To solve the problem on a firm theoretical foundation, we formulate a formal mathematical model describing the deformation of the projected light pattern at the imaging end, taking the light reflectance variation, optical blurring, and sensor noise into account. Unlike conventional methods, we transform the edge-deformed, captured pattern signal into the edge-corrected, canonical pattern signal, i.e., a virtual pattern that would have been obtained if neither the reflectance variation nor the blurring in the imaging optics were present.
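For concreteness, the two conventional estimators above can be sketched in a few lines. This is a minimal illustration, assuming one-dimensional intensity signals sampled along an epipolar line; the function and variable names are ours, not taken from any reference implementation.

    import numpy as np

    def boundaries_by_threshold(f_s, f1, f0):
        # Method 1: threshold the stripe signal at the average of the
        # all-bright (f1) and all-dark (f0) reference signals.
        binary = f_s > 0.5 * (f1 + f0)
        return np.flatnonzero(np.diff(binary.astype(int)))  # crossing indices

    def boundaries_by_zero_crossing(f_s, f_inv):
        # Method 2: intersect the stripe signal with the signal captured
        # from the inverse pattern; sign changes of the difference mark
        # the boundaries.
        d = f_s - f_inv
        sign = np.signbit(d)
        idx = np.flatnonzero(sign[:-1] != sign[1:])
        return idx + d[idx] / (d[idx] - d[idx + 1])  # subpixel by linear interpolation

Both routines return boundary locations in pixels; the second refines each crossing to subpixel precision by interpolating linearly between the two samples that bracket the sign change.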

Fig. 1. (Color online) Two boundary estimation methods: (a) thresholding the signal, using as the threshold the average of the two reference signals, all-bright and all-dark (shown as the top and bottom signals), and (b) using the zero crossing of two signals: the signal (blue) captured from an original bright-and-dark pattern and its inverse signal (red), with the bright and dark stripes inverted from the original pattern.

Fig. 2. (Color online) Illustration of the potential error in conventional zero-crossing-based boundary estimation due to the asymmetric deformation between the original (blue) and inverse (red) signals incurred by the reflectance variation at/around the boundary: (a) no asymmetric deformation at a boundary on a textureless surface and (b) asymmetric deformation at a boundary on a textured surface.

Fig. 3. Schematics of ray tracing from the pattern projection by a projector to the image captured by a camera.

Fig. 4. Edge blurring as the result of the convolution between a step function and a Gaussian blur kernel.

This transformation is made feasible by the local reflectance indices estimated from the model. The key, and unique, feature of the proposed method is the introduction of the edge-corrected, canonical pattern, which is independent of the variation of surface reflectance, thus allowing the boundary estimation to be independent of surface reflectance. The true boundary is estimated by intersecting the canonical form of a captured light pattern with that of its inverse pattern. The originality of this paper is fourfold: (i) the formulation of a formal mathematical model describing the deformation of a captured light pattern, (ii) the derivation of a new formula for estimating the reflectance indices from the model, (iii) the transformation of an edge-deformed, captured light pattern into an edge-corrected, canonical pattern independent of surface reflectance, and (iv) the estimation of true boundaries using the canonical form of the light patterns.

The remainder of the paper is organized as follows. Section 2 describes our boundary estimation method, the experimental results are provided in Section 3, and Section 4 concludes the paper.

Fig. 5. Block diagram of the proposed boundary estimation: (a) process of estimating the boundaries with asymmetric deformation, based on the zero crossing of canonically recovered signals, and (b) overall process of estimating boundaries, implemented by combining the two classes of boundaries, those with and those without asymmetric deformation. The conventional zero crossing is applied to the latter for computational efficiency.

Fig. 6. Deformed edges (blue, left), their canonically recovered versions (middle), and the recovered versions after smoothing to eliminate noise (right). (Continues on next page.)

2. PROPOSED METHOD

A. Modeling of a Captured Light Pattern

Without loss of generality, we assume that the projector-camera pair is configured in such a way that the depth can be extracted along its epipolar lines independently [3]. This allows us to represent a light pattern in one-dimensional space along an epipolar line. Let s(x) represent the ideal pattern along an epipolar line on the image, where "ideal" implies a virtual pattern on the image that would have been generated if no ambient light, noise, or disturbance were present along the entire optical passage. Because a projected light pattern is mostly of a binary form of dark and bright stripes, so is s(x). For further simplicity, we represent s(x) as a step function, where the step refers to an edge of the stripes to be recovered:

s(x) = \begin{cases} H, & x \ge 0 \\ L, & x < 0 \end{cases}

In practice, s(x) is subject to deformation and corruption. First, s(x) is blurred along the optical passage through the projector lens, and it is then attenuated by the reflection from the object surface. We represent the attenuation due to the reflection at the local surface corresponding to x by the reflectance index R(x) \in [0, 1]. Note that R(x) not only carries the variation of reflection coefficients due to local differences in color and material, but also the change in the amount of reflected light due to differences in the orientation of surface locales with respect to the projector-camera pair. In addition to the deformation of s(x) due to blurring and surface reflectance, the signal is further corrupted by the additive term A(x), the ambient light reflected at the surface. Finally, it is blurred once more along the optical passage through the camera lens before being contaminated by the noise W(x) associated with the imaging sensor. The whole process is depicted in Fig. 3. Taking all the aforementioned factors that deform and corrupt s(x) into consideration, the captured light stripe signal f_s(x) can be modeled as

f_s(x) = \big( (s(x) * g_p(x; \sigma_p))\, R(x) + A(x) \big) * g_c(x; \sigma_c) + W(x), \qquad (1)

where * represents the convolution operator. In Eq. (1), the two blurring processes associated with the projector and camera lenses are modeled as convolutions with the respective blur kernels g_p(x; \sigma_p) and g_c(x; \sigma_c), as illustrated in Fig. 4. The blur kernels are chosen to be normalized Gaussian functions with widths \sigma_p and \sigma_c, respectively:

g_i(x; \sigma_i) = \frac{1}{\sqrt{2\pi}\,\sigma_i} \exp\!\left( -\frac{x^2}{2\sigma_i^2} \right), \quad i \in \{p, c\}.
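To make Eq. (1) concrete, the following sketch synthesizes a captured stripe signal from the model, assuming Gaussian blur kernels as above; the bright/dark levels, blur widths, edge positions, and reflectance/ambient/noise profiles are arbitrary values chosen purely for illustration.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    H, L = 1.0, 0.1                      # bright / dark projector levels
    x = np.arange(200)
    s = np.where(x >= 100, H, L)         # ideal step pattern s(x)
    R = np.where(x < 90, 0.9, 0.2)       # abrupt reflectance change near the edge
    A = np.full(x.shape, 0.05)           # ambient term A(x)
    sigma_p, sigma_c = 2.0, 1.5          # projector / camera blur widths

    # Eq. (1): blur by the projector, attenuate by R(x), add ambient light,
    # blur by the camera, and add sensor noise W(x).
    rng = np.random.default_rng(0)
    f_s = gaussian_filter1d(gaussian_filter1d(s, sigma_p) * R + A, sigma_c)
    f_s = f_s + rng.normal(0.0, 0.002, f_s.shape)

With 10 pixels between the reflectance edge (at 90) and the stripe edge (at 100), this reproduces the kind of asymmetric deformation shown in Fig. 2(b).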

Fig. 6. (Color online) Continued.

B. Estimation of the Reflectance Indices

Applying Eq. (1) to the two specially chosen, uniform light pattern projections, all-bright and all-dark, R(x) can now be estimated. Let f_1(x) and f_0(x) represent the light patterns captured on the image in correspondence to the all-bright and all-dark pattern projections, respectively [i.e., s_1(x) = H and s_0(x) = L]:

f_1(x) = (H\,R(x) + A(x)) * g_c(x; \sigma_c) + W_1(x),
f_0(x) = (L\,R(x) + A(x)) * g_c(x; \sigma_c) + W_0(x).

Assuming that the difference of the sensor noise terms, W_1(x) - W_0(x), is small compared to the difference of the captured light patterns, we have

f_1(x) - f_0(x) \approx ((H - L)\,R(x)) * g_c(x; \sigma_c).

The reflectance index can now be estimated as

R(x) \approx \frac{\mathrm{deconvlucy}(f_1(x) - f_0(x); \sigma_c)}{H - L}, \qquad (2)

where deconvlucy represents the Richardson-Lucy deconvolution operator [4,5], which we chose to use for the computation of the deconvolution.
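As a sketch of Eq. (2), R(x) can be estimated with an off-the-shelf Richardson-Lucy routine; here skimage.restoration.richardson_lucy stands in for the deconvlucy operator, the camera blur is assumed Gaussian with a known width sigma_c (e.g., from calibration), and the helper name is ours.

    import numpy as np
    from skimage.restoration import richardson_lucy

    def estimate_reflectance(f1, f0, sigma_c, H=1.0, L=0.0, n_iter=30):
        diff = np.clip(f1 - f0, 0.0, None)        # Richardson-Lucy expects nonnegative input
        t = np.arange(-4.0 * sigma_c, 4.0 * sigma_c + 1.0)
        psf = np.exp(-t**2 / (2.0 * sigma_c**2))
        psf /= psf.sum()                          # normalized Gaussian kernel g_c
        deblurred = richardson_lucy(diff, psf, num_iter=n_iter, clip=False)
        return deblurred / (H - L)                # Eq. (2)

With the synthetic signals of the previous sketch, f1 and f0 would be generated by substituting s(x) = H and s(x) = L into the model.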

Fig. 7. (Color online) Profile of the boundary estimation error in terms of the degree of asymmetric signal deformation, represented by dx: the relative pixel distance on the image between the edge of a light stripe and the edge of a white-black texture, i.e., an abrupt change in reflectance. The smaller dx is, or, equivalently, the more significant the asymmetric signal deformation becomes, the larger the errors incurred by the conventional zero-crossing method (blue). The proposed zero-crossing method (red), based on canonically recovered signals, contains the errors within a small bound regardless of dx or, equivalently, of the significance of the asymmetric signal deformation.

Fig. 8. (Color online) Performance of the proposed boundary estimation: (a) light pattern projected on a checker-patterned calibration block to produce asymmetric signal deformation in the experiments, (b) boundaries of the light stripes estimated by the proposed zero-crossing method with the canonical signal representation, and (c) boundaries estimated by the conventional zero-crossing method.

Fig. 9. (Color online) 3D point clouds of the calibration block using HOC-B (left) and HOC (right) in different views: (a) full view of the visible faces, (b) front view and (c) top view of the left face (plane X = 0), and (d) front view and (e) top view of the right face (plane Y = 0).

C. Canonical Representation of a Captured Light Pattern

In order to accurately estimate the boundary of a captured light pattern, we consider transforming the captured light pattern f_s(x) into a canonical form f_c(x), with which we can estimate the boundary independently of R(x). The amount of light from the projector hitting the local surface corresponding to x before reflection, s(x) * g_p(x; \sigma_p) - L, works well for this purpose, as it eliminates R(x) as well as the ambient disturbance A(x) and the blurring in the imaging optics. The transformation from f_s(x) to f_c(x) = s(x) * g_p(x; \sigma_p) - L can be done as follows. The captured stripe pattern is subtracted by the reference data corresponding to the all-dark pattern projection, in order to remove the ambient light A(x):

f_s(x) - f_0(x) = \big[ (s(x) * g_p(x; \sigma_p) - L)\, R(x) \big] * g_c(x; \sigma_c) + (W_s(x) - W_0(x)),

which we approximate as

f_s(x) - f_0(x) \approx \big[ (s(x) * g_p(x; \sigma_p) - L)\, R(x) \big] * g_c(x; \sigma_c).

Thus the amount of light from the projector hitting the local surface corresponding to x is

f_c(x) \approx \frac{\mathrm{deconvlucy}(f_s(x) - f_0(x); \sigma_c)}{R(x)},

or

f_c(x) \approx \frac{\mathrm{deconvlucy}(f_s(x) - f_0(x); \sigma_c)}{\mathrm{deconvlucy}(f_1(x) - f_0(x); \sigma_c)}\,(H - L). \qquad (3)
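Under the same assumptions, the canonical recovery of Eq. (3) chains three steps: subtract the all-dark reference to remove A(x), deconvolve the camera blur, and divide out the reflectance estimate. A minimal sketch, reusing the Gaussian PSF and the estimate of R(x) from the previous sketch:

    import numpy as np
    from skimage.restoration import richardson_lucy

    def canonical_signal(f_s, f0, R, psf, n_iter=30, eps=1e-6):
        # f_c(x) per Eq. (3): remove the ambient term with the all-dark
        # reference, undo the camera blur, and divide out R(x).
        diff = np.clip(f_s - f0, 0.0, None)
        deblurred = richardson_lucy(diff, psf, num_iter=n_iter, clip=False)
        return deblurred / np.maximum(R, eps)     # eps guards near-zero reflectance

Dividing the two deconvolved differences directly, as in the second form of Eq. (3), is equivalent up to the (H - L) scale and avoids carrying R(x) explicitly.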

Fig. 10. (Color online) Horizontal sections of the 3D points of the calibration block, reconstructed using (a) the HOC-B version and (b) the original HOC version.

The canonical form of the light pattern, f_c(x), computed by Eq. (3), is regarded as correcting the edges of f_s(x) corrupted by R(x), as well as by A(x) and g_c(x; \sigma_c), thus providing a means of recovering the true boundary embedded in s(x).

D. Boundary Estimation

The true boundary is now estimated by intersecting the canonical form of a captured light pattern with that of its inverse pattern. The block diagram of the proposed boundary estimation is shown in Fig. 5(a). In practice, the proposed method is applied to textured surfaces; in the case of a textureless surface, the method of projecting the additional inverse pattern and using the zero-crossing value as the threshold [2] is used. Figure 5(b) shows the block diagram of the overall boundary estimation. In order to check the reflectivity of the surface, the reference data are used:

f_1(x) - f_0(x) = ((H - L)\,R(x)) * g_c(x; \sigma_c) + (W_1(x) - W_0(x)) \approx ((H - L)\,R(x)) * g_c(x; \sigma_c).

1. If R(x) = R, the reflectance index is a constant. By a property of the convolution, namely that the convolution of a normalized Gaussian function with a constant A is A, we have

f_1(x) - f_0(x) \approx ((H - L)\,R) * g_c(x; \sigma_c) = (H - L)\,R = \text{constant},

and thus the first derivative is

\frac{\partial (f_1(x) - f_0(x))}{\partial x} = 0. \qquad (4)

2. If R(x) is not a constant, the absolute value of the first derivative is

\left| \frac{\partial (f_1(x) - f_0(x))}{\partial x} \right| = \left| \frac{\partial \big[ ((H - L)\,R(x)) * g_c(x; \sigma_c) \big]}{\partial x} \right| \neq 0. \qquad (5)

These properties [Eqs. (4) and (5)] allow us to check the reflectivity of the surface; a sketch of this check, together with the final intersection step, follows.
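The check and the final intersection can be sketched as follows, under the same one-dimensional assumptions; Eq. (5) is tested with a numerical derivative of f_1 - f_0, and grad_tol is an arbitrary tolerance chosen for illustration.

    import numpy as np

    def textured_mask(f1, f0, grad_tol=1e-3):
        # True where |d(f1 - f0)/dx| is non-negligible, i.e., where R(x)
        # is not locally constant [Eq. (5)]; elsewhere Eq. (4) holds and
        # the conventional zero crossing suffices.
        return np.abs(np.gradient(f1 - f0)) > grad_tol

    def boundaries_from_canonical(fc, fc_inv):
        # Intersect the canonical stripe signal with the canonical signal
        # of its inverse pattern; subpixel crossings by linear interpolation.
        d = fc - fc_inv
        sign = np.signbit(d)
        idx = np.flatnonzero(sign[:-1] != sign[1:])
        return idx + d[idx] / (d[idx] - d[idx + 1])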

3. EXPERIMENTAL RESULTS

A. Deformed Edge Recovery

When the stripe pattern is projected on a textured surface, the edge is deformed in different ways, depending on the direction of, and the relative distance between, the stripe and the texture. To evaluate the proposed canonical form of the light pattern, a stripe pattern is projected on a textured surface that has a sharp change in reflectance. The direction and relative distance between the stripe and the texture are then varied in order to measure the different deformations of the stripe edge, as well as its recovered version computed by Eq. (3). The reference data in Fig. 6, especially those for the all-bright pattern, indicate the variation of the surface reflectance. In Figs. 6(a)-6(c), the left-side figures show the deformed edges when the edge and the reflectance variation have opposite directions; the cases in which they have the same direction are shown in the left-side figures of Figs. 6(d)-6(f). The middle figures illustrate the edges corrected using the canonical form of the light pattern. However, the correction steps based on deconvolution may amplify noise; the corrected edges are therefore smoothed to eliminate noise, as shown in the right-side figures.

B. Evaluating the Proposed Boundary Estimation Method

The model of the captured light stripe signal in Eq. (1) can be rewritten as

f_s(x) = \big( (s(x) * g_p(x; \sigma_p))\, R(x) \big) * g_c(x; \sigma_c) + \big( A(x) * g_c(x; \sigma_c) + W(x) \big).

In the case of a textureless surface, the reflectance can be assumed to be a constant, R(x) = R; thus

f_s(x) = \big( (s(x) * g_p(x; \sigma_p))\, R \big) * g_c(x; \sigma_c) + \big( A(x) * g_c(x; \sigma_c) + W(x) \big) = R\, \big( s(x) * g_p(x; \sigma_p) * g_c(x; \sigma_c) \big) + \big( A(x) * g_c(x; \sigma_c) + W(x) \big),

where A(x) * g_c(x; \sigma_c) + W(x) is the ambient disturbance and noise, which merely lifts the ground level of the captured signal, and R\,(s(x) * g_p(x; \sigma_p) * g_c(x; \sigma_c)) is the purely captured light stripe. We can see that, along the x axis, the stripe edge is only blurred by the optical effects of the projector and the camera; the boundary is thus preserved, meaning that on a textureless surface we can obtain the correct boundaries by using only the intersection between the stripe pattern and its inverse.

To evaluate the proposed boundary estimation method, the relative distance between the stripe boundary and the change of surface texture, denoted dx, is varied manually; at each position, the boundary of the stripe is estimated using both the proposed method and the conventional method. In order to obtain the ground truth, at those same positions the textured object is replaced by a white object of the same shape; the stripe patterns are projected and captured, and the estimated stripe boundaries are used as references. Figure 7 shows the errors of the boundaries estimated by the proposed method and by the conventional method with respect to the reference boundaries. As can be observed, the proposed boundary estimation method is much more accurate than the conventional one.

C. Application of the Proposed Method in 3D Reconstruction

In our experiments, we used a Canon LDP-3260K projector and a PGR Flea2 IEEE 1394 digital camera mounted with an 8 mm 1:1.3 TV lens. The resolution of the projector was and that of the camera was . The camera was positioned about 30 cm to the right of the projector, and the distance between the system and the objects was about 1 m. The original Hierarchical Orthogonal Code (HOC [6]) and the version of HOC with boundary correction (named HOC-B) were implemented and evaluated. The system was calibrated using a calibration block with a coordinate frame attached to it, as shown in Fig. 8(a); the two front planes of the calibration block thus have the equations X = 0 (left plane) and Y = 0 (right plane), respectively. We also reconstructed 3D data of this calibration block to evaluate HOC and HOC-B, because its flat faces allow easy quantitative evaluation. Figure 8(a) shows the pattern of layer 4 of HOC on the calibration block. The boundaries of the four patterns estimated by the proposed method are shown in Fig. 8(b), and those estimated by the conventional method in Fig. 8(c). Because the faces of the calibration block are flat, the detected boundaries of the light stripes should theoretically be straight lines; however, the variation of the surface reflectance deforms the edges of the pattern stripes, so the boundaries detected by the conventional method do not lie on straight lines. As can be seen, the proposed method gives a better estimation of the boundaries than the conventional method. Figure 9 illustrates the reconstructed 3D point clouds of the calibration block in various views, using the HOC-B version on the left and the original HOC version on the right. Figure 10 shows the horizontal sections of the 3D point clouds for HOC-B [Fig. 10(a)] and HOC [Fig. 10(b)].
Obviously, without boundary correction, the 3D points of the surface show large variations at the boundaries of the black and white squares. Table 1 gives the quantitative measurement of the error in the 3D data of the deformed edges only. The error was defined as the distance between a reconstructed point and the corresponding plane (left plane, X = 0, or right plane, Y = 0). As can be seen, both the maximum and the standard deviation of the error decreased when the boundary correction was applied to HOC.

Table 1. Errors of Reconstructed 3D Data Using HOC and HOC-B

                Plane X = 0                                Plane Y = 0
                Std. Dev. of Error (mm)   Error Max (mm)   Std. Dev. of Error (mm)   Error Max (mm)
HOC-B
HOC

4. CONCLUSIONS

We have presented a method to estimate the boundary of a structured light pattern on a textured object in order to improve the accuracy of the depth measurement. The idea of the proposed method is to estimate the incident light pattern hitting the object surface, which carries the correct information about the boundary. In this paper, the proposed boundary estimator is implemented for HOC, but it can also be applied to other structured light codes, such as Gray code and binary code.

ACKNOWLEDGMENTS

This research was performed for the Intelligent Robotics Development Program, one of the 21st Century Frontier R&D Programs (F ), and in part for the KORUS-Tech Program (KT-2008-SW-AP-FSO-0004) funded by the Korea Ministry of Knowledge Economy (MKE). This work was also partially supported by the Ministry of Education, Science and Technology, Korea, under the World Class University Program supervised by the Korea Science and Engineering Foundation (R ), and by MKE, Korea, under ITRC NIPA-2010-(C ) [NTIS-2010-( )].

REFERENCES

1. J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, "A state of the art in structured light patterns for surface profilometry," Pattern Recogn. 43 (2010).
2. M. Trobina, "Error model of a coded-light range sensor," Technical Report BIWI-TR-164 (Communication Technology Laboratory, 1995).
3. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision (Cambridge University Press, 2004).
4. W. H. Richardson, "Bayesian-based iterative method of image restoration," J. Opt. Soc. Am. 62 (1972).
5. L. B. Lucy, "An iterative technique for the rectification of observed distributions," Astron. J. 79 (1974).
6. L. Sukhan, C. Jongmoo, K. Daesik, J. Byungchan, N. Jaekeun, and K. Hoonmo, "An active 3D robot camera for home environment," in Proceedings of IEEE Sensors (IEEE, 2004), Vol. 1.
