Depth extraction from unidirectional integral image using a modified multi-baseline technique


ChunHong Wu a, Amar Aggoun a, Malcolm McCormick a, S.Y. Kung b
a Faculty of Computing Sciences and Engineering, De Montfort University, U.K.
b Electrical Engineering, Princeton University, USA

ABSTRACT

Integral imaging is a technique capable of displaying images with continuous parallax in full natural colour. This paper presents a modified multi-baseline method for extracting depth information from unidirectional integral images. The method involves first extracting sub-images from the integral image. A sub-image is constructed by extracting one pixel from each micro-lens rather than a macro-block of pixels corresponding to a micro-lens unit. A new mathematical expression giving the relationship between object depth and the corresponding sub-image pair displacement is derived by geometrically analyzing the three-dimensional image recording process. A correlation-based matching technique is used to find the disparity between two sub-images. In order to improve the disparity analysis, a modified multi-baseline technique, where the baseline is defined as the distance between two corresponding pixels in different sub-images, is adopted. The effectiveness of this modified multi-baseline technique in removing the mismatching caused by similar patterns in object scenes has been proven by analysis and experimental results. The developed depth extraction method is validated and applied to both photographic and computer generated unidirectional integral images. The depth estimation solution gives a precise description of object thickness with an error of less than 2.0% from the photographic image in the example.

Keywords: Integral Image, Depth Extraction, Multi-baseline Technique

1. INTRODUCTION

The development of three-dimensional (3-D) imaging systems is a constant pursuit of the scientific community and entertainment industries. Many applications exist for fully three-dimensional video communication systems; one much discussed application is 3-D television. Integral imaging is a 3-D display technique capable of encoding a true volume spatial optical model of the object scene in the form of a planar intensity distribution by using a unique optical capture apparatus. The recorded planar intensity distribution can be stored or transmitted as a conventional two-dimensional pixel array. It is akin to holography in that 3-D information is recorded on a two-dimensional medium, but it does not require coherent light sources. This allows continuous parallax, a wide viewing zone, and very good live capture and display practicality. All integral imaging can be traced back to the work of Gabriel Lippmann, 1908 [1], where a micro-lens sheet was used to record the optical model of an object scene. A full natural colour scene with continuous parallax can be replayed when another micro-lens sheet with appropriate parameters is used to overlay the original image, see figure 1. A modification to the system was proposed by Ives [2], where a two-stage photographic process is used to overcome the problem imposed by the pseudoscopic (spatially depth-inverted) nature of the image. A two-tier network combining macro-lens arrays and micro-lens arrays, designed by Davies and McCormick [3][4], further overcomes the image degradation caused by the two-stage recording. The two-tier network works as an optical transmission inversion screen, which allows direct spatially correct 3-D image capture.
With progress in micro-lens manufacturing, integral imaging is becoming a practical and prospective 3-D display technique and hence is attracting much interest.

Figure 1. The principle of integral imaging

This paper is concerned with the extraction of depth information and the reconstruction of a 3-D scene from the recorded integral image data. One particular use of depth extraction is to enable content-based interactive manipulation, which allows flexible operations on visual objects to be carried out. Hence, real and computer generated 3-D object spaces can be combined in a virtual studio. It can also be used for content-based image coding. Recently, a method for extracting depth based on the point spread function (PSF) of the optical recording process has been reported [5][6]. The object space is conceived as a discrete set of points endowed with intensity. A correspondence matrix associated with the PSF transforms the object space into the pixel-defined integral image. Depth estimation from the 3-D integral image data is formulated as an inverse problem. Although this technique works well on synthetic numerical experimental data produced using the assumed PSF matrix, further research is needed towards its possible application to real integral images [5][6].

This paper presents an alternative method for extracting depth information from an integral image. The method involves first extracting sub-images from the integral image. A sub-image is generated by taking one pixel from each micro-lens unit rather than the macro-block corresponding to the micro-lens unit. Each sub-image contains the pixels at the same position under different micro-lenses; hence each sub-image records the object scene from one, and only one, particular direction. A mathematical expression giving the relationship between the depth of the object and the corresponding sub-image displacement is then derived by geometrically analyzing the integral image recording process. Correlation-based disparity analysis methods are used, and a multi-baseline technique is adopted in order to remove the mismatching arising from ambiguity in the object space. Application and validation of this method are presented for both photographic and computer generated images. Although the current work is applied to unidirectional integral images, extension of the technique to omni-directional integral images (parallax in all directions) is straightforward.

2. EXTRACTING SUB-IMAGES FROM UNIDIRECTIONAL INTEGRAL IMAGE

The key point of integral imaging is the use of a micro-lens sheet in recording. A micro-lens sheet is made up of many micro-lenses having the same parameters and lying in the same focal plane; each micro-lens works as an individual small low-resolution camera. A recording film is placed behind the micro-lens sheet, coincident with the focal plane. As all parallel incident rays pass through the same point in the focal plane after refraction in an ideal lens, parallel incident rays will be recorded at the same position under each micro-lens. The recording position depends only on which micro-lens surface the ray reaches, as shown in figure 2. As an example, all rays along direction θ1 will be recorded at the position numbered n1, while all rays along θ2 will be recorded at the position numbered n2. In other words, all pixels at the same position under different micro-lenses record the object scene from the same direction. Therefore a new image representing a particular viewpoint can be composed by simply sampling the pixels at the same position under different micro-lenses.
This new image (termed here sub-image) records the object scene from and only from one particular view direction. Changing the positions of the pixels selected, other sub-images can be constructed. Figure 3 illustrates how the sub-image is extracted from the unidirectional integral image. For simplicity,

only four pixels are assumed under each micro-lens. Pixels in the same position under different micro-lenses, which record rays from one particular direction and are represented by the same colour, are employed to form one sub-image.

Figure 2. The direction selectivity of the integral recording procedure

Figure 3. Illustration of sub-image extraction

Figure 4a is an example of a unidirectional integral image. The original object scene contains a flat background with Chinese characters and a small box attached to the front of it. The pitch size (ψ) and focal length (F) of the micro-lenses and the radius of curvature (r) of the micro-lens surface are 600 µm, .37 mm and 0.88 mm, respectively. The optical system used to take the image is explained in Davies and McCormick [4]. The 3-D scene can be replayed by scaling the image back to the original size (0.0 cm × 9.065 cm) and overlaying it with a micro-lens sheet having the same parameters. Figures 4b and 4c are two sub-images extracted from the unidirectional integral image in figure 4a. Each sub-image presents a two-dimensional (2-D) recording of the object space from a particular direction. These sub-images are used in the following section for the disparity analysis.

Figure 4. An example of a unidirectional integral image (a) and two corresponding sub-images (b), (c)
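As an illustration of this sampling, the following Python sketch (ours, not from the paper; it assumes a grayscale image array whose lenticules run vertically, so each micro-lens occupies a band of pixels_per_lens columns) builds the sub-images of figure 3 by taking one pixel column per micro-lens:

```python
import numpy as np

def extract_sub_image(integral_img: np.ndarray, pixels_per_lens: int,
                      offset: int) -> np.ndarray:
    """Sample one pixel position under every micro-lens of a
    unidirectional integral image.

    offset selects which pixel under each lens is taken
    (0 .. pixels_per_lens - 1), i.e. the viewing direction; it plays
    the role of the sampling distance ds from the lens centre.
    """
    # Same position under every lens: one column per micro-lens.
    return integral_img[:, offset::pixels_per_lens]

# With four pixels per lens, as assumed in figure 3, there are
# four possible sub-images.
img = np.random.rand(120, 400)  # stand-in for a recorded integral image
sub_images = [extract_sub_image(img, 4, k) for k in range(4)]
```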

3. MATHEMATICAL RELATIONSHIP BETWEEN OBJECT DEPTH AND SUB-IMAGE DISPLACEMENT

Further sub-images are extracted from the original integral image and a correlation-based matching technique is used to find the disparity between sub-images. The extracted sub-image pairs are different from those used in stereoscopic calibration since they are generated from a different source, so a geometric analysis of the optical recording procedure is necessary to find the relationship between the object depth and the disparity information of the sub-images.

Figure 5 depicts the Cartesian coordinate system used in the analysis. Only one-dimensional disparity is considered here since a unidirectional integral image is being discussed. The z-axis denotes the depth and the x-axis represents the lateral position. The z-axis starts from the plane coinciding with the micro-lens surface, while the x-axis starts from the centre of the first micro-lens. Consider an object point P(x0, D) at distance D from the micro-lens surface plane. Suppose the first sub-image is obtained by choosing pixels at distance ds1 from the micro-lens centres, which records rays from the direction θ1. The ray from P along this direction intersects the plane of the micro-lens sheet surface (the x-axis) at the N1-th micro-lens; the intersection is at P1(x1, 0). Following lens refraction, the ray is recorded on the film at a point Q1 at distance ds1 from the micro-lens centre, see figure 5b. The following geometric relationship can be obtained from figure 5:

(N1 - 0.5) ψ < x0 + (D + dr) ds1 / F < (N1 + 0.5) ψ    (1)

A similar relationship can be obtained for the second sub-image, sampled at distance ds2:

(N2 - 0.5) ψ < x0 + (D + dr) ds2 / F < (N2 + 0.5) ψ    (2)

In this paper, the baseline is defined as Δ = ds2 - ds1, which represents the sampling distance between two sub-images; the name baseline is inherited from stereoscopy. Since only one pixel is sampled from each micro-lens in each sub-image, the disparity between two sub-images, d, corresponds to the difference in micro-lens number between the positions Q1 and Q2, d = N2 - N1. Substituting d and Δ and subtracting equation (1) from equation (2) yields

D = (d ± 1) ψ F / Δ - dr    (3)

Here d ± 1 means the expected value is d but it may vary over the range d - 1 to d + 1. In most cases dr << D, so the depth equation can be simplified as

D = (d ± 1) ψ F / Δ    (4)

or

d = D Δ / (ψ F) ± 1    (5)

For an object at a particular depth, equation (5) describes the relationship between the disparity of two sub-images and the object depth, given the baseline between the two sub-images and the micro-lens parameters. It can be seen from the equation that the disparity is proportional to the object depth and also increases as the distance between the two sub-images (the baseline) increases. The equation also reveals that the accuracy of the depth estimate obtained by this method is limited to ψF/Δ; this depth estimate ambiguity is caused by the ambiguity existing in the recording process. The coordinate of P in the x direction can be solved from equation (1) using the obtained depth information:

x0 = N1 ψ - D ds1 / F ± ψ/2    (6)

The original object scene can be reconstructed by mapping the intensity information onto the reconstructed depth map.
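To make the depth-disparity relation concrete, here is a minimal sketch of equations (4) and (5); it is our illustration, and the parameter values in the example are placeholders rather than the ones used in the paper's experiment.

```python
def depth_from_disparity(d: float, psi: float, F: float, delta: float):
    """Equation (4): D = (d +/- 1) * psi * F / delta, with dr neglected.

    Returns the nominal depth and the half-width psi*F/delta of the
    ambiguity band caused by the +/-1 micro-lens quantisation.
    """
    scale = psi * F / delta  # depth corresponding to one unit of disparity
    return d * scale, scale

# Hypothetical numbers for illustration only: pitch psi = 0.6 mm,
# focal length F = 2.0 mm, baseline delta = 1.5 mm, disparity d = 8.
D, band = depth_from_disparity(8, psi=0.6, F=2.0, delta=1.5)
print(f"D = {D:.2f} mm, ambiguity +/- {band:.2f} mm")  # 6.40 mm, +/- 0.80 mm
```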

4. DISPARITY ANALYSIS AND OBJECT DEPTH ESTIMATION

Having derived the mathematical relationship between object depth and the disparity information of the sub-images, the next step is to find the correct disparity information from the sub-images. Obviously the depth estimation relies upon the precision and correctness of the disparity analysis. In this section, correlation-based disparity analysis is carried out.

4.1 Correlation-based Disparity Analysis Method

Three popular correlation-based disparity analysis methods are used to find the displacement between two sub-images, namely the sum of squared differences (SSD), the mean absolute error (MAE) and the cross correlation (CC). Only one-directional disparity is considered since a unidirectional image is used here. Assume two sub-images, I1 and I2; let (x, y) be the coordinates of the point being analyzed and I(x, y) the intensity at (x, y). 2w+1 is the width of the correlation window used in matching and R is the search scope in the second image associated with the first image. The three correlation-based disparity methods can be described as:

SSD(d) = arg min_{d ∈ R} Σ_{i=-w..w} Σ_{j=-w..w} [I1(x+i, y+j) - I2(x+i+d, y+j)]²    (7)

MAE(d) = arg min_{d ∈ R} Σ_{i=-w..w} Σ_{j=-w..w} |I1(x+i, y+j) - I2(x+i+d, y+j)|    (8)

CC(d) = arg max_{d ∈ R} Σ_{i=-w..w} Σ_{j=-w..w} Î1(x+i, y+j) Î2(x+i+d, y+j) / (σ1(x, y) σ2(x+d, y))    (9)

where Ī(x, y) = (1/N) Σ_{i=-w..w} Σ_{j=-w..w} I(x+i, y+j) is the average intensity of the chosen window, Î(x+i, y+j) = I(x+i, y+j) - Ī(x, y) is the intensity after local adjustment, σ = sqrt((1/N) Σ_{i=-w..w} Σ_{j=-w..w} Î(x+i, y+j)²) is the mean square error of the chosen window, and N = (2w+1)(2w+1) is the window size. arg min and arg max mean: over all d in R, find the d for which the expression attains its minimum or maximum value, respectively.

Disparity analysis with all three correlation algorithms was performed on the three selected windows of the sub-image shown in figure 6, which represent areas containing either object or background only. Figures 7, 8 and 9 show the disparity analysis results for each window for the three methods. The plots are normalized such that the maximum SSD(d) and MAE(d) are equal to 1. The search scope R is 0 to 20 pixels for the object (window 1) and -10 to 10 pixels for the background (windows 2 and 3). The disparity of window 1 (object) is identified as 8 pixels, as shown in figure 7, and the disparity of window 2 (background) is recognized as 2 pixels, as shown in figure 8. The difference in the results among the three correlation-based methods is insignificant. However, when the disparity between the image pair is examined in window 3, see figure 9, a disparity of -5 is obtained from the SSD and CC methods and 9 from the MAE method. All three curves tend to be periodic and have extrema around -5, 2 and 9. This is caused by the similar background patterns that exist in the scene. Mismatching around the extrema positions occurs due to noise and other intensity distortions introduced in the recording process, such as unbalanced illumination, projection distortion, etc. It can be noticed that even when a large window is chosen from the background (window 2, figure 8), there still exist local extrema at d = -5 and d = 9 that are competitive with the global extremum. In this situation the correct disparity is obtained from the global extremum because the larger window size provides a higher probability of obtaining a higher signal-to-noise ratio. A multi-baseline technique is adopted in order to improve the disparity analysis result.

Figure 6. Region selection in depth estimation (windows 1, 2 and 3)

Figure 7. Disparity analysis on window 1 (object region)

Figure 8. Disparity analysis on window 2 (background region)

Figure 9. Disparity analysis on window 3 (background region)
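As a concrete reading of equations (7)-(9), the sketch below evaluates the three matching measures over a search scope; it is our illustration rather than code from the paper, assumes grayscale float images, and ignores border handling.

```python
import numpy as np

def match_costs(I1, I2, x, y, w, R):
    """SSD, MAE and normalized CC of equations (7)-(9) for every
    candidate disparity d in R, comparing the (2w+1)x(2w+1) window of
    I1 centred at (x, y) with shifted windows of I2."""
    W1 = I1[y - w:y + w + 1, x - w:x + w + 1].astype(float)
    Z1 = W1 - W1.mean()                 # local adjustment Î1
    s1 = np.sqrt((Z1 ** 2).mean())      # σ1(x, y)
    ssd, mae, cc = {}, {}, {}
    for d in R:
        W2 = I2[y - w:y + w + 1, x + d - w:x + d + w + 1].astype(float)
        Z2 = W2 - W2.mean()
        s2 = np.sqrt((Z2 ** 2).mean())  # σ2(x+d, y)
        ssd[d] = ((W1 - W2) ** 2).sum()
        mae[d] = np.abs(W1 - W2).sum()
        cc[d] = (Z1 * Z2).sum() / (s1 * s2)
    return ssd, mae, cc

# The disparity estimate is the minimizing d for SSD/MAE and the
# maximizing d for CC; with I2 a 3-pixel shift of I1 it comes out as 3.
I1 = np.random.rand(64, 64)
I2 = np.roll(I1, 3, axis=1)
ssd, mae, cc = match_costs(I1, I2, x=32, y=32, w=5, R=range(0, 8))
print(min(ssd, key=ssd.get), min(mae, key=mae.get), max(cc, key=cc.get))
```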

4.2 Removing Mismatching with a Modified Multi-baseline Technique

The previous sections described a way of obtaining depth information from unidirectional integral images by extracting sub-images from the integral image and analyzing the disparity between sub-images. It is obvious from equation (4) that the precision and correctness of the depth estimate depend upon the performance of the disparity analysis, so a correct disparity analysis is of great importance for a correct depth estimation. As discussed in the last section, mismatching tends to occur when periodic or ambiguous information exists in the object scene. In order to tackle this problem, a multi-baseline technique is adopted which uses several sub-image pairs simultaneously in the disparity analysis. Theory and experiments show that the technique works effectively in removing mismatching.

The multi-baseline technique was proposed by Okutomi and Kanade [7] as a stereo matching method that uses multiple stereo pairs with various baselines to obtain precise distance estimates without suffering from ambiguity. Unlike the general fusing technique, which computes the displacement for each pair first and then calculates the final displacement from the intermediate results, this technique accumulates the matching evaluation functions of all the sub-image pairs into a single evaluation function and makes one single decision at the end. The definition of the baseline in this work is different from that used in the stereoscopic case, and consequently the disparity and depth equations differ from those of stereoscopy. Instead of computing the SSSD-in-inverse-distance function, SSSD(1/D), a direct function of disparity, -SCC(d), the negated sum of the cross correlation functions over all pairs, is used here. Using SCC(d) instead of SSSD(1/D) makes the calculation and illustration easier, since the disparity is proportional to the depth here and the CC function is itself normalized; in addition, it is much easier to find the minimum than the maximum of a function. Figure 10 illustrates the disparity analysis result for window 3 obtained using this modified multi-baseline technique. For comparison, the horizontal axis is normalized to the search scope of the longest baseline. Compared with the evaluation function obtained by direct disparity analysis of a single pair i, -CCi(d), only one clear and sharp minimum exists in the -SCC(d) curve. The error caused by mismatching has been effectively removed by the new approach.

Figure 10. Disparity analysis on window 3 using the multi-baseline method (-SCC(d))

4.3 Precise Depth Estimate and Depth Mapping of 3-D Object Space

Choosing five points around the minimum and using a polynomial curve fitting method allows the precise disparity estimate of the background to be obtained as 2.475 pixels. The corresponding depth is calculated as 2.43 mm by using equation (5). Applying the same method to window 1 gives the precise object depth as 8.13 mm, as shown in figure 11. The object thickness can then be identified as 5.7 mm, which is very close to the actual value of 5.6 mm measured manually with a Vernier caliper.
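The sketch below, again ours rather than the paper's code, accumulates the negated cross-correlation curves of several sub-image pairs into -SCC(d) and then refines the minimum with a five-point parabolic fit, in the spirit of sections 4.2 and 4.3; the baseline ratios and window geometry are illustrative assumptions.

```python
import numpy as np

def neg_scc(pairs, ratios, x, y, w, R):
    """-SCC(d) of section 4.2: sum the normalized cross correlations of
    all sub-image pairs and negate, so the best disparity is a minimum.

    pairs  : list of (I1, I2) sub-image arrays.
    ratios : baseline of each pair divided by the longest baseline, so
             pair i is probed at the rescaled disparity d * ratios[i]
             (the horizontal-axis normalization used for figure 10).
    R      : candidate disparities of the longest-baseline pair.
    """
    total = np.zeros(len(R))
    for (I1, I2), r in zip(pairs, ratios):
        W1 = I1[y - w:y + w + 1, x - w:x + w + 1].astype(float)
        Z1 = W1 - W1.mean()
        s1 = np.sqrt((Z1 ** 2).mean())
        for k, d in enumerate(R):
            di = int(round(d * r))  # disparity rescaled to this baseline
            W2 = I2[y - w:y + w + 1, x + di - w:x + di + w + 1].astype(float)
            Z2 = W2 - W2.mean()
            s2 = np.sqrt((Z2 ** 2).mean())
            total[k] -= (Z1 * Z2).sum() / (s1 * s2)  # accumulate -CC_i(d)
    return total

def subpixel_minimum(R, curve):
    """Parabola through the five samples around the minimum of -SCC(d);
    its vertex gives the sub-pixel disparity estimate of section 4.3."""
    k = int(np.argmin(curve))
    k0, k1 = max(0, k - 2), min(len(curve), k + 3)
    a, b, _ = np.polyfit(list(R)[k0:k1], curve[k0:k1], 2)
    return -b / (2 * a)
```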

Figure 11. Polynomial curve fitting and precise solution finding

Applying the same technique to every pixel of the sub-image generates the dense depth map of the object scene, shown in figure 12a. The window size used in this example is *. With this depth map, the reconstruction of the three-dimensional object space becomes straightforward: the original intensity image is mapped onto the constructed depth map. The reconstructed object scene is shown in figure 12b. The depth of the object is correctly estimated everywhere except the shadow region below and to the right of the object.

Figure 12. Depth map and object space reconstruction from the unidirectional integral image using the multi-baseline technique

A computer generated unidirectional integral image was also produced and analyzed, to compare with the result obtained from the photographic integral image. Figure 13a shows the computer-generated image, produced using modified POV-Ray software. It contains a background plane with Chinese characters, the same as that in the photographic image, and a thin box in front of it. The parameters of the recording micro-lens sheet are the same as for the photographic image. Figures 13b and 13c are two sub-images extracted from the computer generated integral image. Figure 14 shows the corresponding depth map and object space reconstruction from this image; a 7×7 window size is used in the disparity analysis. Better resolution can be obtained from the computer-generated integral image, since its illumination is balanced and it is noise free. The notable error in the computer-generated image is along the right edge of the object, and is mainly caused by occlusion.

Figure 13. Computer generated integral image (a) and two corresponding sub-images (b), (c)

Figure 14. Depth map and object space reconstruction from the computer generated image

5. CONCLUSION

In this paper, a method of depth extraction from a unidirectional integral image using disparity analysis and a modified multi-baseline technique has been described. The method involves first extracting sub-images from the integral image. The relationship between object depth and sub-image displacement is derived through geometrical analysis of the optical recording procedure. A modified multi-baseline technique is used to improve the disparity analysis results. The developed depth extraction method is applied to both captured and computer-generated images. Results show that the proposed method works effectively in both applications. Although only unidirectional integral images are discussed, the method can be applied to omni-directional integral images, which provide continuous parallax in all directions, with minor adjustments. Further work is ongoing to improve the disparity analysis accuracy.

REFERENCES

[1] Lippmann G: La photographie integrale, Comptes Rendus, Academie des Sciences, Vol. 146, pp. 446-451, 1908.
[2] Ives H E: Optical properties of a Lippmann lenticulated sheet, J. Opt. Soc. Amer., Vol. 21, pp. 171-176, 1931.
[3] Davies N and McCormick M: Holoscopic imaging with true 3D-content in full natural colour, J. Phot. Science, Vol. 40, pp. 46-49, 1992.
[4] Davies N, McCormick M and Brewin M: Design and analysis of an image transfer system using microlens arrays, Opt. Eng., Vol. 33, No. 11, pp. 3624-3633, 1994.

[5] Manolache S, Aggoun A, McCormick M and Davies N: A mathematical model of 3D-unidirectional integral recording system, Proceedings of Vision, Modeling, and Visualisation '99, pp. 5-58, Erlangen, Germany, Nov. 1999.
[6] Manolache S, McCormick M and Kung S Y: Hierarchical adaptive regularization method for depth extraction from planar recording of 3D-integral images, Proceedings of ICASSP, May 2001.
[7] Okutomi M and Kanade T: A multiple-baseline stereo, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, No. 4, pp. 353-363, 1993.
