Dense Disparity Estimation via Global and Local Matching

Chun-Jen Tsai and Aggelos K. Katsaggelos
Electrical and Computer Engineering, Northwestern University, Evanston, IL, USA

Abstract

A new divide-and-conquer technique for disparity estimation is proposed in this paper. This technique performs feature matching recursively, starting with the strongest feature point in the left scanline. Once the first matching pair is established, the ordering constraint in disparity estimation allows the original intra-scanline matching problem to be divided into two smaller subproblems. Each subproblem can then be solved recursively, or via a disparity space technique. An extension to the standard disparity space technique is also proposed to complement the divide-and-conquer algorithm. Experimental results demonstrate the effectiveness of the proposed approaches.

1 Introduction

Disparity estimation is the most fundamental problem in stereo image processing. Given two images taken simultaneously with a pair of cameras, the goal of this process is to locate, for each point in one image, its corresponding point in the other image. Let I_L(x, y) and I_R(x, y) denote the left-channel and right-channel image functions. The solution to the correspondence problem of I_L(x, y) and I_R(x, y) is a disparity field d(x, y) such that

    I_L(p) = I_R(p + d(p)),    (1)

where p = (x, y)^T represents the image coordinates. Many researchers have been working on the disparity estimation problem since the 1970s. The two most often used criteria for matching are photometric similarity and spatial consistency. The photometric similarity criterion arises naturally from Equation (1). The spatial consistency criterion confines the search space for d(p) to piecewise smooth disparity fields. Several techniques are available for disparity estimation, including block matching [1], regularization techniques [2], and disparity space-based techniques [3, 4].
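As a concrete illustration of the photometric criterion in Eq. (1), a minimal one-dimensional block-matching search can estimate d at a single pixel. The window size, search range, and function name below are illustrative choices of this sketch, not part of the paper:

```python
import numpy as np

def match_pixel(left, right, x, w=2, max_d=8):
    """Return the disparity d minimizing the sum of absolute differences
    between a (2w+1)-pixel window around left[x] and right[x + d],
    i.e. the d that best satisfies I_L(p) = I_R(p + d(p)) locally."""
    best_d, best_cost = 0, float("inf")
    for d in range(-max_d, max_d + 1):
        xr = x + d
        if xr - w < 0 or xr + w >= len(right):
            continue  # window would fall outside the right scanline
        cost = np.abs(left[x - w:x + w + 1] - right[xr - w:xr + w + 1]).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# A synthetic bright patch that appears 3 pixels further left in the
# right scanline, so the recovered disparity at the patch is d = -3.
left = np.zeros(32)
left[12:16] = 100.0
right = np.roll(left, -3)
```

Such a purely local search is the building block used by the feature-matching step of Section 2; on its own it is unreliable in textureless regions, which is precisely the weakness the divide-and-conquer scheme addresses.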
To reduce the complexity of the disparity estimation problem, many researchers assume that the cameras are arranged in a parallel-axis configuration. In this case, the stereo matching problem is simplified to an intra-scanline pixel matching problem. The disparity space-based technique is of particular interest in this case because it incorporates occlusion models directly into the estimation process, while other techniques usually require a separate occlusion determination step after the estimation of the disparity field. In addition, if a dense disparity field is desired, the disparity space technique usually gives better results.

In this paper, we propose a divide-and-conquer technique for feature matching. This technique first establishes the matching of strong feature points. Because of the ordering constraint in disparity estimation, these matching points divide the original intra-scanline matching problem into several smaller subproblems. These subproblems can be solved recursively, or via a disparity space technique when there is no feature point in the sub-intervals. An extension to the standard disparity space technique is also proposed to complement the divide-and-conquer algorithm.

The paper is organized as follows. In section 2, the divide-and-conquer global feature matching algorithm is introduced. In section 3, the proposed extended-step disparity space technique is described. Experiments are presented in section 4, while conclusions are given in section 5.

2 The Divide-and-Conquer Global Matching Algorithm

Figure 1 shows corresponding scanlines extracted from a pair of video conferencing stereo images. To create a dense disparity map, we have to perform pixel-wise matching between these two scanlines. Noise resulting from image acquisition and sampling, differences in lighting, etc., inhibits perfect matching. Furthermore, some pixels in the left scanline have no match in the right scanline due to occlusions. Even though disparity space-based techniques handle these two problems (noise and occlusion) adequately, they usually produce rough disparity estimates in uniform areas (for example, the background areas represented by the right and left end parts of the scanlines in Figure 1). In addition, image areas with high intensity variance are usually treated as occlusions when the actual disparity is not uniform in such areas.
In the proposed algorithm, the ordering constraint in disparity estimation is imposed explicitly. Referring, for example, to the scanline profiles in Fig. 1, the ordering constraint states that if A ↔ B is a matching pair, then any point to the right of A can only be matched to a point to the right of B. If, furthermore, C ↔ D is a matching pair to the right of A ↔ B, then the interval [A, C] should be matched to the interval [B, D]. Therefore, the establishment of the matching pairs A ↔ B and C ↔ D divides the full intra-scanline matching problem into three subproblems: the intra-interval matching between [0, A] and [0, B], between [A, C] and [B, D], and between [C, N] and [D, N], where N is the last pixel in each scanline. This breakdown of a large problem into subproblems fits the divide-and-conquer framework. Assume that the problem is to perform intra-scanline matching between the intervals [A, C] and [B, D]. The algorithm is summarized as follows:
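The interval bookkeeping implied by the ordering constraint can be sketched in a few lines. The function below is a hypothetical helper, not code from the paper, and assumes the matched pairs are already consistent with the ordering constraint:

```python
def split_intervals(n_left, n_right, matches):
    """Partition the intra-scanline matching problem [0, n_left] x
    [0, n_right] into independent subinterval problems, one per gap
    between consecutive matched feature pairs (left_pos, right_pos)."""
    pairs = [(0, 0)] + sorted(matches) + [(n_left, n_right)]
    # Each consecutive pair of anchors bounds one subproblem:
    # match left[a:c] against right[b:d].
    return [(a, c, b, d) for (a, b), (c, d) in zip(pairs, pairs[1:])]
```

For example, with matched pairs A ↔ B = (30, 28) and C ↔ D = (60, 65) on 100-pixel scanlines, this yields exactly the three subproblems named in the text.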
Figure 1: The scanline profiles from a pair of real stereo images. Cross marks in the profiles are feature points with high intensity variance.

Step 1: Compute the intensity variance at each pixel in [A, C]. Pick the point with the largest intensity variance as the feature point.

Step 2: Apply block matching to find the corresponding feature point in the interval [B, D]. Two measures are used to compute the reliability of the match. The first is the signal-variance-to-matching-error ratio (SER) and the second is the variance similarity (VS) between the left feature point and the right feature point. They are defined respectively by:

    SER(p) = (σ²_{I_L}(p) + σ²_{I_R}(p + d)) / Σ (I_L(·) − I_R(·))²,    (2)

and

    VS(p) = |σ²_{I_L}(p) − σ²_{I_R}(p + d)|,    (3)

where the summation in Eq. (2) is over the window used for matching. If SER(p) is too small or VS(p) is too large, the matching pair is considered unreliable; the pixel with the next highest variance is picked as the feature point and Step 2 is repeated at this new position.

Step 3: If a reliable matching feature pair R ↔ S is found, with R ∈ [A, C] and S ∈ [B, D], the matching between the intervals [A, C] and [B, D] is divided into two subproblems: the matching between [A, R] and [B, S] and the matching between [R, C] and [S, D]. Each subproblem is then solved recursively, i.e., Step 1 is applied to it.

Step 4: If there is no reliable matching feature pair between [A, C] and [B, D], the disparity in (A, C) can either be filled in by linear interpolation of the disparity at A and C, or it can be solved with a disparity space technique. In this paper, the latter approach is demonstrated.
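The two reliability measures of Eqs. (2) and (3) are straightforward to compute over the matching window. The implementation below is our own sketch; the window size is illustrative and the paper does not specify acceptance thresholds:

```python
import numpy as np

def reliability(left, right, x, d, w=3):
    """SER and VS for a candidate match x <-> x + d over a (2w+1)-pixel
    window: SER (Eq. 2) compares the window variances to the matching
    error, VS (Eq. 3) compares the two window variances to each other."""
    wl = left[x - w:x + w + 1]
    wr = right[x + d - w:x + d + w + 1]
    err = float(np.sum((wl - wr) ** 2))
    ser = (wl.var() + wr.var()) / max(err, 1e-9)  # guard against err = 0
    vs = abs(wl.var() - wr.var())
    return ser, vs

# An intensity edge that matches exactly at d = -2: the matching error
# is zero, so SER is very large and VS is zero (a highly reliable match).
left = np.concatenate([np.zeros(10), np.full(10, 50.0)])
right = np.roll(left, -2)
```

A candidate pair would then be accepted when SER exceeds a lower threshold and VS stays below an upper threshold, with both thresholds chosen empirically.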
3 The Extended Disparity Space Technique

The disparity space image (DSI) is the intra-scanline matching error image. It is defined for scanline n as follows:

    DSI_n(x_L, x_R) = Σ_{i,j = −W/2}^{W/2} |I_L(x_L + i, n + j) − I_R(x_R + i, n + j)|,    (4)

where W is the matching window size. The estimation of disparity between two scanlines is equivalent to finding the path across the DSI, starting at the lower-left corner and ending at the upper-right corner, which has the smallest cost. This path can be found using dynamic programming techniques. To simplify the dynamic programming for disparity estimation, three step types are typically assumed, namely left occlusion, match, and right occlusion. This simple step model tends to produce mis-detected occlusions in areas with non-uniform disparity. More sophisticated step models have been proposed ([5], [6]) to resolve the conflict between occlusion and non-uniform disparity. However, these models introduce more cost parameters that are usually determined empirically. In this paper we propose an extended step model which does not introduce extra cost parameters. In Fig. 2, five possible steps (instead of three) are considered during the cost evaluation. The introduction of the two new steps, d → p and e → p, allows the path to model non-uniform disparity properly. According to Fig. 2, the cost function is now defined by:

    COST(p) = min { COST(c) + DSI(p),
                    COST(a) + COST_occlusion,
                    COST(b) + COST_occlusion,
                    COST(d) + DSI(p) + (DSI(b) + DSI(c))/2,
                    COST(e) + DSI(p) + (DSI(a) + DSI(c))/2 },    (5)

where COST_occlusion is the fixed occlusion cost, and a, b, c, d, e, and p are pixel coordinates.

Figure 2: The extended step model for the disparity space technique.

Figure 3: Left and right frames from the "man" stereo image sequence.
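To make the recursion in Eq. (5) concrete, the sketch below runs the five-step dynamic program over a DSI computed with W = 0 (single-pixel absolute differences) and returns only the minimum path cost. The grid positions assumed for a, b, c, d, and e are one plausible reading of Fig. 2, and backtracking to recover the actual disparity path is omitted:

```python
import numpy as np

def extended_dp(left, right, occ_cost=20.0):
    """Minimum-cost path across the DSI under the five-step model of
    Eq. (5). Assumed step geometry relative to p = (xl, xr):
    c = (xl-1, xr-1) match; a = (xl-1, xr) and b = (xl, xr-1)
    occlusions; d = (xl-2, xr-1) and e = (xl-1, xr-2) extended steps."""
    n = len(left)
    dsi = np.abs(left[:, None] - right[None, :])  # Eq. (4) with W = 0
    cost = np.full((n, n), np.inf)
    cost[0, 0] = dsi[0, 0]
    for xl in range(n):
        for xr in range(n):
            if xl == 0 and xr == 0:
                continue
            cands = []
            if xl and xr:        # c -> p: match step
                cands.append(cost[xl - 1, xr - 1] + dsi[xl, xr])
            if xl:               # a -> p: occlusion step
                cands.append(cost[xl - 1, xr] + occ_cost)
            if xr:               # b -> p: occlusion step
                cands.append(cost[xl, xr - 1] + occ_cost)
            if xl >= 2 and xr:   # d -> p: extended step of Eq. (5)
                cands.append(cost[xl - 2, xr - 1] + dsi[xl, xr]
                             + (dsi[xl - 1, xr] + dsi[xl - 1, xr - 1]) / 2)
            if xl and xr >= 2:   # e -> p: extended step of Eq. (5)
                cands.append(cost[xl - 1, xr - 2] + dsi[xl, xr]
                             + (dsi[xl, xr - 1] + dsi[xl - 1, xr - 1]) / 2)
            cost[xl, xr] = min(cands)
    return cost[n - 1, n - 1]
```

For identical scanlines the optimal path is the pure diagonal of match steps with zero DSI cost; any photometric mismatch raises the minimum path cost above zero.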
4 Experiments

In this section, two stereo image pairs are used to evaluate the performance of the proposed algorithm. The first stereo image pair comes from the video conferencing sequence "man," two frames of which are shown in Fig. 3. A comparison of the disparity maps estimated using the conventional ([4]) and the proposed disparity space techniques is shown in Fig. 4. The disparity maps computed with the proposed extended-step algorithm and the divide-and-conquer algorithm are also shown in Fig. 4.

Figure 4: From left to right: 1) disparity map estimated with the basic disparity space algorithm; 2) disparity map estimated with the modified disparity space algorithm in [6]; 3) disparity map estimated with the extended-step disparity space algorithm alone; 4) disparity map estimated with the proposed divide-and-conquer global matching disparity space algorithm. All disparity maps are histogram-equalized for visualization purposes. The brightest spots are detected occlusions.

By comparing the results in Fig. 4, one can see that the conventional algorithm does not handle the background well. The matching error in the background is propagated into the foreground and undermines the estimation of disparity in the area of the person's face. On the other hand, the proposed divide-and-conquer algorithm clearly separates the foreground from the background and does not suffer from the lack of texture in the background. Notice how the global matching information introduced by the divide-and-conquer algorithm (Fig. 4.4) helps remove the mismatch produced by the purely local matching algorithm (the bright horizontal line across the chin of the face in Fig. 4.3). The "aqua" image sequence (Fig. 5) is used for the second experiment. The disparity maps estimated using the conventional disparity space algorithm and the proposed divide-and-conquer algorithm are shown in Fig. 6. From these figures, one can observe that the disparity map in the latter case is richer in detail.
More structure is visible in the coral, the largest fish, and the background with the proposed algorithm.

Figure 5: Left frame from the "aqua" stereo image sequence.

Figure 6: Left: disparity map from the conventional disparity space algorithm. Right: disparity map from the proposed algorithm.

5 Conclusions

We have presented a two-level disparity estimation algorithm. The top level uses a divide-and-conquer algorithm for global feature matching, while the bottom level uses a disparity space technique to perform local matching and occlusion detection. As the experiments show, the proposed technique works very well both for low-feature scenes from video conferencing sequences and for complex scenes from natural scenic image sequences. Another advantage of the divide-and-conquer algorithm is that it separates texture-less regions (e.g., the background) from texture-rich regions (e.g., the foreground). We are investigating the application of adaptive disparity space matching algorithms to improve the performance further.

References

[1] M. J. Hannah, "Computer Matching of Areas in Stereo Images," Ph.D. dissertation, Stanford University, Stanford, CA, Report STAN-CS.

[2] D. Terzopoulos, "Regularization of Inverse Visual Problems Involving Discontinuities," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. PAMI-8, no. 4, July.

[3] Y. Ohta and T. Kanade, "Stereo by Intra- and Inter-Scanline Search Using Dynamic Programming," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. PAMI-7, no. 2, March.

[4] S. S. Intille and A. F. Bobick, "Disparity-Space Images and Large Occlusion Stereo," MIT Media Lab Perceptual Computing Group Technical Report No. 220.

[5] L. Falkenhagen, "Disparity Estimation from Stereo Image Pairs Assuming Piecewise Continuous Surfaces," in Y. Paker and S. Wilbur (Eds.), Image Processing for Broadcast and Video Production, Springer, Great Britain.

[6] A. Redert, C.-J. Tsai, E. Hendriks, and A. K. Katsaggelos, "Disparity Estimation with Modeling of Occlusion and Object Orientation," Proc. SPIE Visual Communication and Image Processing, San Jose, Jan.
More informationCS 4495 Computer Vision Motion and Optic Flow
CS 4495 Computer Vision Aaron Bobick School of Interactive Computing Administrivia PS4 is out, due Sunday Oct 27 th. All relevant lectures posted Details about Problem Set: You may *not* use built in Harris
More informationVIDEO DENOISING BASED ON ADAPTIVE TEMPORAL AVERAGING
Engineering Review Vol. 32, Issue 2, 64-69, 2012. 64 VIDEO DENOISING BASED ON ADAPTIVE TEMPORAL AVERAGING David BARTOVČAK Miroslav VRANKIĆ Abstract: This paper proposes a video denoising algorithm based
More informationMULTIVIEW REPRESENTATION OF 3D OBJECTS OF A SCENE USING VIDEO SEQUENCES
MULTIVIEW REPRESENTATION OF 3D OBJECTS OF A SCENE USING VIDEO SEQUENCES Mehran Yazdi and André Zaccarin CVSL, Dept. of Electrical and Computer Engineering, Laval University Ste-Foy, Québec GK 7P4, Canada
More informationLecture 6 Stereo Systems Multi-view geometry
Lecture 6 Stereo Systems Multi-view geometry Professor Silvio Savarese Computational Vision and Geometry Lab Silvio Savarese Lecture 6-5-Feb-4 Lecture 6 Stereo Systems Multi-view geometry Stereo systems
More informationA Robust and Efficient Motion Segmentation Based on Orthogonal Projection Matrix of Shape Space
A Robust and Efficient Motion Segmentation Based on Orthogonal Projection Matrix of Shape Space Naoyuki ICHIMURA Electrotechnical Laboratory 1-1-4, Umezono, Tsukuba Ibaraki, 35-8568 Japan ichimura@etl.go.jp
More informationData Term. Michael Bleyer LVA Stereo Vision
Data Term Michael Bleyer LVA Stereo Vision What happened last time? We have looked at our energy function: E ( D) = m( p, dp) + p I < p, q > N s( p, q) We have learned about an optimization algorithm that
More informationAutomatic Texture Segmentation for Texture-based Image Retrieval
Automatic Texture Segmentation for Texture-based Image Retrieval Ying Liu, Xiaofang Zhou School of ITEE, The University of Queensland, Queensland, 4072, Australia liuy@itee.uq.edu.au, zxf@itee.uq.edu.au
More informationConstructing a 3D Object Model from Multiple Visual Features
Constructing a 3D Object Model from Multiple Visual Features Jiang Yu Zheng Faculty of Computer Science and Systems Engineering Kyushu Institute of Technology Iizuka, Fukuoka 820, Japan Abstract This work
More informationFace Tracking in Video
Face Tracking in Video Hamidreza Khazaei and Pegah Tootoonchi Afshar Stanford University 350 Serra Mall Stanford, CA 94305, USA I. INTRODUCTION Object tracking is a hot area of research, and has many practical
More informationMode-Dependent Pixel-Based Weighted Intra Prediction for HEVC Scalable Extension
Mode-Dependent Pixel-Based Weighted Intra Prediction for HEVC Scalable Extension Tang Kha Duy Nguyen* a, Chun-Chi Chen a a Department of Computer Science, National Chiao Tung University, Taiwan ABSTRACT
More informationCORRELATION BASED CAR NUMBER PLATE EXTRACTION SYSTEM
CORRELATION BASED CAR NUMBER PLATE EXTRACTION SYSTEM 1 PHYO THET KHIN, 2 LAI LAI WIN KYI 1,2 Department of Information Technology, Mandalay Technological University The Republic of the Union of Myanmar
More informationREGION-BASED SUPER-RESOLUTION FOR COMPRESSION
REGION-BASED SUPER-RESOLUTION FOR COMPRESSION D. Barreto 1, L.D. Alvarez 2, R. Molina 2, A.K. Katsaggelos 3 and G.M. Callicó 1 INTERNATIONAL CONFERENCE ON SUPERRESOLUTION IMAGING Theory, Algorithms and
More informationMultiview Image Compression using Algebraic Constraints
Multiview Image Compression using Algebraic Constraints Chaitanya Kamisetty and C. V. Jawahar Centre for Visual Information Technology, International Institute of Information Technology, Hyderabad, INDIA-500019
More informationAutomated Segmentation Using a Fast Implementation of the Chan-Vese Models
Automated Segmentation Using a Fast Implementation of the Chan-Vese Models Huan Xu, and Xiao-Feng Wang,,3 Intelligent Computation Lab, Hefei Institute of Intelligent Machines, Chinese Academy of Science,
More informationAN ALGORITHM FOR BLIND RESTORATION OF BLURRED AND NOISY IMAGES
AN ALGORITHM FOR BLIND RESTORATION OF BLURRED AND NOISY IMAGES Nader Moayeri and Konstantinos Konstantinides Hewlett-Packard Laboratories 1501 Page Mill Road Palo Alto, CA 94304-1120 moayeri,konstant@hpl.hp.com
More informationDoes Color Really Help in Dense Stereo Matching?
Does Color Really Help in Dense Stereo Matching? Michael Bleyer 1 and Sylvie Chambon 2 1 Vienna University of Technology, Austria 2 Laboratoire Central des Ponts et Chaussées, Nantes, France Dense Stereo
More informationSegmentation Based Stereo. Michael Bleyer LVA Stereo Vision
Segmentation Based Stereo Michael Bleyer LVA Stereo Vision What happened last time? Once again, we have looked at our energy function: E ( D) = m( p, dp) + p I < p, q > We have investigated the matching
More informationFinally: Motion and tracking. Motion 4/20/2011. CS 376 Lecture 24 Motion 1. Video. Uses of motion. Motion parallax. Motion field
Finally: Motion and tracking Tracking objects, video analysis, low level motion Motion Wed, April 20 Kristen Grauman UT-Austin Many slides adapted from S. Seitz, R. Szeliski, M. Pollefeys, and S. Lazebnik
More information3DPIXA: options and challenges with wirebond inspection. Whitepaper
3DPIXA: options and challenges with wirebond inspection Whitepaper Version Author(s) Date R01 Timo Eckhard, Maximilian Klammer 06.09.2017 R02 Timo Eckhard 18.10.2017 Executive Summary: Wirebond inspection
More informationTemperature Distribution Measurement Based on ML-EM Method Using Enclosed Acoustic CT System
Sensors & Transducers 2013 by IFSA http://www.sensorsportal.com Temperature Distribution Measurement Based on ML-EM Method Using Enclosed Acoustic CT System Shinji Ohyama, Masato Mukouyama Graduate School
More informationAn Algorithm to Determine the Chromaticity Under Non-uniform Illuminant
An Algorithm to Determine the Chromaticity Under Non-uniform Illuminant Sivalogeswaran Ratnasingam and Steve Collins Department of Engineering Science, University of Oxford, OX1 3PJ, Oxford, United Kingdom
More informationIntroduction to Medical Imaging (5XSA0) Module 5
Introduction to Medical Imaging (5XSA0) Module 5 Segmentation Jungong Han, Dirk Farin, Sveta Zinger ( s.zinger@tue.nl ) 1 Outline Introduction Color Segmentation region-growing region-merging watershed
More informationVision-Motion Planning with Uncertainty
Vision-Motion Planning with Uncertainty Jun MIURA Yoshiaki SHIRAI Dept. of Mech. Eng. for Computer-Controlled Machinery, Osaka University, Suita, Osaka 565, Japan jun@ccm.osaka-u.ac.jp Abstract This paper
More informationFace Hallucination Based on Eigentransformation Learning
Advanced Science and Technology etters, pp.32-37 http://dx.doi.org/10.14257/astl.2016. Face allucination Based on Eigentransformation earning Guohua Zou School of software, East China University of Technology,
More informationStereo imaging ideal geometry
Stereo imaging ideal geometry (X,Y,Z) Z f (x L,y L ) f (x R,y R ) Optical axes are parallel Optical axes separated by baseline, b. Line connecting lens centers is perpendicular to the optical axis, and
More informationProblem Set 7 CMSC 426 Assigned Tuesday April 27, Due Tuesday, May 11
Problem Set 7 CMSC 426 Assigned Tuesday April 27, Due Tuesday, May 11 1. Stereo Correspondence. For this problem set you will solve the stereo correspondence problem using dynamic programming, as described
More informationLecture 19: Motion. Effect of window size 11/20/2007. Sources of error in correspondences. Review Problem set 3. Tuesday, Nov 20
Lecture 19: Motion Review Problem set 3 Dense stereo matching Sparse stereo matching Indexing scenes Tuesda, Nov 0 Effect of window size W = 3 W = 0 Want window large enough to have sufficient intensit
More information