Analysis of Off-The-Shelf Stereo Camera System Bumblebee XB3 for the Fruit Volume and Leaf Area Estimation

Ref: C0265

Dejan Šeatović, Vincent Meiser, Martin Brand¹ and Daniel Scherly
Institute of Mechatronic Systems (IMS), ZHAW, Technikumstrasse 5, CH-8401 Winterthur, Switzerland

Abstract

The paper describes a set of simulated and real-world experiments to determine the total leaf area and fruit volume of a fruit tree using 3-dimensional data gathered from colour stereo images. Initially, the accuracy of the stereo algorithms is determined; then the method for leaf area computation is developed and tested on particular trees with a known leaf area. Finally, leaf area computation is carried out on a number of images of real fruit trees obtained in the ICT-AGRI project 3D-Mosaic. Furthermore, limitations of the system are shown using the example of fruit volume and leaf area computation from orchard data. Several stereo algorithms are compared with regard to their accuracy and performance. The experimental system, consisting of several sensors and software components, is briefly explained, enabling the system to be rebuilt with minimum construction and software-development effort. Lastly, recommendations on selecting sensors and on open-source and BSD-licensed software components are given for scientific and industrial teams interested in similar problems, to help them approach this complex problem successfully.

Keywords: visual perception, sensor evaluation

1 Introduction

The Point Grey Bumblebee XB3 is a complete IEEE-1394b stereo camera system comprising hardware and software. The system allows a user without prior photogrammetric knowledge to acquire and process stereo images in order to obtain 3D point clouds (PointGREY, 2014). The main motivation for the purchase and analysis of this system was its relatively low price ($3500) and its off-the-shelf processing software and SDK, promising manageable implementation effort. It is intended for the automatic monitoring and analysis of orchard trees in the 3D-Mosaic project (Zude, 2014). In that project, the main goal of the computer vision work package was to develop software able to distinguish and localise two main objects on an orchard tree: leaves and fruits. The analysis of the camera system was first carried out in the lab, and the system was then applied in the field.

The goal of the experiments described here was to determine the total leaf area of a tree using 3D data gathered from colour stereo images. For this, the accuracy of the stereo algorithms had to be determined first. Then, the method for leaf area computation had to be tested on a tree with a known leaf area. Finally, leaf area computation was carried out on a number of images of real fruit trees obtained in Potsdam in August 2012.

¹ Martin Brand is a former employee of the Institute of Mechatronic Systems and an MSE student. The focus of his master thesis was the analysis of dense stereo algorithms and their performance; his research is one of the foundations of this paper.

Figure 1: Bumblebee XB3 stereo camera system. Source: (PointGREY, 2014)

The problem can be divided into two independent sub-problems. The first is point cloud matching: matching two point clouds requires the estimation not only of the rigid transformation (rotation matrix and translation vector) but also of a scale factor. The goal of these experiments was to find an algorithm that automatically determines these transformation parameters in order to match the two point clouds obtained from the colour and infrared images. The second is the segmentation of 3D point clouds and 2D intensity images. To achieve a robust segmentation of fruits and leaves, it is necessary to obtain object observations from as many different sensors as are available. The approach is motivated by remote sensing applications such as SPOT and LANDSAT, which allow objects to be separated within intensity images based on their reflectivity in different parts of the spectrum.

2 Accuracy of Stereo Algorithms

The analysis of the Triclops (manufacturer SDK) stereo algorithms was abandoned shortly after the camera was purchased and its basic functionality had been tested. Unfortunately, the system does not allow the user to calibrate the camera system. The vendor offers a calibration service for the end user, which requires that the sensor be sent to the vendor's service point; due to time pressure and the lack of openness of the system, the decision was made to forgo this service and proceed with our own system evaluation based on OpenCV (Bradski, 2000; Bradski & Kaehler, 2008). The decision was supported by the fact that our own infrared stereo camera system was to be used for a more reliable distinction between leaves, fruits and other parts of the tree (branches and stem). Dense stereo is a state-of-the-art research topic; for more details see (Tola, Lepetit, & Fua, 2010; Wu & Wang, 2006).

2.1 Description

The experiment was designed to test the accuracy of the stereo algorithms, using the fact that the 3D geometry of the scene objects is known. The test scene consists of different arrangements of two geometric objects, "Blocks" and "Wall", which are both made of Styrofoam and painted with an irregular colour pattern to facilitate the detection of correspondences in image pairs. "Wall" is a planar object (dimensions 448 x 448 x 20 mm) chosen to verify the planarity of the resulting 3D points. "Blocks" consists of two cubes (200 x 200 x 200 mm and 150 x 150 x 150 mm) mounted such that two of their sides are coplanar. This shape provides different combinations of planes with which to verify the accuracy of plane angles in the reconstructed points.

2.1.1 Stereo Processing

To reconstruct the point clouds from the stereo images, two baselines of the Bumblebee camera were used: the short baseline (LC), using the left and the centre camera, and the long baseline (LR), using the left and the right camera. Minimum and maximum disparity were chosen as 160 and 255 pixels for the short baseline LC and 327 and 523 pixels for the long baseline LR, corresponding to a distance range of 0.75 to 1.20 m. For "Wall", one plane was fitted through the parts of the object that are visible in the left image of both the short baseline LC and the long baseline LR. For "Blocks", four planes were fitted. All implemented stereo algorithms (see the abscissa of Figure 4), except for Kosov's algorithm, were tested.
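The accuracy check amounts to fitting a least-squares plane to the 3D points inside each marked region and comparing the planarity and the angles between the fitted planes with the known geometry of the test objects. The following Python/NumPy sketch illustrates such a check; the function names and data layout are assumptions, not the implementation used in the experiments.

    import numpy as np

    def fit_plane(points):
        """Least-squares plane fit: returns (centroid, unit normal).
        points: (N, 3) array of 3D points from the reconstructed cloud."""
        centroid = points.mean(axis=0)
        # The right singular vector with the smallest singular value is the normal.
        _, _, vt = np.linalg.svd(points - centroid)
        normal = vt[-1]
        return centroid, normal / np.linalg.norm(normal)

    def plane_rms_error(points, centroid, normal):
        """RMS point-to-plane distance, a measure of planarity (e.g. for 'Wall')."""
        d = (points - centroid) @ normal
        return float(np.sqrt(np.mean(d ** 2)))

    def angle_between_planes(n1, n2):
        """Angle between two fitted planes in degrees (e.g. faces of 'Blocks')."""
        c = abs(float(np.dot(n1, n2)))
        return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

    # Hypothetical usage: points_a, points_b are the 3D points inside two of the
    # manually marked regions of the reconstructed "Blocks" object.
    # _, na = fit_plane(points_a)
    # _, nb = fit_plane(points_b)
    # deviation = abs(90.0 - angle_between_planes(na, nb))   # cf. Figure 4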

Figure 2: Left: "Wall" object with the indicated area for plane fitting. Right: "Blocks" object with the indicated areas for plane fitting.

2.1.2 Results

The results of this experiment indicate that, of the tested algorithms, the two with the best reconstruction accuracy are the OpenCV implementation of Hirschmüller's algorithm, see (Hirschmueller, 2001; Hirschmuller, Innocent, & Garibaldi, 2002; Hirschmuller, 2008), and normalized cross correlation (NCC) block matching. This can be inferred from the fact that the reconstructed 3D data contained the fewest outliers and that the angles between the fitted planes were reconstructed most accurately by these two algorithms.

Figure 3: Reconstructed "Blocks" object. Top row, left to right: AD-census, OpenCV Hirschmüller. Bottom row, left to right: NCC block matching, SAD block matching. Errors such as outliers and holes are marked in red.

2.2 Leaf Area Computation for a Test Tree

For this experiment, a small tree was imaged with a Point Grey Bumblebee XB3 at distances of 1, 2 and 3 m, at 12 orientations each differing by 30°, resulting in a total of 36 images. In order to verify the results achieved using the stereo images, all leaves of the tree were scanned at a resolution of 300 dpi after the stereo images had been acquired. The surface area of the whole tree was then computed by counting all pixels belonging to a leaf and multiplying this number by the area per pixel. This gave the reference leaf area of the tree, in m², from a total of 445 leaves.
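The reference area follows directly from the pixel count and the scan resolution: at 300 dpi one pixel covers (25.4/300)² ≈ 0.0072 mm². The short Python sketch below illustrates this computation; the rule used to decide which scanned pixels belong to a leaf (every clearly non-white pixel) and the file handling are hypothetical assumptions, not the authors' exact procedure.

    import numpy as np
    import cv2

    DPI = 300
    MM_PER_PIXEL = 25.4 / DPI                            # pixel pitch of the scan in mm
    AREA_PER_PIXEL_M2 = (MM_PER_PIXEL / 1000.0) ** 2     # pixel area in m^2

    def scanned_leaf_area_m2(scan_path, white_threshold=240):
        """Area of one scanned leaf in m^2, counting all non-background pixels.
        The scanner background is assumed to be brighter than 'white_threshold'
        in every channel (hypothetical segmentation rule)."""
        img = cv2.imread(scan_path)                        # BGR image of the scan
        leaf_mask = np.any(img < white_threshold, axis=2)  # pixel belongs to a leaf
        return int(leaf_mask.sum()) * AREA_PER_PIXEL_M2

    # total_area = sum(scanned_leaf_area_m2(p) for p in scan_paths)  # all 445 scans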

Figure 4: Deviation from the 90° angle, in [°], of the reconstructed planes of the "Blocks" object for the tested algorithms and the two baselines LC and LR.

2.2.1 Image Pre-processing

Since the images obtained with the Bumblebee XB3 tend to be too dark, the images used in both experiments were pre-processed in the following way to improve the results obtained by the stereo algorithms (see the sketch at the end of Section 2.2.2):

1. Convert the RGB image to HSV (Hue, Saturation, Value)
2. Adjust the saturation
3. Adjust the value
4. Convert back to RGB

Figure 5: Test tree used in the experiment. Left: original image. Right: pre-processed image.

2.2.2 Stereo Processing

To reconstruct the point clouds from the stereo images, the two baselines of the Bumblebee camera were used: the short baseline (LC), using the left and the centre camera, and the long baseline (LR), using the left and the right camera. All implemented algorithms, except for Kosov's algorithm and the OpenCV implementation of SAD block matching, were tested. A full list of parameters and their descriptions can be found in the appendix of Martin Brand's master thesis.
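A minimal OpenCV sketch of the pre-processing steps listed in Section 2.2.1 is given below. The saturation and value gain factors are placeholder assumptions chosen for illustration only, not the values used in the experiments.

    import numpy as np
    import cv2

    def preprocess(bgr, sat_gain=1.3, val_gain=1.6):
        """Brighten a (too dark) Bumblebee image by scaling saturation and value
        in HSV space, then convert back. The gains are hypothetical placeholders."""
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
        hsv[..., 1] = np.clip(hsv[..., 1] * sat_gain, 0, 255)   # saturation channel
        hsv[..., 2] = np.clip(hsv[..., 2] * val_gain, 0, 255)   # value (brightness)
        return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

    # left  = preprocess(cv2.imread("left.png"))     # hypothetical file names
    # right = preprocess(cv2.imread("right.png"))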

2.2.3 Segmentation

In this step, all pixels belonging to leaves were selected to act as a mask limiting the leaf area calculation to the region of interest. For a pixel to belong to a leaf, the following criteria had to be met:

- the pixel had to lie within a manually defined rectangle,
- its RGB values had to lie within defined thresholds, and
- its hue value had to lie within a range chosen to select green pixels.

2.2.4 Leaf Area Computation

In a first step, the range image (the z-data) was filtered with a 3x3 median filter, and the x- and y-data were then reconstructed from the changed z-data using the rectification information of the respective baseline (LC or LR). In the next step, a Delaunay triangulation was carried out on the segmented data, with the constraint that, for two points to form one side of a triangle, their 2D distance in pixels must be below a threshold and their 3D distance must be below 3 cm. On the triangulated mesh, the sum over all triangles of the individual areas A = 1/2 |a x b|, where a and b are two edge vectors of a single triangle, was computed as the leaf area of the tree.
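A minimal sketch of this triangulation-based area computation is given below, using SciPy's Delaunay triangulation on the segmented pixel coordinates and discarding triangles with an edge that violates the 2D or 3D distance constraint. Only the 3 cm limit is taken from the text; the 2D threshold value and the data layout are assumptions.

    import numpy as np
    from scipy.spatial import Delaunay

    def leaf_area(pix_uv, xyz, max_2d=10.0, max_3d=0.03):
        """Sum of triangle areas over a Delaunay mesh of segmented leaf pixels.
        pix_uv: (N, 2) pixel coordinates of the segmented points (triangulated in 2D)
        xyz:    (N, 3) corresponding 3D points from the disparity image
        max_2d: maximum edge length in pixels   (placeholder value)
        max_3d: maximum edge length in metres   (3 cm, as in the text)"""
        tri = Delaunay(pix_uv)
        area = 0.0
        for i, j, k in tri.simplices:
            # Reject triangles with an edge that is too long in 2D or in 3D.
            pairs = ((i, j), (j, k), (k, i))
            if any(np.linalg.norm(pix_uv[a] - pix_uv[b]) > max_2d or
                   np.linalg.norm(xyz[a] - xyz[b]) > max_3d for a, b in pairs):
                continue
            a = xyz[j] - xyz[i]
            b = xyz[k] - xyz[i]
            area += 0.5 * np.linalg.norm(np.cross(a, b))   # A = 1/2 |a x b|
        return area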

2.2.5 Results

The experiment was designed to infer a consistent factor for estimating the complete leaf area of a tree from one stereo image pair. A straightforward assumption would be to expect to see 50% of the leaf area in one view. The algorithm with the best performance over all distances is Hirschmüller's algorithm, which yields consistent values for both the leaf area and the standard deviation at distances of 2 and 3 m for both baselines LC and LR, as can be seen in Figure 6.

Figure 6: Results for the test tree leaf area using Hirschmüller's algorithm.

When looking at the average percentage of the computed leaf area relative to the scanned tree over all 12 orientations, a clear effect of the pre-processing step on the amount of leaf area can be seen for most algorithms. This effect is visible for all distances and both baselines. However, not only is the computed leaf area larger, but so is the standard deviation across the images. It can also be seen that the closer the camera is located to the tree, the lower the computed leaf area becomes. This could be attributed to the relative increase in the size of the leaves when moving closer, leading to a sparser disparity image. Increasing the window size of the algorithms could help here, but would also introduce errors at leaf edges. Using the longer baseline instead of the shorter one also results in a lower computed leaf area.

It was also found that the AD-census algorithm needs much smaller parameters for building the cross-based regions (see appendix A.1 of Martin Brand's master thesis), since the leaves of the tree all have a very similar colour, which leads to excessive smoothing. The zero-mean (Z***) variants of the algorithms, as well as RANK and SSD block matching, yield the lowest values for the leaf area, while NCC and SAD block matching, Hirschmüller's algorithm and AD-census consistently result in larger leaf areas. The results of OpenCV's block-matching algorithm are not included, since its input images are grayscale only and no selection of green pixels takes place other than the manually selected rectangular region, leading to incorrect results.

2.3 Leaf Area Computation for Fruit Trees in an Orchard

The data for this experiment was collected on the 28th and 29th of August 2012 in an orchard in Potsdam, Germany, consisting of 180 trees arranged in 6 rows of 30 trees. Each tree was imaged from both sides, denoted A1 and A2 respectively. The camera used was a Point Grey Bumblebee XB3 stereo camera. The images were pre-processed in the same manner as in the previous experiment with the test tree.

2.3.1 Stereo Processing

Based on the results obtained in the two previous experiments, only the short baseline LC was used, since the longer baseline introduced too much distortion at the distances used in this application. Likewise, only normalized cross correlation (NCC) block matching and the OpenCV implementation of Hirschmüller's algorithm were used, since these performed best in the previous experiments with regard to reconstruction accuracy and consistency of the leaf area computation. Table 1 lists the parameters of the two chosen algorithms used in this experiment. The disparity range of 4 to 95 pixels corresponds to a distance range of 2 to 50 m.

Table 1: Stereo processing parameters

  Parameter             NCC block-matching   OpenCV Hirschmüller
  Minimum disparity     4                    4
  Maximum disparity     95                   95
  Window size           7x...                ...
  max-diff              ...                  ...
  pre-filter cap        ...                  ...
  uniqueness ratio      ...                  ...
  speckle window size   ...                  ...
  speckle range         ...                  ...
  full DP               -                    true
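A sketch of how the OpenCV semi-global matcher ("OpenCV Hirschmüller") could be configured with the parameters named in Table 1 is given below. Only the disparity range of 4 to 95 pixels is taken from the text; all other numeric values are placeholder assumptions, and numDisparities is rounded up to a multiple of 16 as OpenCV requires.

    import cv2

    MIN_DISP = 4
    NUM_DISP = 96          # covers disparities 4..99; must be a multiple of 16
    BLOCK = 7              # placeholder window size

    # Parameter names mirror Table 1; unknown values are placeholders.
    sgbm = cv2.StereoSGBM_create(
        minDisparity=MIN_DISP,
        numDisparities=NUM_DISP,
        blockSize=BLOCK,
        P1=8 * 3 * BLOCK * BLOCK,        # smoothness penalties (common heuristic)
        P2=32 * 3 * BLOCK * BLOCK,
        disp12MaxDiff=1,                 # "max-diff"
        preFilterCap=31,                 # "pre-filter cap"
        uniquenessRatio=10,              # "uniqueness ratio"
        speckleWindowSize=100,           # "speckle window size"
        speckleRange=2,                  # "speckle range"
        mode=cv2.STEREO_SGBM_MODE_HH,    # "full DP = true" (full 8-path optimisation)
    )

    # disparity = sgbm.compute(left_rectified, right_rectified).astype(float) / 16.0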

2.3.2 Segmentation

As in the previous experiment, only pixels belonging to leaves were considered. For a pixel to belong to a leaf, the following criteria had to be met:

- The depth had to be below a threshold, to eliminate points belonging to objects further in the background. This is also the reason why the disparity range during stereo processing was chosen to include pixels as far away as 50 m.
- The RGB values had to be below W = 250, a threshold to eliminate bright pixels belonging predominantly to the sky or to specular reflections. Additionally, only connected components above a minimum size were kept in this step.
- The RGB values had to be above K = 40, a threshold to eliminate pixels belonging predominantly to the black cloth in the background. Using the white and the black pixels as a mask, edges in that mask were used to find lines with the Hough transform. This was done because the black cloth behind the tree has long, straight edges which separate the background behind the cloth from the tree in front of it. Lines exceeding a minimum length and lying within +/- 10° of horizontal or vertical were used to eliminate pixels outside the background cloth.
- The hue value had to lie within a range chosen to select green pixels.

2.3.3 Leaf Area Computation

The computation of the leaf area was carried out in the same manner as in the previous experiment, except that the different pixel selection criteria listed above were used to find pixels belonging to a leaf.

2.3.4 Results

The leaf areas were computed for both views of all 180 trees, of which only two had ground-truth data for their real leaf area available. The assumption made in the previous experiment with the test tree, that 50% of the leaf area would be visible in one view, could not be confirmed in this experiment; the actual percentage is lower. Reasons for this are:

- The dynamic range of the images is low, since the sensors do not allow more sophisticated adjustment, and the amount of false matches is greater than the expected 10% on average. This leads to errors in the segmentation and the stereo processing.
- Several trees could not be distinguished automatically, so their leaf areas were not determined accurately at all; these values were corrected manually.

2.4 Colour and Infrared Stereo Point Clouds

These point clouds were generated from images captured in Potsdam in August 2012, using stereo algorithms implemented and tested during the course of the 3D-Mosaic project. The sources were colour images from the Point Grey Bumblebee XB3 and images from two Point Grey Grasshopper 2 cameras fitted with infrared filters to block out visible light.

Figure 7: Input data set obtained from the measurements in Potsdam. The infrared point cloud has been coloured orange for better contrast.
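Matching the colour and infrared point clouds shown in Figure 7 requires estimating a rotation, a translation and a scale factor, as outlined in the introduction. A minimal sketch of a closed-form (Umeyama-style) estimate of these parameters from a set of corresponding points is given below; it is an illustration only, not the algorithm evaluated in the project, where matching of the real-world data did not succeed.

    import numpy as np

    def similarity_transform(src, dst):
        """Closed-form estimate of scale s, rotation R and translation t such that
        dst ~ s * R @ src + t, for corresponding points src, dst of shape (N, 3)."""
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        xs, xd = src - mu_s, dst - mu_d
        cov = xd.T @ xs / len(src)                       # cross-covariance matrix
        u, d, vt = np.linalg.svd(cov)
        sign = 1.0 if np.linalg.det(u @ vt) >= 0 else -1.0   # guard against reflections
        s_mat = np.diag([1.0, 1.0, sign])
        rot = u @ s_mat @ vt
        scale = np.trace(np.diag(d) @ s_mat) / xs.var(axis=0).sum()
        trans = mu_d - scale * rot @ mu_s
        return scale, rot, trans

    # Hypothetical usage with manually picked correspondences:
    # s, R, t = similarity_transform(colour_pts, infrared_pts)
    # aligned = s * colour_pts @ R.T + t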

3 Conclusions

The Bumblebee XB3 stereo camera system has several advantages, such as price, when compared to high-end systems and also to systems assembled from off-the-shelf cameras. Unfortunately, the system is closed to the user, and important parameters (the camera calibration) cannot be accessed. Furthermore, due to its Bayer-pattern sensor, the system delivers only a fraction of the spectrum, so the stereo algorithms find many similar pixels, which leads to very noisy point clouds, see Figure 7. At the end of the project, the implementation effort had exceeded the estimated time by a factor of three. Due to the data quality problems, the final goal of matching the point clouds was not fully achieved: while the test on synthetic point cloud data was successful, the real-world data obtained from the captured images could not be matched by any of the algorithms. A poor signal-to-noise ratio also prevented a further systematic analysis of the algorithms. The results obtained on the field data therefore give a coarse overview of the procedure rather than hard facts. Nevertheless, the approach and the results can be rated a success, since higher-quality sensors will allow more accurate data processing. Additional research is necessary to finally answer the question of whether point clouds produced by stereo cameras are sufficient for leaf area estimation and fruit detection in orchards.

4 Acknowledgements

The authors would like to thank ATB Potsdam, and especially Dr. Manuela Zude, for their patience and understanding during the 3D-Mosaic project. We also want to thank Dr. Thomas Anken for almost a decade of cooperation and support in the application of visual perception in agriculture.

5 References

Bradski, G. (2000). The OpenCV Library. Dr. Dobb's Journal of Software Tools.

Bradski, G., & Kaehler, A. (2008). Learning OpenCV: Computer Vision with the OpenCV Library. O'Reilly Media.

Hirschmueller, H. (2001). Improvements in Real-Time Correlation-Based Stereo Vision. In SMBV 2001.

Hirschmuller, H. (2008). Stereo Processing by Semiglobal Matching and Mutual Information. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(2).

Hirschmuller, H., Innocent, P. R., & Garibaldi, J. (2002). Real-Time Correlation-Based Stereo Vision with Reduced Border Errors. International Journal of Computer Vision, 47(1-3).

PointGREY. (2014). Point Grey - Stereo Vision Products - Triclops SDK. Retrieved June 2, 2014.

Tola, E., Lepetit, V., & Fua, P. (2010). DAISY: An Efficient Dense Descriptor Applied to Wide-Baseline Stereo. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(5).

Wu, C.-C., & Wang, Z.-F. (2006). Stereo Correspondence Using Stripe Adjacency Graph. In Proceedings of the 18th International Conference on Pattern Recognition (ICPR 2006), Vol. 1.

Zude, M. (2014). 3D-Mosaic. Retrieved April 2, 2014.
