Plant and Canopy Reconstruction User Documentation
The University of Nottingham, 2014
Table of Contents

Overview
    Introduction
Program Input
    Point Clouds
    Image Sets and Camera Geometries
    Directory Structure
Executing the Tool
References
Contact
Overview

Introduction

The Centre for Plant Integrative Biology's 3D plant reconstruction tool is a command-line Windows program that can be used to reconstruct surface information in plant canopies. The input to the tool is a 3D point cloud, combined with an input image set and camera geometries. The output is a completed plant mesh in the standard PLY format, which can be exported into modelling or graphical applications.

The tool is written in C#.NET and makes extensive use of the .NET libraries, so a modern Windows machine is essential. A version of this software for other operating systems is under consideration, but is unlikely in the foreseeable future.

Due to the complexity of the data structures used within our software, the computer running the reconstruction will likely require at least 4 GB of available RAM. We would recommend at least 8 GB of available RAM for most reconstructions.
Program Input

Our tool reconstructs plant mesh data from an existing point cloud. Point clouds can be obtained through a variety of means, such as laser scanning; in our experiments we have made use of the algorithms presented in [1] and the associated tool PMVS.

Along with the point cloud, our software requires a set of images of the plant, or plants, to be reconstructed. The image set must be calibrated: in other words, the 3D world position of each camera must be known, or have been calculated, before reconstruction can begin. Details on this process can be found below. With all data available, the files must be placed in a single location, using a consistent and specific directory structure, also described below. This approach avoids having to manually specify the location of each file before reconstruction can begin.

Point Clouds

A point cloud is simply a list of 3-dimensional points that comprise our input data. Various file formats exist to store point clouds, and three can be read by the reconstruction tool: plain text, the Stanford PLY format, and the PMVS Patch format. The format used will often depend on the means by which the point cloud was produced, but the choice of format does not affect the operation of our reconstruction tool. Figure 1 provides a sample of each of the three file formats that can be input into the reconstruction software. PLY and Patch files are identified by their header information; the text format contains no header. The text format should contain only the X, Y and Z co-ordinates of each point, with spaces used to separate each value, and each point on its own line. The PLY and Patch formats are more verbose, with options to include additional information, such as the normal direction of a point, and colour.
This information can be included, but it is not used during reconstruction and will be ignored by our software. During our experiments we have used the PMVS software to construct the initial point clouds. PMVS outputs clouds as both a PLY and a Patch file, usually located in the models directory of the PMVS reconstruction; either is suitable as a starting point for reconstruction using our tool.

An optional file named clip.txt can be included inside the patches directory, and will be read if the --plane-filter option is set when the program is run. This file contains six numbers, X Y Z U V W, where X Y Z is a point on the clip plane and U V W is the normal orientation. If the plane filter is applied, any points below the plane will be removed.
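The clip-plane test itself is simple: a point p is kept when (p - o) . n >= 0, where o is the point on the plane and n its normal. The following is a minimal sketch of that logic (the tool itself is written in C#; the function names here are illustrative, and only the clip.txt layout follows the description above):

```python
def read_clip_plane(path):
    """Read clip.txt: six numbers X Y Z U V W.

    (X, Y, Z) is a point on the clip plane, (U, V, W) its normal.
    """
    with open(path) as f:
        x, y, z, u, v, w = map(float, f.read().split())
    return (x, y, z), (u, v, w)

def above_plane(point, origin, normal):
    """True if the point lies on or above the plane (i.e. it is kept)."""
    d = sum((p - o) * n for p, o, n in zip(point, origin, normal))
    return d >= 0.0

def plane_filter(points, origin, normal):
    """Remove any points below the plane, as --plane-filter does."""
    return [p for p in points if above_plane(p, origin, normal)]
```

With a horizontal plane at the origin pointing upwards (0 0 0 0 0 1), any point with a negative Z co-ordinate is removed, which matches the intended use of clipping away floor points below the plant.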
Text Format

The recognised text format contains no header. Each line contains a single point, given as X Y Z separated by spaces.

PLY Format

    ply
    format ascii 1.0
    element vertex <count>
    property float x
    property float y
    property float z
    end_header

A PLY file contains a header, found between the ply and end_header lines. After this a number of points are listed, one on each line. The number of points must equal the value given by element vertex for the file to be read successfully.

Patch Format

The patch file is output by the PMVS point cloud reconstruction software. The file begins with a PATCHES header and a value representing how many points are included in the file. The first line after each PATCHS sub-heading gives the position of a point; other information is included but is ignored.

clip.txt

The optional clip file gives the position and orientation of a plane that is used to filter points. This is useful for removing large areas of background, such as points that have been reconstructed on the floor below the plant. These points would likely be removed by the colour filter, but the plane filter is faster and, in some cases, more precise.

Figure 1: Examples showing the three available file formats for point cloud input.
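As a concrete illustration of the headerless text format, a reader needs only to split each line into three floats. This is a sketch, not part of the tool (the function name is illustrative):

```python
def read_xyz_text(path):
    """Read the plain-text point cloud format: one 'X Y Z' point per line."""
    points = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # tolerate blank lines
            x, y, z = map(float, line.split())
            points.append((x, y, z))
    return points
```

PLY and Patch files would instead be recognised by their headers, as described above, before the per-point lines are parsed in the same way.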
Image Sets and Camera Geometries

Our tool requires images of the captured scene, taken from a variety of angles. The number of images required will depend on the scene, but in general, the greater the number of images, the better the reconstruction is likely to be. This is because reconstruction of each small section of a leaf surface works best when at least one image has a good view of that leaf. To be specific, we define a good view as one taken perpendicular to the leaf surface and not obscured by other leaves or objects. In our experiments we use more than twenty images per set, and we have experimented with as many as 70. We would encourage users to test their image capture setup and ascertain the optimum number of images for their experiment. Note that the greater the number of images, the greater the memory and computational requirements of the tool. The majority of allocated memory is given to image and related data, so doubling the number of input images is likely to double the memory requirements of the tool.

In order to use the 2D images within the reconstruction process, our software must have access to the geometric information for each camera that was used to capture each image. This can be obtained through camera calibration, when using a static image capture system, or through so-called structure-from-motion algorithms that operate on arbitrarily placed images. If automated calibration is required, we recommend the use of the VisualSFM [2] system. This software also utilises PMVS for point cloud reconstruction, so it can produce all the necessary input files for the reconstruction tool. Users are encouraged to read the documentation supplied with these external tools for more information. The output of VisualSFM is spread over numerous files; our tool reads the detailed data file named cameras_v2.txt, which can be found in the root directory of each VisualSFM reconstruction.
This file can be renamed as required, but its contents must remain unchanged. Where VisualSFM is not the calibration tool used, our tool can instead read standard text files containing the camera projection matrix and the camera normal (the direction of the view). Figure 2 shows examples of these file formats. Please note that our software requires camera data associated with each image: the number and order of cameras listed in cameras_v2.txt must match the number and order of images found in the image directory. Alternatively, the number of individual files in text format must match the number of images provided.
Text Format

The directory should contain multiple numerically ordered files, one for each camera. Each file contains three lines giving the 3x4 projection matrix for that camera. Details of the derivation of this matrix can be found in our paper, published alongside this tool.

VisualSFM Format

The VisualSFM camera format contains a number identifying the camera count, followed by a list of parameters for each camera (including the image path, e.g. C:\Directory\images\0004.jpg). The parameters contain all the information necessary to reconstruct the projection matrix. All camera geometry is contained in a single text file; the name is unimportant.

Figure 2: Examples showing the two possible file formats for inputting camera geometry into our tool.
Directory Structure

Our tool requires a specific directory structure in which input files are stored, with each folder placed in a working directory that is specified when the program executes. Figure 3 provides an overview of this directory structure.

Figure 3: Example of the file structure expected by the tool, containing a working directory and three sub-directories.

The working directory can have any name, and is specified when execution begins. Below it are three mandatory directories, named patches, images and cameras. The patches folder contains the point cloud data associated with the reconstruction. Usually this will be a single file, but multiple files can be read if the point cloud is stored in this way. The images folder contains each of the captured images. Specific names for each image are not required, but images are loaded in alphabetical or numerical order. Finally, the cameras folder contains the calibration data for each camera, either as a series of files or as a single VisualSFM file, as discussed above. As with the images folder, each camera is loaded in order. This means that the ordering of both images and cameras must be consistent, so that each image can be matched to its geometry.
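Because a missing or misnamed sub-directory is an easy setup mistake, it can be worth checking the layout before launching the tool. A small helper along these lines (the directory names follow the structure above; the helper itself is not part of the tool):

```python
from pathlib import Path

def check_working_directory(root):
    """Return the names of any missing mandatory sub-directories.

    The tool expects 'patches', 'images' and 'cameras' inside the
    working directory; 'output' is created by the tool itself.
    """
    root = Path(root)
    return [d for d in ("patches", "images", "cameras")
            if not (root / d).is_dir()]
```

An empty list means the working directory is ready to be passed to reconstructor.exe.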
Executing the Tool

The tool is executed from the command line, with additional parameters providing the information required to direct the reconstruction. Users may find it easier to store the entire command in a Windows batch file if it is to be used or adjusted multiple times. To run the tool with all default settings, execute:

    reconstructor.exe path-to-working-directory

Most parameters, such as the radius used for point segmentation, have default values that will work on many datasets. However, it is recommended that these values be customised for a given image capture setup to ensure optimum reconstruction. Options are given after the working directory when the program is run. For example:

    reconstructor.exe C:\Reconstruction\ --min-cluster-size 30

Brief details of each optional parameter are given by the command:

    reconstructor.exe --help

Each option can be given in full, preceded by a double dash (--). Some options that don't require values can be shortened to a single letter preceded by a dash. A detailed description of each option is provided in Table 1. The optional flags, also shown in Table 1, do not require values; a flag is assumed true if it is passed as a parameter. For example:

    reconstructor.exe C:\Reconstruction\ -fs

This command includes both -f and -s, and instructs the software to output both a filtered cloud and a segmented cloud before resuming the reconstruction process. Where such options are chosen, output files are saved in the working directory, under the sub-directory output. The final output of the reconstruction software is saved as working-directory/output/triangulation.ply in the PLY file format.
--camera-type
    Expected values: one of VSFM, PR. Default: VSFM.
    Indicates which of the two accepted formats will be used for the camera calibration data.

--segmentation-radius
    Expected values: a positive real number. Default: 0.01.
    The distance between points below which they will be considered for the same cluster. This value will depend on the scale of the point cloud.

--min-cluster-size
    Expected values: any integer greater than zero. Default: 10.
    The minimum number of points allowed in a single segmented cluster. Points in clusters smaller than this will be discarded.

--max-cluster-size
    Expected values: an integer greater than the minimum cluster size. Default: 60.
    The maximum number of points allowed in a single cluster. Clusters above this size will be split into other clusters.

--alpha-radius
    Expected values: a positive real number. Default: 0.01.
    The alpha value used when creating alpha-shape surface estimates. This number should usually be similar to the segmentation radius, as both represent the expected distance between points on the same surface.

--level-set-iterations
    Expected values: any positive integer. Default: 200.
    The number of level set iterations to run.

--halting-percentage
    Expected values: any real number zero or greater. Default: 0.0.
    A value indicating when level sets should halt due to inactivity. If a level set changes size by less than the indicated percentage, it will stop iterating.

--zbuffer-resync-frequency
    Expected values: any integer greater than zero. Default: 20.
    How many level set iterations to run between resynchronising the z-buffer data structures.

--terminate-after
    Expected values: one of filtering, segmentation, surface-estimation, all. Default: all.
    Indicates that processing should stop before the full reconstruction is complete. This is useful for testing earlier stages of the reconstruction, such as segmentation. It should be used with the -f, -s and -a flags to view output at the appropriate stage.

-p, --plane-filter (flag)
    Indicates whether to apply a planar clipping plane to the points before other processing. The position and orientation of this plane must be supplied in clip.txt within the patches folder.

-c, --colour-filter (flag)
    Indicates whether to apply a green-based colour filter to remove non-plant points before surface reconstruction.

-f, --output-filtered-points (flag)
    Indicates whether to output the plane- and colour-filtered points. The points are saved in PLY file format in the software output folder, as filtered.points.ply.

-s, --output-segmented-points (flag)
    Indicates whether to output the segmented point cloud, with vertices coloured based on the cluster they belong to. The points are saved in PLY file format in the software output folder, as segmented.points.ply.

-a, --output-alpha-triangulation (flag)
    Indicates whether to output the initial surface reconstruction based on alpha shapes. The mesh is saved in PLY file format in the software output folder, as alpha.triangulation.ply.

Table 1: Details of all optional parameters and flags that can be passed as command line arguments to reconstructor.exe.

The following example shows a full command line execution of the software, using a number of the optional parameters. Parameters left at their default values are not required:

    reconstructor.exe C:\Reconstructions\Test\ -pcfs --camera-type PR --max-cluster-size 140 --segmentation-radius <value> --level-set-iterations 100

These options indicate that both the plane and colour filters should be applied, and that the filtered and segmented point clouds should be output. The camera type is set to PR, that is, the normal and projection matrices stored in separate files for each camera. The maximum cluster size is increased to 140, and the segmentation radius is increased from its default. Finally, the number of level set iterations is decreased from the default value of 200 to 100. As the working directory has been set to C:\Reconstructions\Test\, the program will look for the necessary files within:

    C:\Reconstructions\Test\patches\
    C:\Reconstructions\Test\images\
    C:\Reconstructions\Test\cameras\

All output files will be saved in:

    C:\Reconstructions\Test\output\
Once execution begins, the console window provides information on the progress of the reconstruction. Depending on the stage being processed, it will appear much like Figure 4:

    Reading camera parameters: Done
    Loading images: Done 40
    Reading patches: Done
    Colour Filter: Done
    Constructing Search Tree: Done
    Clustering patches: Done
    Flattening Clusters: Done
    Triangulating Clusters: Done
    Calculating Cluster Visibility: Done
    Calculating Distance Maps: Done
    Converting RGB images to NG: Done
    Syncing Z Buffers: Done
    Analysing Cluster Histograms: Done
    Running Level Sets... Iteration 2

Figure 4: Expected output of the reconstruction tool.

References

[1] Furukawa, Yasutaka and Ponce, Jean. Accurate, Dense, and Robust Multi-View Stereopsis. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 32, Issue 8. The PMVS software is available from the authors.

[2] Wu, Changchang. VisualSFM: A Visual Structure from Motion System.

Contact

Details on the tool and its development can be found on the Centre for Plant Integrative Biology website.
More informationSingle Spin Image-ICP matching for Efficient 3D Object Recognition
Single Spin Image-ICP matching for Efficient 3D Object Recognition Ernst Bovenkamp ernst.bovenkamp@tno.nl Arvid Halma arvid.halma@tno.nl Pieter Eendebak pieter.eendebak@tno.nl Frank ter Haar frank.terhaar@tno.nl
More informationProduct information. Hi-Tech Electronics Pte Ltd
Product information Introduction TEMA Motion is the world leading software for advanced motion analysis. Starting with digital image sequences the operator uses TEMA Motion to track objects in images,
More informationAssignment 2: Stereo and 3D Reconstruction from Disparity
CS 6320, 3D Computer Vision Spring 2013, Prof. Guido Gerig Assignment 2: Stereo and 3D Reconstruction from Disparity Out: Mon Feb-11-2013 Due: Mon Feb-25-2013, midnight (theoretical and practical parts,
More informationA Review of Image- based Rendering Techniques Nisha 1, Vijaya Goel 2 1 Department of computer science, University of Delhi, Delhi, India
A Review of Image- based Rendering Techniques Nisha 1, Vijaya Goel 2 1 Department of computer science, University of Delhi, Delhi, India Keshav Mahavidyalaya, University of Delhi, Delhi, India Abstract
More informationLecture 8.2 Structure from Motion. Thomas Opsahl
Lecture 8.2 Structure from Motion Thomas Opsahl More-than-two-view geometry Correspondences (matching) More views enables us to reveal and remove more mismatches than we can do in the two-view case More
More informationLecture 3 Sections 2.2, 4.4. Mon, Aug 31, 2009
Model s Lecture 3 Sections 2.2, 4.4 World s Eye s Clip s s s Window s Hampden-Sydney College Mon, Aug 31, 2009 Outline Model s World s Eye s Clip s s s Window s 1 2 3 Model s World s Eye s Clip s s s Window
More information3D Models Preparation
3D Models Preparation Single-res, Mutires, Point-clouds http://3dhop.net 13/7/2016 3DHOP and 3D models 3DHOP can manage three types of geometries: Single resolution 3D model Triangular meshes, ideally
More informationThe Ball-Pivoting Algorithm for Surface Reconstruction
The Ball-Pivoting Algorithm for Surface Reconstruction 1. Briefly summarize the paper s contributions. Does it address a new problem? Does it present a new approach? Does it show new types of results?
More informationA New Online Clustering Approach for Data in Arbitrary Shaped Clusters
A New Online Clustering Approach for Data in Arbitrary Shaped Clusters Richard Hyde, Plamen Angelov Data Science Group, School of Computing and Communications Lancaster University Lancaster, LA1 4WA, UK
More informationCRONOS 3D DIMENSIONAL CERTIFICATION
CRONOS 3D DIMENSIONAL CERTIFICATION This dimensional certification is structured as follow: 1. 2. 3. 4. 5. Test description Dimensional report basic workflow Dimensional report, acquisition field 18mm
More informationModels and The Viewing Pipeline. Jian Huang CS456
Models and The Viewing Pipeline Jian Huang CS456 Vertex coordinates list, polygon table and (maybe) edge table Auxiliary: Per vertex normal Neighborhood information, arranged with regard to vertices and
More informationSegmentation of point clouds
Segmentation of point clouds George Vosselman INTERNATIONAL INSTITUTE FOR GEO-INFORMATION SCIENCE AND EARTH OBSERVATION Extraction of information from point clouds 1 Segmentation algorithms Extraction
More informationSrikumar Ramalingam. Review. 3D Reconstruction. Pose Estimation Revisited. School of Computing University of Utah
School of Computing University of Utah Presentation Outline 1 2 3 Forward Projection (Reminder) u v 1 KR ( I t ) X m Y m Z m 1 Backward Projection (Reminder) Q K 1 q Presentation Outline 1 2 3 Sample Problem
More information3D Reconstruction of Dynamic Textures with Crowd Sourced Data. Dinghuang Ji, Enrique Dunn and Jan-Michael Frahm
3D Reconstruction of Dynamic Textures with Crowd Sourced Data Dinghuang Ji, Enrique Dunn and Jan-Michael Frahm 1 Background Large scale scene reconstruction Internet imagery 3D point cloud Dense geometry
More informationStructure from motion
Structure from motion Structure from motion Given a set of corresponding points in two or more images, compute the camera parameters and the 3D point coordinates?? R 1,t 1 R 2,t 2 R 3,t 3 Camera 1 Camera
More informationOutline of the presentation
Surface Reconstruction Petra Surynková Charles University in Prague Faculty of Mathematics and Physics petra.surynkova@mff.cuni.cz Outline of the presentation My work up to now Surfaces of Building Practice
More informationBIL Computer Vision Apr 16, 2014
BIL 719 - Computer Vision Apr 16, 2014 Binocular Stereo (cont d.), Structure from Motion Aykut Erdem Dept. of Computer Engineering Hacettepe University Slide credit: S. Lazebnik Basic stereo matching algorithm
More informationContents. 1 Introduction Background Organization Features... 7
Contents 1 Introduction... 1 1.1 Background.... 1 1.2 Organization... 2 1.3 Features... 7 Part I Fundamental Algorithms for Computer Vision 2 Ellipse Fitting... 11 2.1 Representation of Ellipses.... 11
More informationCulling. Computer Graphics CSE 167 Lecture 12
Culling Computer Graphics CSE 167 Lecture 12 CSE 167: Computer graphics Culling Definition: selecting from a large quantity In computer graphics: selecting primitives (or batches of primitives) that are
More informationComputer Vision I - Algorithms and Applications: Multi-View 3D reconstruction
Computer Vision I - Algorithms and Applications: Multi-View 3D reconstruction Carsten Rother 09/12/2013 Computer Vision I: Multi-View 3D reconstruction Roadmap this lecture Computer Vision I: Multi-View
More informationIBM Pietà 3D Scanning Project :
The IBM Pieta Project: A Historical Perspective Gabriel Taubin Brown University IBM Pietà 3D Scanning Project : 1998-2000 Shape Appearance http://www.research.ibm.com/pieta IBM Visual and Geometric Computing
More informationENGN2911I: 3D Photography and Geometry Processing Assignment 1: 3D Photography using Planar Shadows
ENGN2911I: 3D Photography and Geometry Processing Assignment 1: 3D Photography using Planar Shadows Instructor: Gabriel Taubin Assignment written by: Douglas Lanman 29 January 2009 Figure 1: 3D Photography
More informationCVPR 2014 Visual SLAM Tutorial Kintinuous
CVPR 2014 Visual SLAM Tutorial Kintinuous kaess@cmu.edu The Robotics Institute Carnegie Mellon University Recap: KinectFusion [Newcombe et al., ISMAR 2011] RGB-D camera GPU 3D/color model RGB TSDF (volumetric
More informationApplications. Oversampled 3D scan data. ~150k triangles ~80k triangles
Mesh Simplification Applications Oversampled 3D scan data ~150k triangles ~80k triangles 2 Applications Overtessellation: E.g. iso-surface extraction 3 Applications Multi-resolution hierarchies for efficient
More information3D Model Acquisition by Tracking 2D Wireframes
3D Model Acquisition by Tracking 2D Wireframes M. Brown, T. Drummond and R. Cipolla {96mab twd20 cipolla}@eng.cam.ac.uk Department of Engineering University of Cambridge Cambridge CB2 1PZ, UK Abstract
More informationImage Based Reconstruction II
Image Based Reconstruction II Qixing Huang Feb. 2 th 2017 Slide Credit: Yasutaka Furukawa Image-Based Geometry Reconstruction Pipeline Last Lecture: Multi-View SFM Multi-View SFM This Lecture: Multi-View
More information3D shape from the structure of pencils of planes and geometric constraints
3D shape from the structure of pencils of planes and geometric constraints Paper ID: 691 Abstract. Active stereo systems using structured light has been used as practical solutions for 3D measurements.
More informationThree Main Themes of Computer Graphics
Three Main Themes of Computer Graphics Modeling How do we represent (or model) 3-D objects? How do we construct models for specific objects? Animation How do we represent the motion of objects? How do
More informationReal-Time Graphics Architecture
Real-Time Graphics Architecture Kurt Akeley Pat Hanrahan http://www.graphics.stanford.edu/courses/cs448a-01-fall Geometry Outline Vertex and primitive operations System examples emphasis on clipping Primitive
More informationAdvanced Digital Photography and Geometry Capture. Visual Imaging in the Electronic Age Lecture #10 Donald P. Greenberg September 24, 2015
Advanced Digital Photography and Geometry Capture Visual Imaging in the Electronic Age Lecture #10 Donald P. Greenberg September 24, 2015 Eye of a Fly AWARE-2 Duke University http://www.nanowerk.com/spotlight/spotid=3744.php
More informationMesserli Informatik GmbH EliteCAD ME V13 R2 update information 1
Messerli Informatik GmbH EliteCAD ME V13 R2 update information 1 EliteCAD ME V13 R2 update information February 2016 This update of EliteCAD ME contains numerous improvements and optimizations which have
More informationCSCI 5980/8980: Assignment #4. Fundamental Matrix
Submission CSCI 598/898: Assignment #4 Assignment due: March 23 Individual assignment. Write-up submission format: a single PDF up to 5 pages (more than 5 page assignment will be automatically returned.).
More informationIllumination and Geometry Techniques. Karljohan Lundin Palmerius
Illumination and Geometry Techniques Karljohan Lundin Palmerius Objectives Complex geometries Translucency Huge areas Really nice graphics! Shadows Graceful degradation Acceleration Optimization Straightforward
More informationWhat s New in BullCharts. Version BullCharts staff
What s New in BullCharts www.bullcharts.com.au Version 4.3 Welcome to the latest revisions to Australia's BullCharts charting software. This version (4.3) runs on: Windows 7, Windows 8 (both 32-bit and
More informationComputational Geometry
Computational Geometry 600.658 Convexity A set S is convex if for any two points p, q S the line segment pq S. S p S q Not convex Convex? Convexity A set S is convex if it is the intersection of (possibly
More informationSETTLEMENT OF A CIRCULAR FOOTING ON SAND
1 SETTLEMENT OF A CIRCULAR FOOTING ON SAND In this chapter a first application is considered, namely the settlement of a circular foundation footing on sand. This is the first step in becoming familiar
More informationReference Manual. Version 1.1
Version 1.1 TABLE OF CONTENTS 1. INSTALLATION... 1 Requirements... 1 Installation process... 1 After installation... 3 2. USER INTERFACE... 4 Introduction... 4 Main window... 5 Project management window...
More informationCamera Drones Lecture 3 3D data generation
Camera Drones Lecture 3 3D data generation Ass.Prof. Friedrich Fraundorfer WS 2017 Outline SfM introduction SfM concept Feature matching Camera pose estimation Bundle adjustment Dense matching Data products
More informationCS231A Midterm Review. Friday 5/6/2016
CS231A Midterm Review Friday 5/6/2016 Outline General Logistics Camera Models Non-perspective cameras Calibration Single View Metrology Epipolar Geometry Structure from Motion Active Stereo and Volumetric
More information