PART A Three-Dimensional Measurement with iwitness


A1. The Basic Process

The iwitness software system enables a user to convert two-dimensional (2D) image coordinates (x, y) of feature points on an object, recorded in two or more images of a photographed scene, into three-dimensional (3D) coordinates (X, Y, Z). The measurement process is illustrated in basic form in Figure A1. Imagine that three images of an object are recorded from three different viewing directions with a consumer-grade digital camera, such that feature points P1 to P5 appear in all images. Intuitively, it is clear that if the positions of the camera stations S1, S2 and S3 are known in a 3D reference system, with the X, Y and Z axes as illustrated, and the directions of the three imaging rays to a feature point are also known, then that point, say P1, will lie at the point of intersection of the three rays, at coordinates (X1, Y1, Z1). This part is straightforward. Unfortunately, the matter is complicated because we generally do not know the precise locations of the camera stations, and we do not directly measure the spatial directions of the imaging rays.

Figure A1. Photogrammetric triangulation; object point XYZ coordinates are determined from intersecting rays.
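To make the intersection idea in Figure A1 concrete, the following minimal Python sketch (illustrative only, not iwitness code) estimates a point's XYZ coordinates as the location closest, in a least-squares sense, to a set of rays defined by known camera stations and ray directions:

    import numpy as np

    def intersect_rays(stations, directions):
        # Solve sum_i (I - d_i d_i^T) X = sum_i (I - d_i d_i^T) S_i,
        # the normal equations for the point nearest all rays.
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for S, d in zip(stations, directions):
            d = d / np.linalg.norm(d)           # unit ray direction
            M = np.eye(3) - np.outer(d, d)      # projector orthogonal to the ray
            A += M
            b += M @ S
        return np.linalg.solve(A, b)

    # Three camera stations viewing the same point from different directions.
    stations = [np.array([0.0, 0.0, 0.0]),
                np.array([2.0, 0.0, 0.0]),
                np.array([1.0, 2.0, 0.0])]
    target = np.array([1.0, 1.0, 5.0])          # a hypothetical feature point
    directions = [target - S for S in stations]
    print(intersect_rays(stations, directions)) # recovers [1.0, 1.0, 5.0]

With perfect ray directions the solution is exact, as in this example; with measured directions the solver returns the best-fitting intersection point.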

This is where the science of photogrammetry comes in. If you imagine Figure A1 as illustrating the mutual intersection of three bundles of imaging rays, then this assemblage of camera stations and object points forms a 3D shape. The bundles will only fit together in one way if the corresponding rays for each point are to intersect perfectly. To achieve this mutual intersection of all matching rays, it is necessary to recover the same Relative Orientation between the images that they possessed at the time of photography. This reconstruction of the spatial orientation of the images, together with the 3D reconstruction of the true shape represented by the object points, is termed photogrammetric orientation. The situation described is no different in principle from the way the human brain recreates 3D scenes from stereo imagery, for example with a 3D movie. But with iwitness any number of images, any number of points, and a wide variety of camera viewing directions are accommodated in the 3D coordinate determination.

In order for the bundle of rays for each image to be established, it is necessary to determine the angular relationships between the rays, which all pass through the perspective centre of the camera lens. This is where the requirement to mark (actually, measure) image coordinates comes in: although we might be marking a 2D location on an image, we are actually determining the angular direction of the corresponding ray with respect to the camera's pointing axis. This is illustrated in Figure A2, and a small numerical sketch of the computation follows the list below. By thinking of the image measurement process as the forming of a bundle of rays with known relative directions, we make it easier to visualise the mutual fitting together of these bundles to form a 3D shape. This shape can have an arbitrary scale (move the camera stations apart and the shape enlarges) and an arbitrary 3D coordinate system. The assignment of scale (size), orientation and position to the intersected points P1 to P5 in a chosen XYZ coordinate system is termed Absolute Orientation in photogrammetry.

Figure A2. The camera as an angle measuring device.

So, the 3D feature point determination can be viewed as a four-stage process:

i) Record two or more images from suitable camera viewpoints.

ii) Mark the x, y image coordinates to establish the angular relationships between the rays forming each bundle. Corresponding points are said to be referenced when they are marked in two or more images.

iii) Relatively orient the images, i.e. the bundles of rays, to recreate the geometry at the time of photography and so define the 3D shape of the array of referenced feature points.

iv) Assign a desired scale and XYZ coordinate system to the relatively oriented assemblage of bundles of rays, thus producing the desired outcome: scaled XYZ feature point coordinates.
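The sketch referred to above is a minimal Python illustration of the "camera as an angle measuring device" idea of Figure A2 (illustrative only; the millimetre units and variable names are assumptions). It converts an image measurement (x, y), taken relative to the point where the camera axis pierces the image, into the two angles fixing the ray's direction, given the distance from the perspective centre to the image plane (the principal distance c, discussed in Section A2):

    import math

    def ray_angles(x, y, c):
        # Image coordinates x, y are measured from the point where the
        # camera axis pierces the image; c is the principal distance.
        alpha = math.atan2(x, c)   # horizontal angle of the ray
        beta = math.atan2(y, c)    # vertical angle of the ray
        return alpha, beta

    # A point 5 mm right of and 3 mm above the axis, with c = 20 mm.
    a, b = ray_angles(5.0, 3.0, 20.0)
    print(math.degrees(a), math.degrees(b))   # about 14.0 and 8.5 degrees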

A2. What about camera calibration?

We need to state at the outset that photogrammetric measurement is more accurate and reliable, and the orientation procedures in iwitness more robust, if the camera is calibrated. A very easy way to visualise the importance of calibration is through reference to Figure A2. As mentioned, the purpose of marking/referencing x, y image coordinates is to determine the two angles α and β shown for each ray. But this cannot be done unless the distance from S_i to O is precisely known, and this distance is essentially the focal length of the camera lens. More strictly, we speak of the principal distance, c, rather than the focal length, which is typically a nominal value corresponding to infinity focus. Note that c changes with focusing, which is why it is imperative for accurate measurement that all images in a project be recorded at a single focus setting. Note also that the computation of the angles α and β will be influenced by how closely the camera's optical axis intersects the assumed origin of the x, y coordinate axes (there are two Interior Orientation parameters here). Furthermore, there is an assumption that the object point P, the image point p and the camera station S form a straight line; lens distortion causes the ray to deviate from a straight line, so there is a need to correct for such distortion effects (a small sketch of such a correction follows below). The topic of calibration is explained in more detail in Parts B and C of this user's manual. The purpose here is simply to highlight why calibration is important if the best possible accuracy and measurement reliability is desired.
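The following minimal Python sketch illustrates the two corrections just described, using the common two-term radial distortion model; the parameter names k1 and k2 and the principal point offsets xp and yp are illustrative assumptions, not iwitness's internal parameterisation:

    def correct_image_point(x, y, xp, yp, k1, k2):
        # Shift the measurement to the principal point (xp, yp), then
        # remove radial lens distortion with a two-term polynomial.
        xc, yc = x - xp, y - yp
        r2 = xc * xc + yc * yc              # squared radial distance
        dr = k1 * r2 + k2 * r2 * r2         # radial correction factor
        return xc * (1.0 + dr), yc * (1.0 + dr)

    # A measurement near the image corner, with small assumed parameters.
    print(correct_image_point(10.0, 8.0, 0.1, -0.05, 1.5e-4, -2.0e-7))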
A3. What about the number of images and feature points?

In order to perform the relative orientation of one image with respect to another (with corresponding points referenced to one another), two necessary conditions must be fulfilled. The first is that the camera stations are separated such that the angle between the two intersecting rays at each point is greater than a few degrees (i.e. the rays are not near-parallel). The second is that there must be at least five corresponding feature points referenced between the two images, and these cannot all lie along a straight line. While these conditions are necessary, they may not be sufficient to ensure a robust relative orientation, since intersecting the bundles of rays is a complex mathematical estimation process which, in cases of very poor geometry, can lead to a failed solution, an incorrect solution or even multiple possible solutions. One of the features of iwitness is its very robust determination of photogrammetric orientation, but there are still a number of simple rules the user can follow to help ensure a correct and accurate orientation of the many images forming the network of camera stations and feature points (rules a) and c) are illustrated by the sketch following the list):

a) For the first two oriented images, ensure that the camera station separation is at least 10-20% (or more) of the average distance to the array of feature points.

b) Ensure that the referenced feature points used for the initial orientation are as well spread as possible in two directions, and preferably three. Relative orientation can be expected to be less problematic when the feature points do not all lie in the same plane.

c) Be conscious of the fact that, generally speaking, the larger the angle of intersection of two or more rays, the more accurate the XYZ coordinates will be.

d) Where possible, have the object point field fill as much of the image format as possible; this will increase accuracy, and likely orientation reliability as well.

e) Always opt for more marked and referenced feature points rather than fewer. Although in theory a minimum of five per image in a stereo pair is required, and iwitness expects at least six for reliability purposes, a reasonable minimum of referenced points per image is 12 or more.

f) Opt to record more images rather than fewer. Images that are not needed do not have to be referenced to others in the network, whereas recording an insufficient number of images is a more difficult situation to recover from. Ensure that all feature points are imaged in a sufficient number of images: two is the minimum, but four with good geometry should be the working minimum. Not all points need to be seen in all images, of course, but each feature point must be seen in enough images.
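The following minimal Python sketch (illustrative only) shows how rules a) and c) can be checked numerically: the base-to-distance ratio of the first two camera stations, and the angle at which two rays intersect at an object point:

    import numpy as np

    def base_to_distance_ratio(S1, S2, points):
        # Rule a): camera separation over mean distance to the points.
        mid = (S1 + S2) / 2.0
        mean_dist = np.mean([np.linalg.norm(P - mid) for P in points])
        return np.linalg.norm(S2 - S1) / mean_dist

    def intersection_angle_deg(S1, S2, P):
        # Rule c): angle at P between the rays back to the two stations.
        r1 = (S1 - P) / np.linalg.norm(S1 - P)
        r2 = (S2 - P) / np.linalg.norm(S2 - P)
        return np.degrees(np.arccos(np.clip(np.dot(r1, r2), -1.0, 1.0)))

    S1, S2 = np.array([0.0, 0.0, 0.0]), np.array([1.5, 0.0, 0.0])
    pts = [np.array([0.5, 0.2, 10.0]), np.array([1.0, -0.3, 9.0])]
    print(base_to_distance_ratio(S1, S2, pts))    # ~0.16, within the 10-20% rule
    print(intersection_angle_deg(S1, S2, pts[0])) # ~8.5 degrees, not near-parallel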

A4. How does image marking/referencing and orientation proceed?

The procedure for importing images into iwitness, marking and referencing image feature points, performing the photogrammetric orientation, assigning scale and an XYZ coordinate system, and finally exporting 3D coordinate data is detailed further in Part B of this manual. To set the scene, however, we briefly review the marking/referencing and computational processes within iwitness:

a) We start with the assumption that the user has recorded a network of images of an object upon which the 3D coordinates of feature points are to be determined. We further assume that the network of images and points forms a favourable geometry for image orientation, as per the guidelines in the previous section. The object scene may contain natural feature points of interest, artificial targets, or indeed both. The digital camera may be either calibrated or uncalibrated.

b) The images are stored within the desired folder on the PC and iwitness is started. The first step is to select, from the images recorded, those to be marked and referenced. These are simply dragged and dropped into the project. In the likely event that the images are JPEG (*.JPG) files written by the digital camera, iwitness immediately determines the camera type that recorded the images and assigns the correct camera parameters to them (a small sketch of this kind of camera recognition follows this list). Shown in Figure A3 is the user interface of iwitness, with the project images on the left, the icon of the automatically recognised camera above these, and two selected images being referenced in the main working window.

c) The next essential step is to establish the relative orientation between two suitable images in the network (e.g. the two shown in Figure A3). This involves the referencing of common feature points. The referencing process is further explained in Part B; here we need only think of referencing as the linking of two corresponding marked points. After six or seven points have been referenced, an initial relative orientation is automatically performed and the user can view a graphical display of the network of feature points and the two camera locations, as shown in Figure A4. With the completion of the initial two-image orientation, a 3D set of measured object points within an arbitrary XYZ coordinate system is established. Additional object points can then be added to the point array, both through further referencing in the initial two images and through extending the referencing to additional images.
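As an aside on step b), camera recognition of this kind is typically based on the EXIF header that digital cameras write into their JPEG files. The following minimal Python sketch (an assumed approach using the Pillow library, not iwitness internals; the file name is hypothetical) reads the make and model tags:

    from PIL import Image

    def camera_model(jpeg_path):
        # EXIF tags 271 and 272 hold the camera make and model strings.
        exif = Image.open(jpeg_path).getexif()
        return exif.get(271, "unknown make"), exif.get(272, "unknown model")

    # print(camera_model("IMG_0001.JPG"))   # hypothetical file name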

Figure A3. User interface of iwitness.

Figure A4. Relatively oriented stereo images in iwitness.

d) The third and subsequent images are processed one at a time, in combination with a previously referenced image. The sequence is typically as follows. The new image is referenced to an already oriented image. As soon as four suitable points have been referenced, iwitness automatically computes the spatial orientation of the new image. At this point, the predicted locations of as-yet-unreferenced feature points are displayed to aid further referencing. Then, as further points are referenced, their coordinates are determined as each measurement is performed. Thus, at any time beyond the referencing of the first four or so points on the newest image, the 3D graphical display depicts all measured images and points. The ability to keep the full image orientation and object point determination current, with automatic updating accompanying every measurement, is a major feature of iwitness. Processing does not have to be manually initiated; it happens automatically and continuously, rather than only after the completion of all measurements on some or all images, as has been the traditional approach.

e) The operator can continue to sequentially build the network of oriented images and object points, and at any time can choose to assign a particular scale (true size) or coordinate system alignment to the object feature points. By selecting options within the 3D graphical display, scale is set by assigning one or more point-to-point distances, and the coordinate system is set according to the 3-2-1 procedure. Here, three points are highlighted: the first defines the origin of the XYZ coordinate system, the second the X axis, and the third the XY plane and hence the direction of the Z axis (a small sketch of this construction follows the list). These two concepts are illustrated in Figure A5.

Figure A5. 3-image network showing assigned XYZ system, 3D points and scaled distance.

f) Although there are a number of object point analysis features in iwitness, the primary purpose of this photogrammetric processing system is to generate a 3D array of feature point coordinates which can be exported to either a CAD or an alternative modeling system. Thus, the final output from iwitness is an exported DXF and/or ASCII-formatted file (*.TXT) of 3D coordinates.

g) One very important feature worth highlighting at this point is that, as well as accurately producing the desired 3D XYZ object point coordinates, iwitness can precisely calibrate the camera or cameras used, as an integral and again fully automatic component of the overall photogrammetric processing. This is not always a desired feature, and it does require special consideration of camera station geometry, but it is nevertheless a very important component of high-accuracy feature point measurement.
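The 3-2-1 construction in item e) can be made concrete with the following minimal Python sketch (illustrative only, not iwitness code): the first point becomes the origin, the second fixes the X axis, and the third fixes the XY plane and hence the Z direction:

    import numpy as np

    def axes_321(p1, p2, p3):
        # Rows of the returned matrix are the X, Y and Z axes of the
        # assigned frame: p1 is the origin, p2 fixes X, p3 fixes the XY plane.
        x = (p2 - p1) / np.linalg.norm(p2 - p1)
        z = np.cross(x, p3 - p1)
        z = z / np.linalg.norm(z)
        y = np.cross(z, x)                  # completes a right-handed frame
        return np.vstack([x, y, z])

    def to_321_frame(points, p1, p2, p3):
        # Express measured points in the newly assigned coordinate system.
        R = axes_321(p1, p2, p3)
        return [R @ (P - p1) for P in points]

    p1 = np.array([2.0, 1.0, 0.5])          # becomes the origin
    p2 = np.array([5.0, 1.0, 0.5])          # lies on the +X axis
    p3 = np.array([2.0, 4.0, 0.5])          # lies in the XY plane
    print(to_321_frame([p1, p2, p3], p1, p2, p3))
    # -> [0 0 0], [3 0 0], [0 3 0] in the assigned coordinate system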

A5. Summary

The purpose of this brief overview of the operational stages involved in performing image-based 3D measurements with iwitness has been both to introduce the potential user to the basic concepts of image-based 3D measurement and to set the scene for the more detailed account provided in Parts B and C. The reader is now encouraged to work through the tutorial (Part B) to become better acquainted with the many features and options available within iwitness. It is envisaged that once the user has worked through the tutorial, he or she is unlikely to need to call upon this user's manual other than for reminders and for occasional reference to less commonly applied functions, most of which are described in Part C. The ease of use of iwitness is one of its most attractive features.