QUASI-3D SCANNING WITH LASERSCANNERS
V. Willhoeft, K. Ch. Fuerstenberg, IBEO Automobile Sensor GmbH

INTRODUCTION: FROM 2D TO 3D

Laserscanners are laser-based range-finding devices that create a range image of the environment. Unlike their smaller brothers, the fixed-beam lasers, they do not measure in a single direction (1d) but create a range profile of the surroundings. Their performance characteristics, such as scan angle, angular resolution, range and accuracy, vary greatly from scanner to scanner.

The Laserscanners available today which are suitable for automotive use are typically 2d scanners: they have only one scan plane in which they detect obstacles. Although this is sufficient for a range of applications, a 3d view with multiple scan planes will enhance most applications and add extra safety to the results of the sensors. The information from extra scan planes becomes especially valuable if the vehicle on which the Laserscanner is mounted is pitching. This pitch movement can be caused by bumpy roads, but varying loads also change the overall pitch angle. If only some additional information is measured without actually creating complete new profiles, the term 2.5d is used throughout this document. An example of a simple 2.5d extension is shown below in Fig. 1.

Fig. 1: Configuration of 2d (left) and 2.5d (right) Laserscanner, each in top and side view

FUNCTION PRINCIPLE OF A LASERSCANNER

LD A AF: A 2D SCANNER

The Laserscanner described here is from IBEO's Ladar Digital A AF series, a Laserscanner adapted for automotive use. It is a 2d Laserscanner with a single scan plane. The LD A AF uses a time-of-flight measurement principle to measure the distance to a target. An infrared laser light pulse is emitted by a laser diode and sent towards the target. The reflected beam is then picked up by a photo diode.
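The time-of-flight principle can be sketched numerically. This is an illustrative calculation, not IBEO's firmware; it only uses the physical relation between range and round-trip time:

```python
C = 299_792_458.0  # speed of light in m/s

def round_trip_time(distance_m: float) -> float:
    """Time for a laser pulse to travel to the target and back."""
    return 2.0 * distance_m / C

def distance_from_time(t_s: float) -> float:
    """Invert the measurement: half the round-trip time times c."""
    return t_s * C / 2.0

# A target at 100 m (the typical range quoted in the text) returns its
# echo after roughly 667 ns.
t = round_trip_time(100.0)
d = distance_from_time(t)
```

Resolving distances to a few millimetres therefore requires timing the echo with picosecond-scale resolution, which is why the scanner uses a dedicated high-speed timer.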
The time between the emission of the pulse and the reception of the echo signal is measured by a high-speed timer with a resolution of 1/256 m (this equals approx. 10 ps) and an accuracy of ±5 cm (1σ). By rotating its head with a deflection mirror, the measurements form a scan plane with an opening angle of up to 270°.

Fig. 2: LD A AF (3P version with mirrors)

The angular resolution
is 0.25°, so that the resulting scan consists of 1080 individual scanpoints. The range of the Laserscanner depends highly on the reflectivity of the target surface, but is typically 100 m on natural, non-reflective targets while remaining eye-safe with laser class 1. The LD A AF Laserscanners contain a TMS320C32 DSP. This computer runs sensor-internal signal processing algorithms such as the object tracking described later.

2.5D EXTENSION

In order to create additional scan planes, a simple mirror construction may be added to the LD A AF. This extension is called "2.5d" because it does create more than a 2d view, but is not quite what is commonly referred to as "3d". Two products have evolved from the LD A AF, both using two mirrors behind the scanner head to deflect a part of the main scan plane to a "region of interest". The principle of this method is shown on the right side of Fig. 1.

The first implementation, the LD A AF (FrontMirror), is shown in Fig. 3. Although its mirrors seem to be transparent, they act as mirrors for the infrared light. This scanner is configured for use in the middle of the vehicle front, so it uses only up to 180° of its 270° scan plane. The rest of the scan area forms two 45° sections at both ends of the scan area. From these areas, approx. 25° of each section are deflected to the front. Since the mirrors are slightly tilted (Fig. 1, right side), these two sections form two additional scan planes, one above and one below the main scan plane.

Fig. 3: LD A AF (FrontMirror version)

3D LASERSCANNER: THE MOTIV

In the course of the MOTIV project (see [hipp00]), IBEO has developed the prototype of a 3d Laserscanner. It had a fixed lower part for the mounting of the scanner, and a rotating head containing the laser sender, the receiver APD array and the measurement hardware. This hardware was built using the components of the LD A AF, one for each channel.
The measured data was sent from the rotating to the fixed part using IrDA infrared transceivers. Like the LD A AFs, the MOTIV has a TMS320C32 DSP for internal signal processing. The laser beam is shaped like a vertical standing line. The reflected echo is picked up by the receive lens and focused on the 25-point APD array. To obtain four independent scan planes, 24 of these points are used, grouped as 4 x 6 receive diodes. The four channels are picked up by four individual measurement units from the LD Laserscanner, mounted in the scanner head. A special transceiver board then reads the four different profiles, assembles one combined scan and transmits it to the transceiver board in the fixed base of the scanner. The scan is then read by the DSP, which runs the object tracking algorithm.

Fig. 4: The MOTIV Laserscanner

The MOTIV sends object data like the LD A AF, but has
some extra information available due to its four scan planes. It has a 175° field of vision with an angular resolution of 0.35°, resulting in 500 measurements per scan plane. Like the LD scanners, the MOTIV has a scan frequency of 10 Hz. An image of the resulting 4-profile scan is shown in the next section.

SCAN DATA

The scan data of the Laserscanners is shown from a bird's-eye perspective. Fig. 5 (left) shows a scan from a 2d Laserscanner. The scanner covers 270°, starting at 180° (straight downward) and scanning clockwise to -90° (to the right). Each red dot is a measured distance. One can easily see the typical clusters of raw data which form the outline of objects seen by the scanner.

Fig. 5: Scan data of the LD A AF (2d, left side) and the FrontMirror version (2.5d, right side)

This is the most common configuration and can be used for a range of applications around automated vehicle operation and driver assistance, such as collision avoidance or turn-off and lane-change assists. It can also be used to monitor areas and survey the objects moving there. However, the range of the scanner is effectively limited if the vehicle is pitching. In this case, the scan plane often either points too high upward or down into the ground, limiting the range to approx. m around the scanner. Those pitch movements can be caused by rough terrain, speed bumps, load changes etc.

To compensate for those pitch movements, the FrontMirror sensor has two additional scan planes. The scan of this sensor (Fig. 5, right side) is shown in the same perspective, but here all three scan planes are overlaid into one image. The main scan line is shown in blue, while the two extra scan lines are shown in red (upper) and green (lower). In this figure, the vehicle onto which the Laserscanner is mounted is pitching violently, causing both the lower and the main scan line to hit the ground 10 m in front of the scanner.
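The bird's-eye view is obtained by converting each polar measurement (beam angle, range) into Cartesian coordinates. A minimal sketch, assuming the angle convention described in the text (0° straight ahead, scanning clockwise from +180° to -90° in 0.25° steps); the function names are illustrative, not IBEO's API:

```python
import math

ANGULAR_RES_DEG = 0.25    # LD A AF angular resolution
START_ANGLE_DEG = 180.0   # first beam of the 270 deg scan
NUM_POINTS = 1080         # 270 deg / 0.25 deg

def beam_angle_deg(index: int) -> float:
    """Angle of the i-th beam, scanning clockwise from +180 deg to -90 deg."""
    return START_ANGLE_DEG - index * ANGULAR_RES_DEG

def to_cartesian(index: int, range_m: float) -> tuple[float, float]:
    """Convert a (beam index, range) measurement to x/y in the scan plane.
    x points forward, y to the left (0 deg = straight ahead)."""
    a = math.radians(beam_angle_deg(index))
    return (range_m * math.cos(a), range_m * math.sin(a))

# The middle beam (index 720) points straight ahead; a 10 m echo there
# maps to the point (10, 0) in front of the scanner.
x, y = to_cartesian(720, 10.0)
```

The clusters of red dots in Fig. 5 are simply the Cartesian images of consecutive beams that hit the same object.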
Although the upper scan line does not cover a very wide opening angle, it still sees road markings along the road and an obstacle blocking the road 50 m in front of the vehicle. This information can be vital to continue driving safely.

The LD A 3P with its bigger mirrors is the extension of the FrontMirror sensor. It has two much wider extra scan planes, each covering approx. 40°. The 3P can be used in the same way as the FrontMirror sensor, to compensate for pitch movements. In another application, both mirrors point downward, so that both extra scan planes monitor the ground in front of the Laserscanner. This configuration is shown in Fig. 6 (upper right). On even ground, the measurements of the extra scan lines form an X in front of the sensor, as shown in the same image. On uneven ground, the straight lines are bent toward the sensor or away from it, depending on the
geometry of the ground. Fig. 6 (left) shows a scan from a LD A 3P which is standing just right of a steep ramp. The main scan line sees the ramp to the left of the scanner, but cannot decide whether this is a ramp or a wall. The first extra scan line (red) forms a sharp turn in front of the scanner. This means a steeply ascending slope on the left side. The second extra scan line (green) shows an almost even line along the slope but slightly elevated, confirming that it is a ramp and not a wall. From the ranges of the extra scan lines, together with the known range values from the calibration on even ground, the elevation of the single scanpoints can be calculated.

Fig. 6: Scan data of a LD A 3P (left), the system configuration in top and side view (upper right), and the height plot of the extra scan lines (lower right)

Fig. 6 (lower right) shows a height plot of the scanpoints of the extra scan lines. The sensor is mounted where the small vehicle is drawn (not to scale!); each gray scale step is 0.1 m in height and 0.5 m in length, respectively. The steepness of the slope can now be calculated easily from the height gradient of this scan data. This configuration allows the navigation of the vehicle on rough terrain, monitoring the steepness of ascending or descending slopes in front of the vehicle. Also, small obstacles in the vehicle's path, such as curbstones or debris, can easily be detected. However, due to the high pitch angle and the thereby limited range of the extra scan lines, the vehicle must be comparatively slow in order to allow the sensor to detect all obstacles.

MOTIV SCAN DATA

The MOTIV has a configuration like a standard 2d Laserscanner, but it covers a vertical opening angle of approx. 4° with its four scan planes. A sample scene is shown in Fig. 7.

Fig. 7: Scan data from the MOTIV sensor. The colors are (from lower to upper scan plane): white, green, blue, red

In
this image, the different planes are displayed in different colors: the lowest one is white, the next is green, then blue, and the upper one is shown in red. The sensor is at the lower middle of the image. A vehicle is coming from above, passing us (middle of the image). The lower two scan lines are on the vehicle front, while the upper two scan lines hit the windscreen above. All four scan lines measure along the side of the vehicle. The slope on the left (upper left of the image) is the entryway to IBEO, Hamburg. It is slightly ascending from the street. The lowest scan line hits the ground first, then the next one 2-3 m further, and the upper two scan lines measure into the bushes beyond.

OBJECT TRACKING

All of IBEO's automotive Laserscanners contain a DSP for sensor-internal signal processing. This DSP runs a complete object detection and tracking algorithm. This algorithm splits the scan data into objects and tracks those objects through subsequent scans. The result is, instead of the raw scan data, a set of object data with information like position, size, outline and velocity. The object data is sent to the host system on a standard CAN bus. This frees the host computer from the task of isolating the relevant information from the huge amount of scan data that the Laserscanners produce.

An overview of the standard (2d) tracking algorithm is shown in Fig. 8. After receiving the scan data, it is split into segments. Segments are clusters of raw data that are believed to belong to one object. Then the characteristics of each segment are calculated, such as the position, size, and number of scanpoints. At this stage of the algorithm, all those characteristics are purely static values. In parallel, the prediction of the object movement is calculated using the output of the Kalman filter. All objects are extrapolated one step into the future to predict their position in the current scan.
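The segmentation step above - clustering the raw scanpoints into segments - can be sketched with a simple distance-gap criterion. The threshold is an illustrative assumption; the text does not specify the criterion used in the IBEO DSP:

```python
import math

def segment_scan(points, max_gap_m=0.5):
    """Split an ordered list of (x, y) scanpoints into segments.
    Consecutive points closer than max_gap_m are assumed to belong
    to the same object (illustrative threshold)."""
    segments, current = [], []
    for p in points:
        if current and math.dist(p, current[-1]) > max_gap_m:
            segments.append(current)  # gap found: close the current cluster
            current = []
        current.append(p)
    if current:
        segments.append(current)
    return segments

# Two clusters of raw data separated by a large gap yield two segments;
# per-segment characteristics such as position and size follow directly.
scan = [(10.0, 0.0), (10.0, 0.1), (10.0, 0.2), (20.0, 5.0), (20.0, 5.1)]
segments = segment_scan(scan)
```

Because the scanpoints arrive ordered by beam angle, a single pass over the scan is enough to find all clusters.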
Then, the segments of the current scan are matched with the predicted objects, and the best matches are assigned. More than one segment may be assigned to one object, because parts of the object may be blocked from view by some smaller object in the foreground. Finally, the object properties (position, size, velocity, uncertainties) are updated using the precisely measured position of the assigned segment. This is done by updating the state vector of the object and running this vector through a Kalman filter. Unassigned segments are stored as new objects with default properties. After the object detection and tracking is complete for the scan, the objects are sent to the host computer on a CAN bus. The information for each object consists at least of a set of points on the object outline, including the leftmost, rightmost and closest points, a velocity, and uncertainties for all values. All information is given in both x- and y-direction.

Fig. 8: Overview of the object tracking algorithm (receive scan, segment scan, segment characteristics, object/segment assignment, object filtering (Kalman filter), object prediction)

2.5D EXTENSION OF THE OBJECT TRACKING

Using a 2.5d Laserscanner, additional information becomes available that can be used in the object tracking. To use this information, extensions have to be made to the tracking itself. A typical disadvantage of the 2d tracking is that it is impossible to distinguish between ground (or a shallow
slope) and an obstacle such as a wall. This means that a measurement into the ground - e.g. due to vehicle pitching - or at a wall looks the same to the Laserscanner, and both are tracked and reported as obstacles blocking the path. Using the data from the lower scan plane with its known configuration, a slope or ground can be plausibly detected and removed from the object list (see Fig. 5, right side). In this case, because the main scan line hits the ground or a ramp, objects that have been tracked in the main scan plane are lost from view, so the data from the upper scan plane becomes valuable and is integrated into the object tracking. In Fig. 5, although both the lower and the main scan planes hit the ground, the upper scan plane sees an object 50 m in front of the vehicle. If the vehicle is pitching upwards, the same process applies to the lower scan plane, which is then used for the object tracking.

In the configuration of the LD A 3P, where both extra scan planes are tilted downwards (see Fig. 6), information is generated about the near field only. In order to be effective, the two scan planes must hit the ground quite close to the scanner, typically 2-5 m in front of the sensor. At this distance, the scanpoints are close enough to allow a precise calculation of what is seen on the ground. Both the steepness of ascending or descending slopes and obstacles such as curbstones can be detected with the data from the extra scan planes.

Fig. 9: Curbstone, seen by the right extra scan plane

Fig. 9 shows the data of the two scan planes on the ground. The left scan plane (red) shows even ground and no obstacles, but the beginning curbstone can easily be detected from the z-shaped right scan plane. Also, the curbstone height (10 cm) can be calculated by comparing the measured distances with the expected values for even ground.
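The height calculation - comparing measured ranges against the values expected for even ground - can be sketched as follows. Mounting height and tilt angle are illustrative assumptions, not values from the text:

```python
import math

def expected_ground_range(mount_height_m: float, tilt_deg: float) -> float:
    """Range at which a downward-tilted beam hits even ground."""
    return mount_height_m / math.sin(math.radians(tilt_deg))

def point_elevation(range_m: float, mount_height_m: float, tilt_deg: float) -> float:
    """Height of a scanpoint above the even-ground plane.
    A shorter-than-expected range means the beam hit something raised."""
    return mount_height_m - range_m * math.sin(math.radians(tilt_deg))

# Illustrative setup: sensor 0.5 m above ground, beam tilted 10 deg down.
r0 = expected_ground_range(0.5, 10.0)                 # range on even ground
r_curb = (0.5 - 0.1) / math.sin(math.radians(10.0))   # beam hits a 10 cm curb earlier
h = point_elevation(r_curb, 0.5, 10.0)                # recovers the curb height
```

The same relation, applied point by point along the extra scan lines, yields the height plot of Fig. 6 and the slope steepness from its gradient.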
Using this configuration makes it possible to manoeuvre a vehicle in rough terrain while watching the ground for traps such as slopes or obstacles.

3D EXTENSION OF THE OBJECT TRACKING

Using a 3d Laserscanner such as the MOTIV requires a new design of the object detection and tracking algorithm. Although basic strategies such as the object assignment and the Kalman filter can be reused, the information from multiple scan planes offers a whole set of new information. Therefore, the segmentation process which clusters the raw data into separate segments must not only consider neighbors in its own scan plane, but in the other planes as well, effectively making it work in 3d space. The resulting segments do not only have size information in width and length, but also in height. Because the pitch angle of the vehicle is not known to the Laserscanner, this is relative height information. However, the height will enhance a classification of the objects, if desired.

Due to the fact that at least one scan plane is pointing downwards, ground is almost always seen by one scan plane. This means that an effective ground removal algorithm has to be used with a multi-plane Laserscanner. On the MOTIV, this was realised using a heuristic of the characteristic ground shape (appearance in the scan data) and position, taking into account the neighborhood points.

CONCLUSION AND OUTLOOK

Classic single-plane Laserscanners are currently used in a range of applications around either automatic vehicles or driver assistance functions. Their big field of vision and excellent measurement accuracy make them well suited for precise measurement tasks. However, adding some extra scan planes by deflecting parts of the single scan plane can greatly increase the flexibility of the scanner, either adding extra safety to the object tracking or allowing the operation
under extreme conditions. However, these modifications are only the first step on the way to true multi-line Laserscanners. The MOTIV prototype has proven the capabilities of such a Laserscanner. Currently, the next generation of multi-line Laserscanners is under development in several projects supported by the European Commission, some of which will be presented at this conference.

REFERENCES

[hipp00] Hipp, E.; Reichart, G.: "MOTIV - Fahrerassistenzsysteme". In: Mobilitätsforschung für das 21. Jahrhundert. TÜV Verlag, Köln 2000. Final presentation.

[reich97] Reichart, G.: "MOTIV - A cooperative Research Programme for the Mobility in Urban Areas". Proceedings of ITS '97: 4th World Congress on Intelligent Transport Systems, ICC Berlin.

[am00] Ameling, C.; Kirchner, A.: "The emergency braking module for an electronic copilot - design and first results". 9th IFAC Symposium 2000, Braunschweig.

[ae00] Ewald, A.; Willhoeft, V.: "Object Detection with Laserscanners for Automotive Applications". 9th IFAC Symposium 2000, Braunschweig.

[vwi00] Willhoeft, V.: "Laserscanners for Automotive Applications in the AF Project". ITS2000, Torino.
More informationTrimble Engineering & Construction Group, 5475 Kellenburger Road, Dayton, OH , USA
Trimble VISION Ken Joyce Martin Koehler Michael Vogel Trimble Engineering and Construction Group Westminster, Colorado, USA April 2012 Trimble Engineering & Construction Group, 5475 Kellenburger Road,
More informationAn actor-critic reinforcement learning controller for a 2-DOF ball-balancer
An actor-critic reinforcement learning controller for a 2-DOF ball-balancer Andreas Stückl Michael Meyer Sebastian Schelle Projektpraktikum: Computational Neuro Engineering 2 Empty side for praktikums
More informationNew! 3D Smart Sensor your assistant on mobile machines. ifm.com/us/mobile
New! 3D Smart Sensor your assistant on mobile machines ifm.com/us/mobile 1 See objects with ifm s 3D Smart Sensor Your assistant on mobile machines Obstacle detection challenges are amplified on large
More informationCE30-A Solid State Array LiDAR Specification
CE30-A Solid State Array LiDAR Specification Table of Contents 1. Product Overview... 2 2. Principle of Ranging... 3 3. Description of Obstacle Avoidance... 5 3.1. Obstacle Avoidance Mode... 5 3.2. Setting
More informationGEOMETRIC OPTICS. LENSES refract light, so we need to know how light bends when entering and exiting a lens and how that interaction forms an image.
I. What is GEOMTERIC OPTICS GEOMETRIC OPTICS In geometric optics, LIGHT is treated as imaginary rays. How these rays interact with at the interface of different media, including lenses and mirrors, is
More informationChapter 26 Geometrical Optics
Chapter 26 Geometrical Optics 26.1 The Reflection of Light 26.2 Forming Images With a Plane Mirror 26.3 Spherical Mirrors 26.4 Ray Tracing and the Mirror Equation 26.5 The Refraction of Light 26.6 Ray
More informationLecture 19: Depth Cameras. Visual Computing Systems CMU , Fall 2013
Lecture 19: Depth Cameras Visual Computing Systems Continuing theme: computational photography Cameras capture light, then extensive processing produces the desired image Today: - Capturing scene depth
More informationPhysics 101, Lab 1: LINEAR KINEMATICS PREDICTION SHEET
Physics 101, Lab 1: LINEAR KINEMATICS PREDICTION SHEET After reading through the Introduction, Purpose and Principles sections of the lab manual (and skimming through the procedures), answer the following
More informationLAS extrabytes implementation in RIEGL software WHITEPAPER
in RIEGL software WHITEPAPER _ Author: RIEGL Laser Measurement Systems GmbH Date: May 25, 2012 Status: Release Pages: 13 All rights are reserved in the event of the grant or the registration of a utility
More informationControl Pad and Touch Unit Installation Guide
Control Pad and Touch Unit Installation Guide About This Installation Guide This guide describes how to install the Control Pad and Touch Unit (BrightLink Pro 1430Wi) when using the ELPMB28 wall mount
More informationAN APPROACH TO DEVELOPING A REFERENCE PROFILER
AN APPROACH TO DEVELOPING A REFERENCE PROFILER John B. Ferris TREY Associate SMITH Professor Graduate Mechanical Research Engineering Assistant Virginia Tech RPUG October Meeting 08 October 28, 2008 Overview
More informationMiniaturized Camera Systems for Microfactories
Miniaturized Camera Systems for Microfactories Timo Prusi, Petri Rokka, and Reijo Tuokko Tampere University of Technology, Department of Production Engineering, Korkeakoulunkatu 6, 33720 Tampere, Finland
More informationLight Detection and Ranging (LiDAR)
Light Detection and Ranging (LiDAR) http://code.google.com/creative/radiohead/ Types of aerial sensors passive active 1 Active sensors for mapping terrain Radar transmits microwaves in pulses determines
More informationSF30 SF30. The SF30 is a high speed, light weight laser rangefinder for mapping and obstacle detection by robotic vehicles such as UAVs.
The is a high speed, light weight laser rangefinder for mapping and obstacle detection by robotic vehicles such as UAVs. The can take up to 36633 readings per second and can be incorporated into scanning
More informationOverview of Active Vision Techniques
SIGGRAPH 99 Course on 3D Photography Overview of Active Vision Techniques Brian Curless University of Washington Overview Introduction Active vision techniques Imaging radar Triangulation Moire Active
More information1. Motivation 2. Nanopositioning and Nanomeasuring Machine 3. Multi-Sensor Approach 4. Conclusion and Outlook
Prospects of multi-sensor technology for large-area applications in micro- and nanometrology 08/21/2011-08/25/2011, National Harbor E. Manske 1, G. Jäger 1, T. Hausotte 2 1 Ilmenau University of Technology,
More informationFuzzy Estimation and Segmentation for Laser Range Scans
2th International Conference on Information Fusion Seattle, WA, USA, July 6-9, 2009 Fuzzy Estimation and Segmentation for Laser Range Scans Stephan Reuter, Klaus C. J. Dietmayer Institute of Measurement,
More informationToF Camera for high resolution 3D images with affordable pricing
ToF Camera for high resolution 3D images with affordable pricing Basler AG Jana Bartels, Product Manager 3D Agenda Coming next I. Basler AG II. 3D Purpose and Time-of-Flight - Working Principle III. Advantages
More informationLeica s Pinpoint EDM Technology with Modified Signal Processing and Novel Optomechanical Features
Leica s Pinpoint EDM Technology with Modified Signal Processing and Novel Optomechanical Features Fadi A Bayoud Ph.D. Geomatics Engineering Contents Leica s Total Stations and Telescope ATR PS Leica s
More informationReplacing expensive machine vision systems with low-cost 3D Time of Flight sensors in mobile robotics
White paper PULUWHITE001 Replacing expensive machine vision systems with low-cost 3D Time of Flight sensors in mobile robotics Antti Alhonen, Pulu Robotics Oy, 2018 www.pulurobotics.fi Definitions Autonomous
More informationRelating Local Vision Measurements to Global Navigation Satellite Systems Using Waypoint Based Maps
Relating Local Vision Measurements to Global Navigation Satellite Systems Using Waypoint Based Maps John W. Allen Samuel Gin College of Engineering GPS and Vehicle Dynamics Lab Auburn University Auburn,
More informationTHE ML/MD SYSTEM .'*.,
INTRODUCTION A vehicle intended for use on the planet Mars, and therefore capable of autonomous operation over very rough terrain, has been under development at Rensselaer Polytechnic Institute since 1967.
More informationAirborne LiDAR Data Acquisition for Forestry Applications. Mischa Hey WSI (Corvallis, OR)
Airborne LiDAR Data Acquisition for Forestry Applications Mischa Hey WSI (Corvallis, OR) WSI Services Corvallis, OR Airborne Mapping: Light Detection and Ranging (LiDAR) Thermal Infrared Imagery 4-Band
More informationFOOTPRINTS EXTRACTION
Building Footprints Extraction of Dense Residential Areas from LiDAR data KyoHyouk Kim and Jie Shan Purdue University School of Civil Engineering 550 Stadium Mall Drive West Lafayette, IN 47907, USA {kim458,
More informationOutline of Presentation. Introduction to Overwatch Geospatial Software Feature Analyst and LIDAR Analyst Software
Outline of Presentation Automated Feature Extraction from Terrestrial and Airborne LIDAR Presented By: Stuart Blundell Overwatch Geospatial - VLS Ops Co-Author: David W. Opitz Overwatch Geospatial - VLS
More informationSimulation study for the EUDET pixel beam telescope
EUDET Simulation study for the EUDET pixel beam telescope using ILC software T. Klimkovich January, 7 Abstract A pixel beam telescope which is currently under development within the EUDET collaboration
More informationRange Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation
Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical
More informationMeasurements using three-dimensional product imaging
ARCHIVES of FOUNDRY ENGINEERING Published quarterly as the organ of the Foundry Commission of the Polish Academy of Sciences ISSN (1897-3310) Volume 10 Special Issue 3/2010 41 46 7/3 Measurements using
More information3D LIDAR Point Cloud based Intersection Recognition for Autonomous Driving
3D LIDAR Point Cloud based Intersection Recognition for Autonomous Driving Quanwen Zhu, Long Chen, Qingquan Li, Ming Li, Andreas Nüchter and Jian Wang Abstract Finding road intersections in advance is
More informationAdvanced Driver Assistance: Modular Image Sensor Concept
Vision Advanced Driver Assistance: Modular Image Sensor Concept Supplying value. Integrated Passive and Active Safety Systems Active Safety Passive Safety Scope Reduction of accident probability Get ready
More informationPlatform Games Drawing Sprites & Detecting Collisions
Platform Games Drawing Sprites & Detecting Collisions Computer Games Development David Cairns Contents Drawing Sprites Collision Detection Animation Loop Introduction 1 Background Image - Parallax Scrolling
More informationSolid-State Hybrid LiDAR for Autonomous Driving Product Description
Solid-State Hybrid LiDAR for Autonomous Driving Product Description What is LiDAR Sensor Who is Using LiDARs How does LiDAR Work Hesai LiDAR Demo Features Terminologies Specifications What is LiDAR A LiDAR
More informationExam in DD2426 Robotics and Autonomous Systems
Exam in DD2426 Robotics and Autonomous Systems Lecturer: Patric Jensfelt KTH, March 16, 2010, 9-12 No aids are allowed on the exam, i.e. no notes, no books, no calculators, etc. You need a minimum of 20
More informationT-SCAN 3 3D DIGITIZING
T-SCAN 3 3D DIGITIZING 2 T-SCAN 3: THE HANDHELD LASER SCANNER Launching the innovative concept of an intuitive-to-use high-precision laser scanner a few years ago, Steinbichler Optotechnik, as the first
More informationTHE RANGER-UAV FEATURES
THE RANGER-UAV The Ranger Series Ranger-UAV is designed for the most demanding mapping applications, no compromises made. With a 9 meter laser range, this system produces photorealistic 3D point clouds
More informationCeilbot vision and mapping system
Ceilbot vision and mapping system Provide depth and camera data from the robot's environment Keep a map of the environment based on the received data Keep track of the robot's location on the map Recognize
More informationUniversity of Technology Building & Construction Department / Remote Sensing & GIS lecture
5. Corrections 5.1 Introduction 5.2 Radiometric Correction 5.3 Geometric corrections 5.3.1 Systematic distortions 5.3.2 Nonsystematic distortions 5.4 Image Rectification 5.5 Ground Control Points (GCPs)
More informationLIGHT STRIPE PROJECTION-BASED PEDESTRIAN DETECTION DURING AUTOMATIC PARKING OPERATION
F2008-08-099 LIGHT STRIPE PROJECTION-BASED PEDESTRIAN DETECTION DURING AUTOMATIC PARKING OPERATION 1 Jung, Ho Gi*, 1 Kim, Dong Suk, 1 Kang, Hyoung Jin, 2 Kim, Jaihie 1 MANDO Corporation, Republic of Korea,
More informationDEPTH AND GEOMETRY FROM A SINGLE 2D IMAGE USING TRIANGULATION
2012 IEEE International Conference on Multimedia and Expo Workshops DEPTH AND GEOMETRY FROM A SINGLE 2D IMAGE USING TRIANGULATION Yasir Salih and Aamir S. Malik, Senior Member IEEE Centre for Intelligent
More informationHigh Altitude Balloon Localization from Photographs
High Altitude Balloon Localization from Photographs Paul Norman and Daniel Bowman Bovine Aerospace August 27, 2013 Introduction On December 24, 2011, we launched a high altitude balloon equipped with a
More informationCamera Calibration for Video See-Through Head-Mounted Display. Abstract. 1.0 Introduction. Mike Bajura July 7, 1993
Camera Calibration for Video See-Through Head-Mounted Display Mike Bajura July 7, 1993 Abstract This report describes a method for computing the parameters needed to model a television camera for video
More informationA Vision System for Automatic State Determination of Grid Based Board Games
A Vision System for Automatic State Determination of Grid Based Board Games Michael Bryson Computer Science and Engineering, University of South Carolina, 29208 Abstract. Numerous programs have been written
More informationIdentification of process phenomena in DMLS by optical inprocess
Lasers in Manufacturing Conference 2015 Identification of process phenomena in DMLS by optical inprocess monitoring R. Domröse a, *, T. Grünberger b a EOS GmbH Electro Optical Systems, Robert-Stirling-Ring
More informationInstructions. Remote Controlled Rotator RCR
LIMITED 1 YEAR WARRANTY Zircon Corporation, ("Zircon") warrants this product to be free from defects in materials and workmanship for one year from the date of purchase. Any in-warranty defective product
More information