CS4758: Rovio Augmented Vision Mapping Project


CS4758: Rovio Augmented Vision Mapping Project
Sam Fladung, James Mwaura

Abstract: The goal of this project is to use the Rovio to create a 2D map of its environment using a camera and a fixed laser pointer mounted on the robot. It uses basic visual algorithms to isolate the angular location of the laser dot in the frame, and uses that to determine the distance to the object. A non-precision mounting setup introduces error into the system. To compensate for this, we have applied supervised learning techniques to derive an estimate of the actual position and angle of the laser pointer on the Rovio. These parameters and additional training images are processed using statistical error modeling techniques, with the specific error models selected to match the measured data (Gaussian). Using this information and a localization reference from the Rovio base station, the Rovio builds maps.

I. INTRODUCTION

The Rovio mobile webcam provides only image and localization data. It is a proprietary system that does not allow any additional inputs to be wired in. This precludes the use of traditional range-finding sensors, such as ultrasonic range finders. In addition, when working with a cheap off-the-shelf system like the Rovio, it is desirable not to have to add expensive sensors to it. This creates a need for a way to add a cheap distance sensor without having to connect any new hardware lines.

This project builds a new sensor by mounting a laser pointer on the Rovio. Using its existing camera and the dot projected by the laser pointer, the robot is able to determine the range to an object. This method does not require any additional input channels on the robot, and the technique can be applied to any robotic system with access to a camera.

This paper is organized as follows. In Section II, we discuss related work in the field, and then lay out the basic geometry underlying our system in Section III. Next, in Section IV, we discuss our application of supervised learning to determine the precise geometry of the given system. We then provide an overview of the software architecture as implemented on our robot in Section V and describe the vision system in Section VI. In Section VII, we give a short description of the mapping system, and then discuss our experimental setup and testing results in Section VIII. Next, we give a brief system evaluation discussing appropriate application situations for this system in Section IX. Lastly, we conclude and present suggestions for future improvements to this system and methodology.

II. RELATED WORK

A paper that describes a similar method is Obstacle Detection With Active Laser Triangulation [1]. That paper focuses mainly on obstacle detection using a fixed laser and camera mounted on a wheelchair. We extend this idea by integrating with data from the Rovio's localization system, which produces obstacle location data suitable for use in a map creation system. Another reference on this topic is by Quigley et al. [2], which describes the use of an actuated laser to implement a line scan with a camera. This provides more data points, but at higher complexity than our system, and places greater requirements on the output capacity of the robotic system. Since robots like the Rovio do not provide outputs capable of controlling extra sensors, that technique cannot be directly used here. In addition, we use a laser that projects a dot instead of a line. We experimented with using a line laser, but with the power class of laser we were able to acquire, the line did not provide enough contrast to be visible through the Rovio's camera.

III. EXTRACTING DEPTH FROM A FIXED LASER IN AN IMAGE

Figure 1: Basic Geometry Involved in the Project

Figure 1 illustrates the basic geometry involved in our project. From this geometry, we are able to extract the distance d given the angle θ at which the laser dot appears in the camera, using equation (i). In equation (i), the laser offset l and angle φ are determined by the mounting position of the laser on the robot. These parameters are learned after mounting, since direct measurement is difficult with any great precision.

IV. SUPERVISED LEARNING OF MOUNTING PARAMETERS

In order to learn the specific l and φ for a given robot and mounted laser, a supervised learning algorithm is used. This learning is accomplished using a set of training images taken at known distances to an object. The dataset is created by giving the robot its current distance and recording it together with the angle of the laser dot in the image. This dataset is then processed to find the l and φ that minimize the error. Due to the geometry of the system, the resolution is reduced as the distance increases. As such, we use a fractional error metric, (d_estimated - d_measured) / d_measured, which weights the errors at shorter distances more heavily than those at longer ones.

Due to the number of local minima in this problem, a straight gradient descent gets stuck at non-ideal values. To work around this, we calculated the error over a large range of l and φ and found the global minimum.

To test the learned values, we use a separate set of test images collected in the same way. We calculate d using equation (i) above, compare it with the measured value of d, and calculate the mean error.

V. SOFTWARE SYSTEM ARCHITECTURE

Figure 2: System Architecture

There are three main pathways in the software implementation of this project:
1. Training
2. Testing
3. Application

A. Training

The training code takes input from both the vision processing algorithm and a human trainer. The processing algorithms provide the location of the laser dot in the image, which is run through a Gaussian filter to minimize noise. The human input consists of the measured distance to the object that the laser is hitting. These two are written to a file to be used in training the parameters. Once trained, these parameters are passed to the various apply-learning sections of the code. The training data is run through a batch apply-learning process, which gives the RMS error of each training point and the overall RMS error of the dataset (training error).
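As a concrete illustration of Sections III and IV, the following minimal sketch assumes a standard triangulation form for equation (i), d = l / (tan θ + tan φ), and performs the brute-force grid search over l and φ that replaces gradient descent. The parameter ranges, step counts, and function names are illustrative assumptions, not the values used on the Rovio.

```python
# Sketch only: assumes equation (i) is the standard triangulation form
# d = l / (tan(theta) + tan(phi)), with l the lateral laser offset and
# phi the laser's mounting angle. Ranges below are illustrative.
import numpy as np

def predict_distance(theta, l, phi):
    """Distance to the object given the dot's bearing theta (radians)."""
    return l / (np.tan(theta) + np.tan(phi))

def fit_mounting_params(thetas, distances,
                        l_range=(1.0, 6.0), phi_range=(-0.2, 0.2), steps=200):
    """Brute-force grid search over (l, phi) minimizing RMS fractional error.

    A plain gradient descent tends to stall in local minima on this cost
    surface, so every point of the (l, phi) grid is evaluated instead.
    """
    thetas = np.asarray(thetas, dtype=float)
    distances = np.asarray(distances, dtype=float)
    best = (None, None, np.inf)
    for l in np.linspace(*l_range, steps):
        for phi in np.linspace(*phi_range, steps):
            pred = predict_distance(thetas, l, phi)
            if np.any(pred <= 0):              # skip non-physical geometries
                continue
            frac_err = (pred - distances) / distances  # weights short ranges more
            cost = np.sqrt(np.mean(frac_err ** 2))
            if cost < best[2]:
                best = (l, phi, cost)
    return best  # (l, phi, training RMS fractional error)
```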

B. Testing

The testing code follows a similar path to the training code, taking data from both the visual algorithm and the human input. However, instead of being used to generate new parameters, this dataset is only used to test the previous parameters. The dataset is run through a batch apply-learning process using the parameters generated by the training section of the code. This supplies the RMS error of each point in the testing set as well as the overall RMS error of the testing dataset (testing error).

C. Application

This section of the code uses the learned parameters to provide a stream of distance measurements. Each image is processed using the same frontend as the training system, and each output position of the dot in the image is converted into a distance measurement. These distances are filtered using a Gaussian filter to reduce noise in the sensor. In parallel with the processing of the laser image, the Rovio localization data is recorded and filtered using a Gaussian filter in order to provide a stable position reading. The measured distance from the learning algorithm is stamped with this position as a header. This provides the metadata required to work out where the object was in the global reference frame rather than relative to the Rovio. These annotated distances are then fed into the mapping subsystem.

VI. VISION SYSTEM

In order to isolate the laser dot, the images are run through a set of filters. Since, by the geometry of the system, the dot can only appear on the right side of the image and within a limited range along the y-axis, we construct a mask to remove the rest of the image data. The mask is constructed by creating a rectangle of white over the area we need to keep and setting the rest of the mask image to black. It is ANDed with the input image to remove the unwanted sections, and is applied across all three color channels. The image is then shifted into Hue Saturation Value (HSV) coordinates. A threshold is applied to the hue channel to isolate the areas that are close to red, and the resulting image is ANDed with the value channel to mask off areas that are not red enough. The coordinates of the brightest remaining pixels in the image are obtained and converted into an angle by multiplying the field of view of the camera by the ratio of the pixel position along the x-axis to the width of the image.

Figure 3: Vision System
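The following sketch shows one way the dot-isolation pipeline above could be implemented with OpenCV. The mask rectangle, hue thresholds, and field of view are illustrative assumptions, not values tuned for the Rovio's camera.

```python
# Sketch of the laser-dot isolation described in Section VI (assumed values).
import cv2
import numpy as np

FOV_X = np.radians(60.0)   # assumed horizontal field of view of the camera

def laser_dot_angle(bgr, roi=(320, 200, 300, 120)):
    """Return the bearing (radians) of the laser dot in a BGR frame, or None."""
    x0, y0, w, h = roi
    mask = np.zeros(bgr.shape[:2], dtype=np.uint8)
    mask[y0:y0 + h, x0:x0 + w] = 255                   # white rectangle over where the dot can appear
    masked = cv2.bitwise_and(bgr, bgr, mask=mask)      # ANDed across all three color channels

    hsv = cv2.cvtColor(masked, cv2.COLOR_BGR2HSV)
    hue, _, val = cv2.split(hsv)
    near_red = cv2.inRange(hue, 0, 10) | cv2.inRange(hue, 170, 180)  # red sits at both ends of the hue circle
    brightness = cv2.bitwise_and(val, val, mask=near_red)            # keep brightness only where "red enough"

    _, max_val, _, (px, _) = cv2.minMaxLoc(brightness)               # brightest remaining pixel
    if max_val == 0:
        return None                                    # no plausible dot found
    return FOV_X * (px / bgr.shape[1])                 # angle = FOV * (x position / image width)
```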

VII. MAPPING

The mapping code we implemented uses a grid held within an image. Each time a laser reading is taken, a Gaussian blob is added, centered on where the object is believed to be. In addition, a Gaussian-blurred line is subtracted along the path between the robot's and the object's perceived positions. This decreases the probability of an obstacle along the path that the laser indicates is empty, and increases the probability of an object at the position where the laser sensor indicates there is an obstacle.

Figure 4: Simulation in Stage

Figure 5: Maps created in Simulation

Figure 5 shows a simulation run using the mapping software, adding Gaussian error to the position of the robot and using only the central laser values from the simulated laser sensors. The left image indicates areas that are believed to be empty (negative values). The image on the right shows, in white, where obstacles are believed to be (positive values). The black areas in both images indicate areas of which the robot has no knowledge.
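A minimal sketch of this grid update is shown below, using a float32 grid image and OpenCV for the blurred line and blob. The grid resolution, weights, and blur size are illustrative assumptions, not the values used on the robot.

```python
# Sketch of the occupancy-grid update described above (assumed parameters).
import cv2
import numpy as np

CELLS_PER_INCH = 2.0
ORIGIN = (200, 200)          # grid cell of the robot's starting position

def to_cell(x, y):
    """World coordinates (inches) to grid pixel coordinates."""
    return (int(round(ORIGIN[0] + x * CELLS_PER_INCH)),
            int(round(ORIGIN[1] + y * CELLS_PER_INCH)))

def apply_reading(grid, robot_xy, obstacle_xy,
                  hit_weight=1.0, miss_weight=0.4, blur=11):
    """Update a float32 grid (positive = occupied, negative = empty) for one reading."""
    # Free-space evidence: a line from robot to obstacle, Gaussian-blurred, subtracted.
    free = np.zeros_like(grid)
    cv2.line(free, to_cell(*robot_xy), to_cell(*obstacle_xy), miss_weight)
    free = cv2.GaussianBlur(free, (blur, blur), 0)

    # Obstacle evidence: a Gaussian blob centered on the perceived object position, added.
    hit = np.zeros_like(grid)
    cx, cy = to_cell(*obstacle_xy)
    hit[cy, cx] = hit_weight
    hit = cv2.GaussianBlur(hit, (blur, blur), 0)

    grid += hit - free
    return grid

# Example: one reading from a robot at the origin seeing an obstacle 40 in ahead.
grid = np.zeros((400, 400), dtype=np.float32)
apply_reading(grid, robot_xy=(0.0, 0.0), obstacle_xy=(40.0, 0.0))
```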

VIII. EXPERIMENTS

A. Test Methodology

In order to obtain good reference datasets, the Rovio was rigidly mounted on a square platform to keep it aligned with the coordinate system and to provide a reliable way to accurately position the Rovio for measurements. A tape measure was affixed to the floor, perpendicular to the object, to be used in collecting the data. The Rovio was moved to various points along this axis, and several datapoints were collected at each position. Clear outliers were eliminated from the dataset to avoid unduly affecting the training parameters. Testing images were collected in a similar manner.

In order to test the suitability of the system for mapping, the Rovio was positioned inside an area where most of the walls were within the range of the sensor's reliable operation, i.e. not looking down a long corridor. The Rovio base station was set up in this area in order to provide accurate localization. The Rovio was then driven using an open-loop controller to slowly rotate in circles in discrete steps, allowing it to see the full 360° around itself. While doing this, the Rovio continuously took range readings from the laser sensor. The laser scans were accumulated in RViz in order to provide a basic drawing of the environment around the Rovio.

B. Test Results

Figure 6: Training Error (RMS and fractional error)

The average training error was found to be 0.2657 in RMS (0.625%). As can be seen from the graph, the bulk of this error is located at the larger distance measurements, as expected when using the fractional error metric.

Figure 7: Testing Error (RMS and fractional error)

Figure 8: Testing Error for distances less than 30 in

The testing error was found to be 2.4956 in RMS (2.72%). Most of this error is again at the longer distances. The RMS error for distances less than 30 in decreases to a minuscule 0.115 in (0.715%). This implies that this algorithm and setup work well for measuring moderate distances, but lose accuracy as the distance increases.

The test results can be seen in the screenshot of RViz in Figure 9.

Figure 9: Map created in tests

Figure 10: Location where map was taken

Figure 11: A sketch of the floor plan of the location

The image outline shows the three walls of an alcove and the opposite side of a hallway. Some erroneous points can be seen where the laser was not correctly identified at the longer distances due to adverse lighting conditions; this resulted in the Rovio measuring bad values, usually near zero. Another interesting feature is the bow that appears in the back wall of the alcove. This occurs because the distance measured is to the laser dot, which is not in line with the camera, and this transform was not taken into account in the code. The artifact can be corrected by applying an additional shift in the angle by the parameter φ, as this is the direction the laser pointer is aimed relative to the Rovio's camera.

IX. SYSTEM EVALUATION

This system has a number of features that make it desirable for a certain subset of robotic work. Laser pointers are cheap, and the mounting does not require any great precision. This allows the sensor to be quickly installed on a system without any specialized skills. The method does not require any direct access to the system hardware and can be added to any robot that has a camera, which is particularly useful when working with proprietary platforms not originally intended for robotic work.

While this system boasts a number of advantages, it is not without its limitations. Since the laser dot and camera are not on the same axis, it is possible for objects to occlude the laser dot from the camera. In addition, due to the nonlinearity of the geometry of the setup, there is a tradeoff between the minimum distance that can be measured and the accuracy at distance. This tradeoff is controlled by varying the angle of the laser: the larger φ is, the higher the accuracy at range, but the greater the minimum measurable distance. Also, as the distance increases, the area represented by a pixel grows faster than the laser disperses, which makes it hard to pick up the laser dot in the image. The technique also has difficulty with objects that are extremely reflective (shiny metals) or light absorbent (matte black), because these materials do not effectively radiate the laser light back toward the camera.

X. CONCLUSION

In conclusion, we have demonstrated a method for finding distances using a single camera and a fixed laser. We have shown that a supervised learning technique eliminates the need for precise mounting and alignment of the laser. The system can be applied to any other system with a camera, with the laser glued, stapled, taped, or otherwise affixed to it. We have also demonstrated that this type of distance measuring technique can be used to create basic maps of an environment.

There are a few areas that future research into this type of system could explore. One obvious way to improve performance would be to use a more powerful laser or a laser of a different color, which would improve contrast and increase range. A more powerful laser would also enable the projection of different shapes, such as a line, which would allow mapping in three dimensions instead of two. The vision algorithms could also be improved to better isolate the laser from the surrounding environment, allowing the method to work with a greater variety of textures and at greater distances. Another possible improvement would be to use Kalman filters.
A Kalman filter would allow greater movement of the Rovio while mapping, as opposed to the current Gaussian filters. With a Kalman filter in place, it would also make sense to create a SLAM system, removing the need for a base station. In this project, we demonstrated the ability to obtain distances and create maps using only the data from the laser. An extension would be to use the rest of the image data from the camera to augment the map.

XI. ACKNOWLEDGEMENTS

We would like to thank Professor Saxena for providing feedback on our project and for suggesting alternative error metrics for our supervised learning implementation. We would also like to thank the fine folks at WowWee for developing the robotic platform we used, and the developers at ROS.org for providing a framework for inter-process communication and for linking together the various robotic packages.

XII. ADDITIONAL MEDIA

A video showing the project in operation was also submitted as part of the poster session.

XIII. REFERENCES

1. Klancnik, S., Balic, J., and Planinsic, P. Obstacle Detection With Active Laser Triangulation. Advances in Production Engineering & Management, 2007, pp. 79-90.
2. Quigley, M., et al. High-Accuracy 3D Sensing for Mobile Manipulation: Improving Object Detection and Door Opening. International Conference on Robotics and Automation, 2009, pp. 3604-3610.