DETC2011-47735: SLAM USING 3D RECONSTRUCTION VIA A VISUAL RGB & RGB-D SENSORY INPUT


Proceedings of the ASME 2011 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference
IDETC/CIE 2011, August 28-31, 2011, Washington, DC, USA

DETC2011-47735

SLAM USING 3D RECONSTRUCTION VIA A VISUAL RGB & RGB-D SENSORY INPUT

Helge A. Wurdemann* (PhD Candidate, King's College London, London, United Kingdom), Lei Cui (Postdoctoral Fellow, King's College London, London, United Kingdom), Evangelos Georgiou* (PhD Candidate, King's College London, London, United Kingdom), Jian S. Dai (Professor of Mechanisms and Robotics, King's College London, London, United Kingdom)

*Joint first authors with equal contribution to this paper.

ABSTRACT
This paper investigates the simultaneous localization and mapping (SLAM) problem by exploiting the Microsoft Kinect sensor array and an autonomous mobile robot capable of self-localization. In combination, they cover the major features of SLAM: mapping, sensing, locating, and modeling. The Kinect sensor array provides a dual camera output of RGB, using a CMOS camera, and RGB-D, using a depth camera. The sensors are mounted on the KCLBOT, an autonomous nonholonomic two-wheel maneuverable mobile robot. The mobile robot platform has the ability to self-localize and perform navigation maneuvers to traverse to set target points using intelligent processes. The target point for this operation is a fixed coordinate position, which is the goal for the mobile robot to reach, taking into consideration the obstacles in the environment, which are represented in a 3D spatial model. Extracting the images from the sensor after a calibration routine, a 3D reconstruction of the traversable environment is produced for the mobile robot to navigate. Using the constructed 3D model, the autonomous mobile robot follows a polynomial-based nonholonomic trajectory with obstacle avoidance. The experimental results demonstrate the cost effectiveness of this off-the-shelf sensor array. The results show its effectiveness in producing a 3D reconstruction of an environment and the feasibility of using the Microsoft Kinect sensor for mapping, sensing, locating, and modeling, which enables the implementation of SLAM on this type of platform.

1. INTRODUCTION
Over the past 30 years, researchers have been developing multiple solutions for vision-based mobile robots that are able to navigate within unknown indoor and outdoor environments. Only during the last decade has this wide area focused on function-driven navigation, such as improving the living standard and increasing the independence of blind people. In this regard, concepts of obstacle avoidance and localization as well as path planning using vision have been proposed by [1]. Another area of application is security: in [2], the WITH mobile robot [3] is presented for threat detection by evaluating unexpected objects and faces. The largest field apart from military use is urban search and rescue robots [4] [5]. Mobile robot systems aimed at this sector should be robust and available at rather low cost. Furthermore, targets are often identifiable not only via vision but also via noise.

Mobile robot navigation has been of major interest since the 1980s. The development during this period is summarized in [6]. This survey concentrates on indoor and outdoor navigation, divided into three groups: map-based systems depend on pre-defined geometric models or topological maps of the environment, whereas mapless navigation systems recognize objects found in the space, or track those objects, by generating motions based on visual observations.

Map-building-based navigation is an intermediate way, where sensors construct their own geometric or topological models of the environment for navigation. A simple but robust and efficient algorithm for mobile robot path planning is discussed in [7]. Here, a path is taught and replayed in indoor and outdoor environments. The system navigates by comparing feature coordinates qualitatively. Obstacle avoidance and global localization are part of the authors' future work. Indoor navigation using 2-dimensional vision systems can be another way to explore the environment. Li et al. [8] present a two-stage technique: during the offline part, the surroundings are constructed with the Rao-Blackwellized particle filter. A location recognition algorithm then allocates features to the pre-built map in order to move autonomously within the area. A similar approach is used in [9]: images are taken by a monocular camera, segmented, filtered by an edge algorithm, and modeled as a topological graph, where a certain position of the mobile robot is equivalent to a node. Hwang and Shih [10] use two charge-coupled-device (CCD) cameras, each controlled by two stepping motors, to navigate a car-like wheeled robot. The cameras are mounted overhead and the robot is tagged with two landmarks. During indoor experiments, the images locate the mobile robot using the landmarks and obstacles.

Stereo cameras are a popular technique for mobile robot navigation, and some vision systems are also expanded to omnidirectional configurations [11] [12] [13]. In [14], a stereo visual system is mounted on an autonomous air vehicle for navigation, after this technique was tested on a ground robot. The main contribution of that paper is to self-localize and estimate the change in position over time. A further step [15] describes an online stereo camera algorithm for reconstruction of urban environments. The sequence is as follows: using a point cloud, a 3D model is reconstructed, the environment is divided into traversable ground regions, and a local safety map is built. This plot supplies information about safe and unsafe areas that is essential for the robotic system to navigate autonomously. In [16], a method is presented for obstacle avoidance and path planning in an indoor environment. Using a stereo camera mounted on a humanoid robot, the system recognizes the floor and detects obstacles via plane extraction without any a priori information about the surrounding space. The disadvantage of this method is that the environment needs to contain enough texture. In [17], the researchers also use stereo vision guidance for a humanoid robot. The main goal is to make this robot walk up stairs and crawl underneath obstacles. This is achieved by using scan-line grouping in order to segment planes in the environment. The key contribution of that paper is the extraction of height information that is used for path planning and navigation. However, it is noted that the success of stereo camera systems depends significantly on the level of texture, since stereo vision relies on horizontal disparity in order to create 3D images.

Another way of getting a 3D reconstructed map of an environment is to apply a 3D laser sensor with a hemispherical field of view [18], or to use IR sensors in combination with a single camera [19]. These last two sensors are responsible for different parts of the mobile robot navigation system: the vision camera is used for planning the closest path to the target, whereas the IR sensors help to avoid static and dynamic obstacles. The goal is reached as the path is divided into intermediate steps.

Figure 1. THE KINECT SENSOR & THE KCLBOT
This paper integrates the Microsoft Kinect into the simultaneous localization and mapping (SLAM) technique using not only 2D data (RGB images) but also depth information (RGB-D). Further, this system autonomously homes in on a pre-defined audio signal. Without any a priori knowledge about the surrounding environment, the mobile robot traverses its own planned path and navigates to the source. Following a section that describes the vision device and its calibration, the paper introduces the autonomous mobile robot KCLBOT, as depicted in Fig. 1. Next, an algorithm for polynomial-based nonholonomic path planning and obstacle avoidance is presented. Experimental results prove the stability and robustness of this approach.

2. RGB & RGB-D VISUAL IMAGE CAPTURE
The RGB and RGB-D capturing device was launched in the UK in early November 2010. The vision devices, which are located on a horizontal line, are connected to a small base with a motorized tilt mechanism. The Kinect(TM) consists of an RGB camera, a depth sensor, and a multi-array microphone (Fig. 1). This section describes the functionality and capability of the device as well as its calibration.

2.1 The Microsoft Kinect Sensor
The RGB images obtained by the color CMOS camera have 8-bit resolution (640 x 480 pixels). An extracted RGB image can be seen in Fig. 2(a). The CMOS sensor that receives the IR light from the transmitter provides input for the depth map with 11-bit resolution (320 x 240 pixels). However, in this paper an 8-bit resolution (640 x 480 pixels) will be extracted (Fig. 2(b)).

The principle of the Kinect sensor is as follows: between the IR transmitter, which sends out structured light, and the receiver there is a small angle. Also, the IR sensor is fitted with a band-pass filter in order to capture the IR light only. Using triangulation, the depth can be recalculated.
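The triangulation arithmetic itself is not spelled out in the paper, but the relation it appeals to is the standard similar-triangles one for structured light. Below is a minimal sketch under assumed values: the focal length F_PX and the baseline BASELINE_M are illustrative placeholders, not the Kinect's factory calibration.

```python
# A minimal sketch of depth-from-disparity triangulation for a
# structured-light sensor. F_PX and BASELINE_M are assumed placeholder
# values, not the Kinect's actual factory calibration.
F_PX = 580.0        # focal length of the IR receiver in pixels (assumed)
BASELINE_M = 0.075  # IR emitter-to-receiver baseline in metres (assumed)

def depth_from_disparity(disparity_px: float) -> float:
    """Recover depth Z from the observed shift of the projected pattern.

    By similar triangles, Z = f * b / d: the IR pattern shifts by d pixels
    between the reference plane and the observed surface.
    """
    if disparity_px <= 0.0:
        raise ValueError("disparity must be positive")
    return F_PX * BASELINE_M / disparity_px

if __name__ == "__main__":
    for d in (30.0, 45.0, 60.0):
        print(f"disparity {d:5.1f} px -> depth {depth_from_disparity(d):.2f} m")
```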

Figure 2. (a) RGB AND (b) RGB-D IMAGE CAPTURE

Fig. 2(a) and (b) present the RGB and RGB-D images captured by the sensor, respectively.

2.2 Sensor Calibration
The two CMOS cameras are calibrated using the widely known pinhole camera model. Regarding the extrinsic parameters, the RGB camera is used as the world coordinate frame, so that the depth sensor needs to be translated by -25 mm in the y-direction. The intrinsic matrix M_Intrinsic is described by the focal lengths f_x and f_y and the principal point p_x and p_y, so that everything adds up to the following camera matrix:

$$M_{Intrinsic} = \begin{pmatrix} f_x & 0 & p_x \\ 0 & f_y & p_y \\ 0 & 0 & 1 \end{pmatrix} \quad (1)$$

In order to account for non-linear effects, the normalized image coordinates are corrected with the radial distortion vector d = (d_1, d_2, d_3)^T before applying the intrinsic matrix:

$$\begin{pmatrix} x_d \\ y_d \end{pmatrix} = \left(1 + d_1(x^2+y^2) + d_2(x^2+y^2)^2 + d_3(x^2+y^2)^3\right)\begin{pmatrix} x \\ y \end{pmatrix} \quad (2)$$

$$x = X/Z, \qquad y = Y/Z \quad (3)$$

where (X, Y, Z) is a point in the camera reference frame. For the RGB camera, the intrinsic parameters are:

$$M_{Intrinsic,RGB} = \begin{pmatrix} 522.82 & 0 & 320.61 \\ 0 & 521.63 & 242.00 \\ 0 & 0 & 1 \end{pmatrix} \quad (4)$$

$$d_{RGB} = \begin{pmatrix} 1.12 & 0.91 & 0.26 \end{pmatrix}^T \quad (5)$$

and for the IR sensor:

$$M_{Intrinsic,RGB\text{-}D} = \begin{pmatrix} 557.14 & 0 & 304.44 \\ 0 & 556.75 & 229.18 \\ 0 & 0 & 1 \end{pmatrix} \quad (6)$$

$$d_{RGB\text{-}D} = \begin{pmatrix} 0.20 & 0.54 & 0.48 \end{pmatrix}^T \quad (7)$$

Figure 3. RGB IMAGE WITH INTRINSIC CALIBRATION

The RGB-D image shows a certain RGB color sequence going from near to far. As z increases, the order is as follows: Magenta (1, 0, 1), Blue (0, 0, 1), Cyan (0, 1, 1), Green (0, 1, 0), Yellow (1, 1, 0), Red (1, 0, 0), where R, G, B ∈ [0, 1]. This can be written in a cylindrical-coordinate representation by calculating the hue, saturation, and value in the HSV color space. The three equations are given by [20]:

$$H = \begin{cases} 0, & \text{if } R = G = B \\ 60 \cdot \dfrac{G - B}{\max(R,G,B) - \min(R,G,B)}, & \text{if } \max(R,G,B) = R \\ 60 \cdot \left(2 + \dfrac{B - R}{\max(R,G,B) - \min(R,G,B)}\right), & \text{if } \max(R,G,B) = G \\ 60 \cdot \left(4 + \dfrac{R - G}{\max(R,G,B) - \min(R,G,B)}\right), & \text{if } \max(R,G,B) = B \end{cases} \quad (8)$$

$$S = \begin{cases} 0, & \text{if } R = G = B \\ \dfrac{\max(R,G,B) - \min(R,G,B)}{\max(R,G,B)}, & \text{otherwise} \end{cases} \quad (9)$$

$$V = \max(R,G,B) \quad (10)$$
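As a concrete rendering of Eqs. (8)-(10), the following minimal sketch converts an (R, G, B) triple in [0, 1] to hue, saturation, and value:

```python
def rgb_to_hsv(r: float, g: float, b: float) -> tuple[float, float, float]:
    """Convert R, G, B in [0, 1] to (H, S, V) following Eqs. (8)-(10).

    H is returned in degrees, S and V in [0, 1].
    """
    mx, mn = max(r, g, b), min(r, g, b)
    if mx == mn:                       # R = G = B: hue is defined as 0
        h = 0.0
    elif mx == r:
        h = (60.0 * (g - b) / (mx - mn)) % 360.0
    elif mx == g:
        h = 60.0 * (2.0 + (b - r) / (mx - mn))
    else:                              # mx == b
        h = 60.0 * (4.0 + (r - g) / (mx - mn))
    s = 0.0 if mx == 0.0 else (mx - mn) / mx   # Eq. (9)
    v = mx                                     # Eq. (10)
    return h, s, v

# A blue-ish depth pixel (R, G, B) = (0.0, 0.24, 1.0) evaluates to a hue
# of about 225.6 deg, i.e. between Blue (240 deg) and Cyan (180 deg).
print(rgb_to_hsv(0.0, 0.24, 1.0))
```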

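For completeness, here is a minimal sketch of the projection model of Eqs. (1)-(5): normalize by depth (Eq. 3), apply the radial factor (Eq. 2), then map through the RGB intrinsics (Eq. 4). The distortion coefficients are transcribed from Eq. (5) as printed; their sign convention is not stated in the paper, so treat the numbers as illustrative.

```python
import numpy as np

# Intrinsic matrix of the RGB camera from Eq. (4), in the pinhole form
# of Eq. (1); the radial distortion coefficients are those of Eq. (5).
M_INTRINSIC_RGB = np.array([
    [522.82,   0.00, 320.61],    # f_x, 0,   p_x
    [  0.00, 521.63, 242.00],    # 0,   f_y, p_y
    [  0.00,   0.00,   1.00],
])
D_RGB = (1.12, 0.91, 0.26)       # d1, d2, d3 as printed in Eq. (5)

def project_point(X: float, Y: float, Z: float) -> tuple[float, float]:
    """Project a camera-frame point (X, Y, Z) to pixel coordinates.

    Normalises by depth (Eq. 3), applies the radial factor (Eq. 2), and
    maps through the intrinsic matrix (Eqs. 1 and 4).
    """
    x, y = X / Z, Y / Z                                  # Eq. (3)
    r2 = x * x + y * y
    d1, d2, d3 = D_RGB
    radial = 1.0 + d1 * r2 + d2 * r2**2 + d3 * r2**3     # Eq. (2)
    u, v, w = M_INTRINSIC_RGB @ np.array([x * radial, y * radial, 1.0])
    return u / w, v / w

print(project_point(0.10, 0.05, 1.50))   # a point 1.5 m in front of the camera
```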
Figure 4. QUADRATIC INTERPOLATION BETWEEN DISTANCE AND HUE VALUE

Fig. 4 plots the distance d = [70, 170] in [cm] against the hue value H in [deg]. Unlike a linear approximation, a quadratic equation describes the relation between the distance and the hue value more accurately:

$$d = 0.0032\,H^2 - 1.9492\,H + 359.26 \quad (11)$$

Equation (11) allows for the computation of distance based on its quadratic relationship to hue.

2.3 HSV-Distance Results
Since the IR depth sensor is calibrated for distances between 70 cm and 170 cm, tests are taken within this interval. Four samples can be seen in Fig. 5. The obstacle in the middle of the image is located d = 80, 95, 110, and 162 cm from the Kinect(TM).

Figure 5. FOUR SAMPLES FOR DISTANCE CALCULATION: (a) SAMPLE 1, (b) SAMPLE 2, (c) SAMPLE 3, (d) SAMPLE 4

Table 1 shows the translation from the RGB color space to the HSV color space. As mentioned before, the hue value is of special interest because it is related to the distance d by the quadratic Equation (11). Using this interpolation, the distance can be calculated. Compared with the measured distance, there is an average error of 1.1%.

Table 1. RGB-D SAMPLE DISTANCE ESTIMATION

Sample   R      G      B      Hue in [deg]   Distance d in [cm]
1        0.00   0.24   1.00   225.88         81.67
2        0.10   0.64   1.00   202.60         95.60
3        0.00   0.98   1.00   181.41         110.60
4        0.00   1.00   0.12   127.29         162.81

Implementing the calibrated quadratic distance equation for d, Equation (11), Table 1 and Fig. 5(a)-(d) present the expected distance valuation of the tracked obstacle.

2.4 3D Reconstruction Using an RGB & RGB-D Sensor
From the 3D data gained from the RGB and RGB-D sensors, it is possible to generate a point cloud. The point cloud includes a description of the alignment of surfaces, specified by 3-tuples, in order to reconstruct a polygonal mesh. These points are referred to as vertices if they are to be used as corners. Furthermore, the data supplies the RGB values for each point. Fig. 6(a) shows a view along the positive x-axis, in which the background and the obstacle can be clearly distinguished. In Fig. 6(b) this view has been pitched by 45 deg.

Figure 6. 3D RECONSTRUCTION: (a) FRONT VIEW AND (b) ROTATED BY 45 DEG

The processed 3D reconstruction, Fig. 6(a) and (b), provides the mobile robot with an environment map for path planning.
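The paper does not list the back-projection step used to build the point cloud; a minimal sketch under the RGB-D intrinsics of Eq. (6) would invert the pinhole model per pixel. The function name and the (480, 640) input shapes below are assumptions for illustration.

```python
import numpy as np

# Back-project a registered depth image into a coloured point cloud by
# inverting the pinhole model with the RGB-D intrinsics of Eq. (6).
FX, FY = 557.14, 556.75   # focal lengths from Eq. (6)
PX, PY = 304.44, 229.18   # principal point from Eq. (6)

def depth_to_point_cloud(depth_m: np.ndarray, rgb: np.ndarray) -> np.ndarray:
    """Return an (N, 6) array of X, Y, Z, R, G, B vertices."""
    v, u = np.indices(depth_m.shape)       # pixel row and column grids
    z = depth_m.ravel()
    valid = z > 0.0                        # drop unmeasured pixels
    x = (u.ravel() - PX) * z / FX          # inverse of Eq. (3), scaled by Eq. (6)
    y = (v.ravel() - PY) * z / FY
    xyz = np.column_stack([x, y, z])[valid]
    colours = rgb.reshape(-1, 3)[valid]
    return np.hstack([xyz, colours])

if __name__ == "__main__":
    depth = np.random.uniform(0.7, 1.7, size=(480, 640))  # synthetic input
    rgb = np.random.rand(480, 640, 3)
    print(depth_to_point_cloud(depth, rgb).shape)
```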

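As a quick numerical check of Eq. (11), evaluating the quadratic at the hue values of Table 1 reproduces the tabulated distance estimates to within roughly a centimetre:

```python
# Evaluate the calibrated quadratic model of Eq. (11) at the hue values
# of Table 1 and compare against the measured obstacle distances.
def hue_to_distance_cm(hue_deg: float) -> float:
    """Distance estimate d in [cm] from hue H in degrees, per Eq. (11)."""
    return 0.0032 * hue_deg**2 - 1.9492 * hue_deg + 359.26

for sample, hue, measured in [(1, 225.88, 80), (2, 202.60, 95),
                              (3, 181.41, 110), (4, 127.29, 162)]:
    est = hue_to_distance_cm(hue)
    print(f"sample {sample}: H = {hue:6.2f} deg -> d = {est:6.1f} cm "
          f"(measured {measured} cm)")
```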
3. THE KCLBOT: AN AUTONOMOUS MOBILE ROBOT
The KCLBOT [21] is a nonholonomic two-wheeled mobile robot. The mobile robot is built around the specifications for the Micromouse robot and the RoboCup competition. These specifications contribute to the mobile robot's form factor and size. This mobile robot holds a complex electronic system to support on-line path planning, self-localization, and even simultaneous localization and mapping (SLAM), which is made possible by the onboard sensor array.

Figure 7. THE KCLBOT: A NONHOLONOMIC MOBILE ROBOT

A suitable autonomous mobile robot is required as a platform for the Microsoft Kinect sensor. Fig. 7 presents the KCLBOT, which is the platform used to support the sensor array.

3.1 Mobile Robot Configuration
In the maneuverable classification of mobile robots [22], the vehicle is defined as being constrained to move in the vehicle's fixed heading angle. For the vehicle to change maneuver configuration, it needs to rotate about itself. As the vehicle traverses a two-dimensional plane, both the left and right wheels follow paths that move around the instantaneous center of curvature at the same angular rate, which can be defined as ω. Thus the angular velocities of the left and right wheel rotations are deduced as follows:

$$\dot{\theta}_L = \omega \left( r_{icc} - \frac{L}{2} \right) \quad (12)$$

$$\dot{\theta}_R = \omega \left( r_{icc} + \frac{L}{2} \right) \quad (13)$$

where L is the distance between the centers of the two rotating wheels and the parameter r_icc is the distance between the mid-point of the rotating wheels and the instantaneous center of curvature. Using the velocities of the rotating left and right wheels from Equations (12) and (13), θ̇_L and θ̇_R respectively, the instantaneous center of curvature r_icc and the curvature rate ω can be derived as follows:

$$r_{icc} = \frac{L\,(\dot{\theta}_R + \dot{\theta}_L)}{2\,(\dot{\theta}_R - \dot{\theta}_L)} \quad (14)$$

$$\omega = \frac{\dot{\theta}_R - \dot{\theta}_L}{L} \quad (15)$$

Using Equations (14) and (15), two singularities can be identified. When θ̇_L = θ̇_R, the radius of the instantaneous center of curvature, r_icc, tends towards infinity; this is the condition when the mobile robot is moving in a straight line. When θ̇_L = -θ̇_R, the mobile robot is rotating about its own center and the radius of the instantaneous center of curvature, r_icc, is null.

When the wheels on the mobile robot rotate, the quadrature shaft encoders return a counter tick value; the rotation direction of each wheel is given by the positive or negative value returned by its encoder. Using the number of tick counts returned, the distances travelled by the rotating left and right wheels can be deduced in the following way:

$$d_L = \frac{ticks_L\,\pi D}{res_L} \quad (16)$$

$$d_R = \frac{ticks_R\,\pi D}{res_R} \quad (17)$$

where ticks_L and ticks_R denote the number of encoder pulses counted by the left and right wheel encoders, respectively, since the last sampling, and D is the diameter of the wheels. With the resolutions of the left and right shaft encoders, res_L and res_R respectively, it is possible to determine the distances travelled by the left and right rotating wheels, d_L and d_R. This calculation is represented in Equations (16)-(17).
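A compact sketch of Eqs. (12)-(17) follows. The wheel separation, wheel diameter, and encoder resolution are assumed placeholder values, since the paper does not list the KCLBOT's dimensions; the wheel speeds are treated as rim speeds, with the wheel radius folded in as the equations do.

```python
import math

L_M = 0.10   # distance between wheel centres [m] (assumed)
D_M = 0.05   # wheel diameter [m] (assumed)
RES = 512    # encoder ticks per wheel revolution (assumed)

def icc_from_wheel_speeds(theta_dot_l: float, theta_dot_r: float):
    """Return (r_icc, omega) per Eqs. (14)-(15).

    theta_dot_l == theta_dot_r  -> straight line, r_icc -> infinity
    theta_dot_l == -theta_dot_r -> rotation in place, r_icc == 0
    """
    if theta_dot_l == theta_dot_r:
        return math.inf, 0.0
    omega = (theta_dot_r - theta_dot_l) / L_M            # Eq. (15)
    r_icc = L_M * (theta_dot_r + theta_dot_l) / (
        2.0 * (theta_dot_r - theta_dot_l))               # Eq. (14)
    return r_icc, omega

def wheel_distance(ticks: int) -> float:
    """Distance travelled by one wheel since the last sample, Eqs. (16)-(17)."""
    return ticks * math.pi * D_M / RES

print(icc_from_wheel_speeds(0.20, 0.30), wheel_distance(128))
```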

3.2 Self-localization via a Dual Shaft Encoder Configuration
Using the quadrature shaft encoders that accumulate the distance travelled by the wheels, a form of position can be deduced by deriving the mobile robot's (x, y) Cartesian position and the maneuverable vehicle's orientation with respect to time. The derivation starts by defining s(t) and φ(t) as functions of time, representing the velocity and orientation of the mobile robot, respectively. The velocity and orientation are related to the position by differentiation as follows:

$$\frac{dx}{dt} = s(t)\cos(\varphi(t)) \quad (18)$$

$$\frac{dy}{dt} = s(t)\sin(\varphi(t)) \quad (19)$$

The change in orientation with respect to time, which was defined in Equation (15), can be described as follows:

$$\frac{d\varphi}{dt} = \frac{\dot{\theta}_R - \dot{\theta}_L}{b} \quad (20)$$

where b denotes the distance between the wheel centers, written L in Section 3.1. When Equation (20) is integrated, the mobile robot's orientation φ(t) with respect to time is obtained. With the mobile robot's initial angle of orientation φ(0) written as φ₀, this is represented as follows:

$$\varphi(t) = \varphi_0 + \frac{\dot{\theta}_R - \dot{\theta}_L}{b}\,t \quad (21)$$

The velocity of the mobile robot is equal to the average speed of the two wheels, and this can be incorporated into Equations (18) and (19), which is depicted as follows:

$$\frac{dx}{dt} = \frac{\dot{\theta}_R + \dot{\theta}_L}{2}\cos(\varphi(t)) \quad (22)$$

$$\frac{dy}{dt} = \frac{\dot{\theta}_R + \dot{\theta}_L}{2}\sin(\varphi(t)) \quad (23)$$

The next step is to integrate Equations (22) and (23) from the initial position of the mobile robot, which is depicted as follows:

$$x(t) = x_0 + \frac{b\,(\dot{\theta}_R + \dot{\theta}_L)}{2\,(\dot{\theta}_R - \dot{\theta}_L)}\left[\sin\!\left(\frac{\dot{\theta}_R - \dot{\theta}_L}{b}\,t + \varphi_0\right) - \sin(\varphi_0)\right] \quad (24)$$

$$y(t) = y_0 - \frac{b\,(\dot{\theta}_R + \dot{\theta}_L)}{2\,(\dot{\theta}_R - \dot{\theta}_L)}\left[\cos\!\left(\frac{\dot{\theta}_R - \dot{\theta}_L}{b}\,t + \varphi_0\right) - \cos(\varphi_0)\right] \quad (25)$$

Equations (24) and (25) describe the mobile robot's position, where x(0) = x₀ and y(0) = y₀ are the mobile robot's initial positions. The next step is to represent Equations (21), (24) and (25) in terms of the distances that the left and right wheels have traversed, defined by d_L and d_R. This can be achieved by substituting θ̇_L and θ̇_R in Equations (21), (24) and (25) for d_L and d_R, respectively, and dropping the time variable t, to obtain the following:

$$\varphi = \varphi_0 + \frac{d_R - d_L}{b} \quad (26)$$

$$x = x_0 + \frac{b\,(d_R + d_L)}{2\,(d_R - d_L)}\left[\sin\!\left(\frac{d_R - d_L}{b} + \varphi_0\right) - \sin(\varphi_0)\right] \quad (27)$$

$$y = y_0 - \frac{b\,(d_R + d_L)}{2\,(d_R - d_L)}\left[\cos\!\left(\frac{d_R - d_L}{b} + \varphi_0\right) - \cos(\varphi_0)\right] \quad (28)$$

Implemented together, Equations (26) to (28) provide a solution for the relative position of a maneuverable mobile robot. This offers a possible solution to the self-localization problem but is subject to accumulative drift of the position and orientation, with no method of re-alignment. The accuracy of this method depends on the sampling rate of the data accumulation: if small position or orientation changes are not recorded, then the position and orientation will be erroneous.
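Equations (26)-(28) translate directly into a dead-reckoning update. A minimal sketch, with a guard for the straight-line singularity noted in Section 3.1 (d_L = d_R):

```python
import math

def odometry_update(x0, y0, phi0, d_l, d_r, b):
    """Dead-reckoned pose update per Eqs. (26)-(28).

    d_l, d_r: wheel travel distances since the last sample (Eqs. 16-17);
    b: distance between the wheel centres.
    """
    if math.isclose(d_l, d_r):                 # straight-line case
        return x0 + d_l * math.cos(phi0), y0 + d_l * math.sin(phi0), phi0
    phi = phi0 + (d_r - d_l) / b                             # Eq. (26)
    r = b * (d_r + d_l) / (2.0 * (d_r - d_l))                # common radius term
    x = x0 + r * (math.sin(phi) - math.sin(phi0))            # Eq. (27)
    y = y0 - r * (math.cos(phi) - math.cos(phi0))            # Eq. (28)
    return x, y, phi

# Unequal wheel distances curve the robot away from its initial heading.
print(odometry_update(0.0, 0.0, 0.0, 0.10, 0.12, 0.10))
```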

4. POLYNOMIAL-BASED NONHOLONOMIC PATH PLANNING AND OBSTACLE AVOIDANCE
This part concentrates on finding a path for the KCLBOT from its initial configuration, described by (x₀, y₀, φ₀), to a final one (x₁, y₁, φ₁). The nonholonomic constraint has to be satisfied, and the three-dimensional final configuration space has to be reached with only two controls. The paper adopts a polynomial approach to the path planning, while obstacle avoidance is realized by using the higher orders of the polynomials. The vertices and edges of the KCLBOT, as well as those of the obstacles, are enclosed in simple shapes such as circles or squares. To achieve the task of path planning, detailed information about the traversable space and the location of potential obstacles is required. Using the RGB-D image, a localization map is produced for the path planning and obstacle avoidance.

4.1 RGB-D Image to 2D Environment Mapping
Before the autonomous mobile robot can complete any path planning or path following tasks, it requires sufficient information about the environment it will be traversing. To provide the mobile robot with this information, the detail from the RGB-D camera is used to make a plot of the terrain, marking the unobstructed space the mobile robot can utilize. Before the RGB-D image can be used, the noise, resolved as black pixels in the range #E4E1C0h to #FFFFFFh, needs to be removed from the image. This is achieved by converting the RGB-D image to gray scale [23], a process carried out to protect natural colors in the #E4E1C0h to #FFFFFFh range. In the RGB color model, a color image can be represented by the following intensity function:

$$I_{RGB} = (F_R, F_G, F_B) \quad (29)$$

In Equation (29), F_R is the intensity of the pixel (x, y) in the red channel, F_G is the intensity of pixel (x, y) in the green channel, and F_B is the intensity of pixel (x, y) in the blue channel. Using only the brightness information, the color image can be transformed into a gray scale image [23]:

$$I_{GS} = 0.333\,F_R + 0.5\,F_G + 0.1666\,F_B \quad (30)$$

Equation (30) converts a color pixel to a gray scale pixel.

Figure 8. RGB-D (a) TO GRAY SCALE (b) CONVERSION

After the image has been converted to gray scale, as depicted in Fig. 8, the black pixels are filtered out of the image.

Figure 9. GRAY SCALE FILTERED IMAGE

Once the image has been stripped of the black noise pixels, as depicted in Fig. 9, the color detail is required for mapping the traversable environment.

Figure 10. COLOR REMAPPING ON FILTERED IMAGE

The RGB-D depth color information from Fig. 8(a) is remapped onto the gray scale filtered image; the result is presented in Fig. 10. Using the HSV [24] cylindrical-coordinate representation of points in an RGB color model, the image is rotated by 90 deg, resulting in an image that is a topological view of the traversable space.

Figure 11. REMAPPED RGB-D FILTERED IMAGE ROTATION

The rotation of the RGB-D image, Fig. 11(a), results in detailed localization mapping information, presented in Fig. 11(b), that the mobile robot can use for path planning.

4.2 Obstacle Avoidance: A Polynomial Approach
Two independently driven wheels are used to drive the mobile robot vehicle. It is assumed that the system moves at a low speed and that the ground provides enough friction force, so the two driven wheels do not slip sideways. The velocity of any point on the wheel axis is normal to this axis. This leads to the following constraint equation:

$$\dot{x}\sin(\varphi) - \dot{y}\cos(\varphi) = 0 \quad (31)$$

The above equation is a nonholonomic constraint involving velocities and, as is well known, it cannot be integrated analytically to yield a constraint between the configuration variables of the platform, namely x, y, and φ. Also, the configuration space of this system is three-dimensional while the velocity space is two-dimensional. The nonholonomic constraint can be written in the form

$$u = \dot{x}\sin(\varphi) - \dot{y}\cos(\varphi), \qquad v = \dot{x}\cos(\varphi) + \dot{y}\sin(\varphi) \quad (32)$$

If we choose functions f and g with f(t) = u and g(t) = v, and select f and g to be fifth- and third-order time polynomials, we can obtain the trajectory with obstacle avoidance. Details can be found in [25].
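The paper defers the polynomial construction to [25]. As a rough stand-in (not the formulation of [25]), the sketch below shows the same idea in its simplest form: a single quintic y(x) with enough free coefficients to meet both boundary configurations and clear one obstacle waypoint. All function and parameter names here are hypothetical illustrations.

```python
import math
import numpy as np

def quintic_path(x0, y0, phi0, x1, y1, phi1, x_obs, y_clear):
    """Return coefficients a0..a5 of y(x) = sum(a_i * x**i).

    Boundary conditions: position and heading (slope tan(phi)) at both
    ends, zero curvature at the start, and a clearance waypoint
    (x_obs, y_clear) steering the path around an obstacle.
    """
    def row(x, order):
        # Linear-system row for the `order`-th derivative of y at x.
        return [0.0 if i < order
                else math.factorial(i) // math.factorial(i - order) * x**(i - order)
                for i in range(6)]
    A = np.array([
        row(x0, 0), row(x0, 1), row(x0, 2),  # position, slope, curvature at start
        row(x1, 0), row(x1, 1),              # position and slope at the goal
        row(x_obs, 0),                       # clearance waypoint over the obstacle
    ], dtype=float)
    b = np.array([y0, math.tan(phi0), 0.0, y1, math.tan(phi1), y_clear])
    return np.linalg.solve(A, b)

# Start at the origin heading along x, finish at (2, 1) heading along x,
# passing through a clearance point (1.0, 0.8) above an assumed obstacle.
print(quintic_path(0.0, 0.0, 0.0, 2.0, 1.0, 0.0, 1.0, 0.8))
```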

5. CONCLUSION & DISCUSSION
This paper presents the utilization of the Microsoft Kinect sensor to support the SLAM methodology by exploiting the RGB and RGB-D images for mapping, sensing, locating, and modeling. Before any image processing is possible, the image inputs are calibrated to acquire the image in a pinhole model with intrinsic calibration. Using an HSV cylindrical-coordinate mapping space, a quadratic distance estimation model is presented to resolve the estimation of a potential obstacle's distance from the sensor. Using the RGB and RGB-D images, a 3D reconstruction method is presented for environment modeling. The KCLBOT, an autonomous nonholonomic maneuverable mobile robot, is used as a platform for the sensor to capture the experimental images. The mobile robot self-localizes using the quadrature shaft encoders to resolve orientation and planar position. The mobile robot is provided with an environment overview map by the presented RGB-D image rotation method. This mapping information is applied to the polynomial-based obstacle avoidance and path planning approach. The experimental images demonstrate the cost effectiveness of this off-the-shelf sensor array. The results show the effectiveness of producing a 3D reconstruction of an environment and the feasibility of using the Microsoft Kinect sensor for mapping, sensing, locating, and modeling, which enables the implementation of SLAM on this type of platform.

REFERENCES
[1] Amutha, B., and Ponnavaikko, M., November 2009, "Mobile Assistant as a Navigational Aid for Blind Children to Identify Landmarks," International Journal of Recent Trends in Engineering, 2(3), pp. 152-154.
[2] Gordon, S., Pang, S., Nishioka, R., Kasabov, N., and Yamakawa, T., 2009, "Vision Based Mobile Robot for Indoor Environmental Security," Proc. 15th International Conference on Neural Information Processing of the Asia-Pacific Neural Network Assembly, Springer-Verlag, Berlin Heidelberg, pp. 962-969.
[3] Mori, K., Sato, M., Sonoda, T., and Ishii, K., "Toward Realization of Swarm Intelligence," Proc. 7th POSTECH-Kyutech Joint Workshop on Neuroinformatics.
[4] Scholtz, J., Young, J., Drury, J. L., and Yanco, H. A., "Evaluation of Human-Robot Interaction Awareness in Search and Rescue," Proc. 2004 IEEE International Conference on Robotics and Automation, pp. 2327-2332.
[5] Davids, A., 2002, "Urban Search and Rescue Robots: From Tragedy to Technology," IEEE Intelligent Systems, Histories and Futures, pp. 81-83.
[6] DeSouza, N. G., and Kak, A. C., 2002, "Vision for Mobile Robot Navigation: A Survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(2), pp. 237-267.
[7] Chen, Z., and Birchfield, S. T., "Qualitative Vision-Based Mobile Robot Navigation," Proc. 2006 IEEE International Conference on Robotics and Automation, pp. 2686-2692.
[8] Li, M.-H., Hong, B.-R., Cai, Z.-S., Piao, S.-H., and Huang, Q.-C., 2007, "Novel Indoor Mobile Robot Navigation Using Monocular Vision," Engineering Applications of Artificial Intelligence, 21, pp. 485-497.
[9] Santosh, D., Achar, S., and Jawahar, C. V., "Autonomous Image-based Exploration for Mobile Robot Navigation," Proc. 2008 IEEE International Conference on Robotics and Automation, pp. 2717-2722.
[10] Hwang, C., and Shih, C., March 2009, "A Distributed Active-Vision Network-Space Approach for the Navigation of a Car-Like Wheeled Robot," IEEE Transactions on Industrial Electronics, 56(3), pp. 846-855.
[11] Gaspar, J., Winters, N., and Santos-Victor, J., 2000, "Vision-Based Navigation and Environmental Representations with an Omnidirectional Camera," IEEE Transactions on Robotics and Automation, 16(6), pp. 890-898.
[12] Adorni, G., Mordonini, M., Cagnoni, C., and Sgorbissa, A., "Omnidirectional Stereo Systems for Robot Navigation," Proc. 2003 Conference on Computer Vision and Pattern Recognition Workshop, pp. 1-7.
[13] Lui, W. L. D., and Jarvis, R., 2010, "Eye-Full Tower: A GPU-based Variable Multibaseline Omnidirectional Stereovision System with Automatic Baseline Selection for Outdoor Mobile Robot Navigation," Robotics and Autonomous Systems, 58, pp. 747-761.
[14] Mejias, L., Campoy, P., Mondragon, I., and Doherty, P., September 3-5, 2007, "Stereo Vision-Based Navigation for an Autonomous Helicopter," 6th IFAC Symposium on Intelligent Autonomous Vehicles.
[15] Murarka, A., and Kuipers, B., "A Stereo Vision Based Mapping Algorithm for Detecting Inclines, Drop-offs, and Obstacles for Safe Local Navigation," Proc. 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1646-1653.
[16] Sabe, K., Fukuchi, M., Gutmann, J.-S., Ohashi, T., Kawamoto, K., and Yoshigahara, T., "Obstacle Avoidance and Path Planning for Humanoid Robots Using Stereo Vision," Proc. 2004 IEEE International Conference on Robotics and Automation, pp. 592-597.
[17] Gutmann, J.-S., Fukuchi, M., and Fujita, M., 2008, "3D Perception and Environment Map Generation for Humanoid Robot Navigation," The International Journal of Robotics Research, 27(10), pp. 1117-1134.
[18] Ryde, J., and Hu, H., "3D Laser Range Scanner with Hemispherical Field of View for Robot Navigation," Proc. 2008 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, pp. 891-896.
[19] Singh, N. N., Chatterjee, A., Chatterjee, A., and Rakshit, A., 2011, "A Two-layered Subgoal Based Mobile Robot Navigation Algorithm with Vision System and IR Sensors," Measurement, in press.
[20] Smith, A. R., "Color Gamut Transform Pairs," Proc. 5th Annual Conference on Computer Graphics and Interactive Techniques, pp. 12-19.
[21] Georgiou, E., 2010, "The KCLBOT Mobile Robot," www.kclbot.com.
[22] Campion, G., Bastin, G., and D'Andrea-Novel, B., 1996, "Structural Properties and Classification of Kinematic and Dynamic Models of Wheeled Mobile Robots," IEEE Transactions on Robotics and Automation, 12(2), pp. 47-62.
[23] Kumar, T., and Verma, K., 2010, "A Theory Based on Conversion of RGB Image to Gray Image," International Journal of Computer Applications, 7(2), pp. 7-10.
[24] Joblove, G., and Greenberg, D., "Color Spaces for Computer Graphics," Proc. 5th Annual Conference on Computer Graphics and Interactive Techniques.
[25] Papadopoulos, E., Poulakakis, I., and Papadimitriou, I., 2002, "On Path Planning and Obstacle Avoidance for Nonholonomic Platforms with Manipulators: A Polynomial Approach," The International Journal of Robotics Research, 21(4), pp. 367-383.
N., Chattejee, A., Chattejee, A., and akshit, A., 211, "A two-layeed subgoal based mobile obot navigation algoithm with vision system and I sensos," Measuement, in pess. [2] Smith, A.., "Colo gamut tansfom pais," Poc. 5th Annual Confeence on Compute Gaphics and Inteactive Techniques pp. 12-19. [21] Geogiou, E., 21, "The KCBOT Mobile obot," www.kcbot.com. [22] Campion, G., Bastin, G., D Andea-Novel, B., 1996, "Stuctual Popeties and Classification of Kinematic and Dynamic Models of Wheeled Mobile obots," IEEE Tansactions on obotics and Automation, 12(2, pp. 47 62. [23] Kuma, T., and Vema, K., 21, "A Theoy Based on Convesion of GB image to Gay image," Intenational Jounal of Compute Applications, 7(2, pp. 7-1. [24] Joblove, G., and Geenbeg, D., "Colo spaces fo compute gaphics," Poc. 5th Annual Confeence on Compute Gaphics and Inteactive Techniques. [25] Papadopoulos, E., Poulakakis, I., and Papadimitiou, I., 22, "On Path Planning and Obstacle Avoidance fo Nonholonomic Platfoms with Manipulatos: A Polynomial Appoach," The Intenational Jounal of obotics eseach, 21(4, pp. 367-383. 8 Copyight 211 by ASME