Positioning of a robot based on binocular vision for hand / foot fusion Long Han


2nd International Conference on Advances in Mechanical Engineering and Industrial Informatics (AMEII 2016)

Positioning of a robot based on binocular vision for hand / foot fusion

Long Han
Computer Science and Technology, Tieling Normal College, Tieling City, Liaoning Province, China
Email: holms85@163.com

Keywords: Robot; Vision; Positioning

Abstract. In this paper, a binocular vision system is adopted to realize the positioning of a robot. Using a color and shape recognition method, the target image is processed so that the target is extracted more completely and accurately, and the position of the target object is then obtained using the principle of binocular stereo vision.

Introduction

Among the various robot localization methods, a vision system can be used to realize self-localization. While moving, the robot acquires information about the color, shape and location of landmarks in the environment, and uses vision techniques to calculate the positional relationship between the road signs and itself. In 1977, Professor Marr proposed the Marr vision theory, which had a wide influence [1][2]. With the development of binocular vision technology, and the problems of interference and inverse imaging encountered in real operation, the limitations of Marr's theory became more and more apparent. In the late 1980s, spatial geometry methods and related physical knowledge were applied to the study of binocular vision. In this period, the introduction of active vision research methods, together with the application of distance sensors and information fusion technology, turned many of the ill-posed problems of Marr's vision theory into well-posed ones [3][4]. In the early 1990s, the maturing of binocular vision theory and technology made binocular vision a widely used new technology, and many experts moved into the field of vision. In the new century, binocular vision technology has been applied more and more to robot research. Using binocular vision to obtain images, positioning from the coordinates of feature points extracted from the two images can be achieved on a Stewart platform. The navigation system of an autonomous soccer robot was realized using the heterogeneous binocular vision system of Harbin Institute of Technology [5]. A binocular vision servo system was developed, in which the image Jacobian matrix of the target was calculated using the principle of binocular vision [6]; the detection and adaptive tracking of an unknown target was realized using the Jacobian matrix of three reference signs. A sensor fusion method has also been proposed for intelligent transportation systems, in which the target depth is detected by a radar system and the target location is extracted using binocular vision and color image segmentation.

Parallel binocular vision measurement principle

Fig. 1. Measuring principle of the binocular vision system

Figure 1 shows the principle of parallel binocular vision: the optical axes of the two cameras are parallel to each other, and their X and Y axes are parallel to each other. The two cameras observe the same feature point P at the same time and obtain, respectively, an image point of P in the left and right cameras. The left camera coordinate system O1X1Y1Z1 is located at the origin of the world coordinate system with no rotation, and its image coordinate system is O1x1y1; the right camera coordinate system is O2X2Y2Z2, with image coordinate system O2x2y2. The two camera coordinate systems are separated by a translation distance B along the X axis (also known as the baseline distance). P1(X1, Y1, Z1) and P2(X2, Y2, Z2) are the coordinates of P in the two camera coordinate systems; p1(x1, y1, z1) and p2(x2, y2, z2) are the corresponding image coordinates in the left and right cameras. The projection relationship can then be written as:

u_1 - u_0 = \alpha_x \frac{X}{Z}, \qquad v_1 - v_0 = \alpha_y \frac{Y}{Z},
u_2 - u_0 = \alpha_x \frac{X - B}{Z}, \qquad v_2 - v_0 = \alpha_y \frac{Y}{Z}    (1)

In formula (1), u_0, v_0, \alpha_x, \alpha_y are the internal parameters of the camera; (u_1, v_1) are the pixel coordinates of the imaging point in the left camera image; (u_2, v_2) are the pixel coordinates of the imaging point in the right camera image. From formula (1), the 3D coordinates of P in the camera coordinate system can be calculated:

X = \frac{Z(u_1 - u_0)}{\alpha_x}, \qquad Y = \frac{Z(v_1 - v_0)}{\alpha_y}, \qquad Z = \frac{B f}{u_1 - u_2}    (2)

where f = \alpha_x is the focal length expressed in pixels, since subtracting the two u equations of (1) gives u_1 - u_2 = \alpha_x B / Z.

Therefore, any point on the image plane of the left camera can be assigned 3D coordinates. This is a point-by-point operation: as the formulas above show, recovering depth from the parallax is easy, but the disparity itself is the most difficult part of stereo vision, since the corresponding points in the left and right images must be found. In practical applications, some parameters of the camera are unknown and need to be calibrated.
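The depth recovery in formulas (1) and (2) can be made concrete with a short sketch. The following Python function is a minimal illustration, not code from the paper: it assumes the intrinsic parameters u0, v0, alpha_x, alpha_y and the baseline B are already known from calibration, and that the corresponding point has already been found in both images; all names are chosen here for illustration only.

```python
def triangulate_parallel(u1, v1, u2, u0, v0, alpha_x, alpha_y, baseline):
    """Recover the 3D coordinates (X, Y, Z) of a point observed by a
    parallel stereo rig, following formulas (1)-(2): depth from the
    disparity, then back-projection through the left camera.

    u1, v1   : pixel coordinates of the point in the left image
    u2       : pixel column of the corresponding point in the right image
    u0, v0   : principal point, in pixels
    alpha_x, alpha_y : focal lengths in pixel units
    baseline : distance B between the two optical centres
    """
    disparity = u1 - u2
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    Z = alpha_x * baseline / disparity   # Z = B*f / (u1 - u2), with f = alpha_x
    X = Z * (u1 - u0) / alpha_x          # X from formula (2)
    Y = Z * (v1 - v0) / alpha_y          # Y from formula (2)
    return X, Y, Z
```

As the surrounding text notes, the hard part is not this arithmetic but finding the corresponding point pair and calibrating the parameters that the function takes as given.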

Target location algorithm

As shown in Figure 2, the target localization algorithm is divided into the following steps (a code sketch of steps 2) and 3) is given at the end of this section):

1) Two cameras are used to obtain images of the environment and capture the road signs (landmarks);
2) In the YUV color space, a color threshold segmentation method is used to obtain the color features of the target, and an area threshold segmentation based on connected domains is applied to the connected regions, completing the color recognition of the target;
3) The boundary of the target is extracted, the target centroid is determined, and shape recognition of the detected edges is performed using the Hough transform, which improves the accuracy of target recognition;
4) According to the distance measurement principle of the binocular system, its mathematical model, and the screen coordinates of the target centroid, the three-dimensional space coordinates of the target centroid are obtained, completing the position and direction estimation of the binocular vision system;
5) An experimental platform for the hand / foot fusion binocular vision system is constructed, and the relative pose of the robot and the artificial landmark is obtained by coordinate system transformation.

Fig. 2. Location algorithm block diagram (image acquisition; RGB converted to YUV; color threshold segmentation; binarization; closed operation; area threshold segmentation; color recognition; Hough transform; shape recognition; center of mass calibration; target location; robot localization)

Hardware components of the binocular vision system

The hand / foot fusion four-legged robot and its binocular vision system are shown in Figure 3. The binocular vision system designed in this paper is composed of two C270 cameras, whose main function is to capture the original images from multiple angles. Each camera is connected to a horizontal scale bar through a screw with a diameter of 4 mm, and the horizontal scale bar is fixed on the robot by three 5 mm screws, so as to form the binocular vision system.

Fig. 3. Hand / foot fusion of a four-legged robot and its binocular vision system
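The color and shape recognition steps 2) and 3) above map onto standard image processing operations. The sketch below is only an illustration of one possible implementation using OpenCV, not the code used in the paper: the YUV threshold bounds, minimum area, Hough parameters, and the assumption of a roughly circular landmark are all placeholders.

```python
import cv2
import numpy as np

def locate_landmark(bgr_image, yuv_lower, yuv_upper, min_area=500):
    """Color recognition (YUV threshold + connected-domain area filter) followed
    by a Hough-transform shape check and centroid extraction, loosely following
    steps 2)-3) of the localization algorithm."""
    # Step 2: convert to YUV and apply the color threshold (binarization)
    yuv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YUV)
    mask = cv2.inRange(yuv, np.array(yuv_lower, np.uint8), np.array(yuv_upper, np.uint8))

    # Closed operation from Fig. 2: fill small holes in the binary mask
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Connected-domain analysis with an area threshold; keep the largest region
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    candidates = [i for i in range(1, num) if stats[i, cv2.CC_STAT_AREA] >= min_area]
    if not candidates:
        return None                                   # no landmark-colored region
    best = max(candidates, key=lambda i: stats[i, cv2.CC_STAT_AREA])

    # Step 3: shape recognition with the Hough circle transform (its gradient
    # method performs edge detection internally); a circular landmark is assumed
    region = np.where(labels == best, 255, 0).astype(np.uint8)
    circles = cv2.HoughCircles(region, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                               param1=100, param2=20, minRadius=5, maxRadius=0)
    if circles is None:
        return None                                   # region fails the shape check

    cx, cy = centroids[best]                          # centroid in pixel coordinates
    return float(cx), float(cy)
```

Running this on the left and right images yields the two centroid coordinates that feed the triangulation sketched in the previous section.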

Experimental results

1) Results of image acquisition and target recognition

First, two digital cameras are used to form the binocular system; the target objects are placed from left to right at 5 cm intervals, one position at a time, for a total of three experiments. Secondly, image processing based on the color and shape of the source images is carried out. In Figure 4, (a) and (b) are the left and right source images; (c) and (d) the results of color recognition; (e) and (f) the results of image preprocessing; (g) and (h) the results of shape recognition.

Fig. 4. Identification of the target object: (a) source image (left); (b) source image (right); (c) color recognition (left); (d) color recognition (right); (e) image preprocessing (left); (f) image preprocessing (right); (g) shape recognition (left); (h) shape recognition (right)

2) Three-dimensional coordinates of the target in space

A parallel binocular ranging model is used, and the camera parameter matrices are calibrated with Zhang Zhengyou's calibration method. The intrinsic parameter matrices of the left and right cameras, M_1 and M_2, and the extrinsic matrix M take the form:

M_1 = \begin{bmatrix} \alpha_x & 0 & u_0 \\ 0 & \alpha_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}    (3)

M_2 = \begin{bmatrix} \alpha_{x2} & 0 & u_{02} \\ 0 & \alpha_{y2} & v_{02} \\ 0 & 0 & 1 \end{bmatrix}    (4)

M = \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \end{bmatrix}    (5)

The centroid of the target area is then located in the image coordinates of the left and right camera images, as shown in Table 1.

Table 1. Coordinates of the target centroid on the screen

Serial number    Left camera image coordinates    Right camera image coordinates
1                (4, 94)                          (9, 94)
2                (3, 94)                          (8, 94)
3                (2, 94)                          (7, 94)

From these, the 3D coordinates of the object can be calculated by parallel binocular vision; the results are shown in Table 2.

Table 2. Calculation results of the 3D coordinates (unit: cm)

Serial number    X        Y       Z       Actual distance    Error
1                33.8     43.9    95.6                       2.4%
2                3.54     44.2    98.                        7.9%
3                3.97     45.     4.3                        4.3%

Position error analysis

From the above calculation results, it can be seen that there are some errors between the experimental results and the actual distances. After analysis, the main reasons are the following:

(1) Environmental impact: weather, temperature, lighting and other external factors can introduce errors into the calibration results, and this work has not considered the influence of the external environment on the calibration experiments.

(2) Accuracy of the cameras: because of errors in the design of the cameras' optical systems and errors generated during manufacturing, the images obtained with the cameras contain geometric deformations such as radial distortion and tangential distortion. The calibration method used in this paper assumes an ideal camera model, so the results contain a certain degree of error.

(3) Inaccurate positioning: in the calibration experiment, even after repeated adjustment, the lengths measured in the image processing cannot be made exactly equal.
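Error source (2) notes that an ideal pinhole model was assumed during calibration. As a hedged sketch only (not the procedure or the data of this paper), Zhang-style calibration as implemented in OpenCV can estimate the radial and tangential distortion coefficients together with the intrinsic matrix from checkerboard images; the board geometry and file pattern below are placeholders.

```python
import glob
import cv2
import numpy as np

BOARD_CORNERS = (9, 6)     # inner corners per row and column (placeholder)
SQUARE_SIZE = 2.5          # checkerboard square edge length in cm (placeholder)

# 3D coordinates of the board corners in the board plane (Z = 0)
objp = np.zeros((BOARD_CORNERS[0] * BOARD_CORNERS[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD_CORNERS[0], 0:BOARD_CORNERS[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, img_points, img_size = [], [], None
for path in glob.glob("calib_left_*.png"):            # placeholder file pattern
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, BOARD_CORNERS)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

assert obj_points, "no usable calibration images found"

# K is the intrinsic matrix (alpha_x, alpha_y, u0, v0);
# dist holds the distortion coefficients k1, k2, p1, p2, k3
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, img_size, None, None)
print("reprojection RMS:", rms)
print("intrinsic matrix:\n", K)
print("distortion coefficients:", dist.ravel())
```

Undistorting the images with these coefficients (cv2.undistort) before the centroid extraction would remove one of the error sources listed above.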

References

[1] Barnard S, et al. Computational stereo. ACM Computing Surveys, 1982, 14(4): 553-572.
[2] Li Qi, Feng Huajun, Xu Zhihai, Huang Hongqiang. Computer stereo vision technology [J]. Optical Technology, 2007(5).
[3] Xiang Haibing. The problem of computer vision in the new wave of development [J]. 1996(4).
[4] Zhang Fuxue. Robotics: Intelligent Robot Sensing Technology [M]. Beijing: Electronics Industry Press, 1996.
[5] Gao Qingji, Hong Bingrong, Yu Fengruan. Navigation of an autonomous soccer robot based on heterogeneous binocular vision [J]. Journal of Harbin Institute of Technology, 2003, 35(9).
[6] Minoru Asada, Takamaro Tanaka. Visual Tracking of Unknown Moving Object by Adaptive Binocular Visual Servoing. Proceedings of the 1999 IEEE International Conference on Multi-sensor Fusion and Intelligent Systems, 1999.