FABRICATION AND CONTROL OF VISION BASED WALL MOUNTED ROBOTIC ARM FOR POSE IDENTIFICATION AND MANIPULATION

Induraj R 1*, Sudheer A P 2
1* M Tech Scholar, NIT Calicut, indurajrk@gmail.com
2 Assistant Professor, NIT Calicut, apsudheer@nitc.ac.in

Abstract
Locating and handling different work part configurations in a material handling system is more effective with the assistance of a vision system. Object identification is usually done through RGB images, which are sensitive to illumination and to other interference from the surroundings. These limitations can be avoided if object recognition is done by depth segmentation. This work presents 3D vision sensor based work piece re-orientation: the work piece is moved to the desired orientation by a wall mounted robotic arm. In industry, delta robots are used for fast response, but they are not capable of handling all work part configurations. This is solved here using a serial manipulator with four degrees of freedom. The work also presents modeling, kinematic and dynamic analysis of the robotic arm. The project focuses on real-time object pose recognition based on a point cloud approach and can be applied to the automation of any work cell with vision assistance. The main focus of the work is the control of the manipulator by processing point cloud data.

Keywords: Work piece handling, Vision Assisted Robot, 3D Vision, Re-orientation

1 Introduction
As a comprehensive technology, machine vision has been used widely in various fields, including industry, where it makes a significant contribution to ensuring competitiveness in modern manufacturing. On automatic sorting lines, machine vision is used to detect and track moving targets and to guide the sorting robot in completing the sorting task. Computer vision is also used extensively in the food industries for automated sorting of vegetables, inspection, labeling, packing, etc. In an automated production line such as a Flexible Manufacturing System (FMS), material handling is completely automated by Automated Guided Vehicles (AGV), conveyors, Automated Storage and Retrieval Systems (ASRS), robots, etc. With the invention of NC machines, the production process was automated without human intervention, and technological advances led to CNC and DNC, by which the operator can control the production line with a few mouse clicks. Computer vision in industry is still growing at a fast pace; its limited adoption so far may be due to the need for more processing power or the lack of innovative sensing technology. The present scenario of vision systems used in industry and their drawbacks are described in the following section.

Object recognition systems have been an active research topic over the last decade. Wenchang Zhang et al. [1] proposed a vision based control strategy to perform high speed pick-and-place tasks on an automated product line. A CCD camera is used for image capturing, from which the position and shape of objects are obtained, and sorting is done by a delta robot. This model used a target tracking method based on a servo motor synchronized with the conveyor and used LabVIEW for control. Recent technology advances have enabled devices such as the Microsoft Kinect, which captures 3D data from the real world, and many works have used such data for obstacle avoidance in indoor environments. Jose-Juan Hernandez-Lopez et al. [2] used both color and depth data for object detection in a scene.
The RGB image is converted to the CIE-Lab color space to avoid light sensitivity, and a mask generated from the depth information is used to extract the object of interest. Since the RGB and depth cameras of the Kinect have an offset, a homography relationship is used to register them. Antonio Sgorbissa et al. [3] used a structure based object representation with the Scale Invariant Feature Transform (SIFT) algorithm for object identification in indoor environments; Euclidean clustering and RANSAC plane estimation are used to identify different pieces of furniture in the scene. Most processing techniques for 3D point clouds are computationally intensive. The processing power required for depth images is reduced with a fovea approach [4, 5], in which more processing power is dedicated to the small area of the image where the object of interest is present.
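As an aside, the depth-to-RGB registration step mentioned for [2] can be sketched as below. This is an illustrative Python example, not code from the cited paper: the 3x3 homography matrix H is a placeholder and would normally be estimated from corresponding calibration points between the two cameras.

```python
import numpy as np

# Hypothetical homography mapping depth-image pixels to RGB-image pixels.
H = np.array([[1.02, 0.00, -4.5],
              [0.00, 1.02,  3.1],
              [0.00, 0.00,  1.0]])

def depth_to_rgb_pixels(pixels_uv, H):
    """Map (u, v) depth-image pixel coordinates to RGB-image coordinates."""
    uv1 = np.hstack([pixels_uv, np.ones((len(pixels_uv), 1))])  # homogeneous coords
    mapped = uv1 @ H.T                                          # apply homography
    return mapped[:, :2] / mapped[:, 2:3]                       # de-homogenize

# Example: map three depth pixels that fall inside the object's depth mask.
mask_pixels = np.array([[120.0, 200.0], [121.0, 200.0], [122.0, 201.0]])
print(depth_to_rgb_pixels(mask_pixels, H))
```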

Data obtained from 3D spin images has been used for 3D difference detection between a CAD model and a real object by Svenja Kahn et al. [7]. Rostislav Hulik et al. [8] used Hough transform techniques in 3D image processing, applied to continuous plane detection from a point cloud; the proposed system is compared with a RANSAC-based plane detector.

2 Computer vision in industry
Most of the vision systems used in industry are based on 2D vision sensors: an image is taken by a CCD camera and processed with 2D techniques. Such systems rely on color segmentation or boundary detection to extract the required data from the RGB image. They respond quickly, since only low processing power is required, but the color of the background affects performance and proper illumination methods are necessary. 3D vision sensors such as Time of Flight (TOF) cameras and stereo imaging are rarely used in industry because of their lack of fast response. The Microsoft Kinect can be used as a solution to this problem: it has its own infrared light source for depth imaging, and its sensor delivers data at a rate of 30 frames per second. In machining processes where re-orientation of the work piece is required frequently, a robotic arm can be used with the assistance of Kinect vision. This avoids the need to continuously record the orientation of the work piece in a transfer line. Since a vision system based on depth imaging is used, no illumination facility is needed and the system is more reliable; depth segmentation is used rather than color segmentation for object identification. The robotic arm is mounted on a wall, which reduces the use of floor space.

3 Model of robotic manipulator
Most industrial robots or serial manipulators have five or six degrees of freedom (DOF). In this work, a robotic arm with four degrees of freedom is fabricated for work piece re-orientation; reducing the degrees of freedom of a serial manipulator makes control easier. The 4 DOF robotic arm is designed and fabricated for a particular workspace configuration. Since the reduction in DOF reduces flexibility, the arm cannot adapt to changes in the workspace configuration.

3.1 Four-DOF robotic arm
The design with the axes representation is shown in figure 1. Joint 1 provides yaw, joints 2 and 3 provide pitch, and roll is provided by joint 4.

Figure 1 Schematic diagram of robotic arm

Joint | θ (degree) | d | a | α (degree)
1 | θ1 | 0 | l1 |
2 | θ2 | 0 | l2 |
3 | θ3 | | |
4 | θ4 | | l3 + l4 |

Table 1 DH parameters of robotic arm

By conducting a dynamic analysis (Lagrange-Euler formulation), servomotors were selected for the actuation of each joint. The maximum payload considered in the design is 0.5 kg. The fabricated robotic arm is shown in figure 2.

Figure 2 Fabricated robotic arm

3.2 Workspace
With the help of the Denavit-Hartenberg (DH) parameters of the robotic arm given in Table 1, the forward kinematic relations are formulated and the workspace is plotted in MATLAB using the scatter3 function, as shown in figure 3. This plot helps to study the effect of the reduced DOF on the reachable workspace of the robotic arm.
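To make the workspace study concrete, the following Python sketch composes standard DH transforms for a 4-DOF arm and scatter-plots the reachable end-effector positions, as an illustrative stand-in for the MATLAB scatter3 plot described above. The link lengths, twist angles and joint limits used here are assumptions, not the fabricated arm's actual parameters.

```python
import numpy as np
import matplotlib.pyplot as plt

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

# Assumed DH constants (metres / radians) -- placeholders, not the paper's values.
L1, L2, L3, L4 = 0.10, 0.20, 0.15, 0.08

def forward_kinematics(q):
    """End-effector position for joint angles q = [q1, q2, q3, q4]."""
    T = (dh_transform(q[0], L1, 0.0, np.pi / 2) @
         dh_transform(q[1], 0.0, L2, 0.0) @
         dh_transform(q[2], 0.0, L3, 0.0) @
         dh_transform(q[3], 0.0, L4, 0.0))
    return T[:3, 3]

# Sample the joint space (joint 2 limited to 90 degrees downward, as in the text).
q1 = np.linspace(-np.pi / 2, np.pi / 2, 12)
q2 = np.linspace(-np.pi / 2, 0.0, 12)
q3 = np.linspace(-np.pi / 2, np.pi / 2, 12)
q4 = [0.0]                              # roll does not move the wrist point
pts = np.array([forward_kinematics([a, b, c, d])
                for a in q1 for b in q2 for c in q3 for d in q4])

ax = plt.figure().add_subplot(projection='3d')
ax.scatter(pts[:, 0], pts[:, 1], pts[:, 2], s=2)    # analogue of MATLAB scatter3
ax.set_xlabel('X'); ax.set_ylabel('Y'); ax.set_zlabel('Z')
plt.show()
```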

Figure 3 Workspace of robotic arm

A rotation of 180° is provided for joints 1, 3 and 4. For joint 2, rotation is limited to 90° downward, since the robotic arm is mounted vertically at a height from the floor and its workspace of interest lies below it. For re-orienting the work piece to the desired pose, rotations of joints 2, 3 and 4 are used.

4 Vision and control
The vision sensor used is the Microsoft Kinect, which is capable of both RGB and depth imaging of the object; in this work only the depth imaging capability is utilized. The Kinect has its own infrared light source for structured lighting and depth imaging, so it can work even in absolute darkness. Data from the Kinect is used for recording a database of different poses and then for pose identification, using the software Processing. Processing also controls the robotic arm for re-orienting objects, with the help of an Arduino Uno microcontroller. An external DC power supply of 4.9 V is used for actuating the robotic arm. The schematic diagram is shown in figure 4.

Figure 4 Schematic diagram (Kinect, database collection and pose identification in Processing, Arduino, robotic arm)

4.1 Three dimensional vision: point cloud approach
A point cloud is a collection of coordinate data of the points on the surface of the object that the infrared rays from the Kinect can reach. The coordinate systems (XYZ) of the Kinect and of Processing are different, which is taken into account when using inverse kinematics. To obtain the point cloud data of the object, a segmentation algorithm (Algorithm 1) is used in which a boundary is imposed in the x, y and z axes. To compare different point clouds, the first step is to align them to a common reference point; to attain this, the coordinates of each point cloud are updated with its centroid as origin.

Algorithm 1 Processing algorithm for point cloud segmentation
  Input: point cloud {P}in from Kinect
  Input: boundary limits in XYZ directions
  Output: point cloud {P}out
  for every point in {P}in
      if the point in {P}in is inside the boundary limits
          store the point in {P}out
  CG = average of {P}out              // finding centroid of {P}out
  for every point in {P}out
      subtract CG                     // updating {P}out with CG as origin

Algorithm 2 Processing algorithm for point cloud refining
  Input: point cloud {P}out           // output of Algorithm 1
  Input: number of scans parameter K1
  Input: point elimination parameter d1max
  Output: point cloud {P}1
  {P}1 = {P}out                       // storing first point cloud
  for K1 iterations                   // {P}out is updated by a new scan each time it is used
      {P}2 = {P}out                   // storing updated point cloud
      for every point in {P}1
          find the closest point in {P}2
          if distance to closest point < d1max
              find the average and update {P}1
          else
              delete the point from {P}1

While scanning, some junk coordinates also get included due to improper reflection of the infrared rays. To obtain more accurate results, the average point cloud of K1 consecutive scans is taken.
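A minimal Python sketch of Algorithms 1 and 2 is given below; the paper's implementation is in Processing, so this is an illustrative re-expression under stated assumptions. The array shapes, the brute-force nearest-neighbour search and the parameter values are assumptions, not the original code.

```python
import numpy as np

def segment(points, lo, hi):
    """Algorithm 1: keep points inside the XYZ boundary, then centre on the centroid."""
    inside = np.all((points >= lo) & (points <= hi), axis=1)
    p_out = points[inside]
    return p_out - p_out.mean(axis=0)        # centroid becomes the origin

def refine(scans, d1_max):
    """Algorithm 2: fuse K1 consecutive scans, dropping points with no close match."""
    p1 = scans[0]
    for p2 in scans[1:]:
        # Brute-force nearest neighbour of every point of p1 within p2.
        d = np.linalg.norm(p1[:, None, :] - p2[None, :, :], axis=2)
        nearest = d.argmin(axis=1)
        keep = d[np.arange(len(p1)), nearest] < d1_max
        # Average matched pairs; delete unmatched (junk) points.
        p1 = 0.5 * (p1[keep] + p2[nearest[keep]])
    return p1

# Toy usage with random data standing in for Kinect scans (K1 = 3 scans).
rng = np.random.default_rng(0)
scans = [segment(rng.uniform(-50, 50, size=(500, 3)), lo=-20, hi=20) for _ in range(3)]
print(refine(scans, d1_max=3.0).shape)
```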

At each scan, some points are deleted depending on the parameter d1max, which is the maximum allowable distance from each point to the closest point in consecutive scans; if the distance is more than d1max, the point corresponds to junk data. The variables K1 and d1max are tuned to get an optimum result: with an increase in these values the number of points in the output point cloud {P}1 also reduces, which may give inaccurate results. Algorithm 2 is used to attain these results. The same algorithm is used both for collecting the database point clouds of the different poses and for scanning the point cloud used for pose identification.

Algorithm 3 Processing algorithm for point cloud matching
  Input: point clouds {PC}1, {PC}2, {PC}3   // point clouds from the database
  Input: point cloud {P}1                   // point cloud for matching
  Input: d2max                              // maximum allowable distance when comparing two points for similarity
  Output: score1, score2, score3            // matching score with each point cloud in the database
  score1 = 0, score2 = 0, score3 = 0
  for every point in {P}1
      if distance to the closest point in {PC}1 < d2max
          score1++
      if distance to the closest point in {PC}2 < d2max
          score2++
      if distance to the closest point in {PC}3 < d2max
          score3++

Matching of a scanned point cloud with those in the database is done according to Algorithm 3. It is controlled by a factor d2max, which is the allowable distance to the corresponding point in the database. For experimental purposes, a database of three poses is collected as {PC}1, {PC}2 and {PC}3.

4.2 Controlling the robotic arm
Based on the score values, the pose is identified and Processing selects the sequence of joint rotations for re-orienting the object into the required pose. The picking operation is carried out through the inverse kinematic relations from the centroid of the point cloud; the position of the Kinect in the base frame of the robot is essential for the inverse kinematics to function properly. Communication with the microcontroller is done through serial communication. The microcontroller holds code in its memory for the control of the six servo motors connected to it.

5 Results and discussion
Design and fabrication of the robotic arm are completed and the robot is integrated with the vision sensor. The threshold values d1max = 3 and d2max = 1.5 are selected after testing the results with different values of the distance threshold parameters for image processing and analysis, and the number of iterations K1 is taken as 10 for obtaining the average point cloud. The algorithm identified the pose of the object accurately in every test run. However, the calculated centroid of the object differs from the actual value, mainly because only three sides of the object are scanned by the Kinect sensor during processing. Due to this, small errors occur in the inverse kinematics, which affect the picking operation. Based on the identified pose, the algorithm selects the sequence of joint movements from memory to re-orient the object. One limitation of the Kinect based vision system is the minimum range of the depth camera (about 20 inches), within which no data is available. Another serious issue is occlusion, which has been reduced by carefully selecting the position of the Kinect. The proposed vision based robotic manipulator cannot perform the picking operation at some points in the workspace; this is due to the low resolution of the sensors/actuators and the problems with the vision sensor mentioned above.

6 Conclusion and outlook
The proposed robotic arm is fabricated and automated with a microcontroller.
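The pose matching of Algorithm 3 can be sketched as follows, again in Python rather than the Processing implementation; the database clouds, cloud sizes and the d2max value are placeholders. The pose whose database cloud yields the highest score is selected.

```python
import numpy as np

def match_score(p1, pc, d2_max):
    """Algorithm 3: count points of the scanned cloud p1 whose nearest
    neighbour in the database cloud pc lies within d2_max."""
    d = np.linalg.norm(p1[:, None, :] - pc[None, :, :], axis=2)  # all pairwise distances
    return int(np.sum(d.min(axis=1) < d2_max))

def identify_pose(p1, database, d2_max=1.5):
    """Return the index of the best-matching database pose and all scores."""
    scores = [match_score(p1, pc, d2_max) for pc in database]
    return int(np.argmax(scores)), scores

# Toy usage: three database poses {PC}1..{PC}3 and one scanned cloud {P}1.
rng = np.random.default_rng(1)
database = [rng.uniform(-10, 10, size=(300, 3)) for _ in range(3)]
p1 = database[1] + rng.normal(scale=0.2, size=database[1].shape)  # noisy copy of pose 2
print(identify_pose(p1, database))    # expected to pick index 1
```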
An algorithm is presented for pose identification with a point cloud approach. Based on the identified pose, re-orientation is carried out by selecting programmed cycles. Experiments were conducted on the serial robotic arm to validate the proposed control strategy. Further improvements can be made by incorporating color data and by fully scanning the object. The system can be integrated with work cells, and by changing the programming it can also be applied to the sorting of work parts.

References
[1] Wenchang Zhang, Jiangping Mei and Yabin Ding (2012), Design and development of a high speed sorting system based on machine vision guiding, Physics Procedia, Vol. 25, pp. 1955-1965.
[2] Jose-Juan Hernandez-Lopez, Ana-Linnet Quintanilla-Olvera, Jose-Luis Lopez-Ramirez, Francisco-Javier Rangel-Butanda, Mario-Alberto Ibarra-Manzano and Dora-Luz Almanza-Ojeda (2012), Detecting objects using color and depth segmentation with Kinect sensor, Procedia Technology, Vol. 3.
[3] Antonio Sgorbissa and Damiano Verda (2013), Structure-based object representation and classification in mobile robotics through a Microsoft Kinect, Robotics and Autonomous Systems, Vol. 61.
[4] Isil Bozma, H. and Hulya Yalcin (2002), Visual processing and classification of items on a moving conveyor: a selective perception approach, Robotics and Computer Integrated Manufacturing, Vol. 18.

[5] Rafael Beserra Gomes, Bruno Marques Ferreira da Silva, Lourena Karin de Medeiros Rocha, Rafael Vidal Aroca, Luiz Carlos Pacheco Rodrigues Velho and Luiz Marcos Garcia Gonçalves (2013), Efficient 3D object recognition using foveated point cloud, Computers & Graphics, Vol. 37.
[6] Mattone, R., Campagiorni, G. and Galati, F. (2000), Sorting of items on a moving conveyor belt. Part 1: a technique for detecting and classifying objects, Robotics and Computer Integrated Manufacturing, Vol. 16.
[7] Svenja Kahn, Ulrich Bockholt, Arjan Kuijper and Dieter, W. F. (2013), Towards precise real-time 3D difference detection for industrial application, Computers in Industry, Vol. 64.
[8] Rostislav Hulik, Michal Spanel, Pavel Smrz and Zdenek Materna (2013), Continuous plane detection in point-cloud data based on 3D Hough Transform, Journal of Visual Communication and Image Representation, Vol. 25.
[9] Han-Pang Chiu, Leslie Pack Kaelbling and Tomas Lozano-Perez (2009), Learning to generate novel views of objects for class recognition, Computer Vision and Image Understanding, Vol. 113.
