12th International Conference on DEVELOPMENT AND APPLICATION SYSTEMS, Suceava, Romania, May 15-17, 2014

Robotic Arm Control in 3D Space Using Stereo Distance Calculation

Roland Szabó
Applied Electronics Department
Fac. of Elec. and Telecomm., Politehnica Univ. Timișoara
Continental Automotive România SRL
Timișoara, România
roland.szabo@etc.upt.ro

Aurel Gontean
Applied Electronics Department
Fac. of Elec. and Telecomm., Politehnica Univ. Timișoara
Timișoara, România
aurel.gontean@upt.ro

Abstract: We present a robotic arm control method based on stereo vision. Few robots in industry are controlled with a camera, and robots controlled with two or more cameras are even harder to find. We made a vision recognition application that can recognize certain key points on the robotic arm. These points are placed at the joints, over each motor, and have different colors, so recognizing them is a color recognition task. After color recognition, analytical geometry lets us compute the exact angle each motor must move. For movement about the 0X axis we used stereo distance calculation to determine how far the robotic arm must rotate at the base joint.

Keywords: color measurement; decision making; machine vision; manipulators; stereo vision; video cameras

I. INTRODUCTION

This paper presents robotic arm control with stereo cameras. In industry, most robots have no vision system: they simply follow predefined paths learned beforehand, make no decisions of their own, and run control software with almost no artificial intelligence. This is not the best solution in today's dynamic world. Consider cars: it is not like in the old times, when there were five colors and a single option configuration, so cars were easy to build; back then the colors were handled by building only one color the first day, another color the second day, and so on.

Today, on the same production line, consecutive cars usually differ in color and almost certainly differ in option configuration. Everyone's car is different, personalized in ways that cannot be found in other cars. For very expensive cars you can choose whatever color you want, or even match the car's color to a clothing item you like to wear. None of this configuration can be done by a blind robot, so robots with computer vision, decision making and artificial intelligence are needed. We need to populate the industry with robots carrying one, two or even more cameras, reducing the operator's work almost to zero.

Of course, this robotic arm image recognition method is not the best, or rather it is in its starting phase. We do not aim to replace a well-known and well-tested method, but to complete it, as a complementary or perhaps back-up method; working together, these yield more complete robotic arm control methods that minimize human work as much as possible. We think it is better to keep inverse and forward kinematics, but the PTP (Point To Point) movement method can sometimes be complemented by a computer vision method. This way an operator does not always need to teach the robotic arm; the robot can teach itself, recognizing its own arm and moving it accordingly. Calibration of the arm is very important and very time consuming, because the production line must be stopped to do it, and it must be done quite often to increase or maintain precision.
With the computer vision method, calibration of the robotic arm can be done during each movement: the arm always checks its position with the cameras and makes small adjustments to reach the exact point needed to perform the operation to the highest standard.

II. PROBLEM FORMULATION

We had a Lynxmotion AL5 type robotic arm in our laboratory with no control software, together with a couple of webcams and computers. We had many ideas, but we needed to choose something that could be implemented, so we studied all the mathematics for computer vision applications that we thought necessary to accomplish our goal [1]-[7].

Our task was clear: create a system that can control the robotic arm and move it to any physically reachable place using only color recognition algorithms. The robot calculates and decides the movement; the only trigger is the placed object, which acts like a biscuit to a dog. When the dog sees the biscuit it goes for it, but if you show it something else, it may not reach for it. The mechanism must decide everything: whether to reach for the object, how far, and in which direction.

The authors would like to thank the AFCEA Chapter from POLITEHNICA University of Bucharest and Continental Automotive România SRL for their support.

978-1-4799-5094-2/14/$31.00 ©2014 IEEE
III. PROBLEM SOLVING

A. Theoretical Background

The idea was to recognize each joint of the robot using only computer vision. To recognize the joints we used a well-known method, mostly applied to athletes when their movement data is captured on a PC: luminous spots are placed at the joints and united with lines to create a computer skeleton of the movement. The same method is used when computer game creators want human-like movements for their characters: they record the movement data of actors and assign it to the character moving in the game.

We used the same method for the robotic arm. We glued colored bottle stoppers to each joint so that the color recognition algorithm can detect them, then united the recognized spots with straight lines to obtain the skeleton of the robotic arm. With the arm recognized, we drew a 2D coordinate system shaped like a parallelogram at the gripper, from which we could calculate movement along the 0Z and 0Y coordinates. This is not actually an orthogonal coordinate system, but it does its job flawlessly. The length of each side of the parallelogram is one leg of a right triangle and the distance between the colored spots is the other leg; from these we compute, via the arctangent, the exact angle through which the specific motor must move.

For movement around the base (0X axis) we compute distance with stereo triangulation. We compute the distance between the cameras and the gripper and the distance between the cameras and the target point (a green bottle stopper in our case); the difference between these two distances is the distance between the gripper and the target point. This way we can move our robotic arm on 3 axes in 3D. We needed to compute the robotic values from the angles, for which we came up with equation (1).
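The arctangent step above can be sketched as follows (a minimal illustration in Python; the function name and the pixel inputs are ours, not from the paper):

```python
import math

def motor_angle_deg(side_length_px, spot_distance_px):
    """Angle through which a motor must move, from a right triangle
    whose legs are one parallelogram side and the distance between
    two recognized colored spots (both measured in pixels)."""
    return math.degrees(math.atan2(spot_distance_px, side_length_px))

# equal legs correspond to a 45 degree joint movement
angle = motor_angle_deg(100.0, 100.0)
print(angle)
```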
After this we concatenated these values into the SCPI (Standard Commands for Programmable Instruments) commands sent to the robotic arm over the RS-232 interface [1]-[7].

(2500 - 500) / (180° - 0°) = 2000 / 180 ≈ 11.1 units/degree	(1)

where 2500 is the maximum and 500 is the minimum robotic value, and the motors of the robot can make a 180° maximum movement. This means the relation shown in equation (2) [1]-[7]:

1° ≈ 11.1 robotic units	(2)

On Fig. 1 we can see the block diagram of the experiment: the two Logitech C270 webcams connect to the PC over USB; the software chain consists of the RGB filter, the blob size threshold, the center of gravity, the mathematical calculations and the drawing of the overlay on the image; and the VISA driver sends the SCPI commands to the AL5A robotic arm over RS-232. The image recognition algorithm works as follows: we apply an RGB filter for each color, then a blob size threshold to get rid of noise and false positives, then compute the center of gravity of the colored spot; this gives the coordinate where we draw the joint number and where the joint-uniting straight line starts.

Fig. 1. Block diagram of the experiment.

B. Mathematical Calculations

On Fig. 2 we can see the overlay drawings on the image acquired with the webcam. The difference of two vectors is calculated as:

v = P_i - P_j	(3)
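The conversion of equation (1) can be sketched like this (a hedged illustration; the channel number is made up, and the `#<ch>P<pulse>` command shape follows the SSC-32 manual style rather than a formula given in the paper):

```python
def angle_to_robotic_value(angle_deg):
    # equation (1): 500..2500 covers 0..180 degrees, ~11.1 units/degree
    if not 0.0 <= angle_deg <= 180.0:
        raise ValueError("servo range is 0..180 degrees")
    return int(round(500 + angle_deg * (2500 - 500) / 180.0))

def position_command(channel, angle_deg):
    # SSC-32-style positioning command string for the RS-232 link
    return "#%dP%d\r" % (channel, angle_to_robotic_value(angle_deg))

print(angle_to_robotic_value(90))     # 1500, the wake-up middle position
print(repr(position_command(0, 90)))
```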
u = P_k - P_l	(4)

w = u + v	(5)

The vector length is calculated with the following formula:

|v| = sqrt(v_x^2 + v_y^2)

Fig. 2. The overlay drawings on the robotic arm for the mathematical calculations (points P_0, P_1, P_2, P_4, P_5, P_6, P_7 and the target point P_T).

The orthogonal vector is computed in the following way:

v_perp = (-v_y, v_x)	(6)

The parallelogram is calculated with the same primitives (equations (7)-(16)): each side is the difference of two recognized points, each side length is sqrt(x^2 + y^2), and the remaining corners are sums of a point and a side vector.
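The vector operations underlying these equations reduce to a few primitives; a self-contained sketch (the sample point values are made up):

```python
import math

def vec_sub(p, q):
    # difference of two 2-D points, as in equations (3)-(4)
    return (p[0] - q[0], p[1] - q[1])

def vec_add(p, v):
    # point plus vector, as in equation (5)
    return (p[0] + v[0], p[1] + v[1])

def vec_len(v):
    # Euclidean length sqrt(x^2 + y^2)
    return math.hypot(v[0], v[1])

def orthogonal(v):
    # equation (6): a perpendicular vector, i.e. a 90 degree rotation
    return (-v[1], v[0])

p1, p2 = (4.0, 1.0), (1.0, 5.0)
side = vec_sub(p2, p1)          # (-3.0, 4.0)
print(vec_len(side))            # 5.0
print(orthogonal(side))         # (-4.0, -3.0)
```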
The construction continues with the same sums, differences and lengths (equations (17)-(33)), producing the parallelogram sides and the gripper point used for the joint angles.

The stereo distance calculation is done as shown on Fig. 3. The target is located at horizontal positions a and b in the left and right images, and the distance follows from the camera separation, the disparity between a and b, a scaling factor and an offset (equation (34)). The resulting angle is brought to degrees with the usual 180/π conversion (equation (35)).

Fig. 3. Stereo distance calculation algorithm (quantities: distance, camera separation, a, b, factor, offset).

The robotic values are calculated the following way:

robotic value = angle · 2000 / 180	(36)

The real-world coordinates were converted to pixels using the image resolution:

width = 320 px, height = 240 px	(37)
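The paper's exact factor and offset constants for equation (34) did not survive the layout, so the following is a standard pinhole-model triangulation stand-in, together with the pixel scale of equations (38)-(39); the focal length value here is an assumption:

```python
def stereo_distance_cm(a_px, b_px, separation_cm, focal_px):
    """Pinhole-model triangulation: distance = B * f / disparity,
    with B the camera separation and f the focal length in pixels.
    A generic stand-in for Fig. 3's factor/offset formulation."""
    disparity = a_px - b_px     # target column in left minus right image
    if disparity <= 0:
        raise ValueError("target must have positive disparity")
    return separation_cm * focal_px / disparity

# pixel scale from equations (38)-(39): 320 px across a 15.2 in screen
PX_PER_CM = (320 / 15.2) / 2.54

print(stereo_distance_cm(200, 150, 10.0, 500.0))  # 100.0 cm
print(round(PX_PER_CM, 2))                        # 8.29
```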
The 19" monitor used to display the images has a 4:3 visible area of 15.2" × 11.4", so the horizontal pixel scale is:

320 px / 15.2 in = 21.05 px/inch	(38)

21.05 / 2.54 ≈ 8.29 px/cm	(39)

C. Software Implementation

The software was implemented in LabWindows/CVI. We chose this environment because it offers a rich GUI (Graphical User Interface) with many buttons and displays, and it can control many computer ports without external libraries. This way we could easily control the robotic arm over the RS-232 interface.

On Fig. 4 we can see the GUI of the application. The connected LED shows that the webcams are working. The SSC Version button reads the firmware version of the SSC-32 servo control board, which drives the robotic arm's motors; the indicator shows that we have version 2.01 of the SSC-32 board. The Initialize Motors button tests the digital ports of the ATmega168 microcontroller on the SSC-32 board, which also tests the servomotors. The All Servos button wakes up the robotic arm: it puts all servos in the middle position, which has the value 1500. The robot is assembled mechanically so that after waking up it has the shape of the Greek capital letter gamma (Γ); this is the robot's initial position. The last button is the Draw button, which draws all the overlays over the robotic arm's webcam image and moves the robotic arm to reach the desired object (in our case, the green bottle stopper).

On Fig. 5.a we can see the blue color detection: the blue bottle stopper at the base joint in the left and right images. On Fig. 5.b we can see the yellow color detection: the yellow bottle stopper at the elbow joint in the left and right images. On Fig. 5.c we can see the red color detection: the red bottle stopper at the gripper joint in the left and right images. On Fig.
5.d we can see the green color detection: the green bottle stopper at the target point in the left and right images.

Fig. 4. Robotic arm control GUI.
Fig. 5.a. Blue color detection in the left and right images.
Fig. 5.b. Yellow color detection in the left and right images.
Fig. 5.c. Red color detection in the left and right images.
Fig. 5.d. Green color detection in the left and right images.

On Fig. 6 we can see the initial stereo image of the robotic arm acquired with the two webcams.
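The detection chain used for each of these colors (RGB filter, then blob size threshold, then center of gravity) can be sketched on a plain nested list of (r, g, b) tuples; the threshold values below are illustrative, not the paper's:

```python
def detect_spot(image, is_target_color, min_blob_px=20):
    """RGB filter -> blob size threshold -> center of gravity."""
    hits = [(x, y)
            for y, row in enumerate(image)
            for x, rgb in enumerate(row)
            if is_target_color(*rgb)]
    if len(hits) < min_blob_px:          # reject noise and false positives
        return None
    cx = sum(x for x, _ in hits) / len(hits)
    cy = sum(y for _, y in hits) / len(hits)
    return (cx, cy)                      # where the joint overlay is drawn

# red filter: strong red channel, weak green and blue (illustrative)
is_red = lambda r, g, b: r > 150 and g < 80 and b < 80

# 10x10 black frame with a 5x5 red blob centered at (4, 4)
img = [[(0, 0, 0)] * 10 for _ in range(10)]
for y in range(2, 7):
    for x in range(2, 7):
        img[y][x] = (200, 0, 0)
print(detect_spot(img, is_red))  # (4.0, 4.0)
```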
Fig. 6. The initial stereo image of the robotic arm acquired by the two webcams.

On Fig. 7 we can see the result of the overlay after the Draw button is pressed. All colors are recognized and the lines and circles are drawn once the mathematical calculations are done. Finally, with the lengths of the sides of the parallelogram we can compute the exact angles each motor must move. Movement on the 0X axis is driven by the stereo triangulation calculations presented in the previous chapter.

Fig. 7. Overlay of the lines and circles on the robotic arm after the color detections.

IV. CONCLUSION

As we have seen, we created an application that can control the robotic arm with stereo vision in 3D space. We can control 3 motors, one for each axis of the XYZ coordinate system. The system is not perfect, but it performs very well. The algorithm could be enhanced to control robotic arms with even more joints; the basic idea remains the same. We also want to implement the system in more programming languages and operating systems to see how it performs on different platforms and environments. We would also like to create an embedded version of the whole system on an FPGA; after this we could design the layout of the silicon die and create our own ASIC (Application-Specific Integrated Circuit) for robotic arm control [1]-[7]. The whole system could be very interesting: it would control the robotic arm through very diverse I/O interfaces, with two USB or two Ethernet inputs for the stereo cameras, one RS-232 output to control the robotic arm, and perhaps one HDMI (DVI) or VGA output to display what the arm is doing.
REFERENCES

[1] R. Szabó, A. Gontean, "Controlling a Robotic Arm in the 3D Space with Stereo Vision," 21st Telecommunications Forum (TELFOR), 2013, pp. 916-919.
[2] R. Szabó, A. Gontean, "Remotely Commanding the Lynxmotion AL5 Type Robotic Arms," 21st Telecommunications Forum (TELFOR), 2013, pp. 889-892.
[3] R. Szabó, A. Gontean, "Full 3D Robotic Arm Control with Stereo Cameras Made in LabVIEW," Federated Conference on Computer Science and Information Systems (FedCSIS), Cracow, Poland, 2013, pp. 37-42.
[4] R. Szabó, A. Gontean, "Creating a Programming Language for the AL5 Type Robotic Arms," 36th International Conference on Telecommunications and Signal Processing (TSP), 2013, pp. 62-65.
[5] R. Szabó, A. Gontean, "Controlling Robotic Arm in 2D Space with Computer Vision Using a Webcam," 19th International Conference on Soft Computing (MENDEL), 2013, pp. 301-306.
[6] R. Szabó, I. Lie, "Automated Colored Object Sorting Application for Robotic Arms," International Symposium on Electronics and Telecommunications (ISETC), Tenth Edition, 2012, pp. 95-98.
[7] R. Szabó, A. Gontean, I. Lie, "Smart Commanding of a Robotic Arm with Mouse and a Hexapod with Microcontroller," 18th International Conference on Soft Computing (MENDEL), 2012, pp. 476-481.
[8] SSC-32 User Manual. Available: http://www.lynxmotion.com/images/data/ssc-32.pdf, last visited April 20, 2013.
[9] Wong Guan Hao, Yap Yee Leck, Lim Chot Hun, "6-DOF PC-Based Robotic Arm (PC-ROBOARM) with efficient trajectory planning and speed control," 4th International Conference on Mechatronics (ICOM), 2011, pp. 1-7.
[10] Woosung Yang, Ji-Hun Bae, Yonghwan Oh, Nak Young Chong, Bum-Jae You, Sang-Rok Oh, "CPG based self-adapting multi-DOF robotic arm control," International Conference on Intelligent Robots and Systems (IROS), 2010, pp. 4236-4243.
[11] E. Oyama, T. Maeda, J.Q. Gan, E.M. Rosales, K.F. MacDorman, S. Tachi, A. Agah, "Inverse kinematics learning for robotic arms with fewer degrees of freedom by modular neural network systems," International Conference on Intelligent Robots and Systems (IROS), 2005, pp. 1791-1798.
[12] N. Ahuja, U.S. Banerjee, V.A. Darbhe, T.N. Mapara, A.D. Matkar, R.K. Nirmal, S. Balagopalan, "Computer controlled robotic arm," 16th IEEE Symposium on Computer-Based Medical Systems, 2003, pp. 361-366.
[13] M.H. Liyanage, N. Krouglicof, R. Gosine, "Design and control of a high performance SCARA type robotic arm with rotary hydraulic actuators," Canadian Conference on Electrical and Computer Engineering (CCECE), 2009, pp. 827-832.