DESIGN AND DEVELOPMENT OF AUTOMATED 3D VISUAL TRACKING SYSTEM


Journal of the Chinese Institute of Engineers, Vol. 28, No. 6, pp. 907-914 (2005)

Nazim Mir-Nasiri*, Nur Azah binti Hamzaid, and Abdul Basid bin Shahul Hameed

ABSTRACT

Camera-based systems are frequently used to track moving objects that are in their field of view. This paper describes the design and development of a camera-based visual system that can constantly track a moving object without the necessity of calibrating the camera in real-world coordinates. This reduces the complexity of the system and its processing time by eliminating unnecessary conversions and calibrations. The system consists of a two-motor pan-tilt camera driving mechanism, a PCI image acquisition board, and a PWM-based DC-motor driver board. It uses image processing techniques to identify and locate the object in the 3D scene, and motion control algorithms to direct the camera towards the object. Thus the objective of the project is to develop a vision system and control algorithms that can lock onto a moving object within the field of its view. The developed software and related interface hardware monitor and control the motors in such a way that the moving object is always located right at the center of the camera image plane. The system, in general, simulates the 3D motion of the human eye, which always tends to focus on a moving object within the range of its view; it imitates the tracking ability of the human eye.

Key Words: image processing, object recognition, motion control.

Based on an awarded paper presented at Automation 2005, the 8th International Conference on Automation Technology, Taichung, Taiwan, R.O.C., May 5-6, 2005.
*Corresponding author. (e-mail: nazim@iiu.edu.my)
The authors are with the Department of Mechatronics Engineering, International Islamic University Malaysia, Jalan Gombak, 53100, KL, Malaysia.

I. INTRODUCTION

Many developed visual systems are used to track moving objects, and there are many methods to implement visual servoing or visual tracking. Some methods use training, a known model, or initialization, whereas other methods involve a signature vector or motion prediction. These methods generally require a grid and calibration for position estimation. One of the approaches to visual tracking is to use Active Appearance Models (AAM). However, it is limited to having all points of the model visible in all frames. Birkbeck et al. (2004) have introduced a notion of visibility uncertainty for the points in the AAM, removing the above limitation and therefore allowing the object to contain self-occlusions. The visibility uncertainty is easily integrated into the existing AAM framework, keeping model initialization time to a minimum. Mikhalsky et al. (2004) have proposed an algorithm based on extracting a signature vector from a target image and subsequently detecting and tracking its location in the vicinity of the origin. The process comprises three main phases: signal formation, extraction of signatures, and matching. Leonard et al. (2004) have implemented visual servoing by learning to perform tasks such as centering. Their system uses function approximation from reinforcement learning to learn the visuomotor function of a task, which relates actions to perceptual variations. Sim et al. (2002) proposed the Modified Smith Predictor-DeMenthon-Horaud (MSP-DH) visual servoing system. The DH pose estimation algorithm has the desired accuracy and convergence rate to make visual tracking possible.
For both the pose estimation and the visual servo controller, a simple and elegant structure is retained. The research by Denzler et al. (1994) describes a two-stage active vision

system for tracking a moving object: the object is detected in an overview image of the scene, and a close-up view is then taken by changing the frame grabber's parameters and by a positional change of the camera mounted on a robot's hand. For object tracking, they used active contour models, where the active contour is interactively initialized on the first image of the sequence. However, errors may occur if there are strong background edges near the object, or if the ROI only partially covers the moving object. Another method, by Carter et al. (2003), tracks an object using a robust algorithm for arbitrary object tracking in long image sequences. This technique extends the dynamic Hough transform to detect arbitrary shapes undergoing affine motion. The proposed tracking algorithm requires the whole image sequence to be processed globally. Crowley et al. (1995) have conducted research comparing kinematic and visual servoing techniques. Kinematics is shown to be efficient if there is a sufficiently precise model of the kinematic chain. Large errors in the kinematic model can cause the system to oscillate. In contrast, visual servoing is extremely robust with respect to errors in the kinematic model, but requires a much larger number of cycles to converge. Shen and Pan (2003) used a state vector for visual servoing. A simple image Jacobian matrix is taken from the state equation and leads to a simple adaptive controller which can drive a camera to the ideal position. However, the system needs to be calibrated. Kragic et al. (2001) have discussed methods for integration that are based on weak or model-free approaches. In particular, voting and fuzzy logic methods have been studied. The results show that integration using weak methods enables a significant increase in robustness.

In this paper, the main objective is to present a simple but effective camera-based vision system which acquires gray images of the scene in continuous mode (25 frames/second), detects the presence of a particular shaped object in a 3D scene, determines the object's center of mass in pixel coordinates within the image plane for every frame, and automatically guides the camera driving mechanism in order to align the view axis of the camera with the line of sight to the object on a per-frame basis. In other words, the system forces the mass center of the object's image right into the center of the image plane. Guidance of the camera motors has been accomplished by a quadrant approach. Taking into consideration the high speed of the PCI acquisition board, the simple and effective binary image processing and measuring algorithms which do not require calibration of the camera in real-world coordinates, and the fast interface between the motor IC driver chip and the computer, an acceptable level of system time response has been achieved. The response time is mainly determined by the drive ratio of the pan-tilt mechanism.

Fig. 1 Acquired image

II. CONCEPTUAL DESIGN OF THE VISUAL TRACKING PROCESS

The developed system enables continuous tracking of a moving object and simulates the motion of the human eye. For our experiments a spherical black ball has been selected in order to simplify the recognition process. A solid black object has been selected to provide sufficient contrast between the object image and the surrounding background.
A spherical object has been selected to provide a constant circular boundary projection of the object onto the image plane regardless of its 3D orientation and distance from the camera.

1. Image Enhancement and Processing

The first step in the tracking system operation after system initialization is image acquisition, followed by enhancement. The enhancement is essential to ensure that the images have sufficient contrast with the background and fewer shades of gray within the object's image. Since the selected object is close to black, i.e. 0 on the gray scale, the nonlinear power 1/Y function, or inverse Gamma correction, has been selected to increase the contrast in dark areas at the expense of the contrast in bright areas of the image. This correction tool also increases the overall brightness of the image. The Gamma value 1/Y has been selected experimentally. The original acquired image is shown in Fig. 1, while the Gamma-corrected image is shown in Fig. 2. The next tool used for image enhancement is a linear correction function. This tool makes dark objects darker and light objects lighter, performing two corrections to the image. Firstly, it reduces the variation of pixel gray values within the image of the ball; secondly, it increases the contrast between black ball pixels and pixels of other objects in the background. The resulting image after the application of this tool with a line slope of 53 is shown in Fig. 3.
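For concreteness, the two enhancement steps reduce to pixel-wise remaps of the gray values. The sketch below is in Python/NumPy rather than the LabVIEW/IMAQ tools actually used; the Gamma value and the pivot gray level are assumed placeholders, since only the line slope of 53 is reported above.

import numpy as np

def inverse_gamma_correction(img: np.ndarray, gamma: float = 2.0) -> np.ndarray:
    # Power 1/gamma remap: stretches contrast in dark regions and raises
    # overall brightness; gamma = 2.0 is an assumed placeholder value.
    norm = img.astype(np.float64) / 255.0
    return np.clip(255.0 * norm ** (1.0 / gamma), 0, 255).astype(np.uint8)

def linear_correction(img: np.ndarray, slope: float = 53.0,
                      pivot: float = 128.0) -> np.ndarray:
    # Steep linear remap about an assumed mid-gray pivot: dark pixels
    # saturate toward 0 and light pixels toward 255 (slope 53 as above).
    out = slope * (img.astype(np.float64) - pivot) + pivot
    return np.clip(out, 0, 255).astype(np.uint8)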

Fig. 2 Gamma-corrected image
Fig. 3 Linear-corrected image
Fig. 4 Binary image of the scene

2. Object Identification

Identification and recognition of the object within the gray image of the entire scene is implemented first by converting the acquired gray image into a binary one, and then by shape recognition of the object. An appropriate threshold for the 8-bit gray image under the given ambient illumination has been selected as the range between 2 and 211 on the gray scale. The result of applying this threshold to the image of the scene is shown in Fig. 4. The identification or recognition of the circular object within the binary image still remains a problem, although the image has been enhanced before binarization. Blob analysis is implemented on the binary image by using several binary filters to isolate the object of interest from other unwanted objects in the scene. The system first labels all blobs (or particles) in the binary image, removes border objects and noise particles from the image, then calculates and matches critical circularity parameters of the remaining objects to isolate the object of interest. By removing border objects we always assume that the object of interest is well within the plane of projection and does not touch its borders. Removing noise particles from the image by using well-known particle area filters is not a simple task, because the system may consider the object of interest to be noise too if it is located far from the camera. The experiments with the selected object size show that the best choice for the particle area filter is a range from 50 pixels up to the object's measured pixel area at the closest working distance; all particles outside this range must be removed by the filter. According to measurements, the object reaches this maximum area when it is 0.3 m from the camera, and shrinks to 50 pixels in area when it is 6 m from the camera. According to observation, if the object is closer than 0.3 m to the camera, the corresponding image of the object within the image plane lacks contrast with the background, which causes problems in segmenting the image. Thus the value of 0.3 m is selected as the minimum allowed distance from the object to the camera. On the other hand, if the object is more than 6 m from the camera it appears very small in the image plane and this, in turn, causes difficulties in differentiating between the object and the noise that is always present in the image. Thus the value of 6 m is selected as the maximum allowed distance from the camera to the object. Of course, these limits can vary depending on the resolution of the CCD sensor and the scene illumination; this range of distances was selected based on experiments done in the lab with a particular type of monochrome camera under fixed lighting conditions. However, the particle area filter alone is not sufficient to recognize the object, since other objects with similar areas but different shapes may be present in the image. An object of interest should satisfy certain shape criteria to differentiate it from the surroundings. It should be identifiable and always distinct from the environment to ensure the system always receives proper input for continuous recognition.
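This segmentation stage can be sketched with OpenCV's connected-component analysis standing in for the LabVIEW/IMAQ blob tools. The upper area bound a_max is a placeholder for the ball's measured area at 0.3 m, which is not given above.

import cv2
import numpy as np

def candidate_blobs(gray: np.ndarray, t_lo: int = 2, t_hi: int = 211,
                    a_min: int = 50, a_max: int = 20000):
    # Threshold to a binary image, then drop border-touching blobs and
    # area-filter the rest; a_max = 20000 is an assumed placeholder.
    binary = np.where((gray >= t_lo) & (gray <= t_hi), 255, 0).astype(np.uint8)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    h, w = gray.shape
    kept = []
    for i in range(1, n):  # label 0 is the background
        x, y, bw, bh, area = stats[i]
        on_border = x == 0 or y == 0 or x + bw == w or y + bh == h
        if not on_border and a_min <= area <= a_max:
            kept.append((labels == i, tuple(centroids[i])))
    return kept  # list of (blob mask, centroid) pairs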
In this work the main concern was not to use a complex method for object recognition, but rather to assure continuous success of the recognition process for the subsequent control of the camera motors following the moving object. Thus, the strategy was to choose a 3D object which has a simple and unchanged 2D projection shape in the image plane regardless of its orientation and position in 3D space.

Fig. 5 Recognized object
Fig. 6 Quadrants of the image plane

A spherical object always has a circular projection in the image plane regardless of its orientation. The circularity criteria are simple and the most reliable criteria for recognition. The experiments show that the best tool to isolate circles from other shapes is the elongation factor filter. The elongation factor is the ratio of the maximum diameter of a binary object to the short side of a rectangle with the same area encompassing the object. The more elongated the shape of a particle, the higher its elongation factor. This factor is able to identify a binary circular object, as a circle has a small elongation factor. Based on the experiments done in the lab (Fig. 1), all blobs or particles with an elongation factor greater than 1.4 should be eliminated. The main advantage of the elongation factor tool over, for example, a compactness factor is that it is able to differentiate between a circle and a square having the same pixel area. The result of applying the border object removal, area, and elongation factor filters to the binary image is shown in Fig. 5.

3. Tracking Strategy: Quadrant Approach

Once the spherical object has been identified, the final step is to apply an appropriate strategy for extracting this moving object from every image frame acquired by the camera in continuous mode. The strategy developed in this work is based on calculating the object's blob centroid and applying the quadrant approach. Just as the human eye tracks moving objects by changing the orientation of the eye in order to keep the object within its view, the developed system changes the orientation of its camera in order to keep the object's image right at the center of the image plane. To keep the object's centroid always at the center of the image plane, the system must constantly check the errors in the location of the centroid with respect to the image plane's center in every acquired frame and take corrective measures to compensate for them. Since the camera acquires 25 frames/second, the system's response to the errors can be quite prompt. The quadrant approach to estimating the current position error of the object with respect to the center of the image plane is explained in Fig. 6. The field of view of the camera (768 pixels wide) is divided into four quadrants. The error between the object's centroid (x, y) and the center of the image plane (0, 0), i.e. the offsets from the center in the form of the two components dx and dy, is calculated. The four possible combinations of the signs of dx and dy uniquely determine the quadrant in which the object falls. The system then chooses one of the four possible combinations of the two motor rotation senses (but not magnitudes) to compensate for the current position errors dx and dy, respectively. A pan-motion motor runs at an appropriate speed to compensate for the dx error, and a tilt-motion motor does the same to compensate for the dy error.
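Both tests reduce to a few lines of code. The sketch below implements the elongation factor exactly as defined above, i.e. the maximum diameter divided by the short side of an equal-area rectangle whose long side is that diameter, which is d²/area; this evaluates to 4/π ≈ 1.27 for an ideal circle and 2 for a square, so the 1.4 cutoff separates them. The 576-pixel image height is an assumption based on the CCIR standard, since only the 768-pixel width is stated above.

import cv2
import numpy as np

def elongation_factor(mask: np.ndarray) -> float:
    # Ratio of the max blob diameter to the short side of an equal-area
    # rectangle whose long side is that diameter, i.e. d * d / area.
    ys, xs = np.nonzero(mask)
    area = float(xs.size)
    if area == 0:
        return float("inf")
    hull = cv2.convexHull(np.column_stack((xs, ys)).astype(np.int32))
    pts = hull.reshape(-1, 2).astype(np.float64)
    d = max(float(np.max(np.linalg.norm(pts - p, axis=1))) for p in pts)
    return d * d / area

def quadrant_errors(cx: float, cy: float, width: int = 768, height: int = 576):
    # Centroid offsets from the image center; the signs of (dx, dy) pick
    # the quadrant and hence the rotation sense of each motor.
    dx, dy = cx - width / 2.0, cy - height / 2.0
    pan_dir = (dx > 0) - (dx < 0)    # -1, 0 or +1
    tilt_dir = (dy > 0) - (dy < 0)
    return dx, dy, pan_dir, tilt_dir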
If the system successfully compensates for both errors, the object appears right at the center of the image plane and the motors cease running immediately. Thus, there is no need to calculate the amount of shaft rotation in this system. If the system makes corrections for every frame of the successively acquired images (one every 1/25 second), then it is able to track objects moving at reasonable speeds. Objects moving in front of the camera can be tracked only if their speed does not exceed the corresponding speed of the ready-made camera driving mechanism. Testing of the developed system shows that in a drastically changing environment the system may either recognize more than one object of interest or lose the object from its view. In the first case, more than one object may satisfy the recognition criteria set for the system, i.e. pass the elongation factor filter.

In the second case, the moving object of interest may temporarily overlap with another object with similar shades of gray, making the object unrecognizable by the system. In order to stabilize the performance of the system in such situations, a temporary halt has been incorporated into the system: when the system encounters such situations, it does not react to the ambiguous image frames and waits for subsequent frames with the expected results.

III. CAMERA MOTION CONTROL

The camera motion is controlled via speed control of the DC motors driving the camera. As mentioned earlier, the tracking strategy is to drive the motors as long as there is an error in the position of the object in the image plane. Under this strategy the amount of camera rotation is not important, but the speed of rotation is. The system should be able to track fast-moving objects at the full speed of the motors and slow-moving objects at reduced motor speed.

1. Motor Speed Control Strategies

The DC motor speed is varied by varying the supplied voltage. The voltage is supplied in pulses of variable width, i.e. using Pulse Width Modulation (PWM). Therefore, the motor speed is controlled by adjusting the duty cycle of the PWM function and can vary from zero to the nominal value. The direction of motor rotation is controlled by the polarity of the supplied voltage. The amount of voltage supplied by the system to the motors depends on the absolute values of the errors dx and dy. If the instantaneous magnitude of the error is large or growing because the object is moving faster than the camera, the motor should accelerate to catch up with the object. On the other hand, if the magnitude of the error is small or shrinking because the object is moving slower than the camera, the motor should decelerate accordingly in order to avoid excessive oscillations when the camera reaches the target, i.e. when the camera's center is close to the object's centroid. Numerous tests of the system, comprising a ready-made pan-tilt surveillance mechanism with built-in DC motors, show that the following cycloidal and ramp functions for the duty cycle y (in percent) of the PWM signal, as a function of the position error magnitude x (in pixels), help to smooth the motion of the tracking camera and reduce its response time:

y = (13 − 5)x/2 + 5, for 0 ≤ x ≤ 2
y = ((100 − 13)/(2π)) [2π(x − 2)/(50 − 2) − sin(2π(x − 2)/(50 − 2))] + 13, for 2 ≤ x ≤ 50
y = 100, for x ≥ 50    (1)

y = ((100 − 25)/(2π)) [2πx/15 − sin(2πx/15)] + 25, for 0 ≤ x ≤ 15
y = 100, for x ≥ 15    (2)

Function (1) controls the speed of the pan-motion motor and function (2) controls the speed of the tilt-motion motor. The two functions differ because of the difference in the velocity ratios of the two gear transmissions for the pan and tilt motions of the camera. The reduction ratio of the built-in tilt mechanism is greater than that of the pan mechanism, which makes the camera move slower in the vertical direction than in the horizontal direction for the same input motor voltage. The difference in the speeds of the two motors becomes more pronounced when the camera must be driven at very low speed just before stopping. That is why an additional ramp function has been introduced into the control of the pan motion, to further reduce the panning speed when the object is very close to the center of the image plane.
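Functions (1) and (2) translate directly into code. Together with the helpers sketched in the previous sections (candidate_blobs, elongation_factor, quadrant_errors), they yield a complete per-frame step; the single-candidate guard below implements the temporary-halt behavior described earlier.

import math

def pan_duty(dx: float) -> float:
    # PWM duty cycle (%) for the pan motor vs. |dx| in pixels, per Eq. (1).
    x = abs(dx)
    if x <= 2:                                   # ramp: 5% -> 13%
        return (13 - 5) * x / 2 + 5
    if x <= 50:                                  # cycloid: 13% -> 100%
        t = 2 * math.pi * (x - 2) / (50 - 2)
        return (100 - 13) * (t - math.sin(t)) / (2 * math.pi) + 13
    return 100.0

def tilt_duty(dy: float) -> float:
    # PWM duty cycle (%) for the tilt motor vs. |dy| in pixels, per Eq. (2).
    x = abs(dy)
    if x <= 15:                                  # cycloid: 25% -> 100%
        t = 2 * math.pi * x / 15
        return (100 - 25) * (t - math.sin(t)) / (2 * math.pi) + 25
    return 100.0

def track_step(gray):
    # One control step: motor directions and duty cycles, or None on an
    # ambiguous frame (zero or several candidates -> wait for next frame).
    blobs = [(m, c) for m, c in candidate_blobs(gray)
             if elongation_factor(m) <= 1.4]
    if len(blobs) != 1:
        return None
    _, (cx, cy) = blobs[0]
    dx, dy, pan_dir, tilt_dir = quadrant_errors(cx, cy)
    return pan_dir, pan_duty(dx), tilt_dir, tilt_duty(dy)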
The main purpose of the two cycloidal functions in (1) and (2) is to provide a smooth transition from full-speed rotation of the motors when the object is far from the center of the image plane, through low-speed rotation when the object is close to the center, to zero speed when the object has actually reached the center. The advantage of the cycloidal speed control function is that

it has zero acceleration (the first derivative of the speed function) at its boundaries. Any discontinuity between the motion functions at the boundary conditions may destabilize the system and cause unnecessary fluctuation of the object about the center of the image plane.

Fig. 7 Speed control functions for pan motor
Fig. 8 Speed control functions for tilt motor
Fig. 9 Pan motor response

Fig. 7 shows the graph of the three speed control functions for the pan-motion motor according to (1). The graph specifies the PWM signal value versus the object positioning error (in pixels) with respect to the center of the image plane. The ramp function is effective when the object is closer than 2 pixels to the center of the image plane and acts over the range of input voltages from 5% to 13% of the nominal voltage of the motor; any voltage below 5% of the nominal value is unable to turn the motor shaft coupled to the pan mechanism. The cycloidal function is effective when the object is between 2 and 50 pixels from the center of the image plane and acts over the range of voltages from 13% to 100% of the nominal value. Finally, constant full motor speed is applied when the object is more than 50 pixels away from the center of the image plane. These boundary values for the three functions have been selected based on the experimental results obtained in the lab and set to achieve two objectives. The first is a fast response of the system when the object is moving at a speed higher than that of the camera. The second is a reduced speed response when the camera is moving at a speed higher than that of the object, i.e. when the object is getting closer to the center of the image plane. The coefficients of the ramp and cycloidal functions in (1) are selected to satisfy all boundary conditions accordingly. Fig. 8 shows the graph of the two functions in (2) that control the speed of the tilt-motion motor, with the boundary values optimized based on experiments with the equipment. The figure shows that the optimal boundary between full-speed motion of the motor and motion at reduced speed is an error of 15 pixels from the center of the image plane. It also shows that any voltage below 25% of the nominal value is unable to turn the motor shaft coupled to the tilt mechanism. The coefficients of the cycloidal function have been selected to match the boundary conditions for the speed of the motor and the position errors of the object in the image plane.

2. Performance Criteria of the Tracking System

The time responses of the system to step input errors of dx = -350 pixels (pan motion) and dy = -250 pixels (tilt motion) are shown in Figs. 9 and 10, respectively. The data were acquired by National Instruments vision interface hardware and calculated and recorded by NI LabVIEW software. In this experiment the camera was set 1.5 m in front of the object. All major performance criteria of the system are

presented in Table 1. The results obtained from the test run of the system, as well as visual observations of its performance, show that the response of the system is satisfactory and the quality of system performance is acceptable. For this experiment the camera driving mechanism is capable of tracking an object moving at 0.1 m/s.

Table 1 Performance criteria of the system

Performance          Pan motion    Tilt motion
Rise time, Tr        4.4 s         7 s
Peak time, Tp        5 s           8 s
Maximum overshoot    45 pixels     20 pixels
Settling time, Ts    8 s           10 s

Fig. 10 Tilt motor response
Fig. 11 Block diagram of the system

IV. COMPONENT SETUP OF THE VISUAL TRACKING SYSTEM

The visual tracking system developed in this work is specifically designed to track a black, spherical object. The entire system consists of three major subsystems. The first is the vision subsystem, which contains a monochrome analog camera and a PCI vision acquisition card. The second is the mechanical subsystem, a pan-tilt mechanism driven by two DC motors and their speed control circuits. The third is the LabVIEW computer software that implements all the control strategies for camera motion control. The analog camera is the main component of the tracking system. It acquires the image of the scene and passes it to the computer for processing. The computer extracts the object of interest from the entire image of the scene and measures the offset in pixels from the object's center to the center of the image plane. Using the quadrant approach, the computer drives the two camera motors to align the camera with the object so that the center of the camera plane and the object's center coincide. This sequence is executed repeatedly; thus the camera acts as a feedback sensor, correcting its own position in 3D space through the pan-tilt mechanism. Fig. 11 shows the functional interactions between the various components of the system. A Graphical User Interface has been developed to control and monitor the performance of the system. In this system, a monochrome analog CS8320BC camera conforming to the CCIR standard is used. To cover a large field of view in 3D space, a lens with a short focal length of 8 mm has been used. The sensible working distance is selected to be between 0.3 m and 6 m. Precise focusing on the object of interest is not a crucial issue, because the system manipulates the calculated centroid of the object image, which needs only an identifiable circular contour with sufficient contrast to the background. The light source was a fluorescent lamp, as it provides relatively even illumination over a large area of the room. The data acquisition device is the NI PCI-1409 image acquisition board, a high-accuracy monochrome device equipped with a 10-bit analog-to-digital converter and its own onboard memory. To control the two DC motors of the pan-tilt mechanism, the L293D motor driver chip is used. It is a push-pull four-channel driver with clamp diodes. This single chip can control two DC motors with nominal voltages of up to 36 V.
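As a final sketch, the commands computed above map onto the driver chip as one direction bit, its hardware-generated complement, and a PWM waveform. Driving the enable pin with the PWM signal is an assumption of this illustration (the actual wiring is shown in Fig. 12), and the function name is hypothetical.

def l293d_inputs(direction: int, duty_pct: float) -> dict:
    # Map a signed direction (+1/-1/0) and a PWM duty cycle (%) to the
    # logic levels of one L293D channel pair; the complementary input 2A
    # is produced in hardware by the 74LS04 inverter described below.
    forward = direction > 0
    duty = 0.0 if direction == 0 else max(0.0, min(100.0, duty_pct))
    return {"1A": forward, "2A": not forward, "EN1,2_duty_pct": duty}

# e.g. l293d_inputs(+1, 40.0) -> {'1A': True, '2A': False, 'EN1,2_duty_pct': 40.0}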
The chip requires one PWM input for motor speed control and two TTL logic input lines for motor direction control (DIR). The TTL logic signals for PWM and DIR are produced by the developed software and forwarded to the chip through the NI PCI-6014 DAQ card and the NI BNC-2110 connector block. The PCI-6014 has eight digital I/O lines compatible with 5 V TTL. The computer program generates only one logic signal for motor direction control; the second,

opposite-polarity logic signal required by the driver chip is generated within the motor control circuit by a 74LS04 TTL inverter chip. The motor speed control circuit is shown in Fig. 12. The motor driver chip requires two input voltages: VCC1 = 5 V for the chip itself and VCC2 = 12 V for the motors. Both voltages are supplied through two voltage regulators, an LM7805 for 5 V and an LM7812 for 12 V.

Fig. 12 Driving circuit of the motors

V. CONCLUSIONS

This project introduces a new and effective approach to an object tracking system using a camera as a feedback sensor. The system has been designed to identify a black spherical object in 3D space, track its location within the field of view, and turn the camera with its two motors right towards the object. The system, in general, simulates human eye behavior, staring at an object of interest before deciding to manipulate it. The developed system uniquely integrates vision and image processing techniques for object recognition and perception with the camera actuation system through the designed computer control programs. The advantage of this system is that it does not require camera calibration and manipulates all measured parameters in pixel values only. The experimental results show that the system is stable and performs well. The system is intended to be an intelligent eye sensor for an object-tracking mobile robot in 3D space.

REFERENCES

Birkbeck, N., and Jagersand, M., 2004, Visual Tracking Using Active Appearance Models, Proceedings of the 1st Canadian Conference on Computer and Robot Vision.
Mikhalsky, M., and Sitte, J., 2004, Real-Time Motion Tracker for a Robotic Vision System, Proceedings of the 1st Canadian Conference on Computer and Robot Vision.
Leonard, S., and Jagersand, M., 2004, Approximating the Visuomotor Function for Visual Servoing, Proceedings of the 1st Canadian Conference on Computer and Robot Vision.
Sim, T. P., Hong, G. S., and Lim, K. B., 2002, A Pragmatic 3D Visual Servoing System, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '02), Vol. 4.
Denzler, J., and Paulus, D. W. R., 1994, Active Motion Detection and Object Tracking, Proceedings of the IEEE International Conference on Image Processing (ICIP-94), Vol. 3.
Carter, J. N., Lappas, P., and Damper, R. I., 2003, Evidence-Based Object Tracking via Global Energy Maximization, Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '03), Vol. 3.
Crowley, J. L., Mesrabi, M., and Chaumette, F., 1995, Comparison of Kinematic and Visual Servoing for Fixation, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '95), Vol. 1.
Shen, X. J., and Pan, J. M., 2003, A Simple Adaptive Control for Visual Servoing, Proceedings of the 2003 International Conference on Machine Learning and Cybernetics, Vol. 2.
Kragic, D., and Christensen, H. I., 2001, Cue Integration for Visual Servoing, IEEE Transactions on Robotics and Automation, Vol. 17, No. 1.

Manuscript Received: Jun. 17, 2005; Accepted: Jul. 05, 2005.


More information

STEPPER MOTOR DRIVES SOME FACTORS THAT WILL HELP DETERMINE PROPER SELECTION

STEPPER MOTOR DRIVES SOME FACTORS THAT WILL HELP DETERMINE PROPER SELECTION SOME FACTORS THAT WILL HELP DETERMINE PROPER SELECTION Authored By: Robert Pulford and Engineering Team Members Haydon Kerk Motion Solutions This white paper will discuss some methods of selecting the

More information

Electrically tunable large aperture lens EL TC

Electrically tunable large aperture lens EL TC Datasheet: EL-16-4-TC Electrically tunable large aperture lens EL-16-4-TC By applying an electric current to this shape changing polymer lens, its optical power is controlled within milliseconds over a

More information

Behavior Learning for a Mobile Robot with Omnidirectional Vision Enhanced by an Active Zoom Mechanism

Behavior Learning for a Mobile Robot with Omnidirectional Vision Enhanced by an Active Zoom Mechanism Behavior Learning for a Mobile Robot with Omnidirectional Vision Enhanced by an Active Zoom Mechanism Sho ji Suzuki, Tatsunori Kato, Minoru Asada, and Koh Hosoda Dept. of Adaptive Machine Systems, Graduate

More information

CHAPTER 3 RETINAL OPTIC DISC SEGMENTATION

CHAPTER 3 RETINAL OPTIC DISC SEGMENTATION 60 CHAPTER 3 RETINAL OPTIC DISC SEGMENTATION 3.1 IMPORTANCE OF OPTIC DISC Ocular fundus images provide information about ophthalmic, retinal and even systemic diseases such as hypertension, diabetes, macular

More information

A Two-stage Scheme for Dynamic Hand Gesture Recognition

A Two-stage Scheme for Dynamic Hand Gesture Recognition A Two-stage Scheme for Dynamic Hand Gesture Recognition James P. Mammen, Subhasis Chaudhuri and Tushar Agrawal (james,sc,tush)@ee.iitb.ac.in Department of Electrical Engg. Indian Institute of Technology,

More information

Robust and Accurate Detection of Object Orientation and ID without Color Segmentation

Robust and Accurate Detection of Object Orientation and ID without Color Segmentation 0 Robust and Accurate Detection of Object Orientation and ID without Color Segmentation Hironobu Fujiyoshi, Tomoyuki Nagahashi and Shoichi Shimizu Chubu University Japan Open Access Database www.i-techonline.com

More information

CS443: Digital Imaging and Multimedia Binary Image Analysis. Spring 2008 Ahmed Elgammal Dept. of Computer Science Rutgers University

CS443: Digital Imaging and Multimedia Binary Image Analysis. Spring 2008 Ahmed Elgammal Dept. of Computer Science Rutgers University CS443: Digital Imaging and Multimedia Binary Image Analysis Spring 2008 Ahmed Elgammal Dept. of Computer Science Rutgers University Outlines A Simple Machine Vision System Image segmentation by thresholding

More information

MC-E - Motion Control

MC-E - Motion Control IDC Technologies - Books - 1031 Wellington Street West Perth WA 6005 Phone: +61 8 9321 1702 - Email: books@idconline.com MC-E - Motion Control Price: $139.94 Ex Tax: $127.22 Short Description This manual

More information

Motion Analysis. Motion analysis. Now we will talk about. Differential Motion Analysis. Motion analysis. Difference Pictures

Motion Analysis. Motion analysis. Now we will talk about. Differential Motion Analysis. Motion analysis. Difference Pictures Now we will talk about Motion Analysis Motion analysis Motion analysis is dealing with three main groups of motionrelated problems: Motion detection Moving object detection and location. Derivation of

More information

Chapter 3 Image Registration. Chapter 3 Image Registration

Chapter 3 Image Registration. Chapter 3 Image Registration Chapter 3 Image Registration Distributed Algorithms for Introduction (1) Definition: Image Registration Input: 2 images of the same scene but taken from different perspectives Goal: Identify transformation

More information

Complete High-Speed Motion Capture System

Complete High-Speed Motion Capture System Xcitex Professional Motion System Complete High-Speed Motion Capture System Life Sciences Engineering Manufacturing Automotive Up to 5 hours of continuous recording 5 1000 fps high resolution cameras Synchronize

More information

Research Subject. Dynamics Computation and Behavior Capture of Human Figures (Nakamura Group)

Research Subject. Dynamics Computation and Behavior Capture of Human Figures (Nakamura Group) Research Subject Dynamics Computation and Behavior Capture of Human Figures (Nakamura Group) (1) Goal and summary Introduction Humanoid has less actuators than its movable degrees of freedom (DOF) which

More information

Segmentation algorithm for monochrome images generally are based on one of two basic properties of gray level values: discontinuity and similarity.

Segmentation algorithm for monochrome images generally are based on one of two basic properties of gray level values: discontinuity and similarity. Chapter - 3 : IMAGE SEGMENTATION Segmentation subdivides an image into its constituent s parts or objects. The level to which this subdivision is carried depends on the problem being solved. That means

More information

Hand-Eye Calibration from Image Derivatives

Hand-Eye Calibration from Image Derivatives Hand-Eye Calibration from Image Derivatives Abstract In this paper it is shown how to perform hand-eye calibration using only the normal flow field and knowledge about the motion of the hand. The proposed

More information

CIRCULAR MOIRÉ PATTERNS IN 3D COMPUTER VISION APPLICATIONS

CIRCULAR MOIRÉ PATTERNS IN 3D COMPUTER VISION APPLICATIONS CIRCULAR MOIRÉ PATTERNS IN 3D COMPUTER VISION APPLICATIONS Setiawan Hadi Mathematics Department, Universitas Padjadjaran e-mail : shadi@unpad.ac.id Abstract Geometric patterns generated by superimposing

More information

A deformable model driven method for handling clothes

A deformable model driven method for handling clothes A deformable model driven method for handling clothes Yasuyo Kita Fuminori Saito Nobuyuki Kita Intelligent Systems Institute, National Institute of Advanced Industrial Science and Technology (AIST) AIST

More information

MACHINE VISION AS A METHOD FOR CHARACTERIZING SOLAR TRACKER PERFORMANCE

MACHINE VISION AS A METHOD FOR CHARACTERIZING SOLAR TRACKER PERFORMANCE MACHINE VISION AS A METHOD FOR CHARACTERIZING SOLAR TRACKER PERFORMANCE M. Davis, J. Lawler, J. Coyle, A. Reich, T. Williams GreenMountain Engineering, LLC ABSTRACT This paper describes an approach to

More information

EyeTech. Particle Size Particle Shape Particle concentration Analyzer ANKERSMID

EyeTech. Particle Size Particle Shape Particle concentration Analyzer ANKERSMID EyeTech Particle Size Particle Shape Particle concentration Analyzer A new technology for measuring particle size in combination with particle shape and concentration. COMBINED LASERTECHNOLOGY & DIA Content

More information