Hough Transform Run Length Encoding for Real-Time Image Processing
IMTC 2005, Instrumentation and Measurement Technology Conference, Ottawa, Canada, May 2005

Hough Transform Run Length Encoding for Real-Time Image Processing

C. H. Messom 1, G. Sen Gupta 2,3, S. Demidenko 4
1 IIMS, Massey University, Albany, New Zealand
2 IIS&T, Massey University, Palmerston North, New Zealand
3 School of EEE, Singapore Polytechnic, 500 Dover Road, Singapore
4 School of Engineering & Science, Monash University, Kuala Lumpur, Malaysia
C.H.Messom@massey.ac.nz, G.SenGupta@massey.ac.nz, Serge.Demidenko@engsci.monash.edu.my

Abstract: This paper introduces a real-time image processing algorithm based on run length encoding (RLE) for a vision-based intelligent controller of a humanoid robot system. The RLE algorithm identifies objects in the image, providing their size and position. An RLE Hough transform is also presented for recognition of landmarks in the image to aid robot localization. The vision system has been tested by simulating the dynamics of the robot system as well as the image processing subsystem. The real-time image processing and control algorithms allow the unstable dynamic model of the biped robot to be controlled.

Keywords: Run Length Encoding, Hough Transform, Real-time Image Processing, Edge Detection

I. INTRODUCTION

A vision-based humanoid robot system requires a high-speed vision system that does not introduce significant delays into the control loop. This paper presents a vision system for biped control that performs in real time. The humanoid robot used to test the system is a 12-degree-of-freedom biped robot. The image from the camera attached to the top of the robot is processed to identify the positions of obstacles as well as any landmarks in the field of view. Obstacles can be accurately placed relative to the robot, and with the identification of landmarks the robot can be accurately localized and a map of the obstacles developed in world coordinates.
Once the objects have been located in the two-dimensional image, a coordinate transformation, based on the fact that the ground is level and all the joint angles are available, allows us to determine each object's position relative to the camera. If the joint angles are not available, an approximation of the camera position and orientation must be calculated from the image itself. Visual features that contribute to this calculation include the position of the horizon and any gravitationally vertical lines in the image.

The first part of this paper discusses the run length encoding (RLE) algorithm [1-3] applied to a simulated biped robot system. This algorithm allows objects to be identified in the field of view based on color and size. The second part discusses the localization problem based on landmark identification using edge detection and RLE. Given large landmarks such as the horizon or large obstacles, filtering out short lines effectively removes noise due to multiple small objects in the field of view. One disadvantage of edge detection algorithms is the computational time associated with detecting and processing the edge image. This paper introduces an RLE edge representation to improve the performance of edge processing.

II. BACKGROUND

Much early work in biped and humanoid robotics focused on the basic dynamics and control of the biped robot system [4-7]. More recently, however, researchers have started to address higher-level functionality such as biped robot vision for navigation and localization. To test and develop this functionality, toolkits that support full simulation of the vision and control system have been developed [8]. This study builds upon the vision-enabled robot simulation environment using the 12-degree-of-freedom m2 biped robot [5, 6, 9]. A typical view from the robot in an environment with obstacles is shown in figure 1.
The key objective of the vision system is to identify the objects in view under changing viewing angles and lighting conditions. This requires each object's characteristic color and size to be continuously updated based on current conditions.

Fig. 1. Biped Robot View
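One way such a continuous update of an object's color range could be computed is sketched below. The thresholds (an outlier cut of 40, a 100-pixel size cutoff, and a fixed band of 15) follow the description in Section VI of this paper, but the function name and NumPy-based structure are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def color_range(pixels, size_cutoff=100, pad=15.0, outlier=40.0):
    """Color-space range values for the RLE tracker (a sketch).

    pixels: (N, 3) array of RGB values sampled from one detected object.
    The threshold values are taken from Section VI but should be
    treated as tunable assumptions.
    """
    pixels = np.asarray(pixels, dtype=float)
    mean = pixels.mean(axis=0)
    # Discard boundary outliers before computing the final statistics.
    keep = (np.abs(pixels - mean) <= outlier).all(axis=1)
    pixels = pixels[keep]
    mean = pixels.mean(axis=0)
    if len(pixels) > size_cutoff:
        spread = 3.0 * pixels.std(axis=0)   # large object: +/- 3 sigma
    else:
        spread = pad                        # small object: fixed band
    return mean - spread, mean + spread
```

The returned minimum and maximum per-channel values would then be fed back to the RLE segmentation stage as its color-class definition.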
Fig. 2. Image processing pipeline: image capture, RLE image processing, identified objects, Hough transform based object recognition

Recently researchers have investigated biped vision strategies based on both simulation [10, 11] and real robot systems [12, 13]. Braunl reported some of the problems associated with a reality gap when transferring results from a simulated system to a real robot system. Ensuring that the systems developed in simulation are not dependent on specifics of the simulation system ensures that this reality gap can be closed to the point that simulated solutions are useful for solving the real problem.

III. IMAGE PROCESSING PIPELINE

Figure 2 shows the image processing pipeline for the system presented in this paper. The core image processing algorithm used is the RLE based image segmentation and object tracking. This subsystem [3] provides a real-time object tracking algorithm. Its weakness is that the range of color space occupied by the objects being tracked must be specified, which requires an environment with uniform lighting and little variation over time for robust performance. To use the RLE algorithm in an environment with unknown objects and varying light conditions, the required color space range values must be dynamically updated. This study uses a Hough transform based edge detection technique [14] to identify new objects and their associated color space range values. This phase of the processing is slow and so runs in a separate low-priority thread concurrent to the real-time RLE image processing.

S_h = S_v^T (1)

Fig. 3. Edge detected image

IV. EDGE DETECTION

Edge detection is often used to identify objects and regions of interest in an image where there can be significant variation in the size and colors of the objects of interest, or where the colors of the objects of interest are not known.
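The RLE segmentation and object tracking stage of the pipeline can be sketched as follows: each row of a color-classified image is reduced to runs, and an object's size and position fall out of simple sums over its runs. The function names and NumPy-based structure are illustrative, not the authors' implementation.

```python
import numpy as np

def rle_encode_mask(mask):
    """Run length encode a color-classified image.

    mask: 2-D boolean array, True where a pixel falls inside the color
    range of the tracked class.
    Returns a list of (row, start_col, end_col) runs (end exclusive).
    """
    runs = []
    for r, row in enumerate(mask):
        # Pad with False so every run has a start and an end transition.
        padded = np.concatenate(([False], row, [False]))
        changes = np.flatnonzero(padded[1:] != padded[:-1])
        for start, end in zip(changes[::2], changes[1::2]):
            runs.append((r, int(start), int(end)))
    return runs

def object_stats(runs):
    """Size (pixel count) and centroid of one object's runs."""
    size = sum(e - s for _, s, e in runs)
    cx = sum((s + e - 1) / 2 * (e - s) for _, s, e in runs) / size
    cy = sum(r * (e - s) for r, s, e in runs) / size
    return size, (cx, cy)
```

Because only run end-points are stored, later stages (grouping runs into objects, computing size and position) touch far fewer elements than the raw pixel array, which is the source of the real-time performance claimed for the RLE subsystem.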
In this paper edge detection is used to identify landmarks in the image, particularly the horizon and any unknown large obstacles. A 5x5 RGB Sobel edge detection filter (S_v and S_h, see equation 1) can be applied to the raw RGB image (figure 1), producing the edge detected image (figure 3). This edge detection technique is computationally expensive compared to RLE; however, where color identifiers are unknown it provides a suitable image that can be further processed to find information about the environment in which the robot is operating. In the simulated domain studied, one of the key features that can be identified from the edge-detected image is the horizon, from which the body position of the robot can be inferred (this is useful if the joint angles are not explicitly available to the system). The second type of feature available in the edge-filtered image is landmarks such as large obstacles, which can be used to aid robot localization. In this study the colors of the obstacles were known, so they were identified using RLE rather than the edge detection techniques. Identifying the horizon means that a long, almost horizontal (at least not near-vertical) line must be identified. A similar approach is needed if there are walls or corridors in the image; that is, long lines in the image are identified before further processing.
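The 5x5 Sobel filtering step can be sketched as below. The paper does not list its exact kernel coefficients, so the separable 5-tap smoothing and derivative kernels used here are an illustrative choice; the horizontal kernel is the transpose of the vertical one, matching equation (1). The filter is applied per color channel.

```python
import numpy as np

# Illustrative 5x5 Sobel-style kernels (an assumption; the paper does
# not give its coefficients): a smoothing tap times a derivative tap.
smooth = np.array([1, 4, 6, 4, 1], dtype=float)
deriv = np.array([-1, -2, 0, 2, 1], dtype=float)
S_v = np.outer(smooth, deriv)   # responds to vertical edges
S_h = S_v.T                     # responds to horizontal edges

def filter2d(img, kernel):
    """Sliding-window correlation with zero padding (small and clear,
    not optimised)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img.astype(float), ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0],
                                         j:j + img.shape[1]]
    return out

def edge_magnitude(channel):
    """Gradient magnitude of one color channel."""
    gx = filter2d(channel, S_v)
    gy = filter2d(channel, S_h)
    return np.hypot(gx, gy)
```

Thresholding the per-channel magnitudes then yields an edge image of the kind shown in figure 3, which the RLE Hough transform of Section V consumes.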
Fig. 4. x and y axis intercepts for lines of angle α and -α

Fig. 5. a) Maximum x and y axis intercepts; b) range values in parameter space, showing the discontinuity at θ = 45°; c) topological view of the polar parameter space, showing the continuity of the boundary and the join of the boundary, where α = -max(height, width) and β = height + width

V. HOUGH TRANSFORM OF RLE EDGES

Identifying straight lines in an image requires a first order Hough transform to be applied. Normally this transfers the image into the parameter space of straight lines, that is, from the position (x, y) of a pixel to the parameter space (m, c) of the lines in the image, where y = mx + c represents the equations of the lines. Where the lines in the parameter space intersect represents the equations of the lines in the image; this is also the position where there is a peak of data points in the parameter space. This study uses a polar representation of the first order Hough transform (θ, c) rather than the usual gradient-intercept (m, c) format, so that the singularities associated with vertical lines (infinite gradient and y-axis intercept) are removed. The angle θ in the polar representation is the angle of the line to the horizontal (ranging from -90° to 90°) and the intercept c represents the intercept of the line with the x or y axis. The intercept with the y axis is used for -45° < θ ≤ 45°, while the intercept with the x axis is used for θ ≤ -45° and θ > 45°. With this parameter space, when θ = -45° the x and y intercepts are equal (see figure 4), so the representation is continuous as the angle changes across this boundary.
When θ = 45° the x and y intercepts are additive inverses (see figure 4), which means that there is a discontinuity in the representation as the angle changes across this boundary (see figure 5 b). With a correct implementation of the neighborhood grouping, however, this is not a problem, since a topological mapping of the parameter space is possible (see figure 5 c). The polar representation has the additional advantage of bounding the range of intercept values by 2 max(height, width) + min(height, width); see figure 5 a and b.

Fig. 6. Grouped edge detected pixels

If the polar parameter space has a high resolution it is necessary to group neighboring peaks in the parameter space so that similar lines in the image are amalgamated into one. A contiguous near neighbor grouping algorithm [14] is applied to the parameter space to combine similar lines in the image; in this way the number of candidate lines in the image is reduced. The peaks in parameter space are used to identify the straight lines in each object in the image. The linear Hough transform is computationally expensive, especially if all combinations of edge-detected pixels are considered. Even if a statistical approach is adopted, the computational time complexity can still be high if a large number of pixels must be tested to ensure that no edges in the image are missed. This paper proposes modifying the linear Hough transform algorithm by applying it to the run length encoded version of the edge-filtered image. This requires a class of color identifiers to be supplied for identifying the lines in the edge-filtered image. In this study a class of sharp lines close to white (255, 255, 255) in the edge filtered image and a wide gray band were suitable to detect both the edges of the obstacles and the edges of the horizon. Having run length encoded the edge-filtered image, connected lines of contiguous edges are detected as objects.
In the example illustrated in figure 3 this is equivalent to three obstacles (note that the obstacles in the distance are viewed as a single object by this algorithm since their edge maps overlap). Several vertical lines are detected due to edges caused by rapid variation in the object colors from shading and shadow effects. Four horizon elements are identified, one of which is very small and is filtered out as noise. Figure 6 illustrates a region of the edge detected image that has been run length encoded.

The Hough transform of the RLE edges is performed on each object separately. This reduces the computational complexity of the algorithm, as interactions between the objects are not added to the parameter space model, reducing the interference between the different lines in the image. With run length encoding only the start and end positions of the pixels in each horizontal row are recorded, so random pixels within each run length are selected for the Hough transform to parameter space. Each obstacle in the example image (figure 3) consists of two long straight lines and two short straight lines. The Hough transform is biased towards the long lines as they provide more candidate points in the parameter space. The two short lines are also identified, since they provide two peaks in parameter space after applying the neighbor grouping algorithm. This is the case even for the base line of the obstacles, though this line is barely straight. Figure 7 illustrates the selection of the candidate lines in the given object. The RLE edges of the horizon form three objects which are also transformed individually using a linear Hough transform. Each object produces a single candidate line in parameter space after grouping neighboring candidate lines. Since we are looking for only one horizon line, the three candidate lines are compared to see whether they can also be amalgamated into a single overall candidate line. In this case (figure 3) the three candidate lines are collinear and so produce only one candidate line.
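The per-object transform, including the random selection of pixels inside each run and the polar (θ, c) parameterization of Section V, can be sketched as follows. The bin sizes, sample count, and quantization scheme are illustrative choices, not values from the paper.

```python
import math
import random
from collections import Counter

def hough_from_runs(runs, samples=300, seed=0, theta_bin=5, c_bin=5):
    """Accumulate polar line parameters from random pixels in RLE runs.

    runs: list of (row, start_col, end_col) runs of ONE object
    (end_col exclusive). Returns a Counter over quantized
    (theta_degrees, intercept) cells; peaks correspond to lines.
    """
    rng = random.Random(seed)

    def random_pixel():
        # Only run end-points are stored, so draw a random column
        # inside a randomly chosen run.
        row, start, end = runs[rng.randrange(len(runs))]
        return rng.randrange(start, end), row      # (x, y)

    acc = Counter()
    for _ in range(samples):
        (x1, y1), (x2, y2) = random_pixel(), random_pixel()
        if (x1, y1) == (x2, y2):
            continue
        # Angle to the horizontal, folded into (-90, 90].
        theta = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
        if theta > 90.0:
            theta -= 180.0
        if -45.0 < theta <= 45.0:
            c = y1 - math.tan(math.radians(theta)) * x1   # y intercept
        else:
            c = x1 - y1 / math.tan(math.radians(theta))   # x intercept
        acc[(round(theta / theta_bin) * theta_bin,
             round(c / c_bin) * c_bin)] += 1
    return acc
```

Running this on each object's runs separately, then grouping neighboring accumulator peaks, mirrors the per-object processing described above.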
Given the positions of the obstacles, obtained either from the edge detected image or from the original run length encoded image, as well as the position of the horizon, the robot can be localized. The angle of the horizon gives the rotation of the camera, and based on the height of the robot and the flat environment the positions of the obstacles relative to the robot can be calculated.

VI. OBJECT IDENTIFICATION

The lines, and the points formed by their intersections, define the boundaries of the objects of interest in the image. The pixels within this boundary are used to calculate the color space range values to be used by the RLE algorithm (see figure 8). The mean value of the color components of the pixels that form the object is calculated so that outlier pixels can be eliminated. Outliers occur near the edges and are not representative of the object under study. Typically, variations of more than 40 from the mean represent outliers. For objects larger than 100 pixels, the mean plus or minus three times the standard deviation of the pixel values is used to calculate the maximum and minimum range values for the RLE algorithm. For small objects the mean plus or minus 15 is used, as the standard deviation may not be reliable. If particular features and landmarks in the environment are known, they can be identified from the objects found above.

VII. RESULTS

The standard Hough transform applied to this problem without using RLE and aggregation of contiguous pixels is very slow, because we need to identify all lines in the image, even short ones. The probability of selecting a pixel for the Hough transform is given by equation 2.

p_i = n_i / p (2)

where p_i is the probability of selecting a pixel in the line of interest (i), n_i is the number of points in the line of interest and p is the number of edge/line points in the image.
The probability of selecting two pixels that are in the same line of interest is given by equation 3.

p_ii = p_i^2 (3)

where p_ii is the probability of selecting two pixels in the line of interest (i). Equation 4 gives the number of sample pairs that must be taken to reliably achieve the given number of candidate lines from the Hough transform.

Fig. 7. Candidate lines in object

Fig. 8. Candidate line intersection and enclosed pixels
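The pair-sampling cost implied by equations (2) and (3) reduces to a few lines of arithmetic. The figures below are illustrative: a long line versus a short one in a full edge image, and a short line inside a single RLE-segmented object.

```python
def sample_pairs_needed(n_line, n_edge, n_candidates=5):
    """Sample pairs needed to hit one line n_candidates times.

    p_i  = n_line / n_edge       probability one edge pixel is on the line
    p_ii = p_i ** 2              probability both pixels of a pair are
    S    = n_candidates / p_ii   expected pair count for n_candidates hits
    """
    p_i = n_line / n_edge
    return n_candidates / p_i ** 2

# Whole edge image: a long line is cheap to find, a short one is not.
print(sample_pairs_needed(320, 1000))   # roughly 49 pairs
print(sample_pairs_needed(50, 1000))    # roughly 2000 pairs
# Per-object RLE runs shrink n_edge, making short lines tractable too.
print(sample_pairs_needed(30, 180))     # roughly 180 pairs
```

The quadratic dependence on the line's share of the edge pixels is why restricting the transform to one object's runs at a time pays off so quickly.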
S_ii = n / p_ii (4)

where S_ii is the required number of sample pairs to reliably result in n candidate lines that match the given line of interest (i). The long horizon line will consist of about 320 pixels in a 320x240 image. If the image has about 1000 edge/line pixels, then the chance of selecting two pixels from the horizon is (320/1000)^2. This means the chance of retrieving the horizon line from the Hough transform is about 10%, so we need to take about ten pairs of points before we find a candidate line that would represent the horizon. If we want at least 5 candidate lines before accepting that as a legitimate line in the image, we need to take at least 50 sample pairs. For smaller lines, say 50 pixels long, a significantly larger number of sample pairs is required: 5/(50/1000)^2 = 2,000. As the number of edge/line points increases, the time the algorithm takes grows as O(p^2), where p is the number of edge/line points in the image. This means that for small objects the standard Hough transform would not be able to update the RLE color space range values regularly. The result is that as lighting and objects in the image change they are not correctly identified by the real-time RLE system; in the biped robot scenario this leads to collisions with moving obstacles and selection of non-optimal paths through obstacles.

For the RLE augmented Hough transform, the contiguous pixels that form one object are used in identifying the required straight lines. In the example illustrated in figure 6, the total number of edge/line pixels is 180 and the shortest line is 30 pixels. This requires only 180 (= 5/(30/180)^2) sample pairs to reliably identify the 4 straight lines with at least 5 candidate lines.

VIII. CONCLUSIONS

This paper has presented a real-time image processing algorithm based on run length encoding for a simulated biped robot system.
This system can be implemented on real biped robot systems to detect obstacles quickly in the field of view. This paper has also presented an edge detection algorithm that uses a Sobel edge detection filter augmented with run length encoding to improve post-processing of the image using a linear Hough transform. Although run length encoding and the RLE augmented Hough transform have shown promise, significant research effort needs to be directed at real-time generation of the color identifiers used in the RLE component of the vision system. In real-world environments with highly varying lighting conditions this dynamic update of color identifiers will be essential. The simulated system used in this study provides clean images and so does not reflect reality, where there are often variations in the image due to sensor noise. Localization of the robot relative to the obstacles and mapping the environment under sensor noise require particle filtering and optimal filtering approaches as discussed in [15, 16]. Future research will model sensor noise and will require these additional techniques to provide reliable recognition of obstacle positions and localization of the robot.

ACKNOWLEDGEMENTS

The authors would like to acknowledge the use of Massey University's parallel computer, the Helix, for the computational experiments that supported the results presented in this paper.

REFERENCES

[1] G. Sen Gupta, D. Bailey, C. Messom, "A New Colour Space for Efficient and Robust Segmentation", Proceedings of IVCNZ 2004, 2004.
[2] J. Bruce, T. Balch and M. Veloso, "Fast and Inexpensive Colour Image Segmentation for Interactive Robots", IROS 2000, San Francisco, 2000.
[3] C. H. Messom, S. Demidenko, K. Subramaniam and G. Sen Gupta, "Size/Position Identification in Real-Time Image Processing using Run Length Encoding", IEEE Instrumentation and Measurement Technology Conference, 2002.
[4] C. Zhou and Q. Meng, "Dynamic balance of a biped robot using fuzzy reinforcement learning agents", Fuzzy Sets and Systems, Vol. 134, No. 1, 2003.
[5] K. Jagannathan, G. Pratt, J. Pratt and A. Persaghian, "Pseudo-trajectory Control Scheme for a 3-D Model of a Biped Robot", Proc. of ACRA, 2001.
[6] K. Jagannathan, G. Pratt, J. Pratt and A. Persaghian, "Pseudo-trajectory Control Scheme for a 3-D Model of a Biped Robot (Part 2. Body Trajectories)", Proc. of CIRAS, 2001.
[7] J. Baltes, S. McGrath and J. Anderson, "Feedback Control of Walking for a Small Humanoid Robot", Proceedings of the FIRA World Congress, Vienna, Austria, 2003.
[8] C.H. Messom, "Vision Controlled Humanoid Toolkit", Knowledge-Based Intelligent Information and Engineering Systems, Lecture Notes in Artificial Intelligence vol 3213, Springer Verlag, Berlin Heidelberg, 2004.
[9] J. Pratt and G. Pratt, "Exploiting Natural Dynamics in the Control of a 3D Bipedal Walking Simulation", Proceedings of the International Conf. on Climbing and Walking Robots.
[10] A. Boeing, S. Hanham and T. Braunl, "Evolving Autonomous Biped Control from Simulation to Reality", Proc. of 2nd International Conference on Autonomous Robots and Agents, 2004.
[11] J. Chestnutt, J. Kuffner, K. Nishiwaki and S. Kagami, "Planning Biped Navigation Strategies in Complex Environments", Proceedings of International Conf. on Humanoid Robotics, 2003.
[12] M. Ogino, Y. Katoh, M. Aono, M. Asada and K. Hosoda, "Vision-based Reinforcement Learning for Humanoid Behaviour Generation with Rhythmic Walking Parameters", Proceedings of 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2003.
[13] O. Lorch, A. Albert, J. Denk, M. Gerecke, R. Cupec, J. F. Seara, W. Gerth and G. Schmidt, "Experiments in Vision-Guided Biped Walking", Proceedings of IEEE/RSJ International Conf. on Intelligent Robots and Systems, 2003.
[14] D.C.I. Walsh and A.E. Raftery, "Accurate and Efficient Curve Detection in Images: The Importance Sampling Hough Transform", Pattern Recognition, vol 35, 2002.
[15] D.C.K. Yuen and B.A. MacDonald, "Theoretical Considerations of Multiple Particle Filters for Simultaneous Localisation and Map-Building", Knowledge-Based Intelligent Information and Engineering Systems, Lecture Notes in Artificial Intelligence vol 3213, Springer Verlag, Berlin Heidelberg, 2004.
[16] G. Sen Gupta, C. H. Messom, S. Demidenko, "Real-Time Identification and Predictive Control of Fast Mobile Robots using Global Vision Sensing", IEEE Transactions on Instrumentation and Measurement, vol 54, No 1, 2005.
Cover Page Abstract ID 8181 Paper Title Automated extraction of linear features from vehicle-borne laser data Contact Author Email Dinesh Manandhar (author1) dinesh@skl.iis.u-tokyo.ac.jp Phone +81-3-5452-6417
More informationImage Enhancement Techniques for Fingerprint Identification
March 2013 1 Image Enhancement Techniques for Fingerprint Identification Pankaj Deshmukh, Siraj Pathan, Riyaz Pathan Abstract The aim of this paper is to propose a new method in fingerprint enhancement
More informationOnline Environment Reconstruction for Biped Navigation
Online Environment Reconstruction for Biped Navigation Philipp Michel, Joel Chestnutt, Satoshi Kagami, Koichi Nishiwaki, James Kuffner and Takeo Kanade The Robotics Institute Digital Human Research Center
More informationOptical Flow-Based Person Tracking by Multiple Cameras
Proc. IEEE Int. Conf. on Multisensor Fusion and Integration in Intelligent Systems, Baden-Baden, Germany, Aug. 2001. Optical Flow-Based Person Tracking by Multiple Cameras Hideki Tsutsui, Jun Miura, and
More informationLecture 15: Segmentation (Edge Based, Hough Transform)
Lecture 15: Segmentation (Edge Based, Hough Transform) c Bryan S. Morse, Brigham Young University, 1998 000 Last modified on February 3, 000 at :00 PM Contents 15.1 Introduction..............................................
More informationChapter 3 Image Registration. Chapter 3 Image Registration
Chapter 3 Image Registration Distributed Algorithms for Introduction (1) Definition: Image Registration Input: 2 images of the same scene but taken from different perspectives Goal: Identify transformation
More informationResearch on the Algorithms of Lane Recognition based on Machine Vision
International Journal of Intelligent Engineering & Systems http://www.inass.org/ Research on the Algorithms of Lane Recognition based on Machine Vision Minghua Niu 1, Jianmin Zhang, Gen Li 1 Tianjin University
More informationPath Planning. Jacky Baltes Dept. of Computer Science University of Manitoba 11/21/10
Path Planning Jacky Baltes Autonomous Agents Lab Department of Computer Science University of Manitoba Email: jacky@cs.umanitoba.ca http://www.cs.umanitoba.ca/~jacky Path Planning Jacky Baltes Dept. of
More informationHuman Detection. A state-of-the-art survey. Mohammad Dorgham. University of Hamburg
Human Detection A state-of-the-art survey Mohammad Dorgham University of Hamburg Presentation outline Motivation Applications Overview of approaches (categorized) Approaches details References Motivation
More informationResearch on Evaluation Method of Video Stabilization
International Conference on Advanced Material Science and Environmental Engineering (AMSEE 216) Research on Evaluation Method of Video Stabilization Bin Chen, Jianjun Zhao and i Wang Weapon Science and
More informationThree-Dimensional Computer Vision
\bshiaki Shirai Three-Dimensional Computer Vision With 313 Figures ' Springer-Verlag Berlin Heidelberg New York London Paris Tokyo Table of Contents 1 Introduction 1 1.1 Three-Dimensional Computer Vision
More informationTowards a Calibration-Free Robot: The ACT Algorithm for Automatic Online Color Training
Towards a Calibration-Free Robot: The ACT Algorithm for Automatic Online Color Training Patrick Heinemann, Frank Sehnke, Felix Streichert, and Andreas Zell Wilhelm-Schickard-Institute, Department of Computer
More informationGraph-based Planning Using Local Information for Unknown Outdoor Environments
Graph-based Planning Using Local Information for Unknown Outdoor Environments Jinhan Lee, Roozbeh Mottaghi, Charles Pippin and Tucker Balch {jinhlee, roozbehm, cepippin, tucker}@cc.gatech.edu Center for
More informationComputer Vision. Image Segmentation. 10. Segmentation. Computer Engineering, Sejong University. Dongil Han
Computer Vision 10. Segmentation Computer Engineering, Sejong University Dongil Han Image Segmentation Image segmentation Subdivides an image into its constituent regions or objects - After an image has
More informationDept. of Adaptive Machine Systems, Graduate School of Engineering Osaka University, Suita, Osaka , Japan
An Application of Vision-Based Learning for a Real Robot in RoboCup - A Goal Keeping Behavior for a Robot with an Omnidirectional Vision and an Embedded Servoing - Sho ji Suzuki 1, Tatsunori Kato 1, Hiroshi
More informationA Document Image Analysis System on Parallel Processors
A Document Image Analysis System on Parallel Processors Shamik Sural, CMC Ltd. 28 Camac Street, Calcutta 700 016, India. P.K.Das, Dept. of CSE. Jadavpur University, Calcutta 700 032, India. Abstract This
More informationCIRCULAR MOIRÉ PATTERNS IN 3D COMPUTER VISION APPLICATIONS
CIRCULAR MOIRÉ PATTERNS IN 3D COMPUTER VISION APPLICATIONS Setiawan Hadi Mathematics Department, Universitas Padjadjaran e-mail : shadi@unpad.ac.id Abstract Geometric patterns generated by superimposing
More informationProf. Fanny Ficuciello Robotics for Bioengineering Visual Servoing
Visual servoing vision allows a robotic system to obtain geometrical and qualitative information on the surrounding environment high level control motion planning (look-and-move visual grasping) low level
More informationRobotics. Lecture 5: Monte Carlo Localisation. See course website for up to date information.
Robotics Lecture 5: Monte Carlo Localisation See course website http://www.doc.ic.ac.uk/~ajd/robotics/ for up to date information. Andrew Davison Department of Computing Imperial College London Review:
More informationPOME A mobile camera system for accurate indoor pose
POME A mobile camera system for accurate indoor pose Paul Montgomery & Andreas Winter November 2 2016 2010. All rights reserved. 1 ICT Intelligent Construction Tools A 50-50 joint venture between Trimble
More informationHorus: Object Orientation and Id without Additional Markers
Computer Science Department of The University of Auckland CITR at Tamaki Campus (http://www.citr.auckland.ac.nz) CITR-TR-74 November 2000 Horus: Object Orientation and Id without Additional Markers Jacky
More informationImage Thickness Correction for Navigation with 3D Intra-cardiac Ultrasound Catheter
Image Thickness Correction for Navigation with 3D Intra-cardiac Ultrasound Catheter Hua Zhong 1, Takeo Kanade 1,andDavidSchwartzman 2 1 Computer Science Department, Carnegie Mellon University, USA 2 University
More informationDecision Algorithm for Pool Using Fuzzy System
Decision Algorithm for Pool Using Fuzzy System S.C.Chua 1 W.C.Tan 2 E.K.Wong 3 V.C.Koo 4 1 Faculty of Engineering & Technology Tel: +60-06-252-3007, Fax: +60-06-231-6552, E-mail: scchua@mmu.edu.my 2 Faculty
More informationAn Image Based Approach to Compute Object Distance
An Image Based Approach to Compute Object Distance Ashfaqur Rahman * Department of Computer Science, American International University Bangladesh Dhaka 1213, Bangladesh Abdus Salam, Mahfuzul Islam, and
More informationPersonal Navigation and Indoor Mapping: Performance Characterization of Kinect Sensor-based Trajectory Recovery
Personal Navigation and Indoor Mapping: Performance Characterization of Kinect Sensor-based Trajectory Recovery 1 Charles TOTH, 1 Dorota BRZEZINSKA, USA 2 Allison KEALY, Australia, 3 Guenther RETSCHER,
More informationWave front Method Based Path Planning Algorithm for Mobile Robots
Wave front Method Based Path Planning Algorithm for Mobile Robots Bhavya Ghai 1 and Anupam Shukla 2 ABV- Indian Institute of Information Technology and Management, Gwalior, India 1 bhavyaghai@gmail.com,
More informationCOMPARISON OF ROBOT NAVIGATION METHODS USING PERFORMANCE METRICS
COMPARISON OF ROBOT NAVIGATION METHODS USING PERFORMANCE METRICS Adriano Flores Dantas, Rodrigo Porfírio da Silva Sacchi, Valguima V. V. A. Odakura Faculdade de Ciências Exatas e Tecnologia (FACET) Universidade
More informationOn Road Vehicle Detection using Shadows
On Road Vehicle Detection using Shadows Gilad Buchman Grasp Lab, Department of Computer and Information Science School of Engineering University of Pennsylvania, Philadelphia, PA buchmag@seas.upenn.edu
More informationCOMP 102: Computers and Computing
COMP 102: Computers and Computing Lecture 23: Computer Vision Instructor: Kaleem Siddiqi (siddiqi@cim.mcgill.ca) Class web page: www.cim.mcgill.ca/~siddiqi/102.html What is computer vision? Broadly speaking,
More informationNew Edge Detector Using 2D Gamma Distribution
Proceedings of the 2009 IEEE International Conference on Systems, Man, and Cybernetics San Antonio, TX, USA - October 2009 New Edge Detector Using 2D Gamma Distribution Hessah Alsaaran 1, Ali El-Zaart
More informationLow-level Image Processing for Lane Detection and Tracking
Low-level Image Processing for Lane Detection and Tracking Ruyi Jiang 1, Reinhard Klette 2, Shigang Wang 1, and Tobi Vaudrey 2 1 Shanghai Jiao Tong University, Shanghai, China 2 The University of Auckland,
More informationA Three dimensional Path Planning algorithm
A Three dimensional Path Planning algorithm OUARDA HACHOUR Department Of Electrical and Electronics Engineering Faculty Of Engineering Sciences Signal and System Laboratory Boumerdes University Boulevard
More informationLocalization of Multiple Robots with Simple Sensors
Proceedings of the IEEE International Conference on Mechatronics & Automation Niagara Falls, Canada July 2005 Localization of Multiple Robots with Simple Sensors Mike Peasgood and Christopher Clark Lab
More informationVisual Attention Control by Sensor Space Segmentation for a Small Quadruped Robot based on Information Criterion
Visual Attention Control by Sensor Space Segmentation for a Small Quadruped Robot based on Information Criterion Noriaki Mitsunaga and Minoru Asada Dept. of Adaptive Machine Systems, Osaka University,
More informationCamera Model and Calibration
Camera Model and Calibration Lecture-10 Camera Calibration Determine extrinsic and intrinsic parameters of camera Extrinsic 3D location and orientation of camera Intrinsic Focal length The size of the
More informationCS 223B Computer Vision Problem Set 3
CS 223B Computer Vision Problem Set 3 Due: Feb. 22 nd, 2011 1 Probabilistic Recursion for Tracking In this problem you will derive a method for tracking a point of interest through a sequence of images.
More informationFitting: The Hough transform
Fitting: The Hough transform Voting schemes Let each feature vote for all the models that are compatible with it Hopefully the noise features will not vote consistently for any single model Missing data
More informationInverse Kinematics for Humanoid Robots using Artificial Neural Networks
Inverse Kinematics for Humanoid Robots using Artificial Neural Networks Javier de Lope, Rafaela González-Careaga, Telmo Zarraonandia, and Darío Maravall Department of Artificial Intelligence Faculty of
More informationA 100Hz Real-time Sensing System of Textured Range Images
A 100Hz Real-time Sensing System of Textured Range Images Hidetoshi Ishiyama Course of Precision Engineering School of Science and Engineering Chuo University 1-13-27 Kasuga, Bunkyo-ku, Tokyo 112-8551,
More informationStraight Lines and Hough
09/30/11 Straight Lines and Hough Computer Vision CS 143, Brown James Hays Many slides from Derek Hoiem, Lana Lazebnik, Steve Seitz, David Forsyth, David Lowe, Fei-Fei Li Project 1 A few project highlights
More informationRelating Local Vision Measurements to Global Navigation Satellite Systems Using Waypoint Based Maps
Relating Local Vision Measurements to Global Navigation Satellite Systems Using Waypoint Based Maps John W. Allen Samuel Gin College of Engineering GPS and Vehicle Dynamics Lab Auburn University Auburn,
More informationAn Algorithm to Determine the Chromaticity Under Non-uniform Illuminant
An Algorithm to Determine the Chromaticity Under Non-uniform Illuminant Sivalogeswaran Ratnasingam and Steve Collins Department of Engineering Science, University of Oxford, OX1 3PJ, Oxford, United Kingdom
More informationDevelopment of a Fall Detection System with Microsoft Kinect
Development of a Fall Detection System with Microsoft Kinect Christopher Kawatsu, Jiaxing Li, and C.J. Chung Department of Mathematics and Computer Science, Lawrence Technological University, 21000 West
More informationLecture 7: Most Common Edge Detectors
#1 Lecture 7: Most Common Edge Detectors Saad Bedros sbedros@umn.edu Edge Detection Goal: Identify sudden changes (discontinuities) in an image Intuitively, most semantic and shape information from the
More informationRobot Localization based on Geo-referenced Images and G raphic Methods
Robot Localization based on Geo-referenced Images and G raphic Methods Sid Ahmed Berrabah Mechanical Department, Royal Military School, Belgium, sidahmed.berrabah@rma.ac.be Janusz Bedkowski, Łukasz Lubasiński,
More informationScene Text Detection Using Machine Learning Classifiers
601 Scene Text Detection Using Machine Learning Classifiers Nafla C.N. 1, Sneha K. 2, Divya K.P. 3 1 (Department of CSE, RCET, Akkikkvu, Thrissur) 2 (Department of CSE, RCET, Akkikkvu, Thrissur) 3 (Department
More informationParticle Filtering. CS6240 Multimedia Analysis. Leow Wee Kheng. Department of Computer Science School of Computing National University of Singapore
Particle Filtering CS6240 Multimedia Analysis Leow Wee Kheng Department of Computer Science School of Computing National University of Singapore (CS6240) Particle Filtering 1 / 28 Introduction Introduction
More informationMotion Planning for Dynamic Knotting of a Flexible Rope with a High-speed Robot Arm
The 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems October 18-22, 2010, Taipei, Taiwan Motion Planning for Dynamic Knotting of a Flexible Rope with a High-speed Robot Arm Yuji
More informationImproving Latent Fingerprint Matching Performance by Orientation Field Estimation using Localized Dictionaries
Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 3, Issue. 11, November 2014,
More informationLocal Image preprocessing (cont d)
Local Image preprocessing (cont d) 1 Outline - Edge detectors - Corner detectors - Reading: textbook 5.3.1-5.3.5 and 5.3.10 2 What are edges? Edges correspond to relevant features in the image. An edge
More informationModeling the manipulator and flipper pose effects on tip over stability of a tracked mobile manipulator
Modeling the manipulator and flipper pose effects on tip over stability of a tracked mobile manipulator Chioniso Dube Mobile Intelligent Autonomous Systems Council for Scientific and Industrial Research,
More informationSPATIAL GUIDANCE TO RRT PLANNER USING CELL-DECOMPOSITION ALGORITHM
SPATIAL GUIDANCE TO RRT PLANNER USING CELL-DECOMPOSITION ALGORITHM Ahmad Abbadi, Radomil Matousek, Pavel Osmera, Lukas Knispel Brno University of Technology Institute of Automation and Computer Science
More informationA Symmetry Operator and Its Application to the RoboCup
A Symmetry Operator and Its Application to the RoboCup Kai Huebner Bremen Institute of Safe Systems, TZI, FB3 Universität Bremen, Postfach 330440, 28334 Bremen, Germany khuebner@tzi.de Abstract. At present,
More informationAutonomous Sensor Center Position Calibration with Linear Laser-Vision Sensor
International Journal of the Korean Society of Precision Engineering Vol. 4, No. 1, January 2003. Autonomous Sensor Center Position Calibration with Linear Laser-Vision Sensor Jeong-Woo Jeong 1, Hee-Jun
More informationusing an omnidirectional camera, sufficient information for controlled play can be collected. Another example for the use of omnidirectional cameras i
An Omnidirectional Vision System that finds and tracks color edges and blobs Felix v. Hundelshausen, Sven Behnke, and Raul Rojas Freie Universität Berlin, Institut für Informatik Takustr. 9, 14195 Berlin,
More informationMobile Robot Path Planning in Static Environments using Particle Swarm Optimization
Mobile Robot Path Planning in Static Environments using Particle Swarm Optimization M. Shahab Alam, M. Usman Rafique, and M. Umer Khan Abstract Motion planning is a key element of robotics since it empowers
More informationRobust Horizontal Line Detection and Tracking in Occluded Environment for Infrared Cameras
Robust Horizontal Line Detection and Tracking in Occluded Environment for Infrared Cameras Sungho Kim 1, Soon Kwon 2, and Byungin Choi 3 1 LED-IT Fusion Technology Research Center and Department of Electronic
More informationBased on Regression Diagnostics
Automatic Detection of Region-Mura Defects in TFT-LCD Based on Regression Diagnostics Yu-Chiang Chuang 1 and Shu-Kai S. Fan 2 Department of Industrial Engineering and Management, Yuan Ze University, Tao
More informationChapter 11 Arc Extraction and Segmentation
Chapter 11 Arc Extraction and Segmentation 11.1 Introduction edge detection: labels each pixel as edge or no edge additional properties of edge: direction, gradient magnitude, contrast edge grouping: edge
More information