Hough Transform Run Length Encoding for Real-Time Image Processing
C. H. Messom 1, G. Sen Gupta 2,3, S. Demidenko 4
1 IIMS, Massey University, Albany, New Zealand
2 IIS&T, Massey University, Palmerston North, New Zealand
3 School of EEE, Singapore Polytechnic, 500 Dover Road, Singapore
4 School of Engineering and Science, Monash University, Kuala Lumpur, Malaysia
C.H.Messom@massey.ac.nz, G.SenGupta@massey.ac.nz, Serge.Demidenko@engsci.monash.edu.my

Abstract

This paper introduces a real-time image processing algorithm based on run length encoding (RLE) for a vision-based intelligent controller of a humanoid robot system. The RLE algorithms identify objects in the image, providing their size and position. An RLE Hough transform is also presented for recognition of landmarks in the image to aid robot localization. The vision system presented has been tested by simulating the dynamics of the robot system as well as the image processing subsystem.

Keywords: Run Length Encoding, Hough Transform, Real-time Image Processing, Edge Detection

I. INTRODUCTION

A vision-based humanoid robot system requires a high-speed vision system that does not introduce significant delays into the control loop. This paper presents a vision system for biped control that performs in real time. The humanoid robot used to test the system is a 12-degree-of-freedom biped robot. The image from the camera attached to the top of the robot is processed to identify the positions of obstacles as well as any landmarks in the field of view. Obstacles can be accurately placed relative to the robot, and with the identification of landmarks the robot can be accurately localized and a map of the obstacles developed in world coordinates. Once the objects have been located in the two-dimensional image, a coordinate transformation, based on the fact that the ground is level and all the joint angles are available, allows us to determine the object's position relative to the camera.
If the joint angles are not available, an approximation of the camera position and orientation must be calculated from the camera image. Visual features that contribute to this calculation include the position of the horizon and any gravitationally vertical lines in the image. This paper discusses the localization problem based on landmark identification using edge detection and RLE [1-3]. Given large landmarks such as the horizon or large obstacles, filtering out short lines effectively removes noise due to multiple small objects in the field of view. One disadvantage of edge detection algorithms is the computational time associated with detecting and processing the edge image. This paper introduces an RLE edge representation to improve the performance of edge processing.

II. BACKGROUND

Much early work in biped and humanoid robotics focused on the basic dynamics and control of the biped robot system [4-7]. More recently, however, researchers have started to address higher-level functionality such as biped robot vision for navigation and localization. To test and develop this functionality, toolkits that support full simulation of the vision and control system have been developed [8]. This study builds upon the vision-enabled robot simulation environment using the 12-degree-of-freedom m2 biped robot [5, 6, 9]. A typical view from the robot in an environment with obstacles is shown in figure 1. The key objective of the vision system is to identify the objects in view given changing viewing angles and lighting conditions. This requires each object's characteristic color and size to be continuously updated based on current conditions.

Fig. 1. Biped Robot View

Recently researchers have investigated biped vision strategies based on both simulation [10, 11] and real robot systems [10-13]. Braunl reported some of the problems associated with a reality gap when transferring results from a simulated system to a real robot system. Ensuring that the systems developed in simulation are not dependent on specifics of the simulation system ensures that this reality gap can be closed to the point that simulated solutions are useful for solving the real problem.
Fig. 2. Image processing pipeline: image capture; edge detection of image; RLE and grouping of contiguous run lengths; Hough transform of each grouping; identify color of object; modify tracked colors; RLE image processing; identified objects.

III. IMAGE PROCESSING PIPELINE

Figure 2 shows the image processing pipeline for the system presented in this paper. The core image processing algorithm is RLE-based image segmentation and object tracking. This subsystem [3] provides a real-time object tracking algorithm. Its weakness is that the range of color space occupied by the objects being tracked must be specified, which requires an environment with uniform lighting and little variation over time for robust performance. To use the RLE algorithm in an environment with unknown objects and varying lighting conditions, the required color space range values must be updated dynamically. This study uses a Hough transform based edge detection technique [14] to identify new objects and their associated color space range values. This phase of the processing is slow and so runs in a separate low-priority thread, concurrently with the real-time RLE image processing.

Fig. 3. Edge detected image
IV. EDGE DETECTION

Edge detection is often used to identify objects and regions of interest in an image where there can be significant variation in the size and color of the objects of interest, or where the colors of the objects of interest are not known. In this paper edge detection is used to identify landmarks in the image, particularly the horizon and any unknown large obstacles. A 5x5 RGB Sobel edge detection filter pair (S_v and S_h, related by equation 1)

S_h = S_v^T    (1)

can be applied to the raw RGB image (figure 1), producing the edge detected image (figure 3). This edge detection technique is computationally expensive compared to RLE; however, where color identifiers are unknown it provides a suitable image that can be processed further to find information about the environment in which the robot is operating. In the simulated domain studied, one of the key features that can be identified from the edge-detected image is the horizon, from which the body position of the robot can be inferred (this is useful if the joint angles are not explicitly available to the system). The second type of feature available in the edge-filtered image is landmarks such as large obstacles, which can be used to aid robot localization. Identifying the horizon means that a long, almost horizontal (at least not near vertical) line must be identified. A similar approach is needed if there are walls or corridors in the image; that is, long lines in the image are identified before further processing.

Fig. 4. x and y axis intercepts for lines of angle π/4 and -π/4
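As an illustrative sketch of this filtering step, the following applies the standard 3x3 Sobel kernel pair to a synthetic step edge. The paper's 5x5 kernel coefficients are not reproduced here, and the function name and the |gx| + |gy| magnitude approximation are assumptions; the transpose relation S_h = S_v^T holds either way.

```python
# Standard 3x3 Sobel kernels; the paper uses a 5x5 variant whose
# coefficients are not reproduced here, but S_h = S_v^T in both cases.
S_V = [[-1, 0, 1],
       [-2, 0, 2],
       [-1, 0, 1]]
S_H = [list(row) for row in zip(*S_V)]   # transpose of S_V

def sobel_magnitude(img):
    """Approximate gradient magnitude |gx| + |gy| of a 2-D grayscale
    image given as a list of lists; border pixels are left at zero."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gy = 0
            for dy in range(-1, 2):
                for dx in range(-1, 2):
                    v = img[y + dy][x + dx]
                    gx += S_V[dy + 1][dx + 1] * v
                    gy += S_H[dy + 1][dx + 1] * v
            out[y][x] = abs(gx) + abs(gy)
    return out

# Synthetic 8x8 image with a vertical step edge at column 4:
# the response peaks along the two columns adjacent to the step.
img = [[0] * 4 + [1] * 4 for _ in range(8)]
edges = sobel_magnitude(img)
```

On a real 320x240 RGB frame the filter would be applied per channel; the sketch uses a single grayscale channel for brevity.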
V. HOUGH TRANSFORM OF RLE EDGES

Identifying straight lines in an image requires a first-order Hough transform. Normally this transfers the image from the position (x, y) of each pixel into the parameter space of straight lines (m, c), where y = mx + c represents the equations of the lines in the image. Where the lines in the parameter space intersect represents the equations of the lines in the image; this is also where there is a peak of data points in the parameter space. This study uses a polar representation of the first-order Hough transform (θ, c) rather than the usual gradient-intercept (m, c) format, so that the singularities associated with vertical lines (infinite gradient and y-axis intercept) are removed. The angle θ in the polar representation is the angle of the line to the horizontal (ranging from -π/2 to π/2), and the intercept c represents the intercept of the line with the x or y axis. The intercept with the y axis is used for -π/4 < θ ≤ π/4, while the intercept with the x axis is used for θ ≤ -π/4 and θ > π/4. With this parameter space, when θ = -π/4 the x and y intercepts are equal (see figure 4), so the representation is continuous as the angle changes across this boundary. When θ = π/4 the x and y intercepts are additive inverses (see figure 4), which means there is a discontinuity in the representation as the angle changes across this boundary (see figure 5 b); however, with correct implementation of the neighborhood grouping this is not a problem, since a topological mapping of the parameter space is possible (see figure 5 c). The polar representation has the additional advantage of bounding the range of intercept values by 2 max(height, width) + min(height, width); see figures 5 a and b.

Fig. 5. a) Maximum x and y axis intercepts; b) range values in parameter space showing the discontinuity at θ = π/4, where α = -max(height, width) and β = height + width.
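As a sketch of the polar (θ, c) parameterization described above, the following maps a pixel pair to parameter space. The function name is an assumption; the fold into (-π/2, π/2] reflects the fact that a line and its reverse direction are the same line.

```python
import math

def line_params(p1, p2):
    """Map a pixel pair to the (theta, c) polar parameter space.

    theta is the line's angle to the horizontal in (-pi/2, pi/2];
    c is the y-axis intercept when -pi/4 < theta <= pi/4 and the
    x-axis intercept otherwise, avoiding the infinite-gradient
    singularity of the y = mx + c parameterization.
    """
    (x1, y1), (x2, y2) = p1, p2
    theta = math.atan2(y2 - y1, x2 - x1)
    # Fold into (-pi/2, pi/2]: direction along the line is irrelevant.
    if theta > math.pi / 2:
        theta -= math.pi
    elif theta <= -math.pi / 2:
        theta += math.pi
    if -math.pi / 4 < theta <= math.pi / 4:
        c = y1 - math.tan(theta) * x1      # y-axis intercept
    else:
        c = x1 - y1 / math.tan(theta)      # x-axis intercept
    return theta, c
```

A horizontal line through (0, 5) and (10, 5) maps to (0, 5); a vertical line through (3, 0) and (3, 10) maps to (π/2, 3) with no singularity.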
Fig. 5. c) Topological view of the polar parameter space, showing continuity at the π/2, -π/2 boundary and the join at the π/4 boundary, where α = -max(height, width) and β = height + width. The figure shows that the topology of the parameter space is finite, limited by the values of α and β.

Fig. 6. Grouped edge detected pixels

If the polar parameter space has a high resolution, it is necessary to group neighboring peaks in the parameter space so that similar lines in the image are amalgamated into one. A contiguous near-neighbor grouping algorithm [14] is applied to the parameter space to combine similar lines in the image; in this way the number of candidate lines in the image is reduced. The peaks in parameter space are used to identify the straight lines in each object in the image. The linear Hough transform is computationally expensive, especially if all combinations of edge-detected pixels are considered. Even if a statistical approach is adopted, the computational time complexity can still be high if a large number of pixels are tested to ensure that no edges in the image are missed. This paper proposes modifying the linear Hough transform algorithm by applying it to the run length encoded version of the edge-filtered image. This requires a class of color identifiers to be supplied for identifying the lines in the edge-filtered image. In this study a class of sharp lines close to white (255, 255, 255) in the edge-filtered image and a wide gray band were suitable to detect both the edges of the obstacles and the edges of the horizon.
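The contiguous near-neighbor grouping of parameter-space peaks can be sketched as a connected-component pass over the accumulator. This is a generic illustration under assumed names, not the importance-sampling method of [14], and it ignores the topological wrap at the π/4 and ±π/2 boundaries discussed above.

```python
from collections import deque

def group_peaks(acc, threshold=1):
    """Group above-threshold bins of a 2-D Hough accumulator into
    8-connected components, amalgamating similar candidate lines.

    acc: 2-D list of vote counts (rows = theta bins, cols = c bins).
    Returns a list of components, each a list of (row, col) bins.
    """
    rows, cols = len(acc), len(acc[0])
    seen, groups = set(), []
    for r in range(rows):
        for c in range(cols):
            if acc[r][c] >= threshold and (r, c) not in seen:
                comp, queue = [], deque([(r, c)])
                seen.add((r, c))
                while queue:                      # breadth-first flood fill
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < rows and 0 <= nx < cols
                                    and acc[ny][nx] >= threshold
                                    and (ny, nx) not in seen):
                                seen.add((ny, nx))
                                queue.append((ny, nx))
                groups.append(comp)
    return groups

# Two separated clusters of votes -> two candidate lines.
acc = [[0, 2, 3, 0, 0],
       [0, 1, 0, 0, 5],
       [0, 0, 0, 0, 4]]
```

In practice each component would then be summarized by its vote-weighted centroid in (θ, c) to give one line per group.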
Having run length encoded the edge-filtered image, connected lines of contiguous edges are detected as single objects. In the example illustrated in figure 3 this is equivalent to three obstacles (note that the obstacles in the distance are viewed as a single object by this algorithm, since their edge maps overlap). Several vertical lines are detected due to edges caused by rapid variation in object colors from shading and shadow effects. Four horizon elements are identified, one of which is very small and is filtered out as noise. Figure 6 illustrates a region of the edge detected image that has been run length encoded. The Hough transform of the RLE edges is performed on each object separately. This reduces the computational complexity of the algorithm, as interactions between the objects are not added to the parameter space model, reducing the interference effect between the different lines in the image. With run length encoding only the start and end positions of the pixels in each horizontal row are recorded, so random pixels within each run length are selected for the Hough transform to parameter space. This is done by choosing a particular run length at random, weighted by the number of pixels in each run length, and then selecting a random value between the start and end positions of the selected run length. Each obstacle in the example image (figure 3) consists of two long straight lines and two short straight lines. The Hough transform is biased towards the long lines, as they provide more candidate points in the parameter space. The two short lines are also identified, since they provide two peaks in parameter space after applying the neighbor grouping algorithm. This is the case for the base line of the obstacles even though this line is barely straight. Figure 7 illustrates the selection of the candidate lines in a given object. The RLE edges of the horizon form three objects, which are also transformed individually using a linear Hough transform.
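The run length encoding and weighted random pixel selection described above can be sketched as follows; the function names and the (row, start, end) run layout are assumptions.

```python
import random

def rle_encode_row(row, y):
    """Runs of 1-pixels in a binary row: list of (y, start, end), inclusive."""
    runs, start = [], None
    for x, v in enumerate(row):
        if v and start is None:
            start = x
        elif not v and start is not None:
            runs.append((y, start, x - 1))
            start = None
    if start is not None:                      # run extends to row end
        runs.append((y, start, len(row) - 1))
    return runs

def sample_pixel(runs, rng=random):
    """Pick a pixel uniformly over all run-length pixels: choose a run
    weighted by its length, then a random position inside that run."""
    weights = [end - start + 1 for _, start, end in runs]
    (y, start, end), = rng.choices(runs, weights=weights)
    return rng.randint(start, end), y          # (x, y)

runs = rle_encode_row([0, 1, 1, 1, 0, 0, 1, 0], y=0)
# runs == [(0, 1, 3), (0, 6, 6)]
```

Because the run weighted by length is chosen first, every edge pixel in the object is equally likely, matching a uniform draw over the full pixel set without storing it.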
Each object produces a single candidate line in parameter space after grouping neighboring candidate lines. Since we are looking for only one horizon line, the three candidate lines are compared to see whether they too can be amalgamated into a single overall candidate line. In this case (figure 3) the three candidate lines are collinear and so produce only one candidate line. Given the positions of the obstacles, obtained either from the edge detected image or from the original run length encoded image, as well as the position of the horizon, the robot can be localized. The angle of the horizon gives the rotation of the camera, and based on the height of the robot and the flat environment, the positions of the obstacles relative to the robot can be calculated.
Fig. 7. Candidate lines in object

VI. OBJECT IDENTIFICATION

The lines, and the points formed by the intersection of the lines in each object, define the boundaries of the objects of interest in the image. The pixels within this boundary are used to calculate the color space range values to be used by the RLE algorithm (see figure 8). The mean values of the color components of the pixels that form the object are calculated so that outlier pixels can be eliminated. Outliers occur near the edges and are not representative of the object under study. Typically, variations of more than three standard deviations from the mean represent outliers. For objects larger than 10 pixels, the mean plus or minus three times the standard deviation of the pixel values is used to calculate the maximum and minimum range values for the RLE algorithm. For small objects the mean plus or minus 15 is used, as the standard deviation may not be reliable. If particular features and landmarks in the environment are known, they can be identified from the objects located above.

VII. ANALYSIS

The standard Hough transform applied to this problem without the RLE and aggregation of contiguous pixels is very slow. This is because we need to identify all lines in the image, even short ones. The probability of selecting a pixel for the Hough transform is given by equation 2:

p_i = n_i / p    (2)

where p_i is the probability of selecting a pixel in the line of interest i, n_i is the number of points in the line of interest, and p is the number of edge/line points in the image.
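The color-range update of Section VI (mean ± 3σ for objects larger than 10 pixels, mean ± 15 otherwise) can be sketched per color channel; the function name and the clamping to the 8-bit range [0, 255] are assumptions.

```python
import statistics

def color_range(values, small_object_margin=15):
    """Min/max RLE range for one color channel of an object's pixels.

    Objects larger than 10 pixels use mean +/- 3 standard deviations;
    smaller objects use a fixed mean +/- 15 margin, since the standard
    deviation of so few samples is unreliable.
    """
    mean = statistics.fmean(values)
    if len(values) > 10:
        margin = 3 * statistics.pstdev(values)
    else:
        margin = small_object_margin
    lo, hi = mean - margin, mean + margin
    return max(0, round(lo)), min(255, round(hi))
```

For example, a small object with channel values [200, 210, 205] (mean 205) gets the fixed-margin range (190, 220), while a large uniform object collapses to a tight range around its mean.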
The probability of selecting two pixels that are in the same line of interest is given by equation 3:

p_ii = p_i^2    (3)

where p_ii is the probability of selecting two pixels in the line of interest i. Equation 4 gives the number of sample pairs that must be taken to reliably achieve the given number of candidate lines from the Hough transform:

S_ii = n / p_ii    (4)

where S_ii is the required number of sample pairs to reliably result in n candidate lines that match the given line of interest.

Fig. 8. Candidate line intersection and enclosed pixels

VIII. RESULTS AND DISCUSSION

The edge detection takes about three times as long as the standard RLE algorithm. In addition, the time for the Hough transform of the RLE edge detected image depends on the length of the lines and the size of the objects in the image. A long horizon line in the 320x240 image will consist of about 320 pixels. If the image has about 1000 edge/line pixels, then the chance of selecting two pixels from the horizon is (320/1000)^2. This means the chance of retrieving the horizon line from the Hough transform is about 10%, so we need to take about ten pairs of points before we find a candidate line that would represent the horizon. If we want at least 5 candidate lines before accepting a line as legitimate, we need at least 50 sample pairs. For smaller lines, say 5 pixels long, a significantly larger number of sample pairs is required: 5/(5/1000)^2 = 200,000. As the number of edge/line points increases, the time the algorithm takes grows significantly, O(p^2), where p is the number of edge/line points in the image.
This means that for small objects the standard Hough transform would not be able to update the RLE color space range values regularly. The result is that as lighting and objects in the image change, they are not correctly identified by the real-time RLE system. In the biped robot scenario this results in collisions with moving obstacles and selection of non-optimal paths through obstacles. For the RLE augmented Hough transform, the contiguous pixels that form one object are used to identify the required straight lines. If we take the example illustrated in figure 6, the total number of edge/line pixels is 18, and the shortest line is 3 pixels. This requires only 180 (= 5/(3/18)^2) sample pairs to reliably identify the 4 straight lines with at least 5 candidate lines. This means that the RLE augmented Hough transform is significantly faster than standard approaches and so can be used in adaptive vision systems. Figure 9 shows the variation in the required number of samples to identify various line sizes when the number of available edge pixels ranges from 10 to 80. It can be seen that for shorter lines there are significant improvements from the reduction in the number of edge pixels given by the RLE augmented Hough Transform.

Fig. 9. Samples required to identify lines of given size (with at least 5 candidate lines) for varying numbers of available edge pixels

IX. CONCLUSIONS

This paper has presented a real-time image processing algorithm based on run length encoding for a simulated biped robot system. This system can be implemented on real biped robot systems to detect obstacles quickly in the field of view. This paper has also presented an edge detection algorithm that uses a Sobel edge detection algorithm augmented with run length encoding to improve post-processing of the image using a linear Hough transform.
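The counting argument of equations 2-4 and the worked examples above (the 320-pixel horizon and 5-pixel line among 1000 edge pixels, and the 3-pixel line among 18 RLE pixels) can be reproduced with a short sketch; the function name is an assumption.

```python
def sample_pairs_required(line_pixels, total_edge_pixels, candidates=5):
    """Equations 2-4: number of random pixel pairs needed so that a line
    with line_pixels points yields `candidates` hits in parameter space."""
    p_i = line_pixels / total_edge_pixels      # eq. 2: pixel-hit probability
    p_ii = p_i ** 2                            # eq. 3: pair-hit probability
    return candidates / p_ii                   # eq. 4: required sample pairs

# Standard Hough transform on the full edge image: ~49 pairs for the
# horizon, 200,000 for a 5-pixel line. RLE-augmented, per-object: 180.
horizon = sample_pairs_required(320, 1000)
short_line = sample_pairs_required(5, 1000)
rle_case = sample_pairs_required(3, 18)
```

The per-object RLE figure (180) versus the whole-image figure (200,000) for comparably short lines is what drives the speedup claimed in this section.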
Although run length encoding and the RLE augmented Hough transform have shown promise, significant research effort needs to be directed at real-time generation of the color identifiers used in the RLE component of the vision system. In real-world environments with highly varying lighting conditions this dynamic update of color identifiers will be essential. Environments with gradual variations in color across an object will require additional approaches, such as modeling objects with multiple colors, for this technique to be applied successfully. The simulated system used in this study provides clean images and so does not reflect reality, where there are often variations in the image due to sensor noise. Localization of the robot relative to the obstacles and mapping of the environment under sensor noise require particle filtering and optimal filtering approaches, as discussed in [15, 16]. Future research will model sensor noise and will require these additional techniques to provide reliable recognition of obstacle positions and localization of the robot.

ACKNOWLEDGEMENTS

The authors would like to acknowledge the use of Massey University's parallel computer, the Helix, for the computational experiments that supported the results presented in this paper.

REFERENCES

[1] G. Sen Gupta, D. Bailey, C. Messom, "A New Colour Space for Efficient and Robust Segmentation", Proceedings of IVCNZ 2004.
[2] J. Bruce, T. Balch and M. Veloso, "Fast and Inexpensive Colour Image Segmentation for Interactive Robots", IROS 2000, San Francisco.
[3] C. H. Messom, S. Demidenko, K. Subramaniam and G. Sen Gupta, "Size/Position Identification in Real-Time Image Processing using Run Length Encoding", IEEE Instrumentation and Measurement Technology Conference, 2002.
[4] C. Zhou and Q. Meng, "Dynamic balance of a biped robot using fuzzy reinforcement learning agents", Fuzzy Sets and Systems, Vol. 134, No. 1, 2003.
[5] K. Jagannathan, G. Pratt, J. Pratt and A. Persaghian, "Pseudo-trajectory Control Scheme for a 3-D Model of a Biped Robot", Proceedings of ACRA, 2001.
[6] K. Jagannathan, G. Pratt, J. Pratt and A. Persaghian, "Pseudo-trajectory Control Scheme for a 3-D Model of a Biped Robot (Part 2. Body Trajectories)", Proceedings of CIRAS, 2001.
[7] J. Baltes, S. McGrath and J. Anderson, "Feedback Control of Walking for a Small Humanoid Robot", Proceedings of the FIRA World Congress, Vienna, Austria, 2003.
[8] C. H. Messom, "Vision Controlled Humanoid Toolkit", Knowledge-Based Intelligent Information and Engineering Systems, Lecture Notes in Artificial Intelligence vol 3213, Springer Verlag, Berlin Heidelberg, 2004.
[9] J. Pratt and G. Pratt, "Exploiting Natural Dynamics in the Control of a 3D Bipedal Walking Simulation", Proceedings of the International Conference on Climbing and Walking Robots.
[10] A. Boeing, S. Hanham and T. Braunl, "Evolving Autonomous Biped Control from Simulation to Reality", Proceedings of the 2nd International Conference on Autonomous Robots and Agents.
[11] J. Chestnutt, J. Kuffner, K. Nishiwaki and S. Kagami, "Planning Biped Navigation Strategies in Complex Environments", Proceedings of the International Conference on Humanoid Robotics.
[12] M. Ogino, Y. Katoh, M. Aono, M. Asada and K. Hosoda, "Vision-Based Reinforcement Learning for Humanoid Behaviour Generation with Rhythmic Walking Parameters", Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2003.
[13] O. Lorch, A. Albert, J. Denk, M. Gerecke, R. Cupec, J. F. Seara, W. Gerth and G. Schmidt, "Experiments in Vision-Guided Biped Walking", Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems.
[14] D. C. I. Walsh and A. E. Raftery, "Accurate and Efficient Curve Detection in Images: The Importance Sampling Hough Transform", Pattern Recognition, vol 35.
[15] G. Sen Gupta, C. H. Messom, S. Demidenko, "Real-Time Identification and Predictive Control of Fast Mobile Robots using Global Vision Sensing", IEEE Transactions on Instrumentation and Measurement, vol 54, No 1.
[16] D. C. K. Yuen and B. A. MacDonald, "Theoretical Considerations of Multiple Particle Filters for Simultaneous Localisation and Map-Building", Knowledge-Based Intelligent Information and Engineering Systems, Lecture Notes in Artificial Intelligence vol 3213, Springer Verlag, Berlin Heidelberg, 2004.
Presented at IMTC 2005, the IEEE Instrumentation and Measurement Technology Conference, Ottawa, Canada, 17-19 May 2005.
More informationThree-Dimensional Computer Vision
\bshiaki Shirai Three-Dimensional Computer Vision With 313 Figures ' Springer-Verlag Berlin Heidelberg New York London Paris Tokyo Table of Contents 1 Introduction 1 1.1 Three-Dimensional Computer Vision
More informationRectangle Positioning Algorithm Simulation Based on Edge Detection and Hough Transform
Send Orders for Reprints to reprints@benthamscience.net 58 The Open Mechanical Engineering Journal, 2014, 8, 58-62 Open Access Rectangle Positioning Algorithm Simulation Based on Edge Detection and Hough
More informationA Novel Map Merging Methodology for Multi-Robot Systems
, October 20-22, 2010, San Francisco, USA A Novel Map Merging Methodology for Multi-Robot Systems Sebahattin Topal Đsmet Erkmen Aydan M. Erkmen Abstract In this paper, we consider the problem of occupancy
More informationVehicle Detection Method using Haar-like Feature on Real Time System
Vehicle Detection Method using Haar-like Feature on Real Time System Sungji Han, Youngjoon Han and Hernsoo Hahn Abstract This paper presents a robust vehicle detection approach using Haar-like feature.
More informationTowards a Calibration-Free Robot: The ACT Algorithm for Automatic Online Color Training
Towards a Calibration-Free Robot: The ACT Algorithm for Automatic Online Color Training Patrick Heinemann, Frank Sehnke, Felix Streichert, and Andreas Zell Wilhelm-Schickard-Institute, Department of Computer
More informationA Document Image Analysis System on Parallel Processors
A Document Image Analysis System on Parallel Processors Shamik Sural, CMC Ltd. 28 Camac Street, Calcutta 700 016, India. P.K.Das, Dept. of CSE. Jadavpur University, Calcutta 700 032, India. Abstract This
More informationDept. of Adaptive Machine Systems, Graduate School of Engineering Osaka University, Suita, Osaka , Japan
An Application of Vision-Based Learning for a Real Robot in RoboCup - A Goal Keeping Behavior for a Robot with an Omnidirectional Vision and an Embedded Servoing - Sho ji Suzuki 1, Tatsunori Kato 1, Hiroshi
More informationMatching Evaluation of 2D Laser Scan Points using Observed Probability in Unstable Measurement Environment
Matching Evaluation of D Laser Scan Points using Observed Probability in Unstable Measurement Environment Taichi Yamada, and Akihisa Ohya Abstract In the real environment such as urban areas sidewalk,
More informationStudy on the Signboard Region Detection in Natural Image
, pp.179-184 http://dx.doi.org/10.14257/astl.2016.140.34 Study on the Signboard Region Detection in Natural Image Daeyeong Lim 1, Youngbaik Kim 2, Incheol Park 1, Jihoon seung 1, Kilto Chong 1,* 1 1567
More informationImage Thickness Correction for Navigation with 3D Intra-cardiac Ultrasound Catheter
Image Thickness Correction for Navigation with 3D Intra-cardiac Ultrasound Catheter Hua Zhong 1, Takeo Kanade 1,andDavidSchwartzman 2 1 Computer Science Department, Carnegie Mellon University, USA 2 University
More informationDecision Algorithm for Pool Using Fuzzy System
Decision Algorithm for Pool Using Fuzzy System S.C.Chua 1 W.C.Tan 2 E.K.Wong 3 V.C.Koo 4 1 Faculty of Engineering & Technology Tel: +60-06-252-3007, Fax: +60-06-231-6552, E-mail: scchua@mmu.edu.my 2 Faculty
More informationRobust Control of Bipedal Humanoid (TPinokio)
Available online at www.sciencedirect.com Procedia Engineering 41 (2012 ) 643 649 International Symposium on Robotics and Intelligent Sensors 2012 (IRIS 2012) Robust Control of Bipedal Humanoid (TPinokio)
More informationMeasurement of Pedestrian Groups Using Subtraction Stereo
Measurement of Pedestrian Groups Using Subtraction Stereo Kenji Terabayashi, Yuki Hashimoto, and Kazunori Umeda Chuo University / CREST, JST, 1-13-27 Kasuga, Bunkyo-ku, Tokyo 112-8551, Japan terabayashi@mech.chuo-u.ac.jp
More informationInverse Kinematics for Humanoid Robots Using Artificial Neural Networks
Inverse Kinematics for Humanoid Robots Using Artificial Neural Networks Javier de Lope, Rafaela González-Careaga, Telmo Zarraonandia, and Darío Maravall Department of Artificial Intelligence Faculty of
More informationCOMPARISON OF ROBOT NAVIGATION METHODS USING PERFORMANCE METRICS
COMPARISON OF ROBOT NAVIGATION METHODS USING PERFORMANCE METRICS Adriano Flores Dantas, Rodrigo Porfírio da Silva Sacchi, Valguima V. V. A. Odakura Faculdade de Ciências Exatas e Tecnologia (FACET) Universidade
More informationImage Enhancement Techniques for Fingerprint Identification
March 2013 1 Image Enhancement Techniques for Fingerprint Identification Pankaj Deshmukh, Siraj Pathan, Riyaz Pathan Abstract The aim of this paper is to propose a new method in fingerprint enhancement
More informationWave front Method Based Path Planning Algorithm for Mobile Robots
Wave front Method Based Path Planning Algorithm for Mobile Robots Bhavya Ghai 1 and Anupam Shukla 2 ABV- Indian Institute of Information Technology and Management, Gwalior, India 1 bhavyaghai@gmail.com,
More informationMobile Robot Path Planning in Static Environments using Particle Swarm Optimization
Mobile Robot Path Planning in Static Environments using Particle Swarm Optimization M. Shahab Alam, M. Usman Rafique, and M. Umer Khan Abstract Motion planning is a key element of robotics since it empowers
More informationOnline Environment Reconstruction for Biped Navigation
Online Environment Reconstruction for Biped Navigation Philipp Michel, Joel Chestnutt, Satoshi Kagami, Koichi Nishiwaki, James Kuffner and Takeo Kanade The Robotics Institute Digital Human Research Center
More informationResearch on the Algorithms of Lane Recognition based on Machine Vision
International Journal of Intelligent Engineering & Systems http://www.inass.org/ Research on the Algorithms of Lane Recognition based on Machine Vision Minghua Niu 1, Jianmin Zhang, Gen Li 1 Tianjin University
More informationPath Planning. Jacky Baltes Dept. of Computer Science University of Manitoba 11/21/10
Path Planning Jacky Baltes Autonomous Agents Lab Department of Computer Science University of Manitoba Email: jacky@cs.umanitoba.ca http://www.cs.umanitoba.ca/~jacky Path Planning Jacky Baltes Dept. of
More informationParticle Filtering. CS6240 Multimedia Analysis. Leow Wee Kheng. Department of Computer Science School of Computing National University of Singapore
Particle Filtering CS6240 Multimedia Analysis Leow Wee Kheng Department of Computer Science School of Computing National University of Singapore (CS6240) Particle Filtering 1 / 28 Introduction Introduction
More informationHuman Detection. A state-of-the-art survey. Mohammad Dorgham. University of Hamburg
Human Detection A state-of-the-art survey Mohammad Dorgham University of Hamburg Presentation outline Motivation Applications Overview of approaches (categorized) Approaches details References Motivation
More informationEdge and local feature detection - 2. Importance of edge detection in computer vision
Edge and local feature detection Gradient based edge detection Edge detection by function fitting Second derivative edge detectors Edge linking and the construction of the chain graph Edge and local feature
More informationA Three dimensional Path Planning algorithm
A Three dimensional Path Planning algorithm OUARDA HACHOUR Department Of Electrical and Electronics Engineering Faculty Of Engineering Sciences Signal and System Laboratory Boumerdes University Boulevard
More informationGraph-based Planning Using Local Information for Unknown Outdoor Environments
Graph-based Planning Using Local Information for Unknown Outdoor Environments Jinhan Lee, Roozbeh Mottaghi, Charles Pippin and Tucker Balch {jinhlee, roozbehm, cepippin, tucker}@cc.gatech.edu Center for
More informationComputer Vision. Image Segmentation. 10. Segmentation. Computer Engineering, Sejong University. Dongil Han
Computer Vision 10. Segmentation Computer Engineering, Sejong University Dongil Han Image Segmentation Image segmentation Subdivides an image into its constituent regions or objects - After an image has
More informationLocal Image Registration: An Adaptive Filtering Framework
Local Image Registration: An Adaptive Filtering Framework Gulcin Caner a,a.murattekalp a,b, Gaurav Sharma a and Wendi Heinzelman a a Electrical and Computer Engineering Dept.,University of Rochester, Rochester,
More informationMachine Learning on Physical Robots
Machine Learning on Physical Robots Alfred P. Sloan Research Fellow Department or Computer Sciences The University of Texas at Austin Research Question To what degree can autonomous intelligent agents
More informationCIRCULAR MOIRÉ PATTERNS IN 3D COMPUTER VISION APPLICATIONS
CIRCULAR MOIRÉ PATTERNS IN 3D COMPUTER VISION APPLICATIONS Setiawan Hadi Mathematics Department, Universitas Padjadjaran e-mail : shadi@unpad.ac.id Abstract Geometric patterns generated by superimposing
More informationLecture 7: Most Common Edge Detectors
#1 Lecture 7: Most Common Edge Detectors Saad Bedros sbedros@umn.edu Edge Detection Goal: Identify sudden changes (discontinuities) in an image Intuitively, most semantic and shape information from the
More informationProf. Fanny Ficuciello Robotics for Bioengineering Visual Servoing
Visual servoing vision allows a robotic system to obtain geometrical and qualitative information on the surrounding environment high level control motion planning (look-and-move visual grasping) low level
More informationPOME A mobile camera system for accurate indoor pose
POME A mobile camera system for accurate indoor pose Paul Montgomery & Andreas Winter November 2 2016 2010. All rights reserved. 1 ICT Intelligent Construction Tools A 50-50 joint venture between Trimble
More informationDevelopment of a Fall Detection System with Microsoft Kinect
Development of a Fall Detection System with Microsoft Kinect Christopher Kawatsu, Jiaxing Li, and C.J. Chung Department of Mathematics and Computer Science, Lawrence Technological University, 21000 West
More informationHorus: Object Orientation and Id without Additional Markers
Computer Science Department of The University of Auckland CITR at Tamaki Campus (http://www.citr.auckland.ac.nz) CITR-TR-74 November 2000 Horus: Object Orientation and Id without Additional Markers Jacky
More informationLocal Image preprocessing (cont d)
Local Image preprocessing (cont d) 1 Outline - Edge detectors - Corner detectors - Reading: textbook 5.3.1-5.3.5 and 5.3.10 2 What are edges? Edges correspond to relevant features in the image. An edge
More informationAn Image Based Approach to Compute Object Distance
An Image Based Approach to Compute Object Distance Ashfaqur Rahman * Department of Computer Science, American International University Bangladesh Dhaka 1213, Bangladesh Abdus Salam, Mahfuzul Islam, and
More informationPersonal Navigation and Indoor Mapping: Performance Characterization of Kinect Sensor-based Trajectory Recovery
Personal Navigation and Indoor Mapping: Performance Characterization of Kinect Sensor-based Trajectory Recovery 1 Charles TOTH, 1 Dorota BRZEZINSKA, USA 2 Allison KEALY, Australia, 3 Guenther RETSCHER,
More informationA New Algorithm for Detecting Text Line in Handwritten Documents
A New Algorithm for Detecting Text Line in Handwritten Documents Yi Li 1, Yefeng Zheng 2, David Doermann 1, and Stefan Jaeger 1 1 Laboratory for Language and Media Processing Institute for Advanced Computer
More informationBall tracking with velocity based on Monte-Carlo localization
Book Title Book Editors IOS Press, 23 1 Ball tracking with velocity based on Monte-Carlo localization Jun Inoue a,1, Akira Ishino b and Ayumi Shinohara c a Department of Informatics, Kyushu University
More informationLast update: May 6, Robotics. CMSC 421: Chapter 25. CMSC 421: Chapter 25 1
Last update: May 6, 2010 Robotics CMSC 421: Chapter 25 CMSC 421: Chapter 25 1 A machine to perform tasks What is a robot? Some level of autonomy and flexibility, in some type of environment Sensory-motor
More informationOn Road Vehicle Detection using Shadows
On Road Vehicle Detection using Shadows Gilad Buchman Grasp Lab, Department of Computer and Information Science School of Engineering University of Pennsylvania, Philadelphia, PA buchmag@seas.upenn.edu
More informationCOMP 102: Computers and Computing
COMP 102: Computers and Computing Lecture 23: Computer Vision Instructor: Kaleem Siddiqi (siddiqi@cim.mcgill.ca) Class web page: www.cim.mcgill.ca/~siddiqi/102.html What is computer vision? Broadly speaking,
More informationA Symmetry Operator and Its Application to the RoboCup
A Symmetry Operator and Its Application to the RoboCup Kai Huebner Bremen Institute of Safe Systems, TZI, FB3 Universität Bremen, Postfach 330440, 28334 Bremen, Germany khuebner@tzi.de Abstract. At present,
More informationSPATIAL GUIDANCE TO RRT PLANNER USING CELL-DECOMPOSITION ALGORITHM
SPATIAL GUIDANCE TO RRT PLANNER USING CELL-DECOMPOSITION ALGORITHM Ahmad Abbadi, Radomil Matousek, Pavel Osmera, Lukas Knispel Brno University of Technology Institute of Automation and Computer Science
More informationLow-level Image Processing for Lane Detection and Tracking
Low-level Image Processing for Lane Detection and Tracking Ruyi Jiang 1, Reinhard Klette 2, Shigang Wang 1, and Tobi Vaudrey 2 1 Shanghai Jiao Tong University, Shanghai, China 2 The University of Auckland,
More informationA Survey of Light Source Detection Methods
A Survey of Light Source Detection Methods Nathan Funk University of Alberta Mini-Project for CMPUT 603 November 30, 2003 Abstract This paper provides an overview of the most prominent techniques for light
More informationSupplementary Material for SphereNet: Learning Spherical Representations for Detection and Classification in Omnidirectional Images
Supplementary Material for SphereNet: Learning Spherical Representations for Detection and Classification in Omnidirectional Images Benjamin Coors 1,3, Alexandru Paul Condurache 2,3, and Andreas Geiger
More informationLocalization of Multiple Robots with Simple Sensors
Proceedings of the IEEE International Conference on Mechatronics & Automation Niagara Falls, Canada July 2005 Localization of Multiple Robots with Simple Sensors Mike Peasgood and Christopher Clark Lab
More informationCover Page. Abstract ID Paper Title. Automated extraction of linear features from vehicle-borne laser data
Cover Page Abstract ID 8181 Paper Title Automated extraction of linear features from vehicle-borne laser data Contact Author Email Dinesh Manandhar (author1) dinesh@skl.iis.u-tokyo.ac.jp Phone +81-3-5452-6417
More informationVisual Attention Control by Sensor Space Segmentation for a Small Quadruped Robot based on Information Criterion
Visual Attention Control by Sensor Space Segmentation for a Small Quadruped Robot based on Information Criterion Noriaki Mitsunaga and Minoru Asada Dept. of Adaptive Machine Systems, Osaka University,
More informationCamera Model and Calibration
Camera Model and Calibration Lecture-10 Camera Calibration Determine extrinsic and intrinsic parameters of camera Extrinsic 3D location and orientation of camera Intrinsic Focal length The size of the
More informationLevel lines based disocclusion
Level lines based disocclusion Simon Masnou Jean-Michel Morel CEREMADE CMLA Université Paris-IX Dauphine Ecole Normale Supérieure de Cachan 75775 Paris Cedex 16, France 94235 Cachan Cedex, France Abstract
More informationEdge detection. Gradient-based edge operators
Edge detection Gradient-based edge operators Prewitt Sobel Roberts Laplacian zero-crossings Canny edge detector Hough transform for detection of straight lines Circle Hough Transform Digital Image Processing:
More informationCS 223B Computer Vision Problem Set 3
CS 223B Computer Vision Problem Set 3 Due: Feb. 22 nd, 2011 1 Probabilistic Recursion for Tracking In this problem you will derive a method for tracking a point of interest through a sequence of images.
More informationInverse Kinematics for Humanoid Robots using Artificial Neural Networks
Inverse Kinematics for Humanoid Robots using Artificial Neural Networks Javier de Lope, Rafaela González-Careaga, Telmo Zarraonandia, and Darío Maravall Department of Artificial Intelligence Faculty of
More informationFitting: The Hough transform
Fitting: The Hough transform Voting schemes Let each feature vote for all the models that are compatible with it Hopefully the noise features will not vote consistently for any single model Missing data
More informationA NOVEL LANE FEATURE EXTRACTION ALGORITHM BASED ON DIGITAL INTERPOLATION
17th European Signal Processing Conference (EUSIPCO 2009) Glasgow, Scotland, August 24-28, 2009 A NOVEL LANE FEATURE EXTRACTION ALGORITHM BASED ON DIGITAL INTERPOLATION Yifei Wang, Naim Dahnoun, and Alin
More informationStraight Lines and Hough
09/30/11 Straight Lines and Hough Computer Vision CS 143, Brown James Hays Many slides from Derek Hoiem, Lana Lazebnik, Steve Seitz, David Forsyth, David Lowe, Fei-Fei Li Project 1 A few project highlights
More informationCS4758: Rovio Augmented Vision Mapping Project
CS4758: Rovio Augmented Vision Mapping Project Sam Fladung, James Mwaura Abstract The goal of this project is to use the Rovio to create a 2D map of its environment using a camera and a fixed laser pointer
More informationResearch on Evaluation Method of Video Stabilization
International Conference on Advanced Material Science and Environmental Engineering (AMSEE 216) Research on Evaluation Method of Video Stabilization Bin Chen, Jianjun Zhao and i Wang Weapon Science and
More informationRelating Local Vision Measurements to Global Navigation Satellite Systems Using Waypoint Based Maps
Relating Local Vision Measurements to Global Navigation Satellite Systems Using Waypoint Based Maps John W. Allen Samuel Gin College of Engineering GPS and Vehicle Dynamics Lab Auburn University Auburn,
More information