A Visual Landmark Recognition System for Autonomous Robot Navigation
Quoc V. Do and Lakhmi C. Jain
Knowledge-Based Intelligent Engineering Systems Centre, School of Electrical and Information Engineering, University of South Australia, Mawson Lakes Boulevard, Mawson Lakes, SA, Australia

Abstract

This paper presents a vision system for autonomously guiding a robot along a known route using a single CCD camera. The prominent feature of the system is the real-time recognition of shape-based visual landmarks in cluttered backgrounds, using a memory feedback modulation (MFM) mechanism, which allows knowledge from the memory to interact with and enhance the earlier stages of the system. Its feasibility for autonomous robot navigation is demonstrated in both indoor and outdoor experiments using a vision-based navigating vehicle.

1 Introduction

Autonomous robot navigation by means of visual landmark recognition has been an attractive field of research that models the way humans navigate. A human navigator does not need to memorise everything along a pathway, only significant or salient features. These features are known as landmarks and serve as navigational references that bring about space perception. For instance, traditional mariners relied on observations of the stars' positions to localise their ships and navigate successfully across the ocean. Humans recognise landmarks and memorise previously traversed routes with extreme accuracy and apparently with minimal effort. Despite over thirty years of intensive research, and rapid advances in technology and computational power, emulating this ability remains a challenge for both the computer vision and robotics research communities. The major challenge lies in the excessive amount of computation required to process 2D visual data, especially given the need for real-time object recognition [1].
A vision system that takes minutes to extract useful information from an image is useless in the context of navigation, as the scene changes significantly while the robot progresses. The current observation must therefore be processed completely before the scene varies excessively, which requires real-time processing. Most importantly, salient objects in the surrounding environment appear in complex cluttered backgrounds and are viewed differently depending on the approach angle and environmental conditions. As a result, many researchers have simplified the problem, taking a bottom-up approach that considers the recognition of simple object-based landmarks instead of complex ones. For instance, Cheng and Zelinsky [2] used image processing hardware to recognise circular objects such as soccer and tennis balls in clean backgrounds to guide the YamHa robot in an indoor environment. Similarly, Mata et al. [3] developed a genetic algorithm for recognising quadrangular shapes on walls (doors, windows and posters) to guide a robot along a corridor. Furthermore, Li and Yang [4] implemented a genetic algorithm for detecting numerical signs (a black digit 0-9 on a white plate) placed at critical points along the navigational route; the system recognised these highly contrasted signs to self-localise the robot in an outdoor environment. Unlike the above approaches, this paper describes a vision system that focuses on recognising object-based visual landmarks in cluttered backgrounds. The system has a novel memory feedback modulation (MFM) mechanism, which uses knowledge from the memory to enhance the operation of the earlier stages. This achieves object-background separation at the feature extraction stage and real-time visual landmark recognition. The remainder of the paper is structured as follows: Section 2 describes the implementation of the vision system.
Section 3 presents the indoor and outdoor experiments that evaluate and demonstrate the feasibility of the vision system in mobile robotic applications. Finally, conclusions are presented in Section 4.

2 The Vision System

The vision system exploits and integrates concepts from the pre-attentive and attentive stages of the human visual system [5, 6], the feedforward architecture that underlies computational template matching approaches, and the top-down feedback facilitatory and inhibitory modulation mechanism of the selective attention adaptive resonance theory (SAART) neural
network [7], in order to implement a feedforward-feedback architecture. However, unlike computational template approaches, the vision system operates directly on the high-dimensional image space rather than on a dimensionally reduced image obtained through transformations such as principal component analysis (PCA) or linear discriminant analysis (LDA) [8]. Similarly, it uses the concept of top-down facilitatory and inhibitory modulation from the SAART neural network [7, 9], but reaches the steady-state solution much faster and offers real-time processing [10, 11]. The vision system is illustrated in Figure 1.

Figure 1. The developed visual landmark recognition architecture, which combines concepts from the human visual system, biological neural networks and template matching approaches.

2.1 Memory Module

The memory templates are selected manually using a simple process, which involves acquiring a grey-level image of an object that serves as a landmark within a clean background, even though the system subsequently recognises the landmarks in complex scenes. Sobel edge detection is applied to the input image, and a region of size 50x50 pixels that encloses the landmark is extracted and stored as a memory template, as shown in Figure 2(b). Each template is used to create two additional binary memory filters: the memory active edge (MAE) filter and the landmark enclosed area (LEA) filter. The MAE filter is created by applying eq.1 to the memory template, while the LEA filter is obtained by analysing the region that the landmark occupies, as shown in Figure 2(c) and Figure 2(d) respectively.

MAE(x,y) = 1 if E(x,y) > τ; MAE(x,y) = 0 otherwise (1)

where E(x,y) is the edge image, τ is a small threshold (τ = 0.1, selected by trial and error) and MAE(x,y) is the memory active edge filter.

Figure 2. The memory template and memory binary filters: (a) the grey-level image, (b) the memory template, (c) the memory active edge (MAE) filter and (d) the landmark enclosed area (LEA) filter.

2.2 Feature Extraction

The feature extraction is based on the MFM mechanism and lateral competition. The former is developed by extracting and extending the SAART top-down memory feedback facilitation and inhibition modulation [7] to cope with real-time visual landmark recognition in cluttered backgrounds. The mechanism uses the memory filters associated with the active memory template to selectively enhance the desired information while discarding insignificant data from the bottom-up edge input, improving the performance of the extraction process. In practice, object-background separation is achieved using eq.2, where the extracted feature Ex(x,y) is the sum of the input S(x,y) and the product of the input, the memory filter MAE(x,y) and a control gain G:

Ex(x,y) = S(x,y) + G(S(x,y) * MAE(x,y)) (2)

The MAE filter denotes the pixel locations in the memory template that encode the landmark's shape. Therefore, pixels in the input edge image that align with the active pixels in the MAE filter are amplified, while the others remain unmodified because the term S(x,y) * MAE(x,y) in eq.2 equals zero. These unmodified pixels are regarded as background features and are removed by lateral competition. The lateral competition between pixels within the extracted region Ex(x,y) is implemented in a simple but effective way using the unit-vector operation, which divides every pixel by the length of the image vector. As a result of the memory modulation in eq.2, high-valued pixels experience only a small amount of suppression from smaller-valued pixels, while small pixels suffer a much higher level of inhibition from larger ones. This effectively causes small pixels to be
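The filter creation of eq.1 and the modulation of eq.2 can be sketched in NumPy. This is a minimal sketch, not the paper's implementation: the function names, the gain and cut-off values, and in particular the hole-filling used to approximate the landmark enclosed area (the paper does not give its exact LEA procedure) are assumptions.

```python
import numpy as np
from scipy import ndimage

def make_memory_filters(gray_template, tau=0.1):
    """Build the binary MAE and LEA filters from a grey-level memory template.

    Sobel edges plus the eq.1 threshold give the MAE filter; approximating the
    landmark enclosed area (LEA) by filling the outline's holes is an assumption.
    """
    gx = ndimage.sobel(gray_template.astype(float), axis=1)
    gy = ndimage.sobel(gray_template.astype(float), axis=0)
    edges = np.hypot(gx, gy)
    edges /= edges.max() + 1e-9                  # normalise edge strengths to [0, 1]
    mae = (edges > tau).astype(np.uint8)         # eq.1: MAE(x,y) = 1 where E(x,y) > tau
    lea = ndimage.binary_fill_holes(mae).astype(np.uint8)
    return mae, lea

def extract_features(edge_input, mae, gain=2.0, cutoff=0.05):
    """Eq.2 memory feedback modulation followed by unit-vector lateral competition."""
    ex = edge_input + gain * (edge_input * mae)  # eq.2: aligned pixels are amplified
    ex = ex / (np.linalg.norm(ex) + 1e-9)        # divide by the image-vector length
    ex[ex < cutoff] = 0.0                        # weak (background) pixels are discarded
    return ex
```

With a bright square as the landmark, the MAE filter marks the square's outline, the LEA filter covers its whole area, and eq.2 boosts only those input edges that align with the outline, so the unit-vector normalisation pushes unaligned background edges below the cut-off.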
suppressed below a pre-set threshold and discarded, achieving object-background separation.

2.3 Landmark Recognition Stage

The aim of this stage is to establish a unique correspondence between the input pattern and the active template in the memory database. The landmark recognition module uses a simple but effective template matching algorithm, governed by the MFM mechanism using both the MAE and the LEA filters. These are used to create matching channels that selectively control and guide the matching process. The match between the data lying within the matching channel in the input image and in the memory template is assessed by analysing their similarity. Several methods have been proposed for this, including the sum of absolute differences [12], the Hausdorff distance [13] and the cosine rule [7]. This system employs the cosine rule, implemented using eq.3. This approach produces a degree of match (DoM) that ranges from 0 to 1, where one represents a 100% match, providing an easy means of setting a match threshold that determines the recognition status of the desired landmark. Each DoM is evaluated against a match threshold of 90%; an input region with a DoM greater than the threshold is passed into an additional evaluation stage to ensure robust recognition.

DoM = (ε + Σ P(x,y) * M(x,y) * F(x,y)) / (√(Σ P(x,y) * P(x,y) * F(x,y)) * √(Σ M(x,y) * M(x,y) * F(x,y))) (3)

where DoM is the degree of match, ε is a small constant, P(x,y) is the input region, M(x,y) is the current memory template and F(x,y) is the MAE filter.

The evaluation stage is developed to increase the system's robustness by overcoming a deficiency of the MFM mechanism, inherited from its predecessor, the top-down memory feedback facilitation and inhibition in the SAART neural network, and reported in [7, 9].
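Eq.3 is a masked cosine similarity, and a minimal NumPy sketch follows; the function name and flattened-array handling are my own, not the paper's code.

```python
import numpy as np

def degree_of_match(patch, template, filt, eps=1e-6):
    """Masked cosine similarity of eq.3.

    `filt` is the binary matching channel: the MAE filter on the first pass,
    or the LEA filter when re-matching in the evaluation stage.
    """
    num = eps + np.sum(patch * template * filt)
    den = np.sqrt(np.sum(patch ** 2 * filt)) * np.sqrt(np.sum(template ** 2 * filt))
    return num / (den + eps)
```

A region identical to the template scores 1.0, so thresholding at 0.9 corresponds directly to the paper's 90% match criterion, and the same routine serves the evaluation stage by passing the LEA filter in place of the MAE filter.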
The MFM mechanism focuses on input edge activities that align with the memory feedback pathways, which denote the pixel positions describing the landmark's shape, in order to separate relevant data from background features. The deficiency is revealed when edge activities within a complex cluttered input region have sufficient edge alignments with the memory feedback pathways, resulting in the false formation of a new shape (through the MAE filter) that is very similar to the current memory template, as illustrated in Figure 3, even though the desired landmark is not present in the input. This leads to occasional false landmark detections. The problem could be overcome by top-down modulation of the edge detection at different orientations, but that is computationally intensive. Instead, the issue is resolved using additional memory feedback pathways created by the LEA filter, which provide the necessary memory guidance for assessing the landmark's surface information, as shown in Figure 3. As illustrated, the surface of the landmark in the memory contains no edges and is completely different from the feature created by the LEA filter. Thus, false landmark recognition is overcome by re-matching the extracted feature against the memory template using eq.3, with the LEA filter replacing the MAE filter.

Figure 3. The evaluation process for overcoming the deficiency of the MFM mechanism.

2.4 The Searching Stage

This section presents an extension of previous work [10], named memory-assisted local edge analysis (MALEA). It emulates concepts of the pre-attentive stage in the human visual system using the MFM mechanism, which provides memory feedback pathways that give access to the distribution of edge information within the memory template. This information is used to guide the evaluation of the likelihood that a region contains the landmark, as illustrated in Figure 4.
The central idea of the MALEA approach is to use the memory active edge (MAE) and landmark enclosed area (LEA) filters to provide memory guidance for determining the regions of interest (ROIs) within the input image. The search considers only pixels that correspond to these filters, as only these edges are relevant in describing the landmark's shape. All others are discarded, which significantly reduces the amount of computation required. The searching process extracts patches as a search window scans across the input image. Regions that satisfy the ROI threshold are passed on and further evaluated against the signature threshold to be confirmed and classified as ROIs; otherwise they are discarded. Both the ROI and the signature thresholds are set dynamically for each activated memory template.
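The window scan can be sketched under simple assumptions: here the score is just the fraction of MAE-filter pixels hit by input edges, and the step size and fixed threshold are illustrative (the paper sets its ROI and signature thresholds dynamically per template).

```python
import numpy as np

def malea_search(edge_image, mae, roi_threshold=0.6, step=4):
    """Scan a window over the edge image and return candidate ROIs.

    Only pixels under the MAE filter contribute to the score, so background
    edges outside the landmark's shape are ignored during the search.
    """
    h, w = mae.shape
    active = mae.sum() + 1e-9
    rois = []
    for y in range(0, edge_image.shape[0] - h + 1, step):
        for x in range(0, edge_image.shape[1] - w + 1, step):
            patch = edge_image[y:y + h, x:x + w]
            score = float((patch * mae).sum() / active)  # fraction of filter pixels matched
            if score >= roi_threshold:
                rois.append((x, y, score))
    return rois
```

Each returned ROI would then go to the full eq.3 match; discarding low-scoring windows first is what saves the computation the section describes.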
Figure 4. The process of the MALEA searching approach.

2.5 Invariant Landmark Recognition

A significant amount of evidence suggests that the human visual system uses a view-based representation, in which an object is represented by multiple 2D sample images covering the viewing angles around the object [14]. Inspired by these findings, the developed vision system stores multiple views of every known landmark. However, the number of views required to sufficiently represent an object can be infinite. Therefore, the architecture employs two concepts, known as band transformation and shape attraction, to limit the number of 2D memory templates needed to uniquely represent each chosen landmark. The purpose of band transformation and shape attraction is to recover distorted edges by considering adjacent pixels, exploiting the fact that such edges commonly reappear in their neighbourhood. The analysis of adjoining pixels enables distorted edges to be compensated and recovered, giving the system an elementary size- and view-invariant landmark recognition capability. By treating this feature as a building block, a simple but effective method has been developed for recognising an object from different views [11].

3 Results

3.1 Real-Image Simulations

This section describes the evaluation of the vision system and compares it with the traditional template matching approach, using a large number of real images simulating two types of background clutter: proximity and distance. The former are objects that appear immediately behind or close to the landmark; the latter are features in the far background, which appear naturally in the environment and represent different levels of distance clutter. Different indoor and outdoor scenes were chosen to generate different levels of distance clutter. At each location, five different landmarks were used, and four different images were captured for each landmark to simulate the following levels of proximity clutter:

Clean-Background: the landmark was placed alone in the scene.
Clutter_1: up to three objects were placed behind the landmark to create proximity clutter.
Clutter_2: at least four objects were placed behind the landmark.
Light-Reduction: the lighting level was varied by turning off some of the existing lights. This condition was omitted for the outdoor scene.

A total of 75 different input images were collected: 20 clean-background, 20 Clutter_1, 20 Clutter_2 and 15 light-reduction images. These were fed into both the developed vision system and the traditional template matching method to evaluate and compare their performance. All the image processing stages were kept constant; the only difference was the additional MFM mechanism.

The recognition results are summarised in four graphs, shown in Figure 5. In these graphs, the vertical axis shows the degree of match (DoM), while the horizontal axis indicates the corresponding input image. The first five sample images were collected from the laboratory environment, samples 6-10 were taken from the corridor, samples 11-15 were generated in the foyer and samples 16-20 were gathered in an outdoor environment.

Figure 5. Comparison between the proposed landmark recognition architecture and traditional template matching approaches: (a) clean-background input images, (b) Clutter_1 images, (c) Clutter_2 images, (d) light-reduction images.

The graphs in Figure 5 show clearly that the developed vision system (_MFM) outperforms the traditional template matching approach (_Trad). The vision system produced very high
degrees of match (DoMs), fluctuating around 90%, while the traditional template matching approach had comparatively low DoMs, fluctuating between 50% and 80%. The graphs further show that the template matching approach performed best with clean-background images and gradually degraded as the level of proximity clutter increased, while the developed vision system maintained high and stable performance. In addition, the sample images with a DoM value of zero in the graphs indicate situations where the template matching approach failed to detect the landmark within the input image, because background regions obtained DoMs greater than the region containing the landmark. In comparison, the proposed vision system maintained DoMs at approximately 90% for all of these sample images.

3.2 Real-Time Experiments

A robot platform was designed and implemented to test and evaluate the vision system's feasibility in vision-based robot navigation, as illustrated in Figure 6. The robot has a remote vision system, implemented on a Pentium IV at 2.4 GHz; communication is achieved using wireless RF data and video links. On board the robot are an embedded PC (PC/104) running the Linux operating system, a TCM2 magnetic compass, wheel odometers and GP2YA02YK infrared range sensors.

Figure 6. The robot navigation system, consisting of the remote image processing module, implemented on a local computer that communicates with the navigating platform using wireless RF data and video links.

A number of indoor and outdoor trials were performed to investigate the vision system's feasibility in vision-based mobile robotics. The robot was provided with topological maps of the indoor and outdoor environments, 8 m and 42 m in length respectively, shown in Figure 7. These routes were created in the laboratory and car park environments of the School of Electrical and Information Engineering, University of South Australia. During navigation the robot used these maps to perform self-localisation, approximating its current position in the environment; upon successfully recognising the desired visual landmark, it travelled in the direction of the next expected object. Thus, autonomous route traversal was achieved by successfully recognising a pre-defined sequence of targets.

Figure 7. Topological maps: (a) indoor, (b) outdoor.

Three different indoor laboratory trials were conducted under three different conditions: clean background, cluttered background and light reduction. In the clean-background trial, objects were placed in front of a clean area without any others in close proximity. Conversely, in the cluttered-background trial, the objects were placed immediately in front of many others to simulate proximity background clutter. Similarly, in the light-reduction trial, the landmarks were embedded in complex backgrounds with reduced lighting in the laboratory (all the lights were turned off, leaving only a minimal amount of light entering from the windows). The robot successfully navigated the pre-defined path, starting from L1, traversing to L2 and L3 and returning to L1, in all three trials. This involved the reliable recognition of nine visual landmarks, under different levels of close-proximity background clutter and light variation, while navigating the indoor environment along a route 24 m in length at an average speed of 10 cm/s. These trials were video recorded.

Outdoor robot experiments were set up with four landmarks placed at four different corners, forming an enclosed path. Proximity background clutter was created by placing books, folders, tree branches, leaves and other objects around the landmarks.
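The sequential route traversal used in these trials — recognise the expected landmark, self-localise, then head for the next — can be sketched as a simple loop. `recognise` and `steer_towards` are hypothetical stand-ins for the vision system and motion controller, not the paper's API.

```python
def follow_route(landmark_sequence, recognise, steer_towards):
    """Traverse a topological route by recognising each expected landmark in turn."""
    for target in landmark_sequence:
        # Keep observing until the vision system confirms the expected landmark;
        # on the real robot this loop runs while the platform drives and scans.
        while not recognise(target):
            pass
        steer_towards(target)  # self-localise, then travel toward the next target
    return True
```

The robot's indoor route L1 -> L2 -> L3 -> L1 corresponds to calling such a loop with that landmark sequence.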
This clutter was designed to illustrate the vision system's ability to recognise visual landmarks embedded in four heavily cluttered outdoor scenes. Two sample images from the day and night trials are shown in Figure 8. Ten day trials were conducted under cloudy conditions. The first five were performed in a clockwise direction, with the robot travelling from L4 to L1, L2, L3 and back to L4. During navigation, the robot was required to recognise four different landmarks at the four corners at an average speed of 13 cm/s. The robot successfully completed four of the five trials. Similarly, five further trials were carried out in an anti-clockwise direction, with a different combination of landmarks and background clutter; the robot succeeded in four of the five. For the night trials, the main light source was the car park's light poles, and a different arrangement of landmarks was used. The robot was able to completely traverse three of the five trials. The two failed trials were the result of shadows caused by the multiple light sources in the car park: the induced shadows suppressed the detection of the landmarks' edges while introducing additional unwanted edges, leading to many landmark recognition failures. These trials were video recorded. In summary, the robot successfully traversed eleven of the fifteen trials across day and night conditions, recognising 48 out of 60 landmarks while travelling at an average speed of 13 cm/s along a path 42 m in length.

Figure 8. Sample images of the day and night trials: (a) and (b) are sample scenes from the day trials; (c) and (d) are sample scenes from the night trials.

4 Conclusions

This paper has presented a novel vision system for detecting 2D edge-based visual landmarks embedded in cluttered backgrounds. The system is based on the biologically motivated SAART neural network, the pre-attentive and attentive stages of the human visual system, and the traditional template matching approach. It is able to extract relevant data while simultaneously suppressing irrelevant data from the bottom-up input using the memory feedback modulation (MFM) mechanism. The simulation results show that the vision system's ability to recognise visual landmarks in cluttered scenes is superior to traditional edge-based template matching. The vision system has successfully demonstrated its feasibility for vision-based mobile robot navigation through indoor and outdoor experiments. In the indoor trials, the robot successfully navigated the pre-defined routes, recognising all the visual landmarks, which were purposely embedded in complex backgrounds and subjected to light reduction. Similarly, in the outdoor environment, the robot successfully traversed eleven of fifteen trials, with the landmarks situated under extremes of proximity and distance clutter and varying lighting conditions, as trials were conducted both day and night.
Acknowledgment

The work described in this paper was funded by the Weapons Systems Division, Australian Defence Science and Technology Organisation.

References

[1] R. Schalkoff, Digital Image Processing and Computer Vision, John Wiley and Sons, Inc.
[2] G. Cheng and A. Zelinsky, "Goal-oriented behaviour-based visual navigation," in Proc. IEEE International Conference on Robotics and Automation, 1998.
[3] M. Mata, J. M. Armingol, A. de la Escalera, and M. A. Salichs, "Using learned visual landmarks for intelligent topological navigation of mobile robots," in Proc. IEEE International Conference on Robotics and Automation (ICRA-03), 2003.
[4] H. Li and S. X. Yang, "A behavior-based mobile robot with a visual landmark-recognition system," IEEE/ASME Transactions on Mechatronics, vol. 8.
[5] S. J. Luck, S. Fan, and S. A. Hillyard, "Attention-related modulation of sensory-evoked brain activity in a visual search task," Journal of Cognitive Neuroscience, vol. 5.
[6] E. Weichselgartner and G. Sperling, "Dynamics of automatic and controlled visual attention," Science, vol. 238.
[7] P. Lozo, "Neural theory and model of selective visual attention and 2D shape recognition in visual clutter," PhD Thesis, Department of Electrical and Electronic Engineering, University of Adelaide.
[8] L. Wolf and S. Bileschi, "Combining variable selection with dimensionality reduction," 2005.
[9] P. Lozo and C.-C. Lim, "Neural circuit for object recognition in complex and cluttered visual images," in Proc. Australian and New Zealand Conference on Intelligent Information Systems, 1996.
[10] Q. V. Do, P. Lozo, and L. Jain, "A fast visual search and recognition mechanism for real-time robotic applications," in Proc. 17th Australian Joint Conference on Artificial Intelligence, Cairns, Australia, 2004.
[11] Q. V. Do, P. Lozo, and L. C. Jain, "Autonomous robot navigation using SAART for visual landmark recognition," in Proc. 2nd International Conference on Artificial Intelligence in Science and Technology, Tasmania, Australia, 2004.
[12] C. Watman, D. Austin, N. Barnes, G. Overett, and S. Thompson, "Fast sum of absolute differences visual landmark detector," in Proc. IEEE International Conference on Robotics and Automation, 2004.
[13] D. P. Huttenlocher, G. A. Klanderman, and W. J. Rucklidge, "Comparing images using the Hausdorff distance," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, pp. 850.
[14] N. K. Logothetis and J. Pauls, "Psychophysical and physiological evidence for view-centered object representations in the primate," Cerebral Cortex, vol. 3, 1995.
More informationFace Recognition Using Vector Quantization Histogram and Support Vector Machine Classifier Rong-sheng LI, Fei-fei LEE *, Yan YAN and Qiu CHEN
2016 International Conference on Artificial Intelligence: Techniques and Applications (AITA 2016) ISBN: 978-1-60595-389-2 Face Recognition Using Vector Quantization Histogram and Support Vector Machine
More informationCOMPUTER-BASED WORKPIECE DETECTION ON CNC MILLING MACHINE TOOLS USING OPTICAL CAMERA AND NEURAL NETWORKS
Advances in Production Engineering & Management 5 (2010) 1, 59-68 ISSN 1854-6250 Scientific paper COMPUTER-BASED WORKPIECE DETECTION ON CNC MILLING MACHINE TOOLS USING OPTICAL CAMERA AND NEURAL NETWORKS
More informationHuman Detection. A state-of-the-art survey. Mohammad Dorgham. University of Hamburg
Human Detection A state-of-the-art survey Mohammad Dorgham University of Hamburg Presentation outline Motivation Applications Overview of approaches (categorized) Approaches details References Motivation
More informationTowards a Calibration-Free Robot: The ACT Algorithm for Automatic Online Color Training
Towards a Calibration-Free Robot: The ACT Algorithm for Automatic Online Color Training Patrick Heinemann, Frank Sehnke, Felix Streichert, and Andreas Zell Wilhelm-Schickard-Institute, Department of Computer
More informationDomain Adaptation For Mobile Robot Navigation
Domain Adaptation For Mobile Robot Navigation David M. Bradley, J. Andrew Bagnell Robotics Institute Carnegie Mellon University Pittsburgh, 15217 dbradley, dbagnell@rec.ri.cmu.edu 1 Introduction An important
More informationA Robust Wipe Detection Algorithm
A Robust Wipe Detection Algorithm C. W. Ngo, T. C. Pong & R. T. Chin Department of Computer Science The Hong Kong University of Science & Technology Clear Water Bay, Kowloon, Hong Kong Email: fcwngo, tcpong,
More informationMULTI-VIEW TARGET CLASSIFICATION IN SYNTHETIC APERTURE SONAR IMAGERY
MULTI-VIEW TARGET CLASSIFICATION IN SYNTHETIC APERTURE SONAR IMAGERY David Williams a, Johannes Groen b ab NATO Undersea Research Centre, Viale San Bartolomeo 400, 19126 La Spezia, Italy Contact Author:
More informationAn Angle Estimation to Landmarks for Autonomous Satellite Navigation
5th International Conference on Environment, Materials, Chemistry and Power Electronics (EMCPE 2016) An Angle Estimation to Landmarks for Autonomous Satellite Navigation Qing XUE a, Hongwen YANG, Jian
More informationCritique: Efficient Iris Recognition by Characterizing Key Local Variations
Critique: Efficient Iris Recognition by Characterizing Key Local Variations Authors: L. Ma, T. Tan, Y. Wang, D. Zhang Published: IEEE Transactions on Image Processing, Vol. 13, No. 6 Critique By: Christopher
More informationRobot Localization based on Geo-referenced Images and G raphic Methods
Robot Localization based on Geo-referenced Images and G raphic Methods Sid Ahmed Berrabah Mechanical Department, Royal Military School, Belgium, sidahmed.berrabah@rma.ac.be Janusz Bedkowski, Łukasz Lubasiński,
More informationAn Event-based Optical Flow Algorithm for Dynamic Vision Sensors
An Event-based Optical Flow Algorithm for Dynamic Vision Sensors Iffatur Ridwan and Howard Cheng Department of Mathematics and Computer Science University of Lethbridge, Canada iffatur.ridwan@uleth.ca,howard.cheng@uleth.ca
More informationCollecting outdoor datasets for benchmarking vision based robot localization
Collecting outdoor datasets for benchmarking vision based robot localization Emanuele Frontoni*, Andrea Ascani, Adriano Mancini, Primo Zingaretti Department of Ingegneria Infromatica, Gestionale e dell
More informationA RECOGNITION SYSTEM THAT USES SACCADES TO DETECT CARS FROM REAL-TIME VIDEO STREAMS. Predrag Neskouic, Leon N Cooper* David Schuster t
Proceedings of the 9th International Conference on Neural Information Processing (ICONIP'OZ), Vol. 5 Lip0 Wang, Jagath C. Rajapakse, Kunihiko Fukushima, Soo-Young Lee, and Xin Yao (Editors) A RECOGNITION
More informationIDE-3D: Predicting Indoor Depth Utilizing Geometric and Monocular Cues
2016 International Conference on Computational Science and Computational Intelligence IDE-3D: Predicting Indoor Depth Utilizing Geometric and Monocular Cues Taylor Ripke Department of Computer Science
More informationEffects Of Shadow On Canny Edge Detection through a camera
1523 Effects Of Shadow On Canny Edge Detection through a camera Srajit Mehrotra Shadow causes errors in computer vision as it is difficult to detect objects that are under the influence of shadows. Shadow
More informationText-Tracking Wearable Camera System for the Blind
2009 10th International Conference on Document Analysis and Recognition Text-Tracking Wearable Camera System for the Blind Hideaki Goto Cyberscience Center Tohoku University, Japan hgot @ isc.tohoku.ac.jp
More informationAn Approach for Real Time Moving Object Extraction based on Edge Region Determination
An Approach for Real Time Moving Object Extraction based on Edge Region Determination Sabrina Hoque Tuli Department of Computer Science and Engineering, Chittagong University of Engineering and Technology,
More informationDESIGNING A REAL TIME SYSTEM FOR CAR NUMBER DETECTION USING DISCRETE HOPFIELD NETWORK
DESIGNING A REAL TIME SYSTEM FOR CAR NUMBER DETECTION USING DISCRETE HOPFIELD NETWORK A.BANERJEE 1, K.BASU 2 and A.KONAR 3 COMPUTER VISION AND ROBOTICS LAB ELECTRONICS AND TELECOMMUNICATION ENGG JADAVPUR
More informationDiscovering Visual Hierarchy through Unsupervised Learning Haider Razvi
Discovering Visual Hierarchy through Unsupervised Learning Haider Razvi hrazvi@stanford.edu 1 Introduction: We present a method for discovering visual hierarchy in a set of images. Automatically grouping
More informationA Novel Field-source Reverse Transform for Image Structure Representation and Analysis
A Novel Field-source Reverse Transform for Image Structure Representation and Analysis X. D. ZHUANG 1,2 and N. E. MASTORAKIS 1,3 1. WSEAS Headquarters, Agiou Ioannou Theologou 17-23, 15773, Zografou, Athens,
More informationIQanalytics Vandal, Motion and Intrusion Detection. User Guide
IQanalytics Vandal, Motion and Intrusion Detection User Guide 1 Contents 1 Overview 3 2 Installation 4 3 IQanalytics Configuration 5 4 Advanced Pages 8 5 Camera Placement & Guidelines 9 6 Recommended Settings
More informationAn Accurate Method for Skew Determination in Document Images
DICTA00: Digital Image Computing Techniques and Applications, 1 January 00, Melbourne, Australia. An Accurate Method for Skew Determination in Document Images S. Lowther, V. Chandran and S. Sridharan Research
More information2 OVERVIEW OF RELATED WORK
Utsushi SAKAI Jun OGATA This paper presents a pedestrian detection system based on the fusion of sensors for LIDAR and convolutional neural network based image classification. By using LIDAR our method
More informationECE 172A: Introduction to Intelligent Systems: Machine Vision, Fall Midterm Examination
ECE 172A: Introduction to Intelligent Systems: Machine Vision, Fall 2008 October 29, 2008 Notes: Midterm Examination This is a closed book and closed notes examination. Please be precise and to the point.
More informationObject detection using non-redundant local Binary Patterns
University of Wollongong Research Online Faculty of Informatics - Papers (Archive) Faculty of Engineering and Information Sciences 2010 Object detection using non-redundant local Binary Patterns Duc Thanh
More informationMalaysian License Plate Recognition Artificial Neural Networks and Evolu Computation. The original publication is availabl
JAIST Reposi https://dspace.j Title Malaysian License Plate Recognition Artificial Neural Networks and Evolu Computation Stephen, Karungaru; Fukumi, Author(s) Minoru; Norio Citation Issue Date 2005-11
More informationELL 788 Computational Perception & Cognition July November 2015
ELL 788 Computational Perception & Cognition July November 2015 Module 6 Role of context in object detection Objects and cognition Ambiguous objects Unfavorable viewing condition Context helps in object
More informationMULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION
MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION Panca Mudjirahardjo, Rahmadwati, Nanang Sulistiyanto and R. Arief Setyawan Department of Electrical Engineering, Faculty of
More informationA NOVEL APPROACH TO ACCESS CONTROL BASED ON FACE RECOGNITION
A NOVEL APPROACH TO ACCESS CONTROL BASED ON FACE RECOGNITION A. Hadid, M. Heikkilä, T. Ahonen, and M. Pietikäinen Machine Vision Group Infotech Oulu and Department of Electrical and Information Engineering
More informationDrywall state detection in image data for automatic indoor progress monitoring C. Kropp, C. Koch and M. König
Drywall state detection in image data for automatic indoor progress monitoring C. Kropp, C. Koch and M. König Chair for Computing in Engineering, Department of Civil and Environmental Engineering, Ruhr-Universität
More informationA Symmetry Operator and Its Application to the RoboCup
A Symmetry Operator and Its Application to the RoboCup Kai Huebner Bremen Institute of Safe Systems, TZI, FB3 Universität Bremen, Postfach 330440, 28334 Bremen, Germany khuebner@tzi.de Abstract. At present,
More informationDynamic visual attention: competitive versus motion priority scheme
Dynamic visual attention: competitive versus motion priority scheme Bur A. 1, Wurtz P. 2, Müri R.M. 2 and Hügli H. 1 1 Institute of Microtechnology, University of Neuchâtel, Neuchâtel, Switzerland 2 Perception
More informationHaresh D. Chande #, Zankhana H. Shah *
Illumination Invariant Face Recognition System Haresh D. Chande #, Zankhana H. Shah * # Computer Engineering Department, Birla Vishvakarma Mahavidyalaya, Gujarat Technological University, India * Information
More informationLocating 1-D Bar Codes in DCT-Domain
Edith Cowan University Research Online ECU Publications Pre. 2011 2006 Locating 1-D Bar Codes in DCT-Domain Alexander Tropf Edith Cowan University Douglas Chai Edith Cowan University 10.1109/ICASSP.2006.1660449
More informationFilm Line scratch Detection using Neural Network and Morphological Filter
Film Line scratch Detection using Neural Network and Morphological Filter Kyung-tai Kim and Eun Yi Kim Dept. of advanced technology fusion, Konkuk Univ. Korea {kkt34, eykim}@konkuk.ac.kr Abstract This
More informationA Hierarchial Model for Visual Perception
A Hierarchial Model for Visual Perception Bolei Zhou 1 and Liqing Zhang 2 1 MOE-Microsoft Laboratory for Intelligent Computing and Intelligent Systems, and Department of Biomedical Engineering, Shanghai
More informationAugmenting Reality with Projected Interactive Displays
Augmenting Reality with Projected Interactive Displays Claudio Pinhanez IBM T.J. Watson Research Center, P.O. Box 218 Yorktown Heights, N.Y. 10598, USA Abstract. This paper examines a steerable projection
More informationRobot localization method based on visual features and their geometric relationship
, pp.46-50 http://dx.doi.org/10.14257/astl.2015.85.11 Robot localization method based on visual features and their geometric relationship Sangyun Lee 1, Changkyung Eem 2, and Hyunki Hong 3 1 Department
More informationPedestrian Detection with Improved LBP and Hog Algorithm
Open Access Library Journal 2018, Volume 5, e4573 ISSN Online: 2333-9721 ISSN Print: 2333-9705 Pedestrian Detection with Improved LBP and Hog Algorithm Wei Zhou, Suyun Luo Automotive Engineering College,
More informationDetection and recognition of moving objects using statistical motion detection and Fourier descriptors
Detection and recognition of moving objects using statistical motion detection and Fourier descriptors Daniel Toth and Til Aach Institute for Signal Processing, University of Luebeck, Germany toth@isip.uni-luebeck.de
More informationA Novel Image Transform Based on Potential field Source Reverse for Image Analysis
A Novel Image Transform Based on Potential field Source Reverse for Image Analysis X. D. ZHUANG 1,2 and N. E. MASTORAKIS 1,3 1. WSEAS Headquarters, Agiou Ioannou Theologou 17-23, 15773, Zografou, Athens,
More informationCOLOR FIDELITY OF CHROMATIC DISTRIBUTIONS BY TRIAD ILLUMINANT COMPARISON. Marcel P. Lucassen, Theo Gevers, Arjan Gijsenij
COLOR FIDELITY OF CHROMATIC DISTRIBUTIONS BY TRIAD ILLUMINANT COMPARISON Marcel P. Lucassen, Theo Gevers, Arjan Gijsenij Intelligent Systems Lab Amsterdam, University of Amsterdam ABSTRACT Performance
More informationTexture Segmentation by Windowed Projection
Texture Segmentation by Windowed Projection 1, 2 Fan-Chen Tseng, 2 Ching-Chi Hsu, 2 Chiou-Shann Fuh 1 Department of Electronic Engineering National I-Lan Institute of Technology e-mail : fctseng@ccmail.ilantech.edu.tw
More informationINTELLIGENT MACHINE VISION SYSTEM FOR ROAD TRAFFIC SIGN RECOGNITION
INTELLIGENT MACHINE VISION SYSTEM FOR ROAD TRAFFIC SIGN RECOGNITION Aryuanto 1), Koichi Yamada 2), F. Yudi Limpraptono 3) Jurusan Teknik Elektro, Fakultas Teknologi Industri, Institut Teknologi Nasional
More informationNavigation of Multiple Mobile Robots Using Swarm Intelligence
Navigation of Multiple Mobile Robots Using Swarm Intelligence Dayal R. Parhi National Institute of Technology, Rourkela, India E-mail: dayalparhi@yahoo.com Jayanta Kumar Pothal National Institute of Technology,
More informationA Neural Network for Real-Time Signal Processing
248 MalkofT A Neural Network for Real-Time Signal Processing Donald B. Malkoff General Electric / Advanced Technology Laboratories Moorestown Corporate Center Building 145-2, Route 38 Moorestown, NJ 08057
More informationFACE RECOGNITION USING INDEPENDENT COMPONENT
Chapter 5 FACE RECOGNITION USING INDEPENDENT COMPONENT ANALYSIS OF GABORJET (GABORJET-ICA) 5.1 INTRODUCTION PCA is probably the most widely used subspace projection technique for face recognition. A major
More informationA Keypoint Descriptor Inspired by Retinal Computation
A Keypoint Descriptor Inspired by Retinal Computation Bongsoo Suh, Sungjoon Choi, Han Lee Stanford University {bssuh,sungjoonchoi,hanlee}@stanford.edu Abstract. The main goal of our project is to implement
More informationLarge-Scale Traffic Sign Recognition based on Local Features and Color Segmentation
Large-Scale Traffic Sign Recognition based on Local Features and Color Segmentation M. Blauth, E. Kraft, F. Hirschenberger, M. Böhm Fraunhofer Institute for Industrial Mathematics, Fraunhofer-Platz 1,
More informationVisualization and Analysis of Inverse Kinematics Algorithms Using Performance Metric Maps
Visualization and Analysis of Inverse Kinematics Algorithms Using Performance Metric Maps Oliver Cardwell, Ramakrishnan Mukundan Department of Computer Science and Software Engineering University of Canterbury
More informationREINFORCED FINGERPRINT MATCHING METHOD FOR AUTOMATED FINGERPRINT IDENTIFICATION SYSTEM
REINFORCED FINGERPRINT MATCHING METHOD FOR AUTOMATED FINGERPRINT IDENTIFICATION SYSTEM 1 S.Asha, 2 T.Sabhanayagam 1 Lecturer, Department of Computer science and Engineering, Aarupadai veedu institute of
More informationFractional Discrimination for Texture Image Segmentation
Fractional Discrimination for Texture Image Segmentation Author You, Jia, Sattar, Abdul Published 1997 Conference Title IEEE 1997 International Conference on Image Processing, Proceedings of the DOI https://doi.org/10.1109/icip.1997.647743
More informationOne type of these solutions is automatic license plate character recognition (ALPR).
1.0 Introduction Modelling, Simulation & Computing Laboratory (msclab) A rapid technical growth in the area of computer image processing has increased the need for an efficient and affordable security,
More informationFingerprint Image Enhancement Algorithm and Performance Evaluation
Fingerprint Image Enhancement Algorithm and Performance Evaluation Naja M I, Rajesh R M Tech Student, College of Engineering, Perumon, Perinad, Kerala, India Project Manager, NEST GROUP, Techno Park, TVM,
More informationFast Reliable Level-Lines Segments Extraction
Fast Reliable Level-Lines Segments Extraction N. Suvonvorn, S. Bouchafa, L. Lacassagne nstitut d'electronique Fondamentale, Université Paris-Sud 91405 Orsay FRANCE nikom.suvonvorn@ief.u-psud.fr, samia.bouchafa@ief.u-psud.fr,
More informationCHAPTER 6 MODIFIED FUZZY TECHNIQUES BASED IMAGE SEGMENTATION
CHAPTER 6 MODIFIED FUZZY TECHNIQUES BASED IMAGE SEGMENTATION 6.1 INTRODUCTION Fuzzy logic based computational techniques are becoming increasingly important in the medical image analysis arena. The significant
More informationEvaluation of regions-of-interest based attention algorithms using a probabilistic measure
Evaluation of regions-of-interest based attention algorithms using a probabilistic measure Martin Clauss, Pierre Bayerl and Heiko Neumann University of Ulm, Dept. of Neural Information Processing, 89081
More informationIMAGE PROCESSING AND IMAGE REGISTRATION ON SPIRAL ARCHITECTURE WITH salib
IMAGE PROCESSING AND IMAGE REGISTRATION ON SPIRAL ARCHITECTURE WITH salib Stefan Bobe 1 and Gerald Schaefer 2,* 1 University of Applied Sciences, Bielefeld, Germany. 2 School of Computing and Informatics,
More informationMulti-View Image Coding in 3-D Space Based on 3-D Reconstruction
Multi-View Image Coding in 3-D Space Based on 3-D Reconstruction Yongying Gao and Hayder Radha Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48823 email:
More informationTHE SPEED-LIMIT SIGN DETECTION AND RECOGNITION SYSTEM
THE SPEED-LIMIT SIGN DETECTION AND RECOGNITION SYSTEM Kuo-Hsin Tu ( 塗國星 ), Chiou-Shann Fuh ( 傅楸善 ) Dept. of Computer Science and Information Engineering, National Taiwan University, Taiwan E-mail: p04922004@csie.ntu.edu.tw,
More informationFast Natural Feature Tracking for Mobile Augmented Reality Applications
Fast Natural Feature Tracking for Mobile Augmented Reality Applications Jong-Seung Park 1, Byeong-Jo Bae 2, and Ramesh Jain 3 1 Dept. of Computer Science & Eng., University of Incheon, Korea 2 Hyundai
More informationPERFORMANCE MEASUREMENTS OF FEATURE TRACKING AND HISTOGRAM BASED TRAFFIC CONGESTION ALGORITHMS
PERFORMANCE MEASUREMENTS OF FEATURE TRACKING AND HISTOGRAM BASED TRAFFIC CONGESTION ALGORITHMS Ozgur Altun 1 and Kenan Aksoy 2 Proline Bilisim Sistemleri, Istanbul, Turkey 1 Research and Development Engineer,
More informationAcquisition of Qualitative Spatial Representation by Visual Observation
Acquisition of Qualitative Spatial Representation by Visual Observation Takushi Sogo Hiroshi Ishiguro Toru Ishida Department of Social Informatics, Kyoto University Kyoto 606-8501, Japan sogo@kuis.kyoto-u.ac.jp,
More informationMouse Pointer Tracking with Eyes
Mouse Pointer Tracking with Eyes H. Mhamdi, N. Hamrouni, A. Temimi, and M. Bouhlel Abstract In this article, we expose our research work in Human-machine Interaction. The research consists in manipulating
More informationCHAPTER 9 INPAINTING USING SPARSE REPRESENTATION AND INVERSE DCT
CHAPTER 9 INPAINTING USING SPARSE REPRESENTATION AND INVERSE DCT 9.1 Introduction In the previous chapters the inpainting was considered as an iterative algorithm. PDE based method uses iterations to converge
More informationComparing Classification Performances between Neural Networks and Particle Swarm Optimization for Traffic Sign Recognition
Comparing Classification Performances between Neural Networks and Particle Swarm Optimization for Traffic Sign Recognition THONGCHAI SURINWARANGKOON, SUPOT NITSUWAT, ELVIN J. MOORE Department of Information
More informationCHAPTER 3 PRINCIPAL COMPONENT ANALYSIS AND FISHER LINEAR DISCRIMINANT ANALYSIS
38 CHAPTER 3 PRINCIPAL COMPONENT ANALYSIS AND FISHER LINEAR DISCRIMINANT ANALYSIS 3.1 PRINCIPAL COMPONENT ANALYSIS (PCA) 3.1.1 Introduction In the previous chapter, a brief literature review on conventional
More informationHuman Motion Detection and Tracking for Video Surveillance
Human Motion Detection and Tracking for Video Surveillance Prithviraj Banerjee and Somnath Sengupta Department of Electronics and Electrical Communication Engineering Indian Institute of Technology, Kharagpur,
More informationLocating Objects Visually Using Opposing-Colour-Channel Coding
Locating Objects Visually Using Opposing-Colour-Channel Coding Ulrich Nehmzow and Hugo Vieira Neto Department of Computer Science University of Essex Wivenhoe Park Colchester, Essex CO4 3SQ, UK {udfn,
More informationThe Vehicle Logo Location System based on saliency model
ISSN 746-7659, England, UK Journal of Information and Computing Science Vol. 0, No. 3, 205, pp. 73-77 The Vehicle Logo Location System based on saliency model Shangbing Gao,2, Liangliang Wang, Hongyang
More information6. NEURAL NETWORK BASED PATH PLANNING ALGORITHM 6.1 INTRODUCTION
6 NEURAL NETWORK BASED PATH PLANNING ALGORITHM 61 INTRODUCTION In previous chapters path planning algorithms such as trigonometry based path planning algorithm and direction based path planning algorithm
More information