Suitability Analysis Based on Multi-Feature Fusion Visual Saliency Model in Vision Navigation


Zhen-lu Jin, Quan Pan, Chun-hui Zhao, Yong Liu
School of Automation, Northwestern Polytechnical University, Xi'an, China

(This work was supported by the Major Program of the National Natural Science Foundation of China, the National Natural Science Foundation of China, and the Ai Sheng Innovation Foundation of Northwestern Polytechnical University (2012K001-).)

Abstract: Matching-area suitability analysis in the vision navigation system of an unmanned aerial vehicle (UAV) is a worthwhile but challenging research topic. In this paper, a multi-feature fusion based visual saliency model is established by introducing the invariant features of speeded-up robust features (SURF) directly into the visual saliency model, and an extraction method for suitable matching-areas is proposed on this basis. By integrating the cross-scale SURF feature maps in the way we define, the conspicuity map of the SURF channel is obtained. By adding the SURF channel to the traditional visual saliency model and fusing the SURF, color, intensity and orientation features, the multi-feature fusion model is formed. Based on the proposed model, salient locations in the sensed map can be obtained and chosen as suitable matching-areas. Simulation results show that the image registration error of the matching-areas extracted with the proposed model meets the demands of a vision navigation system. The proposed method may provide new ideas for the autonomous navigation of UAVs in the future.

Keywords: Visual Saliency Model; Multi-feature Fusion; SURF; Suitability Analysis; Vision Navigation

I. INTRODUCTION

Suitability analysis is a key issue in UAV scene-matching vision navigation. The quality of the chosen areas observably influences the reliability and effectiveness of vision navigation. Hence, the selection of areas with robust suitability is a primary step for vision navigation, and in general it is an effective way to assure the matching precision of a scene-matching vision navigation system. Based on a series of basic hypotheses, several definitions and theoretical approximation methods regarding matching probability were studied in [1]. That paper discussed area suitability for the very first time and established the foundation of this research field. Many papers have since addressed this issue [2-7], and there are mainly two kinds of suitability analysis methods: image signal correlation calculation methods and comprehensive feature evaluation methods. However, the assumed signal correlation model usually does not fit the actual situation well, and the parameters of the model are often difficult to obtain, so the image signal correlation based method is not always reliable. The comprehensive feature evaluation method determines suitable matching areas by constructing statistical models of the relationship between image features and matching assessment indexes. This kind of method needs a large number of samples, and a suitable threshold in the model is hard to determine for most kinds of landscape features and imaging conditions. Moreover, the principal factors that influence matching suitability (including landscape features, imaging differences, performance requirements for scene matching and so on) interact with and depend on each other, so there will be conflicts and contradictions among the results of suitability analysis. Therefore, the two existing kinds of area suitability analysis methods both have certain inevitable drawbacks.
Visual saliency [8-10] is a psychological adjustment mechanism found in the human perception system for visual scene analysis. With the help of this mechanism, we can extract relevant and useful information from the outside environment and reject useless information. In general, visual saliency is the guarantee of efficiency and reliability in the human visual perception process. In the research on vision navigation, many researchers are studying visual saliency [11-16]. Based on the human visual attention system and radar, Wang et al. proposed a robust and real-time method for vision navigation by information fusion in [11]. Kottas et al. presented an efficient and consistent aided navigation system using a line detector and a visual saliency model in [12]. Siagian et al. designed a novel autonomous localization and navigation system for mobile robots in [13] and [14]; the main idea is to choose a rough location based on the Gist model and then to refine the position with a visual saliency model and a Monte-Carlo method. Ouerhani et al. applied a modified visual saliency model to robot navigation for autonomous landmark selection and recognition in [15]. Frintrop et al. presented a simultaneous localization and mapping scheme for mobile robots based on visual landmark detecting, tracking and matching with active gaze control in [16]. Based on visual saliency, salient locations can be extracted rapidly, and they are usually significant with respect to certain features. Using an image matching method based on such a feature, the corresponding salient area should show good matching performance. Therefore, there is a certain consistency between visual salient locations and the suitable matching-areas of scene-matching vision navigation. By introducing a visual saliency model, a new area matching suitability analysis method can be given, and this novel method is able to provide certain guidance for this issue.

The visual saliency model presented in [10] only needs to consider the saliency of the visual scene itself and requires little prior knowledge. It is a biologically plausible, bottom-up, computationally fast visual saliency model, and it is also by far one of the most influential computation models. To facilitate the description, the computation model proposed in [10] is called the Visual Saliency Model (VSM) in this paper. The VSM extracts color, intensity and orientation features, integrates them and extracts salient locations. However, these three features are not enough to extract suitable matching-areas in vision navigation for a UAV. Therefore, this paper introduces Speeded-Up Robust Features (SURF) [17] into the traditional VSM and establishes a multi-feature fusion visual saliency model. Although a few papers have tried to combine the VSM and SURF [18-19], in those works the VSM mechanism and SURF are processed individually and the VSM is only used to reduce the number of extracted SURF points. In our method, SURF is considered as a channel parallel to color, intensity and orientation in the VSM, and the density of SURF points over all scale spaces is computed to obtain the conspicuity map of the SURF channel. In a word, our method aims at tightly coupling the VSM and SURF for the first time. Based on the proposed computation model, salient locations in the sensed map are extracted and chosen as suitable matching-areas. Moreover, two universal and effective image registration methods based on Normalized Cross Correlation (NCC) and SURF [20] are utilized to match the chosen matching-areas in scene matching experiments, so that the effectiveness of the presented method can be validated.

II. VISUAL SALIENCY

As an important psychological adjustment mechanism of the human visual perception system, visual attention can reduce the complexity of understanding a visual scene by pre-selecting visual information before further processing. Visual saliency computation models originate from the study of the attention mechanism in the human visual system. According to its formation mechanism, visual attention is usually divided into two kinds: one is bottom-up and feature-driven, and the other is top-down and task-driven. Top-down visual attention depends heavily on a specific task and requires correct human cognition or certain prior knowledge. Bottom-up visual attention relies only on the visual scene itself and determines saliency by pure visual input. Due to its independence from the task, bottom-up visual attention possesses a fast processing speed. One of the most widely used bottom-up computation models is proposed by Itti et al. in [10].

A. Visual Saliency Model

Itti et al. put forward a bottom-up Visual Saliency Model (VSM) in [10], which is mainly composed of two parts: calculation of the saliency map and shifts of the visual attention focus. The basic idea can be summed up as follows. First, multiple features are extracted from the input image, including color, intensity and orientation. Second, with Gaussian pyramids, the center-surround difference operator and normalization, three feature conspicuity maps are formed. Then, the saliency map is obtained by integration of the feature conspicuity maps.
Lastly, the attention focus is shifted by a winner-take-all neural network to obtain the salient locations, and an inhibition-of-return mechanism is used to suppress the currently attended location and shift the attention focus to the next salient area.

Let r, g, b represent the red, green and blue values of the input image. The intensity is defined as I = (r + g + b)/3, and the broadly-tuned color channels are calculated as

  R = r - (g + b)/2
  G = g - (r + b)/2
  B = b - (r + g)/2
  Y = (r + g)/2 - |r - g|/2 - b                                   (1)

With the dyadic Gaussian pyramid method, a 9-scale pyramid is produced for each of the channels I, R, G, B and Y, represented by I(\sigma), R(\sigma), G(\sigma), B(\sigma) and Y(\sigma) respectively, where \sigma \in [0, ..., 8]. For the orientation channel, Gabor filters are convolved with the intensity pyramid, and the resulting pyramid is represented by O(\sigma, \theta), where \theta \in \{0^\circ, 45^\circ, 90^\circ, 135^\circ\} is the orientation of the Gabor filter. Using the center-surround difference operator (represented by \ominus), the feature maps of the intensity, red/green, blue/yellow and orientation channels are obtained as follows:

  I(c, s) = | I(c) \ominus I(s) |                                  (2)
  RG(c, s) = | (R(c) - G(c)) \ominus (G(s) - R(s)) |               (3)
  BY(c, s) = | (B(c) - Y(c)) \ominus (Y(s) - B(s)) |               (4)
  O(c, s, \theta) = | O(c, \theta) \ominus O(s, \theta) |          (5)

where c \in \{2, 3, 4\} is the scale of the center map, s = c + \delta is the scale of the surround map, and \delta \in \{3, 4\} is the scale difference. It is important to note that different feature channels concern different aspects of the image, and the magnitudes of these channels usually differ markedly from one another. Moreover, a strongly salient location that appears in one channel may easily be masked by noise or be less salient in other channels. Therefore, to combine the feature maps effectively, a normalization process is essential. After normalization (represented by N) and cross-scale integration (represented by \oplus), the conspicuity maps of the intensity, color and orientation channels are obtained:

  \bar{I} = \oplus_{c=2}^{4} \oplus_{s=c+3}^{c+4} N(I(c, s))                                       (6)
  \bar{C} = \oplus_{c=2}^{4} \oplus_{s=c+3}^{c+4} [ N(RG(c, s)) + N(BY(c, s)) ]                    (7)
  \bar{O} = \sum_{\theta \in \{0^\circ, 45^\circ, 90^\circ, 135^\circ\}} N( \oplus_{c=2}^{4} \oplus_{s=c+3}^{c+4} N(O(c, s, \theta)) )   (8)

The final saliency map is the linear combination of the above conspicuity maps, i.e.

  S = (1/3) [ N(\bar{I}) + N(\bar{C}) + N(\bar{O}) ]                                               (9)
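To make Eqs. (1)-(9) concrete, the sketch below implements a simplified version of this pipeline in Python with OpenCV, together with a basic winner-take-all selection with inhibition of return as described above. It is a minimal illustration rather than the authors' implementation (which was written in Matlab): the simplified normalization operator N(.), the Gabor parameters, the handling of the double-opponency maps and the inhibition radius are assumptions made only for this sketch.

```python
import cv2
import numpy as np

def gaussian_pyramid(img, levels=9):
    """Dyadic Gaussian pyramid, scales sigma = 0..levels-1 (input should be reasonably large)."""
    pyr = [img.astype(np.float32)]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def center_surround(pyr, centers=(2, 3, 4), deltas=(3, 4)):
    """Eqs. (2)-(5): |F(c) - interp(F(s))| for s = c + delta."""
    maps = []
    for c in centers:
        for d in deltas:
            s = c + d
            surround = cv2.resize(pyr[s], (pyr[c].shape[1], pyr[c].shape[0]),
                                  interpolation=cv2.INTER_LINEAR)
            maps.append(np.abs(pyr[c] - surround))
    return maps

def normalize(fmap):
    """Simplified stand-in for the map normalization operator N(.)."""
    fmap = fmap - fmap.min()
    if fmap.max() > 0:
        fmap = fmap / fmap.max()
    # promote maps with a few strong peaks; a crude version of Itti's global-maximum weighting
    return fmap * (fmap.max() - fmap.mean()) ** 2

def across_scale_sum(maps, out_shape):
    """Cross-scale integration (the circled-plus operator): resize and add."""
    acc = np.zeros(out_shape, np.float32)
    for m in maps:
        acc += cv2.resize(m, (out_shape[1], out_shape[0]))
    return acc

def vsm_saliency(bgr, out_scale=4):
    """Eqs. (1)-(9): intensity, color and orientation conspicuity maps and their combination."""
    b, g, r = [bgr[..., i].astype(np.float32) for i in range(3)]
    I = (r + g + b) / 3.0
    R = r - (g + b) / 2.0
    G = g - (r + b) / 2.0
    B = b - (r + g) / 2.0
    Y = (r + g) / 2.0 - np.abs(r - g) / 2.0 - b
    pyrs = {k: gaussian_pyramid(v) for k, v in dict(I=I, R=R, G=G, B=B, Y=Y).items()}
    out_shape = pyrs['I'][out_scale].shape

    I_bar = across_scale_sum([normalize(m) for m in center_surround(pyrs['I'])], out_shape)

    # simplified double opponency: center-surround on the R-G and B-Y difference pyramids
    rg = center_surround([pr - pg for pr, pg in zip(pyrs['R'], pyrs['G'])])
    by = center_surround([pb - py for pb, py in zip(pyrs['B'], pyrs['Y'])])
    C_bar = across_scale_sum([normalize(m) for m in rg + by], out_shape)

    O_bar = np.zeros(out_shape, np.float32)
    for theta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        gabor = cv2.getGaborKernel((9, 9), 2.5, theta, 5.0, 0.5)
        o_pyr = [np.abs(cv2.filter2D(lvl, -1, gabor)) for lvl in pyrs['I']]
        O_bar += normalize(across_scale_sum([normalize(m) for m in center_surround(o_pyr)],
                                            out_shape))

    return (normalize(I_bar) + normalize(C_bar) + normalize(O_bar)) / 3.0

def select_salient_locations(saliency, n_locations=10, inhibit_radius=20):
    """Winner-take-all with inhibition of return: repeatedly take the global maximum
    and suppress a disc around it before the next selection."""
    sal = saliency.copy()
    locations = []
    for _ in range(n_locations):
        y, x = np.unravel_index(int(np.argmax(sal)), sal.shape)
        locations.append((int(x), int(y)))
        cv2.circle(sal, (int(x), int(y)), inhibit_radius, 0.0, thickness=-1)
    return locations
```

Calling vsm_saliency on a sensed image and then select_salient_locations on the result yields candidate centers of attended regions in order of decreasing saliency.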

B. VSM based Area Matching Suitability Analysis

By utilizing the visual saliency model, attended regions can be extracted easily and rapidly. The extracted salient regions are usually significant with respect to certain features, such as color, intensity and orientation. Theoretically, image matching at these salient locations should show higher matching precision. Therefore, by introducing the visual saliency calculation, a new analysis method for matching suitability is proposed. The major steps of VSM based matching-area suitability analysis are as follows:
1) Extract the color, intensity and orientation features from the sensed image.
2) Produce the Gaussian pyramids in each feature channel.
3) Based on the center-surround difference operator and normalization, compute the feature maps of the different channels.
4) Integrate the feature maps of all scales to get the feature conspicuity map of each channel.
5) Combine all conspicuity maps linearly to obtain the saliency map.
6) Use the winner-take-all and inhibition-of-return mechanisms to choose salient locations as the suitable matching-areas for scene matching.

III. MULTI-FEATURE FUSION MODEL BASED SUITABILITY ANALYSIS

Considering the continuous change of the platform attitude, there will be large rotation angles, severe scale zoom and significant illumination differences between the sensed image and the reference image. Under these circumstances, traditional scene matching methods based on pixel feature extraction usually require calculating the heading deviation angle of the UAV in order to forecast the rotation angle between the sensed image and the reference image before conducting scene matching. This kind of scene matching method is ordinarily complex, and the estimated rotation angle is usually not precise. However, a scene matching method that introduces invariant features, such as Speeded-Up Robust Features (SURF) [17], can achieve high precision and robustness during image registration. For suitability analysis in UAV vision navigation, a multi-feature fusion visual saliency model is therefore proposed in this paper by introducing a SURF feature channel. Due to the robustness of the SURF feature and its essential role in image registration, the proposed model is robust to large rotation angles, severe scale zoom and significant illumination change. Based on this model, the process of extracting suitable matching-areas is stable and effective, ensuring the reliability and validity of the vision navigation system. The flow chart of the model is shown in Fig. 1.

Fig. 1 Flow chart of the proposed model (input image; linear filtering into color, intensity, orientation and SURF channels; center-surround difference and normalization; cross-scale integration and normalization, with cross-scale SURF feature density calculation for the SURF channel; linear combination into the saliency map; winner-take-all and inhibition of return to obtain the salient locations)

A. Multi-Feature Fusion based Visual Saliency Model

In order to add the SURF channel to the VSM visual attention model, the SURF pyramid is first computed on the images of the 9 scales. The SURF feature at pixel position (i, j) in scale \sigma is expressed as

  Surf(i, j, \sigma) = 1 if a SURF feature point exists at pixel (i, j) in scale \sigma, and 0 otherwise    (10)

where \sigma \in [0, ..., 8].
Then, the SURF feature maps at different scales are resized to the same size as the feature maps of the color, intensity and orientation channels defined in the VSM visual attention model. By integrating the cross-scale feature maps, the SURF conspicuity map \bar{S} is obtained. This step actually computes the density of SURF points across all scales at each pixel, as defined in (11):

  Surf(i, j) = \sum_{\sigma=0}^{8} Surf(i, j, \sigma)                                              (11)

Finally, after normalization and linearly weighted combination of the color, intensity, orientation and SURF conspicuity maps, the saliency map of the proposed model is obtained as

  S = (1/4) [ N(\bar{I}) + N(\bar{C}) + N(\bar{O}) + N(\bar{S}) ]                                  (12)
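Continuing the sketch given in Section II, the SURF channel of Eqs. (10)-(12) can be outlined as follows. This assumes an OpenCV build that ships the non-free contrib SURF detector (cv2.xfeatures2d.SURF_create); if it is unavailable, another interest-point detector such as ORB could stand in without changing the structure. The Hessian threshold, the blur applied to the point-density map and the min-max normalization are illustrative assumptions, not details taken from the paper.

```python
import cv2
import numpy as np

def minmax(fmap):
    """Simple min-max normalization to [0, 1], standing in for the N(.) operator."""
    fmap = fmap - fmap.min()
    return fmap / fmap.max() if fmap.max() > 0 else fmap

def surf_binary_maps(gray_u8, levels=9, hessian=400.0):
    """Eq. (10): Surf(i, j, sigma) = 1 where a SURF keypoint is detected at scale sigma."""
    detector = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian)  # needs opencv-contrib
    maps, level = [], gray_u8
    for _ in range(levels):
        binary = np.zeros(level.shape[:2], np.float32)
        for kp in detector.detect(level, None):
            x = min(int(round(kp.pt[0])), binary.shape[1] - 1)
            y = min(int(round(kp.pt[1])), binary.shape[0] - 1)
            binary[y, x] = 1.0
        maps.append(binary)
        level = cv2.pyrDown(level)
    return maps

def surf_conspicuity(gray_u8, out_shape):
    """Eq. (11): cross-scale density of SURF points, resized to the saliency-map resolution."""
    acc = np.zeros(out_shape, np.float32)
    for m in surf_binary_maps(gray_u8):
        acc += cv2.resize(m, (out_shape[1], out_shape[0]), interpolation=cv2.INTER_LINEAR)
    # a slight blur turns the sparse point hits into a smooth conspicuity map
    return cv2.GaussianBlur(acc, (9, 9), 0)

def fused_saliency(I_bar, C_bar, O_bar, S_bar):
    """Eq. (12): equally weighted fusion of the four normalized conspicuity maps."""
    return (minmax(I_bar) + minmax(C_bar) + minmax(O_bar) + minmax(S_bar)) / 4.0
```

The three conspicuity maps produced by the earlier sketch and the map returned by surf_conspicuity can be fused with fused_saliency and then fed to the same winner-take-all selection with inhibition of return.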

B. Area Matching Suitability Analysis based on the Proposed Model

The extraction of suitable matching-areas based on the proposed model proceeds as follows:
1) Extract the color, intensity, orientation and SURF features from the sensed image.
2) Produce the Gaussian pyramids in each feature channel.
3) Based on the center-surround difference operator and normalization, compute the feature maps of the color, intensity and orientation channels.
4) Integrate the feature maps of all scales to get the feature conspicuity maps, where the SURF conspicuity map is obtained by computing the cross-scale density of SURF points.
5) Combine all conspicuity maps linearly to obtain the final saliency map.
6) Use the winner-take-all and inhibition-of-return mechanisms to choose salient locations as suitable matching-areas.

IV. SIMULATION AND ANALYSIS

All the algorithms are coded in Matlab 2011a on a Windows 7 64-bit operating system and run on a Pentium IV CPU with 2 GB of memory.

Experimental data: All the reference images are captured from Google Earth, and the landscape features include roads, buildings, farmland, mountains and so on. The sensed images are obtained from the reference images by rotating, zooming, adding noise, changing color and intensity contrast, and so on. This procedure tries to simulate the differences between sensed images and reference images (a small sketch of this simulation step is given at the end of this subsection). Since the correction process applied to sensed images in a real-world system cannot completely eliminate the imaging differences with respect to the reference images, there will be diversities in color, intensity, noise, scale and rotation angle.

Fig. 2 Reference images (column 1) and sensed images (column 2). (The scale ratios between the sensed images and the corresponding reference images are all 1.1. With clockwise being positive, the rotation angles between the sensed images and the reference images are -, -5, and -5 respectively for (a)-(d).)

A. Salient-Area Extraction based on the Proposed Model

The following experiment aims at extracting salient locations based on the proposed model. Four pairs of reference images and sensed images are shown in Fig. 2. The feature conspicuity maps of the sensed image presented in Fig. 2(b) are shown in Fig. 3, and the corresponding salient-location extraction results are presented in Fig. 4. To illustrate the advantage of our saliency model, the salient-location extraction results based on the original VSM model are also listed in Fig. 4.

Fig. 3 Feature conspicuity maps of the sensed image presented in Fig. 2(b): (a) color, (b) intensity, (c) orientation, (d) SURF.
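The sensed-image simulation described in the experimental setup above (rotation, scale change, additive noise and an intensity/contrast shift) can be sketched as follows. The particular parameter values, the use of an affine warp about the image center and the OpenCV angle convention (counter-clockwise positive) are illustrative assumptions rather than the exact procedure used by the authors.

```python
import cv2
import numpy as np

def simulate_sensed(reference_bgr, angle_deg=5.0, scale=1.1,
                    noise_sigma=5.0, alpha=0.9, beta=10.0):
    """Synthesize a sensed image from a reference image: rotate and zoom about the
    image center, shift contrast/brightness, and add Gaussian noise."""
    h, w = reference_bgr.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, scale)
    warped = cv2.warpAffine(reference_bgr, M, (w, h), flags=cv2.INTER_LINEAR)
    # contrast/brightness difference between imaging conditions
    adjusted = cv2.convertScaleAbs(warped, alpha=alpha, beta=beta)
    # additive Gaussian noise
    noisy = adjusted.astype(np.float32) + np.random.normal(0.0, noise_sigma, adjusted.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```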

Fig. 4 Salient-area extraction results for the sensed images. (Each row corresponds to the image pairs presented in Fig. 2(a)-(d). The first column shows the VSM based area extraction results obtained with the STB toolbox in [21], and the second column shows the salient-area extraction results of our method based on the proposed model; the yellow irregular boxes mark the salient locations.)

As shown in Fig. 3, a certain consistency and complementarity exists between the SURF conspicuity map and the other feature conspicuity maps of the color, intensity and orientation channels. Moreover, as shown in Fig. 4, the introduction of the SURF channel makes it possible to extract several additional salient locations with the proposed model, and these new locations are significant in the SURF feature. In order to compare the performance of the two models for suitability analysis, scene matching experiments are conducted below.

B. Matching-Area Suitability Analysis based on the Proposed Model

The salient areas extracted by the visual saliency models are significant with respect to intensity, color, orientation and/or SURF, so they can be used as suitable matching-areas for scene matching in UAV vision navigation. Currently, scene matching algorithms can be divided into two classes: area-based algorithms and feature-based algorithms. In this paper, we choose the representative NCC matching method and SURF matching method to validate the performance of the suitability analysis methods. The scene matching error curves are shown in Fig. 5, and the specific scene matching error data are detailed in Tables I-IV. The scene matching error is calculated as the Euclidean distance between the location of the sensed image in the reference image estimated by the scene matching algorithm and the location obtained by manual image matching, as sketched below.

Fig. 5 Scene matching error curves obtained by NCC and SURF. (Each row corresponds to the image pairs presented in Fig. 2(a)-(d). The first column corresponds to the areas extracted by the VSM using the STB toolbox in [21], and the second column to the areas extracted by our method based on the proposed model; black curves denote the NCC matching method and red curves the SURF matching method.)
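As a concrete illustration of this evaluation step, the following sketch locates a chosen matching-area in the reference image with normalized cross-correlation and scores the result with the Euclidean-distance error defined above. The SURF-based registration path (keypoint matching plus a transform estimate) is omitted for brevity, and the correlation variant cv2.TM_CCORR_NORMED is an assumption rather than the exact NCC formulation of [20].

```python
import cv2
import numpy as np

def ncc_match(reference_gray, area_gray):
    """Slide the candidate matching-area over the reference image and return the
    center of the best normalized-cross-correlation response."""
    response = cv2.matchTemplate(reference_gray, area_gray, cv2.TM_CCORR_NORMED)
    _, _, _, top_left = cv2.minMaxLoc(response)   # (x, y) of the maximum correlation
    h, w = area_gray.shape[:2]
    return (top_left[0] + w / 2.0, top_left[1] + h / 2.0)

def matching_error(estimated_xy, manual_xy):
    """Scene matching error: Euclidean distance (in pixels) to the manually matched position."""
    return float(np.hypot(estimated_xy[0] - manual_xy[0], estimated_xy[1] - manual_xy[1]))
```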

TABLE I. Scene matching errors for Fig. 2(a)
TABLE II. Scene matching errors for Fig. 2(b)
TABLE III. Scene matching errors for Fig. 2(c)
TABLE IV. Scene matching errors for Fig. 2(d)

According to Fig. 4, Fig. 5 and Tables I-IV, the following observations can be made:
1) For the images shown in Fig. 2(a), the VSM model and the proposed model both found 10 salient areas. However, the 10th salient location found by the former model is eliminated in the results of the latter method, and the SURF registration error of that area was very large. Moreover, a new salient area numbered 8 is added, and the NCC and SURF registration errors of this area are both below 5 pixels; the registration performance is excellent.
2) For the images shown in Fig. 2(b), the proposed model found 15 salient areas. Compared with the former model, three new salient locations (including those numbered 8 and 15) are discovered in the results of the latter method, and the NCC and SURF registration errors of these three areas are all below 5 pixels. In the results of the latter method, the salient location numbered 6 lies almost at the same position as the salient location numbered 7 in the results of the former method, but its SURF registration error is much smaller (reduced to 1.52 pixels). What is more, the salient location numbered 1 in the results of the former method is no longer salient in the results of the latter method; both its NCC and SURF registration errors were very large.
3) For the images shown in Fig. 2(c), the VSM model and the proposed model found 11 and 17 salient areas respectively. Compared with the former model, six new salient locations, numbered 8, 9, 10, 12, 16 and 17, are discovered in the results of the latter method; the maximum NCC and SURF registration errors of these six areas remain small, and the registration precision is quite high.
4) For the images shown in Fig. 2(d), the VSM model and the proposed model found 10 and 12 salient areas respectively. Compared with the former model, two new salient locations, numbered 11 and 12, are discovered in the results of the latter method; the NCC and SURF registration errors of these two areas are both below 5 pixels, and the registration performance is excellent.
5) Overall, in the scene matching experiments on the salient locations extracted by the proposed model, the maximum NCC registration error is about 10 pixels and the minimum NCC registration error is about 1 pixel, while the maximum SURF registration error is about 8 pixels and the minimum SURF registration error is also about 1 pixel.

In summary, it can be found that no matter which image registration method is adopted, the suitable matching-areas extracted by the visual saliency models achieve very high precision in scene matching and meet the requirements of a UAV autonomous vision navigation system. In addition, the registration errors of the matching-areas extracted by the proposed model are lower than those of the traditional VSM model. Therefore, it can be concluded that the established multi-feature fusion model is effective for suitability analysis in vision navigation systems.

V. CONCLUSIONS

This paper presents a novel suitability analysis method for UAV vision navigation based on a multi-feature fusion visual saliency model. An improved version of the traditional visual saliency model that takes the SURF feature into account has been proposed and used to extract spatially salient locations, which are then chosen as suitable matching-areas for UAV vision navigation. Due to the introduction of the invariant SURF feature, the proposed approach is able to cope with severe navigation environments and helps guarantee the precision of the vision navigation system. Simulation results have validated the ability of this method to select stable matching-areas. This work may provide new ideas and theoretical guidance for engineering applications of UAV systems in the future.

REFERENCES
[1] M. W. Johnson, "Analytical development and test results of acquisition probability for terrain correlation devices used in navigation systems," AIAA 10th Aerospace Sciences Meeting, 1972.
[2] G. Zhang and L. Shen, "Rule-based expert system for selecting scene matching area," Intelligent Control and Automation, Springer, Berlin.
[3] W. Jiao, G.-b. Liu, J.-s. Zhang, B. Zhang, and Y.-k. Qiao, "Immune PSO algorithm-based geomagnetic characteristic area selection," Journal of Astronautics, Jun. 2010.
[4] Z. Wang, S.-c. Wang, J.-s. Zhang, Y.-k. Qiao, and L.-h. Chen, "A matching suitability evaluation method based on analytic hierarchy process in geomagnetism matching guidance," Journal of Astronautics, Oct. 2009.
[5] Y. Chen, X. Qian, M. Yuan, and E. Gao, "Predicting the suitability for scene matching using SVM," International Conference on Audio, Language and Image Processing, 2008.
[6] P. Wang, "Geomagnetic aided navigation suitability evaluation based on principal component analysis," International Conference on Industrial Control and Electronics Engineering (ICICEE), 2012.
[7] G. Zhang and L. Shen, "Rule-based expert system for selecting scene matching area," Intelligent Control and Automation, Springer, Berlin.
[8] A. M. Treisman and G. Gelade, "A feature-integration theory of attention," Cognitive Psychology, vol. 12, pp. 97-136, 1980.
[9] C. Koch and S. Ullman, "Shifts in selective visual attention: towards the underlying neural circuitry," Human Neurobiology, vol. 4, pp. 219-227, 1985.
[10] L. Itti, C. Koch, and E. Niebur, "A model of saliency-based visual attention for rapid scene analysis," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 20, no. 11, pp. 1254-1259, 1998.
[11] T. Wang, J. Xin, and N. Zheng, "A method integrating human visual attention and consciousness of radar and vision fusion for autonomous vehicle navigation," International Conference on Space Mission Challenges for Information Technology, 2011.
[12] D. G. Kottas and S. I. Roumeliotis, "Efficient and consistent vision-aided inertial navigation using line observations," Multiple Autonomous Robotic Systems Laboratory, University of Minnesota.
[13] C. Siagian and L. Itti, "Biologically inspired mobile robot vision localization," IEEE Trans. on Robotics, vol. 25, no. 4, pp. 861-873, 2009.
[14] C.-K. Chang, C. Siagian, and L. Itti, "Mobile robot vision navigation and localization using Gist and Saliency," IEEE/RSJ International Conference on Intelligent Robots and Systems, 2010.
[15] N. Ouerhani, H. Hügli, G. Gruener, and A. Codourey, "A visual attention-based approach for automatic landmark selection and recognition," Attention and Performance in Computational Vision (Lecture Notes in Computer Science), vol. 3368, pp. 183-195, 2005.
[16] S. Frintrop and P. Jensfelt, "Attentional landmarks and active gaze control for visual SLAM," IEEE Trans. on Robotics, vol. 24, no. 5, pp. 1054-1065, 2008.
[17] H. Bay, A. Ess, T. Tuytelaars, et al., "Speeded-up robust features (SURF)," Computer Vision and Image Understanding, vol. 110, no. 3, pp. 346-359, 2008.
[18] F. López-García, X. R. Fdez-Vidal, X. M. Pardo, and R. Dosil, "Scene recognition through visual attention and image features: a comparison between SIFT and SURF approaches," Ch. 12 in Object Recognition, edited by T. P. Cao, 2011.
[19] H. M. Sergieh, E. Egyed-Zsigmond, M. Doller, D. Coquil, J.-M. Pinon, and H. Kosch, "Improving SURF image matching using supervised learning," in Proc. Int. Conf. on Signal Image Technology and Internet Based Systems (SITIS), Nov. 2012.
[20] Y. Wang, Q. Yu, and W. Yu, "An improved normalized cross correlation algorithm for SAR image registration," IEEE Geoscience and Remote Sensing Symposium, 2012.
[21] D. Walther and C. Koch, "Modeling attention to salient proto-objects," Neural Networks, vol. 19, pp. 1395-1407, 2006.


More information

Illumination-Robust Face Recognition based on Gabor Feature Face Intrinsic Identity PCA Model

Illumination-Robust Face Recognition based on Gabor Feature Face Intrinsic Identity PCA Model Illumination-Robust Face Recognition based on Gabor Feature Face Intrinsic Identity PCA Model TAE IN SEOL*, SUN-TAE CHUNG*, SUNHO KI**, SEONGWON CHO**, YUN-KWANG HONG*** *School of Electronic Engineering

More information

Enhanced Active Shape Models with Global Texture Constraints for Image Analysis

Enhanced Active Shape Models with Global Texture Constraints for Image Analysis Enhanced Active Shape Models with Global Texture Constraints for Image Analysis Shiguang Shan, Wen Gao, Wei Wang, Debin Zhao, Baocai Yin Institute of Computing Technology, Chinese Academy of Sciences,

More information

Journal of Chemical and Pharmaceutical Research, 2015, 7(3): Research Article

Journal of Chemical and Pharmaceutical Research, 2015, 7(3): Research Article Available online www.jocpr.com Journal of Chemical and Pharmaceutical Research, 2015, 7(3):2413-2417 Research Article ISSN : 0975-7384 CODEN(USA) : JCPRC5 Research on humanoid robot vision system based

More information

Attentional Landmarks and Active Gaze Control for Visual SLAM

Attentional Landmarks and Active Gaze Control for Visual SLAM 1 Attentional Landmarks and Active Gaze Control for Visual SLAM Simone Frintrop and Patric Jensfelt Abstract This paper is centered around landmark detection, tracking and matching for visual SLAM (Simultaneous

More information

Robust color segmentation algorithms in illumination variation conditions

Robust color segmentation algorithms in illumination variation conditions 286 CHINESE OPTICS LETTERS / Vol. 8, No. / March 10, 2010 Robust color segmentation algorithms in illumination variation conditions Jinhui Lan ( ) and Kai Shen ( Department of Measurement and Control Technologies,

More information

A Hybrid Feature Extractor using Fast Hessian Detector and SIFT

A Hybrid Feature Extractor using Fast Hessian Detector and SIFT Technologies 2015, 3, 103-110; doi:10.3390/technologies3020103 OPEN ACCESS technologies ISSN 2227-7080 www.mdpi.com/journal/technologies Article A Hybrid Feature Extractor using Fast Hessian Detector and

More information

K-Means Based Matching Algorithm for Multi-Resolution Feature Descriptors

K-Means Based Matching Algorithm for Multi-Resolution Feature Descriptors K-Means Based Matching Algorithm for Multi-Resolution Feature Descriptors Shao-Tzu Huang, Chen-Chien Hsu, Wei-Yen Wang International Science Index, Electrical and Computer Engineering waset.org/publication/0007607

More information

Spatio-temporal Saliency Detection Using Phase Spectrum of Quaternion Fourier Transform

Spatio-temporal Saliency Detection Using Phase Spectrum of Quaternion Fourier Transform Spatio-temporal Saliency Detection Using Phase Spectrum of Quaternion Fourier Transform Chenlei Guo, Qi Ma and Liming Zhang Department of Electronic Engineering, Fudan University No.220, Handan Road, Shanghai,

More information

Saliency-based Object Recognition in 3D Data

Saliency-based Object Recognition in 3D Data Saliency-based Object Recognition in 3D Data Simone Frintrop, Andreas Nüchter, Hartmut Surmann, and Joachim Hertzberg Fraunhofer Institute for Autonomous Intelligent Systems (AIS) Schloss Birlinghoven,

More information

A Kind of Wireless Sensor Network Coverage Optimization Algorithm Based on Genetic PSO

A Kind of Wireless Sensor Network Coverage Optimization Algorithm Based on Genetic PSO Sensors & Transducers 2013 by IFSA http://www.sensorsportal.com A Kind of Wireless Sensor Network Coverage Optimization Algorithm Based on Genetic PSO Yinghui HUANG School of Electronics and Information,

More information

AN ENHANCED ATTRIBUTE RERANKING DESIGN FOR WEB IMAGE SEARCH

AN ENHANCED ATTRIBUTE RERANKING DESIGN FOR WEB IMAGE SEARCH AN ENHANCED ATTRIBUTE RERANKING DESIGN FOR WEB IMAGE SEARCH Sai Tejaswi Dasari #1 and G K Kishore Babu *2 # Student,Cse, CIET, Lam,Guntur, India * Assistant Professort,Cse, CIET, Lam,Guntur, India Abstract-

More information

An Improved Optical Flow Method for Image Registration with Large-scale Movements

An Improved Optical Flow Method for Image Registration with Large-scale Movements Vol. 34, No. 7 ACTA AUTOMATICA SINICA July, 2008 An Improved Optical Flow Method for Image Registration with Large-scale Movements XIONG Jing-Yi 1 LUO Yu-Pin 1 TANG Guang-Rong 1 Abstract In this paper,

More information