Orthogonal Complement Component Analysis for Positive Samples in SVM Based Relevance Feedback Image Retrieval

Dacheng Tao and Xiaoou Tang
Department of Information Engineering
The Chinese University of Hong Kong
{dctao2, xtang}@ie.cuhk.edu.hk

Abstract

Relevance feedback (RF) is an important tool for improving the performance of content-based image retrieval systems. Support vector machine (SVM) based RF is popular because it can generalize better than most other classifiers. However, directly using SVM in RF may not be appropriate, since SVM treats the positive and negative feedbacks equally. Given the different properties of positive and negative samples in RF, they should be treated differently. Considering this, we propose an orthogonal complement components analysis (OCCA) combined with SVM in this paper. We then generalize the OCCA to Hilbert space and define the kernel empirical OCCA (KEOCCA). Through experiments on a Corel Photo database with 17,800 images, we demonstrate that the proposed method can significantly improve the performance of conventional SVM-based RF.

1. Introduction

Content-based image retrieval (CBIR) systems [1] try to retrieve images semantically relevant to the user's query from an image database, based on automatically extracted visual features. However, the gap [2] between low-level visual features and the high-level semantic concepts of an image often leads to poor results. To bridge this gap and improve performance, interaction between the user and the search engine is required: the user labels previously retrieved images as semantically relevant or irrelevant, and the computer uses this information to refine the retrieval results. This technique is generally known as relevance feedback (RF) [2-4].

RF is widely used as an important method to improve the performance of CBIR systems. MARS [4] introduced both query movement and re-weighting techniques to estimate the user's intention. MindReader [3] formulated the parameter estimation process as a minimization problem. PicHunter [5] proposed a stochastic comparison search as its RF algorithm. Zhou and Huang [6-7] formulated RF as an optimal learning problem. Jing modeled RF as a multi-class problem [8]. Friedman tried to learn local feature relevance in order to combine the best features for k-nearest-neighbor search [9]. Recently, the support vector machine (SVM), a small-sample learning algorithm, was introduced into the RF procedure [10-13] because of its generalization ability. However, directly applying SVM to RF may not be suitable, because SVM handles the positive and negative feedbacks equally. In order to improve the performance of SVM RF, we propose an orthogonal complement components analysis that puts more emphasis on the positive samples and also simplifies the SVM hyper-plane. Experiments on a Corel database show significant improvement of the RF performance by the new approach.

2. Orthogonal Complement Components Analysis for SVM

2.1. Analysis

From statistical learning theory [14], we know that the following inequality (1) holds with probability of at least 1 − δ for any n > h:

R[f] ≤ R_emp[f] + G(h, n, δ),   G(h, n, δ) = \sqrt{ \frac{ h(\ln(2n/h) + 1) − \ln(δ/4) }{ n } },   (1)

where h denotes the Vapnik-Chervonenkis (VC) dimension of the classifier function set, n is the size of the training set, and R_emp describes the empirical risk. For all δ > 0 and f ∈ F, the inequality bounds the risk. Inequality (1) gives us a way to estimate the error on future data based only on the training error and the VC dimension of the classifier function set. It is well known that the smaller the risk value R[f], the better the performance of the classifier. From (1), we can see that the risk depends on the empirical risk R_emp and on G(h, n, δ).
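To get a concrete feel for the bound, the short Python snippet below (purely illustrative; the sample size, δ, and the chosen VC dimensions are arbitrary) evaluates the confidence term G(h, n, δ) of (1) and shows that it grows with h when n and δ are fixed, which is exactly the dependence analyzed next.

```python
import math

def vc_confidence(h, n, delta):
    """Confidence term G(h, n, delta) of the bound in Eq. (1)."""
    return math.sqrt((h * (math.log(2 * n / h) + 1) - math.log(delta / 4)) / n)

# With n and delta fixed, the term grows with the VC dimension h,
# which is why restricting the number of support vectors matters.
for h in (5, 20, 80):
    print(h, round(vc_confidence(h, n=320, delta=0.05), 3))
```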
Based on the form of G(h, n, δ), we know that G(h, n, δ) is a strictly monotonically increasing function of h for given n and δ. When the number of training samples is smaller than the feature dimension, h is determined by the support vectors; in addition, the VC dimension h is almost an increasing function of the number of support vectors. Consequently, the performance of SVM depends mostly on the empirical risk, the number of support vectors, and δ. Since δ cannot be controlled manually, we can restrict R_emp and the number of support vectors to achieve good performance. In CBIR RF it is easy to achieve zero empirical risk R_emp with a sufficient number of support vectors; however, a large number of support vectors enlarges the VC dimension h of the SVM classifier. Therefore, we want to restrict both h and R_emp.

To solve this problem, an intuitive way is to search for a subspace that reduces the training set. There are two possibilities: 1. project all positive feedbacks onto their center and then project all negative feedbacks onto the resulting subspace; 2. project all negative feedbacks onto their center and then project all positive feedbacks onto the resulting subspace. For CBIR, the first method is much more reasonable than the second, because all positive feedbacks are similar to the query image. Meanwhile, in the projection step the optimal hyper-plane of the SVM classifier can be reshaped by newly added positive feedbacks, while it becomes less sensitive to the negative feedbacks; therefore, more emphasis is put on the positive samples. In addition, the resulting SVM hyper-plane will be simpler around the projection center. Following this observation, we propose an orthogonal complement components analysis to improve SVM.

2.2. Orthogonal Complement Components Analysis Support Vector Machine

Orthogonal complement components analysis SVM (OCCA SVM) is implemented in three main steps: first, project all positive feedback samples onto their center; second, project all negative feedback samples onto the resulting subspace; and last, construct an SVM classifier in that subspace.

For a set of positive feedback samples {x_i ∈ R^M, i = 1, ..., N_+}, where M is the dimension of the feature space and N_+ is the number of positive feedbacks, the Karhunen-Loeve transformation (KLT) can be used to extract the principal subspace and its orthogonal complement. The principal components describe the variation of the positive feedbacks' distribution, while the orthogonal complement components describe the invariant part of that distribution. The basis functions for the KLT are obtained by solving the eigenvalue problem

Λ = Φ^T C_+ Φ = \begin{bmatrix} Λ_p & 0 \\ 0 & 0 \end{bmatrix},   (2)

where C_+ is the covariance matrix of the positive feedbacks, Φ = [Φ_p, Φ_c], Φ_p is the principal subspace of C_+, Φ_c is the orthogonal complement subspace of Φ_p, Λ_p is the corresponding diagonal matrix of the nonzero eigenvalues of C_+, and the eigenvalues associated with Φ_c are 0. The unitary matrix Φ defines a coordinate transform which decorrelates the data, makes explicit the invariant subspaces of the matrix operator, and ensures that all positive feedbacks are mapped to their center.

By the KLT we obtain the orthogonal complement feature vectors y_+ = Φ_c^T (X_+ − \bar{x}), where \bar{x} = \frac{1}{N_+} \sum_{i=1}^{N_+} x_i is the center of the positive feedbacks, X_+ is the data matrix constructed from all positive feedbacks, and y_+ is the projected data matrix of the positive feedbacks (clearly, all columns of y_+ are equal). We call this transformation orthogonal complement component analysis (OCCA), by analogy with principal component analysis (PCA). OCCA preserves the invariant directions of the data distribution.

Table 1. The algorithm of OCCA SVM.
1. Calculate the covariance matrix C_+ of the positive feedbacks.
2. Calculate the orthogonal complement components Φ_c of C_+ according to Φ_c^T C_+ Φ_c = 0.
3. Project all positive feedbacks onto their center: y_+ = Φ_c^T (X_+ − \bar{x}).
4. Project all negative feedback samples onto the orthogonal complement subspace: y_- = Φ_c^T (X_- − \bar{x}).
5. Project the remaining images in the database onto the orthogonal complement subspace: y_d = Φ_c^T (X_d − \bar{x}).
6. Train a standard SVM classifier on z = [y_+, y_-].
7. Re-sort the remaining projected images y_d using the output of the SVM, f(y) = \sum_{i=1}^{S} α_i y_i K(z_i, y) + b.

After projecting all positive feedbacks onto their center, we project all negative feedbacks onto the subspace according to y_- = Φ_c^T (X_- − \bar{x}), where X_- is the data matrix constructed from all negative feedbacks and y_- is the projected data matrix of the negative feedbacks. All images in the database are then also projected onto the subspace through y_d = Φ_c^T (X_d − \bar{x}), where y_d is the projected data matrix of the original data matrix X_d. The standard SVM classification algorithm is executed on z = [y_+, y_-], where N_- is the number of negative feedbacks. Finally, we measure the dissimilarity through the output of the SVM, f(y) = \sum_{i=1}^{S} α_i y_i K(z_i, y) + b, where S is the number of support vectors. The outline of the proposed algorithm is shown in Table 1.
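The following is a minimal NumPy/scikit-learn sketch of Table 1, not the authors' implementation. It assumes there are fewer positive feedbacks than feature dimensions (so the complement subspace is non-empty), treats eigenvalues below a small tolerance as zero, and uses a Gaussian-kernel SVC for the standard SVM step.

```python
import numpy as np
from sklearn.svm import SVC

def occa_svm_rank(X_pos, X_neg, X_db, tol=1e-10, gamma=1.0):
    """Minimal sketch of Table 1: OCCA projection followed by a standard SVM.

    X_pos, X_neg, X_db are (n_samples, n_features) arrays of positive
    feedbacks, negative feedbacks, and database images, respectively.
    """
    center = X_pos.mean(axis=0)
    # Covariance of the positive feedbacks (step 1).
    cov = np.cov(X_pos - center, rowvar=False)
    # Eigenvectors with (numerically) zero eigenvalues span the
    # orthogonal complement of the principal subspace (step 2).
    eigvals, eigvecs = np.linalg.eigh(cov)
    occ = eigvecs[:, eigvals < tol]          # columns: complement basis
    # Steps 3-5: project feedbacks and database images.
    y_pos = (X_pos - center) @ occ           # all rows collapse to ~0
    y_neg = (X_neg - center) @ occ
    y_db = (X_db - center) @ occ
    # Step 6: train a standard SVM on the projected feedbacks.
    z = np.vstack([y_pos, y_neg])
    labels = np.hstack([np.ones(len(y_pos)), -np.ones(len(y_neg))])
    clf = SVC(kernel="rbf", gamma=gamma).fit(z, labels)
    # Step 7: re-sort database images by the SVM decision value.
    scores = clf.decision_function(y_db)
    return np.argsort(-scores)               # most relevant first
```

In an RF loop, the ranking returned by a function of this kind would simply replace the plain SVM ranking at each iteration.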

2.3. Orthogonal Complement Components Analysis SVM in the Kernel Space

In the last section we derived the linear-space OCCA. A single Gaussian distribution often describes the distribution of samples in the input feature space accurately when the positive feedbacks are similar objects under the same conditions (e.g., similar view angle, similar illumination, etc.). However, this is not the case for CBIR, so assuming that all positive feedbacks form a single Gaussian is not reasonable. Meanwhile, the dimension of the orthogonal complement subspace decreases as the number of positive feedbacks increases, and consequently the performance of the system will be degraded by noise. Therefore, generalizing the algorithm to its kernel version (KEOCCA SVM) is helpful.

To obtain KEOCCA SVM, a kernel version of the KLT is required. The principal components can be extracted by kernel principal component analysis (KPCA) [16], because all eigenvectors with nonzero eigenvalues must lie in the span of the mapped data. However, we cannot obtain all of the orthogonal complement components of the positive feedbacks in this way. A feasible solution is to extract a subset of the orthogonal complement components; that is, we consider the part of the orthogonal complement space of the positive feedbacks that is spanned by the positive and negative feedbacks in the Hilbert space. Note that the orthogonal complement space of the positive feedbacks should not be constructed from all images in the database, because many images in the database are query relevant but have not been labeled as positive feedbacks, and we can only obtain the covariance matrix of the positive feedbacks. Hence the orthogonal complement components of the positive feedbacks constructed from all feedbacks are called the kernel empirical orthogonal complement components (KEOCC), and the transformation is called kernel empirical orthogonal complement component analysis (KEOCCA).

As in SVM and KPCA, we first map the data into the Hilbert space by ψ(·) and then use the kernel trick, K(x_i, x_j) = ψ(x_i)^T ψ(x_j), to obtain the solution. We first calculate the covariance matrix of the positive feedbacks in the Hilbert space according to

C_+^ψ = \frac{1}{N_+} \sum_{i=1}^{N_+} (ψ(x_i) − \bar{ψ})(ψ(x_i) − \bar{ψ})^T,   (3)

where \bar{ψ} = \frac{1}{N_+} \sum_{i=1}^{N_+} ψ(x_i) is the center of the positive feedbacks in the Hilbert space. According to the previous analysis, the empirical complement components are restricted to \tilde{Φ} ∈ span{ψ(x_1), ..., ψ(x_{N_+}), ψ(x_{N_+ + 1}), ..., ψ(x_{N_+ + N_-})}; because we cannot obtain the complete orthogonal complement space, we mark the empirical orthogonal complement components with a tilde. Therefore the basis functions for the KEOCCA are the solutions \tilde{Φ} = \sum_{k=1}^{N_+ + N_-} ξ_k ψ(x_k) of the eigenvalue problem C_+^ψ \tilde{Φ} = 0 (4). Through the kernel trick, this eigenvalue problem can be solved using the kernel matrix K over all feedback samples, K = [K(x_i, x_j)], which is partitioned into blocks between the positive and negative feedbacks (5). In this way we obtain the KEOCC \tilde{Φ} satisfying C_+^ψ \tilde{Φ} = 0.

Similar to OCCA SVM, we project the positive feedbacks, the negative feedbacks, and all images in the database onto the space spanned by the KEOCC via y = \tilde{Φ}^T (ψ(x) − \bar{ψ}). In the KEOCC space, a positive feedback, a negative feedback, and a database image are represented by y_+, y_-, and y_d, respectively. Using z = [y_+, y_-], the standard SVM classifier is trained. Finally, we measure the dissimilarity through the output of the SVM, f(y) = \sum_{i=1}^{S} α_i y_i K(z_i, y) + b, where S is the number of support vectors. The algorithm is shown in Table 2.

Table 2. The algorithm of KEOCCA SVM.
1. Calculate the kernel matrix K over all feedback samples.
2. Calculate the kernel empirical orthogonal complement components \tilde{Φ} of the kernel covariance matrix C_+^ψ of the positive feedbacks from C_+^ψ \tilde{Φ} = 0.
3. Project all positive feedbacks onto their center: y_+ = \tilde{Φ}^T (ψ(x) − \bar{ψ}) for each positive feedback x.
4. Project all negative feedback samples onto the empirical kernel orthogonal complement subspace: y_- = \tilde{Φ}^T (ψ(x) − \bar{ψ}) for each negative feedback x.
5. Project the remaining images in the database onto the same subspace: y_d = \tilde{Φ}^T (ψ(x) − \bar{ψ}) for each database image x.
6. Train a standard SVM classifier on z = [y_+, y_-].
7. Re-sort the projected remaining images y_d using the output of the SVM, f(y) = \sum_{i=1}^{S} α_i y_i K(z_i, y) + b.
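One simple way to experiment with the idea behind KEOCCA SVM is to approximate the Hilbert-space construction with an empirical kernel map: represent every sample by its kernel values against the feedback set, then apply the linear OCCA of Table 1 in that space. The sketch below takes this route; it is an assumption-laden illustration rather than the exact derivation of Section 2.3, and the kernel width gamma is left as a free parameter.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def keocca_svm_rank(X_pos, X_neg, X_db, gamma=1.0, tol=1e-10):
    """Rough sketch of the KEOCCA SVM idea via an empirical kernel map.

    Each sample is represented by its kernel values against the feedback
    samples; the linear OCCA of Table 1 is then applied in that space.
    This is an approximation for illustration, not the paper's exact
    construction.
    """
    feedback = np.vstack([X_pos, X_neg])

    # Empirical kernel map: phi(x) = [K(x, z) for z in feedback samples].
    def emp_map(X):
        return rbf_kernel(X, feedback, gamma=gamma)

    P, N, D = emp_map(X_pos), emp_map(X_neg), emp_map(X_db)
    center = P.mean(axis=0)
    cov = np.cov(P - center, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    occ = eigvecs[:, eigvals < tol]            # empirical complement basis
    z = np.vstack([(P - center) @ occ, (N - center) @ occ])
    labels = np.hstack([np.ones(len(P)), -np.ones(len(N))])
    clf = SVC(kernel="rbf", gamma=gamma).fit(z, labels)
    return np.argsort(-clf.decision_function((D - center) @ occ))
```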
3. Image Retrieval System

In CBIR we assume that the user expects the best possible retrieval results after each RF iteration, i.e., the search engine is required to return the most semantically relevant images according to the previous feedback samples. Meanwhile, the user is impatient: he will neither label a large number of images in each RF iteration nor perform many iterations [17].

For image retrieval, images are represented by color [18], texture [19], and shape [20] features. Color information is the most important feature for image retrieval, because color is robust with respect to scaling, orientation, perspective, and occlusion of images [18]. Texture information is also an important cue; previous studies have shown that texture information based on structure and orientation fits the model of human perception well. Shape information is another important clue that fits human perception, and many image retrieval systems use it. In this paper, we select the color histogram [18], Gabor texture [19], and edge direction histogram [20] to represent images.

Figure 1 shows the user interface of our image retrieval system, which uses query by example. To improve the performance, we focus on the RF algorithms. First, the user selects a query image from the thumbnail gallery and clicks the "Set as Query" button. Then the user clicks the "Retrieval" button, and the images in the gallery are re-sorted. Next, the user provides feedback by clicking on the thumb-up or thumb-down button according to his judgment of the relevance of each retrieved image. Finally, the user clicks the "Retrieval" button again to re-sort the images in the gallery. The last two steps can be repeated iteratively until a satisfactory result is obtained.

Figure 1. The user interface of the system.

4. Experimental Results

The experiments were divided into three parts. Accuracy, defined as the ratio of the number of relevant images retrieved to the number of top retrieved images, is used to evaluate the retrieval performance. For all algorithms, i.e. SVM [10], OCCA SVM, and KEOCCA SVM, we choose the Gaussian kernel

K(x, y) = e^{−ρ \|x − y\|^2},   (6)

where ρ is a fixed kernel width.

The first evaluation experiment was executed on a small database of 1,600 wildlife images covering 16 different types of wild animals from Corel. We used all 1,600 images as queries. During the RF iterations, the first 5 query-relevant and the first 5 irrelevant images among the top 48 images retrieved in the previous iteration were selected as positive and negative feedbacks, respectively. In this first experiment we compare the performance of the proposed algorithms with the traditional SVM-based RF algorithm, using 4 RF iterations. Figure 2 shows the experimental results: the proposed KEOCCA SVM significantly outperforms SVM.

Figure 2. Evaluation experiment on the small database.

Most recent CBIR evaluation experiments have been executed on large-scale image databases. In this experiment, we compare the new algorithm KEOCCA SVM with SVM on a subset of the Corel Photo Gallery [1], which includes 17,800 images with 90 concepts. The computer randomly selected 300 queries, and for each query image 9 RF iterations were executed. The experimental results are shown in Figure 3. From the figure, we can see that the proposed KEOCCA SVM performs much better than the original SVM.
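For completeness, the snippet below shows the two evaluation ingredients used above in a hedged form: the Gaussian kernel of (6) with an unspecified width ρ (the paper's exact setting is not reproduced here), and the top-k accuracy, i.e., the fraction of relevant images among the top k retrieved.

```python
import numpy as np

def gaussian_kernel(x, y, rho=1.0):
    """Gaussian kernel of Eq. (6); rho is a width parameter chosen by the
    user (the specific value used in the paper is not reproduced here)."""
    return np.exp(-rho * np.sum((x - y) ** 2))

def top_k_accuracy(ranked_ids, relevant_ids, k):
    """Accuracy as defined in Section 4: fraction of the top-k retrieved
    images that are relevant to the query."""
    top_k = ranked_ids[:k]
    return sum(1 for i in top_k if i in relevant_ids) / k

# Example: a ranking in which 7 of the top 10 images share the query concept.
ranked = list(range(100))
relevant = set(range(7)) | {15, 30, 42}
print(top_k_accuracy(ranked, relevant, k=10))   # -> 0.7
```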

Finally, we also performed some real-world experiments. We randomly selected several images as queries and, for each query, ran 4 RF iterations. In each RF iteration, we selected some query-relevant and irrelevant images as positive and negative feedbacks from the first three screens of results, respectively. The number of positive and negative feedbacks was less than 10 in each iteration, and they were not necessarily the top retrieved images; we chose them according to our own judgment. Figure 4 shows the experimental results, where the top-left image of each subfigure is the query. We can see that the proposed KEOCCA SVM algorithm works well in practical applications.

5. Conclusion

Relevance feedback (RF) plays an essential role in improving the performance of content-based image retrieval (CBIR). Recently, the support vector machine (SVM) has been used in RF; its advantage is that it can generalize better than many other classifiers. To improve SVM-based RF, we propose the orthogonal complement component analysis (OCCA) combined with SVM. We then generalize OCCA to Hilbert space, define the kernel empirical OCCA (KEOCCA), and combine KEOCCA with SVM. Through experiments on the Corel Photo Gallery with 17,800 images, we show that the new method significantly outperforms the original SVM-based RF.

6. Acknowledgement

The work described in this paper was fully supported by a grant from the Research Grants Council of the Hong Kong SAR (Project No. AoE/E-01/99).

7. References

[1] J.Z. Wang, J. Li, and G. Wiederhold, "SIMPLIcity: Semantics-Sensitive Integrated Matching for Picture Libraries," IEEE Trans. on PAMI, vol. 23, no. 9, pp. 947-963, Sept. 2001.
[2] Y. Rui, T. S. Huang, and S. Mehrotra, "Content-based Image Retrieval with Relevance Feedback in MARS," in Proc. IEEE ICIP, 1997.
[3] Y. Ishikawa, R. Subramanya, and C. Faloutsos, "MindReader: Querying Databases through Multiple Examples," in Proc. VLDB, pp. 433-438, 1998.
[4] Y. Rui, T. S. Huang, M. Ortega, and S. Mehrotra, "Relevance Feedback: A Power Tool in Interactive Content-based Image Retrieval," IEEE Trans. on CSVT, Sept. 1998.
[5] I.J. Cox, L. Miller, P. Minka, V. Papathomas, and P. Yianilos, "The Bayesian Image Retrieval System, PicHunter: Theory, Implementation and Psychophysical Experiments," IEEE Trans. on IP, vol. 9, no. 1, pp. 20-37, 2000.
[6] X. S. Zhou and T. S. Huang, "Small Sample Learning During Multimedia Retrieval Using BiasMap," in Proc. IEEE CVPR, 2001.
[7] X. S. Zhou and T. S. Huang, "Comparing Discriminating Transformations and SVM for Learning during Multimedia Retrieval," in Proc. ACM Int. Conf. on Multimedia, 2001.
[8] P. Jing, "Multi-class Relevance Feedback Content-based Image Retrieval," Computer Vision and Image Understanding, pp. 42-67, 2003.
[9] J.H. Friedman, "Flexible Metric Nearest Neighbor Classification," Technical Report, Dept. of Statistics, Stanford University, 1994.
[10] L. Zhang, F. Lin, and B. Zhang, "Support Vector Machine Learning for Image Retrieval," in Proc. IEEE ICIP, 2001.
[11] P. Hong, Q. Tian, and T. S. Huang, "Incorporate Support Vector Machines to Content-based Image Retrieval with Relevant Feedback," in Proc. IEEE ICIP, 2000.
[12] Y. Chen, X. S. Zhou, and T. S. Huang, "One-class SVM for Learning in Image Retrieval," in Proc. IEEE ICIP, 2001.
[13] G. Guo, A. K. Jain, W. Ma, and H. Zhang, "Learning Similarity Measure for Natural Image Retrieval with Relevance Feedback," IEEE Trans. on NN, vol. 12, no. 4, pp. 811-820, July 2002.
[14] V. Vapnik, The Nature of Statistical Learning Theory, Springer-Verlag, New York, 1995.
[15] J. Burges, "A Tutorial on Support Vector Machines for Pattern Recognition," Data Mining and Knowledge Discovery, vol. 2, pp. 121-167, 1998.
[16] K.R. Muller, S. Mika, G. Ratsch, K. Tsuda, and B. Scholkopf, "An Introduction to Kernel-based Learning Algorithms," IEEE Trans. on NN, vol. 12, no. 2, Mar. 2001.
[17] X.S. Zhou and T.S. Huang, "Relevance Feedback for Image Retrieval: A Comprehensive Review," ACM Multimedia Systems Journal, vol. 8, no. 6, pp. 536-544, Apr. 2003.
[18] M.J. Swain and D.H. Ballard, "Color Indexing," IJCV, vol. 7, no. 1, pp. 11-32, 1991.
[19] B. S. Manjunath and W. Y. Ma, "Texture Features for Browsing and Retrieval of Image Data," IEEE Trans. on PAMI, vol. 18, no. 8, pp. 837-842, Aug. 1996.
[20] A. K. Jain and A. Vailaya, "Image Retrieval Using Color and Shape," Pattern Recognition, vol. 29, no. 8, pp. 1233-1244, Aug. 1996.

Figure 3. Evaluation results on the large-scale Corel Photo Gallery with 17,800 images. The top-left, top-middle, top-right, bottom-left, bottom-middle, and bottom-right panels show the mean accuracy curves over 9 RF iterations for the top 10, 20, 30, 40, 50, and 60 retrieved images, respectively.

Figure 4. Real-world experimental results after the 4th RF iteration. The top-left image of each subfigure is the query.