Boosting for transfer learning with multiple sources


Yi Yao    Gianfranco Doretto
Visualization and Computer Vision Lab, GE Global Research, Niskayuna, NY 12309
yaoyi@ge.com    doretto@research.ge.com

Abstract

Transfer learning allows leveraging the knowledge of source domains, available a priori, to help training a classifier for a target domain, where the available data is scarce. The effectiveness of the transfer is affected by the relationship between source and target. Rather than improving the learning, brute force leveraging of a source poorly related to the target may decrease the classifier performance. One strategy to reduce this negative transfer is to import knowledge from multiple sources to increase the chance of finding one source closely related to the target. This work extends the boosting framework for transferring knowledge from multiple sources. Two new algorithms, MultiSourceTrAdaBoost and TaskTrAdaBoost, are introduced, analyzed, and applied to object category recognition and specific object detection. The experiments demonstrate their improved performance by greatly reducing the negative transfer as the number of sources increases. TaskTrAdaBoost is a fast algorithm enabling rapid retraining over new targets.

1. Introduction

A common assumption of traditional machine learning algorithms is that the probability distributions of the training and testing data are the same. Under such an assumption, when presented with a new set of data with a different distribution, training samples need to be collected for learning new classifiers. Let us consider a classic computer vision problem, such as object category recognition, which is known to require a large number of training samples to ensure good generalization []. When tasked with the problem of recognizing a new object category, new training data has to be collected and labeled so as to represent the new distribution. However, one would save time if he/she could leverage useful information from existing annotated data and/or classifiers of old object categories. In addition, different reasons may impede easy access to new data, and only a small
number of samples may be available. Training a new classifier under such conditions would dramatically increase the risk of overfitting the new data, leading to poor generalization. It would be more efficient if one could regularize the learning in this scenario by exploiting the knowledge previously accumulated from similar problems.

Transfer learning [6, 22] represents a family of algorithms that relaxes the identical distribution assumption of the traditional machine learning approach. As the name suggests, transfer learning algorithms leverage and transfer informative knowledge from old data domains (sources) to a new data domain (target). The transferred knowledge helps improving the learning in the target domain when the training samples are scarce. Among the works in this area, TrAdaBoost [4] is becoming a popular boosting-based algorithm that is most closely related to our work.

In general, the ability to transfer knowledge from a source to a target depends on how they are related. The stronger the relationship, the more usable the previous knowledge will be. On the other hand, brute force transferring in case of weak relationships may lead to performance deterioration of the resulting classifier. This is known as negative transfer. In order to avoid this effect one would have to answer the question of when to transfer. Limited work has been done in this area [6]. One strategy to decrease the risk of negative transfer is to import knowledge not from one, but from multiple sources. In this way, the chance to borrow beneficial knowledge closely related to the target domain significantly increases. From another point of view, this implies that answering the question of when to transfer becomes less important. TrAdaBoost relies only on one source, and therefore is intrinsically vulnerable to negative transfer.

This work formally states the problem of transfer learning from multiple sources to improve the training of a target classifier (Section 3). Two boosting-based approaches addressing this problem are proposed. The first one, called MultiSourceTrAdaBoost, extends the TrAdaBoost framework for handling
multiple sources (Section 4). The second one, called TaskTrAdaBoost, introduces a training process with two phases (Section 5). One is dedicated to the summarization of the knowledge from multiple sources. The other one is devoted to transferring knowledge to the target, and has a very low time complexity, which enables rapid retraining when presented with a new target. The theoretical performance of

these two algorithms, and their dynamic behavior, are discussed in relation to AdaBoost (Section 6). The algorithms are general, and have the potential for significantly improving the performance of several computer vision applications. The approaches are deployed and evaluated within the context of object category recognition and specific object detection (Section 7). A thorough comparison against TrAdaBoost, and against the baseline traditional machine learning algorithm AdaBoost, is also provided.

2. Related work

Transfer learning has been deployed over a wide variety of applications, such as sign language recognition [7], text classification [25], WiFi localization [2], and adaptive updating of land-cover maps [3]. The approaches to transfer learning are categorized based on the means used for importing knowledge from source to target, and can be based on instance-transfer [4, ], feature-representation-transfer [7, 24], parameter-transfer [8, 2, 9], and relational-knowledge-transfer. For more details we refer the reader to the following surveys: [6, 22].

In the instance-transfer approach, samples from the source are directly applied for training the target classifier. TrAdaBoost [4] falls into this category, as does MultiSourceTrAdaBoost, the first proposed approach. In feature-representation-transfer the focus is to find a representation of the feature space that minimizes the differences between source and target. Parameter-transfer approaches assume that the target could share parameters with a related source. TaskTrAdaBoost, the second proposed method, falls into this category. Relational-knowledge-transfer assumes that data within the source is correlated, and the goal is to export this correlation to the target [6]. In addition to these approaches, [7] applied the sparse prototype representation to transductive transfer learning, where no labeled data in the target domain is available. [5] presented a learning method where high-level semantic attributes, describing shape and color, are exploited to transfer knowledge from multiple sources to the target. Support vector machines (SVM)
have been modified for transfer learning. In [27] an SVM is derived by adjusting existing classifiers according to the target data. [4] derived more adaptable decision boundaries by training a target SVM with the help of weighted support vectors learned from multiple sources. [6] performed video concept detection by learning an SVM where the kernel is derived by minimizing the distribution mismatch between the labeled source data and the unlabeled target data. Although SVM-based transfer learning has been extended to leverage knowledge from more than one source, to the best of the knowledge of the authors, this is the first work that extends boosting-based transfer learning to multiple sources.

Some related work has extended boosting for multi-task learning [26] and on-line incremental learning [2]. In this context, [2] presented a multi-class boosting-based classification framework that jointly selects weak classifiers shared among different tasks. In contrast, here the interest is in boosting a single target classifier by leveraging the (instance-based, or parameter-based) knowledge transferrable from multiple sources. Also, [2] assumes a comparable number of training samples for every task. In contrast, the proposed method focuses on scenarios where target training samples are scarce.

3. Problem statement

In this section we introduce some notation and define the type of transfer learning problem we intend to approach. Formally, a domain D is made of a feature space X and a marginal probability distribution P(X), where X = {x_1, ..., x_n} and x_i ∈ X. A task T is made of a label space Y = {+1, −1} and a boolean function f : X → Y. Learning the task T for the domain D, in traditional machine learning, amounts to estimating a classifier function f̂ : X → Y, from the given training data D = {(x_1, y_1), ..., (x_n, y_n) | x_i ∈ X, y_i ∈ Y}, that best approximates f according to certain criteria. Let us now indicate with D_T = (X, P_T(X)) a target domain for which we would like to learn the target task T_T = (Y, f_T), from the target training data D_T = {(x^T_1, y^T_1), ..., (x^T_{n_T}, y^T_{n_T})}. Let us also indicate with D_S = (X,
P_S(X)) a source domain, and with T_S = (Y, f_S) a source task, for which we have available the source training data D_S = {(x^S_1, y^S_1), ..., (x^S_{n_S}, y^S_{n_S})}. Improving the learning of the target classifier function f̂_T : X → Y by exploiting the knowledge of the source task T_S, in the source domain D_S, is a so-called inductive transfer learning problem [6, 22]. Performing inductive transfer learning, as opposed to traditional machine learning, should be advantageous in the cases when the size of the target training data D_T is very small in absolute terms, and also relative to the size of the source training data D_S, i.e., n_T << n_S. In fact, under such conditions, traditional machine learning would suffer from serious overfitting problems. From here comes the idea of attempting to regularize the learning problem by transferring knowledge from a source domain, where resources have already been allocated to collect abundant training data for learning the source task. The TrAdaBoost algorithm [4] has become a popular boosting-based solution for this case of the inductive transfer learning problem. The source and target domains and tasks may differ either because their marginal probability distributions differ (P_T ≠ P_S), or their boolean functions differ (f_T ≠ f_S), or both differ. TrAdaBoost provides a framework for automatically discovering which part of the knowledge is specific to the source domain or task, and which part may be common between source and target domains or tasks, and provides a way to attempt to transfer this knowledge from the source

to the target.

The effectiveness of any inductive transfer learning method depends on the source domain and task, and on how they relate to the target domain and task. It is reasonable to expect a transfer method to take advantage of strong relationships. The most effective transfer would occur when D_T = D_S and T_T = T_S, which reduces inductive transfer learning to traditional machine learning. On the other hand, a weak relationship may cause the transfer method to not only be ineffective, but also to decrease the performance on the target task, when compared to traditional machine learning performance. This effect is known as negative transfer. In order to increase positive transfer, and avoid the negative, one can think of transferring knowledge not from one but from multiple sources. In this case the transfer learning method could identify and take advantage of the source, among the ones that have been made available, that is found to be the most closely related to the target. Or, even better, it could take advantage of the best pieces of knowledge, coming from the various available sources, that are found to be the most closely related to the target. Therefore, in this work we make the assumption that N source domains D_S1, ..., D_SN, with source tasks T_S1, ..., T_SN, and source training data D_S1, ..., D_SN, are available, and we would like to exploit them to improve the learning of the target classifier function f̂_T : X → Y.

Since TrAdaBoost transfers knowledge only from one source, its performance heavily relies on the relationship between source and target. In Section 4 we extend TrAdaBoost from handling only one source to handling multiple sources, making it much less vulnerable to negative transfer. In Section 5 we further expand this boosting-based approach by introducing a two-step learning procedure that, given the same sources, can transfer knowledge to a new target with minimal computational complexity.

4. TrAdaBoost with multiple sources

In this section we are going to present an extension of TrAdaBoost [4] to multiple sources. We recall that AdaBoost [] is a traditional
machine learning algorithm, which assumes that the domains and tasks from which the training and testing data come are the same (i.e., there is no distinction between source and target because D_S = D_T and T_S = T_T). AdaBoost at every iteration increases the accuracy of the selection of the next weak classifier by carefully adjusting the weights of the training instances. In particular, it gives more importance to misclassified instances, because they are believed to be the most informative for the next selection. TrAdaBoost assumes that there is abundant source training data to learn a classifier, but the target domain and task are different from the source (D_S ≠ D_T, and T_S ≠ T_T). Therefore, the TrAdaBoost learning paradigm allows to exploit a small target training data set D_T, in conjunction with the source training data set D_S, for driving the boosting of a target classifier f̂_T. The target training instances drive the selection of a weak classifier in the same way as AdaBoost does. On the other hand, at every iteration the source training instances are given less importance when

Algorithm 1: MultiSourceTrAdaBoost
Input: Source training data D_S1, ..., D_SN, target training data D_T, and the maximum number of iterations M.
Output: Target classifier function f̂_T : X → Y.
1. Set α_S = (1/2) ln(1 + √(2 ln n_S / M)), where n_S = Σ_k n_Sk.
2. Initialize the weight vector (w^S1, ..., w^SN, w^T), where w^Sk = (w^Sk_1, ..., w^Sk_{n_Sk}) and w^T = (w^T_1, ..., w^T_{n_T}), to the desired distribution.
for t = 1 to M do
  3. Empty the set of candidate weak classifiers: F ← ∅.
  4. Normalize the weight vector (w^S1, ..., w^SN, w^T).
  for k = 1 to N do
    5. Find the candidate weak classifier h^k_t : X → Y that minimizes the classification error over the combined set D_Sk ∪ D_T, weighted according to (w^Sk, w^T).
    6. Compute the error of h^k_t on D_T: ε^k_t = Σ_j w^T_j [y^T_j ≠ h^k_t(x^T_j)] / Σ_j w^T_j.
    7. F ← F ∪ {(h^k_t, ε^k_t)}.
  end
  8. Find the weak classifier h_t : X → Y such that (h_t, ε_t) = argmin_{(h,ε)∈F} ε.
  9. Set α_t = (1/2) ln((1 − ε_t)/ε_t), where ε_t < 1/2.
  10. Update the weight vectors: w^Sk_i ← w^Sk_i e^{−α_S [h_t(x^Sk_i) ≠ y^Sk_i]}, and w^T_i ← w^T_i e^{α_t [h_t(x^T_i) ≠ y^T_i]}.
end
return f̂_T(x) = sign(Σ_t α_t h_t(x))
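To make the flow of Algorithm 1 concrete, one boosting iteration can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes 1-D features, uses a decision stump as the weak learner (the paper uses a linear SVM), and all function names are hypothetical.

```python
import numpy as np

def stump_fit(X, y, w):
    """Weighted decision stump on 1-D features: pick threshold and polarity."""
    best = (np.inf, 0.0, 1)
    for thr in np.unique(X):
        for pol in (+1, -1):
            pred = np.where(X >= thr, pol, -pol)
            err = np.sum(w * (pred != y))
            if err < best[0]:
                best = (err, thr, pol)
    _, thr, pol = best
    return lambda x, t=thr, p=pol: np.where(x >= t, p, -p)

def multisource_iteration(sources, X_T, y_T, w_S, w_T, alpha_S):
    """One iteration of the MultiSourceTrAdaBoost sketch (Algorithm 1).

    sources: list of (X_Sk, y_Sk); w_S: list of source weight vectors;
    w_T: target weights (modified in place); alpha_S: fixed source coefficient.
    """
    candidates = []
    for k, (X_S, y_S) in enumerate(sources):
        # Line 5: candidate weak classifier over the combined set D_Sk u D_T.
        X = np.concatenate([X_S, X_T]); y = np.concatenate([y_S, y_T])
        w = np.concatenate([w_S[k], w_T]); w = w / w.sum()
        h = stump_fit(X, y, w)
        # Line 6: weighted error of the candidate on the target data only.
        eps = np.sum(w_T * (h(X_T) != y_T)) / w_T.sum()
        candidates.append((h, eps))
    # Line 8: keep the candidate from the source most related to the target.
    h_t, eps_t = min(candidates, key=lambda c: c[1])
    alpha_t = 0.5 * np.log((1 - eps_t) / max(eps_t, 1e-10))
    # Line 10: misclassified source instances are down-weighted (TrAdaBoost),
    # misclassified target instances are up-weighted (AdaBoost).
    for k, (X_S, y_S) in enumerate(sources):
        w_S[k] *= np.exp(-alpha_S * (h_t(X_S) != y_S))
    w_T *= np.exp(alpha_t * (h_t(X_T) != y_T))
    return h_t, alpha_t
```

Running M such iterations and returning sign(Σ_t α_t h_t(x)) completes the sketch; the error clipping inside the α_t computation is a numerical safeguard added here, not part of the paper.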
they are misclassified. This is because they are believed to be the most dissimilar to the target instances, and therefore their impact on the next weak classifier selection should be weakened.

We now extend TrAdaBoost to the case where abundant training data is available from multiple sources, each of which is different from the target (D_Sk ≠ D_T, and T_Sk ≠ T_T). The strategy for assigning the importance to the source and target training instances remains the same as explained above. However, we no longer have to find a weak classifier by leveraging only one source, and a mechanism has been introduced such that every weak classifier is selected from the source that appears to be the most closely related to the target at the current iteration. Clearly, this approach greatly reduces the effects of negative transfer caused by the imposition to transfer knowledge from a single source, potentially loosely related to the target. More precisely, at every iteration each source, independently from the others,

combines its training data with the target training data to propose a candidate weak classifier. The final weak classifier is then chosen from the source that minimizes the target classification error. A detailed description of the proposed extension, called MultiSourceTrAdaBoost, is given in Algorithm 1, where N source training data sets are given as input, and M weak classifiers are extracted to compose f̂_T. As can be seen from line 10, the weighting update of the source training instances is the same as in TrAdaBoost, and the weighting update of the target training instances is the same as in AdaBoost. At every iteration the inner loop computes: (a) N candidate weak classifiers from N different training data sets {D_Sk ∪ D_T}; (b) how each weak classifier relates to the target training data, by computing the classification error. Line 8 then selects the weak classifier corresponding to the source that minimizes the target classification error. Finally, when N = 1 the algorithm reduces to TrAdaBoost.

5. Boosting for transferring source tasks

Figure 1(a) depicts the conceptualization of inductive transfer learning, which is intended as the exploitation of the knowledge from different sources to improve the learning of a classifier that is meant to work in a target domain, to address the target task that was defined on it. MultiSourceTrAdaBoost is one particular implementation of this concept. More precisely, it tries to

Algorithm 2: Phase-I of TaskTrAdaBoost
Input: Source training data D_S1, ..., D_SN, the maximum number of iterations M, and the regularizing threshold γ.
Output: Set of candidate weak classifiers H.
1. Empty the set of candidate weak classifiers: H ← ∅.
for k = 1 to N do
  2. Initialize the weight vector w^Sk = (w^Sk_1, ..., w^Sk_{n_Sk}) to the desired distribution.
  for t = 1 to M do
    3. Normalize the weight vector w^Sk.
    4. Find the candidate weak classifier h^k_t : X → Y that minimizes the classification error over the set D_Sk, weighted according to w^Sk.
    5. Compute the error ε = Σ_j w^Sk_j [y^Sk_j ≠ h^k_t(x^Sk_j)].
    6. α ← (1/2) ln((1 − ε)/ε), where ε < 1/2.
    if α > γ then
      7. H ← H ∪ {h^k_t}.
    8. Update the weights: w^Sk_i ← w^Sk_i e^{−α y^Sk_i h^k_t(x^Sk_i)}.
  end
end
return H
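A minimal sketch of Phase-I (Algorithm 2) follows: AdaBoost is run on each source independently, and only weak classifiers whose coefficient α exceeds the threshold γ are harvested. Here `weak_fit` stands in for any weighted weak learner, and the error clipping is an illustrative numerical safeguard, not part of the paper.

```python
import numpy as np

def phase1_harvest(sources, weak_fit, M, gamma):
    """Phase-I sketch: collect discriminative weak classifiers from each source.

    sources: list of (X_Sk, y_Sk) with labels in {+1, -1};
    weak_fit(X, y, w): weighted weak learner returning a function X -> {+1, -1};
    M: maximum boosting iterations per source; gamma: regularizing threshold.
    """
    H = []
    for X_S, y_S in sources:
        w = np.ones(len(y_S)) / len(y_S)          # line 2: initial distribution
        for _ in range(M):
            h = weak_fit(X_S, y_S, w)             # line 4: weighted weak learner
            eps = max(float(np.sum(w * (h(X_S) != y_S))), 1e-10)
            if eps >= 0.5:                        # weak-learning condition fails
                break
            alpha = 0.5 * np.log((1 - eps) / eps) # line 6
            if alpha > gamma:                     # line 7: keep only strong ones
                H.append(h)
            w = w * np.exp(-alpha * y_S * h(X_S)) # line 8: AdaBoost reweighting
            w = w / w.sum()
    return H
```

The harvested set H summarizes the source tasks explicitly as weak classifiers, ready to be transferred in Phase-II.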
identify which training instances, coming from the various source domains, can be reused, together with the target training instances, to boost the target classifier. Figure 1(b) depicts this situation, which is typically referred to as an instance-transfer approach. Another way of implementing inductive transfer learning is by admitting that the target classifier model will share some parameters with the most closely related sources. Therefore, this parameter-transfer approach tries to identify which parameters, coming from the various sources, can be reused, together with the target training data, to improve the target classifier learning.

In this section we introduce an inductive transfer learning framework made of two phases. Phase-I deploys traditional machine learning to extract suitable parameters that summarize the knowledge from the sources. Phase-II is a parameter-transfer approach for boosting the target classifier f̂_T. More precisely, Phase-I extracts the parameters that constitute the models of the source task classifiers f̂_S1, ..., f̂_SN. Therefore, the source tasks are described explicitly, and not implicitly through the labeled source training data. For this reason, this instance of the parameter-transfer approach can be thought of as a task-transfer approach, where sub-tasks, coming from the various source tasks, can be reused, together with the target training instances, to boost the target classifier,

Algorithm 3: Phase-II of TaskTrAdaBoost
Input: Target training data D_T, the set of candidate weak classifiers H, and the maximum number of iterations M.
Output: Target classifier function f̂_T : X → Y.
1. Initialize the weight vector w^T = (w^T_1, ..., w^T_{n_T}) to the desired distribution.
for t = 1 to M do
  2. Normalize the weight vector w^T.
  3. Empty the current weak classifier set: F ← ∅.
  foreach h ∈ H do
    4. Compute the error of h on D_T: ε = Σ_j w^T_j [y^T_j ≠ h(x^T_j)].   (1)
    if ε > 1/2 then
      5. h ← −h.
      6. Update ε via (1).
    7. F ← F ∪ {(h, ε)}.
  end
  8. Find the weak classifier h_t : X → Y such that (h_t, ε_t) = argmin_{(h,ε)∈F} ε.
  9. H ← H \ {h_t}.
  10. Set α_t = (1/2) ln((1 − ε_t)/ε_t).
  11. Update the weights: w^T_i ← w^T_i e^{−α_t y^T_i h_t(x^T_i)}.
end
return f̂_T(x) = sign(Σ_t α_t h_t(x))
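Phase-II (Algorithm 3) can likewise be sketched as an AdaBoost loop that, at each iteration, transfers the candidate from H with the smallest weighted target error; the sign-flipping of classifiers with error above 1/2 follows lines 5-6. A hedged sketch with hypothetical names, and an error clip added here for numerical safety:

```python
import numpy as np

def task_tradaboost_phase2(H, X_T, y_T, M):
    """Phase-II sketch: boost a target classifier from harvested candidates.

    H: weak classifiers from Phase-I, each mapping features to {+1, -1};
    X_T, y_T: scarce target training data; M: maximum iterations.
    """
    H = list(H)
    w = np.ones(len(y_T)) / len(y_T)
    ensemble = []
    for _ in range(min(M, len(H))):
        scored = []
        for i, h in enumerate(H):
            eps = float(np.sum(w * (h(X_T) != y_T)))   # line 4: error on D_T
            if eps > 0.5:                              # lines 5-6: flip h
                h = (lambda g: (lambda x: -g(x)))(h)
                eps = 1.0 - eps
            scored.append((eps, i, h))
        eps_t, i_t, h_t = min(scored, key=lambda s: s[0])  # line 8
        H.pop(i_t)                                         # line 9: remove h_t
        alpha_t = 0.5 * np.log((1 - eps_t) / max(eps_t, 1e-10))
        w = w * np.exp(-alpha_t * y_T * h_t(X_T))          # line 11
        w = w / w.sum()
        ensemble.append((alpha_t, h_t))
    return lambda x: np.sign(sum(a * h(x) for a, h in ensemble))
```

Because this loop only scores and reweights (it never trains a weak learner), its cost is low, which is what enables the rapid retraining over a new target discussed in Section 6.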
because they are believed to be closely related to the target task. The sub-tasks will be represented under the form of weak classifiers. Figure 1(c) depicts this situation.

A detailed description of the proposed approach, which is called TaskTrAdaBoost, is given in Algorithm 2 for Phase-I, and in Algorithm 3 for Phase-II. Phase-I is nothing but AdaBoost run on each of the source training data sets. The output H is a collection of all the candidate weak classifiers that are computed, and that are the most discriminative. In fact, we constrain the coefficient α to be greater

Figure 1. Transfer learning approaches. (a) Inductive transfer learning. (b) Instance-transfer based approach (MultiSourceTrAdaBoost). (c) Parameter-transfer based approach (TaskTrAdaBoost).

than a given regularizing threshold γ. We do so because we are not interested in transferring parameters that may lead the target classifier to overfit the data. Phase-II is again an AdaBoost loop over the target training data D_T. However, at every iteration, the weak classifier with the lowest classification error on the target training data is picked from H, ensuring the transfer of the knowledge that is most closely related to the target task. Moreover, the update of the weights of the target training instances drives the search for the transfer of the next sub-task that is needed the most for boosting the target classifier.

6. Algorithm comparisons

Decision boundaries. Figure 2 shows a data distribution, and a sketch of how various learning algorithms would attempt to separate the instances. Squares are negative samples. Crosses are positive samples from one source, D_S1. Circles are positive samples from another source, D_S2. Orange stars and orange crosses are positive training and testing instances in the target domain, respectively. Figure 2(a) shows that a traditional machine learning algorithm, such as AdaBoost, would overfit the target training data with decision boundaries unable to guarantee good generalization. Figure 2(b) shows the decision boundaries obtained by TrAdaBoost when D_S1 and D_S2 are used jointly, which means that the sources are seen as one. In this case there is no overfitting. Figure 2(c) shows how MultiSourceTrAdaBoost improves the decision boundaries. Each source separately combines with the target, virtually producing the dashed boundaries on the left. On the right, the boundary parts more closely related to the target are transferred to produce tighter target decision boundaries. Finally, Figure 2(d) shows how TaskTrAdaBoost would behave. Phase-I learns the dashed
boundaries between D_S1 and everything else, as well as the dashed boundaries between D_S2 and everything else. At every iteration Phase-II grabs the most useful pieces of the dashed boundaries to build the tight target decision boundaries.

Performance analysis. The convergence properties of MultiSourceTrAdaBoost can be inherited directly from TrAdaBoost [4], whereas for TaskTrAdaBoost they can be inherited directly from AdaBoost [, 8]. Moreover, because the condition ε_t < 0.5 is satisfied in both algorithms, the prediction error ε over the target training data D_T is bounded by ε ≤ 2^M ∏_{t=1}^{M} √(ε_t (1 − ε_t)), and the upper bound of the associated generalization error is given by ε + O(√(M d_VC / n_T)), where d_VC is the VC-dimension of the weak classifier model [23].

We now make an observation regarding how the cardinality |H| of the set of candidate weak classifiers H affects the performance of TaskTrAdaBoost. If |H| is of the same order of magnitude as M, the offering of weak classifiers may be too limiting. Therefore, the probability of choosing weak classifiers with higher classification error ε_t increases, leading to a higher prediction error ε. On the other hand, an overly rich H (|H| very big) would very much increase the probability of choosing weak classifiers with low classification error, leading to a low prediction error. However, the VC-dimension d_VC would increase as well, leading to a higher risk of overfitting, as well as poorer generalization. This is the reason for inserting the regularizing threshold γ in Phase-I, which allows to strike a balance between prediction and generalization performance.

The set H also plays another role in TaskTrAdaBoost. More precisely, the fact that it limits the freedom in picking the weak classifiers leads to a greater prediction error, in comparison with MultiSourceTrAdaBoost. On the other hand, in the generalization error this effect is compensated, because this reduced freedom also leads to a smaller VC-dimension d_VC, and therefore a lower upper bound. Finally, since we have n_T << n_S, the convergence rate of TaskTrAdaBoost has a reduced upper bound [, 4], compared to
MultiSourceTrAdaBoost, which means that it requires fewer iterations.

7. Experimental results

The performance of the proposed methods is investigated based on two applications: object category recognition and specific object detection. In object category recognition, it is assumed that we are given a small number of training samples of a target object category, and abundant training samples of other source object categories. When presented with a test sample, we verify whether it belongs to the target object category. As for specific object detection, it is assumed that we are given a small number of training samples of a target object, and abundant training samples of other source objects of the same category and of other categories (background). When presented with a test image, we want to verify whether it contains the target object, and where it is located in the image. We use AdaBoost and TrAdaBoost as the baseline methods for performance comparison. All the

Figure 2. Decision boundaries. Representation of the decision boundaries between the positive and negative samples in the target domain, as computed by AdaBoost (a), TrAdaBoost (b), MultiSourceTrAdaBoost (c), and TaskTrAdaBoost (d). Orange crosses and stars represent the target positive samples. Dashed lines represent candidate decision boundaries. Solid lines are the learned boundaries.

algorithms use a linear SVM as the basic learner to build a weak classifier. For every experiment, we provide the receiver operating characteristic (ROC) curve of the classifier output for performance comparison. We also compute the area under the ROC curve, A_ROC, as a quantitative performance evaluation.

7.1. Object category recognition

Data sets. For object category recognition, we have used the Caltech 256 data set [3], which contains 256 object categories. Among them, we have used the 36 categories that have more than 100 samples. We have also used the background data set, collected via the Google image search engine, along with the remaining categories, as our augmented background data set.

Experimental setup. The bag-of-words method [9] is used to map images into the feature space for classification. We designate the target category and randomly draw the positive samples that form the target data. The number of positive samples for training, n_T^+, is limited from 1 to 50, while for testing it is 50. We treat the remaining categories as the repository from which to draw positive samples for the source data. We vary the number of source categories, or domains, N, from 1 to 10 to investigate the performance of the classifiers with respect to the variability of the domains. The number of positive samples for one source of data is 100. The negative samples of both source and target data are randomly drawn from the augmented background data set. The number of negative training samples in the target data is given by 5 n_T^+. The number of negative testing samples in the target data is 250. The number of negative training samples in the source data is 500. For each target object category, the performance of the classifier is evaluated
over 20 random combinations of N source object categories. Given the target and source categories, the performance of the classifier is obtained by averaging over 20 trials of experiments. The overall performance of the classifier is averaged over 20 target categories.

Results. Figure 3 compares AdaBoost, TrAdaBoost, MultiSourceTrAdaBoost, and TaskTrAdaBoost based on the area under the ROC with different numbers of positive target training samples (n_T^+ ∈ {1, 5, 15, 50}) and source domains (N ∈ {1, 2, 3, 5}). Figure 3(a) assumes N = 3 and shows the behavior of the algorithms as n_T^+ increases. Since AdaBoost does not transfer any knowledge from the source, its performance heavily depends on n_T^+. For a very small n_T^+ it performs only slightly better than chance, according to the A_ROC. TrAdaBoost combines the three sources into one and improves upon AdaBoost due to the transfer learning mechanism. By incorporating the ability to transfer knowledge from multiple individual domains, MultiSourceTrAdaBoost and TaskTrAdaBoost demonstrate a significant improvement in recognition accuracy, even for a very small n_T^+. In addition, the performance of AdaBoost and TrAdaBoost strongly depends on the selection of source domains and target positive samples, as revealed by the standard deviation of A_ROC. On the other hand, a much smaller standard deviation is observed for both of the proposed algorithms. As expected, the performance gaps among all the approaches dwindle as n_T^+ increases. They show a significant decrease when n_T^+ = 50, for the given dataset with a limited amount of positive testing samples.

Figure 3(b) assumes that N = 1. It shows that MultiSourceTrAdaBoost reduces to TrAdaBoost, and therefore they have the same performance. Moreover, it shows that TaskTrAdaBoost outperforms MultiSourceTrAdaBoost when n_T^+ is very small, and underperforms it for a larger n_T^+. These are the effects of the low VC-dimension offered by TaskTrAdaBoost, which is leveraging only one source. When n_T^+ is very small it helps avoiding overfitting more than the other approaches. When n_T^+ increases it
limits the ability to build the desired decision boundaries.

Figure 3(c) assumes n_T^+ = 1, and shows that as the number of source domains increases, the A_ROC of MultiSourceTrAdaBoost and TaskTrAdaBoost increases, and the corresponding standard deviations decrease, indicating an improved performance in both accuracy and consistency. TrAdaBoost is incapable of exploring the decision boundaries separating multiple source domains, resulting in a maintained performance regardless of the number of source domains. With N = 3 and n_T^+ = 1, MultiSourceTrAdaBoost and TaskTrAdaBoost have an A_ROC of 0.966 ± 0.047 and 0.972 ± 0.043, respectively, in comparison with an A_ROC of 0.848 from TrAdaBoost.

Time complexity. With C_h and C_w we indicate the time complexity to compute a weak classifier, and to update the weight of one training instance, respectively. The time complexity of AdaBoost is approximately C_h O(M) + C_w O(M n_T), the time complexity of TrAdaBoost is C_h O(M) + C_w O(M n_S),

Figure 3. Performance comparison. (a) Area under the ROC curve (A_ROC), with corresponding standard deviation (σ_AROC), against the number of positive target training samples n_T^+ with N = 3 sources; (b) A_ROC and σ_AROC against n_T^+ with N = 1; (c) A_ROC and σ_AROC against N with n_T^+ = 1; (d) processing time against n_T^+ with N = 2 (top), and against N with n_T^+ = 1 (bottom). MSTrAdaBoost represents MultiSourceTrAdaBoost.

and the time complexity of MultiSourceTrAdaBoost is C_h O(MN) + C_w O(M n_S), which is roughly the same as that of Phase-I of TaskTrAdaBoost, whereas Phase-II has a time complexity of C_w O(M |H| n_T). Therefore, since we typically have C_h >> C_w and |H| n_T << n_S, the Phase-II of TaskTrAdaBoost is very fast and deployable when there is a strong need for rapid retraining over a new target domain. Figure 3(d) plots the recorded average training time per experiment trial against n_T^+ and N for all the algorithms.

7.2. Specific object detection

Data sets. For this experiment we have collected a data set made of two video sequences of highway traffic. The first one (Sequence A) includes vehicle images from a fixed viewpoint, whereas the second one (Sequence B) includes vehicle images from different viewpoints. For each video, we manually annotated the ground-truth vehicle locations and sizes, by recording rectangular regions of interest (ROIs) around each vehicle moving along the highway, resulting in a total of about 1700 different ROIs, corresponding to 40 different vehicles. The sizes of the ROIs vary from about 30 × 20 to 120 × 140 pixels, depending on the type of the vehicle and the viewpoint. The average number of
annotated ROIs per vehicle is approximately 110 and 14 for Sequence A and Sequence B, respectively.

Experimental setup. Considering the small sizes of the ROIs, in this experiment we have chosen the region moment descriptor [5] for mapping image ROIs onto the feature space. We fix the number of source domains to N = 5. Positive samples are selected from the annotated ROIs, and negative samples are randomly cropped from the background. For a target object, n_T^+ varies from 1 to 50, whereas the remaining positive samples are used for testing. The overall ROC curves are obtained by averaging the performances over 5 target vehicles. The remaining experimental setup is the same as for object category recognition. The trained classifiers have been deployed to perform specific object detection in video by using a multiscale sliding window scheme, which reveals position and scale of the detected vehicle (see Figure 5).

Results. Figure 4 compares the performances among the classifiers based on ROC curves, and Table 1 lists the corresponding A_ROC values. For n_T^+ = 50, all the classifiers produce comparable performances. As n_T^+ decreases and becomes 1, we observe significant performance gaps. As expected, the proposed approaches can effectively exploit the decision boundaries between multiple sources and outperform TrAdaBoost. The reduced time complexity of the Phase-II of TaskTrAdaBoost makes it a good candidate for applications that need rapid retraining for the detection of a new target object. Finally, Figure 5 shows example frames with the detection of specific vehicles using TaskTrAdaBoost.

Figure 4. ROC curves. (a) Sequence A and (b) Sequence B.

8. Conclusions

This work extends the boosting framework for inductive transfer
learning when knowledge from multiple sources is available to help training a target classifier. Considering multiple sources directly addresses the problem of negative transfer, because the chance of importing knowledge from a source related to the target increases significantly. MultiSourceTrAdaBoost, an instance-transfer approach, and TaskTrAdaBoost, a parameter-transfer (or, more specifically, a task-transfer) approach, are introduced. They are applied to the problems of object category recognition and specific object detection when the available target data is scarce. Compared to TrAdaBoost, an impressive performance increase in terms of recognition and detection rates is observed, even with only one positive target training sample, showing the effectiveness of both approaches in exploiting multiple sources, with a slight advantage for TaskTrAdaBoost. Moreover, as the number of sources increases, a dramatic decrease in performance variability is observed, showing that the approaches tend to become independent of the source choices, and therefore properly address the negative transfer problem. The framework is general, and applicable to help the learning in a wide variety of computer vision problems. Finally, an important property of TaskTrAdaBoost is its speed when it is presented with a new target task to learn. This aspect makes it a good candidate for applications where the source knowledge needs to be quickly reused, for instance for the on-line retraining for the detection of a new specific object.

Table 1. Area under the ROC curves, A_ROC, and corresponding standard deviations.

Sequence A               n_T^+ = 5    n_T^+ = 1
AdaBoost                 976 ±        ± 7
TrAdaBoost               979 ±        ± 7
MultiSourceTrAdaBoost    976 ± 3      99 ± 87
TaskTrAdaBoost           985 ±        ± 68

Sequence B               n_T^+ = 5    n_T^+ = 1
AdaBoost                 94 ±         ± 3
TrAdaBoost               896 ±        ± 29
MultiSourceTrAdaBoost    9 ±          ± 8
TaskTrAdaBoost           922 ±        ± 36

Figure 5. Specific object detection. Example frames with specific vehicles detected: (a)-(b) Sequence A and (c)-(d) Sequence B. Green and red boxes depict the ground-truth and detected ROIs, respectively. Gray boxes show a zoom-in view of the detected ROIs.

References

[1] S. Bickel, M. Brückner, and T. Scheffer. Discriminative learning for differing training and test distributions. In Int'l Conf. on Machine Learning, 2007.
[2] E. Bonilla, K. M. Chai, and C. Williams. Multi-task Gaussian process prediction. In Annual Conf. on Neural Information Processing Systems, 2008.
[3] L. Bruzzone and M. Marconcini. Toward the automatic updating of land-cover maps by a domain-adaptation SVM classifier and a circular validation strategy. IEEE Trans. on Geoscience and Remote Sensing, 47(4), Apr. 2009.
[4] W. Dai, Q. Yang, G. Xue, and Y. Yu. Boosting for transfer learning. In Int'l Conf. on Machine Learning, 2007.
[5] G. Doretto and Y. Yao. Region Moments: Fast invariant
descriptors for detecting small image structures. In CVPR, 2010.
[6] L. Duan, I. W. Tsang, D. Xu, and S. J. Maybank. Domain transfer SVM for video concept detection. In CVPR, 2009.
[7] A. Farhadi, D. Forsyth, and R. White. Transfer learning in sign language. In CVPR, 2007.
[8] L. Fei-Fei, R. Fergus, and P. Perona. One-shot learning of object categories. IEEE TPAMI, 28(4):594-611, Apr. 2006.
[9] L. Fei-Fei and P. Perona. A Bayesian hierarchical model for learning natural scene categories. In CVPR, volume 2, June 2005.
[10] R. Fergus, P. Perona, and A. Zisserman. Object class recognition by unsupervised scale-invariant learning. In CVPR, 2003.
[11] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139, 1997.
[12] H. Grabner and H. Bischof. On-line boosting and vision. In CVPR, June 2006.
[13] G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset. Technical Report 7694, California Institute of Technology, 2007.
[14] W. Jiang, E. Zavesky, S.-F. Chang, and A. Loui. Cross-domain learning methods for high-level visual concept classification. In ICIP, 2008.
[15] C. H. Lampert, H. Nickisch, and S. Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In CVPR, June 2009.
[16] S. J. Pan and Q. Yang. A survey on transfer learning. Technical Report HKUST-CS08-08, Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Nov. 2008.
[17] A. Quattoni, M. Collins, and T. Darrell. Transfer learning for image classification with sparse prototype representations. In CVPR, 2008.
[18] R. E. Schapire. A brief introduction to boosting. In Int'l Conf. on Artificial Intelligence, 1999.
[19] M. Stark, M. Goesele, and B. Schiele. A shape-based object class model for knowledge transfer. In ICCV, Oct. 2009.
[20] Z. Sun, Y. Chen, J. Qi, and J. Liu. Adaptive localization through transfer learning in indoor Wi-Fi environment. In Int'l Conf. on Machine Learning and Applications, 2008.
[21] A. Torralba, K. P. Murphy, and W. T. Freeman. Sharing visual features for
multiclass and multiview object detection. IEEE TPAMI, 29(5), May 2007.
[22] L. Torrey and J. Shavlik. Transfer learning. IGI Global, 2009.
[23] V. N. Vapnik. Estimation of Dependences Based on Empirical Data. Springer-Verlag, 1982.
[24] C. Wang and S. Mahadevan. Manifold alignment using Procrustes analysis. In Int'l Conf. on Machine Learning, 2008.
[25] P. Wang, C. Domeniconi, and J. Hu. Using Wikipedia for co-clustering based cross-domain text classification. In IEEE Int'l Conf. on Data Mining, 2008.
[26] X. Wang, C. Zhang, and Z. Zhang. Boosted multi-task learning for face verification with applications to web image and video search. In CVPR, 2009.
[27] J. Yang, R. Yan, and A. G. Hauptmann. Cross-domain video concept detection using adaptive SVMs. In ACM Multimedia, 2007.
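The rapid-retraining property emphasized above stems from the fact that phase-II of TaskTrAdaBoost only re-weights and selects from a fixed pool of source-trained weak classifiers, rather than training new ones. The following is a minimal sketch of that selection idea, not the authors' implementation: it assumes a pool of pre-trained classifiers (here, hypothetical 1-D threshold stumps), labels in {-1, +1}, and standard AdaBoost re-weighting. Each round costs one evaluation of every pooled classifier on the n_T target samples, matching the C_w O(M H n_T) figure quoted in Section 7.1.

```python
import math

def phase2_select(pool, X_T, y_T, M=10):
    """Sketch of a phase-II-style loop: boost over the small target set
    (X_T, y_T with labels in {-1, +1}), where each round picks the best
    weak classifier from a FIXED pool of source-trained classifiers
    instead of training a new one. Per round: |pool| evaluations on
    len(y_T) samples, i.e., O(M * H * n_T) overall."""
    n = len(y_T)
    w = [1.0 / n] * n                    # target sample weights
    ensemble = []                        # list of (alpha, classifier)
    for _ in range(M):
        # evaluate every pooled classifier under the current weights
        best_k, best_err = None, 0.5
        for k, h in enumerate(pool):
            err = sum(wi for wi, x, y in zip(w, X_T, y_T) if h(x) != y)
            if err < best_err:
                best_k, best_err = k, err
        if best_k is None:               # no pooled classifier beats chance
            break
        err = max(best_err, 1e-10)       # avoid log(1/0) on a perfect pick
        alpha = 0.5 * math.log((1.0 - err) / err)
        h = pool[best_k]
        # AdaBoost re-weighting: up-weight the misclassified target samples
        w = [wi * math.exp(-alpha * y * h(x)) for wi, x, y in zip(w, X_T, y_T)]
        s = sum(w)
        w = [wi / s for wi in w]
        ensemble.append((alpha, h))
    return ensemble

def predict(ensemble, x):
    score = sum(a * h(x) for a, h in ensemble)
    return 1 if score >= 0 else -1

# Toy usage: the "source" pool is three pre-trained 1-D threshold stumps.
pool = [lambda x, t=t: 1 if x > t else -1 for t in (-0.5, 0.0, 0.5)]
X_T = [-1.0, -0.2, 0.2, 1.0]
y_T = [-1, -1, 1, 1]
ens = phase2_select(pool, X_T, y_T, M=5)
print([predict(ens, x) for x in X_T])    # -> [-1, -1, 1, 1]
```

Because no weak learner is trained inside the loop, presenting a new target task only requires re-running this selection over the existing pool, which is why retraining over a new target is fast.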


More information

Concurrent Apriori Data Mining Algorithms

Concurrent Apriori Data Mining Algorithms Concurrent Apror Data Mnng Algorthms Vassl Halatchev Department of Electrcal Engneerng and Computer Scence York Unversty, Toronto October 8, 2015 Outlne Why t s mportant Introducton to Assocaton Rule Mnng

More information

Categorizing objects: of appearance

Categorizing objects: of appearance Categorzng objects: global and part-based models of appearance UT Austn Generc categorzaton problem 1 Challenges: robustness Realstc scenes are crowded, cluttered, have overlappng objects. Generc category

More information

6.854 Advanced Algorithms Petar Maymounkov Problem Set 11 (November 23, 2005) With: Benjamin Rossman, Oren Weimann, and Pouya Kheradpour

6.854 Advanced Algorithms Petar Maymounkov Problem Set 11 (November 23, 2005) With: Benjamin Rossman, Oren Weimann, and Pouya Kheradpour 6.854 Advanced Algorthms Petar Maymounkov Problem Set 11 (November 23, 2005) Wth: Benjamn Rossman, Oren Wemann, and Pouya Kheradpour Problem 1. We reduce vertex cover to MAX-SAT wth weghts, such that the

More information

Fast Sparse Gaussian Processes Learning for Man-Made Structure Classification

Fast Sparse Gaussian Processes Learning for Man-Made Structure Classification Fast Sparse Gaussan Processes Learnng for Man-Made Structure Classfcaton Hang Zhou Insttute for Vson Systems Engneerng, Dept Elec. & Comp. Syst. Eng. PO Box 35, Monash Unversty, Clayton, VIC 3800, Australa

More information

CAN COMPUTERS LEARN FASTER? Seyda Ertekin Computer Science & Engineering The Pennsylvania State University

CAN COMPUTERS LEARN FASTER? Seyda Ertekin Computer Science & Engineering The Pennsylvania State University CAN COMPUTERS LEARN FASTER? Seyda Ertekn Computer Scence & Engneerng The Pennsylvana State Unversty sertekn@cse.psu.edu ABSTRACT Ever snce computers were nvented, manknd wondered whether they mght be made

More information

Corner-Based Image Alignment using Pyramid Structure with Gradient Vector Similarity

Corner-Based Image Alignment using Pyramid Structure with Gradient Vector Similarity Journal of Sgnal and Informaton Processng, 013, 4, 114-119 do:10.436/jsp.013.43b00 Publshed Onlne August 013 (http://www.scrp.org/journal/jsp) Corner-Based Image Algnment usng Pyramd Structure wth Gradent

More information

The Study of Remote Sensing Image Classification Based on Support Vector Machine

The Study of Remote Sensing Image Classification Based on Support Vector Machine Sensors & Transducers 03 by IFSA http://www.sensorsportal.com The Study of Remote Sensng Image Classfcaton Based on Support Vector Machne, ZHANG Jan-Hua Key Research Insttute of Yellow Rver Cvlzaton and

More information

Classifying Acoustic Transient Signals Using Artificial Intelligence

Classifying Acoustic Transient Signals Using Artificial Intelligence Classfyng Acoustc Transent Sgnals Usng Artfcal Intellgence Steve Sutton, Unversty of North Carolna At Wlmngton (suttons@charter.net) Greg Huff, Unversty of North Carolna At Wlmngton (jgh7476@uncwl.edu)

More information

Deep Classification in Large-scale Text Hierarchies

Deep Classification in Large-scale Text Hierarchies Deep Classfcaton n Large-scale Text Herarches Gu-Rong Xue Dkan Xng Qang Yang 2 Yong Yu Dept. of Computer Scence and Engneerng Shangha Jao-Tong Unversty {grxue, dkxng, yyu}@apex.sjtu.edu.cn 2 Hong Kong

More information

430 IEEE TRANSACTIONS ON CYBERNETICS, VOL. 45, NO. 3, MARCH Boosting for Multi-Graph Classification

430 IEEE TRANSACTIONS ON CYBERNETICS, VOL. 45, NO. 3, MARCH Boosting for Multi-Graph Classification 430 IEEE TRANSACTIONS ON CYBERNETICS, VOL. 45, NO. 3, MARCH 2015 Boostng for Mult-Graph Classfcaton Ja Wu, Student Member, IEEE, Shru Pan, Xngquan Zhu, Senor Member, IEEE, and Zhhua Ca Abstract In ths

More information

Detection of hand grasping an object from complex background based on machine learning co-occurrence of local image feature

Detection of hand grasping an object from complex background based on machine learning co-occurrence of local image feature Detecton of hand graspng an object from complex background based on machne learnng co-occurrence of local mage feature Shnya Moroka, Yasuhro Hramoto, Nobutaka Shmada, Tadash Matsuo, Yoshak Shra Rtsumekan

More information

Incremental MQDF Learning for Writer Adaptive Handwriting Recognition 1

Incremental MQDF Learning for Writer Adaptive Handwriting Recognition 1 200 2th Internatonal Conference on Fronters n Handwrtng Recognton Incremental MQDF Learnng for Wrter Adaptve Handwrtng Recognton Ka Dng, Lanwen Jn * School of Electronc and Informaton Engneerng, South

More information

Machine Learning. Support Vector Machines. (contains material adapted from talks by Constantin F. Aliferis & Ioannis Tsamardinos, and Martin Law)

Machine Learning. Support Vector Machines. (contains material adapted from talks by Constantin F. Aliferis & Ioannis Tsamardinos, and Martin Law) Machne Learnng Support Vector Machnes (contans materal adapted from talks by Constantn F. Alfers & Ioanns Tsamardnos, and Martn Law) Bryan Pardo, Machne Learnng: EECS 349 Fall 2014 Support Vector Machnes

More information

R s s f. m y s. SPH3UW Unit 7.3 Spherical Concave Mirrors Page 1 of 12. Notes

R s s f. m y s. SPH3UW Unit 7.3 Spherical Concave Mirrors Page 1 of 12. Notes SPH3UW Unt 7.3 Sphercal Concave Mrrors Page 1 of 1 Notes Physcs Tool box Concave Mrror If the reflectng surface takes place on the nner surface of the sphercal shape so that the centre of the mrror bulges

More information

Simulation: Solving Dynamic Models ABE 5646 Week 11 Chapter 2, Spring 2010

Simulation: Solving Dynamic Models ABE 5646 Week 11 Chapter 2, Spring 2010 Smulaton: Solvng Dynamc Models ABE 5646 Week Chapter 2, Sprng 200 Week Descrpton Readng Materal Mar 5- Mar 9 Evaluatng [Crop] Models Comparng a model wth data - Graphcal, errors - Measures of agreement

More information

Object-Based Techniques for Image Retrieval

Object-Based Techniques for Image Retrieval 54 Zhang, Gao, & Luo Chapter VII Object-Based Technques for Image Retreval Y. J. Zhang, Tsnghua Unversty, Chna Y. Y. Gao, Tsnghua Unversty, Chna Y. Luo, Tsnghua Unversty, Chna ABSTRACT To overcome the

More information

Announcements. Supervised Learning

Announcements. Supervised Learning Announcements See Chapter 5 of Duda, Hart, and Stork. Tutoral by Burge lnked to on web page. Supervsed Learnng Classfcaton wth labeled eamples. Images vectors n hgh-d space. Supervsed Learnng Labeled eamples

More information

ISSN: International Journal of Engineering and Innovative Technology (IJEIT) Volume 1, Issue 4, April 2012

ISSN: International Journal of Engineering and Innovative Technology (IJEIT) Volume 1, Issue 4, April 2012 Performance Evoluton of Dfferent Codng Methods wth β - densty Decodng Usng Error Correctng Output Code Based on Multclass Classfcaton Devangn Dave, M. Samvatsar, P. K. Bhanoda Abstract A common way to

More information

CHAPTER 3 SEQUENTIAL MINIMAL OPTIMIZATION TRAINED SUPPORT VECTOR CLASSIFIER FOR CANCER PREDICTION

CHAPTER 3 SEQUENTIAL MINIMAL OPTIMIZATION TRAINED SUPPORT VECTOR CLASSIFIER FOR CANCER PREDICTION 48 CHAPTER 3 SEQUENTIAL MINIMAL OPTIMIZATION TRAINED SUPPORT VECTOR CLASSIFIER FOR CANCER PREDICTION 3.1 INTRODUCTION The raw mcroarray data s bascally an mage wth dfferent colors ndcatng hybrdzaton (Xue

More information