Optimizing for what matters: The Top Grasp Hypothesis
Daniel Kappler 1, Stefan Schaal 1,2 and Jeannette Bohg 1

Abstract: In this paper, we consider the problem of robotic grasping of objects when only partial and noisy sensor data of the environment is available. We are specifically interested in the problem of reliably selecting the best hypothesis from a whole set. This is commonly the case when trying to grasp an object for which we can only observe a partial point cloud from one viewpoint through noisy sensors. There will be many possible ways to successfully grasp this object, and even more which will fail. We propose a supervised learning method that is trained with a ranking loss. This explicitly encourages that the top-ranked training grasp in a hypothesis set is also positively labeled. We show how we adapt the standard ranking loss to work with data that has binary labels and explain the benefits of this formulation. Additionally, we show how we can efficiently optimize this loss with stochastic gradient descent. In quantitative experiments, we show that we can outperform previous models by a large margin.

I. INTRODUCTION

Grasping unknown objects from partial and noisy sensor data is still an open problem in the robotics community. For objects with a known polygonal mesh model, experience databases can be built offline and serve as a grasp look-up table once this object has been detected in the scene. In [19, 13, 8] it has been shown that in this case robust grasping and manipulation can be achieved by applying force control and exploiting constraints in the environment. However, transferring successful grasps between different objects of which only partial and noisy information is known remains a challenge. There are many supervised learning approaches towards grasping. The majority of those formulate grasping as a problem of classifying a grasp hypothesis as either stable or unstable. A grasp hypothesis in this context is usually a grasp preshape, i.e. a 6D pose of the gripper and the gripper joint configuration. Examples of such supervised methods include [12, 18, 16, 17, 20], to name just a few.
For a more comprehensive overview, we refer to Bohg et al. [1]. These approaches commonly use a learning method that returns some confidence value for each query grasp hypothesis. Given these scores, the grasp with the highest score is typically selected for grasp execution, if it is reachable. However, even though these methods select the best hypothesis of all candidates at query time, the underlying classification models have not been directly trained for this objective. Instead they are optimized for accurately predicting the binary labels of the entire training dataset.

(1 Autonomous Motion Department at the Max-Planck-Institute for Intelligent Systems, Tübingen, Germany. Email: first.lastname@tue.mpg.de. 2 Computational Learning and Motor Control lab at the University of Southern California, Los Angeles, CA, USA.)

While for some subsets of data points separating positives from negatives may be easy to achieve, it generally can be very hard to achieve this separation for all data points. This is particularly a problem when training on datasets with noisy labels or where the employed feature representation is not rich enough to carry all the necessary information for making a decision. Here, we argue that for grasping, we should be training models on subsets of data, where one subset may for instance represent all possible grasp hypotheses obtained from one viewpoint of the object. Furthermore, we should optimize an objective that rewards when the highest scoring training data point of such a set is also positive. For example, when considering a partial, segmented point cloud of an object, there exists a large set of potential grasps, most of which are not stable. The best scoring hypothesis within this set should correspond to a stable grasp. Such an objective is called a ranking loss. Thus far, only few grasp learning models in the literature consider this kind of objective. At first glance, this problem seems to default to standard classification for binary labeled data. In this paper, we introduce a ranking formulation for grasp stability prediction for binary labeled data.
There are three main differences of our problem formulation to typical ranking problems. First, our hypothesis set consists only of binary data, hence there is no inherent ranking between different examples other than the distinction between positive and negative hypotheses. Second, we want to optimize solely for the top-1 ranked hypothesis in a set of hypotheses, and we are not interested in the remaining order of hypotheses. Third, our resulting ranking score can also be interpreted as a score for classification, deciding whether or not the top-1 ranked hypothesis is a positive or negative one. We show that this formulation outperforms large-capacity models such as Convolutional Neural Networks (CNNs) and Random Decision Forests (RDFs) trained on the same grasping dataset but optimized with a binary classification objective. In the remainder of this paper, we review related work on ranking in general and for grasping in particular. This is followed by a discussion of classification versus ranking objectives. In Section IV the proposed ranking loss is described in detail. Details on how the model is optimized using Stochastic Gradient Descent (SGD) are given in Section V. This is followed by experiments in Section VI.

II. RELATED WORK

Many different problems such as information retrieval, optimizing click-through rates, and even multi-class classification problems can be formulated as ranking problems. The most common approach to learning how to rank is based on pair-wise classification. For instance, in order to rank documents, Herbrich et al. [6] proposed to use a hinge loss SVM formulation to learn regression on ordinal data. Another pair-wise formulation is based on a probabilistic loss, which can be optimized using gradient descent, and has been applied to information retrieval by Burges et al. [3], in connection with a neural network based function approximator. A common issue for pair-wise approaches is the biased data distribution, often violating the i.i.d. assumption. Cao et al. [4] addressed this issue with a list-based loss to learn to rank. Ranking data is a fundamental problem and has been applied to various sub-problems in different domains. Lehmann et al. [15] proposed to speed up object search by reducing the number of expensive classifier evaluations by learning how to rank sets of hypotheses. In robotics, ranking has been used to learn to select footholds based on terrain templates by Kalakrishnan et al. [11], enabling robust walking over rough terrain. Data-driven approaches for grasp stability prediction are commonly formulated as binary classification problems, often due to the nature of the provided data labels. There are however a few examples that employ ranking. For example, [7] iteratively improves a matching cost that is computed based on a library of labeled local shape templates. While the matching function does not change, the library is continuously extended and thereby the ranking of different grasp hypotheses changes over time. Finding the best fingertip placement on objects for grasping has been identified as a ranking problem by Le et al. [14]. The authors manually label the training data with three different classes (Bad, Good and Very Good) instead of two. As a learning method, they employ a ranking SVM that optimizes a measure that prefers better scores for the top grasp candidates. Jiang et al.
[9] present an extension to this work with a different representation of the grasp but otherwise the same SVM-based ranking method. Our proposed approach differs from this line of work by being able to exploit binary labeled training data points to optimize a ranking loss. We re-formulate the loss function such that the best ranked grasp hypothesis is also positively labeled and train a CNN with this loss.

III. GRASP STABILITY PREDICTION: CLASSIFICATION VERSUS RANKING

To the best of our knowledge, data-driven learning methods for grasp planning are almost exclusively formulated as classification problems of the form:

\min_\theta R(\theta) + \sum_{(x,y) \in D} L(F(x; \theta), y)   (1)

Here, the target of the function approximator F(x; \theta) is to predict the binary grasp stability y \in \{-1, 1\} for a feature representation x (e.g. 2D templates shown in Fig. 4), associated with a grasp preshape (6D pose and gripper joint configuration). The objective is to optimize the parameters \theta, regularized by R (e.g. \ell_1 to obtain a sparse set of parameters), to achieve the minimal loss L (e.g. the hinge loss \max(0, 1 - yF(x; \theta))) on the training dataset D, assuming that the test data distribution is similar to the training data distribution. We argue that for grasp planning, classification is a suboptimal objective. The canonical problem for grasp planning is to predict a successful grasp for a target object given an entire set of hypotheses. Such a hypothesis set can for example contain all possible grasp rectangles for a view of an object as provided by the Cornell dataset [16] or all possible templates extracted from a 3D point cloud as in [12]. Obtaining all possible successful grasps is not necessary, since the goal is that the robot succeeds in grasping the object with the first grasp attempt. Therefore, grasp stability prediction should be reformulated as a ranking problem, trying to robustly identify one successful grasp within each hypothesis set. Additionally it should provide a calibrated score to decide whether the best hypothesis is stable or not.
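To make the classification objective (Eq. 1) concrete, the following sketch scores hypotheses with a linear stand-in for F(x; \theta) and picks the top-scoring one at query time. The toy data, the linear model, and the \ell_1 weight are illustrative assumptions, not the paper's actual setup.

```python
def hinge(v):
    # hinge loss max(0, v)
    return max(0.0, v)

def f(theta, x):
    # linear stand-in for the function approximator F(x; theta) (an assumption)
    return sum(t * xi for t, xi in zip(theta, x))

def classification_objective(theta, data, reg=0.01):
    # Eq. 1: per-example hinge loss L plus an l1 regularizer R(theta)
    loss = sum(hinge(1.0 - y * f(theta, x)) for x, y in data)
    return loss + reg * sum(abs(t) for t in theta)

# At query time the highest-scoring reachable hypothesis is executed,
# even though training never optimized for that top-1 choice.
data = [([1.0, 0.5], 1), ([0.2, -1.0], -1), ([0.8, 0.1], 1)]
theta = [0.5, 0.5]
top1 = max(range(len(data)), key=lambda i: f(theta, data[i][0]))
```

Note the mismatch the section describes: the objective sums errors over all examples, while execution only cares about the argmax.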
The canonical pair-wise ranking problem for different hypothesis pairs (x, y) and (x', y'), formulated as a standard classification problem, is illustrated in Fig. 1b and defined as

\min_\theta R(\theta) + \sum_{(x,y)} \sum_{(x',y')} L(F(x; \theta) - F(x'; \theta), t(y, y'))   (2)

The main difference to the standard classification problem (Eq. 1) is the pair-wise classification and the pair-wise loss. Notice this optimization problem can be specialized to the ranking SVM formulation proposed in [10] by applying the appropriate max-margin loss L. As shown in Fig. 1b, the ranking problem as described in Eq. 2 is concerned with ordering examples from different hypothesis sets according to the loss. For typical grasp datasets, consisting of binary labeled grasp hypotheses, this ranking formulation would result in a solution similar to the binary classification problem (Eq. 1) up to a hypothesis set dependent scaling and offset. The scaling and offset are necessary since the ranking formulation is a relative problem. The remainder of this paper is concerned with a ranking formulation for binary hypothesis sets that allows top-1 prediction within the given hypothesis set as well as classification of that top-1 choice. We further propose a method to optimize such a problem formulation within the standard stochastic gradient descent optimization framework.

IV. TOP-1 RANKING

This paper addresses the problem of optimizing a function that predicts one possible successful grasp within any given hypothesis set, if the hypothesis set contains at least one stable grasp. In addition to that, the resulting score has to be discriminative to classify whether or not the best predicted hypothesis is positive (y = 1) or negative (y = -1). In our work, hypothesis sets only contain binary labeled grasps, meaning a grasp is either considered positive (stable) or
negative (unstable). We assume no additional label information for the data which would allow to further discriminate between different examples in a set, e.g. if one positive grasp is better than another positive grasp. A concrete example for hypothesis sets is the grasp database introduced in [12]. In this particular example, every partially observed object is associated with a point cloud and several labeled grasp templates, where the grasp template takes the role of the feature representation x and the binary labels y indicate a stable or unstable grasp. Thus in this setting, a hypothesis set contains all pairs (x, y) available for a particular object view. Every hypothesis set can either contain only positive examples, or only negative examples, or both. To simplify notation we introduce three different index sets: I+ refers to all sets with only positive examples, I− refers to all sets with at least one negative example, and I± refers to all sets with at least one positive and one negative example. Every hypothesis set is assigned to at least one index set. Hypothesis sets with positive and negative examples are assigned to both I± and I−.

Fig. 1: We illustrate the difference between the standard (a) max-margin classification problem and (b) pair-wise max-margin ranking problem. All symbols of the same shape are within the same hypothesis set. (a) Binary classification aims at separating these two sets. The magnitude of the error is indicated by the color saturation of the data samples where white means no error. Each set has its own color. The (b) ranking problem attempts to not only separate the 3 sets, but also maintains an order such that stars are always further to the top right than circles, and circles are further top right than squares. The resulting pair-wise classification problems illustrate the similarity of the ranking problem to the standard classification problem in (a).

In the following we re-formulate and adapt the general ranking problem (Eq. 2) to the top-1 grasp prediction problem. We use a max-margin formulation with a margin (t = 1)

l(t - yk),   (3)

both for classification (k = F(x; \theta)) and ranking (k = F(x; \theta) - F(x'; \theta)). Here, we use the squared hinge loss l(v) = \frac{1}{2}\max(0, v)^2, since it is differentiable everywhere, a property that has proven useful for stochastic gradient descent based neural network optimization [21]. Our proposed loss function is comprised of three parts L±(\theta), L+(\theta) and L−(\theta), operating on the previously introduced index sets I±, I+ and I−, respectively. The goal of the first part of our loss, L±(\theta), is to rank positive and negative hypotheses using a max-margin formulation, where x^- represents all negative and x^+ all positive hypotheses in the corresponding hypothesis sets in I±, and is given as:

L_{\pm}(\theta) = \sum_{I_{\pm}} \sum_{x^+} \Big[ l\big(1 - F(x^+; \theta)\big) + \sum_{x^-} l\big(1 - (F(x^+; \theta) - F(x^-; \theta))\big) \Big]

Notice, we obtain l(1 - (F(x^+; \theta) - F(x^-; \theta))) from Eq. 2, using Eq. 3 with t(y, y') = 1 if y is a positive and y' a negative hypothesis. Furthermore, we ensure that positive examples get a calibrated score by adding the max-margin formulation l(1 - F(x^+; \theta)) for positive examples. In the case of separable data we can rewrite L± to

L_{\pm}(\theta) = \sum_{I_{\pm}} \sum_{x^+} \Big[ l\big(1 - (F(x^+; \theta) - \max_{x^-} F(x^-; \theta))\big) + l\big(1 - F(x^+; \theta)\big) \Big]   (4)

If the data is separable, summing over all negative examples (as done in the initial L±(\theta)) will result in the same loss value as this max formulation. The second part of our loss, L+(\theta), operating on index set I+, ensures that the prediction scores are calibrated in the same manner as positive examples in I±, again by using the max-margin formulation:

L_{+}(\theta) = \sum_{I_{+}} \sum_{x^+} l\big(1 - F(x^+; \theta)\big)

The third component, L−(\theta), establishes that negative examples in the index set I− are separated from positive ones to ensure the overall calibration of the score, such that the final ranking score for a hypothesis set can be used for classification:

L_{-}(\theta) = \sum_{I_{-}} \sum_{x^-} l\big(1 + F(x^-; \theta)\big)

Finally, we obtain the joint ranking and classification loss formulation

\min_\theta \big[ L_{\pm}(\theta) + L_{+}(\theta) + L_{-}(\theta) \big]   (5)

If our binary labeled training data, organized in hypothesis sets, is perfectly separable, this formulation will result in the same solution as the standard max-margin classification problem (Eq. 1).
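The three loss components can be sketched in a few lines of Python, following Eq. 3 through Eq. 5 with the squared hinge l(v) = 1/2 max(0, v)^2. The set layout (lists of (score, label) pairs) and the toy values in the usage example are our own illustrative conventions.

```python
def sq_hinge(v):
    # squared hinge l(v) = 0.5 * max(0, v)^2, differentiable everywhere
    return 0.5 * max(0.0, v) ** 2

def joint_loss(hypothesis_sets):
    """Eq. 5 sketch: each set is a list of (score, label) pairs, label in {+1, -1}.
    Sets with both labels contribute the pair-wise ranking term; all positives
    are calibrated towards >= 1 and all negatives towards <= -1."""
    total = 0.0
    for hset in hypothesis_sets:
        pos = [s for s, y in hset if y == 1]
        neg = [s for s, y in hset if y == -1]
        for sp in pos:                        # sets in I+ and I(+-)
            total += sq_hinge(1.0 - sp)       # calibrate positives
            for sn in neg:                    # pair-wise term, only in I(+-)
                total += sq_hinge(1.0 - (sp - sn))
        for sn in neg:                        # sets in I-
            total += sq_hinge(1.0 + sn)       # calibrate negatives
    return total
```

A separated and calibrated set such as [(2.0, 1), (-2.0, -1)] incurs zero loss, while collapsing both scores to 0 activates all three terms.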
The pair-wise terms in Eq. 4 will vanish as soon as the two classes are perfectly separated. If the dataset is not separable, the pair-wise term will function as an additional loss on all positive examples within hypothesis sets for which the ranking loss cannot be fulfilled. This can be interpreted as a difference in importance of positive and negative misclassifications. However, this does not resolve the issue that the top-1 prediction might be a negative example; an illustration of that case is shown in Fig. 1a. The reason for misclassification might be the similarity to a positive example within a different hypothesis set. Hence, the perfect order/separation is still not achievable.
More concretely, let us assume that there exists a negative grasp which has an indistinguishable feature representation from several positive grasps in multiple sets. In this case multiple failure cases can occur. If this particular (negative) hypothesis is in the same hypothesis set as the indistinguishable positive hypotheses, the negative hypothesis can be picked at random. The reason for this is that the negative hypothesis achieves exactly the same score as the positive hypotheses and one hypothesis has to be selected based on this score. Another possibility is that this negative hypothesis is in a different hypothesis set than the indistinguishable positive hypotheses and no easy positive example exists for the function approximator F(x; \theta) in the hypothesis set containing the negative hypothesis. Thus, this negative hypothesis will achieve the highest score. In the following we present our approach to obtain a top-1 ranking problem despite the binary nature of the hypothesis sets. Since there are no label differences within the set of positive or negative hypotheses, we propose to use the difference induced by the function approximator itself. Thus, while optimizing the function approximator, the currently best positive and negative example, given the current function approximator prediction, is used for the pair-wise loss, resulting in:

\min_\theta \sum_{I_{\pm}} \Big[ l\big(1 - (\max_{x^+} F(x^+; \theta) - \max_{x^-} F(x^-; \theta))\big) + l\big(1 - \max_{x^+} F(x^+; \theta)\big) \Big] + \sum_{I_{+}} l\big(1 - \max_{x^+} F(x^+; \theta)\big) + \sum_{I_{-}} \sum_{x^-} l\big(1 + F(x^-; \theta)\big)   (6)

Fig. 2 shows an example why this simple change to the optimization objective does achieve the top-1 ranking property for binary datasets. Intuitively, our formulation does not penalize any prediction for positive examples except for the current best positive and negative one in each hypothesis set. The best examples are determined by the current ranking of the latest function approximator parameterization. This ranking is not optimized by an explicit supervised quantity, but rather reflects the difficulty for the function approximator to distinguish positive from negative hypotheses.
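A minimal sketch of the top-1 objective (Eq. 6): only the currently best positive and best negative of a set enter the pair-wise term, and only the best positive is calibrated. The (score, label) list layout and the numeric values are illustrative assumptions.

```python
def sq_hinge(v):
    # squared hinge l(v) = 0.5 * max(0, v)^2
    return 0.5 * max(0.0, v) ** 2

def top1_loss(hypothesis_sets):
    """Eq. 6 sketch: the two current maxima drive the pair-wise term,
    while every negative is still pushed below the -1 margin."""
    total = 0.0
    for hset in hypothesis_sets:
        pos = [s for s, y in hset if y == 1]
        neg = [s for s, y in hset if y == -1]
        if pos:
            total += sq_hinge(1.0 - max(pos))                    # calibrate best positive only
            if neg:
                total += sq_hinge(1.0 - (max(pos) - max(neg)))   # pair-wise on the two maxima
        for sn in neg:
            total += sq_hinge(1.0 + sn)                          # calibrate all negatives
    return total

# A weak positive (score -0.5), indistinguishable from negatives, costs nothing
# as long as another positive in the same set separates cleanly.
loss = top1_loss([[(2.0, 1), (-0.5, 1), (-2.0, -1)]])
```

Under the full loss of Eq. 5 the same set would be penalized for the weak positive; here the optimizer is free to ignore it.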
Hence, the function approximator has the ability to select one positive example, in each hypothesis set which contains at least one positive example, which is easy to separate from all negative examples. This change enables our formulation to ignore negative examples which are indistinguishable from positive hypotheses, as long as there exists at least one other positive hypothesis which is distinguishable. Notice that we do not select these positive examples; the optimization itself will determine these examples. Different learning methods for F(x; \theta) therefore might result in different top-1 candidates. This problem formulation enables automatic selection of positive top-1 examples which are easy to separate from negative examples. Indistinguishable examples under the implicit function approximator similarity measure, existing e.g. in different hypothesis sets, are not enforced to obtain a positive score any more. To be more concrete, this behavior is useful for positive hypotheses for which important information, e.g. the surface points of an object, is not available due to e.g. partial occlusion. In this scenario, the feature representation for the hypothesis might not contain enough information to distinguish this example from other negative examples. Using our ranking formulation (Eq. 6), the function approximator is not penalized if it assigns a low score to such examples, as long as there is another positive hypothesis in the set for which the feature representation contains enough information to separate this example from all negative ones. The pair-wise loss, solely applied to the two currently maximum examples of different class, can be interpreted as a virtual target for the positive example. Alternatively, the pair-wise loss can be seen as a ranking problem on exactly two hypotheses (highest scoring positive and negative), selected by the score of the function approximator. The optimization tries to increase the score of that particular positive example to outperform the best negative one by a fixed margin (Eq. 3 and 6). For each hypothesis set we have to solve at most two different problems.
For hypothesis sets in I± the pair-wise loss and negative calibration are optimized. For hypothesis sets in I+ the best positive example is calibrated, and for hypothesis sets in I− all negative examples are calibrated. Despite the simple nature of these problems, obtaining an efficient optimization of Eq. 6 is not straightforward, as discussed in the following section.

Fig. 2: This figure illustrates the proposed ranking objective applied to a single binary set of hypotheses. Squares represent negative examples and circles positive ones. The saturation of the color filling the shapes represents the error magnitude for each sample. The three dashed lines through zero represent the standard hinge loss. Notice that positive examples (circles) are not enforced to be separated but negative ones (squares) are. Since the current best hypothesis is a negative example, an additional classification problem for the best positive hypothesis is created, creating a virtual target higher than the current best negative example plus a margin. Arrows indicate the direction in which the optimization objective attempts to change the prediction scores.

V. EFFICIENT FIRST ORDER OPTIMIZATION

The naive problem formulation as proposed in Eq. 6 could be optimized with first order batch gradient descent. However, this would not allow us to use large-scale databases such as [12]. The standard approach to optimizing a loss of the type of Eq. 1 and Eq. 6 for large datasets is to use mini-batch stochastic gradient descent. This makes each optimization step independent of the total number of available datapoints. Current state-of-the-art approaches such as
CNNs, which can exploit large datasets due to the large number of open parameters, also follow this optimization scheme. Usually n datapoints (x, y) are sampled uniformly at random from the training dataset, constructing one mini-batch. For our proposed loss, every mini-batch has to contain all positive examples of a hypothesis set due to the max operation. Notice this restriction only concerns the positive examples. Using any subset of the negative examples which is already fulfilled would simply result in zero loss for the pair-wise terms. Thus the naive approach for our loss would be to sample a hypothesis set uniformly at random. All positive hypotheses of this set have to be in the mini-batch together with any subset of negative hypotheses. This process is continued until the mini-batch is filled with samples. This naive approach to constructing the mini-batches for stochastic gradient descent has two main drawbacks. First, the number of positive examples would put a lower bound on the mini-batch size. Second, the majority of the computation would result in no improvement, since only the largest positive and negative example will be affected. In the following we present our approach to overcome the limitations of the naive approach.

A. Pair-Wise Loss Relaxation

As pointed out before, the max operation in the pair-wise term of our ranking loss (Eq. 6) is the limiting factor to drawing individual samples from each hypothesis set. Thus, next we show how to address this issue such that we can use stochastic gradient descent effectively. Typical state-of-the-art methods for classification and regression such as (Convolutional) Neural Networks are global function approximators. Hence, every update of F(x; \theta) can affect the prediction of any other data sample. We assume that F(x; \theta) changes slowly for not affected values and more so for values for which gradients are applied. This is not a very restrictive assumption since we use stochastic gradient descent, which requires taking small steps to converge. Using this assumption we can exploit that \max_{x^-} F(x^-; \theta) within a hypothesis set is unlikely to change very frequently.
Thus, we propose to rewrite the pair-wise term as two max-margin classification problems with a hypothesis set dependent margin t:

\min_\theta \sum_{I_{\pm}} \Big[ l\big(t^+ - \max_{x^+} F(x^+; \theta)\big) + l\big(t^- + \max_{x^-} F(x^-; \theta)\big) + l\big(1 - \max_{x^+} F(x^+; \theta)\big) \Big] + \sum_{I_{+}} l\big(1 - \max_{x^+} F(x^+; \theta)\big) + \sum_{I_{-}} \sum_{x^-} l\big(1 + F(x^-; \theta)\big)   (7)

where t^+ = 1 + \max_{x^-} F(x^-; \theta) is computed for each hypothesis set, as well as t^- = 1 - \max_{x^+} F(x^+; \theta). The basic idea is to fix the maximum positive hypothesis for one hypothesis set to compute the corresponding margin for the negative hypothesis and vice versa. Instead of always evaluating the function approximator to obtain the true t, the last known prediction for every sample is used to update the estimates. This optimization problem will result in the same minimum as Eq. 6, if our assumption holds that the maximum hypothesis for a particular hypothesis set does not change frequently. Now it is possible to draw individual samples from each hypothesis set. Note however that the most informative examples are the best positive and negative examples. Other positive examples of a hypothesis set in I± do not contribute to the loss in Eq. 7. Thus, to improve the loss, the sample distribution over the hypotheses and hypothesis sets is not uniform but dependent on the loss and an additional term described in the following section.

B. Loss Optimization using Sampling

Random data sample selection is crucial for stochastic gradient descent based optimization. Yet, selecting data which most likely results in zero loss, and thus zero gradients, simply slows down the optimization convergence. Using the previously introduced ranking loss (Eq. 7), the problem with drawing sample hypotheses is to trade off the impact on the loss and the accuracy of the t estimation. The latter will ensure that the actual maximum of each hypothesis set is used to compute the loss and not an out-of-date estimate. Thus, we propose a heuristic to update the distribution for hypothesis sampling, which trades off the following two quantities: (i) the error given the current loss (Eq. 7) and (ii) the iterations since the last update of the function evaluation of each data sample.
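The split of the pair-wise term in Eq. 7 can be sketched as two ordinary squared-hinge problems with cached margins. The cached scores passed in here are hypothetical stand-ins for the last known predictions of each set's maxima.

```python
def sq_hinge(v):
    # squared hinge l(v) = 0.5 * max(0, v)^2
    return 0.5 * max(0.0, v) ** 2

def relaxed_pair_terms(best_pos, best_neg, cached_best_pos, cached_best_neg):
    """Eq. 7 sketch: t+ and t- come from cached (last known) predictions, so
    the positive and negative of a set can be sampled independently."""
    t_plus = 1.0 + cached_best_neg   # best positive must score above this
    t_minus = 1.0 - cached_best_pos  # zero loss once best_neg <= cached_best_pos - 1
    return sq_hinge(t_plus - best_pos) + sq_hinge(t_minus + best_neg)

# With accurate caches and a satisfied margin both terms vanish:
ok = relaxed_pair_terms(2.0, 0.0, 2.0, 0.0)
# With the margin violated, both split problems become active:
bad = relaxed_pair_terms(0.5, 0.0, 0.5, 0.0)
```

As the section notes, this matches the minimum of Eq. 6 only while the cached maxima stay (approximately) correct.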
More concretely, after every function approximator evaluation we will update the prediction for the corresponding hypothesis and record the iteration when the prediction was performed. For all hypothesis sets for which a hypothesis prediction was updated, the estimates for the corresponding t^+ and t^- are updated, and the loss-based error for the hypothesis is updated. Notice, almost all hypotheses in a set have zero loss, since only the negatives and the maximum positive hypothesis are strictly enforced. If we normalize the error per hypothesis with the total error over all hypotheses, we obtain a distribution. Sampling hypotheses from this distribution will solely focus on improving the loss under the current t^+ and t^- estimates. Yet, due to the assumed global nature of the function approximator, we have to ensure that these estimates are still true. Therefore, we augment this error with an artificial error term that captures the number of iterations since the last update of a data point. It is of the following form:

e(c, u; o, b) = \exp(-o + (c - u)/b)   (8)

where c is the current iteration, u is the last update iteration of the example, o a trade-off parameter to determine the base influence of not evaluating, and b determines how fast the influence grows.
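The sampling heuristic can be sketched as follows. The sign convention in the staleness term follows our reading of Eq. 8 (base influence exp(-o) right after an update, growing with the iterations since), and the parameter values o = 2 and b = 50 are illustrative assumptions.

```python
import math

def staleness_error(c, u, o=2.0, b=50.0):
    # Eq. 8 sketch: c = current iteration, u = last update iteration,
    # o = base influence of not evaluating, b = how fast the influence grows
    return math.exp(-o + (c - u) / b)

def sampling_distribution(loss_errors, last_updates, current_iter):
    # per-hypothesis weight = loss-based error + staleness term, normalized
    weights = [e + staleness_error(current_iter, u)
               for e, u in zip(loss_errors, last_updates)]
    total = sum(weights)
    return [w / total for w in weights]

# A hypothesis that violates the loss dominates the distribution, while a
# long-stale zero-loss hypothesis gradually regains sampling probability.
p = sampling_distribution([1.0, 0.0, 0.0], [100, 100, 0], current_iter=100)
```

Samples for the mini-batch would then be drawn from p without replacement, as described for the optimization loop.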
Fig. 3: This figure illustrates the general optimization loop: sampling a mini-batch, performing one function approximator update step, feeding back the latest prediction values and updating the error distribution. We show two exemplary sets of hypotheses; the one on the left contains positive and negative examples and the one on the right only negative ones. The gray value of the computed error distribution signals the importance of this sample for the mini-batch sampling. Notice how the error due to the loss, indicated in red and green, and the time since the last update affect the error distribution.

The error distribution is normalized across all hypothesis sets and samples are drawn without replacement from the joint distribution. Finally, after each optimization iteration the hypothesis predictions and loss errors are updated as previously described. In addition to that, we add the artificial iteration dependent error term (Eq. 8) to the hypothesis error. The overall error over all hypotheses is normalized to get the discrete distribution from which we draw n samples (without replacement) to fill the new mini-batch. This means hypotheses which have low influence on the loss are sampled very infrequently, basically not until Eq. 8 increases to a similar error magnitude as the maximum loss violating hypotheses. The maximum positive and negative hypothesis per hypothesis set are sampled more frequently if they do not fulfill the ranking loss. Fig. 3 illustrates the optimization loop for our proposed loss and mini-batch sampling.

VI. EXPERIMENTS

A. Dataset

For evaluation we use a large scale dataset [12] which has been generated in OpenRave [5] by simulating numerous grasps on each of more than 700 distinct object mesh models. This dataset is split into 4 different subsets: a toy dataset containing only bottles, and three diverse sets of small, medium, and large objects.
For our experiments we use the physics metric proposed in [12] to automatically evaluate and label all the grasps. We binarize the dataset based on this metric (y = 1 if p > 0.9; y = -1 if p <= 0.9) with the same threshold as used for the evaluation within [12]. In addition to the grasps, the dataset also contains simulated point clouds that are reconstructed from multiple viewing angles distributed on a sphere around the object centroid. From each point cloud, a set of local shape templates is extracted that essentially encode object shape as seen from the hand (Fig. 4). Apart from object surface information, a template also contains information about free and occluded space. Thus a template can be interpreted as an image with 3 color channels. The first channel represents the surface points of the object projected onto the plane spanned by the surface normal. The second channel represents the occluded space which is computed based on the viewpoint and the surface points. Points are again projected onto the same surface plane. Cells in the grid on the surface plane which are neither filled by surface points nor by occlusion points are marked as free space. Each template is linked to exactly two grasp poses that only differ in the initial distance between the palm of the hand and the object surface (the stand-off). The surface normal of a template is equal to the approach vector of the hand. One grasp can however be linked to multiple templates as its associated object surface normal may be visible from multiple viewpoints. An example template representation is shown in Fig. 4. This figure also visualizes different 3D versions of grasp templates for one grasp. When the angle between the viewpoint and the surface normal is too big, the majority of the local shape information cannot be captured by the template representation, thus it is difficult for a learning method to discriminate these examples. The feature representation simply does not contain enough information to separate positive from negative examples under such conditions.

Fig. 4: Variation of the local shape representation given different viewpoints. The grasp for each of these templates is the same, i.e. approach direction along the cyan line and fixed wrist roll. The viewpoint is indicated by the pink line. Each column shows the same template from two different directions. (Top, "From Grasp") Template viewed from the approach direction. (Bottom, "From Side") Template viewed from the side. The occlusion area is the most affected by the varying viewpoint. Figure adopted from our previous work [12].

All templates extracted from one point cloud that are within a maximum angle between the surface normal of the object and the viewing point of the sensor frame are grouped into one hypothesis set. Similar to [12], we reject templates with less than 30 surface points in the template.
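The grouping and filtering just described might look as follows in code. The dictionary-based template fields and the 60-degree cutoff are hypothetical; only the 30-surface-point rejection rule comes from the text.

```python
def build_hypothesis_sets(templates, max_angle_deg=60.0, min_surface_points=30):
    """Group templates per point cloud (viewpoint) into hypothesis sets.
    The 60-degree cutoff and the field names are assumptions; the paper only
    states that templates beyond a maximum normal-to-viewpoint angle, or with
    fewer than 30 surface points, are rejected."""
    sets = {}
    for t in templates:
        if t["n_surface_points"] < min_surface_points:
            continue  # too little surface information in the template
        if t["normal_to_view_angle_deg"] > max_angle_deg:
            continue  # local shape mostly occluded from this viewpoint
        sets.setdefault(t["viewpoint"], []).append(t)
    return sets

# Toy input: only the first template survives; one is rejected per filter rule.
templates = [
    {"viewpoint": 0, "normal_to_view_angle_deg": 15.0, "n_surface_points": 120},
    {"viewpoint": 0, "normal_to_view_angle_deg": 75.0, "n_surface_points": 120},
    {"viewpoint": 1, "normal_to_view_angle_deg": 10.0, "n_surface_points": 12},
]
sets = build_hypothesis_sets(templates)
```

Each resulting set then plays the role of one hypothesis set in the ranking loss.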
B. Baselines

We compare the proposed method to two baseline models that are optimized for classification accuracy. The first one was already proposed in [12]. It is a simple CNN that consists of one convolution layer, a subsequent pooling layer and 3 fully connected ones, using a rectified linear unit as nonlinearity. The last nonlinearity is a sigmoid function to map to the binary grasp label. As input, it uses the same local shape template representation as described above. As a second baseline, we use a Random Decision Forest that is trained to perform binary classification on this dataset [2]. As input to the model, it uses a set of randomly sampled probes for each information channel of the shape template and stacks them together into one feature vector. Both baseline models are very similar in classification performance.

C. Evaluation

A common use case in robotics is to select the best grasp for a given point cloud. Due to the nature of the dataset, the point cloud is already segmented to contain only points from the target object. In future work, we want to analyze how precise the target object point cloud segmentation has to be. In Fig. 5 we evaluate the accuracy of the top-1 predictions. In this case, a true positive is a prediction for a hypothesis set from an object point cloud for which the highest scored hypothesis is classified positive and the ground truth label is positive. A true negative in this experiment is a prediction for a hypothesis set for which the highest ranked hypothesis is classified negative and there is no positively labeled hypothesis in this set. The scalar threshold for the classification prediction, based on the ranking score, is obtained by cross validation. We compare the performance of the proposed method with the two classification baselines. The results show that the proposed model trained on a ranking objective outperforms the two baselines by a large margin. For the dataset containing large objects, the performance is more than doubled. For the toy dataset of bottles the improvement is moderate.
This is probably due to the simplicity of this subset of data where positive samples can be easily separated from negative ones. Notice that the datasets are highly unbalanced, meaning that the majority of the grasp hypotheses across all hypothesis sets are negative. The results on the other datasets suggest that it is much harder to perfectly separate positive from negative data, while it is easy to ensure that the top-ranking one refers to a stable grasp. This can be due to remaining label noise in the dataset, where similarly looking templates can be either positive or negative. In Fig. 6 we illustrate how our proposed sampling procedure (Section V-B) affects the sample usage for optimization, focusing on the difficult examples the most. This supports our hypothesis that during the course of the optimization of our proposed loss, the majority of the hypotheses are easy to address, resulting in low errors. However, every example is revisited due to the suggested heuristic to ensure that, despite changes to the parameters of the learning method, the error on these examples is still low.

Fig. 5: We report the data ratio (all positive grasps divided by all grasps) for each test dataset and the top-1 score on the test dataset obtained by three different methods (Forest, CNN, OURS). The top-1 accuracy indicates the ratio of point clouds in the test dataset for which the best scoring template was classified positive and also had a positive ground truth label, or the best scoring template was classified negative and there was no positive ground truth example in the set. Results are reported per object group (bottles, small, medium, and large) and for gripper stand-off 0 from the object surface before closing the fingers. The proposed model that is trained on a ranking objective outperforms the baselines by a large margin. For large objects, the performance has more than doubled.

Fig. 6: This figure shows the influence of the error-distribution based sampling on the optimization.
The minimal update count (blue) illustrates that, due to the error component based on the iterations, all data samples are revisited over time. However, the maximum update count (red) shows that the optimization is mostly focusing on the difficult hypotheses.

VII. DISCUSSION AND CONCLUSION

In this paper we have proposed to treat grasp prediction on sets of hypotheses as a ranking problem. An important distinction to other ranking approaches is that our method works for binary classification datasets, as long as the dataset is organized in sets of hypotheses, which is the typical case for grasp prediction. The experimental results support our hypothesis that the proposed ranking problem formulation significantly improves top-1 grasp stability prediction, since difficult and ambiguous examples can simply be ignored by the function approximator. Another advantage of this formulation is that ambiguous and difficult examples are determined automatically by the optimization process. This is achieved by using the ranking of the function approximator at the particular moment of optimization. We believe that top-1 prediction is a better objective for grasp prediction, since perfect classification of all possible grasp hypotheses for a particular scene is unrealistic due to uncertainty in sensing and partial information in general. Even if the grasp predictor is trained with a classification objective, one stable grasp has to be selected. In this case, most often the distance to the decision border of the classifier is used as a proxy to achieve a ranking within the positively predicted grasps. In this work we have shown that this proxy results in worse performance
compared to a grasp predictor which was optimized for ranking. Conceptually, the biggest drawback of the proposed approach is that we are solely optimizing for the top-1 grasp hypothesis. In the case that this hypothesis is not feasible due to e.g. kinematic or environmental constraints, the robot has to alter its position to either get a different view or make this grasp reachable, since no alternative prediction has a meaning for this set. Therefore, we believe an interesting extension of this approach is to optimize for a top-n ranking, as long as the top-1 hypothesis performance is not affected. Another interesting extension to this work is to replace the heuristic for the hypothesis sampling for mini-batch construction by a stochastic non-stationary multi-armed bandit formulation. Such a formulation could further improve the optimization convergence.

REFERENCES

[1] J. Bohg, A. Morales, T. Asfour, and D. Kragic. Data-driven grasp synthesis: A survey. IEEE Transactions on Robotics.
[2] J. Bohg, D. Kappler, and S. Schaal. Exemplar-based prediction of global object shape from local shape similarity. In IEEE Int. Conf. on Robotics and Automation (ICRA).
[3] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to rank using gradient descent. In Proc. of Int. Conf. on Machine Learning (ICML). ACM.
[4] Z. Cao, T. Qin, T.-Y. Liu, M.-F. Tsai, and H. Li. Learning to rank: from pairwise approach to listwise approach. In Proc. of Int. Conf. on Machine Learning (ICML). ACM.
[5] R. Diankov. Automated Construction of Robotic Manipulation Programs. PhD thesis, Carnegie Mellon University, Robotics Institute, August.
[6] R. Herbrich, T. Graepel, and K. Obermayer. Support vector learning for ordinal regression.
[7] A. Herzog, P. Pastor, M. Kalakrishnan, L. Righetti, J. Bohg, T. Asfour, and S. Schaal. Learning of grasp selection based on shape-templates. Autonomous Robots.
[8] N. Hudson, T. Howard, J. Ma, A. Jain, M. Bajracharya, S. Myint, C. Kuo, L. Matthies, P. Backes, P. Hebert, T. J. Fuchs, and J. W. Burdick.
End-to-end dexterous manipulation with deliberate interactive estimation. In IEEE Int. Conf. on Robotics and Automation (ICRA).
[9] Y. Jiang, S. Moseson, and A. Saxena. Efficient grasping from RGBD images: Learning using a new rectangle representation. In IEEE Int. Conf. on Robotics and Automation (ICRA).
[10] T. Joachims. Optimizing search engines using clickthrough data. In Int. Conf. on Knowledge Discovery and Data Mining (ACM).
[11] M. Kalakrishnan, J. Buchli, P. Pastor, and S. Schaal. Learning locomotion over rough terrain using terrain templates. In IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS).
[12] D. Kappler, J. Bohg, and S. Schaal. Leveraging big data for grasp planning. In IEEE Int. Conf. on Robotics and Automation (ICRA).
[13] M. Kazemi, J. Valois, J. A. Bagnell, and N. S. Pollard. Robust object grasping using force compliant motion primitives. In Robotics: Science and Systems VIII.
[14] Q. V. Le, D. Kamm, A. F. Kara, and A. Y. Ng. Learning to grasp objects with multiple contact points. In IEEE Int. Conf. on Robotics and Automation (ICRA).
[15] A. D. Lehmann, P. V. Gehler, and L. J. Van Gool. Branch&rank: Non-linear object detection. In BMVC.
[16] I. Lenz, H. Lee, and A. Saxena. Deep learning for detecting robotic grasps. IJRR.
[17] R. Pelossof, A. Miller, P. Allen, and T. Jebara. An SVM learning approach to robotic grasping. In IEEE Int. Conf. on Robotics and Automation (ICRA).
[18] J. Redmon and A. Angelova. Real-time grasp detection using convolutional neural networks. In IEEE Int. Conf. on Robotics and Automation (ICRA).
[19] L. Righetti, M. Kalakrishnan, P. Pastor, J. Binney, J. Kelly, R. Voorhies, G. S. Sukhatme, and S. Schaal. An autonomous manipulation system based on force control and optimization. Autonomous Robots.
[20] A. Saxena, J. Driemeyer, and A. Y. Ng. Robotic grasping of novel objects using vision. The Int. Jour. of Robotics Research (IJRR).
[21] Y. Tang. Deep learning using linear support vector machines. arXiv preprint, 2013.
More informationClassifier Swarms for Human Detection in Infrared Imagery
Classfer Swarms for Human Detecton n Infrared Imagery Yur Owechko, Swarup Medasan, and Narayan Srnvasa HRL Laboratores, LLC 3011 Malbu Canyon Road, Malbu, CA 90265 {owechko, smedasan, nsrnvasa}@hrl.com
More informationAdaptive Transfer Learning
Adaptve Transfer Learnng Bn Cao, Snno Jaln Pan, Yu Zhang, Dt-Yan Yeung, Qang Yang Hong Kong Unversty of Scence and Technology Clear Water Bay, Kowloon, Hong Kong {caobn,snnopan,zhangyu,dyyeung,qyang}@cse.ust.hk
More informationGeneralized Team Draft Interleaving
Generalzed Team Draft Interleavng Eugene Khartonov,2, Crag Macdonald 2, Pavel Serdyukov, Iadh Ouns 2 Yandex, Russa 2 Unversty of Glasgow, UK {khartonov, pavser}@yandex-team.ru 2 {crag.macdonald, adh.ouns}@glasgow.ac.uk
More informationCombination of Local Multiple Patterns and Exponential Discriminant Analysis for Facial Recognition
Sensors & ransducers 203 by IFSA http://.sensorsportal.com Combnaton of Local Multple Patterns and Exponental Dscrmnant Analyss for Facal Recognton, 2 Lfang Zhou, 2 Bn Fang, 3 Wesheng L, 3 Ldou Wang College
More information3. CR parameters and Multi-Objective Fitness Function
3 CR parameters and Mult-objectve Ftness Functon 41 3. CR parameters and Mult-Objectve Ftness Functon 3.1. Introducton Cogntve rados dynamcally confgure the wreless communcaton system, whch takes beneft
More informationIncremental Learning with Support Vector Machines and Fuzzy Set Theory
The 25th Workshop on Combnatoral Mathematcs and Computaton Theory Incremental Learnng wth Support Vector Machnes and Fuzzy Set Theory Yu-Mng Chuang 1 and Cha-Hwa Ln 2* 1 Department of Computer Scence and
More informationQuality Improvement Algorithm for Tetrahedral Mesh Based on Optimal Delaunay Triangulation
Intellgent Informaton Management, 013, 5, 191-195 Publshed Onlne November 013 (http://www.scrp.org/journal/m) http://dx.do.org/10.36/m.013.5601 Qualty Improvement Algorthm for Tetrahedral Mesh Based on
More informationA New Approach For the Ranking of Fuzzy Sets With Different Heights
New pproach For the ankng of Fuzzy Sets Wth Dfferent Heghts Pushpnder Sngh School of Mathematcs Computer pplcatons Thapar Unversty, Patala-7 00 Inda pushpndersnl@gmalcom STCT ankng of fuzzy sets plays
More information