Support Vector Machine Algorithm applied to Industrial Robot Error Recovery


DEGREE PROJECT IN COMPUTER SCIENCE, SECOND LEVEL
STOCKHOLM, SWEDEN 2015

Support Vector Machine Algorithm applied to Industrial Robot Error Recovery

CIDNEY LAU

KTH ROYAL INSTITUTE OF TECHNOLOGY
SCHOOL OF COMPUTER SCIENCE AND COMMUNICATION (CSC)

Support Vector Machine Algorithm applied to Industrial Robot Error Recovery

Title in Swedish: Support Vector Machine-algoritm tillämpad inom felhantering på industrirobotar
Author: Cidney Lau
E-mail:
Principal: ABB Shanghai Ltd.
Examiner: Prof. Anders Lansner
Supervisor: Prof. Örjan Ekeberg
Date:
Master of Science in Engineering, Electrical Engineering; degree project in Computer Science

Abstract (English)

A Machine Learning approach to error recovery in an industrial robot for the plastic mould industry is proposed in this master thesis project. The goal was to improve the present error recovery method by providing the system with a learning algorithm instead of using traditional algorithm-based control. The chosen method was the Support Vector Machine (SVM), due to its robustness and good generalization performance in real-world applications. Furthermore, SVM generates good classifiers even with a minimal number of training examples. In production, there will be no need for a human operator to train the SVM with hundreds or thousands of training examples to achieve good generalization. The advantage of SVM is that good accuracy can be achieved with only a couple of training examples, provided that the training examples are well designed. First, the proposed algorithm was evaluated experimentally. The experiments measured classification performance on training examples, a hand-coded data set created with defined input and output signals. Second, the results from the experiments were tested in a simulated environment. Using only a few training examples, the SVM reached perfect performance. In conclusion, SVM is a good tool for classification and a suitable method for error recovery on industrial robots for the plastic mould industry.

Abstract (Swedish)

A machine learning strategy for error handling in industrial robots in the plastic mould industry is presented in this degree project. The goal was to improve the current error handling by applying a learning algorithm instead of the robot's traditional pre-programmed system. The chosen method is the Support Vector Machine (SVM), as SVM is a robust method that gives good performance in real-world applications. SVM generates good classifiers even with a minimal number of training examples. The advantage of SVM is that good precision can be achieved with only a couple of training examples, provided that the training examples are well designed. This means that operators in production do not need to train the SVM with hundreds or thousands of training examples to achieve good generalization. In this project the SVM method was evaluated experimentally and then tested in a simulation program. The results showed that the SVM method gave perfect precision using only a small amount of training data. One conclusion of this study is that SVM is a good method for classification and suitable for error handling in industrial robots in the plastics industry.

Contents

1 Introduction
2 Background
   ABB RobotWare Plastic Mould (RWPM)
3 Methods
   3.1 Background theory: Support Vector Machine (SVM)
      Optimal hyperplane for linearly separable patterns
      Optimal hyperplane for linearly nonseparable patterns
      Optimal hyperplane for nonlinear patterns
      Inner-product kernels
      The multi-class SVM classification problem
   3.2 Implementations
      Solving the Quadratic Problems
   3.3 Training sets
      Output data
      Input data
      Applied rules for the error recovery
4 Experiments and Results
   4.1 Linear kernel
   4.2 Polynomial kernel
   4.3 Re-training on misjudged examples
5 Discussion
   5.1 Future research
      Implementation of the proposed error recovery tool based on SVM
6 Conclusions
Acknowledgments
References

1 Introduction

The first industrial robot was constructed by Joseph Engelberger in 1961. Therefore, the year 1961 can be considered the beginning of industrial robotics [1]. Industrial robots have been developing and progressing ever since, and robots have replaced humans in many automated manufacturing systems, primarily in those that are dangerous to humans and harmful to health. Productivity increased when higher regularity and accuracy were introduced. For example, in recent years car manufacturing has been automated and fully robotized, from the initial stage of forging, through engine manufacture, to the assembly of parts into the final product. The number of industrial robots is presently estimated at one million, and one third of them are made in Japan. In the last decade we have witnessed a rapid development of robots, and this development will continue [2, 3, 4].

The industrial robot is based on hand-coded traditional algorithms, and industrial robots are rarely associated with intelligence [5]. The reason is that a robot-based manufacturing system is inherently complex: the control and coordination of industrial robots demand strict attention to detail and reliability [6]. However, robot-based manufacturing systems can today be logically correct and still fail under abnormal conditions. Traditional algorithm-based control of manufacturing systems has no problem dealing with well-ordered, highly structured environments, and it has also shown a good capability for handling most abnormal conditions involving many details. But when it comes to dynamic systems, or to incorporating a 'higher level' understanding of what is desired of the system, traditional algorithm-based control has limitations: it cannot ensure that the environment is predictable enough to function reliably [7, 8]. That is why many researchers attempt to address the limitations of traditional algorithm-based control through various Machine Learning techniques [7].

Machine Learning is the study of pattern recognition and computational learning theory. Machine Learning makes computers act without strictly following static program instructions. It is more flexible than an expert system, since the learning approach builds links between different ideas and meanings from training data. Machine Learning has the capacity to continuously self-improve and thereby offers increased efficiency and effectiveness. The learning approach can solve complex tasks in logistics applications, and related articles conclude that it is feasible for real-world applications [6, 9].

One goal of robotics research is to construct robust and reliable robot systems that are able to handle errors arising from abnormal operating conditions. Error handling is becoming increasingly important in industrial robotics environments. Traditional algorithm-based controlled systems are inflexible and do not respond well to 'errors' that occur during production. When faced with an unexpected event or error condition, the system shuts down and starts setting off alarms and lights. An 'operator' then attends, diagnoses what has failed, and takes corrective action appropriate to the situation. Researchers have begun exploring the use of learning systems to incorporate automated diagnosis into today's systems and, in some cases, to recommend the corrective actions to be taken by the operator. However, as systems become more complex and more capable, operators have less experience dealing with errors and troubleshooting, and they become less qualified to take corrective actions. In the ultimate situation, operators become little more than people assigned to read the right manual when an error light turns on and then push the right button. It is at this point that the system itself should be given the responsibility and authority to take corrective action on its own. To obtain this ability, the systems must be provided with on-line automated diagnostics and error recovery capabilities [7, 10].

In computer systems, errors can be defined as either component errors or design errors, but in the robot assembly world there is a third type: external errors. External errors in industrial robots have multiple causes, and the error situations are hard to predict in advance. They include external interference with processes or components, and unexpected events such as breakages and jammed parts [11].

The field of error handling is often divided into three subfields:

Detection: techniques for (or the process of) observing the actual state of the controlled system and comparing it with specifications in order to find discrepancies as early as possible.
Diagnosis: techniques for finding the original fault which caused the error.
Recovery: applying the proper corrective actions in order to prevent a possible future error or to reach an error-free state.

In each of these subfields there are several principles for how to achieve these objectives, as well as different methods for representing the information needed [12].

The most widely used technique for error recovery in industrial robots is known as backward error recovery. Backward error recovery finds a previous error-free state of the system, returns there, and undoes what has been done. However, despite the attractive simplicity of these methods, their fixed response often proves inappropriate in robotic applications. Backward recovery is inflexible: it assumes that processes are reversible, that objects are recoverable, and that the system has full control over a well-defined environment. These assumptions do not hold in robotics [11, 12].

The international company ABB provides many kinds of industrial robots for today's industries. When using an ABB robot for e.g. machine tending, some error cases are complicated to handle. Currently, the error recovery for ABB robots in machine tending, especially for plastic injection molding, cannot be customized. The error recovery for the ABB plastic molding robots moves the robot back to the home position (start position) safely and as soon as possible whenever an error occurs; presently, all errors are handled with the same rule. According to the customers' requests, this error recovery solution is not always enough. In the real world, customers need more flexibility and more options to cover all possible situations. That is why an improvement of the existing error recovery method is valuable, but can this problem be solved with machine learning or other algorithms and approaches?

Many researchers have focused on implementing different Machine Learning methods and Artificial Neural Network (ANN) methods in real-world applications and industrial robots. Support vector machines (SVM), a Machine Learning algorithm, and artificial neural networks (ANN) have been used in applications such as bearing fault detection, breast cancer cell detection, drug classification, image retrieval, identifying students with learning disabilities, modeling a microwave transmitter, protein fold recognition, signature recognition and textile color classification. These investigations indicate that the SVM algorithm will generally perform better than the ANN. However, there are exceptions when the user has specific knowledge of the application and the available data; then SVM is outperformed by ANN [4].

SVM was introduced by Vapnik and Chervonenkis in 1995 [3, 4, 13]. SVM seeks to determine a linear separator between binary data classes. The optimal position of the separating plane is the one where the margin between the plane and the data points is maximized. This concept is generally referred to as a maximal margin classifier, and its strength lies in determining a good general solution to the classification problem without overfitting the data [2, 14].

Another article shows that the Extreme Learning Machine (ELM) has poorer generalization ability than the SVM when the size of the training set is small, while it has the potential to yield generalization behavior as good as the SVM when the training set becomes large. The possible reason is the presence of overfitting in the training process of the ELM. The ELM also has superior computational speed compared with the SVM, and this superiority increases drastically as the size of the training set grows. On the other hand, the SVM shows very strong learning ability while avoiding overfitting, as well as strong generalization performance compared to the ELM [13]. The rapid development of SVM in statistical learning theory has encouraged researchers to apply SVM in various fields. It has been widely used for text classification, pattern recognition, fault detection, etc. [7, 15].

The main objective of this project was to explore an approach entirely based on Machine Learning methods for industrial robots, in the hope of improving production. The goal of this research was to apply a Machine Learning technique to error handling for the ABB robots in the plastic molding industry and to effectively train the robot to classify the most correct solutions according to the customer's requirements. Furthermore, this learning approach was investigated and evaluated to find out whether it is suitable.

Outline

The background knowledge for the ABB robot used in this project is introduced in Chapter 2. In Chapter 3, a method is proposed with a justified reason; furthermore, the basic concepts of the background theory are explained and a detailed section on the implementation of the proposed method is given. Experiments and results achieved using the Machine Learning technique Support Vector Machine (SVM) are found in Chapter 4. The discussion and proposed tools for future production are presented in Chapter 5. Finally, conclusions are drawn in Chapter 6.

Definitions and Abbreviations

AI Artificial Intelligence
ANN Artificial Neural Network
ELM Extreme Learning Machine
ERM Empirical Risk Minimization
KNN K-nearest neighbors
RWPM RobotWare Plastic Mould
SMO Sequential Minimal Optimization
SRM Structural Risk Minimization
SVM Support Vector Machine

2 Background

ABB RobotWare Plastic Mould (RWPM)

ABB is a leading supplier of industrial robots, modular manufacturing systems and service. ABB has installed more than 250,000 robots worldwide. ABB RobotWare Plastic Mould (RWPM) is robot software for machine tending, especially for injection molding. It provides an easy-to-use interface for programming and production, including the safe Home Run, user authorization, production statistics and an event log, as well as a standardized and structured way of machine tending programming.

The error recovery solution today is to perform a safe Home Run when an error occurs. The secure and automatic Home Run system takes the robot safely home from any situation where it has stopped, without any need for the operator to jog the robot in difficult areas. However, the safe Home Run alone is not always enough as an error recovery solution, according to the customers. In real-world applications, customers need more options to cover all possible situations. Flexibility is a big issue here, and that is why more solutions for error handling are required. In this project the ABB six-axis robot for RWPM was used to test the Machine Learning technique for improving the current error recovery system in industrial robots.

Figure 1. ABB six-axis robot for RobotWare Plastic Mould.

Description of the robot's environment

Every station in RWPM has its own set of paths, and a station can have a maximum of six paths. A path consists of a maximum of ten points, and each path has a number of move-in points and a number of move-out points. A path has an Entry point and a Target point. The Entry point is placed outside the machine; the main path is then used to move in to the Target point inside the machine. From the Target point there can be two sub paths, path one and path two, see Figure 2. A more detailed description of the different definitions in RWPM is given below [16].

Figure 2. Illustration of the different stations and the robot paths (Home Station, Scrap Station, User Stations 1 and 2 with sub paths 1 and 2, target positions, in and out positions, main path, advanced path).

Station definition

A station is the special work place where the robot picks up or drops off a part, or does some kind of processing of the part. Three kinds of stations are used in this project.

Home Station: A mandatory station that is the robot's home position. In the home position the gripper tool is opened.
Scrap Station: A mandatory station where the robot leaves all parts before it moves home during a home run cycle. In the scrap station the gripper tool is opened.
User Station: Any station except the Home and Scrap stations, e.g. an injection moulding machine or a conveyor [16].

Station Status definition

The station status is dynamic and changes during run time. A station status is used during scheduling to determine whether a station can be executed, and which station is to be executed. Three status kinds are used in this project.

Station OK (stnOK): The station is okay, i.e. the robot can attend to that station.
Station NOK (stnNOK): The station is not okay; the robot is in a failure state.
Station Busy (stnbusy): The station is busy.

Path Type definition

A station has to have at least one path, the main path. Apart from the main path, a station can have from zero up to five sub paths. Four path types are used in this project.

Main Path: The path for moving into a station.
Sub Path: A path inside a station.
Cell Path: The path between stations.
Out Path: Used when the robot is outside the defined paths.

Point Type definition

A path is an array of ten points of different types. A path has an Entry point and a Target point. Points between the Entry point and the Target point are move-in points, and points from the Target point to the last point are move-out points. Three types are used in this project, see Figure 3.

Target point: The point between the move-in points and the move-out points.
In point: The points between the Entry point and the Target point.
Out point: The points after the Target point.

Figure 3. Illustration of the paths and points in a station (Entry point, in points, Target point, out points; main path, path one, path two). A station always has a main path and, if necessary, a path one and/or a path two.

Tool Status definition

The gripper is a tool on the robot in plastic mould that is used to pick up, leave and hold a part. The Tool Status describes the status of the gripper. Three kinds of gripper status are used in this project.

Tool open (tlopen): The gripper is opened.
Tool closed (tlclosed): The gripper is closed.
Tool NOK (tlNOK): The gripper is in a failure state.

The IRC5 controller FlexPendant

The FlexPendant is an operator panel from which all operations and programming can be carried out. The FlexPendant is equipped with a large touch screen and a color display, which shows different kinds of system information, pictures, graphs and user interactions activated by the touch of a finger. Limited programming or computer experience is required to learn how to use the FlexPendant, see Figure 4 [17].

Figure 4. FlexPendant.
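To summarize the environment described above, the stations, paths and statuses can be sketched as a small data model. The sketch below is purely illustrative (Python is assumed; none of these type names come from RWPM itself):

    from dataclasses import dataclass, field
    from enum import Enum
    from typing import List

    class StationStatus(Enum):
        OK = "stnOK"          # robot can attend the station
        NOK = "stnNOK"        # failure state
        BUSY = "stnbusy"

    class ToolStatus(Enum):
        OPEN = "tlopen"
        CLOSED = "tlclosed"
        NOK = "tlNOK"         # gripper in a failure state

    @dataclass
    class Path:
        # A path holds at most ten points: an entry point, move-in points,
        # a target point, and move-out points.
        points: List[str] = field(default_factory=list)

    @dataclass
    class Station:
        name: str                                            # "Home", "Scrap" or a user station
        status: StationStatus
        main_path: Path                                      # mandatory
        sub_paths: List[Path] = field(default_factory=list)  # zero to five sub paths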

3 Methods

The main objective of this project was to explore an approach based on Machine Learning for error recovery in industrial robots. The learning method was applied to the ABB robots in the plastic molding industry (RWPM) to effectively train the robot to classify the most correct solutions according to the customer's requirements. But the question was which learning method should be tested, and whether it is suitable for industrial robots.

Different methods, both Machine Learning methods and ANN methods, were studied for this project. Some articles describe that SVM outperforms ANN in terms of both classification accuracy and classification speed [4]. In another article, three kinds of classifier methods were compared. The first was the statistics-based methods, such as the Bayesian method, the K-nearest neighbors (KNN) method and SVM. The second was the rule-based methods, such as decision trees and rough sets. The last was the ANN methods. The SVM algorithm is the solution to a convex optimization problem, and it is often better than the others because its local optimal solution is also the global optimal solution [14]. Furthermore, the classification here needs to be performed in an unstructured environment full of uncertainties, and SVM has been used in many real-world applications due to its good generalization performance. SVM has also been proven to be a robust method [13]. For these reasons, SVM was proposed as the learning method for error recovery in this project. The SVM was used to learn approximate classified solutions for the ABB robot error handling system, in order to conclude whether it is a possible and suitable solution.

SVM is a supervised training method, i.e. it provides learning with a teacher: the output answer is associated with the input during the training process. The basic idea of SVM is to apply a nonlinear mapping Φ that maps the data of the input space into a higher-dimensional feature space, and then implement a linear classification in this higher-dimensional space. SVM seeks to determine a linear separator between binary data classes. The optimal position of the separating plane is the one where the margin between the plane and the data points is maximized. This concept is generally referred to as a maximal margin classifier, and its strength lies in determining a good general solution to the classification problem without overfitting the data [4]. In order to separate more complex data sets, a kernel function is used to map the data into a higher-dimensional space where a single hyperplane can separate the binary classes. The kernels tested in this project were:

Linear kernels
Polynomial kernels

Multi-class classification was also needed for this project, since we wanted more options to cover all possible situations in error recovery. Multi-class classification can be achieved either by combining several two-class classifiers or by using a multi-class version of the SVM algorithm.

Limitations

In this project, the SVM Machine Learning approach has only been applied to error recovery in the ABB industrial robots for plastic mould. The ABB robot software for machine tending, especially for injection molding, is called RobotWare Plastic Mould (RWPM). Only a selection of errors were handled in the project: the so-called plastic errors, which currently all use the same recovery rule, i.e. to perform a safe Home Run whenever a fault occurs. Furthermore, this project was evaluated for only one set of hand-coded training rules, since the purpose of the project was to improve the error recovery with a Machine Learning approach; one evaluated set of hand-coded training rules was therefore enough. This set of hand-coded training rules was used in all the experiments. To make the experiments of the project understandable, Chapters 3.1 and 3.2 give a detailed background theory of the SVM algorithm and its implementation.

3.1 Background theory: Support Vector Machine (SVM)

The Artificial Neural Network (ANN) algorithms change system parameters and weights during the learning process, and these approaches are based on a risk function called Empirical Risk Minimization (ERM). In contrast, the Support Vector Machine (SVM) algorithm uses, in addition to the ERM, a risk function known as Structural Risk Minimization (SRM). This has been shown to be superior to using ERM alone [18, 21]. The ERM is based only on minimizing the error on the training data itself: if the training data is sparse and/or not representative of the underlying distribution, the system will be poorly trained, and hence classification performance will be limited.

SVM is a Machine Learning approach that has drawn much attention because of its high classification performance. The SVM algorithm is a universal supervised, feed-forward, network-based classification algorithm built on statistical learning theory and the SRM principle. For classification problems, the SVM maps the input vectors into a high-dimensional nonlinear feature space and constructs a hyperplane which linearly separates the data into different classes. This hyperplane is called the maximal margin hyperplane because it maximizes the distance to the closest points of the two classes. By appropriately defining a kernel function relating the data in the input space to the dot product between the data vectors in the feature space, a high-dimensional feature space can be achieved. This Section 3.1 introduces the theory of SVM, including the linearly separable case, the linearly nonseparable case and the nonlinear case, through a two-class classification problem. The multi-class SVM classification problem is also introduced [21].

Optimal hyperplane for linearly separable patterns

In the linearly separable case, the training set S is given as:

S = {(x_i, y_i)}, i = 1, ..., N    (1)

where x_i is the input pattern with given label y_i ∈ {−1, 1}. The linearly separable case can be separated by a hyperplane decision function. The separating hyperplane decision function is:

f(x) = wᵀx + b    (2)

where x is an input vector, w is an adjustable weight vector and b is a bias. The goal of SVM is to find an optimal hyperplane that satisfies [18]:

wᵀx_i + b ≥ 1 if y_i = 1    (3)
wᵀx_i + b ≤ −1 if y_i = −1, i = 1, ..., l    (4)

In their simplest form, SVMs are hyperplanes that separate the training data by a maximal margin. All vectors lying on one side of the hyperplane are labeled −1, and all vectors lying on the other side are labeled 1. The training data that lie closest to the hyperplane and determine the hyperplane are called support vectors (SV); all remaining examples of the training set are irrelevant. Now the margin of this hyperplane has to be maximized. The margin is the minimal distance from the hyperplane to the closest data point. The problem is to find the w that maximizes the margin [19]:

margin = min_i y_i f(x_i) / ‖w‖    (5)

An infinite number of solutions exists. In order to limit the number of solutions, only the normalized solution, with y_i f(x_i) = 1 for the closest points, is considered. The wider the margin is, the smaller the VC dimension of the resulting classifier; by widening the classification margin, the confidence interval for the classification error is reduced. Maximizing the margin is equivalent to minimizing ‖w‖. Therefore, for the linearly separable case, finding the optimal hyperplane means solving the following optimization problem [20]:

Minimize: ½‖w‖²    (6)
Subject to the constraints: y_i(wᵀx_i + b) ≥ 1, i = 1, 2, ..., N    (7)

To solve the optimization problem, the method of Lagrange multipliers is used. If α_i ≥ 0 are the Lagrange multipliers, the optimization problem can be written as:

L(w, b, α) = ½ wᵀw − Σ_{i=1}^N α_i [y_i(wᵀx_i + b) − 1]    (8)

The solution to the constrained optimization problem is determined by the saddle point of the Lagrangian function L(w, b, α), which has to be minimized with respect to w and b and maximized with respect to α_i ≥ 0. Thus, differentiating L(w, b, α) with respect to w and b and setting the results equal to zero, the following two conditions of optimality are obtained:

Condition 1: ∂L(w, b, α)/∂w = 0    (9)
Condition 2: ∂L(w, b, α)/∂b = 0    (10)

Solving these two partial derivative conditions gives:

w = Σ_{i=1}^N α_i y_i x_i    (11)

which shows that the vector w is a linear combination of the training vectors, and:

Σ_{i=1}^N α_i y_i = 0    (12)

which is a condition that must be satisfied by the Lagrange multipliers. It is important to note that at the saddle point, for each Lagrange multiplier, the product of that multiplier with its corresponding constraint vanishes:

α_i [y_i(wᵀx_i + b) − 1] = 0, i = 1, 2, ..., N    (13)

Therefore, only the multipliers exactly meeting Eq. (13) can assume nonzero values. The x_i for which α_i > 0 are the support vectors (SV). The dual formulation of the optimization problem, where the Lagrangian is expressed in terms of the α_i only, is:

Maximize: L_dual = Σ_{i=1}^N α_i − ½ Σ_{i=1}^N Σ_{j=1}^N α_i α_j y_i y_j (x_iᵀx_j)    (14)
Subject to the constraints: Σ_{i=1}^N α_i y_i = 0, with α_i ≥ 0, i = 1, 2, ..., N    (15)

Finally, the separation function is:

f(x) = Σ_{i=1}^N α_i* y_i (x_iᵀx) + b    (16)

Note that the dual problem is cast entirely in terms of the training data, and that the dual function to be maximized depends only on the input patterns in the form of the set of dot products x_iᵀx_j, i, j = 1, ..., N. When the optimum Lagrange multipliers α_{0,i} have been determined, the optimum weight vector w₀ may be computed as [18, 19]:

w₀ = Σ_{i=1}^N α_{0,i} y_i x_i    (17)

Figure 5. Illustration of an optimal separating hyperplane.

Optimal hyperplane for linearly nonseparable patterns

Linear separability cannot always be assumed. If the inequalities in Eqs. (3) and (4) do not hold for some data points in S, the problem becomes linearly nonseparable. When the training data is not linearly separable, slack variables ξ_i can be introduced into the constraints of the decision surface:

y_i(wᵀx_i + b) ≥ 1 − ξ_i, i = 1, 2, ..., N    (18)

For 0 ≤ ξ_i ≤ 1, the data point falls inside the region of separation but on the right side of the decision surface. For ξ_i > 1, it falls on the wrong side of the decision surface. The goal of the SVM is now to find a separating hyperplane for which the misclassification error is minimized while the soft margin of separation is maximized. The SVs lie exactly on the edge of the margin when the soft margin is not used, and on the edge or inside the margin area in soft margin classification. To find an optimal hyperplane for the linearly nonseparable case, the following constrained problem needs to be solved:

Minimize: ½‖w‖² + C Σ_{i=1}^N ξ_i    (19)
Subject to the constraints: y_i(wᵀx_i + b) ≥ 1 − ξ_i, i = 1, 2, ..., N    (20)
ξ_i ≥ 0, i = 1, 2, ..., N

As before, minimizing the first term in Eq. (19) is related to minimizing the VC dimension of the SVM. The second term is an upper bound on the number of test errors, where C is a user-defined positive parameter. It controls the trade-off between the complexity of the machine and the number of nonseparable points; in particular, it is the only free parameter in the SVM [18, 19, 20].

One can also translate this optimization problem into a dual form. The dual Lagrangian that has to be maximized is [19]:

Maximize: L_dual = Σ_{i=1}^N α_i − ½ Σ_{i=1}^N Σ_{j=1}^N α_i α_j y_i y_j (x_iᵀx_j)    (21)
Subject to the constraints: Σ_{i=1}^N α_i y_i = 0, with 0 ≤ α_i ≤ C, i = 1, 2, ..., N    (22)

Note that neither the slack variables ξ_i nor their Lagrange multipliers appear in the dual form. The dual problem for the nonseparable case is similar to that of the linearly separable case except for a minor difference: the dual function is to be maximized in both cases, but in the nonseparable case the constraint α_i ≥ 0 is replaced by the more stringent constraint 0 ≤ α_i ≤ C. Except for this modification, the constrained optimization for the nonseparable case and the computation of the optimum values of the weight vector w and the bias b proceed in the same way as in the linearly separable case [18].

Optimal hyperplane for nonlinear patterns

In most real-life problems the input data is not linearly separable. To solve the nonlinear case, the input vectors are mapped into a higher-dimensional feature space, where the training data becomes linearly separable.

Figure 6. Nonlinear SVM.

Picture an input space made up of nonlinearly separable patterns. Cover's theorem states that such a multidimensional space may be transformed into a new feature space where the patterns are linearly separable with high probability, provided that two conditions are satisfied: the transformation is nonlinear, and the dimensionality of the feature space is high enough. Note that Cover's theorem does not discuss the optimality of the separating hyperplane; it is only by using an optimal separating hyperplane that the VC dimension is minimized and generalization is achieved [18].

Let x denote a vector drawn from the input space of dimension m₀, and let {φ_j(x)}, j = 1, ..., m₁, denote a set of nonlinear transformations from the input space to a feature space of dimension m₁. K(x, x_i) is the inner-product kernel that implicitly maps data from the input space into the higher-dimensional feature space via the nonlinear transformations:

K(x, x_i) = φ(x)ᵀφ(x_i) = Σ_{j=0}^{m₁} φ_j(x) φ_j(x_i), i = 1, 2, ..., N    (23)
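To make the implicit mapping concrete, consider a small worked example of our own (not spelled out in the text): for two-dimensional inputs and the degree-2 polynomial kernel,

K(x, z) = (xᵀz + 1)² = 1 + 2x₁z₁ + 2x₂z₂ + x₁²z₁² + 2x₁x₂z₁z₂ + x₂²z₂² = φ(x)ᵀφ(z),

with φ(x) = (1, √2 x₁, √2 x₂, x₁², √2 x₁x₂, x₂²). Evaluating K thus computes a dot product in a six-dimensional feature space without ever forming φ(x) explicitly.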

The dual Lagrangian now needs to be maximized for the separable case in the high-dimensional feature space. The dual optimization problem for the optimal hyperplane is therefore defined by:

Maximize: L_dual = Σ_{i=1}^N α_i − ½ Σ_{i=1}^N Σ_{j=1}^N α_i α_j y_i y_j K(x_i, x_j)    (24)
Subject to the constraints: Σ_{i=1}^N α_i y_i = 0, with α_i ≥ 0, i = 1, 2, ..., N    (25)

The dual optimization problem with the slack variables has the same function to be optimized:

Maximize: L_dual = Σ_{i=1}^N α_i − ½ Σ_{i=1}^N Σ_{j=1}^N α_i α_j y_i y_j K(x_i, x_j)    (26)
Subject to the constraints: Σ_{i=1}^N α_i y_i = 0, with 0 ≤ α_i ≤ C, i = 1, 2, ..., N    (27)

The requirement on the kernel K(x, x_i) is that K must be symmetric and positive definite, i.e. K must satisfy Mercer's theorem. Note that there is no need to use or even know the form of φ, since the mapping is never performed explicitly: only the inner products of the data have to be calculated, while the explicit form of the data can stay implicit. Therefore, SVM can computationally afford to work in implicitly very large feature spaces. SVM can also control and avoid overfitting by controlling the capacity and maximizing the margin.

Inner-product kernels

The SVM algorithm can construct a variety of learning machines by using different kernel functions. Table 1 summarizes the inner-product kernels for four common types of SVMs [18].

Type of Support Vector Machine         Inner-product kernel K(x, x_i), i = 1, 2, ..., N
Linear kernel                          xᵀx_i
Polynomial learning machine            (xᵀx_i + 1)^P
Radial-basis function network (RBF)    exp(−‖x − x_i‖² / (2σ²))
Sigmoid kernel (two-layer perceptron)  tanh(β₀ xᵀx_i + β₁)

Table 1. Inner-product kernels.
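As a concrete illustration of Table 1 (a sketch of our own; the parameter defaults are arbitrary and not taken from the thesis), the four kernels can be written directly as functions on input vectors:

    import numpy as np

    def linear_kernel(x, z):
        # K(x, z) = x'z
        return float(np.dot(x, z))

    def polynomial_kernel(x, z, p=2):
        # K(x, z) = (x'z + 1)^p
        return float(np.dot(x, z) + 1.0) ** p

    def rbf_kernel(x, z, sigma=1.0):
        # K(x, z) = exp(-||x - z||^2 / (2 sigma^2))
        d = np.asarray(x, dtype=float) - np.asarray(z, dtype=float)
        return float(np.exp(-np.dot(d, d) / (2.0 * sigma ** 2)))

    def sigmoid_kernel(x, z, beta0=1.0, beta1=-1.0):
        # K(x, z) = tanh(beta0 * x'z + beta1); a valid Mercer kernel
        # only for some parameter choices
        return float(np.tanh(beta0 * np.dot(x, z) + beta1))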

For the polynomial kernel, the decision function obtained has the following form:

f(x) = sgn( Σ_{i ∈ SV} α_i y_i (xᵀx_i + 1)^P + b )    (28)

The radial-basis function network (RBF) has the decision function:

f(x) = sgn( Σ_{i ∈ SV} α_i y_i K(x, x_i) + b ), where K(x, x_i) = exp(−‖x − x_i‖² / (2σ²))    (29)

The number of support vectors obtained with the SVM corresponds to the number of RBF kernels used to build up this network. The sigmoid kernel is a similar approach to the two-layer neural network, with the separating function:

f(x) = sgn( Σ_{i ∈ SV} α_i y_i tanh(β₀ xᵀx_i + β₁) + b )    (30)

Similarly to the RBF kernels, the number of support vectors corresponds to the number of first-layer neurons that the network requires for optimal generalization. The weights of the first-layer neurons are the support vectors x_i, while the weights of the second layer of neurons are given by α_i y_i. The linear kernel is able to work directly on the input space without mapping the data into a higher-dimensional feature space.

The multi-class SVM classification problem

Multi-class classification, where the number of classes k is larger than two, can be obtained by combining two-class SVM classifiers. Three different approaches are presented below.

One-against-all (or one-against-k, where k is the number of classes) is an approach in which k different classifiers are constructed, one for each class. Each machine is trained as a classifier for one class against all other classes.

One-against-one is an approach that constructs a multi-class classifier from k(k − 1)/2 two-class machines, one separating each pair of classes, together with a majority voting scheme to estimate the final classification; the class with the maximal number of votes among all classifiers is the estimate (see the sketch below).

Finally, one can solve a multi-class SVM classification problem directly: all classes are considered at once by modifying the optimization problem and optimizing the margins of all hyperplanes at the same time [19].
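The one-against-one scheme can be sketched as follows (an illustration of ours; make_classifier is a placeholder for any binary SVM trainer with a scikit-learn-style fit/predict interface, and the tie-breaking follows [22]):

    from itertools import combinations
    import numpy as np

    def train_one_against_one(X, y, make_classifier):
        # Build k(k-1)/2 binary classifiers, one per pair of classes.
        models = {}
        for a, b in combinations(sorted(set(y)), 2):
            mask = (y == a) | (y == b)
            clf = make_classifier()
            clf.fit(X[mask], np.where(y[mask] == a, 1, -1))
            models[(a, b)] = clf
        return models

    def predict_one_against_one(models, x):
        # Majority vote over all pairwise classifiers.
        votes = {}
        for (a, b), clf in models.items():
            winner = a if clf.predict(x.reshape(1, -1))[0] == 1 else b
            votes[winner] = votes.get(winner, 0) + 1
        # Highest vote count wins; ties go to the smaller class index.
        return min(votes, key=lambda c: (-votes[c], c))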

3.2 Implementations

In all cases, a quadratic problem has to be solved, which guarantees a unique maximum. To solve this problem numerically, one can perform gradient ascent, also known as steepest ascent [19]. We start from an initial estimate α⁰ of the solution, which is then iteratively updated by following the steepest ascending path. The step length of the update is the learning rate of the algorithm. The learning rate should be chosen carefully: if it is too large, the algorithm will not converge, and if it is very small, convergence will be very slow. For every training example i, the following individual learning rate is proposed:

η_i = 1 / K(x_i, x_i)    (31)

This proposed learning rate is proven to satisfy the sufficient condition for convergence.

A number of practical problems arise when the learning problem becomes larger, i.e. when the number of training examples increases substantially. Essentially, the complexity of the optimization problem grows with the size of the matrix of kernel values K(x_i, x_j), which grows quadratically with the number of training examples. A number of solutions have been proposed to address this issue; two of them, which apply only a part of the data to the optimizer at a time, deserve special mention.

Chunking: Starting from a random subset ("chunk") of training examples, the optimizer is run on this subset to find the initial support vectors (SV). This initial solution is then used to find the training examples that most violate it. The latter examples and the current support vectors together form a new subset, on which the optimizer is run again. This procedure is repeated until some stopping criterion is met.

Decomposition: Decomposition methods are currently one of the major methods for training support vector machines. In chunking, the size of the subset must be larger than the number of support vectors, which for large problems can still be a problem. The decomposition method instead fixes the subset size (which can be very small) and runs the optimizer on one small subset of the problem at a time [21].
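As a minimal sketch of the gradient ascent with the learning rate of Eq. (31) (our illustration, not the thesis code; for clarity it ignores the equality constraint Σ α_i y_i = 0 and the bias b, and simply clips each α_i to the box [0, C]):

    import numpy as np

    def dual_gradient_ascent(K, y, C=10.0, epochs=100):
        # K: n x n kernel matrix, y: labels in {-1, +1}
        K = np.asarray(K, dtype=float)
        y = np.asarray(y, dtype=float)
        n = len(y)
        alpha = np.zeros(n)               # initial estimate alpha^0
        eta = 1.0 / np.diag(K)            # per-example learning rate, Eq. (31)
        for _ in range(epochs):
            for i in range(n):
                # dL_dual/dalpha_i = 1 - y_i * sum_j alpha_j y_j K(x_i, x_j)
                grad = 1.0 - y[i] * np.dot(alpha * y, K[i])
                # Ascend, then clip to the box 0 <= alpha_i <= C
                alpha[i] = np.clip(alpha[i] + eta[i] * grad, 0.0, C)
        return alpha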

Solving the Quadratic Problems

The difficulty of solving

Minimize: ½ λᵀQλ − eᵀλ    (32)
Subject to the constraints: 0 ≤ λ_i ≤ C, i = 1, 2, ..., N, and yᵀλ = 0    (33)

is the density of Q, because Q_ij is in general not zero. Here e is the vector of all ones, C > 0 is the upper bound, Q is a positive semidefinite matrix with Q_ij = y_i y_j K(x_i, x_j), and K(x_i, x_j) = φ(x_i)ᵀφ(x_j) is the kernel. The decomposition method is used to conquer this difficulty. The method modifies a subset of λ per iteration. This subset, denoted the working set B, leads to a small sub-problem to be minimized in each iteration. An extreme case is Sequential Minimal Optimization (SMO), which restricts B to have only two elements; then, in each iteration, a simple two-variable problem is solved without needing optimization software.

Sequential Minimal Optimization (SMO)

This method is basically the decomposition method pushed to the limit. Using a subset size of two, only two points are considered for optimization at a time. In this case the optimization problem can be solved analytically, and hence no iterative quadratic optimization program is required. Concretely, the optimization requires that:

Σ_{i=1}^N α_i y_i = 0    (34)

Let us denote the two selected multipliers α₁ and α₂; the minimum number of multipliers that can be modified at the same time is two. Given Eq. (34), they must satisfy:

α₁y₁ + α₂y₂ = const = α₁^old y₁ + α₂^old y₂    (35)

while 0 ≤ α₁, α₂ ≤ C. If we calculate α₂^new first, this implies that α₂^new has to satisfy:

U ≤ α₂^new ≤ V    (36)

with, for the case y₁ ≠ y₂:

U = max(0, α₂^old − α₁^old)    (37)
V = min(C, C + α₂^old − α₁^old)    (38)

and, for the case y₁ = y₂:

U = max(0, α₁^old + α₂^old − C)    (37)
V = min(C, α₁^old + α₂^old)    (38)

Now one can define the basic quantities of the algorithm. The classification error that a training example generates with the current set of multipliers is:

E_i = f(x_i) − y_i, where f(x_i) = Σ_{j=1}^N y_j α_j K(x_j, x_i) + b, i = 1, 2    (39)

and the second derivative of the objective function along the diagonal line is:

κ = K(x₁, x₁) + K(x₂, x₂) − 2K(x₁, x₂)    (40)

Using these quantities, the update of the pair of Lagrange multipliers that maximizes the objective function, when only this pair is allowed to change, is given by:

α₂^{new,unclipped} = α₂^old + y₂(E₁ − E₂)/κ    (41)

This is clipped to enforce the constraint of Eq. (36):

α₂^new = V if α₂^{new,unclipped} ≥ V
α₂^new = α₂^{new,unclipped} if U < α₂^{new,unclipped} < V    (42)
α₂^new = U if α₂^{new,unclipped} ≤ U

and the value of α₁^new is given by:

α₁^new = α₁^old + y₁y₂(α₂^old − α₂^new)    (43)

The calculation of b

After the solution of the dual optimization problem is obtained, the variable b must be calculated for use in the decision function. Consider the case y_i = 1. If there are α_i which satisfy 0 < α_i < C, then r₁ = f(x_i); to avoid numerical errors, we average over them:

r₁ = Σ_{0 < α_i < C, y_i = 1} f(x_i) / |{i : 0 < α_i < C, y_i = 1}|    (44)

On the other hand, if there is no such α_i, r₁ must satisfy:

max_{α_i = C, y_i = 1} f(x_i) ≤ r₁ ≤ min_{α_i = 0, y_i = 1} f(x_i)    (45)

and we take r₁ as the midpoint of the range. For y_i = −1, r₂ can be calculated in a similar way. After r₁ and r₂ are obtained, b is calculated as:

b = (r₁ + r₂)/2    (46)

Multi-class classification

The one-against-one approach has been used, in which k(k − 1)/2 classifiers are constructed, each trained on data from two different classes. For training data from the i:th and the j:th classes, the following binary classification problem is solved:

Minimize (over w^{ij}, b^{ij}, ξ^{ij}): ½ (w^{ij})ᵀ w^{ij} + C Σ_t ξ_t^{ij}    (47)
Subject to: (w^{ij})ᵀ x_t + b^{ij} ≥ 1 − ξ_t^{ij}, if x_t is in the i:th class,    (48)
(w^{ij})ᵀ x_t + b^{ij} ≤ −1 + ξ_t^{ij}, if x_t is in the j:th class,    (49)
ξ_t^{ij} ≥ 0.

In classification we use a voting strategy: each binary classification is considered a vote that can be cast for every data point x, and in the end a point is assigned to the class with the maximum number of votes. In case two classes have identical votes, although it may not be a good strategy, we simply select the one with the smaller index [22].
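Returning to the two-multiplier update of Eqs. (36)-(43), the analytic SMO step can be sketched compactly (illustrative only; a complete SMO also needs heuristics for choosing the pair and an update of b as in Eqs. (44)-(46)):

    def smo_pair_update(a1, a2, y1, y2, E1, E2, K11, K22, K12, C):
        # Second derivative along the constraint line, Eq. (40)
        kappa = K11 + K22 - 2.0 * K12
        if kappa <= 0:
            return a1, a2  # degenerate direction; real SMO handles this case separately
        # Feasible range for the new alpha2, Eqs. (36)-(38)
        if y1 != y2:
            U, V = max(0.0, a2 - a1), min(C, C + a2 - a1)
        else:
            U, V = max(0.0, a1 + a2 - C), min(C, a1 + a2)
        # Unconstrained optimum, Eq. (41), then clipping, Eq. (42)
        a2_new = min(max(a2 + y2 * (E1 - E2) / kappa, U), V)
        # Preserve the equality constraint, Eq. (43)
        a1_new = a1 + y1 * y2 * (a2 - a2_new)
        return a1_new, a2_new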

3.3 Training sets

The SVM optimal-hyperplane algorithm for nonlinear patterns was selected and used in the experiments. Before the algorithm could be used, however, the hand-coded data set with both inputs and outputs had to be defined. There is a specific vector format used for the SVM algorithm, namely that of LIBSVM [22]. Figure 7 shows the LIBSVM vector format.

[label] [index1]:[value1] [index2]:[value2] ... [indexN]
[label] [index1]:[value1] [index2]:[value2] ...

Figure 7. Illustration of the SVM vector format.

The ordered indices are the different features or attributes, each with its respective value. Features or attributes describe the characteristics held by the input vectors, and the number of features is also the number of dimensions. To give a deeper understanding of the vector format, consider an example. Assume that two different points lie in a 2-dimensional plane with the coordinates and outputs shown in Table 2.

Output   x-axis   y-axis
1        0        3
2        5        8

Table 2. Illustration of the outputs and the input features (x- and y-axis) with their respective values.

With this data, the SVM vector format has the following structure:

1 1:0 2:3
2 1:5 2:8

In this structure, the two features or attributes (X and Y) are described: index1 carries the feature X, and index2 the feature Y. The output column in the table corresponds to the format's label.

Output data

The error recovery in RWPM today moves the robot safely to the home position as soon as possible when an error occurs. The plastic errors are all defined errors in RWPM, for example when the machine is stopped because the ejectors are stuck. Currently, RWPM handles all errors with the same rule, which is to perform a safe Home Run whenever a fault occurs. There are two ways to handle the Home Run recovery:

1. If the robot does not hold a part, move the robot to the home position and stop.
2. If the robot holds a part at fault occurrence, scrap the part at the Scrap station and then move the robot to the home position [16].
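As an illustration of the format in Figure 7, a hypothetical helper (our own, not part of RWPM or LIBSVM) that turns a label and a dense feature vector into a LIBSVM line could look like this:

    def to_libsvm(label, values):
        # One LIBSVM line: "<label> <index1>:<value1> <index2>:<value2> ..."
        # Indices are 1-based and ordered.
        pairs = " ".join(f"{i}:{v}" for i, v in enumerate(values, start=1))
        return f"{label} {pairs}"

    print(to_libsvm(1, [0, 3]))   # -> 1 1:0 2:3
    print(to_libsvm(2, [5, 8]))   # -> 2 1:5 2:8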

24 Accordng to the customers requests, only two solutons of the error recovery are not always enough. In the real world, customers need more optons to cover all possble stuatons. Flexblty s a bg ssue here and that s why more solutons for the error handlng are requred. For ths project, addtonal two solutons are presented accordng to the customers demands. The frst soluton s that the robot drops the part at the nstant place before t returns to home poston. The second soluton s to stop the robot and let the operators jog the robot to the approprate poston. The output for ths project descrbes the dfferent output cases for the classfcaton. The defned output cases should be accordng to the customer s needs. The SVM algorthm s constructed n such way that t allows the customer to add and change the output defnton anytme. The four customzed new defned outputs that were the labels for ths project are: 1. Home Run Move robot to home poston and stop 2. Scrap Home Run Scrap part at Scrap staton and then move the robot to home poston 3. Drop Home Run Drop part at nstant place and then move the robot to home poston 4. Stop Jog Stop and let the operators jog the robot Input data Dfferent nputs are requred to be able to use and classfy the SVM algorthm. For the error recovery usng the SVM algorthm, only fve nputs were defned,.e. Actve Staton, Staton Status, Path Type, Pont Type and Tool Status. In real producton more nputs mght be needed. However, the am of ths research was to try to adapt a learnng theory to the ndustral robots, therefore fve nputs were enough to prove the prncple. The nputs have been carefully chosen to solve ths classfcaton problem. The most relevant nputs are shown n Table 3, snce the most mportant nformaton was to fnd out the current poston and status of the robot when an error occurred, and act to the defned solutons accordngly. 1. Actve Staton 2. Staton Status 3. Path Type 4. Pont Type 5. Tool Status one stnok Cellpath Target tlopen Home stnok Manpath Inpont tlclosed Scrap stnbusy Subpath Outpont tlok User Outpath features 3 features 4 features 3 features 3 features 4 features x 3 features x 4 features x 3 features x 3 features = 432 combnatons Table 3. Illustraton of the defned nputs for ths project and the scalng method. 20

Currently, the network only supports numerical data, so non-numerical data has to be converted into numerical data. One way to do this is to use several binary attributes to represent one categorical attribute. That is why every input in this project was divided into three or four features/attributes: Active Station has 4 features, Station Status 3 features, Path Type 4 features, Point Type 3 features and Tool Status 3 features. Multiplying the features gives:

4 features × 3 features × 4 features × 3 features × 3 features = 432

so the entire input space contains 432 possible combinations, see Table 3. In this project, 17 features were pre-processed, which means the input space has 17 dimensions. To speed up the SVM solver, a scaling method was used; another advantage of the method is that it avoids numerical difficulties during the calculation. Because kernel values usually depend on the inner products of feature vectors, linearly scaling each attribute to the range [0, 1] is recommended. The binary scaling method is also shown in Table 3, and an example of the input vector format is:

1 1:0 2:0 3:0 4:1 5:0 6:1 7:0 8:0 9:1 10:0 11:0 12:0 13:0 14:1 15:1 16:0 17:0

Active Station = User
Station Status = stnNOK
Path Type = Mainpath
Point Type = Outpoint
Tool Status = tlopen

Features 1-4 belong to the input Active Station, features 5-7 to Station Status, and so on. In order to check the classified output answer, the training example vector is given together with the right output answer. Furthermore, in order to construct a good classifier by training, the training data must come from the same source as the unseen test data.

Applied rules for the error recovery

In the present error recovery, the same rule is applied to all faults:

1. If the robot does not hold a part, move the robot to the home position and stop.
2. If the robot holds a part at fault occurrence, scrap the part and then move to the home position.

Since a new idea with four defined solutions was proposed, new rules for the proposed improvement had to be created. The four new customized outputs are:

1. Home Run => HR
2. Scrap Home Run => Scrap
3. Drop Home Run => Drop
4. Stop Jog => Jog

The new rules were created manually, based on input from interviews with experienced robot engineers and on knowledge of RobotWare Plastic Mould (RWPM). The 13 rules used for the SVM algorithm are presented in Table 4. The rules were created thoughtfully and cover at least one example from every customized output. The definition of the rules affects the SVM algorithm directly.

#    Output   Active Station   Station Status   Path Type   Point Type   Tool Status
1    Jog      Any              Any              Outpath     Any          Any
2    Drop     None             Any              Any         Any          tlNOK
3    Drop     Home             Any              Any         Any          tlNOK
4    Drop     User             Any              Any         Any          tlNOK
5    Scrap    Scrap            Any              Any         Any          tlNOK
6    Scrap    Any              stnNOK           Any         Any          tlclosed
7    HR       Any              stnNOK           Any         Any          tlopen
8    Scrap    Any              stnbusy          Any         Any          tlclosed
9    HR       Any              stnbusy          Any         Any          tlopen
10   HR       Any              stnOK            Any         Target       Any
11   HR       Any              stnOK            Any         In point     Any
12   Scrap    Any              stnOK            Any         Out point    tlclosed
13   HR       Any              stnOK            Any         Out point    tlopen

Table 4. Illustration of the 13 rules for the proposed improvement.

The rules are ordered by priority: the first rule has the highest priority, and the higher the priority, the higher the impact on the SVM algorithm. "Any" in the table means that the attribute can take any value.

The 13 rules designed above were used first of all for research, to see whether the SVM algorithm is a good tool for error recovery in RWPM. The rules were only used to generate one set of test data; this should be done differently for different cases, and testing in a real environment requires more realistic rules. Note that the rules will differ depending on the customer's environment.

Training set (TR)
The training set consists of the training examples from which the system creates an optimal model for the classification, chosen from the whole combination of 432. Once the system is trained it is not necessary to retrain the algorithm; the system allows one to add new training examples whenever needed, in order to aim for better results.

Testing set (TE)
The testing set consists of examples used to measure the accuracy of the classification. All input vector combinations except the training set were used as the testing set.
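The complete hand-coded data set can be reproduced from Tables 3 and 4 with a short generator. The sketch below is a reconstruction of ours, not the thesis code; the attribute orders follow Table 3 and the priority rules follow Table 4:

    from itertools import product

    ATTRS = [
        ("Active Station", ["None", "Home", "Scrap", "User"]),
        ("Station Status", ["stnOK", "stnNOK", "stnbusy"]),
        ("Path Type",      ["Cellpath", "Mainpath", "Subpath", "Outpath"]),
        ("Point Type",     ["Target", "Inpoint", "Outpoint"]),
        ("Tool Status",    ["tlopen", "tlclosed", "tlNOK"]),
    ]

    RULES = [  # (output, Active Station, Station Status, Path Type, Point Type, Tool Status)
        ("Jog",   "Any",   "Any",     "Outpath", "Any",      "Any"),
        ("Drop",  "None",  "Any",     "Any",     "Any",      "tlNOK"),
        ("Drop",  "Home",  "Any",     "Any",     "Any",      "tlNOK"),
        ("Drop",  "User",  "Any",     "Any",     "Any",      "tlNOK"),
        ("Scrap", "Scrap", "Any",     "Any",     "Any",      "tlNOK"),
        ("Scrap", "Any",   "stnNOK",  "Any",     "Any",      "tlclosed"),
        ("HR",    "Any",   "stnNOK",  "Any",     "Any",      "tlopen"),
        ("Scrap", "Any",   "stnbusy", "Any",     "Any",      "tlclosed"),
        ("HR",    "Any",   "stnbusy", "Any",     "Any",      "tlopen"),
        ("HR",    "Any",   "stnOK",   "Any",     "Target",   "Any"),
        ("HR",    "Any",   "stnOK",   "Any",     "Inpoint",  "Any"),
        ("Scrap", "Any",   "stnOK",   "Any",     "Outpoint", "tlclosed"),
        ("HR",    "Any",   "stnOK",   "Any",     "Outpoint", "tlopen"),
    ]

    LABELS = {"HR": 1, "Scrap": 2, "Drop": 3, "Jog": 4}

    def one_hot(combo):
        # 4 + 3 + 4 + 3 + 3 = 17 binary features, scaled to {0, 1}
        vec = []
        for value, (_, levels) in zip(combo, ATTRS):
            vec += [1 if value == lvl else 0 for lvl in levels]
        return vec

    def label(combo):
        # The first matching rule wins, since the rules are ordered by priority.
        for out, *conds in RULES:
            if all(c == "Any" or c == v for c, v in zip(conds, combo)):
                return LABELS[out]
        return None

    dataset = [(label(c), one_hot(c)) for c in product(*(lv for _, lv in ATTRS))]
    assert len(dataset) == 432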

4 Experiments and Results

When applying SVM to real-world applications, the selection of a suitable kernel function is important for obtaining good classifiers. The aim of the experiments was to find the optimal algorithm and select the optimal model to fit the RWPM error recovery. This section puts particular emphasis on comparing different SVM models obtained by choosing different kernels. In the following, two kernels, the linear kernel and the polynomial kernel, were chosen to examine the capacity of SVM.

4.1 Linear kernel

The advantage of the linear kernel is that only one parameter is needed, since the network solves linear and nonlinear SVMs in the same way, together with a slack variable. This makes the system easier for the operators to use. The parameter to consider is C, a trade-off between training errors and the flatness of the solution. The larger C is, the smaller the final training error will be. But if C increases too much, the risk of losing the generalization properties of the classifier grows, because the classifier will try to fit all the training examples as well as possible, including any errors in the dataset. In addition, a large C also increases the training time. If C is small, the classifier is flat, meaning that the derivatives are small. The goal is to find a C that keeps the training errors small but also generalizes well. Different values of C were tested, and C = 10 was chosen as a good value for the linear kernel in this experiment.

13 randomly selected training examples, one from each of the predefined rules, were trained on, and testing was done with the whole combination set excluding the selected training examples. After training with only these 13 examples, the predicted accuracy was still well below 100 % (first row of Table 5). Since the training examples were randomly selected, different rules are likely to be represented differently. To see how the algorithm behaved, 13 new randomly selected examples were added each time, see Table 5.

# Cases   # TR   # SV   # TE   Accuracy
1         13     …      419    …,41 %
2         26     …      406    …,83 %
3         39     …      393    …,43 %
4         52     …      380    … %
5         65     …      367    …,19 %
6         78     …      354    …,15 %
7         91     73     341    100 %
8         104    …      328    100 %
9         117    …      315    100 %
10        130    …      302    100 %

Table 5. Illustration of the results using the linear kernel.
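The linear-kernel experiment can be reproduced in outline with an off-the-shelf SVM (scikit-learn here is our assumption; the thesis used the LIBSVM formulation directly), reusing the hypothetical dataset generator from Section 3.3:

    import random
    from sklearn.svm import SVC

    random.seed(0)
    random.shuffle(dataset)            # dataset = [(label, 17-dim vector), ...]
    # (The thesis sampled one example per rule; plain shuffling is a simplification.)
    train, test = dataset[:13], dataset[13:]
    ys, Xs = zip(*train)
    y_test, X_test = zip(*test)

    clf = SVC(kernel="linear", C=10)   # C = 10 as chosen above
    clf.fit(list(Xs), list(ys))
    print(f"{len(train)} examples -> accuracy {clf.score(list(X_test), list(y_test)):.2%}")

Note that SVC already applies the one-against-one scheme internally when there are more than two classes, matching the approach described in Section 3.1.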

According to Table 5, one can see that after 91 (13 × 7) training examples the SVM algorithm reaches an accuracy of 100 %. Not all training examples are important for the algorithm: only 73 of the training examples were selected by the training, and these selected training examples are the support vectors (SV). As described before, the optimizer runs on a subset to find the initial SVs. This initial solution is used to find the training examples that most violate it; those examples and the current support vectors together form a new subset, which is used to run the optimizer again, repeatedly, until a stopping criterion is met. The result of the SVM training is also shown in Graph 1.

Graph 1. Classification accuracy (%) versus the number of training examples for the linear kernel.

Graph 1 shows that the classification accuracy converges to 100 % after 91 training examples. As soon as the classification accuracy reaches the top, the result does not change, irrespective of newly added training examples.

4.2 Polynomial kernel

To compare how fast the SVM algorithm converges to 100 % accuracy with the result from the linear kernel, the polynomial kernel (described in Chapter 3) with different parameters was used for the next experiment. Table 6 shows results similar to those of the linear kernel. The parameters were set at the beginning, in case one, and were not changed in the other cases as new training examples were added. If new parameters were chosen for every case, the results would be even better. New parameters could be set for every newly added training example, but this would reduce flexibility, since the parameters have to be set manually for each case. For instance, the polynomial kernel K(x, x_i) = (xᵀx_i + 1)^P often uses the degree parameter P = 1, ..., 5 and the gamma parameter γ ∈ (0, 1]. The gamma parameter serves as an inner-product coefficient in the polynomial kernel. The scaling parameter is usually fixed, i.e. K(x_i, x_i) = 1 for all i = 1, ..., l. After defining a reasonable set of parameter combinations, the best parameters, i.e. those that minimize the test error, have to be selected for every set.

In this experiment, however, the parameters were only set in the first case, since it would not make the system easier for the operators to use if the parameters had to be reset after every case. Different parameters were tested, and the following gave the best result in the first case with the polynomial kernel:

Degree (P): 2
γ: 0.05
Scaling coefficient: 1
C: 10

# Cases   # TR   # SV   # TE   Accuracy
1         13     …      419    …,60 %
2         26     …      406    …,06 %
3         39     …      393    … %
4         52     …      380    …,89 %
5         65     …      367    …,55 %
6         78     …      354    …,59 %
7         91     …      341    100 %
8         104    …      328    100 %
9         117    …      315    100 %

Table 6. Illustration of the results using the polynomial kernel.

The difference between the results in Tables 5 and 6 is not big, and Graph 2 of the polynomial kernel result is quite similar to that of the linear kernel.

Graph 2. Classification accuracy (%) versus the number of training examples for the polynomial kernel.
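The corresponding polynomial configuration (again an assumed scikit-learn reconstruction; in LIBSVM terms the kernel is (γ·xᵀx_i + coef0)^degree, so the scaling coefficient above maps to coef0):

    from sklearn.svm import SVC

    # Degree P = 2, gamma = 0.05, scaling coefficient 1, C = 10, as listed above.
    poly_clf = SVC(kernel="poly", degree=2, gamma=0.05, coef0=1, C=10)
    poly_clf.fit(list(Xs), list(ys))   # same split as in the linear experiment
    print(f"polynomial accuracy: {poly_clf.score(list(X_test), list(y_test)):.2%}")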

Graph 3 illustrates the similar curves from the two experiments.

Graph 3. Classification accuracy (%) versus the number of training examples for both the linear and the polynomial kernel.

Based on the results from the two experiments above, the linear kernel is proposed. Firstly, the linear kernel does not demand any parameter settings. Secondly, the linear kernel classifier is, from a theoretical learning point of view, the best classifier to use given limited prior knowledge and a limited amount of training data. The linear kernel is not necessarily the best choice, but it was chosen for its ease of use. Hereafter, the linear kernel is used in the following experiment: the results show that the data is linearly separable, since the accuracy converges to 100 %, so there is no need to experiment with different kernels. Another advantage is that the polynomial kernel has three main parameters to adjust, while the linear kernel has none.

4.3 Re-training on misjudged examples

The next experiment considers the behavior of the SVM algorithm in more realistic cases, and shows that good generalization can be achieved with a few training examples. In real production, only a few training examples are exposed to the SVM algorithm in the beginning, and new training examples are added to the training set as errors occur. Table 7 shows the result of the experiment when one training example was added at a time. Each new example added to the training set was chosen from the previously incorrectly classified cases. After the first training with 13 examples, 300 of 419 cases were correctly classified, which means that 119 (419 − 300 = 119) cases were misjudged, i.e. incorrectly classified. The next training example was therefore chosen from these 119 incorrect cases. The disadvantage of this experiment is that the training examples picked for each case were carefully analyzed in order to improve the classification result. The reason that only 13 examples were trained in the beginning was to provide a more realistic investigation by choosing at least one example from each of the 13 predefined rules; in real production, the operators will not have time to create that many training examples manually.

# Cases   # TR   # SV   # TE   Accuracy
1         13     …      419    71,60 % (300/419)
2         14     …      418    …,51 %
3         15     …      417    …,05 %
4         16     …      416    …,98 %
5         17     …      415    …,02 %
6         18     …      414    …,20 %
7         19     …      413    …,57 %
8         20     …      412    …,50 %
9         21     …      411    …,19 %
10        22     …      410    …,34 %
11        23     …      409    …,33 %
12        24     …      408    …,28 %
13        25     …      407    …,26 %
14        26     …      406    100 %
15        27     …      405    100 %
16        28     …      404    100 %

Table 7. Illustration of the results from the realistic experiment.

By finding out which cases were classified incorrectly and re-training the system with one of them added, the system converges to 100 % after only 13 added training examples. The result will of course vary with which case is chosen, but the conclusion from this experiment is that the SVM algorithm can reach 100 % classification accuracy with only a few training examples. Furthermore, the system was re-trained with two more correctly classified cases, just to see how it behaves after reaching 100 % accuracy; as predicted, the system continues to give perfect accuracy.

Graph 4. Classification accuracy (%) versus the number of training examples for the realistic experiment (linear kernel).
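The re-training procedure amounts to a simple loop: train, find the misjudged cases, move one of them into the training set, and repeat. A sketch of ours, reusing the hypothetical names introduced above:

    def retrain_until_perfect(dataset, make_clf, initial=13):
        train, pool = list(dataset[:initial]), list(dataset[initial:])
        while True:
            ys, Xs = zip(*train)
            clf = make_clf()
            clf.fit(list(Xs), list(ys))
            # Misjudged cases in the current testing set
            wrong = [ex for ex in pool if clf.predict([ex[1]])[0] != ex[0]]
            if not wrong:
                return clf, len(train)      # 100 % accuracy reached
            # Move one incorrectly classified example into the training set
            ex = wrong[0]
            pool.remove(ex)
            train.append(ex)

    # e.g.: clf, n = retrain_until_perfect(dataset, lambda: SVC(kernel="linear", C=10))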

32 5 Dscusson Lnear kernel and polynomal kernel were nvestgated to fnd out the optmal kernel selecton. By creatng smple and well desgned tranng examples, one does not have to use complcated and advanced kernels and stll be able to acheve good results. The frst experment wth lnear kernel showed that after 91 tranng examples the SVM algorthm classfed every test examples to 100 % accuracy. Ths means that the goal of ths experment was acheved. But how relable s ths kernel method and what weaknesses does t enclose? The weaknesses of ths experment are that the tranng examples were randomly chosen for each tme, whch means that the result only shows that the network wll approach to 100 % eventually, but do not show any patterns of how fast t wll converge or how many tranng examples that are needed. The SVM algorthm wll converge faster f good tranng examples, tranng examples that become support vectors, are exposed. But ths has not been tested n ths frst experment, snce the am of ths experment was to nvestgate the performance of the lnear kernel. The second experment was performed wth the polynomal kernel wth degree 2 to compare the result wth lnear kernel. Snce the tranng examples were well desgned, SVM algorthm should classfy wth good result by usng polynomal kernel degree 2. As llustrated above n Graph 3, the result showed that the kernel converged to 100 % smlar to the lnear kernel. But ths experment has a very pronounced weakness. The parameters were not set after every case. Ths means that the network wll not gve the best result n classfyng the test examples. Despte the weakness, one can see that 100 % accuracy was acheved after 91 tranng examples. Hypothetcally, the polynomal kernel wll converge faster than lnear kernel. But t requres manual settng of new parameters every tme to ft the new case. For ths project, the am was to fnd methods that do not complcate the use for producton operators. That s why lnear kernel wll be the most sutable kernel for the error recovery The thrd experment was performed usng the lnear kernel to do a realstc example n order to re-tran msjudged examples. Ths experment began wth 13 tranng examples lke the frst two experments, but the dfference was that only one new tranng example was added for each new case. In real producton, new tranng examples are acheved only when an error occurs. That s why the new tranng example s not randomly chosen. The new tranng example s chosen carefully on the bass of the cases that have not been correctly classfed from the prevous result cases. Taken together, SVM algorthm s a good tool for error recovery snce convergence could be acheved after only 13 new added tranng examples. In concluson, by usng only a few tranng data the SVM s a good tool for classfcaton. It s satsfyng that all three experments gave 100 % accuracy, snce the real-world ndustral robots need to be robust and relable. Anythng less than 100% accuracy wll not be acceptable. However, the results are better than expected and t s questonable how all results can be so accurate. Are the experments too smple or s the SVM method sutable for error recovery? One thng to thnk about s that the tranng examples have a great mpact on the result. Dfferent tranng examples gve dfferent results and the better desgned tranng examples the better result. In ths project we have 432 combnatons and 13 rules that were created manually based on nput from experenced robot engneers and knowledge from the system. It could of course be more or less than 13 rules that we provded the system. 
The rules were designed thoughtfully and covered different situations in different areas. This means that these 13 rules may be too well designed, which makes it much easier for the experiments to reach 100 % accuracy. In the real world, errors will occur and the operator is the one who designs the new rule for the system. This also needs to be taken into account, since the operator's decision will directly impact the result. Furthermore, only 4 output solutions (classes) were defined; a system with more classes and a higher level of complexity may not behave in the same way or reach 100 % accuracy.

Though SVM seems suitable for this project, it still has a couple of disadvantages for real-world applications. SVM is based on quadratic programming and cannot distinguish the importance of individual attributes from a training set. It is also time-consuming for large-volume data classification and time-series prediction. The training time and the memory footprint of SVM must be reduced for it to be applicable in many real-world applications.

We talk a lot about risks, safety and the environment in today's society, but what do risks, safety and the environment look like in the robotics industry? Risk analysis involves the systematic use of available information to describe and quantify the risks of a given system. In risk analysis, we reason about probabilities and consequences: what are the probabilities of various adverse events, and what are their consequences? With that as a basis, decisions can be taken to reduce the risks. It is therefore important to define all kinds of risks in the industrial robot error recovery environment, especially if machine learning methods are applied to the system as proposed in this project. Replacing a hand-coded traditional algorithm with a robot that thinks, learns and acts for itself is a high risk: the behavior of the robot is not predictable and certainly not guaranteed. For the SVM method in this experiment, the idea was to have the operator define a new rule whenever an error occurs. That is itself a risk, since it places a great deal of freedom, trust and responsibility on the operator. So before any new learning algorithm is introduced to industrial robots for error recovery in production, the goal must be defined. By defining the goal, all the uncertainties that prevent us from reaching it can be mapped, as must all situations that lead to problems and accidents. Examples of risks for error recovery in industrial robots are:

Operator:
- Badly defined error recovery from the operator, which could lead to:
  - Stopped production
  - The robot breaks
  - Accidents and injuries

Robot:
- Unpredictable behavior
- Bugs in the program
- Breakages
- Jammed parts
- Incorrect decisions, which could lead to:
  - Stopped production
  - The robot breaks
  - Environmental damage
  - Accidents and injuries

All these risks also have economic consequences.

5.1 Future research

A natural next step would be to build systems based on learning in industrial robots. In this project, a step-by-step implementation of the SVM algorithm for error recovery in production on the ABB robot in RWPM is proposed as a future improvement.

5.1.1 Implementation of the proposed error recovery tool based on SVM

At launch the system is untrained, which means that the SVM algorithm contains no training data. This gives the operator the possibility to create the desired error recovery, adapted to the environment. A second advantage is ease of use for the operators, since there are no rules to follow at the early stages. The idea is to let the operators create their own rules during production whenever errors occur. In the beginning, the robot will always stop when an error occurs, to let the operator choose the required solution. The different solution cases are predefined and presented on the FlexPendant touch screen together with the inputs. The operators only need to choose the desired solution (supported by the given input information) and add it to the SVM training set. Inputs and output solutions are predefined in the proposed example below, but they can be changed according to customer requirements. The rules are set by the example below.

1. When an error occurs, the robot will STOP immediately. The FlexPendant screen in production will automatically change to the new screen below.

[FlexPendant screenshot: input status list with the buttons "Press button to add to SVM training model" and "Save the procedure".]

The new screen shows the operator the input status after the STOP. On the basis of this input information, the operator can choose the appropriate solution.

2. After choosing the desired error recovery solution, press the button to the right to add the case as a training example to the SVM algorithm.
3. The saving step is needed to confirm that the procedure has been trained and saved. A pop-up window confirms that the procedure succeeded.
4. If the operator wishes to ignore adding the case to the training set, press the Cancel button.
5. Steps 1-3 are repeated until the network is completely trained.

When is the training complete? When is the network fully trained, so that it gives correctly classified solutions for all cases? After a number of added training examples, the SVM training model converges to 100 % accuracy, which means that the robot will classify the correct solution in every case. The estimated number of training examples needed depends on the input vector matrix. In running production, the status panel on the FlexPendant production screen will have the appearance described above (steps 1 to 5) whenever an error occurs. In the status panel, Cell Status tells the operator that something is wrong in the production. Once the SVM algorithm is completely trained, the system will activate and run the SVM algorithm according to the given inputs shown on the screen.
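A minimal sketch of how such an operator-driven training tool could be structured is given below. The class name, signal encoding and JSON persistence format are illustrative assumptions and not part of RWPM or the FlexPendant API.

```python
# Sketch of the proposed operator-driven training tool. Whenever an error
# stops the robot, the operator picks a solution class for the current input
# signals; the case is appended to the training set and the SVM is re-trained.
# All names and the on-disk format are hypothetical illustrations.
import json
import numpy as np
from sklearn.svm import SVC

class ErrorRecoveryTrainer:
    def __init__(self, path="svm_training_set.json"):
        self.path = path
        self.inputs, self.labels = [], []
        self.model = None

    def add_case(self, input_signals, chosen_solution):
        """Called when the operator presses 'add to SVM training model'."""
        self.inputs.append(list(input_signals))
        self.labels.append(int(chosen_solution))
        if len(set(self.labels)) >= 2:          # SVM needs at least two classes
            self.model = SVC(kernel="linear")
            self.model.fit(np.array(self.inputs), np.array(self.labels))

    def save(self):
        """Called when the operator presses 'Save the procedure'."""
        with open(self.path, "w") as f:
            json.dump({"inputs": self.inputs, "labels": self.labels}, f)

    def suggest_solution(self, input_signals):
        """Used in running production once the model is trained."""
        if self.model is None:
            return None                          # untrained: robot stays stopped
        return int(self.model.predict(np.array([input_signals]))[0])
```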

1. The SVM network is automatically used for error recovery in running production.
2. The output solution from the SVM network is presented on the FlexPendant status screen, see the picture below.
3. To complete the whole error recovery, the robot should follow the criteria of the classified solution.

[FlexPendant status screen showing the classified output solution.]
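Continuing the hypothetical sketch above, production use then reduces to a single prediction call once the model has been trained; the signal vectors and class labels here are made-up examples.

```python
# Hypothetical production-side use of the ErrorRecoveryTrainer sketch above:
# the trained model maps the current input signals to a predefined solution class.
trainer = ErrorRecoveryTrainer()
trainer.add_case([1, 0, 0, 1], 0)   # e.g. "open gripper and retry"
trainer.add_case([0, 1, 1, 0], 2)   # e.g. "move to home position"
trainer.save()

solution = trainer.suggest_solution([1, 0, 0, 1])
print("classified solution class:", solution)
```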
