Experimental Analysis on Character Recognition using Singular Value Decomposition and Random Projection
Manjusha K. 1, Anand Kumar M. 2, Soman K. P. 3
Centre for Excellence in Computational Engineering and Networking, Amrita Vishwa Vidyapeetham, Amrita School of Engineering, Coimbatore, India
1 manjushagecpkd@gmail.com 2 m_anandkumar@cb.amrita.edu 3 kp_soman@amrita.edu

Abstract — Character recognition, a specific problem in the area of pattern recognition, is a sub-process in most Optical Character Recognition (OCR) systems. Singular Value Decomposition (SVD) is a promising and efficient dimensionality reduction method which has already been applied and proven in the area of character recognition. Random Projection (RP) is a recently evolved dimension reduction algorithm which can scale with large datasets. In this paper, we apply SVD and RP as feature extraction techniques for character recognition and experiment with k-Nearest Neighbor (k-NN) and Support Vector Machine (SVM) classifiers. Our experiments are conducted on the MNIST handwritten digit database and a Malayalam character image database. On both databases, SVD features achieve a lower misclassification rate than RP-based features. SVD features with an SVM classifier using the RBF kernel achieve a misclassification rate of 2.53% on the MNIST database and 2.87% on the Malayalam character image database.

Keywords — Character Recognition, Singular Value Decomposition, Random Projection, Pattern Recognition, Support Vector Machine, Dimensionality Reduction

I. INTRODUCTION

Character recognition research dates back to the 1950s. It is the process of identifying the character or symbol represented by a given image, and it comprises feature extraction followed by classification. Feature extraction, in the context of character recognition, maps the segmented character image to a real-valued vector which can better describe the character in that image. Features extracted from character images of different classes should have a good separating boundary to achieve good accuracy in character classification.
Thus the feature extraction algorithm used for character recognition can greatly influence the accuracy of the whole recognition process. Sometimes different feature extraction algorithms are combined to achieve higher classification accuracy. To deal with high-dimensional feature vectors, an efficient dimension reduction technique is desirable. The best way to reduce the dimension of data is to project the data vector onto a reasonably low-dimensional orthogonal subspace that captures most of the variation present in the high-dimensional data. The amount of distortion in the data and the computational complexity are the two evaluation parameters for dimension reduction algorithms [1]. SVD is one of the most widely used dimension reduction methods. It is a matrix factorization technique which breaks the data matrix into simple and meaningful pieces, and it has been applied successfully in several data processing applications [15][16][17]. It can transform the data matrix from a higher to a lower dimension without much information loss. SVD is applied to the character image dataset, and the eigenvectors associated with the most significant principal components are retained for classification. The disadvantage of SVD is that it becomes computationally expensive as the data dimension increases. Random Projection (RP) is a recently emerged, simple, powerful and computationally efficient dimension reduction technique [6]. RP projects the data to a lower-dimensional subspace using a random matrix whose columns have unit length. Another advantage of RP over SVD is that it is data independent: the random directions chosen do not depend on the dataset being used. RP has strong theoretical backing [2] and has been applied successfully in different machine learning applications [4][5][6][8]. Our paper is a comparative study between SVD and RP in the character recognition process. After feature extraction, the feature vector is mapped to a class label or membership scores with the help of machine learning algorithms.
Different supervised classification algorithms can be used for character recognition; k-Nearest Neighbor (k-NN) and Support Vector Machine (SVM) are used for our experiments. k-NN finds the k closest neighbors of a given feature vector among the training samples and decides the final class label, while SVM creates a hyperplane which maximally separates the different classes and is used for classifying new test feature vectors [18].

Instead of applying dimensionality reduction on the image itself, it can be applied on features extracted from the image. Wavelet and Radon transforms can produce robust feature descriptors [19][20]. With the wavelet transform, the image can be decomposed into four sub-images using high- and low-pass filters. The Radon transform represents the directional features in the image [20]. Besides transform-domain features, structural features can also produce good feature descriptors.

Our experiments are conducted on the MNIST handwritten digit database and a Malayalam character image database. The challenges in recognizing the Malayalam script, the official language of Kerala, are the similarity among character symbol shapes and the large number of characters present in the script. The Malayalam language has a large number of character symbols representing vowels, consonants, half-consonants (known as chillu), vowel modifiers and conjunct characters (formed from consonant-consonant/vowel combinations). Script revision and non-standard font design are other main challenges in Malayalam character recognition [12]. Neeba and Jawahar [21] conducted an empirical study of different pattern classification schemes for the Malayalam script and observed that SVM outperforms other classification approaches on Malayalam character recognition, but that its performance degrades as the number of classes increases. Neural network classifiers with wavelet features achieved an accuracy of 92% on Malayalam character recognition [22]. SVD has been applied to 90 classes of the Malayalam language [14] for dimensionality reduction, with the reduced features classified using the k-Nearest Neighbor (k-NN) algorithm at an accuracy of 94%. In this paper, we apply the technique described in [14] for creating SVD-based features. RP features are created using the algorithm given in Section 3. The performance of both feature types is evaluated using k-NN and SVM classifiers on the Malayalam character image database and the MNIST handwritten digit database.

ISSN : Vol 7 No 4 Aug-Sep
Sections 2 and 3 describe the technical details of SVD and RP respectively. Section 4 covers the classification algorithms used in our experiments. Section 5 presents our detailed experiments and results. Section 6 concludes and summarizes our work with SVD and RP on character recognition.

II. SINGULAR VALUE DECOMPOSITION (SVD)

SVD represents or transforms the data in the matrix onto those axes along which the variation among the data is maximum, while keeping the total variation among the data the same. It is basically a matrix factorization method, which factorizes any rectangular matrix A of size m × n into three matrices U, S and V. U is an m × m orthogonal matrix whose columns are eigenvectors of AA^T. V is an n × n orthogonal matrix whose columns are eigenvectors of A^T A. S is an m × n rectangular diagonal matrix with positive real values on its diagonal, ordered decreasingly [25]. The basic SVD process can alternatively be described as follows: if the rank of A is n, then U = (u_1 u_2 u_3 … u_n), V = (v_1 v_2 v_3 … v_n) and S = diag(s_1, s_2, s_3, …, s_n), and A can be represented as in (1):

A = Σ_{i=1}^{n} s_i u_i v_i^T   (1)

In the summation given in (1), even if we discard the last few terms, the result still approximates A. The last few terms mostly capture noise, and those values do not contribute to the information contained in A. This property is what makes SVD useful for dimensionality reduction and noise removal.

In our experiments, we created the SVD features of the training and testing character images using the technique described in [14]. The algorithm finds the U, S and V factors of a data matrix D using SVD. The data matrix is created by appending the vectorized form of all the character images in the training dataset. A subset of V and S are then multiplied together to obtain the reduced representation of the training images; these are the training features. The testing features are created by projecting each testing character image vector onto the subset of U. The subsets of U, S and V are selected based on the number of singular values we are considering for the experiment.
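As a minimal sketch of this feature-creation procedure, using NumPy's SVD routine and random data as a hypothetical stand-in for the vectorized character images:

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.random((784, 100))   # hypothetical data matrix: 100 vectorized 28x28 images as columns
k = 30                       # number of singular values retained

# Factorize the training data matrix: D = U S V^T.
U, s, Vt = np.linalg.svd(D, full_matrices=False)
Uk, sk, Vkt = U[:, :k], s[:k], Vt[:k, :]

# Training features: the subset of S times the subset of V^T,
# giving one k-dimensional feature column per training image.
train_feats = np.diag(sk) @ Vkt          # shape (k, 100)

# Testing features: project each test image vector onto the subset of U.
x_test = rng.random(784)
test_feat = Uk.T @ x_test                # shape (k,)
```

Since S V^T = U^T D, the training features equal Uk.T @ D, so training and testing images land in the same k-dimensional subspace.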
The number of singular values is the varying parameter for our SVD-based feature creation process.

III. RANDOM PROJECTIONS (RP)

Random Projection is achieved by projecting the d-dimensional data onto a k-dimensional space using a random matrix, where k << d (k much smaller than d). Let our data matrix be denoted by D, of size d × N, where d denotes the feature dimension and N is the number of data observations. Let R be a random matrix of size k × d, with the columns of R normalized to give an approximately orthonormal matrix. The random projection P is then calculated as shown in equation (2):

P = RD   (2)
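A minimal NumPy sketch of this projection, with the column normalization described above (the data sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, k = 784, 100, 50            # original dimension, observations, reduced dimension
D = rng.random((d, N))            # data matrix, one observation per column

# Random matrix with N(0, 1) entries; normalizing its columns makes it
# approximately orthonormal, as described above.
R = rng.standard_normal((k, d))
R /= np.linalg.norm(R, axis=0)

P = R @ D                         # random projection, shape (k, N)
```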
The dimension of P will be k × N; the data dimension is thus reduced from d to k. RP can also be considered a linear transformation in which the basis is changed to random vectors. The whole process is of order O(dkN). Random projection relies on the result that, in a high-dimensional space, there are far more nearly orthogonal directions than exactly orthogonal ones [2]. As the dimension increases, the random matrix vectors become almost orthogonal and the matrix R^T R approximates the identity matrix. Random projection approximately preserves the pairwise distances between the data vectors in the reduced dimension, while SVD gives no such guarantee. If the entries of the random matrix are drawn from a normal distribution with zero mean and unit variance, the distortion introduced by the random mapping to the inner product between two data vectors is zero on average, and its variance is at most the inverse of the reduced dimension [1]. Apart from the inner product, the Euclidean measure can be used to find the similarity between data vectors. The Johnson-Lindenstrauss lemma [3] states that the data dimension can be reduced to k = O(log(N)/ε²) such that the distances between the points are approximately preserved (i.e., not distorted by more than a factor of 1 ± ε, for any 0 < ε < 1). Some previous studies [1] suggest, however, that this lower bound for k is not tight and that even lower k values give good results.

The random projection algorithm for feature extraction can be summarized as follows:
1. Generate a random matrix R of size k × d.
2. Normalize the columns of R.
3. Find the random projection P = RD.

The third step can be parallelized if the number of data samples is very large: each column of D can be taken separately and its projection vector computed. In RP, the feature size depends on the parameter k, the row size of the random matrix R; the feature size increases with k.

IV. FEATURE CLASSIFICATION

After mapping the character images from image space to feature space, the feature vector has to be mapped, or classified, to one of the predefined character classes.
Supervised and unsupervised machine learning algorithms can be applied for feature classification. In this paper, we use two supervised machine learning algorithms for classifying the feature vector: k-Nearest Neighbor (k-NN) and Support Vector Machine (SVM).

A. k-Nearest Neighbor Classifier

k-NN is one of the simplest machine learning techniques and can be used for both classification and regression. In k-NN, no explicit training phase is needed; it simply stores all the features extracted from the training images along with their class labels. During the testing phase, it labels each new instance (a feature extracted from a testing image) with the class that is most common among its k nearest training features, found with the help of a similarity measure. In k-NN, two parameters can be varied: the value of k and the similarity measure used for calculating distance. Cross-validation can be used to estimate the k value for which k-NN gives good accuracy on the dataset. Different distance functions can be used for calculating the similarity between a testing feature and the training features; in our experiments we use the Euclidean distance.

B. Support Vector Machine

The SVM classifier is basically a binary (two-class) classifier based on discriminant functions, which represent a surface separating the observations so that observations from the two classes lie on opposite sides of the surface. Consider a labeled training dataset (x_i, y_i), i = 1, …, n, with y_i ∈ {-1, 1} and x_i ∈ R^d, where n is the total number of samples in the training set and d is the dimensionality of each sample. Suppose we have a hyperplane that separates the positive from the negative samples. The points x which lie on the hyperplane satisfy w·x + b = 0, where w is normal to the hyperplane and b is the offset [26]. For the linearly separable case, SVM finds the separating hyperplane with the largest margin, i.e., the one for which the training data satisfy the following constraints:

w·x_i + b ≥ +1, for y_i = +1
w·x_i + b ≤ -1, for y_i = -1

Identification of the optimal separating hyperplane involves maximization of an appropriate objective function, i.e., solving the following quadratic optimization problem (3) when training an SVM.
Maximize

Σ_{i=1}^{n} α_i − (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} α_i α_j y_i y_j K(x_i, x_j)   (3)

subject to α_i ≥ 0 and Σ_{i=1}^{n} α_i y_i = 0, for i = 1, 2, …, n,

where the α_i are the Lagrange multipliers corresponding to each of the training data points x_i. The function K is the kernel function, defined as K(x, y) = Φ(x)·Φ(y), where Φ maps the data points in d dimensions to a higher-dimensional space [26]. The kernels used in our experiments are linear, polynomial and radial basis function (RBF):

1. Linear kernel: K(x, y) = x^T y (Φ is the identity; no mapping)
2. Polynomial kernel: K(x, y) = (x^T y)^d or (x^T y + 1)^d (d represents the degree)
3. Radial basis function (RBF) kernel: K(x, y) = exp(−γ ||x − y||²) (γ represents the kernel bandwidth)

The result of the training phase in SVM is the identification of a set of labeled support vectors and a set of coefficients α_i. Support vectors are the samples near the decision boundary, the ones most difficult to classify. The decision for a test sample x is made from (4):

y = sgn( Σ_{i=1}^{n} α_i y_i K(x_i, x) )   (4)

For extending the binary SVM to a multiclass classification problem there are two approaches, direct and indirect [26]. In the indirect approach, several binary SVMs are constructed and combined to find the final target class (e.g., one-against-one); in the direct approach, all the classes are considered in a single optimization formulation. In our experiments we used the one-against-one approach for multiclass SVM classification.

V. EXPERIMENTAL RESULTS

Experiments are conducted on our Malayalam character image database and the MNIST handwritten digit database [24]. The Malayalam character image database is built from scanned document images of Malayalam story books and magazines. The image database includes the consonants, vowels, vowel modifiers (diacritics) and conjunct characters of the Malayalam script; in total, 135 character classes are considered. 75% of the character images of each class are taken as training images and the remaining 25% as testing images. The MNIST handwritten digit database contains 60,000 training samples and 10,000 testing samples.
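The three kernels and the decision rule (4) from Section IV can be sketched directly in NumPy; the support vectors, labels and multipliers below are hypothetical values for illustration, not the output of an actual training run:

```python
import numpy as np

# The three kernel functions used in the experiments.
def linear_kernel(x, y):
    return x @ y

def polynomial_kernel(x, y, d=2):
    return (x @ y + 1) ** d

def rbf_kernel(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def svm_decision(x, support_vectors, labels, alphas, kernel):
    """Decision rule (4): sign of the kernel-weighted sum over support vectors."""
    return np.sign(sum(a * y * kernel(sv, x)
                       for a, y, sv in zip(alphas, labels, support_vectors)))

# Hypothetical support vectors on either side of the origin.
svs = [np.array([1.0]), np.array([-1.0])]
labels = [+1, -1]
alphas = [1.0, 1.0]
pred = svm_decision(np.array([2.0]), svs, labels, alphas, linear_kernel)
```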
MATLAB is used as the experimental platform for our trials. The misclassification rate is the evaluation measure used to judge the performance of the techniques in our experiments; it is the percentage of testing samples classified incorrectly among the total number of test samples. For the first experiment, SVD features extracted from the character images in the Malayalam character image database are tested with the k-NN classifier. SVD features for different numbers of singular values are extracted and classified with different k values (k = 1, 2, …, 9) in the k-NN classifier. The misclassification rates obtained for features with different numbers of singular values are analyzed, and the range of singular values giving the lowest misclassification rate is identified. The results are listed in TABLE I. The lowest misclassification rate is obtained for k = 1; for all other k values up to 9, the ranges of singular values giving the lowest misclassification are likewise listed in TABLE I. A graph of the lowest misclassification rate against the different k values in the k-NN classifier is shown in Fig. 1.
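The k-NN evaluation loop just described (Euclidean distance, majority vote, misclassification rate, sweeping k from 1 to 9) can be sketched as follows; the two-class toy data is a hypothetical stand-in for the extracted features:

```python
import numpy as np

def knn_predict(train_X, train_y, test_X, k):
    """Label each test row with the majority class among its k nearest
    training rows under Euclidean distance."""
    d2 = ((test_X[:, None, :] - train_X[None, :, :]) ** 2).sum(-1)
    nearest = np.argsort(d2, axis=1)[:, :k]
    return np.array([np.bincount(train_y[row]).argmax() for row in nearest])

def misclassification_rate(pred, truth):
    """Percentage of test samples classified incorrectly."""
    return 100.0 * np.mean(pred != truth)

# Two well-separated hypothetical classes.
rng = np.random.default_rng(0)
train_X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(5, 0.5, (20, 2))])
train_y = np.array([0] * 20 + [1] * 20)
test_X = np.vstack([rng.normal(0, 0.5, (5, 2)), rng.normal(5, 0.5, (5, 2))])
test_y = np.array([0] * 5 + [1] * 5)

rates = {k: misclassification_rate(knn_predict(train_X, train_y, test_X, k), test_y)
         for k in range(1, 10)}
```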
TABLE I. LOWEST MISCLASSIFICATION RATE OBTAINED FOR K-NN CLASSIFIER FOR SVD FEATURES ON MALAYALAM CHARACTER IMAGE DATABASE
(columns: k | Lowest Misclassification Rate (%) | Range of Number of Singular Values)

Fig. 1. Visual representation of the misclassification rate variation obtained for the ranges of singular values listed in TABLE I, in the k-NN classifier, for different k values on the Malayalam character image database.

Among the different k values in k-NN, 1-NN gives the lowest misclassification rate. The same SVD features are then tested with the SVM classifier. For testing with the multiclass SVM classifier, we used the LibSVM tool [23]. During training, linear, polynomial and radial basis function (RBF) kernels are applied in the SVM classifier and tested with the features. The misclassification rates obtained for SVD features created with different numbers of singular values, using the different kernels of the SVM classifier, are shown in Fig. 2. The RBF kernel performs better than the linear and polynomial kernels. An average misclassification rate of 2.87% is obtained with the RBF kernel on the multiclass SVM classifier. From Fig. 2 it is clear that for a small number of singular values the misclassification rate is high; as the number of singular values increases, the misclassification rate tends to decrease. On the Malayalam character image database, the SVD-SVM combination performs better than the SVD-k-NN combination.

In the experiments above, we used the vectorized form of the character images when applying SVD. Instead of using the image vector directly, a pre-processing step can be applied to the character images to extract features, with SVD then applied to those extracted features. We used Wavelet Transform, Radon Transform and Projection Profile features for pre-processing on the Malayalam character image database. Wavelet transform features are extracted by taking the approximation coefficients after applying wavelet decomposition with Haar wavelets to the character images.
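For illustration, one level of the 2-D Haar decomposition's approximation band can be computed by block averaging. This is an unnormalized sketch (a wavelet library would apply a 1/√2 scaling per filtering stage), shown on a hypothetical 4 × 4 image:

```python
import numpy as np

def haar_approximation(img):
    """One level of 2-D Haar decomposition, approximation (low-low) band:
    low-pass filter and downsample along rows, then along columns."""
    rows = (img[0::2, :] + img[1::2, :]) / 2.0
    return (rows[:, 0::2] + rows[:, 1::2]) / 2.0

img = np.arange(16.0).reshape(4, 4)   # hypothetical 4x4 "character image"
cA = haar_approximation(img)          # 2x2 approximation sub-image
```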
The Radon transform computes line integrals from multiple sources along parallel paths in a given direction; in our experiments the angle is varied between 0 and 180 degrees. Projection profile features are computed by summing the pixel values of the character image horizontally and vertically. The same SVD approach is applied to the extracted features with the SVM classifier, and the misclassification rates obtained are tabulated in TABLE II.
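The projection profile features just described reduce to row and column sums; a minimal sketch on a hypothetical binary image:

```python
import numpy as np

def projection_profile(img):
    """Horizontal and vertical projection profiles: sum of pixel values
    per row and per column, concatenated into one feature vector."""
    return np.concatenate([img.sum(axis=1), img.sum(axis=0)])

img = np.array([[0, 1, 1],
                [1, 1, 0]])
feats = projection_profile(img)   # row sums followed by column sums
```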
TABLE II. SVD APPROACH WITH DIFFERENT PRE-PROCESSING METHODS ON MALAYALAM CHARACTER IMAGE DATABASE

Feature Name                        | Least Misclassification Rate Obtained (%)
Raw Image (without pre-processing)  | 2.81
Wavelet Transform                   | 2.51
Radon Transform                     | 4.5
Projection Profile                  |

Fig. 2. Graph showing the misclassification rate on the Malayalam character image database when SVD features for different numbers of singular values are classified with the SVM classifier. Linear, polynomial and radial basis function kernels inside SVM are tested.

Fig. 3. a) Graph showing the average misclassification rate obtained for the RP features classified with k-NN on the Malayalam character image database; the lowest misclassification is for RP feature size = 425. b) Misclassification for different k in the k-NN classifier for RP feature size 425.

In our approach, SVD works on the whole training dataset to generate its factor matrices, so applying SVD becomes computationally expensive as the number of training images increases. RP is a dimension reduction algorithm that scales with a growing dataset. We applied RP to the Malayalam character image database and tested it with the k-NN and SVM classifiers. The size of the random matrix determines the number of RP features. In our experiments on the Malayalam character image database, we varied the RP feature size from 50 to 500 in steps of 25 (50, 75, 100, …, 500). These RP features are classified with the k-NN and SVM classifiers. The misclassification rates obtained for different RP feature sizes with the k-NN classifier are shown in Fig. 3. From Fig. 3, it is clear that the misclassification rate initially decreases as the RP feature size increases. Once the RP feature size crosses 425, the misclassification rate is below 10% for all k in the k-NN classifier. We obtained the lowest misclassification rate of 9% for RP feature size 425 with the 1-NN and 2-NN classifiers. The RP features from the Malayalam character image database are also classified with the SVM classifier using the linear, polynomial and radial basis function kernels; the results are shown graphically in Fig. 4. The RBF kernel performs better than the other two kernels on RP features.
As the RP feature size increases and reaches 350, the linear kernel achieves almost the same result as the RBF kernel. RP features of size 350 achieve a misclassification rate of 4.21% with the SVM classifier using the RBF kernel.

We performed the same set of experiments with SVD and RP on the MNIST handwritten digit database, in which each character image is of size 28 × 28. All the images in the MNIST database are converted to binary images before applying SVD and RP. In vectorized form, each binarized character image is a feature vector of size 784. All the binarized training image vectors are appended together to create the training data matrix. SVD features are extracted for different numbers of singular values from the factorized training data matrix. The obtained features are first tested with the k-NN classifier; the misclassification rates obtained for different values of k are listed in TABLE III. From TABLE III it is clear that for feature sizes in the range 32-38, the 5-NN classifier achieves the lowest misclassification rate of 3.24%. The results imply that a reduced dimension in the thirties, down from the original dimension of 784, can capture most of the information contained in the training data matrix. The SVD features from the MNIST database are then applied to the SVM classifier. The misclassification rates obtained for SVD features with different kernels of the SVM classifier are shown in Fig. 5. The RBF kernel outperforms the linear and polynomial kernels on the SVD features from MNIST. An average misclassification rate of 2.53% is obtained with the RBF kernel in the SVM classifier. From Fig. 5 it can be seen that the misclassification rate is high for very small numbers of singular values, decreases as the number of singular values increases, and reaches a minimum range. The misclassification rate increases slightly when the feature size grows beyond 50 for both the RBF and linear kernels. The SVD-SVM combination thus achieves a lower misclassification rate than the SVD-k-NN combination on the MNIST database.
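The MNIST pre-processing described above (binarize, then vectorize to 784 dimensions) can be sketched as follows; the image and the global threshold are hypothetical stand-ins, since the paper does not specify its binarization rule:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(28, 28))   # hypothetical 28x28 grayscale digit image

binary = (img > 127).astype(np.uint8)       # assumed global threshold for binarization
vec = binary.reshape(-1)                    # feature vector of size 784
```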
The RP features extracted from the handwritten digit images of the MNIST database are classified with the k-NN and SVM classifiers. RP feature sizes from 50 to 500 are considered and tested with k-NN for different k values; the results are listed in TABLE IV. As the RP feature size increases, the accuracy in k-NN also increases. 4-NN gives the lowest misclassification rate for the majority of the RP feature sizes. It can be seen that at higher feature dimensions the difference in misclassification rate between different k values in the k-NN classifier is very small. The lowest misclassification rate is obtained with 4-NN at RP feature sizes of 450 and 475. The same RP features are tested with the SVM classifier using the linear, polynomial and radial basis function kernels, and the results are also listed in TABLE IV. The RBF kernel again performs better than the linear and polynomial kernels; the lowest misclassification rate of 5.56% is obtained at RP feature size 150 with the RBF kernel. The results of the RP features on the k-NN and SVM classifiers show little difference between the lowest misclassification rates of the two classifiers: k-NN achieves its lowest misclassification rate of 5.98% at RP feature size 450, while SVM achieves 5.56% at RP feature size 150. k-NN thus achieves a misclassification rate almost comparable to the SVM classifier, but at a higher RP feature dimension on the MNIST database.

Fig. 4. Graph of misclassification rate vs. RP feature size, when classified with SVM on the Malayalam character image database.
TABLE III. LOWEST MISCLASSIFICATION RATE OBTAINED FOR K-NN CLASSIFIER FOR SVD FEATURES ON MNIST DATABASE
(columns: k | Range of Number of Singular Values for which Lowest Misclassification Rate Obtained | Average Misclassification Rate on the Range (%))

Fig. 5. Graph showing the misclassification rate obtained for SVD features using the SVM classifier with linear, polynomial and radial basis function kernels on the MNIST database.
TABLE IV. MISCLASSIFICATION RATE OF RP FEATURES ON MNIST DATABASE
(columns: RP Feature Dimension | RP+k-NN Misclassification Rate (%) for k = 1, …, 9 | RP+SVM Misclassification Rate (%) for Linear, Polynomial and RBF kernels)

VI. CONCLUSION

Dimension reduction algorithms play a great role in almost all machine learning applications. We have compared the performance of SVD and RP, two dimension reduction techniques, with the help of two classification algorithms, k-NN and SVM, in the context of character recognition. The experiments were conducted on a Malayalam character image database (135 different classes) and the MNIST handwritten digit database. SVD features give better accuracy with the SVM classifier than with k-NN on both databases. For RP features on the Malayalam character image database, the RP-SVM combination performs better than the RP-k-NN combination; on the MNIST database the performance of RP-k-NN and RP-SVM is almost the same, except that k-NN performs well only at high RP feature dimensions. Comparing SVD and RP features, SVD dominates RP on both image databases used in our experiments. For applications handling massive data, however, the SVD approach we used becomes computationally expensive while RP scales well. Our future work includes improving the performance of RP to make it comparable to SVD, and making the SVD approach practical for applications involving growing datasets.

REFERENCES
[1] Ella Bingham and Heikki Mannila, "Random projection in dimensionality reduction: Applications to image and text data," in Proc. KDD'01.
[2] R. Hecht-Nielsen, "Context vectors: general purpose approximate meaning representations self-organized from raw data," in Computational Intelligence: Imitating Life, IEEE Press.
[3] W. Johnson and J. Lindenstrauss, "Extensions of Lipschitz mapping into a Hilbert space," in Conf. on Modern Analysis and Probability, vol. 26.
[4] S. Kaski, "Dimensionality reduction by random mapping," in Proc. Int. Joint Conf. on Neural Networks, vol. 1.
[5] S. Dasgupta, "Experiments with random projection," in Proc. UAI '00.
[6] Tetsuya Takiguchi, Jeff Bilmes, Mariko Yoshii and Yasuo Ariki, "Evaluation of Random Projection Based Feature Combination on Speech Recognition," in IEEE International Conf. on Acoustics, Speech and Signal Processing (ICASSP).
[7] Xiaoli Zhang Fern and Carla E. Brodley, "Random Projection for High Dimensional Data Clustering: A Cluster Ensemble Approach," in Proc. of the 20th International Conference on Machine Learning.
[8] Navin Goel, George Bebis and Ara Nefian, "Face Recognition Experiments with Random Projection," in Proc. of the SPIE, vol. 5779.
[9] Armin Eftekhari, Massoud Babaie-Zadeh and Hamid Abrishami Moghaddam, "Two-dimensional random projection," Signal Processing, vol. 91.
[10] Michael B. Wakin and Richard G. Baraniuk, "Random Projections of Signal Manifolds," in Proc. IEEE International Conf. on Acoustics, Speech and Signal Processing, vol. 5.
[11] Magnus Sahlgren, "An Introduction to Random Indexing," in Methods and Applications of Semantic Indexing Workshop at the 7th International Conf. on Terminology and Knowledge Engineering.
[12] Setlur, Srirangaraj, Guide to OCR for Indic Scripts, edited by Venu Govindaraju, Springer London.
[13] U. Pal and B. B. Chaudhuri, "Indian script character recognition: A survey," Pattern Recognition, Elsevier, vol. 37, no. 9.
[14] S. Sachin Kumar, K. Manjusha and K. P. Soman, "Novel SVD Based Character Recognition Approach for Malayalam Language Script," in Recent Advances in Intelligent Informatics, Advances in Intelligent Systems and Computing, vol. 235.
[15] Soren Holdt Jensen, Per Christian Hansen, Steffen Duus Hansen and John Aasted Sorensen, "Reduction of Broad-Band Noise in Speech by Truncated QSVD," IEEE Transactions on Speech and Audio Processing, vol. 3, no. 6.
[16] Vladimir I. Gorodetski, Leonard J. Popyack, Vladimir Samoilov and Victor A. Skormin, "SVD-Based Approach to Transparent Embedding Data into Digital Images," in Proc. MMM-ACNS '01.
[17] W. Zhao, R. Chellappa and A. Krishnaswamy, "Discriminant Analysis of Principal Components for Face Recognition," in Proc. FG '98, p. 336.
[18] Mohamed Cheriet, Nawwaf Kharma, Cheng-Lin Liu and Ching Y. Suen, Character Recognition Systems: A Guide for Students and Practitioners, Wiley-Interscience: John Wiley & Sons, Inc., Hoboken, New Jersey.
[19] S. Mallat, "A theory for multiresolution signal decomposition: the wavelet representation," IEEE Trans. Pattern Anal. and Machine Intell., vol. 11, no. 7.
[20] Dattatray V. Jadhav and Raghunath S. Holambe, "Feature extraction using Radon and wavelet transforms with application to face recognition," Neurocomputing, vol. 72.
[21] N. V. Neeba and C. V. Jawahar, "Empirical Evaluation of Character Classification Schemes," in ICAPR, 7th International Conference on Advances in Pattern Recognition, 2009.
[22] M. Abdul Rahman and M. S. Rajasree, "OCR for Malayalam Script Using Neural Networks," in ICUMT '09, International Conference on Ultra Modern Telecommunications & Workshops.
[23] Chih-Chung Chang and Chih-Jen Lin, "LIBSVM: a library for support vector machines," ACM Transactions on Intelligent Systems and Technology.
[24] Y. LeCun and C. Cortes, "MNIST handwritten digit database" (accessed 10 Nov. 2014).
[25] D. Kalman, "A Singularly Valuable Decomposition: The SVD of a Matrix," The College Mathematics Journal.
[26] K. P. Soman, R. Loganathan and V. Ajay, Machine Learning with SVM and Other Kernel Methods, PHI Learning Pvt. Ltd.
More informationA Modified Median Filter for the Removal of Impulse Noise Based on the Support Vector Machines
A Modfed Medan Flter for the Removal of Impulse Nose Based on the Support Vector Machnes H. GOMEZ-MORENO, S. MALDONADO-BASCON, F. LOPEZ-FERRERAS, M. UTRILLA- MANSO AND P. GIL-JIMENEZ Departamento de Teoría
More informationClassifier Selection Based on Data Complexity Measures *
Classfer Selecton Based on Data Complexty Measures * Edth Hernández-Reyes, J.A. Carrasco-Ochoa, and J.Fco. Martínez-Trndad Natonal Insttute for Astrophyscs, Optcs and Electroncs, Lus Enrque Erro No.1 Sta.
More informationHuman Face Recognition Using Generalized. Kernel Fisher Discriminant
Human Face Recognton Usng Generalzed Kernel Fsher Dscrmnant ng-yu Sun,2 De-Shuang Huang Ln Guo. Insttute of Intellgent Machnes, Chnese Academy of Scences, P.O.ox 30, Hefe, Anhu, Chna. 2. Department of
More informationClassification of Face Images Based on Gender using Dimensionality Reduction Techniques and SVM
Classfcaton of Face Images Based on Gender usng Dmensonalty Reducton Technques and SVM Fahm Mannan 260 266 294 School of Computer Scence McGll Unversty Abstract Ths report presents gender classfcaton based
More informationLecture 5: Multilayer Perceptrons
Lecture 5: Multlayer Perceptrons Roger Grosse 1 Introducton So far, we ve only talked about lnear models: lnear regresson and lnear bnary classfers. We noted that there are functons that can t be represented
More informationFace Recognition Based on SVM and 2DPCA
Vol. 4, o. 3, September, 2011 Face Recognton Based on SVM and 2DPCA Tha Hoang Le, Len Bu Faculty of Informaton Technology, HCMC Unversty of Scence Faculty of Informaton Scences and Engneerng, Unversty
More informationParallelism for Nested Loops with Non-uniform and Flow Dependences
Parallelsm for Nested Loops wth Non-unform and Flow Dependences Sam-Jn Jeong Dept. of Informaton & Communcaton Engneerng, Cheonan Unversty, 5, Anseo-dong, Cheonan, Chungnam, 330-80, Korea. seong@cheonan.ac.kr
More informationDetection of an Object by using Principal Component Analysis
Detecton of an Object by usng Prncpal Component Analyss 1. G. Nagaven, 2. Dr. T. Sreenvasulu Reddy 1. M.Tech, Department of EEE, SVUCE, Trupath, Inda. 2. Assoc. Professor, Department of ECE, SVUCE, Trupath,
More informationAnnouncements. Supervised Learning
Announcements See Chapter 5 of Duda, Hart, and Stork. Tutoral by Burge lnked to on web page. Supervsed Learnng Classfcaton wth labeled eamples. Images vectors n hgh-d space. Supervsed Learnng Labeled eamples
More informationFace Recognition Method Based on Within-class Clustering SVM
Face Recognton Method Based on Wthn-class Clusterng SVM Yan Wu, Xao Yao and Yng Xa Department of Computer Scence and Engneerng Tong Unversty Shangha, Chna Abstract - A face recognton method based on Wthn-class
More informationMachine Learning 9. week
Machne Learnng 9. week Mappng Concept Radal Bass Functons (RBF) RBF Networks 1 Mappng It s probably the best scenaro for the classfcaton of two dataset s to separate them lnearly. As you see n the below
More informationCollaboratively Regularized Nearest Points for Set Based Recognition
Academc Center for Computng and Meda Studes, Kyoto Unversty Collaboratvely Regularzed Nearest Ponts for Set Based Recognton Yang Wu, Mchhko Mnoh, Masayuk Mukunok Kyoto Unversty 9/1/013 BMVC 013 @ Brstol,
More informationS1 Note. Basis functions.
S1 Note. Bass functons. Contents Types of bass functons...1 The Fourer bass...2 B-splne bass...3 Power and type I error rates wth dfferent numbers of bass functons...4 Table S1. Smulaton results of type
More informationHybrid Non-Blind Color Image Watermarking
Hybrd Non-Blnd Color Image Watermarkng Ms C.N.Sujatha 1, Dr. P. Satyanarayana 2 1 Assocate Professor, Dept. of ECE, SNIST, Yamnampet, Ghatkesar Hyderabad-501301, Telangana 2 Professor, Dept. of ECE, AITS,
More informationDiscriminative classifiers for object classification. Last time
Dscrmnatve classfers for object classfcaton Thursday, Nov 12 Krsten Grauman UT Austn Last tme Supervsed classfcaton Loss and rsk, kbayes rule Skn color detecton example Sldng ndo detecton Classfers, boostng
More informationBOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET
1 BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET TZU-CHENG CHUANG School of Electrcal and Computer Engneerng, Purdue Unversty, West Lafayette, Indana 47907 SAUL B. GELFAND School
More informationMULTISPECTRAL IMAGES CLASSIFICATION BASED ON KLT AND ATR AUTOMATIC TARGET RECOGNITION
MULTISPECTRAL IMAGES CLASSIFICATION BASED ON KLT AND ATR AUTOMATIC TARGET RECOGNITION Paulo Quntlano 1 & Antono Santa-Rosa 1 Federal Polce Department, Brasla, Brazl. E-mals: quntlano.pqs@dpf.gov.br and
More informationCLASSIFICATION OF ULTRASONIC SIGNALS
The 8 th Internatonal Conference of the Slovenan Socety for Non-Destructve Testng»Applcaton of Contemporary Non-Destructve Testng n Engneerng«September -3, 5, Portorož, Slovena, pp. 7-33 CLASSIFICATION
More informationLecture 4: Principal components
/3/6 Lecture 4: Prncpal components 3..6 Multvarate lnear regresson MLR s optmal for the estmaton data...but poor for handlng collnear data Covarance matrx s not nvertble (large condton number) Robustness
More informationHermite Splines in Lie Groups as Products of Geodesics
Hermte Splnes n Le Groups as Products of Geodescs Ethan Eade Updated May 28, 2017 1 Introducton 1.1 Goal Ths document defnes a curve n the Le group G parametrzed by tme and by structural parameters n the
More informationCHAPTER 3 SEQUENTIAL MINIMAL OPTIMIZATION TRAINED SUPPORT VECTOR CLASSIFIER FOR CANCER PREDICTION
48 CHAPTER 3 SEQUENTIAL MINIMAL OPTIMIZATION TRAINED SUPPORT VECTOR CLASSIFIER FOR CANCER PREDICTION 3.1 INTRODUCTION The raw mcroarray data s bascally an mage wth dfferent colors ndcatng hybrdzaton (Xue
More informationRecognizing Faces. Outline
Recognzng Faces Drk Colbry Outlne Introducton and Motvaton Defnng a feature vector Prncpal Component Analyss Lnear Dscrmnate Analyss !"" #$""% http://www.nfotech.oulu.f/annual/2004 + &'()*) '+)* 2 ! &
More information2x x l. Module 3: Element Properties Lecture 4: Lagrange and Serendipity Elements
Module 3: Element Propertes Lecture : Lagrange and Serendpty Elements 5 In last lecture note, the nterpolaton functons are derved on the bass of assumed polynomal from Pascal s trangle for the fled varable.
More informationRECOGNIZING GENDER THROUGH FACIAL IMAGE USING SUPPORT VECTOR MACHINE
Journal of Theoretcal and Appled Informaton Technology 30 th June 06. Vol.88. No.3 005-06 JATIT & LLS. All rghts reserved. ISSN: 99-8645 www.jatt.org E-ISSN: 87-395 RECOGNIZING GENDER THROUGH FACIAL IMAGE
More informationAn Image Fusion Approach Based on Segmentation Region
Rong Wang, L-Qun Gao, Shu Yang, Yu-Hua Cha, and Yan-Chun Lu An Image Fuson Approach Based On Segmentaton Regon An Image Fuson Approach Based on Segmentaton Regon Rong Wang, L-Qun Gao, Shu Yang 3, Yu-Hua
More informationSmoothing Spline ANOVA for variable screening
Smoothng Splne ANOVA for varable screenng a useful tool for metamodels tranng and mult-objectve optmzaton L. Rcco, E. Rgon, A. Turco Outlne RSM Introducton Possble couplng Test case MOO MOO wth Game Theory
More informationUsing Fuzzy Logic to Enhance the Large Size Remote Sensing Images
Internatonal Journal of Informaton and Electroncs Engneerng Vol. 5 No. 6 November 015 Usng Fuzzy Logc to Enhance the Large Sze Remote Sensng Images Trung Nguyen Tu Huy Ngo Hoang and Thoa Vu Van Abstract
More informationA Fast Visual Tracking Algorithm Based on Circle Pixels Matching
A Fast Vsual Trackng Algorthm Based on Crcle Pxels Matchng Zhqang Hou hou_zhq@sohu.com Chongzhao Han czhan@mal.xjtu.edu.cn Ln Zheng Abstract: A fast vsual trackng algorthm based on crcle pxels matchng
More informationDetermining the Optimal Bandwidth Based on Multi-criterion Fusion
Proceedngs of 01 4th Internatonal Conference on Machne Learnng and Computng IPCSIT vol. 5 (01) (01) IACSIT Press, Sngapore Determnng the Optmal Bandwdth Based on Mult-crteron Fuson Ha-L Lang 1+, Xan-Mn
More informationFEATURE EXTRACTION. Dr. K.Vijayarekha. Associate Dean School of Electrical and Electronics Engineering SASTRA University, Thanjavur
FEATURE EXTRACTION Dr. K.Vjayarekha Assocate Dean School of Electrcal and Electroncs Engneerng SASTRA Unversty, Thanjavur613 41 Jont Intatve of IITs and IISc Funded by MHRD Page 1 of 8 Table of Contents
More informationAn Image Compression Algorithm based on Wavelet Transform and LZW
An Image Compresson Algorthm based on Wavelet Transform and LZW Png Luo a, Janyong Yu b School of Chongqng Unversty of Posts and Telecommuncatons, Chongqng, 400065, Chna Abstract a cylpng@63.com, b y27769864@sna.cn
More informationUser Authentication Based On Behavioral Mouse Dynamics Biometrics
User Authentcaton Based On Behavoral Mouse Dynamcs Bometrcs Chee-Hyung Yoon Danel Donghyun Km Department of Computer Scence Department of Computer Scence Stanford Unversty Stanford Unversty Stanford, CA
More informationModular PCA Face Recognition Based on Weighted Average
odern Appled Scence odular PCA Face Recognton Based on Weghted Average Chengmao Han (Correspondng author) Department of athematcs, Lny Normal Unversty Lny 76005, Chna E-mal: hanchengmao@163.com Abstract
More informationMachine Learning: Algorithms and Applications
14/05/1 Machne Learnng: Algorthms and Applcatons Florano Zn Free Unversty of Bozen-Bolzano Faculty of Computer Scence Academc Year 011-01 Lecture 10: 14 May 01 Unsupervsed Learnng cont Sldes courtesy of
More informationSupport Vector Machine for Remote Sensing image classification
Support Vector Machne for Remote Sensng mage classfcaton Hela Elmanna #*, Mohamed Ans Loghmar #, Mohamed Saber Naceur #3 # Laboratore de Teledetecton et Systeme d nformatons a Reference spatale, Unversty
More informationClassifying Acoustic Transient Signals Using Artificial Intelligence
Classfyng Acoustc Transent Sgnals Usng Artfcal Intellgence Steve Sutton, Unversty of North Carolna At Wlmngton (suttons@charter.net) Greg Huff, Unversty of North Carolna At Wlmngton (jgh7476@uncwl.edu)
More informationBAYESIAN MULTI-SOURCE DOMAIN ADAPTATION
BAYESIAN MULTI-SOURCE DOMAIN ADAPTATION SHI-LIANG SUN, HONG-LEI SHI Department of Computer Scence and Technology, East Chna Normal Unversty 500 Dongchuan Road, Shangha 200241, P. R. Chna E-MAIL: slsun@cs.ecnu.edu.cn,
More informationEYE CENTER LOCALIZATION ON A FACIAL IMAGE BASED ON MULTI-BLOCK LOCAL BINARY PATTERNS
P.G. Demdov Yaroslavl State Unversty Anatoly Ntn, Vladmr Khryashchev, Olga Stepanova, Igor Kostern EYE CENTER LOCALIZATION ON A FACIAL IMAGE BASED ON MULTI-BLOCK LOCAL BINARY PATTERNS Yaroslavl, 2015 Eye
More informationComparison Study of Textural Descriptors for Training Neural Network Classifiers
Comparson Study of Textural Descrptors for Tranng Neural Network Classfers G.D. MAGOULAS (1) S.A. KARKANIS (1) D.A. KARRAS () and M.N. VRAHATIS (3) (1) Department of Informatcs Unversty of Athens GR-157.84
More informationAppearance-based Statistical Methods for Face Recognition
47th Internatonal Symposum ELMAR-2005, 08-10 June 2005, Zadar, Croata Appearance-based Statstcal Methods for Face Recognton Kresmr Delac 1, Mslav Grgc 2, Panos Latss 3 1 Croatan elecom, Savsa 32, Zagreb,
More informationHistogram of Template for Pedestrian Detection
PAPER IEICE TRANS. FUNDAMENTALS/COMMUN./ELECTRON./INF. & SYST., VOL. E85-A/B/C/D, No. xx JANUARY 20xx Hstogram of Template for Pedestran Detecton Shaopeng Tang, Non Member, Satosh Goto Fellow Summary In
More informationSum of Linear and Fractional Multiobjective Programming Problem under Fuzzy Rules Constraints
Australan Journal of Basc and Appled Scences, 2(4): 1204-1208, 2008 ISSN 1991-8178 Sum of Lnear and Fractonal Multobjectve Programmng Problem under Fuzzy Rules Constrants 1 2 Sanjay Jan and Kalash Lachhwan
More informationImage Representation & Visualization Basic Imaging Algorithms Shape Representation and Analysis. outline
mage Vsualzaton mage Vsualzaton mage Representaton & Vsualzaton Basc magng Algorthms Shape Representaton and Analyss outlne mage Representaton & Vsualzaton Basc magng Algorthms Shape Representaton and
More informationCluster Analysis of Electrical Behavior
Journal of Computer and Communcatons, 205, 3, 88-93 Publshed Onlne May 205 n ScRes. http://www.scrp.org/ournal/cc http://dx.do.org/0.4236/cc.205.350 Cluster Analyss of Electrcal Behavor Ln Lu Ln Lu, School
More informationy and the total sum of
Lnear regresson Testng for non-lnearty In analytcal chemstry, lnear regresson s commonly used n the constructon of calbraton functons requred for analytcal technques such as gas chromatography, atomc absorpton
More informationImprovement of Spatial Resolution Using BlockMatching Based Motion Estimation and Frame. Integration
Improvement of Spatal Resoluton Usng BlockMatchng Based Moton Estmaton and Frame Integraton Danya Suga and Takayuk Hamamoto Graduate School of Engneerng, Tokyo Unversty of Scence, 6-3-1, Nuku, Katsuska-ku,
More informationSHAPE RECOGNITION METHOD BASED ON THE k-nearest NEIGHBOR RULE
SHAPE RECOGNITION METHOD BASED ON THE k-nearest NEIGHBOR RULE Dorna Purcaru Faculty of Automaton, Computers and Electroncs Unersty of Craoa 13 Al. I. Cuza Street, Craoa RO-1100 ROMANIA E-mal: dpurcaru@electroncs.uc.ro
More informationFeature Extractions for Iris Recognition
Feature Extractons for Irs Recognton Jnwook Go, Jan Jang, Yllbyung Lee, and Chulhee Lee Department of Electrcal and Electronc Engneerng, Yonse Unversty 134 Shnchon-Dong, Seodaemoon-Gu, Seoul, KOREA Emal:
More informationThe Greedy Method. Outline and Reading. Change Money Problem. Greedy Algorithms. Applications of the Greedy Strategy. The Greedy Method Technique
//00 :0 AM Outlne and Readng The Greedy Method The Greedy Method Technque (secton.) Fractonal Knapsack Problem (secton..) Task Schedulng (secton..) Mnmum Spannng Trees (secton.) Change Money Problem Greedy
More informationAn Evaluation of Divide-and-Combine Strategies for Image Categorization by Multi-Class Support Vector Machines
An Evaluaton of Dvde-and-Combne Strateges for Image Categorzaton by Mult-Class Support Vector Machnes C. Demrkesen¹ and H. Cherf¹, ² 1: Insttue of Scence and Engneerng 2: Faculté des Scences Mrande Galatasaray
More informationData-dependent Hashing Based on p-stable Distribution
Data-depent Hashng Based on p-stable Dstrbuton Author Ba, Xao, Yang, Hachuan, Zhou, Jun, Ren, Peng, Cheng, Jan Publshed 24 Journal Ttle IEEE Transactons on Image Processng DOI https://do.org/.9/tip.24.2352458
More informationRelevance Feedback Document Retrieval using Non-Relevant Documents
Relevance Feedback Document Retreval usng Non-Relevant Documents TAKASHI ONODA, HIROSHI MURATA and SEIJI YAMADA Ths paper reports a new document retreval method usng non-relevant documents. From a large
More informationTowards Semantic Knowledge Propagation from Text to Web Images
Guoun Q (Unversty of Illnos at Urbana-Champagn) Charu C. Aggarwal (IBM T. J. Watson Research Center) Thomas Huang (Unversty of Illnos at Urbana-Champagn) Towards Semantc Knowledge Propagaton from Text
More informationOutline. Type of Machine Learning. Examples of Application. Unsupervised Learning
Outlne Artfcal Intellgence and ts applcatons Lecture 8 Unsupervsed Learnng Professor Danel Yeung danyeung@eee.org Dr. Patrck Chan patrckchan@eee.org South Chna Unversty of Technology, Chna Introducton
More informationTerm Weighting Classification System Using the Chi-square Statistic for the Classification Subtask at NTCIR-6 Patent Retrieval Task
Proceedngs of NTCIR-6 Workshop Meetng, May 15-18, 2007, Tokyo, Japan Term Weghtng Classfcaton System Usng the Ch-square Statstc for the Classfcaton Subtask at NTCIR-6 Patent Retreval Task Kotaro Hashmoto
More informationAccounting for the Use of Different Length Scale Factors in x, y and z Directions
1 Accountng for the Use of Dfferent Length Scale Factors n x, y and z Drectons Taha Soch (taha.soch@kcl.ac.uk) Imagng Scences & Bomedcal Engneerng, Kng s College London, The Rayne Insttute, St Thomas Hosptal,
More informationA PATTERN RECOGNITION APPROACH TO IMAGE SEGMENTATION
1 THE PUBLISHING HOUSE PROCEEDINGS OF THE ROMANIAN ACADEMY, Seres A, OF THE ROMANIAN ACADEMY Volume 4, Number 2/2003, pp.000-000 A PATTERN RECOGNITION APPROACH TO IMAGE SEGMENTATION Tudor BARBU Insttute
More informationOptimal Workload-based Weighted Wavelet Synopses
Optmal Workload-based Weghted Wavelet Synopses Yoss Matas School of Computer Scence Tel Avv Unversty Tel Avv 69978, Israel matas@tau.ac.l Danel Urel School of Computer Scence Tel Avv Unversty Tel Avv 69978,
More informationLearning a Class-Specific Dictionary for Facial Expression Recognition
BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 16, No 4 Sofa 016 Prnt ISSN: 1311-970; Onlne ISSN: 1314-4081 DOI: 10.1515/cat-016-0067 Learnng a Class-Specfc Dctonary for
More informationFlatten a Curved Space by Kernel: From Einstein to Euclid
Flatten a Curved Space by Kernel: From Ensten to Eucld Quyuan Huang, Dapeng Olver Wu Ensten s general theory of relatvty fundamentally changed our vew about the physcal world. Dfferent from Newton s theory,
More informationCategories and Subject Descriptors B.7.2 [Integrated Circuits]: Design Aids Verification. General Terms Algorithms
3. Fndng Determnstc Soluton from Underdetermned Equaton: Large-Scale Performance Modelng by Least Angle Regresson Xn L ECE Department, Carnege Mellon Unversty Forbs Avenue, Pttsburgh, PA 3 xnl@ece.cmu.edu
More informationIMAGE FUSION TECHNIQUES
Int. J. Chem. Sc.: 14(S3), 2016, 812-816 ISSN 0972-768X www.sadgurupublcatons.com IMAGE FUSION TECHNIQUES A Short Note P. SUBRAMANIAN *, M. SOWNDARIYA, S. SWATHI and SAINTA MONICA ECE Department, Aarupada
More informationSupport Vector Machines. CS534 - Machine Learning
Support Vector Machnes CS534 - Machne Learnng Perceptron Revsted: Lnear Separators Bnar classfcaton can be veed as the task of separatng classes n feature space: b > 0 b 0 b < 0 f() sgn( b) Lnear Separators
More informationRange images. Range image registration. Examples of sampling patterns. Range images and range surfaces
Range mages For many structured lght scanners, the range data forms a hghly regular pattern known as a range mage. he samplng pattern s determned by the specfc scanner. Range mage regstraton 1 Examples
More informationSubspace clustering. Clustering. Fundamental to all clustering techniques is the choice of distance measure between data points;
Subspace clusterng Clusterng Fundamental to all clusterng technques s the choce of dstance measure between data ponts; D q ( ) ( ) 2 x x = x x, j k = 1 k jk Squared Eucldean dstance Assumpton: All features
More informationLecture 13: High-dimensional Images
Lec : Hgh-dmensonal Images Grayscale Images Lecture : Hgh-dmensonal Images Math 90 Prof. Todd Wttman The Ctadel A grayscale mage s an nteger-valued D matrx. An 8-bt mage takes on values between 0 and 55.
More informationDiscrimination of Faulted Transmission Lines Using Multi Class Support Vector Machines
16th NAIONAL POWER SYSEMS CONFERENCE, 15th-17th DECEMBER, 2010 497 Dscrmnaton of Faulted ransmsson Lnes Usng Mult Class Support Vector Machnes D.hukaram, Senor Member IEEE, and Rmjhm Agrawal Abstract hs
More informationA Gradient Difference based Technique for Video Text Detection
A Gradent Dfference based Technque for Vdeo Text Detecton Palaahnakote Shvakumara, Trung Quy Phan and Chew Lm Tan School of Computng, Natonal Unversty of Sngapore {shva, phanquyt, tancl }@comp.nus.edu.sg
More informationLocal Quaternary Patterns and Feature Local Quaternary Patterns
Local Quaternary Patterns and Feature Local Quaternary Patterns Jayu Gu and Chengjun Lu The Department of Computer Scence, New Jersey Insttute of Technology, Newark, NJ 0102, USA Abstract - Ths paper presents
More informationUsing Neural Networks and Support Vector Machines in Data Mining
Usng eural etworks and Support Vector Machnes n Data Mnng RICHARD A. WASIOWSKI Computer Scence Department Calforna State Unversty Domnguez Hlls Carson, CA 90747 USA Abstract: - Multvarate data analyss
More informationAudio Content Classification Method Research Based on Two-step Strategy
(IJACSA) Internatonal Journal of Advanced Computer Scence and Applcatons, Audo Content Classfcaton Method Research Based on Two-step Strategy Sume Lang Department of Computer Scence and Technology Chongqng
More informationFacial Expression Recognition Based on Local Binary Patterns and Local Fisher Discriminant Analysis
WSEAS RANSACIONS on SIGNAL PROCESSING Shqng Zhang, Xaomng Zhao, Bcheng Le Facal Expresson Recognton Based on Local Bnary Patterns and Local Fsher Dscrmnant Analyss SHIQING ZHANG, XIAOMING ZHAO, BICHENG
More informationA Gradient Difference based Technique for Video Text Detection
2009 10th Internatonal Conference on Document Analyss and Recognton A Gradent Dfference based Technque for Vdeo Text Detecton Palaahnakote Shvakumara, Trung Quy Phan and Chew Lm Tan School of Computng,
More informationA Fast Content-Based Multimedia Retrieval Technique Using Compressed Data
A Fast Content-Based Multmeda Retreval Technque Usng Compressed Data Borko Furht and Pornvt Saksobhavvat NSF Multmeda Laboratory Florda Atlantc Unversty, Boca Raton, Florda 3343 ABSTRACT In ths paper,
More informationDiscriminative Dictionary Learning with Pairwise Constraints
Dscrmnatve Dctonary Learnng wth Parwse Constrants Humn Guo Zhuoln Jang LARRY S. DAVIS UNIVERSITY OF MARYLAND Nov. 6 th, Outlne Introducton/motvaton Dctonary Learnng Dscrmnatve Dctonary Learnng wth Parwse
More informationOptimizing Document Scoring for Query Retrieval
Optmzng Document Scorng for Query Retreval Brent Ellwen baellwe@cs.stanford.edu Abstract The goal of ths project was to automate the process of tunng a document query engne. Specfcally, I used machne learnng
More informationPositive Semi-definite Programming Localization in Wireless Sensor Networks
Postve Sem-defnte Programmng Localzaton n Wreless Sensor etworks Shengdong Xe 1,, Jn Wang, Aqun Hu 1, Yunl Gu, Jang Xu, 1 School of Informaton Scence and Engneerng, Southeast Unversty, 10096, anjng Computer
More informationIncremental Learning with Support Vector Machines and Fuzzy Set Theory
The 25th Workshop on Combnatoral Mathematcs and Computaton Theory Incremental Learnng wth Support Vector Machnes and Fuzzy Set Theory Yu-Mng Chuang 1 and Cha-Hwa Ln 2* 1 Department of Computer Scence and
More informationINF 4300 Support Vector Machine Classifiers (SVM) Anne Solberg
INF 43 Support Vector Machne Classfers (SVM) Anne Solberg (anne@f.uo.no) 9..7 Lnear classfers th mamum margn for toclass problems The kernel trck from lnear to a hghdmensonal generalzaton Generaton from
More informationScale Selective Extended Local Binary Pattern For Texture Classification
Scale Selectve Extended Local Bnary Pattern For Texture Classfcaton Yutng Hu, Zhlng Long, and Ghassan AlRegb Multmeda & Sensors Lab (MSL) Georga Insttute of Technology 03/09/017 Outlne Texture Representaton
More informationDynamic wetting property investigation of AFM tips in micro/nanoscale
Dynamc wettng property nvestgaton of AFM tps n mcro/nanoscale The wettng propertes of AFM probe tps are of concern n AFM tp related force measurement, fabrcaton, and manpulaton technques, such as dp-pen
More informationRecognition of Handwritten Numerals Using a Combined Classifier with Hybrid Features
Recognton of Handwrtten Numerals Usng a Combned Classfer wth Hybrd Features Kyoung Mn Km 1,4, Joong Jo Park 2, Young G Song 3, In Cheol Km 1, and Chng Y. Suen 1 1 Centre for Pattern Recognton and Machne
More informationA fast algorithm for color image segmentation
Unersty of Wollongong Research Onlne Faculty of Informatcs - Papers (Arche) Faculty of Engneerng and Informaton Scences 006 A fast algorthm for color mage segmentaton L. Dong Unersty of Wollongong, lju@uow.edu.au
More informationLaplacian Eigenmap for Image Retrieval
Laplacan Egenmap for Image Retreval Xaofe He Partha Nyog Department of Computer Scence The Unversty of Chcago, 1100 E 58 th Street, Chcago, IL 60637 ABSTRACT Dmensonalty reducton has been receved much
More information6.854 Advanced Algorithms Petar Maymounkov Problem Set 11 (November 23, 2005) With: Benjamin Rossman, Oren Weimann, and Pouya Kheradpour
6.854 Advanced Algorthms Petar Maymounkov Problem Set 11 (November 23, 2005) Wth: Benjamn Rossman, Oren Wemann, and Pouya Kheradpour Problem 1. We reduce vertex cover to MAX-SAT wth weghts, such that the
More informationCorrelative features for the classification of textural images
Correlatve features for the classfcaton of textural mages M A Turkova 1 and A V Gadel 1, 1 Samara Natonal Research Unversty, Moskovskoe Shosse 34, Samara, Russa, 443086 Image Processng Systems Insttute
More information