An Ensemble Learning algorithm for Blind Signal Separation Problem
Yan Li and Peng Wen
Department of Mathematics and Computing, Faculty of Engineering and Surveying
The University of Southern Queensland, Queensland, Australia
{lyan, pengwen}@usq.edu.au

Abstract — The framework in Bayesian learning algorithms is based on the assumptions that the quantities of interest are governed by probability distributions, and that optimal decisions can be made by reasoning about these probabilities together with the data. In this paper, a Bayesian ensemble learning approach based on an enhanced least square backpropagation (LSB) neural network training algorithm is proposed for the blind signal separation problem. The method uses a three-layer neural network with an enhanced LSB training algorithm to model the unknown blind mixing system. Ensemble learning is applied to estimate the parametric approximation of the posterior probability density function (pdf). The Kullback-Leibler information divergence is used as the cost function. The experimental results on both artificial data and real recordings demonstrate that the proposed algorithm can separate blind signals very well.

I. INTRODUCTION

The problem of blind signal separation (BSS) has drawn great attention from many researchers in the past two decades. BSS is to extract the sources s(t) that have generated the observations x(t):

x(t) = F[s(t)] + n(t)    (1)

where F: R^m -> R^m is the unknown nonlinear mixing function and n(t) is additive noise. The objective is to find a mapping that yields components

y(t) = g(x(t))    (2)

so that y(t) are statistically independent and as close as possible to s(t). This must be done from the observed data in a blind manner, as both the original sources and the mixing process are unknown. Many different approaches to BSS have been attempted by numerous researchers [1]. In this paper, we explore a new blind separation method using a Bayesian estimation technique and an enhanced LSB neural network training algorithm to model the system.
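As a concrete illustration of the data model in Eq. (1), the sketch below generates observations x(t) from unknown sources s(t). The random single-hidden-layer map standing in for F, the dimensions, and the noise level are all our own illustrative assumptions, not values from the paper.

```python
import numpy as np

# Sketch of the BSS data model in Eq. (1): x(t) = F[s(t)] + n(t).
# F is simulated by a random one-hidden-layer nonlinear map; the BSS task
# is then to recover s(t) from x(t) alone, knowing neither F nor s.
rng = np.random.default_rng(0)

m = 4          # number of sources / observations (assumed)
T = 1000       # number of time samples (assumed)
s = rng.standard_normal((m, T))          # unknown sources s(t)

W_a = rng.standard_normal((8, m))        # random nonlinear mixing F
W_b = rng.standard_normal((m, 8))
x = W_b @ np.tanh(W_a @ s) + 0.01 * rng.standard_normal((m, T))

print(x.shape)  # (4, 1000)
```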
Bayesian ensemble learning, also called variational Bayesian learning [2], utilizes an approximation which is fitted to the posterior distribution of the parameter(s) to be estimated. The approximating distribution is often chosen to be Gaussian because of its simplicity and computational efficiency. The mean of this Gaussian distribution provides a point estimate for the unknown parameter considered, and its variance gives a measure of the reliability of the point estimate. The approximating posterior distribution is fitted to the posterior distribution estimated from the data using the Kullback-Leibler information divergence. This measures the difference between two probability densities and is sensitive to the mass of the distributions.

One problem in Bayesian estimation methods is that their computational load is high in problems of realistic size, in spite of the efficient Gaussian approximation. Another problem is that the Bayesian ensemble learning procedure may get stuck in a local minimum and requires careful initialization [3]. These obstacles have prevented their application to real unsupervised or blind learning problems, where the number of unknown parameters to be estimated grows very large. To combat these problems, we use, in this paper, an LSB neural network to model the blind mixing process and apply Bayesian ensemble learning to estimate the original sources. The experimental results are presented in the paper and demonstrate that the technique works very well.

The rest of the paper is organized as follows: the enhanced least square neural network model and its training method are introduced in the next section. The network parameters and the parametric approximation of the posterior pdf are presented in Section 3. Section 4 introduces ensemble learning and the cost function used in this paper. The experimental results are given in Section 5 to demonstrate the performance of the method. Finally, Section 6 concludes the paper.

II. THE LEAST SQUARE NEURAL MODEL

In 1993, Biegler-König and Bärmann [4] separated neural networks into linear parts and non-linear parts.
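For intuition about the fitting criterion just described, the misfit between a Gaussian approximation q and a Gaussian target p has a closed form. The sketch below computes KL(q || p) in the one-dimensional case only; the paper's posteriors are of course only approximately Gaussian.

```python
import numpy as np

# Closed-form Kullback-Leibler divergence KL(q || p) between two
# 1-D Gaussians q = N(mu_q, var_q) and p = N(mu_p, var_p), as used to
# fit a Gaussian approximation to a posterior distribution.
def kl_gauss(mu_q, var_q, mu_p, var_p):
    return 0.5 * (np.log(var_p / var_q)
                  + (var_q + (mu_q - mu_p) ** 2) / var_p
                  - 1.0)

print(kl_gauss(0.0, 1.0, 0.0, 1.0))  # 0.0 when q == p
```

The divergence is zero exactly when the approximation matches the target, and grows as the point estimate (the mean) or the reliability measure (the variance) drifts away from it.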
The linear parts sum up the weighted inputs to the neurons, and the non-linear parts pass the signals through the non-linear activation functions (such as a sigmoidal activation). While solving the linear parts optimally, they used the inverse of the activation to propagate the remaining error back into the previous layer of the neural network.

Proceedings of the 2005 International Conference on Computational Intelligence for Modelling, Control and Automation, and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC'05). © 2005 IEEE. Authorized licensed use limited to: UNIVERSITY OF SOUTHERN QUEENSLAND. Downloaded from IEEE Xplore. Restrictions apply.

Therefore, the learning error is
minimised on each layer separately, from the output layer to the hidden and input layers, by using the least square backpropagation (LSB) method. The convergence of the algorithm is much faster than that of the classical backpropagation (BP) algorithm. However, the drawback of the LSB algorithm is that the training error cannot be further reduced after the first two or three iterations [4]. In fact, the training error has already been significantly reduced at the first and second iterations, which is good enough for most practical applications.

The model structure used in this paper is a three-layer neural network with an enhanced LSB training algorithm [5]. Fig. 1 shows the structure of the network. The LSB training algorithm optimises the network weights through an iterative process, layer by layer. The training algorithm first takes the outputs of the nodes in the hidden layer into consideration. It not only adjusts the weights of the network but also adjusts the outputs of the hidden layer. The network works like an RNN, but it can reach its steady state very quickly because of its novel training algorithm. Please refer to [5] for the details of this algorithm.

The neurons in the first layer are linear. They pass the input signals through to all the neurons in the hidden layer. The activation function used in the neurons of the hidden layer and the output layer is the inverse hyperbolic sine, sinh^-1, which is a sigmoidal function but does not saturate for large values of its inputs.

The original algorithm is a supervised learning algorithm. Inspired by [6], it can be adapted for the BSS problem (with unknown inputs). During the learning process, we generate a set of random source variables to play the role of inputs. The first data vector is passed through the neural network, and the outputs of the network are produced. The observation data play the role of the outputs. The enhanced LSB algorithm is applied to find the optimal source signals which produce the observed data. The initial weights of the network are set randomly.

Fig. 1 The network structure (inputs, a Z^-1 feedback of the desired hidden-layer output, and outputs).
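The core LSB step — inverting the activation and then solving a layer's weights as a linear least-squares problem — can be sketched as follows. This is a minimal single-layer illustration with assumed dimensions and data, not the full layer-by-layer algorithm of [5].

```python
import numpy as np

# One least-squares layer update in the spirit of the LSB idea [4]:
# the nonlinear activation (asinh, as in the paper) is inverted by
# applying sinh to the desired outputs, and the layer weights are then
# solved optimally as a linear least-squares problem.
rng = np.random.default_rng(1)

A = rng.standard_normal((50, 10))      # layer inputs (samples x units)
W_true = rng.standard_normal((10, 5))
D = np.arcsinh(A @ W_true)             # desired (post-activation) outputs

# Invert the activation, then solve A @ W ≈ sinh(D) in the least-squares sense.
W, *_ = np.linalg.lstsq(A, np.sinh(D), rcond=None)

print(np.allclose(W, W_true))  # True: weights recovered to numerical precision
```

Because each layer is solved optimally in one shot rather than nudged by a gradient step, convergence is dominated by the first few sweeps, which matches the observation above that further iterations barely reduce the training error.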
Once the optimal source signals are found, the inputs of the network are known and the learning process is the same as supervised learning: the weights are adapted. This moves the best matching model vector even closer to the true inputs. Then the next input data vector is passed through the network, the source variables that best describe the data are found, the weights are adapted, and so on. Unlike the method used in [6], which applied the traditional BP algorithm, the algorithm does not need to be iterated many times to find the optimal original source signals, as one iteration is good enough for the enhanced LSB training algorithm to reach an equivalent or even better training error. The training process is therefore expected to be much faster than the approach using the BP algorithm, as the convergence of the enhanced LSB algorithm is nearly an order of magnitude faster than that of classical BP.

III. NETWORK PARAMETERS AND PARAMETRIC APPROXIMATION

A. Network Parameters

Let x(t) denote the observed data vector at time t; s(t) the vector of source variables at time t; and W1(t) and W2(t) the matrices containing the weights of the first and second layers, respectively. All the biases of the network are set to 0.5, and f(.) is the vector of nonlinear activation functions (sinh^-1). As all real signals contain noise, we shall assume that the observations are corrupted by Gaussian noise denoted by n(t). Using this notation, the model for the observations passed through the network is

x(t) = f(W2(t) f(W1(t) s(t))) + n(t)    (3)

The sources are assumed to be independent and Gaussian. The Gaussianity assumption is realistic, as the network has nonlinearities which can transform Gaussian distributions into virtually any other regular distribution. The weight matrices W1(t) and W2(t), and the parameters of the distributions of the noise, the source variables and the column vectors of the weight matrices, are the main parameters of the network. For simplicity, all the parameterised distributions are assumed to be Gaussian.
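A direct transcription of the forward model in Eq. (3), with f = sinh^-1 and all biases fixed at 0.5 as stated above. The layer sizes and the noise level are illustrative assumptions.

```python
import numpy as np

# Forward pass of the observation model in Eq. (3):
# x(t) = f(W2 f(W1 s(t) + b1) + b2) + n(t), with f = asinh (inverse
# hyperbolic sine) and every bias set to 0.5 as in the paper.
rng = np.random.default_rng(2)

def forward(s, W1, W2, noise_std=0.0):
    f = np.arcsinh                       # sigmoidal but non-saturating
    h = f(W1 @ s + 0.5)                  # hidden layer
    x = f(W2 @ h + 0.5)                  # output layer
    return x + noise_std * rng.standard_normal(x.shape)

s = rng.standard_normal((8, 100))        # 8 sources, 100 samples (assumed)
W1 = rng.standard_normal((20, 8))
W2 = rng.standard_normal((8, 20))
x = forward(s, W1, W2, noise_std=0.1)
print(x.shape)  # (8, 100)
```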
B. Parametric Approximation of the Posterior pdf

Exact treatment of the posterior pdfs of the models is impossible in practice, and the posterior pdfs need to be approximated. In this paper, we apply a computationally efficient parametric approximation which usually yields satisfactory results. A standard approach for parametric approximation is Laplace's method. MacKay introduced a variation called the evidence framework. In his neural network approach, one first finds a (local) maximum point of the posterior pdf and then applies a second-order Taylor series approximation to the logarithm of the posterior pdf. This is equivalent to applying a Gaussian approximation to the posterior pdf.

C. Ensemble Learning and the Cost Function
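The Gaussian (Laplace) approximation described above can be sketched numerically: find a maximum of the posterior, then take the inverse of the second derivative of the negative log posterior at that point as the variance. A one-dimensional toy posterior is assumed here purely for illustration.

```python
import numpy as np

# Laplace's method on a toy 1-D posterior: locate the MAP point, then
# fit a Gaussian whose variance is the inverse Hessian of the negative
# log posterior there. The toy posterior is exactly N(2, 0.25), so the
# approximation should recover mean 2 and variance 0.25.
def neg_log_post(w):
    return 0.5 * (w - 2.0) ** 2 / 0.25

# Crude MAP search on a grid, then a finite-difference second derivative.
grid = np.linspace(-5.0, 5.0, 100001)
w_map = grid[np.argmin(neg_log_post(grid))]
eps = 1e-4
hess = (neg_log_post(w_map + eps) - 2 * neg_log_post(w_map)
        + neg_log_post(w_map - eps)) / eps ** 2
var = 1.0 / hess

print(round(w_map, 3), round(var, 3))
```

For a genuinely Gaussian posterior the second-order Taylor expansion is exact, which is why the toy example recovers the true mean and variance; for the network posteriors in this paper it is only an approximation around a local maximum.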
Ensemble learning [7], a well-developed method for the parametric approximation of posterior pdfs, is used in this paper. The basic idea is to minimise the difference between the posterior pdf and its parametric approximation. Let P denote the exact posterior pdf and Q its parametric approximation. Let θ denote the parameters of the model H and X the set of observed data. It is assumed that we have independent priors for each parameter θi, thus

P(θ | H) = ∏i P(θi | H)    (4)

The ensemble learning cost function, C_KL, is the misfit measured by the Kullback-Leibler information divergence between P and Q:

C_KL = E_Q{ log [ Q(θ) / (P(X | θ, H) P(θ | H)) ] }
     = E_Q{ log Q(θ) − log P(X | θ, H) − log P(θ | H) }    (5)

If the marginalisation is performed over all the parameters, with the exception of θi, we have

C_KL = ∫ Q(θi) [ log Q(θi) − log P(θi | H) − E_{Q\θi}{ log P(X | θ, H) } ] dθi + c    (6)

where c is a constant. Differentiating the above equation with respect to Q(θi), we obtain

∂C_KL/∂Q(θi) = log Q(θi) − log P(θi | H) − E_{Q\θi}{ log P(X | θ, H) } + 1 + λ    (7)

where λ is a Lagrange multiplier introduced to ensure that Q(θi) is normalised. The optimal distribution Q(θi) is

Q(θi) = (1/Zi) P(θi | H) exp( E_{Q\θi}{ log P(X | θ, H) } )    (8)

where Zi is the partition function:

Zi = ∫ P(θi | H) exp( E_{Q\θi}{ log P(X | θ, H) } ) dθi    (9)

This procedure leads to an iterative algorithm for the update of each distribution. Simple Gaussian distributions are used to approximate the posterior pdf. Note that the Kullback-Leibler divergence involves an expectation over a distribution and, consequently, is sensitive to probability mass rather than probability density. The Kullback-Leibler divergence is used as the cost function in this paper. For mathematical and computational simplicity, the approximation Q needs to be simple. The cost function C_KL is a function of the posterior means and variances of the source variables and the parameters of the network. This is because, instead of finding a point estimate, a whole distribution is estimated for the source variables and the parameters during learning. The end result of the learning is therefore not just an estimate of the unknown variables, but a distribution over the variables.

IV. EXPERIMENTAL RESULTS

Two experiments are presented in this section.
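The fixed-point solution of Eqs. (8)-(9) can be illustrated on a discretized parameter grid. The prior and the expected log-likelihood below are toy choices for illustration, not the paper's network model.

```python
import numpy as np

# Eq. (8) on a discretized grid: Q(theta_i) is proportional to
# P(theta_i|H) * exp(E_{Q\theta_i}{log P(X|theta,H)}), normalized by the
# partition function Z_i of Eq. (9).
theta = np.linspace(-3.0, 3.0, 601)
d_theta = theta[1] - theta[0]

log_prior = -0.5 * theta ** 2                     # Gaussian prior P(theta|H)
exp_loglik = -0.5 * (theta - 1.0) ** 2 / 0.5      # toy E{log P(X|theta,H)}

unnorm = np.exp(log_prior + exp_loglik)           # Eq. (8), unnormalized
Z = np.sum(unnorm) * d_theta                      # Eq. (9), partition function
Q = unnorm / Z                                    # optimal approximation

print(abs(np.sum(Q) * d_theta - 1.0) < 1e-9)      # True: Q is normalized
```

In the full algorithm this update is applied to each parameter's distribution in turn, with the expectations taken over the current approximations of all the other parameters, which is what makes the procedure iterative.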
In the first experiment, we use a set of artificial data; in the second, real speech recordings are used to test the performance of the proposed approach.

A. Experiment 1: Artificial Data

There are eight sources, four super-Gaussian and four sub-Gaussian, generated by Matlab functions. The observation data are generated from these sources through a nonlinear mixing neural network. The network is a randomly initialized three-layer feedforward neural network with 3 hidden neurons and eight output neurons. Gaussian noise with a standard deviation of 0.1 is also added to the data.

The results are shown in Fig. 2. It shows eight scatter plots, each of them corresponding to one of the eight sources. The original source is on the x-axis and the estimated source on the y-axis of each plot, with each point corresponding to one data vector. An optimal result is a straight line, indicating that the estimated values of the sources are the same as the true values. The number of hidden neurons in the enhanced LSB neural network was varied to optimize the results, and only two iterations (the data set going through the neural network twice) were used for the results shown in Fig. 2. Further iterations do not bring better results, only more training time, which is consistent with the characteristics of the LSB algorithm. Fig. 3 shows the results after 5 training iterations, which are perceivably no better than those in Fig. 2. The scatter plots present the differences between the sources and the estimated signals.

B. Experiment 2: Real Speech Signal Separation

The observed signals were taken from Dr Te-Won Lee's home page at the Salk Institute [8]. One signal is a recording of the digits from one to ten spoken in English. The second microphone signal is the digits spoken in Spanish at the same time. The proposed algorithm is applied to the signals. Figs. 4 and 5 show the real signals and the separated results (only half of each signal is presented here for clarity).
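The scatter-plot evaluation used in Figs. 2-3 amounts to checking how close the (true, estimated) pairs lie to a straight line. The sketch below uses a synthetic near-perfect estimate, purely to illustrate the reading of the plots, not the algorithm's actual output.

```python
import numpy as np

# Reading the scatter plots: perfect separation puts every
# (true source, estimated source) pair on a straight line, i.e. the
# correlation coefficient is close to 1.
rng = np.random.default_rng(3)

s_true = rng.standard_normal(1000)
s_est = s_true + 0.05 * rng.standard_normal(1000)   # near-perfect estimate

r = np.corrcoef(s_true, s_est)[0, 1]
print(r > 0.99)  # True: points lie close to a straight line
```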
It is hard to compare our results with Lee's in a quantitative way due to the different methodologies, but comparable results can be identified when the signals are listened to.
V. CONCLUSION

In this paper, we develop a new approach based on Bayesian ensemble learning and the LSB neural network training algorithm for the BSS problem. A three-layer neural network with an enhanced LSB training algorithm is used to model the unknown blind mixing system. The network works like an RNN, but it can reach its steady state very quickly because of its enhanced LSB training algorithm. Ensemble learning is applied to estimate the parametric approximation of the posterior pdf. The Kullback-Leibler information divergence is used as the cost function in the paper. It is a measure suited for comparing probability distributions, and it can be computed efficiently in practice if the approximation is chosen to be simple enough. Kullback-Leibler information is sensitive to probability mass, and therefore the search for good models focuses on models which have large probability mass as opposed to probability density. The drawback is that, in order for ensemble learning to be computationally efficient, the approximation of the posterior needs to have a simple factorial structure. The experiments have been carried out using both artificial data and real recordings. The results show the success of the proposed algorithm.

Fig. 2 The scatter plots, with the original sources on the x-axis of each scatter plot and the sources estimated by the proposed algorithm on the y-axis, after 2 iterations.

Fig. 3 The scatter plots, with the original sources on the x-axis of each scatter plot and the sources estimated by the proposed algorithm on the y-axis, after 5 iterations.

Fig. 4 The real signals.

Fig. 5 The separated signals.

REFERENCES

[1] Li, Yan, Peng Wen and David Powers, "Methods for the Blind Signal Separation Problem", Proceedings of the IEEE International Conference on Neural Networks & Signal Processing (ICNNSP'03), Nanjing, China, December 2003.
[2] Lappalainen, H., "Ensemble Learning", in Advances in Independent Component Analysis, M. Girolami, Ed.
Berlin: Springer-Verlag, 2000.
[3] Jutten, C. and J. Karhunen, "Advances in Nonlinear Blind Source Separation", 4th International Symposium on Independent Component Analysis and Blind Signal Separation (ICA2003), April 2003, Nara, Japan.
[4] Biegler-König, F. and F. Bärmann, "A learning algorithm for multilayered neural networks based on linear least squares problems", Neural Networks, Vol. 6, 1993.
[5] Li, Yan, A. B. Rad and Wen Peng, "An Enhanced Training Algorithm for Multilayer Neural Networks Based on Reference Output of Hidden Layer", Neural Computing & Applications, Vol. 8, 1999.
[6] Lappalainen, H. and Xavier Giannakopoulos, "Multi-Layer Perceptrons as Nonlinear Generative Models for Unsupervised Learning: a Bayesian Treatment", ICANN'99, 1999.
[7] Hinton, Geoffrey E. and Drew van Camp, "Keeping neural networks simple by minimizing the description length of the weights", in Proceedings of COLT'93, Santa Cruz, California, 1993.
[8] The website of Dr Te-Won Lee at the Salk Institute.
More informationOptimal Workload-based Weighted Wavelet Synopses
Optmal Workload-based Weghted Wavelet Synopses Yoss Matas School of Computer Scence Tel Avv Unversty Tel Avv 69978, Israel matas@tau.ac.l Danel Urel School of Computer Scence Tel Avv Unversty Tel Avv 69978,
More informationBootstrapping Color Constancy
Bootstrappng Color Constancy Bran Funt and Vlad C. Carde * Smon Fraser Unversty Vancouver, Canada ABSTRACT Bootstrappng provdes a novel approach to tranng a neural network to estmate the chromatcty of
More informationMULTISPECTRAL IMAGES CLASSIFICATION BASED ON KLT AND ATR AUTOMATIC TARGET RECOGNITION
MULTISPECTRAL IMAGES CLASSIFICATION BASED ON KLT AND ATR AUTOMATIC TARGET RECOGNITION Paulo Quntlano 1 & Antono Santa-Rosa 1 Federal Polce Department, Brasla, Brazl. E-mals: quntlano.pqs@dpf.gov.br and
More informationProgramming in Fortran 90 : 2017/2018
Programmng n Fortran 90 : 2017/2018 Programmng n Fortran 90 : 2017/2018 Exercse 1 : Evaluaton of functon dependng on nput Wrte a program who evaluate the functon f (x,y) for any two user specfed values
More informationLearning-Based Top-N Selection Query Evaluation over Relational Databases
Learnng-Based Top-N Selecton Query Evaluaton over Relatonal Databases Lang Zhu *, Wey Meng ** * School of Mathematcs and Computer Scence, Hebe Unversty, Baodng, Hebe 071002, Chna, zhu@mal.hbu.edu.cn **
More informationMaximum Variance Combined with Adaptive Genetic Algorithm for Infrared Image Segmentation
Internatonal Conference on Logstcs Engneerng, Management and Computer Scence (LEMCS 5) Maxmum Varance Combned wth Adaptve Genetc Algorthm for Infrared Image Segmentaton Huxuan Fu College of Automaton Harbn
More informationMixed Linear System Estimation and Identification
48th IEEE Conference on Decson and Control, Shangha, Chna, December 2009 Mxed Lnear System Estmaton and Identfcaton A. Zymns S. Boyd D. Gornevsky Abstract We consder a mxed lnear system model, wth both
More informationUnsupervised Learning and Clustering
Unsupervsed Learnng and Clusterng Why consder unlabeled samples?. Collectng and labelng large set of samples s costly Gettng recorded speech s free, labelng s tme consumng 2. Classfer could be desgned
More informationSupport Vector Machines
Support Vector Machnes Decson surface s a hyperplane (lne n 2D) n feature space (smlar to the Perceptron) Arguably, the most mportant recent dscovery n machne learnng In a nutshell: map the data to a predetermned
More informationTsinghua University at TAC 2009: Summarizing Multi-documents by Information Distance
Tsnghua Unversty at TAC 2009: Summarzng Mult-documents by Informaton Dstance Chong Long, Mnle Huang, Xaoyan Zhu State Key Laboratory of Intellgent Technology and Systems, Tsnghua Natonal Laboratory for
More informationAn Optimal Algorithm for Prufer Codes *
J. Software Engneerng & Applcatons, 2009, 2: 111-115 do:10.4236/jsea.2009.22016 Publshed Onlne July 2009 (www.scrp.org/journal/jsea) An Optmal Algorthm for Prufer Codes * Xaodong Wang 1, 2, Le Wang 3,
More informationCS434a/541a: Pattern Recognition Prof. Olga Veksler. Lecture 15
CS434a/541a: Pattern Recognton Prof. Olga Veksler Lecture 15 Today New Topc: Unsupervsed Learnng Supervsed vs. unsupervsed learnng Unsupervsed learnng Net Tme: parametrc unsupervsed learnng Today: nonparametrc
More informationUser Authentication Based On Behavioral Mouse Dynamics Biometrics
User Authentcaton Based On Behavoral Mouse Dynamcs Bometrcs Chee-Hyung Yoon Danel Donghyun Km Department of Computer Scence Department of Computer Scence Stanford Unversty Stanford Unversty Stanford, CA
More information2x x l. Module 3: Element Properties Lecture 4: Lagrange and Serendipity Elements
Module 3: Element Propertes Lecture : Lagrange and Serendpty Elements 5 In last lecture note, the nterpolaton functons are derved on the bass of assumed polynomal from Pascal s trangle for the fled varable.
More informationThe Nottingham eprints service makes this work by researchers of the University of Nottingham available open access under the following conditions.
Da, Wujao and Lu, Bn and Meng, Xaoln and Huang, D. () Spato-temporal modellng of dam deformaton usng ndependent component analyss. Survey Revew, 6 (9). pp. 7-. ISSN 7-76 Access from the Unversty of Nottngham
More informationFEATURE EXTRACTION. Dr. K.Vijayarekha. Associate Dean School of Electrical and Electronics Engineering SASTRA University, Thanjavur
FEATURE EXTRACTION Dr. K.Vjayarekha Assocate Dean School of Electrcal and Electroncs Engneerng SASTRA Unversty, Thanjavur613 41 Jont Intatve of IITs and IISc Funded by MHRD Page 1 of 8 Table of Contents
More informationAn Accurate Evaluation of Integrals in Convex and Non convex Polygonal Domain by Twelve Node Quadrilateral Finite Element Method
Internatonal Journal of Computatonal and Appled Mathematcs. ISSN 89-4966 Volume, Number (07), pp. 33-4 Research Inda Publcatons http://www.rpublcaton.com An Accurate Evaluaton of Integrals n Convex and
More informationFast Computation of Shortest Path for Visiting Segments in the Plane
Send Orders for Reprnts to reprnts@benthamscence.ae 4 The Open Cybernetcs & Systemcs Journal, 04, 8, 4-9 Open Access Fast Computaton of Shortest Path for Vstng Segments n the Plane Ljuan Wang,, Bo Jang
More informationThree supervised learning methods on pen digits character recognition dataset
Three supervsed learnng methods on pen dgts character recognton dataset Chrs Flezach Department of Computer Scence and Engneerng Unversty of Calforna, San Dego San Dego, CA 92093 cflezac@cs.ucsd.edu Satoru
More informationApplying Continuous Action Reinforcement Learning Automata(CARLA) to Global Training of Hidden Markov Models
Applyng Contnuous Acton Renforcement Learnng Automata(CARLA to Global Tranng of Hdden Markov Models Jahanshah Kabudan, Mohammad Reza Meybod, and Mohammad Mehd Homayounpour Department of Computer Engneerng
More informationActive Contours/Snakes
Actve Contours/Snakes Erkut Erdem Acknowledgement: The sldes are adapted from the sldes prepared by K. Grauman of Unversty of Texas at Austn Fttng: Edges vs. boundares Edges useful sgnal to ndcate occludng
More informationClassification / Regression Support Vector Machines
Classfcaton / Regresson Support Vector Machnes Jeff Howbert Introducton to Machne Learnng Wnter 04 Topcs SVM classfers for lnearly separable classes SVM classfers for non-lnearly separable classes SVM
More informationData-dependent Hashing Based on p-stable Distribution
Data-depent Hashng Based on p-stable Dstrbuton Author Ba, Xao, Yang, Hachuan, Zhou, Jun, Ren, Peng, Cheng, Jan Publshed 24 Journal Ttle IEEE Transactons on Image Processng DOI https://do.org/.9/tip.24.2352458
More informationA Background Subtraction for a Vision-based User Interface *
A Background Subtracton for a Vson-based User Interface * Dongpyo Hong and Woontack Woo KJIST U-VR Lab. {dhon wwoo}@kjst.ac.kr Abstract In ths paper, we propose a robust and effcent background subtracton
More informationBiostatistics 615/815
The E-M Algorthm Bostatstcs 615/815 Lecture 17 Last Lecture: The Smplex Method General method for optmzaton Makes few assumptons about functon Crawls towards mnmum Some recommendatons Multple startng ponts
More informationBOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET
1 BOOSTING CLASSIFICATION ACCURACY WITH SAMPLES CHOSEN FROM A VALIDATION SET TZU-CHENG CHUANG School of Electrcal and Computer Engneerng, Purdue Unversty, West Lafayette, Indana 47907 SAUL B. GELFAND School
More informationLecture 9 Fitting and Matching
In ths lecture, we re gong to talk about a number of problems related to fttng and matchng. We wll formulate these problems formally and our dscusson wll nvolve Least Squares methods, RANSAC and Hough
More informationX- Chart Using ANOM Approach
ISSN 1684-8403 Journal of Statstcs Volume 17, 010, pp. 3-3 Abstract X- Chart Usng ANOM Approach Gullapall Chakravarth 1 and Chaluvad Venkateswara Rao Control lmts for ndvdual measurements (X) chart are
More informationOptimal Scheduling of Capture Times in a Multiple Capture Imaging System
Optmal Schedulng of Capture Tmes n a Multple Capture Imagng System Tng Chen and Abbas El Gamal Informaton Systems Laboratory Department of Electrcal Engneerng Stanford Unversty Stanford, Calforna 9435,
More informationProper Choice of Data Used for the Estimation of Datum Transformation Parameters
Proper Choce of Data Used for the Estmaton of Datum Transformaton Parameters Hakan S. KUTOGLU, Turkey Key words: Coordnate systems; transformaton; estmaton, relablty. SUMMARY Advances n technologes and
More informationDeep learning is a good steganalysis tool when embedding key is reused for different images, even if there is a cover source-mismatch
Deep learnng s a good steganalyss tool when embeddng key s reused for dfferent mages, even f there s a cover source-msmatch Lonel PIBRE 2,3, Jérôme PASQUET 2,3, Dno IENCO 2,3, Marc CHAUMONT 1,2,3 (1) Unversty
More informationOptimizing Document Scoring for Query Retrieval
Optmzng Document Scorng for Query Retreval Brent Ellwen baellwe@cs.stanford.edu Abstract The goal of ths project was to automate the process of tunng a document query engne. Specfcally, I used machne learnng
More informationModel-Based Pose Estimation by Consensus
Model-Based Pose Estmaton by Consensus Anne Jorstad 1, Phlppe Burlna, I-Jeng Wang, Denns Lucarell, Danel DeMenthon 1 Appled Mathematcs and Scentfc Computaton, Unversty of Maryland College Park Johns Hopkns
More informationCompiler Design. Spring Register Allocation. Sample Exercises and Solutions. Prof. Pedro C. Diniz
Compler Desgn Sprng 2014 Regster Allocaton Sample Exercses and Solutons Prof. Pedro C. Dnz USC / Informaton Scences Insttute 4676 Admralty Way, Sute 1001 Marna del Rey, Calforna 90292 pedro@s.edu Regster
More informationGSLM Operations Research II Fall 13/14
GSLM 58 Operatons Research II Fall /4 6. Separable Programmng Consder a general NLP mn f(x) s.t. g j (x) b j j =. m. Defnton 6.. The NLP s a separable program f ts objectve functon and all constrants are
More informationSupport Vector Machines. CS534 - Machine Learning
Support Vector Machnes CS534 - Machne Learnng Perceptron Revsted: Lnear Separators Bnar classfcaton can be veed as the task of separatng classes n feature space: b > 0 b 0 b < 0 f() sgn( b) Lnear Separators
More informationUnderstanding K-Means Non-hierarchical Clustering
SUNY Albany - Techncal Report 0- Understandng K-Means Non-herarchcal Clusterng Ian Davdson State Unversty of New York, 1400 Washngton Ave., Albany, 105. DAVIDSON@CS.ALBANY.EDU Abstract The K-means algorthm
More informationImage Representation & Visualization Basic Imaging Algorithms Shape Representation and Analysis. outline
mage Vsualzaton mage Vsualzaton mage Representaton & Vsualzaton Basc magng Algorthms Shape Representaton and Analyss outlne mage Representaton & Vsualzaton Basc magng Algorthms Shape Representaton and
More informationAdaptive Transfer Learning
Adaptve Transfer Learnng Bn Cao, Snno Jaln Pan, Yu Zhang, Dt-Yan Yeung, Qang Yang Hong Kong Unversty of Scence and Technology Clear Water Bay, Kowloon, Hong Kong {caobn,snnopan,zhangyu,dyyeung,qyang}@cse.ust.hk
More informationEECS 730 Introduction to Bioinformatics Sequence Alignment. Luke Huan Electrical Engineering and Computer Science
EECS 730 Introducton to Bonformatcs Sequence Algnment Luke Huan Electrcal Engneerng and Computer Scence http://people.eecs.ku.edu/~huan/ HMM Π s a set of states Transton Probabltes a kl Pr( l 1 k Probablty
More informationA fast algorithm for color image segmentation
Unersty of Wollongong Research Onlne Faculty of Informatcs - Papers (Arche) Faculty of Engneerng and Informaton Scences 006 A fast algorthm for color mage segmentaton L. Dong Unersty of Wollongong, lju@uow.edu.au
More informationA Saturation Binary Neural Network for Crossbar Switching Problem
A Saturaton Bnary Neural Network for Crossbar Swtchng Problem Cu Zhang 1, L-Qng Zhao 2, and Rong-Long Wang 2 1 Department of Autocontrol, Laonng Insttute of Scence and Technology, Benx, Chna bxlkyzhangcu@163.com
More informationCategories and Subject Descriptors B.7.2 [Integrated Circuits]: Design Aids Verification. General Terms Algorithms
3. Fndng Determnstc Soluton from Underdetermned Equaton: Large-Scale Performance Modelng by Least Angle Regresson Xn L ECE Department, Carnege Mellon Unversty Forbs Avenue, Pttsburgh, PA 3 xnl@ece.cmu.edu
More information