Classifying Acoustic Transient Signals Using Artificial Intelligence

Steve Sutton, University of North Carolina at Wilmington (suttons@charter.net)
Greg Huff, University of North Carolina at Wilmington (jgh7476@uncwl.edu)

Abstract

Submarines need to identify hazardous projectiles with speed and accuracy. One method of identifying possible dangers is passive sonar, the practice of listening for abnormal acoustic anomalies. This paper describes multiple Artificial Intelligence methods for classifying acoustic transients. In addition, we address localization of transients (i.e., determining the location of a signal within a dataset).

Purpose

This paper presents the results of research on Acoustic Transient Signals (ATC). Specifically, efforts to detect, localize, and classify exemplar analog signals are discussed. The signals used in this research consist of three classes of acoustic transients generated using Eqs. 1, 2, and 3 [1].

Class 1 = c1(α1, κ) = exp(-α1κ) * cos(100κ)        (1)
Class 2 = c2(α2, κ) = exp(-α2κ) * cos(2κ)          (2)
Class 3 = c3(α3, κ) = exp(-α3κ) * cos(10κ - π/2)   (3)

where κ is an integral value in the range κ = [0, 127] and α1 ∈ (0.3, 0.7), α2 ∈ (0.3, 0.7), and α3 ∈ (0.01, 0.1). To better approximate real-world conditions, Gaussian white noise (GWN) was added to each of the signals. The GWN was computed using Eq. 4,

GWN = cos(2πR1) * sqrt(-2 * ln(R2))                (4)

where R1 and R2 are random numbers in the range [0, 1]. Examples of the raw and noisy signals are shown in Figures 1, 2, and 3.
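The signal generation and noise model above can be sketched in a few lines of Python. This is a minimal sketch, not the authors' implementation: the source equations are partially garbled, so the exact damping terms and cosine frequencies used here are assumptions, as are the noise scale and the function names.

```python
import math
import random

def make_signal(cls, alpha, n=128):
    # Damped-cosine exemplars in the spirit of Eqs. 1-3; the exact
    # frequencies and damping terms are assumptions, not verbatim
    # from the (garbled) source equations.
    if cls == 1:
        return [math.exp(-alpha * k) * math.cos(100 * k) for k in range(n)]
    if cls == 2:
        return [math.exp(-alpha * k) * math.cos(2 * k) for k in range(n)]
    if cls == 3:
        return [math.exp(-alpha * k) * math.cos(10 * k - math.pi / 2) for k in range(n)]
    raise ValueError("unknown class")

def gwn():
    # Box-Muller transform (Eq. 4): two uniform deviates -> one Gaussian deviate.
    # 1 - r2 keeps the argument of log strictly positive.
    r1, r2 = random.random(), random.random()
    return math.cos(2 * math.pi * r1) * math.sqrt(-2.0 * math.log(1.0 - r2))

def add_noise(signal, scale=0.1):
    # Add scaled Gaussian white noise to approximate real-world conditions
    # (the noise amplitude used by the authors is not stated).
    return [s + scale * gwn() for s in signal]
```

A noisy Class 1 exemplar would then be `add_noise(make_signal(1, 0.5))`.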
Figure 1. Class 1 Raw and Noisy Signal.
Figure 2. Class 2 Raw and Noisy Signal.
Figure 3. Class 3 Raw and Noisy Signal.

Signal localization, determination of the signal location within a set of data, was achieved through the use of a mathematical convolution operation within the process illustrated in Figure 4.
Figure 4. Signal Localization (Raw Signal → 256-point FFT, multiplied with the known-signal FFT to produce a convolution value)

The process compares Fast Fourier Transform (FFT) [3] data generated for a known signal class with that of the unknown data set. To do this, a virtual window, or subset of data, of length n (the size of the known signal) is extracted from the unknown signal data set. FFT data is generated for the windowed data, and the product (i.e., convolution) of the known signal vector and the unknown signal vector is calculated, resulting in a single value. The process is repeated using the next windowed set of data from the unknown signal until the data set is exhausted. The windowed section yielding the highest value is considered to be the area in which the signal is most likely present.

For classification purposes, a measure of total power was chosen as the feature on which each signal would be evaluated. The feature extraction process for each signal is illustrated in Figure 5.

Figure 5. Feature Extraction (Signal → FFT → Calc. Power → Divide/Sum into n Bins → Normalize → Classifier)
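The window-sliding localization of Figure 4 can be sketched as follows. Since the paper does not give the exact scoring formula, this sketch assumes the "convolution value" is the magnitude of the inner product of the two spectra; a naive O(n²) DFT stands in for the FFT of [3], and the function names are illustrative.

```python
import cmath

def dft(x):
    # Naive O(n^2) discrete Fourier transform; a stand-in for the FFT.
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * i * k / n) for k in range(n))
            for i in range(n)]

def localize(known, unknown):
    # Slide a window the size of the known signal across the unknown data,
    # score each window by the magnitude of the inner product of the two
    # spectra (an assumed reading of the paper's "convolution value"),
    # and return the offset of the best-scoring window.
    n = len(known)
    kf = dft(known)
    best_off, best_val = 0, float("-inf")
    for off in range(len(unknown) - n + 1):
        wf = dft(unknown[off:off + n])
        val = abs(sum(a * b.conjugate() for a, b in zip(kf, wf)))
        if val > best_val:
            best_off, best_val = off, val
    return best_off
```

By the Cauchy-Schwarz inequality, a window that exactly contains the known signal maximizes this score, which is why the highest-valued window marks the most likely signal location.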
A Fast Fourier Transform was applied to each signal, taking the signal from the time domain to the frequency domain. From the transformed data, power values were calculated according to Eq. 5:

power[i] = real[i]^2 + imaginary[i]^2              (5)

The power values were normalized and compressed into a total power vector. The total power vector was calculated by dividing the normalized data into n sections, where each section is represented as a bin value in the total power vector. The bin values are calculated according to Eq. 6:

bin[j] = Σ power[i]                                (6)

where the sum runs over the power values i falling in bin j. The total power vector generated for each signal is used as input to the classifiers.

Methods

Bayesian Classifier

Bayesian decision theory is a fundamental statistical approach to the problem of pattern classification. It approaches classification from a probabilistic standpoint, together with the cost associated with each decision. This approach assumes that some a priori probability for each class is known [2]. For each known class there exists a corresponding discriminant function, shown in Eq. 7:

gi(x) = -(1/2)(x - µi)^T Σi^-1 (x - µi) - (1/2) ln|Σi| + ln P(ωi)   (7)

where x is the unknown class vector, µi is the class mean vector, Σi is the covariance matrix, |Σi| is the determinant of Σi, and P(ωi) is the class a priori probability. Classification of unknown vectors involves several steps. First, exemplars for each known class are collected. Second, a priori probabilities are determined for each class. Third, using this data, a set of discriminant functions is created. Finally, the unknown class vector is fed to each discriminant function. The class of the function yielding the highest value is assigned as the class of the unknown.

Feed Forward Neural Network with Back Propagation

The feed forward neural network with back propagation uses the normalized calculated mean vector for its inputs. The network is configured with the number of bins as the total number of inputs, one hidden layer with two nodes, and three output nodes, as shown in Fig. 6. Each input corresponds to a bin of the total power vector and the output layer nodes correspond to a class
signal. The bias for all hidden nodes and output nodes is set equal to 1, and every edge in the network has a respective weight.

Figure 6. Feed Forward Neural Network (inputs I0 … In, two hidden nodes, three output nodes, each with a bias)

The Feed Forward Neural Network (FFNN) is trained using the five alpha values for every signal, for a total of twenty-five known noisy signals. The process for training the network follows the flow in Figure 7. The output nodes are trained to o = 0.1 for a losing node and o = 0.9 for a winning node. Training is complete once the Root Mean Squared Error (RMSE) has fallen below the appropriate threshold. This implementation was optimized with the threshold equal to 0.79.
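The described topology can be sketched in pure Python: n inputs, one hidden layer of two nodes, three outputs, and a bias input fixed at 1 on each layer. The sigmoid activation, learning rate, and weight initialization below are assumptions the paper does not specify; this is a sketch of standard back propagation, not the authors' code.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyFFN:
    # Topology from the paper: n inputs, one hidden layer of 2 nodes,
    # 3 outputs; bias handled as an extra input fixed at 1 with a
    # trainable weight. Learning rate and init ranges are assumptions.
    def __init__(self, n_in, n_hidden=2, n_out=3, lr=0.5):
        self.lr = lr
        self.w1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
                   for _ in range(n_hidden)]
        self.w2 = [[random.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)]
                   for _ in range(n_out)]

    def forward(self, x):
        xb = x + [1.0]                      # append bias input
        self.h = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in self.w1]
        hb = self.h + [1.0]
        self.o = [sigmoid(sum(w * v for w, v in zip(row, hb))) for row in self.w2]
        return self.o

    def train_step(self, x, target):
        o = self.forward(x)
        # output deltas: (t - o) * o * (1 - o)
        do = [(t - y) * y * (1 - y) for t, y in zip(target, o)]
        # hidden deltas: backpropagate the output deltas through w2
        dh = [self.h[j] * (1 - self.h[j]) *
              sum(do[k] * self.w2[k][j] for k in range(len(do)))
              for j in range(len(self.h))]
        hb = self.h + [1.0]
        xb = x + [1.0]
        for k in range(len(self.w2)):
            for j in range(len(hb)):
                self.w2[k][j] += self.lr * do[k] * hb[j]
        for j in range(len(self.w1)):
            for i in range(len(xb)):
                self.w1[j][i] += self.lr * dh[j] * xb[i]
        return sum((t - y) ** 2 for t, y in zip(target, o))  # SSE for this vector
```

Training would loop `train_step` over all twenty-five exemplars, accumulate the total SSE, derive the RMSE, and stop once it falls below the threshold.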
Figure 7. Feed Forward Network Process Flow (for all training vectors: input training values → calc hidden layer → calc output layer → calc errors → calc weight adjustments → apply weight adjustments → calc TSSE → calc RMSE → test threshold; on PASS, run test)

Once the FFNN has completed training, noisy test signals of known classes are tested on the network. The number of test signals is configurable, and the class definition can be either random or defined. The winning decision is the class whose corresponding output node is closest to o = 0.9. The network classifies each signal and reports the percentage of signals correctly identified.

Kohonen Neural Network

The Kohonen network is an unsupervised approach to classification. The network consists of inputs and a network map, as shown in Figure 8. Each input corresponds to a bin of the total power vector and the output layer nodes correspond to a class signal. The network map comprises three class neurons. A class neuron is either the single winning neuron of the map or a losing neuron.
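The winner-take-all behavior of the three class neurons can be sketched as follows. The Euclidean-distance winner rule and the update equation are assumptions (the paper does not give them explicitly); the error/retry logic of the full training flow is omitted.

```python
import math

def winner(weights, x):
    # Winning neuron = smallest Euclidean distance to the input vector
    # (an assumed winner rule; the paper does not state the metric).
    dists = [math.dist(w, x) for w in weights]
    return dists.index(min(dists))

def train_step(weights, x, lr=0.1):
    # Move only the winning neuron's weights toward the input; with just
    # three class neurons there is no neighborhood term to apply.
    i = winner(weights, x)
    weights[i] = [w + lr * (v - w) for w, v in zip(weights[i], x)]
    return i
```

Repeated presentations of the training set pull each class neuron toward the centroid of the inputs it wins, which is what lets the map separate the three signal classes without labels.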
Figure 8. Kohonen Network Configuration (inputs I0 … In fully connected to Neurons 1, 2, and 3)

The input vectors are normalized to the range [-1, 1]. The network is trained using a known training set consisting of the five alpha values for every signal, for a total of twenty-five known noisy signals. The network is considered trained once either the target error rate has been achieved or the error has changed by only an insignificantly small amount. If the change in error is insignificantly small, training is aborted, the weights are randomly reassigned, and training begins again. This implementation used a learning rate of α = 0.1 and a maximum number of retries equal to 10,000. The process used for training is shown in Figure 9 (assign random weights → provide training vector → calc errors → if the error is accepted, run tests; otherwise adjust the weights with respect to the winning neuron, recalc the error, and check whether the error changed significantly or the maximum number of retries was exceeded).
Figure 9. Kohonen Network Training Process Flow

Once the Kohonen network has successfully completed training, noisy test signals of known classes are tested on the network. The number of tests used in this implementation is n = 10,000. The number of test signals is configurable, and the class definition can be either random or defined. The resulting neuron map produces an ON or OFF value for each neuron in the map, with the number of neurons set to ON not exceeding n = 1. The network classifies each signal and reports the percentage of signals correctly identified.

Adaptive Resonance Theory (ART)

An ART network is a neural network approach to classification [4]. This class of neural networks is a self-organizing pattern recognition code that responds to a random sequence of analog inputs. The network consists of inputs, in this case the bins of the total power vector described above, and a network map as shown in Figure 10. An ART network is comprised of two subsystems: the F1 or STM (short-term memory) layer and the F2 or LTM (long-term memory) layer.

Figure 10. Example ART Network (F2 nodes Yj connected through the LTM weights (bij, tji) to the F1 units W, X, U, V, P, and Q; the unit R implements the vigilance test on input I)

The F1 layer is made up of m neurons, where m is the dimension of the input vector. Each neuron of the F1 layer consists of 6 units (W, X, U, V, P, and Q). The primary function of the F1 layer is normalization of the input signal. To accomplish this, noise is filtered from the input by accentuating the salient portions of the input and suppressing the noise. Once the input is normalized, it is compared with the learned patterns of the F2 layer using the weights in the LTM. The F2 layer is made up of n neurons, where n is the number of learned patterns, together with a set of weights (bottom-up and top-down) connecting each F1 layer neuron to each F2 layer neuron. It serves as a competitive layer whereby the winning pattern is chosen as the F2 neuron with the highest activation, calculated using the LTM weights. A vigilance parameter ρ and the unit R are utilized to enforce a level of similarity between learned patterns.
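The search-and-resonance cycle just described can be sketched in a greatly simplified form. This sketch omits the F1 normalization dynamics of ART 2 entirely and assumes nonnegative inputs (as the total power bins are); the activation, match, and update rules below follow the general ART pattern but are not the paper's equations.

```python
def art_classify(patterns, x, vigilance=0.8, lr=0.5):
    # Greatly simplified ART-style matching sketch. `patterns` is the list
    # of learned prototypes (the F2 layer); vigilance and lr are assumed
    # values. Assumes nonnegative inputs such as binned power vectors.
    def norm(v):
        s = sum(abs(a) for a in v) or 1.0
        return [a / s for a in v]

    x = norm(x)
    # bottom-up pass: try categories in order of activation (dot product)
    order = sorted(range(len(patterns)),
                   key=lambda j: -sum(p * a for p, a in zip(patterns[j], x)))
    for j in order:
        p = patterns[j]
        # vigilance test: overlap between prototype and input
        match = sum(min(a, b) for a, b in zip(p, x)) / (sum(x) or 1.0)
        if match >= vigilance:
            # resonance: pull the prototype toward the matched overlap
            patterns[j] = [(1 - lr) * b + lr * min(a, b) for a, b in zip(x, p)]
            return j
    # every category reset: commit a new category for this input
    patterns.append(x)
    return len(patterns) - 1
```

Committing a new category when vigilance fails is what gives ART the incremental-learning property cited in the Conclusions: a new class can be added without retraining the existing prototypes.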
Results

Testing was performed on all four pattern recognition classifiers: Bayesian, Feedforward Neural Network with Back Propagation, Kohonen Neural Network, and Adaptive Resonance Theory (ART). The results are summarized in Table 1.

Classifier                                                           Nominal Accuracy (%)
Bayesian                                                             70
Feedforward Neural Network (1 Hidden Layer, 2 Nodes/Layer, 3 Outputs) 70
Kohonen Neural Network                                               65
ART

Table 1. Classifier Performance

Conclusions

Evaluation of the data shows that, of the classifiers tested, the ART methodology performed the best, achieving the highest nominal accuracy. In addition to greater accuracy, ART is more flexible in terms of adaptability, given that additional classes can be introduced to the network without the need to completely retrain. This capability is not afforded by the other classifiers. Future work will focus on the extraction of additional transient features to improve the accuracy of the classifiers. Such features may include data obtained through the use of Wavelet transforms. Additionally, the present Feed Forward Network was implemented using one hidden layer with two nodes per hidden layer. The effect of adding additional nodes will be investigated. An evaluation of Principal Component Analysis will also be conducted.

References

1. Sin, Sam-Kit and deFigueiredo, R. J. P., "A New Design Methodology for Optimal Interpolative Neural Networks with Application to the Localization and Classification of Acoustic Transients," IEEE Conference on Neural Networks for Ocean Engineering, 91CH30-3, August 1991.
2. Duda, Richard O., Hart, Peter E., and Stork, David G., Pattern Classification, John Wiley & Sons, Inc., 2001.
3. NumericalRecipes.com, Numerical Recipes in C, http://www.nr.com/
4. Carpenter, Gail A. and Grossberg, Stephen, "ART 2: self-organization of stable category recognition codes for analog input patterns," Applied Optics, Vol. 26, No. 23, December 1987.