Classifying Acoustic Transient Signals Using Artificial Intelligence


Steve Sutton, University of North Carolina at Wilmington (suttons@charter.net)
Greg Huff, University of North Carolina at Wilmington (jgh7476@uncwl.edu)

Abstract

Submarines need to identify hazardous projectiles with speed and accuracy. One method of identifying possible dangers is passive sonar: the practice of listening for acoustic anomalies. This paper describes multiple Artificial Intelligence methods for classifying acoustic transients. In addition, we address localization of transients (i.e., determining the location of a signal within a dataset).

Purpose

This paper presents results of research on acoustic transient classification (ATC). Specifically, efforts to detect, localize, and classify exemplar analog signals are discussed. The signals used in this research consist of three classes of acoustic transients generated using Eqs. 1, 2, and 3 [1].

Class 1 = c1(α1, κ) = exp(-α1(κ - 1)) * cos(100κ)          (1)
Class 2 = c2(α2, κ) = exp(-α2(κ - 1)) * cos(2κ)            (2)
Class 3 = c3(α3, κ) = exp(-α3(κ - 1)) * cos(10κ - π/2)     (3)

where κ is an integral value in the range κ = [0, 127] and α1 ∈ (0.3, 0.7), α2 ∈ (0.3, 0.7), and α3 ∈ (0.01, 0.1). In an attempt to better approximate real-world conditions, Gaussian white noise (GWN) was added to each of the signals. The GWN was computed using Eq. 4,

GWN = cos(2πR1) * sqrt(-2 * ln(R2))                        (4)

where R1 and R2 are random numbers in the range [0, 1]. Examples of the raw and noisy signals are shown in Figures 1, 2, and 3.
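As a hedged illustration, the signal set can be generated along the following lines. The damped-cosine forms, the specific cosine frequencies, and the Box-Muller form of the noise are reconstructions of Eqs. 1-4 and should be read as assumptions, not the authors' exact code:

```python
import numpy as np

KAPPA = np.arange(128)  # integral kappa in [0, 127]

def transient(alpha, freq, phase=0.0):
    """Damped cosine transient: exp(-alpha*(kappa - 1)) * cos(freq*kappa - phase)."""
    return np.exp(-alpha * (KAPPA - 1)) * np.cos(freq * KAPPA - phase)

def gwn(size, seed=0):
    """Box-Muller Gaussian white noise built from uniform deviates in [0, 1)."""
    rng = np.random.default_rng(seed)
    r1, r2 = rng.random(size), rng.random(size)
    return np.cos(2 * np.pi * r1) * np.sqrt(-2 * np.log(1 - r2))

# one exemplar per class, with noise added as in the paper
class1 = transient(0.5, 100) + gwn(128)
class2 = transient(0.5, 2) + gwn(128, seed=1)
class3 = transient(0.05, 10, phase=np.pi / 2) + gwn(128, seed=2)
```

In this sketch a different alpha value within each class's stated range would be drawn per exemplar to build the training set.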

Figure 1. Class 1 Raw and Noisy Signal.

Figure 2. Class 2 Raw and Noisy Signal.

Figure 3. Class 3 Raw and Noisy Signal.

Signal localization, the determination of the signal location within a set of data, was achieved through the use of a mathematical convolution operation within the process illustrated in Figure 4.

Figure 4. Signal Localization (the raw signal window and the known class are each passed through a 256-point FFT; their product yields a convolution value).

The process compares Fast Fourier Transform (FFT) [3] data generated for a known signal class with that of the unknown data set. To do this, a virtual window, or subset of data, of length n (the size of the known signal) is extracted from the unknown signal data set. FFT data is generated for the windowed data, and the product (i.e., convolution) of the known signal vector and the unknown signal vector is calculated, resulting in a single value. The process is repeated using the next windowed set of data from the unknown signal until the data set is exhausted. The windowed section resulting in the highest value is considered to be the area in which the signal is most likely present.

For classification purposes, a measure of total power was chosen as the feature on which each signal would be evaluated. The feature extraction process for each signal is illustrated in Figure 5.

Figure 5. Feature Extraction (Signal → FFT → Calculate Power → Divide/Sum into n Bins → Normalize → Classifier).
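A minimal sketch of this sliding-window search, assuming the per-window score is the magnitude of the summed product of the two spectra:

```python
import numpy as np

def localize(unknown, known):
    """Return the window offset in `unknown` that best matches `known`.

    Each window of length len(known) is transformed with an FFT and
    multiplied elementwise against the conjugated spectrum of the known
    signal; the magnitude of the sum is that window's score, and the
    highest-scoring offset is where the signal is most likely present."""
    n = len(known)
    known_fft = np.conj(np.fft.fft(known))
    scores = [
        np.abs(np.sum(np.fft.fft(unknown[start:start + n]) * known_fft))
        for start in range(len(unknown) - n + 1)
    ]
    return int(np.argmax(scores))
```

For example, a known transient embedded at offset 100 of an otherwise quiet 512-sample record is recovered at (or immediately adjacent to) that offset.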

A Fast Fourier Transform was applied to each signal, taking the signal from the time domain to the frequency domain. From the transformed data, power values were calculated according to Eq. 5.

power_i = real_i^2 + imaginary_i^2                         (5)

The power values were normalized and compressed into a total power vector. The total power vector was calculated by dividing the normalized data into n sections, where each section is represented as a bin value in the total power vector. The bin values are calculated according to Eq. 6,

bin[j] = Σ power[i], summed over the i in section j        (6)

The total power vector generated for each signal is used as input to the classifiers.

Methods

Bayesian Classifier

Bayesian decision theory is a fundamental statistical approach to the problem of pattern classification. It approaches classification from the probabilistic standpoint, weighing the cost associated with each decision, and assumes that some a priori probability for each class is known [2]. For each known class there exists a corresponding discriminant function, shown in Eq. 7,

g_i(x) = -(1/2)(x - µ_i)^t Σ_i^-1 (x - µ_i) - (1/2) ln|Σ_i| + ln P(ω_i)    (7)

where x is the unknown class vector, µ_i is the class mean vector, Σ_i is the covariance matrix, |Σ_i| is the determinant of Σ_i, and P(ω_i) is the class a priori probability.

Classification of unknowns involves several steps. First, exemplars for each known class are collected. Second, a priori probabilities are determined for each class. Third, using this data, a set of discriminant functions is created. Finally, the unknown class vector is fed to each discriminant function, and the class of the function yielding the highest value is assigned to the unknown.

Feed Forward Neural Network with Back Propagation

The feed forward neural network with back propagation uses the normalized total power vector for its inputs. The network is configured using the number of bins as the total number of inputs, one hidden layer with two nodes, and three output nodes, as shown in Fig. 6. Each input corresponds to a bin of the total power vector, and the output layer nodes correspond to a class signal.
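A compact sketch of the binned-power feature (Eqs. 5 and 6) and the discriminant of Eq. 7, assuming NumPy and a well-conditioned covariance estimate:

```python
import numpy as np

def total_power_vector(signal, n_bins):
    """Eqs. 5-6: FFT, per-frequency power, normalize, then sum into n bins."""
    spectrum = np.fft.fft(signal)
    power = spectrum.real**2 + spectrum.imag**2                        # Eq. 5
    power = power / power.sum()                                        # normalize
    return np.array([s.sum() for s in np.array_split(power, n_bins)])  # Eq. 6

def discriminant(x, mu, cov, prior):
    """Gaussian discriminant g_i(x) of Eq. 7 for one class."""
    diff = x - mu
    return (-0.5 * diff @ np.linalg.inv(cov) @ diff
            - 0.5 * np.log(np.linalg.det(cov))
            + np.log(prior))

def classify(x, mus, covs, priors):
    """Assign x to the class whose discriminant value is highest."""
    scores = [discriminant(x, m, c, p) for m, c, p in zip(mus, covs, priors)]
    return int(np.argmax(scores))
```

Here the class means, covariances, and priors would be estimated from the collected exemplars of each known class.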

The bias for all hidden nodes and output nodes is set equal to 1, and every edge in the network has a respective weight.

Figure 6. Feed Forward Neural Network (inputs I0 ... In, one hidden layer of two nodes, three output nodes; each hidden and output node carries a bias).

The Feed Forward Neural Network (FFNN) is trained using the five alpha values for every signal, equating to a total of twenty-five known noisy signals. The process for training the network follows the flow in Figure 7. The output nodes are trained to o = 0.1 for a losing node and o = 0.9 for a winning node. Training is considered complete once the Root Mean Squared Error (RMSE) has crossed the appropriate threshold; this implementation was optimized with the threshold equal to 0.79.
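The training loop of Figure 7 can be sketched roughly as follows; the learning rate, stopping threshold, and weight initialization here are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_ffn(X, T, n_hidden=2, lr=0.5, rmse_target=0.05, max_epochs=10000, seed=0):
    """Batch backprop for an n-input / 2-hidden / 3-output sigmoid network.
    Stops once the RMSE drops below rmse_target (an illustrative threshold)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1] + 1, n_hidden))  # +1 row: bias input of 1
    W2 = rng.normal(0.0, 0.5, (n_hidden + 1, T.shape[1]))
    Xb = np.hstack([X, np.ones((len(X), 1))])
    rmse = np.inf
    for _ in range(max_epochs):
        H = sigmoid(Xb @ W1)                          # calc hidden layer
        Hb = np.hstack([H, np.ones((len(H), 1))])
        O = sigmoid(Hb @ W2)                          # calc output layer
        err = T - O                                   # calc errors
        rmse = np.sqrt(np.mean(err ** 2))             # calc RMSE
        if rmse < rmse_target:                        # test threshold
            break
        d_out = err * O * (1 - O)                     # output-layer deltas
        d_hid = (d_out @ W2[:-1].T) * H * (1 - H)     # hidden deltas (skip bias row)
        W2 += lr * Hb.T @ d_out                       # apply weight adjustments
        W1 += lr * Xb.T @ d_hid
    return W1, W2, rmse

def predict(W1, W2, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    Hb = np.hstack([sigmoid(Xb @ W1), np.ones((len(X), 1))])
    return sigmoid(Hb @ W2)
```

The 0.1/0.9 target encoding appears as the rows of T, one winning output node per class.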

Figure 7. Feed Forward Network Process Flow (for each training vector: calculate the hidden and output layers, calculate errors, compute and apply weight adjustments, then calculate the TSSE and RMSE and test against the threshold; on FAIL the loop repeats, on PASS testing begins).

Once the FFNN has completed training, noisy test signals of known classes are run through the network. The number of test signals is configurable, and the class definition can be either random or defined. The winning decision is the class whose corresponding output node is nearest o = 0.9. The network classifies each signal and reports the percentage of signals correctly identified.

Kohonen Neural Network

The Kohonen network is an unsupervised approach to classification. The network consists of inputs and a network map, as shown in Figure 8. Each input corresponds to a bin of the total power vector, and the network map comprises three class neurons. A class neuron is either the single winning neuron of the map or a losing neuron.

Figure 8. Kohonen Network configuration (inputs I0 ... In fully connected to Neurons 1, 2, and 3).

The input vectors are normalized to the range [-1, 1]. The network is trained using a known training set consisting of the five alpha values for every signal, equating to a total of twenty-five known noisy signals. The network is considered trained once either the target error rate has been achieved or the change in error between iterations has become insignificantly small. If the change in error is insignificantly small, training is aborted, the weights are randomly reassigned, and training begins again. This implementation used a learning rate of α = 0.1 and a maximum number of retries equal to 10,000. The process used for training is shown in Figure 9.
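A winner-take-all sketch of this training scheme, assuming simple Euclidean matching and omitting the restart-on-stall logic described above:

```python
import numpy as np

def train_kohonen(X, n_neurons=3, lr=0.1, epochs=200, seed=0):
    """Pull only the winning neuron's weights toward each training vector."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, (n_neurons, X.shape[1]))  # inputs lie in [-1, 1]
    for _ in range(epochs):
        for x in X:
            winner = np.argmin(np.linalg.norm(W - x, axis=1))
            W[winner] += lr * (x - W[winner])            # adjust winner only
    return W

def winning_neuron(W, x):
    """The single ON neuron for input x; all other neurons are OFF."""
    return int(np.argmin(np.linalg.norm(W - x, axis=1)))
```

After training, each of the three neurons tends to settle near one cluster of inputs, so nearby inputs share the same ON neuron.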

Figure 9. Kohonen Network Training Process Flow (assign random weights, present training vectors, calculate the error, adjust the weights of the winning neuron; if the error stops changing significantly, reassign random weights and retry, up to the maximum number of retries).

Once the Kohonen network has successfully completed training, noisy test signals of known classes are run through the network. The number of tests used in this implementation is n = 10,000. The number of test signals is configurable, and the class definition can be either random or defined. The resulting neuron map produces an ON or OFF value for each neuron in the map, with at most n = 1 neuron set to ON. The network classifies each signal and reports the percentage of signals correctly identified.

Adaptive Resonance Theory (ART)

An ART network is a neural network approach to classification [4]. This class of neural networks is a self-organizing pattern recognition architecture that responds to a random sequence of analog inputs. The network consists of inputs, in this case the bins of the total power vector described above, and a network map, as shown in Figure 10. An ART network comprises two subsystems: the F1 or STM (short-term memory) layer and the F2 or LTM (long-term memory) layer.

Figure 10. Example ART Network (F1 units W, X, U, V, P, and Q; F2 category nodes Y_j; bottom-up and top-down weights b_j, t_j; and the reset unit R).

The F1 layer is made up of m neurons, where m is the dimension of the input vector. Each neuron of the F1 layer consists of 6 units (W, X, U, V, P, and Q). The primary function of the F1 layer is normalization of the input signal: noise is filtered from the input by accentuating the salient portions of the input and suppressing the noise. Once the input is normalized, it is compared with learned patterns in the F2 layer using the weights in the LTM. The F2 layer is made up of n neurons, where n is the number of learned patterns, with a set of weights (bottom-up and top-down) connecting each F1 layer neuron to each F2 layer neuron. F2 serves as a competitive layer whereby the winning pattern is chosen by the F2 neuron with the highest activation, calculated using the LTM weights. A vigilance parameter ρ and the reset unit R are utilized to enforce a level of similarity between learned patterns and the input.
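ART 2's F1 dynamics are too involved for a short sketch, but the resonance-versus-reset idea can be illustrated with a toy ART 1-style loop over binary inputs; the vigilance test and fast-learning intersection below follow ART 1, not the analog ART 2 network used in the paper:

```python
import numpy as np

def art1_sketch(patterns, vigilance=0.7):
    """Toy ART 1-style clustering of binary patterns.

    Candidate categories are tried in order of match strength; a category
    resonates if the overlap with its template passes the vigilance test,
    otherwise it is reset and the next candidate is tried.  With no
    resonance anywhere, a new category is created."""
    prototypes = []   # learned top-down templates
    labels = []
    for p in patterns:
        p = np.asarray(p, dtype=bool)
        assigned = False
        order = sorted(range(len(prototypes)),
                       key=lambda j: -np.sum(prototypes[j] & p))
        for j in order:
            match = np.sum(prototypes[j] & p) / max(np.sum(p), 1)
            if match >= vigilance:                   # resonance: pattern fits
                prototypes[j] = prototypes[j] & p    # fast learning: intersect
                labels.append(j)
                assigned = True
                break
        if not assigned:                             # reset everywhere: new category
            prototypes.append(p.copy())
            labels.append(len(prototypes) - 1)
    return labels, prototypes
```

New categories appear on demand, which is the property the Conclusions credit to ART: additional classes can be absorbed without retraining the existing ones.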

Results

Testing was performed on all four pattern recognition classifiers: Bayesian, Feedforward Neural Network with Back Propagation, Kohonen Neural Network, and Adaptive Resonance Theory (ART). The results are summarized in Table 1.

Classifier                                                               Nominal Accuracy (%)
Bayesian                                                                 70
Feedforward Neural Network (1 hidden layer, 2 nodes/layer, 3 outputs)    70
Kohonen Neural Network                                                   65
ART

Table 1. Classifier Performance

Conclusions

Evaluation of the data shows that, of the classifiers tested, the ART methodology performed the best in nominal accuracy. In addition to greater accuracy, ART is more flexible in terms of adaptability, given that additional classes can be introduced to the network without the need to completely retrain. This capability is not afforded by the other classifiers. Future work will focus on extracting additional transient features to improve the accuracy of the classifiers. Such features may include data obtained through the use of wavelet transforms. Additionally, the present feed forward network was implemented using one hidden layer with two nodes per hidden layer; the effect of adding nodes will be investigated. An evaluation of Principal Component Analysis will also be conducted.

References

1. Sin, Sam-Kit, and deFigueiredo, R. J. P., "A New Design Methodology for Optimal Interpolative Neural Networks with Application to the Localization and Classification of Acoustic Transients," IEEE Conference on Neural Networks for Ocean Engineering, 91CH30-3, August 15, 1991.
2. Duda, Richard O., Hart, Peter E., and Stork, David G., Pattern Classification, John Wiley & Sons, Inc., 2001.
3. Numerical Recipes in C, http://www.nr.com/.
4. Carpenter, Gail A., and Grossberg, Stephen, "ART 2: self-organization of stable category recognition codes for analog input patterns," Applied Optics, Vol. 26, No. 23, December 1, 1987.