Radial Basis Function (RBF) Neural Networks Based on the Triple Modular Redundancy Technology (TMR)


Yaobin Qin (qinxx143@umn.edu)
Supervisor: Prof. Lilja
Department of Electrical and Computer Engineering

Abstract

Neural networks are a family of statistical learning algorithms and structures used to estimate or approximate functions and to classify patterns. A neural network system is constructed from interconnected neurons and trained weights. This paper presents improvements in recognition rate, recognition time, and hardware overhead obtained by introducing TMR technology into the conventional RBF neural network, a simple neural network consisting of only three layers.

Introduction

Because of its ability to approximate non-linear functions, the Radial Basis Function (RBF) neural network is employed for functional approximation in time-series modeling and for pattern classification, in applications such as potential functions, clustering, functional approximation, and spline interpolation [1]. An RBF neural network is a simple neural network consisting of three layers: an input, a hidden, and an output layer. The input layer receives the features of the pattern. In the hidden layer, each interconnection between input and hidden units implements a radially activated function, whose choice depends on the specific application. The output layer computes a linear combination of the hidden-unit outputs to identify the specific pattern.

Given an RBF neural network, we need to specify the activation functions, the number of neurons, the rules for modeling a given task, and the characteristic parameters of the network. In general, two functions, the thin-plate spline and the Gaussian function, are most often used as the activation function between the input and hidden layers [1]. While the thin-plate spline function is mostly used in time-series modeling, the Gaussian function is frequently used in pattern classification [1]. Finding the parameters, called weights, based on the training data at hand is called network training. In a supervised neural network, the training set is given in the form of input-output pairs, and the training algorithm adapts the weights to fit the changing training set. After training, the RBF neural network can be used to identify patterns in input data similar to the training set. This project focuses only on pattern classification implemented by a supervised RBF neural network with a Gaussian activation function.

RBF neural networks for pattern classification have been successfully applied to Iris flower recognition and number recognition [2]. For Iris flower recognition, a 3x8x4 RBF neural network recognizes three species of flower based on four features: the lengths and widths of the sepal and petal. For number recognition, a more complicated neural network recognizes the numbers 0 to 9 injected with different levels of Gaussian error. However, the recognition time grows greatly with the increasing complexity of the network, and a complicated network makes placement and routing of logic elements difficult. This suggests an idea: if we can build three simpler neural networks instead of a single complicated one, the logic elements of the network become more

flexible and easier to place and route. In this project, I introduce Triple Modular Redundancy (TMR) technology [3] into the RBF neural network, which is expected to reduce the run time and increase the recognition rate without increasing the hardware overhead too much. Focusing on the case of recognizing 7x8-pixel numbers, the first part of this paper introduces the basic structure of the RBF neural network. The second part discusses how to introduce the TMR technique into the neural network. The final part compares the recognition rate, run time, and hardware overhead of the single network with those of the network under TMR.

RBF Network

Figure 1 shows the structure of the RBF network [2]. The input layer consists of 56 inputs denoted by the vector x_i (i = 1, 2, ..., 56), representing the 7x8 pixels of each number. The neurons in the hidden layer implement the Gaussian activation function to extract the pixel features from the input vector. Mathematically, the output of the jth hidden neuron (j = 1, 2, ..., J) is

    y_j = exp(-||x_i - c_ij||^2 / σ^2)    (1)

where c_ij is the center of the Gaussian function and σ controls the shape of y_j. The output layer consists of 10 units denoted by z_k (k = 1, 2, ..., 10), representing the numbers 0 to 9. The outputs of the overall network are computed as a linear combination of the y_j:

    z_k = Σ_j y_j w_jk + b_k    (2)

where w_jk is the weight matrix of the linear output layer and b_k is the bias vector. The values of c_ij, σ, w_jk, and b_k are the four sets of weights of the network and are determined through training (Figure 2). Given a large enough set of training images, the recognition ability varies mainly with the number of neurons.

Figure 1: Basic structure of the RBF neural network (input layer i = 1, ..., I; hidden layer j = 1, ..., J; output layer k = 1, ..., K).

Figure 2: Training of the neural network: the training set determines the weights c_ij, σ, w_jk, and b_k.
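To make equations (1) and (2) concrete, the following is a minimal C++ sketch of the forward pass, assuming the trained parameters (centers, width σ, output weights, and biases) are already available. The function name, container layout, and the single shared σ are illustrative assumptions, not taken from the project's actual implementation.

```cpp
// Minimal sketch of an RBF forward pass (Gaussian hidden layer + linear
// output layer). Dimensions follow the paper: I = 56 inputs (7x8 pixels),
// J hidden neurons, K = 10 outputs (digits 0-9).
#include <cmath>
#include <vector>

std::vector<double> rbf_forward(const std::vector<double>& x,              // I inputs
                                const std::vector<std::vector<double>>& c, // J centers, each of size I
                                double sigma,                              // shared Gaussian width
                                const std::vector<std::vector<double>>& w, // J x K output weights
                                const std::vector<double>& b) {            // K biases
    const std::size_t J = c.size();
    const std::size_t K = b.size();

    // Hidden layer: y_j = exp(-||x - c_j||^2 / sigma^2)   (equation 1)
    std::vector<double> y(J);
    for (std::size_t j = 0; j < J; ++j) {
        double dist2 = 0.0;
        for (std::size_t i = 0; i < x.size(); ++i) {
            double d = x[i] - c[j][i];
            dist2 += d * d;
        }
        y[j] = std::exp(-dist2 / (sigma * sigma));
    }

    // Output layer: z_k = sum_j y_j * w_jk + b_k          (equation 2)
    std::vector<double> z(K);
    for (std::size_t k = 0; k < K; ++k) {
        z[k] = b[k];
        for (std::size_t j = 0; j < J; ++j) z[k] += y[j] * w[j][k];
    }
    return z;  // the recognized digit is the index k with the largest z[k]
}
```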

TMR System

The TMR technique, essentially a two-out-of-three voting concept, has been applied to digital computers in many forms of redundancy to meet reliability requirements [3]. The goal of this paper is to introduce the TMR technique into the RBF neural network. As mentioned above, the recognition ability of the network depends mainly on the number of neurons, given a large enough set of training images. Figure 3 shows how the TMR technique is applied to the conventional RBF neural network (called OMR here): the single network with n neurons is divided into three sub-networks with n/3 neurons each, and each sub-network is trained separately to obtain three different sets of weights. Given a recognition task, the three sub-networks execute the task independently, and the final output is determined by two-out-of-three voting, as sketched in the code below.

Figure 3: The neural network under TMR.

Comparison of TMR with OMR

A. Performance (Recognition Rate)

Figure 4 shows the numbers injected with different levels of Gaussian error (deviation from 0.1 to 0.4). To compare the recognition abilities of the network under TMR and OMR, networks in both schemes were built and tested (see Figure 3). Figure 5 shows the recognition rates measured for the networks based on the TMR and OMR systems. We can see that TMR improves performance by 1.0% to 2.0% compared with OMR.

Figure 4: Numbers injected with different levels of Gaussian error.
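As a rough illustration of the voting step described above, here is a minimal C++ sketch that combines the digit predictions of the three sub-networks (using rbf_forward from the previous sketch). The tie-breaking rule on full disagreement is an assumption, since the paper does not specify one.

```cpp
// Minimal sketch of two-out-of-three voting over three sub-network outputs.
#include <algorithm>
#include <vector>

// Index of the largest output, i.e., the digit a sub-network votes for.
int argmax(const std::vector<double>& z) {
    return static_cast<int>(std::max_element(z.begin(), z.end()) - z.begin());
}

// If at least two sub-networks agree, their digit wins. On full disagreement
// this sketch falls back to the first vote (assumption, not in the paper).
int tmr_vote(int d1, int d2, int d3) {
    if (d1 == d2 || d1 == d3) return d1;
    if (d2 == d3) return d2;
    return d1;
}

// Usage: int digit = tmr_vote(argmax(z1), argmax(z2), argmax(z3));
// where z1, z2, z3 are the outputs of the three separately trained sub-networks.
```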

Figure 5: Recognition rate of the network in OMR and TMR versus the deviation of the injected Gaussian error (dev = 0.1 to 0.4).

A mathematical quantification is used to verify the performance improvement of the RBF neural network under TMR. From basic probability, the recognition rate of the network in TMR (assuming independent sub-networks with equal recognition rates) is

    P = r^3 + 3r^2(1 - r)    (3)

where P is the recognition rate of the overall network in TMR and r is the recognition rate of each sub-network. Figure 6 shows the recognition rate for different numbers of neurons in the different networks. In the graphs, the black lines show the trend of the recognition rate with increasing neurons for the OMR network; the dashed lines show the trend for the TMR network predicted by equation (3); and the circled lines show the trend measured in the actual experiment. According to Figure 6, the line derived from equation (3) is very close to the line plotted from the experiment, so using equation (3) to compute the overall recognition rate of the network in TMR is reasonable. The remaining difference between the theoretical and experimental TMR lines is mainly due to two factors:

1. the recognition rates of the sub-networks are not all the same;
2. the sub-networks are correlated with each other.

For the first factor, equation (3) can be generalized to different recognition rates for the sub-networks:

    P = r1 r2 r3 + (1 - r1) r2 r3 + r1 (1 - r2) r3 + r1 r2 (1 - r3)    (4)

where r1, r2, and r3 are the recognition rates of the three sub-networks. For the second factor, the two examples shown in Figure 7 test whether the recognition rate of the network in TMR is affected by correlation between the sub-networks. In Example 1, the recognition rate measured in the experiment, P(exp), is very close to the value P(theory) computed with equation (4); the correlation coefficient between each pair of sub-networks is approximately zero, meaning the sub-networks are uncorrelated. In Example 2, however, the theoretical value of 89.42% drops to 84.1% in the experiment.

This reduction occurs because each pair of the three sub-networks is positively correlated, as indicated by the correlation coefficients shown in Figure 7. Thus an RBF neural network in TMR can achieve high performance when its three sub-networks are uncorrelated.

Figure 6: Recognition rate for different levels of injected Gaussian error under the different schemes.

Example 1 (total neurons = 300, dev = 0.1):
    recognition rates:  r1 = 98.10%,  r2 = 99.60%,  r3 = 92.00%
    correlation coefficients:  (r1,r2) = -0.0088,  (r1,r3) = -0.0411,  (r2,r3) = -0.0187
    P(exp) = 99.81%,  P(theory) = 100%

Example 2 (total neurons = 360, dev = 0.4):
    recognition rates:  r1 = 79.00%,  r2 = 77.30%,  r3 = 83.00%
    correlation coefficients:  (r1,r2) = 0.3771,  (r1,r3) = 0.4856,  (r2,r3) = 0.3458
    P(exp) = 84.10%,  P(theory) = 89.42%

Figure 7: Recognition rates under different correlation coefficients.
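A quick way to check equations (3) and (4) is to evaluate them directly. The following minimal C++ sketch does so; it reproduces the theoretical value of Example 2, and the printed numbers come from the equations themselves, under the independence assumption those equations encode.

```cpp
// Minimal sketch of equations (3) and (4): expected TMR recognition rate
// for equal and unequal sub-network rates, assuming independence.
#include <cstdio>

// Equation (3): all three sub-networks share recognition rate r.
double tmr_rate_equal(double r) {
    return r * r * r + 3.0 * r * r * (1.0 - r);
}

// Equation (4): sub-networks have individual rates r1, r2, r3.
double tmr_rate(double r1, double r2, double r3) {
    return r1 * r2 * r3
         + (1 - r1) * r2 * r3
         + r1 * (1 - r2) * r3
         + r1 * r2 * (1 - r3);
}

int main() {
    // Three 95% sub-networks yield ~99.3% under two-out-of-three voting.
    std::printf("equal rates:   %.4f\n", tmr_rate_equal(0.95));          // 0.9928
    // Example 2 from Figure 7: r1=79.00%, r2=77.30%, r3=83.00%.
    std::printf("Example 2:     %.4f\n", tmr_rate(0.79, 0.773, 0.83));   // 0.8942
    return 0;
}
```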

B. Run Time

After applying the TMR technique to the RBF neural network, the three sub-networks execute the recognition task in parallel. That is, while the measured run time of the OMR network is that of a network with n neurons, the run time after introducing TMR is that of a network with only n/3 neurons. The time complexity as a function of the number of neurons can be derived mathematically:

    T = I·J·t1 + J·K·t2 = J·(I·t1 + K·t2)

where I is the number of inputs, J is the number of neurons, K is the number of outputs, t1 is the time cost of each interconnection between the input and hidden layers, and t2 is the time cost of each interconnection between the hidden and output layers. The time complexity T of the network is therefore linear in the number of neurons J, so the network in TMR cuts the run time to roughly one third of that of the OMR network with the same total number of neurons, which matches the Matlab simulation.

C. Hardware Cost

The hardware cost is measured by counting how many ALUs are used for the operations and how much memory is needed to store the training weights for the task of recognizing the numbers injected with different levels of Gaussian error, based on the C++ code. The hardware cost is computed from the following equations:

ALUs (adders, exponentials, multipliers, roots):
    OMR = 477 × neurons + 10
    TMR = 477 × neurons + 30

Memory (c_ij, σ, w_jk, and b_k):
    OMR = 67 × neurons + 10
    TMR = 67 × neurons + 30

From these equations, the ALU and memory costs are very close for the same total number of neurons.

Conclusion

An RBF neural network was set up to recognize the numbers 0 to 9 injected with Gaussian errors of different deviation levels. Two schemes, OMR (the traditional scheme) and TMR (the new scheme), were applied to the network. Keeping the hardware overhead approximately the same (the same total number of neurons in the network), the recognition rates and execution times were tested in both schemes at different error levels. The results show that both the recognition rate and the run time improve under TMR compared with OMR, and the project verified these improvements with a mathematical quantification. In the future, I hope to apply the TMR technology to different neural networks to evaluate its performance and cost.
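As a small illustrative addendum, the following sketch evaluates the run-time model and the hardware-cost equations from sections B and C for a concrete network size. The unit interconnection costs t1 and t2 are placeholders (assumptions), not measured values.

```cpp
// Minimal sketch of the run-time model T = J*(I*t1 + K*t2) and the hardware
// cost equations, comparing OMR (one network, n neurons) against TMR (three
// parallel sub-networks of n/3 neurons each).
#include <cstdio>

double run_time(int I, int J, int K, double t1, double t2) {
    return J * (I * t1 + K * t2);  // linear in the neuron count J
}

int main() {
    const int I = 56, K = 10, n = 300;     // 7x8-pixel inputs, 10 digits
    const double t1 = 1.0, t2 = 1.0;       // placeholder unit costs
    double t_omr = run_time(I, n, K, t1, t2);
    double t_tmr = run_time(I, n / 3, K, t1, t2);  // sub-networks run in parallel
    std::printf("speedup: %.1fx\n", t_omr / t_tmr); // ~3x for equal sub-networks

    // Hardware cost equations from section C (ALU and weight-memory counts):
    int alus_omr = 477 * n + 10, alus_tmr = 477 * n + 30;
    int mem_omr  = 67 * n + 10,  mem_tmr  = 67 * n + 30;
    std::printf("ALUs: OMR=%d TMR=%d  Memory: OMR=%d TMR=%d\n",
                alus_omr, alus_tmr, mem_omr, mem_tmr);
    return 0;
}
```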

References

[1] Bors, Adrian. "Introduction of the Radial Basis Function (RBF) Networks."
[2] Ji, Yuan, and Feng Ran. "Using Stochastic Logics in Hardware Implementation of Radial Basis Function Neural Network." Proceedings of the 2015 Design, Automation & Test in Europe Conference & Exhibition, 2015, pp. 880-883.
[3] Lyons, R. E., and W. Vanderkulk. "The Use of Triple-Modular Redundancy to Improve Computer Reliability." IBM Journal of Research and Development, vol. 6, no. 2, 1962, pp. 200-209.