CHAPTER 6 IMPLEMENTATION OF RADIAL BASIS FUNCTION NEURAL NETWORK FOR STEGANALYSIS


6.1 INTRODUCTION

The concept of a distance measure is used to associate the input and output pattern values. RBFs use this distance measure to produce a function approximation or pattern mapping: an approximation to an unknown function can be obtained by passing an input reference point through a set of basis functions. Each basis function holds one of the RBF centers; the result of each function is multiplied by a coefficient, and the products are then summed in linear fashion. For each function f(x) being approximated, the approximation is essentially stored in the coefficients and centers of the RBF. These parameters are in no way unique, since for each function f(x) being approximated many combinations of parameter values exist. RBFs have the following mathematical representation:

F(x) = c_0 + Σ_{i=1}^{N} c_i φ(‖x − R_i‖)  (6.1)

where c is a vector containing the coefficients of the RBF, R is a vector containing the centers of the RBF, and φ is the basis function or activation function of the network. F(x) is the approximation produced as the output of the network. The coefficient c_0, which is a bias term, may take the value 0 if no bias is present. The norm used is the Euclidean distance norm. Equation (6.2) shows the Euclidean distance for a vector x containing n elements:

‖x‖ = ( Σ_{i=1}^{n} x_i² )^{1/2}  (6.2)
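As a concrete illustration, Equations (6.1) and (6.2) can be sketched in a few lines of NumPy. This is a hedged sketch: the centers, coefficients, and the Gaussian basis with unit width are illustrative choices, not parameters from the experiments in this chapter.

```python
import numpy as np

def rbf_output(x, centers, coeffs, phi):
    """Evaluate F(x) = c_0 + sum_i c_i * phi(||x - R_i||), Eq. (6.1).

    coeffs[0] is the bias c_0; coeffs[1:] pair with the rows of centers.
    """
    # Euclidean distance from x to every center, Eq. (6.2)
    dists = np.linalg.norm(centers - x, axis=1)
    return coeffs[0] + np.dot(coeffs[1:], phi(dists))

# Gaussian basis with sigma = 1 (an illustrative default)
phi = lambda r: np.exp(-r**2 / 2.0)

centers = np.array([[0.0, 0.0], [1.0, 1.0]])
coeffs = np.array([0.0, 1.0, 1.0])          # bias c_0 = 0
y = rbf_output(np.array([0.0, 0.0]), centers, coeffs, phi)
```

Because the network depends on the input only through distances to the centers, inputs equidistant from the centers produce identical outputs.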

Each center R_j has the same dimension as the input vector x, which contains n input values. The centers act as reference points within the input data space and are chosen so that they are representative of the input data. The Euclidean distance is used when the RBF computes the distance between an input point and the centers. The obtained distances are given as input to the basis functions to generate results weighted by the coefficients c_i, which are then linearly summed to produce the overall RBF output. The popular choice for the basis function is the Gaussian:

φ(x) = exp(−x² / (2σ²))  (6.3)

where σ is a scaling parameter. Other choices for the basis function include the thin plate spline, the multiquadric and the inverse multiquadric. RBFs can be represented by a network structure, like any other approximation-based neural network. The input layer provides the elements of the input vector to all the hidden nodes. Each node in the hidden layer holds one of the RBF centers and computes the distance between the input vector and its center, generating a scalar value that depends upon the center it holds. The outputs of the hidden-layer nodes are passed to the output layer via weighted connections; each connection between the hidden and output layers is weighted with the relevant coefficient. The node in the output layer sums its inputs to produce the network output. If an output of many dimensions is required, then several output nodes are needed, one for each output dimension. Several sets of coefficients will also be required, one set for the connections to each output node.
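For reference, the basis-function choices named above can be written out as follows. This is a sketch: the σ and c shape parameters default to 1 purely for illustration, and the thin plate spline is extended to r = 0 by continuity.

```python
import numpy as np

def gaussian(r, sigma=1.0):
    """Gaussian basis, Eq. (6.3): exp(-r^2 / (2 sigma^2))."""
    return np.exp(-r**2 / (2.0 * sigma**2))

def thin_plate_spline(r):
    """r^2 * ln(r); taken as 0 at r = 0 by continuity."""
    r = np.asarray(r, dtype=float)
    return np.where(r > 0, r**2 * np.log(np.maximum(r, 1e-300)), 0.0)

def multiquadric(r, c=1.0):
    """sqrt(r^2 + c^2); grows with distance, unlike the Gaussian."""
    return np.sqrt(r**2 + c**2)

def inverse_multiquadric(r, c=1.0):
    """1 / sqrt(r^2 + c^2); decays with distance, like the Gaussian."""
    return 1.0 / np.sqrt(r**2 + c**2)
```

Note the qualitative difference: the Gaussian and inverse multiquadric are localized (they decay away from the center), while the multiquadric and thin plate spline grow with distance.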

Fig. 6.1 RBF neural network

6.2 IMPLEMENTATION OF RBF

Figure 6.1 represents the RBF neural network. It has 2 nodes in the input layer, 3 nodes in the hidden layer and 2 nodes in the output layer. Every function can be uniquely identified by its inherent properties, and the centers must suit this particular class of problems. The selection of the position and the number of centers is similar to the problem of choosing the number and initial values of weights in other networks, for which no unique method exists; however, algorithms such as back propagation can be used to obtain the initial weight values. A best approximation can be produced when the optimal number of centers is identified. Neither very few nor very many centers should be chosen, since either may lead to a poor approximation. It is very important to maintain equilibrium between the number of centers and the amount of training data. Many ways exist to choose the centers for an RBF:

1. Fixed centers.
2. Centers which move in a self-organized fashion.
3. Centers chosen using a supervised learning process (e.g., gradient descent).
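Option 1, fixed centers drawn from the training data, might be sketched as follows. The helper name `choose_fixed_centers` is illustrative, not code from this chapter.

```python
import numpy as np

def choose_fixed_centers(patterns, n_centers, seed=None):
    """Pick RBF centers as a random subset of the training patterns,
    so every center is guaranteed to lie in the input data space."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(patterns), size=n_centers, replace=False)
    return patterns[idx]
```

Sampling without replacement keeps the centers distinct, which avoids duplicate (and therefore linearly dependent) columns in the basis-activation matrix built later.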

Centers can be chosen from the input training data space when they are fixed. Several ways exist to locate the centers within this space, including random or uniform distributions, Meng et al. [77]. The flow chart for implementing the RBF neural network is shown in Figure 6.2:

1. Read patterns from cover images.
2. Create centers.
3. Create an RBF for each center.
4. Compute G = RBF^T * RBF.
5. Compute the determinant D = det(G).
6. If D == 0, find the SVD: G = U * W * V^T.
7. Compute B = inv(G).
8. Compute E = B * G^T.
9. Compute the final weights F = E * Target and store them in a file.

Fig. 6.2 Flow chart for the implementation of steganalysis using RBF
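A minimal NumPy sketch of the weight computation in the flow chart follows. A Gaussian basis is assumed for illustration, and the singular case (the D == 0 branch) falls back to NumPy's SVD-based pseudoinverse rather than an explicit U * W * V^T decomposition.

```python
import numpy as np

def train_rbf_weights(patterns, centers, targets, sigma=1.0):
    """Follow the flow chart: build the matrix of basis activations,
    then solve F = inv(G^T G) * G^T * Target, switching to an
    SVD-based pseudoinverse when the determinant is zero."""
    # G[p, j] = exp(-||pattern_p - center_j||^2 / (2 sigma^2))
    dists = np.linalg.norm(patterns[:, None, :] - centers[None, :, :], axis=2)
    G = np.exp(-dists**2 / (2.0 * sigma**2))
    A = G.T @ G
    if np.linalg.det(A) != 0.0:
        E = np.linalg.inv(A) @ G.T
    else:
        E = np.linalg.pinv(G)   # SVD route, G = U * W * V^T
    return E @ targets          # final weights F, stored to a file in the text
```

When the centers coincide with the training patterns and G is nonsingular, these weights interpolate the targets exactly; with fewer centers than patterns, the same formula gives the least-squares fit.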

Fig. 6.3 Detection of message location using RBF

In Figure 6.3, the original location of the message refers to the actual information in the image, and the detected information indicates that the suspect image is a steganographic one. Figure 6.3 presents the information detected in the cover image. Some of the sample cover images and steganographic images are given in Table 4.1; the information is detected from these steganographic images. The remaining pixels represent the pixels in the cover image.

6.2.1 COMBINING BPA / FUBPA AND RBF

An effort has been made to combine BPA with RBF and FUBPA with RBF for the steganalysis of covert information. The performance of BPARBF as well as FUBPARBF is appreciable compared to the individual performance of each algorithm (BPA / RBF / FUBPA). Training and testing performance is improved when the two algorithms are combined instead of being used separately, which leads to promising results.
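As a rough sketch of the cascade, the trained BPA's output vector simply becomes the input pattern of the RBF stage. The helper names, the single sigmoid hidden layer, and the Gaussian RBF stage below are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def bpa_forward(x, W_hidden, W_out):
    """Feedforward pass of a one-hidden-layer BPA network (sigmoid units)."""
    h = 1.0 / (1.0 + np.exp(-(W_hidden @ x)))
    return 1.0 / (1.0 + np.exp(-(W_out @ h)))

def rbf_forward(x, centers, weights, sigma=1.0):
    """RBF stage: Gaussian activations of distances to the centers."""
    dists = np.linalg.norm(centers - x, axis=1)
    return weights @ np.exp(-dists**2 / (2.0 * sigma**2))

def bparbf_forward(x, W_hidden, W_out, centers, weights):
    """BPARBF cascade: the BPA output feeds the RBF input layer."""
    return rbf_forward(bpa_forward(x, W_hidden, W_out), centers, weights)
```

Replacing `bpa_forward` with the corresponding FUBPA forward pass gives the FUBPARBF cascade in the same way.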

Fig. 6.4 BPARBF neural network

Fig. 6.5 FUBPARBF neural network

Certainly the combined ANN algorithm of FUBPA and RBF provides better performance than combining BPA and RBF, but organizing the retrieved information is still a challenging task.

Training RBF
Step 1: Initialize the number of inputs to the number of nodes in the output layer of BPA / FUBPA.
Step 2: Create as many centers as there are patterns.
Step 3: Calculate the RBF activations as exp(−x), where x = (pattern − center).
Step 4: Form the matrix G = RBF and calculate A = G^T * G.
Step 5: Calculate B = A^{-1} and E = B * G^T.
Step 6: Calculate the final weights F = E * D, where D is the target, and store the final weights in a file.

Testing RBF
Step 1: Read the output of BPA / FUBPA.
Step 2: Calculate the RBF activations as exp(−x), where x = (pattern − center).
Step 3: Form the matrix G = RBF and calculate A = G^T * G.
Step 4: Calculate B = A^{-1} and E = B * G^T.
Step 5: Classify the pixel as containing information or not.

6.3 RESULTS AND DISCUSSION

In Figure 6.6, the original location of the message refers to the actual information in the image, and the detected information tells that the suspect image is a steganographic one. This figure presents the information detected in the cover image. Some of the sample cover images and steganographic images are given in Table 4.1; the information is detected from these steganographic images. The remaining pixels represent the pixels in the cover image.

Fig. 6.6 Detection of message location using BPARBF

Fig. 6.7 Detection of message location using FUBPARBF

In Figure 6.7, the original location of the message refers to the actual information in the image, and the detected information tells that the suspect image is a steganographic one. This figure presents the information detected in the cover image.

6.4 SUMMARY

This chapter presents the implementation of RBF, together with the combination of RBF with BPA and the combination of RBF with FUBPA, for the identification of information in a covert image. The RBF uses the distance concept for learning ordinary images and covert images; based on the efficiency of learning, it detects whether information is present. Chapter 7 presents comparisons of the performance of the proposed algorithms for steganalysis over a limited combination of images.