Artificial Neural Network Based Byzantine Agreement Protocol

K.W. Lee (1) and H.T. Ewe (2)
(1) Faculty of Engineering & Technology, Multimedia University, 75450 Melaka, Malaysia. Tel: 60-(6)-252-3005; Fax: 60-(6)-23-6552; Email: kwlee@mmu.edu.my
(2) Faculty of Information Technology, Multimedia University, 63100 Cyberjaya, Malaysia. Tel: 60-(3)-832-5430; Fax: 60-(3)-832-5264; Email: htewe@mmu.edu.my

This research was supported by Telekom R&D Funding of Malaysia under project number PR/200/0025.

ABSTRACT

Reliability of distributed computer systems and computer networks involves the fault-tolerant capability of handling malfunctioning components that give contradictory information to other units in the system. A malicious environment consists of both loyal and faulty units. The problem of finding an algorithm that achieves consensus among all the loyal units in the network is called the Byzantine Generals Problem (BGP), and an algorithm solving the BGP is known as a Byzantine Agreement Protocol (BAP). A new approach to the BGP using artificial neural networks (ANN) was proposed in [1,2]. In this paper, we propose an improved ANN based BAP. It shows better performance once the size of the network reaches n ≥ 10. This ANN based BAP has several advantages over the traditional BAP: (i) a great reduction of the memory space requirement; (ii) parallel processing ability of each node; (iii) adaptability of the neural network's learning capability to the dynamic Byzantine environment.

KEY WORDS
Artificial Neural Networks, Byzantine Generals Problem, Fault Tolerance, Cryptography, Network Security.

1. INTRODUCTION

A distributed computer system is a network of processors which can communicate with one another. For distributed systems that handle critical tasks such as life-support, manufacturing and military applications, reliability is a very important issue. As the size of the system grows, achieving reliability becomes harder. A distributed system with reliability support is known as a dependable distributed system. A distributed system can be simplified into a virtual network of nodes: the nodes represent the processors and the links among the nodes represent the communication paths between the processors. The distributed network is hence in the form of a graph for analysis.

In this paper, a model with a network topology called the Fully Connected Network (FCN) is used for the study of the Byzantine Generals Problem (BGP) [3,4,5,6]. An FCN is a network model in which each node is connected to all the other nodes via a dedicated link. The security of data transmission over the communication links is ensured via cryptographic methods [7,8]. In a distributed system there exist both loyal processors and faulty processors. In the worst case, the faulty processors behave in an arbitrary manner, called a Byzantine fault. This problem is defined as the Byzantine Generals Problem (BGP): in a network of nodes, there exist a number of faulty nodes sending out arbitrary information. To solve this problem, an algorithm called a Byzantine Agreement Protocol (BAP) is developed to achieve agreement among all the loyal nodes. The traditional BAP was first studied by Lamport [3] in 1982, where BGP solutions for both unsigned and signed messages are proposed. In this paper, the approach to the BGP with unsigned messages using ANN is further investigated: we adopt artificial neural networks (ANN) to help solve the BGP.
A message exchange matrix is formed to train the neural network using the Back Propagation Network (BPN) learning algorithm. Compared with the traditional BAP, several advantages of the ANN based BAP can be observed.

2. BYZANTINE GENERALS PROBLEM

The Byzantine Generals Problem is named after an ancient war in which the troops of several Byzantine generals besiege a town while there are traitors among the generals. To resolve this dilemma, the loyal Byzantine generals must be able to reach the same decision on attacking or retreating from the town. It is shown in [3] and [4] that for the BGP with unsigned messages to be solvable, we require n ≥ 3m + 1, where n is the total number of nodes in the distributed system and m is the number of faulty nodes in the network. To achieve Byzantine agreement, both of the interactive consistency (IC) conditions below, i.e. agreement and validity, have to be satisfied.

Agreement: All the loyal nodes reach a common value.

Validity: If the source is non-faulty, then all loyal nodes use the common value that is the same as the initial value of the source; the DEFAULT value is used if a node receives no value.

For the BGP with unsigned messages, we adopt the application of cryptography to justify the assumptions below:
A1: Every message that is sent is delivered correctly.
A2: The receiver of a message can verify the sender.
A3: The absence of a message can be detected.

The simplest distributed system for studying the BGP has n = 4. Fig. 1 shows the FCN model of a 4-processor network: C is the commander node, whereas L1, L2 and L3 are the lieutenant nodes.

Fig. 1 FCN model of a 4-processor network.

For the traditional BAP of [3] on an FCN model, Rnd_exch_Trad, the number of rounds of message exchange among nodes, and Msg_Trad, the total number of messages exchanged to achieve Byzantine agreement, are given by:

    Rnd_exch_Trad = m + 1                                           (1)

    Msg_Trad = (n - 1) + Σ_{k=1}^{m} (n - 1)! / (n - k - 2)!        (2)

The worst-case upper bound is therefore O(n^(m+1)), and m equals the floor of (n - 1)/3. Hence the complexity of the BGP using the traditional BAP grows exponentially as the number of nodes gets bigger.

For the proposed ANN based BAP, only a maximum of three rounds of message exchange is required, which allows a great reduction in the memory space requirement. Rnd_exch_ANN and Msg_ANN for the ANN based BAP to reach consensus are:

    Rnd_exch_ANN = 3                                                (3)

    Msg_ANN = (n - 1)^3                                             (4)

Fig. 2 shows the total number of messages exchanged (msg) plotted against the number of processors (n) for the traditional BAP and the ANN based BAP. For a small number of nodes in the distributed network, the traditional BAP is more efficient than the ANN based BAP. However, once n ≥ 10, the ANN based BAP becomes more efficient. Hence, once the artificial neural network is trained, the ANN based BAP is much more suitable for a large distributed network than the traditional BAP. This is because the ANN based BAP has complexity O(n^3) whereas the traditional BAP has complexity O(n^(m+1)); as soon as the number of faulty nodes m reaches three or more, the efficiency of the ANN based BAP overtakes that of the traditional BAP.

Fig. 2 Total number of messages exchanged (msg) vs number of processors (n) for the traditional BAP and the ANN based BAP.
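To make the comparison concrete, the short sketch below (an editorial illustration, not part of the paper) evaluates Eqs. (1)-(4) for a few network sizes, taking m to be the floor of (n - 1)/3 as stated above; the function names are ours.

```python
from math import factorial

def msg_traditional(n: int, m: int) -> int:
    """Eq. (2): total messages exchanged by the traditional BAP on an FCN."""
    return (n - 1) + sum(factorial(n - 1) // factorial(n - k - 2)
                         for k in range(1, m + 1))

def msg_ann(n: int) -> int:
    """Eq. (4): total messages exchanged by the ANN based BAP (three fixed rounds)."""
    return (n - 1) ** 3

for n in (4, 7, 10, 13):
    m = (n - 1) // 3   # maximum number of faulty nodes tolerated, since n >= 3m + 1
    print(f"n={n:2d}  m={m}  traditional={msg_traditional(n, m):6d}  ANN={msg_ann(n):5d}")
```

For n = 4 and n = 7 the traditional counts (9 and 156) are smaller, while from n = 10 onwards the fixed (n - 1)^3 cost of the ANN based BAP is lower, consistent with the crossover described above.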

3. ANN BASED BAP

3.1 Introduction

In this paper, we propose the ANN based BAP shown in Fig. 3 as an alternative to the traditional BAP. Owing to its ANN structure, this new BAP allows each processor to work in parallel. At the initialization stage, the commander node broadcasts the source message to the other (n-1) lieutenant nodes. At the message exchange stage, each lieutenant node acts as a commander and broadcasts its value to the other (n-2) lieutenant nodes. A message exchange matrix of size (n-1) x (n-1) is formed at every lieutenant node as the training data set for the neural network. Every lieutenant node feeds the received messages into the BPN to start the ANN training phase. Training continues until the preset sum squared error (SSE) is reached. Once the training stage finishes, the ANN is ready to be applied to a new instance of the BGP to reach Byzantine agreement. When a new BGP instance is triggered, the commander node sends the message to each lieutenant node, and every lieutenant node then sends its message to the other (n-2) nodes. At this point, the application stage of the ANN is invoked: the messages received by the lieutenant nodes are fed into the ANN to produce outputs for the compromise stage. In this final stage, the majority value is computed for the Byzantine agreement based on the temporarily stored outputs. If a majority value cannot be reached, the DEFAULT value is used instead.

Fig. 3 Block diagram of the ANN based BAP (stages: Initialization, Message Exchange, ANN Training, Application, Compromise).

3.2 Artificial Neural Networks

Artificial neural networks (ANN) [9] are an effective and well-known tool for handling complicated and dynamic problems that involve many parameters. In view of its network architecture, an ANN is also known as a distributed processing system. An ANN model varies according to its architecture, learning algorithm and activation function. ANNs have applications in a variety of fields such as fault detection [10] and feature classification. An ANN can help reduce the memory space requirement through its memory capability, and its adaptive learning capability allows it to cope with critical and dynamic situations such as the Byzantine environment of the BGP. Besides, an ANN has the ability of parallel processing at each local node. We therefore propose the application of ANN to help reach Byzantine agreement; this new approach is named the ANN based BAP. In this paper, we use a multilayer feed-forward ANN trained with the back propagation algorithm, known as a Back Propagation Network (BPN), with a binary sigmoid activation function.

Fig. 1 shows the layout of a 4-processor distributed network, while Fig. 4 shows the equivalent ANN architecture of this 4-processor network. X_i denotes the input neurons, one for each processor excluding the commander; Z_k denotes the output neurons for the different possible results: 0, 1 and DEFAULT; and Y_j denotes the hidden neurons, whose number is taken as half of the total number of input and output neurons.

Fig. 4 A 2-layer BPN for a 4-processor network (input layer i: X1, X2, X3; hidden layer j: Y1, Y2, Y3, connected by weights w_ij; output layer k: Z1, Z2, Z3, connected by weights w_jk).

During the training phase, the neural network is first trained to determine its weights and biases. Training stops when the sum squared error (SSE) falls below the preset SSE. The ANN is then ready to be applied to the intended problem.

3.3 Training Phase

An artificial neural network is formed with the number of input neurons (N_X) equal to (n-1) and the number of output neurons (N_Z) equal to the number of possible results: 0, 1 and DEFAULT. Hence, if we adopt an ANN with one hidden layer, the number of hidden neurons (Y_j) equals (n+2)/2. Once the network architecture is fixed, we select the binary sigmoid function p in Eq. (5) as the transfer characteristic of the activation function, where q denotes the input to the neuron:

    p = 1 / (1 + e^(-q))                                            (5)

The chosen learning algorithm is the Back Propagation Network (BPN). We set the expected output targets (t_k) and preset the SSE at 0.1, 0.01, 0.001 and 0.0001. Let N_Z be the number of output neurons and z_ok the outputs of the output neurons; the SSE is then given by Eq. (6). This SSE is used to update the weights and biases of the network during the back propagation process via the gradient descent method.

    SSE = (1/2) Σ_{k=1}^{N_Z} (t_k - z_ok)^2                        (6)
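For reference, Eqs. (5) and (6) translate directly into code. The small sketch below is an editorial illustration; the names sigmoid and sse are ours, not from the paper.

```python
import numpy as np

def sigmoid(q: np.ndarray) -> np.ndarray:
    """Binary sigmoid activation of Eq. (5): p = 1 / (1 + exp(-q))."""
    return 1.0 / (1.0 + np.exp(-q))

def sse(t: np.ndarray, z_o: np.ndarray) -> float:
    """Sum squared error of Eq. (6): SSE = 1/2 * sum_k (t_k - z_ok)^2."""
    return 0.5 * float(np.sum((t - z_o) ** 2))
```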
To train the ANN, a training data set is required. A training matrix of size (n-1) x (n-1) is formed during the message exchange phase among all the nodes in the distributed network; this matrix is referred to as the message exchange matrix in this paper. From Section 2, the number of rounds of message exchange for the ANN based BAP is three. An example is used here to show how the message exchange matrix is formed, based on the simplest BGP instance, a 4-processor distributed system. In the first round of the message exchange phase, the commander node sends one bit of message, say bit '1', to every lieutenant node. Let the third lieutenant node, L3, be a faulty node. In the second round, the lieutenant nodes exchange messages among one another: a loyal node transmits exactly the message it received from the commander node in the first round, whereas a faulty node transmits an arbitrary message to the other lieutenant nodes. This arbitrary message can be bit '0', bit '1' or 'DEFAULT'. 'DEFAULT' denotes the situation in which no message is received from a particular node in a round; the 'DEFAULT' value is normally set to either bit '0' or bit '1'.
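To make the example tangible, the sketch below simulates how such a matrix can arise at a loyal lieutenant for n = 4 when the commander sends bit '1' and L3 is faulty (it also anticipates the third round, described next, in which each lieutenant forwards its stored bits to the others). This is an editorial toy simulation: the faulty node's behaviour is simply randomized over {0, 1}, and all names are illustrative.

```python
import random

n = 4                  # commander C plus lieutenants L1, L2, L3
source_bit = 1         # round 1: the commander broadcasts bit '1'
faulty = {3}           # assumption for this example: L3 is the faulty lieutenant

def arbitrary_bit() -> int:
    """A Byzantine node may send anything; here we just pick 0 or 1 at random."""
    return random.choice([0, 1])

# Round 2: lieutenant `sender` reports the value it claims to have received from C.
def round2(sender: int) -> int:
    return arbitrary_bit() if sender in faulty else source_bit

# After round 2, lieutenant j holds the (n-1) bits reported by L1..L(n-1)
# (its own slot holds the commander's bit it received in round 1).
memories = {j: [source_bit if k == j else round2(k) for k in range(1, n)]
            for j in range(1, n)}

# Round 3: every lieutenant forwards its stored string; a faulty one may garble it.
def round3_row(sender: int) -> list[int]:
    return [arbitrary_bit() for _ in range(n - 1)] if sender in faulty else memories[sender]

matrix_at_L1 = [round3_row(j) for j in range(1, n)]
print(matrix_at_L1)    # one possible outcome: [[1, 1, 0], [1, 1, 0], [0, 0, 0]]
```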

When the second round finishes, each lieutenant node holds (n-1) bits of received messages from the other nodes in its memory. In the last round, each lieutenant node delivers a string of (n-1) bits to every other lieutenant node. When this round ends, a message exchange matrix of size (n-1) x (n-1) is obtained at every lieutenant node. Since there are (n-1) lieutenant nodes, the total number of messages exchanged within the network is (n-1)^3. This message exchange phase is shown in Fig. 5: the first round is represented by Fig. 5(a), and the second and third rounds by Fig. 5(b) and 5(c), respectively.

Fig. 5 The formation of the message exchange matrix for a 4-processor network, with L3 as the faulty processor: (a) first round, (b) second round, (c) third round of message exchange.

When the message exchange matrix is ready, it is used to train the pre-designed artificial neural network. For a 4-processor network, a two-layer BPN is designed with three input neurons (X_i = X1, X2, X3) representing the lieutenant nodes L1, L2, L3, and three output neurons (Z_k = Z1, Z2, Z3) representing the common agreement to be reached: bit '0', bit '1' or 'DEFAULT'. In the hidden layer, the number of hidden neurons (Y_j) is the ceiling of half the total number of input and output neurons; for n = 4, the number of hidden neurons is 3. The ANN architecture is shown in Fig. 4.

Row by row, each row vector of the message exchange matrix is fed to the input layer of the BPN. Each input neuron processes the message it receives according to its activation function and sends the output of the input layer (x_oi) to each hidden neuron via the weights w_ij. The hidden neurons process the information and send the outputs of the hidden layer (y_oj) to each output neuron via the weights w_jk. Both the input neurons and the hidden neurons act as threshold logic units with an activation function. For computational efficiency, the binary sigmoid function of Eq. (5) is chosen as the transfer characteristic, and the SSE is preset at 0.1, 0.01, 0.001 and 0.0001.

There are two processes in the BPN training phase: the feed forward process and the back propagation process. During the feed forward process, the sum squared error (SSE) is computed from the actual outputs (z_ok) and the target outputs (t_k) according to Eq. (6). This SSE value is then used in the back propagation process to update the weights w_jk and w_ij by the gradient descent method. The feed forward process is then repeated with the same training data, i.e. the message exchange matrix, but with the updated weights, followed again by the back propagation process. These two processes repeat until the preset SSE value is achieved. The neural network is then ready to be used in the application phase to solve new incoming BGP instances, as long as the number of faulty nodes within the network is equal to or less than m.
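The training loop just described (repeated feed forward and back propagation passes over the message exchange matrix until the SSE drops below the preset value) might look roughly like the sketch below. This is a minimal editorial NumPy illustration of a two-layer BPN trained by gradient descent, not the authors' implementation; the learning rate, the initialisation and the choice of one-hot targets (taken here as the majority bit of each row, over the output order '0', '1', DEFAULT) are assumptions.

```python
import numpy as np

def train_bpn(X, T, n_hidden=3, preset_sse=1e-4, lr=0.5, max_epochs=50_000, seed=0):
    """Minimal two-layer back-propagation network trained by gradient descent on the SSE."""
    sig = lambda q: 1.0 / (1.0 + np.exp(-q))                 # binary sigmoid, Eq. (5)
    rng = np.random.default_rng(seed)
    w_ij = rng.uniform(-0.5, 0.5, (X.shape[1], n_hidden))    # input -> hidden weights
    b_j = np.zeros(n_hidden)
    w_jk = rng.uniform(-0.5, 0.5, (n_hidden, T.shape[1]))    # hidden -> output weights
    b_k = np.zeros(T.shape[1])
    for epoch in range(1, max_epochs + 1):
        # Feed forward process
        y_o = sig(X @ w_ij + b_j)                            # hidden layer outputs y_oj
        z_o = sig(y_o @ w_jk + b_k)                          # output layer outputs z_ok
        err = T - z_o
        current_sse = 0.5 * np.sum(err ** 2)                 # Eq. (6)
        if current_sse < preset_sse:
            break
        # Back propagation process (gradient descent on the SSE)
        delta_k = err * z_o * (1.0 - z_o)
        delta_j = (delta_k @ w_jk.T) * y_o * (1.0 - y_o)
        w_jk += lr * y_o.T @ delta_k
        b_k += lr * delta_k.sum(axis=0)
        w_ij += lr * X.T @ delta_j
        b_j += lr * delta_j.sum(axis=0)
    return w_ij, b_j, w_jk, b_k, epoch

# Message exchange matrix at L1 from the paper's n = 4 example, with assumed targets.
X = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]], dtype=float)
T = np.array([[0, 1, 0], [0, 1, 0], [1, 0, 0]], dtype=float)  # one-hot over ('0', '1', DFT)
w_ij, b_j, w_jk, b_k, epochs_used = train_bpn(X, T)
```

Presenting all rows as one batch per epoch mirrors the paper's description of repeatedly feeding the same message exchange matrix until the preset SSE is reached.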

3.4 Application Phase

The weights obtained once the preset SSE has been achieved in the training phase are used in this phase for new BAP applications, so as to reach the interactive consistency conditions. The application phase is the same as the feed forward process of the training phase: received messages are fed to the input layer and outputs are collected from the output layer. The majority of these outputs becomes the common value for the Byzantine agreement, on the condition that the number of faulty nodes is at most (n-1)/3.

3.5 Results

The ANN based BAP is applied to the cases of n = 4, 7 and 10 with the corresponding critical cases of m = 1, 2 and 3. The different preset SSE values against the epochs required in the training phase for these cases are plotted in Fig. 6.

Fig. 6 Epochs needed to reach the preset SSE for n-processor systems (curves for n = 4, 7 and 10; vertical axis log10(SSE) from -1 to -4, horizontal axis epochs).

To analyse the performance of this ANN based BAP, we compare the epochs needed for the various preset SSE values across the three n-processor systems in Fig. 6. The figure shows that as the number of nodes in the network increases, fewer epochs are required to reach the preset SSE values of 0.1, 0.01, 0.001 and 0.0001. Table 1 shows the execution results of a 4-processor system with the preset SSE of 0.0001.

Table 1 Execution results of a 4-processor system (preset SSE = 0.0001)

  Node   MSG      Z_ok('0')   Z_ok('1')   Z_ok(DFT)   Local MAJ   Node MAJ
  L1     1 1 0    0.004       0.995       0.003       1           1
         1 1 0    0.004       0.996       0.003       1
         0 0 0    0.990       0.001       0.003       0
  L2     1 1 0    0.004       0.995       0.002       1           1
         1 1 0    0.004       0.996       0.003       1
         0 0 0    0.999       0.001       0.003       0

  MSG: Message; DFT: DEFAULT; MAJ: Majority

Row by row, the row vectors of the message exchange matrix at node L1 are fed into the trained neural network. Rows [1 1 0], [1 1 0] and finally [0 0 0] give three outputs called the local majority values. Based on these values, the majority function is applied to select the bit value with the maximum frequency of occurrence. The selected bit value is called the node majority, which represents the final decision a lieutenant node makes.
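Continuing the training sketch above (and reusing its trained weights and the matrix X), the application and compromise stages at one lieutenant could be sketched as follows: each row is fed forward, a local majority value is read off from the largest output, and the node majority is the most frequent local value, with DEFAULT as the fallback when there is no clear majority. The helper names and the tie-breaking rule are editorial assumptions.

```python
from collections import Counter
import numpy as np

LABELS = ('0', '1', 'DEFAULT')        # order of the output neurons Z1, Z2, Z3

def apply_bpn(row, w_ij, b_j, w_jk, b_k):
    """Feed-forward pass only: returns the output vector z_ok for one matrix row."""
    sig = lambda q: 1.0 / (1.0 + np.exp(-q))
    y_o = sig(np.asarray(row, dtype=float) @ w_ij + b_j)
    return sig(y_o @ w_jk + b_k)

def node_majority(matrix, w_ij, b_j, w_jk, b_k):
    """Local majority value per row, then the node majority (the lieutenant's decision)."""
    local = [LABELS[int(np.argmax(apply_bpn(r, w_ij, b_j, w_jk, b_k)))] for r in matrix]
    ranked = Counter(local).most_common()
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return 'DEFAULT'              # no clear majority: fall back to the DEFAULT value
    return ranked[0][0]

print(node_majority(X, w_ij, b_j, w_jk, b_k))   # expected '1', matching Table 1
```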
Then, based on this node majority value at every node of the network, all the loyal lieutenant nodes hold the same majority value MAJ of bit '1', as shown in Table 1. This majority (MAJ) value is the Byzantine agreement of the 4-processor network. Hence the ANN based BAP can reach Byzantine agreement among the loyal nodes even in the presence of a faulty node.

4. CONCLUSIONS

In summary, a further improved ANN based BAP is designed in this paper. This ANN based BAP can be used for an n-processor distributed system; Byzantine agreement can be reached on the conditions that the source is non-faulty and the number of faulty nodes is at most (n-1)/3. The advantages of the ANN based BAP over the traditional BAP are:
i. Greatly reduced memory space requirement.
ii. Parallel processing ability of each node.
iii. Flexible learning capability of each node.

In the future, the ANN based BAP could be extended to detect faulty nodes. Furthermore, the application of ANN to mixed faults [11] and to the authenticated BGP is worth studying. This can help the development of secure multiparty protocols, also known as distributed cryptography [8].

REFERENCES

[1] S.C. Wang & S.H. Kao, A new approach for Byzantine agreement, Proceedings of the 15th International Conference on Information Networking, February 2001, 518-524.
[2] K.W. Lee & H.T. Ewe, Artificial neural networks based algorithm for Byzantine Generals Problem, MMU International Symposium on Information and Communications Technologies 2001, Kuala Lumpur, 6-7 October 2001.
[3] L. Lamport, R. Shostak, & M. Pease, The Byzantine Generals Problem, ACM Transactions on Programming Languages and Systems, 4(3), July 1982, 382-401.
[4] M. Pease, R. Shostak, & L. Lamport, Reaching agreement in the presence of faults, Journal of the ACM, 27(2), April 1980, 228-234.
[5] M. Fischer & N. Lynch, A lower bound for the time to assure interactive consistency, Information Processing Letters, 14(4), June 1982, 183-186.
[6] L. Lamport, The weak Byzantine Generals Problem, Journal of the ACM, 30(3), July 1983, 668-676.
[7] W. Diffie & M.E. Hellman, New directions in cryptography, IEEE Transactions on Information Theory, IT-22(6), November 1976, 644-654.
[8] S. Goldwasser, New directions in cryptography: twenty some years later, Proceedings of the 38th Annual Symposium on Foundations of Computer Science, October 1997, 314-324.
[9] L.A. Snider & Y.S. Yuen, The artificial neural networks based relay algorithm for distribution system high impedance fault detection, Proceedings of the 4th International Conference on Advances in Power System Control, Operation and Management, APSCOM-97, Hong Kong, November 1997, 100-106.
[10] M.T. Hagan & H.B. Demuth, Neural Network Design (PWS Publishing Company, 1995).
[11] H.S. Siu, Y.H. Chin, & W.P. Yang, Byzantine agreement in the presence of mixed faults on processors and links, IEEE Transactions on Parallel and Distributed Systems, 9(4), April 1998, 335-345.