Artificial Neural Network Based Byzantine Agreement Protocol

K.W. Lee (1) and H.T. Ewe (2)
(1) Faculty of Engineering & Technology, Multimedia University, Melaka, Malaysia, kwlee@mmu.edu.my
(2) Faculty of Information Technology, Multimedia University, 63100 Cyberjaya, Malaysia, htewe@mmu.edu.my

ABSTRACT

Reliability of distributed computer systems and computer networks involves the fault-tolerant capability of handling malfunctioning components that give contradictory information to other units in the system. A malicious environment consists of both loyal and faulty units. The problem of finding an algorithm to achieve consensus among all the loyal units in the network is called the Byzantine Generals Problem (BGP), and an algorithm solving BGP is known as a Byzantine Agreement Protocol (BAP). A new approach to BGP using artificial neural networks (ANN) was proposed in [1,2]. In this paper, we propose an improved ANN based BAP, which shows better performance when the size of the network is n >= 10. This ANN based BAP has several advantages over the traditional BAP: (i) greatly reduced memory space requirement; (ii) parallel processing at each node; (iii) adaptability of the neural network, through learning, to the dynamic Byzantine environment.

KEY WORDS

Artificial Neural Networks, Byzantine Generals Problem, Fault Tolerance, Cryptography, Network Security.

1. INTRODUCTION

A distributed computer system is a network of processors that can communicate with one another. For distributed systems handling critical applications such as life-support, manufacturing and military systems, reliability is a very important issue. As the size of the system grows, achieving reliability becomes harder. A distributed system with reliability support is known as a dependable distributed system. A distributed system can be simplified into a virtual network of nodes.
The nodes represent the processors and the links represent the communication paths between them. The distributed network can hence be analysed as a graph. (This research was supported by Telekom R&D Funding of Malaysia under project number PR/200/0025.)

In this paper, a model with a network topology called Fully Connected Network (FCN) is used for the study of the Byzantine Generals Problem (BGP) [3,4,5,6]. FCN is a network model in which each node is connected to every other node via a dedicated link. The security of data transmission over the communication links is ensured via cryptographic methods [7,8]. In a distributed system there exist both loyal processors and faulty processors. In the worst case, the faulty processors behave in an arbitrary manner; this is called a Byzantine fault. The corresponding problem is the Byzantine Generals Problem (BGP): in a network of nodes, a number of faulty nodes send out arbitrary information. To solve this problem, an algorithm called a Byzantine Agreement Protocol (BAP) is developed to achieve agreement among all the loyal nodes. The traditional BAP was first studied by Lamport et al. [3] in 1982, where solutions for both unsigned and signed messages are proposed. In this paper, the approach to BGP with unsigned messages using ANN is investigated further: we adopt artificial neural networks (ANN) to help solve the BGP. A message exchange matrix is formed to train the neural network using the Back Propagation Network (BPN). Compared to the traditional BAP, the ANN based BAP offers several advantages.

2. BYZANTINE GENERALS PROBLEM

The Byzantine Generals Problem is named after an ancient war in which troops of Byzantine generals besiege a town while there are traitors among them.
To solve this dilemma, the loyal Byzantine generals must reach the same decision, to attack or to retreat from the town. It is shown in [3] and [4] that BGP with unsigned messages is solvable only if n >= 3m + 1, where n is the total number of nodes in the distributed system and m is the number of faulty nodes in the network. To achieve Byzantine agreement, both of the interactive consistency conditions (IC) below, i.e. agreement and validity, have to be satisfied.

Agreement: All the loyal nodes reach a common value.

Validity: If the source is non-faulty, then all loyal nodes use the common value equal to the initial value of the source; the DEFAULT value is used if a node receives no value.

For BGP with unsigned messages, cryptography is applied so that the following assumptions hold:

A1: Every message that is sent is delivered correctly.
A2: The receiver of a message can verify the sender.
A3: The absence of a message can be detected.

The simplest distributed system for BGP study is n = 4. Fig. 1 below shows the FCN model of a 4-processor network: C is the commander node, whereas L1, L2 and L3 are the lieutenant nodes.

Fig. 1 FCN model of a 4-processor network.

Fig. 2 below shows the total number of messages exchanged (msg) against the number of processors (n) for the traditional BAP and the ANN based BAP. From the figure, for a small number of nodes in a distributed network, the traditional BAP is more efficient than the ANN based BAP. However, once n >= 10, the ANN based BAP becomes more efficient. Hence, once the artificial neural network is trained, the ANN based BAP is much more suitable for a large distributed network than the traditional BAP. This is because the ANN based BAP has complexity O(n^3), whereas the traditional BAP has complexity O(n^(m+1)); as soon as the number of faulty nodes m is three or more, the efficiency of the ANN based BAP overtakes that of the traditional BAP.

For the traditional BAP as in [3] on an FCN model, Rnd_exch_Trad, the number of rounds of message exchange among nodes, and Msg_Trad, the total number of messages exchanged to achieve Byzantine agreement, are given by:

    Rnd_exch_Trad = m + 1                                          (1)

    Msg_Trad = (n - 1) + sum_{k=1}^{m} (n - 1)! / (n - k - 2)!     (2)

The worst-case upper bound is thus O(n^(m+1)), with m equal to the floor of (n - 1)/3.
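As a quick sanity check (a sketch, not code from the paper), the count in Eq. (2) and the (n - 1)^3 count of the ANN based BAP can be computed directly:

```python
from math import factorial

def max_faulty(n: int) -> int:
    """Largest m satisfying n >= 3m + 1 (unsigned-message BGP bound)."""
    return (n - 1) // 3

def msg_traditional(n: int) -> int:
    """Eq. (2): Msg_Trad = (n-1) + sum_{k=1}^{m} (n-1)!/(n-k-2)!."""
    m = max_faulty(n)
    return (n - 1) + sum(factorial(n - 1) // factorial(n - k - 2)
                         for k in range(1, m + 1))

def msg_ann(n: int) -> int:
    """The ANN based BAP exchanges (n-1)^3 messages over three rounds."""
    return (n - 1) ** 3
```

For example, msg_traditional(7) = 156 versus msg_ann(7) = 216, while msg_traditional(10) = 3609 versus msg_ann(10) = 729, matching the crossover around n = 10 seen in Fig. 2.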
Hence the complexity of BGP using the traditional BAP increases exponentially as the number of nodes grows. For the proposed ANN based BAP, a maximum of only three rounds of message exchange is required, which allows a great reduction in memory space requirement. Rnd_exch_ANN and Msg_ANN for the ANN based BAP to reach a consensus are:

    Rnd_exch_ANN = 3               (3)

    Msg_ANN = (n - 1)^3            (4)

Fig. 2 Total number of messages exchanged (msg) vs number of processors (n) for the traditional BAP and the ANN based BAP.

3. ANN BASED BAP

3.1 Introduction

In this paper, we propose the ANN based BAP shown in Fig. 3 as an improvement over the traditional BAP. Owing to its ANN structure, this new BAP allows each processor to work in parallel. At the initialization stage, the commander node broadcasts the source message to the other (n-1) lieutenant nodes. At the message exchange stage, each lieutenant node acts as a commander and broadcasts its value to the other (n-2) lieutenant nodes. A message exchange matrix of size (n-1) x (n-1) is formed at every lieutenant node as the training data set for the neural network. Every lieutenant node feeds the received messages into the BPN to start the ANN training phase. The training stage continues until the preset sum squared error (SSE) is reached. Once the training stage finishes, the ANN is ready for a new application of BGP to reach Byzantine agreement. When a new BGP triggers, the commander node sends the

message to each lieutenant node. Then every lieutenant node sends its message to the other (n-2) nodes. At this point, the application stage of the ANN is invoked. Messages received by the lieutenant nodes are fed into the ANN to produce outputs for the compromise stage. In this final stage, the majority value is computed for Byzantine agreement based on the temporarily stored outputs. If a majority value cannot be reached, the DEFAULT value is used instead.

Fig. 3 Block diagram of the ANN based BAP (stages: Initialization, Message Exchange, Training, Application, Compromise).

3.2 Artificial Neural Networks

Artificial neural networks (ANN) [9] are an effective and well-known tool for handling complicated and dynamic problems that involve many parameters. In view of its network architecture, an ANN is also known as a distributed processing system. An ANN model varies according to its architecture, learning algorithm and activation function. ANN has many applications in a variety of fields such as fault detection [10] and feature classification. ANN can help reduce the requirement for memory space via its memory capability, and its adaptive learning capability allows it to survive in critical and dynamic situations such as the Byzantine environment of BGP. Besides, ANN offers parallel processing at each local node. We propose the application of ANN to help reach Byzantine agreement in this paper; this new approach to BAP is named the ANN based BAP.

We use a Back Propagation Network (BPN) with the binary sigmoid function. Fig. 1 shows the layout of a 4-processor distributed network, and Fig. 4 shows the equivalent ANN architecture of this 4-processor network. X_i denotes the input neurons, one per processor excluding the commander, and Z_k denotes the output neurons representing the different results: 0, 1 and DEFAULT. Y_j denotes the hidden neurons, whose number is taken as half of the total number of input and output neurons.

Fig. 4 A 2-layer BPN for a 4-processor network (input layer i, weights of links i-j, hidden layer j, weights of links j-k, output layer k).

During the training phase, the neural network is first trained to determine its weights and biases. The training stops when the sum squared error (SSE) falls below the preset SSE. Then the ANN is ready to be applied to solve the intended problems.

3.3 Training Phase

An artificial neural network is formed with the number of input neurons (N_X) equal to (n-1) and the number of output neurons (N_Z) equal to the number of possible results: 0, 1 and DEFAULT. Hence, for an ANN with one hidden layer, the number of hidden neurons (Y_j) equals (n+2)/2. When the network architecture is ready, we select the binary sigmoid function p as the transfer characteristic of the activation function, with q denoting the input to the neuron:

    p = 1 / (1 + e^(-q))           (5)

The chosen learning algorithm is the Back Propagation Network (BPN). We set the expected output targets (t_k) and preset the SSE to 0.1, 0.01, 0.001 and 0.0001. Let N_Z be the number of output neurons and z_ok the outputs of the output neurons; the SSE is then:

    SSE = (1/2) * sum_{k=1}^{N_Z} (t_k - z_ok)^2     (6)

This SSE is used to update the weights and biases of the network during the back propagation process via the gradient descent method.

To train the ANN, a training data set is required. A training matrix of size (n-1) x (n-1) is formed during the message exchange phase among all the nodes in the distributed network; this matrix is known in this paper as the message exchange matrix. From Section 2, the number of rounds of message exchange for the ANN based BAP is three. An example is used below to show how the message exchange matrix is formed, using the simplest BGP of a 4-processor distributed system.
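The training phase above (binary sigmoid of Eq. (5), SSE of Eq. (6), gradient descent) can be sketched from scratch for n = 4, i.e. a 3-3-3 BPN. The training rows and one-hot targets below are illustrative assumptions, not data from the paper:

```python
import math
import random

random.seed(7)

X = [[1, 1, 0], [1, 1, 0], [0, 0, 0]]   # assumed message exchange matrix rows
T = [[0, 1, 0], [0, 1, 0], [1, 0, 0]]   # assumed one-hot targets: '1', '1', '0'

N_IN = N_HID = N_OUT = 3
LR = 0.5
PRESET_SSE = 0.01

def sig(q):
    """Eq. (5): binary sigmoid activation."""
    return 1.0 / (1.0 + math.exp(-q))

def rand_w():
    return random.uniform(-0.5, 0.5)

w_ij = [[rand_w() for _ in range(N_HID)] for _ in range(N_IN)]   # input -> hidden
b_j = [rand_w() for _ in range(N_HID)]
w_jk = [[rand_w() for _ in range(N_OUT)] for _ in range(N_HID)]  # hidden -> output
b_k = [rand_w() for _ in range(N_OUT)]

def forward(x):
    """Feed forward process: input layer -> hidden layer -> output layer."""
    y = [sig(sum(x[i] * w_ij[i][j] for i in range(N_IN)) + b_j[j])
         for j in range(N_HID)]
    z = [sig(sum(y[j] * w_jk[j][k] for j in range(N_HID)) + b_k[k])
         for k in range(N_OUT)]
    return y, z

sse = float("inf")
for epoch in range(50000):
    sse = 0.0
    for x, t in zip(X, T):
        y, z = forward(x)
        sse += 0.5 * sum((t[k] - z[k]) ** 2 for k in range(N_OUT))  # Eq. (6)
        # back propagation: gradient descent on the SSE
        d_k = [(z[k] - t[k]) * z[k] * (1.0 - z[k]) for k in range(N_OUT)]
        d_j = [sum(d_k[k] * w_jk[j][k] for k in range(N_OUT)) * y[j] * (1.0 - y[j])
               for j in range(N_HID)]
        for j in range(N_HID):
            for k in range(N_OUT):
                w_jk[j][k] -= LR * d_k[k] * y[j]
        for k in range(N_OUT):
            b_k[k] -= LR * d_k[k]
        for i in range(N_IN):
            for j in range(N_HID):
                w_ij[i][j] -= LR * d_j[j] * x[i]
        for j in range(N_HID):
            b_j[j] -= LR * d_j[j]
    if sse < PRESET_SSE:        # stop once the preset SSE is reached
        break
```

Training stops as soon as the total SSE over the training rows drops below the preset value, after which `forward` can be reused as the application-phase feed forward pass.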
In the first round of the message exchange phase, the commander node sends out one bit of message, say bit '1', to every lieutenant node. Let the third lieutenant node, L3, be a faulty node. In the second round, the lieutenant nodes exchange messages among each other. A loyal node transmits exactly the message it received from the commander node in the first round, but a faulty node transmits an arbitrary message to the other lieutenant nodes. This arbitrary message can be bit '0', bit '1' or 'DEFAULT'. 'DEFAULT' denotes the situation where no message is received from a particular node in one round; the 'DEFAULT' value is normally set to either bit '0' or bit '1'.

When the second round finishes, each lieutenant node holds (n-1) bits of received messages from the other nodes in its memory. In the last round, each lieutenant node delivers one message string of (n-1) bits to every other lieutenant node. When this round ends, a message exchange matrix of size (n-1) x (n-1) is obtained at every lieutenant node. Since there are (n-1) lieutenant nodes, the total number of messages exchanged within the network is (n-1)^3. This message exchange phase is shown in Fig. 5: the first round is represented by Fig. 5(a), and the second and third rounds by Fig. 5(b) and 5(c), respectively.

When the message exchange matrix is ready, it is used to train the pre-designed artificial neural network. For a 4-processor network, a double-layer BPN is designed with three input neurons (X_i = X1, X2, X3) representing the lieutenant nodes L1, L2 and L3, and three output neurons (Z_k = Z1, Z2, Z3) representing the common agreement to be achieved: bit '0', bit '1' or 'DEFAULT'. In the hidden layer, the number of hidden neurons (Y_j) is the ceiling of half the total number of input and output neurons; for n = 4, the number of hidden neurons is 3. The ANN architecture is shown in Fig. 4.

Row by row, each row vector of the message exchange matrix is fed to the input layer of the BPN. Each input neuron processes the message it receives according to its activation function, and the input neurons send the outputs of the input layer (x_oi) to each hidden neuron via the weights w_ij. The hidden neurons process the information and send the outputs of the hidden layer (y_oj) to each output neuron via the weights w_jk. Both the input neurons and the hidden neurons act as threshold logic units with an activation function. For computational efficiency, the binary sigmoid function of Eq. (5) is chosen as the transfer characteristic.
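The three-round formation of the message exchange matrix can be simulated in a few lines. The random-bit model of faulty behaviour below is an assumption for illustration, not the paper's code:

```python
import random

random.seed(3)

n = 4
lieutenants = [0, 1, 2]        # L1, L2, L3
faulty = {2}                   # L3 is the Byzantine node
commander_value = 1

def sent_bit(sender, true_value):
    """A loyal sender relays its value; a faulty one sends an arbitrary bit."""
    return random.choice([0, 1]) if sender in faulty else true_value

# Round 1: the commander broadcasts its value to every lieutenant.
held = [commander_value] * (n - 1)

# Round 2: each lieutenant broadcasts its held value; every receiver r
# records an (n-1)-bit vector of what it heard from each sender s.
heard = {r: [sent_bit(s, held[s]) for s in lieutenants] for r in lieutenants}

# Round 3: each lieutenant forwards its whole vector; a faulty forwarder
# may send an arbitrary string. View at the loyal node L1:
def forwarded(sender):
    if sender in faulty:
        return [random.choice([0, 1]) for _ in lieutenants]
    return heard[sender]

matrix_at_L1 = [forwarded(s) for s in lieutenants]   # (n-1) x (n-1) matrix
```

Whatever the faulty node sends, the rows forwarded by the loyal lieutenants carry the commander's bit '1' in the positions belonging to loyal senders; only the entries attributed to L3 are arbitrary.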
The SSE is preset at 0.1, 0.01, 0.001 and 0.0001. There are two processes in the BPN training phase: the feed forward process and the back propagation process. During the feed forward process, the sum squared error (SSE) is computed from the real outputs (z_ok) and the target outputs (t_k) according to Eq. (6). This SSE value is then used in the back propagation process to update the weights w_jk and w_ij by the gradient descent method. The feed forward process is then repeated with the same training data, i.e. the message exchange matrix, but with the updated weights, followed again by back propagation. These two processes repeat until the preset SSE value is achieved. The neural network is then ready for the application phase to solve new incoming BGP instances, as long as the number of faulty nodes within the network is at most m.

3.4 Application Phase

The weights obtained once the preset SSE is achieved in the training phase are used in this phase for a new BAP application to reach the interactive consistency conditions. This application phase is the same as the feed forward

Fig. 5 The formation of the message exchange matrix for a 4-processor network with L3 faulty: (a) first round, (b) second round, (c) third round of message exchange.

process in the training phase. Received messages are fed to the input layer and outputs are collected from the output layer. The majority of these outputs is taken as the common value for the Byzantine agreement, on the condition that the number of faulty nodes is at most (n-1)/3.

3.5 Results

The ANN based BAP is applied to the cases of n = 4, 7 and 10 with the corresponding critical cases of m = 1, 2 and 3. The epochs required in the training phase to reach the different preset SSE values for these cases are plotted in Fig. 5.

Fig. 5 Epochs needed to reach the preset SSE (log10 scale) for the n-processor systems, n = 4, 7 and 10.

To analyse the performance of this ANN based BAP, we compare the epochs needed for the various preset SSE values of the three n-processor systems as in Fig. 5. The figure shows that as the number of nodes in the network increases, fewer epochs are required to reach the preset SSE values of 0.1, 0.01, 0.001 and 0.0001. Table 1 below shows the execution results of a 4-processor system.

Table 1 Execution results of a 4-processor system: for each node (L1, L2, L3), the received messages (MSG), the outputs of the output neurons (Z_ok), the per-row local majority values ('0', '1', DFT) and the resulting node majority (MAJ). (MSG: message; DFT: DEFAULT; MAJ: majority.)

Row by row, the row vectors of the message exchange matrix at node L1 are fed into the trained neural network. Row [1 1 0], then [1 1 0] and finally [0 0 0] each give an output called a local majority value. Based on these values, the majority function selects the bit value with the maximum frequency of occurrence. The selected bit value is called the node majority, which represents the final decision a lieutenant node makes. Based on this node majority value at every node of the network, all the loyal lieutenant nodes hold the same majority value MAJ of bit '1', as in Table 1. This majority (MAJ) value is the Byzantine agreement of the 4-processor network. Hence, the ANN based BAP can reach Byzantine agreement among the loyal nodes even in the presence of a faulty node.
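The compromise stage described above can be sketched as follows; `local_majority` and `node_majority` are hypothetical helper names, and the DEFAULT value is assumed here to be bit 0:

```python
from collections import Counter

DEFAULT = 0   # assumed tie-break value

def local_majority(outputs):
    """Index of the strongest output neuron: 0 -> bit '0', 1 -> bit '1'."""
    return outputs.index(max(outputs))

def node_majority(local_values, default=DEFAULT):
    """Pick the value with the maximum frequency of occurrence among the
    local majority values; fall back to DEFAULT when there is no clear
    majority."""
    counts = Counter(local_values).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return default
    return counts[0][0]
```

For the rows of Table 1, `node_majority([1, 1, 0])` returns 1, the MAJ value every loyal node settles on.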
4. CONCLUSIONS

In summary, a further improved ANN based BAP is designed in this paper. This ANN based BAP can be used for an n-processor distributed system. Byzantine agreement can be reached on the conditions that the source is non-faulty and the number of faulty nodes is at most (n-1)/3. The advantages of the ANN based BAP over the traditional BAP are:

i. Greatly reduced requirement for memory space.
ii. Parallel processing ability of each node.
iii. Flexible learning capability of each node.

In the future, the ANN based BAP can be improved to detect faulty nodes. Furthermore, the application of ANN to mixed faults [11] and to authenticated BGP is worth studying. This can help the development of secure multiparty protocols, also known as distributed cryptography [8].

REFERENCES

[1] S.C. Wang & S.H. Kao, A new approach for Byzantine agreement, Proceedings of the 15th International Conference on Information Networking, February 2001.
[2] K.W. Lee & H.T. Ewe, Artificial neural networks based algorithm for Byzantine Generals Problem, MMU International Symposium on Information and Communications Technologies 2001, Kuala Lumpur, 6-7 October 2001.
[3] L. Lamport, R. Shostak, & M. Pease, The Byzantine Generals Problem, ACM Transactions on Programming Languages and Systems, 4(3), July 1982.
[4] M. Pease, R. Shostak, & L. Lamport, Reaching agreement in the presence of faults, Journal of the ACM, 27(2), April 1980.
[5] M. Fischer & N. Lynch, A lower bound for the time to assure interactive consistency, Information Processing Letters, 14(4), June 1982.
[6] L. Lamport, The weak Byzantine Generals Problem, Journal of the ACM, 30(3), July 1983.
[7] W. Diffie & M.E. Hellman, New directions in cryptography, IEEE Transactions on Information Theory, IT-22(6), November 1976.
[8] S. Goldwasser, New directions in cryptography: twenty some years later, Proceedings of the 38th

Annual Symposium on Foundations of Computer Science, October 1997.
[9] L.A. Snider & Y.S. Yuen, The artificial neural networks based relay algorithm for distribution system high impedance fault detection, Proceedings of the 4th International Conference on Advances in Power System Control, Operation and Management, APSCOM-97, Hong Kong, November 1997.
[10] M.T. Hagan & H.B. Demuth, Neural Network Design (PWS Publishing Company, 1995).
[11] H.S. Siu, Y.H. Chin, & W.P. Yang, Byzantine agreement in the presence of mixed faults on processors and links, IEEE Transactions on Parallel and Distributed Systems, 9(4), April 1998.


More information

Yale University Department of Computer Science

Yale University Department of Computer Science Yale University Department of Computer Science The Consensus Problem in Unreliable Distributed Systems (A Brief Survey) Michael J. Fischer YALEU/DCS/TR-273 June 1983 Reissued February 2000 To be presented

More information

Simulation of Zhang Suen Algorithm using Feed- Forward Neural Networks

Simulation of Zhang Suen Algorithm using Feed- Forward Neural Networks Simulation of Zhang Suen Algorithm using Feed- Forward Neural Networks Ritika Luthra Research Scholar Chandigarh University Gulshan Goyal Associate Professor Chandigarh University ABSTRACT Image Skeletonization

More information

Dfinity Consensus, Explored

Dfinity Consensus, Explored Dfinity Consensus, Explored Ittai Abraham, Dahlia Malkhi, Kartik Nayak, and Ling Ren VMware Research {iabraham,dmalkhi,nkartik,lingren}@vmware.com Abstract. We explore a Byzantine Consensus protocol called

More information

INVESTIGATING DATA MINING BY ARTIFICIAL NEURAL NETWORK: A CASE OF REAL ESTATE PROPERTY EVALUATION

INVESTIGATING DATA MINING BY ARTIFICIAL NEURAL NETWORK: A CASE OF REAL ESTATE PROPERTY EVALUATION http:// INVESTIGATING DATA MINING BY ARTIFICIAL NEURAL NETWORK: A CASE OF REAL ESTATE PROPERTY EVALUATION 1 Rajat Pradhan, 2 Satish Kumar 1,2 Dept. of Electronics & Communication Engineering, A.S.E.T.,

More information

Signed Messages. Signed Messages

Signed Messages. Signed Messages Signed Messages! Traitors ability to lie makes Byzantine General Problem so difficult.! If we restrict this ability, then the problem becomes easier! Use authentication, i.e. allow generals to send unforgeable

More information

Character Recognition Using Convolutional Neural Networks

Character Recognition Using Convolutional Neural Networks Character Recognition Using Convolutional Neural Networks David Bouchain Seminar Statistical Learning Theory University of Ulm, Germany Institute for Neural Information Processing Winter 2006/2007 Abstract

More information

CHAPTER VI BACK PROPAGATION ALGORITHM

CHAPTER VI BACK PROPAGATION ALGORITHM 6.1 Introduction CHAPTER VI BACK PROPAGATION ALGORITHM In the previous chapter, we analysed that multiple layer perceptrons are effectively applied to handle tricky problems if trained with a vastly accepted

More information

Neural Networks. CE-725: Statistical Pattern Recognition Sharif University of Technology Spring Soleymani

Neural Networks. CE-725: Statistical Pattern Recognition Sharif University of Technology Spring Soleymani Neural Networks CE-725: Statistical Pattern Recognition Sharif University of Technology Spring 2013 Soleymani Outline Biological and artificial neural networks Feed-forward neural networks Single layer

More information

COMP 551 Applied Machine Learning Lecture 14: Neural Networks

COMP 551 Applied Machine Learning Lecture 14: Neural Networks COMP 551 Applied Machine Learning Lecture 14: Neural Networks Instructor: (jpineau@cs.mcgill.ca) Class web page: www.cs.mcgill.ca/~jpineau/comp551 Unless otherwise noted, all material posted for this course

More information

Traffic Signs Recognition using HP and HOG Descriptors Combined to MLP and SVM Classifiers

Traffic Signs Recognition using HP and HOG Descriptors Combined to MLP and SVM Classifiers Traffic Signs Recognition using HP and HOG Descriptors Combined to MLP and SVM Classifiers A. Salhi, B. Minaoui, M. Fakir, H. Chakib, H. Grimech Faculty of science and Technology Sultan Moulay Slimane

More information

Image Compression: An Artificial Neural Network Approach

Image Compression: An Artificial Neural Network Approach Image Compression: An Artificial Neural Network Approach Anjana B 1, Mrs Shreeja R 2 1 Department of Computer Science and Engineering, Calicut University, Kuttippuram 2 Department of Computer Science and

More information

Concepts. Techniques for masking faults. Failure Masking by Redundancy. CIS 505: Software Systems Lecture Note on Consensus

Concepts. Techniques for masking faults. Failure Masking by Redundancy. CIS 505: Software Systems Lecture Note on Consensus CIS 505: Software Systems Lecture Note on Consensus Insup Lee Department of Computer and Information Science University of Pennsylvania CIS 505, Spring 2007 Concepts Dependability o Availability ready

More information

CSCI 5454, CU Boulder Samriti Kanwar Lecture April 2013

CSCI 5454, CU Boulder Samriti Kanwar Lecture April 2013 1. Byzantine Agreement Problem In the Byzantine agreement problem, n processors communicate with each other by sending messages over bidirectional links in order to reach an agreement on a binary value.

More information

Complexity of Multi-Value Byzantine Agreement

Complexity of Multi-Value Byzantine Agreement Complexity of Multi-Value Byzantine Agreement Guanfeng Liang and Nitin Vaidya Department of Electrical and Computer Engineering, and Coordinated Science Laboratory University of Illinois at Urbana-Champaign

More information

Consensus. Chapter Two Friends. 2.3 Impossibility of Consensus. 2.2 Consensus 16 CHAPTER 2. CONSENSUS

Consensus. Chapter Two Friends. 2.3 Impossibility of Consensus. 2.2 Consensus 16 CHAPTER 2. CONSENSUS 16 CHAPTER 2. CONSENSUS Agreement All correct nodes decide for the same value. Termination All correct nodes terminate in finite time. Validity The decision value must be the input value of a node. Chapter

More information

Practical Byzantine Fault

Practical Byzantine Fault Practical Byzantine Fault Tolerance Practical Byzantine Fault Tolerance Castro and Liskov, OSDI 1999 Nathan Baker, presenting on 23 September 2005 What is a Byzantine fault? Rationale for Byzantine Fault

More information

A Framework of Hyperspectral Image Compression using Neural Networks

A Framework of Hyperspectral Image Compression using Neural Networks A Framework of Hyperspectral Image Compression using Neural Networks Yahya M. Masalmah, Ph.D 1, Christian Martínez-Nieves 1, Rafael Rivera-Soto 1, Carlos Velez 1, and Jenipher Gonzalez 1 1 Universidad

More information

Consensus. Chapter Two Friends. 8.3 Impossibility of Consensus. 8.2 Consensus 8.3. IMPOSSIBILITY OF CONSENSUS 55

Consensus. Chapter Two Friends. 8.3 Impossibility of Consensus. 8.2 Consensus 8.3. IMPOSSIBILITY OF CONSENSUS 55 8.3. IMPOSSIBILITY OF CONSENSUS 55 Agreement All correct nodes decide for the same value. Termination All correct nodes terminate in finite time. Validity The decision value must be the input value of

More information

Lecture 2 Notes. Outline. Neural Networks. The Big Idea. Architecture. Instructors: Parth Shah, Riju Pahwa

Lecture 2 Notes. Outline. Neural Networks. The Big Idea. Architecture. Instructors: Parth Shah, Riju Pahwa Instructors: Parth Shah, Riju Pahwa Lecture 2 Notes Outline 1. Neural Networks The Big Idea Architecture SGD and Backpropagation 2. Convolutional Neural Networks Intuition Architecture 3. Recurrent Neural

More information

Opening the Black Box Data Driven Visualizaion of Neural N

Opening the Black Box Data Driven Visualizaion of Neural N Opening the Black Box Data Driven Visualizaion of Neural Networks September 20, 2006 Aritificial Neural Networks Limitations of ANNs Use of Visualization (ANNs) mimic the processes found in biological

More information

MODULO 2 n + 1 MAC UNIT

MODULO 2 n + 1 MAC UNIT Int. J. Elec&Electr.Eng&Telecoms. 2013 Sithara Sha and Shajimon K John, 2013 Research Paper MODULO 2 n + 1 MAC UNIT ISSN 2319 2518 www.ijeetc.com Vol. 2, No. 4, October 2013 2013 IJEETC. All Rights Reserved

More information

Improved Attack on Full-round Grain-128

Improved Attack on Full-round Grain-128 Improved Attack on Full-round Grain-128 Ximing Fu 1, and Xiaoyun Wang 1,2,3,4, and Jiazhe Chen 5, and Marc Stevens 6, and Xiaoyang Dong 2 1 Department of Computer Science and Technology, Tsinghua University,

More information

Assignment # 5. Farrukh Jabeen Due Date: November 2, Neural Networks: Backpropation

Assignment # 5. Farrukh Jabeen Due Date: November 2, Neural Networks: Backpropation Farrukh Jabeen Due Date: November 2, 2009. Neural Networks: Backpropation Assignment # 5 The "Backpropagation" method is one of the most popular methods of "learning" by a neural network. Read the class

More information

Visual object classification by sparse convolutional neural networks

Visual object classification by sparse convolutional neural networks Visual object classification by sparse convolutional neural networks Alexander Gepperth 1 1- Ruhr-Universität Bochum - Institute for Neural Dynamics Universitätsstraße 150, 44801 Bochum - Germany Abstract.

More information

STEREO-DISPARITY ESTIMATION USING A SUPERVISED NEURAL NETWORK

STEREO-DISPARITY ESTIMATION USING A SUPERVISED NEURAL NETWORK 2004 IEEE Workshop on Machine Learning for Signal Processing STEREO-DISPARITY ESTIMATION USING A SUPERVISED NEURAL NETWORK Y. V. Venkatesh, B. S. Venhtesh and A. Jaya Kumar Department of Electrical Engineering

More information

Byzantine Consensus. Definition

Byzantine Consensus. Definition Byzantine Consensus Definition Agreement: No two correct processes decide on different values Validity: (a) Weak Unanimity: if all processes start from the same value v and all processes are correct, then

More information

Detectable Byzantine Agreement Secure Against Faulty Majorities

Detectable Byzantine Agreement Secure Against Faulty Majorities Detectable Byzantine Agreement Secure Against Faulty Majorities Matthias Fitzi, ETH Zürich Daniel Gottesman, UC Berkeley Martin Hirt, ETH Zürich Thomas Holenstein, ETH Zürich Adam Smith, MIT (currently

More information

FPGA Implementation of Optimized DES Encryption Algorithm on Spartan 3E

FPGA Implementation of Optimized DES Encryption Algorithm on Spartan 3E FPGA Implementation of Optimized DES Encryption Algorithm on Spartan 3E Amandeep Singh, Manu Bansal Abstract - Data Security is an important parameter for the industries. It can be achieved by Encryption

More information

Use of Artificial Neural Networks to Investigate the Surface Roughness in CNC Milling Machine

Use of Artificial Neural Networks to Investigate the Surface Roughness in CNC Milling Machine Use of Artificial Neural Networks to Investigate the Surface Roughness in CNC Milling Machine M. Vijay Kumar Reddy 1 1 Department of Mechanical Engineering, Annamacharya Institute of Technology and Sciences,

More information

Link Lifetime Prediction in Mobile Ad-Hoc Network Using Curve Fitting Method

Link Lifetime Prediction in Mobile Ad-Hoc Network Using Curve Fitting Method IJCSNS International Journal of Computer Science and Network Security, VOL.17 No.5, May 2017 265 Link Lifetime Prediction in Mobile Ad-Hoc Network Using Curve Fitting Method Mohammad Pashaei, Hossein Ghiasy

More information

Distributed Systems Fault Tolerance

Distributed Systems Fault Tolerance Distributed Systems Fault Tolerance [] Fault Tolerance. Basic concepts - terminology. Process resilience groups and failure masking 3. Reliable communication reliable client-server communication reliable

More information

Distributed Systems 11. Consensus. Paul Krzyzanowski

Distributed Systems 11. Consensus. Paul Krzyzanowski Distributed Systems 11. Consensus Paul Krzyzanowski pxk@cs.rutgers.edu 1 Consensus Goal Allow a group of processes to agree on a result All processes must agree on the same value The value must be one

More information

CMPT 882 Week 3 Summary

CMPT 882 Week 3 Summary CMPT 882 Week 3 Summary! Artificial Neural Networks (ANNs) are networks of interconnected simple units that are based on a greatly simplified model of the brain. ANNs are useful learning tools by being

More information

Dep. Systems Requirements

Dep. Systems Requirements Dependable Systems Dep. Systems Requirements Availability the system is ready to be used immediately. A(t) = probability system is available for use at time t MTTF/(MTTF+MTTR) If MTTR can be kept small

More information

Notes on Multilayer, Feedforward Neural Networks

Notes on Multilayer, Feedforward Neural Networks Notes on Multilayer, Feedforward Neural Networks CS425/528: Machine Learning Fall 2012 Prepared by: Lynne E. Parker [Material in these notes was gleaned from various sources, including E. Alpaydin s book

More information

A Network Intrusion Detection System Architecture Based on Snort and. Computational Intelligence

A Network Intrusion Detection System Architecture Based on Snort and. Computational Intelligence 2nd International Conference on Electronics, Network and Computer Engineering (ICENCE 206) A Network Intrusion Detection System Architecture Based on Snort and Computational Intelligence Tao Liu, a, Da

More information

Channel Performance Improvement through FF and RBF Neural Network based Equalization

Channel Performance Improvement through FF and RBF Neural Network based Equalization Channel Performance Improvement through FF and RBF Neural Network based Equalization Manish Mahajan 1, Deepak Pancholi 2, A.C. Tiwari 3 Research Scholar 1, Asst. Professor 2, Professor 3 Lakshmi Narain

More information

Argha Roy* Dept. of CSE Netaji Subhash Engg. College West Bengal, India.

Argha Roy* Dept. of CSE Netaji Subhash Engg. College West Bengal, India. Volume 3, Issue 3, March 2013 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Training Artificial

More information

CS5412: CONSENSUS AND THE FLP IMPOSSIBILITY RESULT

CS5412: CONSENSUS AND THE FLP IMPOSSIBILITY RESULT 1 CS5412: CONSENSUS AND THE FLP IMPOSSIBILITY RESULT Lecture XII Ken Birman Generalizing Ron and Hermione s challenge 2 Recall from last time: Ron and Hermione had difficulty agreeing where to meet for

More information

Two-Phase Atomic Commitment Protocol in Asynchronous Distributed Systems with Crash Failure

Two-Phase Atomic Commitment Protocol in Asynchronous Distributed Systems with Crash Failure Two-Phase Atomic Commitment Protocol in Asynchronous Distributed Systems with Crash Failure Yong-Hwan Cho, Sung-Hoon Park and Seon-Hyong Lee School of Electrical and Computer Engineering, Chungbuk National

More information

Week 3: Perceptron and Multi-layer Perceptron

Week 3: Perceptron and Multi-layer Perceptron Week 3: Perceptron and Multi-layer Perceptron Phong Le, Willem Zuidema November 12, 2013 Last week we studied two famous biological neuron models, Fitzhugh-Nagumo model and Izhikevich model. This week,

More information

Fault Tolerance Part I. CS403/534 Distributed Systems Erkay Savas Sabanci University

Fault Tolerance Part I. CS403/534 Distributed Systems Erkay Savas Sabanci University Fault Tolerance Part I CS403/534 Distributed Systems Erkay Savas Sabanci University 1 Overview Basic concepts Process resilience Reliable client-server communication Reliable group communication Distributed

More information

Achievable Rate Regions for Network Coding

Achievable Rate Regions for Network Coding Achievable Rate Regions for Network oding Randall Dougherty enter for ommunications Research 4320 Westerra ourt San Diego, A 92121-1969 Email: rdough@ccrwest.org hris Freiling Department of Mathematics

More information

Cursive Handwriting Recognition System Using Feature Extraction and Artificial Neural Network

Cursive Handwriting Recognition System Using Feature Extraction and Artificial Neural Network Cursive Handwriting Recognition System Using Feature Extraction and Artificial Neural Network Utkarsh Dwivedi 1, Pranjal Rajput 2, Manish Kumar Sharma 3 1UG Scholar, Dept. of CSE, GCET, Greater Noida,

More information

Secure Reliable Multicast Protocols in a WAN

Secure Reliable Multicast Protocols in a WAN Secure Reliable Multicast Protocols in a WAN Dahlia Malkhi Michael Merritt Ohad Rodeh AT&T Labs, Murray Hill, New Jersey {dalia,mischu}@research.att.com The Hebrew University of Jerusalem tern@cs.huji.ac.il

More information

Natural Language Processing CS 6320 Lecture 6 Neural Language Models. Instructor: Sanda Harabagiu

Natural Language Processing CS 6320 Lecture 6 Neural Language Models. Instructor: Sanda Harabagiu Natural Language Processing CS 6320 Lecture 6 Neural Language Models Instructor: Sanda Harabagiu In this lecture We shall cover: Deep Neural Models for Natural Language Processing Introduce Feed Forward

More information

International Journal of Advanced Research in Computer Science and Software Engineering

International Journal of Advanced Research in Computer Science and Software Engineering Volume 3, Issue 4, April 203 ISSN: 77 2X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Stock Market Prediction

More information

A Boosting-Based Framework for Self-Similar and Non-linear Internet Traffic Prediction

A Boosting-Based Framework for Self-Similar and Non-linear Internet Traffic Prediction A Boosting-Based Framework for Self-Similar and Non-linear Internet Traffic Prediction Hanghang Tong 1, Chongrong Li 2, and Jingrui He 1 1 Department of Automation, Tsinghua University, Beijing 100084,

More information

Improving Trajectory Tracking Performance of Robotic Manipulator Using Neural Online Torque Compensator

Improving Trajectory Tracking Performance of Robotic Manipulator Using Neural Online Torque Compensator JOURNAL OF ENGINEERING RESEARCH AND TECHNOLOGY, VOLUME 1, ISSUE 2, JUNE 2014 Improving Trajectory Tracking Performance of Robotic Manipulator Using Neural Online Torque Compensator Mahmoud M. Al Ashi 1,

More information

Tradeoffs in Byzantine-Fault-Tolerant State-Machine-Replication Protocol Design

Tradeoffs in Byzantine-Fault-Tolerant State-Machine-Replication Protocol Design Tradeoffs in Byzantine-Fault-Tolerant State-Machine-Replication Protocol Design Michael G. Merideth March 2008 CMU-ISR-08-110 School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213

More information

Distributed Systems. Fault Tolerance. Paul Krzyzanowski

Distributed Systems. Fault Tolerance. Paul Krzyzanowski Distributed Systems Fault Tolerance Paul Krzyzanowski Except as otherwise noted, the content of this presentation is licensed under the Creative Commons Attribution 2.5 License. Faults Deviation from expected

More information

Practical Byzantine Fault Tolerance

Practical Byzantine Fault Tolerance Practical Byzantine Fault Tolerance Robert Grimm New York University (Partially based on notes by Eric Brewer and David Mazières) The Three Questions What is the problem? What is new or different? What

More information

Practical Byzantine Fault Tolerance. Miguel Castro and Barbara Liskov

Practical Byzantine Fault Tolerance. Miguel Castro and Barbara Liskov Practical Byzantine Fault Tolerance Miguel Castro and Barbara Liskov Outline 1. Introduction to Byzantine Fault Tolerance Problem 2. PBFT Algorithm a. Models and overview b. Three-phase protocol c. View-change

More information

Verteilte Systeme/Distributed Systems Ch. 5: Various distributed algorithms

Verteilte Systeme/Distributed Systems Ch. 5: Various distributed algorithms Verteilte Systeme/Distributed Systems Ch. 5: Various distributed algorithms Holger Karl Computer Networks Group Universität Paderborn Goal of this chapter Apart from issues in distributed time and resulting

More information

COMBINING NEURAL NETWORKS FOR SKIN DETECTION

COMBINING NEURAL NETWORKS FOR SKIN DETECTION COMBINING NEURAL NETWORKS FOR SKIN DETECTION Chelsia Amy Doukim 1, Jamal Ahmad Dargham 1, Ali Chekima 1 and Sigeru Omatu 2 1 School of Engineering and Information Technology, Universiti Malaysia Sabah,

More information

An Algorithm For Training Multilayer Perceptron (MLP) For Image Reconstruction Using Neural Network Without Overfitting.

An Algorithm For Training Multilayer Perceptron (MLP) For Image Reconstruction Using Neural Network Without Overfitting. An Algorithm For Training Multilayer Perceptron (MLP) For Image Reconstruction Using Neural Network Without Overfitting. Mohammad Mahmudul Alam Mia, Shovasis Kumar Biswas, Monalisa Chowdhury Urmi, Abubakar

More information

Fault Tolerance. Distributed Software Systems. Definitions

Fault Tolerance. Distributed Software Systems. Definitions Fault Tolerance Distributed Software Systems Definitions Availability: probability the system operates correctly at any given moment Reliability: ability to run correctly for a long interval of time Safety:

More information

Watermarking Using Bit Plane Complexity Segmentation and Artificial Neural Network Rashmeet Kaur Chawla 1, Sunil Kumar Muttoo 2

Watermarking Using Bit Plane Complexity Segmentation and Artificial Neural Network Rashmeet Kaur Chawla 1, Sunil Kumar Muttoo 2 International Journal of Scientific Research and Management (IJSRM) Volume 5 Issue 06 Pages 5378-5385 2017 Website: www.ijsrm.in ISSN (e): 2321-3418 Index Copernicus value (2015): 57.47 DOI: 10.18535/ijsrm/v5i6.04

More information

Recurrent Neural Network Models for improved (Pseudo) Random Number Generation in computer security applications

Recurrent Neural Network Models for improved (Pseudo) Random Number Generation in computer security applications Recurrent Neural Network Models for improved (Pseudo) Random Number Generation in computer security applications D.A. Karras 1 and V. Zorkadis 2 1 University of Piraeus, Dept. of Business Administration,

More information

A Hybrid Approach for Misbehavior Detection in Wireless Ad-Hoc Networks

A Hybrid Approach for Misbehavior Detection in Wireless Ad-Hoc Networks A Hybrid Approach for Misbehavior Detection in Wireless Ad-Hoc Networks S. Balachandran, D. Dasgupta, L. Wang Intelligent Security Systems Research Lab Department of Computer Science The University of

More information

Review on Methods of Selecting Number of Hidden Nodes in Artificial Neural Network

Review on Methods of Selecting Number of Hidden Nodes in Artificial Neural Network Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 3, Issue. 11, November 2014,

More information

Fault-Tolerant Routing Algorithm in Meshes with Solid Faults

Fault-Tolerant Routing Algorithm in Meshes with Solid Faults Fault-Tolerant Routing Algorithm in Meshes with Solid Faults Jong-Hoon Youn Bella Bose Seungjin Park Dept. of Computer Science Dept. of Computer Science Dept. of Computer Science Oregon State University

More information

Failure models. Byzantine Fault Tolerance. What can go wrong? Paxos is fail-stop tolerant. BFT model. BFT replication 5/25/18

Failure models. Byzantine Fault Tolerance. What can go wrong? Paxos is fail-stop tolerant. BFT model. BFT replication 5/25/18 Failure models Byzantine Fault Tolerance Fail-stop: nodes either execute the protocol correctly or just stop Byzantine failures: nodes can behave in any arbitrary way Send illegal messages, try to trick

More information