A Survey of the SpiNNaker Project: A Massively Parallel Spiking Neural Network Architecture


Per Lenander
Mälardalen University, Robotics program
Västerås, Sweden
plr07001@student.mdh.se

Anton Fosselius
Mälardalen University, Robotics program
Västerås, Sweden
afs07002@student.mdh.se

ABSTRACT Scientists have through the decades tried many different approaches to creating true Artificial Intelligence (AI). There are two main goals to this endeavour: first, to create more complex and intelligent machines and software, and second, to further the understanding of the human brain. One line that some have followed is to model the biological structure of the brain (hard AI) instead of modelling the problem-solving process (soft AI). This paper is a survey of one hard AI endeavour: the SpiNNaker project, a massively parallel spiking neural network architecture. The SpiNNaker is designed as many smaller specialized chips connected together in a network architecture. The focus is on high message throughput and clever routing algorithms, rather than on the processing power of each individual node. Another big focus is how to manage power consumption in a system consisting of a million processor cores. This paper briefly explains the purpose and the inner workings of the SpiNNaker, and provides a glimpse of what is to come in the Artificial Intelligence field in the near future.

Keywords: Neural Network, Parallel computing, SpiNNaker.

1. INTRODUCTION The concept of the neural network is by no means a new idea. It has its roots in recreating how the human brain works by modelling neurons and the connections between them. This field of research is usually referred to as hard Artificial Intelligence. A key feature of this model is learning: the network is exposed to a large number of training examples, and modified through one of several learning algorithms to better solve the given problem. Another method is to have the network learn while the system is on-line, by providing positive and negative feedback on the resulting output. This model can be constructed either in hardware or in software (an Artificial Neural Network). Neural networks have many applications, from classification of objects[10], pattern recognition[16][1] and similar problems, to modelling and optimizing complex mathematical functions[12]. From the simplest model of a neural network (the feed-forward structure using perceptrons[14][9]) to more complex structures such as bi-directional recurrent networks, there are many different approaches to constructing neural networks. A comprehensive summary of the field can be found in Simon Haykin's book[3].

We will further examine one such model, called the spiking neural network. The basic idea is to build a network of interconnected neurons, where each neuron has an electrical potential. Once this potential reaches a certain level, the neuron sends out signals to all other neurons connected to it. These connections are called synapses, and can be either exciting or inhibiting. This model is much closer to the human brain than the simplistic feed-forward network, but also much more computationally expensive. To facilitate learning in the network, synaptic weights can be adjusted through synaptic plasticity algorithms. This will be explained further in section 4.

A general problem with artificial neural networks is how computationally expensive it is to simulate and train the network. This problem, combined with the knowledge that neurons in a human brain communicate asynchronously, makes neural networks perfect for a parallel architecture. An example of such an architecture is the GPU-SNN[11], which models a large number of spiking neurons on a General Purpose Graphical Processing Unit (or, to put it in simpler terms, an ordinary desktop graphics card).

Another architecture with a similar vision, the SpiNNaker project, is the focus of this paper. The project was started at the University of Manchester in 2006 by Stephen Furber, ICL Professor of Computer Engineering, whose earlier work includes designing the 32-bit ARM architecture, and is scheduled to finish in 2010.
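As a concrete illustration of the perceptron model mentioned in the introduction, the following sketch trains a single perceptron on a trivially separable problem (logical AND). This is generic illustration code; the learning rate, epoch count and data are arbitrary choices, not anything from the SpiNNaker project.

```python
# Minimal perceptron: learns a linearly separable function (logical AND).
# Illustrative only; rate and epochs are arbitrary choices.

def train_perceptron(samples, rate=0.1, epochs=20):
    """samples: list of ((x1, x2), target) with targets 0 or 1."""
    w = [0.0, 0.0]   # synaptic weights
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out                 # perceptron learning rule
            w[0] += rate * err * x1
            w[1] += rate * err * x2
            b += rate * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
for (x1, x2), target in data:
    out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    assert out == target   # the trained perceptron reproduces AND
```

The perceptron convergence theorem guarantees that this loop terminates with a correct separator for any linearly separable data; the spiking models discussed below replace this static weighted sum with time-dependent membrane dynamics.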

Figure 1: Biological model of a neuron. Image from The Phenomenon of Science by Valentin Turchin.

The purpose of this project is to create a neural network with approximately 1% of the number of neurons in the human brain. To do this, the project team designed a specialized hardware architecture focused on massive parallelism. The project is funded by the Engineering and Physical Sciences Research Council in the period 01 October 2006 to 31 March 2010, with grants to the University of Southampton and to The University of Manchester. In addition to this, ARM Limited and Silistix Limited are sponsoring the project with hardware and technology.

The paper is divided into sections describing the different aspects of the SpiNNaker project. Section 2 begins by introducing the concept of a spiking neural network. Section 3 discusses the hardware architecture and communication protocols of the system, while section 4 describes the software implemented in each processor core. Sections 5 and 6 discuss future work and conclusions about the project.

2. SPIKING NEURAL NETWORK

2.1 Biological inspiration The idea of neural networks stems from observations of nature, specifically from the human brain. The working of the human brain is significantly different from that of an ordinary sequential computer. While the computers of today are significantly faster and have much higher computational power than single neurons, the parallel nature and complexity of the brain allow for a high level of pattern recognition, deduction and adaptation. The brain consists of a vast network of neurons, a form of cell able to transmit electrical signals (see figure 1). The neuron has a set of dendrites (inputs) and an axon (output) that is connected to the dendrites of other neurons through synapses. When the neuron reaches a certain electrical potential, it will send out an electrical signal through the axon, which will propagate through the synapses to all connected neurons. Each synapse may modify the signal to either excite or inhibit the potential of the receiving neuron. A human brain consists of roughly one hundred billion neurons, where each neuron is connected to several tens of thousands of other neurons. Each neuron transmits a signal (a spike) at the average rate of 10 Hz.

2.2 Artificial Model This biological construction can be modelled, either in software or in hardware. In its simplest form, the spiking neuron model has a membrane potential. Over time, the neuron will receive short energy bursts (spikes) from other neurons. These spikes will slightly change the membrane potential in a linear fashion, but after some time the neuron will decay back to a resting potential. Once a threshold potential is reached, the neuron will send out a spike of its own to nearby connected neurons and return to the resting potential. If a neuron has many incoming exciting spikes, it will send out spikes more often. Computationally, the spiking neurons are essentially independent of each other, so there is great potential for taking advantage of parallelism and asynchronous hardware and software features. Each neuron can be designed as an object that can send and receive messages (spikes), much like a node in a computer network.

2.3 Application of theory The SpiNNaker is developed with all this in mind. Instead of focusing on high speed computer chips, the SpiNNaker is a network of low power nodes, connected through a set of high speed packet routers for communication. This design gives the SpiNNaker great redundancy when compared to conventional computers. There is no central point of failure, and the network can still function even if packets are lost, or entire nodes drop off the grid. This is also similar to how the brain works. Though a general model of the low level building block of the brain (the neuron) is known, and some of the high level brain activity can be measured, scientists do not currently know how networks of neurons form the complex output that they do.
The project team hopes to further research within this area by replicating the low level structure of the brain, so that neuroscientists have a simulation environment to experiment on.

3. HARDWARE

3.1 Chip design The core of the project is a hardware architecture designed with massive parallelism in mind. The system consists of a series of specially designed chips connected together in a network with a two-dimensional toroidal triangular mesh structure (see figure 2). Each node in the network is relatively low in computational power, but put together they form a very powerful and robust system.

Figure 2: Model of the two-dimensional toroidal triangular mesh structure. Image from A GALS Infrastructure for a Massively Parallel Multiprocessor, by Luis A. Plana[13].

Figure 3: Overview of the network architecture of SpiNNaker. Each green node is a SpiNNaker chip with twenty ARM9 processor cores. Image from the SpiNNaker website.

Every node consists of twenty ARM9 cores (ARM is a 32-bit Reduced Instruction Set Computer (RISC) processor architecture whose design focuses on low power consumption), each clocked at 200 MHz. The chip also has a router for communication between different cores, both locally on the chip and globally in the network. Each core can simulate about 1000 neurons in biological real-time; however, only nineteen of the cores are used for simulation. The spare core is used to perform management tasks, and is assigned by being the first core on the chip that finishes its self-test on power-up. This means that one SpiNNaker chip can simulate about 19,000 neurons. To achieve the goal of simulating 1% of the human brain, the SpiNNaker network will have to be able to simulate one billion neurons. If one chip can simulate about 19,000 neurons, then on the order of 50,000 SpiNNaker chips are needed to achieve this goal. To build a neural network of this size is a demanding task, where the constraints are power consumption, communication and flexibility.

When the initial design was being considered, there was a comparison between Complex Instruction Set Computer (CISC) and Reduced Instruction Set Computer (RISC) processors. These are two different archetypes in processor design, one focusing on many complex and powerful instructions while the other uses a small number of highly optimized instructions. CISC is the CPU architecture most commonly used for heavy calculations today because of its superior processing power. However, those CPUs are expensive and need a lot of power. RISC CPUs are a lot cheaper and demand less power to do the same amount of calculations as the CISC CPUs. The solution was to use a lot of cheap, low power RISC CPUs (ARM9) and run them in parallel.

3.2 Communication In this section we explain the different techniques used to make a network of this size possible. Ordinary network protocols such as Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) would create a huge overhead and take a lot of time to set up, and the communication speed would be too slow.
Because of this, a huge part of the SpiNNaker project is about communication, both on the SpiNNaker chip and between different nodes. The SpiNNaker chip does not have a standard data bus (as a motherboard in an ordinary PC would have). The reason for this is that the system is designed to be globally asynchronous, which means that every processor runs on its own clock. To realize an ordinary bus solution with this design in mind, the bus would have to have twenty-one bus masters: every ARM9 core, as well as the router, would need to be a master connected to the same bus. This would lead to a really complex and poorly optimized communication architecture. Instead, the SpiNNaker chip has two Networks on Chip (NoC), used as follows. The System NoC is SpiNNaker's way to give the ARM9 cores access to shared system resources. One of those system resources is the off-chip 1 Gbit Synchronous Dynamic Random Access Memory (SDRAM) that is shared between the twenty ARM cores. The System NoC has a bandwidth of 8 Gbps to give the cores fast access to system resources like the SDRAM. Another example of a system resource that can be accessed through the System NoC is the Ethernet interface. The Ethernet interface allows an external computer to access the SpiNNaker network (see figure 3); this is used to map and initiate the routing tables on the network, but can also be used to monitor and configure the neural network. The other Network on Chip, the Communications NoC, allows each core to connect to other ARM9 cores on- or off-chip. The chip is designed to be Globally Asynchronous, Locally Synchronous (GALS) and to use Delay Insensitive (DI) communication, two concepts that will shortly be explained.

GALS As Furber states in [2]: One of the three key ideas behind the SpiNNaker architecture is that time models itself : time is free running and there is no global synchronization.
This is sometimes referred to as bounded asynchrony, since interactions are asynchronous but threads progress at very similar rates.
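The idea that time models itself, with independent local clocks whose events simply interleave in real time rather than being aligned to a global clock edge, can be sketched as a small discrete-event merge. This is purely illustrative and not SpiNNaker code.

```python
# Sketch of bounded asynchrony: each core advances on its own clock
# (slightly different tick lengths), and a global observer sees their
# ticks interleave in real time. Purely illustrative, not SpiNNaker code.
import heapq

def run(tick_lengths, ticks_per_core):
    """tick_lengths: per-core clock period (no two need agree).
    Merges every core's locally ordered ticks into one global timeline
    and returns the order in which cores fire."""
    events = []  # priority queue of (global_time, core_id)
    for core, dt in enumerate(tick_lengths):
        for n in range(1, ticks_per_core + 1):
            heapq.heappush(events, (n * dt, core))
    order = []
    while events:
        t, core = heapq.heappop(events)   # whichever core fires next, fires
        order.append(core)
    return order

# Three cores with near-equal but unsynchronized clocks: their ticks
# interleave, yet each core's own ticks stay in order (local synchrony).
print(run([1.00, 1.01, 0.99], 3))   # prints [2, 0, 1, 2, 0, 1, 2, 0, 1]
```

No global synchronization step appears anywhere: ordering emerges only from each event's own timestamp, which is the property the GALS design exploits.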

Figure 4: 1-of-4 Delay Insensitive (DI) code. Image from A GALS Infrastructure for a Massively Parallel Multiprocessor, by Luis A. Plana[13].

The SpiNNaker uses Globally Asynchronous, Locally Synchronous communication (GALS)[13], where each ARM9 core uses its own clock (locally synchronous), and all communication to and from the core is asynchronous. This is the core design of the entire SpiNNaker project: everything runs asynchronously and in parallel, with locally synchronous elements. A synchronous design of a system of this size would demand a vast amount of resources just to keep the system synced. With an asynchronous system design, you free a lot of system resources that can be put to better use. However, asynchronous communication brings its own problems. How it works is explained in the next sections.

Delay Insensitive communication Delay Insensitive communication (DI)[13] is a type of fail-safe asynchronous communication that is used for all internal communication in the SpiNNaker network. DI expects that all delays are positive and finite, and is used to make communication between nodes very robust. Protocols such as this are more robust than simple parallel communication, but faster than simple serial communication (since one symbol, rather than just one bit, can be sent each communication cycle). DI uses a 1-of-N code, where N is the number of wires used and 1 is the number of wires that can be active at the same time. Figure 4 shows how 1-of-4 DI represents binary values. Between each value transmitted, a null value is sent. This is due to the globally asynchronous structure, to make sure that the correct value is received. This is called the Return-To-Zero (RTZ) protocol. When data is received, an acknowledgement (ACK) is sent to confirm that the data was received. The null token that RTZ always alternates with limits the bandwidth of the communication. However, there is an alternative called Non Return to Zero (NRZ)[13] that can be used to achieve a higher bandwidth. In the NRZ protocol there is no alternation with the null token, which gives it twice the bandwidth at the same energy consumption compared to the RTZ protocol, at the expense of accuracy. NRZ is used in the SpiNNaker project, as it can communicate data faster than the alternatives. The risk of errors in communication is compensated for at the architectural level: SpiNNaker is built to be fault tolerant.

Routing and network communication In this section we will explain how a signal travels from one neuron to another, going through all the steps on the way. Furber states in [2]: The second key idea behind SpiNNaker [Virtual topology] is that physical and logical connectivity are decoupled. What this entails is that each neuron in the SpiNNaker system is independent of its actual physical location in the system. Due to the fast routing protocols used, the same neural network can be mapped to the SpiNNaker architecture in any way, with no need for connected neurons to be mapped to physically close nodes. The SpiNNaker network is built up from a large number of circuit boards, and on each of those circuit boards there are several SpiNNaker chips, the nodes of the network. On every SpiNNaker chip there are twenty ARM9 cores, where each core can simulate a number of neurons. Each neuron is connected to a large number of other neurons through one-way connections (either on the same SpiNNaker node or somewhere else in the network). Those connections are stored as pre- and post-synaptic connections for every neuron. The pre-synaptic connections are the neurons that the current neuron can receive packets from, while the post-synaptic connections are the neurons that the current neuron can send to. The packets emitted through the network when a neuron spikes are sent without concern for whether they are received or not.
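The 1-of-4 Return-To-Zero coding described earlier can be sketched in a few lines: each 2-bit symbol drives exactly one of four wires high, and a null state (all wires low) separates consecutive symbols. This illustrates the coding scheme only and is not SpiNNaker code.

```python
# Sketch of 1-of-4 delay-insensitive coding with Return-To-Zero:
# each 2-bit symbol activates exactly one of four wires, and a null
# (all wires low) separates consecutive symbols. Illustrative only.

NULL = (0, 0, 0, 0)

def encode(symbols):
    """symbols: sequence of 2-bit values (0..3) -> wire states, RTZ style."""
    wire_states = []
    for s in symbols:
        code = tuple(1 if i == s else 0 for i in range(4))  # one-hot symbol
        wire_states.append(code)
        wire_states.append(NULL)          # return to zero between symbols
    return wire_states

def decode(wire_states):
    out = []
    for state in wire_states:
        if state == NULL:
            continue                      # null separator: wait for next symbol
        assert sum(state) == 1, "exactly one wire may be active"
        out.append(state.index(1))
    return out

msg = [0, 3, 3, 1]                        # e.g. the 2-bit chunks of a byte
assert decode(encode(msg)) == msg
```

Because the receiver only reacts to which single wire went high, and the null state marks symbol boundaries, the decoding is correct regardless of how long each wire transition takes, which is exactly the delay-insensitivity property; NRZ drops the interleaved nulls to double the symbol rate.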
If a packet is sent and not received, it is said to be lost, and nothing will be done about it. The Communications NoC is what makes it possible for two cores to communicate with each other independently of their location in the SpiNNaker network (and thus, for two neurons to communicate). To route the packets that are sent through the network, there is a router embedded on the SpiNNaker chip. The router takes about 10% of the chip area and is connected to the Communications NoC, where it allows communication between ARM cores on different chips at a maximum speed of 1 Gbps. The router connects each SpiNNaker chip to six others in a network formed as a two-dimensional toroidal triangular mesh structure. Rather than writing all the spike destinations in each packet, or sending one packet for each destination neuron, no destination is stored within the packet, only its source. This works because the router keeps a table of which neurons are connected to each source neuron. The router then sends the packet to all nodes that contain one of these destination neurons. The routing information is stored in 1024-word routing tables that are distributed throughout the system. When a packet is received, the router checks its routing table and decides where to send the packet. One way to speed up the routing is through default routing: a packet routed this way is sent out through the port opposite to the one it was received from. To make the routing more robust, emergency routing can be used. Emergency routing tries to send a packet on an alternative route if sending on its assigned port fails. If the router is still not able to send the packet, then the packet is dropped to prevent deadlock. Every packet also has an age, and when a packet gets too old it is dropped to prevent livelock (a lock where the system continues to run as usual, but the current state is not improved).

Figure 5: The SpiNNaker chip design, showing how the ARM9 cores are connected to the System and Communications NoC. Image from A GALS Infrastructure for a Massively Parallel Multiprocessor, by Luis A. Plana[13].

An overview of how the communication to and from the ARM9 cores works is provided by figure 5 (on chip) and figure 2 (off chip).

3.3 Energy consumption In a system of this size the energy consumption is of huge interest. Furber states in [2]: The third key idea behind SpiNNaker [energy frugality] is that processors are free; the real cost of computing is energy. We have seen a continuing trend of cheaper and cheaper CPUs, along with higher and higher energy costs. If this trend continues in the same direction, it will eventually lead to a situation where CPU cores are almost free, while there is a huge cost for running them. With this in mind, we want to consider building a computer architecture that demands less power to do the same amount of work as today's high power CPUs. In the SpiNNaker project this is solved by using many power-efficient ARM9 CPUs to do the same work as a high power Intel or AMD CPU. Another advantage of the ARM9 CPUs is that they can be put in sleep mode when they are not processing an incoming spike, and then be activated when they receive a packet. This makes the ARM9 CPU even more power efficient. Since the SpiNNaker network is built on such a large scale, with the final design goal containing tens of thousands of nodes, even small changes towards energy efficiency make a huge difference when you look at the whole network.

Figure 6: Neuron potential over time. A spike is emitted from the neuron when the potential rises over the threshold potential.

Due to this, every aspect of the system is designed with energy
efficiency in mind, from low level communication protocols to efficient processors and electrical components.

4. SOFTWARE

4.1 Neuron model We have previously discussed different models for neurons and for the overall structure of the neural network. As mentioned, the SpiNNaker system's main application will be a spiking neural network. The project team plans to use the Izhikevich model of a spiking neuron[6]. Izhikevich models the neuron from an electrical perspective, and the algorithm contains variables such as the neuron membrane potential, the membrane recovery (after a spike), and their respective derivatives. The resulting output from the neuron will be periodic signals, as the neuron will send spikes through the network at a rate depending on the input it receives. Figure 6 shows how the neuron membrane potential changes over time in this neuron model. When a spike arrives, the potential rises slightly, but decays towards a resting potential over time. Once the potential exceeds a threshold value, the neuron will emit a spike. For a short while after a spike has been emitted, it becomes harder for incoming spikes to affect the membrane potential. For a more in-depth look at the model, study Izhikevich's original paper[4].

4.2 Spike-timing-dependent plasticity In the brain, neurons communicate with each other through synapses. A synapse is the connection through which signals (electrical charges) are transferred from one neuron to another. From the synapse's perspective, the neuron that sent the signal is said to be pre-synaptic, and the receiving neuron is called post-synaptic. When a neuron has reached its threshold value, it sends a signal through its synapses to the connected neurons. This signal is called a spike or electrical charge. The synapse can modify the potential of the receiving neuron according to weights associated with the synapse. These synaptic weights can either be increased or decreased depending on the timing of the spikes, and they can be changed both by the pre-synaptic and the post-synaptic neuron. A common algorithm for this is called Spike-timing-dependent plasticity (STDP)[5]. It is this algorithm that allows the behaviour of the neural network to change over time based on the most active neurons; in short, it is how the network can learn. In the SpiNNaker system, the STDP learning algorithm only triggers on pre-synaptic spikes. The change in the synaptic weight, F, depends on the timing difference Δt between a pre-synaptic and a post-synaptic spike (here taken as the pre-synaptic spike time minus the post-synaptic spike time):

    F(Δt) = A+ e^(Δt/τ+)     if Δt < 0,
    F(Δt) = −A− e^(−Δt/τ−)   if Δt ≥ 0.    (1)

A pre-synaptic spike is defined as a spike arriving at a neuron, while a post-synaptic spike is defined as that neuron sending a spike. The factors A+ and A− are roughly equivalent to the learning rate in a standard neural network: the higher these values are, the more the synaptic weight will change each time STDP is triggered. τ+ and τ− are the sizes of the time windows in which learning can occur. In the theoretical case, this decreases the impact of spikes with a long delay between them. In practice, the STDP algorithm will only consider spikes that are within the time windows when learning. However, since STDP is only triggered on pre-synaptic events (in practice, this means that the algorithm has to be able to predict future spikes of the receiving neuron), the algorithm has to be modified. Without the modifications proposed by Jin[8], performance of the algorithm would be reduced due to how synaptic weights are stored in the shared memory. Because of how the data is addressed, the algorithm can be optimized either for pre-synaptic triggering or for post-synaptic triggering, but not both at the same time. The problem is solved by using a deferred event-driven model with a pre-sensitive scheme.
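The STDP weight-change rule above can be sketched as follows. The constants are arbitrary illustration values, and the sign convention (dt as the pre-synaptic spike time minus the post-synaptic spike time) is the assumption stated with the formula, not something fixed by this paper.

```python
# Sketch of the STDP weight change described above: pre-before-post
# (dt < 0) strengthens the synapse, post-before-pre weakens it, with
# exponentially decaying magnitude. Constants are illustrative.
import math

A_PLUS, A_MINUS = 0.10, 0.12       # learning-rate-like factors
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time windows (ms)

def stdp(dt):
    """dt: pre-synaptic spike time minus post-synaptic spike time (ms)."""
    if dt < 0:
        return A_PLUS * math.exp(dt / TAU_PLUS)      # potentiation
    return -A_MINUS * math.exp(-dt / TAU_MINUS)      # depression

# Causal pairing (pre 5 ms before post) strengthens; the reverse weakens,
# and widely separated spikes barely change the weight.
assert stdp(-5) > 0 > stdp(5)
assert abs(stdp(-100)) < 0.001
```

The exponential windows are what make widely separated spike pairs effectively irrelevant, which is why a bounded time window suffices in the deferred, pre-synaptic-triggered implementation discussed above.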
For further information on the deferred event-driven model and the pre-sensitive scheme, please read Jin's article[8].

4.3 General applications It is interesting to note that the SpiNNaker is merely a specialized hardware architecture, and not a neural network in itself. Any and all actual implementations of a network will model its neurons completely in software. This means that even though the main application of the SpiNNaker network will be a spiking neural network, it is far from impossible to implement other network structures on it. One such example is an algorithm described by Jin[7]: a multi-layer back-propagation algorithm using perceptrons. The project team has also found that many other problems can be modelled using the SpiNNaker architecture, as long as they can be mapped to the system. Also, the principles of the SpiNNaker[2] (bounded asynchrony, virtual topology and energy frugality) can be applied to other problems, thus making them appropriate for the SpiNNaker system.

5. FUTURE WORK At the current stage of the SpiNNaker project, a prototype chip with two ARM9 cores has been developed and is in the testing phase. In addition to this, the full chip has been simulated using Field Programmable Gate Array (FPGA) technology. The project team plans to realize a 50-node network by year's end, and after this create bigger and bigger networks, with 500 and 5000 nodes and finally the full-scale system. The main focus of the project so far has been developing the hardware architecture of the SpiNNaker. This has brought forth innovations in many different areas, from routing to chip design. However, the team has yet to focus on further developing the software of the system. In an interview with the magazine New Electronics[15], professor Stephen Furber, the leader of the project, commented: I don't think we have contributed anything to neural networks at this stage. However, I'll be disappointed if, in a year from now, that is still the case.
6. CONCLUSIONS The completion of the SpiNNaker project could well be the first step towards implementing hard AI, and create a platform for further studies both in AI and in how the human brain works. Once this architecture has been tested, over time it could reasonably be scaled up even further, finally modelling 100 billion neurons. The SpiNNaker approach to large scale neural networks has many benefits when compared to other methods. Dedicated neural chips may be optimized for a specific scalable neural network, but once the design is fixed, it is hard to change the neuron model or the network architecture. FPGA and General Purpose Graphics Processing Unit (GPGPU) solutions are flexible, since the model can easily be changed in software. However, they are both limited in scale unless a network similar to the SpiNNaker is constructed. Several applications for the SpiNNaker are already planned, among them a continuation of a project at Manchester University. Psychologists have developed a neural network and taught it to read, with the intention of selectively damaging parts of the network and comparing the symptoms with those of brain-damaged patients. Their current model is limited by its scale to a small vocabulary, which could be extended by mapping the problem to the completed SpiNNaker system. Another interesting field of application for the SpiNNaker is robotics. If an autonomous robot had its sensors and actuators connected to the SpiNNaker, many interesting tests could be performed: for example, how the system would adapt to a changing environment, and how the performance changes over time as the network learns new things.

Note that due to the physical size of the SpiNNaker network, in this case the robot would be a host connected to the SpiNNaker, for example through wireless communication. It should be noted that this report deals with a current research project in the middle of its life cycle, and that final results are yet to be presented.

7. REFERENCES

[1] O. Boumbarov, S. Sokolov, and G. Gluhchev. Combined face recognition using wavelet packets and radial basis function neural network. In CompSysTech 07: Proceedings of the 2007 International Conference on Computer Systems and Technologies, pages 1–7, New York, NY, USA. ACM.

[2] S. Furber and A. Brown. Biologically-inspired massively-parallel architectures - computing beyond a million processors. In Proc. 9th International Conference on the Application of Concurrency to System Design (ACSD 09).

[3] S. Haykin. Neural Networks: A Comprehensive Foundation. Prentice Hall PTR, Upper Saddle River, NJ, USA.

[4] E. M. Izhikevich. Simple model of spiking neurons. IEEE Transactions on Neural Networks, vol. 14, no. 6, pages 1569–1572, 2003.

[5] E. M. Izhikevich. Polychronization: computation with spikes. Neural Computation, 18, pages 245–282, 2006.

[6] X. Jin, M. Luján, L. A. Plana, S. Davies, S. Temple, and S. B. Furber. Modeling spiking neural networks on SpiNNaker. IEEE Computing in Science and Engineering, vol. 12, no. 5, pages 91–97, September/October 2010.

[7] X. Jin, M. Luján, L. A. Plana, A. D. Rast, S. R. Welbourne, and S. B. Furber. Efficient parallel implementation of multilayer backpropagation networks on SpiNNaker. In CF 10: Proceedings of the 7th ACM International Conference on Computing Frontiers, pages 89–90, New York, NY, USA. ACM.

[8] X. Jin, A. Rast, F. Galluppi, S. Davies, and S. Furber. Implementing spike-timing-dependent plasticity on SpiNNaker neuromorphic hardware. In WCCI 2010 IEEE World Congress on Computational Intelligence, July 2010.

[9] M. L. Minsky and S. A. Papert. Perceptrons. MIT Press, Cambridge.

[10] D. F. Leite, P. Costa, and F. Gomide. Evolving granular classification neural networks. In IJCNN 09: Proceedings of the 2009 International Joint Conference on Neural Networks, Piscataway, NJ, USA. IEEE Press.

[11] J. M. Nageswaran, N. Dutt, J. L. Krichmar, A. Nicolau, and A. Veidenbaum. Efficient simulation of large-scale spiking neural networks using CUDA graphics processors. In IJCNN 09: Proceedings of the 2009 International Joint Conference on Neural Networks, Piscataway, NJ, USA. IEEE Press.

[12] C. D. Paternina-Arboleda, J. R. Montoya-Torres, and A. Fábregas-Ariza. Simulation-optimization using a reinforcement learning approach. In WSC 08: Proceedings of the 40th Conference on Winter Simulation. Winter Simulation Conference.

[13] L. Plana, S. Furber, S. Temple, M. Khan, Y. Shi, J. Wu, and S. Yang. A GALS infrastructure for a massively parallel multiprocessor. IEEE Design and Test of Computers, 24(5), 2007.

[14] F. Rosenblatt. The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain. Cornell Aeronautical Laboratory.

[15] R. Rubenstein. Linking arms to make a brain. New Electronics, July 2010, pages 16–18.

[16] H. Zhang, J. Guan, and G. C. Sun. Artificial neural network-based image pattern recognition. In ACM-SE 30: Proceedings of the 30th Annual Southeast Regional Conference, New York, NY, USA. ACM.


Algorithm and Software for Simulation of Spiking Neural Networks on the Multi-Chip SpiNNaker System WCCI 2010 IEEE World Congress on Computational Intelligence July, 18-23, 2010 - CCIB, Barcelona, Spain IJCNN Algorithm and Software for Simulation of Spiking Neural Networks on the Multi-Chip SpiNNaker

More information

Neuromorphic Hardware. Adrita Arefin & Abdulaziz Alorifi

Neuromorphic Hardware. Adrita Arefin & Abdulaziz Alorifi Neuromorphic Hardware Adrita Arefin & Abdulaziz Alorifi Introduction Neuromorphic hardware uses the concept of VLSI systems consisting of electronic analog circuits to imitate neurobiological architecture

More information

Achieving Lightweight Multicast in Asynchronous Networks-on-Chip Using Local Speculation

Achieving Lightweight Multicast in Asynchronous Networks-on-Chip Using Local Speculation Achieving Lightweight Multicast in Asynchronous Networks-on-Chip Using Local Speculation Kshitij Bhardwaj Dept. of Computer Science Columbia University Steven M. Nowick 2016 ACM/IEEE Design Automation

More information

Despite an increasing amount

Despite an increasing amount Editors: Volodymyr Kindratenko, kindr@ncsa.uiuc.edu Pedro Trancoso, pedro@cs.ucy.ac.cy Modeling Spiking Neural Networks on SpiNNaker By Xin Jin, Mikel Luján, Luis A. Plana, Sergio Davies, Steve Temple,

More information

LOW LATENCY DATA DISTRIBUTION IN CAPITAL MARKETS: GETTING IT RIGHT

LOW LATENCY DATA DISTRIBUTION IN CAPITAL MARKETS: GETTING IT RIGHT LOW LATENCY DATA DISTRIBUTION IN CAPITAL MARKETS: GETTING IT RIGHT PATRICK KUSTER Head of Business Development, Enterprise Capabilities, Thomson Reuters +358 (40) 840 7788; patrick.kuster@thomsonreuters.com

More information

Brainchip OCTOBER

Brainchip OCTOBER Brainchip OCTOBER 2017 1 Agenda Neuromorphic computing background Akida Neuromorphic System-on-Chip (NSoC) Brainchip OCTOBER 2017 2 Neuromorphic Computing Background Brainchip OCTOBER 2017 3 A Brief History

More information

Character Recognition Using Convolutional Neural Networks

Character Recognition Using Convolutional Neural Networks Character Recognition Using Convolutional Neural Networks David Bouchain Seminar Statistical Learning Theory University of Ulm, Germany Institute for Neural Information Processing Winter 2006/2007 Abstract

More information

A ROUTER FOR MASSIVELY-PARALLEL NEURAL SIMULATION

A ROUTER FOR MASSIVELY-PARALLEL NEURAL SIMULATION A ROUTER FOR MASSIVELY-PARALLEL NEURAL SIMULATION A thesis submitted to the University of Manchester for the degree of Doctor of Philosophy in the Faculty of Engineering and Physical Sciences 2010 By Jian

More information

GPU-Based Simulation of Spiking Neural Networks with Real-Time Performance & High Accuracy

GPU-Based Simulation of Spiking Neural Networks with Real-Time Performance & High Accuracy GPU-Based Simulation of Spiking Neural Networks with Real-Time Performance & High Accuracy Dmitri Yudanov, Muhammad Shaaban, Roy Melton, Leon Reznik Department of Computer Engineering Rochester Institute

More information

IMPLEMENTATION OF FPGA-BASED ARTIFICIAL NEURAL NETWORK (ANN) FOR FULL ADDER. Research Scholar, IIT Kharagpur.

IMPLEMENTATION OF FPGA-BASED ARTIFICIAL NEURAL NETWORK (ANN) FOR FULL ADDER. Research Scholar, IIT Kharagpur. Journal of Analysis and Computation (JAC) (An International Peer Reviewed Journal), www.ijaconline.com, ISSN 0973-2861 Volume XI, Issue I, Jan- December 2018 IMPLEMENTATION OF FPGA-BASED ARTIFICIAL NEURAL

More information

Biologically-Inspired Massively-Parallel Architectures computing beyond a million processors

Biologically-Inspired Massively-Parallel Architectures computing beyond a million processors Biologically-Inspired Massively-Parallel Architectures computing beyond a million processors Steve Furber The University of Manchester, Oxford Road, Manchester M13 9PL UK steve.furber@manchester.ac.uk

More information

Neural Networks CMSC475/675

Neural Networks CMSC475/675 Introduction to Neural Networks CMSC475/675 Chapter 1 Introduction Why ANN Introduction Some tasks can be done easily (effortlessly) by humans but are hard by conventional paradigms on Von Neumann machine

More information

A Large-Scale Spiking Neural Network Accelerator for FPGA Systems

A Large-Scale Spiking Neural Network Accelerator for FPGA Systems A Large-Scale Spiking Neural Network Accelerator for FPGA Systems Kit Cheung 1, Simon R Schultz 2, Wayne Luk 1 1 Department of Computing, 2 Department of Bioengineering Imperial College London {k.cheung11,

More information

High Performance Interconnect and NoC Router Design

High Performance Interconnect and NoC Router Design High Performance Interconnect and NoC Router Design Brinda M M.E Student, Dept. of ECE (VLSI Design) K.Ramakrishnan College of Technology Samayapuram, Trichy 621 112 brinda18th@gmail.com Devipoonguzhali

More information

The Return of Innovation. David May. David May 1 Cambridge December 2005

The Return of Innovation. David May. David May 1 Cambridge December 2005 The Return of Innovation David May David May 1 Cambridge December 2005 Long term trends Computer performance/cost has followed an exponential path since the 1940s, doubling about every 18 months This has

More information

PARALLEL SIMULATION OF NEURAL NETWORKS ON SPINNAKER UNIVERSAL NEUROMORPHIC HARDWARE

PARALLEL SIMULATION OF NEURAL NETWORKS ON SPINNAKER UNIVERSAL NEUROMORPHIC HARDWARE PARALLEL SIMULATION OF NEURAL NETWORKS ON SPINNAKER UNIVERSAL NEUROMORPHIC HARDWARE A thesis submitted to the University of Manchester for the degree of Doctor of Philosophy in the Faculty of Engineering

More information

Network on Chip Architecture: An Overview

Network on Chip Architecture: An Overview Network on Chip Architecture: An Overview Md Shahriar Shamim & Naseef Mansoor 12/5/2014 1 Overview Introduction Multi core chip Challenges Network on Chip Architecture Regular Topology Irregular Topology

More information

CS 162 Operating Systems and Systems Programming Professor: Anthony D. Joseph Spring Lecture 20: Networks and Distributed Systems

CS 162 Operating Systems and Systems Programming Professor: Anthony D. Joseph Spring Lecture 20: Networks and Distributed Systems S 162 Operating Systems and Systems Programming Professor: Anthony D. Joseph Spring 2003 Lecture 20: Networks and Distributed Systems 20.0 Main Points Motivation for distributed vs. centralized systems

More information

Distributed Data Infrastructures, Fall 2017, Chapter 2. Jussi Kangasharju

Distributed Data Infrastructures, Fall 2017, Chapter 2. Jussi Kangasharju Distributed Data Infrastructures, Fall 2017, Chapter 2 Jussi Kangasharju Chapter Outline Warehouse-scale computing overview Workloads and software infrastructure Failures and repairs Note: Term Warehouse-scale

More information

A Hardware/Software Framework for Real-time Spiking Systems

A Hardware/Software Framework for Real-time Spiking Systems A Hardware/Software Framework for Real-time Spiking Systems Matthias Oster, Adrian M. Whatley, Shih-Chii Liu, and Rodney J. Douglas Institute of Neuroinformatics, Uni/ETH Zurich Winterthurerstr. 190, 8057

More information

CS 4510/9010 Applied Machine Learning. Neural Nets. Paula Matuszek Fall copyright Paula Matuszek 2016

CS 4510/9010 Applied Machine Learning. Neural Nets. Paula Matuszek Fall copyright Paula Matuszek 2016 CS 4510/9010 Applied Machine Learning 1 Neural Nets Paula Matuszek Fall 2016 Neural Nets, the very short version 2 A neural net consists of layers of nodes, or neurons, each of which has an activation

More information

CS455: Introduction to Distributed Systems [Spring 2018] Dept. Of Computer Science, Colorado State University

CS455: Introduction to Distributed Systems [Spring 2018] Dept. Of Computer Science, Colorado State University CS 455: INTRODUCTION TO DISTRIBUTED SYSTEMS [NETWORKING] Shrideep Pallickara Computer Science Colorado State University Frequently asked questions from the previous class survey Why not spawn processes

More information

Neural Nets. CSCI 5582, Fall 2007

Neural Nets. CSCI 5582, Fall 2007 Neural Nets CSCI 5582, Fall 2007 Assignments For this week: Chapter 20, section 5 Problem Set 3 is due a week from today Neural Networks: Some First Concepts Each neural element is loosely based on the

More information

Performance of a Switched Ethernet: A Case Study

Performance of a Switched Ethernet: A Case Study Performance of a Switched Ethernet: A Case Study M. Aboelaze A Elnaggar Dept. of Computer Science Dept of Electrical Engineering York University Sultan Qaboos University Toronto Ontario Alkhod 123 Canada

More information

This chapter provides the background knowledge about Multistage. multistage interconnection networks are explained. The need, objectives, research

This chapter provides the background knowledge about Multistage. multistage interconnection networks are explained. The need, objectives, research CHAPTER 1 Introduction This chapter provides the background knowledge about Multistage Interconnection Networks. Metrics used for measuring the performance of various multistage interconnection networks

More information

CS 162 Operating Systems and Systems Programming Professor: Anthony D. Joseph Spring Lecture 19: Networks and Distributed Systems

CS 162 Operating Systems and Systems Programming Professor: Anthony D. Joseph Spring Lecture 19: Networks and Distributed Systems S 162 Operating Systems and Systems Programming Professor: Anthony D. Joseph Spring 2004 Lecture 19: Networks and Distributed Systems 19.0 Main Points Motivation for distributed vs. centralized systems

More information

Managing Burstiness and Scalability in Event-Driven Models on the SpiNNaker Neuromimetic System

Managing Burstiness and Scalability in Event-Driven Models on the SpiNNaker Neuromimetic System International Journal of Parallel Programming manuscript No. (will be inserted by the editor) Managing Burstiness and Scalability in Event-Driven Models on the SpiNNaker Neuromimetic System Alexander D.

More information

Machine Learning 13. week

Machine Learning 13. week Machine Learning 13. week Deep Learning Convolutional Neural Network Recurrent Neural Network 1 Why Deep Learning is so Popular? 1. Increase in the amount of data Thanks to the Internet, huge amount of

More information

International Journal of Scientific & Engineering Research Volume 8, Issue 5, May ISSN

International Journal of Scientific & Engineering Research Volume 8, Issue 5, May ISSN International Journal of Scientific & Engineering Research Volume 8, Issue 5, May-2017 106 Self-organizing behavior of Wireless Ad Hoc Networks T. Raghu Trivedi, S. Giri Nath Abstract Self-organization

More information

Review on Methods of Selecting Number of Hidden Nodes in Artificial Neural Network

Review on Methods of Selecting Number of Hidden Nodes in Artificial Neural Network Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 3, Issue. 11, November 2014,

More information

Lecture #11: The Perceptron

Lecture #11: The Perceptron Lecture #11: The Perceptron Mat Kallada STAT2450 - Introduction to Data Mining Outline for Today Welcome back! Assignment 3 The Perceptron Learning Method Perceptron Learning Rule Assignment 3 Will be

More information

11/14/2010 Intelligent Systems and Soft Computing 1

11/14/2010 Intelligent Systems and Soft Computing 1 Lecture 7 Artificial neural networks: Supervised learning Introduction, or how the brain works The neuron as a simple computing element The perceptron Multilayer neural networks Accelerated learning in

More information

NoC Round Table / ESA Sep Asynchronous Three Dimensional Networks on. on Chip. Abbas Sheibanyrad

NoC Round Table / ESA Sep Asynchronous Three Dimensional Networks on. on Chip. Abbas Sheibanyrad NoC Round Table / ESA Sep. 2009 Asynchronous Three Dimensional Networks on on Chip Frédéric ric PétrotP Outline Three Dimensional Integration Clock Distribution and GALS Paradigm Contribution of the Third

More information

4. Networks. in parallel computers. Advances in Computer Architecture

4. Networks. in parallel computers. Advances in Computer Architecture 4. Networks in parallel computers Advances in Computer Architecture System architectures for parallel computers Control organization Single Instruction stream Multiple Data stream (SIMD) All processors

More information

! References: ! Computer eyesight gets a lot more accurate, NY Times. ! Stanford CS 231n. ! Christopher Olah s blog. ! Take ECS 174!

! References: ! Computer eyesight gets a lot more accurate, NY Times. ! Stanford CS 231n. ! Christopher Olah s blog. ! Take ECS 174! Exams ECS 189 WEB PROGRAMMING! If you are satisfied with your scores on the two midterms, you can skip the final! As soon as your Photobooth and midterm are graded, I can give you your course grade (so

More information

Event-driven computing

Event-driven computing Event-driven computing Andrew Brown Southampton adb@ecs.soton.ac.uk Simon Moore Cambridge simon.moore@cl.cam.ac.uk David Thomas Imperial College d.thomas1@imperial.ac.uk Andrey Mokhov Newcastle andrey.mokhov@newcastle.ac.uk

More information

A closer look at network structure:

A closer look at network structure: T1: Introduction 1.1 What is computer network? Examples of computer network The Internet Network structure: edge and core 1.2 Why computer networks 1.3 The way networks work 1.4 Performance metrics: Delay,

More information

Digital Communication Networks

Digital Communication Networks Digital Communication Networks MIT PROFESSIONAL INSTITUTE, 6.20s July 25-29, 2005 Professor Muriel Medard, MIT Professor, MIT Slide 1 Digital Communication Networks Introduction Slide 2 Course syllabus

More information

Computer Architecture

Computer Architecture Informatics 3 Computer Architecture Dr. Boris Grot and Dr. Vijay Nagarajan Institute for Computing Systems Architecture, School of Informatics University of Edinburgh General Information Instructors: Boris

More information

Neural-based TCP performance modelling

Neural-based TCP performance modelling Section 1 Network Systems Engineering Neural-based TCP performance modelling X.D.Xue and B.V.Ghita Network Research Group, University of Plymouth, Plymouth, United Kingdom e-mail: info@network-research-group.org

More information

CSMA based Medium Access Control for Wireless Sensor Network

CSMA based Medium Access Control for Wireless Sensor Network CSMA based Medium Access Control for Wireless Sensor Network H. Hoang, Halmstad University Abstract Wireless sensor networks bring many challenges on implementation of Medium Access Control protocols because

More information

Module 6: INPUT - OUTPUT (I/O)

Module 6: INPUT - OUTPUT (I/O) Module 6: INPUT - OUTPUT (I/O) Introduction Computers communicate with the outside world via I/O devices Input devices supply computers with data to operate on E.g: Keyboard, Mouse, Voice recognition hardware,

More information

Network Superhighway CSCD 330. Network Programming Winter Lecture 13 Network Layer. Reading: Chapter 4

Network Superhighway CSCD 330. Network Programming Winter Lecture 13 Network Layer. Reading: Chapter 4 CSCD 330 Network Superhighway Network Programming Winter 2015 Lecture 13 Network Layer Reading: Chapter 4 Some slides provided courtesy of J.F Kurose and K.W. Ross, All Rights Reserved, copyright 1996-2007

More information

Full file at

Full file at Guide to Networking Essentials, Fifth Edition 2-1 Chapter 2 Network Design Essentials At a Glance Instructor s Manual Table of Contents Overview Objectives s Quick Quizzes Class Discussion Topics Additional

More information

TM ALGORITHM TO IMPROVE PERFORMANCE OF OPTICAL BURST SWITCHING (OBS) NETWORKS

TM ALGORITHM TO IMPROVE PERFORMANCE OF OPTICAL BURST SWITCHING (OBS) NETWORKS INTERNATIONAL JOURNAL OF RESEARCH IN COMPUTER APPLICATIONS AND ROBOTICS ISSN 232-7345 TM ALGORITHM TO IMPROVE PERFORMANCE OF OPTICAL BURST SWITCHING (OBS) NETWORKS Reza Poorzare 1 Young Researchers Club,

More information

Real-Time Insights from the Source

Real-Time Insights from the Source LATENCY LATENCY LATENCY Real-Time Insights from the Source This white paper provides an overview of edge computing, and how edge analytics will impact and improve the trucking industry. What Is Edge Computing?

More information

Climate Precipitation Prediction by Neural Network

Climate Precipitation Prediction by Neural Network Journal of Mathematics and System Science 5 (205) 207-23 doi: 0.7265/259-529/205.05.005 D DAVID PUBLISHING Juliana Aparecida Anochi, Haroldo Fraga de Campos Velho 2. Applied Computing Graduate Program,

More information

Summary of MAC protocols

Summary of MAC protocols Summary of MAC protocols What do you do with a shared media? Channel Partitioning, by time, frequency or code Time Division, Code Division, Frequency Division Random partitioning (dynamic) ALOHA, S-ALOHA,

More information

CS 3640: Introduction to Networks and Their Applications

CS 3640: Introduction to Networks and Their Applications CS 3640: Introduction to Networks and Their Applications Fall 2018, Lecture 7: The Link Layer II Medium Access Control Protocols Instructor: Rishab Nithyanand Teaching Assistant: Md. Kowsar Hossain 1 You

More information

FPGA DESIGN OF A MULTICORE NEUROMORPHIC PROCESSING SYSTEM. Thesis. Submitted to. The School of Engineering of the UNIVERSITY OF DAYTON

FPGA DESIGN OF A MULTICORE NEUROMORPHIC PROCESSING SYSTEM. Thesis. Submitted to. The School of Engineering of the UNIVERSITY OF DAYTON FPGA DESIGN OF A MULTICORE NEUROMORPHIC PROCESSING SYSTEM Thesis Submitted to The School of Engineering of the UNIVERSITY OF DAYTON In Partial Fulfillment of the Requirements for The Degree of Master of

More information

Outline Marquette University

Outline Marquette University COEN-4710 Computer Hardware Lecture 1 Computer Abstractions and Technology (Ch.1) Cristinel Ababei Department of Electrical and Computer Engineering Credits: Slides adapted primarily from presentations

More information

Designing and debugging real-time distributed systems

Designing and debugging real-time distributed systems Designing and debugging real-time distributed systems By Geoff Revill, RTI This article identifies the issues of real-time distributed system development and discusses how development platforms and tools

More information

TECHNOLOGY BRIEF. Double Data Rate SDRAM: Fast Performance at an Economical Price EXECUTIVE SUMMARY C ONTENTS

TECHNOLOGY BRIEF. Double Data Rate SDRAM: Fast Performance at an Economical Price EXECUTIVE SUMMARY C ONTENTS TECHNOLOGY BRIEF June 2002 Compaq Computer Corporation Prepared by ISS Technology Communications C ONTENTS Executive Summary 1 Notice 2 Introduction 3 SDRAM Operation 3 How CAS Latency Affects System Performance

More information

Lecture 11: Packet forwarding

Lecture 11: Packet forwarding Lecture 11: Packet forwarding Anirudh Sivaraman 2017/10/23 This week we ll talk about the data plane. Recall that the routing layer broadly consists of two parts: (1) the control plane that computes routes

More information

The Memory Component

The Memory Component The Computer Memory Chapter 6 forms the first of a two chapter sequence on computer memory. Topics for this chapter include. 1. A functional description of primary computer memory, sometimes called by

More information

Local Area Network Overview

Local Area Network Overview Local Area Network Overview Chapter 15 CS420/520 Axel Krings Page 1 LAN Applications (1) Personal computer LANs Low cost Limited data rate Back end networks Interconnecting large systems (mainframes and

More information

3D Wafer Scale Integration: A Scaling Path to an Intelligent Machine

3D Wafer Scale Integration: A Scaling Path to an Intelligent Machine 3D Wafer Scale Integration: A Scaling Path to an Intelligent Machine Arvind Kumar, IBM Thomas J. Watson Research Center Zhe Wan, UCLA Elec. Eng. Dept. Winfried W. Wilcke, IBM Almaden Research Center Subramanian

More information

Segment 1A. Introduction to Microcomputer and Microprocessor

Segment 1A. Introduction to Microcomputer and Microprocessor Segment 1A Introduction to Microcomputer and Microprocessor 1.1 General Architecture of a Microcomputer System: The term microcomputer is generally synonymous with personal computer, or a computer that

More information

3/24/2014 BIT 325 PARALLEL PROCESSING ASSESSMENT. Lecture Notes:

3/24/2014 BIT 325 PARALLEL PROCESSING ASSESSMENT. Lecture Notes: BIT 325 PARALLEL PROCESSING ASSESSMENT CA 40% TESTS 30% PRESENTATIONS 10% EXAM 60% CLASS TIME TABLE SYLLUBUS & RECOMMENDED BOOKS Parallel processing Overview Clarification of parallel machines Some General

More information

Design and Performance Analysis of and Gate using Synaptic Inputs for Neural Network Application

Design and Performance Analysis of and Gate using Synaptic Inputs for Neural Network Application IJIRST International Journal for Innovative Research in Science & Technology Volume 1 Issue 12 May 2015 ISSN (online): 2349-6010 Design and Performance Analysis of and Gate using Synaptic Inputs for Neural

More information

FYS Data acquisition & control. Introduction. Spring 2018 Lecture #1. Reading: RWI (Real World Instrumentation) Chapter 1.

FYS Data acquisition & control. Introduction. Spring 2018 Lecture #1. Reading: RWI (Real World Instrumentation) Chapter 1. FYS3240-4240 Data acquisition & control Introduction Spring 2018 Lecture #1 Reading: RWI (Real World Instrumentation) Chapter 1. Bekkeng 14.01.2018 Topics Instrumentation: Data acquisition and control

More information

Networks-on-Chip Router: Configuration and Implementation

Networks-on-Chip Router: Configuration and Implementation Networks-on-Chip : Configuration and Implementation Wen-Chung Tsai, Kuo-Chih Chu * 2 1 Department of Information and Communication Engineering, Chaoyang University of Technology, Taichung 413, Taiwan,

More information

WHAT TYPE OF NEURAL NETWORK IS IDEAL FOR PREDICTIONS OF SOLAR FLARES?

WHAT TYPE OF NEURAL NETWORK IS IDEAL FOR PREDICTIONS OF SOLAR FLARES? WHAT TYPE OF NEURAL NETWORK IS IDEAL FOR PREDICTIONS OF SOLAR FLARES? Initially considered for this model was a feed forward neural network. Essentially, this means connections between units do not form

More information

Fog Computing. ICTN6875: Emerging Technology. Billy Short 7/20/2016

Fog Computing. ICTN6875: Emerging Technology. Billy Short 7/20/2016 Fog Computing ICTN6875: Emerging Technology Billy Short 7/20/2016 Abstract During my studies here at East Carolina University, I have studied and read about many different t types of emerging technologies.

More information

NoC Test-Chip Project: Working Document

NoC Test-Chip Project: Working Document NoC Test-Chip Project: Working Document Michele Petracca, Omar Ahmad, Young Jin Yoon, Frank Zovko, Luca Carloni and Kenneth Shepard I. INTRODUCTION This document describes the low-power high-performance

More information

Dynamic Deferred Acknowledgment Mechanism for Improving the Performance of TCP in Multi-Hop Wireless Networks

Dynamic Deferred Acknowledgment Mechanism for Improving the Performance of TCP in Multi-Hop Wireless Networks Dynamic Deferred Acknowledgment Mechanism for Improving the Performance of TCP in Multi-Hop Wireless Networks Dodda Sunitha Dr.A.Nagaraju Dr. G.Narsimha Assistant Professor of IT Dept. Central University

More information

Real-Time Interface Board for Closed-Loop Robotic Tasks on the SpiNNaker Neural Computing System

Real-Time Interface Board for Closed-Loop Robotic Tasks on the SpiNNaker Neural Computing System Real-Time Interface Board for Closed-Loop Robotic Tasks on the SpiNNaker Neural Computing System Christian Denk 1, Francisco Llobet-Blandino 1, Francesco Galluppi 2, Luis A. Plana 2, Steve Furber 2, and

More information

A Data Classification Algorithm of Internet of Things Based on Neural Network

A Data Classification Algorithm of Internet of Things Based on Neural Network A Data Classification Algorithm of Internet of Things Based on Neural Network https://doi.org/10.3991/ijoe.v13i09.7587 Zhenjun Li Hunan Radio and TV University, Hunan, China 278060389@qq.com Abstract To

More information

Image Compression: An Artificial Neural Network Approach

Image Compression: An Artificial Neural Network Approach Image Compression: An Artificial Neural Network Approach Anjana B 1, Mrs Shreeja R 2 1 Department of Computer Science and Engineering, Calicut University, Kuttippuram 2 Department of Computer Science and

More information

ECE 1160/2160 Embedded Systems Design. Midterm Review. Wei Gao. ECE 1160/2160 Embedded Systems Design

ECE 1160/2160 Embedded Systems Design. Midterm Review. Wei Gao. ECE 1160/2160 Embedded Systems Design ECE 1160/2160 Embedded Systems Design Midterm Review Wei Gao ECE 1160/2160 Embedded Systems Design 1 Midterm Exam When: next Monday (10/16) 4:30-5:45pm Where: Benedum G26 15% of your final grade What about:

More information

REAL-TIME ANALYSIS OF A MULTI-CLIENT MULTI-SERVER ARCHITECTURE FOR NETWORKED CONTROL SYSTEMS

REAL-TIME ANALYSIS OF A MULTI-CLIENT MULTI-SERVER ARCHITECTURE FOR NETWORKED CONTROL SYSTEMS REAL-TIME ANALYSIS OF A MULTI-CLIENT MULTI-SERVER ARCHITECTURE FOR NETWORKED CONTROL SYSTEMS Abhish K. and Rakesh V. S. Department of Electronics and Communication Engineering, Vidya Academy of Science

More information

1 Connectionless Routing

1 Connectionless Routing UCSD DEPARTMENT OF COMPUTER SCIENCE CS123a Computer Networking, IP Addressing and Neighbor Routing In these we quickly give an overview of IP addressing and Neighbor Routing. Routing consists of: IP addressing

More information

Interconnect Technology and Computational Speed

Interconnect Technology and Computational Speed Interconnect Technology and Computational Speed From Chapter 1 of B. Wilkinson et al., PARAL- LEL PROGRAMMING. Techniques and Applications Using Networked Workstations and Parallel Computers, augmented

More information

Processor: Faster and Faster

Processor: Faster and Faster Chapter 4 Processor: Faster and Faster Most of the computers, no matter how it looks, can be cut into five parts: Input/Output brings things in and, once done, sends out the result; a memory remembers

More information

Now we are going to speak about the CPU, the Central Processing Unit.

Now we are going to speak about the CPU, the Central Processing Unit. Now we are going to speak about the CPU, the Central Processing Unit. The central processing unit or CPU is the component that executes the instructions of the program that is stored in the computer s

More information

Model and Algorithms for the Density, Coverage and Connectivity Control Problem in Flat WSNs

Model and Algorithms for the Density, Coverage and Connectivity Control Problem in Flat WSNs Model and Algorithms for the Density, Coverage and Connectivity Control Problem in Flat WSNs Flávio V. C. Martins, cruzeiro@dcc.ufmg.br Frederico P. Quintão, fred@dcc.ufmg.br Fabíola G. Nakamura fgnaka@dcc.ufmg.br,fabiola@dcc.ufam.edu.br

More information

The other function of this talk is to introduce some terminology that will be used through the workshop.

The other function of this talk is to introduce some terminology that will be used through the workshop. This presentation is to provide a quick overview of the hardware and software of SpiNNaker. It introduces some key concepts of the topology of a SpiNNaker machine, the unique message passing and routing

More information

COMPUTATIONAL INTELLIGENCE

COMPUTATIONAL INTELLIGENCE COMPUTATIONAL INTELLIGENCE Fundamentals Adrian Horzyk Preface Before we can proceed to discuss specific complex methods we have to introduce basic concepts, principles, and models of computational intelligence

More information

Emerging computing paradigms: The case of neuromorphic platforms
