Real-Time Interface Board for Closed-Loop Robotic Tasks on the SpiNNaker Neural Computing System
Christian Denk 1, Francisco Llobet-Blandino 1, Francesco Galluppi 2, Luis A. Plana 2, Steve Furber 2, and Jörg Conradt 1

1 Fachgebiet Neurowissenschaftliche Systemtheorie, Fakultät für Elektro- und Informationstechnik, Technische Universität München, München, Germany {christian.denk,llobetblandino,conradt}@tum.de
2 Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester M13 9PL, UK {francesco.galluppi,plana,sfurber}@cs.man.ac.uk

Abstract. Various custom hardware solutions for the simulation of neural circuitry have recently been developed, each focusing on particular aspects such as low-power operation, high computation speed, or biologically detailed simulations. The SpiNNaker computing system has been developed to simulate large spiking neural circuits in real time on a network of parallel operating microcontrollers, interconnected by a high-speed asynchronous interface. A potential application area is autonomous mobile robotics, which would benefit tremendously from on-board simulations of networks of tens of thousands of spiking neurons in real time. Currently, the SpiNNaker hardware circuit boards provide a single Ethernet interface for booting, debugging, and input and output of data, which creates a severe bottleneck for sensory perception and motor control signals. This paper describes a small and flexible real-time I/O hardware interface that connects external devices such as robotic sensors and actuators directly to the fast asynchronous internal communication infrastructure of the SpiNNaker neural computing system. We evaluate performance in terms of packet throughput and present a simple application demonstration of a closed-loop mobile robot interpreting visual data to approach the most salient stimulus.
Keywords: massively-parallel simulation of spiking neurons, SpiNNaker, hardware interface board, mobile robotics.

V. Mladenov et al. (Eds.): ICANN 2013, LNCS 8131, pp. 467-474, 2013. Springer-Verlag Berlin Heidelberg 2013

1 Introduction and Related Work

In computational neuroscience research, various customized computing systems for the simulation of neuronal networks are under development, such as Facets/BrainScales/HBP [1], Neurogrid [2], dedicated aVLSI computing chips [3], or the SpiNNaker spiking network computing system [4]. All such hardware resembles brain-style information processing, which in contrast to traditional general-purpose computers offers various advantages, e.g. reduced power consumption or increased
processing speed for elaborate or complex neural models. Most larger systems are designed for neuronal number crunching (i.e. detailed neuronal modeling) and exist in a closed computer rack with well-controlled digital input and output channels. At the other end of the spectrum, there are small-scale test-case implementations of neuromorphic functionality. Only very few such neuromorphic computing systems have yet operated in real time on noisily perceived sensory data and produced reasonable motor outputs to interact with the surrounding environment. Engineers and robotics systems designers, however, can strongly benefit from flexible instantiations of real-time neural information processing algorithms. Many robotic research groups agree that the neuronal style of information processing is advantageous for real-time sensory processing and motor control, as this is what the brain, and especially the cortex, is largely devoted to doing. Current applications of neuromorphic hardware in closed-loop systems typically keep a desktop computer in the loop to acquire sensory data from a robot, translate it into neural form, and provide it to the neuromorphic hardware, and vice versa for motor output. Such a setup faces several inherent drawbacks: (a) large size and power consumption, which precludes application on most mobile robots; (b) processing delays that might break real-time control loops; (c) no autonomy, which limits the operating range, as a connection to a stationary computer is needed; and (d) wasted resources, as the computer often only translates data. In this paper we present a standalone interface solution for a direct connection of various types of external hardware to the SpiNNaker [4] neural computing system, and present an application example that is autonomously executed in real time on board a mobile robot.
The developed board is compact in size, supports high data transfer rates, requires little operating power (thereby allowing extended runtime on batteries), operates autonomously, offers various connection options for existing robots and sensors, and provides simple customizable extensions to additional sensors and actuators (or other robots).

2 An Autonomous Mobile Robot with a Neuronal Computing System

2.1 The SpiNNaker Neural Network Computing System

The SpiNNaker computing system [4] is designed for massively-parallel computation of spiking neural networks. Each SpiNNaker chip consists of 16+2 generic ARM968 cores, a shared 128 MByte SDRAM module, and an asynchronous high-speed communication interface with six bi-directional links. The chips execute arbitrary code on each of the 16 user-accessible cores, but the overall system design is optimized for simulations of large spiking neural networks (e.g. LIF or Izhikevich neurons). Current SpiNNaker systems offer 4 chips (64 cores) or 48 chips (768 cores), with each core simulating up to 1000 neurons in real time [5], thereby allowing networks of up to 64,000 or 768,000 spiking neurons, respectively. The fast asynchronous communication interface is designed to route neural action potentials from arbitrary neurons to a large number of other neurons. Various options for neural network implementations exist, from API library calls to interpreters of neural description languages such as PyNN [6] or Nengo [7].
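To make the per-core workload above concrete, a leaky integrate-and-fire (LIF) neuron update can be sketched as a discrete-time step executed once per millisecond. This is an illustrative Python sketch under assumed parameter values, not the actual SpiNNaker kernel; class and parameter names are hypothetical.

```python
import math

# Illustrative discrete-time LIF update, as a SpiNNaker core might run it
# each 1 ms timestep. All parameters (tau_m, thresholds, dt) are assumptions.
class LIFNeuron:
    def __init__(self, tau_m=20.0, v_rest=-65.0, v_thresh=-50.0,
                 v_reset=-65.0, dt=1.0):
        self.decay = math.exp(-dt / tau_m)   # membrane leak per timestep
        self.v_rest, self.v_thresh, self.v_reset = v_rest, v_thresh, v_reset
        self.v = v_rest

    def step(self, input_current):
        # Leak toward rest, then integrate this timestep's input.
        self.v = self.v_rest + (self.v - self.v_rest) * self.decay + input_current
        if self.v >= self.v_thresh:
            self.v = self.v_reset
            return True   # emit a spike packet on the router
        return False

n = LIFNeuron()
spikes = [n.step(2.0) for _ in range(50)]  # constant drive produces spikes
```

With a constant drive the membrane charges toward a steady-state value above threshold, so the neuron fires periodically; a core simulating 1000 such neurons simply runs this loop over an array of states.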
2.2 The Holonomic Mobile Robot Platform

The mobile robot used in this project (Figure 1, left) is an omni-directional platform of 26 cm diameter, with embedded low-level motor control and elementary sensors. An embedded microcontroller receives motion commands in x and y direction and rotation through a UART interface, and continuously adapts three motor signals to maintain the requested velocities. The robot's on-board sensors include wheel encoders, a 9-degrees-of-freedom (DOF) inertial measurement unit, and a simple bump-sensor ring that triggers binary contact switches upon contact with objects.

Fig. 1. Left: Autonomous mobile robot with on-board vision sensors, SpiNNaker hardware and interface board. Right: exemplary robot task: select the strongest out of multiple visual stimuli.

2.3 The Embedded Dynamic Vision Sensor

The dynamic vision sensor (DVS) [9] used as spiking sensory input in this project is an address-event silicon retina that responds to temporal contrast. Each output spike represents a quantized change of log intensity at a particular pixel since the last event from that pixel. All 128x128 pixels operate asynchronously and signal illumination changes within a few microseconds of occurrence. We developed an embedded DVS system (eDVS) [10] composed of a DVS chip connected to an ARM7 microcontroller that initializes the DVS and captures events. In this project the microcontroller streams all obtained events over a UART port into the SpiNNaker interface board (Section 3).

3 Design and Specifications of the Real-Time Interface Board

Distributed computing cores in SpiNNaker exchange data on an energy- and speed-efficient asynchronous interface, which unfortunately is tedious to connect to external hardware.
This section presents our developed interface board, which attaches to this interface to send and receive native SpiNNaker packets and acts as a customizable interpreter between external hardware and the SpiNNaker system.
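As a concrete example of such interpretation, consider decoding the eDVS address-event stream arriving over UART. The 2-byte packing below (a sync bit plus 7-bit y, then a polarity bit plus 7-bit x) is a hypothetical format chosen for illustration; the actual eDVS framing may differ.

```python
# Hypothetical eDVS UART framing: byte 0 = sync bit | 7-bit y,
# byte 1 = polarity bit | 7-bit x. The sync bit allows resynchronization
# if a byte is lost mid-stream.
def parse_edvs_stream(data):
    events, i = [], 0
    while i + 1 < len(data):
        b0, b1 = data[i], data[i + 1]
        if not (b0 & 0x80):          # resync: first byte must carry sync bit
            i += 1
            continue
        y = b0 & 0x7F
        polarity = (b1 >> 7) & 1
        x = b1 & 0x7F
        events.append((x, y, polarity))
        i += 2
    return events

stream = bytes([0x80 | 12, 0x80 | 64, 0x80 | 99, 0x00 | 3])
events = parse_edvs_stream(stream)
# → [(64, 12, 1), (3, 99, 0)]
```

On the board, each decoded (x, y, polarity) tuple would then be re-encoded as a SpiNNaker packet key before injection.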
3.1 The SpiNNaker Inter-Chip Communication Protocol and Interface

Messages in the SpiNNaker system (typically neuronal "spikes") are transferred as 40-bit packets, composed of an 8-bit header and a 32-bit routing key; packets can also carry an optional 32-bit payload [5]. Communication between chips happens on two unidirectional asynchronous interfaces composed of 7 data lines (plus acknowledge), which are optimized for low energy consumption and fast data transfer rates: static levels on the data lines are meaningless; only transitions of bits encode a value. Any double bit flip on those 7 lines encodes the next nibble of the 40/72-bit data word ("2-of-7 code"), which needs to be acknowledged by toggling the signal on the respective acknowledge line. This protocol is fast (up to 6M packets per second) and energy efficient, but is difficult to implement for existing sensors and/or mobile robots. Hence we developed a generic interface board that on one side follows the communication protocol enforced in SpiNNaker, and on the other side offers various generic options to connect external hardware.

Fig. 2. Left: Sketch of information flow between SpiNNaker and external hardware (e.g. eDVS, robot). Right: SpiNN-3 system (4-chip board) with attached interface board.

3.2 The Developed SpiNNaker Interface Board

The interface board receives and transmits SpiNNaker data packets in the 2-of-7 bit-toggling format on one side (Figure 2, red connectors) and offers a variety of flexible interfaces for sensors and/or actuators on the other side (Figure 2, purple connectors). A fast on-board microcontroller (STM32F407, 32-bit ARM Cortex-M4, 168 MHz; Figure 2, green) allows flexible customization of translation protocols between SpiNNaker packets and sensor or actuator signals, as described below.
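The 2-of-7 transition coding from Section 3.1 can be illustrated in a few lines: each 4-bit symbol is signalled by toggling exactly two of seven lines, and the receiver recovers the symbol by XOR-ing consecutive line states. The pair-to-nibble assignment below is hypothetical; the real SpiNNaker codebook differs, but the mechanism is the same.

```python
from itertools import combinations

# Illustrative 2-of-7 transition code: 21 possible line pairs, of which
# 16 carry one data nibble each (the assignment here is an assumption,
# not SpiNNaker's actual symbol table).
PAIRS = list(combinations(range(7), 2))
SYMBOL = {n: PAIRS[n] for n in range(16)}

def encode(state, nibble):
    """Toggle two of the seven lines to signal one nibble."""
    a, b = SYMBOL[nibble]
    return state ^ (1 << a) ^ (1 << b)

def decode(prev, curr):
    """Recover the nibble from which two lines flipped between states."""
    flipped = prev ^ curr
    lines = tuple(i for i in range(7) if flipped >> i & 1)
    assert len(lines) == 2, "not a valid 2-of-7 transition"
    return PAIRS.index(lines)

word = [0x3, 0xA, 0xF]          # three nibbles of a packet
state, prev, rx = 0, 0, []
for nib in word:
    state = encode(state, nib)  # transmitter toggles two lines
    rx.append(decode(prev, state))
    prev = state
# rx == word
```

Note that only transitions matter: the same nibble sent twice produces two different absolute line states, which is why static levels are meaningless on the wire.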
For efficiency we added a CPLD (Xilinx CoolRunner-II, XC2C64A; Figure 2, blue) in the communication path, which translates between 2-of-7 bit-toggling codes and 9-bit data-bus level signals for the microcontroller (8 data bits and 1 handshaking signal). All communication (SpiNNaker - CPLD - microcontroller) in both directions generates appropriate handshake signals to guarantee lossless transmission of data. The microcontroller consecutively retrieves all available data from SpiNNaker and connected peripherals and translates the data into the respective other format. After translation, the data are forwarded to the respective devices as soon as possible (i.e. as soon as the SpiNNaker and/or UART transmit ports are free).
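The retrieve-translate-forward cycle can be sketched as a simple dispatch between UART sensor bytes and 32-bit SpiNNaker routing keys. Key regions, device mapping, and the queue handling below are illustrative assumptions, not the board's actual firmware.

```python
# Hypothetical key regions for the two traffic directions.
EDVS_BASE_KEY = 0x00010000    # vision events into SpiNNaker (assumed layout)
MOTOR_KEY_MASK = 0x00020000   # motor-neuron spikes out to the robot

def uart_event_to_key(x):
    """Translate an eDVS column event into a SpiNNaker multicast key."""
    return EDVS_BASE_KEY | (x & 0x7F)

def key_to_motor_command(key):
    """Translate a motor-neuron spike key into a UART velocity byte."""
    if key & MOTOR_KEY_MASK:
        return bytes([key & 0xFF])
    return None

def main_loop_once(uart_rx, spinn_rx, uart_tx, spinn_tx):
    # One pass of the main loop: forward all pending data in each direction,
    # as described in Section 3.2.
    for x in uart_rx:
        spinn_tx.append(uart_event_to_key(x))
    for key in spinn_rx:
        cmd = key_to_motor_command(key)
        if cmd is not None:
            uart_tx.append(cmd)

spinn_tx, uart_tx = [], []
main_loop_once([5, 100], [MOTOR_KEY_MASK | 0x42], uart_tx, spinn_tx)
```

Extending the board for a new sensor then amounts to adding one translation function and its table entry, which is what makes the lookup-table structure described below easy to customize.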
The presented interface board is easy to extend for upcoming system demands, even for users inexperienced with electronic hardware and/or microcontroller programming: the main loop that continuously processes data is essentially a large lookup table, which makes it easy to include different sensors and actuators without the need to be aware of SpiNNaker low-level programming (such as 2-of-7 bit toggling) or routines to communicate over UART, SPI, or TWI. The developed interface board allows neural models running on SpiNNaker to receive sensory input signals and to control actuators. A performance evaluation in terms of the number of sent/received packets is shown in Figure 3, left.

4 Application: Winner-Takes-All Network on a Mobile Robot

We demonstrate the SpiNNaker robot platform equipped with a forward-pointing eDVS performing a simple closed-loop robotic task: various visual stimuli (lights with different flashing frequencies) are positioned at some distance from the robot. The system should identify and approach the most active stimulus (Figure 1, right). In our demonstration we implement a nonlinear robust Winner-Takes-All (WTA) [8] neuronal network (Figure 3, right) on the visual input to identify the most salient stimulus. All elements of this network are spiking neurons, which makes it well suited for the SpiNNaker platform. We demonstrate and evaluate the performance of our implementation in a closed-loop experiment.

Fig. 3. Left: Transfer rates for simultaneous transmission and reception of SpiNNaker packets (32 bit) in million packets per second. Right: Sketch of WTA network to process visual stimuli.

4.1 Winner-Takes-All Networks

The WTA network is a well-established computing principle [8] to sharpen spatially and/or temporally related diffuse input signals.
A WTA network can be described as a robust max operation: instead of identifying the maximum of several inputs at each time step, a WTA filter is a dynamical system that selects the maximum over a sliding window in time and possibly space, implementing hysteresis and thereby generating robust output [8]. Our implementation of the WTA network uses a layer of Integrate-and-Fire (IF) neurons to which all eDVS events of one input column are propagated (Figure 3,
right). These IF neurons compete for activation, but only the most active neuron will reach its firing threshold and is thereby identified as the network winner. Such firing of a winning neuron has two consequences: (a) inhibition of its competitors, and (b) resetting/initializing itself to a non-zero membrane potential (self-excitation). This recurrence generates the desired hysteresis, as discussed in [8]. The sketch of a sub-region of the full 128-node network in Figure 3, right, depicts event propagation and the WTA implementation: the top grid shows a part of the eDVS pixel map. Our selection problem is essentially a one-dimensional task, as all pixels within a column (all y positions at a given x) support the same driving direction. This grouping results in a one-dimensional visual input vector of size 128, represented by the activity levels in designated neurons. We apply a spatial low-pass distribution on the input signal (green Gaussian in Figure 3, right) that produces strong excitation at the center neuron and symmetrically decayed excitation at its neighbors.

Fig. 4. Top Left: Stimulus activation (black bars) and corresponding evoked visual events over time (red dots). Bottom Left: activation of WTA neurons. Right: time-magnified view.

4.2 WTA Implementation on SpiNNaker Hardware

The developed interface board (Section 3) translates and propagates all eDVS visual input events as native SpiNNaker packets, conveying the x-coordinate as the source address. These packets are injected at the lower-left chip on the SpiNNaker board, and distributed evenly among several other SpiNNaker chips and cores.
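The column-wise WTA dynamics described above (Gaussian input spreading, global inhibition, non-zero self-excitation) can be captured in a toy simulation. All parameter values here (kernel width, threshold, reset potential) are illustrative assumptions, not the paper's settings.

```python
import math

# Toy 128-neuron WTA: each visual event at column x excites nearby neurons
# through a Gaussian kernel; the first neuron to reach threshold wins,
# globally inhibits its competitors, and resets itself to a non-zero
# potential (self-excitation -> hysteresis). Parameters are assumptions.
N, SIGMA, THRESH, SELF_EXC = 128, 3.0, 10.0, 2.0
potentials = [0.0] * N

def inject_event(x):
    """Spread one event at column x; return the winner index if one fires."""
    for i in range(max(0, x - 9), min(N, x + 10)):
        potentials[i] += math.exp(-((i - x) ** 2) / (2 * SIGMA ** 2))
    winner = max(range(N), key=lambda i: potentials[i])
    if potentials[winner] >= THRESH:
        for i in range(N):
            potentials[i] = 0.0        # global inhibition of competitors
        potentials[winner] = SELF_EXC  # self-excitation biases the next round
        return winner
    return None

winners = []
for _ in range(30):                    # 30 events from a stimulus at x = 40
    w = inject_event(40)
    if w is not None:
        winners.append(w)
# → winners == [40, 40, 40]
```

Because the winner restarts with a head start of SELF_EXC, it fires again after fewer events than a cold competitor would need, which is exactly the hysteresis that keeps the output spatially stable under noisy input.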
For simplicity (as this is a proof-of-concept implementation), each core implements only a single IF neuron, whose potential is augmented by incoming events according to the Gaussian weighting function. Upon reaching the firing threshold, an output spike causes all other distributed neurons to decrease their respective potentials (see Figure 4, right). Various dedicated motor neurons detect WTA output spikes and compute temporally low-pass filtered, rate-encoded driving signals, which are sent through the SpiNNaker interface board to the robot.
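The motor stage's rate decoding can be sketched as an exponential low-pass filter that turns the WTA spike train into a smooth drive signal. The time constant and scaling below are illustrative assumptions, not the values used on the robot.

```python
import math

def lowpass_rate(spike_train, dt=0.001, tau=0.1):
    """Exponentially smoothed firing rate (Hz) of a binary spike train."""
    alpha = math.exp(-dt / tau)
    rate, out = 0.0, []
    for s in spike_train:
        # Each spike is an impulse of area 1; dividing by dt scales to Hz.
        rate = alpha * rate + (1 - alpha) * (s / dt)
        out.append(rate)
    return out

# Regular 100 Hz spiking: one spike every 10 ms for one second.
train = [1 if t % 10 == 0 else 0 for t in range(1000)]
rates = lowpass_rate(train)
# the smoothed estimate settles near the true 100 Hz firing rate
```

A drive command derived from such a smoothed rate changes gradually even though the underlying spikes are discrete, which is what keeps the robot's turning motion continuous.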
4.3 Evaluation of the Demonstration System

We demonstrate our implementation of the WTA network in two different scenarios: (a) stationary, with multiple different alternating stimuli, and (b) on an autonomous robot driving towards partially occluded stimuli (Figure 1, right).

Fig. 5. Closed-loop SpiNNaker-robot experiment: WTA alternately identifying the winner (blue) out of all stimuli (red dots); the robot continuously approaches (and centers) the respective winner.

For scenario (a), we provide three distinct LED stimuli (S1-S3), each flashing at a particular frequency (see Figure 4, left). We position all stimuli so that the two low-frequency stimuli are at roughly similar x coordinates (around x=38), whereas the most active stimulus is located elsewhere in the field of view (around x=110). The LED stimuli are turned on according to the timeline (black bars) in Figure 4. The upper graph shows input events (red dots) and WTA output spikes (blue dots). The lower graph displays WTA neuron integrator activation over time (darker parts indicate higher activation). Initially, with only S1 active, the WTA network identifies x=38 as the center of activation, and provides unique, spatially stable output firing despite background noise and a slightly broadened input signal distribution. After activation of S2 (with increased activity compared to S1) at t=10s, the WTA network transitions to this stimulus as winner, again identifying a spatially stable, unique winning location. Additionally activating S3, which has the lowest frequency of all but is spatially co-located with S1, yields a return to the first winning location: due to the Gaussian distribution of input signals to neurons, the two less active stimuli surpass the most active stimulus in sum.
The magnified view (Figure 4, right) shows the neuronal activation over time; note the spatially distributed increase of activation around stimuli, and the global inhibition and local self-excitation after a neuron fires. In demonstration scenario (b), an autonomous mobile robot is controlled by the WTA output (Figures 1 and 5). The WTA network focuses on the most active stimulus, which causes the robot to continuously turn towards and approach that stimulus. In the closed-loop system, the winning stimulus approaches the center of the vision sensor (pixel 64) as the robot turns (Figure 5; the inset shows the robot's trajectory in top-down view). We repeatedly occlude the stronger stimulus (see activation bars in Figure 5), which produces alternating robot motion towards the currently stronger stimulus, thereby demonstrating the switching behavior of the WTA network.
5 Results and Discussion

The SpiNNaker computing system provides a powerful and easy-to-learn neuronal computing infrastructure for computational modelers, which allows simulation of large-scale spiking neural systems in real time. However, scenarios with real-time input/output currently require a PC in the loop or custom-developed FPGA solutions for a particular piece of hardware, because the SpiNNaker internal communication bus is incompatible with existing sensors and/or robots. In this project we presented a solution to flexibly interface various external hardware (such as sensors and/or robots) to the SpiNNaker computing system. The developed interface is small, allows high data transfer rates (sufficient even for visual data), and is easily customizable for future additional sensors and actuators without requiring in-depth knowledge of the SpiNNaker communication protocol. We demonstrated the performance of the developed system in two example settings: (a) stationary sensors with variable stimuli and (b) an autonomous closed-loop robotic experiment. The presented application should be viewed as a proof of principle, not as an exhaustive evaluation of the board; in fact, the demonstration requires only a small subset of the implemented features (e.g. significantly higher data rates are possible; refer to Figure 3, left). We are currently using the interface board in ongoing research, such as stereo optic-flow processing and a neural model of grid cells for navigation.

References

1. Pfeil, T., Grübl, A., Jeltsch, S., Müller, E., Müller, P., Petrovici, M.A., Schmuker, M., Brüderle, D., Schemmel, J., Meier, K.: Six Networks on a Universal Neuromorphic Computing Substrate. Frontiers in Neuroscience 7 (2013)
2. Choudhary, S., Sloan, S., Fok, S., Neckar, A., Trautmann, E., Gao, P., Stewart, T., Eliasmith, C., Boahen, K.: Silicon Neurons That Compute. In: Villa, A.E.P., Duch, W., Érdi, P., Masulli, F., Palm, G. (eds.) ICANN 2012, Part I. LNCS, vol. 7552. Springer, Heidelberg (2012)
3. Badoni, D., Giulioni, M., Dante, V., Del Giudice, P.: An aVLSI Recurrent Network of Spiking Neurons with Reconfigurable and Plastic Synapses. In: IEEE International Symposium on Circuits and Systems (ISCAS) (2006)
4. Khan, M., Lester, D., Plana, L.A., Rast, A., Jin, X., Painkras, E., Furber, S.B.: SpiNNaker: Mapping Neural Networks onto a Massively-Parallel Chip Multiprocessor. In: IEEE International Joint Conference on Neural Networks (IJCNN). IEEE (2008)
5. Plana, L.A., Bainbridge, J., Furber, S., Salisbury, S., Shi, Y., Wu, J.: An On-Chip and Inter-Chip Communications Network for the SpiNNaker Massively-Parallel Neural Net Simulator. In: 2nd ACM/IEEE International Symposium on Networks-on-Chip (NoCS). IEEE (2008)
6. Galluppi, F., Davies, S., Rast, A., Sharp, T., Plana, L.A., Furber, S.: A Hierarchical Configuration System for a Massively Parallel Neural Hardware Platform. In: Proceedings of the 9th Conference on Computing Frontiers. ACM (2012)
7. Galluppi, F., Davies, S., Furber, S., Stewart, T., Eliasmith, C.: Real-Time On-Chip Implementation of Dynamical Systems with Spiking Neurons. In: IJCNN. IEEE (2012)
8. Oster, M., Douglas, R., Liu, S.-C.: Computation with Spikes in a Winner-Take-All Network. Neural Computation 21 (2009)
9. Lichtsteiner, P., Posch, C., Delbruck, T.: A 128x128 120 dB 15 µs Latency Asynchronous Temporal Contrast Vision Sensor. IEEE Journal of Solid-State Circuits 43 (2008)
10. Conradt, J., Berner, R., Cook, M., Delbruck, T.: An Embedded AER Dynamic Vision Sensor for Low-Latency Pole Balancing. In: IEEE Workshop on Embedded Computer Vision (ECV). IEEE (2009)
More informationAn actor-critic reinforcement learning controller for a 2-DOF ball-balancer
An actor-critic reinforcement learning controller for a 2-DOF ball-balancer Andreas Stückl Michael Meyer Sebastian Schelle Projektpraktikum: Computational Neuro Engineering 2 Empty side for praktikums
More informationChain of Small Robots
Chain of Small Robots PRACTICAL COURSE COMPUTATIONAL NEURO ENGINEERING submitted by Dominik Luber Johannes Biedermann NEUROSCIENTIFIC SYSTEM THEORY Technische Universität München Supervisor: Nicolai Waniek
More informationAchieving Lightweight Multicast in Asynchronous Networks-on-Chip Using Local Speculation
Achieving Lightweight Multicast in Asynchronous Networks-on-Chip Using Local Speculation Kshitij Bhardwaj Dept. of Computer Science Columbia University Steven M. Nowick 2016 ACM/IEEE Design Automation
More informationApproximate Fixed-Point Elementary Function Accelerator for the SpiNNaker-2 Neuromorphic Chip
Approximate Fixed-Point Elementary Function Accelerator for the SpiNNaker-2 Neuromorphic Chip Mantas Mikaitis, PhD student @ University of Manchester, UK mantas.mikaitis@manchester.ac.uk 25 th IEEE Symposium
More information3D Wafer Scale Integration: A Scaling Path to an Intelligent Machine
3D Wafer Scale Integration: A Scaling Path to an Intelligent Machine Arvind Kumar, IBM Thomas J. Watson Research Center Zhe Wan, UCLA Elec. Eng. Dept. Winfried W. Wilcke, IBM Almaden Research Center Subramanian
More informationParallel Evaluation of Hopfield Neural Networks
Parallel Evaluation of Hopfield Neural Networks Antoine Eiche, Daniel Chillet, Sebastien Pillement and Olivier Sentieys University of Rennes I / IRISA / INRIA 6 rue de Kerampont, BP 818 2232 LANNION,FRANCE
More informationDoes the Brain do Inverse Graphics?
Does the Brain do Inverse Graphics? Geoffrey Hinton, Alex Krizhevsky, Navdeep Jaitly, Tijmen Tieleman & Yichuan Tang Department of Computer Science University of Toronto How to learn many layers of features
More informationAgenda. ! Efficient Coding Hypothesis. ! Response Function and Optimal Stimulus Ensemble. ! Firing-Rate Code. ! Spike-Timing Code
1 Agenda! Efficient Coding Hypothesis! Response Function and Optimal Stimulus Ensemble! Firing-Rate Code! Spike-Timing Code! OSE vs Natural Stimuli! Conclusion 2 Efficient Coding Hypothesis! [Sensory systems]
More informationSpiNN 3 System Diagram
SpiNNaker AppNote SpiNN-3 DevBoard Page AppNote - SpiNN-3 Development Board SpiNNaker Group, School of Computer Science, University of Manchester Steve Temple - 4 Nov - Version. Introduction This document
More informationDVS and DAVIS Specifications
DVS and DAVIS Specifications Document version: 2018-06-08 Definition: EPS = Events per second Camera Specifications Current Models DVS128 (DISCONTINUED) edvs mini-edvs DAVIS240 DAVIS346 Picture Optics
More informationNeuroMem. A Neuromorphic Memory patented architecture. NeuroMem 1
NeuroMem A Neuromorphic Memory patented architecture NeuroMem 1 Unique simple architecture NM bus A chain of identical neurons, no supervisor 1 neuron = memory + logic gates Context Category ted during
More informationI 2 C and SPI Protocol Triggering and Decode for Infiniium 9000 Series Oscilloscopes
I 2 C and SPI Protocol Triggering and Decode for Infiniium 9000 Series Oscilloscopes Data sheet This application is available in the following license variations. Order N5391B for a user-installed license
More informationNeuromorphic Hardware. Adrita Arefin & Abdulaziz Alorifi
Neuromorphic Hardware Adrita Arefin & Abdulaziz Alorifi Introduction Neuromorphic hardware uses the concept of VLSI systems consisting of electronic analog circuits to imitate neurobiological architecture
More informationTowards a Dynamically Reconfigurable System-on-Chip Platform for Video Signal Processing
Towards a Dynamically Reconfigurable System-on-Chip Platform for Video Signal Processing Walter Stechele, Stephan Herrmann, Andreas Herkersdorf Technische Universität München 80290 München Germany Walter.Stechele@ei.tum.de
More informationSaliency Extraction for Gaze-Contingent Displays
In: Workshop on Organic Computing, P. Dadam, M. Reichert (eds.), Proceedings of the 34th GI-Jahrestagung, Vol. 2, 646 650, Ulm, September 2004. Saliency Extraction for Gaze-Contingent Displays Martin Böhme,
More informationDeep Learning. Deep Learning. Practical Application Automatically Adding Sounds To Silent Movies
http://blog.csdn.net/zouxy09/article/details/8775360 Automatic Colorization of Black and White Images Automatically Adding Sounds To Silent Movies Traditionally this was done by hand with human effort
More informationExploiting On-Chip Data Transfers for Improving Performance of Chip-Scale Multiprocessors
Exploiting On-Chip Data Transfers for Improving Performance of Chip-Scale Multiprocessors G. Chen 1, M. Kandemir 1, I. Kolcu 2, and A. Choudhary 3 1 Pennsylvania State University, PA 16802, USA 2 UMIST,
More informationDVS and DAVIS Specifications
DVS and DAVIS Specifications Document version: 2018-10-31 Definition: EPS = Events per second Camera Specifications Current Models DAVIS240 DAVIS346 Picture Optics CS-mount CS-mount Host Connection USB
More informationUnsupervised Learning
Unsupervised Learning Learning without a teacher No targets for the outputs Networks which discover patterns, correlations, etc. in the input data This is a self organisation Self organising networks An
More informationOnline Learning for Object Recognition with a Hierarchical Visual Cortex Model
Online Learning for Object Recognition with a Hierarchical Visual Cortex Model Stephan Kirstein, Heiko Wersing, and Edgar Körner Honda Research Institute Europe GmbH Carl Legien Str. 30 63073 Offenbach
More informationEdge Detection (with a sidelight introduction to linear, associative operators). Images
Images (we will, eventually, come back to imaging geometry. But, now that we know how images come from the world, we will examine operations on images). Edge Detection (with a sidelight introduction to
More informationDominant plane detection using optical flow and Independent Component Analysis
Dominant plane detection using optical flow and Independent Component Analysis Naoya OHNISHI 1 and Atsushi IMIYA 2 1 School of Science and Technology, Chiba University, Japan Yayoicho 1-33, Inage-ku, 263-8522,
More informationNeuro-Evolution of Spiking Neural Networks on SpiNNaker Neuromorphic Hardware
Neuro-Evolution of Spiking Neural Networks on SpiNNaker Neuromorphic Hardware Alexander Vandesompele *, Florian Walter, Florian Röhrbein Chair of Robotics and Embedded Systems, Department of Informatics,
More informationDEPARTMENT OF INFORMATICS. Event-Based Stereo Vision with Spiking Neural Networks
DEPARTMENT OF INFORMATICS TECHNISCHE UNIVERSITÄT MÜNCHEN Bachelor s Thesis in Informatics Event-Based Stereo Vision with Spiking Neural Networks Georgi Dikov DEPARTMENT OF INFORMATICS TECHNISCHE UNIVERSITÄT
More informationSCALABLE EVENT-DRIVEN MODELLING ARCHITECTURES FOR NEUROMIMETIC HARDWARE
SCALABLE EVENT-DRIVEN MODELLING ARCHITECTURES FOR NEUROMIMETIC HARDWARE A thesis submitted to the University of Manchester for the degree of Doctor of Philosophy in the Faculty of Engineering and Physical
More informationNeuronal Architecture for Reactive and Adaptive Navigation of a Mobile Robot
Neuronal Architecture for Reactive and Adaptive Navigation of a Mobile Robot Francisco García-Córdova 1, Antonio Guerrero-González 1, and Fulgencio Marín-García 2 1 Department of System Engineering and
More informationSBD WARRIOR DATA SHEET
SBD WARRIOR DATA SHEET www.satelligent.ca v1.3 Features Controller for Iridium 9603 SBD transceiver 48 channel SiRFstarIV chipset based GPS Serial interface for 3rd party equipment or PC control Wide supply
More informationDeveloping a Data Driven System for Computational Neuroscience
Developing a Data Driven System for Computational Neuroscience Ross Snider and Yongming Zhu Montana State University, Bozeman MT 59717, USA Abstract. A data driven system implies the need to integrate
More informationTopographic Mapping with fmri
Topographic Mapping with fmri Retinotopy in visual cortex Tonotopy in auditory cortex signal processing + neuroimaging = beauty! Topographic Mapping with fmri Retinotopy in visual cortex Tonotopy in auditory
More informationIEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 1
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 1 Feedforward Categorization on AER Motion Events Using Cortex-Like Features in a Spiking Neural Network Bo Zhao, Member, IEEE, Ruoxi Ding, Student
More informationA graph theoretical approach for a multistep mapping software for the FACETS project
A graph theoretical approach for a multistep mapping software for the FACETS project Karsten Wendt Technische Universität Dresden ETIT/IEE - HPSN D-01062 Dresden GERMANY wendt@iee.et.tu-dresden.de Matthias
More informationModern Robotics Inc. Sensor Documentation
Sensor Documentation Version 1.0.1 September 9, 2016 Contents 1. Document Control... 3 2. Introduction... 4 3. Three-Wire Analog & Digital Sensors... 5 3.1. Program Control Button (45-2002)... 6 3.2. Optical
More informationEmbedded Systems: Hardware Components (part II) Todor Stefanov
Embedded Systems: Hardware Components (part II) Todor Stefanov Leiden Embedded Research Center, Leiden Institute of Advanced Computer Science Leiden University, The Netherlands Outline Generic Embedded
More informationServosila Robotic Heads
Servosila Robotic Heads www.servosila.com TABLE OF CONTENTS SERVOSILA ROBOTIC HEADS 2 SOFTWARE-DEFINED FUNCTIONS OF THE ROBOTIC HEADS 2 SPECIFICATIONS: ROBOTIC HEADS 4 DIMENSIONS OF ROBOTIC HEAD 5 DIMENSIONS
More information3D Visualization of Sound Fields Perceived by an Acoustic Camera
BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 15, No 7 Special Issue on Information Fusion Sofia 215 Print ISSN: 1311-972; Online ISSN: 1314-481 DOI: 1515/cait-215-88 3D
More informationBiologically-Inspired Massively-Parallel Architectures computing beyond a million processors
Biologically-Inspired Massively-Parallel Architectures computing beyond a million processors Steve Furber The University of Manchester, Oxford Road, Manchester M13 9PL UK steve.furber@manchester.ac.uk
More informationProject Number: Project Title: Human Brain Project. Neuromorphic Platform Specification public version
Project Number: 604102 Project Title: Human Brain Project Document Title: Neuromorphic Platform Specification public version Document Filename: HBP SP9 D9.7.1 NeuromorphicPlatformSpec e199ac5 from 26 March
More informationA new Computer Vision Processor Chip Design for automotive ADAS CNN applications in 22nm FDSOI based on Cadence VP6 Technology
Dr.-Ing Jens Benndorf (DCT) Gregor Schewior (DCT) A new Computer Vision Processor Chip Design for automotive ADAS CNN applications in 22nm FDSOI based on Cadence VP6 Technology Tensilica Day 2017 16th
More informationOperation of machine vision system
ROBOT VISION Introduction The process of extracting, characterizing and interpreting information from images. Potential application in many industrial operation. Selection from a bin or conveyer, parts
More informationCOS Lecture 10 Autonomous Robot Navigation
COS 495 - Lecture 10 Autonomous Robot Navigation Instructor: Chris Clark Semester: Fall 2011 1 Figures courtesy of Siegwart & Nourbakhsh Control Structure Prior Knowledge Operator Commands Localization
More informationReal time Spaun on SpiNNaker
Real time Spaun on SpiNNaker Functional brain simulation on a massively-parallel computer architecture A THESIS SUBMITTED TO THE UNIVERSITY OF MANCHESTER FOR THE DEGREE OF DOCTOR OF PHILOSOPHY IN THE FACULTY
More informationEECS150 - Digital Design Lecture 14 FIFO 2 and SIFT. Recap and Outline
EECS150 - Digital Design Lecture 14 FIFO 2 and SIFT Oct. 15, 2013 Prof. Ronald Fearing Electrical Engineering and Computer Sciences University of California, Berkeley (slides courtesy of Prof. John Wawrzynek)
More informationRUN-TIME RECONFIGURABLE IMPLEMENTATION OF DSP ALGORITHMS USING DISTRIBUTED ARITHMETIC. Zoltan Baruch
RUN-TIME RECONFIGURABLE IMPLEMENTATION OF DSP ALGORITHMS USING DISTRIBUTED ARITHMETIC Zoltan Baruch Computer Science Department, Technical University of Cluj-Napoca, 26-28, Bariţiu St., 3400 Cluj-Napoca,
More informationA Novel Pseudo 4 Phase Dual Rail Asynchronous Protocol with Self Reset Logic & Multiple Reset
A Novel Pseudo 4 Phase Dual Rail Asynchronous Protocol with Self Reset Logic & Multiple Reset M.Santhi, Arun Kumar S, G S Praveen Kalish, Siddharth Sarangan, G Lakshminarayanan Dept of ECE, National Institute
More informationNetwork-on-chip (NOC) Topologies
Network-on-chip (NOC) Topologies 1 Network Topology Static arrangement of channels and nodes in an interconnection network The roads over which packets travel Topology chosen based on cost and performance
More informationRepresenting the World
Table of Contents Representing the World...1 Sensory Transducers...1 The Lateral Geniculate Nucleus (LGN)... 2 Areas V1 to V5 the Visual Cortex... 2 Computer Vision... 3 Intensity Images... 3 Image Focusing...
More informationRange Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation
Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical
More informationHigh Performance Interconnect and NoC Router Design
High Performance Interconnect and NoC Router Design Brinda M M.E Student, Dept. of ECE (VLSI Design) K.Ramakrishnan College of Technology Samayapuram, Trichy 621 112 brinda18th@gmail.com Devipoonguzhali
More informationUSBIO24 RL Digital I/O Module
Ether I/O 24 Digital I/O Module The Ether I/O 24 is an UDP/IP controlled digital Input/Output module. The module features three 8-bit ports with 5V level signal lines. Each of the 24 lines can be independently
More informationComputational Perception. Visual Coding 3
Computational Perception 15-485/785 February 21, 2008 Visual Coding 3 A gap in the theory? - - + - - from Hubel, 1995 2 Eye anatomy from Hubel, 1995 Photoreceptors: rods (night vision) and cones (day vision)
More informationEvent-based Computer Vision
Event-based Computer Vision Charles Clercq Italian Institute of Technology Institut des Systemes Intelligents et de robotique November 30, 2011 Pinhole camera Principle light rays from an object pass through
More informationAutonomous Navigation for Flying Robots
Computer Vision Group Prof. Daniel Cremers Autonomous Navigation for Flying Robots Lecture 7.2: Visual Odometry Jürgen Sturm Technische Universität München Cascaded Control Robot Trajectory 0.1 Hz Visual
More informationPlanning rearrangements in a blocks world with the Neurosolver
Planning rearrangements in a blocks world with the Neurosolver 1 Introduction ndrzej ieszczad andrzej@carleton.ca The Neurosolver was introduced in our earlier papers (ieszczad, [1], [2]) as a device that
More informationNicolai Petkov Intelligent Systems group Institute for Mathematics and Computing Science
V1-inspired orientation selective filters for image processing and computer vision Nicolai Petkov Intelligent Systems group Institute for Mathematics and Computing Science 2 Most of the images in this
More informationFPGA based Design of Low Power Reconfigurable Router for Network on Chip (NoC)
FPGA based Design of Low Power Reconfigurable Router for Network on Chip (NoC) D.Udhayasheela, pg student [Communication system],dept.ofece,,as-salam engineering and technology, N.MageshwariAssistant Professor
More informationEXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006,
School of Computer Science and Communication, KTH Danica Kragic EXAM SOLUTIONS Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, 14.00 19.00 Grade table 0-25 U 26-35 3 36-45
More informationHigh Performance Computing on SpiNNaker Neuromorphic Platform: a Case Study for Energy Efficient Image Processing
High Performance Computing on SpiNNaker Neuromorphic Platform: a Case Study for Energy Efficient Image Processing Indar Sugiarto, Gengting Liu, Simon Davidson, Luis A. Plana and Steve B. Furber School
More informationDirectional Tuning in Single Unit Activity from the Monkey Motor Cortex
Teaching Week Computational Neuroscience, Mind and Brain, 2011 Directional Tuning in Single Unit Activity from the Monkey Motor Cortex Martin Nawrot 1 and Alexa Riehle 2 Theoretical Neuroscience and Neuroinformatics,
More information