Artificial Intelligence Introduction: Handwriting Recognition
Kadir Eren Unal, Jakob Heyder


Contents:
1. Introduction
2. Problem description
3. Neural network approach
   a. Architecture
   b. Phases of the CNN
   c. Results
4. HTM approach
   a. Architecture
   b. Setup
   c. Results
5. Conclusion

1.) Introduction

In the field of Artificial Intelligence, researchers have made many advances that support the development of smart computers and devices. Image processing is one of them, and one of its biggest challenges is the identification of handwritten documents. This project uses neural network modeling and hierarchical temporal memory (HTM) to identify handwriting in optical images. The inputs are two datasets collected by the U.S. National Institute of Standards and Technology, containing labeled images of handwritten digits. The expected output is that the program reads the handwritten numbers and translates them into ASCII-encoded numbers. The intention of this document is to give an introduction to two significantly different approaches to the problem of handwritten digit recognition: the traditional neural network approach and a biologically inspired hierarchical temporal memory approach.

2.) Problem Description

In the given problem we try to interpret handwritten digits. The dataset is freely published and a common benchmark for handwriting recognition with artificial neural networks. It was created from the handwriting of American Census Bureau employees and contains labeled training images as well as test images. Our scope is limited to the Arabic digits 0-9, and the challenge in this project is to reach maximum accuracy.

3.) Neural network approach

One early type of artificial neuron, the perceptron, was developed in the 1950s; it takes several binary inputs and produces a single binary output. Networks of perceptrons can implement logical functions and thereby illustrate a simple model of human decision making. Sigmoid neurons are a refined version of perceptrons designed to make learning possible. To see how learning might work, suppose we make a small change in some weight (or bias) in the network: with sigmoid neurons, that small change causes only a small change in the output, and it is exactly this property that lets the network learn and thus helps us with handwriting recognition.
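To make the difference concrete, here is a minimal Python sketch of the two neuron types; the weights, bias and inputs are illustrative assumptions, not values from the project:

```python
import numpy as np

def perceptron(x, w, b):
    """Perceptron: weighted sum passed through a hard step function."""
    return 1 if np.dot(w, x) + b > 0 else 0

def sigmoid_neuron(x, w, b):
    """Sigmoid neuron: the same weighted sum, but a smooth activation."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

x = np.array([1.0, 0.0, 1.0])      # three binary inputs
w = np.array([0.6, 0.4, -0.3])     # example weights (assumed)
b = -0.2                           # example bias (assumed)

print(perceptron(x, w, b))         # hard 0/1 decision
print(sigmoid_neuron(x, w, b))     # smooth value in (0, 1)
```

A small change to `w` or `b` flips the perceptron's output abruptly or not at all, while the sigmoid neuron's output shifts gradually, which is what gradient-based learning relies on.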

a) Architecture of neural networks

[Figure (from Google): the layered architecture of the network]

In this architecture the leftmost layer is the input layer and its neurons are the input neurons. The rightmost layer is the output layer and contains the output neurons; in our project we have one output. Between these layers the network has hidden layers. They are not a mysterious part of the network; their neurons are simply neither inputs nor outputs. The network above has just a single hidden layer.

The algorithm is developed with the Python TensorFlow library, uses deep neural networks and convolutional neural networks, and contains code samples from the book "Neural Networks and Deep Learning", where detailed information on this part of the theory can be found. Mainly we use the components explained in the following:
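As a hedged sketch of such a network with a single hidden layer (the layer sizes are our assumption; the report's code followed the book's examples and an older TensorFlow API):

```python
import tensorflow as tf

# Input layer: 784 neurons (one per pixel of a 28x28 image),
# one hidden layer, and an output layer with 10 neurons (digits 0-9).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="sigmoid"),   # hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),    # output layer
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```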

MNIST Dataset: MNIST is a subset of the larger NIST database. It contains handwritten digits divided into training and test examples, and each MNIST image is a 28x28 array of pixel values. Both the training and testing label files begin with header fields, including the number of items in the file.

CNN: A convolutional neural network is a type of feed-forward artificial neural network whose connectivity between neurons is inspired by the animal visual cortex. The input data of a CNN is arranged in three dimensions: width, height and depth. A CNN is built from five layer types: the input layer, convolutional layers, rectified linear units (ReLU), pooling layers and fully connected layers.

[Figure: illustration of the CNN layer types]

b) Phases of the CNN

The CNN works in three phases. Phase 1 is the input phase: each MNIST image is taken as a 784-dimensional array of pixels and converted into a matrix of those pixels. Phase 2 builds the network architecture mentioned before, which consists of three parts: a convolution layer, the ReLU function and a pooling layer. The convolution layer slides 20 filters as 5x5 windows over the 28x28 matrices to extract local pixel features. The ReLU activation then supports backpropagation by reducing the vanishing-gradient problem and produces sparse activations. The pooling layer takes the ReLU outputs (3D tensors) and pools the previous pixels into a new matrix of smaller size. Finally, Phase 3 is the fully connected layer, which connects the outputs of the previous layers to the next ones.
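A hedged sketch of these three phases using the Keras API; the 20 filters and 5x5 window come from the text above, while the pooling size and the softmax output layer are our assumptions:

```python
import tensorflow as tf

# Phase 1: reshape each 784-dimensional MNIST vector into a 28x28x1 image.
# Phase 2: convolution (20 filters, 5x5 window), ReLU activation, pooling.
# Phase 3: a fully connected layer mapping to the 10 digit classes.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Reshape((28, 28, 1)),
    tf.keras.layers.Conv2D(20, (5, 5), activation="relu"),
    tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```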

[Schematic: the convolutional phases of the network]

The schematic shows the convolutional phases; the output of the model is a confusion matrix. We can add more layers, although adding more may affect accuracy. Because of the multiple layers, we call this a deep learning system.

c) Results and accuracy of the deep neural networks

Each network was trained for 10 epochs. We ran the CNN three times and saw that the more we train the CNN, the more it learns and the higher its accuracy becomes:

Four-layer deep neural network using TensorFlow: 96.60%
First run, two-layer convolutional neural network using TensorFlow: 97.10%
Second run, two-layer convolutional neural network using TensorFlow: 97.24%
Third run, two-layer convolutional neural network using TensorFlow: 97.29%
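A hedged sketch of how one such run could be reproduced with the CNN `model` compiled in the previous sketch; loading MNIST through tf.keras.datasets and the batch size are our assumptions, not the report's original loader:

```python
import tensorflow as tf

# Load MNIST and flatten each 28x28 image into a 784-dimensional
# vector scaled to [0, 1]; `model` is the compiled CNN from above.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# Train for 10 epochs, reporting the loss after each epoch.
model.fit(x_train, y_train, epochs=10, batch_size=100)

# Evaluate on the held-out test images.
loss, accuracy = model.evaluate(x_test, y_test)
print("Accuracy: %.2f%%" % (accuracy * 100))
```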

4.) HTM approach

a) Architecture

The hierarchical temporal memory (HTM) algorithm is a theory developed by Numenta to build biologically inspired intelligent systems. It is out of scope to discuss the whole theory here, but we will try to cover the parts that are relevant for the experiment. NuPIC v1.3 is Numenta's most recent Python implementation of the theory, and it is open source. Mainly we use the components explained in the following:

SDRs: Sparse distributed representations are essentially bit arrays which try to cover the essential features of an input representation. An important property is that they are sparse, meaning only about 2% of the bits are active at the same time. This gives very good properties for, e.g., detecting similarities using overlap scores even when there is a lot of noise. Further information can be found in the referenced paper, where the mathematical properties are discussed in detail.
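A toy sketch of the overlap idea; the SDR size and on-bit counts are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def random_sdr(size=2048, active=40):
    """A sparse bit array with `active` on-bits (~2% of 2048)."""
    sdr = np.zeros(size, dtype=np.uint8)
    sdr[rng.choice(size, active, replace=False)] = 1
    return sdr

a = random_sdr()
b = a.copy()
noise = rng.choice(np.flatnonzero(b), 10, replace=False)
b[noise] = 0                       # drop 10 of the 40 on-bits as noise

# Overlap score: number of on-bits the two SDRs share. Despite the
# noise, the overlap (30) is far above what two random SDRs would share.
print(int(np.dot(a, b)))
```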

Encoders: This component exists for different data types and tries to convert general data formats, e.g. timestamps, scalar values or strings, into their SDR representation. Certain properties need to be preserved when converting to an SDR, e.g. that similar semantic meanings have a large overlap (on-bits are the same or close to each other) and that the sparsity is about the same percentage across the different features of the data. The encoding does not need to be sparse, but the percentage of on-bits will represent the importance of a feature. Encoders should also be deterministic, meaning the same input always produces the same output.

Spatial pooler: The spatial pooler is an array of mini-columns. More specifically, for our experiments we usually use an array of about 2048 mini-columns, each consisting of a bit array of cells of its own. The feature vocabulary of the spatial pooler is as follows. Initially the mini-columns of the spatial pooler have potential connections to a randomly assigned pool of bits of the input array. [The fraction of initial potential connections, e.g. 85%, can be adjusted.]

[Figure: the bits marked yellow are potential connections of the selected column in the spatial pooler (85% of the input space).]

Each of these potential connections has a permanence value between 0 and 1 assigned to it. These values are distributed around the connection threshold.

[Figure: heatmap of the permanence values of potential connections. White marks input bits that are not potential connections; red marks potential connections whose permanence values do not exceed the permanence threshold; blue dots indicate actual connections.]

A connection threshold is set, and each permanence value exceeding it turns a potential connection into an actual connection.

Learning phase: Permanence values get incremented (reinforced) when the input cell of the connection is active and decremented when it is not. This happens only to the permanence values of active mini-columns; inactive columns are not changed. This constitutes the learning process of the system. [The increment and decrement values for synaptic permanence can be adjusted.]
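A toy sketch of this rule; the increment, decrement and threshold values are illustrative assumptions, not the experiment's settings:

```python
import numpy as np

SYN_PERM_ACTIVE_INC = 0.05    # assumed illustrative increment
SYN_PERM_INACTIVE_DEC = 0.02  # assumed illustrative decrement
SYN_PERM_CONNECTED = 0.2      # permanence threshold for an actual connection

def learn(permanences, input_bits, winning_columns):
    """Update permanences of winning columns toward the active input bits."""
    delta = np.where(input_bits == 1,
                     SYN_PERM_ACTIVE_INC, -SYN_PERM_INACTIVE_DEC)
    for col in winning_columns:          # only active (winning) columns learn
        permanences[col] = np.clip(permanences[col] + delta, 0.0, 1.0)
    return permanences

def connections(permanences):
    """Potential connections become actual once they cross the threshold."""
    return permanences >= SYN_PERM_CONNECTED
```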

8 The connection history of a column. By comparing to the connections in the previous timestamp we can see permanence values changed which led to new potential connections crossing the threshold (newly connected synapses) and old ones disconnecting. Resulting the overlap score of a column changes and its rank compared to others to be selected as winning column. Overlap score of a mini-column shows how many connections are active for the specific input representation. Overlap threshold sets the # of mini-columns active. After ranking them based on their overlap score the defined number will be active. This columns are also called Winning columns
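A sketch of the ranking step, including the boost multiplier described just below; the array shapes and the number of active columns are illustrative:

```python
import numpy as np

def winning_columns(connected, input_bits, boost, num_active=40):
    """Rank columns by boosted overlap score; the top `num_active` win."""
    # Overlap score: per column, the number of actual connections whose
    # input bit is currently on. `connected` is a (columns, inputs) bool
    # matrix, `boost` a per-column multiplier.
    overlap = connected.astype(int) @ input_bits
    return np.argsort(boost * overlap)[-num_active:]
```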

[Figure: winning columns are marked green. The overlap threshold is set to 30 columns, ranked by their overlap score with the input (formed connections).]

Boost factor: A multiplier is applied to the overlap score to slightly boost the weaker scores and possibly shrink the higher ones. This leads to a more distributed (higher-granularity) representation across the columns, so that all the different features are represented more accurately. It is needed because only active (winner) columns learn; the other columns are inhibited and often would not get a chance to represent their view of the input.

b) Setup

In the experiment we use the MNIST dataset of handwritten digits. It is already preprocessed: the digits are all the same size and nicely centered. We let the HTM system learn to recognize the digits and then use the other provided digit images to test the accuracy of its predictions. We only use one randomized spatial pooler over the whole image as the receptive field; this cuts off a lot of features of the HTM theory, such as temporal pooling, sequence memory, location-paired information and a local receptive field. Because of this, the achieved results may not be comparable in any sense and are nothing more than a first step in using the current NuPIC 1.3 system for such a task.

The main challenges were to set up a network with the right parameters and to correctly encode the data into the SDR format. Something similar was done with previous NuPIC versions, but those examples are all outdated and no longer compatible. We set up a network with the following parameters:

CPP SpatialPooler parameters:
Number of inputs = 1024 (32x32 images)
Number of columns = 4096
Number of active columns = 240 (~17%)
Potential pooler connections =

The rest were mostly left at their default values:
globalInhibition = 1
localAreaDensity = -1
stimulusThreshold = 0
synPermActiveInc = 0
synPermInactiveDec = 0
synPermConnected = 0.2
minPctOverlapDutyCycles =
minPctActiveDutyCycles =
dutyCyclePeriod = 1000
boostStrength = 1
wrapAround = 1
CPP SP seed =

These parameters could be changed, and we could do swarming to find better ones, but this would require time and computational power. Also, the parameters used here had already been tested in earlier experiments by the Numenta team with other NuPIC versions. The code essentially performs the following steps (a sketch of the pooler setup follows this list):
1. Set up a new network with the given parameters.
2. Add the three needed regions: a sensor input (image data); a spatial pooler, connected to the input data region; and a classifier, to interpret the results (e.g. overlap scores, predictions, anomalies).
3. Train the input to spatial pooler (SP) connections of the network.
4. Train the SP-classifier and input-classifier connections of the network.

After these steps the network is ready to be tested on the test data, with its predictions compared to the actual labels. This will be covered in the next part.
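As a hedged sketch of step 1 using NuPIC's plain Python SpatialPooler class rather than the network/region API the report used (parameter names follow NuPIC 1.x as we understand them; values the report leaves blank simply stay at NuPIC's defaults):

```python
import numpy as np
from nupic.algorithms.spatial_pooler import SpatialPooler

# Spatial pooler configured with the parameters listed above.
sp = SpatialPooler(
    inputDimensions=(1024,),            # 32x32 input images
    columnDimensions=(4096,),           # number of mini-columns
    numActiveColumnsPerInhArea=240,     # number of winning columns
    globalInhibition=True,
    localAreaDensity=-1.0,
    stimulusThreshold=0,
    synPermConnected=0.2,               # connection (permanence) threshold
    dutyCyclePeriod=1000,
    boostStrength=1.0,
    wrapAround=True,
)

# One training step: feed a binary input vector, collect active columns.
input_vector = np.random.randint(2, size=1024).astype(np.uint32)
active_columns = np.zeros(4096, dtype=np.uint32)
sp.compute(input_vector, True, active_columns)  # learn=True while training
```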

c) Results

After training the network on the training images, we test it on a different set of handwritten digit images. The resulting accuracy is 95.35%.

5.) Conclusion

Both approaches reached a high accuracy and could be improved significantly further with fine-tuning. The current state-of-the-art algorithms reach 99.9% accuracy and perform better than humans at recognizing digits. However, it gets really interesting when we try to interpret whole alphabets or 2D structures (drawings etc.) and to recognize objects. This often needs to be done in the context of the data, in the same way humans can often infer information about a word by having it in the context of the sentence structure. It is therefore essential to learn sequences and temporal structure, in order to interpret the data in a meaningful context. Nevertheless, we can see that traditional neural networks reach state-of-the-art performance on specific tasks. Further experiments have to be conducted to allow meaningful comparisons in this field. The intention of this document was to give an introduction to both approaches.
