Differentiable Data Structures (and POMDPs)
Yarin Gal & Rowan McAllister
February 11, 2016
Many thanks to Edward Grefenstette for graphics material; other sources include Wikimedia, licensed under CC BY-SA 3.0.
Motivation
Data structures are abstract data types that lie at the core of Computer Science, e.g. stacks, queues, heaps, binary trees, DAGs, etc., used in sorting algorithms, detecting cycles in graphs, and many more. We'd like to teach computers to use data structures in solving tasks. For many tasks a data structure is sensible, and allows for flexible models for such tasks.
Motivation
Many are working on these ideas at DeepMind and Facebook (neural Turing machines, memory networks, etc.). Featured on the Future of Life Institute's Top A.I. Breakthroughs of 2015.
Outline
- Motivation
- Data Structures Recap
- Differentiable Data Structures
- History
- Applications in Language Processing
- Future?
Data structures recap: Stack
A simple stack example (r is the top-of-stack "peek"). We push elements to the top of the stack; we pop (u elements) from the top of the stack.
Example 1: Push v1, Push v2, Push v3.
Example 2: Push v1, Push v2, Pop (u = 1).
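The discrete stack operations above can be sketched with a plain Python list (the values "v1", "v2" are just placeholders):

```python
stack = []              # empty stack
stack.append("v1")      # Push v1
stack.append("v2")      # Push v2
r = stack[-1]           # r: peek at the top of the stack -> "v2"
stack.pop()             # Pop (u = 1): removes "v2"
r = stack[-1]           # the peek now returns "v1"
```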
Data structures recap: Queue
A simple queue example (r is the bottom-of-queue "peek"). We enqueue elements at the top of the queue; we dequeue (u elements) from the bottom of the queue.
Example 1: Enqueue v1, Enqueue v2, Enqueue v3.
Example 2: Enqueue v1, Enqueue v2, Dequeue (u = 1).
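Similarly, the discrete queue can be sketched with Python's `collections.deque` (again with placeholder values):

```python
from collections import deque

queue = deque()         # empty queue
queue.append("v1")      # Enqueue v1 at the top (back)
queue.append("v2")      # Enqueue v2
r = queue[0]            # r: peek at the bottom (front) -> "v1"
queue.popleft()         # Dequeue (u = 1): removes "v1"
r = queue[0]            # the peek now returns "v2"
```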
Countless approaches
In the past 2 years:
- Neural Turing Machines (Graves et al., arXiv, 2014)
- Memory Networks (Weston et al., arXiv, 2014)
- End-To-End Memory Networks (Sukhbaatar et al., NIPS, 2015)
- Weakly Supervised Memory Networks (Sukhbaatar et al., 2015)
- Learning to Transduce with Unbounded Memory (Grefenstette et al., NIPS, 2015)
- Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets (Joulin and Mikolov, NIPS, 2015)
- Transition-Based Dependency Parsing with Stack Long Short-Term Memory (Dyer et al., ACL, 2015)
- Neural Programmer-Interpreters (Reed and de Freitas, ICLR, 2016)
- Neural Random-Access Machines (Kurach et al., ICLR, 2016)
- Neural GPUs Learn Algorithms (Kaiser and Sutskever, ICLR, 2016)
Continuous stack
Take the previous stack example, and let's make our stack continuous [1]. Let's push half a v2 (d = 0.5)... what does that mean? Define the stack peek to be a mixture of the top 1.0 elements.
Example: Push v1, Push v2, Pop v2, Push half a v2.
[1] Learning to Transduce with Unbounded Memory, Grefenstette et al., NIPS, 2015.
Continuous stack
Define stack pop (with weight u) to remove the top u elements (which can be a fraction!). Example: Push v1 (d = 0.8), Pop (u = 0.1), Push v2 (d = 0.5), Pop (u = 0.9), Push v3 (d = 0.9).
Continuous stack
And in equations (as given in Grefenstette et al., 2015), with strengths s_t, values V_t, pop weight u_t, push weight d_t and pushed value v_t:
s_t[i] = max(0, s_{t-1}[i] - max(0, u_t - Σ_{j=i+1}^{t-1} s_{t-1}[j])) for i < t, and s_t[t] = d_t
V_t[i] = V_{t-1}[i] for i < t, and V_t[t] = v_t
r_t = Σ_{i=1}^{t} min(s_t[i], max(0, 1 - Σ_{j=i+1}^{t} s_t[j])) · V_t[i]
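The continuous stack update can be sketched in a few lines of Python. This is a minimal scalar-valued sketch of the stack of Grefenstette et al. (2015); the paper pushes vectors, but plain floats are used here to keep the example self-contained:

```python
def stack_step(V, s, v, d, u):
    """One time step of the continuous stack: pop with strength u,
    push value v with strength d, then read the top 1.0 of strength."""
    # Pop: remove up to u units of strength, starting from the top.
    s = [max(0.0, s_i - max(0.0, u - sum(s[i + 1:])))
         for i, s_i in enumerate(s)]
    # Push: append v with strength d.
    V, s = V + [v], s + [d]
    # Read: a mixture of the top 1.0 units of strength.
    r = sum(min(s[i], max(0.0, 1.0 - sum(s[i + 1:]))) * V[i]
            for i in range(len(V)))
    return V, s, r

# The running example: Push v1 (d=0.8), then Pop (u=0.1) + Push v2 (d=0.5),
# then Pop (u=0.9) + Push v3 (d=0.9), here with v1=1, v2=2, v3=4.
V, s, r = stack_step([], [], 1.0, 0.8, 0.0)   # r = 0.8*v1
V, s, r = stack_step(V, s, 2.0, 0.5, 0.1)     # r = 0.5*v2 + 0.5*v1
V, s, r = stack_step(V, s, 4.0, 0.9, 0.9)     # r = 0.9*v3 + 0.1*v1
```

Note that the final read mixes v3 (strength 0.9) with a residue of v1 (strength 0.1): the fractional pops never fully erased v1.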
Continuous queue
Similarly, take the previous queue example and make our queue continuous. Define enqueue (with weight d) to add an element at the queue top, and dequeue (with weight u) to remove the bottom u elements. Example (and exercise 1): Enqueue v1 (d = 0.8), Dequeue (u = 0.1), Enqueue v2 (d = 0.5), Dequeue (u = 0.8), Enqueue v3 (d = 0.9).
Continuous queue (exercise)
Reminder: the stack's equations above. Exercise 2: what is the equivalent for a continuous queue?
Continuous queue (solution)
The queue's equations mirror the stack's, with the running strength sums taken from the bottom (front) instead of the top:
s_t[i] = max(0, s_{t-1}[i] - max(0, u_t - Σ_{j=1}^{i-1} s_{t-1}[j])) for i < t, and s_t[t] = d_t
r_t = Σ_{i=1}^{t} min(s_t[i], max(0, 1 - Σ_{j=1}^{i-1} s_t[j])) · V_t[i]
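In code, the continuous queue only differs from the continuous stack in which end the strength sums run from. A minimal scalar-valued sketch, under the same simplification as the stack sketch (plain floats instead of vectors):

```python
def queue_step(V, s, v, d, u):
    """One time step of the continuous queue: dequeue strength u from
    the front (bottom), enqueue v with strength d at the back (top),
    then read the front 1.0 of strength."""
    # Dequeue: remove up to u units of strength, starting from the front.
    s = [max(0.0, s_i - max(0.0, u - sum(s[:i])))
         for i, s_i in enumerate(s)]
    # Enqueue: append v with strength d at the back.
    V, s = V + [v], s + [d]
    # Read: a mixture of the front 1.0 units of strength.
    r = sum(min(s[i], max(0.0, 1.0 - sum(s[:i]))) * V[i]
            for i in range(len(V)))
    return V, s, r

# Exercise 1's sequence, here with v1=1, v2=2, v3=4:
V, s, r = queue_step([], [], 1.0, 0.8, 0.0)   # r = 0.8*v1
V, s, r = queue_step(V, s, 2.0, 0.5, 0.1)     # r = 0.7*v1 + 0.3*v2
V, s, r = queue_step(V, s, 4.0, 0.9, 0.8)     # r = 0.4*v2 + 0.6*v3
```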
Data structure as a recurrent unit
The equations can be seen as a single time step update of a recurrent stack / queue unit. The unit takes an input and a previous state, and emits an output and the next state.
[Figure: the neural stack unit maps previous values V_{t-1} and strengths s_{t-1}, together with the inputs push d_t, pop u_t and value v_t, to next values V_t, next strengths s_t, and the output r_t.]
Controller
Grefenstette et al. (2015) use an RNN to control the data structure.
[Figure: a single time step of a combined RNN unit and stack unit. The previous state is H_{t-1} = (V_{t-1}, s_{t-1}, h_{t-1}); the input i_t is joined with the previous read r_{t-1} and fed to the RNN, whose hidden state h_t produces the stack's controls d_t, u_t, v_t and the output o_t; the stack returns the read r_t and the next state (V_t, s_t).]
The hybrid unit's input splits into the RNN's input and the stack's input.
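The controls the RNN emits each step can be sketched as squashed linear functions of the hidden state h_t, following the general form used by Grefenstette et al. (2015); the exact weight shapes and the omission of bias terms here are simplifications of this sketch:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def controller_outputs(h, W_d, W_u, W_v, W_o):
    """Map the RNN hidden state h (a list of floats) to the stack's
    control signals: push strength d, pop strength u, pushed value v,
    and the network output o."""
    dot = lambda w, x: sum(wi * xi for wi, xi in zip(w, x))
    d = sigmoid(dot(W_d, h))                     # push strength in (0, 1)
    u = sigmoid(dot(W_u, h))                     # pop strength in (0, 1)
    v = [math.tanh(dot(row, h)) for row in W_v]  # value vector to push
    o = [math.tanh(dot(row, h)) for row in W_o]  # output vector
    return d, u, v, o
```

The sigmoids keep d_t and u_t in (0, 1), so "how much to push" and "how much to pop" stay valid fractional strengths and gradients flow through them.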
Insights
Some insights:
- The stack adds no additional parameters.
- Increased space complexity: a naive implementation is O(MT²), with M the dimension of v_i and T the number of time steps.
- Space complexity can be reduced to O(MT) by working in place (from personal communication).
Insights
The stack's gradients can vanish quickly unless initialised carefully.
Evaluation
The model was evaluated on:
- Sequence copying: ab → ab ✓, ab → a ✗
- Sequence reversal: ab → ba ✓, ab → a ✗
- Learning a grammar: svo → sov ✓, svo → ovs ✗
Most papers above give these anecdotal toy examples. Top A.I. Breakthroughs of 2015? Let's learn some history.
History
The idea goes back as far as 1989 (as far as I could trace):
- Higher Order Recurrent Networks and Grammatical Inference (Giles et al., NIPS, 1989). From the abstract: "A higher order single layer recursive network learns to simulate a deterministic finite state machine. When a [..] neural net state machine is connected through a common error term to an external analog stack memory, the combination can be interpreted as a neural net pushdown automata. [It is] given the primitives push and pop, and is able to read the top of the stack."
- Connectionist Pushdown Automata that Learn Context-free Grammars (Sun et al., IJCNN, 1990)
- Neural networks with external memory stack that learn context-free grammars from examples (Sun et al., CISP, 1990)
- Using Prior Knowledge in an NNPDA to Learn Context-Free Languages (Das et al., NIPS, 1992)
- The Neural Network Pushdown Automaton: Model, Stack and Learning Simulations (Sun et al., 1993)
These works mostly show (empirically) that networks can learn Finite State Automata.
History: NNPDA
A concrete example, the NNPDA (1992): [2]
[Figure: the NNPDA architecture. State neurons, input neurons, and read neurons are combined through higher-order weights to produce the next state and a stack action (push, pop, or no-op); an external stack stores alphabet symbols, and the top-of-stack is copied back into the read neurons.]
[2] Cited by Grefenstette et al. (2015).
History: NNPDA
Experimental evaluation on tasks:
- Balanced parenthesis grammar: (())() ✓, ()( ✗
- Learning a grammar (1ⁿ0ⁿ): ✓, 1110 ✗
- Sequence reversal: ab → ba ✓, ab → a ✗
History: NNPDA
The Top A.I. Breakthroughs of 1989?
[Figure: the NNPDA architecture side by side with the neural stack.]
Same ideas (although with different motivation), similar structure, and even the same evaluations.
History: NNPDA
But the NNPDA had limitations:
- Had to approximate derivatives through the stack
- Vanishing gradients
- Only keeps input symbols in the stack
- Coupled with the RNN controller
- Was built in the 90s...
Modern research:
- The issues above were answered in Grefenstette et al. (2015)
- Uses advances from recent years (stochastic optimisation, data sub-sampling, adaptive learning rates)
- More computational resources
We can use these models from the 90s in real-world applications.
Transition Based Dependency Parsing
Dependency grammar: a syntactic structure where words are connected to each other by directed links. There are various representations of dependency grammars.
Transition-based dependency parsing: read words sequentially from a buffer, and combine them incrementally into syntactic structures. [Figure: example transitions.] This gives a projective tree in linear time.
Main challenge: what action should the parser make in each state?
Model
Dyer et al. (2015) use stack LSTMs. These follow a simpler formulation than Grefenstette et al. (2015): add a stack pointer to determine which LSTM cell to use in the next time step.
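That pointer mechanism can be sketched as follows. A toy scalar recurrence stands in for the LSTM cell, and the class and method names are this sketch's own:

```python
class StackRNN:
    """Stack LSTM idea from Dyer et al. (2015): keep every computed
    state; push computes a new state from the state under the stack
    pointer, pop just moves the pointer back (nothing is recomputed)."""

    def __init__(self, h0):
        self.states = [h0]   # h0 plays the role of the empty-stack state
        self.ptr = [0]       # stack of indices into self.states

    def top(self):
        return self.states[self.ptr[-1]]

    def push(self, x, cell):
        # The recurrence continues from the state under the pointer,
        # which need not be the last state computed.
        self.states.append(cell(self.top(), x))
        self.ptr.append(len(self.states) - 1)

    def pop(self):
        self.ptr.pop()

# Toy "cell": h' = 0.5*h + x
cell = lambda h, x: 0.5 * h + x
srnn = StackRNN(0.0)
srnn.push(1.0, cell)   # top = 1.0
srnn.push(2.0, cell)   # top = 0.5*1.0 + 2.0 = 2.5
srnn.pop()             # top back to 1.0; the 2.5 state is kept but unreachable
srnn.push(3.0, cell)   # top = 0.5*1.0 + 3.0 = 3.5
```

Because popping only moves the pointer, the summary at the stack top always reflects exactly the elements currently on the stack.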
Model
They use three stack LSTMs: one to represent the input, one to hold the partially constructed syntactic trees, and one to record the history of the parser's actions.
Results
Managed to improve on the best results to date (C&M, 2014).
Future?
Exciting applications are starting to emerge, going beyond toy examples. More recent work is starting to combine traditional data structures with reinforcement learning: Reinforcement Learning Neural Turing Machines (Zaremba and Sutskever, arXiv, 2015). Should start learning POMDPs.
More information12 Abstract Data Types
12 Abstract Data Types 12.1 Foundations of Computer Science Cengage Learning Objectives After studying this chapter, the student should be able to: Define the concept of an abstract data type (ADT). Define
More informationFrom Theorem 8.5, page 223, we have that the intersection of a context-free language with a regular language is context-free. Therefore, the language
CSCI 2400 Models of Computation, Section 3 Solutions to Practice Final Exam Here are solutions to the practice final exam. For some problems some details are missing for brevity. You should write complete
More informationContext Free Languages and Pushdown Automata
Context Free Languages and Pushdown Automata COMP2600 Formal Methods for Software Engineering Ranald Clouston Australian National University Semester 2, 2013 COMP 2600 Context Free Languages and Pushdown
More informationUNIT III & IV. Bottom up parsing
UNIT III & IV Bottom up parsing 5.0 Introduction Given a grammar and a sentence belonging to that grammar, if we have to show that the given sentence belongs to the given grammar, there are two methods.
More informationCSE450 Translation of Programming Languages. Lecture 4: Syntax Analysis
CSE450 Translation of Programming Languages Lecture 4: Syntax Analysis http://xkcd.com/859 Structure of a Today! Compiler Source Language Lexical Analyzer Syntax Analyzer Semantic Analyzer Int. Code Generator
More informationSequence Modeling: Recurrent and Recursive Nets. By Pyry Takala 14 Oct 2015
Sequence Modeling: Recurrent and Recursive Nets By Pyry Takala 14 Oct 2015 Agenda Why Recurrent neural networks? Anatomy and basic training of an RNN (10.2, 10.2.1) Properties of RNNs (10.2.2, 8.2.6) Using
More informationDefinition 2.8: A CFG is in Chomsky normal form if every rule. only appear on the left-hand side, we allow the rule S ǫ.
CS533 Class 02b: 1 c P. Heeman, 2017 CNF Pushdown Automata Definition Equivalence Overview CS533 Class 02b: 2 c P. Heeman, 2017 Chomsky Normal Form Definition 2.8: A CFG is in Chomsky normal form if every
More informationDEEP LEARNING REVIEW. Yann LeCun, Yoshua Bengio & Geoffrey Hinton Nature Presented by Divya Chitimalla
DEEP LEARNING REVIEW Yann LeCun, Yoshua Bengio & Geoffrey Hinton Nature 2015 -Presented by Divya Chitimalla What is deep learning Deep learning allows computational models that are composed of multiple
More informationStacks, Queues and Hierarchical Collections. 2501ICT Logan
Stacks, Queues and Hierarchical Collections 2501ICT Logan Contents Linked Data Structures Revisited Stacks Queues Trees Binary Trees Generic Trees Implementations 2 Queues and Stacks Queues and Stacks
More informationDeep Learning. Architecture Design for. Sargur N. Srihari
Architecture Design for Deep Learning Sargur N. srihari@cedar.buffalo.edu 1 Topics Overview 1. Example: Learning XOR 2. Gradient-Based Learning 3. Hidden Units 4. Architecture Design 5. Backpropagation
More informationR13. II B. Tech I Semester Supplementary Examinations, May/June DATA STRUCTURES (Com. to ECE, CSE, EIE, IT, ECC)
SET - 1 II B. Tech I Semester Supplementary Examinations, May/June - 2016 PART A 1. a) Write a procedure for the Tower of Hanoi problem? b) What you mean by enqueue and dequeue operations in a queue? c)
More informationLexical and Syntax Analysis. Bottom-Up Parsing
Lexical and Syntax Analysis Bottom-Up Parsing Parsing There are two ways to construct derivation of a grammar. Top-Down: begin with start symbol; repeatedly replace an instance of a production s LHS with
More informationPA3 Design Specification
PA3 Teaching Data Structure 1. System Description The Data Structure Web application is written in JavaScript and HTML5. It has been divided into 9 pages: Singly linked page, Stack page, Postfix expression
More informationQuestion Bank. 10CS63:Compiler Design
Question Bank 10CS63:Compiler Design 1.Determine whether the following regular expressions define the same language? (ab)* and a*b* 2.List the properties of an operator grammar 3. Is macro processing a
More informationStacks, Queues (cont d)
Stacks, Queues (cont d) CSE 2011 Winter 2007 February 1, 2007 1 The Adapter Pattern Using methods of one class to implement methods of another class Example: using List to implement Stack and Queue 2 1
More informationCode Mania Artificial Intelligence: a. Module - 1: Introduction to Artificial intelligence and Python:
Code Mania 2019 Artificial Intelligence: a. Module - 1: Introduction to Artificial intelligence and Python: 1. Introduction to Artificial Intelligence 2. Introduction to python programming and Environment
More informationFormal Languages and Compilers Lecture VI: Lexical Analysis
Formal Languages and Compilers Lecture VI: Lexical Analysis Free University of Bozen-Bolzano Faculty of Computer Science POS Building, Room: 2.03 artale@inf.unibz.it http://www.inf.unibz.it/ artale/ Formal
More informationLearning Explanatory Rules from Noisy Data Richard Evans, Ed Grefenstette
Learning Explanatory Rules from Noisy Data Richard Evans, Ed Grefenstette Overview Our system, ILP, learns logic programs from examples. ILP learns by back-propagation. It is robust to noisy and ambiguous
More informationNeural Programmer-Interpreters. Scott Reed and Nando de Freitas
Neural Programmer-Interpreters Scott Reed and Nando de Freitas Neural Programmer Interpreter (NPI) goals: 1. Long-term prediction: Model potentially long sequences of actions by exploiting compositional
More informationStacks, Queues and Hierarchical Collections
Programming III Stacks, Queues and Hierarchical Collections 2501ICT Nathan Contents Linked Data Structures Revisited Stacks Queues Trees Binary Trees Generic Trees Implementations 2 Copyright 2002- by
More informationCOP 3402 Systems Software Syntax Analysis (Parser)
COP 3402 Systems Software Syntax Analysis (Parser) Syntax Analysis 1 Outline 1. Definition of Parsing 2. Context Free Grammars 3. Ambiguous/Unambiguous Grammars Syntax Analysis 2 Lexical and Syntax Analysis
More informationIntroduction p. 1 Pseudocode p. 2 Algorithm Header p. 2 Purpose, Conditions, and Return p. 3 Statement Numbers p. 4 Variables p. 4 Algorithm Analysis
Introduction p. 1 Pseudocode p. 2 Algorithm Header p. 2 Purpose, Conditions, and Return p. 3 Statement Numbers p. 4 Variables p. 4 Algorithm Analysis p. 5 Statement Constructs p. 5 Pseudocode Example p.
More informationRecurrent Neural Networks
Recurrent Neural Networks Javier Béjar Deep Learning 2018/2019 Fall Master in Artificial Intelligence (FIB-UPC) Introduction Sequential data Many problems are described by sequences Time series Video/audio
More informationStacks and queues (chapters 6.6, 15.1, 15.5)
Stacks and queues (chapters 6.6, 15.1, 15.5) So far... Complexity analysis For recursive and iterative programs Sorting algorithms Insertion, selection, quick, merge, (intro, dual-pivot quick, natural
More informationTheory and Compiling COMP360
Theory and Compiling COMP360 It has been said that man is a rational animal. All my life I have been searching for evidence which could support this. Bertrand Russell Reading Read sections 2.1 3.2 in the
More informationCSE 105 THEORY OF COMPUTATION
CSE 105 THEORY OF COMPUTATION Spring 2018 http://cseweb.ucsd.edu/classes/sp18/cse105-ab/ Today's learning goals Sipser Section 2.2 Define push-down automata informally and formally Trace the computation
More informationTheory of Computation Prof. Raghunath Tewari Department of Computer Science and Engineering Indian Institute of Technology, Kanpur
Theory of Computation Prof. Raghunath Tewari Department of Computer Science and Engineering Indian Institute of Technology, Kanpur Lecture 01 Introduction to Finite Automata Welcome everybody. This is
More informationSpecifying Syntax COMP360
Specifying Syntax COMP360 The most important thing in the programming language is the name. A language will not succeed without a good name. I have recently invented a very good name and now I am looking
More informationCS 206 Introduction to Computer Science II
CS 206 Introduction to Computer Science II 03 / 31 / 2017 Instructor: Michael Eckmann Today s Topics Questions? Comments? finish RadixSort implementation some applications of stack Priority Queues Michael
More informationDependency Parsing 2 CMSC 723 / LING 723 / INST 725. Marine Carpuat. Fig credits: Joakim Nivre, Dan Jurafsky & James Martin
Dependency Parsing 2 CMSC 723 / LING 723 / INST 725 Marine Carpuat Fig credits: Joakim Nivre, Dan Jurafsky & James Martin Dependency Parsing Formalizing dependency trees Transition-based dependency parsing
More informationAsynchronous Parallel Learning for Neural Networks and Structured Models with Dense Features
Asynchronous Parallel Learning for Neural Networks and Structured Models with Dense Features Xu SUN ( 孙栩 ) Peking University xusun@pku.edu.cn Motivation Neural networks -> Good Performance CNN, RNN, LSTM
More informationBayesian model ensembling using meta-trained recurrent neural networks
Bayesian model ensembling using meta-trained recurrent neural networks Luca Ambrogioni l.ambrogioni@donders.ru.nl Umut Güçlü u.guclu@donders.ru.nl Yağmur Güçlütürk y.gucluturk@donders.ru.nl Julia Berezutskaya
More informationLL Parsing, LR Parsing, Complexity, and Automata
LL Parsing, LR Parsing, Complexity, and Automata R. Gregory Taylor Department of Mathematics and Computer Science Manhattan College Riverdale, New York 10471-4098 USA Abstract It
More informationRecurrent Neural Network and its Various Architecture Types
Recurrent Neural Network and its Various Architecture Types Trupti Katte Assistant Professor, Computer Engineering Department, Army Institute of Technology, Pune, Maharashtra, India Abstract----Recurrent
More informationData Structure. Recitation VII
Data Structure Recitation VII Recursion: Stack trace Queue Topic animation Trace Recursive factorial Executes factorial(4) Step 9: return 24 Step 8: return 6 factorial(4) Step 0: executes factorial(4)
More informationCIS 1.5 Course Objectives. a. Understand the concept of a program (i.e., a computer following a series of instructions)
By the end of this course, students should CIS 1.5 Course Objectives a. Understand the concept of a program (i.e., a computer following a series of instructions) b. Understand the concept of a variable
More informationScan and its Uses. 1 Scan. 1.1 Contraction CSE341T/CSE549T 09/17/2014. Lecture 8
CSE341T/CSE549T 09/17/2014 Lecture 8 Scan and its Uses 1 Scan Today, we start by learning a very useful primitive. First, lets start by thinking about what other primitives we have learned so far? The
More informationarxiv: v1 [cs.lg] 8 Feb 2018
IMPROVING THE UNIVERSALITY AND LEARNABIL- ITY OF NEURAL PROGRAMMER-INTERPRETERS WITH COMBINATOR ABSTRACTION Da Xiao 1,2, Jo-Yu Liao 2, Xingyuan Yuan 2 1 School of Cyberspace Security, Beijing University
More informationCSCE 314 Programming Languages
CSCE 314 Programming Languages Syntactic Analysis Dr. Hyunyoung Lee 1 What Is a Programming Language? Language = syntax + semantics The syntax of a language is concerned with the form of a program: how
More informationOn the Efficient Implementation of Pipelined Heaps for Network Processing. Hao Wang, Bill Lin University of California, San Diego
On the Efficient Implementation of Pipelined Heaps for Network Processing Hao Wang, Bill Lin University of California, San Diego Outline Introduction Pipelined Heap Structure Single-Cycle Operation Memory
More informationSlides credited from Dr. David Silver & Hung-Yi Lee
Slides credited from Dr. David Silver & Hung-Yi Lee Review Reinforcement Learning 2 Reinforcement Learning RL is a general purpose framework for decision making RL is for an agent with the capacity to
More informationFormal languages and computation models
Formal languages and computation models Guy Perrier Bibliography John E. Hopcroft, Rajeev Motwani, Jeffrey D. Ullman - Introduction to Automata Theory, Languages, and Computation - Addison Wesley, 2006.
More informationCS 4510/9010 Applied Machine Learning. Neural Nets. Paula Matuszek Fall copyright Paula Matuszek 2016
CS 4510/9010 Applied Machine Learning 1 Neural Nets Paula Matuszek Fall 2016 Neural Nets, the very short version 2 A neural net consists of layers of nodes, or neurons, each of which has an activation
More informationImageNet Classification with Deep Convolutional Neural Networks
ImageNet Classification with Deep Convolutional Neural Networks Alex Krizhevsky Ilya Sutskever Geoffrey Hinton University of Toronto Canada Paper with same name to appear in NIPS 2012 Main idea Architecture
More information11. a b c d e. 12. a b c d e. 13. a b c d e. 14. a b c d e. 15. a b c d e
CS-3160 Concepts of Programming Languages Spring 2015 EXAM #1 (Chapters 1-6) Name: SCORES MC: /75 PROB #1: /15 PROB #2: /10 TOTAL: /100 Multiple Choice Responses Each multiple choice question in the separate
More informationCS 44 Exam #2 February 14, 2001
CS 44 Exam #2 February 14, 2001 Name Time Started: Time Finished: Each question is equally weighted. You may omit two questions, but you must answer #8, and you can only omit one of #6 or #7. Circle the
More informationarxiv: v2 [cs.lg] 23 Feb 2016
Marcin Andrychowicz MARCINA@GOOGLE.COM Google DeepMind Karol Kurach KKURACH@GOOGLE.COM Google / University of Warsaw 1 equal contribution arxiv:1602.03218v2 [cs.lg] 23 Feb 2016 Abstract In this paper,
More informationPrinciples of Compiler Construction ( )
Principles of Compiler Construction ( ) Dr Mayer Goldberg September 5, 2016 Contents 1 Course Objectives 1 2 Course Requirements 2 3 Detailed Syllabus 3 4 References 6 Course number: 201-1-2061 Mandatory
More informationData Structures. Outline. Introduction Linked Lists Stacks Queues Trees Deitel & Associates, Inc. All rights reserved.
Data Structures Outline Introduction Linked Lists Stacks Queues Trees Introduction dynamic data structures - grow and shrink during execution Linked lists - insertions and removals made anywhere Stacks
More informationKEY. A 1. The action of a grammar when a derivation can be found for a sentence. Y 2. program written in a High Level Language
1 KEY CS 441G Fall 2018 Exam 1 Matching: match the best term from the following list to its definition by writing the LETTER of the term in the blank to the left of the definition. (1 point each) A Accepts
More informationDeepWalk: Online Learning of Social Representations
DeepWalk: Online Learning of Social Representations ACM SIG-KDD August 26, 2014, Rami Al-Rfou, Steven Skiena Stony Brook University Outline Introduction: Graphs as Features Language Modeling DeepWalk Evaluation:
More informationCS5371 Theory of Computation. Lecture 8: Automata Theory VI (PDA, PDA = CFG)
CS5371 Theory of Computation Lecture 8: Automata Theory VI (PDA, PDA = CFG) Objectives Introduce Pushdown Automaton (PDA) Show that PDA = CFG In terms of descriptive power Pushdown Automaton (PDA) Roughly
More informationCOMP 261 ALGORITHMS and DATA STRUCTURES
T E W H A R E W Ā N A N G A O T E Ū P O K O O T E I K A A M Ā U I VUW V I C T O R I A UNIVERSITY OF WELLINGTON Student ID:....................... EXAMINATIONS 2012 TRIMESTER 2 *** WITH SOLUTIONS *** COMP
More information(a) R=01[((10)*+111)*+0]*1 (b) ((01+10)*00)*. [8+8] 4. (a) Find the left most and right most derivations for the word abba in the grammar
Code No: R05310501 Set No. 1 III B.Tech I Semester Regular Examinations, November 2008 FORMAL LANGUAGES AND AUTOMATA THEORY (Computer Science & Engineering) Time: 3 hours Max Marks: 80 Answer any FIVE
More informationArtificial Neural Networks. Introduction to Computational Neuroscience Ardi Tampuu
Artificial Neural Networks Introduction to Computational Neuroscience Ardi Tampuu 7.0.206 Artificial neural network NB! Inspired by biology, not based on biology! Applications Automatic speech recognition
More informationmywbut.com GATE SOLVED PAPER - CS (A) 2 k (B) ( k+ (C) 3 logk 2 (D) 2 logk 3
GATE SOLVED PAPER - CS 00 k k-1 Q. 1 The solution to the recurrence equation T( ) = 3T( ) + 1, T( 1) = 1 (A) k (B) ( k+ 1-1 ) is (C) 3 logk (D) logk 3 Q. The minimum number of colours required to colour
More informationONE-STACK AUTOMATA AS ACCEPTORS OF CONTEXT-FREE LANGUAGES *
ONE-STACK AUTOMATA AS ACCEPTORS OF CONTEXT-FREE LANGUAGES * Pradip Peter Dey, Mohammad Amin, Bhaskar Raj Sinha and Alireza Farahani National University 3678 Aero Court San Diego, CA 92123 {pdey, mamin,
More information1 P age DS & OOPS / UNIT II
UNIT II Stacks: Definition operations - applications of stack. Queues: Definition - operations Priority queues - De que Applications of queue. Linked List: Singly Linked List, Doubly Linked List, Circular
More informationMidterm I (Solutions) CS164, Spring 2002
Midterm I (Solutions) CS164, Spring 2002 February 28, 2002 Please read all instructions (including these) carefully. There are 9 pages in this exam and 5 questions, each with multiple parts. Some questions
More informationScan and Quicksort. 1 Scan. 1.1 Contraction CSE341T 09/20/2017. Lecture 7
CSE341T 09/20/2017 Lecture 7 Scan and Quicksort 1 Scan Scan is a very useful primitive for parallel programming. We will use it all the time in this class. First, lets start by thinking about what other
More informationEnd-To-End Spam Classification With Neural Networks
End-To-End Spam Classification With Neural Networks Christopher Lennan, Bastian Naber, Jan Reher, Leon Weber 1 Introduction A few years ago, the majority of the internet s network traffic was due to spam
More informationContents. Chapter 1 SPECIFYING SYNTAX 1
Contents Chapter 1 SPECIFYING SYNTAX 1 1.1 GRAMMARS AND BNF 2 Context-Free Grammars 4 Context-Sensitive Grammars 8 Exercises 8 1.2 THE PROGRAMMING LANGUAGE WREN 10 Ambiguity 12 Context Constraints in Wren
More informationLecture Bottom-Up Parsing
Lecture 14+15 Bottom-Up Parsing CS 241: Foundations of Sequential Programs Winter 2018 Troy Vasiga et al University of Waterloo 1 Example CFG 1. S S 2. S AyB 3. A ab 4. A cd 5. B z 6. B wz 2 Stacks in
More information