LinearFold GCGGGAAUAGCUCAGUUGGUAGAGCACGACCUUGCCAAGGUCGGGGUCGCGAGUUCGAGUCUCGUUUCCCGCUCCA. Liang Huang. Oregon State University & Baidu Research


1 LinearFold: Linear-Time Prediction of RNA Secondary Structures. Liang Huang, Oregon State University & Baidu Research. Joint work with Dezhong Deng (Oregon State / Baidu), Kai Zhao (Oregon State / Google), David Hendrix (Oregon State), and David Mathews (Rochester)

3 A Bit About Myself. Ph.D., 2008; Research Scientist, 2009; Assistant Professor; Principal Scientist. My main area is computational linguistics, a.k.a. natural language processing, where I develop faster (linear-time) algorithms to understand and translate languages, but I also apply these algorithms to computational structural biology. 2

4 RNA Structure Prediction and Design. CRISPR/Cas9: gene editing. RNA sequence → (structure prediction) → RNA secondary structure → RNA 3D structure; design runs in the reverse direction. (M. tuberculosis example.) 3

7 Why Does Eterna Need Faster Structure Prediction? 4

8 RNA Structure Prediction (Folding). Allowed pairs: C-G, A-U, G-U; assume no crossing pairs. Example: transfer RNA (tRNA); a folding corresponds to a parse tree. Challenge: existing structure prediction algorithms are way too slow: O(n^3). Solution: borrow linear-time algorithms from natural language parsing.

11 Our Linear-Time Prediction is Much Faster. [Figure: running time per sequence vs. sequence length, from 1,000nt up to 244,296nt (the longest sequence in RNAcentral; the HIV genome is ~10,000nt). Vienna RNAfold and CONTRAfold MFE scale as ~n^2.6; LinearFold with b=100 and b=50 scales as ~n^1.0, finishing the longest sequence in minutes rather than hours.] With even slightly better prediction accuracy! 6

12 Computational Linguistics => Computational Biology.
Linguistics / computer science: 1955 Chomsky: context-free grammars; 1958 Backus & Naur: CFGs in programming languages; 1965 Cocke / Kasami / Younger: CKY parsing, O(n^3); 1965 Knuth: LR parsing, O(n); 1970 Joshi: tree-adjoining grammars; 1985 CKY-style TAG parsing in O(n^6); 1985 Shieber: non-context-free languages; 1986 Tomita: Generalized LR parsing; ~1990: linear-time greedy parsing; 2010: linear-time DP parsing (Huang & Sagae).
Biology: 1953 Watson & Crick: DNA double helix; 1980s: O(n^3) CKY for RNA structures; 1999: TAGs for RNA pseudoknots; 2018: LinearFold: O(n) RNA structure prediction. 7

14 Current Structure Prediction Method: O(n^3) Dynamic Programming. O(n^3) bottom-up CKY parsing; example: maximize the number of pairs. [Animation filling the chart: the best structure for span (i, j) either pairs i with j on top of the best structure for (i+1, j-1), or splits at some k into the best structures for (i, k) and (k, j).] 8
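
The recurrence on this slide can be written out directly; a minimal Python sketch (the function name `max_pairs` and the toy scoring, counting base pairs with no minimum hairpin length, are illustrative, not the production energy model):

```python
def max_pairs(seq):
    """Bottom-up CKY (Nussinov-style) DP: maximize the number of base pairs.

    best[i][j] = best count for the span seq[i..j]; each cell either leaves
    j unpaired, or pairs some k with j and splits the span at k. O(n^3)."""
    allowed = {"CG", "GC", "AU", "UA", "GU", "UG"}
    n = len(seq)
    best = [[0] * n for _ in range(n)]
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            b = best[i][j - 1]  # j unpaired
            for k in range(i, j):
                if seq[k] + seq[j] in allowed:  # pair (k, j), split at k
                    left = best[i][k - 1] if k > i else 0
                    inner = best[k + 1][j - 1] if k + 1 <= j - 1 else 0
                    b = max(b, left + inner + 1)
            best[i][j] = b
    return best[0][n - 1] if n else 0
```

The triple loop over (i, j, k) is exactly the cubic cost the later slides set out to remove.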

26 How to Fold RNAs in Linear Time? Idea 0: tag each nucleotide from left (5') to right (3'); maintain a stack: push, pop, skip. Exhaustive: O(3^n). (Huang and Sagae, 2010)

28 How to Fold RNAs in Linear Time? Idea 1: DP by merging equivalent states; maintain graph-structured stacks. DP: O(n^3). (Huang and Sagae, 2010)

30 How to Fold RNAs in Linear Time? Idea 2: approximate search by beam pruning: keep only the top b states per step. DP + beam: O(n). Each DP state corresponds to exponentially many non-DP states via the graph-structured stack (GSS; Tomita, 1986). (Huang and Sagae, 2010)

33 Another View: Left-to-Right CKY. Many variants of CKY correspond to various topological orderings toward the goal item (S, 0, n): bottom-up, left-to-right, right-to-left. All are O(n^3), but the incremental ones can apply beam search to run in O(n). 13

36 Our Linear-Time Prediction is Much Faster. [Recap of the runtime figure from slide 11: Vienna RNAfold and CONTRAfold MFE scale as ~n^2.6, LinearFold (b=50, 100) as ~n^1.0.] With even slightly better prediction accuracy! 14

37 On to details...

38 An Example Path: push, push, skip, pop, pop. [Animation stepping through this action sequence on a short example sequence.] 16
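
The push/skip/pop path above can be replayed mechanically; a small illustrative sketch (`replay_path` is a made-up name, and real systems score candidate actions rather than just replaying a given path):

```python
def replay_path(seq, actions):
    """Replay a left-to-right action path over a sequence.

    push opens a bracket, skip leaves a nucleotide unpaired, and pop closes
    the most recent open bracket, pairing it with the current position."""
    assert len(seq) == len(actions)
    stack, pairs, dotbracket = [], [], []
    for j, a in enumerate(actions):
        if a == "push":
            stack.append(j)
            dotbracket.append("(")
        elif a == "skip":
            dotbracket.append(".")
        else:  # pop
            pairs.append((stack.pop(), j))
            dotbracket.append(")")
    assert not stack, "every push must eventually be popped"
    return "".join(dotbracket), pairs
```

For example, `replay_path("GGACC", ["push", "push", "skip", "pop", "pop"])` yields the dot-bracket string `"((.))"` with pairs `[(1, 3), (0, 4)]`.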

44 Version 1: Exhaustive Search, O(3^n). [Animation: enumerating all push/skip/pop taggings.]
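
A toy sketch of this exhaustive search, feasible only for very short sequences (`exhaustive_fold` is an illustrative name, and the pair-counting score is the same simplification used throughout these slides):

```python
from itertools import product

def exhaustive_fold(seq):
    """Version 1 as code: try every push/skip/pop tagging; O(3^n).

    A tagging is valid if every ')' pops a compatible open bracket and
    the stack ends empty; the score is the number of pairs."""
    allowed = {"CG", "GC", "AU", "UA", "GU", "UG"}
    best_score, best_struct = 0, "." * len(seq)
    for tags in product("(.)", repeat=len(seq)):
        stack, npairs, ok = [], 0, True
        for j, t in enumerate(tags):
            if t == "(":
                stack.append(j)
            elif t == ")":
                if not stack or seq[stack[-1]] + seq[j] not in allowed:
                    ok = False
                    break
                stack.pop()
                npairs += 1
        if ok and not stack and npairs > best_score:
            best_score, best_struct = npairs, "".join(tags)
    return best_score, best_struct
```

Even a 30nt sequence already has 3^30 (over 200 trillion) taggings, which is why the merging ideas on the next slides matter.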

50 Idea 1a: Merge Identical Stacks. Merge states with the same full stack (the same set of unpaired openings): equivalent states.

51 Version 2: Merge by Full Stack, O(2^n). Merge states with identical stacks. [Animation: exhaustive search vs. full-stack merge, O(3^n) → O(2^n).]

54 Idea 1b: Merge Temporary Equivalents. Merge states with the same stack top (the last unpaired opening): temporarily equivalent states.

55 Version 3: Merge by Stack Top, O(n^3). Pack temporarily equivalent states; unpack only when needed. [Animation: packing and unpacking, O(2^n) → O(n^3).]

62 A Close-Up Look at Two Paths. [Animation comparing two derivation paths.]

67 Idea 3: Beam Pruning. [Diagram: full-stack merge, O(2^n) → stack-top merge, O(n^3) → + beam pruning.]

68 Version 4: DP with Beam Search, O(n). Stack-top merge + beam pruning.
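
A minimal sketch combining stack-top merging with beam pruning, under the same toy pair-counting score (`beam_fold` and its state layout are a simplification; the real system uses full energy/feature models):

```python
def beam_fold(seq, b=100):
    """Left-to-right DP with stack-top merging plus beam pruning.

    A state (i, j) means: positions seq[i..j) fold on top of an open
    bracket at i-1 (or the bottom of the stack when i == 0); best[j]
    maps i -> best pair count for that span. Only the top-b states
    survive each step, giving linear total work for fixed b."""
    allowed = {"CG", "GC", "AU", "UA", "GU", "UG"}
    n = len(seq)
    best = [dict() for _ in range(n + 1)]
    best[0][0] = 0

    def relax(step, i, score):
        if score > best[step].get(i, -1):
            best[step][i] = score

    for j in range(n):
        # beam pruning: keep only the top-b states at this step
        if len(best[j]) > b:
            best[j] = dict(sorted(best[j].items(), key=lambda kv: -kv[1])[:b])
        for i, sc in best[j].items():
            relax(j + 1, i, sc)      # skip: seq[j] stays unpaired
            relax(j + 1, j + 1, 0)   # push: open a bracket at j
            if i > 0 and seq[i - 1] + seq[j] in allowed:
                # pop: pair the open bracket at i-1 with j, recombining
                # with every surviving parent state at step i-1
                for i2, sc2 in best[i - 1].items():
                    relax(j + 1, i2, sc2 + sc + 1)
    return best[n].get(0, 0)
```

With b large enough this recovers the exact O(n^3) search; a small b trades a little search quality for linear running time.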

69 Recap: O(3^n) to O(n^3) to O(n). Five search algorithms: bottom-up CKY DP, O(n^3); left-to-right exhaustive, O(3^n); DP merging by full stack, O(2^n); DP merging by stack top, O(n^3); approximate DP via beam search, O(n). This is a simple illustration that just maximizes the number of pairs; our real systems work with complicated feature templates. [Table: no DP, O(3^n); DP + GSS with full-stack merge, O(2^n); DP + GSS with stack-top merge, O(n^3); approximate DP + GSS + beam (LinearFold), O(n).] 37

74 Connections to Incremental Parsing. Shared key observation: local ambiguity packing: pack non-crucial local ambiguities along the way, and unpack in a reduce action only when needed. Psycholinguistic evidence from eye-tracking experiments shows delayed disambiguation, as in "John and Mary had 2 papers [each / together]". (Frazier and Rayner 1990; Frazier 1999; Huang and Sagae, 2010)

76 Experiments

77 Applying Prediction Models on LinearFold. Models from the two most widely used systems: CONTRAfold (MFE, machine-learned) and Vienna RNAfold (thermodynamic). We linearized both systems from O(n^3) to O(n). [Table: baselines (CONTRAfold, Vienna RNAfold): O(n^3) time, O(n^2) space; our work (LinearFold-C, LinearFold-V): O(n) time, O(n) space.] 40

78 Efficiency & Scalability. Tested on two datasets. ArchiveII dataset (labeled, like the Penn Treebank): 2,889 sequences, average length 222nt, up to 2,927nt; used for both efficiency and accuracy evaluations. RNAcentral dataset (unlabeled, like Gigaword): over 1.3M unlabeled sequences, up to 244,296nt; used only for efficiency/scalability evaluations. [Figures: memory use scales as ~n^2.0 for CONTRAfold MFE and Vienna RNAfold vs. ~n^1.0 for LinearFold (b=50, 100); running time per sequence scales as ~n^2.6 vs. ~n^1.0.] 41

79 Accuracy (beam=100 for LinearFold). Tested on the ArchiveII dataset on a family-by-family basis; significant improvements on the three longer families, with the biggest improvements on the longest families, the 16S/23S rRNAs. [Figure: PPV (precision) and Sensitivity (recall) per family (tRNA, 5S rRNA, SRP, RNaseP, tmRNA, Group I Intron, telomerase RNA, 16S rRNA, 23S rRNA) for CONTRAfold MFE vs. LinearFold-C (b=100) and Vienna RNAfold vs. LinearFold-V (b=100); asterisks mark statistically significant differences.]

80 Beam Size and Search Quality. Beam search achieves good search quality starting at b=100: it slightly under-predicts compared to exact search, but comes close. [Figures: average free energy / model cost and number of predicted pairs vs. beam size, for Vienna RNAfold vs. LinearFold-V and CONTRAfold MFE vs. LinearFold-C, with ground truth shown.] 43

81 Beam Size and PPV/Sensitivity. PPV/Sensitivity trade-off as a function of beam size; PPV and Sensitivity are stable around b=100. [Figures: PPV and Sensitivity vs. beam size; PPV vs. Sensitivity for beam sizes from b=20 through b=300.] 44

82 Improvements on Long-Range Pairs. Long-distance pairs are well known to be hard to predict, but our beam-search systems are even better on those. [Figure: PPV and Sensitivity vs. base-pair distance, CONTRAfold MFE vs. LinearFold-C (b=100).] 45

83 Incremental Folding. Left-to-right vs. right-to-left folding; cotranscriptional folding? [Figure: PPV and Sensitivity vs. negative free energy for linear (left-to-right), right-to-left, and CONTRAfold.] 46

84 Example Predictions: CONTRAfold

85 Example Predictions: Vienna RNAfold

86 Web Demo 1: Beam Sizes

87 Web Demo 2: Live Demo

88 Bonus Track: Deep Learning for RNA Folding

89 Existing Models. Physics-based energy model: hundreds of carefully designed and experimentally measured thermodynamic parameters. Machine-learning-based model: claimed to be without physics, but still uses physics-based feature templates and just learns the feature weights (parameters) from data. Can we really do it without physics? 52

93 Deep Learning. Idealized ML: input x → model w → output y (training learns w). Actual ML: input x → hand-designed feature map φ → model w → output y. Deep learning (representation learning): the feature map φ is itself learned during training. Deep learning for RNA structure prediction: RNNs automatically extract features and learn their weights, completely without physics; this is the first deep-learning RNA secondary structure prediction algorithm. We borrow techniques from natural language parsing, where deep learning has been very successful since ~2016; our group's span-based parsing framework is now dominant. 53

96 Our Approach: from NLP Span Parsing. Span differences are taken from an encoder, in this case a bi-LSTM: a span (i, j) is represented by (f_j − f_i, b_i − b_j), where f and b are the forward and backward states. Each span is scored and labeled by a feedforward network, giving s(i, j, X). The score of a tree is the sum of all its labeled span scores: s(t) = Σ over (i, j, X) ∈ t of s(i, j, X). Example sentence: <s> You should eat ice cream </s>, with states f_0 … f_5 and b_0 … b_5. (Cross & Huang 2016; Stern et al. 2017; Wang & Chang 2016) 54

97 Our Approach: RNA Structure Prediction. Replace words with nucleotides A, C, G, U as inputs; the span score s(i, j, X) is computed from (f_j − f_i, b_i − b_j) as before. Three labels for a span: left-unpair, left-pair, pair. 55
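
A minimal pure-Python sketch of such a span scorer; the random vectors stand in for bi-LSTM states and trained weights, and `score_spans` is a made-up name for illustration:

```python
import math
import random

def score_spans(f, b, W, v, labels=("left-unpair", "left-pair", "pair")):
    """Score every span (i, j) with every label, span-parsing style.

    f, b: lists of forward/backward encoder state vectors (length n+1);
    the span feature is concat(f[j] - f[i], b[i] - b[j]), fed through a
    one-hidden-layer feedforward net with one output score per label."""
    def sub(x, y):
        return [a - c for a, c in zip(x, y)]
    def matvec(M, x):
        return [sum(m * xi for m, xi in zip(row, x)) for row in M]
    n = len(f) - 1
    scores = {}
    for i in range(n):
        for j in range(i + 1, n + 1):
            feat = sub(f[j], f[i]) + sub(b[i], b[j])
            hidden = [math.tanh(z) for z in matvec(W, feat)]
            out = matvec(v, hidden)
            for x, label in enumerate(labels):
                scores[(i, j, label)] = out[x]
    return scores

# toy stand-ins: length-5 input, encoder size 4, hidden size 8
random.seed(0)
n, d, h = 5, 4, 8
rand_mat = lambda r, c: [[random.gauss(0, 1) for _ in range(c)] for _ in range(r)]
f, b = rand_mat(n + 1, d), rand_mat(n + 1, d)
W, v = rand_mat(h, 2 * d), rand_mat(3, h)
scores = score_spans(f, b, W, v)
```

A decoder (e.g. CKY over the span scores) would then pick the highest-scoring consistent set of labeled spans.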

98 Experiments. Database: S-Full (Andronescu et al. 2008, 2010). Preprocessing: cap at 700nt; remove pseudoknots and non-canonical base pairs; 3,245 distinct structures; 80% / 20% split for training / testing (randomly split into a training set of 2,586 and a test set of 659). [Table: precision, recall, and F-score for Vienna RNAfold (physics-based), CONTRAfold off-the-shelf and retrained (machine learning), and our deep-learning model.]

99 非常感谢! (fēi cháng gǎn xiè: Thank you very much!) Just google LinearFold.


More information

Case-Based Reasoning. CS 188: Artificial Intelligence Fall Nearest-Neighbor Classification. Parametric / Non-parametric.

Case-Based Reasoning. CS 188: Artificial Intelligence Fall Nearest-Neighbor Classification. Parametric / Non-parametric. CS 188: Artificial Intelligence Fall 2008 Lecture 25: Kernels and Clustering 12/2/2008 Dan Klein UC Berkeley Case-Based Reasoning Similarity for classification Case-based reasoning Predict an instance

More information

CS 188: Artificial Intelligence Fall 2008

CS 188: Artificial Intelligence Fall 2008 CS 188: Artificial Intelligence Fall 2008 Lecture 25: Kernels and Clustering 12/2/2008 Dan Klein UC Berkeley 1 1 Case-Based Reasoning Similarity for classification Case-based reasoning Predict an instance

More information

Discriminative Training with Perceptron Algorithm for POS Tagging Task

Discriminative Training with Perceptron Algorithm for POS Tagging Task Discriminative Training with Perceptron Algorithm for POS Tagging Task Mahsa Yarmohammadi Center for Spoken Language Understanding Oregon Health & Science University Portland, Oregon yarmoham@ohsu.edu

More information

Detecting and Parsing of Visual Objects: Humans and Animals. Alan Yuille (UCLA)

Detecting and Parsing of Visual Objects: Humans and Animals. Alan Yuille (UCLA) Detecting and Parsing of Visual Objects: Humans and Animals Alan Yuille (UCLA) Summary This talk describes recent work on detection and parsing visual objects. The methods represent objects in terms of

More information

Statistical parsing. Fei Xia Feb 27, 2009 CSE 590A

Statistical parsing. Fei Xia Feb 27, 2009 CSE 590A Statistical parsing Fei Xia Feb 27, 2009 CSE 590A Statistical parsing History-based models (1995-2000) Recent development (2000-present): Supervised learning: reranking and label splitting Semi-supervised

More information

Dynamic Programming Course: A structure based flexible search method for motifs in RNA. By: Veksler, I., Ziv-Ukelson, M., Barash, D.

Dynamic Programming Course: A structure based flexible search method for motifs in RNA. By: Veksler, I., Ziv-Ukelson, M., Barash, D. Dynamic Programming Course: A structure based flexible search method for motifs in RNA By: Veksler, I., Ziv-Ukelson, M., Barash, D., Kedem, K Outline Background Motivation RNA s structure representations

More information

Dependency grammar and dependency parsing

Dependency grammar and dependency parsing Dependency grammar and dependency parsing Syntactic analysis (5LN455) 2014-12-10 Sara Stymne Department of Linguistics and Philology Based on slides from Marco Kuhlmann Mid-course evaluation Mostly positive

More information

3. Parsing. Oscar Nierstrasz

3. Parsing. Oscar Nierstrasz 3. Parsing Oscar Nierstrasz Thanks to Jens Palsberg and Tony Hosking for their kind permission to reuse and adapt the CS132 and CS502 lecture notes. http://www.cs.ucla.edu/~palsberg/ http://www.cs.purdue.edu/homes/hosking/

More information

Deep Learning Applications

Deep Learning Applications October 20, 2017 Overview Supervised Learning Feedforward neural network Convolution neural network Recurrent neural network Recursive neural network (Recursive neural tensor network) Unsupervised Learning

More information

Database Group Research Overview. Immanuel Trummer

Database Group Research Overview. Immanuel Trummer Database Group Research Overview Immanuel Trummer Talk Overview User Query Data Analysis Result Processing Talk Overview Fact Checking Query User Data Vocalization Data Analysis Result Processing Query

More information

CMSC423: Bioinformatic Algorithms, Databases and Tools Lecture 8. Note

CMSC423: Bioinformatic Algorithms, Databases and Tools Lecture 8. Note MS: Bioinformatic lgorithms, Databases and ools Lecture 8 Sequence alignment: inexact alignment dynamic programming, gapped alignment Note Lecture 7 suffix trees and suffix arrays will be rescheduled Exact

More information

Forest-based Algorithms in

Forest-based Algorithms in Forest-based Algorithms in Natural Language Processing Liang Huang overview of Ph.D. work done at Penn (and ISI, ICT) INSTITUTE OF COMPUTING TECHNOLOGY includes joint work with David Chiang, Kevin Knight,

More information

Advanced PCFG Parsing

Advanced PCFG Parsing Advanced PCFG Parsing Computational Linguistics Alexander Koller 8 December 2017 Today Parsing schemata and agenda-based parsing. Semiring parsing. Pruning techniques for chart parsing. The CKY Algorithm

More information

Announcements. CS 188: Artificial Intelligence Spring Classification: Feature Vectors. Classification: Weights. Learning: Binary Perceptron

Announcements. CS 188: Artificial Intelligence Spring Classification: Feature Vectors. Classification: Weights. Learning: Binary Perceptron CS 188: Artificial Intelligence Spring 2010 Lecture 24: Perceptrons and More! 4/20/2010 Announcements W7 due Thursday [that s your last written for the semester!] Project 5 out Thursday Contest running

More information

EDAN65: Compilers, Lecture 06 A LR parsing. Görel Hedin Revised:

EDAN65: Compilers, Lecture 06 A LR parsing. Görel Hedin Revised: EDAN65: Compilers, Lecture 06 A LR parsing Görel Hedin Revised: 2017-09-11 This lecture Regular expressions Context-free grammar Attribute grammar Lexical analyzer (scanner) Syntactic analyzer (parser)

More information

N-Queens problem. Administrative. Local Search

N-Queens problem. Administrative. Local Search Local Search CS151 David Kauchak Fall 2010 http://www.youtube.com/watch?v=4pcl6-mjrnk Some material borrowed from: Sara Owsley Sood and others Administrative N-Queens problem Assign 1 grading Assign 2

More information

Fast Edge Detection Using Structured Forests

Fast Edge Detection Using Structured Forests Fast Edge Detection Using Structured Forests Piotr Dollár, C. Lawrence Zitnick [1] Zhihao Li (zhihaol@andrew.cmu.edu) Computer Science Department Carnegie Mellon University Table of contents 1. Introduction

More information

Incremental Integer Linear Programming for Non-projective Dependency Parsing

Incremental Integer Linear Programming for Non-projective Dependency Parsing Incremental Integer Linear Programming for Non-projective Dependency Parsing Sebastian Riedel James Clarke ICCS, University of Edinburgh 22. July 2006 EMNLP 2006 S. Riedel, J. Clarke (ICCS, Edinburgh)

More information

A Seeded Genetic Algorithm for RNA Secondary Structural Prediction with Pseudoknots

A Seeded Genetic Algorithm for RNA Secondary Structural Prediction with Pseudoknots San Jose State University SJSU ScholarWorks Master's Projects Master's Theses and Graduate Research 2008 A Seeded Genetic Algorithm for RNA Secondary Structural Prediction with Pseudoknots Ryan Pham San

More information

Dynamic Programming. CSE 101: Design and Analysis of Algorithms Lecture 19

Dynamic Programming. CSE 101: Design and Analysis of Algorithms Lecture 19 Dynamic Programming CSE 101: Design and Analysis of Algorithms Lecture 19 CSE 101: Design and analysis of algorithms Dynamic programming Reading: Chapter 6 Homework 7 is due Dec 6, 11:59 PM This Friday

More information

Bi-objective integer programming for RNA secondary structure prediction with pseudoknots

Bi-objective integer programming for RNA secondary structure prediction with pseudoknots Legendre et al. BMC Bioinformatics (2018) 19:13 DOI 10.1186/s12859-018-2007-7 RESEARCH ARTICLE Open Access Bi-objective integer programming for RNA secondary structure prediction with pseudoknots Audrey

More information

Low-level optimization

Low-level optimization Low-level optimization Advanced Course on Compilers Spring 2015 (III-V): Lecture 6 Vesa Hirvisalo ESG/CSE/Aalto Today Introduction to code generation finding the best translation Instruction selection

More information

End-2-End Search. Mices 2018 Duncan Blythe

End-2-End Search. Mices 2018 Duncan Blythe End-2-End Search Mices 2018 Duncan Blythe About Me Duncan Blythe Research Scientist @ Zalando Research M.Math.Phil in Mathematics/ Philosophy @ Oxford Ph.D. & M.Sc. in Machine Learning/ Computational Neuroscience

More information

Agenda for today. Homework questions, issues? Non-projective dependencies Spanning tree algorithm for non-projective parsing

Agenda for today. Homework questions, issues? Non-projective dependencies Spanning tree algorithm for non-projective parsing Agenda for today Homework questions, issues? Non-projective dependencies Spanning tree algorithm for non-projective parsing 1 Projective vs non-projective dependencies If we extract dependencies from trees,

More information

Bidirectional LTAG Dependency Parsing

Bidirectional LTAG Dependency Parsing Bidirectional LTAG Dependency Parsing Libin Shen BBN Technologies Aravind K. Joshi University of Pennsylvania We propose a novel algorithm for bi-directional parsing with linear computational complexity,

More information

Sentiment analysis under temporal shift

Sentiment analysis under temporal shift Sentiment analysis under temporal shift Jan Lukes and Anders Søgaard Dpt. of Computer Science University of Copenhagen Copenhagen, Denmark smx262@alumni.ku.dk Abstract Sentiment analysis models often rely

More information

k-means demo Administrative Machine learning: Unsupervised learning" Assignment 5 out

k-means demo Administrative Machine learning: Unsupervised learning Assignment 5 out Machine learning: Unsupervised learning" David Kauchak cs Spring 0 adapted from: http://www.stanford.edu/class/cs76/handouts/lecture7-clustering.ppt http://www.youtube.com/watch?v=or_-y-eilqo Administrative

More information

Iterative CKY parsing for Probabilistic Context-Free Grammars

Iterative CKY parsing for Probabilistic Context-Free Grammars Iterative CKY parsing for Probabilistic Context-Free Grammars Yoshimasa Tsuruoka and Jun ichi Tsujii Department of Computer Science, University of Tokyo Hongo 7-3-1, Bunkyo-ku, Tokyo 113-0033 CREST, JST

More information

Evaluation Metrics. (Classifiers) CS229 Section Anand Avati

Evaluation Metrics. (Classifiers) CS229 Section Anand Avati Evaluation Metrics (Classifiers) CS Section Anand Avati Topics Why? Binary classifiers Metrics Rank view Thresholding Confusion Matrix Point metrics: Accuracy, Precision, Recall / Sensitivity, Specificity,

More information

Deep-Q: Traffic-driven QoS Inference using Deep Generative Network

Deep-Q: Traffic-driven QoS Inference using Deep Generative Network Deep-Q: Traffic-driven QoS Inference using Deep Generative Network Shihan Xiao, Dongdong He, Zhibo Gong Network Technology Lab, Huawei Technologies Co., Ltd., Beijing, China 1 Background What is a QoS

More information

DEEP LEARNING REVIEW. Yann LeCun, Yoshua Bengio & Geoffrey Hinton Nature Presented by Divya Chitimalla

DEEP LEARNING REVIEW. Yann LeCun, Yoshua Bengio & Geoffrey Hinton Nature Presented by Divya Chitimalla DEEP LEARNING REVIEW Yann LeCun, Yoshua Bengio & Geoffrey Hinton Nature 2015 -Presented by Divya Chitimalla What is deep learning Deep learning allows computational models that are composed of multiple

More information

INF4820, Algorithms for AI and NLP: Evaluating Classifiers Clustering

INF4820, Algorithms for AI and NLP: Evaluating Classifiers Clustering INF4820, Algorithms for AI and NLP: Evaluating Classifiers Clustering Erik Velldal University of Oslo Sept. 18, 2012 Topics for today 2 Classification Recap Evaluating classifiers Accuracy, precision,

More information

Machine Learning 13. week

Machine Learning 13. week Machine Learning 13. week Deep Learning Convolutional Neural Network Recurrent Neural Network 1 Why Deep Learning is so Popular? 1. Increase in the amount of data Thanks to the Internet, huge amount of

More information

FastText. Jon Koss, Abhishek Jindal

FastText. Jon Koss, Abhishek Jindal FastText Jon Koss, Abhishek Jindal FastText FastText is on par with state-of-the-art deep learning classifiers in terms of accuracy But it is way faster: FastText can train on more than one billion words

More information

Word Lattice Reranking for Chinese Word Segmentation and Part-of-Speech Tagging

Word Lattice Reranking for Chinese Word Segmentation and Part-of-Speech Tagging Word Lattice Reranking for Chinese Word Segmentation and Part-of-Speech Tagging Wenbin Jiang Haitao Mi Qun Liu Key Lab. of Intelligent Information Processing Institute of Computing Technology Chinese Academy

More information

The CKY algorithm part 1: Recognition

The CKY algorithm part 1: Recognition The CKY algorithm part 1: Recognition Syntactic analysis (5LN455) 2016-11-10 Sara Stymne Department of Linguistics and Philology Mostly based on slides from Marco Kuhlmann Phrase structure trees S root

More information

Midterm Examination CS540-2: Introduction to Artificial Intelligence

Midterm Examination CS540-2: Introduction to Artificial Intelligence Midterm Examination CS540-2: Introduction to Artificial Intelligence March 15, 2018 LAST NAME: FIRST NAME: Problem Score Max Score 1 12 2 13 3 9 4 11 5 8 6 13 7 9 8 16 9 9 Total 100 Question 1. [12] Search

More information

over Multi Label Images

over Multi Label Images IBM Research Compact Hashing for Mixed Image Keyword Query over Multi Label Images Xianglong Liu 1, Yadong Mu 2, Bo Lang 1 and Shih Fu Chang 2 1 Beihang University, Beijing, China 2 Columbia University,

More information

Arbitrary Style Transfer in Real-Time with Adaptive Instance Normalization. Presented by: Karen Lucknavalai and Alexandr Kuznetsov

Arbitrary Style Transfer in Real-Time with Adaptive Instance Normalization. Presented by: Karen Lucknavalai and Alexandr Kuznetsov Arbitrary Style Transfer in Real-Time with Adaptive Instance Normalization Presented by: Karen Lucknavalai and Alexandr Kuznetsov Example Style Content Result Motivation Transforming content of an image

More information

Classification and Regression

Classification and Regression Classification and Regression Announcements Study guide for exam is on the LMS Sample exam will be posted by Monday Reminder that phase 3 oral presentations are being held next week during workshops Plan

More information

Patent Image Retrieval

Patent Image Retrieval Patent Image Retrieval Stefanos Vrochidis IRF Symposium 2008 Vienna, November 6, 2008 Aristotle University of Thessaloniki Overview 1. Introduction 2. Related Work in Patent Image Retrieval 3. Patent Image

More information

Artificial Intelligence. Programming Styles

Artificial Intelligence. Programming Styles Artificial Intelligence Intro to Machine Learning Programming Styles Standard CS: Explicitly program computer to do something Early AI: Derive a problem description (state) and use general algorithms to

More information

Dependency grammar and dependency parsing

Dependency grammar and dependency parsing Dependency grammar and dependency parsing Syntactic analysis (5LN455) 2015-12-09 Sara Stymne Department of Linguistics and Philology Based on slides from Marco Kuhlmann Activities - dependency parsing

More information

arxiv: v1 [cs.cv] 2 Sep 2018

arxiv: v1 [cs.cv] 2 Sep 2018 Natural Language Person Search Using Deep Reinforcement Learning Ankit Shah Language Technologies Institute Carnegie Mellon University aps1@andrew.cmu.edu Tyler Vuong Electrical and Computer Engineering

More information

Administrative. Local Search!

Administrative. Local Search! Administrative Local Search! CS311 David Kauchak Spring 2013 Assignment 2 due Tuesday before class Written problems 2 posted Class participation http://www.youtube.com/watch? v=irhfvdphfzq&list=uucdoqrpqlqkvctckzqa

More information

LSTM and its variants for visual recognition. Xiaodan Liang Sun Yat-sen University

LSTM and its variants for visual recognition. Xiaodan Liang Sun Yat-sen University LSTM and its variants for visual recognition Xiaodan Liang xdliang328@gmail.com Sun Yat-sen University Outline Context Modelling with CNN LSTM and its Variants LSTM Architecture Variants Application in

More information

Section A. A grammar that produces more than one parse tree for some sentences is said to be ambiguous.

Section A. A grammar that produces more than one parse tree for some sentences is said to be ambiguous. Section A 1. What do you meant by parser and its types? A parser for grammar G is a program that takes as input a string w and produces as output either a parse tree for w, if w is a sentence of G, or

More information

INF5820/INF9820 LANGUAGE TECHNOLOGICAL APPLICATIONS. Jan Tore Lønning, Lecture 8, 12 Oct

INF5820/INF9820 LANGUAGE TECHNOLOGICAL APPLICATIONS. Jan Tore Lønning, Lecture 8, 12 Oct 1 INF5820/INF9820 LANGUAGE TECHNOLOGICAL APPLICATIONS Jan Tore Lønning, Lecture 8, 12 Oct. 2016 jtl@ifi.uio.no Today 2 Preparing bitext Parameter tuning Reranking Some linguistic issues STMT so far 3 We

More information

Elements of Language Processing and Learning lecture 3

Elements of Language Processing and Learning lecture 3 Elements of Language Processing and Learning lecture 3 Ivan Titov TA: Milos Stanojevic Institute for Logic, Language and Computation Today Parsing algorithms for CFGs Recap, Chomsky Normal Form (CNF) A

More information

Automated Application Signature Generation Using LASER and Cosine Similarity

Automated Application Signature Generation Using LASER and Cosine Similarity Automated Application Signature Generation Using LASER and Cosine Similarity Byungchul Park, Jae Yoon Jung, John Strassner *, and James Won-ki Hong * {fates, dejavu94, johns, jwkhong}@postech.ac.kr Dept.

More information

ENCODING STRUCTURED OUTPUT VALUES. Edward Loper. Computer and Information Science

ENCODING STRUCTURED OUTPUT VALUES. Edward Loper. Computer and Information Science ENCODING STRUCTURED OUTPUT VALUES Edward Loper A DISSERTATION PROPOSAL in Computer and Information Science Presented to the Faculties of the University of Pennsylvania in Partial Fulfillment of the Requirements

More information

Natural Language Processing with Deep Learning CS224N/Ling284

Natural Language Processing with Deep Learning CS224N/Ling284 Natural Language Processing with Deep Learning CS224N/Ling284 Lecture 8: Recurrent Neural Networks Christopher Manning and Richard Socher Organization Extra project office hour today after lecture Overview

More information

Inference Complexity As Learning Bias. The Goal Outline. Don t use model complexity as your learning bias

Inference Complexity As Learning Bias. The Goal Outline. Don t use model complexity as your learning bias Inference Complexity s Learning Bias Daniel Lowd Dept. of Computer and Information Science University of Oregon Don t use model complexity as your learning bias Use inference complexity. Joint work with

More information

Automatic Domain Partitioning for Multi-Domain Learning

Automatic Domain Partitioning for Multi-Domain Learning Automatic Domain Partitioning for Multi-Domain Learning Di Wang diwang@cs.cmu.edu Chenyan Xiong cx@cs.cmu.edu William Yang Wang ww@cmu.edu Abstract Multi-Domain learning (MDL) assumes that the domain labels

More information

Lecture 7: Neural network acoustic models in speech recognition

Lecture 7: Neural network acoustic models in speech recognition CS 224S / LINGUIST 285 Spoken Language Processing Andrew Maas Stanford University Spring 2017 Lecture 7: Neural network acoustic models in speech recognition Outline Hybrid acoustic modeling overview Basic

More information