Knuth-Morris-Pratt. Kranthi Kumar Mandumula, Indiana State University, Terre Haute, IN, USA. December 16, 2011

Abstract

KMP is a string-searching algorithm. The problem is to find the occurrences of a pattern P in a string S. In the most basic approach we compare each character of P with S; on a match we continue, and on a mismatch we shift P one position to the right and repeat the matching procedure. The running time of the KMP algorithm is O(m + n), where m is the length of the pattern P and n is the length of the string S. Since we know that m < n, the running time is O(n).

1 Statement of Problem

The problem is to find a string inside another string. For a pattern P and each position i, sp_i(P) is defined as the length of the longest proper suffix of P[1..i] that matches a prefix of P. Like the naive algorithm, KMP performs its comparisons from left to right, and it also calculates the maximum possible shift of the pattern P from left to right.
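To make this definition concrete, the values sp_i can be computed by brute force. The following is a minimal Python sketch added here for illustration (the function name sp_values and the 0-based Python indexing are not from the report):

    def sp_values(pattern):
        # sp_i(P): length of the longest proper suffix of P[1..i]
        # that is also a prefix of P, for i = 1..m.
        values = []
        for i in range(1, len(pattern) + 1):
            best = 0
            for k in range(1, i):                    # candidate suffix lengths
                if pattern[i - k:i] == pattern[:k]:  # suffix of P[1..i] vs prefix of P
                    best = k
            values.append(best)
        return values

    print(sp_values("ababaca"))   # [0, 0, 1, 2, 3, 0, 1]

For the pattern ababaca used later in Section 7, this gives the values 0 0 1 2 3 0 1, which coincide with the prefix-function values computed there.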

2 History

The main reason for the slowness of the naive algorithm is that after every mismatch the pattern is shifted to the right by only one character position, so that pairs of characters which have already been scanned are compared again. Knuth, Morris and Pratt observed that it is not necessary to slide the pattern by one character position only. They used the information gained from previously compared characters and developed an algorithm known after their names as the KMP algorithm. Knuth, Morris and Pratt discovered the first linear-time string-matching algorithm by following a tight analysis of the naive algorithm. The KMP algorithm keeps the information that the naive approach gathers and then wastes during the scan of the text. By avoiding this waste of information, it achieves a running time of O(n + m), which is optimal in the worst-case sense: in the worst case every character of the text and of the pattern has to be examined at least once. KMP uses a minimum number of comparisons through the use of a pre-computed table. The implementation of Knuth-Morris-Pratt is efficient because it minimizes the total number of comparisons of the pattern against the input string while consuming characters from the buffer in a predictable way. There are string-matching algorithms with better average-case running times than KMP, but no comparison-based algorithm has better worst-case characteristics, and the consistent consumption of input characters makes a buffered implementation possible.

3 Time Complexity

The prefix-function values can be computed in O(m) time, and finding the matches takes at most 2n iterations. In total, the running time of the Knuth-Morris-Pratt algorithm is O(m + n): the preprocessing phase takes O(m) and the searching phase takes O(n + m), independent of the alphabet size.

4 Comparison with Other String-Matching Algorithms

4.1 Karp-Rabin Algorithm

4.1.1 Statement of the problem

The Rabin-Karp algorithm is a string-searching algorithm that uses a hash function to find a set of pattern strings in a text. The key to the performance of Rabin-Karp is the efficient computation of the hash values of the successive substrings of the text. For single-pattern matching it is inferior to other algorithms because of its slow worst-case behaviour, but it is well suited to multiple-pattern search.

4.1.2 Complexity of the algorithm

Best running time: O(n + m). Worst-case time: O(nm).
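The rolling-hash idea behind Rabin-Karp can be sketched in a few lines. The Python code below is an illustration only and is not part of the original report; the function name, base and modulus are arbitrary choices made here:

    def rabin_karp(text, pattern, base=256, mod=1_000_003):
        # Hash each window of the text with a rolling polynomial hash and
        # compare the actual strings only when the hash values agree.
        n, m = len(text), len(pattern)
        if m == 0 or m > n:
            return []
        high = pow(base, m - 1, mod)              # weight of the leading character
        p_hash = t_hash = 0
        for i in range(m):
            p_hash = (p_hash * base + ord(pattern[i])) % mod
            t_hash = (t_hash * base + ord(text[i])) % mod
        matches = []
        for s in range(n - m + 1):
            if p_hash == t_hash and text[s:s + m] == pattern:
                matches.append(s)                 # 0-based shift of a verified match
            if s < n - m:                         # roll the hash to the next window
                t_hash = ((t_hash - ord(text[s]) * high) * base
                          + ord(text[s + m])) % mod
        return matches

    print(rabin_karp("abcabaabcabac", "abaa"))    # [3]

Here the string and pattern are the ones used in the example of Section 5.1 below; the single match is reported at 0-based position 3.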

4.2 Boyer-Moore Algorithm

4.2.1 Statement of the problem

Boyer-Moore is an efficient string-searching algorithm, and it has been the standard benchmark in the practical string-search literature. Its execution time can be sublinear, because not every character of the string being searched needs to be examined, and the algorithm gets faster as the pattern becomes longer.

4.2.2 Complexity of the algorithm

Worst-case time: O(nm). Best running time: O(n/m).

4.3 Naive String-Search Algorithm

4.3.1 Statement of the problem

This is said to be the simplest and least efficient way to find where one string occurs inside another. At most wrong positions we only have to look at one or two characters to see that the position is wrong.

4.3.2 Complexity of the algorithm

Average case: O(n + m). Worst case: O(nm).

5 O(mn) Approach

One of the most obvious approaches to the string-matching problem is to compare the first element of the pattern p with the first element of the string S in which we want to locate p. If the first element of p matches the first element of S, compare the second element of p with the second element of S. If that matches as well, proceed likewise until the entire pattern p is found. If a mismatch is found at any position, shift p one position to the right and repeat the comparison beginning from the first element of p.

5.1 Working of the O(mn) approach

Let us describe with an illustration how the O(mn) approach works. Consider a string S and a pattern p, where S is given as

    a b c a b a a b c a b a c

and p is given by

    a b a a

5.1.1 Steps of the O(mn) approach for the above example

Step 1: Compare p[1] with S[1].

    String  a b c a b a a b c a b a c
    Pattern a b a a

Step 2: Compare p[2] with S[2].

    String  a b c a b a a b c a b a c
    Pattern a b a a

Step 3: Compare p[3] with S[3].

    String  a b c a b a a b c a b a c
    Pattern a b a a

Since a mismatch is detected at this point, shift p one position to the right and repeat steps 1 to 3.

    String  a b c a b a a b c a b a c
    Pattern       a b a a

So we can see that a match is finally found after shifting p three times to the right.

5.1.2 Drawbacks of this approach

The running time of this approach is very slow, since the matching time is of the order O(mn). The main reason for its slowness is its repetitive comparisons.
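The approach of Section 5 can be written directly as code. The following minimal Python sketch is added for illustration (the function name naive_search and the 0-based positions it returns are not from the report):

    def naive_search(s, p):
        # Try every alignment of p against s and compare left to right.
        n, m = len(s), len(p)
        shifts = []
        for shift in range(n - m + 1):
            j = 0
            while j < m and s[shift + j] == p[j]:
                j += 1
            if j == m:                 # every character matched at this alignment
                shifts.append(shift)   # 0-based position of the match
        return shifts

    print(naive_search("abcabaabcabac", "abaa"))   # [3]

For the example above it reports the match at 0-based position 3, i.e. the alignment reached after three shifts to the right.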

6 General Algorithm for All String-Searching Algorithms

    place the pattern at the left;
    while pattern not fully matched and text not exhausted do
    begin
        while pattern character differs from current text character do
            shift pattern appropriately;
        advance to next character of text
    end;

7 Components of the KMP Algorithm

The prefix function π: The prefix function π preprocesses the pattern to find matches of prefixes of the pattern with the pattern itself. π[j] is defined as the size of the largest proper prefix of P[1..j] that is also a suffix of P[1..j]. It is used to avoid useless shifts of the pattern p; in other words, it enables us to avoid backtracking on the string S.

    m ← length[p]
    π[1] ← 0
    k ← 0
    for q ← 2 to m do
        while k > 0 and p[k + 1] ≠ p[q] do
            k ← π[k]
        end while
        if p[k + 1] = p[q] then
            k ← k + 1
        end if
        π[q] ← k
    end for

The KMP matcher: With the string S, the pattern p and the prefix function π as inputs, the matcher finds the occurrence of p in S and returns the number of shifts of p after which the occurrence is found.
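The prefix-function pseudocode above translates directly into runnable code. The following Python sketch is added for illustration; it uses 0-based indexing rather than the 1-based arrays of the pseudocode, and the function name is not from the report:

    def compute_prefix_function(p):
        # pi[q] = length of the longest proper prefix of p[0..q]
        # that is also a suffix of p[0..q]  (0-based indexing).
        m = len(p)
        pi = [0] * m
        k = 0
        for q in range(1, m):
            while k > 0 and p[k] != p[q]:
                k = pi[k - 1]          # fall back to the next shorter border
            if p[k] == p[q]:
                k += 1
            pi[q] = k
        return pi

    print(compute_prefix_function("ababaca"))   # [0, 0, 1, 2, 3, 0, 1]

For the pattern ababaca this prints 0 0 1 2 3 0 1, the same values obtained in the step-by-step computation that follows.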

Computation of the prefix function: Let us consider an example of how to compute π for the pattern p below.

    Pattern  a b a b a c a

Initially m = length[p] = 7, π[1] = 0 and k = 0, where m, π[1] and k are the length of the pattern, the first prefix-function value and the initial potential value, respectively.

Step 1: q = 2, k = 0, π[2] = 0.

    p  a b a b a c a
    π  0 0

Step 2: q = 3, k = 0, π[3] = 1.

    p  a b a b a c a
    π  0 0 1

Step 3: q = 4, k = 1, π[4] = 2.

    p  a b a b a c a
    π  0 0 1 2

Step 4: q = 5, k = 2, π[5] = 3.

    p  a b a b a c a
    π  0 0 1 2 3

Step 5: q = 6, k = 3. Since p[4] ≠ p[6], the while loop sets k ← π[3] = 1 and then k ← π[1] = 0; because p[1] ≠ p[6], we obtain π[6] = 0.

    p  a b a b a c a
    π  0 0 1 2 3 0

Step 6: q = 7, k = 0. Since p[1] = p[7], k becomes 1 and π[7] = 1.

    p  a b a b a c a
    π  0 0 1 2 3 0 1

After iterating 6 times, the prefix-function computation is complete:

    p  a b a b a c a
    π  0 0 1 2 3 0 1

7.1 Run-time complexity

The prefix function can be computed in O(m) time.

8 Algorithm

Step 1: Initialize the input variables: n = length of the text, m = length of the pattern, π = prefix function of the pattern p, q = number of characters matched.
Step 2: Define the variable q = 0, the beginning of the match.
Step 3: Compare the current character of the pattern with the current character of the text. If they do not match, substitute the value of π[q] for q; if they match, increment the value of q by 1.
Step 4: Check whether all the pattern characters have been matched with the text characters. If not, repeat the search process; if yes, print the number of shifts taken by the pattern.
Step 5: Look for the next match.

Pseudo-code for the KMP algorithm:

    n ← length[S]
    m ← length[p]
    π ← Compute-Prefix-Function(p)
    q ← 0
    for i ← 1 to n do
        while q > 0 and p[q + 1] ≠ S[i] do
            q ← π[q]
        end while
        if p[q + 1] = S[i] then
            q ← q + 1
        end if
        if q = m then
            report the shift i − m
            q ← π[q]
        end if
    end for
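The matcher can be made runnable in the same style as the prefix-function sketch of Section 7. The Python code below is an illustration only; it reuses compute_prefix_function from that sketch, works with 0-based indices and returns the 0-based starting positions of the matches:

    def kmp_search(s, p):
        if not p:
            return []
        pi = compute_prefix_function(p)      # sketch from Section 7
        q = 0                                # number of characters matched so far
        matches = []
        for i, c in enumerate(s):
            while q > 0 and p[q] != c:
                q = pi[q - 1]                # fall back using the prefix function
            if p[q] == c:
                q += 1
            if q == len(p):                  # full match ending at position i
                matches.append(i - len(p) + 1)
                q = pi[q - 1]                # keep looking for further matches
        return matches

    print(kmp_search("yxyzyxyzxyzxxyzxyzxyzxzx", "xyzxyzxzx"))   # [15]

Run on the text and pattern of the example in Section 8.1 below (with the spaces removed), it reports a single match starting at 0-based position 15, i.e. at the 16th character of the text.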

8.1 Example of String Pattern Matching

Now let us consider an example so that the algorithm can be clearly understood. Let the text be

    y x y z y x y z x y z x x y z x y z x y z x z x

and the pattern be

    x y z x y z x z x

The space after each character is shown only for clarity; assume that these strings do not contain any other characters. The matching process starts by aligning the first (leftmost) character of the pattern with the first character of the text. Initially the situation is as shown below.

    Text     y x y z y x y z x y z x x y z x y z x y z x z x
    Pattern  x y z x y z x z x

We start matching the characters of the pattern from left to right. There is a mismatch at the first character itself, so we slide the pattern to the right by one position and try again:

    Text     y x y z y x y z x y z x x y z x y z x y z x z x
    Pattern    x y z x y z x z x

Now the first three characters of the pattern match the three characters of the text starting at position 2, but the fourth character of the pattern does not match the corresponding character of the text: at this alignment the text read so far is x y z r, where r ≠ x. Without making any further tests on the text we can safely conclude that it is not worth shifting the pattern right by one, two or three positions, as such a slide could not give a proper alignment. So we move the pattern four positions from the current position:

    Text     y x y z y x y z x y z x x y z x y z x y z x z x
    Pattern            x y z x y z x z x

Now the first seven characters of the pattern match and the eighth character mismatches. This means that the last eight characters of the text examined are x y z x y z x r, where r ≠ z. Shifting the pattern by one or two places cannot be correct, but a three-place shift might be (for this pattern π[7] = 4, so the shift is 7 − 4 = 3):

    Text     y x y z y x y z x y z x x y z x y z x y z x z x
    Pattern                  x y z x y z x z x

The shift amount is chosen so that the characters of the text examined before the mismatch location still match. In this case we find that the mismatch occurs at the same location of the text as before. Since this time a three-position shift is not enough, we attempt a four-position shift:

    Text     y x y z y x y z x y z x x y z x y z x y z x z x
    Pattern                          x y z x y z x z x

Still we get a mismatch. This time a shift of three positions is what is needed:

    Text     y x y z y x y z x y z x x y z x y z x y z x z x
    Pattern                                x y z x y z x z x

This completes the matching of the entire pattern. In this example the pattern is moved a total of 15 positions to the right, in five shift operations, before the match is found.

9 KMP as a Finite State Automaton

We can use a finite automaton to do string matching by setting up an automaton that matches just one word; when we reach the final state, we know that we have found the substring in the text. This technique is very efficient, because a finite automaton works by looking at each text symbol exactly once, so the matching can be done with no more than |T| state transitions. Constructing such an automaton, however, is not an easy task, and although algorithms are available that can do this, they take a lot of time. Because of this, general finite automata are not a good general-purpose solution to string matching.

When constructing a finite automaton to look for a substring in a piece of text, it is easy to build the links that move us from the start state to the final accepting state, because they are simply labelled with the characters of the substring. The problem occurs when we begin to add the additional links for the other characters, the ones that do not lead towards the final state. The KMP algorithm is based on finite automata but uses a simpler method of handling the situation in which the characters do not match. In KMP, each state is labelled with the symbol that should match at that point. We then need only two links from each state: one for a successful match and one for a failure. The success link takes us to the next node in the chain, and the failure link takes us back to a previous node determined by the word pattern.

For a pattern P[1..m], its string-matching automaton can be constructed as follows.

1. The state set Q is {0, 1, ..., m}. The start state q_0 is state 0, and state m is the only accepting state.

2. The transition function δ is defined, for any state q and character a, by δ(q, a) = σ(P[1..q] a), where σ(x) denotes the length of the longest prefix of P that is also a suffix of x.
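A brute-force construction that follows this definition of δ directly can be sketched as follows. The Python code is an illustration only (the function name build_dfa and the explicit alphabet argument are choices made here, not part of the report); it reproduces the transition table shown in the example below:

    def build_dfa(pattern, alphabet):
        # delta[q][a] = sigma(pattern[:q] + a): the length of the longest
        # prefix of the pattern that is a suffix of pattern[:q] + a.
        m = len(pattern)
        delta = [{} for _ in range(m + 1)]
        for q in range(m + 1):
            for a in alphabet:
                k = min(m, q + 1)
                while k > 0 and pattern[:k] != (pattern[:q] + a)[q + 1 - k:]:
                    k -= 1
                delta[q][a] = k
        return delta

    dfa = build_dfa("ababaca", "abc")
    print([dfa[q]["a"] for q in range(8)])   # [1, 1, 3, 1, 5, 1, 7, 1]
    print([dfa[q]["b"] for q in range(8)])   # [0, 2, 0, 4, 0, 4, 0, 2]
    print([dfa[q]["c"] for q in range(8)])   # [0, 0, 0, 0, 0, 6, 0, 0]

The three printed rows are the transitions on a, b and c from states 0 through 7, matching the table below.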

Let us consider an example with P = ababaca. The transition table of its string-matching automaton is:

    state      0  1  2  3  4  5  6  7
    input a    1  1  3  1  5  1  7  1
    input b    0  2  0  4  0  4  0  2
    input c    0  0  0  0  0  6  0  0

Now let us consider an example of string matching using this finite automaton. Let P = ababaca and T = abababacaba.

Step 1: q = 0, T[1] = a. Go into state q = 1.
Step 2: q = 1, T[2] = b. Go into state q = 2.
Step 3: q = 2, T[3] = a. Go into state q = 3.
Step 4: q = 3, T[4] = b. Go into state q = 4.
Step 5: q = 4, T[5] = a. Go into state q = 5.
Step 6: q = 5, T[6] = b. Go into state q = 4.
Step 7: q = 4, T[7] = a. Go into state q = 5.
Step 8: q = 5, T[8] = c. Go into state q = 6.
Step 9: q = 6, T[9] = a. Go into state q = 7.

State 7 is the accepting state, so after reading T[9] the automaton reports an occurrence of the pattern, and the matching halts once all the characters of the pattern have been matched against the given text.

10 Advantages and Disadvantages of KMP

Advantages:
- It runs in optimal time, O(m + n), which is very fast.
- The algorithm never needs to move backwards in the input text T, which makes it well suited to processing very large files.

Disadvantages:
- It does not work as well when the size of the alphabet increases, because the chance of a mismatch at each position becomes higher.
