Gregex: GPU based High Speed Regular Expression Matching Engine

2011 Fifth International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing

Gregex: GPU based High Speed Regular Expression Matching Engine

Lei Wang 1, Shuhui Chen 2, Yong Tang 3, Jinshu Su 4
School of Computer Science, National University of Defense Technology, Changsha, China
1 wangleinuts@gmail.com 2 csh999@263.net 3 ytang@nudt.edu.cn 4 sjs@nudt.edu.cn

Abstract: The regular expression matching engine is a crucial piece of infrastructure that is widely used in network security systems such as intrusion detection systems (IDS). We propose Gregex, a Graphics Processing Unit (GPU) based regular expression matching engine for deep packet inspection (DPI). Gregex leverages the computational power and high memory bandwidth of GPUs by storing data in the appropriate GPU memory spaces and executing massive numbers of GPU threads concurrently to process many packets in parallel. Three optimization techniques, ATP, CAB, and CAT, are proposed to significantly improve the performance of Gregex. On a GTX260 GPU, Gregex achieves a regular expression matching throughput of 126.8 Gbps, a speedup of 210x over a traditional CPU-based implementation and a speedup of 7.9x over the state-of-the-art GPU based regular expression engine.

I. INTRODUCTION

Signature-based deep packet inspection (DPI) is one of the most important mechanisms in today's network security systems. DPI inspects entire packets traveling through the network in real time to detect threats such as intrusions, worms, viruses, and spam. Regular expressions are widely used for describing DPI signatures because they are much more expressive and flexible than simple strings. Network intrusion detection systems (NIDS), such as Snort [1], use regular expressions to describe more complicated signatures.

Due to the limited computational power of CPUs and the high latency of I/O access [2], pure software implementations of regular expression matching engines cannot satisfy the performance requirements of DPI. A possible solution is to offload regular expression matching to hardware platforms [3], [4], [5], [6], such as ASICs, FPGAs, and NPs. Hardware-based solutions can achieve high performance, but they are complex and not flexible enough. Modern GPUs are specialized for compute-intensive, highly parallel computation; they are also cheaper and more programmable than other hardware platforms.

In this paper, we propose Gregex, a high speed GPU based regular expression matching engine for DPI. In Gregex, the DFA state transition table compiled from the regular expressions resides in the GPU's texture memory, and a large number of packets are copied to the GPU's global memory for matching. Massive numbers of GPU threads run concurrently, with each thread matching one packet. We describe three optimization techniques for Gregex. On a GTX260 device, Gregex achieves a regular expression matching throughput of 126.8 Gbps, which is about 210x faster than a traditional CPU implementation [7] and 7.9x faster than the solution proposed in [8].

The rest of this paper is organized as follows. Section II presents background knowledge and related work on GPU based regular expression matching techniques. The design and optimization of Gregex are introduced in Section III. The performance results are evaluated in Section IV. Finally, we conclude our work in Section V.

II. BACKGROUND

A. Regular Expression Matching Techniques

Regular expression matching engines can be based on either nondeterministic finite automata (NFA) or deterministic finite automata (DFA). In DPI, DFA approaches are preferred for their better performance.
In DFA approaches, a set of regular expressions is usually converted into one DFA by first compiling it into an NFA using Thompson's algorithm [9] and then converting the NFA to a DFA using the subset construction algorithm. Given the compiled DFA and an input string representing the network traffic, DPI needs to decide whether the DFA accepts the input string.

The DFA is represented by a state transition table and a state acceptance table. The state transition table is a two-dimensional matrix whose width and height are equal to the size of the alphabet and the number of states in the DFA, respectively; each cell contains the next state to move to. The state acceptance table is a one-dimensional array whose length is equal to the number of states in the DFA; each cell indicates whether the corresponding state is an accepting state. DFA matching therefore requires two state-table lookups (two memory accesses) per input byte: getting the next state and deciding whether it is an accepting state. On a modern CPU, one memory access may take many cycles to return a result. In contrast, when a GPU performs DFA matching, the massive number of concurrently executing threads can hide the memory access latency efficiently.
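As a concrete reference, the following CPU-side sketch shows the two lookups per input byte described above; the flat table layout and all identifiers are our own illustration, not code from the paper.

#include <stdint.h>

#define ALPHABET_SIZE 256

/* transition[s * ALPHABET_SIZE + c] holds the next state for state s on
 * input byte c; accept[s] is nonzero iff state s is an accepting state. */
static int dfa_match(const int *transition, const uint8_t *accept,
                     const uint8_t *input, int len)
{
    int state = 0;
    for (int i = 0; i < len; i++) {
        state = transition[state * ALPHABET_SIZE + input[i]];  /* lookup 1 */
        if (accept[state])                                     /* lookup 2 */
            return state;   /* an accepting state was reached: match */
    }
    return -1;              /* no match in this input */
}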

B. The CUDA Programming Model

We briefly review CUDA, which defines the architecture and programming model for NVIDIA GPUs. We focus on the GeForce GTX 200 series GPUs; more information can be found in the CUDA documentation [10], [11].

GPU Architecture: The GeForce GTX 200 series GPUs are based on a reengineered, enhanced, and extended Scalable Processor Array (SPA) architecture, which consists of 10 Thread Processing Clusters (TPCs). Each TPC is in turn made up of 3 Streaming Multiprocessors (SMs), and each SM contains 8 Streaming Processors (SPs). Every SM also includes texture filtering processors used in graphics processing. The GPU's compute architecture is SIMT (single instruction, multiple threads) for execution across each SM. SIMT improves upon pure SIMD (single instruction, multiple data) designs in both performance and ease of programmability.

Programming Model: In the CUDA model, the data-parallel portions of an application are expressed as device kernels which run on many threads. CUDA threads execute on the device (GPU), which operates as a coprocessor to the host (CPU) running the C program. A CUDA kernel is executed as a grid of thread blocks. The number of threads per block and the number of blocks per grid are specified by the programmer. Threads within a block can cooperate via shared memory, atomic operations, and barrier synchronization. All threads within a block are executed concurrently on an SM, and several blocks can execute concurrently on an SM.

Memory Hierarchy: CUDA devices use several memory spaces, which have different characteristics that reflect their distinct usages in CUDA applications. In addition to a number of 32-bit registers shared across all the active threads, each multiprocessor carries 16 KB of on-chip shared memory. The off-chip global memory offers large capacity and high transfer bandwidth to every SM, but at high latency. There are also two read-only memory spaces that provide the additional benefit of hardware caching and are accessible by all threads: the constant and texture memory spaces. The global, constant, and texture memory spaces are optimized for different memory usage patterns, but their effectiveness cannot be guaranteed.
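The following toy kernel illustrates the model just described: a grid/block launch configuration, per-block shared memory, and barrier synchronization. It is a minimal sketch with invented names, not part of Gregex.

#include <cuda_runtime.h>

__global__ void scale(const float *in, float *out, int n)
{
    __shared__ float tile[256];                    /* per-block shared memory */
    int i = blockIdx.x * blockDim.x + threadIdx.x; /* global thread ID */
    if (i < n) tile[threadIdx.x] = in[i];
    __syncthreads();                               /* barrier within the block */
    if (i < n) out[i] = 2.0f * tile[threadIdx.x];
}

void launch(const float *d_in, float *d_out, int n)
{
    /* The programmer chooses threads per block and blocks per grid. */
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(d_in, d_out, n);
}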
C. GPU based Regular Expression Matching Engines

Randy Smith et al. proposed a programmable signature matching system prototyped on an NVIDIA G80 GPU [12]. They made a detailed analysis of the regular control flow and of the parallelism available at the packet level. Two types of automata for regular expression matching were examined in their work: standard DFA and extended finite automata (XFA) [13], [14]. The XFA approach uses less memory than DFA but has a more complex execution control flow, which can hurt GPU performance by causing threads of the same warp to diverge. Their evaluation shows that the GPU based prototype achieves a speedup of 6 to 9 compared to an implementation on a Pentium 4 CPU.

Giorgos Vasiliadis et al. presented a GPU based regular expression matching engine [8]. In their work, regular expressions were compiled separately, and whole packets were processed by every thread in isolation. Their experimental results show that regular expression matching on an NVIDIA GeForce 9800 GX2 GPU can achieve up to 16 Gbps of raw processing throughput, a 48x speedup over CPU implementations. Furthermore, they extended the architecture of Gnort [7] by adding a GPU-assisted regular expression matching engine; the overall processing throughput of Snort was increased by a factor of eight compared to the default implementation.

Shuai Mu et al. proposed GPU based solutions for a series of core IP routing applications [15]. In their work, they implemented a finite automata based regular expression matching algorithm for the deep packet inspection application. On an NVIDIA GTX280 GPU, the proposed regular expression matching algorithm achieves up to a matching throughput of 9.3 Gbps and an overall throughput of 3.2 Gbps.

III. THE PROPOSED GREGEX

A. Framework

The framework of Gregex is depicted in Fig. 1. In Gregex, packets are stored in the GPU's global memory, and the DFA state transition table resides in the GPU's texture memory. Texture memory has a hardware cache, so the latency of DFA state transition table lookups can be significantly reduced.

Fig. 1. Framework of Gregex on a GTX 260 GPU: the packet buffer and result buffer reside in global memory, and the state table in texture memory, accessed by the SPs of each SM.

In Gregex, packets are processed in batches. Each thread processes one of the packets in isolation. Whenever a match occurs, the thread stores the regular expression's ID in the matching result buffer. The matching result buffer is a one-dimensional array allocated in global device memory; the size of the array is equal to the number of packets processed by the GPU at a time, as shown in Fig. 2(b).

B. Workflow

The packet processing workflow in Gregex can be divided into three phases: a pre-processing phase, a signature matching phase, and a post-processing phase. The pre-processing and post-processing phases, run by CPU threads, transfer packets from the CPU to the GPU and retrieve match results from GPU memory, respectively. The signature matching phase, run by GPU threads, performs the regular expression matching, as sketched in the host-side skeleton below.
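A minimal synchronous host-side skeleton of this three-phase workflow might look as follows; the buffer names and the kernel signature are our assumptions (the matching kernel itself is sketched in Section III-B below).

#include <stdint.h>
#include <cuda_runtime.h>

#define PKT_SLOT 2048   /* one 2 KB slot per packet, as described below */

__global__ void match_kernel(const uint8_t *pkts, int *results); /* defined later */

void process_batch(const uint8_t *h_pkts, int *h_results, int n_pkts)
{
    uint8_t *d_pkts; int *d_results;
    size_t bytes = (size_t)n_pkts * PKT_SLOT;
    cudaMalloc((void **)&d_pkts, bytes);
    cudaMalloc((void **)&d_results, n_pkts * sizeof(int));
    cudaMemset(d_results, 0, n_pkts * sizeof(int));   /* 0 means "no match" */

    /* 1. Pre-processing: batch-copy the packets to GPU global memory. */
    cudaMemcpy(d_pkts, h_pkts, bytes, cudaMemcpyHostToDevice);
    /* 2. Signature matching: one GPU thread per packet (for brevity we
     *    assume n_pkts is a multiple of 256). */
    match_kernel<<<n_pkts / 256, 256>>>(d_pkts, d_results);
    /* 3. Post-processing: copy the match results back to the host. */
    cudaMemcpy(h_results, d_results, n_pkts * sizeof(int), cudaMemcpyDeviceToHost);

    cudaFree(d_pkts); cudaFree(d_results);
}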

Fig. 2. The format of (a) the packets buffer (l slots of 2,048 bytes each) and (b) the matching results buffer (l 32-bit entries, each holding a regular expression ID) in GPU global memory.

1) Pre-processing phase: In the pre-processing phase, Gregex performs the necessary preparation work, including constructing the DFA from the regular expressions and transferring packets to the GPU.

Compiling regular expressions to a DFA: In our work, the state acceptance table is merged into the state transition table as its last column when constructing the DFA. Once the DFA has been constructed, the state transition table is copied to the GPU's texture memory in two steps: (1) copy the state transition table from CPU memory to GPU global memory; (2) bind the state transition table in global memory to the texture cache.

Transferring packets to the GPU: We now consider how packets are transferred from CPU memory to device memory. Due to the overhead associated with each transfer, batching many packets into one larger transfer performs significantly better than making each transfer separately [11], so Gregex copies packets to device memory in batches. The format of the buffer allocated for storing packets in global memory is illustrated in Fig. 2(a). The length of each packet slot is set to 2 KB. If a packet is shorter than 2 KB, Gregex pads it with 0x00 at the end; if a packet is longer than 2 KB, Gregex splits it into several smaller ones. An IP packet may be up to 65,535 bytes in length, but using the maximum packet length as the slot size in the buffer would waste bandwidth.

2) Signature matching phase: During regular expression matching, each GPU thread processes its respective packet in isolation. Algorithm 1 gives the multi-threaded procedure for DFA matching on the GPU.

Algorithm 1. Multi-threaded DFA matching procedure.
Input: packets: a batch of packets to match
Input: DFA: state transition table
Output: Results: match results
1   packet ← packets[thread_ID];
2   current_state ← 0;
3   foreach byte in packet do
4       input ← packet[byte];
5       next_state ← DFA[current_state, input];
6       current_state ← next_state;
7       if DFA[current_state, alphabet_size + 1] = 1 then
8           Results[thread_ID] ← regex_ID;
9       end
10  end

Line 1 obtains the address of the packet to match according to the thread's global ID. Lines 2-10 perform the DFA matching: at each iteration of the foreach loop, the matching thread reads one byte from the packet, looks up the state transition table for the next state, and checks whether that state is an accepting state. If the DFA reaches an accepting state, the ID of the regular expression that matched the packet is recorded in Results.

3) Post-processing phase: When all GPU threads finish matching, the matching result array is copied to CPU memory. The kth cell of the matching result array contains the ID of the regular expression that matches the kth packet; if no match occurs, it is set to zero. A possible CUDA realization of the texture binding and of Algorithm 1 is sketched below.
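For concreteness, here is one way Algorithm 1 and the two-step texture binding could be written in the CUDA C of that era. The merged-table encoding (we assume the extra column stores the matching regular expression's ID, with 0 for non-accepting states) and all identifiers are our assumptions, not the authors' published code.

#include <stdint.h>
#include <cuda_runtime.h>

#define PKT_SLOT 2048
#define COLS     257   /* 256 input symbols + 1 merged acceptance column */

/* Legacy texture reference API (CUDA 3.x era; removed in CUDA 12). */
texture<int, 1, cudaReadModeElementType> tex_dfa;

__global__ void match_kernel(const uint8_t *pkts, int *results)
{
    /* One thread per packet; assumes the batch size is a multiple of the
     * block size, so every thread owns a valid slot. */
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    const uint8_t *pkt = pkts + (size_t)tid * PKT_SLOT;
    int state = 0;
    for (int i = 0; i < PKT_SLOT; i++) {
        state = tex1Dfetch(tex_dfa, state * COLS + pkt[i]); /* next state */
        int rid = tex1Dfetch(tex_dfa, state * COLS + 256);  /* acceptance column */
        if (rid)
            results[tid] = rid;   /* record the matching regex ID */
    }
}

/* Host side: step 1 copies the table to global memory; step 2 binds it to
 * the cached texture path. */
void upload_dfa(const int *h_table, int n_states)
{
    int *d_table;
    size_t bytes = (size_t)n_states * COLS * sizeof(int);
    cudaMalloc((void **)&d_table, bytes);
    cudaMemcpy(d_table, h_table, bytes, cudaMemcpyHostToDevice);
    cudaBindTexture(NULL, tex_dfa, d_table, bytes);
}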
C. Optimizations

Gregex exploits optimization opportunities in the workflow by maximizing parallelism as well as reducing GPU memory access latency. Three optimization techniques, ATP, CAB, and CAT, are proposed to improve the performance of Gregex.

1) Asynchronous packet Transfer with Page-locked memory (ATP): Packet transfer throughput is the most important performance factor of Gregex. Higher bandwidth between the host and the device is achieved when using page-locked memory [11].

Asynchronous copy: In CUDA, data transfers between the host and the device using the cudaMemcpyAsync function are non-blocking: control is returned immediately to the host thread. Asynchronous copies enable overlapping data transfers with host and device computation.

Zero copy: Zero copy requires mapped page-locked memory and enables GPU threads to access host memory directly. Zero copy makes kernel execution overlap data transfers automatically.
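A minimal sketch of ATP, under assumed names: the packet batch is staged in page-locked host memory and copied with a non-blocking call on a CUDA stream, so the transfer can overlap other work.

#include <stdint.h>
#include <cuda_runtime.h>

void transfer_batch_async(uint8_t *h_pinned, uint8_t *d_pkts,
                          size_t bytes, cudaStream_t stream)
{
    /* h_pinned must come from cudaMallocHost()/cudaHostAlloc(): page-locked
     * memory cannot be swapped out and permits truly asynchronous DMA. */
    cudaMemcpyAsync(d_pkts, h_pinned, bytes, cudaMemcpyHostToDevice, stream);
    /* Control returns immediately; the matching kernel can be queued on the
     * same stream, and cudaStreamSynchronize(stream) called only when the
     * results are needed. For zero copy one would instead use cudaHostAlloc
     * with cudaHostAllocMapped and cudaHostGetDevicePointer. */
}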

2) Coalesced global memory access in regular expression matching: Global memory access has very high latency, about 400-600 cycles per load/store operation. All global memory accesses by a half-warp of threads (in CUDA, a warp is a group of threads executed physically in parallel; a half-warp is the first or second half of a warp) can be coalesced into one or two transactions if the threads access a contiguous range of addresses. In Algorithm 1, special attention must therefore be paid to how threads load packets from global memory and store matching results to Results.

Coalesced global memory Access by Buffering packets to shared Memory (CAB): In this work, coalesced global memory access is obtained by having each half-warp read contiguous locations of global memory into shared memory; there is no performance penalty for non-contiguous access in shared memory as there is in global memory. We use s_packets, a 32 x 32 shared memory array of 32-bit words, to buffer packet data from global memory for every thread. If the total length of a packet is L bytes, a thread takes L/32 iterations in total to process its packet. In each iteration, the threads of a block cooperatively read data into s_packets, avoiding uncoalesced global memory access, and then each thread matches signatures against one row of s_packets separately.

However, a shared memory bank conflict will occur if two or more threads in a half-warp access bytes within different 32-bit words belonging to the same bank. A way to avoid this conflict is to pad the shared memory array by one column. After changing the size of s_packets to 32 x 33, the data in cells (i, j) and (x, y) of s_packets are mapped to the same bank if and only if

    ((i * 33 + j) - (x * 33 + y)) mod banks_num = 0,

where banks_num = 16 in the current GPU architecture. When the threads of a half-warp read data in the same column, that is, j = y, we have

    ((i - x) * 33 + (j - y)) mod banks_num = ((i - x) * 33) mod 16.

Since ((i - x) * 33) mod 16 = (i - x) mod 16 is nonzero for distinct threads of a half-warp, a bank conflict never occurs within a half-warp.

Coalesced global memory Access by Transposing the packets buffer (CAT): Another technique to avoid uncoalesced global memory access is to transpose the packets buffer before matching, much like transposing a matrix. A detailed document on optimizing matrix transpose in CUDA by Greg Ruetsch [16] is released along with the CUDA SDK, and our high performance CUDA matrix transpose kernel simply follows Ruetsch's steps in [16]. With packet buffer transposing, the total packet processing time of Gregex consists of the time to transfer packets to GPU memory, the time to transpose the packets buffer, and the time to match the packets against the signatures. Transposing the packets buffer makes each half-warp of GPU threads access a contiguous range of addresses, as shown in Fig. 3. A kernel sketch of the CAB scheme is given below.

Fig. 3. The format of the packets buffer after transposing.
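One way a CAB-style kernel could look, reusing tex_dfa, COLS, and PKT_SLOT from the matching-kernel sketch above; the 32-thread block shape and word-level indexing are our assumptions. Each block handles 32 packets: the threads cooperatively stage a padded 32 x 33 tile of packet words with coalesced loads, then each thread matches the bytes of its own row.

/* Reuses tex_dfa, COLS, and PKT_SLOT from the matching-kernel sketch. */
__global__ void match_cab(const uint32_t *pkts_words, int *results)
{
    __shared__ uint32_t s_packets[32][33];   /* padded column: no bank conflicts */
    const int t = threadIdx.x;               /* 32 threads per block */
    const int base_pkt = blockIdx.x * 32;    /* this block's 32 packets */
    const int words_per_pkt = PKT_SLOT / 4;
    int state = 0;                           /* DFA state lives in a register */

    for (int chunk = 0; chunk < words_per_pkt / 32; chunk++) {
        /* Cooperative load: the 32 threads read 32 consecutive words of one
         * packet per step, so each global memory access is coalesced. */
        for (int r = 0; r < 32; r++)
            s_packets[r][t] =
                pkts_words[(size_t)(base_pkt + r) * words_per_pkt + chunk * 32 + t];
        __syncthreads();

        /* Each thread now matches the 128 staged bytes of its own packet. */
        const uint8_t *row = (const uint8_t *)s_packets[t];
        for (int i = 0; i < 128; i++) {
            state = tex1Dfetch(tex_dfa, state * COLS + row[i]);
            int rid = tex1Dfetch(tex_dfa, state * COLS + 256);
            if (rid)
                results[base_pkt + t] = rid;
        }
        __syncthreads();   /* don't overwrite the tile while others still read */
    }
}

CAT, by contrast, would run a separate transpose kernel once per batch (following [16]) so that the unmodified matching kernel reads the transposed buffer with naturally coalesced accesses.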
TABLE I
PERFORMANCE COMPARISON BETWEEN GREGEX AND OTHER GPU BASED IMPLEMENTATIONS

Hardware     | Algorithm  | Throughput (Gbps) | Speedup
GTX260 (1)   | DFA (CAT)  | 126.8             | -
GTX260       | DFA (CAB)  | 26.9              | 4.7
8600GT (2)   | Gnort AC   | 1.4 [7]           | 90.5
9800GX2 (3)  | DFA        | 16 [8]            | 7.9
GTX280 (4)   | AC         | 9.3 [15]          | 13.6

(1) Contains 216 SPs organized in 27 SMs, running at 1.35 GHz with 896 MB of memory.
(2) Contains 32 SPs organized in 4 SMs, running at 1.2 GHz with 512 MB of memory.
(3) Consists of 256 SPs organized in 16 SMs, running at 1.5 GHz with 512 MB of memory.
(4) Contains 240 SPs organized in 30 SMs, running at 1.45 GHz with 1024 MB of memory.

IV. EVALUATION RESULTS

A. Experimental Setup

Gregex is implemented on a PC with a 2.66 GHz Intel Core 2 Duo processor, 4 GB of memory, and an NVIDIA GeForce GTX 260 GPU card. The GTX260 GPU contains 216 SPs organized in 27 SMs, running at 1.35 GHz with 896 MB of global memory. We implemented Gregex under CUDA version 3.1 with device driver version 257.21. Gregex uses the signatures in the rule set released with Snort 2.7. The rule set consists of 56 different signature sets; for each signature set, we construct a single DFA for all the regular expressions in it. We use two different network traces to evaluate the performance of Gregex: a trace collected on the Internet and a trace from the 1998-1999 DARPA intrusion detection evaluation data set [17]. In our experiments, Gregex reads packets from the local disk and then transfers them in batches to GPU memory for processing.

B. Packets Transfer Performance

We first evaluate the throughput of packet transfers from CPU memory to GPU global memory, which varies with the data size. For this experiment we test two kinds of host memory: page-locked memory and pageable memory. Page-locked memory cannot be swapped out to disk by the operating system before the GPU has finished using it, so it is faster than pageable memory, as shown in Fig. 4. Both the graphics card and the mainboard in our system support PCI-E x16 Gen2, yet the throughput we actually obtain falls far short of the theoretical peak bandwidth between host memory and device memory (64 Gbps). Larger transfers perform significantly better than smaller ones, but once the data size exceeds 8 MB the throughput no longer increases notably.

Fig. 4. Throughput of transferring packets to an NVIDIA GTX 260 GPU with different data sizes, for page-locked and pageable memory.

Fig. 5. Performance of Gregex versus the number of blocks per grid (64 to 512): (a) regular expression matching throughput and (b) overall throughput, for the CAT, CAB, ATP + CAT, and ATP + CAB configurations.

C. Regular Expression Matching Performance

In this experiment, we evaluate the processing performance of Gregex, measured as the mean number of bits of data processed per second. From Fig. 5(a), we can see that Gregex reaches a regular expression matching throughput of 126.8 Gbps in the best case. Table I compares Gregex with other GPU based regular expression matching engines. The performance statistics presented in Table I are raw performance: the time used to transfer packets to GPU memory is not included in the processing time. Gregex is about 7.9x faster than the state-of-the-art GPU solution proposed in [8].

D. Overall Throughput of Gregex

We now evaluate the overall performance of Gregex. As shown in Fig. 5(b), the best-case overall performance of Gregex is 25.6 Gbps, obtained when packets are transferred asynchronously to GPU global memory using page-locked memory; this is 8x faster than the solution proposed in [15].

V. CONCLUSION

This paper introduced Gregex, a high speed GPU based regular expression matching engine. Gregex takes advantage of the high parallelism of GPUs to process packets in parallel. We described three optimization techniques for Gregex in detail, namely ATP, CAB, and CAT, which significantly improve its performance. Our experimental results indicate that Gregex is about 7.9x faster than the state-of-the-art GPU based regular expression engine. Gregex is highly flexible and low-cost as well as high-speed, and it can easily be applied to network security applications such as IDS and anti-virus systems.

VI. ACKNOWLEDGMENT

This work has been supported by the National High-Tech Research and Development Plan of China under Grant No. 2009AA01A346.

REFERENCES

[1] Snort, www.snort.org.
[2] N. Jacob and C. Brodley, "Offloading IDS computation to the GPU," in Proceedings of the 22nd Annual Computer Security Applications Conference. IEEE Computer Society, 2006, pp. 371-380.
[3] S. Kumar, J. Turner, and J. Williams, "Advanced algorithms for fast and scalable deep packet inspection," in Proceedings of the 2006 ACM/IEEE Symposium on Architecture for Networking and Communications Systems. San Jose, California, USA: ACM, 2006, pp. 81-92.
[4] F. Yu, Z. Chen, Y. Diao, T. V. Lakshman, and R. H. Katz, "Fast and memory-efficient regular expression matching for deep packet inspection," in Proceedings of the 2006 ACM/IEEE Symposium on Architecture for Networking and Communications Systems. San Jose, California, USA: ACM, 2006, pp. 93-102.
[5] B. C. Brodie, R. K. Cytron, and D. E. Taylor, "A scalable architecture for high-throughput regular-expression pattern matching," SIGARCH Comput. Archit. News, vol. 32, no. 2, pp. 191-202, 2006.
[6] M. Becchi, C. Wiseman, and P. Crowley, "Evaluating regular expression matching engines on network and general purpose processors," in Proceedings of the 2009 ACM/IEEE Symposium on Architectures for Networking and Communications Systems (ANCS), Princeton, New Jersey, 2009.
[7] G. Vasiliadis, S. Antonatos, M. Polychronakis, E. P. Markatos, and S. Ioannidis, "Gnort: High performance network intrusion detection using graphics processors," in Proceedings of the 11th International Symposium on Recent Advances in Intrusion Detection. Cambridge, MA, USA: Springer-Verlag, 2008, pp. 116-134.
[8] G. Vasiliadis, M. Polychronakis, S. Antonatos, E. P. Markatos, and S. Ioannidis, "Regular expression matching on graphics hardware for intrusion detection," in Proceedings of the 12th International Symposium on Recent Advances in Intrusion Detection, Saint-Malo, France, 2009, pp. 265-283.
[9] K. Thompson, "Programming techniques: Regular expression search algorithm," Commun. ACM, vol. 11, no. 6, pp. 419-422, 1968.
[10] NVIDIA, CUDA C Programming Guide, Version 3.1.
[11] NVIDIA, CUDA C Best Practices Guide, Version 3.1.
[12] R. Smith, N. Goyal, J. Ormont, K. Sankaralingam, and C. Estan, "Evaluating GPUs for network packet signature matching," in Proceedings of the International Symposium on Performance Analysis of Systems and Software, 2009.
[13] R. Smith, C. Estan, and S. Jha, "XFA: Faster signature matching with extended automata," in IEEE Symposium on Security and Privacy. IEEE Computer Society, 2008, pp. 187-201.
[14] R. Smith, C. Estan, S. Jha, and S. Kong, "Deflating the big bang: Fast and scalable deep packet inspection with extended finite automata," SIGCOMM Comput. Commun. Rev., vol. 38, no. 4, pp. 207-218, 2008.
[15] S. Mu, X. Zhang, N. Zhang, J. Lu, Y. S. Deng, and S. Zhang, "IP routing processing with graphic processors," in Design, Automation and Test in Europe, 2010, pp. 93-99.
[16] G. Ruetsch and P. Micikevicius, "Optimizing matrix transpose in CUDA," 2009.
[17] J. McHugh, "Testing intrusion detection systems: A critique of the 1998 and 1999 DARPA intrusion detection system evaluations as performed by Lincoln Laboratory," ACM Trans. Inf. Syst. Secur., vol. 3, no. 4, pp. 262-294, 2000.