INTERNATIONAL JOURNAL OF PROFESSIONAL ENGINEERING STUDIES Volume VI /Issue 4 / AUG 2016


Implementation of Content Addressable Memory Based on Sparse Cluster Network using Load Store Queue Technique

D. Sateesh Kumar, Email: Sateeshkumard4@Gmail.Com
D. Ajay Kumar, M.E., Asst. Professor, Email: Ajaybabuji@Gmail.Com
Sir C. R. Reddy College of Engineering, Eluru, West Godavari (Dt), Andhra Pradesh - 534007

Abstract: In all types of electronic circuits, optimization of power and delay are the major topics of interest. In the emerging trends of VLSI technology, a CAM is the hardware embodiment of what in software terms would be called an associative array. A Content Addressable Memory architecture based on a Sparse Clustered Network with the Load Store Queue (LSQ) technique is proposed, which on average eliminates most of the parallel comparisons during a search. The CAM based on a Sparse Clustered Network using the LSQ technique is designed to match the given input data with the stored values. Each stored data word is specified with a particular tag. When input data is given with a tag, a search is performed among the stored data; when a match occurs between the input and a stored value, the output displays the input data. When a mismatch occurs between the input and the stored data, the output does not display the input value. CAM operation is faster when compared with other memories. Thus, the power consumption and delay are reduced by using SCN-CAM with the LSQ technique.

Keywords: Associative memory, content addressable memory (CAM), Sparse Clustered Networks (SCNs), Load Store Queue (LSQ).

I. Introduction

A Content-Addressable Memory (CAM) is a type of memory that can be accessed using its contents rather than an explicit address. A CAM can be used as a coprocessor for the network processing unit (NPU) to offload table lookup tasks. CAMs are fast data-parallel search circuits: unlike standard memory circuits such as Random Access Memory (RAM), a data search is performed against all the stored information in a single clock cycle. In fact, CAM is an outgrowth of RAM. CAMs are widely used in many applications such as memory mapping, cache controllers for central processing units, and data compression and coding. Their primary application is fast Internet Protocol (IP) packet classification and forwarding in high-speed network routers and processors. IP routing is accomplished by examining the protocol header fields, i.e. the originating and destination addresses and the incoming and outgoing ports, against stored information in the routing tables. If a match is registered, the packet is forwarded towards the port(s) defined in the table. On very high-speed networks with huge traffic volumes, this task must be performed with fast, massive parallelism. However, supporting high speeds and large lookup tables costs silicon area and power. Power dissipation, silicon area and speed are the three major challenges for designers, since there is always a trade-off between them.

A CAM word is made of CAM cells. A CAM cell compares its stored bit against the corresponding search bit provided on the search line (SL). The combined search result for the entire word is generated on the match line (ML). The match-line sense amplifier (MLSA) senses the state of the ML and outputs a full logic-swing signal. At the circuit level, research has focused on saving power by reducing the ML and SL signal swing or by using low-power current-based approaches.
Even with these circuit innovations, the basic CAM circuit structure still leads to very high power consumption if it is used directly to build a large-capacity CAM.
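As a point of reference for the storage-plus-comparison behaviour described above, the following behavioural VHDL sketch models a single CAM word: the stored word is written like an ordinary register, and the match flag is the word-wide comparison that the ML/MLSA circuitry realizes at transistor level. This is a minimal illustration only; the entity name, port names and width are assumptions, not taken from the paper.

-- Minimal behavioural sketch of one CAM word: bit storage plus a
-- parallel comparison that drives a full-swing match flag.
library ieee;
use ieee.std_logic_1164.all;

entity cam_word is
  generic (WIDTH : positive := 8);
  port (
    clk         : in  std_logic;
    we          : in  std_logic;                           -- write enable
    write_data  : in  std_logic_vector(WIDTH-1 downto 0);  -- word to store
    search_data : in  std_logic_vector(WIDTH-1 downto 0);  -- search lines
    match       : out std_logic                            -- match line (full swing)
  );
end entity cam_word;

architecture behavioural of cam_word is
  signal stored : std_logic_vector(WIDTH-1 downto 0) := (others => '0');
begin
  -- Bit storage, as in an ordinary RAM row.
  write_proc : process(clk)
  begin
    if rising_edge(clk) then
      if we = '1' then
        stored <= write_data;
      end if;
    end if;
  end process;

  -- Bit comparison: every bit is compared at once, which is what the
  -- ML and MLSA provide at circuit level.
  match <= '1' when stored = search_data else '0';
end architecture behavioural;

A full CAM array would instantiate one such word per entry, feed all of them the same search data, and pass the match outputs to a priority encoder, as described later for the conventional architectures.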

Fig 1: Circuit structure of a CAM

A basic CAM cell performs two functions: bit storage, as in RAM, and bit comparison, which is unique to CAM. At the transistor (circuit) level, the CAM structure is implemented as NAND-type or NOR-type. At the architectural level, bit storage uses a simple SRAM cell and the comparison function is equivalent to an XOR/XNOR logic operation. In order to access a particular entry in a CAM, a search data word is compared against the previously stored entries in parallel to find a match. Each stored entry is associated with a tag that is used in the comparison process. Once a search data word is applied to the input of a CAM, the matching data word is retrieved within a single clock cycle if it exists. This prominent feature makes CAM a promising candidate for applications where frequent and fast look-up operations are required, such as translation lookaside buffers (TLBs), network routers, database accelerators, image processing and parametric curve extraction.

A new family of associative memories based on sparse clustered networks (SCNs) has recently been introduced and implemented using field programmable gate arrays (FPGAs). Such memories make it possible to store many short messages instead of a few long ones, as in conventional Hopfield networks, with a significantly lower level of computational complexity. Furthermore, a significant improvement is achieved in the number of information bits stored per memory bit (efficiency). The SCN-CAM consists of an SCN-based classifier connected to a CAM array. The CAM array is divided into equally sized sub-blocks that are activated independently. When one of the sub-blocks is activated, the tag is compared with the entries stored in it while the other entries are kept deactivated. This reduces the power consumption.

II. OVERVIEW OF CAM

In a conventional CAM array, each entry holds a tag that, if matched with the input, points to the location of a data word in a Static Random Access Memory (SRAM) block. The actual data are stored in the SRAM and the tag is simply a reference to them. Therefore, when it is required to search for a data word in the SRAM, it suffices to search for its corresponding tag.

Fig 2: Typical CAM array

A search data register is used to store the input bits. The register applies the search data on the differential search lines, which are shared among the entries. The search data are then compared with the CAM entries. Each CAM word is attached to a match line shared among its constituent bits, which indicates whether they match the input bits.

III. LITERATURE REVIEW

Yamagata T et al. (1992) proposed a 288-kb (8K words x 36 b) fully parallel content addressable memory (CAM) LSI using a compact dynamic CAM cell with a stacked-capacitor structure and a novel hierarchical priority encoder [1]. The proposed CAM cell consists of five NMOS transistors and two stacked capacitors. Four of the transistors (Ms0, Ms1, Mw0, Mw1) are used to store and access data, and one (Md) is used as a diode to isolate current paths during match operations. Charges are stored on the stacked capacitors (Cs0, Cs1) and on the gates of Ms0 and Ms1. The opposite electrodes of Cs0 and Cs1 are connected to a cell-plate voltage Vcp, which is equal to half Vcc (Vcc: supply voltage). Two bit lines supply data in write and search operations. The word line (WL) allows write access to each cell in a word. The match line (ML) passes through a word to perform a logical AND of the results of each cell's comparison; the ML is also used to read cell data. The storage capacitor Cs0 is stacked on the gates of Ms0 and Mw0, which are fabricated in the first poly-Si layer [1]. The stacked capacitor is composed of the second poly-Si layer, the insulator film and the third poly-Si layer. The second poly-Si layer (storage node) is also used to connect the drain (or source) of Mw0 to the gate of Ms0. Similarly, the storage capacitor Cs1 is formed on the gates of Ms1 and Mw1. A write operation is performed by activating the selected WL and then driving the bit lines according to the write data. A read operation is accomplished by discharging the bit lines and then driving the selected ML to a high level. A match operation is achieved by precharging both the bit lines and the match lines to a high level and then loading the bit lines with the search data. When a match occurs at several words in a search operation (multiple response), the CAM outputs the address of the matched word with the highest priority; a priority encoder (PE) circuit is used for multiple-response resolution and match-address generation [1]. As the bit capacity of CAMs grows, it is the number of words that increases rather than the word length, so in a high-density CAM chip the cell array is divided into several blocks. This creates a serious layout problem for the priority encoder: when a priority encoder is incorporated in each block, its silicon area and power dissipation increase in proportion to the number of blocks. To overcome this, a novel hierarchical priority encoder architecture suitable for high-density CAM was proposed.

Schultz K et al. (1996) developed a CAM based on a bank-selection architecture [2]. In the bank-selection architecture, the CAM array is divided into B equally partitioned banks that are activated based on the value of log2(B) extra bits added to the search data word. These extra bits are decoded to determine which bank must be selected. The basic concept is that bank selection divides the CAM into subsets called banks; the bank-selection scheme partitions the Content Addressable Memory and shuts off the unneeded banks. Two extra data bits, called bank-select bits, partition the Content Addressable Memory into four blocks. When storing data, the bank-select bits determine into which of the four blocks the data is stored. When searching, the bank-select bits determine which one of the four blocks to activate and search. A decoder accomplishes the selection by providing enable signals to each block [2]. In the original preclassification schemes, this architecture was used to reduce area by sharing the comparison circuitry between blocks. Although the blocks are physically separate, they can be arranged such that words from different blocks are adjacent. Since only one of the four blocks is active at any time, only one quarter of the comparison circuitry is necessary compared to the case with no bank selection, thus saving area. Bank selection also reduces overall power consumption in proportion to the number of blocks.
Thus, using four banks ideally reduces the power consumption by 75% compared with a Content Addressable Memory without bank selection.
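To make the decode-and-enable step concrete, the behavioural VHDL sketch below shows how two bank-select bits could gate one of four banks during a search. It is an illustrative sketch only; the entity, port names and the fixed bank count are assumptions and do not reproduce Schultz and Gulak's design [2].

-- Behavioural sketch of bank selection: two extra search bits enable
-- exactly one of four CAM banks, so only that bank's match logic is
-- exercised while the other banks stay shut off.
library ieee;
use ieee.std_logic_1164.all;

entity bank_select is
  port (
    bank_bits   : in  std_logic_vector(1 downto 0);  -- bank-select bits added to the search word
    search_en   : in  std_logic;                      -- global search strobe
    bank_enable : out std_logic_vector(3 downto 0)    -- one-hot enables, one per bank
  );
end entity bank_select;

architecture behavioural of bank_select is
begin
  process(bank_bits, search_en)
  begin
    bank_enable <= (others => '0');        -- unneeded banks remain disabled
    if search_en = '1' then
      case bank_bits is
        when "00"   => bank_enable(0) <= '1';
        when "01"   => bank_enable(1) <= '1';
        when "10"   => bank_enable(2) <= '1';
        when others => bank_enable(3) <= '1';
      end case;
    end if;
  end process;
end architecture behavioural;

Only the enabled bank's match lines would then be precharged and evaluated, which is where the power saving described above comes from.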

IV. SCN-CAM USING LSQ

The Content Addressable Memory based on a Sparse Clustered Network is designed with the LSQ technique, using binary connections that eliminate most of the parallel comparisons during a search. When input data is presented to the network, it is compared with the stored data; when a match occurs between the input and a stored value, the comparisons against the other stored data are eliminated. This reduces the power consumption of the SCN-CAM with the LSQ method.

Fig 3: Block diagram of SCN-CAM with LSQ

Fig 3 shows the block diagram of the SCN-CAM with LSQ. The SCN-CAM consists of an SCN-based classifier connected to a special-purpose CAM array. The SCN-based classifier is first trained with the association between the tags and the addresses of the data to be retrieved later. The CAM array is based on a typical architecture, but is divided into several sub-blocks that can be compare-enabled independently. When input data is presented to the SCN, it narrows the search down to the few stored entries that could match the input value. These candidates are then passed to the CAM blocks, which compare the input value with the data already stored in them; when a match occurs between the input and a stored value, the output is activated. Once the sub-blocks are activated, the tag is compared against the few entries in them while the rest are kept deactivated. The data are then loaded into the queue, and after loading, the activated data are written to the Random Access Memory. When there is a match in the memory the output is displayed; when there is no match the output is not displayed. The SCN-CAM with LSQ is designed using Xilinx software.

A. SCN-CAM

The Content Addressable Memory is designed based on a Sparse Clustered Network, which reduces the number of comparisons during a search. When data is presented to the SCN, it enables the corresponding data stored inside the CAM; when the data is enabled, there is a match between the input and a stored value, and the output is displayed.

Fig 4: SCN-CAM

Fig 4 shows the block diagram of the proposed SCN-CAM. When an input is presented to the network, it is searched within the network; when a match with stored data occurs, the result is displayed on the data-out pin. The pass/fail pin displays the pass value during functional-mode operation and the fail condition during test-mode operation.

i. Architecture of the SCN-based Classifier

The SCN-based classifier used in the SCN-CAM architecture is shown in Fig 5.

Fig 5: Architecture of the SCN-based classifier

The SCN-based classifier generates the compare-enable signals for the CAM sub-blocks attached to it. It consists of an SRAM module, decoder, encoder, comparator, counter and multiplexer. The input tag is first reduced in length to q bits and segmented into c equal-length parts. Each segment is presented to its corresponding one-hot decoder, as shown in Fig 5, which determines which row of the SRAM is to be accessed. The location of this row corresponds to the index of an activated neuron in P1.
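The following much-simplified behavioural VHDL sketch illustrates this compare-enable generation under toy assumptions (q = 8, c = 2 segments, 16 rows per connection memory, 8 CAM sub-blocks). A sub-block is enabled here only when every segment's selected row points to it, which is one simple way to combine the cluster outputs; all names, sizes and the placeholder memory contents are illustrative rather than taken from the paper.

-- Simplified compare-enable generation: the reduced tag is split into
-- segments, each segment one-hot selects a row of a small connection
-- memory, and the rows are combined to enable CAM sub-blocks.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity scn_classifier is
  port (
    reduced_tag    : in  std_logic_vector(7 downto 0);  -- q-bit reduced tag
    compare_enable : out std_logic_vector(7 downto 0)   -- one bit per CAM sub-block
  );
end entity scn_classifier;

architecture behavioural of scn_classifier is
  type conn_mem_t is array (0 to 15) of std_logic_vector(7 downto 0);
  -- Connection values that the training step would normally write into
  -- the SRAM modules; filled with placeholder data here.
  constant CONN_MEM_0 : conn_mem_t := (others => (others => '1'));
  constant CONN_MEM_1 : conn_mem_t := (others => (others => '1'));
begin
  process(reduced_tag)
    variable row0, row1 : std_logic_vector(7 downto 0);
  begin
    -- Each 4-bit segment acts as a one-hot row select for its cluster.
    row0 := CONN_MEM_0(to_integer(unsigned(reduced_tag(3 downto 0))));
    row1 := CONN_MEM_1(to_integer(unsigned(reduced_tag(7 downto 4))));
    -- A sub-block is compare-enabled only when both clusters select it.
    compare_enable <= row0 and row1;
  end process;
end architecture behavioural;

Each '1' in compare_enable would then gate the precharge and evaluation of the corresponding CAM sub-block, leaving the remaining sub-blocks deactivated.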

In order to read the stored values in the SRAM, a sense amplifier is not required, since the bit lines are attached to only a few cells. Therefore, in order to reduce the switching activity of the successive logic stages, the complementary bit line is used to read the values of the SRAM cells, since it is pulled down during a read when a 1 is being read. In each SRAM module, the accessed row is the only row that can contain the information leading to the activation of a neuron in PII, which inherently eliminates unnecessary operations between the connection values and the neural values. The encoding and decoding techniques are used to serve the data without distortion: the encoder encodes the data with an address, and the decoder decodes the data from that address. The SRAM is capable of performing both read and write operations. When the read mode is selected, the output is enabled; when the write-mode signal is in the off condition, the write operation is performed. Thus, when a search is issued as a read, the stored value is displayed if it matches the input data. The counter counts the operations one by one after each comparison is done. The Xilinx code for all blocks is written in VHDL.

ii. Design of SCN-CAM

The SCN-CAM is designed with six input pins and two output pins. The design of the SCN-CAM is created using Xilinx software. Fig 6 shows the design of the SCN-CAM; it is simulated using the Xilinx and ModelSim tools. When input data is applied to the SCN-CAM, the data is encoded with a specific address, and the decoder is then used to decode the data from that address. After encoding and decoding, the data is compared with the data stored in the Static Random Access Memory. During this process, matches may occur between the input and the stored data, and the matching data is driven out through the multiplexer. The comparator compares the input and the stored data, and the counter counts the operations one by one when a mismatch occurs. When a mismatch occurs between the input and the stored data, the output is disabled. For the purposes of matching and mismatching, two modes are considered: test mode and functional mode. During test-mode operation, the mode is set to 1 and the process does not operate. In functional-mode operation, the mode is set to 0; at this stage, the input value matches the stored data, so the output is enabled and the stored data that matches the input is displayed as the output. The Register Transfer Level (RTL) schematic of the SCN-CAM is displayed using Xilinx, and the output is simulated using ModelSim.

B. Load Store Queue (LSQ) Technique

The LSQ built into Simics is based on the instruction tree. Store transactions are kept in the tree to make it possible to inspect the store-queue state on different speculation paths. Inserting a store transaction into the queue takes no time, and the LSQ has an infinite size; it is up to the user module to set the delay and restrictions that limit the LSQ. The LSQ enforces program-order consistency.
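As a rough illustration of the in-order store queue that the LSQ technique relies on, the sketch below models a small FIFO of store transactions that are appended in program order and drained on commit. Depth, widths and port names are assumptions made for illustration; it does not reproduce the Simics LSQ or the design used in the paper.

-- Behavioural sketch of a small in-order store queue: stores are
-- appended at the tail and committed from the head, preserving
-- program order.
library ieee;
use ieee.std_logic_1164.all;

entity store_queue is
  generic (
    DEPTH      : positive := 8;
    ADDR_WIDTH : positive := 16;
    DATA_WIDTH : positive := 8
  );
  port (
    clk      : in  std_logic;
    rst      : in  std_logic;
    push     : in  std_logic;                                 -- new store transaction
    addr_in  : in  std_logic_vector(ADDR_WIDTH-1 downto 0);
    data_in  : in  std_logic_vector(DATA_WIDTH-1 downto 0);
    pop      : in  std_logic;                                 -- commit the oldest store
    addr_out : out std_logic_vector(ADDR_WIDTH-1 downto 0);
    data_out : out std_logic_vector(DATA_WIDTH-1 downto 0);
    empty    : out std_logic;
    full     : out std_logic
  );
end entity store_queue;

architecture behavioural of store_queue is
  type entry_t is record
    addr : std_logic_vector(ADDR_WIDTH-1 downto 0);
    data : std_logic_vector(DATA_WIDTH-1 downto 0);
  end record;
  type queue_t is array (0 to DEPTH-1) of entry_t;
  signal entries : queue_t;
  signal head    : integer range 0 to DEPTH-1 := 0;
  signal tail    : integer range 0 to DEPTH-1 := 0;
  signal count   : integer range 0 to DEPTH   := 0;
begin
  process(clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        head  <= 0;
        tail  <= 0;
        count <= 0;
      elsif push = '1' and pop = '1' and count > 0 then
        -- Simultaneous enqueue and commit: occupancy is unchanged.
        entries(tail) <= (addr => addr_in, data => data_in);
        tail <= (tail + 1) mod DEPTH;
        head <= (head + 1) mod DEPTH;
      elsif push = '1' and count < DEPTH then
        -- Append the new store at the tail; program order is preserved
        -- because entries are only ever added at the end.
        entries(tail) <= (addr => addr_in, data => data_in);
        tail  <= (tail + 1) mod DEPTH;
        count <= count + 1;
      elsif pop = '1' and count > 0 then
        -- Commit (drain) the oldest store from the head.
        head  <= (head + 1) mod DEPTH;
        count <= count - 1;
      end if;
    end if;
  end process;

  -- The oldest, next-to-commit store is visible at the outputs.
  addr_out <= entries(head).addr;
  data_out <= entries(head).data;
  empty    <= '1' when count = 0 else '0';
  full     <= '1' when count = DEPTH else '0';
end architecture behavioural;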

RAM is a temporary storage place where the computer holds the files and data that are in use at the present moment. In RAM, data can be accessed by specifying its address, and the time required to read the contents of a location is independent of that address. In contrast, data can only be read out of a sequential memory in the same order in which it was originally written.

V. Simulation Results

Fig 7: Output of SCN-CAM (Test Mode)

Fig 8: Output of SCN-CAM (Functional Mode)

Fig 7 shows the output of the SCN-CAM in test-mode operation. Two modes of operation are considered during the simulation process: functional mode and test mode. Test mode corresponds to a mismatch, i.e. when the input data does not match any stored entry, the test mode is displayed at the output. Here the stored data did not match the input data, so the test mode is displayed. Fig 8 shows the output of the SCN-CAM in functional mode. Here the input data matches a stored value, so the pass condition is displayed at the output and the functional mode is on during this operation.

Fig 9: Output of SCN-CAM using the LSQ technique

VI. CONCLUSION

This paper reviews the algorithm and architecture of a low-power CAM based on a sparse clustered network. In addition, other CAM architectures and a comparative study of them against the proposed architecture are included. From this analysis it is concluded that the proposed content-addressable memory architecture with a sparse clustered network is more advantageous and reliable than the other architectures. The sparse clustered network content-addressable memory employs a novel associativity mechanism based on a recently developed family of associative memories built on sparse clustered networks, and it is suitable for low-power applications where frequent and parallel look-up operations are required. It employs an SCN-based classifier connected to several independently compare-enabled content-addressable memory sub-blocks, only some of which are enabled once a tag is presented to the classifier. By using independent nodes in the output part of the training network, simple and fast updates can be achieved without retraining the network entirely. With optimized lengths of the reduced-length tags, the SCN-CAM eliminates most of the comparison operations given a uniform distribution of the reduced-length inputs. Depending on the application, non-uniform inputs may result in higher power consumption, but this does not affect the accuracy of the final result: a few false positives may be generated by the SCN-based classifier, but they are then filtered out by the enabled content-addressable memory sub-blocks, so no false negatives are ever generated. For a case-study set of design parameters, it has been estimated that the energy consumption and the cycle time of the SCN-CAM are 8.02% and 28.6% of those of the conventional NAND-type architecture, respectively, with a 10.1% area overhead.

REFERENCES

[1] Yamagata T, Mihara M, Hamamoto T, Murai Y and Kobayashi T, "A 288-kb Fully Parallel Content Addressable Memory Using a Stacked-Capacitor Cell Structure," December 1992.
[2] Schultz K and Gulak P, "Fully parallel integrated CAM/RAM using preclassification to enable large capacities," J. Solid-State Circuits, May 1996; 5, 689-699.
[3] Zukowski C and Wang S Y, "Use of selective precharge for low power on the match lines of content-addressable memories," Aug. 1997; 64-68.
[4] Lin C S, Chang J C and Liu B D, "A low-power precomputation-based fully parallel content-addressable memory," J. Solid-State Circuits, April 2003; 4, 654-662.
[5] Pagiamtzis K and Sheikholeslami A, "Pipelined match-lines and hierarchical search-lines for low-power content-addressable memories," Sep. 2003; 39, 1512-1519.
[6] Ruan S J, Wu C Y and Hsieh J Y, "Low power design of precomputation-based content addressable memory," March 2008.
[7] Onizawa N, Matsunaga S, Gaudet V C and Hanyu T, "High-throughput low-energy content addressable memory based on a self-timed overlapped search mechanism," May 2012; 41-48.
[8] Jarollahi H, Onizawa N, Hanyu T and Gross W J, "Associative Memories Based on Multiple-Valued Sparse Clustered Networks," July 2014.
[9] Jarollahi H, Gripon V, Onizawa N and Gross W J, "Algorithm and Architecture for a Low-Power Content-Addressable Memory Based on Sparse Clustered Networks," April 2014.
[10] Gripon V and Berrou C, "Sparse neural networks with large learning diversity," 2011.
[11] Pagiamtzis K and Sheikholeslami A, "Content-Addressable Memory (CAM) Circuits and Architectures: A Tutorial and Survey," J. Solid-State Circuits, March 2006.
[12] Peng M and Azgomi S, "Content-Addressable Memory (CAM) and its network applications," 1996.

D. Sateesh Kumar is currently a PG scholar in VLSI in the ECE Department at Sir C. R. Reddy College of Engineering, Eluru. He received his B.Tech degree from Aditya Institute of Technology and Management, JNTUK.

D. Ajay Kumar received his Master's degree (M.E.) in VLSI Design from Bannari Amman Institute of Technology, Sathyamangalam, Tamil Nadu, affiliated to Anna University, Coimbatore. He is currently an Assistant Professor in the Department of Electronics & Communication Engineering, Sir C. R. Reddy College of Engineering, Eluru. His research interests include digital system testing, testable design and very-large-scale integration testing. He is a life member of the ISTE and IETE.