Deterministic Test Vector Compression/Decompression Using an Embedded Processor and Facsimile Coding


Master's thesis

Deterministic Test Vector Compression/Decompression Using an Embedded Processor and Facsimile Coding

by Jon Persson

LITH-IDA-EX 05/033 SE

Master's thesis

Deterministic Test Vector Compression/Decompression Using an Embedded Processor and Facsimile Coding

by Jon Persson

LiTH-IDA-EX 05/033 SE

Supervisor and Examiner: Erik Larsson
Department of Computer and Information Science at University of Linköping


Abstract

Modern semiconductor design methods make it possible to design increasingly complex systems-on-a-chip (SOCs). Testing such SOCs becomes highly expensive due to the rapidly increasing test data volumes, with longer test times as a result. Several approaches exist in which the test stimuli are compressed and hardware is added for decompression. This master's thesis presents a test data compression method based on a modified facsimile code. An embedded processor on the SOC is used to decompress the data and apply it to the cores of the SOC. The use of already existing hardware reduces the need for additional hardware. Test data may be rearranged in certain ways that affect the compression ratio; several such modifications are discussed and tested. To be realistic, a decompression algorithm has to be able to run on a system with limited resources. With an assembler implementation it is shown that the proposed method can be effectively realized in such environments. Experimental results where the proposed method is applied to benchmark circuits show that the method compares well with similar methods. A method of including the response vector is also presented. This approach makes it possible to abort a test as soon as an error is discovered, while still compressing the data used. To correctly compare the test response with the expected one, the data needs to include don't care bits. The technique uses a mask vector to mark the don't care bits. The test vector, response vector and mask vector are merged in four different ways to find the best one.

Keywords: System-on-a-chip (SOC) testing, test data compression/decompression, processor-based testing, variable-to-variable-length codes, facsimile coding, deterministic testing.


Acknowledgements

A lot of thanks to Erik Larsson, my supervisor and examiner at IDA (Department of Computer and Information Science at University of Linköping) who helped me a lot. Not only with the explanation of how SOCs are tested, but also with all practical issues and, last but not least, a lot of reasoning about upcoming ideas and problems. I would also like to thank Kedarnath Balakrishnan, University of Texas, for sending me ISCAS 89 test vectors, and Syed Irtiyaz Gilani at IDA for the D695 test and response vectors. Without the ability to test the method with realistic data I would know nothing about the quality of the method. Thanks to all my friends who have discussed the subject with me, and the biggest thanks to Louise who encouraged me all the way during this work, I love you!

Thanks, Jon


Abbreviations

ATE    Automatic Test Equipment
ATPG   Automatic Test Pattern Generator
BIST   Built-In Self-Test
CPU    Central Processing Unit
CUT    Core Under Test
DSP    Digital Signal Processor (or Processing)
FDR    Frequency-Directed Run-Length
I/O    Input/Output
MISR   Multi-Input Signature (or Shift) Register
NOP    Dummy Instruction in Assembler
SOC    System-on-a-Chip
TAM    Test Access Mechanism
X      Don't Care Bit
XOR    Exclusive Or

Contents

1 Introduction
   System-on-a-Chip (SOC)
   Testing
   Don't Care Bits (X's)
   Examine the Response
   MISR

2 The Problem
   High Test Data Volume
   Solution
   Using an Embedded Processor
   What is Given

3 Related Work
   Decompressing Using On-chip Circuitry
   Built-In Self-Test (BIST)
   Decompressing Using Processor
   Decompression Using Linear Operations

4 Design and Implementation
   Facsimile Standard
   One-Dimensional
   Two-Dimensional
   Compressing Test Vectors
   Plain Facsimile, No Reorder
   Greedy Sort
   Frequency-Directed Run-Length (FDR)
   Modifying Facsimile Codewords
   Local Search
   The Complete Proposed Method
   Example
   Decompression
   Decompression in Assembler
   Including Response Vectors
   Using Mask
   Two Bits Each
   Merged Test and Response Vector

5 Experimental Results
   Compressing Test Vectors Only
   Unix gzip Utility
   Local Search Heuristic
   Including Response Vectors

6 Discussion
   Proposed Method
   Local Search
   Discarded Techniques
   Storing Previous Vector
   Store in Memory
   Core Feedback
   Synchronization
   Comparing the Result
   Including Response
   Complex Methods

7 Conclusions and Further Work
   Conclusions
   Further Work

A Assembler Code

Chapter 1 Introduction

This chapter gives an introduction to System-on-a-Chip (SOC) and to testing.

1.1 System-on-a-Chip (SOC)

System-on-a-chip (SoC or SOC) is an idea of integrating all components of a computer system into a single chip. It may contain digital, analogue, mixed-signal, and often radio-frequency functions all on one chip. (Wikipedia)

Modern semiconductor design methods and manufacturing technologies enable the creation of a complete system on one single die, the so-called system chip or SOC [4]. Such system chips are typically very large Integrated Circuits (ICs), consisting of millions of transistors and containing a variety of hardware modules [4]. These modules, called cores, are reusable, predesigned silicon circuit blocks. Embedded cores incorporated into system chips cover a very wide range of functions such as processors, MPEG coding/decoding, memories, etc.

Figure 1.1: Example SOC (processor, memory, Core A, Core B)

Throughout this report we will look at the simple example SOC shown in Figure 1.1. It contains a processor, a memory and two small cores, the ones that will be tested.

1.2 Testing

When testing a core (referred to as the core under test, or CUT) the core is set to a starting state and the system clock is applied, bringing the core to its next state, the response, which is examined. If the response is the expected one, the test has passed. A core has a number of such tests to pass, each checking for different modelled faults that can arise. To easily set the starting state the core is equipped with scan chains, shift registers connected to the inner parts of the core. The scan chains are first filled with a test vector by shifting it in; then the system clock is applied and the response is captured into the scan chains. The response is shifted out and compared with the expected response. The data bus used to transfer the test data, called the Test Access Mechanism (TAM), is dedicated

Figure 1.2: Example SOC with TAM, wrappers and scan chains

to testing only. Often the TAM is of a width different from the number of scan chains. To handle the interface between the scan chains and the TAM, every core is surrounded by a wrapper, which applies the incoming bits to the right scan chain. As mentioned, a number of test vectors are applied when testing a core; together these test vectors constitute the test data, sometimes referred to as a test cube. The example SOC has n scan chains per core and a four-bit-wide TAM (Figure 1.2).

Where do the test vectors come from? Together with the specification of the core, an automatic test pattern generator (ATPG) can produce the test sets and responses. If the core is constructed as a black box, where the buyers have no information about its internals, the vendor of the core will deliver test sets and the corresponding responses. The test vectors are then usually stored in automatic test equipment (ATE), which is connected to the SOC during test and sends over the vectors one by one.
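The shift-in, capture, shift-out flow described above can be illustrated with a toy model. This is only a sketch: the function names and the `capture` stand-in for the core's combinational logic are illustrative and not from the thesis.

```python
from collections import deque

def scan_test(chain_len, test_vector, capture, expected):
    """Toy model of one scan-chain test.

    `capture` stands in for the core's logic: it maps the scanned-in
    state to the next state captured on the system clock."""
    chain = deque([0] * chain_len, maxlen=chain_len)
    for bit in test_vector:        # shift the test vector in, bit by bit
        chain.appendleft(bit)
    state = list(chain)
    response = capture(state)      # apply the system clock: capture response
    # the response would now be shifted out and compared bit by bit
    return response == expected
```

With an inverting toy core, shifting in 1,0,1 captures 0,1,0, so the test passes only when that is the expected response.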

1.2.1 Don't Care Bits (X's)

Each test vector is designed to test the SOC for one or more modelled faults. Every such fault deals only with some of the input bits, thus leaving other bits that can be either 0 or 1. These are called don't care bits and are represented with X's in the test and response vectors. For the test vectors used in this report the number of don't care bits can be as much as 95% of the total number of bits [2]. A good compression algorithm should maximize the compression ratio by assigning the don't care bits to either 0 or 1 carefully.

1.3 Examine the Response

There are two main alternatives for examining the response. The first one is to compare every bit of the response with the expected, modelled response. This approach detects all possible errors and can also be used to terminate a test as soon as the first error is detected, so-called abort-on-fail. This way less time is spent testing faulty SOCs. The second approach is to compress the response before it is compared with an equally compressed expected response. The response can be compressed without keeping all the information as long as the probability of accepting a faulty SOC is low. One straightforward compaction algorithm would be to count the sum of all the 1's in the responses; if the sum differs from the expected one, the SOC is faulty. If several faults occur there is a possibility that the sum still ends up at the correct value and the SOC wrongly passes the test. Today the most commonly used approach is to place a multi-input signature register (MISR) at the outputs of each core.

1.3.1 MISR

A multi-input signature register (MISR) is a small circuit designed to create a signature of the data sent to its inputs. When all the tests are completed the signature is compared with the desired signature; if they are equal the MISR will signal that the tests are passed, otherwise fail is signalled. The desired signature is small enough to be stored inside the MISR itself.
In Figure 1.3 an example MISR is shown together with a signature calculated from some example inputs.

Figure 1.3: Example MISR and signature calculations

The ⊕ symbols represent modulo-2 adders: an odd number of 1's on the inputs sets the output to 1, an even number sets it to 0. As seen in the example MISR, the output from register 3 is connected to the modulo-2 adders in front of registers 1 and 2. Which modulo-2 adders are connected to the output of the last register can be varied to give the MISR other characteristics; differently connected MISRs produce signatures of different quality [9]. Due to its cyclical behaviour a MISR distributes faults evenly over all its registers. This way multiple faults are less likely to produce the correct signature. It can be shown that the probability for erroneous inputs to generate the correct signature is nearly 2^-n, where n is the number of registers in the MISR. Figure 1.4 shows where the MISRs are added to the SOC. [9]
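The signature calculation of a small MISR can be sketched as follows. This toy model assumes the connections described above (feedback from register 3 into the adders in front of registers 1 and 2); it is an illustration, not the exact circuit of Figure 1.3.

```python
def misr_step(regs, inputs):
    """One clock of a 3-bit MISR with feedback from register 3
    into the modulo-2 adders before registers 1 and 2."""
    r1, r2, r3 = regs
    d1, d2, d3 = inputs
    return (d1 ^ r3,        # adder before reg 1: D1 xor feedback
            d2 ^ r1 ^ r3,   # adder before reg 2: D2 xor reg 1 xor feedback
            d3 ^ r2)        # adder before reg 3: D3 xor reg 2

def misr_signature(input_words, regs=(0, 0, 0)):
    """Clock every input word through the MISR; the final state is the signature."""
    for w in input_words:
        regs = misr_step(regs, w)
    return regs
```

Flipping a single input bit changes the signature, which is exactly what makes the comparison at the end of the test meaningful.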

Figure 1.4: Example SOC with MISRs added

Chapter 2 The Problem

2.1 High Test Data Volume

With rapidly increasing complexity in the SOCs, the test data increases just as fast. This brings two problems: the ATE needs more memory to store the test data, and the tests take longer to perform. Especially the longer test times, a huge bottleneck in the production of SOCs, increase the production cost.

2.1.1 Solution

What can be done to reduce the size of the test data? One popular approach is the use of compression techniques. The test data for a particular SOC is compressed and stored in the ATE. This requires less memory than the original data, giving us a solution to the first problem. When testing a SOC the compressed data is sent to the SOC, where a decompressor restores the original data. The decompressor is usually some extra circuitry added to the SOC. The decompressed, original data is then sent to the CUT as if the ATE had sent the original data directly.

There is still the same amount of data to be applied to each core, even if it was compressed when sent to the SOC. How can the second problem, the long test time, be solved? Luckily the technique described above will

Figure 2.1: Example SOC with processor connected to TAM

help also in this matter. ATEs are usually built with slower electronics than SOCs, and a SOC will have to operate at a very low speed during test. When an ATE sends compressed data, only the parts of the SOC receiving this data need to operate at the same clock speed as the ATE. The decompressor, and also the rest of the SOC, can operate at a higher clock speed, applying the test vectors in less time.

2.1.2 Using an Embedded Processor

Many SOCs of today have embedded processors that solve calculations specific to the operation of the SOC. Is it possible to use the embedded processor to decompress a compact version of the test data? This question was the starting point for this thesis. The idea is illustrated in Figure 2.1. The ATE sends precomputed, compressed test vectors to the SOC. The embedded processor then restores the original test vectors using a

decompression algorithm and applies them to the cores. It turned out this approach had already been tested with good results, but there exist more compression algorithms that haven't been tested yet.

2.1.3 What is Given

Figure 2.1 shows the layout of the example SOC that is to be tested. The following requirements are fulfilled for this SOC:

- The ATE is capable of using the I/O module to send data to the right place in memory. Not only can it send the compressed data, but the decompression program can also be transferred and executed.
- The memory is of sufficient size to hold the decompression program, a buffer for the incoming data and one copy of the longest test vector.
- There exists controlling circuitry which synchronizes the data flow from the ATE and sends enable signals to the right parts of the system.
- Test vectors are available and come, one set for each core, in the following format:

  XXXXXXX101XXXXXXXX
  XXX111100XXX

The first two rows specify how many vectors there are and how long each vector is. Don't care bits are represented with X's. What is left to be done is the compression algorithm and the decompression program. The compressed data will only deal with the vectors; the two first controlling rows may be transferred as they are, telling the processor how many vectors to decompress and how many bits each of them has. The output from the compression program will be a stream of

bits which, when decompressed, will yield the same vectors as in the original data with one exception: each X is replaced with either 0 or 1. Since each vector is a stand-alone test, the vectors produced from the compressed data may be in a different order than the original vectors. What matters is that the response vectors are reordered in exactly the same way. This report presents a technique that compresses the test data above so that the two vectors are represented by 30 bits instead of the 50 bits in the original data.

Chapter 3 Related Work

This chapter discusses some of the different solutions to the problem of reducing test data volume. Both decompression techniques using hardware and techniques using software are represented.

3.1 Decompressing Using On-chip Circuitry

As long as the decompression scheme is not too difficult, decompression can be done in hardware using additional circuitry inside the SOC. The main advantage is that these techniques can be used in any SOC, without the requirement of an embedded processor and/or memory. The cost is the area overhead inside the SOC to fit the decompression circuitry. The Frequency-Directed Run-Length (FDR) code (described in Section 4.2.3) is used by Chandra and Chakrabarty [3]. Their report shows that the FDR code outperforms other compression schemes for the special case of test vector compression. In the report they also apply the technique to difference vectors, where every vector only represents the difference from the previous one. This way longer runs of 0's are achieved and better compression. The results are also compared with more complex methods like gzip and compress, two Unix utilities for compressing data files.

Gonciari and Al-Hashimi [5] propose a Huffman-coding algorithm using patterns of variable lengths. The method aims to solve three problems in SOC testing: on-chip area overhead, high test data volume and test application time.

3.2 Built-In Self-Test (BIST)

A BIST technique is only applicable when the interior of a module is known. The idea is to create the test vectors somewhat randomly and see which modelled faults these random vectors cover. It is important that the randomizing algorithm produces exactly the same vectors each time; such an algorithm is called a pseudo-random generator. Those faults not covered by the random vectors are tested with ordinary, deterministic test vectors. Hwang and Abraham [6] suggest a BIST technique where each pseudo-random pattern is shifted cyclically to cover more simple faults. To avoid testing the circuit with a high number of unnecessary vectors, the distance to the next good vector is sent for each test. For the deterministic part of the method they encode the difference of each deterministic vector to one of the random ones; the probability that a similar random vector exists is high.

3.3 Decompressing Using Processor

A few other methods where an embedded processor is used for decompression already exist. Compressed data is sent to the memory; a decompression program running on the embedded processor decompresses the vectors and applies them to the CUT. Jas and Touba [7] present an approach where only the difference from the previous vector is sent. The vectors are divided into blocks of a certain length and only blocks with changed bits are sent. The compressed data consists of a list of blocks; for each block the position must be saved, and also one bit telling whether the block is the last one for a vector. The vectors in the test set are reordered to achieve less difference between consecutive vectors. Balakrishnan and Touba [1] use matrix operations to compress the test

data. For a number n, the first n^2 bits form an n x n matrix. A set of equations is then solved to find two vectors which, together with a XORing algorithm, can reproduce the original matrix. XORing two or more bits works like this: if an odd number of the bits are 1 the result is 1; otherwise the result is 0. If the equations can't be solved, the first n bits are sent uncompacted.

3.3.1 Decompression Using Linear Operations

The method proposed by Balakrishnan and Touba [2], where linear operations are used to decompress the test set, is presented in more detail. The scheme for testing a SOC with this method is based on word-based XOR operations. The length of the words is usually chosen to be the word length of the processor; 32 is the most common today. The method works basically like this:

1. All the words of the compressed data are sent to the embedded memory.
2. A pseudo-random number generator inside the SOC creates a number of integers smaller than or equal to the number of words in the compressed data.
3. The integers point out words in the compressed data, which are XORed together bitwise.
4. The resulting word is sent to the CUT.
5. Unless all tests are done, repeat from step 2.

A pseudo-random number generator gives what seems to be a series of random numbers, but the important thing is that each time it is restarted it produces exactly the same series. This way it is known which words from the compressed data will be XORed together to create a certain word in the decompressed data. The compressed data needs to be created in such a way that, when decompressed, it corresponds to the original data. This is done by creating linear equations using all that is known from above.
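The decompression loop in steps 1-5 can be sketched in Python. This is only an illustration: Python's `random.Random` stands in for the on-chip pseudo-random generator, and the choice of three words per XOR mirrors the small example below; the real scheme must use the exact same generator during compression and decompression.

```python
import random

def decompress(words, num_vectors, words_per_vector, seed=0):
    """Sketch of word-based XOR decompression (steps 1-5 above)."""
    rng = random.Random(seed)   # restartable: same seed -> same series
    out = []
    for _ in range(num_vectors):
        vec = []
        for _ in range(words_per_vector):
            w = 0
            # steps 2-3: pick word indices and XOR the selected words bitwise
            for _ in range(3):  # three words per output word, as in the example
                w ^= words[rng.randrange(len(words))]
            vec.append(w)       # step 4: this word would be sent to the CUT
        out.append(vec)
    return out
```

Running the sketch twice with the same seed yields identical vectors, which is the property the scheme relies on.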

Figure 3.1: Forming example test set from compressed bits. Eight compressed words W1-W8 are XORed in triples to form the five vectors of the original test set:

10 XX 0X XX
X1 XX 1X 1X
X0 XX 1X XX
11 XX X1 0X
01 XX X0 0X

The method is illustrated with an example where the situation is as in Figure 3.1. W1-W8 refer to words, usually of length 32; to reduce the size of this example the word length is set to 2. The pseudo-random generator produces a series of integers; taken three by three, the words corresponding to these numbers are XORed together inside the box in the middle of Figure 3.1. Setting the XORed expressions equal to the original data, found at the bottom of Figure 3.1, gives the following equations:

W1 ⊕ W5 ⊕ W8 = 10    W2 ⊕ W6 ⊕ W7 = XX    W3 ⊕ W4 ⊕ W5 = 0X    W1 ⊕ W2 ⊕ W7 = XX
W2 ⊕ W5 ⊕ W6 = X1    W3 ⊕ W6 ⊕ W8 = XX    W4 ⊕ W1 ⊕ W6 = 1X    W1 ⊕ W7 ⊕ W8 = 1X
W2 ⊕ W3 ⊕ W4 = X0    W1 ⊕ W3 ⊕ W6 = XX    W5 ⊕ W7 ⊕ W2 = 1X    W8 ⊕ W4 ⊕ W2 = XX
W5 ⊕ W4 ⊕ W8 = 11    W2 ⊕ W1 ⊕ W4 = XX    W7 ⊕ W6 ⊕ W3 = X1    W4 ⊕ W5 ⊕ W6 = 0X
W8 ⊕ W2 ⊕ W6 = 01    W1 ⊕ W5 ⊕ W7 = XX    W2 ⊕ W4 ⊕ W6 = X0    W6 ⊕ W7 ⊕ W8 = 0X

All equations are then split to handle one bit each; those where the right-hand side is X can be removed, since whatever the bits of the left-hand side are, they will always satisfy a don't care bit. W1(1) refers to the first bit of W1 and W1(2) to the second. This gives the following equations:

W1(1) ⊕ W5(1) ⊕ W8(1) = 1    W1(2) ⊕ W5(2) ⊕ W8(2) = 0
W3(1) ⊕ W4(1) ⊕ W5(1) = 0    W2(2) ⊕ W5(2) ⊕ W6(2) = 1
W4(1) ⊕ W1(1) ⊕ W6(1) = 1    W2(2) ⊕ W3(2) ⊕ W4(2) = 0
W1(1) ⊕ W7(1) ⊕ W8(1) = 1    W5(2) ⊕ W4(2) ⊕ W8(2) = 1
W5(1) ⊕ W7(1) ⊕ W2(1) = 1    W7(2) ⊕ W6(2) ⊕ W3(2) = 1
W5(1) ⊕ W4(1) ⊕ W8(1) = 1    W8(2) ⊕ W2(2) ⊕ W6(2) = 1
W4(1) ⊕ W5(1) ⊕ W6(1) = 0    W2(2) ⊕ W4(2) ⊕ W6(2) = 0
W8(1) ⊕ W2(1) ⊕ W6(1) = 0
W6(1) ⊕ W7(1) ⊕ W8(1) = 0

Solving this system of equations is the major task in this method. Balakrishnan and Touba show that every such system of equations can be made solvable by increasing the size of the compressed data. This small example has one solution in the following values of the compressed data:

W1 = 00    W2 = 11    W3 = 10    W4 = 01
W5 = 10    W6 = 10    W7 = 11    W8 = 00

With this method we have compressed the test set from 40 bits (the original data at the bottom of Figure 3.1) to 16 bits (8 words of 2 bits each). 16 also happens to be the number of specified bits, s_tot, in the original test set. Most often this method needs only a few more bits than s_tot to get solvable equations [2]. The major disadvantage of this method is the requirement of available memory: for every word it decompresses, the method needs to look up words from different parts of the compressed data, hence all of the compressed data needs to be sent to the system's memory before decompression can take place. There can also be a problem when solving enormous systems of equations, as they can be too large to solve in a reasonable amount of time. If these factors become an issue, then the test set can simply be partitioned and each partition processed one at a time [2]. Partitioning the test set will reduce the overall compression slightly (the larger the partitions, the better the overall compression) [2].
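The example solution can be checked mechanically; a minimal sketch that XORs the solved words and compares them with a few of the fully specified right-hand sides above:

```python
# The solved compressed words from the example, as 2-bit values.
W = {1: 0b00, 2: 0b11, 3: 0b10, 4: 0b01, 5: 0b10, 6: 0b10, 7: 0b11, 8: 0b00}

def xor3(a, b, c):
    """XOR three of the compressed words together bitwise."""
    return W[a] ^ W[b] ^ W[c]

# Fully specified equations from the example all hold:
assert xor3(1, 5, 8) == 0b10   # W1 xor W5 xor W8 = 10
assert xor3(5, 4, 8) == 0b11   # W5 xor W4 xor W8 = 11
assert xor3(8, 2, 6) == 0b01   # W8 xor W2 xor W6 = 01
```

The partially specified equations hold in their specified bit as well, e.g. W5 ⊕ W7 ⊕ W2 = 10, whose first bit matches the required 1X.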

Chapter 4 Design and Implementation

This chapter begins with a description of the facsimile standard. The method is then designed through a number of stages, each adding new features. An algorithm for decompressing the vectors, constructed in assembler using an emulator for the 8086 processor, is also presented.

4.1 Facsimile Standard

The facsimile coding standard used in this report is the ITU-T Group 3 standard. The idea behind this facsimile coding is that many lines of a printed page are similar to the line just above. Every dot on the paper is coded as either white or black, also known as a bi-level image. The sender compares the next runs of equally colored dots with the dots right above on the previous line. If they are somewhat similar, special codewords are sent to the receiver. The receiver, who already has the previous line, can calculate the length of the runs. The facsimile standard is described in more detail below, following Sayood [8]. In the recommendations for Group 3 facsimile the code is divided into two schemes. The first is a one-dimensional scheme in which the data is

Figure 4.1: Two rows of an image. The transition pixels are marked.

coded independently of any other data. The other is two-dimensional, where special codewords are sent using the line-to-line correlations.

4.1.1 One-Dimensional

The one-dimensional coding scheme is a run-length coding scheme in which the next block of data is represented as a series of alternating white runs and black runs. If this scheme is used at the beginning of a line, the first run is always a white run; if the first pixel is black, a white run of length zero is sent first. The run-length code used is a Huffman code, a way of choosing the best-fitted codeword for each situation based on how frequently the situation occurs. Each line of an A4-size document is represented by 1728 pixels. Creating 1728 different Huffman codes is not very practical; instead the code is divided into two parts, m and t, and a run of length r_i is expressed as

r_i = 64m + t,   for t = 0, 1, ..., 63 and m = 1, 2, ..., 27.

The codes for t are called the terminating codes and the codes for m are called the make-up codes. Black and white run lengths also have separate codes. If r_i < 64, only a terminating code is used; otherwise both a make-up code and a terminating code are used. This coding scheme is generally referred to as the Modified Huffman (MH) scheme.

4.1.2 Two-Dimensional

In the two-dimensional scheme, the key is the transition pixels. A transition pixel is a pixel of a different color than the pixel to the left of it. In Figure 4.1 the transition pixels are marked with dots. Even the leftmost pixel on a

row can be a transition pixel. One can think of each row extended with an imaginary white pixel to the left of the row; if the first pixel is black it is also a transition pixel. In most documents a row is very similar to its neighbours and the transition pixels will be close to each other. The idea is to encode the position of a transition pixel in relation to a transition pixel on the previous line. This is a modification of a coding scheme called Relative Element Address Designate (READ) code and is often called Modified READ (MR). Some definitions are needed to explain the coding scheme:

a0: The last pixel of the row currently being encoded. Its position and color are known to both encoder and decoder. At the beginning of each line, a0 refers to the imaginary white pixel to the left of the first actual pixel. Often this pixel is a transition pixel, but not always.

a1: The first transition pixel on the same row and to the right of a0. The location of this pixel is known only to the encoder.

a2: The second transition pixel on the same row and to the right of a0. As with a1, its location is known only to the encoder.

b1: The first transition pixel with the opposite color of a0 on the line above and to the right of a0. As the line above is known to both encoder and decoder, as is the value of a0, the location of b1 is also known to both encoder and decoder.

b2: The second transition pixel on the line above and more than one pixel to the right of a0. Also known to both encoder and decoder.

For the implementation of the facsimile standard used in this report, b1 and b2 may be placed to the right of the entire row: if only b2 is outside, it is placed one pixel to the right; if both are outside, b1 is placed one pixel and b2 two pixels to the right. This is slightly different from Sayood [8], where an additional codeword is mentioned representing the situation where all the remaining pixels of a row are equally colored.
In Figure 4.2 the example rows are labelled. In this situation the second row is the one currently being encoded and the encoder has encoded the pixels up to the second pixel (marked a0). The pixel assignments

Figure 4.2: The transition pixels are labelled.

for a slightly different arrangement of black and white pixels are shown in Figure 4.3.

If a1 is to the right of b2, we call the coding mode used the pass mode. This mode is coded with 0001. When the decoder receives this code it knows that all the pixels from the last one decoded up to the pixel straight below b2 have the same color. For the next round this pixel below b2 is the last pixel known to both encoder and decoder. This is the only time where the last known pixel is not a transition pixel.

If a1 is to the left of or straight below b2, one of two things can happen. The vertical mode is used if the number of pixels from a1 to the pixel right under b1 is less than or equal to three. Seven different codes tell the location of a1 in relation to b1. These are:

1: a1 is straight below b1.
011: a1 is to the right of b1 by one pixel.
000011: a1 is to the right of b1 by two pixels.
0000011: a1 is to the right of b1 by three pixels.
010: a1 is to the left of b1 by one pixel.
000010: a1 is to the left of b1 by two pixels.
0000010: a1 is to the left of b1 by three pixels.
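The decision between the coding modes can be sketched as follows. `choose_mode` is a hypothetical helper working on pixel column indices; the horizontal mode, used when the vertical mode does not apply, sends code 001 followed by run-length codewords.

```python
def choose_mode(a1, b1, b2):
    """Pick the 2-D coding mode from transition-pixel positions
    (column indices), following the rules described above."""
    if a1 > b2:                  # a1 strictly to the right of b2
        return "pass"            # codeword 0001
    if abs(a1 - b1) <= 3:        # a1 within three pixels of b1
        return "vertical"        # one of the seven codes above
    return "horizontal"          # code 001 + two run-length codewords
```

For instance, with b1 = 5 and b2 = 8, an a1 at column 10 selects the pass mode, at column 6 the vertical mode, and an a1 far left of b1 falls back to the horizontal mode.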

Figure 4.3: Two slightly different rows with transition pixels labelled.

After the decoder has received and decoded one of these codes, the pixel at a1 is the last one known to both encoder and decoder, and the coding process is continued.

In the case where a1 is to the left of or straight below b2 and the distance to b1 is greater than three, the one-dimensional technique described in Section 4.1.1 is used. To inform the decoder about this mode the code 001 is sent, followed by two sets of Modified Huffman codewords. The first run-length is of the same color as the last decoded pixel and the second of the opposite color; these are in fact the runs from a0 to a1 and from a1 to a2. The decoder then adds one pixel with the same color as the first run, and this is the last known pixel for the next round.

4.2 Compressing Test Vectors

4.2.1 Plain Facsimile, No Reorder

This first solution uses plain facsimile code to compress the vectors in the order given in the test cube. Later we will see that reordering the vectors improves the compression ratio. The first line to be coded also needs a previous vector; the algorithm uses an imaginary first vector containing only 0's, which forces the first vector to be coded with run-length codes only.

A first look at the test data clarifies that the X's (don't care bits) need to be assigned 0 or 1 carefully. When the algorithm comes across don't care bits it tries to set a1 after b2 (see Section 4.1). If that fails it tries to place a1 as close to b1 as possible. If a1 cannot be placed within three steps of b1, the horizontal mode is used, sending run-length codes. In the facsimile standard the run-length codes are compressed. That compression technique is based on the length of one row of pixels on a paper copy, which is fixed; this is not applicable to test vectors of different lengths. Instead of creating a new compression technique for each circuit, the run-length case is not compressed at all. In Section 4.2.3 a better solution is presented.

4.2.2 Greedy Sort

Since each test vector is a separate test it does not matter in which order the test vectors are applied, as long as all of them are applied. A reordering of the vectors is done to achieve better compression. A test data set with n vectors can be reordered in n! ways. With conventional computers it is impossible to test all n! combinations unless n is very small; a heuristic is necessary. Even with a heuristic, reordering the test set is a difficult problem, because when one test vector is moved inside the test cube it affects the facsimile code for many other vectors. To start with, the vector that is moved needs to have all its don't care bits reassigned to achieve better compression. Then it is coded in relation to its new previous vector. This vector will also force the next vector to be recoded in the same way, and this propagates downwards. Only when a vector happens to be assigned the don't care bits in the same way as before can this chain reaction be broken; otherwise all following vectors need to be recalculated.
The greedy sort heuristic starts with the imaginary first vector of 0's and compresses every vector in the test cube with this as the previous vector. The vector with the shortest facsimile code is chosen and acts as the previous vector in the next round. This way the algorithm always chooses the next vector that extends the compressed data the least, until all vectors are included. The biggest disadvantage is that the last vectors are not very well suited to be compressed in relation to each other.
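The greedy selection loop can be sketched as follows. As a stand-in for the facsimile code length, this sketch ranks candidates by Hamming distance to the previous vector; the names GreedyOrder and cost are illustrative, and the real method would also reassign each candidate's don't care bits before measuring its code length.

```java
import java.util.*;

public class GreedyOrder {
    // Stand-in cost: Hamming distance to the previous vector (the thesis uses
    // the length of the facsimile code instead).
    static int cost(String prev, String v) {
        int d = 0;
        for (int i = 0; i < v.length(); i++)
            if (prev.charAt(i) != v.charAt(i)) d++;
        return d;
    }

    static List<String> greedySort(List<String> cube) {
        List<String> rest = new ArrayList<>(cube), ordered = new ArrayList<>();
        String prev = "0".repeat(cube.get(0).length()); // imaginary all-0 vector
        while (!rest.isEmpty()) {
            String best = rest.get(0);
            for (String v : rest)
                if (cost(prev, v) < cost(prev, best)) best = v;
            rest.remove(best);
            ordered.add(best);
            prev = best; // the chosen vector becomes the next reference
        }
        return ordered;
    }

    public static void main(String[] args) {
        System.out.println(greedySort(List.of("1110", "0001", "0000")));
        // [0000, 0001, 1110]
    }
}
```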

Frequency-Directed Run-Length (FDR)

As mentioned earlier, the run-length code used in the facsimile standard is not very suitable for test vector compression. Chandra and Chakrabarty [3] show that FDR codes are easy to decompress and compress test data very well. Their finest characteristic is the ability to code runs of any length.

The FDR code is constructed to give short codewords for short runs and works like this: a codeword consists of two parts, a group prefix and a tail. The group prefix tells which group of run-lengths the codeword belongs to. The first group, A1, has a single 0 as its group prefix, group A2 has 10 as prefix, and A3 has 110. Every next group gets one more leading 1. Given a complete FDR codeword, the group is determined by seeking the first occurrence of the bit 0. If this is found in the kth position the group is Ak. The next part is the tail, which points out one of the run-lengths in the group. It consists of the same number of bits as the group prefix: one for group A1, two for A2, and so on. With k bits available, group Ak includes 2^k different run-lengths: 0 and 1 for group A1, 2-5 for group A2, etc. The first 14 run-lengths are shown in Table 4.1. The right-most column shows the codeword (the prefix and tail concatenated) used for each run-length.

The FDR code has the following properties. It is easy to extract the prefix and the tail: the prefix is all bits from the beginning up to and including the first 0, and the tail has the same length as the prefix. For any codeword, the sum of the binary values of the prefix and the tail equals the run-length that is coded. Short run-lengths are coded with shorter codewords.

This next modification uses FDR where the original run-length code would have been used.
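The construction just described is small enough to sketch directly. FdrCode and its method names are illustrative, but the grouping rule follows the description: group Ak starts at run-length 2^k - 2 and holds 2^k run-lengths.

```java
public class FdrCode {
    // Encode a run-length into its FDR codeword (group prefix + tail).
    static String encode(int runLength) {
        int k = 1;
        // Group A_k covers run-lengths 2^k - 2 .. 2^(k+1) - 3.
        while (runLength > (1 << (k + 1)) - 3) k++;
        String prefix = "1".repeat(k - 1) + "0";
        int offset = runLength - ((1 << k) - 2);
        String tail = String.format("%" + k + "s",
                Integer.toBinaryString(offset)).replace(' ', '0');
        return prefix + tail;
    }

    // Decode one codeword; returns {runLength, bitsConsumed}.
    static int[] decode(String bits) {
        int k = bits.indexOf('0') + 1;          // prefix ends at the first 0
        String prefix = bits.substring(0, k);
        String tail = bits.substring(k, 2 * k); // tail is as long as the prefix
        // Property: run-length = binary value of prefix + binary value of tail.
        int run = Integer.parseInt(prefix, 2) + Integer.parseInt(tail, 2);
        return new int[] { run, 2 * k };
    }

    public static void main(String[] args) {
        System.out.println(encode(5));           // 1011
        System.out.println(encode(3));           // 1001
        System.out.println(decode("110111")[0]); // 13
    }
}
```

encode(5) and encode(3) yield 1011 and 1001, the codewords used in the horizontal-mode example later in this chapter; decoding exploits the prefix-plus-tail property directly.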

Group  Run-length  Group prefix  Tail  Codeword
A1     0           0             0     00
       1           0             1     01
A2     2           10            00    1000
       3           10            01    1001
       4           10            10    1010
       5           10            11    1011
A3     6           110           000   110000
       7           110           001   110001
       8           110           010   110010
       9           110           011   110011
       10          110           100   110100
       11          110           101   110101
       12          110           110   110110
       13          110           111   110111

Table 4.1: The first 14 run-lengths and their codewords

Modifying Facsimile Codewords

The choice of codewords in the facsimile standard is based on characteristics of paper copies. For this next modification to the method, statistics were gathered on how many times each codeword was used in the compressed data. The ordering algorithm described earlier will tend to favour short codewords; hence, when gathering the statistics, all codewords were made equally long, since otherwise the shorter ones would be used more often than the longer ones simply because they are shorter. The statistics are the sums over all six circuits used in the experiments in Chapter 5.

The statistics show that four of the codewords are rarely used. They correspond to the cases where a1 is placed two or three bits to the left or right of b1. One by one these codewords were removed from the method, and for each of them the removal reduced the size of the compressed set. The remaining codewords can be changed further to enhance the compression even more. The new codewords can be found in the last column of Table 4.2.

Situation             New codeword
run-length            11
a1 > b2               10
a1 right under b1     01
a1 one right of b1    001
a1 two right of b1    not used
a1 three right of b1  not used
a1 one left of b1     000
a1 two left of b1     not used
a1 three left of b1   not used

Table 4.2: Statistics for codewords

Not only do these changes reduce the size of the compressed set, they also make the decompression algorithm simpler and faster. 50 bits have become 30!

Local Search

As mentioned earlier, ordering the vectors is difficult. Local search is a looping heuristic that works like this: the algorithm starts with a given starting solution, in this case a test set with a specific order. Given this starting solution, the facsimile coding algorithm compresses the data, yielding the size of the compressed data. This size is what the heuristic tries to minimize. In each loop the heuristic tries a set of different orderings and calculates the size of the compressed data for each. The change that gives the best solution, and is better than the current one, is taken as the starting point for the next iteration. The set of orderings that are tested is determined by a rule. In every loop the algorithm checks all the solutions that can be reached with the rule, called the surroundings, to find a better one. Usually it is a good idea to keep the surroundings very small, hence the name local search. Examples of suitable rules defining the surroundings are:

Moving one vector to another place

Switch place for two adjacent vectors
Switch place for two arbitrary vectors

The heuristic was added to the modifications described earlier, with the surroundings chosen as the last rule in the list above. As we will see, the result is not much better than greedy sort, and the execution time of the heuristic is long even for these small example cores, so this modification is not part of the proposed method.

The Complete Proposed Method

The modifications mentioned above bring us to one complete algorithm for test vector compression (the local search heuristic excluded). To encode a test vector the algorithm uses the previous vector and sets the don't care bits to get the best position of a1: preferably after b2, otherwise as close to b1 as possible. The different cases are encoded with the following codes:

10: a1 is to the right of b2
01: a1 is right under b1
001: a1 is placed one to the right of b1
000: a1 is placed one to the left of b1

If none of the above is applicable, the code 11 is used, followed by two sets of FDR codes. There is a dependence on the bit at a0: the first FDR codeword gives the run-length of bits with the same value as at a0, and the second gives the run-length of the opposite bits. The code also produces one final bit with the value at a0. For example, 000001110 will be encoded as 11 + 1011 (run-length 5) + 1001 (run-length 3) if the preceding bit is 0. As described earlier, the vectors are sorted with greedy sort. Figure 4.4 shows pseudo-code that illustrates the process.

void GreedySort(testCube) {
    int lengthOfVectors;
    string previous = "00...0";   // length = lengthOfVectors
    string tempFax;
    int shortest;

    for each vector in testCube {
        shortest = FindShortestCode();
        MarkAsCoded(shortest);
        tempFax = EncodeVector(shortest, previous);
        previous = DecodeVector(tempFax, previous);
        Write(tempFax);
    }
}

Figure 4.4: Pseudo-code for Greedy Sort

Example

The small test set introduced earlier is here encoded with the algorithm described above.

Vector1: XXXXXXX101XXXXXXXX0
Vector2: XXX111100XXX

The two vectors are first encoded with the imaginary first vector of 0's as the previous vector, forcing them to be coded with run-length codes only (code 11 followed by two FDR codewords and the final bit). Since Vector2 is encoded with a shorter code, it is included first in the compressed data. Vector1 is then encoded with the decompressed Vector2 as its previous vector; its don't care bits can now be assigned so that the cheap two-dimensional codes apply (01 where a1 falls right under b1, 10 where a1 lies beyond b2, and 11 with FDR run-lengths where needed). The compressed data is the code for Vector2 followed by the code for Vector1.

Decompression

When decompressing the compressed data, the algorithm needs to know how long the vectors are, and it requires access to the previous vector. It is sufficient to treat the previous vector as an input stream, since it will only be read in sequence from the beginning. The decompression algorithm consumes each codeword in the compressed data and outputs the original data with the help of the previous vector. For any codeword to be decompressed, the current bit denotes the value of the last bit that was decompressed. At the beginning of a new vector the current bit is set to 0. Different codewords trigger different actions:

10: a1 is placed after b2. Keep producing bits with the same value as the current bit until b2 is reached, i.e. until the value in the previous vector input stream has changed two times. The current bit keeps the same value.

01: a1 is right under b1. Produce bits with the same value as the current bit until there is a change in the previous vector. Add one bit with the opposite value and change the current bit.

001: a1 one bit right of b1. Same as 01, except produce one extra bit before the last opposite bit.

000: a1 one bit left of b1. Same as 01, except produce one bit less before the last opposite bit.

11: FDR run-length code. The decompression algorithm consumes two sets of FDR codes. The first tells how many bits with the value of the current bit are produced, the second how many bits of the opposite value. Finally one bit with the value of the current bit is produced. The current bit keeps the same value.

After each codeword has been taken care of, the algorithm checks whether all the bits of one vector have been produced; otherwise it continues with the next codeword. In each turn the algorithm consumes the same number of bits from the previous vector as it produces itself. Codeword 10 is a bit special since b2 can be placed to the right of all bits in the vector.
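The five cases can be collected into a small decoder. This is a sketch under simplifying assumptions: bit streams are plain Java strings, FaxDecoder and decodeVector are illustrative names, and "a change in the previous vector" is read operationally as described above, including stopping at the right edge for codeword 10.

```java
public class FaxDecoder {
    // Decode one FDR codeword starting at index ci; returns {runLength, nextIndex}.
    static int[] fdr(String code, int ci) {
        int k = code.indexOf('0', ci) - ci + 1;         // prefix ends at first 0
        int prefix = Integer.parseInt(code.substring(ci, ci + k), 2);
        int tail = Integer.parseInt(code.substring(ci + k, ci + 2 * k), 2);
        return new int[] { prefix + tail, ci + 2 * k }; // run = prefix + tail
    }

    static String decodeVector(String code, String prev) {
        StringBuilder out = new StringBuilder();
        int n = prev.length(), ci = 0;
        char cur = '0';                                 // current bit starts as 0
        while (out.length() < n) {
            int pos = out.length();
            char last = pos == 0 ? '0' : prev.charAt(pos - 1);
            if (code.startsWith("10", ci)) {            // a1 right of b2: copy cur
                ci += 2;                                // until the previous stream
                int changes = 0;                        // changed twice, or until
                for (int j = pos; j < n; j++) {         // the right edge is reached
                    if (prev.charAt(j) != last) { changes++; last = prev.charAt(j); }
                    out.append(cur);
                    if (changes == 2) break;
                }
            } else if (code.startsWith("11", ci)) {     // two FDR run-lengths
                int[] r1 = fdr(code, ci + 2);
                int[] r2 = fdr(code, r1[1]);
                ci = r2[1];
                char opp = cur == '0' ? '1' : '0';
                for (int i = 0; i < r1[0]; i++) out.append(cur);
                for (int i = 0; i < r2[0]; i++) out.append(opp);
                out.append(cur);                        // final bit keeps cur
            } else {                                    // vertical modes 01/001/000
                int delta;
                if (code.startsWith("01", ci)) { delta = 0; ci += 2; }
                else if (code.startsWith("001", ci)) { delta = 1; ci += 3; }
                else { delta = -1; ci += 3; }           // "000"
                int j = pos;
                while (j < n && prev.charAt(j) == last) { out.append(cur); j++; }
                if (delta == 1) out.append(cur);        // one extra bit before b1
                if (delta == -1) out.deleteCharAt(out.length() - 1); // one bit less
                cur = cur == '0' ? '1' : '0';
                out.append(cur);                        // the opposite-valued bit
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(decodeVector("0101", "0110"));            // 0110
        System.out.println(decodeVector("1110111001", "000000000")); // 000001110
    }
}
```

The second call decodes the horizontal-mode example from earlier in the chapter: 11 + 1011 + 1001 with preceding bit 0 expands to 000001110.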

When decompressing a 10 codeword the algorithm should stop producing bits when the right side is reached.

4.4 Decompression in Assembler

The decompression algorithm is easily implemented in a high-level language such as C or Java. But can it run on a simple processor with a small amount of memory? Since none of the tested compilers, combined with a disassembler, could generate a small program, a facsimile decoder was implemented directly in assembler. The Emu8086 emulator was used to test the code. Without any SOC-specific programming the size of the assembler code is 88 instructions, which is similar in size to implementations of other methods. The full code can be seen in Appendix A. Instead of sending the output to the screen, a real implementation would send the output to the CUT. There would also be some I/O instructions to read the input stream and the previous vector.

4.5 Including Response Vectors

In most testing applications the response from a core is inserted into a MISR (multi-input signature register). The MISR only reports a signature of all its inputs at the end of the test. If the signature doesn't match the expected one, the chip is faulty and will be destroyed. An alternative is to compare every bit of each response with the expected one. The main advantage is that a test can be stopped as soon as a fault is discovered (abort-on-fail). There is also a risk that a response with multiple faults still generates the correct signature in a MISR. Usually the response is sent back to the ATE where the comparison is made. This transfer is done without any compression, which is why the MISR has become so popular: it greatly decreases the test application time.

A new approach to response examination is presented here. The idea is to send the responses in compressed form and let the embedded processor do the comparison with the actual response. The compression is done using

the same facsimile technique as in the previous sections, and the test vectors can simply be extended to include the responses. The responses are similar to the test vectors in that they consist of many don't care bits, but we need to be careful. A don't care bit in the response cannot simply be chosen as 0 or 1: it must match the response the core actually produces when the corresponding test vector, with all of its don't care bits assigned to either 0 or 1, is applied. Determining this requires an ATPG, which would have to be incorporated in the proposed method, and the resulting response vectors would probably not compress well with the facsimile method. A better solution is to send the response vectors with the don't care bits left untouched. With a don't care bit in the expected response, the comparison program then accepts any bit in the actual response. Additional data needs to be sent to represent the don't care bits.

Four ways of coding the response vector are presented here. For all four methods an example is shown with the test vector X1XXXXX001X and the response vector XXXXX10XX0X. The complete vector is encoded into the representation that is sent to the facsimile coding algorithm.

Using Mask

The don't care bits are chosen freely as 0 or 1, in the same way as in the test vector. To determine which bits are don't cares, a mask is added at the end of each vector. A 0 in the mask indicates that the corresponding bit in the response vector is don't care; a 1 indicates that the bit is specified and should be compared with the bit in the actual response.

Orig. test:     X1XXXXX001X
Orig. response: XXXXX10XX0X
The mask:       00000110010

Two Bits Each

If each bit in the response vector is coded with two bits, there is no need for a mask. Only two of the four combinations of two bits are needed for the bits 1 and 0, leaving two combinations to code the don't care bit. One solution is to code 1 as 11, 0 as 10, and X as either 00 or 01. To code an X as either 00 or 01 we simply write it as 0X.

Orig. test: X1XXXXX001X
Response:   XXXXX10XX0X  ->  0X 0X 0X 0X 0X 11 10 0X 0X 10 0X

You may ask why the X is coded as 0X and not simply a single 0. The reason is that the vectors would otherwise become different in length, and the facsimile coding algorithm requires a previous vector of the same length.

Merged Test and Response Vector

When running the test application there is a matter of timing not previously discussed. In Section 1.2 it is written that a test vector is shifted into the core, the clock is applied and the response is shifted out. However, at the same time as the response is shifted out it is possible to shift in the next test vector. This is called pipelining and saves a great amount of time. In order to use pipelining, the application should compare the response vector with the expected one at the same time as it shifts in the next vector.

The last two approaches merge the response vector with the next test vector. For each bit that is shifted into the core, the decompression program also decompresses one bit from the response, compares it with the actual response shifted out, and continues only if they match (or the decompressed bit is don't care). The first approach uses a mask that is placed at the beginning of the vector; the mask has to be decompressed and saved before the comparison can take place. The second uses two bits for each bit in the response vector, in the same way as the method above. In both these methods a last, empty test vector needs to be added to include the last response and mask. In the examples below the response vector is the same as before, but here it refers to the response vector of the preceding vector. A t denotes a bit from the test vector; an r denotes a bit from the response.
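Building the mask and comparing an actual response against a masked expected response can be sketched as follows; MaskCompare and its method names are illustrative assumptions, not part of the thesis implementation.

```java
public class MaskCompare {
    // Build the mask: '1' marks a specified bit, '0' a don't care.
    static String mask(String expected) {
        StringBuilder m = new StringBuilder();
        for (char c : expected.toCharArray()) m.append(c == 'X' ? '0' : '1');
        return m.toString();
    }

    // Compare the actual response with the expected one, honouring the mask:
    // only positions where the mask is '1' must match.
    static boolean matches(String actual, String expected, String mask) {
        for (int i = 0; i < actual.length(); i++)
            if (mask.charAt(i) == '1' && actual.charAt(i) != expected.charAt(i))
                return false;
        return true;
    }

    public static void main(String[] args) {
        String expected = "XXXXX10XX0X";
        System.out.println(mask(expected)); // 00000110010
        System.out.println(matches("01010100100", expected, mask(expected))); // true
    }
}
```

An abort-on-fail loop would call matches after each response is shifted out and stop the test on the first false result.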
The Mask

Merged vector:
X X 1 X X X X X X X X 1 X 0 0 X 0 X 1 0 X X
t r t r t r t r t r t r t r t r t r t r t r

Merged vector (two bits each):
X 0X 1 0X X 0X X 0X X 0X X 11 X 10 0 0X 0 0X 1 10 X 0X
t r  t r  t r  t r  t r  t r  t r  t r  t r  t r  t r
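The two-bits-each merging of a test vector with the preceding response can be sketched as below. MergeVectors and twoBit are illustrative names, and the X symbols are kept as placeholders that the facsimile coder would later assign.

```java
public class MergeVectors {
    // Code one response bit in two bits: 1 -> "11", 0 -> "10", X -> "0X".
    static String twoBit(char r) {
        return r == '1' ? "11" : r == '0' ? "10" : "0X";
    }

    // Interleave the next test vector with the (two-bit coded) response of
    // the preceding vector: one test bit, then two response-code bits.
    static String merge(String test, String prevResponse) {
        StringBuilder m = new StringBuilder();
        for (int i = 0; i < test.length(); i++) {
            m.append(test.charAt(i));
            m.append(twoBit(prevResponse.charAt(i)));
        }
        return m.toString();
    }

    public static void main(String[] args) {
        System.out.println(merge("X1XXXXX001X", "XXXXX10XX0X"));
        // X0X10XX0XX0XX0XX11X1000X00X110X0X
    }
}
```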


Chapter 5

Experimental Results

With a set of experiments this chapter shows the efficiency of the proposed method. The experiments are made on real test data, since real test data has special properties that affect the results. Results from the different stages show which modification is the most valuable one, and the results are also compared with results from other methods. For all tests including only test vectors, test data for some of the ISCAS 89 circuits were used. For the algorithms that also include the response vectors, test data for the circuit D695 were used. These are small circuits released publicly for development purposes.

5.1 Compressing Test Vectors Only

The compression algorithm was implemented and tested in Java on a SunBlade 100 (500 MHz). Table 5.1 shows the results for the different stages. The second column shows the size of the uncompressed set, T_D. For every stage both the number of compressed bits and the percentage compression are shown. The percentage data compression was computed as:

    Percentage Data Compression = (Original Bits - Compressed Bits) / Original Bits * 100
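The formula above amounts to a one-liner; the class name CompressionRatio is illustrative.

```java
public class CompressionRatio {
    // Percentage data compression as defined above.
    static double percent(long originalBits, long compressedBits) {
        return 100.0 * (originalBits - compressedBits) / originalBits;
    }

    public static void main(String[] args) {
        System.out.println(percent(200, 50)); // 75.0
    }
}
```

For example, a 200-bit test set compressed to 50 bits gives a percentage data compression of 75.0.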


More information

Move-to-front algorithm

Move-to-front algorithm Up to now, we have looked at codes for a set of symbols in an alphabet. We have also looked at the specific case that the alphabet is a set of integers. We will now study a few compression techniques in

More information

Chapter 10 Error Detection and Correction 10.1

Chapter 10 Error Detection and Correction 10.1 Chapter 10 Error Detection and Correction 10.1 10-1 INTRODUCTION some issues related, directly or indirectly, to error detection and correction. Topics discussed in this section: Types of Errors Redundancy

More information

Greedy Algorithms CHAPTER 16

Greedy Algorithms CHAPTER 16 CHAPTER 16 Greedy Algorithms In dynamic programming, the optimal solution is described in a recursive manner, and then is computed ``bottom up''. Dynamic programming is a powerful technique, but it often

More information

IMPLEMENTATION OF A FAST MPEG-2 COMPLIANT HUFFMAN DECODER

IMPLEMENTATION OF A FAST MPEG-2 COMPLIANT HUFFMAN DECODER IMPLEMENTATION OF A FAST MPEG-2 COMPLIANT HUFFMAN ECOER Mikael Karlsson Rudberg (mikaelr@isy.liu.se) and Lars Wanhammar (larsw@isy.liu.se) epartment of Electrical Engineering, Linköping University, S-581

More information

LZ UTF8. LZ UTF8 is a practical text compression library and stream format designed with the following objectives and properties:

LZ UTF8. LZ UTF8 is a practical text compression library and stream format designed with the following objectives and properties: LZ UTF8 LZ UTF8 is a practical text compression library and stream format designed with the following objectives and properties: 1. Compress UTF 8 and 7 bit ASCII strings only. No support for arbitrary

More information

S 1. Evaluation of Fast-LZ Compressors for Compacting High-Bandwidth but Redundant Streams from FPGA Data Sources

S 1. Evaluation of Fast-LZ Compressors for Compacting High-Bandwidth but Redundant Streams from FPGA Data Sources Evaluation of Fast-LZ Compressors for Compacting High-Bandwidth but Redundant Streams from FPGA Data Sources Author: Supervisor: Luhao Liu Dr. -Ing. Thomas B. Preußer Dr. -Ing. Steffen Köhler 09.10.2014

More information

CS 134 Midterm Fall 2006

CS 134 Midterm Fall 2006 CS 34 Midterm Fall 26 This is a closed book exam. You have 5 minutes to complete the exam. There are 5 questions on this examination. The point values for the questions are shown in the table below. Your

More information

6. Finding Efficient Compressions; Huffman and Hu-Tucker Algorithms

6. Finding Efficient Compressions; Huffman and Hu-Tucker Algorithms 6. Finding Efficient Compressions; Huffman and Hu-Tucker Algorithms We now address the question: How do we find a code that uses the frequency information about k length patterns efficiently, to shorten

More information

Compaction mechanism to reduce test pattern counts and segmented delay fault testing for path delay faults

Compaction mechanism to reduce test pattern counts and segmented delay fault testing for path delay faults University of Iowa Iowa Research Online Theses and Dissertations Spring 2013 Compaction mechanism to reduce test pattern counts and segmented delay fault testing for path delay faults Sharada Jha University

More information

6.338 Final Paper: Parallel Huffman Encoding and Move to Front Encoding in Julia

6.338 Final Paper: Parallel Huffman Encoding and Move to Front Encoding in Julia 6.338 Final Paper: Parallel Huffman Encoding and Move to Front Encoding in Julia Gil Goldshlager December 2015 1 Introduction 1.1 Background The Burrows-Wheeler transform (BWT) is a string transform used

More information

Chapter 10 Error Detection and Correction. Copyright The McGraw-Hill Companies, Inc. Permission required for reproduction or display.

Chapter 10 Error Detection and Correction. Copyright The McGraw-Hill Companies, Inc. Permission required for reproduction or display. Chapter 10 Error Detection and Correction 0. Copyright The McGraw-Hill Companies, Inc. Permission required for reproduction or display. Note The Hamming distance between two words is the number of differences

More information

A Partition-Based Approach for Identifying Failing Scan Cells in Scan-BIST with Applications to System-on-Chip Fault Diagnosis

A Partition-Based Approach for Identifying Failing Scan Cells in Scan-BIST with Applications to System-on-Chip Fault Diagnosis A Partition-Based Approach for Identifying Failing Scan Cells in Scan-BIST with Applications to System-on-Chip Fault Diagnosis Chunsheng Liu and Krishnendu Chakrabarty Department of Electrical & Computer

More information

CS Computer Architecture

CS Computer Architecture CS 35101 Computer Architecture Section 600 Dr. Angela Guercio Fall 2010 Computer Systems Organization The CPU (Central Processing Unit) is the brain of the computer. Fetches instructions from main memory.

More information

Indexing and Searching

Indexing and Searching Indexing and Searching Introduction How to retrieval information? A simple alternative is to search the whole text sequentially Another option is to build data structures over the text (called indices)

More information

CIS 121 Data Structures and Algorithms with Java Spring 2018

CIS 121 Data Structures and Algorithms with Java Spring 2018 CIS 121 Data Structures and Algorithms with Java Spring 2018 Homework 6 Compression Due: Monday, March 12, 11:59pm online 2 Required Problems (45 points), Qualitative Questions (10 points), and Style and

More information

CHAPTER 1 INTRODUCTION

CHAPTER 1 INTRODUCTION CHAPTER 1 INTRODUCTION Rapid advances in integrated circuit technology have made it possible to fabricate digital circuits with large number of devices on a single chip. The advantages of integrated circuits

More information

11 Data Structures Foundations of Computer Science Cengage Learning

11 Data Structures Foundations of Computer Science Cengage Learning 11 Data Structures 11.1 Foundations of Computer Science Cengage Learning Objectives After studying this chapter, the student should be able to: Define a data structure. Define an array as a data structure

More information

A Linear-Time Heuristic for Improving Network Partitions

A Linear-Time Heuristic for Improving Network Partitions A Linear-Time Heuristic for Improving Network Partitions ECE 556 Project Report Josh Brauer Introduction The Fiduccia-Matteyses min-cut heuristic provides an efficient solution to the problem of separating

More information

GDSII to OASIS Converter Performance and Analysis

GDSII to OASIS Converter Performance and Analysis GDSII to OASIS Converter Performance and Analysis 1 Introduction Nageswara Rao G 8 November 2004 For more than three decades GDSII has been the de-facto standard format for layout design data. But for

More information

Operating Systems. Designed and Presented by Dr. Ayman Elshenawy Elsefy

Operating Systems. Designed and Presented by Dr. Ayman Elshenawy Elsefy Operating Systems Designed and Presented by Dr. Ayman Elshenawy Elsefy Dept. of Systems & Computer Eng.. AL-AZHAR University Website : eaymanelshenawy.wordpress.com Email : eaymanelshenawy@yahoo.com Reference

More information

Optimizing for DirectX Graphics. Richard Huddy European Developer Relations Manager

Optimizing for DirectX Graphics. Richard Huddy European Developer Relations Manager Optimizing for DirectX Graphics Richard Huddy European Developer Relations Manager Also on today from ATI... Start & End Time: 12:00pm 1:00pm Title: Precomputed Radiance Transfer and Spherical Harmonic

More information

We will give examples for each of the following commonly used algorithm design techniques:

We will give examples for each of the following commonly used algorithm design techniques: Review This set of notes provides a quick review about what should have been learned in the prerequisite courses. The review is helpful to those who have come from a different background; or to those who

More information

EE67I Multimedia Communication Systems Lecture 4

EE67I Multimedia Communication Systems Lecture 4 EE67I Multimedia Communication Systems Lecture 4 Lossless Compression Basics of Information Theory Compression is either lossless, in which no information is lost, or lossy in which information is lost.

More information

Random Access Memory (RAM)

Random Access Memory (RAM) Best known form of computer memory. "random access" because you can access any memory cell directly if you know the row and column that intersect at that cell. CS1111 CS5020 - Prof J.P. Morrison UCC 33

More information

Lossless Compression Algorithms

Lossless Compression Algorithms Multimedia Data Compression Part I Chapter 7 Lossless Compression Algorithms 1 Chapter 7 Lossless Compression Algorithms 1. Introduction 2. Basics of Information Theory 3. Lossless Compression Algorithms

More information

Wednesday, January 28, 2018

Wednesday, January 28, 2018 Wednesday, January 28, 2018 Topics for today History of Computing (brief) Encoding data in binary Unsigned integers Signed integers Arithmetic operations and status bits Number conversion: binary to/from

More information

Frequency Oriented Scheduling on Parallel Processors

Frequency Oriented Scheduling on Parallel Processors School of Mathematics and Systems Engineering Reports from MSI - Rapporter från MSI Frequency Oriented Scheduling on Parallel Processors Siqi Zhong June 2009 MSI Report 09036 Växjö University ISSN 1650-2647

More information

Memory. Objectives. Introduction. 6.2 Types of Memory

Memory. Objectives. Introduction. 6.2 Types of Memory Memory Objectives Master the concepts of hierarchical memory organization. Understand how each level of memory contributes to system performance, and how the performance is measured. Master the concepts

More information

AN 831: Intel FPGA SDK for OpenCL

AN 831: Intel FPGA SDK for OpenCL AN 831: Intel FPGA SDK for OpenCL Host Pipelined Multithread Subscribe Send Feedback Latest document on the web: PDF HTML Contents Contents 1 Intel FPGA SDK for OpenCL Host Pipelined Multithread...3 1.1

More information

Bit Error Recovery in MMR Coded Bitstreams Using Error Detection Points

Bit Error Recovery in MMR Coded Bitstreams Using Error Detection Points Bit Error Recovery in MMR Coded Bitstreams Using Error Detection Points Hyunju Kim and Abdou Youssef Department of Computer Science The George Washington University Washington, DC, USA Email: {hkim, ayoussef}@gwu.edu

More information

EE-575 INFORMATION THEORY - SEM 092

EE-575 INFORMATION THEORY - SEM 092 EE-575 INFORMATION THEORY - SEM 092 Project Report on Lempel Ziv compression technique. Department of Electrical Engineering Prepared By: Mohammed Akber Ali Student ID # g200806120. ------------------------------------------------------------------------------------------------------------------------------------------

More information

Test Data Compression Using Dictionaries with Selective Entries and Fixed-Length Indices

Test Data Compression Using Dictionaries with Selective Entries and Fixed-Length Indices Test Data Compression Using Dictionaries with Selective Entries and Fixed-Length Indices LEI LI and KRISHNENDU CHAKRABARTY Duke University and NUR A. TOUBA University of Texas, Austin We present a dictionary-based

More information

A Hybrid Approach to CAM-Based Longest Prefix Matching for IP Route Lookup

A Hybrid Approach to CAM-Based Longest Prefix Matching for IP Route Lookup A Hybrid Approach to CAM-Based Longest Prefix Matching for IP Route Lookup Yan Sun and Min Sik Kim School of Electrical Engineering and Computer Science Washington State University Pullman, Washington

More information

Volume 2, Issue 9, September 2014 ISSN

Volume 2, Issue 9, September 2014 ISSN Fingerprint Verification of the Digital Images by Using the Discrete Cosine Transformation, Run length Encoding, Fourier transformation and Correlation. Palvee Sharma 1, Dr. Rajeev Mahajan 2 1M.Tech Student

More information

UNIT I (Two Marks Questions & Answers)

UNIT I (Two Marks Questions & Answers) UNIT I (Two Marks Questions & Answers) Discuss the different ways how instruction set architecture can be classified? Stack Architecture,Accumulator Architecture, Register-Memory Architecture,Register-

More information

(Refer Slide Time: 00:01:30)

(Refer Slide Time: 00:01:30) Digital Circuits and Systems Prof. S. Srinivasan Department of Electrical Engineering Indian Institute of Technology, Madras Lecture - 32 Design using Programmable Logic Devices (Refer Slide Time: 00:01:30)

More information

Digital Integrated Circuits

Digital Integrated Circuits Digital Integrated Circuits Lecture Jaeyong Chung System-on-Chips (SoC) Laboratory Incheon National University Design/manufacture Process Chung EPC655 2 Design/manufacture Process Chung EPC655 3 Layout

More information

Chapter 6 Memory 11/3/2015. Chapter 6 Objectives. 6.2 Types of Memory. 6.1 Introduction

Chapter 6 Memory 11/3/2015. Chapter 6 Objectives. 6.2 Types of Memory. 6.1 Introduction Chapter 6 Objectives Chapter 6 Memory Master the concepts of hierarchical memory organization. Understand how each level of memory contributes to system performance, and how the performance is measured.

More information

ECE902 Virtual Machine Final Project: MIPS to CRAY-2 Binary Translation

ECE902 Virtual Machine Final Project: MIPS to CRAY-2 Binary Translation ECE902 Virtual Machine Final Project: MIPS to CRAY-2 Binary Translation Weiping Liao, Saengrawee (Anne) Pratoomtong, and Chuan Zhang Abstract Binary translation is an important component for translating

More information

Dec Hex Bin ORG ; ZERO. Introduction To Computing

Dec Hex Bin ORG ; ZERO. Introduction To Computing Dec Hex Bin 0 0 00000000 ORG ; ZERO Introduction To Computing OBJECTIVES this chapter enables the student to: Convert any number from base 2, base 10, or base 16 to any of the other two bases. Add and

More information

CSE380 - Operating Systems. Communicating with Devices

CSE380 - Operating Systems. Communicating with Devices CSE380 - Operating Systems Notes for Lecture 15-11/4/04 Matt Blaze (some examples by Insup Lee) Communicating with Devices Modern architectures support convenient communication with devices memory mapped

More information

6. Finding Efficient Compressions; Huffman and Hu-Tucker

6. Finding Efficient Compressions; Huffman and Hu-Tucker 6. Finding Efficient Compressions; Huffman and Hu-Tucker We now address the question: how do we find a code that uses the frequency information about k length patterns efficiently to shorten our message?

More information

Efficient Test Compaction for Combinational Circuits Based on Fault Detection Count-Directed Clustering

Efficient Test Compaction for Combinational Circuits Based on Fault Detection Count-Directed Clustering Efficient Test Compaction for Combinational Circuits Based on Fault Detection Count-Directed Clustering Aiman El-Maleh, Saqib Khurshid King Fahd University of Petroleum and Minerals Dhahran, Saudi Arabia

More information

Instantaneously trained neural networks with complex inputs

Instantaneously trained neural networks with complex inputs Louisiana State University LSU Digital Commons LSU Master's Theses Graduate School 2003 Instantaneously trained neural networks with complex inputs Pritam Rajagopal Louisiana State University and Agricultural

More information

SSoCC'01 4/3/01. Specific BIST Architectures. Gert Jervan Embedded Systems Laboratory (ESLAB) Linköping University

SSoCC'01 4/3/01. Specific BIST Architectures. Gert Jervan Embedded Systems Laboratory (ESLAB) Linköping University Specific BIST Architectures Gert Jervan Embedded Systems Laboratory (ESLAB) Linköping University General Concepts Test-per-scan architectures Multiple scan chains Test-per-clock architectures BIST conclusions

More information

CPE300: Digital System Architecture and Design

CPE300: Digital System Architecture and Design CPE300: Digital System Architecture and Design Fall 2011 MW 17:30-18:45 CBC C316 Cache 11232011 http://www.egr.unlv.edu/~b1morris/cpe300/ 2 Outline Review Memory Components/Boards Two-Level Memory Hierarchy

More information

Code Compression for RISC Processors with Variable Length Instruction Encoding

Code Compression for RISC Processors with Variable Length Instruction Encoding Code Compression for RISC Processors with Variable Length Instruction Encoding S. S. Gupta, D. Das, S.K. Panda, R. Kumar and P. P. Chakrabarty Department of Computer Science & Engineering Indian Institute

More information

Deduction and Logic Implementation of the Fractal Scan Algorithm

Deduction and Logic Implementation of the Fractal Scan Algorithm Deduction and Logic Implementation of the Fractal Scan Algorithm Zhangjin Chen, Feng Ran, Zheming Jin Microelectronic R&D center, Shanghai University Shanghai, China and Meihua Xu School of Mechatronical

More information

Testing Embedded Cores Using Partial Isolation Rings

Testing Embedded Cores Using Partial Isolation Rings Testing Embedded Cores Using Partial Isolation Rings Nur A. Touba and Bahram Pouya Computer Engineering Research Center Department of Electrical and Computer Engineering University of Texas, Austin, TX

More information

Soft-Core Embedded Processor-Based Built-In Self- Test of FPGAs: A Case Study

Soft-Core Embedded Processor-Based Built-In Self- Test of FPGAs: A Case Study Soft-Core Embedded Processor-Based Built-In Self- Test of FPGAs: A Case Study Bradley F. Dutton, Graduate Student Member, IEEE, and Charles E. Stroud, Fellow, IEEE Dept. of Electrical and Computer Engineering

More information

Lecture 16. Today: Start looking into memory hierarchy Cache$! Yay!

Lecture 16. Today: Start looking into memory hierarchy Cache$! Yay! Lecture 16 Today: Start looking into memory hierarchy Cache$! Yay! Note: There are no slides labeled Lecture 15. Nothing omitted, just that the numbering got out of sequence somewhere along the way. 1

More information

Delay Test with Embedded Test Pattern Generator *

Delay Test with Embedded Test Pattern Generator * JOURNAL OF INFORMATION SCIENCE AND ENGINEERING 29, 545-556 (2013) Delay Test with Embedded Test Pattern Generator * Department of Computer Science National Chung Hsing University Taichung, 402 Taiwan A

More information

Chapter 5 VARIABLE-LENGTH CODING Information Theory Results (II)

Chapter 5 VARIABLE-LENGTH CODING Information Theory Results (II) Chapter 5 VARIABLE-LENGTH CODING ---- Information Theory Results (II) 1 Some Fundamental Results Coding an Information Source Consider an information source, represented by a source alphabet S. S = { s,

More information