
A New Algorithm to Create Prime Irredundant Boolean Expressions

Michel R.C.M. Berkelaar
Eindhoven University of Technology, P.O. Box 513, NL 5600 MB Eindhoven, The Netherlands
Email: michel@es.ele.tue.nl

Abstract

This paper describes a new, efficient algorithm to make Boolean sum of cubes expressions prime and irredundant. A description of the logic function of a combinational logic circuit in terms of prime and irredundant expressions is a solid basis for the synthesis of fully testable logic circuits, as was recently shown in [HAC89] and [HAC92]. We have therefore developed an efficient recursive algorithm based on Shannon expansion. Expressions are first expanded until the leaves are unate expressions. These can be made prime and irredundant by just checking for single cube containment [BRA84]. Then the expression has to be reassembled by merging the expanded results. By carefully exploiting the properties of Shannon expansion, we have been able to find a merging procedure by which the primeness and irredundancy of the children are retained, while performing only a minimum number of containment tests. Running the algorithm on the MCNC benchmark set shows that it is very efficient. A comparison of the results with those from ESPRESSO shows that the results of our algorithm give rise to smaller and faster multi-level logic circuits, such as can be produced by a multi-level logic synthesis system like MIS or EUCLID [BER90] [BER91].

1 Basic definitions

1.1 Notation

Boolean function: f : {0,1}^n -> {0,1}
Boolean sum of cubes expression: f
Boolean variable: x_i, a
Complement of a Boolean variable: x̄_i, ā

1.2 Boolean functions

In this paper we deal with single output, completely specified Boolean functions only. This means that there are no don't care values in the range of the function.
A single output, completely specified Boolean function with n input variables x_1, ..., x_n is a function defined as follows:

Definition 1: f : {0,1}^n -> {0,1}

If we write x for [x_1, ..., x_n], we can define the following:

Definition 2: The on set X_f^on = { x | f(x) = 1 }

Definition 3: The off set X_f^off = { x | f(x) = 0 }

Because we are dealing with completely specified functions, X_f^on ∪ X_f^off = {0,1}^n. The on set or the off set of a Boolean function determines the function uniquely. Therefore, an on set or an off set can be used to define a function.

1.3 Operations on Boolean functions

Definition 4: The complement f̄ of a Boolean function f is defined by X_f̄^on = X_f^off, or, in a complementary way, by X_f̄^off = X_f^on.
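As a concrete illustration of Definitions 2-4, here is a small sketch (ours, with a hypothetical example function, not from the paper): a completely specified function can be stored as its on set, and the off set, and thus the complement, follows directly.

```python
from itertools import product

n = 3  # number of input variables x_1, ..., x_n

def f(x):
    # hypothetical example: the majority function of three inputs
    return int(sum(x) >= 2)

space = set(product((0, 1), repeat=n))       # {0,1}^n
on_set = {x for x in space if f(x) == 1}     # Definition 2: X_f^on
off_set = space - on_set                     # Definition 3: X_f^off

# complete specification: on set and off set partition {0,1}^n
assert on_set | off_set == space and not (on_set & off_set)

# Definition 4: the complement just swaps the two sets
complement_on_set = off_set
```

Because the function is completely specified, storing either set suffices; everything else is set arithmetic.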

Definition 5: The intersection of two Boolean functions f_1 and f_2, f_3 = f_1 ∩ f_2, can be defined by: X_f3^on = X_f1^on ∩ X_f2^on

Definition 6: The union of two Boolean functions f_1 and f_2, f_3 = f_1 ∪ f_2, can be defined by: X_f3^on = X_f1^on ∪ X_f2^on

1.4 Boolean expressions

1.4.1 Sum of cubes expressions

Boolean functions can be represented in many ways. In this paper we choose the sum of cubes expression (s.o.c. expression) as the representation.

Definition 7: A literal is a Boolean variable or its complement. Its algebraic representation is the (possibly complemented) variable.

Definition 8: A cube is a set of literals, for which: no literal occurs more than once, and no Boolean variable and its complement occur simultaneously. Its algebraic representation is the logical and of the literals, denoted by · or by juxtaposition.

Definition 9: A sum of cubes expression is a set of cubes. Its algebraic representation is the logical or of the cubes, denoted by +.

1.4.2 S.o.c. expressions and Boolean functions

A s.o.c. expression can be used to represent a Boolean function. In such a case, we say that the s.o.c. expression covers the Boolean function. Exactly which Boolean function is covered by a s.o.c. expression is defined by the following three definitions.

Definition 10: A literal ℓ covers the Boolean half space for which ℓ = 1.

Definition 11: A cube covers the intersection of the Boolean half spaces covered by its individual literals.

In this paper we do not make a notational difference between a cube as a set of literals and a cube covering a part of the Boolean n-space. If the context does not make clear what is meant in a specific formula, a note clarifies the situation.

Definition 12: A s.o.c. expression covers the union of the sub-spaces covered by its individual cubes. In this paper we do not make a notational difference between a s.o.c. expression as a set of cubes and a s.o.c. expression covering a part of the Boolean n-space.
If the context does not make clear what is meant in a specific formula, a note clarifies the situation.

1.4.3 Primeness and irredundancy

We are now ready to define primeness and irredundancy of s.o.c. expressions.

Definition 13: The expansion of a cube c in the direction of a literal ℓ ∈ c is the cube c' = c \ {ℓ}.

Definition 14: A cube c in a s.o.c. expression f covering the Boolean function f is called prime when it cannot be expanded in the direction of any of its literals without making the expression intersect with the off set of f:

    ∀ ℓ ∈ c: (c \ {ℓ}) ∩ X_f^off ≠ ∅

Likewise, a s.o.c. expression is called prime when all its cubes are prime.

Definition 15: A cube c in a s.o.c. expression f is called irredundant when c ⊄ f \ {c}. Likewise, a s.o.c. expression is called irredundant when all its cubes are irredundant.

1.5 Shannon expansion

An important notion for this paper is the Shannon expansion of a s.o.c. expression. It was introduced by Shannon in [SHA48], and is widely used to implement divide and conquer algorithms for Boolean expressions.

Definition 16: The cofactor of a s.o.c. expression f with respect to a literal ℓ, denoted by f_ℓ, can be defined by: for all cubes c_i ∈ f:

- if ℓ ∈ c_i, then c_i \ {ℓ} is a member of f_ℓ
- if ℓ ∉ c_i and ℓ̄ ∉ c_i, then c_i is also a member of f_ℓ
- if ℓ̄ ∈ c_i, then c_i is not a member of f_ℓ
- f_ℓ contains no other cubes.

Note again that a cube is regarded as a set of literals, and a s.o.c. expression as a set of cubes.

Definition 17: The Shannon expansion of a s.o.c. expression f with respect to a literal ℓ is defined as the following sum of products:

    f = ℓ f_ℓ + ℓ̄ f_ℓ̄

2 The algorithm

We can now define the algorithm, which takes an arbitrary s.o.c. expression as input, and returns a prime and irredundant s.o.c. expression covering the same Boolean function. The algorithm consists of two basic phases:

1. The recursive decomposition of the original expression into unate expressions. These unate expressions are made prime and irredundant.
2. The reassembling (merging) of the prime and irredundant parts such that primeness and irredundancy are retained.

The first step is simple and is performed in exactly the same way as in [BRA84]. The original expression is decomposed recursively with the aid of Shannon expansion. As the splitting variable we always choose the most binate variable. After obtaining unate expressions, each of them is made prime and irredundant by removing simple cube containment. The second step, however, is new, and requires more explanation. We completely redefine the merging algorithm given in [BRA84]. That merging algorithm is completely heuristic, and guarantees neither primeness, nor irredundancy, nor minimal size of the results. We improve on this by maintaining primeness and irredundancy in every merging step. As far as expression size is concerned, we rely on the same heuristic mechanism that, as practical experience has shown, works well in [BRA84].
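Before turning to the merging step, the cofactor of Definition 16 and the expansion identity of Definition 17 can be made concrete with a short sketch. The encoding is ours, not from the paper: a literal is a (variable, polarity) pair, a cube a frozenset of literals, and an s.o.c. expression a set of cubes, per Definitions 7-9.

```python
from itertools import product

def covers(expr, m):
    # Definitions 11 and 12: a cube covers the intersection of its
    # literals' half spaces; an expression covers the union of its cubes
    return any(all(m[v] == p for (v, p) in cube) for cube in expr)

def cofactor(expr, lit):
    # Definition 16: drop the literal from cubes that contain it, keep
    # cubes without the variable, discard cubes with the complement
    var, pol = lit
    complement = (var, 1 - pol)
    return {cube - {lit} for cube in expr if complement not in cube}

# f = a b + a' c over the variables (a, b, c), indexed 0, 1, 2
f = {frozenset({(0, 1), (1, 1)}),    # cube a b
     frozenset({(0, 0), (2, 1)})}    # cube a' c
f_a  = cofactor(f, (0, 1))           # cofactor with respect to a
f_na = cofactor(f, (0, 0))           # cofactor with respect to a'

# Definition 17: f = a f_a + a' f_a' holds on every minterm
for m in product((0, 1), repeat=3):
    assert covers(f, m) == (covers(f_a, m) if m[0] else covers(f_na, m))
```

Here f_a reduces to the single cube b and f_a' to the single cube c, so the loop verifies f = a b + a' c cube by cube over the whole space.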
The task we face is the construction of a prime and irredundant expression f from two prime and irredundant expressions f1 and f0 obtained by Shannon expansion over the variable x_j (see Figure 1). If we can do this, we have solved the problem, because the leaves of our expansion tree are prime and irredundant unate expressions, and retaining primeness and irredundancy in every merge step guarantees the construction of a prime and irredundant complete expression.

[Figure 1: Shannon expansion — f branches over x_j into f1 and over x̄_j into f0]

2.1 Primeness

If we just construct f = x_j f1 + x̄_j f0, it is probably not prime and not irredundant. Let us focus on primeness first. In order to check if a cube is prime, we must try to expand it in all its dimensions,

which means we must try to remove all of its literals, and check whether we are hitting the off set of the function. If this is not the case, the literal can be discarded. But do we really have to try to expand every cube from f in all Boolean dimensions? Fortunately not. We can make use of the properties of f1, f0 and the Shannon expansion, thus minimizing the effort to make f prime. Consider:

    cb1 ∈ f1, and, therefore, x_j cb1 ∈ f

Let us try to expand cb1 in a direction different from x_j. Let x_j cb1' be the expanded cube. If the expansion is to be valid, we must have:

    x_j cb1' ⊆ f = x_j f1 + x̄_j f0

But:

    x_j cb1' ∩ x̄_j f0 = ∅

Therefore:

    x_j cb1' ⊆ f  ⟺  x_j cb1' ⊆ x_j f1  ⟺  cb1' ⊆ f1

This is clearly a contradiction, since f1 is prime. The dual proof with a cube from f0 is of course identical, and is not repeated here.

Now, let us expand x_j cb1 in the direction x_j. The result is:

    cb1 = x_j cb1 + x̄_j cb1

In order for the expansion to be valid, we must have:

    cb1 ⊆ f

But we already know that:

    x_j cb1 ⊆ x_j f1

So, we only have to check whether:

    x̄_j cb1 ⊆ f

But:

    x̄_j cb1 ∩ x_j f1 = ∅

Therefore:

    x̄_j cb1 ⊆ f  ⟺  x̄_j cb1 ⊆ x̄_j f0  ⟺  cb1 ⊆ f0

So, to ensure that the expansion is valid, we have to check whether cb1 is covered by f0. The same reasoning, of course, also holds for cubes cb0 from f0, which only have to be tested against f1. Thus, before performing the actual merging, we can find out which cubes can be expanded. The merging process now constructs:

    f = x_j f1' + x̄_j f0' + f1'' + f0''    (1.1)

with:

    f1 = f1' ∪ f1''    f1'' = { cb1 ∈ f1 | cb1 ⊆ f0 }
    f0 = f0' ∪ f0''    f0'' = { cb0 ∈ f0 | cb0 ⊆ f1 }

by means of Algorithm 1.

Algorithm 1: prime_merge

procedure prime_merge (f0, f1) {
    f0' = f0'' = f1' = f1'' = empty;
    for (all cubes cb0 from f0)
        if (cb0 covered by f1)

            f0'' = f0'' + cb0;
        else
            f0' = f0' + cb0;
    for (all cubes cb1 from f1)
        if (cb1 covered by f0)
            f1'' = f1'' + cb1;
        else
            f1' = f1' + cb1;
    return (f0', f1', f0'', f1'');
}

2.2 Redundancy

In order to make f irredundant as well as prime, we must now check whether any of its cubes are redundant. Therefore, we must test:

    ∃ cb ∈ f : cb ⊆ f \ {cb} ?    (1.2)

Note that f should be regarded as a set of cubes in equation (1.2). The question is: do we really need to test all cubes? There are two different types of cubes in f:

1. cubes from x_j f1' (x̄_j f0')
2. cubes from f1'' (f0'')

We treat both types separately in the next two sections.

2.2.1 Cubes from x_j f1'

To get a good look at the situation, we write equation (1.1) like:

    f = x_j f1' + x̄_j f0' + x_j f1'' + x̄_j f1'' + x_j f0'' + x̄_j f0''    (1.3)

Because cubes with x_j in them have an empty intersection with cubes with x̄_j, we can concentrate on the Boolean half space x_j = 1. When we imagine the Boolean space to be a two dimensional set of points, we can picture the situation as in Figure 2: x_j f1' and x_j f1'' partly overlap, and x_j f0'' is covered completely by their union. We can now rephrase the question: is there a cube x_j cb1 ∈ x_j f1', for which holds:

    x_j cb1 ⊆ x_j (f1' \ {cb1}) + x_j f1'' + x_j f0'' ?

Realizing that x_j = 1, we can write this like:

    cb1 ⊆ (f1' \ {cb1}) + f1'' + f0''    (1.4)

Unfortunately, there is no reason why this should not be possible. We can easily construct an example:

    f1 = a b + a c    f0 = a b c̄

Because:

    a b c̄ ⊆ a b + a c

our concern for primeness yields:

    f = x_j a b + x_j a c + a b c̄    with    f0'' = a b c̄

But:

    a b ⊆ a c + a b c̄

[Figure 2: Boolean space for merging — on the half space x_j = 1, the regions x_j f1' and x_j f1'' partly overlap and together completely cover x_j f0'']

as can easily be seen from Figure 3, and, therefore:

    x_j a b ⊆ x_j a c + a b c̄

or, in English, x_j a b is a redundant cube, and we should write f as:

    f = x_j a c + a b c̄

[Figure 3: Boolean space of a b, a b c̄ and a c]

The whole reasoning above can of course be repeated for the dual case with cubes from x̄_j f0'. Then, the Boolean half space x_j = 0 is regarded.

2.2.2 Cubes from f1''

The cubes from f1'' cover both x_j f1'' and x̄_j f1''. The cubes x̄_j f1'' are covered by the rest of f by definition. So, we only have to look at cubes x_j cb1 from x_j f1''. As can be seen from equation (1.4), we can rephrase this as: is there a cube cb1 from f1'' such that:

    cb1 ⊆ f1' + (f1'' \ {cb1}) + f0''    (1.5)

Unfortunately again, there is no reason for this to be impossible. We prove so by constructing an example. Let:

    f1 = a c̄ + a b + ā c    f0 = a + b c

Then:

    a b ⊆ a and a c̄ ⊆ a, and b c ⊆ ā c + a b

which yields:

    f1' = ā c    f1'' = a b + a c̄
    f0' = a      f0'' = b c

    f = x_j ā c + x̄_j a + a b + a c̄ + b c

The Boolean space of f1', f1'' and f0'' is depicted in Figure 4.

[Figure 4: Boolean space of a b, a c̄, b c and ā c]

It can clearly be seen that cube a b from f1'' is covered by cube a c̄ from f1'' and cube b c from f0''. Cube a b is, therefore, redundant in f, which should be written as:

    f = x_j ā c + x̄_j a + a c̄ + b c

The whole reasoning above can of course be repeated for the dual case with cubes from f0''. Then, only the Boolean half space x_j = 0 is regarded.

2.2.3 The redundancy removal algorithm

The two examples constructed above have shown that we have to test all cubes for redundancy. However, the test can be performed in a slightly simplified way, without the splitting variable x_j, and without some of the cubes. The algorithm is spelled out in Algorithm 2.
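The covering claim of the second example can be checked mechanically. The brute-force sketch below is ours (cubes encoded as frozensets of (variable, polarity) literals, a hypothetical encoding, not the paper's); it confirms that cube a b satisfies the covering test of equation (1.5) for f1' = ā c, f1'' = a b + a c̄ and f0'' = b c.

```python
from itertools import product

NVARS = 3
a, b, c = 0, 1, 2

def covers(expr, m):
    # an expression covers a minterm if one of its cubes does
    return any(all(m[v] == p for (v, p) in cube) for cube in expr)

def covered_by(cube, expr):
    # cube ⊆ expr: every minterm of the cube is covered by expr
    want = dict(cube)
    return all(covers(expr, m)
               for m in product((0, 1), repeat=NVARS)
               if all(m[v] == p for v, p in want.items()))

f1p  = {frozenset({(a, 0), (c, 1)})}               # f1'  = a' c
ab_cube = frozenset({(a, 1), (b, 1)})
f1pp = {ab_cube, frozenset({(a, 1), (c, 0)})}      # f1'' = a b + a c'
f0pp = {frozenset({(b, 1), (c, 1)})}               # f0'' = b c

# equation (1.5): a b is covered by f1' + (f1'' \ {a b}) + f0''
assert covered_by(ab_cube, f1p | (f1pp - {ab_cube}) | f0pp)
```

The two minterms of a b split exactly as in the text: a b c̄ falls into a c̄, and a b c falls into b c.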

Algorithm 2: redundancy removal

procedure remove_redundancy (f0', f1', f0'', f1'') {
    /* according to section 2.2.1 */
    for (all cubes cb0 from f0')
        /* use equation (1.4) */
        if (cb0 covered by f0' \ {cb0} + f0'' + f1'')
            f0' = f0' \ {cb0};
    for (all cubes cb1 from f1')
        /* use equation (1.4) */
        if (cb1 covered by f1' \ {cb1} + f1'' + f0'')
            f1' = f1' \ {cb1};
    /* according to section 2.2.2 */
    for (all cubes cb0 from f0'')
        /* use equation (1.5) */
        if (cb0 covered by f0' + f0'' \ {cb0} + f1'')
            f0'' = f0'' \ {cb0};
    for (all cubes cb1 from f1'')
        /* use equation (1.5) */
        if (cb1 covered by f1' + f1'' \ {cb1} + f0'')
            f1'' = f1'' \ {cb1};
    return (f0', f1', f0'', f1'');
}

3 Complete merging

We can now formulate the complete prime and irredundant merging in terms of the procedures prime_merge and remove_redundancy. It can be found in Algorithm 3.

Algorithm 3: complete merging

procedure merge (f0, f1, x_j) {
    (f0', f1', f0'', f1'') = prime_merge (f0, f1);
    (f0', f1', f0'', f1'') = remove_redundancy (f0', f1', f0'', f1'');
    return (x_j f1' + x̄_j f0' + f1'' + f0'');
}

This is the algorithm we have implemented in LOG_IRR. Section 2.2 has shown that the redundancy check is a relatively expensive operation in terms of CPU time: every cube is a candidate for redundancy. An alternative would be to maintain only the primeness property in every merging step, and remove the redundancy only at the top level. This influences both the quality of the result and the run time. It is, however, very difficult to predict whether quality and run times would decrease or increase. This should be tested when the subject of prime irredundant expressions is investigated further.

4 Testing for cube covering

As we have seen above, testing for primeness and testing for redundancy can both be performed by testing whether a specific cube is covered by a set of cubes.
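Algorithms 1 and 2 can be prototyped directly. The sketch below is ours, not the LOG_IRR implementation: it encodes cubes as frozensets of (variable, polarity) literals and uses a brute-force minterm-enumeration containment test in place of the fast tautology-based test of Section 4. It reproduces the first example of Section 2.2.1, where f1 = a b + a c and f0 = a b c̄ merge to x_j a c + a b c̄.

```python
from itertools import product

NVARS = 3  # variables a, b, c, indexed 0..2

def covers(expr, m):
    return any(all(m[v] == p for (v, p) in cube) for cube in expr)

def covered_by(cube, expr):
    # cube is covered by expr iff every minterm of the cube is covered
    want = dict(cube)
    return all(covers(expr, m)
               for m in product((0, 1), repeat=NVARS)
               if all(m[v] == p for v, p in want.items()))

def prime_merge(f0, f1):
    # Algorithm 1: split each child into expandable (f'') and
    # non-expandable (f') cubes
    f0pp = {cb for cb in f0 if covered_by(cb, f1)}
    f1pp = {cb for cb in f1 if covered_by(cb, f0)}
    return f0 - f0pp, f1 - f1pp, f0pp, f1pp

def remove_redundancy(f0p, f1p, f0pp, f1pp):
    # Algorithm 2: drop redundant cubes per equations (1.4) and (1.5)
    for cb in set(f0p):
        if covered_by(cb, (f0p - {cb}) | f0pp | f1pp):
            f0p = f0p - {cb}
    for cb in set(f1p):
        if covered_by(cb, (f1p - {cb}) | f1pp | f0pp):
            f1p = f1p - {cb}
    for cb in set(f0pp):
        if covered_by(cb, f0p | (f0pp - {cb}) | f1pp):
            f0pp = f0pp - {cb}
    for cb in set(f1pp):
        if covered_by(cb, f1p | (f1pp - {cb}) | f0pp):
            f1pp = f1pp - {cb}
    return f0p, f1p, f0pp, f1pp

# Section 2.2.1's example: f1 = a b + a c, f0 = a b c'
a, b, c = 0, 1, 2
f1 = {frozenset({(a, 1), (b, 1)}), frozenset({(a, 1), (c, 1)})}
f0 = {frozenset({(a, 1), (b, 1), (c, 0)})}

f0p, f1p, f0pp, f1pp = remove_redundancy(*prime_merge(f0, f1))
# merge result: f = x_j (a c) + (a b c'), i.e. x_j a b is dropped
assert f1p == {frozenset({(a, 1), (c, 1)})}
assert f0pp == f0 and not f0p and not f1pp
```

The enumeration makes covered_by exponential in NVARS, which is exactly why the paper replaces it with a Shannon-based tautology check; the control flow, however, mirrors Algorithms 1 and 2.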
According to [BRA84], a cube cb is covered by an expression f if and only if the cofactor of f with respect to cb is a tautology:

    cb ⊆ f  ⟺  f_cb ≡ 1    (1.6)
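A compact way to realize equation (1.6), ours and far simpler than the optimized checker the paper uses, is a plain Shannon-recursive tautology test applied to the cofactor (same hypothetical (variable, polarity) encoding as before):

```python
def cofactor(expr, lit):
    # cofactor with respect to one literal (Definition 16)
    var, pol = lit
    complement = (var, 1 - pol)
    return {cube - {lit} for cube in expr if complement not in cube}

def cofactor_cube(expr, cube):
    # the cofactor with respect to a cube cofactors literal by literal
    for lit in cube:
        expr = cofactor(expr, lit)
    return expr

def is_tautology(expr):
    # naive Shannon-expansion tautology check (exponential worst case)
    if frozenset() in expr:
        return True      # an empty cube covers the whole space
    if not expr:
        return False     # the empty expression covers nothing
    var = next(iter(next(iter(expr))))[0]   # split on some variable
    return (is_tautology(cofactor(expr, (var, 0))) and
            is_tautology(cofactor(expr, (var, 1))))

def cube_covered(cube, expr):
    # equation (1.6): cb ⊆ f iff f_cb is a tautology
    return is_tautology(cofactor_cube(expr, cube))

# f = a b + a c over the variables (a, b, c), indexed 0..2
f = {frozenset({(0, 1), (1, 1)}), frozenset({(0, 1), (2, 1)})}
assert cube_covered(frozenset({(0, 1), (1, 1), (2, 0)}), f)   # a b c'
assert not cube_covered(frozenset({(0, 1)}), f)               # a alone
```

Because each recursive step removes the split variable from every remaining cube, the recursion always terminates; unate-recursive splitting heuristics such as those of [BRA84] only make it faster.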

Because a lot of these tests have to be performed, a fast algorithm is required. Fortunately, such algorithms exist. For use in the program described in this paper, which we have baptized LOG_IRR, the algorithm described in [BRAY82] was implemented. This tautology checker is also based on Shannon expansion, and therefore fits rather well into the program.

5 Results

5.1 Size of the prime irredundant expressions

To judge the quality of the above algorithm, we have applied it to a large number of expressions from the circuit descriptions in the MCNC Workshop on Logic Synthesis benchmark set [LIS88a]. As a comparison, we also ran the same examples through ESPRESSO, a very well known two-level minimizer, which is clearly described in [BRA84]. Given the right options, ESPRESSO can also produce prime irredundant results; at the same time, it tries to minimize the number of cubes and literals in the result. As a third test, we ran all examples first through ESPRESSO, to get small prime irredundant forms, and then through LOG_IRR, in order to find out whether LOG_IRR is very sensitive to the input format.
Table 1: Sizes of prime irredundant expressions (number of literals)

              ESPRESSO          ESPRESSO + LOG_IRR                    LOG_IRR
example    s.o.c.  factored   s.o.c.  %diff  factored  %diff   s.o.c.  %diff  factored  %diff
5xp1          293       170      299   +2.0       165   -2.9      299   +2.0       165   -2.9
9sym          510       293      888  +74.1       259  -11.6      888  +74.1       307  -11.6
9symml        522       289      888  +70.1       260  -10.0      888  +70.1       256  -11.4
bw            423       302      430   +1.7       302      0      431   +1.9       307   +1.7
con1           23        19       23      0        19      0       23      0        19      0
duke2        1746       778     1748   +0.1       775   -0.4     1748   +0.1       803   +3.2
f2             36        24       36      0        24      0       36      0        24      0
f51m          319       174      325   +1.9       169   -2.9      325   +1.9       169   -2.9
misex1        122        89      122      0        89      0      122      0        91   +2.2
misex2        188       164      188      0       164      0      188      0       164      0
misex3      11540      2302    11665   +1.1      2335   +1.4    12118   +5.0      2238   -2.8
misex3c      1694       839     1769   +4.4       860   +2.5     1784   +5.3       860   +2.5
rd53          140        71      156  +11.4        75   +5.6      156  +11.4        71      0
rd73          840       240      876   +4.3       233   -2.9      876   +4.3       214  -10.8
rd84         1970       469     2041   +3.6       455   -3.0     2041   +3.6       398  -15.1
sao2          480       204      490   +2.1       199   -2.5      499   +4.0       209   +2.5
seq         17045      3954    17326   +1.6      4000   +1.2    17422   +2.2      3984   +0.8
vg2           804       335      804      0       335      0      804      0       278  -17.0
z4ml          252        89      252      0        89      0      252      0        73      0
Average                                +9.4              -1.3            +9.8             -3.2

As a measure of quality, we have taken the number of literals in the resulting sets of expressions. For multi-level implementations, the number of literals in the factored form is a better measure than the number of literals in the s.o.c. form. Both are listed in Table 1. If we look at the bottom row, which gives the average difference from the ESPRESSO results, it is clear that ESPRESSO finds the smallest s.o.c. form. This was only to be expected; ESPRESSO was carefully designed

for this purpose. The only surprising thing is that ESPRESSO improves the s.o.c. result of LOG_IRR by only about 10%. But if we look at the factored form, LOG_IRR wins the competition, by 3.2%. The combination ESPRESSO + LOG_IRR finishes second in both competitions, which means that it does not ruin the ESPRESSO results badly, but that it is better to use just LOG_IRR if one is interested in the size of the factored form. The interesting thing is to see what happens if we really synthesize all these circuits with the logic synthesis tools: LOG_IRR is meant to be used in that combination, while ESPRESSO aims at a two-level implementation like a PLA.

5.2 Quality of the completely synthesized circuits

The best way to judge the results from ESPRESSO and LOG_IRR is to go through all the steps of logic synthesis, and look at the design parameters area (number of cells, number of literals) and delay at that point. The results of this exercise can be found in Table 2 and Table 3. They were gathered by running the results from Table 1 through the logic optimization tool LOG_DECOM, and through the technology mapper LOG_MAPPER, both aiming at an implementation with (3, 3) AOI cells in a 1.6 micron CMOS technology.
Table 2: Results after complete synthesis (area)

              ESPRESSO       ESPRESSO + LOG_IRR                LOG_IRR
example     gates    #tr    gates  %diff    #tr  %diff    gates  %diff    #tr  %diff
5xp1           53    165       54   +1.9    156   -5.5       51   -3.8    149   -9.7
9sym           92    326       93   +1.1    278  -14.7       85   -7.6    263  -19.3
9symml         90    304       83   -7.8    284   -6.6       84   -6.7    248  -18.4
bw             83    268       95  +14.5    278   +3.7       88   +6.0    267   -0.4
con1           12     29       12      0     29      0       12      0     29      0
duke2         224    612      237   +5.8    690  +12.7      215   -4.0    599   -2.1
f2              8     28        8      0     28      0        8      0     28      0
f51m           56    169       52   -7.1    167   -1.2       55   -1.8    161   -4.7
misex1         30     94       30      0     94      0       28   -6.7     88   -6.4
misex2         72    161       72      0    161      0       72      0    161      0
misex3        624   2323      608   -2.6   2210   -4.9      585   -6.3   2130   -8.3
misex3c       264    851      270   +2.3    856   +0.6      262   -0.8    899   +5.6
rd53           24     74       31  +29.2     87  +17.6       25   +4.2     79   +6.8
rd73           68    243       74   +8.8    236   -2.9       67   -1.5    202  -16.9
rd84          139    481      138   -0.7    444   -7.7      113  -18.7    399  -17.0
sao2           85    261       74  -12.9    238   -8.8       80   -5.9    246   -5.7
seq          1219   4151     1171   -3.9   3982   -4.1     1194   -2.1   4142   -0.2
vg2            76    177       76      0    177      0       90  +18.4    212  +19.8
z4ml           31     74       31      0     74      0       29   -6.5     80   +8.1
Average                            +1.5           -1.1            -2.3           -3.6

The results show clearly that the relatively simple algorithm in LOG_IRR results on average in slightly better (3.6% smaller and 3.8% faster) multi-level implementations than the complex algorithms used in ESPRESSO. This is very likely due to the slightly larger s.o.c. result from LOG_IRR, which gives the other logic synthesis tools more freedom to find a good factorization.

Table 3: Results after complete synthesis (delay)

             ESPRESSO   ESPRESSO + LOG_IRR       LOG_IRR
example            ns        ns    %diff       ns    %diff
5xp1               32        36    +12.5       35     +9.4
9sym               60        49    -18.3       48    -20.0
9symml             59        49    -16.9       49    -16.9
bw                 44        47     +6.8       41     -6.8
con1               12        12        0       12        0
duke2              67        73     +9.0       67        0
f2                  9         9        0        9        0
f51m               34        33     -2.9       40    +17.6
misex1             21        21        0       20     -4.8
misex2             24        24        0       24        0
misex3            296       266    -10.1      269     -9.1
misex3c            98        99     +1.0      106     +8.2
rd53               22        26    +18.2       22        0
rd73               41        40     -2.4       44     +7.3
rd84               70        62    -11.4       55    -21.4
sao2               40        35    -12.5       37     -7.5
seq               746       723     -3.1      751     +0.7
vg2                39        39        0       28    -28.2
z4ml               19        19        0       19        0
Average                            -1.6              -3.8

[Figure 5: Run times versus problem size — CPU usage in seconds (log scale, 0.01 to 1000) plotted against example size in s.o.c. literals (log scale, 100 to 100000), with the slope of O(N^2) behavior shown for comparison]

5.3 Run time complexity

The theoretical run time complexity is exponential in the number of input variables, because:

- Shannon expansion can result in an exponential number of nodes in the expansion tree.
- Tautology checking is an NP-complete problem.

However, the experimental run times do not show very bad behavior, as can be seen in Figure 5. The run times are the times for the examples from Table 2, plus a larger one. They were obtained on an HP9000/750 workstation, which runs at approximately 76 MIPS. As a measure of problem size, we have taken the number of literals in the resulting s.o.c. expressions. For comparison, the slope of O(N^2) behavior is plotted. It is clear that run times do not increase more than quadratically with the problem size for these examples. It is also clear that, for the same problem size, run times can be very different. Run times are very reasonable, even for very large examples.

References

[BER90] BERKELAAR, M.R.C.M. and J.F.M. THEEUWEN, Real Area-Power-Delay Trade-off in the EUCLID Logic Synthesis System, Proceedings of the IEEE 1990 Custom Integrated Circuits Conference, Boston MA, May 1990, pp. 14.3.1-14.3.4.

[BER91] BERKELAAR, M.R.C.M. and J.F.M. THEEUWEN, Logic Synthesis with Emphasis on Area-Power-Delay Trade-off, Journal of Semicustom ICs, September 1991, pp. 37-42.

[BRA84] BRAYTON, R.K., G.D. HACHTEL, C.T. MCMULLEN and A.L. SANGIOVANNI-VINCENTELLI, Logic Minimization Algorithms for VLSI Synthesis, Kluwer Academic Publishers, 1984.

[HAC89] HACHTEL, G.D., R. JACOBY, K. KEUTZER and C. MORRISON, On the Relationship Between Area Optimization and Multifault Testability of Multilevel Logic, Proceedings of the MCNC Workshop on Logic Synthesis, 1989.

[HAC92] HACHTEL, G.D., R. JACOBY, K. KEUTZER and C. MORRISON, On Properties of Algebraic Transformations and the Synthesis of Multifault Irredundant Circuits, IEEE Transactions on Computer-Aided Design, March 1992, pp. 313-321.
[SHA48] SHANNON, C.E., The Synthesis of Two-Terminal Switching Circuits, Bell System Technical Journal, 1948.