Read-Once Functions (Revisited) and the Readability Number of a Boolean Function. Martin Charles Golumbic

Read-Once Functions (Revisited) and the Readability Number of a Boolean Function Martin Charles Golumbic Caesarea Rothschild Institute University of Haifa Joint work with Aviad Mintz and Udi Rotics

Outline of Talk Read-Once Functions; Motivation; CoGraphs and CoTrees; Graph of a Formula; Normality; Gurvich Theorem; Previous Work on Recognizing Read-Once Functions; Our Method for Recognition (the ROF Algorithm); Complexity of Checking Normality; Results; Advantages and Drawbacks; Conclusions

Read-Once Functions A Boolean function that has a factored form in which each variable appears only once. Example. A read-once function: F = aq + abp + abd = a(q + b(p + d))

Read-Once Functions A Boolean function that has a factored form in which each variable appears only once. Example. A read-once function: F = aq + abp + abd = a(q + b(p + d)). Example. Not read-once functions: F' = ab + bc + cd and F'' = ab + bc + ac. Read-once functions are assumed to be positive.

Our Motivation The read-once algorithm (ROF) is a dedicated factoring subroutine that handles the lower levels of the general factoring algorithm based on graph partitioning. [Diagram: "Factoring using Graph Partitioning" calls "ROF".]

Graph of a Boolean Function Each literal of the function F is mapped to a vertex of the co-occurrence graph Γ_F = (V, E), with V = {a1, ..., an}. Each prime implicant (minterm, cube) forms a clique in Γ_F (not necessarily maximal): E = {(ai, aj) : ai, aj ∈ Ck for some k}, where Ck is a prime implicant.

Graph of a Formula - Example F = acd + aef + ag + bcd + bef + bg, V = {a, b, c, d, e, f, g}. [Figure: the co-occurrence graph on these seven vertices.]
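This construction is easy to implement directly. A minimal Python sketch (the helper name co_occurrence_graph is ours, not from the talk), run on the example formula above:

```python
from itertools import combinations

def co_occurrence_graph(implicants):
    """Vertices are the variables; two variables are adjacent iff they
    occur together in some prime implicant (each implicant is a clique)."""
    vertices = set().union(*implicants)
    edges = set()
    for cube in implicants:
        for u, v in combinations(sorted(cube), 2):
            edges.add((u, v))
    return vertices, edges

# F = acd + aef + ag + bcd + bef + bg
F = [{'a','c','d'}, {'a','e','f'}, {'a','g'},
     {'b','c','d'}, {'b','e','f'}, {'b','g'}]
V, E = co_occurrence_graph(F)
print(sorted(V))   # ['a', 'b', 'c', 'd', 'e', 'f', 'g']
print(sorted(E))   # 12 edges: ac, ad, ae, af, ag, bc, bd, be, bf, bg, cd, ef
```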

CoGraphs Recursive Definition: A single vertex is a cograph. The disjoint union of cographs is a cograph. The join of cographs is a cograph, (i.e., adding all edges between them).

CoGraphs Recursive Definition: A single vertex is a cograph. The disjoint union of cographs is a cograph. The join of cographs is a cograph, (i.e., adding all edges between them). CoTree Representation: Leaves are labeled by the vertices. Union nodes labeled by 0 -- or by + Join nodes labeled by 1 -- or by *.

CoTrees (Example) [Figure: a cotree (internal nodes labeled 1 and 0) and its corresponding cograph.]

CoGraph Properties The complement of a cograph is also a cograph, and the cotree of the complement is obtained from the cotree of G by swapping the 0/1 labels of the internal nodes. The complement of a connected cograph is disconnected. Two vertices of a cograph G are adjacent iff their lowest common ancestor in the cotree is labeled 1. The cotree is a unique representation of the cograph (up to isomorphism). Two views: recursive construction and recursive decomposition.

P4-free Graphs P4: a chordless path with 3 edges and 4 vertices. A P4-free graph does not contain any copy of P4 as an induced subgraph. P4-free graphs are exactly the cographs! [Figure: a cograph and a graph that is not a cograph.]

CoGraph Characterization Theorem Theorem. The following conditions are equivalent: 1. G is a cograph * 2. G has a cotree representation ** 3. G is P4-free 4. For every subset X of the vertices with |X| > 1, either the induced subgraph G[X] is disconnected or its complement is disconnected. * (recursive construction) ** (recursive decomposition)

Recognizing CoGraphs A naïve algorithm for recognizing a given cograph and constructing the cotree: (1) If G is disconnected, label the root 0 and continue recursively on each component. (2) If G is connected, label the root 1, form the complement, and FAIL if the complement is connected; otherwise continue recursively.

Recognizing CoGraphs A naïve algorithm for recognizing a given cograph and constructing the cotree: (1) If G is disconnected, label the root 0 and continue recursively on each component. (2) If G is connected, label the root 1, form the complement, and FAIL if the complement is connected; otherwise continue recursively. The complexity of the naïve algorithm is O(n^3), but there are linear-time algorithms (Corneil et al.; Habib et al.).

Recognizing CoGraphs A naïve algorithm for recognizing a given cograph and constructing the cotree: (1) If G is disconnected, label the root 0 and continue recursively on each component. (2) If G is connected, label the root 1, form the complement, and FAIL if the complement is connected; otherwise continue recursively. The complexity of the naïve algorithm is O(n^3), but there are linear-time algorithms (Corneil et al.; Habib et al.). The term cograph stands for complement reducible graph, since it can be decomposed using the naïve algorithm.

CoGraph Generation - Example [Figure: the running example graph and the first steps of the recursive cotree construction (subtrees T0-T2).]

CoGraph Generation Example (Cont. 1) [Figure: further steps of the recursive cotree construction (subtrees T3-T7).]

CoGraph Generation Example (Cont. 2) [Figure: further steps of the recursive cotree construction (subtrees T8-T11).]

CoGraph Generation Example (Cont. 3) [Figure: the resulting cotree, with * (join) and + (union) internal nodes over the leaves a, b, c, d, e, f, g.]

Recall: Co-occurrence Graph of a Boolean Function Each literal of the function F is mapped to a vertex of the co-occurrence graph Γ_F = (V, E), with V = {a1, ..., an}. Each prime implicant (minterm, cube) forms a clique in Γ_F (not necessarily maximal): E = {(ai, aj) : ai, aj ∈ Ck for some k}, where Ck is a prime implicant.

Graph of a Formula - Example F = acd + aef + ag + bcd + bef + bg, V = {a, b, c, d, e, f, g}. [Figure: the co-occurrence graph on these seven vertices.]

Extracting a Function from a Graph Before: from a function to a graph. Now: from a graph to a function. For an arbitrary graph Γ, we transform each maximal clique into a prime implicant (minterm) of F_Γ. [Figure: a graph on vertices a, b, c, d, e, f.] F_Γ = abc + ad + df + cef. In general, constructing F_Γ is an NP-complete problem, since we must find all maximal cliques.
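For intuition, a small sketch of the graph-to-function direction using NetworkX's maximal-clique enumeration, on a graph consistent with the slide's example (its maximal cliques are abc, ad, df and cef; the edge list below is our reconstruction of the lost figure):

```python
import networkx as nx

# A graph whose maximal cliques are {a,b,c}, {a,d}, {d,f}, {c,e,f}
G = nx.Graph([('a','b'), ('a','c'), ('b','c'),
              ('a','d'), ('d','f'),
              ('c','e'), ('c','f'), ('e','f')])

# Each maximal clique becomes a prime implicant of F_Gamma
terms = sorted(''.join(sorted(c)) for c in nx.find_cliques(G))
print(' + '.join(terms))   # abc + ad + cef + df
```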

Normality F is normal if mapping it to its graph Γ_F and back to a function yields the original function, i.e. F_Γ = F. Example: Is F1 = abc normal? Is F2 = ab + ac + bc normal? [Figure: the triangle on a, b, c.] Both have the triangle as their graph, whose only maximal clique is {a, b, c}, so F1 is normal and F2 is not. That is, F is normal iff the maximal cliques of Γ_F are precisely the prime implicants of F.

Gurvich Theorem Let F be a positive Boolean function and let Γ_F be the graph of F. Then the following statements are equivalent: (1) F is a read-once function. (2) F is normal and its graph Γ_F contains no induced subgraph isomorphic to P4 (i.e., Γ_F is a cograph). (3) The graphs Γ_F and Γ_{F^d} of the function F and of the dual function F^d are complementary. (4) For every prime implicant P of F and every prime implicant D of the dual F^d, |P ∩ D| = 1.

Example F = x1x2 + x2x3 + x3x4 + x4x5 + x5x1; its co-occurrence graph is the chordless 5-cycle C5. F is normal, but Γ_F is not a cograph (it has a P4). F^d = x1x2x4 + x2x3x5 + x3x4x1 + x4x5x2 + x5x1x3; its co-occurrence graph is the complete graph K5. F^d is not normal, but Γ_{F^d} is a cograph (it has no P4).

Proposed Method for Recognition [Flow diagram: function f, given as a Boolean formula (SOP/POS) → Build Graph → co-occurrence graph Γ_f → CoGraph Recognition → cotree → Normality Checking → read-once expression for f.]

Complexity Uses Gurvich's theorem and a cograph recognition algorithm. Normality is checked in polynomial time. Result: the proposed algorithm has polynomial complexity O(L·N), where L is the formula's size and N is the number of the function's variables (the support).

Previous Work on Read-Once Recognition Hayes [1975]: recognition and factoring of fanout-free functions based on variable adjacency.* Peer and Pinter [1995]: recognition and factoring of non-repeatable functions based on non-repeatable trees.* * Both algorithms have non-polynomial complexity. Additional theoretical work has been done by Gurvich [1991]; Karchmer, Linial, Newman, Saks and Wigderson [1993]; Goldman, Kearns and Schapire [1993]; Bshouty, Hancock and Hellerstein [1995]; and others.

Read-Once Recognition Algorithm (Golumbic, Mintz and Rotics, 2004-08) Step 0: Check that the function f is positive (unate). Step 1: Build the co-occurrence graph Γ_f of f. Step 2: Test whether Γ_f is P4-free and construct the cotree T for Γ_f; otherwise, exit with failure. Step 3: Test whether f is a normal function and output T as the read-once expression; otherwise, exit with failure.
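For very small inputs, the whole test can also be done by brute force, directly from Gurvich's characterization (normal and P4-free). A sketch, assuming f is given as its complete list of prime implicants over positive variables (so Step 0 is implicit); the function names are ours:

```python
from itertools import combinations

def is_read_once(implicants):
    """Brute-force Gurvich test: f (a positive DNF given by all of its prime
    implicants, each a set of variables) is read-once iff f is normal and
    its co-occurrence graph is P4-free.  Exponential; small inputs only."""
    V = sorted(set().union(*implicants))
    adj = {v: set() for v in V}
    for cube in implicants:
        for u, w in combinations(cube, 2):
            adj[u].add(w)
            adj[w].add(u)

    # P4-freeness: among 4-vertex graphs with 3 edges, only P4 has
    # degree sequence 1,1,2,2 inside the chosen vertex set.
    def induces_p4(S):
        return sorted(len(adj[v] & set(S)) for v in S) == [1, 1, 2, 2]
    if any(induces_p4(S) for S in combinations(V, 4)):
        return False

    # Normality: the maximal cliques must be exactly the prime implicants.
    def is_clique(S):
        return all(w in adj[u] for u, w in combinations(S, 2))
    cliques = []
    for r in range(len(V), 0, -1):
        for S in combinations(V, r):
            S = set(S)
            if is_clique(S) and not any(S < C for C in cliques):
                cliques.append(S)
    return sorted(map(sorted, cliques)) == sorted(map(sorted, implicants))

print(is_read_once([{'a','q'}, {'a','b','p'}, {'a','b','d'}]))   # True  (aq + abp + abd)
print(is_read_once([{'a','b'}, {'b','c'}, {'c','d'}]))           # False (ab + bc + cd has a P4)
```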

How do we do it efficiently?

CoGraph and CoTree Generation Use one of the fast algorithms by Corneil et al., or use the naive algorithm: If the graph has only one vertex, the algorithm ends the recursion with success. If the graph is connected and it is the initial graph, the algorithm calls itself with the complement of the graph. If the graph is connected and is not the initial graph, the algorithm ends the recursion with failure. If the graph is disconnected, then for each connected sub-graph the algorithm calls itself with the complement of that sub-graph.
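A self-contained sketch of this recursion, phrased via condition 4 of the characterization theorem (union node when the graph is disconnected, join node when its complement is disconnected, failure when both are connected); it is equivalent to the naive algorithm above but does not reflect the O(n^3) analysis or the linear-time algorithms. Run here on the co-occurrence graph of the running example:

```python
from itertools import combinations

def cotree(vertices, edges):
    """Naive cotree construction: returns a nested tuple ('0', children) /
    ('1', children) / leaf vertex, or None if the graph is not a cograph."""
    E = {frozenset(e) for e in edges}

    def components(vs, es):
        vs, comps = set(vs), []
        while vs:
            comp, stack = set(), [next(iter(vs))]
            while stack:
                v = stack.pop()
                if v in comp:
                    continue
                comp.add(v)
                stack.extend(w for w in vs - comp if frozenset((v, w)) in es)
            vs -= comp
            comps.append(comp)
        return comps

    def build(vs, es):
        if len(vs) == 1:
            return next(iter(vs))                      # leaf
        comps = components(vs, es)
        if len(comps) > 1:                             # disconnected: union node
            kids = [build(c, {e for e in es if e <= c}) for c in comps]
            return None if None in kids else ('0', kids)
        co_es = {frozenset(p) for p in combinations(vs, 2)} - es
        co_comps = components(vs, co_es)
        if len(co_comps) == 1:                         # G and complement both connected
            return None                                #  -> contains a P4, not a cograph
        kids = [build(c, {e for e in es if e <= c}) for c in co_comps]
        return None if None in kids else ('1', kids)

    return build(set(vertices), E)

# Co-occurrence graph of F = acd + aef + ag + bcd + bef + bg
V = 'abcdefg'
E = [('a','c'),('a','d'),('c','d'),('a','e'),('a','f'),('e','f'),('a','g'),
     ('b','c'),('b','d'),('b','e'),('b','f'),('b','g')]
print(cotree(V, E))   # a join ('1') of {a, b} with the union of cd, ef and g
```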

Checking Normality: Is F_Γ = F? Compare the input SOP/POS representation of F with the generated function F_Γ (both represented by parse trees). Build, for P (the prime implicants of F) and for C (the cliques of Γ), a bit matrix in which each row is the characteristic vector of the corresponding subset of variables, and check the two matrices for equivalence (sort lexicographically). [Figure: the input tree and the generated tree, each annotated with its bit vectors.]

The Matrices P and C of F and F_Γ P represents the prime implicants of F; C represents the prime implicants of F_Γ, i.e. the maximal cliques of Γ_F. Variable ai is denoted by bit i = 1, so each row of a matrix is an n-bit number encoding a set of variables. [Figure: the input tree and the generated tree with their bit vectors.]

Checking Normality Using the Cotree T: Clique Calculations Bottom Up For a node a of T, we denote by Ta the subtree of T rooted at a; Ta is also the cotree representing the subgraph of Γ_f induced by the set of labels of the leaves of Ta. Construct the set C(T) of maximal cliques of Γ_f recursively, traversing the cotree T from bottom to top. More precisely, we construct the set of maximal subcliques C(Ta) for each internal node a, combining them as we move up the tree using two operations, set union and set join, and the following lemma:

Lemma 1. Let h be an internal node of the cotree T, and let h1, ..., hr be the children of h in T. (i) If h is labeled 0 in the cotree, then C(Th) = C(Th1) ∪ ... ∪ C(Thr). (ii) If h is labeled 1 in the cotree, then C(Th) = C(Th1) × ... × C(Thr), the set join that takes the union of one clique from each child. For example, {{a1}, {a2}} × {{a3, a4}, {a5, a6}, {a7}} = {{a1, a3, a4}, {a1, a5, a6}, {a1, a7}, {a2, a3, a4}, {a2, a5, a6}, {a2, a7}}, or, using bit vectors, (1, 2) * (12, 48, 64) = (13, 49, 65, 14, 50, 66).
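Lemma 1 translates directly into a small combining routine over bit-vector encodings (variable ai corresponds to bit 2^(i-1), as in the example slides that follow); this sketch reproduces the example above, with the function name ours:

```python
def combine(label, children_cliques):
    """Combine the children's clique lists at an internal cotree node,
    following Lemma 1: set union at a 0-node, set join at a 1-node.
    Cliques are bit vectors (variable a_i <-> bit 2**(i-1))."""
    if label == '0':                        # union node: concatenate the lists
        return [c for lst in children_cliques for c in lst]
    result = children_cliques[0]            # join node: OR one clique per child
    for lst in children_cliques[1:]:
        result = [c | d for c in result for d in lst]
    return result

# The example from Lemma 1: {a1},{a2} joined with {a3,a4},{a5,a6},{a7}
print(combine('1', [[1, 2], [12, 48, 64]]))   # [13, 49, 65, 14, 50, 66]
```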

Checking Normality Example: The Input Tree F = acd + aef + ag + bcd + bef + bg. [Figure: the parse tree of F. Leaves carry the bit values a=1, b=2, c=4, d=8, e=16, f=32, g=64; each product node "multiplies" (ORs) its leaves, giving (13), (49), (65), (14), (50), (66); appending them at the top-level sum gives the vector list (13, 49, 65, 14, 50, 66).]

Checking Normality Example: The Generated Tree F_Γ = (a + b)(cd + ef + g). [Figure: the tree derived from the cotree. The union nodes collect (1, 2) for a + b and (12, 48, 64) for cd + ef + g; the join ("multiply") node yields the vector list (13, 65, 49, 14, 66, 50).]

Comparing the Vectors For F, the vectors represent the prime implicants of F by definition. For F_Γ, the vectors represent the maximal cliques of the cograph, and hence the prime implicants of F_Γ. By sorting the vectors, the two lists can easily be compared. Important complexity issue (heuristic speed-up): pre-compute the number of cliques (and compare); pre-compute the size of F_Γ (and compare).
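Putting the comparison and the two counting pre-checks together, a minimal sketch (the vectors are the ones computed on the example slides; is_normal is our name for the routine):

```python
def is_normal(prime_implicant_vectors, clique_vectors):
    """F is normal iff the maximal cliques of Gamma_F are exactly the prime
    implicants of F; compare sorted bit-vector lists, cheap pre-checks first."""
    P, C = prime_implicant_vectors, clique_vectors
    if len(P) != len(C):                           # same number of cliques?
        return False
    if sum(bin(v).count('1') for v in P) != sum(bin(v).count('1') for v in C):
        return False                               # same total length?
    return sorted(P) == sorted(C)                  # full lexicographic comparison

# Input-tree vectors vs. generated-tree vectors from the example slides
print(is_normal([13, 49, 65, 14, 50, 66], [13, 65, 49, 14, 66, 50]))   # True
```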

Complexity To bound the complexity, the algorithm also computes a parameter s(Ta): the number of cliques in C(Ta). By comparing s(Ta) with the number of prime implicants k at each step, a speed-up mechanism is obtained, ensuring that the number of maximal cliques never exceeds the number of prime implicants (a necessary condition for normality). To this we add an additional pre-computed parameter L(Ta): the total length of the list of cliques at Ta, namely L(Ta) = Σ { |C| : C ∈ C(Ta) }. In this way, at the root of the cotree we can also check the necessary (but not sufficient) condition that the length L(T) equals |F|, again bounding the complexity before actually computing the clique set C(T).

Complexity Building the co-occurrence graph G is O(Σ ni²), where ni is the number of variables in implicant Pi. Testing whether G is a cograph and building the cotree is O(n²). Checking normality is O(L·n). Result: the proposed algorithm has polynomial complexity O(L·n), where L is the formula's size and n is the number of variables.

Empirical Results Comparing ROF with the traditional factoring methods Good Factor (GF) and Quick Factor (QF) of SIS, and with PPF of Peer-Pinter. All of the algorithms yield the read-once factorization. The next table compares the algorithms' CPU times (in seconds).

Results (Cont.)

Formula   SOP     Literals   PPF      GF       QF     IROF
L2_b10    10240   20         44.67    Fail     0.11   0.3
L4_b3     3072    24         8.43     4.67     0.11   0.1
L4_b6     7290    24         25.1     147.41   0.19   0.21
L6_b4     672     20         0.41     0.3      0.03   0.02
L6_b8a    132     52         0.48     0.06     0.04   0.02
L6_b8b    24192   52         345.04   Fail     1.52   0.74
L8_b5     3380    29         6.16     5.88     0.09   0.09
L10_b3    2160    30         2.67     2.33     0.11   0.07
L14_b3    6720    42         22.45    20.96    0.44   0.2

Advantages and Drawbacks Advantages Fast method to recognize read-once functions. Fast method to factor read-once functions. Low complexity O(L*n) Drawbacks Factors only read-once functions.

Conclusions A very fast algorithm for recognizing and factoring read-once functions is presented. The algorithm appears as a subroutine in the general factoring algorithm of [GM 99]. It is based on cograph recognition and on normality checking. A comparison with other methods is shown.

Learning a Read-Once Function Input: an oracle (black box) to evaluate F at any given point, where F is not known but IS known to be read-once. Output: a read-once expression for F. Example: Is (1,1,0,0) true? If YES, then (1,1,1,0), (1,1,0,1), (1,1,1,1) are all true. If NO, then (1,0,0,0), (0,1,0,0), (0,0,0,0) are all false.

Learning a Read-Once Function Angluin, Hellerstein, Karpinski (1993): Theorem. The Read-Once Learning problem can be solved using O(n³) time and O(n²) queries. They showed how to build the co-occurrence graph Γ using an oracle.

Open Questions #1 What can be said about RECOGNITION if F is given in some other type of representation? Comment: If F is some other formula (not a DNF or CNF) or if F is represented as a BDD, or whatever, we might have to pay a high price to convert it into a DNF and use the GMR method.

Lisa replies: If F is a non-monotone DNF formula, for example, the complement of the recognition problem (i.e. does F not represent a monotone read-once function?) is NP-complete. Thank you

Related Topics and Applications Readability of a Boolean Function Factoring General Boolean Functions

The Readability Number What if F is not read-once? Could it be read-twice? Readability Number k: The smallest value of k such that F can be factored so that each variable appears at most k times.

The Readability Number Theorem: If F is a normal function and Γ_F is a partial k-tree, then F has readability number at most 2^k. Moreover, such a formula can be constructed in O(n^(k+1)) time. Corollary: If Γ_F is a tree, then F is read-twice.

Example: A 2-tree

Open Questions #2 What other graphs give read-twice functions? Grids give read-twice functions; others? Is our 2^k upper bound tight for partial k-trees? Find other families for which the readability can be determined. Remark. Our 2^k method also holds for normal functions for which Γ_F has an ordering of its vertices v1, v2, ..., vn such that vi is adjacent to at most k of v1, v2, ..., v(i-1).

Factoring General Boolean Functions Using Graph Partitioning Aviad Mintz Martin C. Golumbic University of Haifa

Cluster Intersection Graph Definition: G(C, E) is an undirected weighted graph where C is the set of clusters (cubes) of the function, E joins clusters Ci and Cj that share literals, and the weight W(i, j) = |Ci ∩ Cj| is the number of common literals. Example: F = abd + abg + acd + cg. [Figure: vertices {a,b,d}, {a,b,g}, {a,c,d}, {c,g} with edge weights 2, 2, 1, 1, 1.]
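A small sketch of building this weighted graph for the example above (clusters are indexed 0-3 in the order listed; the function name is ours):

```python
from itertools import combinations

def cluster_intersection_graph(clusters):
    """Weighted cluster intersection graph: one vertex per cluster (cube);
    an edge of weight |Ci & Cj| joins clusters that share literals."""
    weights = {}
    for (i, ci), (j, cj) in combinations(enumerate(clusters), 2):
        common = len(ci & cj)
        if common:
            weights[(i, j)] = common
    return weights

# F = abd + abg + acd + cg  (the example on this slide)
F = [{'a', 'b', 'd'}, {'a', 'b', 'g'}, {'a', 'c', 'd'}, {'c', 'g'}]
print(cluster_intersection_graph(F))
# {(0, 1): 2, (0, 2): 2, (1, 2): 1, (1, 3): 1, (2, 3): 1}
```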

Separability Function Measure the separability of the cluster intersection graph by a graph partitioning algorithm. Try to partition the graph evenly (to minimize the number of stages). Choose the minimal separability and return its value together with the partition (A, B), the two sets of vertices.
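On small instances, the separability can be computed by brute force over all roughly even bipartitions. A sketch, applied to the cluster intersection graph computed in the previous sketch (this ignores the heuristic partitioning the talk actually relies on):

```python
from itertools import combinations

def separability(weights, n):
    """Try every roughly even bipartition (A, B) of the n clusters and return
    the minimum total weight of edges crossing the cut, with the partition."""
    best = (float('inf'), None, None)
    for half in combinations(range(n), n // 2):
        A = set(half)
        B = set(range(n)) - A
        cut = sum(w for (i, j), w in weights.items() if (i in A) != (j in A))
        if cut < best[0]:
            best = (cut, A, B)
    return best

# Cluster intersection graph of F = abd + abg + acd + cg (previous sketch)
W = {(0, 1): 2, (0, 2): 2, (1, 2): 1, (1, 3): 1, (2, 3): 1}
print(separability(W, 4))   # (4, {0, 1}, {2, 3})
```

On the later example, the same computation applied to the POS graph G* yields separability 1, consistent with Xfactor choosing to partition T* there.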

Form Conversions P2S and S2P Functions P2S converts a sum of products to a product of sums; S2P converts a product of sums to a sum of products; P2SS2P is a shortcut for both.

Factoring Algorithm Using Graph Partitioning - Xfactor Input: parse tree T of the Boolean function being factored. Xfactor(T): If T is a read-once function, construct the read-once tree and exit. Otherwise, construct the cluster intersection graph G corresponding to T and calculate its separability. If the separability of G is not 0, build T* by P2SS2P, construct the cluster intersection graph G*, and calculate its separability. If the separability of G is lower than the separability of G*, partition T accordingly and call Xfactor on each part; otherwise, partition T* and call Xfactor on each part.

Factoring Algorithm - Example F = abd + abg + acd + cg. G(C, E): [Figure: the cluster intersection graph on {a,b,d}, {a,b,g}, {a,c,d}, {c,g} with edge weights 2, 2, 1, 1, 1.]

Factoring Algorithm - Example (Cont. 1) F* = (a + c)(a + g)(b + c)(d + g). G*(C*, E*): [Figure: the cluster intersection graph on {a,c}, {a,g}, {b,c}, {d,g} with edge weights 1, 1, 1.] Xfactor chooses the second option and partitions G* evenly: F1 = (a + g)(d + g) and F2 = (a + c)(b + c).

Factoring Algorithm - Example (Cont. 2) F1 and F2 are read-once functions: F1 = (a + g)(d + g) = ad + g and F2 = (a + c)(b + c) = ab + c. Thus F = F1 · F2 = (ad + g)(ab + c). Algebraic factoring (GF) produces a worse result: F = a(b(d + g) + cd) + cg. Boolean factoring produces the same result as Xfactor.

THANK YOU