Efficient AND/OR Search Algorithms for Exact MAP Inference Task over Graphical Models


Efficient AND/OR Search Algorithms for Exact MAP Inference Task over Graphical Models Akihiro Kishimoto IBM Research, Ireland Joint work with Radu Marinescu and Adi Botea

Outline 1 Background 2 RBFAOO Recursive Best-First AND/OR Search with Overestimation 3 SPRBFAOO Shared-memory Parallelization of RBFAOO 4 Parallel Dovetailing + SPRBFAOO 5 Conclusions


Graphical Models

A graphical model is a tuple ⟨X, D, F⟩, where:
- X = {X_1, ..., X_n} is a set of variables with finite domains D = {D_1, ..., D_n}
- F = {f_1, ..., f_r} is a set of positive real-valued local functions; each f_j ∈ F is defined on a scope Y_j ⊆ X, with f_j : Y_j → R+

It represents the function C(X) = ∏_{j=1..r} f_j(Y_j).

MAP (maximum a posteriori): C* = C(x*) = max_X ∏_{j=1..r} f_j(Y_j)

This is equivalent to the following energy minimization: C* = C(x*) = min_X ∑_{j=1..r} −log f_j(Y_j)
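
The product/min-sum equivalence above can be sketched in a few lines. This is a toy model assumed for illustration (two binary variables, two factors), not one of the benchmark instances:

```python
import math
from itertools import product

# Hypothetical toy model: each factor is (scope, table), the table keyed
# by the value tuple of the scope.
factors = [
    (("A",),     {(0,): 0.4, (1,): 0.6}),
    (("A", "B"), {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.3, (1, 1): 0.7}),
]
variables = ["A", "B"]
domains = {"A": [0, 1], "B": [0, 1]}

def cost(assignment):
    """C(x): product of all factor values under a full assignment."""
    c = 1.0
    for scope, table in factors:
        c *= table[tuple(assignment[v] for v in scope)]
    return c

def energy(assignment):
    """Sum of -log f_j(Y_j): the equivalent minimization objective."""
    return sum(-math.log(table[tuple(assignment[v] for v in scope)])
               for scope, table in factors)

all_assignments = [dict(zip(variables, vals))
                   for vals in product(*(domains[v] for v in variables))]
best = max(all_assignments, key=cost)          # MAP by brute force

# Maximizing the product and minimizing the energy pick the same x*.
assert min(all_assignments, key=energy) == best
print(best, cost(best))
```

Brute force is only viable for tiny models, which is exactly why the rest of the talk is about search.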

Example: Primal graph for C(A, B, C, D, E) = f_1(A, B, C) · f_2(A, B, D) · f_3(B, D, E)

Solving Exact MAP Inference by AND/OR Tree Search

Exact MAP inference can be formulated as an AND/OR tree search problem:
- OR node: assign one value to one variable
- AND node: select one unassigned variable from the set of variables
- The pseudo tree determines the set of variables to choose from at each AND node (see the next slide)
- The MAP inference task is converted to the energy minimization task

The size of the AND/OR search tree is O(exp(h)), where h is the depth of the pseudo tree [Dechter & Mateescu, 2004], whereas the size of the OR search tree is O(exp(n)), where n is the number of variables.

Example: AND/OR Search Tree (a) Pseudo tree (b) AND/OR search tree. Each edge from an OR node n to an AND node m is annotated with a cost w(n, m). The goal is to minimize the sum of the edge costs in a solution tree: minimization at OR nodes and summation at AND nodes. The optimal solution tree (shown in red) encodes the assignment achieving the best value of C.
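
The min-at-OR / sum-at-AND evaluation can be sketched with a small assumed model (not the slide's example): `children` encodes a pseudo tree, and `w` holds hypothetical edge costs w(n, m) of assigning var=value given the tuple of ancestor values on the pseudo-tree path:

```python
domains = {"A": [0, 1], "B": [0, 1], "C": [0, 1]}
children = {"A": ["B", "C"], "B": [], "C": []}   # pseudo tree rooted at A

# Hypothetical edge costs, keyed by (variable, ancestor-value tuple).
w = {
    ("A", ()):   {0: 1.0, 1: 2.0},
    ("B", (0,)): {0: 3.0, 1: 4.0},
    ("B", (1,)): {0: 0.5, 1: 2.5},
    ("C", (0,)): {0: 2.0, 1: 1.0},
    ("C", (1,)): {0: 4.0, 1: 4.5},
}

def or_value(var, path):
    # OR node: minimize over the values of `var`
    return min(w[(var, path)][v] + and_value(var, v, path)
               for v in domains[var])

def and_value(var, value, path):
    # AND node: sum the values of the now-independent child subproblems
    return sum(or_value(c, path + (value,)) for c in children[var])

print(or_value("A", ()))  # cost of the optimal solution tree
```

The AND-node sum is what makes the decomposition pay off: once A is assigned, the subproblems below B and C are solved independently rather than over their joint space.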

Solving Exact MAP Inference by AND/OR Graph Search

Two nodes that contain identical subproblems can be merged into one node; i.e., the search space of exact MAP inference is a finite DAG. Identical subproblems are identified by the context: a partial assignment that separates the subproblem from the rest of the problem graph (see the next slide) [Dechter & Mateescu, 2007].

Example: AND/OR Search Graph (a) Pseudo tree (b) AND/OR search graph. The nodes for E are merged based on the values assigned to variables B and D.

Previous Approaches based on AND/OR Search

AND/OR Branch-and-Bound (AOBB) performs depth-first branch and bound with context-based caching [Marinescu & Dechter, 2009]
- Pros: memory efficient, as a depth-first search
- Cons: explores many subspaces that are irrelevant to optimal solutions

Best-First AND/OR Search (AOBF) performs AO*-based best-first search [Marinescu & Dechter, 2009]
- Pros: explores the smallest search space
- Cons: requires a huge amount of memory

Many best-first search algorithms can be translated into depth-first search algorithms, e.g., A* into RBFS/IDA*, proof-number search into depth-first proof-number search, and SSS* into MTD(f). Such a translation inherits the advantages of both depth-first and best-first search.

Contributions

RBFAOO: a new recursive best-first search algorithm over AND/OR graphs for exact MAP inference
- Requires limited (even linear) memory
- Explores nodes in best-first order

SPRBFAOO: shared-memory parallelization of RBFAOO
- Carefully selects the set of nodes to examine in parallel

Distributed-memory parallelization of RBFAOO based on dovetailing
- Passes different problem formulations to RBFAOO/SPRBFAOO

Outline 1 Background 2 RBFAOO Recursive Best-First AND/OR Search with Overestimation 3 SPRBFAOO Shared-memory Parallelization of RBFAOO 4 Parallel Dovetailing + SPRBFAOO 5 Conclusions

Recursive Best-First AND/OR Search with Overestimation (RBFAOO) [Kishimoto & Marinescu, UAI 2014]

Basic idea: transform best-first search (AO*-like) into depth-first search using a threshold-controlling mechanism (explained next).
- Based on Korf's recursive best-first search (RBFS) [Korf, 1993] and depth-first proof-number search [Nagai, 2002]
- Adapted to the context-minimal AND/OR graph for graphical models
- Nodes are still expanded in best-first order
- Uses an admissible function h that returns a lower bound on the optimal solution cost

Node values, called q-values, are updated in the usual manner from the values of their successors:
- OR node (minimization): q(n) = min_{n' ∈ succ(n)} (w(n, n') + q(n'))
- AND node (summation): q(n) = ∑_{n' ∈ succ(n)} q(n')

Initially q(n) = h(n), the heuristic lower bound on the cost below n, and q(n) improves as the search progresses. Some nodes will be re-expanded. RBFAOO uses caching (limited memory), done efficiently based on contexts, and overestimation of the threshold.
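
The threshold mechanism can be sketched on a plain OR tree (the real RBFAOO works on the context-minimal AND/OR graph with a cache table; the tree, costs, and zero heuristic below are assumptions for illustration):

```python
import math

tree = {                       # node -> list of (edge cost, child); leaves absent
    "root": [(1.0, "a"), (3.0, "b")],
    "a":    [(5.0, "a1"), (2.0, "a2")],
    "b":    [(1.0, "b1")],
}

def h(node):
    return 0.0                 # trivially admissible lower bound

def rbfs(node, threshold):
    """Search below `node` until its backed-up q-value exceeds `threshold`.
    Returns (q, solved): the updated q-value and whether it is exact."""
    if node not in tree:       # leaf: exact remaining cost 0
        return 0.0, True
    kids = [(w + h(c), w, c, False) for (w, c) in tree[node]]
    while True:
        kids.sort(key=lambda k: k[0])
        q, w, best, solved = kids[0]
        if solved:
            return q, True     # best child is exact and still minimal: optimal
        if q > threshold:
            return q, False    # backtrack; the caller may re-expand us later
        # The child's threshold is the next-best alternative; RBFAOO would
        # additionally inflate it by a small delta to reduce re-expansions.
        alt = kids[1][0] if len(kids) > 1 else math.inf
        sub_q, sub_solved = rbfs(best, min(threshold, alt) - w)
        kids[0] = (w + sub_q, w, best, sub_solved)

print(rbfs("root", math.inf))  # optimal cost of the toy tree
```

The recursion expands nodes in best-first order while only keeping the current path and its siblings, which is where the limited-memory property comes from.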

Example - step 1

Expand OR node A, generating its AND successors ⟨A, 0⟩ and ⟨A, 1⟩. The best successor is ⟨A, 0⟩. Set the threshold θ(⟨A, 0⟩) = 4, the value of the next-best successor ⟨A, 1⟩: we can backtrack to ⟨A, 1⟩ once the updated cost of the subtree below ⟨A, 0⟩ exceeds θ = 4.

Example - step 2

Expand AND node ⟨A, 0⟩, generating its OR successors B and C. Update the node value: q(⟨A, 0⟩) = h(B) + h(C) = 3 ≤ θ, so the threshold is not exceeded (OK).

Example - step 3

Expand OR node B, generating its AND successor ⟨B, 0⟩. Update the node values: q(B) = 4 and q(⟨A, 0⟩) = 6 > θ, so the threshold is exceeded (NOT OK).

Example - step 4

Backtrack from ⟨A, 0⟩ and select the next-best node ⟨A, 1⟩. Set the threshold θ(⟨A, 1⟩) = 6 (the updated value of the left subtree). The q-value of each expanded node is cached.

Overestimation: some of the nodes in the subtree below ⟨A, 0⟩ may be re-expanded.

Overestimation

[Plot: average CPU time (sec) as a function of the overestimation δ for grids, pedigree, and protein.]

A simple overestimation scheme minimizes node re-expansions: inflate the threshold by a small δ > 0, i.e., θ ← θ + δ. In practice, δ is determined experimentally (e.g., δ = 1 worked best). The current best solution cost is also used for bounding.

Properties: Correctness and Completeness

Theorem (correctness). Given a graphical model M = ⟨X, D, F⟩, if RBFAOO solves M with an admissible heuristic function h, its solution is always optimal.

Theorem (completeness). Let M = ⟨X, D, F⟩ be a graphical model with primal graph G, let T be a pseudo tree of G, and let C_T be the context-minimal AND/OR search graph based on T (also a finite DAG). Assume that RBFAOO preserves the q-values of the nodes n_1, n_2, ..., n_k on the current search path and the q-values of the n_i's siblings. Then RBFAOO eventually returns an optimal solution or proves that no solution exists.

Benchmarks

Problem instances from the PASCAL2 Inference Challenge (pedigree, grid, protein), also available at http://graphmod.ics.uci.edu/ (Rina Dechter's group). 64-bit C++ code; 2.4GHz 12-core CPU; 80GB of RAM.

Algorithms:
- AOBB(i): depth-first AND/OR Branch and Bound (full caching)
- AOBF(i): best-first AND/OR search (full caching)
- RBFAOO(i): recursive best-first AND/OR search with overestimation

All guided by the pre-compiled MBE(i) heuristic.

Results: Genetic linkage analysis

instance (n,k,w,h)      algorithm   i=6                i=8                i=10               i=12
pedigree7               AOBB        -                  -                  -                  -
(1068,4,28,140)         AOBF        oom                oom                oom                oom
                        RBFAOO      -                  -                  -                  2210 / 345204317
pedigree9               AOBB        -                  -                  -                  -
(1118,7,25,123)         AOBF        oom                oom                1846 / 30506650    1379 / 20960401
                        RBFAOO      1084 / 195214857   728 / 136764248    522 / 97410715     248 / 46922921
pedigree13              AOBB        -                  -                  -                  -
(1077,3,30,125)         AOBF        oom                oom                oom                oom
                        RBFAOO      -                  -                  -                  -
pedigree19              AOBB        -                  -                  -                  -
(793,5,21,51)           AOBF        oom                oom                oom                oom
                        RBFAOO      -                  -                  1753 / 319268527   834 / 168262596
pedigree30              AOBB        825 / 113195179    1450 / 198371250   244 / 34182326     63 / 10855277
(1289,5,20,105)         AOBF        83 / 2648120       103 / 2689106      45 / 1717523       24 / 867988
                        RBFAOO      20 / 5435997       19 / 5401921       14 / 3840692       5 / 1406493
pedigree39              AOBB        -                  935 / 125740961    107 / 15616376     5 / 885551
(1272,5,20,77)          AOBF        307 / 9740964      215 / 8073776      53 / 2347928       8 / 384757
                        RBFAOO      79 / 19804239      67 / 16260143      14 / 3461943       2 / 480866
pedigree41              AOBB        -                  -                  -                  -
(1062,5,29,119)         AOBF        oom                oom                oom                oom
                        RBFAOO      -                  -                  -                  2706 / 373308327

Table: Results for pedigree networks. Each cell is CPU time (seconds) / number of nodes expanded; "-" means no solution within the 1-hour time limit, "oom" means out of memory. RBFAOO(i) ran with a 10-20GB cache table (134,217,728 table entries).

Results: Binary grid networks

instance (n,k,w,h)      algorithm   i=6                i=8                i=10               i=12
50-20-5                 AOBB        -                  -                  -                  -
(400,2,27,97)           AOBF        oom                oom                oom                oom
                        RBFAOO      1163 / 214829892   736 / 142564959    385 / 80803927     232 / 48848448
75-20-5                 AOBB        -                  738 / 111785572    309 / 43858649     36 / 5997367
(400,2,27,99)           AOBF        2268 / 30767273    567 / 20761132     182 / 5685498      42 / 1766240
                        RBFAOO      212 / 46289779     89 / 19603768      39 / 8451032       9 / 2026625
75-22-5                 AOBB        -                  -                  -                  2206 / 314621887
(484,2,30,107)          AOBF        oom                oom                1123 / 34528523    743 / 22103512
                        RBFAOO      563 / 107126385    643 / 118981360    227 / 44947693     153 / 29325424
75-23-5                 AOBB        -                  -                  -                  -
(529,2,31,122)          AOBF        oom                oom                1751 / 39532238    417 / 11103193
                        RBFAOO      1860 / 304935340   1109 / 198613807   455 / 87285533     106 / 20952230
90-23-5                 AOBB        -                  -                  1479 / 195188949   560 / 74507590
(529,2,31,116)          AOBF        970 / 32478634     376 / 12937697     289 / 10087022     91 / 3169720
                        RBFAOO      277 / 52920346     105 / 20736738     71 / 14460847      18 / 3602104
90-26-5                 AOBB        -                  -                  -                  -
(676,2,36,136)          AOBF        1016 / 25948278    1108 / 29700313    505 / 16035732     552 / 16728882
                        RBFAOO      241 / 44068170     183 / 33284922     68 / 12738955      65 / 12176988

Table: Results for grid networks. Each cell is CPU time (seconds) / number of nodes expanded; "-" means no solution within the 1-hour time limit, "oom" means out of memory. RBFAOO(i) ran with a 10-20GB cache table (134,217,728 table entries).

Results: Protein side-chain interaction

instance (n,k,w,h)      algorithm   i=2                i=3                i=4                i=5
pdb1a3c                 AOBB        -                  -                  2218 / 65175805    oom
(144,81,15,32)          AOBF        oom                oom                oom                oom
                        RBFAOO      1915 / 45513907    344 / 259261       oom
pdb1aac                 AOBB        129 / 2919570      8 / 2694           204 / 1302         oom
(85,81,11,21)           AOBF        2851 / 3195539     11 / 6264          205 / 3072         oom
                        RBFAOO      51 / 1148212       8 / 1492           204 / 783          oom
pdb1acf                 AOBB        996 / 55994055     2672 / 162495198   16 / 1593          136 / 4767
(90,81,9,22)            AOBF        oom                oom                17 / 4021          139 / 10464
                        RBFAOO      22 / 987416        56 / 2553896       16 / 1090          137 / 3212
pdb1ad2                 AOBB        259 / 7770890      134 / 3806154      657 / 3312980      2552 / 274955
(177,81,9,33)           AOBF        831 / 1250161      394 / 715399       1109 / 1049637     2595 / 135265
                        RBFAOO      36 / 1227741       42 / 858899        585 / 1218780      2543 / 113780
pdb1ail                 AOBB        4 / 177150         31 / 75051         610 / 1474224      oom
(62,81,8,23)            AOBF        80 / 66207         47 / 15817         1728 / 928375      oom
                        RBFAOO      2 / 78677          30 / 16427         599 / 1311325      oom
pdb1atg                 AOBB        -                  6 / 154434         260 / 9348036      236 / 24412
(175,81,12,39)          AOBF        oom                38 / 119195        632 / 1196429      247 / 30072
                        RBFAOO      620 / 24347033     4 / 71446          32 / 430747        236 / 11545

Table: Results for protein networks. Each cell is CPU time (seconds) / number of nodes expanded; "-" means no solution within the 1-hour time limit, "oom" means out of memory. RBFAOO(i) ran with a 10-20GB cache table (134,217,728 entries).

Results

[Two plots of normalized total CPU time (sec) as a function of the i-bound for AOBB, AOBF, and RBFAOO: pedigree (left) and grids (right).]

Figure: Normalized total CPU time as a function of the i-bound.

Impact of overestimation

[Two plots as a function of the overestimation δ (from -0.2 to 1.2) for grids, pedigree, and protein: node re-expansion rate (left) and percentage of solved instances (right).]

Figure: Node re-expansion rate and percentage of instances solved by RBFAOO(i) as a function of the overestimation rate δ. Time limit 1 hour; i = 10 for grids and pedigrees, i = 4 for protein.

Outline 1 Background 2 RBFAOO Recursive Best-First AND/OR Search with Overestimation 3 SPRBFAOO Shared-memory Parallelization of RBFAOO 4 Parallel Dovetailing + SPRBFAOO 5 Conclusions

Shared-memory Parallel RBFAOO (SPRBFAOO) [Kishimoto, Marinescu & Botea, NIPS 2015]

Obstacles to efficient parallelization: shared-memory parallel search suffers from search and synchronization overheads.
- Search overhead: extra states examined by the parallel search
- Synchronization overhead: idle time wasted at synchronization points, including locks for mutual exclusion

These overheads are usually inter-dependent, and it is difficult to find the right mix of them theoretically. RBFAOO has the best-first characteristic of trying to examine only the promising portions of the search space, i.e., it attempts not to examine useless portions. How can we introduce parallelism without increasing the search overhead?

Shared-memory Parallel RBFAOO (SPRBFAOO): Basic Idea

Use the ideas behind parallel depth-first proof-number search [Kaneko, 2010]. All threads start the same search from the root and share one cache table. A virtual q-value vq(n) for node n is used to control the parallel search:
1. Initially, vq(n) is set to q(n).
2. While a thread examines n, vq(n) is incremented by a small value η.
3. When all threads finish examining n, vq(n) is reset to q(n).

Effective load balancing is obtained without any sophisticated schemes, while the promising portions of the search space are still examined.
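
The virtual q-value bookkeeping can be sketched as follows (an assumed data layout, not SPRBFAOO's actual cache-entry format): each entry tracks q(n) and the number of threads currently below n, so readers see vq(n) = q(n) + η · active(n) and are steered toward the next-most-promising sibling instead of duplicating work:

```python
import threading

class Entry:
    def __init__(self, q):
        self.q = q          # backed-up q-value
        self.active = 0     # threads currently examining this node

class VirtualQ:
    def __init__(self, eta):
        self.eta = eta
        self.lock = threading.Lock()
        self.table = {}

    def vq(self, n, h):
        """Virtual q-value seen by searching threads."""
        with self.lock:
            e = self.table.get(n)
            return (e.q + self.eta * e.active) if e else h(n)

    def enter(self, n, h):
        """A thread starts examining n: temporarily inflate vq(n)."""
        with self.lock:
            e = self.table.setdefault(n, Entry(h(n)))
            e.active += 1

    def leave(self, n, q):
        """A thread finishes n: publish its backed-up value; when the
        last thread leaves, vq(n) falls back to q(n)."""
        with self.lock:
            e = self.table[n]
            e.active -= 1
            e.q = q
```

The inflation is transient, so once all threads leave a node its value is again the true q(n), preserving correctness.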

Example Step 1 Using 2 threads and η = 1.

Example Step 2 Using 2 threads and η = 1.

Example Step 3 Using 2 threads and η = 1.

Example Step 4 Using 2 threads and η = 1.

Example Step 5 Using 2 threads and η = 1.

Properties: Correctness

Theorem. Given a graphical model M = ⟨X, D, F⟩ and an admissible heuristic h, SPRBFAOO returns optimal solutions.

Benchmarks

The same set of problem instances used in the RBFAOO experiments (pedigree, grid, protein), also available at http://graphmod.ics.uci.edu/ (Rina Dechter's group). 64-bit C++ code; 2.4GHz 12-core CPU; 80GB of RAM.

Results: Small i-bounds, 12 threads, η = 0.01

[Three log-log scatter plots of total CPU time (sec), RBFAOO on the x-axis vs. SPRBFAOO on the y-axis: grids (i = 6), pedigree (i = 6), protein (i = 2).]

Results: Large i-bounds, 12 threads, η = 0.01

[Three log-log scatter plots of total CPU time (sec), RBFAOO on the x-axis vs. SPRBFAOO on the y-axis: grids (i = 14), pedigree (i = 14), protein (i = 4).]

Total CPU time (sec) for RBFAOO vs. SPRBFAOO (time limit 2 hours).

Outline 1 Background 2 RBFAOO Recursive Best-First AND/OR Search with Overestimation 3 SPRBFAOO Shared-memory Parallelization of RBFAOO 4 Parallel Dovetailing + SPRBFAOO 5 Conclusions

Dovetailing-based Parallel RBFAOO [Marinescu, Kishimoto & Botea, ECAI 2016]

Parallel dovetailing is a simple parallelization scheme for when the best parameter configuration is not known beforehand. Given a graphical model M and m processing cores, run m instances of an exact MAP algorithm in parallel; as the differing parameter configuration θ_i (1 ≤ i ≤ m), each instance uses a pseudo tree generated by a randomized min-fill heuristic [Kjaerulff, 1990]. When a solution is found by any of the m instances, that solution is proven optimal. The performance of AND/OR search is strongly influenced by the quality of the pseudo tree, which determines the variable ordering [Marinescu & Dechter, 2009].

This approach often works efficiently in distributed-memory environments: in addition to search and synchronization overheads, distributed-memory parallel search that partitions the search space suffers from communication overhead, whereas parallel dovetailing incurs almost no communication overhead.
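
The dovetailing scheme above can be sketched with a stand-in solver (threads here for brevity; the real scheme targets distributed memory). `solve_with_ordering` and its return value are hypothetical: each instance would build its pseudo tree from a differently seeded randomized min-fill ordering and run RBFAOO/SPRBFAOO to optimality, and since every instance is exact, the first answer returned is already proven optimal:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import random
import time

def solve_with_ordering(seed):
    # Stand-in for: build pseudo tree with randomized min-fill(seed),
    # then solve to optimality with RBFAOO/SPRBFAOO.
    rng = random.Random(seed)
    time.sleep(rng.random() * 0.05)   # simulated per-ordering search time
    return 42.0, seed                 # same hypothetical optimum, different effort

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(solve_with_ordering, s) for s in range(4)]
    # Take whichever ordering finishes first; its result is optimal.
    optimum, winner = next(as_completed(futures)).result()
    for f in futures:                 # the remaining runs are redundant
        f.cancel()

print(optimum)
```

The design point is that the only coordination needed is "first finisher wins", which is why communication overhead is negligible.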

Baseline: SPRBFAOO runs on 12 cores guided by the pre-compiled MBE(i) heuristic (i = 6, 14 for pedigree and grid; i = 2, 4 for protein).

[Three plots of average speedup as a function of the pseudo-tree ordering (1-14): grid networks (i = 6, i = 14), pedigree networks (i = 6, i = 14), protein networks (i = 2, i = 4).]

Average speedups for SPRBFAOO.

Outline 1 Background 2 RBFAOO Recursive Best-First AND/OR Search with Overestimation 3 SPRBFAOO Shared-memory Parallelization of RBFAOO 4 Parallel Dovetailing + SPRBFAOO 5 Conclusions

Conclusions

- RBFAOO: limited-memory best-first AND/OR search for graphical models, bridging the gap between depth-first and best-first search methods
- SPRBFAOO: a parallelization of RBFAOO that yields considerable speedups (up to 7-fold using 12 threads), especially on hard problem instances
- Parallel dovetailing + SPRBFAOO: a combination that further improves RBFAOO's performance
- An empirical evaluation on a set of difficult pedigree networks clearly demonstrates the benefit of these approaches

Future work:
- Extend RBFAOO into an anytime scheme
- Exploit better pseudo trees
- Parallelization based on partitioning search spaces in distributed-memory environments
- Parallelization of heuristic construction