Lagrangian relaxations for multiple network alignment

Noname manuscript No. (will be inserted by the editor)

Lagrangian relaxations for multiple network alignment

Eric Malmi, Sanjay Chawla, Aristides Gionis

Received: date / Accepted: date

Abstract We propose a principled approach for the problem of aligning multiple partially overlapping networks. The objective is to map multiple graphs into a single graph while preserving vertex and edge similarities. The problem is inspired by the task of integrating partial views of a family tree (genealogical network) into one unified network, but it also has applications, for example, in social and biological networks. Our approach, called Flan, introduces the idea of generalizing the facility location problem by adding a non-linear term to capture edge similarities and to infer the underlying entity network. The problem is solved using an alternating optimization procedure with a Lagrangian relaxation. Flan has the advantage of being able to leverage prior information on the number of entities, so that when this information is available, Flan is shown to work robustly without the need to use any ground truth data for fine-tuning method parameters. Additionally, we present three multiple-network extensions to an existing state-of-the-art pairwise alignment method called Natalie. Extensive experiments on synthetic as well as real-world datasets on social networks and genealogical networks attest to the effectiveness of the proposed approaches, which clearly outperform a popular multiple network alignment method called IsoRankN.

E. Malmi, HIIT, Aalto University, Espoo, Finland, eric.malmi@aalto.fi
S. Chawla, Qatar Computing Research Institute, Doha, Qatar, schawla@qf.org.qa
A. Gionis, HIIT, Aalto University, Espoo, Finland, aristides.gionis@aalto.fi

1 Introduction

The multiple network alignment problem encodes the task of de-duplicating vertices in a collection of graphs while preserving similarity between vertices and edges. Vertex similarity is typically modeled by comparing vertex attributes or feature vectors. Edge similarity encodes structural dependencies among the input graphs; in particular, a desirable network alignment should preserve the edges of the input graphs to the largest extent. A need for data de-duplication often arises when interrelated datasets from different sources have to be integrated. Many datasets naturally have a network structure, which is why the potential application areas of network alignment methods are plentiful. These methods have attracted a significant amount of attention in the area of biological networks [8], in particular for the problem of aligning protein-protein interaction networks in order to identify functional orthologs across different species [13]. Other applications include social network alignment [15, 30], ontology alignment [3], and image matching in computer vision [9].

However, our initial motivation to study this problem arises from the application of merging family trees (genealogical networks), which is a common problem for genealogists. Services like Ancestry.com and MyHeritage attract millions of paying subscribers who upload their own family tree to the service, trying to find new relatives to add to their network. Automatic alignment of individual family trees can help people to find new relatives and trace their ancestry further back in time. Consider, for example, the top half of Figure 1, which shows three individuals, A, B and C, and the partial views they have on their ancestry. The bottom half shows the underlying family tree, which is hidden and unknown. The trees contain two types of information: vertex attributes (not shown in the figure) and relationships between vertices. Due to the difficulty of interpreting historical documents and errors in these documents, the vertices and edges of individual trees may contain errors. Furthermore, family trees are not trees in the graph-theoretic sense, as they contain cycles such as Mother-FirstChild-Father-SecondChild-Mother. Note that we draw an edge between every parent-child pair, whereas Figure 1 follows the layout commonly used by genealogists. In conclusion, the problem of merging family trees is an instance of the multiple network alignment problem.

To address the multiple network alignment problem, we introduce a novel extension of the facility location problem [27] to account for both vertex and edge similarity. In particular, we present a non-linear extension of facility location to specifically favor mappings where neighboring vertices are mapped to other neighbors. To the best of our knowledge, this extension is a novel problem in its own right and of independent interest. We refer to the extension as the facility location formulation for aligning multiple networks (Flan). Since Flan is NP-hard, we provide an approximate solution using a Lagrangian relaxation approach. A practical benefit of Flan is that it allows the user to fix the number of entities (i.e., vertices in the hidden graph) in cases when prior
information on that is available. This can help with parameter selection, which is a common problem when applying network alignment methods in practice.

Fig. 1 The bottom half of the figure shows an underlying but unknown family tree, indicating the location of three individuals A, B and C. The square vertices represent males and the round vertices females. Each individual has a partial view of the tree. The objective is to reconstruct the underlying and hidden tree from the partial views shown in the top half.

Natalie, proposed by Klau and collaborators [12, 18], is one of the best performing existing network alignment methods [8]. Similar to our approach, Natalie formulates the pairwise problem as a non-linear integer program and provides an approximate solution using a Lagrangian relaxation approach. However, Natalie only supports pairwise alignments. Therefore, we investigate three approaches to extending Natalie to multiple networks.

Finally, we present an experimental comparison between Flan and the extensions of Natalie on synthetic data, social network data, and family tree data. We demonstrate that Flan performs well in all of these experiments, especially in terms of precision, whereas the proposed Natalie extensions typically yield the highest recall. Furthermore, if prior information on the number of entities is available, Flan works robustly without any ground truth data, which is typically needed for fine-tuning method parameters.

To summarize, our main contributions are:

- We formalize the multiple network alignment problem using a novel non-linear extension of the facility location problem. This extension captures both the problem of inferring the vertices referring to the same entity and the problem of inferring the underlying entity network. We refer to the extension as the facility location formulation for aligning multiple networks (Flan).
- We propose an alternating optimization approach to obtain an approximate solution of Flan, using the technique of Lagrangian relaxation. The advantage of the Lagrangian relaxation is that we automatically obtain instance-level approximation bounds on the quality of the solution.
- If prior information on the number of entities is available, Flan is shown to work robustly without the need to use any ground truth data for fine-tuning method parameters.
- We present three multiple-network extensions to Natalie, which is a state-of-the-art pairwise network alignment method. A progressive extension with edge updates (prognatalie++) is shown to provide a good experimental performance.

The code and the data for reproducing the experiments are publicly available at:

The rest of the paper is structured as follows. In Section 2, we present the facility location formulation for the multiple network alignment problem, and in Section 3, we describe the Lagrangian relaxation approach for solving it, as well as the extensions of Natalie to multiple networks. In Section 4, we discuss the related work, and in Section 5, we provide an experimental evaluation of the different methods. Finally, we draw conclusions in Section 6.

2 Problem formulation

The input to the network-alignment problem consists of k graphs G_1 = (V_1, E_1), ..., G_k = (V_k, E_k). We define V = \bigcup_{i=1}^k V_i to be the set of all vertices in the k input graphs and we set n = |V|. As the correspondence between the graph vertices is unknown initially, we assume that all vertex sets are pairwise disjoint. The case that some correspondences among vertices of input graphs are known can be easily incorporated in our framework.

We assume that the input graphs are manifestations of an underlying entity graph G_e = (U, E_e), where the vertex set U denotes the underlying entities and E_e the edges between them. The objective of the network-alignment problem is to infer the entity graph and find an assignment X : V \to U so that each vertex in the input graphs can be mapped to its underlying entity vertex. We further assume that the entities are represented by a subset of the vertices in the input graphs, that is, U \subseteq V.

The alignment of the input graphs is driven by the graph structure, so that, to the largest extent possible, neighbors in the input graphs should map to neighbors in the entity graph, as well as by the similarity between vertices in different graphs. In particular, we assume that each vertex has a set of attributes. A distance function (dissimilarity) d(i, j) is then defined between each pair of vertices i and j. The distance d(i, j) is derived by comparing the attributes of i and j.

For the relationship between the input graphs and the underlying graph we consider the following characteristics:

- The attributes of a vertex of an input graph may have been distorted from the attributes of the corresponding entity of the underlying graph.
- The vertices of an input graph may correspond to only a subset of the entities (V_i \subseteq U).
- The edges between entities are preserved with probability p < 1 (not necessarily fixed for all edges), so the edges of an input graph may correspond to only a subset of the edges in the underlying entity graph. In other words, the entity graph contains the edges of the input graphs and potentially some missing edges, that is, E_e = (\bigcup_{i=1}^k E_i) \cup E_m, where E_m is a set of potentially missing edges.

From the above considerations it follows that the assignment X : V \to U we are searching for should satisfy the following properties:

(P1) Vertices are mapped to entities with as similar attributes as possible.
(P2) Adjacent vertices are assigned to adjacent entities.

To find a set of entities U, the edges between them E_e, and an assignment X : V \to U that respects properties (P1) and (P2), we formulate a non-linear integer programming (IP) problem. The IP formulation is an optimization problem over binary variables y_i and x_{ij} and an adjacency matrix B. The first two variables encode a solution in terms of the sought set U and the assignment X, respectively. In particular, y_i indicates whether there is some vertex j \in V that has been assigned to entity i, and x_{ij} indicates whether vertex i is assigned to entity j. The binary adjacency matrix B encodes the missing edges E_m to be optimized and the edges of the input graphs E_I = \bigcup_{i=1}^k E_i. The integer program is the following:

  min_{x,y,B}  f \sum_j y_j + \sum_{i,j} d(i,j) x_{ij} - g \sum_{i,j,k,l} A_{ik} B_{jl} x_{ij} x_{kl} + \gamma \sum_{(i,j) \notin E_I} B_{ij}^2,    (1)

  such that  x_{ij} \le y_j,  i, j = 1, ..., n,    (2)
             \sum_j x_{ij} = 1,  i = 1, ..., n,    (3)
             \sum_{i \in V_m} x_{ij} \le 1,  j = 1, ..., n  and  m = 1, ..., k,    (4)
             x_{ij}, y_j \in {0, 1},  i, j = 1, ..., n.    (5)

In the above formulation, f, g, and \gamma are scalar parameters, while A and B are adjacency matrices representing the graph structure of the problem instance. In particular, entry A_{ik} indicates whether the vertices i and k of the input graphs are neighbors, while entry B_{jl} indicates whether entities j and l are neighbors.

The integer program presented above can be seen as a non-linear extension of the uncapacitated facility-location problem [16]. Selecting vertex i \in V to be an entity so that other vertices can be mapped to it corresponds to setting y_i = 1, which can be seen as opening vertex i as a facility. The parameter f represents the cost of opening a facility, so the first term in the objective function (1) penalizes for every opened entity. Note that one extreme solution
is to consider every vertex as an entity; setting f = 0 would make such a solution optimal. The second term in the objective function (1) penalizes for assigning vertices to dissimilar entities. Recall that d(i, j) is the distance between vertex i and entity j, computed using the attributes of i and j, and note that the cost is paid only when vertex i is assigned to entity j, expressed by x_{ij} = 1. The third term uses the adjacency matrices A and B and gives a discount of g for each pair of adjacent vertices that are assigned to adjacent entities. The fourth term adds L2 regularization to the adjacency matrix B by introducing a cost \gamma for every added missing edge. Parameter \gamma controls the amount of evidence needed for introducing new edges to the entity graph; if \gamma = 0, the complete graph becomes the optimal solution for B, whereas if \gamma = g/2, a new edge is added when at least one pair of adjacent vertices has been assigned to the corresponding entity pair, as shown later in Section 3.3.5.

Next we discuss the constraints of the integer program given in (2)-(4). The first set of constraints (2) ensures that vertices are only assigned to opened entities. The second set of constraints (3) ensures that each vertex is assigned to exactly one entity. Finally, the third set of constraints (4) prevents two vertices in the same input graph from being assigned to the same entity.

An alternative formulation results by having prior knowledge about the total number of entities N_e. In this case we can set a constraint for the number of opened entities, so the objective function (1) is replaced by

  min_{x,y,B}  \sum_{i,j} d(i,j) x_{ij} - g \sum_{i,j,k,l} A_{ik} B_{jl} x_{ij} x_{kl} + \gamma \sum_{(i,j) \notin E_I} B_{ij}^2,    (6)

and we need to include the constraint

  \sum_i y_i \le N_e,    (7)

on top of constraints (2)-(5). Both problem formulations require three parameters as input: the first formulation requires f, g, and \gamma, while the second formulation requires N_e, g, and \gamma.
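To make the roles of the four terms and of the parameters f, g, and \gamma concrete, the following sketch evaluates objective (1) for a given candidate solution. It is only an illustration under assumed dense NumPy representations of x, y, A, B, and d; the function and variable names are ours, not the authors' implementation.

```python
import numpy as np

def flan_objective(x, y, B, A, d, E_I, f, g, gamma):
    """Evaluate objective (1) for binary x (n x n), y (n,), B, A (n x n),
    distance matrix d (n x n), and the set E_I of input-graph edge pairs (j, l)."""
    opening_cost = f * y.sum()                       # f * sum_j y_j
    attribute_cost = float((d * x).sum())            # sum_{i,j} d(i,j) x_ij
    # discount for neighboring vertices mapped to neighboring entities:
    # sum_{i,j,k,l} A_ik B_jl x_ij x_kl
    match_discount = g * float(np.einsum('ik,jl,ij,kl->', A, B, x, x))
    # L2 penalty only for edges that are not already input-graph edges
    missing_mask = np.ones_like(B, dtype=float)
    for (j, l) in E_I:
        missing_mask[j, l] = 0.0
    reg = gamma * float(((B * missing_mask) ** 2).sum())
    return opening_cost + attribute_cost - match_discount + reg
```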

For both of these formulations we have the following result.

Proposition 1 Multiple network alignment, as defined by optimizing objective function (1) subject to constraints (2)-(5), or optimizing objective function (6) subject to constraints (2)-(5) and (7), is NP-hard.

Proof Following the proof for the NP-hardness of pairwise network alignment by El-Kebir et al. [12], we show a reduction from the Clique decision problem, which asks to determine whether a k-vertex clique G_c = (V_c, E_c) exists in graph G = (V, E). We consider an instance of the multiple network alignment problem with two networks, i.e., k = 2, which are set to G_1 = G (the input graph of the Clique problem) and G_2 = G_c (the k-clique). Let f = 0 (or N_e = |V| if we consider the problem with the fixed number of entities) and

  d(u, v) = 0, if (u = v and u \in V) or (u \in V_c and v \in V);  \infty, otherwise,

so that the vertices of G are encouraged to be assigned to themselves, whereas the vertices in G_c are encouraged to be assigned to any vertices in V. A k-vertex clique exists in G if and only if the cost of the optimal multiple network alignment for input graphs G and G_c is -g(|E| + |E_c|). This cost corresponds to the sum of the discounts we get from aligning each neighbor pair in G to itself and being able to align each neighbor pair in G_c to another neighbor pair in G. Note that the same pairwise cost could be achieved by aligning the neighbor pairs in G_c to non-neighbor pairs in G and adding edges between the corresponding entities. However, this would result in a higher overall cost due to the regularization term \gamma \sum_{(i,j) \notin E_I} B_{ij}^2 when \gamma > 0.

3 Methods

We now discuss our methods for solving the integer programs introduced in the previous section. Our algorithms (Section 3.3) use the Lagrangian relaxation framework, so we first give a brief overview of the framework (Section 3.1). The Lagrangian relaxation framework is also used by Natalie by Klau et al. [12, 18] for aligning two networks. We present Natalie and propose an extension of it to multiple networks in Section 3.2.

3.1 Background: Lagrangian relaxation framework

The Lagrangian relaxation approach [14] aims to obtain approximate solutions to constrained optimization problems, like the ones presented in the previous section. The method dualizes/relaxes some constraint(s) by adding them to the objective with multipliers \lambda. In practice, the constraint(s) to be relaxed are chosen so that the relaxed problem Z_LD(\lambda) can be solved in polynomial time. Next, the problem Z_LD(\lambda*) = max_\lambda Z_LD(\lambda) is solved using the subgradient method [24]. Since the value of Z_LD(\lambda) is a lower bound for the original problem for any \lambda, the value of Z_LD(\lambda*) yields a lower bound (l*) for the optimal solution. Every relaxed solution computed during the subgradient optimization is modified using some heuristics to construct feasible solutions for the original non-relaxed problem. The feasible solution with the lowest cost provides an upper bound (u*) for the optimal solution. If l* = u*, then the optimal solution for the original problem has been found. Even for many NP-hard problems, the optimal solution is often found in a reasonable time with this approach [14].
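The framework can be summarized by the following schematic loop. This is a minimal sketch of the generic procedure just described, not the authors' code: solve_relaxed, make_feasible, cost, and subgradient are placeholders that a concrete method (such as Flan in Section 3.3) has to supply.

```python
import numpy as np

def lagrangian_relaxation(solve_relaxed, make_feasible, cost, subgradient,
                          lambda0, step_sizes, eps=1e-6):
    """Generic subgradient loop for a minimization problem.

    solve_relaxed(lmbd) -> (relaxed_solution, Z_LD) gives a lower bound,
    make_feasible(relaxed_solution) -> feasible_solution,
    cost(feasible_solution) gives an upper bound,
    subgradient(relaxed_solution) gives the direction for updating lmbd.
    """
    lmbd = lambda0
    lower, upper = -np.inf, np.inf
    best = None
    for theta in step_sizes:
        relaxed, z_ld = solve_relaxed(lmbd)           # lower bound Z_LD(lambda)
        lower = max(lower, z_ld)
        feasible = make_feasible(relaxed)             # heuristic repair
        u = cost(feasible)
        if u < upper:
            upper, best = u, feasible                 # best feasible solution so far
        if upper - lower <= eps:                      # l* = u*: provably optimal
            break
        lmbd = lmbd + theta * subgradient(relaxed)    # move towards max_lambda Z_LD(lambda)
    return best, lower, upper
```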

3.2 Natalie

Natalie is a state-of-the-art pairwise (i.e., k = 2) network-alignment method introduced by Klau [18]. It formulates the two-network alignment problem as the following integer program:

  max_x  \sum_{(i,j) \in V_1 \times V_2} \sigma(i,j) x_{ij} + \sum_{(i,j) \in V_1 \times V_2} \sum_{(k,l) \in V_1 \times V_2} \tau(i,j,k,l) x_{ij} x_{kl},

  such that  \sum_{j \in V_2} x_{ij} \le 1,  for all i \in V_1,
             \sum_{i \in V_1} x_{ij} \le 1,  for all j \in V_2,
             x_{ij} \in {0, 1},  for all (i,j) \in V_1 \times V_2,

where \sigma(i,j) is a similarity score between vertices i and j. The parameter \tau(i,j,k,l) is a similarity score between pairs of vertices (i,k) and (j,l), which is typically set to

  \tau(i,j,k,l) = g, if (i,k) \in E_1 and (j,l) \in E_2;  0, otherwise,

where g is a positive constant.

This formulation is equivalent to the formulation presented in Section 2 applied to two networks. The main difference is that we have introduced an entity-opening cost f (or alternatively a budget N_e) to control the number of entities when aligning partially overlapping networks. Natalie also supports partial alignment, since a vertex is not required to be matched to another vertex. Instead, it can get mapped to a gap if some of the scores \sigma(i,j) are negative. The number of aligned vertices can be controlled by shifting the scores towards more negative or more positive values. In our implementation of Natalie,(1) which is used for the experiments of this paper, we set a threshold score for mapping a vertex to a gap instead of shifting the scores. In our experiments, this threshold is also denoted by f for consistency.

(1) The code is available at:

To solve the integer program, Natalie first linearizes it and then employs a Lagrangian relaxation approach. The derivation is similar to the one presented in Section 3.3, but due to the additional term in the objective (or alternatively an additional constraint) in our formulation, we need to relax two constraints instead of one and develop feasibility heuristics (Section 3.3.3), whereas Natalie's formulation allows to directly extract a feasible solution from a relaxed one.

Natalie 2.0 [12] is an extension of the original algorithm [18]. The original method adopts a subgradient method for updating the Lagrangian multipliers, whereas Natalie 2.0 employs both the subgradient method and a dual descent method to obtain stronger upper and lower bounds. Nevertheless, in this paper we only consider the subgradient approach for updating the multipliers.

In summary, the main differences between our method and Natalie are the following:

- Our method directly supports multiple network alignment (as well as pairwise alignment) and it also optimizes the underlying entity graph,
but these improvements come with the cost of having to solve a more complex optimization problem.
- Both methods require parameters f and g, but our method also supports specifying the number of vertices N_e instead of specifying f. In addition, our method requires parameter \gamma, which is set to g/2 as discussed later in Section 3.3.5.

3.2.1 Adaptation to multiple networks

Klau [18] suggests that Natalie can be extended to multiple networks, or alternatively it can be used to progressively align multiple networks. In this section, we present one possible extension to multiple networks and propose an improvement for the straightforward progressive version of Natalie.

To find a multiple network alignment, we assume an ordering of the graphs and consider aligning vertex i \in V_g with any vertex from graphs g' = 1, ..., g-1. Thus we obtain the following problem:

  max_x  \sum_{1 \le g' < g \le k} \sum_{(i,j) \in V_g \times V_{g'}} \sigma(i,j) x_{ij} + \sum_{1 \le g' < g \le k} \sum_{(i,j) \in V_g \times V_{g'}} \sum_{(k,l) \in V_g \times V_{g'}} \tau(i,j,k,l) x_{ij} x_{kl},

  such that  \sum_{j \in V_{g'}} x_{ij} \le 1,  for all 1 \le g' < g \le k, i \in V_g,
             \sum_{i \in V_g} x_{ij} \le 1,  for all 1 \le g' < g \le k, j \in V_{g'},
             x_{ij} \in {0, 1},  for all 1 \le g' < g \le k, (i,j) \in V_g \times V_{g'}.

Natalie can be used to solve this problem with the following modifications: (i) if a vertex is mapped to a gap, we define it to be mapped to itself, and (ii) the last bipartite matching step (for details, see Klau [18]) must be done for each input graph separately, since two vertices from different graphs can be mapped to the same vertex even though two vertices from the same graph cannot. To avoid obtaining too many entities, we assume transitivity and define that if x_{ab} = x_{bc} = 1, then vertices a, b, and c should all belong to the same entity. Hence, entities can be extracted by finding connected components of the alignment graph, as sketched below. However, the transitivity assumption comes with the drawback that we cannot guarantee anymore that vertices a and a' from the same input graph are mapped to a distinct entity, since we might have that x_{ab} = 1 and x_{a'c} = x_{cb} = 1. Avoiding such injectivity violations does not seem trivial, so in the experiments we simply ignore the injectivity constraint in case the aforementioned situation occurs. This method is called Natalie in the experiments.
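As a small illustration of the extraction step mentioned above, the entities of this extension can be read off as connected components of the alignment graph, for example with networkx. The function name and the input format (a list of aligned pairs with x_ab = 1) are our own assumptions.

```python
import networkx as nx

def extract_entities(vertices, aligned_pairs):
    """Group vertices into entities: every connected component of the graph whose
    edges are the aligned pairs (a, b) with x_ab = 1 becomes one entity."""
    g = nx.Graph()
    g.add_nodes_from(vertices)           # unaligned vertices become singleton entities
    g.add_edges_from(aligned_pairs)      # transitivity: a-b and b-c put a, b, c together
    return [set(component) for component in nx.connected_components(g)]

# example: extract_entities(['a1', 'b1', 'a2', 'c3'], [('a1', 'a2'), ('a2', 'c3')])
# -> [{'a1', 'a2', 'c3'}, {'b1'}]
```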

To solve the multiple network alignment problem progressively, we pick a target graph and solve k-1 pairwise network alignment problems using Natalie to align the other graphs to it. For every vertex mapped to a gap, we create a new vertex in the target graph so that the vertices of subsequent graphs can be mapped to it. This method is called prognatalie. One limitation of prognatalie is that the edges between the original vertices and the newly created vertices in the target graph are entirely absent. Furthermore, since the edge sets are noisy, it would be useful to be able to aggregate edge information across the graphs. Therefore, we propose prognatalie++, which updates the target graph edges after every aligned input graph: for each pair of neighboring input graph vertices mapped to vertices j and l in the target graph, we create an edge (j, l) if it is not already present.

3.3 Our approach: Flan

Before discussing our method in detail, we present a high-level overview. We adopt an alternating optimization procedure [4] which splits the variables into two subsets {x, y} and {B}, and iteratively solves the alternating restricted minimization problems over the two subsets. The method consists of the following steps.

1. Initialize the entity graph B = A.
2. Keeping B fixed, solve the optimal alignment x, y as follows:
   - Linearize the problem to obtain an integer linear programming (ILP) problem (Section 3.3.1).
   - Solve the integer linear program using a Lagrangian relaxation approach (Sections 3.3.2, 3.3.3, and 3.3.4).
3. Optimize the adjacency matrix B of the entity graph, keeping x and y fixed (Section 3.3.5).
4. Go back to step 2 unless the iteration has converged.

3.3.1 Linearizing the problem

The first step is to eliminate quadratic terms, and turn the quadratic integer program into an integer linear program. This is achieved by introducing new variables w_{ijkl} = x_{ij} x_{kl}. Note that we only need to create a variable w_{ijkl} for index quadruplets that form a square in the input graphs and the entity graph (i.e., vertices i and k are neighbors and entities j and l are neighbors), since in all other cases A_{ik} B_{jl} = 0 and w_{ijkl} does not play a role. We denote by S the set of quadruplet indices for which a variable w_{ijkl} is introduced (a small sketch of this enumeration is given below).
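The following sketch shows one way to enumerate the quadruplet set S under the blocking described in Section 3.3.2, i.e., when each vertex i only has a set of candidate entities cand[i]. The dictionary-of-sets data structures and the function name are illustrative assumptions, not the authors' implementation.

```python
def build_squares(cand, input_neighbors, entity_neighbors):
    """Enumerate S = {(i, j, k, l) : k is a neighbor of i in its input graph (A_ik = 1),
    l is a neighbor of j in the current entity graph (B_jl = 1),
    j in cand[i] and l in cand[k]}.  Only these quadruplets get a w_ijkl variable."""
    S = set()
    for i, candidates_i in cand.items():
        for j in candidates_i:
            for k in input_neighbors[i]:          # A_ik = 1
                for l in entity_neighbors[j]:     # B_jl = 1
                    if l in cand[k]:              # w_ijkl is only useful if x_kl can be 1
                        S.add((i, j, k, l))
    return S
```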

To ensure that the definition of the variables w_{ijkl} is consistent, we introduce the following constraints:

  \sum_{l : (i,j,k,l) \in S} w_{ijkl} = \sum_{l : (i,j,k,l) \in S} x_{ij} x_{kl} \le x_{ij},  for all i, j, k,    (8)
  \sum_{k : (i,j,k,l) \in S, k \in V(i)} w_{ijkl} = \sum_{k : (i,j,k,l) \in S, k \in V(i)} x_{ij} x_{kl} \le x_{ij},  for all i, j, l,    (9)
  w_{ijkl} = w_{klij},  for all i, j, k, l,    (10)

where V(i) = {v : i \in V_j and v \in V_j}, that is, the vertices of the input graph to which vertex i belongs. After the linearization step, and dropping the regularization term, which does not affect the minimum when B is fixed, we obtain the following integer linear program, where B is fixed:

  min_{x,y,w}  f \sum_i y_i + \sum_{i,j} d(i,j) x_{ij} - g \sum_{(i,j,k,l) \in S} w_{ijkl}    (11)

  such that  x_{ij} \le y_j,  i, j = 1, ..., n,    (12)
             \sum_j x_{ij} = 1,  i = 1, ..., n,    (13)
             \sum_{i \in V_m} x_{ij} \le 1,  j = 1, ..., n  and  m = 1, ..., k,    (14)
             \sum_{l : (i,j,k,l) \in S} w_{ijkl} \le x_{ij},  for all i, j, k,    (15)
             \sum_{k : (i,j,k,l) \in S} w_{ijkl} \le x_{ij},  for all i, j, l,    (16)
             w_{ijkl} = w_{klij},  for all i, j, k, l,    (17)
             x_{ij}, y_i, w_{ijkl} \in {0, 1},  for all i, j, k, l.    (18)

Note that the number of variables in (11) is potentially very high, which could prevent us from solving problem instances of a realistic size. However, this shortcoming can be overcome by a technique known as blocking in the entity-resolution literature [6]. The idea is to consider that each vertex can be mapped not to every other vertex but only to a set of candidate entities. These candidates are typically determined by selecting entities above a similarity threshold or by taking the c most similar entities.(2) This is also known as sparse network alignment [3, 12].

(2) Like in the case of Natalie, we assume an ordering of the graphs and consider aligning vertex i with itself or any vertex from graphs g' = 1, ..., g-1. In other words, we avoid considering simultaneously vertex i as an entity for vertex j and j as an entity for i, which we have observed to result in larger duality gaps.

3.3.2 Solving an instance of the relaxed problem

To solve the integer linear program (11)-(18), we adopt the Lagrangian relaxation approach. We start by dualizing constraints (13) and (17), which yields
the following problem:

  Z_LD(\lambda) = min_{x,y,w}  f \sum_i y_i + \sum_{i,j} d(i,j) x_{ij} - g \sum_{(i,j,k,l) \in S} w_{ijkl}
                               + \sum_i \lambda_i (1 - \sum_j x_{ij}) + \sum_{(i,j,k,l) \in S} \lambda_{ijkl} (w_{ijkl} - w_{klij})
               = min_{x,y,w}  \sum_i \lambda_i + f \sum_i y_i + \sum_{i,j} (d(i,j) - \lambda_i) x_{ij}
                               + \sum_{(i,j,k,l) \in S} (2 \lambda_{ijkl} - g) w_{ijkl},    (19)

subject to constraints (12), (14)-(16), and (18). Despite the fact that the relaxed problem has integral variables (as we have not relaxed constraint (18)), as we show next, the problem can be solved in polynomial time for any given \lambda.

Theorem 1 The relaxed problem (19) can be solved in polynomial time.

Proof The relaxed problem can be decomposed into two problems, so that the first one is over variables x and y, and the second one is over variables w. In particular, the relaxed problem can be written as

  Z_LD(\lambda) = min_{x,y}  \sum_i \lambda_i + f \sum_i y_i + \sum_{i,j} [d(i,j) - \lambda_i + v_{ij}(\lambda)] x_{ij}    (20)

  such that  x_{ij} \le y_j,  i, j = 1, ..., n,
             x_{ij}, y_i \in {0, 1},  i, j = 1, ..., n,

where

  v_{ij}(\lambda) = min_w  \sum_{k,l : (i,j,k,l) \in S} (2 \lambda_{ijkl} - g) w_{ijkl}    (21)

  such that  \sum_{l : (i,j,k,l) \in S} w_{ijkl} \le 1,  for all k,
             \sum_{k : (i,j,k,l) \in S} w_{ijkl} \le x_{ij},  for all l,
             w_{ijkl} \in {0, 1},  for all k, l.

Notice that problems (20) and (21) are equivalent to the relaxed problem (19), despite the fact that in (20) the terms v_{ij}(\lambda) are multiplied by x_{ij}. The reason is that all variables x_{ij} and w_{ijkl} are either 0 or 1, and whenever an x_{ij} is 0, then the w_{ijkl} are also 0, for all k and l.

Observe that a variable v_{ij} is defined for each x_{ij}, and the value of v_{ij} is given as a solution to the minimization problem (21). Given that the variables x_{ij} in the second constraint are either 0 or 1, it is not hard to see that the minimization problem (21), together with the corresponding constraints, is a
minimum-cost matching problem. Thus the value of each v_{ij} can be computed using the Hungarian algorithm. Furthermore, given \lambda and x, all these minimization problems are independent, and thus the value of each v_{ij} can be computed separately.

Once we have computed v_{ij}, we can solve Z_LD easily after observing that the optimal value of x_{ij} is given by

  x*_{ij} = y_j, if d(i,j) - \lambda_i + v_{ij}(\lambda) \le 0;  0, otherwise,

enabling us to write

  Z_LD(\lambda) = min_y  \sum_i \lambda_i + \sum_j ( f + \sum_i \min{0, d(i,j) - \lambda_i + v_{ij}(\lambda)} ) y_j
               = min_y  \sum_i \lambda_i + \sum_i ( f + C_i ) y_i,
  such that  y_i \in {0, 1},

where C_i = \sum_j \min{0, d(j,i) - \lambda_j + v_{ji}(\lambda)}. Hence we get

  y*_i = 1, if f + \sum_j \min{0, d(j,i) - \lambda_j + v_{ji}(\lambda)} < 0;  0, otherwise.    (22)

If, instead of setting a cost f for opening an entity, we want to set a constraint for the number of opened entities, the final minimization problem takes the form

  Z_LD(\lambda) = min_y  \sum_i C_i y_i,  such that  \sum_i y_i = N_e  and  y_i \in {0, 1}.

This problem can be solved by sorting the coefficients C_i and setting y_i = 1 for the N_e smallest coefficients.

In Section 3.3.4 we discuss how a solution to the problem Z_LD(\lambda), for a given vector \lambda, is used within the Lagrangian relaxation method. Before that, we present our methods for finding a feasible solution to the multiple network-alignment problem from a solution to the relaxed problem (Section 3.3.3).
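As an illustration of the inner step in the proof of Theorem 1, the value v_ij(\lambda) can be computed with an off-the-shelf assignment solver by keeping only the profitable (negative-cost) pairs; the value is relevant only when x_ij = 1, so the sketch treats x_ij as 1. The data layout (neighbor lists and a dictionary of multipliers) is our own assumption, not the actual Flan code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def v_ij(neigh_i, neigh_j, lam_quad, g):
    """v_ij(lambda): minimum-cost matching between the input-graph neighbors k of i
    and the entity-graph neighbors l of j, with cost 2*lambda_ijkl - g per matched pair.
    lam_quad maps (k, l) -> lambda_ijkl for the quadruplets (i, j, k, l) in S."""
    if not neigh_i or not neigh_j:
        return 0.0
    cost = np.zeros((len(neigh_i), len(neigh_j)))
    for a, k in enumerate(neigh_i):
        for b, l in enumerate(neigh_j):
            if (k, l) in lam_quad:
                # only profitable (negative) pairs can appear in the optimum,
                # so non-profitable pairs are treated like "leave unmatched" (cost 0)
                cost[a, b] = min(0.0, 2.0 * lam_quad[(k, l)] - g)
    rows, cols = linear_sum_assignment(cost)   # Hungarian-type solver from SciPy
    return float(cost[rows, cols].sum())
```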

3.3.3 Finding a feasible network alignment

Once we have obtained the optimal solution x*, y*, w* for the relaxed problem, we still need to obtain a feasible solution for the original network-alignment problem. Note that although the solution x*, y* of the relaxed problem is integral, it is not necessarily feasible, since we have relaxed the constraint \sum_j x_{ij} = 1 and hence some vertices may be mapped to zero or more than one entities.

We will next show that transforming the optimal solution x*, y* of the relaxed problem to a feasible solution for the original non-relaxed and non-linearized network-alignment problem with minimal additional cost in Z_LD(\lambda) is an NP-hard problem. In fact, the problem is related to set cover, and thus we propose an algorithm that is based on a greedy approach. We will show that both of the versions we consider lead to an NP-hard problem: (i) when parameter f is given as input (and thus the number of open entities depends on f); and (ii) when the number of opened entities N_e is given as input. Recall that in the former case, y* is obtained by opening the entities with negative coefficients (according to (22)), while in the latter case y* is obtained by opening the entities with the N_e smallest coefficients.

As already mentioned, the optimal y* may lead to some vertices not being assigned to any entities. The hardness of the problem of finding a feasible solution based on y* stems from the fact that more (or different) entities need to be opened in order to ensure feasibility. Both versions of the problem (for fixed f and fixed N_e) can be compactly formulated as follows:

  min_y  \sum_i \bar{C}_i y_i,    (23)

  such that  a matching between the vertices and the opened entities exists,    (24)
             ( \sum_i y_i = N_e, )    (25)
             y_i \in {0, 1},  i = 1, ..., n,    (26)

where \bar{C}_i = f + C_i if f is fixed, \bar{C}_i = C_i if the number of entities N_e is fixed, and C_i = \sum_j \min{0, d(j,i) - \lambda_j + v_{ji}(\lambda)} as defined in the previous section. Constraint (25) is only included in the case of fixing the number of entities N_e. The hardness result is stated and proven next.

Proposition 2 Finding the optimal set of open entities, as defined in (23)-(26), is an NP-hard problem both when the entity-opening cost f is fixed, and when the number of entities N_e is fixed.

Proof We give a reduction from the optimization version of the weighted set cover problem. We consider a special case where the number of sets is equal to the number of items to be covered, which still keeps the problem NP-hard. Let {1, 2, ..., n} be a set of items to be covered, S a collection of n sets, and
M(j) the sub-collection of sets that contain item j, for j = 1, ..., n. Each set S_i is associated with a cost C_i. By using variable y_i to denote whether set i is included in the cover or not, the weighted set cover problem can be written as

  min_y  \sum_i C_i y_i,    (27)

  such that  \sum_{c \in M(j)} y_c \ge 1,  j = 1, ..., n,    (28)
             y_i \in {0, 1},  i = 1, ..., n.

Now we observe that this is a special case of problem (23), where each vertex is considered to be its own graph and M(i) denotes the set of candidate entities for vertex i, thus making constraints (24) and (28) equivalent. Furthermore, we could add constraint (25), without loss of generality, by setting N_e = n, which proves the result for both cases of fixed f and fixed N_e.

The Lagrangian framework itself does not require us to find the optimum y, which we have shown to be NP-hard, but the further the feasible y we obtain is from the optimum, the larger the duality gap will be, and thus we want to come up with a reasonable way of finding an approximate solution. Greedy methods are known to give good approximations for set cover problems, and hence we adopt a greedy approach for finding y. Next we present a high-level overview of this approach; the details can be found in the publicly available source code.(3)

In the case of fixed N_e, we start opening entities one-by-one after ranking them primarily by how many vertices from different input graphs can be assigned to each entity and secondarily by the coefficient of each entity, until a matching between vertices and opened entities exists. Note that this greedy strategy is different than the usual normalized-cost strategy that is used for weighted set cover; however, we prefer to prioritize the selection of entities based on the number of matching vertices, as we have a budget N_e on the number of entities that we can open. When all vertices are matched by at least one opened entity, we can still open extra entities based on the coefficients until the number of open entities is N_e. Finally, we find a feasible assignment x by matching the input graphs to the open entities with the Hungarian algorithm one graph at a time; note that all these matching problems are independent, so the order in which we process the graphs does not matter. The edge weights for the matching problem are given by d(i,j) - \lambda_i + v_{ij}(\lambda) from (20).

In the case of fixed f, we initially set y = y*. Then we start again matching input graphs one at a time and open extra entities along the way if a matching cannot be found otherwise. A schematic sketch of the fixed-N_e variant is given below.

(3) The implementation of the feasibility heuristics is available at: ekq/flan
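A minimal sketch of the greedy opening step for the fixed-N_e case follows. The ranking key mirrors the description above (primarily the number of distinct input graphs an entity can serve, secondarily the coefficient), while the matching test and the final per-graph assignment are left abstract. The names and data structures are our own assumptions, not the released implementation.

```python
def greedy_open_entities(candidates, coeff, n_entities, matching_exists):
    """candidates[e] = set of (graph_id, vertex) pairs that can be assigned to entity e,
    coeff[e] = coefficient C_e from the relaxed solution (smaller = more attractive),
    matching_exists(open_set) -> True if every vertex can be matched to some open entity."""
    # rank entities: first by how many different input graphs they can serve, then by C_e
    ranked = sorted(candidates,
                    key=lambda e: (-len({g for g, _ in candidates[e]}), coeff[e]))
    opened = set()
    for e in ranked:
        if matching_exists(opened):
            break
        opened.add(e)
    # if there is budget left, open the remaining cheapest entities up to N_e
    for e in sorted(set(ranked) - opened, key=lambda e: coeff[e]):
        if len(opened) >= n_entities:
            break
        opened.add(e)
    return opened
```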

3.3.4 Solving the full Lagrangian relaxation

Recall from Section 3.1 that for the full Lagrangian relaxation algorithm we wish to find a \lambda as close as possible to the optimal vector \lambda*, that is, Z_LD(\lambda*) = max_\lambda Z_LD(\lambda). Such a vector is found by the subgradient method [24]. In particular, we start with \lambda = 0, as well as a trivial upper bound u = \infty and lower bound l = -\infty for the optimal cost of the original problem. Then we start an iterative process: in each iteration, we solve Z_LD(\lambda), as described by Theorem 1, for the current value of \lambda. This solution is transformed to a feasible solution for the network-alignment problem, as discussed in Section 3.3.3, and the two solutions are used for updating the bounds l and u. Next, a new vector \lambda is computed by the subgradient update

  \lambda_i^t = \lambda_i^{t-1} + \theta^t (1 - \sum_j x_{ij}),
  \lambda_{ijkl}^t = \lambda_{ijkl}^{t-1} + \theta^t (w_{ijkl} - w_{klij}),

where x_{ij} and w_{ijkl} are part of the relaxed solution. For \theta^t, we adopt the same update rules used in Natalie 2.0 [12]. The iterative process continues by solving Z_LD(\lambda) for the new vector \lambda and repeating the aforementioned steps until the convergence criterion u - l \le \epsilon is satisfied or the maximum number of iterations (300 in our experiments) is reached.
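The two multiplier updates can be written directly as vectorized operations. This is only a sketch of the update step under assumed NumPy arrays; the step size theta is taken as given, since the exact step-size rule of Natalie 2.0 is not reproduced here.

```python
import numpy as np

def update_multipliers(lam_vertex, lam_quad, x, w, w_sym, theta):
    """One subgradient step.
    lam_vertex[i] corresponds to the relaxed constraint sum_j x_ij = 1,
    lam_quad[q]   corresponds to the relaxed symmetry constraint w_ijkl = w_klij,
    x[i, j] is the relaxed assignment, w[q] the value of quadruplet q = (i, j, k, l),
    and w_sym[q] the value of the mirrored quadruplet (k, l, i, j)."""
    new_vertex = lam_vertex + theta * (1.0 - x.sum(axis=1))   # lambda_i^t
    new_quad = lam_quad + theta * (w - w_sym)                 # lambda_ijkl^t
    return new_vertex, new_quad
```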

3.3.5 Inferring the edges of the entity graph

The edges of the underlying entity graph are unknown. A reasonable initial guess can be obtained by setting B = A, but some edges may be missing from this initial guess as (i) A does not contain any edges between vertices from different graphs, and thus there will not be any edges between entities that correspond to vertices in different input graphs, and (ii) the vertices of an input graph may correspond to only a subset of the entities.

To infer the missing edges when x and y are fixed, we need to minimize the objective function (1) with respect to the elements of B which correspond to the potentially missing edges.(4) We decompose the third term of the objective function and denote all the terms which are constant with respect to the optimized elements of B by C. This allows us to write the optimization problem as

  min_B  f \sum_j y_j + \sum_{i,j} d(i,j) x_{ij} - g \sum_{(j,l) \in E_I} \sum_{i,k} A_{ik} B_{jl} x_{ij} x_{kl}
         - g \sum_{(j,l) \in E_m} \sum_{i,k} A_{ik} B_{jl} x_{ij} x_{kl} + \gamma \sum_{(j,l) \notin E_I} B_{jl}^2
  = min_B  C + \sum_{(j,l) \notin E_I} ( - g \sum_{i,k} A_{ik} B_{jl} x_{ij} x_{kl} + \gamma B_{jl}^2 )
  = min_B  C + \sum_{(j,l) \notin E_I} B_{jl} ( \gamma B_{jl} - g \sum_{i,k} A_{ik} x_{ij} x_{kl} ).

This is solved by setting B_{jl} = 1 whenever the term \gamma B_{jl} - g \sum_{i,k} A_{ik} x_{ij} x_{kl} is negative, which happens when

  \sum_{i,k} A_{ik} x_{ij} x_{kl} > \gamma / g.

In all of our experiments, we set \gamma = g/2, which means that we add an edge between entities j and l if there is at least one pair of neighboring vertices that is aligned to entities j and l.

(4) For simplicity, we write min_B for the objective although the objective is being minimized only w.r.t. the elements B_{jl} where (j, l) \notin E_I.
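The closed-form update for B derived above amounts to a simple thresholding rule. The following is a sketch under an assumed dense representation (with gamma = g/2 an edge is added as soon as one pair of aligned neighbors supports it); it is an illustration of the rule, not the authors' code.

```python
import numpy as np

def update_entity_edges(B, A, x, E_I, g, gamma):
    """Set B_jl = 1 for (j, l) not in E_I whenever
    sum_{i,k} A_ik x_ij x_kl > gamma / g, and B_jl = 0 otherwise."""
    support = x.T @ A @ x              # support[j, l] = sum_{i,k} x_ij A_ik x_kl
    B_new = B.copy()
    for j in range(B.shape[0]):
        for l in range(B.shape[1]):
            if (j, l) not in E_I:      # edges of the input graphs are kept as they are
                B_new[j, l] = 1 if support[j, l] > gamma / g else 0
    return B_new
```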

4 Related work

The Lagrangian relaxation framework has been successfully applied to many NP-hard optimization problems [14]. Cornuejols et al. [10] show that it is well suited for the uncapacitated facility location problem, where the one-to-one constraint on sites and facilities is relaxed. Klau [18] later shows that it is also applicable to the pairwise network alignment problem, where a symmetry constraint is relaxed. Our multiple network alignment method combines these two ideas and relaxes both the one-to-one constraint (13) and the symmetry constraint (17) to make the relaxed problem feasible. Another related application of the Lagrangian relaxation approach is by Althaus and Canzar [1], who adopt it for multiple sequence alignment.

A recent survey by Elmsallati et al. [13] provides an overview of thirteen different network alignment methods. While the application focus of the survey is on protein-protein interaction networks, the techniques described are general and can be applied to different domains. Out of the thirteen methods, only three support multiple network alignment, namely, IsoRankN [20], SMETANA [23], and NetCoffee [17]. IsoRankN is a multiple network extension of the earlier IsoRank method [25], which is inspired by the PageRank algorithm, SMETANA is a greedy method based on a semi-Markov random walk model, and NetCoffee employs simulated annealing to optimize an objective function developed for multiple network alignment.

Clark and Kalita [8] present an experimental survey on ten different pairwise alignment methods. In many of the experiments presented in the survey, Natalie [12, 18] yields the highest accuracy, and it is also reported to have a fast running time [12]. Therefore, one of our objectives is to extend Natalie to support multiple networks. This extension and the differences between Natalie and our method Flan have been discussed in Section 3.2.

Apart from the biological problems, network alignment has been previously applied at least to ontology alignment by Bayati et al. [3]. The authors present a novel approach for solving the pairwise graph alignment problem based on the use of belief propagation (BP): given two networks, the BP algorithm begins by defining a probability distribution on all matchings between the networks and then uses a message passing algorithm to approximately infer a matching which gives the maximum a posteriori (MAP) assignment. However, again the BP algorithm is restricted to pairwise alignment, and it is not clear how a generalization to the multiple alignment case can be derived.

Multiple network alignment can also be seen as a collective entity resolution problem [5, 26]. In future work, it would be interesting to study the applicability of multiple network alignment methods to typical collective entity resolution problems, such as author disambiguation. The problem of merging family trees has been recently studied by Kouki et al. [19], who employ a greedy method, and Malmi et al. [22], who study an active learning setting. Furthermore, entity resolution for genealogical data has been previously studied, for example, by Efremova et al. [11] and Christen et al. [7]. There are also methods developed for a related problem of tree alignment, for example, in the context of web data extraction [29]. However, these methods are not applicable to family trees since the latter contain cycles.

5 Experimental evaluation

In this section, we present experiments on synthetic and real-world datasets. Our real-world data are social networks and genealogical trees. In each of these scenarios, the input graphs are only partially overlapping and the number of graphs is more than two.

A common challenge when applying network alignment methods in practice is the tuning of the method parameters. In the methods studied in this paper, the parameters are: f, which controls the dissimilarity of vertices we are willing to align instead of keeping them separate, and g, which controls the balance between the importance of aligning neighbors to neighbors vs. aligning vertices to similar vertices. If ground truth data is available, the parameter values can be tuned via cross-validation, but otherwise it is common to resort to using some default parameter values. Motivated by the challenge of parameter selection, the focus of the following experiments is on studying the sensitivity of the methods to the selection of f, which is crucial when aligning partially overlapping networks. We study both the performance of different methods for a range of f values and the performance
of Flan when prior knowledge on the number of entities is available and thus the selection of f is not required.

Method naming conventions. The following methods are compared in the experiments: Natalie, prognatalie, and prognatalie++ (Section 3.2) are our multiple-network extensions of the pairwise method originally presented in [18]. Flan is our facility-location-based method for multiple network alignment, whereas Flan0 is a baseline method which only solves the Lagrangian relaxation once and does not update B, the adjacency matrix for entities. cFlan-N_e refers to the version of Flan with N_e as the fixed number of entities. Both Flan and cFlan-N_e run the alternating optimization procedure for at most five iterations, since the solution typically does not improve anymore after that. IsoRankN [20] is a popular multiple network alignment method. Instead of f, it has a parameter alpha, which controls the relative weight of network and sequence data, taking values between 0 and 1. The other parameters of IsoRankN are set to K = 30, thresh = 10^-4, and maxveclen = 10^6, based on the recommendations in the README file of the program. Finally, method Unary, which is only used in Section 5.3, refers to prognatalie where the discount g for the quadratic term is set to zero, so the method aligns the vertices only based on their attribute similarity.

5.1 Aligning synthetic networks

We start by describing the data generation process and then present the results.

5.1.1 Data

First, we generate an underlying entity graph using the preferential attachment model [2] with 100 vertices and 2 as the number of edges new vertices are attached with. Then we sample a label for each vertex from a set of 33 unique labels, so that there will be 3 vertices with a duplicate label on average. The labels are treated as attributes, and the distance between two vertices is set to 0 if their labels are the same and 1 otherwise. Only the vertices with the same label are considered as candidate entities. Second, we generate 10 manifestations of the entity graph, which serve as the input graphs. The manifestations are generated by picking a random seed vertex and then doing a random walk until 30 distinct vertices have been discovered. Third, we corrupt the edges of the input graphs to make the alignment problem more challenging. We first discard 20 % of the graph edges, chosen at random, and then we add 10 % more edges, again selected at random. This process is done independently for each input graph. The optimization problem corresponding to the alignment of these graphs contains variables x_{ij} and variables w_{ijkl} on average, depending on the initialization.
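The generation procedure above can be reproduced roughly as follows with networkx; this is our paraphrase of the description (the seed handling, the label sampling, and whether the 10 % of added edges is counted relative to the reduced edge set are assumptions).

```python
import random
import networkx as nx

def make_instance(n_entities=100, m=2, n_labels=33, n_graphs=10, walk_size=30,
                  p_del=0.2, p_add=0.1):
    entity_graph = nx.barabasi_albert_graph(n_entities, m)          # preferential attachment
    labels = {v: random.randrange(n_labels) for v in entity_graph}  # ~3 vertices per label
    views = []
    for _ in range(n_graphs):
        # random walk from a random seed until `walk_size` distinct entities are discovered
        current, seen = random.choice(list(entity_graph)), set()
        while len(seen) < walk_size:
            seen.add(current)
            current = random.choice(list(entity_graph.neighbors(current)))
        g = entity_graph.subgraph(seen).copy()
        # corrupt edges: drop p_del of them, then add p_add spurious ones
        g.remove_edges_from(random.sample(list(g.edges()), int(p_del * g.number_of_edges())))
        g.add_edges_from(random.sample(list(nx.non_edges(g)), int(p_add * g.number_of_edges())))
        views.append(g)
    return entity_graph, labels, views
```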

Fig. 2 Aligning partial random graphs, varying the cost of opening an entity. (Panels: precision, recall, F1 score, number of entities, and duality gap, for prognatalie, prognatalie++, Natalie, Flan0, Flan, cFlan-75, cFlan-100, cFlan-125, and IsoRankN.)

5.1.2 Results

We set g = 0.5 and vary f. Parameter g controls the balance between vertex attribute distances and the pairwise discounts, but since in this case the attribute distances are 0 for each candidate entity, only the proportion f/g matters. The proportion can be varied merely by varying f. The results are shown in Figure 2. Precision and recall are computed based on the set of vertex pairs which are predicted to correspond to the same entity (P) and the set of vertex pairs that truly correspond to the same entity (T) as follows:

  precision = |P \cap T| / |P|,    recall = |P \cap T| / |T|.
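The pairwise precision and recall above can be computed directly from the predicted and true entity groupings. A small sketch, assuming each entity is given as a collection of vertex identifiers (the function names are ours):

```python
from itertools import combinations

def pair_set(entities):
    """All unordered vertex pairs that fall into the same entity."""
    pairs = set()
    for members in entities:
        for a, b in combinations(sorted(members), 2):
            pairs.add((a, b))
    return pairs

def pairwise_precision_recall(predicted_entities, true_entities):
    P, T = pair_set(predicted_entities), pair_set(true_entities)
    precision = len(P & T) / len(P) if P else 1.0
    recall = len(P & T) / len(T) if T else 1.0
    return precision, recall

# pairwise_precision_recall([{'a', 'b'}, {'c'}], [{'a', 'b', 'c'}])  ->  (1.0, 0.333...)
```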

Fig. 3 Evolution of the cost of the feasible solution, which yields an upper bound, and the relaxed solution, which yields a lower bound (cost versus iteration, shown for two values of f). For f = 0.2 the algorithm finds the global minimum and converges in 280 steps.

In terms of precision, prognatalie++ and Flan yield the best overall performance. Both of these methods clearly outperform their counterparts Flan0 and prognatalie, which do not infer the underlying network. In terms of recall, Natalie and prognatalie++ obtain the best performance, particularly with higher values of f. All of these methods clearly outperform IsoRankN when f is sufficiently large.

We also study the scenario in which we have some prior knowledge about the number of entities, 100, allowing us to employ cFlan. The results show that by setting the constraint on the number of entities above or below the true number, we can either improve precision or improve recall, respectively. This observation can be leveraged in the case that one of the measures is more important than the other. Although it is possible to obtain a higher precision or recall compared to cFlan by carefully selecting f, overall cFlan works rather robustly, providing good alignments without having to fine-tune f.

Finally, in the bottom-middle plot of Figure 2, we show the duality gaps for all the methods except for the progressive ones, since they solve several alignment problems and hence do not provide a single duality gap value. The gaps are relatively small compared to the total number of variables, 2 800, but we can notice that especially for Flan0, the duality gap increases with f. Figure 3 shows two examples of how the duality gap evolves when using Flan0. For Flan, the gaps are smaller, which shows that by updating the adjacency matrix B and solving the optimization problem again the problem becomes easier and the Lagrangian relaxation tighter.

5.2 Aligning social networks

The social network alignment problem refers to the problem of finding matching users across different social networks, such as Facebook and Twitter. This problem raises privacy concerns, as identifying a person's user handles
across multiple services can lead to highly increased user-profiling accuracy. However, it also comes with useful applications; for instance, a social networking service can provide helpful friend suggestions for a new user if it can identify the user's profile in another service.

5.2.1 Data

We study the multiplex dataset from Aarhus University [21]. The dataset contains five networks between the employees of the Department of Computer Science. Table 1 shows statistics about these networks. All attributes of the vertices have been anonymized, but the true alignment of users is known through user IDs.

Table 1 The CS-Aarhus multiplex dataset before and after removing some people to have only partially overlapping networks. The last row shows the number of distinct vertices after combining the datasets. (Columns: # of vertices, # of edges, used # of vertices, used # of edges; rows: Lunch, Facebook, Co-author, Leisure, Work, Total.)

We make two modifications to the dataset. First, we select at most 40 users from each network in order to make the alignment problem more challenging by having only partially overlapping networks. Second, we sample a name label for each user ID so that each label is shared by three user IDs on average. Candidate entities are formed by taking the vertices with the same label. An example of a social network alignment problem is presented in Figure 4, where each name label is assigned a distinct color.

5.2.2 Results

Figure 5 shows the results for the social network experiment. The relative performance of the methods is similar to the previous experiment. However, this time the precision difference between prognatalie++ and Flan is higher. Precision of IsoRankN is not shown in the figure to make the differences between the other methods more visible, but it is always between 0.34 and

5.3 Aligning family trees

Services like Ancestry.com and MyHeritage attract millions of paying subscribers who upload their ancestry information to the service, trying to find


More information

Small Survey on Perfect Graphs

Small Survey on Perfect Graphs Small Survey on Perfect Graphs Michele Alberti ENS Lyon December 8, 2010 Abstract This is a small survey on the exciting world of Perfect Graphs. We will see when a graph is perfect and which are families

More information

Joint Congestion Control and Scheduling in Wireless Networks with Network Coding

Joint Congestion Control and Scheduling in Wireless Networks with Network Coding Title Joint Congestion Control and Scheduling in Wireless Networks with Network Coding Authors Hou, R; Wong Lui, KS; Li, J Citation IEEE Transactions on Vehicular Technology, 2014, v. 63 n. 7, p. 3304-3317

More information

SIMULATION OPTIMIZER AND OPTIMIZATION METHODS TESTING ON DISCRETE EVENT SIMULATIONS MODELS AND TESTING FUNCTIONS

SIMULATION OPTIMIZER AND OPTIMIZATION METHODS TESTING ON DISCRETE EVENT SIMULATIONS MODELS AND TESTING FUNCTIONS SIMULATION OPTIMIZER AND OPTIMIZATION METHODS TESTING ON DISCRETE EVENT SIMULATIONS MODELS AND TESTING UNCTIONS Pavel Raska (a), Zdenek Ulrych (b), Petr Horesi (c) (a) Department o Industrial Engineering

More information

TELCOM2125: Network Science and Analysis

TELCOM2125: Network Science and Analysis School of Information Sciences University of Pittsburgh TELCOM2125: Network Science and Analysis Konstantinos Pelechrinis Spring 2015 2 Part 4: Dividing Networks into Clusters The problem l Graph partitioning

More information

Optics Quiz #2 April 30, 2014

Optics Quiz #2 April 30, 2014 .71 - Optics Quiz # April 3, 14 Problem 1. Billet s Split Lens Setup I the ield L(x) is placed against a lens with ocal length and pupil unction P(x), the ield (X) on the X-axis placed a distance behind

More information

LECTURES 3 and 4: Flows and Matchings

LECTURES 3 and 4: Flows and Matchings LECTURES 3 and 4: Flows and Matchings 1 Max Flow MAX FLOW (SP). Instance: Directed graph N = (V,A), two nodes s,t V, and capacities on the arcs c : A R +. A flow is a set of numbers on the arcs such that

More information

Using the Deformable Part Model with Autoencoded Feature Descriptors for Object Detection

Using the Deformable Part Model with Autoencoded Feature Descriptors for Object Detection Using the Deformable Part Model with Autoencoded Feature Descriptors for Object Detection Hyunghoon Cho and David Wu December 10, 2010 1 Introduction Given its performance in recent years' PASCAL Visual

More information

Unsupervised Learning and Clustering

Unsupervised Learning and Clustering Unsupervised Learning and Clustering Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr CS 551, Spring 2008 CS 551, Spring 2008 c 2008, Selim Aksoy (Bilkent University)

More information

Skill Sets Chapter 5 Functions

Skill Sets Chapter 5 Functions Skill Sets Chapter 5 Functions No. Skills Examples o questions involving the skills. Sketch the graph o the (Lecture Notes Example (b)) unction according to the g : x x x, domain. x, x - Students tend

More information

Automated Modelization of Dynamic Systems

Automated Modelization of Dynamic Systems Automated Modelization o Dynamic Systems Ivan Perl, Aleksandr Penskoi ITMO University Saint-Petersburg, Russia ivan.perl, aleksandr.penskoi@corp.imo.ru Abstract Nowadays, dierent kinds o modelling settled

More information

Graphs and Network Flows IE411. Lecture 21. Dr. Ted Ralphs

Graphs and Network Flows IE411. Lecture 21. Dr. Ted Ralphs Graphs and Network Flows IE411 Lecture 21 Dr. Ted Ralphs IE411 Lecture 21 1 Combinatorial Optimization and Network Flows In general, most combinatorial optimization and integer programming problems are

More information

A new approach for ranking trapezoidal vague numbers by using SAW method

A new approach for ranking trapezoidal vague numbers by using SAW method Science Road Publishing Corporation Trends in Advanced Science and Engineering ISSN: 225-6557 TASE 2() 57-64, 20 Journal homepage: http://www.sciroad.com/ntase.html A new approach or ranking trapezoidal

More information

KANGAL REPORT

KANGAL REPORT Individual Penalty Based Constraint handling Using a Hybrid Bi-Objective and Penalty Function Approach Rituparna Datta Kalyanmoy Deb Mechanical Engineering IIT Kanpur, India KANGAL REPORT 2013005 Abstract

More information

Semi-Supervised Clustering with Partial Background Information

Semi-Supervised Clustering with Partial Background Information Semi-Supervised Clustering with Partial Background Information Jing Gao Pang-Ning Tan Haibin Cheng Abstract Incorporating background knowledge into unsupervised clustering algorithms has been the subject

More information

CS 161: Design and Analysis of Algorithms

CS 161: Design and Analysis of Algorithms CS 161: Design and Analysis o Algorithms Announcements Homework 3, problem 3 removed Greedy Algorithms 4: Human Encoding/Set Cover Human Encoding Set Cover Alphabets and Strings Alphabet = inite set o

More information

Learning Implied Global Constraints

Learning Implied Global Constraints Learning Implied Global Constraints Christian Bessiere LIRMM-CNRS U. Montpellier, France bessiere@lirmm.r Remi Coletta LRD Montpellier, France coletta@l-rd.r Thierry Petit LINA-CNRS École des Mines de

More information

Introduction to Mathematical Programming IE406. Lecture 20. Dr. Ted Ralphs

Introduction to Mathematical Programming IE406. Lecture 20. Dr. Ted Ralphs Introduction to Mathematical Programming IE406 Lecture 20 Dr. Ted Ralphs IE406 Lecture 20 1 Reading for This Lecture Bertsimas Sections 10.1, 11.4 IE406 Lecture 20 2 Integer Linear Programming An integer

More information

Reflection and Refraction

Reflection and Refraction Relection and Reraction Object To determine ocal lengths o lenses and mirrors and to determine the index o reraction o glass. Apparatus Lenses, optical bench, mirrors, light source, screen, plastic or

More information

Theorem 2.9: nearest addition algorithm

Theorem 2.9: nearest addition algorithm There are severe limits on our ability to compute near-optimal tours It is NP-complete to decide whether a given undirected =(,)has a Hamiltonian cycle An approximation algorithm for the TSP can be used

More information

PROTEIN MULTIPLE ALIGNMENT MOTIVATION: BACKGROUND: Marina Sirota

PROTEIN MULTIPLE ALIGNMENT MOTIVATION: BACKGROUND: Marina Sirota Marina Sirota MOTIVATION: PROTEIN MULTIPLE ALIGNMENT To study evolution on the genetic level across a wide range of organisms, biologists need accurate tools for multiple sequence alignment of protein

More information

Himanshu Jain and Kalyanmoy Deb, Fellow, IEEE

Himanshu Jain and Kalyanmoy Deb, Fellow, IEEE An Evolutionary Many-Objective Optimization Algorithm Using Reerence-point Based Non-dominated Sorting Approach, Part II: Handling Constraints and Extending to an Adaptive Approach Himanshu Jain and Kalyanmoy

More information

Introduction to Trees

Introduction to Trees Introduction to Trees Tandy Warnow December 28, 2016 Introduction to Trees Tandy Warnow Clades of a rooted tree Every node v in a leaf-labelled rooted tree defines a subset of the leafset that is below

More information

Generell Topologi. Richard Williamson. May 6, 2013

Generell Topologi. Richard Williamson. May 6, 2013 Generell Topologi Richard Williamson May 6, Thursday 7th January. Basis o a topological space generating a topology with a speciied basis standard topology on R examples Deinition.. Let (, O) be a topological

More information

Notes for Lecture 24

Notes for Lecture 24 U.C. Berkeley CS170: Intro to CS Theory Handout N24 Professor Luca Trevisan December 4, 2001 Notes for Lecture 24 1 Some NP-complete Numerical Problems 1.1 Subset Sum The Subset Sum problem is defined

More information

Evolutionary tree reconstruction (Chapter 10)

Evolutionary tree reconstruction (Chapter 10) Evolutionary tree reconstruction (Chapter 10) Early Evolutionary Studies Anatomical features were the dominant criteria used to derive evolutionary relationships between species since Darwin till early

More information

Lecture notes on the simplex method September We will present an algorithm to solve linear programs of the form. maximize.

Lecture notes on the simplex method September We will present an algorithm to solve linear programs of the form. maximize. Cornell University, Fall 2017 CS 6820: Algorithms Lecture notes on the simplex method September 2017 1 The Simplex Method We will present an algorithm to solve linear programs of the form maximize subject

More information

Leveraging Set Relations in Exact Set Similarity Join

Leveraging Set Relations in Exact Set Similarity Join Leveraging Set Relations in Exact Set Similarity Join Xubo Wang, Lu Qin, Xuemin Lin, Ying Zhang, and Lijun Chang University of New South Wales, Australia University of Technology Sydney, Australia {xwang,lxue,ljchang}@cse.unsw.edu.au,

More information

Scaling Social Media Applications into Geo-Distributed Clouds

Scaling Social Media Applications into Geo-Distributed Clouds 1 Scaling Social Media Applications into Geo-Distributed Clouds Yu Wu, Chuan Wu, Bo Li, Linquan Zhang, Zongpeng Li, Francis C.M. Lau Department o Computer Science, The University o Hong Kong, Email: {ywu,cwu,lqzhang,cmlau}@cs.hku.hk

More information

Clustering Using Graph Connectivity

Clustering Using Graph Connectivity Clustering Using Graph Connectivity Patrick Williams June 3, 010 1 Introduction It is often desirable to group elements of a set into disjoint subsets, based on the similarity between the elements in the

More information

Classifier Evasion: Models and Open Problems

Classifier Evasion: Models and Open Problems . In Privacy and Security Issues in Data Mining and Machine Learning, eds. C. Dimitrakakis, et al. Springer, July 2011, pp. 92-98. Classiier Evasion: Models and Open Problems Blaine Nelson 1, Benjamin

More information

Social-Network Graphs

Social-Network Graphs Social-Network Graphs Mining Social Networks Facebook, Google+, Twitter Email Networks, Collaboration Networks Identify communities Similar to clustering Communities usually overlap Identify similarities

More information

Integer Programming Theory

Integer Programming Theory Integer Programming Theory Laura Galli October 24, 2016 In the following we assume all functions are linear, hence we often drop the term linear. In discrete optimization, we seek to find a solution x

More information

Semi-Supervised SVMs for Classification with Unknown Class Proportions and a Small Labeled Dataset

Semi-Supervised SVMs for Classification with Unknown Class Proportions and a Small Labeled Dataset Semi-Supervised SVMs or Classiication with Unknown Class Proportions and a Small Labeled Dataset S Sathiya Keerthi Yahoo! Labs Santa Clara, Caliornia, U.S.A selvarak@yahoo-inc.com Bigyan Bhar Indian Institute

More information

Approximation Algorithms

Approximation Algorithms Approximation Algorithms Prof. Tapio Elomaa tapio.elomaa@tut.fi Course Basics A new 4 credit unit course Part of Theoretical Computer Science courses at the Department of Mathematics There will be 4 hours

More information

PACKING DIGRAPHS WITH DIRECTED CLOSED TRAILS

PACKING DIGRAPHS WITH DIRECTED CLOSED TRAILS PACKING DIGRAPHS WITH DIRECTED CLOSED TRAILS PAUL BALISTER Abstract It has been shown [Balister, 2001] that if n is odd and m 1,, m t are integers with m i 3 and t i=1 m i = E(K n) then K n can be decomposed

More information

Lab 9 - GEOMETRICAL OPTICS

Lab 9 - GEOMETRICAL OPTICS 161 Name Date Partners Lab 9 - GEOMETRICAL OPTICS OBJECTIVES Optics, developed in us through study, teaches us to see - Paul Cezanne Image rom www.weidemyr.com To examine Snell s Law To observe total internal

More information

Relaxing the 3L algorithm for an accurate implicit polynomial fitting

Relaxing the 3L algorithm for an accurate implicit polynomial fitting Relaxing the 3L algorithm or an accurate implicit polynomial itting Mohammad Rouhani Computer Vision Center Ediici O, Campus UAB 08193 Bellaterra, Barcelona, Spain rouhani@cvc.uab.es Angel D. Sappa Computer

More information

Proportional-Share Scheduling for Distributed Storage Systems

Proportional-Share Scheduling for Distributed Storage Systems Proportional-Share Scheduling or Distributed Storage Systems Abstract Yin Wang University o Michigan yinw@eecs.umich.edu Fully distributed storage systems have gained popularity in the past ew years because

More information

Monocular 3D human pose estimation with a semi-supervised graph-based method

Monocular 3D human pose estimation with a semi-supervised graph-based method Monocular 3D human pose estimation with a semi-supervised graph-based method Mahdieh Abbasi Computer Vision and Systems Laboratory Department o Electrical Engineering and Computer Engineering Université

More information

2. Recommended Design Flow

2. Recommended Design Flow 2. Recommended Design Flow This chapter describes the Altera-recommended design low or successully implementing external memory interaces in Altera devices. Altera recommends that you create an example

More information

ES 240: Scientific and Engineering Computation. a function f(x) that can be written as a finite series of power functions like

ES 240: Scientific and Engineering Computation. a function f(x) that can be written as a finite series of power functions like Polynomial Deinition a unction () that can be written as a inite series o power unctions like n is a polynomial o order n n ( ) = A polynomial is represented by coeicient vector rom highest power. p=[3-5

More information

Parameterized Complexity of Independence and Domination on Geometric Graphs

Parameterized Complexity of Independence and Domination on Geometric Graphs Parameterized Complexity of Independence and Domination on Geometric Graphs Dániel Marx Institut für Informatik, Humboldt-Universität zu Berlin, Unter den Linden 6, 10099 Berlin, Germany. dmarx@informatik.hu-berlin.de

More information

6. Lecture notes on matroid intersection

6. Lecture notes on matroid intersection Massachusetts Institute of Technology 18.453: Combinatorial Optimization Michel X. Goemans May 2, 2017 6. Lecture notes on matroid intersection One nice feature about matroids is that a simple greedy algorithm

More information

Computationally Light Multi-Speed Atomic Memory

Computationally Light Multi-Speed Atomic Memory Computationally Light Multi-Speed Atomic Memory Antonio Fernández Anta 1, Theophanis Hadjistasi 2, and Nicolas Nicolaou 1 1 IMDEA Networks Institute Madrid, Spain antonio.ernandez@imdea.org, nicolas.nicolaou@imdea.org

More information

Performance Assessment of the Hybrid Archive-based Micro Genetic Algorithm (AMGA) on the CEC09 Test Problems

Performance Assessment of the Hybrid Archive-based Micro Genetic Algorithm (AMGA) on the CEC09 Test Problems Perormance Assessment o the Hybrid Archive-based Micro Genetic Algorithm (AMGA) on the CEC9 Test Problems Santosh Tiwari, Georges Fadel, Patrick Koch, and Kalyanmoy Deb 3 Department o Mechanical Engineering,

More information

Integer Programming ISE 418. Lecture 7. Dr. Ted Ralphs

Integer Programming ISE 418. Lecture 7. Dr. Ted Ralphs Integer Programming ISE 418 Lecture 7 Dr. Ted Ralphs ISE 418 Lecture 7 1 Reading for This Lecture Nemhauser and Wolsey Sections II.3.1, II.3.6, II.4.1, II.4.2, II.5.4 Wolsey Chapter 7 CCZ Chapter 1 Constraint

More information

Research Interests Optimization:

Research Interests Optimization: Mitchell: Research interests 1 Research Interests Optimization: looking for the best solution from among a number of candidates. Prototypical optimization problem: min f(x) subject to g(x) 0 x X IR n Here,

More information

Using a Projected Subgradient Method to Solve a Constrained Optimization Problem for Separating an Arbitrary Set of Points into Uniform Segments

Using a Projected Subgradient Method to Solve a Constrained Optimization Problem for Separating an Arbitrary Set of Points into Uniform Segments Using a Projected Subgradient Method to Solve a Constrained Optimization Problem or Separating an Arbitrary Set o Points into Uniorm Segments Michael Johnson May 31, 2011 1 Background Inormation The Airborne

More information

What is Clustering? Clustering. Characterizing Cluster Methods. Clusters. Cluster Validity. Basic Clustering Methodology

What is Clustering? Clustering. Characterizing Cluster Methods. Clusters. Cluster Validity. Basic Clustering Methodology Clustering Unsupervised learning Generating classes Distance/similarity measures Agglomerative methods Divisive methods Data Clustering 1 What is Clustering? Form o unsupervised learning - no inormation

More information

2. Getting Started with the Graphical User Interface

2. Getting Started with the Graphical User Interface February 2011 NII52017-10.1.0 2. Getting Started with the Graphical User Interace NII52017-10.1.0 The Nios II Sotware Build Tools (SBT) or Eclipse is a set o plugins based on the popular Eclipse ramework

More information

Lagrangean relaxation - exercises

Lagrangean relaxation - exercises Lagrangean relaxation - exercises Giovanni Righini Set covering We start from the following Set Covering Problem instance: min z = x + 2x 2 + x + 2x 4 + x 5 x + x 2 + x 4 x 2 + x x 2 + x 4 + x 5 x + x

More information

Binary Morphological Model in Refining Local Fitting Active Contour in Segmenting Weak/Missing Edges

Binary Morphological Model in Refining Local Fitting Active Contour in Segmenting Weak/Missing Edges 0 International Conerence on Advanced Computer Science Applications and Technologies Binary Morphological Model in Reining Local Fitting Active Contour in Segmenting Weak/Missing Edges Norshaliza Kamaruddin,

More information

Joint Entity Resolution

Joint Entity Resolution Joint Entity Resolution Steven Euijong Whang, Hector Garcia-Molina Computer Science Department, Stanford University 353 Serra Mall, Stanford, CA 94305, USA {swhang, hector}@cs.stanford.edu No Institute

More information

Identifying Representative Traffic Management Initiatives

Identifying Representative Traffic Management Initiatives Identiying Representative Traic Management Initiatives Alexander Estes Applied Mathematics & Statistics, and Scientiic Computation University o Maryland-College Park College Park, MD, United States o America

More information

Treewidth and graph minors

Treewidth and graph minors Treewidth and graph minors Lectures 9 and 10, December 29, 2011, January 5, 2012 We shall touch upon the theory of Graph Minors by Robertson and Seymour. This theory gives a very general condition under

More information

Neighbourhood Operations

Neighbourhood Operations Neighbourhood Operations Neighbourhood operations simply operate on a larger neighbourhood o piels than point operations Origin Neighbourhoods are mostly a rectangle around a central piel Any size rectangle

More information

Scheduling in Multihop Wireless Networks without Back-pressure

Scheduling in Multihop Wireless Networks without Back-pressure Forty-Eighth Annual Allerton Conerence Allerton House, UIUC, Illinois, USA September 29 - October 1, 2010 Scheduling in Multihop Wireless Networks without Back-pressure Shihuan Liu, Eylem Ekici, and Lei

More information

Column Generation Method for an Agent Scheduling Problem

Column Generation Method for an Agent Scheduling Problem Column Generation Method for an Agent Scheduling Problem Balázs Dezső Alpár Jüttner Péter Kovács Dept. of Algorithms and Their Applications, and Dept. of Operations Research Eötvös Loránd University, Budapest,

More information

Approximation Algorithms: The Primal-Dual Method. My T. Thai

Approximation Algorithms: The Primal-Dual Method. My T. Thai Approximation Algorithms: The Primal-Dual Method My T. Thai 1 Overview of the Primal-Dual Method Consider the following primal program, called P: min st n c j x j j=1 n a ij x j b i j=1 x j 0 Then the

More information

Belief Propagation on the GPU for Stereo Vision

Belief Propagation on the GPU for Stereo Vision Belie Propagation on the GPU or Stereo Vision Alan Brunton School o Inormation Technology and Engineering (SITE), University o Ottawa Ottawa, Ontario e-mail: abrunton@site.uottawa.ca Chang Shu Institute

More information

Reduce and Aggregate: Similarity Ranking in Multi-Categorical Bipartite Graphs

Reduce and Aggregate: Similarity Ranking in Multi-Categorical Bipartite Graphs Reduce and Aggregate: Similarity Ranking in Multi-Categorical Bipartite Graphs Alessandro Epasto J. Feldman*, S. Lattanzi*, S. Leonardi, V. Mirrokni*. *Google Research Sapienza U. Rome Motivation Recommendation

More information

11.1 Facility Location

11.1 Facility Location CS787: Advanced Algorithms Scribe: Amanda Burton, Leah Kluegel Lecturer: Shuchi Chawla Topic: Facility Location ctd., Linear Programming Date: October 8, 2007 Today we conclude the discussion of local

More information

22 Elementary Graph Algorithms. There are two standard ways to represent a

22 Elementary Graph Algorithms. There are two standard ways to represent a VI Graph Algorithms Elementary Graph Algorithms Minimum Spanning Trees Single-Source Shortest Paths All-Pairs Shortest Paths 22 Elementary Graph Algorithms There are two standard ways to represent a graph

More information

Unsupervised Learning and Clustering

Unsupervised Learning and Clustering Unsupervised Learning and Clustering Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr CS 551, Spring 2009 CS 551, Spring 2009 c 2009, Selim Aksoy (Bilkent University)

More information

Leveraging Transitive Relations for Crowdsourced Joins*

Leveraging Transitive Relations for Crowdsourced Joins* Leveraging Transitive Relations for Crowdsourced Joins* Jiannan Wang #, Guoliang Li #, Tim Kraska, Michael J. Franklin, Jianhua Feng # # Department of Computer Science, Tsinghua University, Brown University,

More information

Introduction to Graph Theory

Introduction to Graph Theory Introduction to Graph Theory Tandy Warnow January 20, 2017 Graphs Tandy Warnow Graphs A graph G = (V, E) is an object that contains a vertex set V and an edge set E. We also write V (G) to denote the vertex

More information

A New Algorithm for Determining Ultimate Pit Limits Based on Network Optimization

A New Algorithm for Determining Ultimate Pit Limits Based on Network Optimization IJMGE Int. J. Min.& Geo-Eng. Vol.47, No.2, Dec. 2013, pp. 129-137 A New Algorithm or Determining Ultimate Pit Limits Based on Network Optimization Ali Asghar Khodayari School o Mining Engineering, College

More information

THE FINANCIAL CALCULATOR

THE FINANCIAL CALCULATOR Starter Kit CHAPTER 3 Stalla Seminars THE FINANCIAL CALCULATOR In accordance with the AIMR calculator policy in eect at the time o this writing, CFA candidates are permitted to use one o two approved calculators

More information

Framework for Design of Dynamic Programming Algorithms

Framework for Design of Dynamic Programming Algorithms CSE 441T/541T Advanced Algorithms September 22, 2010 Framework for Design of Dynamic Programming Algorithms Dynamic programming algorithms for combinatorial optimization generalize the strategy we studied

More information

On Universal Cycles of Labeled Graphs

On Universal Cycles of Labeled Graphs On Universal Cycles of Labeled Graphs Greg Brockman Harvard University Cambridge, MA 02138 United States brockman@hcs.harvard.edu Bill Kay University of South Carolina Columbia, SC 29208 United States

More information