Evolutionary tree reconstruction (Chapter 10)


Early Evolutionary Studies Anatomical features were the dominant criteria used to derive evolutionary relationships between species from Darwin's time until the early 1960s. The evolutionary relationships derived from these relatively subjective observations were often inconclusive, and some of them were later proved incorrect.

Evolution and DNA Analysis: the Giant Panda Riddle For roughly 100 years scientists were unable to figure out which family the giant panda belongs to. Giant pandas look like bears but have features that are unusual for bears and typical for raccoons, e.g., they do not hibernate. In 1985, Stephen O'Brien and colleagues solved the giant panda classification problem using DNA sequences and algorithms.

Evolutionary Tree of Bears and Raccoons

Out of Africa Hypothesis DNA-based reconstruction of the human evolutionary tree led to the Out of Africa Hypothesis that claims our most ancient ancestor lived in Africa roughly 200,000 years ago

mtDNA analysis supports the Out of Africa Hypothesis The African origin of humans was inferred from: the African population was the most diverse (sub-populations had more time to diverge); the evolutionary tree separated one group of Africans from a group containing all five populations; the tree was rooted on the branch between the groups of greatest difference.

Evolutionary tree A tree with leaves = species and edge lengths representing evolutionary time. Internal nodes are also species: the ancestral species. Also called a phylogenetic tree. How do we construct such trees from data?

Rooted and Unrooted Trees In an unrooted tree the position of the root ("oldest ancestor") is unknown. Otherwise, unrooted trees are like rooted trees.

Distances in Trees Edges may have weights reflecting: the number of mutations on the evolutionary path from one species to another, or a time estimate for the evolution of one species into another. In a tree T, we often compute d_ij(T), the length of the path between leaves i and j. This may be based on direct comparison of the sequences of i and j.

Distance in Trees: an Example For the tree shown on the slide, summing the edge weights along the path between leaves 1 and 4 gives d_1,4 = 12 + 13 + 14 + 17 + 13 = 69.
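To make the path-length computation concrete, here is a minimal sketch (the adjacency-list representation and function name are illustrative, not from the slides) that computes d_ij(T) by depth-first search over a weighted tree:

def path_length(tree, i, j):
    """tree: {vertex: [(neighbor, edge_weight), ...]}; returns d_ij(T)."""
    def dfs(v, parent, dist):
        if v == j:
            return dist
        for u, w in tree[v]:
            if u != parent:
                found = dfs(u, v, dist + w)
                if found is not None:
                    return found
        return None          # j is not in this subtree
    return dfs(i, None, 0)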

Fitting a Distance Matrix Given n species, we can compute the n x n distance matrix D_ij. The evolution of these genes is described by a tree that we don't know. We need an algorithm to construct a tree that best fits the distance matrix D_ij, that is, to find a tree T such that d_ij(T) = D_ij for all i, j.

Reconstructing a 3-Leaved Tree Tree reconstruction for any 3x3 matrix is straightforward. We have 3 leaves i, j, k and a center vertex c. Observe:
d_ic + d_jc = D_ij
d_ic + d_kc = D_ik
d_jc + d_kc = D_jk

Reconstructing a 3-Leaved Tree (cont'd)
d_ic + d_jc = D_ij
d_ic + d_kc = D_ik
Adding these two equations and substituting d_jc + d_kc = D_jk:
2d_ic + d_jc + d_kc = D_ij + D_ik
2d_ic + D_jk = D_ij + D_ik
d_ic = (D_ij + D_ik - D_jk)/2
Similarly, d_jc = (D_ij + D_jk - D_ik)/2 and d_kc = (D_ki + D_kj - D_ij)/2
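As a sanity check of these formulas, here is a minimal sketch (function name and example matrix are illustrative only) that recovers the three edge lengths from a 3 x 3 additive distance matrix:

def three_leaf_tree(D):
    # leaves i = 0, j = 1, k = 2; returns the edge lengths d_ic, d_jc, d_kc
    i, j, k = 0, 1, 2
    d_ic = (D[i][j] + D[i][k] - D[j][k]) / 2
    d_jc = (D[i][j] + D[j][k] - D[i][k]) / 2
    d_kc = (D[k][i] + D[k][j] - D[i][j]) / 2
    return d_ic, d_jc, d_kc

print(three_leaf_tree([[0, 3, 4], [3, 0, 5], [4, 5, 0]]))   # -> (1.0, 2.0, 3.0)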

Trees with > 3 Leaves An unrooted binary tree with n leaves has 2n-3 edges. This means fitting a given tree to a distance matrix D requires solving a system of n-choose-2 equations with 2n-3 variables. This is not always possible to solve for n > 3.

Additive Distance Matrices Matrix D is ADDITIVE if there exists a tree T with d_ij(T) = D_ij, and NON-ADDITIVE otherwise.

Distance-Based Phylogeny Problem Goal: reconstruct an evolutionary tree from a distance matrix. Input: an n x n distance matrix D_ij. Output: a weighted tree T with n leaves fitting D. If D is additive, this problem has a solution, and there is a simple algorithm to solve it.

Solution 1

Degenerate Triples A degenerate triple is a set of three distinct elements 1 ≤ i, j, k ≤ n where D_ij + D_jk = D_ik. Element j in a degenerate triple i, j, k lies on the evolutionary path from i to k (or is attached to this path by an edge of length 0).
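A minimal sketch (the helper name is hypothetical, not from the slides) of scanning a distance matrix for a degenerate triple:

from itertools import permutations

def find_degenerate_triple(D):
    n = len(D)
    for i, j, k in permutations(range(n), 3):
        if D[i][j] + D[j][k] == D[i][k]:
            return i, j, k    # j lies on the path from i to k
    return None               # no degenerate triple; hanging edges must be shortened first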

Looking for Degenerate Triples If distance matrix D has a degenerate triple i, j, k, then j can be removed from D, thus reducing the size of the problem. If distance matrix D does not have a degenerate triple i, j, k, one can create a degenerate triple in D by shortening all hanging edges (edges leading to leaves) in the tree.

Shortening Hanging Edges to Produce Degenerate Triples Shorten all hanging edges (edges that connect leaves) until a degenerate triple is found

Finding Degenerate Triples If there is no degenerate triple, all hanging edges are reduced by the same amount δ, so that all pairwise distances in the matrix are reduced by 2δ. Eventually this process collapses one of the leaves (when δ = length of the shortest hanging edge), forming a degenerate triple i, j, k and reducing the size of the distance matrix D. The attachment point for j can be recovered in the reverse transformations by saving D_ij for each collapsed leaf.

Reconstructing Trees for Additive Distance Matrices

AdditivePhylogeny Algorithm
AdditivePhylogeny(D)
    if D is a 2 x 2 matrix
        T = the tree consisting of a single edge of length D_1,2
        return T
    if D is non-degenerate
        δ = trimming parameter of matrix D
        for all 1 ≤ i ≠ j ≤ n
            D_ij = D_ij - 2δ
    else
        δ = 0

AdditivePhylogeny (cont'd)
    Find a triple i, j, k in D such that D_ij + D_jk = D_ik
    x = D_ij
    Remove the j-th row and j-th column from D
    T = AdditivePhylogeny(D)
    Add a new vertex v to T at distance x from i on the path from i to k
    Add j back to T by creating an edge (v, j) of length 0
    for every leaf l in T
        if the distance from l to v in the tree ≠ D_l,j
            output "matrix is not additive"
            return
    Extend all hanging edges by length δ
    return T

AdditivePhylogeny (cont'd) This algorithm checks whether the matrix D is additive, and if so, returns the tree T. How do we compute the trimming parameter δ? This gives an inefficient way to check additivity; a more efficient way comes from the four point condition.

The Four Point Condition A more efficient additivity check is the four point condition. Let 1 ≤ i, j, k, l ≤ n be four distinct leaves in a tree.

The Four Point Condition (cont'd) Compute the three sums: 1. D_ij + D_kl, 2. D_ik + D_jl, 3. D_il + D_jk. Sums 2 and 3 represent the same number: the length of all edges + the middle edge (it is counted twice). Sum 1 represents a smaller number: the length of all edges - the middle edge.

The Four Point Condition: Theorem The four point condition for the quartet i, j, k, l is satisfied if two of these sums are the same, and the third sum is smaller than these first two. Theorem: an n x n matrix D is additive if and only if the four point condition holds for every quartet 1 ≤ i, j, k, l ≤ n.
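A minimal sketch of an additivity test based on this theorem (the function name is illustrative); for every quartet it checks that the two largest of the three sums coincide:

from itertools import combinations

def is_additive(D):
    n = len(D)
    for i, j, k, l in combinations(range(n), 4):
        sums = sorted([D[i][j] + D[k][l],
                       D[i][k] + D[j][l],
                       D[i][l] + D[j][k]])
        if sums[1] != sums[2]:    # the two largest sums must be equal
            return False
    return True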

Solution 2

UPGMA: Unweighted Pair Group Method with Arithmetic Mean UPGMA is a clustering algorithm that: computes the distance between clusters using the average pairwise distance, and assigns a height to every vertex in the tree.

UPGMA's Weakness The algorithm produces an ultrametric tree: the distance from the root to any leaf is the same. UPGMA assumes a constant molecular clock: all species represented by the leaves in the tree are assumed to accumulate mutations (and thus evolve) at the same rate. This is a major pitfall of UPGMA.

UPGMA's Weakness: Example (figure comparing the correct tree with the tree produced by UPGMA on four leaves).

Clustering in UPGMA Given two disjoint clusters C_i, C_j of sequences:
d_ij = (1 / (|C_i| |C_j|)) Σ_{p in C_i, q in C_j} d_pq
The algorithm is a variant of the hierarchical clustering algorithm.

UPGMA Algorithm
Initialization: assign each sequence x_i to its own cluster C_i; define one leaf per sequence, each at height 0.
Iteration: find the two clusters C_i and C_j such that d_ij is minimal; let C_k = C_i ∪ C_j; add a vertex connecting C_i, C_j and place it at height d_ij / 2; length of edge (C_i, C_k) = h(C_k) - h(C_i); length of edge (C_j, C_k) = h(C_k) - h(C_j); delete clusters C_i and C_j.
Termination: when a single cluster remains.

UPGMA Algorithm (cont'd) (figure: example of the iterative merging on five leaves).
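A minimal sketch of UPGMA following the description above (the representation, internal-vertex naming, and function name are assumptions for illustration; distances are a dict-of-dicts keyed by leaf name):

def upgma(D):
    # active clusters: name -> (set of leaves, height); internal vertices are named v0, v1, ...
    clusters = {x: ({x}, 0.0) for x in D}
    dist = {x: dict(D[x]) for x in D}            # working copy of pairwise distances
    edges = []                                   # (child, parent, edge length)
    counter = 0
    while len(clusters) > 1:
        # find the pair of clusters with minimal average-linkage distance
        ci, cj = min(((a, b) for a in dist for b in dist if a < b),
                     key=lambda p: dist[p[0]][p[1]])
        new, counter = "v%d" % counter, counter + 1
        height = dist[ci][cj] / 2                # place the new vertex at height d_ij / 2
        edges.append((ci, new, height - clusters[ci][1]))
        edges.append((cj, new, height - clusters[cj][1]))
        merged = clusters[ci][0] | clusters[cj][0]
        dist[new] = {}
        for other in list(dist):
            if other in (ci, cj, new):
                continue
            # average pairwise distance from the merged cluster to every other cluster
            d = (dist[ci][other] * len(clusters[ci][0]) +
                 dist[cj][other] * len(clusters[cj][0])) / len(merged)
            dist[new][other] = dist[other][new] = d
        for gone in (ci, cj):
            del clusters[gone], dist[gone]
            for other in dist:
                dist[other].pop(gone, None)
        clusters[new] = (merged, height)
    return edges                                 # the last vertex created is the root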

Solution 3

Using Neighboring Leaves to Construct the Tree Find neighboring leaves i and j with parent k. Remove the rows and columns of i and j. Add a new row and column corresponding to k, where the distance from k to any other leaf m can be computed as:
D_km = (D_im + D_jm - D_ij)/2
Compress i and j into k, and iterate the algorithm for the rest of the tree.

Finding Neighboring Leaves To find neighboring leaves we simply select a pair of closest leaves. WRONG!

Finding Neighboring Leaves The closest leaves aren't necessarily neighbors: i and j are neighbors, but (d_ij = 13) > (d_jk = 12). Finding a pair of neighboring leaves is a nontrivial problem!

Neighbor Joining Algorithm In 1987 Naruya Saitou and Masatoshi Nei developed a neighbor joining algorithm for phylogenetic tree reconstruction. It finds a pair of leaves that are close to each other but far from the other leaves, implicitly finding a pair of neighboring leaves. Like UPGMA, it merges clusters iteratively, choosing the two clusters that are closest to each other and farthest from the other clusters. Advantages: it works well for additive and many non-additive matrices, and it does not rely on the flawed molecular clock assumption. A sketch of the selection criterion follows.
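The slides do not spell out the selection rule; a minimal sketch of the standard Saitou-Nei criterion (in the Studier-Keppler form) picks the pair i, j minimizing Q(i, j) = (n - 2) D_ij - Σ_k D_ik - Σ_k D_jk, i.e. leaves that are close to each other but far from everything else:

from itertools import combinations

def closest_neighbors(D):
    n = len(D)
    total = [sum(row) for row in D]           # total distance from each leaf to all others
    return min(combinations(range(n), 2),
               key=lambda p: (n - 2) * D[p[0]][p[1]] - total[p[0]] - total[p[1]])

# Once i and j are joined into a new node k, distances to the remaining leaves
# follow the formula from the previous slide: D_km = (D_im + D_jm - D_ij) / 2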

Solution 4

Alignment Matrix vs. Distance Matrix Sequencing a gene of length m nucleotides in n species generates an n x m alignment matrix. This can be transformed into an n x n distance matrix, but the distance matrix CANNOT be transformed back into the alignment matrix because information is lost in the forward transformation.

Better Technique: Character-Based Tree Reconstruction Character-based reconstruction algorithms use the n x m alignment matrix (n = # species, m = # characters) directly, instead of using a distance matrix. GOAL: determine what character strings at internal nodes would best explain the character strings for the n observed species.

Character-Based Tree Reconstruction (cont'd) Characters may be nucleotides, where A, G, C, T are the states of this character. Other characters may be the number of eyes or legs, or the shape of a beak or a fin. By setting the length of an edge in the tree to the Hamming distance between the labels of its endpoints, we may define the parsimony score of the tree as the sum of the lengths (weights) of the edges.

Parsimony Approach to Evolutionary Tree Reconstruction Applies Occam's razor principle to identify the simplest explanation for the data. Assumes observed character differences resulted from the fewest possible mutations. Seeks the tree that yields the lowest possible parsimony score: the sum of the costs of all mutations found in the tree.

Parsimony and Tree Reconstruction

Small Parsimony Problem Input: tree T with each leaf labeled by an m-character string. Output: labeling of the internal vertices of the tree T minimizing the parsimony score. We can assume that every leaf is labeled by a single character, because the characters in the string are independent.

Weighted Small Parsimony Problem A more general version of the Small Parsimony Problem. The input includes a k x k scoring matrix describing the cost of transforming each of the k states into another one. For the Small Parsimony Problem, the scoring matrix is based on the Hamming distance:
d_H(v, w) = 0 if v = w
d_H(v, w) = 1 otherwise

Scoring Matrices
Small Parsimony Problem:
    A  C  G  T
A   0  1  1  1
C   1  0  1  1
G   1  1  0  1
T   1  1  1  0
Weighted Parsimony Problem:
    A  C  G  T
A   0  4  4  9
C   4  0  2  4
G   4  2  0  3
T   9  4  3  0

Weighted Small Parsimony Problem: Formulation Input: tree T with each leaf labeled by elements of a k-letter alphabet, and a k x k scoring matrix (δ_ij). Output: labeling of the internal vertices of the tree T minimizing the weighted parsimony score.

Sankoff's Algorithm Check every vertex's children and determine the minimum among them. An example follows.

Sankoff Algorithm: Dynamic Programming Calculate and keep track of a score for every possible label at each vertex:
s_t(v) = minimum parsimony score of the subtree rooted at vertex v, given that v has character t
The score at each vertex is based on the scores of its children:
s_t(parent) = min_i {s_i(left child) + δ_i,t} + min_j {s_j(right child) + δ_j,t}

Sankoff Algorithm (cont.) Begin at the leaves: if a leaf has the character in question, its score is 0; otherwise, its score is ∞.
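A minimal sketch of the recurrence and leaf initialization (the nested-tuple tree representation and the example tree are assumptions for illustration, not taken from the slides):

INF = float("inf")

def sankoff(tree, delta, alphabet="ACGT"):
    # returns {character t: minimum parsimony score of this subtree if its root has character t}
    if isinstance(tree, str):                    # leaf: 0 for its own character, infinity otherwise
        return {t: 0 if t == tree else INF for t in alphabet}
    left, right = tree
    s_left = sankoff(left, delta, alphabet)
    s_right = sankoff(right, delta, alphabet)
    return {t: min(s_left[i] + delta[i][t] for i in alphabet) +
               min(s_right[j] + delta[j][t] for j in alphabet)
            for t in alphabet}

# the weighted scoring matrix from the slides
delta = {"A": {"A": 0, "C": 4, "G": 4, "T": 9},
         "C": {"A": 4, "C": 0, "G": 2, "T": 4},
         "G": {"A": 4, "C": 2, "G": 0, "T": 3},
         "T": {"A": 9, "C": 4, "G": 3, "T": 0}}
root_scores = sankoff((("A", "T"), ("G", "C")), delta)   # a toy tree, not the slides' example
print(min(root_scores.values()))                         # minimum weighted parsimony score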

Sankoff Algorithm (cont.) Worked example (figure in the original slides): s_A(v) = min_i {s_i(u) + δ_i,A} + min_j {s_j(w) + δ_j,A} is computed from the score vectors of the children u and w; in the example the left term contributes 0, the right term contributes 9, and s_A(v) = 9.

Sankoff Algorithm (cont.) s_t(v) = min_i {s_i(u) + δ_i,t} + min_j {s_j(w) + δ_j,t} is then computed in the same way for t = T, G, and C.

Sankoff Algorithm (cont.) Repeat for right subtree

Sankoff Algorithm (cont.) Repeat for root

Sankoff Algorithm (cont.) The smallest score at the root is the minimum weighted parsimony score. In this case it is 9, so the root is labeled with T.

Sankoff Algorithm: Traveling down the Tree The scores at the root vertex have been computed by going up the tree. After the scores at the root vertex are computed, the Sankoff algorithm moves down the tree and assigns each vertex its optimal character.

Sankoff Algorithm (cont.) The score 9 is derived from 7 + 2, so the left child is labeled T and the right child is labeled T.

Sankoff Algorithm (cont.) And the tree is thus labeled

Large Parsimony Problem Input: An n x m matrix M describing n species, each represented by an m-character string Output: A tree T with n leaves labeled by the n rows of matrix M, and a labeling of the internal vertices such that the parsimony score is minimized over all possible trees and all possible labelings of internal vertices

Large Parsimony Problem (cont.) The possible search space is huge, especially as n increases: (2n - 3)!! possible rooted trees and (2n - 5)!! possible unrooted trees. The problem is NP-complete. Exhaustive search is only possible for small n (< 10); hence, branch and bound or heuristics are used, as illustrated below.
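A minimal sketch (illustrative only) of how quickly these counts grow, using the double factorial:

def double_factorial(m):
    result = 1
    while m > 1:
        result *= m
        m -= 2
    return result

for n in range(4, 11):
    rooted, unrooted = double_factorial(2 * n - 3), double_factorial(2 * n - 5)
    print(n, rooted, unrooted)
# already for n = 10 there are 34,459,425 rooted trees, so exhaustive search
# quickly becomes infeasible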

Nearest Neighbor Interchange: A Greedy Algorithm A branch swapping algorithm that only evaluates a subset of all possible trees. It defines a neighbor of a tree as one reachable by a nearest neighbor interchange: a rearrangement of the four subtrees defined by one internal edge. There are only three different arrangements per edge.