Cover-Based Method KD Tree Algorithm for Estimating Fractal Characteristics


Cover-Based Method KD Tree Algorithm for Estimating Fractal Characteristics

by Troy A. Thielen

A thesis submitted to the Graduate Office in partial fulfillment of the requirements for the degree of

MASTER OF SCIENCE IN BIOMEDICAL ENGINEERING

SOUTH DAKOTA SCHOOL OF MINES AND TECHNOLOGY
RAPID CITY, SOUTH DAKOTA
2012

Prepared by: Degree Candidate
Approved by: Major Professor; Graduate Education Representative; Committee Member; Chairman, Department of Biomedical Engineering; Dean, Graduate Education

Abstract

Fractal geometry utilizes three properties to characterize the texture of a fractal dataset: fractal dimension, lacunarity, and connectivity. Significant study has been given to fractal dimension, which was developed as a measurement that characterizes the space-filling nature of a fractal dataset. Lacunarity and connectivity were originally described as measurements that characterize the size and distribution of the gaps in a fractal dataset, respectively. While lacunarity has received more attention recently, connectivity remains largely overlooked as a fractal characteristic. This work introduces a new approach to measuring all three of these fractal characteristics, called the cover-based method. The cover-based method uses a dual mathematical form of the traditional box-counting method by first choosing the number of cover elements and then finding the position and minimum size of each element required to optimally cover a dataset. The cover-based method is extensible to any arbitrary R^K Euclidean space and establishes synchronized definitions of fractal dimension, lacunarity, and connectivity. By these definitions, all three fractal characteristics can be estimated simultaneously via a single numerical algorithm. This algorithm converts the process of estimating fractal characteristics into an optimization problem, whose solution allows for the estimation of each characteristic. An efficient numerical implementation of the cover-based method, which produces a suboptimal cover using a modified version of the KD tree algorithm, is presented in this work. This implementation is used to test the accuracy and effectiveness of the cover-based method against traditional numerical algorithms used to estimate fractal characteristics for Cantor sets, random percolation fractals, and simulated fractional Brownian surface fractals.
These results show that the cover-based method KD tree algorithm outperforms traditional box-counting algorithms by producing more reliable fractal dimension estimates. In addition, the cover-based method's approach of using separate lacunarity and connectivity measurements to characterize the texture of a fractal dataset is shown to be superior to the conventional gliding box algorithm's approach to measuring lacunarity. The potential applications of the cover-based method in medical image analysis are also briefly examined in this work.

Acknowledgments

I would like to express my gratitude to Dr. Charles Tolle for his guidance and support during my time at South Dakota School of Mines and Technology. I would like to thank my committee members, Dr. Randy Hoover and Dr. Brian Hemmelman, for their time and suggestions during my research. I would also like to thank the members of the SDSM&T ECE Controls Group and Craig Bidstrup for their expertise and assistance throughout this work. Special thanks to Dr. Howard Peterson, who was instrumental in my enrollment at SDSM&T. I cannot thank my parents, my grandparents, and my girlfriend enough for their constant love, support, and encouragement throughout my career.

Table of Contents

Abstract
Acknowledgments
List of Figures
List of Tables
Nomenclature
1 Introduction
   Background
   Motivation
   Objective
   Scope
2 Review of Related Literature
   Fractal Geometry
      Fractal Construction
   Hausdorff-Besicovitch Dimension
   Box Dimension
   Box-Counting Method for Estimating Fractal Dimension
   Differential Box-Counting Algorithm for Estimating Fractal Dimension
   Lacunarity
   Gliding Box Method for Estimating Lacunarity
   Succolarity
   Connectivity
3 Cover-Based Method
   Cover-Based Method Fractal Dimension Definition
   Cover-Based Method Lacunarity Definition
      Numerical Example
   Cover-Based Method Connectivity Definition
      Numerical Example
4 Cover-Based Method KD Tree Algorithm
   KD Tree Algorithm
   Cover-Based Method KD Tree Algorithm
   Suboptimal Cover
   Cover-Based KD Tree Algorithm Lacunarity and Connectivity Estimates

5 Testing and Results
   Fractal Dimension
      Cantor Sets and Percolation Random Fractals
      Simulated Fractional Brownian Surface Fractals
      Medical Image Analysis Application
   Lacunarity and Connectivity
      Medical Image Analysis Application
6 Conclusions
   Direction of Future Work
References
Appendices
   A Mathematical Definition of Lacunarity
   B Mathematical Definition of Connectivity
   C Run-Time Measurements
Vita

List of Figures

1.1 An example of two possible coverings of a geometric fractal using simple square covering elements; on the left is the optimal cover of the fractal requiring only 11 elements, and on the right is a typical box-counting algorithm cover of the fractal requiring 16 elements, a 45 percent increase in the number of elements needed [1]
2.1 Construction of the middle third Cantor set F, by repeated removal of the middle third of each interval [2]
2.2 Construction of the von Koch curve F, by repeated replacement of the middle third of each interval with the other two sides of an equilateral triangle [2]
2.3 Von Koch curve with random orientation [3]
2.4 The famous Mandelbrot set is defined as the set of values of c in the complex plane for which the orbit of z_0 = 0 does not diverge under iteration of the complex quadratic polynomial z_{n+1} = z_n^2 + c [4]
2.5 Example of two random fractal datasets created with different processes, but producing the same fractal dimension equal to 2.3
2.6 Set A and two possible ɛ-covers. The optimal cover, C, gives the infimum of Σ_i diam(C_i)^s for all possible ɛ-covers
2.7 Set A and three different possible covers: (i) the HB optimal cover of A given by the least number of sets of diameter at most ɛ needed to cover A; (ii) the box-counting algorithm cover of A given by the number of ɛ-mesh cubes that intersect A; (iii) the box dimension optimal cover of A given by the least number of balls of radius ɛ needed to cover A [2]
2.8 An example of two simulated fractional Brownian surfaces with different fractal dimensions
2.9 Example of how the number of boxes is determined by the DBC [5]
2.10 A stack of Cantor sets with equal fractal dimension D_s = 1/2 whose lacunarity increases from very low at the bottom to very high at the top of the stack [6]
2.11 An example of two simulated fractional Brownian surfaces with fractal dimension D_h = 2.5 and Musgrave lacunarities: a) λ = 2; b) λ =
2.12 Illustration of the gliding box method; the underlying lattice is represented by open dots, those which are occupied by the center of a particle are indicated by a solid square. The gliding box is a square of side 2r [7]
2.13 Schematic diagram of the probability, p, versus the percolation probability, θ(p). Many aspects of this graph are still conjectural [8]
2.14 Minimum spanning tree for 10^4 points uniformly distributed on the Sierpinski triangle [9]
2.15 Minimum spanning tree ɛ-component quantities: (a) ɛ-connected components, C(ɛ); (b) the largest diameter, D(ɛ); (c) the number of isolated points, I(ɛ); for the Sierpinski triangle shown in Figure 2.14 [9]
3.1 Dataset A

3.2 Simple undirected graph, T_4(A) = (c, E)
3.3 Bipartite graph G_1(A) = (U_1, V_1, E_1), showing edges between the elements of U_1 and V_1
3.4 Bipartite graph G_2(A) = (U_2, V_2, E_2), showing edges between the elements of U_2 and V_2
3.5 Bipartite graph G_3(A) = (U_3, V_3, E_3), showing edges between the elements of U_3 and V_3
3.6 Maximum spanning tree, M_max^4(A) = (C, E_max)
3.7 Dataset A
3.8 Simple undirected graph, T_4(A) = (c, E)
3.9 Bipartite graph G_1(A) = (U_1, V_1, E_1), showing edges between the elements of U_1 and V_1
3.10 Bipartite graph G_2(A) = (U_2, V_2, E_2), showing edges between the elements of U_2 and V_2
3.11 Bipartite graph G_3(A) = (U_3, V_3, E_3), showing edges between the elements of U_3 and V_3
3.12 Minimum spanning tree, M_min^4(C, E_min)
4.1 An example of the ability of the KD tree algorithm to partition the hyperspace containing a dataset through three iterations
4.2 An example of the modified KD tree algorithm's ability to partition a dataset through three iterations
4.3 Example of the critical boundary points created by the KDTREE algorithm for I = 4 cover elements. The boundary points, υ_i^j, for each cover element are marked by an x and the 8 boundary points for C_2 are labeled, υ^j
4.4 Example of the critical boundary points created by the KDTREE algorithm for I = 4 cover elements. The critical boundary points, Υ_ab, for each cover element are marked by an x
4.5 An example of the simple undirected graph, T_4(A), created by the KDTREE algorithm. The spans, S_ab, between each cover element are shown and the span S_14 = S_41 is shown as a dashed line because it is not included in the edge set S as it bisects C
5.1 Typical log N_ɛ vs. log(1/ɛ) and log I vs. log(1/v_n) plot for a percolation random fractal with a theoretical fractal dimension of
5.2 Typical log N_ɛ/I vs. log(1/ɛ) and log I vs. log(1/v_n) plot for a simulated fractional Brownian surface fractal with a theoretical fractal dimension of
5.3 An example of an R^2 Euclidean dimensional and an R^3 Euclidean dimensional Cantor set
5.4 Fractal dimension estimates of percolation random fractals with theoretical fractal dimensions from 2.1 to 2.9
5.5 Fractal dimension estimates of simulated fractional Brownian surface fractals with theoretical fractal dimensions from 2.1 to 2.9
5.6 DICOM MRI lateral lumbar spine image with KDTREE fractal dimension results
5.7 Simple point-set distributions constructed to demonstrate variations in lacunarity estimates produced by the KDTREE and GB algorithms

5.8 Simple point-set distributions constructed to demonstrate variations in connectivity estimates produced by the KDTREE algorithm and the lacunarity estimates produced by the GB algorithm
5.9 Group 1 Results
5.10 Group 2 Results
5.11 Group 3 Results
5.12 Benign Case
5.13 Malignant Case
5.14 Microcalcification Test Results

List of Tables

3.1 Edge set E for the simple undirected graph T_4(A) = (c, E) seen in Figure 3.2, shown in a matrix as the lower triangular set of edges because d_e(c_1, c_1) = 0 and d_e(c_1, c_2) = d_e(c_2, c_1)
3.2 Edge set E_1 for the bipartite graph G_1(A) = (U_1, V_1, E_1), shown in Figure 3.3
3.3 Edge set E_2 for the bipartite graph G_2(A) = (U_2, V_2, E_2), shown in Figure 3.4
3.4 Edge set E_3 for the bipartite graph G_3(A) = (U_3, V_3, E_3), shown in Figure 3.5
3.5 Edge set E for the simple undirected graph T_4(A) = (c, E) seen in Figure 3.8, shown in a matrix as the lower triangular set of edges because d_e(c_1, c_1) = 0 and d_e(c_1, c_2) = d_e(c_2, c_1)
3.6 Edge set E_1 for the bipartite graph G_1(A) = (U_1, V_1, E_1), shown in Figure 3.9
3.7 Edge set E_2 for the bipartite graph G_2(A) = (U_2, V_2, E_2), shown in Figure 3.10
3.8 Edge set E_3 for the bipartite graph G_3(A) = (U_3, V_3, E_3), shown in Figure 3.11
5.1 Fractal dimension estimates for various Cantor sets
5.2 Fractal dimension estimation results for percolation random fractals with theoretical fractal dimensions from 2.1 to 2.9
5.3 Overall fractal dimension estimation results for percolation random fractals
5.4 Fractal dimension estimation results for simulated fractional Brownian surface fractals with theoretical fractal dimensions from 2.1 to 2.9 and varying Musgrave lacunarity
5.5 Fractal dimension estimation results for simulated fractional Brownian surface fractals with varying Musgrave lacunarity
5.6 Overall fractal dimension estimation results for simulated fractional Brownian surface fractals
5.7 Lacunarity measurements of simple point-set distributions shown in Figure 5.7
5.8 Measurements of simple point-set distributions shown in Figure 5.8
C.1 Run-time measurements for percolation random fractals with theoretical fractal dimensions from 2.1 to 2.9
C.2 Run-time measurements for simulated fractional Brownian surface fractals with theoretical fractal dimensions from 2.1 to 2.9 and varying Musgrave lacunarity

Nomenclature

A: image or signal to be analyzed
D_t: topographic dimension
R^K: K-dimensional Euclidean space
D_s: similarity dimension
D_h: Hausdorff-Besicovitch (HB) dimension
D_b: box dimension
D_c: cover-based method box dimension
ɛ: size of a covering element
λ: Musgrave lacunarity
L_GB: gliding box lacunarity
C: optimal cover
c: set of centers of the optimal cover
i, k, j, n: used as counting indices throughout
T_I = (c, E): simple undirected graph
|E|: cardinality of the edge set E
T̂: summation of the edge set of the simple undirected graph T_I
d_e(a, b): Euclidean distance function between a and b
G_n = (U_n, V_n, E_n): nth bipartite graph
e_max,n: the greatest span in E_n between cover elements with centers in U_n(G_n) and V_n(G_n)
e_min,n: the least span in E_n between cover elements with centers in U_n(G_n) and V_n(G_n)
M_max^I(A) = (C, E_max): cover-based method optimal maximum spanning tree
M_min^I(A) = (C, E_min): cover-based method optimal minimum spanning tree
L̂_c^I: cover-based method lacunarity measurement at I cover elements
L_c^I: normalized cover-based method lacunarity measurement at I cover elements
L_c: normalized cover-based method lacunarity measurement set
C_c^I: cover-based method connectivity measurement at I cover elements
C_c: cover-based method connectivity measurement set

l_i^k: length of the ith cover element in the kth Euclidean dimension
v_n: median cover element magnitude after n iterations
D_d: KDTREE algorithm fractal dimension measurement
υ_i^j: jth boundary point of the ith cover element
Υ_a,b: critical boundary point connecting cover elements C_a and C_b
M_max^I(A) = (C, S_max): KDTREE algorithm maximum spanning tree
M_min^I(A) = (C, S_min): KDTREE algorithm minimum spanning tree
L̂_d^I: KDTREE algorithm lacunarity measurement at I cover elements
L_d^I: normalized KDTREE algorithm lacunarity measurement at I cover elements
L_d: normalized KDTREE algorithm lacunarity measurement set
C_d^I: KDTREE algorithm connectivity measurement at I cover elements
C_d: KDTREE algorithm connectivity measurement set
σ: standard deviation
µ: mean

1. Introduction

1.1 Background

In his landmark book The Fractal Geometry of Nature, Benoit Mandelbrot stated that texture is an elusive notion which mathematicians and scientists tend to avoid, but that several of the individual facets of texture could be mastered quantitatively with fractal geometry, which he considered to be an implicit study of texture [4]. In 1975, Mandelbrot initiated the study of fractal geometry when he initially defined a fractal as a rough or fragmented geometric shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole [10]. This characteristic, known as self-similarity, is exhibited by many irregular natural objects such as coastlines and forms the basis of fractal geometry by explaining how a complex dataset can be formed by a simple recursive process. The fractal dimension of a dataset is a statistical quantity describing the space-filling capacity of the dataset, originally defined as a positive non-integer Hausdorff-Besicovitch dimension that is greater than a dataset's natural topographic dimension [11]. Many alternative definitions of fractal dimension have been developed, including the box dimension, the packing dimension, and the similarity dimension. The box dimension definition is the most commonly used upper-bound estimation of the Hausdorff-Besicovitch dimension, and the most popular numerical algorithm that uses the box dimension definition to estimate fractal dimension is the box-counting algorithm, due to its speed and simplicity of implementation. The ability to describe the nature of complex datasets has led to the use of fractal dimension measurements in a number of applications, including medical image analysis [12],[13] and target detection [14],[15].

After developing the concept of fractal dimension, Mandelbrot theorized that two additional properties were required to fully characterize the texture of a fractal dataset. Mandelbrot explained that the fractal dimension of a dataset characterizes how much data the set contains, but does not describe how the data is distributed. Mandelbrot then introduced two new fractal properties: lacunarity, to describe the size of the gaps in a fractal dataset, and succolarity, which this research refers to as connectivity, to describe how the gaps in the dataset are connected [4]. The most popular method for measuring lacunarity is the gliding box algorithm, which is similar to the box-counting algorithm and employs a localized mass calculation [4],[16]. This definition of lacunarity attempts to quantify the translational and rotational invariance of a dataset [7] by measuring changes in the set's mass distribution.

1.2 Motivation

The original mathematical basis of fractal geometry was the Hausdorff-Besicovitch dimension definition. This definition is based on an optimal cover to estimate the dimension of a dataset. The optimal cover of a dataset is the ideal positioning of a specific number of cover elements that minimizes the sum of their diameters while completely covering the dataset. The optimal cover is difficult to determine for general datasets, and the box-counting algorithm does not strive to achieve it. As seen in Figure 1.1, the box-counting algorithm estimates the box dimension of a dataset by simply covering the dataset with a grid and counting the number of grid elements filled by the dataset. This approach results in a non-optimized cover that will not, in general, obtain the optimal cover described in the Hausdorff-Besicovitch dimension definition. A new approach is required that converges toward the optimal cover by attempting to optimize the size

and placement for a known number of cover elements.

Figure 1.1: An example of two possible coverings of a geometric fractal using simple square covering elements; on the left is the optimal cover of the fractal requiring only 11 elements, and on the right is a typical box-counting algorithm cover of the fractal requiring 16 elements, a 45 percent increase in the number of elements needed [1].

Mandelbrot envisioned fractal dimension, lacunarity, and connectivity to be independent properties that complement each other and combine to fully describe the texture of a fractal dataset. The box-counting algorithm does not provide a natural extension for measuring the other fractal characteristics beyond fractal dimension. The gliding box method was developed as a measure of lacunarity based on the box-counting concept, but rather than simply measuring gap size independent of the other fractal properties, the gliding box measurement of lacunarity is overarching and is affected by any change in the dataset, including fractal dimension. This has led to connectivity being largely ignored as an important characteristic of fractal datasets. A new approach is required that decouples the fractal characteristics combined in the gliding box definition of lacunarity, allowing a fractal dataset to be more accurately described. The cover-based method described in this research solves these problems associated with the box-counting and gliding box algorithms. The cover-based method is an approach to estimating fractal characteristics based on the concept of the optimal cover that uses separate lacunarity and connectivity measurements to characterize the gap size and gap distribution in a fractal dataset with a single numerical algorithm. This was alluded to in [1].
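For reference, the grid-based counting performed by the conventional box-counting algorithm can be sketched in a few lines of Python. This is an illustrative sketch, not code from this thesis; the function names and the assumption that the data lie in the unit hypercube are mine.

```python
import numpy as np

def box_count(points, eps):
    """Count the eps-mesh grid cells occupied by at least one point.

    points: (n, k) array of coordinates assumed to lie in the unit hypercube.
    """
    # Assign each point to its grid cell and count the distinct cells.
    cells = np.floor(np.asarray(points) / eps).astype(int)
    return len({tuple(c) for c in cells})

def box_dimension(points, sizes):
    """Estimate the box dimension as the slope of log N(eps) vs. log(1/eps)."""
    counts = [box_count(points, eps) for eps in sizes]
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```

Applied to points of the middle third Cantor set with eps = 3^-k, the count N(eps) is 2^k, so the fitted slope approaches log 2 / log 3 ≈ 0.63. Note that the grid is fixed in place, which is exactly the non-optimized cover the motivation above criticizes.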

1.3 Objective

This research extends prior work [17],[1],[16],[18] using the cover-based method to develop more efficient and accurate methods for calculating the fractal characteristics of a dataset. The first objective of this research is to establish mathematical definitions through the cover-based method for the three fractal characteristics: fractal dimension, lacunarity, and connectivity. The second objective of this research is to test the accuracy and effectiveness of an efficient numerical implementation of the cover-based method against the numerical algorithms traditionally used to estimate fractal characteristics. The third objective of this research is to apply these measures to biomedical image analysis tasks.

1.4 Scope

In the section that follows, the basic concepts of fractal geometry and fractal construction are introduced. Next, the Hausdorff-Besicovitch dimension definition is examined, leading to the simpler definition of the box dimension. From the detailed discussion of the box dimension and the box-counting algorithm, the cover-based method is developed from a dual mathematical approach to the box-counting method, and new definitions for fractal dimension, lacunarity, and connectivity are established. An efficient numerical implementation of the cover-based method which produces a suboptimal cover using a modified version of the KD tree algorithm (KDTREE algorithm) is then presented. The box dimension estimations produced by the KDTREE and box-counting algorithms are compared for a series of Cantor sets, random percolation fractals, and simulated fractional Brownian surface fractals, and the lacunarity and connectivity estimations produced by the KDTREE algorithm are compared to the results of the gliding

box algorithm for a series of Cantor sets. The potential applications of the KDTREE algorithm's fractal characteristic measurements in medical image analysis are then briefly examined. Finally, this research closes with conclusions and a description of the future direction of this research.

2. Review of Related Literature

2.1 Fractal Geometry

In the past, mathematicians were restricted to studying sets and functions that were sufficiently smooth and regular to be described by the methods of classical calculus. Irregular sets were typically regarded as curiosities not worthy of further study because no general theory existed to describe them. This was problematic because it prevented mathematics from describing much of the natural world, which tends to consist of irregular sets. Fractal geometry provides a general theory for the study of such irregular sets. Falconer described a fractal set as exhibiting the following general features [2]:

It has a fine structure at arbitrarily small scales.
It is too irregular to be easily described in traditional Euclidean geometric language, both locally and globally.
It is self-similar, at least approximately or statistically.
It has a Hausdorff-Besicovitch dimension which is greater than its topological dimension.
It has a simple and recursive definition.

One of the first irregular sets to be studied that would be considered fractal today was the middle third Cantor set, introduced by German mathematician Georg Cantor in 1883 [19]. The middle third Cantor set is constructed by repeatedly deleting the open middle third of a set of line segments. This process is illustrated in Figure 2.1,

where E_1 is the set obtained by deleting the middle third of the initial interval E_0, so that E_1 consists of the two intervals [0, 1/3] and [2/3, 1]. This process is continued, with E_n obtained by deleting the middle third of each interval in E_{n-1}, so that the middle third Cantor set F is the limit of the sequence of sets E_n as n tends toward infinity.

Figure 2.1: Construction of the middle third Cantor set F, by repeated removal of the middle third of each interval [2].

Intuitively, it might seem that so much of the initial interval E_0 was removed during the construction of F that nothing remains, but in fact F is an infinite set of disconnected points [2]. This clearly makes F impossible to draw, so representations of F tend to be pictures of one of the E_n intervals, where n is reasonably large. This limitation applies to the numerical representations of all fractal sets, which demonstrate fractal characteristics across a series of scales down to the limit of resolution of the dataset. Another well-known fractal is the von Koch curve, shown in Figure 2.2. This fractal curve is constructed in the same recursive manner as the middle third Cantor set, but rather than being deleted, the open middle third of each interval is replaced with the other two sides of an equilateral triangle. As n tends to infinity, the detail in the sequence of curves becomes finer and E_n approaches a limiting curve F, called the von Koch curve.
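The recursive deletion described above translates directly into code. The following Python sketch is illustrative only (the function name is mine); it produces the intervals making up E_n:

```python
def cantor_intervals(n):
    """Return the closed intervals of E_n in the middle third Cantor construction.

    E_0 is the unit interval; each step deletes the open middle third of
    every remaining interval, keeping the left and right thirds.
    """
    intervals = [(0.0, 1.0)]
    for _ in range(n):
        intervals = [piece
                     for a, b in intervals
                     for piece in ((a, a + (b - a) / 3.0),
                                   (b - (b - a) / 3.0, b))]
    return intervals
```

For example, cantor_intervals(1) yields the two intervals [0, 1/3] and [2/3, 1] of E_1. The total length of E_n is (2/3)^n, which tends to 0 as n grows, illustrating why F has zero length yet is an infinite set of points.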

Figure 2.2: Construction of the von Koch curve F, by repeated replacement of the middle third of each interval with the other two sides of an equilateral triangle [2].

The topographic dimension of an object, D_t, is informally understood as the number of coordinates in R^K Euclidean space needed to specify any point in the object. This idea is easily demonstrated by a 1-dimensional curve or a 2-dimensional surface in 3-dimensional Euclidean space (R^3). The magnitude of a 1-dimensional curve is measured in terms of length, while the magnitude of a 2-dimensional surface is measured in terms of area. However, the von Koch curve demonstrates how fractal sets complicate these simple concepts. The von Koch curve is much too irregular to have a measurable tangent in the classic sense, and while this curve is of infinite length, it occupies no area in the plane. So the traditional concepts of length and area do not provide a useful measurement of the magnitude of F. This general measurement problem led Benoit Mandelbrot to develop fractal geometry and its central concept of fractal dimension to characterize these types of highly irregular sets [4]. Fractal dimension is a statistical

quantity of a dataset that describes the space-filling capacity of the set [11] by expanding the traditional concept of topographic dimension beyond simple integer values. Fractal dimension was originally and most rigorously defined by the Hausdorff-Besicovitch dimension definition [10], but many simpler definitions of fractal dimension have been introduced, including the box dimension [2], correlation dimension [20], packing dimension [2], the generalized dimension [21], the similarity dimension [2], and the minimum cluster volume dimension [1]. For sets with exact self-similarity (such as the middle third Cantor set and the von Koch curve), the similarity dimension definition provides a simple example of how the dimension of a set can be expressed based on the set's scaling and self-similarity. When the R^K Euclidean space containing a set is scaled by a factor of 1/r in each topographic dimension as the set is divided into N segments, the similarity dimension of the set is defined as [2]:

D_s = log(N) / log(r)    (2.1)

A set with one topographic dimension, D_t = 1, can be divided into four equal parts, each of which is an exact copy of the original, scaled by the factor of 1/4. This set then has a similarity dimension of log(4)/log(4) = 1. A set with two topographic dimensions, D_t = 2, can be divided into four equal parts, each of which is an exact copy of the original, scaled by the factor of 1/2. This set has a similarity dimension of log(4)/log(2) = 2. In the middle third Cantor set, the R^1 Euclidean space is scaled by a factor of 1/3 and contains two remaining line segments, giving the set a similarity dimension of log(2)/log(3) = 0.631. The similarity dimension of the von Koch curve is log(4)/log(3) = 1.262, which is consistent with the notion of the von Koch curve

being larger than 1-dimensional but smaller than 2-dimensional. Unfortunately, the similarity dimension is only meaningful for exactly self-similar sets. Many fractal sets are created randomly, where the strict self-similarity seen in the von Koch curve is replaced by statistical self-similarity, as seen in Figure 2.3. The dimension of these random fractals can be expressed using the more formal Hausdorff-Besicovitch dimension, which is defined for any set and may be shown to equal the similarity dimension for exactly self-similar sets.

Figure 2.3: Von Koch curve with random orientation [3].

Before presenting the Hausdorff-Besicovitch dimension definition of fractal dimension, it is important to take a closer look at the construction of fractal datasets.

Fractal Construction

Artificially generated fractal sets can be divided into the following five categories based on the general method of their construction:

Escape-time: fractals created by using a point in the complex plane to initialize a function, then iterating the function to determine the point's orbit. The fractal consists of the set of points in the complex plane whose orbit remains bounded; examples include the well-known Mandelbrot set, shown in Figure 2.4.

Iterated function: fractals with exact self-similarity created by iterating a

fixed geometric replacement rule.

Random: fractals generated by stochastic rather than deterministic processes.

Strange attractors: fractals created from phase portraits of dynamical systems of initial-value differential equations as they evolve over time.

L-systems: fractals generated by iterative string rewriting, developed to model the growth and branching patterns of plants.

Figure 2.4: The famous Mandelbrot set is defined as the set of values of c in the complex plane for which the orbit of z_0 = 0 does not diverge under iteration of the complex quadratic polynomial z_{n+1} = z_n^2 + c [4].

While the fractal sets in each of these categories are mathematically important, this research focuses on iterated function fractals and random fractals. Iterated function fractals such as the middle third Cantor set and the von Koch curve can both be easily generated with specific fractal dimensions, but there is an important distinction between the approaches used to construct the middle third Cantor set and the von Koch curve that should be noted. These two iterated function fractals were constructed using the same initial interval, a D_t = 1 line segment of unit length, E_0. The iterative process used to create the middle third Cantor set removes the middle section of each line segment,

introducing gaps in the dataset. This subtractive approach produces a discontinuous dataset with a fractal dimension that is less than the topographic dimension of the initial interval E_0. In contrast, the iterative process used to create the von Koch curve replaces the middle segment of each interval with two line segments. This additive approach produces a continuous dataset with a fractal dimension that is greater than the topographic dimension of the initial interval E_0. Random fractals closely mimic naturally occurring fractal sets and, like iterated function fractals, they can also be easily generated with specific fractal dimensions. Percolation and simulated fractional Brownian motion are two examples of random fractals that further demonstrate the difference in these additive and subtractive approaches. Percolation theory is a well-known probabilistic theory that examines the likelihood of a fluid flowing from gap to gap through the dataset [8]. A random percolation fractal is created by dividing an R^K Euclidean initial interval, E_0, into N segments in each of its K Euclidean dimensions. Each of the resulting N^K segments has an independent probability of (1 − p) of being eliminated as a segment of E_1. This procedure is continued, with E_n obtained by randomly removing the (N^K)^n segments composing E_{n−1} with a probability of (1 − p), so that the random percolation fractal F = ∩_{n=0}^∞ E_n is dependent on the value of p ∈ (0, 1) [2]. The fractal dimension of a random percolation fractal can be given as:

D_h = log(N^K p) / log(N)    (2.2)

Figure 2.5a shows how percolation uses a subtractive approach to reduce an initial D_t = 3 cube into a fractal set with a fractal dimension of 2.3. This random percolation fractal is a binary fractal dataset which closely mimics natural fractal patterns such as the distribution of star clusters. The fractal dimension of these fractals is traditionally

estimated with the standard box-counting method, but researchers have also recognized the importance of characterizing the size and distribution of the gaps in this type of fractal dataset with lacunarity and connectivity measurements.

Classical Brownian motion was originally developed to describe the random movements of minute particles suspended in liquid [2]. Fractional Brownian motion is defined as an integral over time of the increments of the path of Gaussian Brownian motion [22]. A simulated fractional Brownian surface fractal (fbm) is a D_t = 2 plane with a random variable X(x, y) giving the height of the surface at each point (x, y). For 0 < α < 1, an index-α Brownian function X is defined such that [2]:

X(x, y) = Σ_{i=0}^∞ C_i λ^{−αi} sin(λ^i (x cos B_i + y sin B_i) + A_i)    (2.3)

where the C_i are independent, having a normal distribution with mean zero and variance equal to 1, and the B_i and A_i are independent, with the uniform distribution on [0, 2π) [2]. The fractal dimension of a fbm can then be given as:

D_h = 3 − α    (2.4)

Figure 2.5b shows how simulated fractional Brownian motion uses the additive approach to grow an initial D_t = 2 plane into a fractal set with fractal dimension of 2.3. This fbm is a fractal image with a surface intensity which closely mimics a grayscale image of a natural fractal landscape [4]. Because these fractals are not independent in all R^3 Euclidean dimensions, the standard box-counting method is unable to accurately estimate their fractal dimension. This has led researchers to develop the differential box-counting method specifically for measuring the fractal dimension of this type of fractal dataset [23].
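Equation (2.3) can be approximated numerically by truncating the infinite sum. The following is a minimal pure-Python sketch, not the generator used in this research; the number of retained terms, the seed handling, and the default λ = 2 are assumptions made for illustration.

```python
import math
import random

def fbm_height(x, y, alpha, lam=2.0, n_terms=20, seed=1):
    """Approximate the index-alpha Brownian function of (2.3) by
    truncating the sum at n_terms; by (2.4) the surface has D_h = 3 - alpha.
    C_i ~ N(0, 1); A_i, B_i ~ Uniform[0, 2*pi)."""
    rng = random.Random(seed)
    C = [rng.gauss(0.0, 1.0) for _ in range(n_terms)]
    A = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_terms)]
    B = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_terms)]
    return sum(
        C[i] * lam ** (-alpha * i)
        * math.sin(lam ** i * (x * math.cos(B[i]) + y * math.sin(B[i])) + A[i])
        for i in range(n_terms)
    )
```

Sampling this function on a regular (x, y) grid yields a surface of the kind shown in Figure 2.5b.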

(a) Subtractive approach seen in a random percolation fractal. (b) Additive approach seen in a simulated fractional Brownian surface fractal.

Figure 2.5: Example of two random fractal datasets created with different processes, but producing the same fractal dimension equal to 2.3.
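The subtractive percolation construction and equation (2.2) can be sketched in a few lines. This is a hedged pure-Python illustration (the function names and defaults are mine, not the implementation developed in this research); note that the D_h = 2.3 cube of Figure 2.5a corresponds to p = N^{D_h}/N^K ≈ 0.616 for N = 2, K = 3.

```python
import math
import random
from itertools import product

def percolation_fractal(N, K, p, levels, seed=42):
    """Build E_levels of a random percolation fractal: subdivide each
    retained segment into N**K subsegments and keep each one
    independently with probability p."""
    rng = random.Random(seed)
    cells = {(0,) * K}                      # E_0: the whole initial interval
    for _ in range(levels):
        cells = {
            tuple(N * c + o for c, o in zip(cell, offs))
            for cell in cells
            for offs in product(range(N), repeat=K)
            if rng.random() < p             # retain with probability p
        }
    return cells

def percolation_dimension(N, K, p):
    """Equation (2.2): D_h = log(N**K * p) / log(N)."""
    return math.log(N ** K * p) / math.log(N)
```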

2.2 Hausdorff-Besicovitch Dimension

The Hausdorff-Besicovitch (HB) dimension, D_h, is a non-negative real number that generalizes the concept of the dimension of a real vector space. The HB dimension is defined as follows [24]: Let A be an image or signal in R^K Euclidean space. The diameter of a cover element C_i is defined as the greatest distance between any pair of points in C_i, as formalized in:

diam(C_i) = sup{d_e(x, y) : x, y ∈ C_i}    (2.5)

where d_e(x, y) denotes the Euclidean distance function. If C is a countable collection of cover elements of diameter at most ε where:

A ⊆ ∪_{i=1}^∞ C_i    (2.6)

then C is called an ε-cover of A. For any ε > 0, let:

h^s_ε(A) = inf{ Σ_{i=0}^∞ diam(C_i)^s : C is an ε-cover of A }    (2.7)

The optimal cover, C*(A), is then defined as the collection of cover elements of diameter at most ε that minimizes the sum of the s-th power of diam(C_i) (see Figure 2.6). As ε decreases, the class of permissible covers of A is reduced, increasing (2.7), so that the s-dimensional Hausdorff measure of A is:

h^s(A) = lim_{ε→0} h^s_ε(A)    (2.8)

There is a critical value of s at which h^s(A) jumps from ∞ to 0. This critical value is called the Hausdorff-Besicovitch dimension of A and is defined as:

D_h(A) = inf{s : h^s(A) = 0} = sup{s : h^s(A) = ∞}    (2.9)

Figure 2.6: Set A and two possible ε-covers. The optimal cover, C*, gives the infimum of Σ_{i=0}^∞ diam(C_i)^s over all possible ε-covers.

While conceptually useful, the complexity of determining the optimal cover makes the HB dimension impractical to calculate for general sets. The most commonly used upper-bound estimation of the HB dimension is the box dimension definition.

2.3 Box Dimension

The box dimension, D_b, is defined as [11]:

D_b(A) = lim_{ε→0} log N_ε(A) / log(1/ε)    (2.10)

While the Hausdorff-Besicovitch dimension uses cover elements of different sizes, the simpler box dimension is typically defined using cover elements of the same size. Like the HB dimension, the box dimension can also be defined in terms of an optimal cover as follows: Let A be an image or signal in R^K Euclidean space, and let C be a countable collection of cover elements of diameter ε where:

A ⊆ ∪_{i=1}^∞ C_i    (2.11)

Then C is an ε-cover of A. For any ε > 0, the box dimension optimal cover is defined as the ε-cover that satisfies:

inf{ Σ_{i=0}^∞ diam(C_i)^s : C is an ε-cover of A }    (2.12)

The box dimension optimal cover is produced when the open cover, N_ε(A), is defined as the number of hyper-balls of radius ε needed to optimally cover A. It is important to note that the box dimension does not always equal the HB dimension. For example, it can be shown that D_b(A) = n for any dense subset A of R^n = {x : x = (x_1, ..., x_n), x_i ∈ R}. Likewise, for the same A, D_h(A) ≤ n; moreover, D_h(A) = 0 for any such countable set A [19]. Therefore, given the set A of rational numbers on [0, 1], the box dimension is D_b(A) = 1 while the HB dimension is D_h(A) = 0. Although the box dimension fails in some instances, the value it produces is normally similar to the HB dimension.

Box-Counting Method for Estimating Fractal Dimension

The box-counting algorithm is the most popular numerical algorithm for estimating fractal dimension based on the box dimension definition. The box-counting algorithm typically defines the open cover, N_ε(A), as the number of ε-mesh cubes that intersect A. This makes the box-counting algorithm simple to implement, but this definition of N_ε(A) means the box-counting algorithm does not typically produce the optimal cover. The differences between the Hausdorff-Besicovitch optimal cover, the box dimension optimal cover, and the box-counting algorithm cover of a dataset are illustrated in Figure 2.7. The general procedure employed by the box-counting algorithm involves choosing a box length, ε, placing a grid of boxes over the dataset, and counting the number

Figure 2.7: Set A and three different possible covers: (i) the HB optimal cover of A, given by the least number of sets of diameter at most ε needed to cover A; (ii) the box-counting algorithm cover of A, given by the number of ε-mesh cubes that intersect A; (iii) the box dimension optimal cover of A, given by the least number of balls of radius ε needed to cover A [2].

of nonempty boxes as the open cover, N_ε [1]. This process is repeated so that N_ε is determined for boxes of decreasing length until the resolution of the dataset is reached. The open cover is plotted versus the reciprocal of the box length, and the estimate of the box dimension is then calculated from a least squares linear regression of the plot's monotonically rising slope. Because the box-counting algorithm does not ideally place the cover elements to minimize their total number based on the distribution of the dataset, it tends to over-count the number of nonempty boxes needed to cover the dataset [1]. Because the percentage of over-counted boxes to total boxes decreases as the box size is increased, the slope of the open cover versus the reciprocal of the box length plot

is reduced, causing the box-counting algorithm to underestimate the fractal dimension of the set. The key concept underlying the box-counting algorithm is that the box size is chosen first, then the number of boxes needed to cover the set is estimated. If the dataset is not distributed proportionately in all Euclidean dimensions, estimating the hypervolume becomes exceedingly difficult because the open cover generated by the box-counting algorithm is not versatile enough to adjust to the non-uniform distribution of the dataset. This makes the traditional box-counting algorithm unsuitable for datasets such as fbms, where two coordinates (x, y) represent position in a plane and the third coordinate (z) represents pixel gray level intensity. This has led to the development of several adaptations of the box-counting algorithm [23],[25],[26],[5] that vary in how boxes are counted along the image intensity surface in order to estimate the fractal dimension of fbms and other similar fractal grayscale images.

(a) D_h = 2.1 (b) D_h = 2.7

Figure 2.8: An example of two simulated fractional Brownian surfaces with different fractal dimensions.
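The grid-based procedure described above can be sketched as follows. This is a minimal 2-D illustration in pure Python (the function name and the inline least squares slope are my own choices, not the thesis implementation):

```python
import math

def box_counting_dimension(points, box_lengths):
    """Estimate the box dimension of a 2-D point set: for each box
    length eps, count the occupied eps-mesh boxes N_eps, then return
    the least squares slope of log(N_eps) versus log(1/eps)."""
    xs, ys = [], []
    for eps in box_lengths:
        occupied = {(int(x // eps), int(y // eps)) for x, y in points}
        xs.append(math.log(1.0 / eps))
        ys.append(math.log(len(occupied)))
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    return (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
            / sum((x - mean_x) ** 2 for x in xs))
```

For points filling the unit square the slope approaches D_t = 2; for a sampled middle third Cantor set it approaches log 2 / log 3 ≈ 0.631.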

Differential Box-Counting Algorithm for Estimating Fractal Dimension

The differential box-counting algorithm (DBC), proposed by Sarkar and Chaudhuri [23], is the most commonly used method for measuring the fractal dimension of fbms and other fractal grayscale images.

Figure 2.9: Example of how the number of boxes is determined by the DBC [5].

The procedure for the DBC begins by partitioning an image of size M x M pixels into non-overlapping squares of size s x s, where M/2 ≥ s > 1 and s is an integer. On each square a column of boxes of size s x s x s' is placed, where s' is the height of each box in the third coordinate, given by G/s' = M/s, and G is the total number of gray levels. Letting the minimum and maximum gray levels in the (i, j)-th column fall into box number k and l, respectively, the number of boxes in this column is counted as:

n_r(i, j) = l − k + 1    (2.13)

where r = s/M is the scale. The total number of boxes contributed by all columns is then given by:

N_r(A) = Σ_{i,j} n_r(i, j)    (2.14)
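The column-counting step of (2.13)-(2.14) can be sketched as follows. This is a hedged pure-Python illustration (the function name is mine); the box height s' is computed from G/s' = M/s as stated above.

```python
def dbc_box_count(image, s, G):
    """Differential box-counting of (2.13)-(2.14): image is an M x M
    grid of gray levels in [0, G); for square size s the box height is
    s' = s * G / M (from G/s' = M/s). Returns N_r, the sum of n_r(i, j)."""
    M = len(image)
    s_prime = s * G / M
    N_r = 0
    for i in range(0, M, s):
        for j in range(0, M, s):
            block = [image[a][b]
                     for a in range(i, min(i + s, M))
                     for b in range(j, min(j + s, M))]
            k = int(min(block) // s_prime)   # box holding the minimum gray level
            l = int(max(block) // s_prime)   # box holding the maximum gray level
            N_r += l - k + 1                 # n_r(i, j) = l - k + 1, (2.13)
    return N_r
```

Repeating this for several values of s and regressing log N_r against log(1/r), with r = s/M, gives the DBC estimate of D_h.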

The fractal dimension of the image is then estimated from the least squares linear regression of log(N_r) versus log(1/r). The shifting and scanning box-counting algorithms proposed by Wen-Shiung et al. [25] may provide more accurate results by eliminating the tendency of the DBC to over-count the number of boxes due to quantization effects, shifting the boxes to better fit the dataset. Another improved DBC algorithm, proposed by Jian et al. [5], may produce more accurate results by allowing columns to overlap and by including the standard deviation of the image in the calculation of the box height. These changes to the DBC are attempts to bring the open cover of the dataset produced by the DBC closer to the optimal cover. Another inherent problem with the differential box-counting algorithm is that it deviates from the Hausdorff-Besicovitch definition of fractal dimension because it is not extensible to an arbitrary R^K Euclidean space. The DBC is only designed for grayscale fractal images and is not suited for analyzing other types of fractals or higher dimensional datasets, such as fused multi-spectral images.

2.4 Lacunarity

The term lacunarity, derived from the Latin lacuna meaning gap, was introduced by Mandelbrot [4] as a means to quantify the size of the gaps in a fractal dataset. The series of Cantor sets shown in Figure 2.10 demonstrates how changes in lacunarity alter the look of a dataset. The Cantor sets at the bottom of the series have small gaps; these sets can be described as fine grained and are termed to have a low lacunarity. The Cantor sets at the top of the series have large gaps; these sets can be described as coarse grained and are termed to have a high lacunarity. The difference in gap size is obvious for these simple Cantor sets, but examining the lacunarity at different scales is

very important because fractal sets that are fine grained at small scales can be quite coarse grained when examined at larger scales, or vice versa [27]. Mandelbrot explained this by saying "it is tempting to estimate a Cantor dust's degree of lacunarity by the largest gap's relative length," but a more promising measure comes from examining the distribution of the gap sizes in the dataset at different scales [4]. Therefore, lacunarity can be defined as the scale-dependent measure of the gap size of a dataset, typically expressed as the set of lacunarity measurements at specific box sizes.

Figure 2.10: A stack of Cantor sets with equal fractal dimension D_s = 1/2 whose lacunarity increases from very low at the bottom to very high at the top of the stack [6].

There is not a general consensus on the mathematical definition of lacunarity. In Ebert's book [28], Musgrave introduced a new concept of lacunarity for simulated fractional Brownian surfaces. Musgrave defines lacunarity as the spatial resolution, i.e., the gap size between successive spatial frequencies, given by the value of λ in (2.3) that controls the spatial scaling factor between the self-similar levels which are superimposed to form a fbm (see Figure 2.11). However, problems arise with this definition of lacunarity because it is not an independent fractal characteristic. The value of λ is typically taken to be 1 < λ < 2 in variations of the Weierstrass function such as (2.3), and values of λ > 2 could affect the Hausdorff dimension of a fbm [2], but changes

in the texture of fbms are usually only noticeable for λ > 2 [22]. Also, because this definition does not create actual gaps in the datasets, it conflicts with Mandelbrot's original concept of characterizing the texture of non-continuous binary fractal datasets with the complementary aspects of lacunarity and connectivity. Therefore, fbms with various Musgrave lacunarities, λ, are used to test the precision of the fractal dimension estimations produced by the DBC algorithm and the KDTREE algorithm developed in this research, but are not used to test the estimations of lacunarity.

(a) λ = 2 (b) λ = 10

Figure 2.11: An example of two simulated fractional Brownian surfaces with fractal dimension D_h = 2.5 and Musgrave lacunarities: a) λ = 2; b) λ = 10.

Gliding Box Method for Estimating Lacunarity

The most popular method for measuring lacunarity is the gliding box (GB) algorithm, which employs a localized mass calculation [4],[16] to quantify the translational and rotational invariance of a dataset [7]. Similar to the box-counting algorithm, the procedure of the GB algorithm, demonstrated in Figure 2.12, centers a box of size r over each point p in the dataset and counts the number of points contained within the box, creating a distribution of box masses B(p, r). The box mass distribution is then converted into a probability distribution

Figure 2.12: Illustration of the gliding box method; the underlying lattice is represented by open dots, and those which are occupied by the center of a particle are indicated by a solid square. The gliding box is a square of side 2r [7].

Q(p, r), and the lacunarity of the dataset is given by:

L_GB = Z^(2)(r) / [Z^(1)(r)]^2    (2.15)

where Z^(1)(r) and Z^(2)(r) are the first and second moments of the box mass probability distribution, respectively. The GB algorithm has a strong sensitivity to mass change because the algorithm measures the mass distribution rather than the actual gaps in the dataset [16]. This causes the gliding box measurement of lacunarity to be affected by any change in the dataset, including fractal dimension.

2.5 Succolarity

Mandelbrot described a succolating fractal as one that "nearly includes the filaments that would have allowed percolation" [4]. Percolation theory is a probabilistic theory for describing the connectedness of the components of a dataset that was introduced earlier as a method for creating random fractals. In general, percolation theory involves a lattice with nodes that are independently chosen to be open with probability p and closed with probability 1 − p. Clusters are then formed by collections of connected open nodes. The percolation probability, θ(p), is the probability that there exists an

open path from the origin to the exterior of the dataset as the number of nodes tends toward infinity; this percolation path is known as the infinite cluster. Much of the interest in percolation theory is based on the investigation of the geometric properties of the percolation path near the critical probability P_c = sup{p : θ(p) = 0}, shown in Figure 2.13, where an open path exists across the dataset and percolation becomes possible.

Figure 2.13: Schematic diagram of the probability, p, versus the percolation probability, θ(p). Many aspects of this graph are still conjectural [8].

Because the fractal dimension of a percolation dataset is related to p according to (2.2), Mandelbrot referred to a succolating fractal as one with a fractal dimension just below the value that corresponds to P_c. The problem with this approach to characterizing connectivity is that determining the value of P_c for even simple lattices is an open problem. While lacunarity simply quantifies the magnitude of the gaps in a dataset, connectivity is a substantially more complex concept to quantify for fractal datasets because it attempts to quantify the overall distribution of a fractal dataset at different scales. The complex nature of connectivity, combined with the all-inclusive nature of the popular gliding box definition of lacunarity, has led to the importance of connectivity as a unique characterization of fractal datasets being largely ignored, or only used to characterize the rotational invariance of fractal datasets [29].
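The gliding box measurement of (2.15) in Section 2.4 can be sketched as follows, here for a 1-D binary sequence (the function name and the handling of box positions are my assumptions; a 2-D version glides a square window instead of an interval):

```python
def gliding_box_lacunarity(mass, r):
    """Gliding box lacunarity of (2.15) for a 1-D binary sequence:
    glide a box of length r along the data, record the box masses,
    and return Z2 / Z1**2 from the box mass distribution."""
    masses = [sum(mass[i:i + r]) for i in range(len(mass) - r + 1)]
    n = len(masses)
    Z1 = sum(masses) / n                  # first moment of the box masses
    Z2 = sum(m * m for m in masses) / n   # second moment
    return Z2 / (Z1 * Z1)
```

A translation-invariant sequence gives L_GB = 1, while clumped mass drives L_GB above 1, illustrating the method's sensitivity to the mass distribution rather than to the gaps themselves.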

Connectivity

Connectivity is a fundamental property of graph theory and point-set topology for classifying datasets. The traditional definition [9] of connectivity is that a set A is connected if it cannot be written as the union of two disjoint closed subsets U and V. Robins et al. [24] introduced a new definition of connectivity in which a given subset A of a metric space is said to be ε-disconnected if it can be written as the union of two sets that are separated by a distance of at least ε, i.e., there are two closed subsets U and V with U ∪ V = A and d(U, V) = inf_{x∈U, y∈V} d(x, y) ≥ ε; otherwise, A is ε-connected.

Figure 2.14: Minimum spanning tree for 10^4 points uniformly distributed on the Sierpinski triangle [9].

This new definition of connectivity quantifies the distribution of the points in a set by examining them at different scales. This definition was then used to numerically investigate the connectivity of finite point-set approximations of fractal datasets by creating minimum spanning trees [9]. In the field of graph theory, a graph is defined as a finite set of points called vertices and a list of pairs of connected points called edges. A graph is connected if there is a sequence of edges (a path) joining any vertex to any other vertex. A connected graph is called a tree when it contains no closed paths. Given a graph, G, a spanning tree is a subgraph that is a tree and contains all the vertices in G. In this application, the vertices are the data points

forming a fractal dataset, the edges are the lines joining two points, and the weight of each edge is the Euclidean distance between the two points connected by the edge.

(a) C(ε) (b) D(ε) (c) I(ε)

Figure 2.15: Minimum spanning tree ε-component quantities: (a) ε-connected components, C(ε); (b) the largest diameter, D(ε); (c) the number of isolated points, I(ε); for the Sierpinski triangle shown in Figure 2.14 [9].

The minimum spanning tree is the spanning tree of the dataset with the minimal total weight, as seen in Figure 2.14. Once constructed, the minimum spanning tree holds all the information needed to deduce connectedness properties [9]. The number of ε-connected components, C(ε), the largest diameter, D(ε), and the number of isolated points, I(ε), were counted at different scales (different values of ε), as shown in Figure 2.15. By extrapolating the limiting behavior of these quantities as ε tends toward zero, the connectivity of fractal sets with different distributions can be compared. The first three moments (mean µ, standard deviation σ, and skewness s) of the histogram of the minimum spanning tree edges have also been used to characterize the distribution of D_t = 2 galaxy distributions [30]. This approach was able to efficiently discriminate between simulated galaxy clusters with random Poisson distributions, distributions with a centered King profile, and distributions with a centered NFW profile [30].
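The minimum spanning tree construction and the ε-connected component count C(ε) can be sketched as follows. This is a hedged pure-Python illustration using Prim's algorithm (the function names are mine; [9] does not prescribe a particular MST algorithm); removing every MST edge of length at least ε splits the tree into the ε-connected components.

```python
import math

def mst_edges(points):
    """Prim's algorithm: return the edge lengths of the Euclidean
    minimum spanning tree of a finite point set."""
    n = len(points)
    in_tree = [False] * n
    best = [math.inf] * n     # cheapest connection of each vertex to the tree
    best[0] = 0.0
    edges = []
    for step in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if step > 0:
            edges.append(best[u])
        for v in range(n):
            if not in_tree[v]:
                d = math.dist(points[u], points[v])
                if d < best[v]:
                    best[v] = d
    return edges

def components(edge_lengths, eps):
    """C(eps): each removed MST edge of length >= eps adds one
    epsilon-connected component."""
    return 1 + sum(1 for e in edge_lengths if e >= eps)
```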

3. Cover-Based Method

The cover-based method is an approach to estimating fractal characteristics based on the concept of the optimal cover, C*(A), defined for the box dimension in (2.11)-(2.12) as the collection of cover elements of diameter ε that minimizes the sum of the s-th power of their diameter while completely covering a dataset. The difficulty of determining the optimal placement of the cover elements makes the box dimension optimal cover difficult to determine for general datasets. The cover-based method attempts to solve this problem by using a dual mathematical form of the box-counting method, as discussed in [1]. The traditional box-counting method finds the number of mesh cubes that intersect a dataset using a given element size. The cover-based method reverses this procedure by first choosing the number of cover elements and then finding the position and minimum size of each element required to optimally cover a dataset.

3.1 Cover-Based Method Fractal Dimension Definition

The box dimension, D_c, can be defined for the cover-based method as:

D_c(A) = lim_{ε→0} log I / log(1/ε)    (3.1)

where I is the number of optimal cover elements:

C*(A) = {C_1, C_2, ..., C_I}, A ⊆ ∪_{i=1}^I C_i    (3.2)

and ε is the diameter of each of the I cover elements needed to optimally cover the dataset A.

3.2 Cover-Based Method Lacunarity Definition

The cover-based method definitions for lacunarity and connectivity will now be introduced via an expression based on the minimum and maximum spanning trees between cover elements, which mathematically describes the distribution of a dataset at different resolutions, as alluded to in [16]. While spanning trees are well-known in graph theory, the following mathematical definitions are included to define the simple construction of minimum and maximum spanning trees based on the box dimension definition of the optimal cover. The lacunarity and connectivity measurements introduced in this research are just one possible expression of the information contained in the minimum and maximum spanning trees of the optimal cover of a fractal dataset.

The cover-based method definition of lacunarity uses a simple undirected graph, T_I(A) = (c*, E), to initialize a series of bipartite graphs, G_n(A) = (U_n, V_n, E_n), which are then trimmed to create a maximum spanning tree, M^I_max(A) = (C*, E*_max), that connects all the cover elements in C*. The cover-based method lacunarity measurement at I cover elements is the summation of the spans of this maximum spanning tree, normalized by the summation of the edge set of the simple undirected graph. This process is repeated for different numbers of cover elements to produce lacunarity measurements at different scales. The cover-based method's mathematical definition of lacunarity is shown next; for a complete development see Appendix A.

The cover-based method definition of lacunarity begins by defining the gaps in a dataset as the distances between the I elements of the optimal cover:

C*(A) = {C_1, C_2, ..., C_I}, A ⊆ ∪_{i=1}^I C_i

The centers of the elements of the optimal cover are defined as:

c_i = center of cover element C_i    (3.3)

and the set of cover centers is defined as:

c*(A) = {c_1, c_2, ..., c_I} = ∪_{i=1}^I c_i    (3.4)

T_I(A) = (c*, E) is a simple undirected graph created for I cover elements. E is the edge set of T_I(A), containing a unique edge connecting every cover element center given in c* to every other, unless the edge bisects another cover element, in which case it is excluded from E. The cardinality of the edge set of T_I(A) is then:

|E| ∈ (I, I(I − 1)/2]    (3.5)

and the summation of the edge set of T_I(A) is:

T̂_I = Σ_{k=1}^{|E|} (e_k − ε), e_k ∈ E    (3.6)

The edge set of this simple undirected graph is then trimmed to create the edge sets for a series of bipartite graphs. G_n(A) = (U_n, V_n, E_n) is a bipartite graph, where the partitioned vertex sets (U_n, V_n) are composed of the centers of the cover elements so that c* = U_n ∪ V_n, and E_n is the edge set where each vertex in U_n is adjacent to every vertex in V_n. The vertex sets of G_n are defined as:

U_n(G_n) = {c* \ V_n}    (3.7)

V_n(G_n) = {V_{n−1} ∪ c_b : c_b maximizes e_max_{n−1}}    (3.8)

The greatest span in E_n between cover elements with centers in U_n(G_n) and V_n(G_n) is defined as:

e_max_n = sup_{a,b}{d_e(c_a, c_b) − ε : c_a ∈ V_n; c_b ∈ U_n}    (3.9)

where ε is the diameter of each of the I cover elements. This process is repeated until n = (I − 1) bipartite graphs have been created and the greatest span in each E_n(G_n) between cover elements with centers in U_n(G_n) and V_n(G_n) has been found. When this process is completed, the e_max_n spans are used to create a maximum spanning tree that connects all of the cover elements in C*. M^I_max(A) = (C*, E*_max) is a connected, acyclic maximum spanning tree of the dataset A for I cover elements. The maximum spanning tree edge set is defined as:

E*_max = ∪_{n=1}^{I−1} e_max_n    (3.10)

The lacunarity at I cover elements is the summation of the spans of M^I_max(A) = (C*, E*_max), defined as:

L̂^I_c(A) = Σ_{n=1}^{I−1} e_max_n    (3.11)

This lacunarity measurement is normalized by the summation of the edge set of the simple undirected graph, T_I(A) = (c*, E), as defined in (3.6). Normalizing the lacunarity measurement by dividing by T̂_I and taking its reciprocal makes the measurement unitless and allows it to be compared over differences in dataset ranges and scales. The normalized lacunarity measurement is defined as:

L^I_c(A) = T̂_I / L̂^I_c    (3.12)

As the gap size in a dataset increases, the value of T̂_I tends to increase faster than L̂^I_c. Therefore, the normalized lacunarity measurement is inverted to produce a direct relationship with gap size. The lacunarity of a fractal dataset A is the set of these normalized lacunarity measurements at different scales, given by:

L_c(A) = ∪_{I=2}^∞ L^I_c    (3.13)
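Given precomputed cover centers and a common diameter ε, equations (3.6)-(3.12) can be sketched as follows. This is a hedged illustration, not the KDTREE implementation developed in this research: the edge bisection exclusion is omitted for simplicity (no edge in the I = 4 example is excluded), and the maximum spanning tree is grown greedily per (3.8)-(3.9).

```python
import math

def cover_lacunarity(centers, eps):
    """Sketch of (3.6)-(3.12): build the complete graph on the cover
    centers (bisection exclusion omitted), grow a maximum spanning tree
    by repeatedly adding the farthest remaining center, and return
    T_hat / L_hat."""
    I = len(centers)

    def d(a, b):
        return math.dist(centers[a], centers[b])

    # T_hat of (3.6): sum of (edge length - eps) over all edges of T_I(A)
    T_hat = sum(d(a, b) - eps for a in range(I) for b in range(a + 1, I))
    V = {0}                              # V_1 = {c_1}
    spans = []
    while len(V) < I:
        # e_max_n of (3.9): the greatest span between V_n and U_n
        span, b = max((d(a, u) - eps, u)
                      for a in V for u in range(I) if u not in V)
        spans.append(span)
        V.add(b)
    L_hat = sum(spans)                   # (3.11)
    return T_hat / L_hat                 # normalized lacunarity, (3.12)
```

Applied to the I = 4 cover centers of the numerical example that follows, this greedy growth selects the spans c_1-c_4, c_4-c_2, and c_2-c_3, in the same order derived there.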

Numerical Example

A numerical example of the cover-based method definition for lacunarity is now presented for the simple dataset, A, shown in Figure 3.1. This example presents an optimal cover of A, a simple undirected graph, T_I(A) = (c*, E), a series of bipartite graphs, G_n(A) = (U_n, V_n, E_n), and a maximum spanning tree, M^I_max(A) = (C*, E*_max), for the dataset A at I = 4 cover elements. This dataset was designed to have an easily determined optimal cover where the diameter of each cover element, ε = 14.14, is the smallest diameter capable of completely covering A with I = 4 cover elements. The optimal placement of the cover elements was determined intuitively for this simple example, but remains an open problem for general datasets.

Figure 3.1: Dataset A.

For I = 4 cover elements the optimal cover, C*, is:

C* = {C_1, C_2, C_3, C_4}    (3.14)

c_i = center of cover element C_i

c* = {c_1, c_2, c_3, c_4} = {(19.5, 11.5), (44.5, 11.5), (6.5, 25.5), (29.5, 44.5)}    (3.15)

ε = diam(C_1) = diam(C_2) = diam(C_3) = diam(C_4) = 14.14    (3.16)

Figure 3.2: Simple undirected graph, T_4(A) = (c*, E).

Table 3.1: Edge set E for the simple undirected graph T_4(A) = (c*, E) seen in Figure 3.2, shown in a matrix as the lower triangular set of edges because d_e(c_1, c_1) = 0 and d_e(c_1, c_2) = d_e(c_2, c_1).

The cardinality of the edge set of T_4(A) is:

|E| = 6    (3.17)

and the summation of the edge set of T_4(A) is:

T̂_4(A) = Σ_{k=1}^6 (e_k − ε), e_k ∈ E

For n = 1:

U_1 = {c_2, c_3, c_4}    (3.18)

V_1 = {c_1}    (3.19)

E_1 = {d_e(c_1, c_2), d_e(c_1, c_3), d_e(c_1, c_4)}    (3.20)

Figure 3.3: Bipartite graph G_1(A) = (U_1, V_1, E_1), showing edges between the elements of U_1 and V_1.

Table 3.2: Edge set E_1 for the bipartite graph G_1(A) = (U_1, V_1, E_1), shown in Figure 3.3.

Table 3.2 shows that (3.9) is maximized when:

c_a = c_1, c_b = c_4    (3.21)

so that:

e_max_1 = d_e(c_1, c_4) − ε = 21.59    (3.22)

For n = 2:

U_2 = {c_2, c_3}    (3.23)

V_2 = {c_1, c_4}    (3.24)

E_2 = {d_e(c_1, c_2), d_e(c_1, c_3), d_e(c_4, c_2), d_e(c_4, c_3)}    (3.25)

Figure 3.4: Bipartite graph G_2(A) = (U_2, V_2, E_2), showing edges between the elements of U_2 and V_2.

Table 3.3: Edge set E_2 for the bipartite graph G_2(A) = (U_2, V_2, E_2), shown in Figure 3.4.

Table 3.3 shows that (3.9) is maximized when:

c_a = c_4, c_b = c_2    (3.26)

so that:

e_max_2 = d_e(c_4, c_2) − ε = 23.43    (3.27)

For n = 3:

U_3 = {c_3}    (3.28)

V_3 = {c_1, c_4, c_2}    (3.29)

E_3 = {d_e(c_1, c_3), d_e(c_2, c_3), d_e(c_4, c_3)}    (3.30)

Figure 3.5: Bipartite graph G_3(A) = (U_3, V_3, E_3), showing edges between the elements of U_3 and V_3.

Table 3.4: Edge set E_3 for the bipartite graph G_3(A) = (U_3, V_3, E_3), shown in Figure 3.5.

Table 3.4 shows that (3.9) is maximized when:

c_a = c_2, c_b = c_3    (3.31)

so that:

e_max_3 = d_e(c_2, c_3) − ε = 27.64    (3.32)

Figure 3.6: Maximum spanning tree, M^4_max(A) = (C*, E*_max).

The edge set of the maximum spanning tree shown in Figure 3.6 is:

E*_max = {e_max_1, e_max_2, e_max_3} = {21.59, 23.43, 27.64}    (3.33)

The lacunarity measurement at I = 4 is:

L̂^4_c(A) = Σ_{n=1}^3 e_max_n = 21.59 + 23.43 + 27.64 = 72.66    (3.34)

The lacunarity measurement normalized by the summation of the edge set of the simple undirected graph shown in Figure 3.2 is:

L^4_c(A) = T̂_4 / L̂^4_c = 1.49    (3.35)

3.3 Cover-Based Method Connectivity Definition

The cover-based method definition for lacunarity uses the maximum spanning tree between cover elements to quantify the size of the gaps in a dataset at different scales. This definition provides for a simple complementary mathematical definition, referred to as the cover-based method definition for connectivity, which uses the minimum

spanning tree between cover elements to quantify how closely connected the cover elements are at different scales. While lacunarity quantifies the magnitude of the gap size in a fractal dataset based on the magnitude of the maximum spanning tree, connectivity is a more intricate measurement that quantifies the overall distribution of a fractal dataset based on the distribution of the lengths of the spans forming the minimum spanning tree. The logical process of developing the following definition for connectivity is shown in more detail in Appendix B.

The cover-based method definition for connectivity begins by defining the gaps in a dataset as the distances between the I elements of the optimal cover:

C*(A) = {C_1, C_2, ..., C_I}, A ⊆ ∪_{i=1}^I C_i

The centers of the elements of the optimal cover are once again defined as:

c_i = center of cover element C_i    (3.36)

and the set of centers is defined as:

c* = {c_1, c_2, ..., c_I} = ∪_{i=1}^I c_i    (3.37)

T_I(A) = (c*, E) is a simple undirected graph created for I cover elements. E is the edge set of T_I(A), containing a unique edge connecting every cover element center given in c* to every other, unless the edge bisects another cover element, in which case it is excluded from E. The edge set of this simple undirected graph is then trimmed to create the edge sets for a series of bipartite graphs. G_n(A) = (U_n, V_n, E_n) is a bipartite graph where the vertex sets of G_n are defined as:

U_n(G_n) = {c* \ V_n}    (3.38)

V_n(G_n) = {V_{n−1} ∪ c_b : c_b minimizes e_min_{n−1}}    (3.39)

The least span in E_n between cover elements with centers in U_n(G_n) and V_n(G_n) is defined as:

e_min_n = inf_{a,b}{d_e(c_a, c_b) − ε : c_a ∈ V_n; c_b ∈ U_n}    (3.40)

where ε is the diameter of each of the I cover elements. This process is repeated until n = (I − 1) bipartite graphs have been created and the least span in each E_n(G_n) between cover elements with centers in U_n(G_n) and V_n(G_n) has been found. When this process is completed, the e_min_n spans are used to create a minimum spanning tree that connects all of the cover elements in C*. M^I_min(A) = (C*, E*_min) is a connected, acyclic minimum spanning tree of the dataset A for I cover elements. The minimum spanning tree edge set is defined as:

E*_min = ∪_{n=1}^{I−1} e_min_n    (3.41)

The connectivity at I cover elements is the coefficient of variation of the minimum spanning tree edge set, defined as:

C^I_c(A) = σ(E*_min) / µ(E*_min)    (3.42)

where σ is the standard deviation and µ is the mean of the minimum spanning tree edge set. The coefficient of variation is a normalized, unitless measure of the dispersion of a probability distribution. The connectivity of a fractal dataset A is the set of these connectivity measurements at different scales, given by:

C_c(A) = ∪_{I=2}^∞ C^I_c    (3.43)
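Equations (3.40)-(3.42) can be sketched in the same style as the lacunarity sketch: grow the minimum spanning tree greedily and take the coefficient of variation of its spans. This is a hedged illustration (the edge bisection exclusion is again omitted, and the population form of the standard deviation is an assumption, since the definition does not specify which form is used).

```python
import math

def cover_connectivity(centers, eps):
    """Sketch of (3.40)-(3.42): grow a minimum spanning tree over the
    cover centers by repeatedly adding the nearest remaining center,
    then return the coefficient of variation of the span lengths."""
    I = len(centers)

    def d(a, b):
        return math.dist(centers[a], centers[b])

    V = {0}                              # V_1 = {c_1}
    spans = []
    while len(V) < I:
        # e_min_n of (3.40): the least span between V_n and U_n
        span, b = min((d(a, u) - eps, u)
                      for a in V for u in range(I) if u not in V)
        spans.append(span)
        V.add(b)
    mu = sum(spans) / len(spans)
    sigma = math.sqrt(sum((s - mu) ** 2 for s in spans) / len(spans))
    return sigma / mu                    # C_c^I(A), (3.42)
```

Because each step subtracts the same constant ε, the greedy selection order is identical to Prim's algorithm on the raw center distances.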

Numerical Example

A numerical example of the cover-based method definition for connectivity is now presented for the simple dataset, A, shown in Figure 3.7. This example will present an optimal cover of A; a simple undirected graph, T^I(A) = (c, E); a series of bipartite graphs, G_n(A) = (U_n, V_n, E_n); and a minimum spanning tree, M_{min}^I(A) = (C, E_{min}), for the dataset A at I = 4 cover elements. This dataset was designed to have an easily determined optimal cover where the diameter of each cover element, \epsilon = 14.14, is the smallest diameter capable of completely covering A with I = 4 cover elements. The optimal placement of the cover elements was determined intuitively for this simple example, but remains an open problem for general datasets.

Figure 3.7: Dataset A.

For I = 4 cover elements the optimal cover, C, is:

C = \{C_1, C_2, C_3, C_4\} \quad (3.44)

c_i = \text{center of cover element } C_i

c = \{c_1, c_2, c_3, c_4\} = \{(19.5, 11.5), (44.5, 11.5), (6.5, 25.5), (29.5, 44.5)\} \quad (3.45)

\epsilon = \text{diam}(C_1) = \text{diam}(C_2) = \text{diam}(C_3) = \text{diam}(C_4) = 14.14 \quad (3.46)

Figure 3.8: Simple undirected graph, T^4(A) = (c, E).

Table 3.5: Edge set E for the simple undirected graph T^4(A) = (c, E) seen in Figure 3.8, shown in a matrix as the lower triangular set of edges because d_e(c_1, c_1) = 0 and d_e(c_1, c_2) = d_e(c_2, c_1).

For n = 1,

U_1 = \{c_2, c_3, c_4\} \quad (3.47)

V_1 = \{c_1\} \quad (3.48)

E_1 = \{d_e(c_1, c_2), d_e(c_1, c_3), d_e(c_1, c_4)\} \quad (3.49)

Figure 3.9: Bipartite graph G_1(A) = (U_1, V_1, E_1), showing edges between the elements of U_1 and V_1.

Table 3.6: Edge set E_1 for the bipartite graph G_1(A) = (U_1, V_1, E_1), shown in Figure 3.9.

Table 3.6 shows that (3.40) is minimized when

c_a = c_1, \quad c_b = c_3 \quad (3.50)

so that

e_{min_1} = d_e(c_1, c_3) - \epsilon = 20.52 - 14.14 = 6.38 \quad (3.51)

For n = 2,

U_2 = \{c_2, c_4\} \quad (3.52)

V_2 = \{c_1, c_3\} \quad (3.53)

E_2 = \{d_e(c_1, c_2), d_e(c_1, c_4), d_e(c_3, c_2), d_e(c_3, c_4)\} \quad (3.54)

Figure 3.10: Bipartite graph G_2(A) = (U_2, V_2, E_2), showing edges between the elements of U_2 and V_2.

Table 3.7: Edge set E_2 for the bipartite graph G_2(A) = (U_2, V_2, E_2), shown in Figure 3.10.

Table 3.7 shows that (3.40) is minimized when

c_a = c_1, \quad c_b = c_2 \quad (3.55)

so that

e_{min_2} = d_e(c_1, c_2) - \epsilon = 26.02 - 14.14 = 11.88 \quad (3.56)

For n = 3,

U_3 = \{c_4\} \quad (3.57)

V_3 = \{c_1, c_3, c_2\} \quad (3.58)

E_3 = \{d_e(c_1, c_4), d_e(c_3, c_4), d_e(c_2, c_4)\} \quad (3.59)

Figure 3.11: Bipartite graph G_3(A) = (U_3, V_3, E_3), showing edges between the elements of U_3 and V_3.

Table 3.8: Edge set E_3 for the bipartite graph G_3(A) = (U_3, V_3, E_3), shown in Figure 3.11.

Table 3.8 shows that (3.40) is minimized when

c_a = c_3, \quad c_b = c_4 \quad (3.60)

so that

e_{min_3} = d_e(c_3, c_4) - \epsilon = 31.24 - 14.14 = 17.10 \quad (3.61)

Figure 3.12: Minimum spanning tree, M_{min}^4(A) = (C, E_{min}).

The edge set of the minimum spanning tree shown in Figure 3.12 is:

E_{min} = \{e_{min_1}, e_{min_2}, e_{min_3}\} = \{6.38, 11.88, 17.10\} \quad (3.62)

The connectivity measurement at I = 4 is:

C_c^4(A) = \frac{\sigma(\{6.38, 11.88, 17.10\})}{\mu(\{6.38, 11.88, 17.10\})} = \frac{4.38}{11.79} = 0.37 \quad (3.63)
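The coefficient of variation in (3.63) can be reproduced in a few lines. This is an illustrative Python sketch, not the thesis's MATLAB implementation; it uses the population standard deviation, which matches the value 4.38 reported above.

```python
import math

def connectivity(edge_set):
    """Coefficient of variation (population std / mean) of an MST edge set."""
    mu = sum(edge_set) / len(edge_set)
    var = sum((e - mu) ** 2 for e in edge_set) / len(edge_set)
    return math.sqrt(var) / mu

e_min = [6.38, 11.88, 17.10]  # minimum spanning tree edge set from (3.62)
print(round(connectivity(e_min), 2))  # → 0.37
```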

4. Cover-Based Method KD Tree Algorithm

The cover-based method mathematically defines measurements of fractal characteristics based on the box dimension definition of the optimal cover, but because the precise placement of the cover elements that minimizes their size is difficult to determine for specific examples, a numerical implementation of the cover-based method does not typically produce the optimal cover. Therefore, the cover of a dataset produced by a numerical implementation of the cover-based method is referred to as the suboptimal cover of a dataset. Prior cover-based method research [17],[1],[16],[18] employed the Minimum Cluster Volume clustering algorithm, with initial values established by the Fuzzy C-Means algorithm, to determine the suboptimal cover of the dataset and then used scatter matrices to describe the size and shape of the cover elements. This procedure produced highly accurate estimates of fractal dimension but required lengthy run times, limiting the uses of this approach. This prompted further research to determine whether the cover-based method could obtain similar results more quickly using other data partitioning methods, rather than computationally intensive clustering algorithms. The cover-based method KD tree (KDTREE) algorithm developed in this research uses a modified version of the KD tree algorithm to generate the suboptimal cover of a dataset, which is then used to estimate the fractal dimension, lacunarity, and connectivity of fractal datasets based on the cover-based method definitions.

4.1 KD Tree Algorithm

The KD tree algorithm is a well-known space partitioning data structure used for organizing data points that can be constructed in O(N log N) time for N data points

[31]. The traditional KD tree algorithm shown in Figure 4.1 partitions the hyperspace containing a dataset by successively dividing each mother box into two daughter boxes containing equal numbers of data points. The algorithm first sorts the data points in the mother box in one of the dataset's R^K Euclidean dimensions, which are cycled for each series of partitions. The box's hyperspace is then divided through the median coordinate point along the axis of the sorting dimension, or through one of the two middle data points in the case of an even number of points where no median point exists [31]. As each pair of daughter boxes is created, they are linked with a pointer to their respective mother box.

(a) 1 iteration (b) 2 iterations (c) 3 iterations

Figure 4.1: An example of the ability of the KD tree algorithm to partition the hyperspace containing a dataset through three iterations.
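The traditional construction just described can be sketched as a short recursive Python structure (illustrative only; for simplicity the mother node here holds references to its daughters rather than the reverse pointer described above, and all names are hypothetical):

```python
class Node:
    """KD tree node: a box of points, split recursively into two daughter boxes."""
    def __init__(self, points, depth, k):
        self.points = points
        self.left = self.right = None
        if len(points) > 1:
            axis = depth % k                     # cycle the sorting dimension
            pts = sorted(points, key=lambda p: p[axis])
            mid = len(pts) // 2                  # median (a middle point if even)
            self.left = Node(pts[:mid], depth + 1, k)
            self.right = Node(pts[mid:], depth + 1, k)

root = Node([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)], depth=0, k=2)
print(len(root.left.points), len(root.right.points))  # 3 3
```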

4.2 Cover-Based Method KD Tree Algorithm Suboptimal Cover

The KDTREE algorithm developed in this research uses a modified KD tree approach that is designed to group the points in a dataset rather than partitioning the entire hyperspace that contains the dataset, as shown in Figure 4.2. Each iteration of the KDTREE algorithm divides each mother group into two daughter groups by value in one of the dataset's R^K Euclidean dimensions, which are cycled for each iteration. Each daughter group contains an equal number of points, or in the case of an odd number of points in the mother group, one of the two daughter groups is chosen to contain an extra data point. This process creates I = 2^N new daughter groups after N iterations of the KDTREE algorithm and is repeated until a daughter group containing only two data points is created.

(a) 1 iteration (b) 2 iterations (c) 3 iterations

Figure 4.2: An example of the modified KD tree algorithm's ability to partition a dataset through three iterations.
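A minimal sketch of this iterated grouping, assuming point data as coordinate tuples (names are illustrative, not the thesis's MATLAB code); after N iterations it yields I = 2^N daughter groups of near-equal size:

```python
def kdtree_groups(points, iterations, k):
    """Modified KD tree: repeatedly split every mother group into two daughter
    groups of (near-)equal point count, cycling the sorting dimension."""
    groups = [points]
    for n in range(iterations):
        axis = n % k
        nxt = []
        for g in groups:
            g = sorted(g, key=lambda p: p[axis])
            mid = len(g) // 2        # one daughter gets the extra point if odd
            nxt.extend([g[:mid], g[mid:]])
        groups = nxt
    return groups

pts = [(x, y) for x in range(4) for y in range(4)]   # 16 points
groups = kdtree_groups(pts, iterations=3, k=2)
print(len(groups), [len(g) for g in groups])         # 8 groups of 2 points each
```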

This modified KD tree approach creates daughter groups with equal numbers of data points rather than groups of equal size. Therefore, the suboptimal cover of the dataset is determined by calculating the size of the cover element required to encompass each daughter group after each iteration of the KDTREE algorithm. This is done by first calculating the length, l_i^k, of each cover element, C_i, where i ∈ {1, 2, ..., I}, in each of the dataset's R^K Euclidean dimensions, where k ∈ {1, 2, ..., K}, as the difference between the maximum and minimum coordinate values, x_i^k, of each daughter group, adding 1 to prevent a fencepost error:

l_i^k = \max(x_i^k) - \min(x_i^k) + 1 \quad (4.1)

This process produces cover elements that are minimum bounding boxes of each daughter group. The total size of each cover element is then expressed as the cover element's total magnitude, equal to the product of the cover element's length in all R^K Euclidean dimensions. For example, a grayscale image with three independent parameters (two specifying location in a plane and one grayscale intensity parameter) will create R^3 dimensional cubic cover elements whose magnitudes are equal to length × width × height. The total magnitude, v_i, of a cover element is given by:

v_i = \prod_{k=1}^{K} l_i^k \quad (4.2)

This approach for expressing cover element size is advantageous because a cover element's magnitude in all Euclidean dimensions tends to change linearly even as the length of the cover element changes unequally in each Euclidean dimension. The median cover element magnitude, v_n, of all I cover elements after N iterations is then substituted for ε, the diameter of each of the I cover elements needed to optimally cover

the dataset, in the cover-based method box dimension definition by:

\epsilon^K = v_n \;\Rightarrow\; \epsilon = v_n^{1/K} \;\Rightarrow\; \log\!\left(\frac{1}{\epsilon}\right) = \frac{1}{K}\,\log\!\left(\frac{1}{v_n}\right) \quad (4.3)

Substituting (4.3) into the cover-based definition of the box dimension yields:

D_d(A) = K \left[ \frac{\log I}{\log(1/v_n)} \right] \quad (4.4)

The box dimension of A can then be estimated as the monotonically rising slope of the least squares linear regression of the logarithm of the number of covers I, plotted versus the logarithm of the reciprocal of the median cover element magnitude v_n, multiplied by the number of Euclidean dimensions K of the dataset.

4.3 Cover-Based KD Tree Algorithm Lacunarity and Connectivity Estimates

The distances between suboptimal cover elements generated with the modified KD tree approach are then used to estimate the lacunarity and connectivity of the dataset. Once the suboptimal cover has been generated for N iterations, the lacunarity and connectivity calculations begin by generating a simple undirected graph, T^I(A) = (C, S), of the suboptimal cover in which all the cover elements are adjacent, so that each span, S_ab, connects cover elements a and b, where a ≠ b; a, b ∈ {1, 2, ..., I}; and S_ab = S_ba. Trimming this simple undirected graph into a spanning tree is a more efficient numerical method than creating a series of bipartite graphs to determine each span.

In previous research [16], the simple undirected graph was formed by measuring the distance between the centers of two hyperelliptical cover elements, then subtracting

the radius of both cover elements. This approach is not effective for measuring the distance between the R^K Euclidean dimensional hyperrectangular cover elements produced by the KDTREE algorithm because it does not typically produce spans of the minimum distance between cover elements. The minimum distance between two hyperrectangular cover elements is the distance between two vertices, or can be closely approximated by the distance to the midpoint of the side of a hyperrectangle. The 2^K vertices and 2K side midpoints of each R^K Euclidean dimensional hyperrectangular cover element, C_i, form the boundary points, υ_i^j, of the cover element, where j ∈ {1, 2, ..., 2^K + 2K}. The process the KDTREE algorithm uses to create the simple undirected graph, T^I(A), begins with calculating the boundary points for each cover element using the cover element's max(x_i^k), min(x_i^k), and mid(x_i^k) coordinate values. Figure 4.3 provides an example of the boundary points created by the KDTREE algorithm for I = 4 cover elements.

Figure 4.3: Example of the boundary points created by the KDTREE algorithm for I = 4 cover elements. The boundary points, υ_i^j, for each cover element are marked by an x and the 8 boundary points for C_2 are labeled υ_2^j.

If cover elements C_a and C_b are not in contact, the two nearest boundary points

of C_a and C_b, known as the critical boundary points of C_a and C_b, are then determined by the following procedure. The critical boundary point of C_a is the boundary point of C_a nearest to the center of C_b, given by:

\Upsilon_{ab} = \min\{\, d_e(\upsilon, c_b) \mid \upsilon \in \upsilon_a^j \,\} \quad (4.5)

where d_e is the Euclidean distance function. The critical boundary point of C_b is the boundary point of C_b nearest to Υ_ab, given by:

\Upsilon_{ba} = \min\{\, d_e(\Upsilon_{ab}, \upsilon) \mid \upsilon \in \upsilon_b^j \,\} \quad (4.6)

Figure 4.4 provides an example of the critical boundary points created by the KDTREE algorithm for I = 4 cover elements.

Figure 4.4: Example of the critical boundary points created by the KDTREE algorithm for I = 4 cover elements. The critical boundary points, Υ_ab, for each cover element are marked by an x.

Finally, the distance between cover elements C_a and C_b is the distance between the two critical boundary points, Υ_ab and Υ_ba:

S_{ab} = d_e(\Upsilon_{ab}, \Upsilon_{ba}) \quad (4.7)
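The boundary-point procedure of (4.5)–(4.7) can be sketched for axis-aligned hyperrectangles as follows (an illustrative Python sketch; function names and the min-corner/max-corner representation are assumptions, not the thesis's code):

```python
import itertools, math

def boundary_points(lo, hi):
    """The 2^K vertices and 2K side midpoints of an axis-aligned hyperrectangle
    given by its minimum corner lo and maximum corner hi."""
    k = len(lo)
    mid = [(a + b) / 2 for a, b in zip(lo, hi)]
    verts = [tuple(c) for c in itertools.product(*zip(lo, hi))]
    mids = []
    for d in range(k):
        for face in (lo[d], hi[d]):   # one midpoint per face along dimension d
            p = list(mid)
            p[d] = face
            mids.append(tuple(p))
    return verts + mids

def span(lo_a, hi_a, lo_b, hi_b):
    """Distance between the critical boundary points of two cover elements,
    following (4.5)-(4.7)."""
    cb = tuple((a + b) / 2 for a, b in zip(lo_b, hi_b))            # center of C_b
    y_ab = min(boundary_points(lo_a, hi_a), key=lambda p: math.dist(p, cb))
    y_ba = min(boundary_points(lo_b, hi_b), key=lambda p: math.dist(p, y_ab))
    return math.dist(y_ab, y_ba)

# Two separated 2D rectangles: the span is between the facing side midpoints.
print(span((0, 0), (2, 2), (5, 0), (7, 2)))  # → 3.0
```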

If the span S_ab bisects any cover element, the span is immediately trimmed from T^I(A) and is not included in the further calculations. Figure 4.5 provides an example of the simple undirected graph, T^4(A), produced by the KDTREE algorithm.

Figure 4.5: An example of the simple undirected graph, T^4(A), created by the KDTREE algorithm. The spans, S_ab, between each cover element are shown, and the span S_14 = S_41 is shown as a dashed line because it is not included in the edge set S, as it bisects C_2.

The cardinality of the edge set of T^I(A) then lies in the interval:

|S| \in \left( I, \frac{I(I-1)}{2} \right) \quad (4.8)

The summation of the edge set of T^I(A) = (C, S) is:

\hat{T}^I(A) = \sum_{k=1}^{|S|} S_k, \quad S_k \in S \quad (4.9)

Prim's algorithm is then used to convert the simple undirected graph into a maximum spanning tree, M_{max}^I(A) = (C, S_{max}), for calculating lacunarity and a minimum spanning tree, M_{min}^I(A) = (C, S_{min}), for calculating connectivity, using the following procedure:

Input: T^I(A) = (C, S)

Initialization: Ĉ_max = {C_1}, S_max = {0}; Ĉ_min = {C_1}, S_min = {0}

Repeat for I − 1 iterations:
  Choose the span S_ab with maximum value such that C_a is in Ĉ_max and C_b is not
  Add C_b to Ĉ_max and S_ab to S_max
  Choose the span S_ab with minimum value such that C_a is in Ĉ_min and C_b is not
  Add C_b to Ĉ_min and S_ab to S_min

Output: S_max, S_min

The edge sets of the maximum spanning tree, S_max, and minimum spanning tree, S_min, are comprised of edges, S_n, where n ∈ {1, 2, ..., I−1}. The maximum spanning tree is then used to define the lacunarity at I cover elements as:

\hat{L}_d^I(A) = \sum_{n=1}^{I-1} S_{max_n}, \quad M_{max}^I(A) \subseteq T^I(A) \quad (4.10)

This lacunarity measurement is normalized by the summation of the edge set of the simple undirected graph, T^I(A) = (C, S), as defined in (4.9). Normalizing the lacunarity measurement by dividing by \hat{T}^I makes it unitless and allows it to be compared across differences in dataset ranges and scales. The normalized lacunarity measurement is defined as:

L_d^I(A) = \frac{\hat{L}_d^I}{\hat{T}^I} \quad (4.11)

Finally, the KDTREE algorithm measurement of the lacunarity of a fractal dataset A is the set of these lacunarity measurements at different scales, given by:

L_d(A) = \bigcup_{I=2}^{\infty} L_d^I \quad (4.12)

The connectivity at I cover elements is the coefficient of variation of the minimum

spanning tree edge set, defined as:

C_d^I(A) = \frac{\sigma(S_{min})}{\mu(S_{min})} \quad (4.13)

where σ is the standard deviation and μ is the mean of the minimum spanning tree edge set. Finally, the KDTREE algorithm measurement of the connectivity of a fractal dataset A is the set of these connectivity measurements at different scales, given by:

C_d(A) = \bigcup_{I=2}^{\infty} C_d^I \quad (4.14)
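The Prim-style procedure above, together with the lacunarity estimate of (4.10)–(4.11) and the connectivity estimate of (4.13), can be sketched in Python as follows (an illustrative sketch on a dense symmetric span matrix, not the thesis's MATLAB code; spans bisecting a cover element are assumed to have been trimmed already):

```python
import math

def prim_edges(spans, maximize=False):
    """Prim's algorithm on a symmetric span matrix, returning the I-1 edge
    values of the maximum (or minimum) spanning tree grown from element 0."""
    I = len(spans)
    inside, edges = {0}, []
    for _ in range(I - 1):
        best = None
        for a in inside:
            for b in range(I):
                if b not in inside:
                    s = spans[a][b]
                    if best is None or (s > best[0] if maximize else s < best[0]):
                        best = (s, b)
        edges.append(best[0])
        inside.add(best[1])
    return edges

spans = [[0, 1, 2],
         [1, 0, 3],
         [2, 3, 0]]
s_min = prim_edges(spans)                 # minimum spanning tree edges
s_max = prim_edges(spans, maximize=True)  # maximum spanning tree edges

total = sum(spans[a][b] for a in range(3) for b in range(a + 1, 3))  # T-hat, (4.9)
lacunarity = sum(s_max) / total                                      # (4.10)-(4.11)
mu = sum(s_min) / len(s_min)
conn = math.sqrt(sum((e - mu) ** 2 for e in s_min) / len(s_min)) / mu  # (4.13)
print(lacunarity, round(conn, 3))  # lacunarity ≈ 0.833, connectivity ≈ 0.333
```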

5. Testing and Results

In the previous sections, the mathematical definitions for estimating fractal characteristics using the cover-based method were introduced and the numerical implementation of the cover-based method developed in this research, referred to as the cover-based method KD tree (KDTREE) algorithm, was described. In this section, the fractal dimension, lacunarity, and connectivity estimates produced by the KDTREE algorithm will be compared to the results of the numerical algorithms most commonly used to estimate fractal characteristics, and the superiority of the cover-based method will be demonstrated.

5.1 Fractal Dimension

The accuracy of the box dimension estimates produced by the KDTREE algorithm will now be compared to the box dimension estimates produced by the box-counting algorithm using Cantor sets and percolation random fractals, and by the differential box-counting (DBC) algorithm using simulated fractional Brownian surface fractals (fbms) [22]. Two variations of the box-counting algorithm [23],[5] were required for this research because there is no single implementation of the box-counting method versatile enough to measure these different types of datasets, while the cover-based method is capable of estimating the box dimension of any R^K Euclidean dimensional dataset.

Determining which data points to consider is an important factor when using a numerical algorithm to estimate box dimension. Box dimension is theoretically estimated from the monotonically rising nonzero linear slope of the cover element count

versus the reciprocal of the element size; but when a numerical algorithm estimates the box dimension of a statistically self-similar fractal dataset such as a percolation random fractal [2] or simulated fractional Brownian surface fractal [22], the slope of the corresponding plot tends to flatten out as the number of cover elements approaches the resolution of the dataset. This is clearly shown in the plots in Figures 5.1 and 5.2.

(a) Box-Counting Algorithm (b) KDTREE Algorithm

Figure 5.1: Typical log N_ε versus log(1/ε) and log I versus log(1/v_n) plots for a percolation random fractal with a theoretical fractal dimension of 2.5.

(a) DBC Algorithm (b) KDTREE Algorithm

Figure 5.2: Typical log N_ε/I versus log(1/ε) and log I versus log(1/v_n) plots for a simulated fractional Brownian surface fractal with a theoretical fractal dimension of 2.9.

Foroutan-pour et al. [26] described an optimization procedure for determining the best box size ranges for a box-counting software program, demonstrating how the

choice of data points can greatly influence the box dimension estimate. The intent of this research is to demonstrate the superior mathematical basis of the cover-based method, not to show how the KDTREE algorithm can be optimized to outperform the traditional box-counting algorithms based on the data points chosen. Therefore, this research chose to order the results of all test datasets by increasing number of cover elements and used only the points between the lower and upper quartile generated by each algorithm when estimating the box dimension for each test dataset. This approach preserves the definition of each method and allows their estimates of box dimension to be compared fairly. The data points used to estimate box dimension for each numerical algorithm are displayed in red in Figures 5.1 and 5.2.

Fit error is commonly used to measure how closely the data points produced by a numerical algorithm correspond to the least squares linear fit that provides the algorithm's fractal dimension estimate. The fit error E of points (x, y) for their fitted straight line satisfying y = cx + d is defined as [23]:

E = \frac{1}{n} \sum_{i=1}^{n} \frac{(cx_i + d - y_i)^2}{1 + c^2} \quad (5.1)

Fit error does not indicate how accurately a numerical algorithm estimates fractal dimension, but it does demonstrate how consistently a numerical algorithm generates data points at different resolutions. Higher fit error indicates that a numerical algorithm is less reliable because it produces data points that are less linear, causing its box dimension estimate to vary based on which data points were used to make the estimate. The average fit error of the box-counting algorithm for the total test set of random percolation fractals was , while the average fit error of the KDTREE algorithm was . The average fit error of the DBC algorithm for the total test set of fbms

was , while the average fit error of the KDTREE algorithm was . The fit error of the box-counting algorithm was five times higher than the fit error of the KDTREE algorithm, while the fit errors of the KDTREE and DBC algorithms were nearly identical.

These algorithms were run on an Intel Core 2 Duo 2.66 GHz platform running Mac OS X 10.6 Snow Leopard. Each algorithm was implemented in MATLAB and general run-time measurements for these algorithms were made using MATLAB's built-in tic and toc functions. The full run-time measurement results for each algorithm can be seen in Appendix C.

As explained earlier, the number of data points in a random percolation fractal dataset is dependent on the theoretical fractal dimension of the dataset, while the number of data points in a simulated fractional Brownian surface fractal remains constant with changes in theoretical fractal dimension. Because the number of iterations of the KDTREE algorithm is dependent on the number of data points in the dataset, while the number of iterations of the box-counting algorithm is primarily dependent on the size of the space containing the set, the run-time of the KDTREE algorithm for a random percolation fractal dataset varies largely with the theoretical fractal dimension of the dataset. The KDTREE algorithm produced 15 data points with an average run-time of 6.93 seconds for the random percolation fractal datasets with a theoretical fractal dimension of D_h = 2.1 and 22 data points with an average run-time of seconds for the random percolation fractal datasets with a theoretical fractal dimension of D_h = 2.9. The box-counting algorithm produced eight data points with an average run-time of 1.17 seconds for the random percolation fractal datasets with a theoretical fractal dimension of D_h = 2.1 and eight data points with an average run-time of 0.44 seconds for the

random percolation fractal datasets with a theoretical fractal dimension of D_h = 2.9. The run-time measurements are very consistent for both algorithms on the total test set of simulated fractional Brownian surface fractals, with the KDTREE algorithm producing 17 data points with an average run-time of seconds while the DBC algorithm produced nine data points with an average run-time of 0.45 seconds.

While the KDTREE algorithm is more computationally complex than both of the box-counting algorithms, the following results demonstrate that this new method makes a better upper-bound estimate of the box dimension. In addition, the cover-based method employed by the KDTREE algorithm can also be used to calculate lacunarity and connectivity. These important fractal characteristics allow the textural qualities of a dataset to be more fully described and cannot be obtained using traditional box-counting methods.

5.1.1 Cantor Sets and Percolation Random Fractals

Figure 5.3: An example of a R^2 Euclidean dimensional and a R^3 Euclidean dimensional Cantor set.

Cantor sets are a well-known type of fractal set used in many branches of

Table 5.1: Fractal dimension estimates for various Cantor sets. (Columns: Euclidean Dimensions; Theoretical FD; Box-Counting Algorithm; KDTREE Algorithm.)

mathematics whose construction is explained in [2]. Cantor sets were used in this research because their exact self-similarity allows their fractal dimension to be determined precisely by the simple similarity dimension definition. The Cantor set fractal dimension estimates presented in Table 5.1 show that the KDTREE algorithm was able to produce more accurate fractal dimension estimates than the box-counting algorithm for six of the seven Cantor sets. The KDTREE algorithm was able to exactly estimate the fractal dimension of three of the seven Cantor sets. The average experimental error of the KDTREE algorithm was 4.04 percent, while the average experimental error of the box-counting algorithm was percent.

A set of random percolation fractals with theoretical fractal dimensions ranging from 2.1 to 2.9 was created using equation (2.2) by dividing a 243 × 243 × 243 cube into N = 3 segments in each of the K = 3 Euclidean dimensions and then independently choosing to eliminate segments based on the probability p. The remaining segments were divided and eliminated through five iterations, reaching the resolution of the dataset. The resulting random percolation fractals are binary datasets which contain an increasing number of points as the fractal dimension is increased from 2.1 to 2.9. An example of a random percolation fractal with a theoretical fractal dimension of 2.3 can be seen in

Figure 2.5a. Groups of ten random percolation fractals were created for each of the nine fractal dimensions from 2.1 to 2.9, for a total test set of 90 random percolation fractals. Figure 5.4 shows the minimum, mean, and maximum values of the fractal dimension estimates produced by the box-counting and KDTREE algorithms for these groups of random percolation fractals.

Figure 5.4: Fractal dimension estimates of percolation random fractals with theoretical fractal dimensions from 2.1 to 2.9.

Table 5.2: Fractal dimension estimation results for percolation random fractals with theoretical fractal dimensions from 2.1 to 2.9. (Columns: Theoretical FD; Box-Counting Algorithm mean, error, var.; KDTREE Algorithm mean, error, var.)
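The retention probability p that targets a given theoretical dimension follows from the expected number of retained subcubes per iteration, pN^K, which gives D = K + log p / log N. This relation is assumed here for illustration, since equation (2.2) is not reproduced in this section; the sketch below solves it for p:

```python
def percolation_probability(d_target, n=3, k=3):
    """Retention probability p for a random percolation fractal with expected
    fractal dimension d_target (assumed relation D = K + log p / log N,
    i.e. p = N**(D - K))."""
    return n ** (d_target - k)

# p for the nine theoretical dimensions 2.1 ... 2.9 used in the test set
ps = {round(d / 10, 1): percolation_probability(d / 10) for d in range(21, 30)}
print(ps[2.5])  # ≈ 0.577 (each 3x3x3 subdivision keeps ~58% of subcubes)
```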

Table 5.3: Overall fractal dimension estimation results for percolation random fractals. (Columns: range, avg. error, avg. var. for the Box-Counting and KDTREE Algorithms.)

Table 5.2 shows the mean, average experimental error, and coefficient of variation for each group of ten random percolation fractals created for each of the nine fractal dimensions from 2.1 to 2.9. The average experimental error is defined as the average difference between the group's theoretical and estimated fractal dimension values. The coefficient of variation is defined as the standard deviation of the group's estimated fractal dimension values, normalized by the mean estimated fractal dimension value of the group. Table 5.3 shows the range, average experimental error, and average coefficient of variation for the total test set of 90 random percolation fractals. The fractal dimension range is defined as the difference between the estimated fractal dimension mean values of the groups of random percolation fractals with theoretical fractal dimensions of 2.1 and 2.9.

These results show that both the box-counting and KDTREE algorithms tend to overestimate the fractal dimension of random percolation fractals. The box-counting algorithm's mean fractal dimension estimates for the groups of random percolation fractals with theoretical fractal dimensions of 2.8 and 2.9 are and , respectively, which are unrealistic results for a dataset with a maximum dimension equal to the dataset's R^3 Euclidean dimensions. The KDTREE algorithm's mean fractal dimension estimate for the groups of random percolation fractals with a theoretical fractal

dimension of 2.9 is , which is more reasonable, but still unrealistic. While the box-counting algorithm is capable of estimating a slightly larger range of fractal dimension values than the KDTREE algorithm, the KDTREE algorithm's lower average experimental error and lower average coefficient of variation demonstrate that the fractal dimension estimates produced by the KDTREE algorithm are more accurate and precise than those produced by the box-counting algorithm.

5.1.2 Simulated Fractional Brownian Surface Fractals

A standard set of virtual simulated fractional Brownian surface fractals with theoretical fractal dimensions ranging from 2.1 to 2.9 was produced using Musgrave's texture generation program [28]. This texture generation program also controls the theoretical Musgrave lacunarity (λ) previously described in Section 2.4, which can further vary the appearance of these fractals with minimal effect on the theoretical fractal dimension. For Musgrave lacunarities of 2, 4, 7, and 10, ten test surfaces were created for each of the nine fractal dimensions from 2.1 to 2.9, for a total test set of 360 virtual fbms. In this research, the word virtual is used to indicate a method of creating and evaluating fbms generated in their nonscaled floating point form. With the use of these virtual images, instead of grayscale images, it is believed that any effects that might change the theoretical fractal dimension in the scaling and discretizing process used to create a grayscale image will be minimized. Figure 5.5 shows the minimum, mean, and maximum values of the fractal dimension estimates produced by the DBC and KDTREE algorithms for the total test set of fbms.
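The fit-error measure used throughout these comparisons, in the spirit of (5.1), is the mean squared perpendicular distance of the data points from the fitted line y = cx + d. A minimal sketch (illustrative names, not the thesis's MATLAB code):

```python
def fit_error(xs, ys, c, d):
    """Mean squared perpendicular distance of points from the line y = c*x + d.
    The residual (c*x + d - y) is scaled by 1/(1 + c^2) to convert vertical
    squared distance into perpendicular squared distance."""
    n = len(xs)
    return sum((c * x + d - y) ** 2 for x, y in zip(xs, ys)) / (n * (1 + c ** 2))

xs = [0.0, 1.0, 2.0, 3.0]
print(fit_error(xs, [1.0, 3.0, 5.0, 7.0], c=2.0, d=1.0))  # points on the line → 0.0
```

A fit error of zero means the algorithm's log-log data points are perfectly collinear, so the dimension estimate would not depend on which quartile of points was kept.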

Table 5.4: Fractal dimension estimation results for simulated fractional Brownian surface fractals with theoretical fractal dimensions from 2.1 to 2.9 and varying Musgrave lacunarity. (Columns: Musgrave Lacunarity; Theoretical FD; DBC Algorithm mean, error, var.; KDTREE Algorithm mean, error, var.)

Figure 5.5: Fractal dimension estimates of simulated fractional Brownian surface fractals with theoretical fractal dimensions from 2.1 to 2.9.

Table 5.4 shows the mean, average experimental error, and coefficient of variation for each group of ten fbms created for each of the nine fractal dimensions from 2.1 to 2.9 with Musgrave lacunarities of 2, 4, 7, and 10. Table 5.5 shows the range, average experimental error, and average normalized standard deviation for the fbms with varying Musgrave lacunarities, and Table 5.6 shows the range, average experimental error, and average coefficient of variation for the total test set of 360 fbms.

Table 5.5: Fractal dimension estimation results for simulated fractional Brownian surface fractals with varying Musgrave lacunarity. (Columns: Musgrave Lacunarity; DBC Algorithm range, error, var.; KDTREE Algorithm range, error, var.)

These results show that both the DBC and KDTREE algorithms are unable to estimate the entire range of fractal dimensions from 2.1 to 2.9 for simulated fractional

Table 5.6: Overall fractal dimension estimation results for simulated fractional Brownian surface fractals. (Columns: range, avg. error, avg. var. for the DBC and KDTREE Algorithms.)

Brownian surface fractals. As the Musgrave lacunarity of the datasets increases, the range of the fractal dimension estimation increases for both algorithms. Overall, the KDTREE algorithm is capable of a 12.4 percent larger range in fractal dimension values. The lower average error of the KDTREE algorithm is a result of the algorithm's larger range producing fractal dimension estimates which are closer to the theoretical fractal dimension of the dataset. The average coefficient of variation results show that the KDTREE algorithm has a slightly higher precision than the DBC algorithm.

5.1.3 Medical Image Analysis Application

One well-known application of fractal dimension estimation is in medical image analysis [32],[12],[29],[33]. Figure 5.6a shows a grayscale DICOM MRI lateral lumbar spine image. This image was parsed into 400 sections and the KDTREE algorithm was then used to estimate the fractal dimension of each section. The KDTREE fractal dimension results are shown in Figure 5.6c, where red color intensity indicates each section's relative estimated fractal dimension. These results were then overlaid on the DICOM MRI lateral lumbar spine image in Figure 5.6b.

This DICOM MRI lateral lumbar spine image was chosen because it contains a cancerous tumor present in the body of the first lumbar vertebra, centered at approximately (800, 550). When the KDTREE fractal dimension results are isolated for

(a) DICOM MRI lateral lumbar spine image. (b) KDTREE fractal dimension results overlaid on the DICOM MRI lateral lumbar spine image. (c) KDTREE fractal dimension results. (d) KDTREE fractal dimension results for the lumbar vertebrae.

Figure 5.6: DICOM MRI lateral lumbar spine image with KDTREE fractal dimension results.

the first four lumbar vertebrae in Figure 5.6d, the first lumbar vertebra containing the cancerous tumor stands out with lighter regions that indicate a reduced fractal dimension compared to the other three vertebrae. This example demonstrates how the KDTREE algorithm could be used to detect cancer and other abnormalities in grayscale DICOM medical images by highlighting areas where fractal dimension varies from the surrounding tissue. This approach is promising for large medical image datasets, but more study is needed to identify the lower limit on region size below which the limited number of data points in each region causes the KDTREE algorithm to become inaccurate.

5.2 Lacunarity and Connectivity

The accuracy and effectiveness of the lacunarity and connectivity estimates produced by the cover-based method KD tree (KDTREE) algorithm will now be compared to the lacunarity results of the traditional gliding box (GB) algorithm using simple point-set distributions and Cantor sets. These results will demonstrate the superiority of the cover-based method approach of using separate lacunarity and connectivity measurements to characterize the gap size and gap distribution in a fractal dataset, compared to the GB algorithm lacunarity estimate based on mass probability distributions.

Figure 5.7 shows the simple point-set distributions used to compare the lacunarity estimates of the KDTREE and GB algorithms. As explained in Section 2.4, lacunarity is typically expressed as the set of measurements at different scales because fractal sets that are heterogeneous at small scales can be homogeneous when examined at larger scales, or vice versa. These simple sets were constructed to demonstrate how these two algorithms react to changes in a set's mass, orientation, and gap size at a single specific scale. The results of the KDTREE algorithm at a scale of I = 9, L_d^9(L), and the GB algorithm at a scale of ε = 4 are shown in Table 5.7.

Figure 5.7: Simple point-set distributions constructed to demonstrate variations in lacunarity estimates produced by the KDTREE and GB algorithms.

Table 5.7: Lacunarity measurements of the simple point-set distributions shown in Figure 5.7.

Set Name | Gliding Box Algorithm | KDTREE Lacunarity | KDTREE Connectivity
L1 | - | - | 0
L2 | - | - | 0
L3 | - | - | 0
L4 | - | - | -
L5 | - | - | -
L6 | - | - | -
L7 | - | - | 0

The first group of sets, L1–L3, shows how the algorithms respond to changes in mass: an interior point is added in L2 and an interior point is removed in L3. This change to the number of internal data points does not affect the basic gap size between the nine clusters of data points seen in sets L1–L3. As a result, the KDTREE algorithm measures a constant lacunarity for sets L1–L3, while the results of the GB algorithm show a decrease in lacunarity when a data point is added in set L2 and an increase in lacunarity when a data point is removed from set L3. This result highlights

how the GB algorithm's definition of lacunarity, based on the mass distribution, makes it more sensitive to changes in mass than to the gaps that lacunarity is intended to measure. The second group of datasets, L4–L6, shows that the results of both algorithms are unaffected when a heterogeneous dataset is rotated, reflected, or translated in space. The final comparison of these sets shows an overall increase in gap size, so that the gap size of set L7 is larger than the gap size of sets L4–L6, which is larger than the gap size of sets L1–L3. As the clusters of data points become tighter between L1, L4, and L7, the gaps in the sets become larger and the lacunarity increases. These results show that both algorithms recognize an increase in lacunarity between sets L1, L4, and L7. Table 5.7 also includes the KDTREE algorithm connectivity estimates for the simple point-set distributions shown in Figure 5.7. These results demonstrate how the cover-based method connectivity measurement is an independent characteristic, complementary to lacunarity, which quantifies the distribution of a dataset at a specific scale. The KDTREE algorithm measures a connectivity of zero for sets L1–L3 and L7, indicating that each of these sets is distributed evenly at this scale. The KDTREE algorithm measures a small nonzero connectivity for sets L4–L6, indicating that these sets are not evenly distributed but contain some clusters of data points that are slightly more closely grouped than others. The consistent results for sets L4–L6 also demonstrate that connectivity measurements are not affected when a dataset is shifted in space. Figure 5.8 shows the simple point-set distributions used to compare the connectivity estimates of the KDTREE algorithm to the results of the GB algorithm. These simple sets were constructed to demonstrate how these two algorithms react to changes in a set's distribution while maintaining a large gap size at a specific scale. The results

of the KDTREE algorithm at a scale of I = 16, C_d^16(C), and the GB algorithm at a scale of ε = 4 are shown in Table 5.8.

Figure 5.8: Simple point-set distributions constructed to demonstrate variations in the connectivity estimates produced by the KDTREE algorithm and the lacunarity estimates produced by the GB algorithm.

Table 5.8: Measurements of the simple point-set distributions shown in Figure 5.8.

Set Name | Gliding Box Algorithm | KDTREE Lacunarity | KDTREE Connectivity
C1 | - | - | -
C2 | - | - | -
C3 | - | - | -
C4 | - | - | -

The results in Table 5.8 show that the KDTREE algorithm connectivity estimates decrease from set C1 to set C4 as the sets become less clustered and more evenly distributed. It is difficult to produce sets with different distributions and exactly the same gap size; this leads to the KDTREE algorithm measuring an increase in lacunarity and the GB algorithm measuring a decrease in lacunarity from set C1 to set C4. The results in Tables 5.7 and 5.8 demonstrate how the KDTREE algorithm

lacunarity and connectivity measurements work together to quantify different characteristics of a dataset, and how the nature of the GB algorithm causes it to respond to changes in both lacunarity and connectivity. In Table 5.7, the GB algorithm mimics the KDTREE algorithm lacunarity estimates by increasing in value as the gap size increases between sets L1, L4, and L7. In Table 5.8, the GB algorithm mimics the KDTREE algorithm connectivity estimates by decreasing in value as the sets become less clustered and more evenly distributed from set C1 to set C4.

Figure 5.9: Group 1 results. (a) the Group 1 Cantor sets, which share the same fractal similarity dimension D_s; (b) L_GB; (c) L_d.

Next, the KDTREE and GB algorithms' results for three different groups of

Cantor sets are presented as plots of the measurement value versus the measurement scale. The nature of these two algorithms dictates that the KDTREE algorithm lacunarity and connectivity measurements are plotted versus increasing box count, while the GB algorithm lacunarity measurements are plotted versus increasing box size. Because box count and box size are inversely related, the final data points produced by the KDTREE algorithm and the initial data points produced by the GB algorithm are used to present the results of these algorithms for an equal number of data points across a similar range of scales of the datasets. This also makes it difficult to compare the run-times of these two algorithms, as the calculations of the KDTREE algorithm increase in complexity while the calculations of the GB algorithm decrease in complexity with each iteration. Overall, the KDTREE algorithm lacunarity and connectivity measurements require a run-time similar to that of the GB algorithm lacunarity measurement, both of which are significantly longer than the run-times required to produce measurements of fractal dimension. Cantor sets were used to test the accuracy and effectiveness of the results of these algorithms because Cantor sets can be created with specific fractal similarity dimensions and because they are exactly self-similar, meaning the proportions of their gap sizes are maintained at all scales. Figure 5.9 shows the first group of Cantor sets, which all share the same fractal similarity dimension. The Cantor sets in this group contain totally internal gaps that increase in size as the removed sections of the Cantor sets become more centralized from Set 1 to Set 5. The Cantor sets in this group were created to be totally connected, meaning there is no portion of the dataset completely separated from the remainder of the set at any level, down to the resolution of the dataset. The cover-based method connectivity measurement does not exist for

these totally connected sets; therefore, lacunarity is the only fractal characteristic that distinguishes between the sets in Group 1. The lacunarity results for Group 1 show that both algorithms were able to measure the increase in interior gap size from Set 1 to Set 5.

Figure 5.10: Group 2 results. (a) the Group 2 Cantor sets, which share the same fractal similarity dimension D_s; (b) L_GB; (c) L_d; (d) C_d.

Figure 5.10 shows the second group of Cantor sets. Each of these sets has the same fractal similarity dimension and the same length with the same number of points, but the sets vary in their distribution and number of self-similar levels. Set 1 was created with four clusters and five self-similar levels, Set 2 was created with 16

clusters and two self-similar levels, and Set 3 was created with 64 clusters and one self-similar level.

Figure 5.11: Group 3 results. (a) the Group 3 Cantor sets, which share the same fractal similarity dimension D_s; (b) L_GB; (c) L_d; (d) C_d.

The lacunarity results for Group 2 show that both algorithms measured a decrease in gap size from Set 1 to Set 3. The KDTREE algorithm connectivity results for Group 2 provide important additional information about the distribution of these Cantor sets. The KDTREE algorithm connectivity results for Set 1 show that it is the most clustered, and the shape of the results plot shows how the connectivity changes at different scales, creating three peaks as the algorithm measures the distribution across three of the dataset's self-similar levels. The KDTREE algorithm connectivity results

for Set 2 show that it is less clustered than Set 1, and the single peak in the results plot shows that the KDTREE algorithm has measured the connectivity across one of the dataset's self-similar levels. The KDTREE algorithm connectivity results for Set 3 show that it is the least clustered and most evenly distributed, with a connectivity of zero for the first six iterations of the KDTREE algorithm. The shape of the results plot for Set 3 shows that after N = 8 iterations, the KDTREE algorithm has not completely measured the connectivity across the set's first self-similar level. Figure 5.11 shows the third group of Cantor sets, which all share the same fractal similarity dimension. These Cantor sets are actually versions of the same set with the same distribution and proportional gap size, but at increasingly larger scales. The results for Group 3 in Figure 5.11 show that the GB algorithm actually measures Set 1 to have the largest gap size and Set 3 to have the smallest gap size, because the GB algorithm includes the gaps at the edges of the sets in its lacunarity calculations. The KDTREE algorithm measures the lacunarity and connectivity to be nearly identical for all three sets in Group 3, demonstrating the ability of the KDTREE algorithm to compare the relative gap size and distribution of datasets of different sizes.

Medical Image Analysis Application

Lacunarity analysis is recognized as a useful tool in medical image processing, capable of characterizing skin lesions [34], [32] and assessing osteoporosis in vertebral trabecular bone [35], [36]. This section presents an example of how the KDTREE algorithm lacunarity and connectivity estimates could be used to classify the microcalcifications that are commonly detected in mammograms. Ductal carcinoma-in-situ (DCIS) constitutes a significant percentage of all reported breast

cancers, and approximately 95 percent of all DCIS cases are diagnosed after microcalcifications are found in the patient's mammogram [37]. Microcalcifications are tiny mineral deposits that can occur in the mammary gland; they are typically produced by tiny benign cysts but can also be caused by the calcification of cellular debris produced by cancer cells [37]. When microcalcifications are detected in a mammogram, a radiologist analyzes their morphology, distribution, and change over time to determine whether the calcifications are likely benign or malignant and whether they require further investigatory techniques such as a biopsy or additional screenings. The BI-RADS atlas gives the following descriptions for the distribution of microcalcifications [38]:

Scattered: diffuse calcifications or multiple similar-appearing clusters of calcifications throughout the whole breast.

Regional: scattered in a larger volume (> 2 cc) of breast tissue and not in the expected ductal distribution.

Clustered: at least 5 calcifications occupy a small volume of tissue (< 1 cc).

Linear: calcifications arrayed in a line, which suggests deposits in a duct.

Segmental: calcium deposits in ducts and branches of a segment or lobe.

Scattered and regional distributions are typically seen in benign cases and are of minimal concern. Clustered distributions are seen in both benign and malignant cases and are of intermediate concern. A single calcification cluster favors a malignant abnormality, while multiple clusters scattered throughout the breast favor a benign abnormality. Linear and segmental distributions are of the greatest concern because

they are typically created when DCIS cells fill the entire duct and its branches with calcifications. Calcifications with the highest probability of malignancy exhibit a pleomorphic (multiple shapes), irregular morphology and a fine, thin, linear or curvilinear distribution that is usually greater than 0.5 mm in width and may be discontinuous.

Figure 5.12: Benign case. (a) original mammogram image; (b) microcalcifications; (c) binary dataset.

These subtle differences in the distributions of microcalcifications that can indicate the presence of DCIS can be quantified by the KDTREE algorithm lacunarity and connectivity measurements. This is demonstrated by the KDTREE algorithm results for two digitized mammogram images containing microcalcifications associated with a benign and a malignant abnormality, taken from The Mammographic Image Analysis Society Digital Mammogram Database [39]. The digital mammogram images in this database have been clipped or padded to 1024 x 1024 pixels and are stored in portable graymap format with a bit depth of eight bits. The database provides the location and size of the microcalcifications present in each image, along with the eventual diagnosis of the abnormality (benign or malignant) associated with each microcalcification pattern. Figure 5.12a shows the original mammogram image that contains microcalcifications associated with a benign abnormality, and Figure 5.12b shows an enlarged image of the microcalcification pattern. Using the location and size of the microcalcifications, a binary dataset of the microcalcifications' distribution was created by thresholding the image. This process eliminated the surrounding fibrous tissue, leaving just the D_t = 2 microcalcification pattern contained in the original mammogram image. The binary dataset of the microcalcifications associated with the benign abnormality is shown in Figure 5.12c. Figure 5.13a shows the original mammogram image that contains microcalcifications associated with a malignant abnormality, and Figure 5.13b shows an enlarged image of the microcalcification pattern. The binary dataset of the microcalcification pattern associated with the malignant abnormality is shown in Figure 5.13c. The KDTREE fractal dimension measurements of these two binary datasets are nearly identical for the benign and malignant cases. This means that fractal dimension alone cannot differentiate between these microcalcifications. The KDTREE algorithm lacunarity and connectivity measurement results

for these two binary datasets are shown in Figure 5.14.

Figure 5.13: Malignant case. (a) original mammogram image; (b) microcalcifications; (c) binary dataset.

The results in Figure 5.14a show that the binary dataset associated with the benign abnormality has the higher lacunarity. The results in Figure 5.14b show that the benign case produced two connectivity measurements, while the malignant case produced only one connectivity measurement, which was larger than the benign case connectivity measurement. The larger lacunarity and lower connectivity results of the benign case are con-
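The preprocessing pipeline described above, thresholding a grayscale image into a binary dataset and then estimating a fractal dimension, can be sketched using the traditional box-counting method (the baseline the thesis compares against; the KDTREE cover-based algorithm itself is not reproduced here). The random image, the 0.5 threshold, and the box sizes are illustrative assumptions:

```python
import numpy as np

def box_counting_dimension(binary, sizes=(1, 2, 4, 8, 16)):
    """Traditional box-counting fractal dimension of a 2D binary array.

    For each box size s, count the s-by-s grid boxes containing at
    least one occupied cell; the slope of log(count) versus log(1/s)
    estimates the fractal dimension.
    """
    counts = []
    for s in sizes:
        h = -(-binary.shape[0] // s) * s  # pad up to a multiple of s
        w = -(-binary.shape[1] // s) * s
        padded = np.zeros((h, w), dtype=bool)
        padded[:binary.shape[0], :binary.shape[1]] = binary
        occupied = padded.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(occupied.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes, dtype=float)),
                          np.log(np.asarray(counts, dtype=float)), 1)
    return float(slope)

# Threshold a grayscale image into a binary dataset, then estimate D.
# The image and the 0.5 threshold stand in for the mammogram
# preprocessing described in the text.
gray = np.random.default_rng(0).random((64, 64))
binary = gray > 0.5
print(box_counting_dimension(binary))
```

A completely filled region recovers a dimension of 2 (the topological dimension D_t = 2 mentioned above), while a sparse microcalcification pattern yields a lower value; as the text notes, two patterns can share nearly the same dimension yet differ in lacunarity and connectivity.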


More information

Self-similar space-filling packings in three dimensions

Self-similar space-filling packings in three dimensions Self-similar space-filling packings in three dimensions Reza Mahmoodi Baram, Hans J. Herrmann December 11, 2003 Institute for Computational Physics, University of Stuttgart, Pfaffenwaldring 27, 70569 Stuttgart,

More information

2 Geometry Solutions

2 Geometry Solutions 2 Geometry Solutions jacques@ucsd.edu Here is give problems and solutions in increasing order of difficulty. 2.1 Easier problems Problem 1. What is the minimum number of hyperplanar slices to make a d-dimensional

More information

Knowledge libraries and information space

Knowledge libraries and information space University of Wollongong Research Online University of Wollongong Thesis Collection 1954-2016 University of Wollongong Thesis Collections 2009 Knowledge libraries and information space Eric Rayner University

More information

7. Stochastic Fractals

7. Stochastic Fractals Stochastic Fractals Christoph Traxler Fractals-Stochastic 1 Stochastic Fractals Simulation of Brownian motion Modelling of natural phenomena, like terrains, clouds, waves,... Modelling of microstructures,

More information

Filling Space with Random Line Segments

Filling Space with Random Line Segments Filling Space with Random Line Segments John Shier Abstract. The use of a nonintersecting random search algorithm with objects having zero width ("measure zero") is explored. The line length in the units

More information

GTPS Curriculum Mathematics Grade 8

GTPS Curriculum Mathematics Grade 8 4.2.8.B2 Use iterative procedures to generate geometric patterns: Fractals (e.g., the Koch Snowflake); Self-similarity; Construction of initial stages; Patterns in successive stages (e.g., number of triangles

More information

Lecture Tessellations, fractals, projection. Amit Zoran. Advanced Topics in Digital Design

Lecture Tessellations, fractals, projection. Amit Zoran. Advanced Topics in Digital Design Lecture Tessellations, fractals, projection Amit Zoran Advanced Topics in Digital Design 67682 The Rachel and Selim Benin School of Computer Science and Engineering The Hebrew University of Jerusalem,

More information

Application of fuzzy set theory in image analysis. Nataša Sladoje Centre for Image Analysis

Application of fuzzy set theory in image analysis. Nataša Sladoje Centre for Image Analysis Application of fuzzy set theory in image analysis Nataša Sladoje Centre for Image Analysis Our topics for today Crisp vs fuzzy Fuzzy sets and fuzzy membership functions Fuzzy set operators Approximate

More information

A Course in Machine Learning

A Course in Machine Learning A Course in Machine Learning Hal Daumé III 13 UNSUPERVISED LEARNING If you have access to labeled training data, you know what to do. This is the supervised setting, in which you have a teacher telling

More information

Agile Mind Mathematics 6 Scope and Sequence, Indiana Academic Standards for Mathematics

Agile Mind Mathematics 6 Scope and Sequence, Indiana Academic Standards for Mathematics In the three years prior Grade 6, students acquired a strong foundation in numbers and operations, geometry, measurement, and data. Students are fluent in multiplication of multi-digit whole numbers and

More information

Geometric Considerations for Distribution of Sensors in Ad-hoc Sensor Networks

Geometric Considerations for Distribution of Sensors in Ad-hoc Sensor Networks Geometric Considerations for Distribution of Sensors in Ad-hoc Sensor Networks Ted Brown, Deniz Sarioz, Amotz Bar-Noy, Tom LaPorta, Dinesh Verma, Matthew Johnson, Hosam Rowaihy November 20, 2006 1 Introduction

More information

Mapping Common Core State Standard Clusters and. Ohio Grade Level Indicator. Grade 5 Mathematics

Mapping Common Core State Standard Clusters and. Ohio Grade Level Indicator. Grade 5 Mathematics Mapping Common Core State Clusters and Ohio s Grade Level Indicators: Grade 5 Mathematics Operations and Algebraic Thinking: Write and interpret numerical expressions. Operations and Algebraic Thinking:

More information

Algorithms for Grid Graphs in the MapReduce Model

Algorithms for Grid Graphs in the MapReduce Model University of Nebraska - Lincoln DigitalCommons@University of Nebraska - Lincoln Computer Science and Engineering: Theses, Dissertations, and Student Research Computer Science and Engineering, Department

More information

Problem definition Image acquisition Image segmentation Connected component analysis. Machine vision systems - 1

Problem definition Image acquisition Image segmentation Connected component analysis. Machine vision systems - 1 Machine vision systems Problem definition Image acquisition Image segmentation Connected component analysis Machine vision systems - 1 Problem definition Design a vision system to see a flat world Page

More information

Space Filling Curves and Hierarchical Basis. Klaus Speer

Space Filling Curves and Hierarchical Basis. Klaus Speer Space Filling Curves and Hierarchical Basis Klaus Speer Abstract Real world phenomena can be best described using differential equations. After linearisation we have to deal with huge linear systems of

More information

Columbus State Community College Mathematics Department Public Syllabus. Course and Number: MATH 1172 Engineering Mathematics A

Columbus State Community College Mathematics Department Public Syllabus. Course and Number: MATH 1172 Engineering Mathematics A Columbus State Community College Mathematics Department Public Syllabus Course and Number: MATH 1172 Engineering Mathematics A CREDITS: 5 CLASS HOURS PER WEEK: 5 PREREQUISITES: MATH 1151 with a C or higher

More information

Adaptive-Mesh-Refinement Pattern

Adaptive-Mesh-Refinement Pattern Adaptive-Mesh-Refinement Pattern I. Problem Data-parallelism is exposed on a geometric mesh structure (either irregular or regular), where each point iteratively communicates with nearby neighboring points

More information

Los Angeles Unified School District. Mathematics Grade 6

Los Angeles Unified School District. Mathematics Grade 6 Mathematics Grade GRADE MATHEMATICS STANDARDS Number Sense 9.* Compare and order positive and negative fractions, decimals, and mixed numbers and place them on a number line..* Interpret and use ratios

More information

THE preceding chapters were all devoted to the analysis of images and signals which

THE preceding chapters were all devoted to the analysis of images and signals which Chapter 5 Segmentation of Color, Texture, and Orientation Images THE preceding chapters were all devoted to the analysis of images and signals which take values in IR. It is often necessary, however, to

More information

Bayesian Spherical Wavelet Shrinkage: Applications to Shape Analysis

Bayesian Spherical Wavelet Shrinkage: Applications to Shape Analysis Bayesian Spherical Wavelet Shrinkage: Applications to Shape Analysis Xavier Le Faucheur a, Brani Vidakovic b and Allen Tannenbaum a a School of Electrical and Computer Engineering, b Department of Biomedical

More information

5 Mathematics Curriculum. Module Overview... i. Topic A: Concepts of Volume... 5.A.1

5 Mathematics Curriculum. Module Overview... i. Topic A: Concepts of Volume... 5.A.1 5 Mathematics Curriculum G R A D E Table of Contents GRADE 5 MODULE 5 Addition and Multiplication with Volume and Area GRADE 5 MODULE 5 Module Overview... i Topic A: Concepts of Volume... 5.A.1 Topic B:

More information

The Space of Closed Subsets of a Convergent Sequence

The Space of Closed Subsets of a Convergent Sequence The Space of Closed Subsets of a Convergent Sequence by Ashley Reiter and Harold Reiter Many topological spaces are simply sets of points(atoms) endowed with a topology Some spaces, however, have elements

More information

Constrained Diffusion Limited Aggregation in 3 Dimensions

Constrained Diffusion Limited Aggregation in 3 Dimensions Constrained Diffusion Limited Aggregation in 3 Dimensions Paul Bourke Swinburne University of Technology P. O. Box 218, Hawthorn Melbourne, Vic 3122, Australia. Email: pdb@swin.edu.au Abstract Diffusion

More information

Mathematics - Grade 7: Introduction Math 7

Mathematics - Grade 7: Introduction Math 7 Mathematics - Grade 7: Introduction Math 7 In Grade 7, instructional time should focus on four critical areas: (1) developing understanding of and applying proportional relationships; (2) developing understanding

More information

Infinite Geometry supports the teaching of the Common Core State Standards listed below.

Infinite Geometry supports the teaching of the Common Core State Standards listed below. Infinite Geometry Kuta Software LLC Common Core Alignment Software version 2.05 Last revised July 2015 Infinite Geometry supports the teaching of the Common Core State Standards listed below. High School

More information

DISCRETE DOMAIN REPRESENTATION FOR SHAPE CONCEPTUALIZATION

DISCRETE DOMAIN REPRESENTATION FOR SHAPE CONCEPTUALIZATION DISCRETE DOMAIN REPRESENTATION FOR SHAPE CONCEPTUALIZATION Zoltán Rusák, Imre Horváth, György Kuczogi, Joris S.M. Vergeest, Johan Jansson Department of Design Engineering Delft University of Technology

More information

correlated to the Michigan High School Mathematics Content Expectations

correlated to the Michigan High School Mathematics Content Expectations correlated to the Michigan High School Mathematics Content Expectations McDougal Littell Algebra 1 Geometry Algebra 2 2007 correlated to the STRAND 1: QUANTITATIVE LITERACY AND LOGIC (L) STANDARD L1: REASONING

More information

Computing connectedness: disconnectedness and discreteness

Computing connectedness: disconnectedness and discreteness Physica D 139 (2000) 276 300 Computing connectedness: disconnectedness and discreteness V. Robins a,, J.D. Meiss a, E. Bradley b a Department of Applied Mathematics, University of Colorado, Boulder, CO

More information

Coarse-to-fine image registration

Coarse-to-fine image registration Today we will look at a few important topics in scale space in computer vision, in particular, coarseto-fine approaches, and the SIFT feature descriptor. I will present only the main ideas here to give

More information

Agile Mind Mathematics 6 Scope & Sequence for Common Core State Standards, DRAFT

Agile Mind Mathematics 6 Scope & Sequence for Common Core State Standards, DRAFT Agile Mind Mathematics 6 Scope & Sequence for, 2013-2014 DRAFT FOR GIVEN EXPONENTIAL SITUATIONS: GROWTH AND DECAY: Engaging problem-solving situations and effective strategies including appropriate use

More information

Birkdale High School - Higher Scheme of Work

Birkdale High School - Higher Scheme of Work Birkdale High School - Higher Scheme of Work Module 1 - Integers and Decimals Understand and order integers (assumed) Use brackets and hierarchy of operations (BODMAS) Add, subtract, multiply and divide

More information

APS Sixth Grade Math District Benchmark Assessment NM Math Standards Alignment

APS Sixth Grade Math District Benchmark Assessment NM Math Standards Alignment SIXTH GRADE NM STANDARDS Strand: NUMBER AND OPERATIONS Standard: Students will understand numerical concepts and mathematical operations. 5-8 Benchmark N.: Understand numbers, ways of representing numbers,

More information

Beyond Competent (In addition to C)

Beyond Competent (In addition to C) Grade 6 Math Length of Class: School Year Program/Text Used: Everyday Math Competency 1: Ratios and Proportional Relationships - Students will demonstrate the ability to understand ratios and proportional

More information

Middle School Math Course 3 Correlation of the ALEKS course Middle School Math 3 to the Illinois Assessment Framework for Grade 8

Middle School Math Course 3 Correlation of the ALEKS course Middle School Math 3 to the Illinois Assessment Framework for Grade 8 Middle School Math Course 3 Correlation of the ALEKS course Middle School Math 3 to the Illinois Assessment Framework for Grade 8 State Goal 6: Number Sense 6.8.01: 6.8.02: 6.8.03: 6.8.04: 6.8.05: = ALEKS

More information

7 Fractions. Number Sense and Numeration Measurement Geometry and Spatial Sense Patterning and Algebra Data Management and Probability

7 Fractions. Number Sense and Numeration Measurement Geometry and Spatial Sense Patterning and Algebra Data Management and Probability 7 Fractions GRADE 7 FRACTIONS continue to develop proficiency by using fractions in mental strategies and in selecting and justifying use; develop proficiency in adding and subtracting simple fractions;

More information

Integers & Absolute Value Properties of Addition Add Integers Subtract Integers. Add & Subtract Like Fractions Add & Subtract Unlike Fractions

Integers & Absolute Value Properties of Addition Add Integers Subtract Integers. Add & Subtract Like Fractions Add & Subtract Unlike Fractions Unit 1: Rational Numbers & Exponents M07.A-N & M08.A-N, M08.B-E Essential Questions Standards Content Skills Vocabulary What happens when you add, subtract, multiply and divide integers? What happens when

More information

Middle School Math Course 3

Middle School Math Course 3 Middle School Math Course 3 Correlation of the ALEKS course Middle School Math Course 3 to the Texas Essential Knowledge and Skills (TEKS) for Mathematics Grade 8 (2012) (1) Mathematical process standards.

More information

Scientific Calculation and Visualization

Scientific Calculation and Visualization Scientific Calculation and Visualization Topic Iteration Method for Fractal 2 Classical Electrodynamics Contents A First Look at Quantum Physics. Fractals.2 History of Fractal.3 Iteration Method for Fractal.4

More information

morphology on binary images

morphology on binary images morphology on binary images Ole-Johan Skrede 10.05.2017 INF2310 - Digital Image Processing Department of Informatics The Faculty of Mathematics and Natural Sciences University of Oslo After original slides

More information

Mathematics K-8 Content Standards

Mathematics K-8 Content Standards Mathematics K-8 Content Standards Kindergarten K.1 Number and Operations and Algebra: Represent, compare, and order whole numbers, and join and separate sets. K.1.1 Read and write whole numbers to 10.

More information

CSE 252B: Computer Vision II

CSE 252B: Computer Vision II CSE 252B: Computer Vision II Lecturer: Serge Belongie Scribes: Jeremy Pollock and Neil Alldrin LECTURE 14 Robust Feature Matching 14.1. Introduction Last lecture we learned how to find interest points

More information

Smarter Balanced Vocabulary (from the SBAC test/item specifications)

Smarter Balanced Vocabulary (from the SBAC test/item specifications) Example: Smarter Balanced Vocabulary (from the SBAC test/item specifications) Notes: Most terms area used in multiple grade levels. You should look at your grade level and all of the previous grade levels.

More information

= f (a, b) + (hf x + kf y ) (a,b) +

= f (a, b) + (hf x + kf y ) (a,b) + Chapter 14 Multiple Integrals 1 Double Integrals, Iterated Integrals, Cross-sections 2 Double Integrals over more general regions, Definition, Evaluation of Double Integrals, Properties of Double Integrals

More information

Some geometries to describe nature

Some geometries to describe nature Some geometries to describe nature Christiane Rousseau Since ancient times, the development of mathematics has been inspired, at least in part, by the need to provide models in other sciences, and that

More information