# Chapter 5. Segmentation of Color, Texture, and Orientation Images


The preceding chapters were all devoted to the analysis of images and signals which take values in $\mathbb{R}$. It is often necessary, however, to process vector-valued images, where each pixel value is a vector belonging to $\mathbb{R}^M$, with $M \ge 1$. The entries of this vector could correspond to the red, green, and blue intensity values in color images [74], to data gathered from several sensors [33] or imaging modalities [68], or to the entries of a feature vector obtained from analyzing a texture image [10]. In the next section, we generalize our SIDEs to vector-valued images and argue that most properties of scalar-valued SIDEs still apply. We then give several examples of segmentation of color and texture images. Section 5.2 treats images whose every pixel belongs to the circle $S^1$. Such images arise in the analysis of orientation [48] and optical flow [].

## 5.1 Vector-Valued Images

We use one of the properties discussed in Chapter 3 in order to generalize our evolution equations to vector-valued images. We recall that a SIDE is the gradient descent for the global energy (2.8), which we re-write as follows:

$$E = \sum_{i,j \text{ are neighbors}} E(\|u_j - u_i\|),$$

where $E(\cdot)$ is a SIDE energy function (Figure 2.6). The norm here stands simply for the absolute value of its scalar argument. Notice that we can still use the above equation if the image under consideration is vector-valued, by interpreting $\|\cdot\|$ as the $\ell^2$ norm. To remind ourselves that the pixel values of vector-valued images are vectors, we will use arrows to denote them:

$$E = \sum_{i,j \text{ are neighbors}} E(\|\vec{u}_j - \vec{u}_i\|), \qquad (5.1)$$
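The global energy (5.1) is straightforward to evaluate on a grid. The sketch below sums the energy over all 4-neighbor pairs of a vector-valued image; `side_energy` is an illustrative stand-in (increasing, concave, zero at zero), since the exact shape of the energy function in Figure 2.6 is not reproduced here.

```python
import numpy as np

def side_energy(v, K=1.0):
    """Illustrative SIDE-like energy function: increasing, concave, E(0) = 0.
    A stand-in for the E of Figure 2.6; the exact form is an assumption."""
    return v / (v + K)

def global_energy(u):
    """Global energy (5.1) for a vector-valued image u of shape (H, W, M):
    sum of E(||u_j - u_i||) over all 4-neighbor pairs, each counted once."""
    dv = np.linalg.norm(u[1:, :, :] - u[:-1, :, :], axis=-1)  # vertical pairs
    dh = np.linalg.norm(u[:, 1:, :] - u[:, :-1, :], axis=-1)  # horizontal pairs
    return side_energy(dv).sum() + side_energy(dh).sum()

# The 2-by-2, two-channel image of Figure 5.1: (2,2), (0,0), (0,1), (1,2)
u = np.array([[[2.0, 2.0], [0.0, 0.0]],
              [[0.0, 1.0], [1.0, 2.0]]])
energy = global_energy(u)
```

A constant image has zero energy, and any neighboring disagreement makes the energy strictly positive, consistent with the spring-mass picture below.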

Figure 5.1. Spring-mass model for vector-valued diffusions. This figure shows a 2-by-2 image whose pixels are two-vectors: (2,2), (0,0), (0,1), and (1,2). The pixel values are depicted as particles, with each pixel connected by springs to its neighboring pixels.

where $\vec{u}_i = (u_{i,1}, \ldots, u_{i,M})^T \in \mathbb{R}^M$, $\vec{u}_j = (u_{j,1}, \ldots, u_{j,M})^T \in \mathbb{R}^M$, and

$$\|\vec{u}_j - \vec{u}_i\| = \sqrt{\sum_{k=1}^{M} (u_{j,k} - u_{i,k})^2}.$$

We will call the collection of the $k$-th entries of these vectors the $k$-th channel of the image. In this case, the gradient descent equation is:

$$\dot{\vec{u}}_i = \frac{1}{m_i} \sum_{j \in A_i} \frac{\vec{u}_j - \vec{u}_i}{\|\vec{u}_j - \vec{u}_i\|} \, F(\|\vec{u}_j - \vec{u}_i\|) \, p_{ij}. \qquad (5.2)$$

(This notation combines $M$ equations, one for each channel, in a single vector equation.) Just as for scalar images, we merge two neighboring pixels at locations $i$ and $j$ when their values become equal: $\vec{u}_j(t) = \vec{u}_i(t)$. Just as in Chapter 3, $m_i$ and $A_i$ are the area of the $i$-th region and the set of its neighbors, respectively. The length of the boundary between regions $i$ and $j$ is $p_{ij}$.

A slight modification of the spring-mass models of Chapter 3 (Figures 3.1 and 3.3) can be used to visualize this evolution. We recall that those models consisted of particles forced to move along straight lines and connected by springs to their neighbors. If we replace each straight line with an $M$-dimensional space, we obtain the model corresponding to the vector-valued equation, with pixel values in $\mathbb{R}^M$. For example, when $M = 2$, the particles are forced to move in 2-D planes. Another way to visualize the system is to depict all the particles as points in a single $M$-dimensional space, as shown in Figure 5.1. Each pixel value $\vec{u}_i = (u_{i,1}, u_{i,2})^T$ is depicted in Figure 5.1 as a particle whose coordinates are $u_{i,1}$ and $u_{i,2}$. Each particle is connected by springs to its neighbors. A spring of length $v$ exerts a force whose absolute value is $F(v)$ and which is directed parallel to the spring.
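Before any pixels merge, every pixel is its own region ($m_i = 1$, $p_{ij} = 1$), and one explicit Euler step of (5.2) can be sketched as below. The force function `side_force` is again an assumed illustrative form (positive and decreasing for $v > 0$); the thesis fixes only these qualitative properties.

```python
import numpy as np

def side_force(v, K=1.0):
    """Illustrative SIDE force function F: positive and decreasing for v > 0
    (assumed form, not the thesis's exact F)."""
    return K / (v + K) ** 2

def euler_step(u, dt=0.1, eps=1e-12):
    """One explicit Euler step of the vector-valued SIDE (5.2) on the pixel
    grid before any mergings (every pixel its own region, m_i = p_ij = 1)."""
    H, W, _ = u.shape
    du = np.zeros_like(u)
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        shifted = np.roll(u, (di, dj), axis=(0, 1))
        diff = shifted - u                                    # u_j - u_i
        norm = np.linalg.norm(diff, axis=-1, keepdims=True)   # ||u_j - u_i||
        mask = np.ones((H, W, 1))                             # drop wrap-around pairs
        if di == 1:
            mask[0] = 0
        elif di == -1:
            mask[-1] = 0
        if dj == 1:
            mask[:, 0] = 0
        elif dj == -1:
            mask[:, -1] = 0
        du += mask * diff / (norm + eps) * side_force(norm)
    return u + dt * du

rng = np.random.default_rng(0)
u0 = rng.random((6, 5, 3))
u1 = euler_step(u0)
```

Because the force between each neighbor pair is equal and opposite, a step leaves the mean of every channel unchanged, which is the channel-wise conservation-of-mean property mentioned next.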
By using techniques similar to those in Chapter 3, it can be verified that Equation (5.2) inherits many useful properties of the scalar equation. Namely, the conservation of mean and the local maximum principle hold for each channel; the equation reaches

Figure 5.2. (a) A test image; (b) its noisy version (normalized); (c) detected boundary, superimposed onto the noise-free image.

Figure 5.3. (a) A test image; (b) its noisy version (normalized); (c) detected boundary, superimposed onto the noise-free image.

the steady state in finite time and has the energy dissipation properties described in Chapter 3. For usual SIDE force functions, there is no sliding for vector images, just as there was none for scalar images. However, for force functions which are infinite at zero, such as that of Figure 3.8, the sliding property holds, and therefore so does well-posedness. Vector-valued SIDEs are also robust to severe noise, as we show in the experiments.

### Experiment 1: Color Images

We start by applying a vector-valued SIDE to the color image of Figure 5.2. The image in Figure 5.2, (a) consists of two regions: two of its three color channels undergo an abrupt change at the boundary between the regions. More precisely, the {red, green, blue} channel values are {0.1, 0.6, 0.9} for the background and {0.6, 0.6, 0.5} for the square. Each channel is corrupted with independent white Gaussian noise whose standard deviation is 0.4. The resulting image (normalized so that every pixel of every channel lies between 0 and 1) is shown in Figure 5.2, (b). We evolve a vector-valued SIDE on the noisy image until exactly two regions remain. The final boundary,

Figure 5.4. (a) Image of two textures: fabric (left) and grass (right); (b) the ideal segmentation of the image in (a).

Figure 5.5. (a-c) Filters; (d-f) filtered versions of the image in Figure 5.4, (a).

superimposed onto the initial image, is depicted in Figure 5.2, (c). Just as in the scalar case, the algorithm is very accurate in locating the boundary: less than 0.2% of the pixels are misclassified in this -by- image. A similar experiment, with the same level of noise, is conducted for a more complicated shape, shown in Figure 5.3, (a). The result of processing the noisy image of Figure 5.3, (b) is shown in Figure 5.3, (c). In this -by- image, 0.8% of the pixels are misclassified.

### Experiment 2: Texture Images

Many algorithms for segmenting vector-valued images may be applied to texture segmentation by extracting features from the texture image and presenting the image consisting of these feature vectors to the segmentation algorithm [36]. This is the paradigm we adopt here. We note that the problem of feature extraction is very important: it is clear that the quality of the features will influence the quality of the final segmentation. However, this problem is beyond the scope of the present thesis. The purpose of this experiment is to show the feasibility of texture segmentation using SIDEs, given a

Figure 5.6. (a) Two-region segmentation, and (b) its deviation from the ideal one.

Figure 5.7. (a) A different feature image: the direction of the gradient; (b) the corresponding two-region segmentation, and (c) its deviation from the ideal one.

reasonable set of features. Our first example is the image depicted in Figure 5.4, (a), which consists of two textures: fabric (left) and grass (right). The underlying texture images, whose grayscale versions were used to create the image of Figure 5.4, (a), were taken from the VisTex database of the MIT Media Lab [42]. The correct segmentation is shown in Figure 5.4, (b). We extract features by convolving our image with the three Gabor functions [40] depicted in Figure 5.5, (a-c), which essentially corresponds to computing smoothed directional derivatives. The results of filtering our image with these functions are shown in Figure 5.5, (d-f). These are the three channels of the vector-valued image which we use as the initial condition for the SIDE (5.2). Figure 5.6, (a) shows the resulting two-region segmentation, and its difference from the ideal segmentation is in Figure 5.6, (b). The result is very accurate: about 3.2% of the pixels are classified incorrectly, all of which are very close to the initial boundary.

The next example illustrates the importance of the pre-processing step. We again segment the image of Figure 5.4, (a), but this time we use a different feature. Instead of the three features of Figure 5.5, we use a single feature: the absolute value of the gradient direction, depicted in Figure 5.7, (a). More precisely, the $(i,j)$-th pixel value of the feature image is

$$\left| \operatorname{angle}\!\left[ \left( u_{i,j+1} - u_{i,j} + \sqrt{-1}\,(u_{i+1,j} - u_{i,j}) \right)^2 \right] \right|,$$

where $u_{i,j}$ is the $(i,j)$-th pixel of the original image. This leads to a significant improvement in performance, shown in the rest of Figure 5.7: the shape of the boundary is much closer to that of the ideal boundary, and the number of misclassified pixels is

Figure 5.8. (a) Image of two wood textures; (b) the ideal segmentation of the image in (a).

Figure 5.9. (a) Feature image for the wood textures; (b) the corresponding five-region segmentation, and (c) its deviation from the ideal one.

only 2% of the total area of the image.

In our next example, we segment the image of Figure 5.8, which involves two wood textures generated using the Markov random field method of [37]. We again use the direction of the gradient as the sole feature (Figure 5.9, (a)). The five-region segmentation of Figure 5.9, (b) is again very accurate, with the area between our boundary and the ideal one being only 1.6% of the total area of the image (Figure 5.9, (c)).

Our final example of this subsection involves the same wood textures, but arranged in more complicated shapes (Figure 5.10). The feature image and the corresponding segmentation (2.3% of the pixels are misclassified) are shown in Figure 5.11. We will see in the next section that if the two textures differ only in orientation (as in the last two examples), then performance can be improved by using a different version of SIDEs, adapted for processing orientation images.

We close this section by noting that in all the examples of this subsection, our segmentation algorithm performed very well, despite the fact that the feature images were of poor quality. No attempt was made to optimize the pre-processing step of extracting the features.
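The gradient-direction feature used in the last three examples can be sketched as follows. The two finite differences form a complex gradient, and squaring it before taking the phase identifies orientations that differ by $\pi$; taking the absolute value of the phase gives the scalar feature of this subsection.

```python
import numpy as np

def abs_gradient_direction(u):
    """Feature image |angle[(u[i,j+1] - u[i,j] + sqrt(-1)*(u[i+1,j] - u[i,j]))^2]|.
    Squaring the complex gradient equates directions that differ by pi."""
    dx = u[:-1, 1:] - u[:-1, :-1]   # horizontal difference u[i,j+1] - u[i,j]
    dy = u[1:, :-1] - u[:-1, :-1]   # vertical difference   u[i+1,j] - u[i,j]
    g = dx + 1j * dy
    return np.abs(np.angle(g ** 2))

# Ramps with purely horizontal and purely vertical gradients
h_ramp = np.tile(np.arange(6.0), (6, 1))
v_ramp = h_ramp.T.copy()
```

On the horizontal ramp the feature is 0 everywhere; on the vertical ramp (and on its negation, whose gradient points the opposite way) it is $\pi$, illustrating that opposite gradient directions map to the same feature value.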

Figure 5.10. (a) Another image of two wood textures; (b) the ideal segmentation of the image in (a).

Figure 5.11. (a) Feature image for the wood textures in Figure 5.10, (a); (b) the corresponding five-region segmentation, and (c) its deviation from the ideal one.

Figure 5.12. A SIDE energy function $E(v)$ which is flat at $\pi$ and $-\pi$, and therefore results in a force function which vanishes at $\pi$ and $-\pi$.

## 5.2 Orientation Diffusions

In [48], Perona used the linear heat diffusion as a basis for developing orientation diffusions. In this section, we follow a similar procedure to adapt SIDEs for processing orientation images. In other words, we consider the case when the image to be processed takes its values on the unit circle $S^1$, i.e., each pixel value is a complex number with unit magnitude. We define the signed distance $\zeta(a, b)$ between two such numbers $a$ and $b$ as their phase difference:

$$\zeta(a, b) \stackrel{\mathrm{def}}{=} \frac{1}{\sqrt{-1}} \log \frac{a}{b}, \qquad (5.3)$$

where $\log$ is the principal value, i.e., $-\pi \le \zeta < \pi$. We would like the force acting between two antipodal points to change continuously as the signed distance between them changes from $\pi - 0$ to $-\pi + 0$ (i.e., from slightly smaller than $\pi$ to slightly larger than $-\pi$). We therefore restrict our attention to SIDE energy functions for which $E'(\pi) = E'(-\pi) = 0$, as shown in Figure 5.12. We define the following global energy:

$$E = \sum_{i,j \text{ are neighbors}} E(|\zeta(u_j, u_i)|). \qquad (5.4)$$

We define SIDEs for circle-valued images and signals as the gradient descent equation $\dot{u} = -\nabla E$, with the $i$-th pixel evolving according to

$$\dot{u}_i = -\nabla_i E,$$

where $\nabla_i$ is the gradient taken on the unit circle $S^1$. To visualize this evolution, the straight vertical lines of the spring-mass models of Figures 3.1 and 3.3 are replaced with circles: each particle moves around a circle. After taking the gradients, simplifying, and taking into account the merging of pixels, we obtain that the differential equation governing the evolution of the phase angles of the $u_i$'s is very similar to the scalar SIDE:

$$\dot{\theta}_i = \frac{1}{m_i} \sum_{j \in A_i} F(\zeta(u_j, u_i)) \, p_{ij}, \qquad (5.5)$$

where $\theta_i$ is the phase angle of $u_i$ (we use the convention $0 \le \theta_i < 2\pi$ and identify $\theta_i = 2\pi$ with $\theta_i = 0$). The rest of the notation is the same as in Chapter 3. Two neighboring pixels are merged when they have the same phase angle.

While this evolution has many similarities to its scalar and vector-valued counterparts, it also has important differences, stemming from the fact that it operates on phase angles. Thus, it is not natural to talk about the mean of the (complex) values of the input image; instead, this evolution preserves the sum of the phases, modulo $2\pi$. This is easily verified by summing up the equations (5.5).

Property 5.1 (Total phase conservation). The phase angle of the product of all pixel values stays invariant throughout the evolution of (5.5).

Another important distinction from the scalar and vector SIDEs is the existence of unstable equilibria. Since $F(\pi) = E'(\pi) = 0$, it follows that if two neighboring pixels have values which are two antipodal points on $S^1$, these points will neither attract nor repel each other. Thus, unlike the scalar and vector-valued equations, whose only equilibria were constant images, many other equilibrium states are possible here. The next property, however, guarantees that the only stable equilibria are constant images.

Property 5.2 (Equilibria). Suppose that all the pixel values of an image $u$ have the same phase. Then $u$ is a stable equilibrium of (5.5). Conversely, such images are the only stable equilibria of (5.5).

Proof. If all the phases are the same, then the whole image $u$ is a single region, which therefore has no neighbors and is not changing. Moreover, if a pixel value is perturbed by a small amount, the forces exerted by its neighbors will immediately pull it back. Thus, it is a stable equilibrium. Suppose now that an equilibrium image $u$ has more than one region.
Let us pick an arbitrary region, call its value $u$, and partition the set of its neighboring regions into two subsets: $U = \{u_1, \ldots, u_p\}$, whose every element pulls $u$ in the counter-clockwise direction (i.e., $U$ is comprised of those regions for which the phase of $u_i \bar{u}$ is positive and strictly less than $\pi$), and $V = \{v_1, \ldots, v_q\}$, whose elements pull $u$ in the clockwise direction (i.e., those regions for which the phase of $v_i \bar{u}$ is negative and greater than or equal to $-\pi$). One of the sets $U$, $V$ (but not both) can be empty. Since our system (5.5) is in equilibrium, the resultant force acting on $u$ is zero, i.e., the right-hand side of the corresponding differential equation is zero. Suppose now that $u$ is slightly perturbed in the clockwise direction. Since the force function $F$ is monotonically decreasing, the resultant force exerted on $u$ by the regions comprising $V$ will increase, and the resultant force exerted by $U$ will decrease. The net result is to push $u$ further in the clockwise direction. A similar argument applies if $u$ is perturbed in the counter-clockwise direction. Thus, the equilibrium is unstable, which concludes the proof.

For any reasonable probabilistic model of the initial data, the probability of attaining an unstable equilibrium during the evolution is zero. In any case, a numerical

Figure 5.13. (a) The orientation image for Figure 5.7, (a); (b) the corresponding two-region segmentation, and (c) its deviation from the ideal one.

implementation can be designed to avoid such equilibria. Therefore, in the generic case, the steady state of the evolution is a stable equilibrium, which, by the above property, is a constant image. This corresponds to the coarsest segmentation (everything is one region), just as in the scalar-valued and vector-valued cases.

Since there is no notion of a maximum on the unit circle, there is no maximum principle, either. This means, moreover, that we cannot mimic the proof of Property 3.2 (finite evolution time) of the scalar evolutions. This property does hold for the evolutions on a circle, but the proof is different. Specifically, Property 3.7 holds here, with a similar proof, which means that between two consecutive mergings the global energy is a concave decreasing function of time (Figure 3.7). It will therefore reach zero in finite time, at which point the evolution will be at one of its equilibria.

Property 5.3 (Finite evolution time). The SIDE (5.5) reaches its equilibrium in finite time, starting with any initial condition.

### Experiments

To illustrate segmentation of orientation images, we use the same texture images as in the previous section. To extract the orientations, we use the direction of the gradient. The $(i,j)$-th pixel value of the orientation image is

$$\operatorname{angle}\!\left[ \left( u_{i,j+1} - u_{i,j} + \sqrt{-1}\,(u_{i+1,j} - u_{i,j}) \right)^2 \right],$$

where $u_{i,j}$ is the $(i,j)$-th pixel value of the raw texture image. (Note that the absolute value of this orientation image was used as a feature image in the previous section.) The expression is squared so as to equate phases which differ by $\pi$. The orientations for the fabric-and-grass image are shown in Figure 5.13, (a).
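The signed distance (5.3) and one Euler step of the circle-valued evolution (5.5) can be sketched as below, on a 1-D circular signal before any mergings. The force function is an assumed illustrative form: odd and vanishing at $\pm\pi$, as the text requires, but its exact shape is not taken from the thesis.

```python
import numpy as np

def zeta(theta_j, theta_i):
    """Signed distance (5.3) between unit complex numbers with phases theta_j
    and theta_i: the principal-value phase of u_j / u_i, mapped to [-pi, pi)."""
    z = np.angle(np.exp(1j * (theta_j - theta_i)))  # principal value in (-pi, pi]
    return np.where(z == np.pi, -np.pi, z)

def circle_force(z):
    """Illustrative circle-adapted force: odd in z and vanishing at z = +/-pi,
    as required by F(pi) = E'(pi) = 0 (the exact shape is an assumption)."""
    return np.sign(z) * np.cos(z / 2.0)

def phase_step(theta, dt=0.05):
    """One Euler step of (5.5) on a 1-D circular signal, every pixel its own
    region (m_i = p_ij = 1); theta holds phase angles in [0, 2*pi)."""
    z = zeta(theta[1:], theta[:-1])        # zeta(u_{i+1}, u_i)
    dtheta = np.zeros_like(theta)
    dtheta[:-1] += circle_force(z)         # pull exerted by the right neighbor
    dtheta[1:] += circle_force(-z)         # equal and opposite pull on that neighbor
    return np.mod(theta + dt * dtheta, 2.0 * np.pi)

rng = np.random.default_rng(1)
t0 = rng.uniform(0.0, 2.0 * np.pi, 16)
t1 = phase_step(t0)
```

Because the force is odd, the neighbor contributions cancel pairwise when summed, so the phase of the product of all pixel values is invariant under the step, which is Property 5.1.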
We present this image as the initial condition to the circle-valued SIDE (5.5) and evolve it until two regions remain. The resulting segmentation is depicted in Figure 5.13, (b), and its difference from the ideal one is in Figure 5.13, (c). About 4.3% of the pixels are classified incorrectly. It is not surprising that this method performs slightly worse on this example than the evolutions of the previous section. Indeed, if a human were asked to segment the original texture image based purely on orientation, he might

Figure 5.14. (a) The orientation image for Figure 5.9, (a); (b) the corresponding five-region segmentation, and (c) its deviation from the ideal one.

Figure 5.15. (a) The orientation image for Figure 5.10, (a); (b) the corresponding five-region segmentation, and (c) its deviation from the ideal one.

make similar errors. Note in particular that the protrusion in the upper portion of the boundary found by the SIDE corresponds to a horizontally oriented piece of grass in the original image, which can be mistaken for a portion of the fabric.

In the second example, however, the orientation information is very appropriate for characterizing and discriminating the two differently oriented wood textures. The orientation image is in Figure 5.14, (a). The resulting five-region segmentation of Figure 5.14, (b) incorrectly classifies only 252 pixels (1.5%) in the 128-by-128 image, which is 14 pixels better than the method of the previous section. A more dramatic improvement is achieved in the example of Figure 5.15, which shows the five-region segmentation of the image in Figure 5.10, (a), using the circle-valued SIDE (5.5). Only 1.5% of the pixels are misclassified, compared to 2.3% using the method of the previous section.

## 5.3 Conclusion

In this chapter, we generalized SIDEs to situations in which the image to be processed is not scalar-valued. We described the properties of the resulting evolutions and demonstrated their application to segmenting color, texture, and orientation images.


### Image Processing. Filtering. Slide 1

Image Processing Filtering Slide 1 Preliminary Image generation Original Noise Image restoration Result Slide 2 Preliminary Classic application: denoising However: Denoising is much more than a simple

### Module 1 Lecture Notes 2. Optimization Problem and Model Formulation

Optimization Methods: Introduction and Basic concepts 1 Module 1 Lecture Notes 2 Optimization Problem and Model Formulation Introduction In the previous lecture we studied the evolution of optimization

### Lecture 14 Shape. ch. 9, sec. 1-8, of Machine Vision by Wesley E. Snyder & Hairong Qi. Spring (CMU RI) : BioE 2630 (Pitt)

Lecture 14 Shape ch. 9, sec. 1-8, 12-14 of Machine Vision by Wesley E. Snyder & Hairong Qi Spring 2018 16-725 (CMU RI) : BioE 2630 (Pitt) Dr. John Galeotti The content of these slides by John Galeotti,

### Finding 2D Shapes and the Hough Transform

CS 4495 Computer Vision Finding 2D Shapes and the Aaron Bobick School of Interactive Computing Administrivia Today: Modeling Lines and Finding them CS4495: Problem set 1 is still posted. Please read the

### Lifting Transform, Voronoi, Delaunay, Convex Hulls

Lifting Transform, Voronoi, Delaunay, Convex Hulls Subhash Suri Department of Computer Science University of California Santa Barbara, CA 93106 1 Lifting Transform (A combination of Pless notes and my

### Exploring Curve Fitting for Fingers in Egocentric Images

Exploring Curve Fitting for Fingers in Egocentric Images Akanksha Saran Robotics Institute, Carnegie Mellon University 16-811: Math Fundamentals for Robotics Final Project Report Email: asaran@andrew.cmu.edu

### Motion Control (wheeled robots)

Motion Control (wheeled robots) Requirements for Motion Control Kinematic / dynamic model of the robot Model of the interaction between the wheel and the ground Definition of required motion -> speed control,

### Practical Image and Video Processing Using MATLAB

Practical Image and Video Processing Using MATLAB Chapter 18 Feature extraction and representation What will we learn? What is feature extraction and why is it a critical step in most computer vision and

### Hyperbolic structures and triangulations

CHAPTER Hyperbolic structures and triangulations In chapter 3, we learned that hyperbolic structures lead to developing maps and holonomy, and that the developing map is a covering map if and only if the

### Graph theory. Po-Shen Loh. June We begin by collecting some basic facts which can be proved via bare-hands techniques.

Graph theory Po-Shen Loh June 013 1 Basic results We begin by collecting some basic facts which can be proved via bare-hands techniques. 1. The sum of all of the degrees is equal to twice the number of

### Kinematics of Wheeled Robots

CSE 390/MEAM 40 Kinematics of Wheeled Robots Professor Vijay Kumar Department of Mechanical Engineering and Applied Mechanics University of Pennsylvania September 16, 006 1 Introduction In this chapter,

### COMPARATIVE STUDY OF IMAGE EDGE DETECTION ALGORITHMS

COMPARATIVE STUDY OF IMAGE EDGE DETECTION ALGORITHMS Shubham Saini 1, Bhavesh Kasliwal 2, Shraey Bhatia 3 1 Student, School of Computing Science and Engineering, Vellore Institute of Technology, India,

### Image features. Image Features

Image features Image features, such as edges and interest points, provide rich information on the image content. They correspond to local regions in the image and are fundamental in many applications in

### Let be a function. We say, is a plane curve given by the. Let a curve be given by function where is differentiable with continuous.

Module 8 : Applications of Integration - II Lecture 22 : Arc Length of a Plane Curve [Section 221] Objectives In this section you will learn the following : How to find the length of a plane curve 221

### MATHEMATICS 105 Plane Trigonometry

Chapter I THE TRIGONOMETRIC FUNCTIONS MATHEMATICS 105 Plane Trigonometry INTRODUCTION The word trigonometry literally means triangle measurement. It is concerned with the measurement of the parts, sides,

### MEDICAL IMAGE NOISE REDUCTION AND REGION CONTRAST ENHANCEMENT USING PARTIAL DIFFERENTIAL EQUATIONS

MEDICAL IMAGE NOISE REDUCTION AND REGION CONTRAST ENHANCEMENT USING PARTIAL DIFFERENTIAL EQUATIONS Miguel Alemán-Flores, Luis Álvarez-León Departamento de Informática y Sistemas, Universidad de Las Palmas

### Digital Image Processing (CS/ECE 545) Lecture 5: Edge Detection (Part 2) & Corner Detection

Digital Image Processing (CS/ECE 545) Lecture 5: Edge Detection (Part 2) & Corner Detection Prof Emmanuel Agu Computer Science Dept. Worcester Polytechnic Institute (WPI) Recall: Edge Detection Image processing

### Image Resizing Based on Gradient Vector Flow Analysis

Image Resizing Based on Gradient Vector Flow Analysis Sebastiano Battiato battiato@dmi.unict.it Giovanni Puglisi puglisi@dmi.unict.it Giovanni Maria Farinella gfarinellao@dmi.unict.it Daniele Ravì rav@dmi.unict.it

### An Introduction to Content Based Image Retrieval

CHAPTER -1 An Introduction to Content Based Image Retrieval 1.1 Introduction With the advancement in internet and multimedia technologies, a huge amount of multimedia data in the form of audio, video and

### On the Max Coloring Problem

On the Max Coloring Problem Leah Epstein Asaf Levin May 22, 2010 Abstract We consider max coloring on hereditary graph classes. The problem is defined as follows. Given a graph G = (V, E) and positive

### Edge detection. Gradient-based edge operators

Edge detection Gradient-based edge operators Prewitt Sobel Roberts Laplacian zero-crossings Canny edge detector Hough transform for detection of straight lines Circle Hough Transform Digital Image Processing:

### Morphological Image Processing

Morphological Image Processing Ranga Rodrigo October 9, 29 Outline Contents Preliminaries 2 Dilation and Erosion 3 2. Dilation.............................................. 3 2.2 Erosion..............................................

### x ~ Hemispheric Lighting

Irradiance and Incoming Radiance Imagine a sensor which is a small, flat plane centered at a point ~ x in space and oriented so that its normal points in the direction n. This sensor can compute the total

### CS201 Computer Vision Lect 4 - Image Formation

CS201 Computer Vision Lect 4 - Image Formation John Magee 9 September, 2014 Slides courtesy of Diane H. Theriault Question of the Day: Why is Computer Vision hard? Something to think about from our view

### Noise Model. Important Noise Probability Density Functions (Cont.) Important Noise Probability Density Functions

Others -- Noise Removal Techniques -- Edge Detection Techniques -- Geometric Operations -- Color Image Processing -- Color Spaces Xiaojun Qi Noise Model The principal sources of noise in digital images

### Accurate 3D Face and Body Modeling from a Single Fixed Kinect

Accurate 3D Face and Body Modeling from a Single Fixed Kinect Ruizhe Wang*, Matthias Hernandez*, Jongmoo Choi, Gérard Medioni Computer Vision Lab, IRIS University of Southern California Abstract In this

### Medical Image Segmentation using Level Sets

Medical Image Segmentation using Level Sets Technical Report #CS-8-1 Tenn Francis Chen Abstract Segmentation is a vital aspect of medical imaging. It aids in the visualization of medical data and diagnostics

### CS 664 Segmentation. Daniel Huttenlocher

CS 664 Segmentation Daniel Huttenlocher Grouping Perceptual Organization Structural relationships between tokens Parallelism, symmetry, alignment Similarity of token properties Often strong psychophysical

### . Tutorial Class V 3-10/10/2012 First Order Partial Derivatives;...

Tutorial Class V 3-10/10/2012 1 First Order Partial Derivatives; Tutorial Class V 3-10/10/2012 1 First Order Partial Derivatives; 2 Application of Gradient; Tutorial Class V 3-10/10/2012 1 First Order

### Lecture: k-means & mean-shift clustering

Lecture: k-means & mean-shift clustering Juan Carlos Niebles and Ranjay Krishna Stanford Vision and Learning Lab Lecture 11-1 Recap: Image Segmentation Goal: identify groups of pixels that go together

### 14.5 Directional Derivatives and the Gradient Vector

14.5 Directional Derivatives and the Gradient Vector 1. Directional Derivatives. Recall z = f (x, y) and the partial derivatives f x and f y are defined as f (x 0 + h, y 0 ) f (x 0, y 0 ) f x (x 0, y 0

### Planes Intersecting Cones: Static Hypertext Version

Page 1 of 12 Planes Intersecting Cones: Static Hypertext Version On this page, we develop some of the details of the plane-slicing-cone picture discussed in the introduction. The relationship between the

### Image Sampling and Quantisation

Image Sampling and Quantisation Introduction to Signal and Image Processing Prof. Dr. Philippe Cattin MIAC, University of Basel 1 of 46 22.02.2016 09:17 Contents Contents 1 Motivation 2 Sampling Introduction

### Unsupervised Learning : Clustering

Unsupervised Learning : Clustering Things to be Addressed Traditional Learning Models. Cluster Analysis K-means Clustering Algorithm Drawbacks of traditional clustering algorithms. Clustering as a complex

### Image Analysis. Edge Detection

Image Analysis Edge Detection Christophoros Nikou cnikou@cs.uoi.gr Images taken from: Computer Vision course by Kristen Grauman, University of Texas at Austin (http://www.cs.utexas.edu/~grauman/courses/spring2011/index.html).

### Preferred directions for resolving the non-uniqueness of Delaunay triangulations

Preferred directions for resolving the non-uniqueness of Delaunay triangulations Christopher Dyken and Michael S. Floater Abstract: This note proposes a simple rule to determine a unique triangulation

### Edge linking. Two types of approaches. This process needs to be able to bridge gaps in detected edges due to the reason mentioned above

Edge linking Edge detection rarely finds the entire set of edges in an image. Normally there are breaks due to noise, non-uniform illumination, etc. If we want to obtain region boundaries (for segmentation)

### Lectures in Discrete Differential Geometry 3 Discrete Surfaces

Lectures in Discrete Differential Geometry 3 Discrete Surfaces Etienne Vouga March 19, 2014 1 Triangle Meshes We will now study discrete surfaces and build up a parallel theory of curvature that mimics

### Image Sampling & Quantisation

Image Sampling & Quantisation Biomedical Image Analysis Prof. Dr. Philippe Cattin MIAC, University of Basel Contents 1 Motivation 2 Sampling Introduction and Motivation Sampling Example Quantisation Example

### Image representation. 1. Introduction

Image representation Introduction Representation schemes Chain codes Polygonal approximations The skeleton of a region Boundary descriptors Some simple descriptors Shape numbers Fourier descriptors Moments

### Biomedical Image Analysis. Point, Edge and Line Detection

Biomedical Image Analysis Point, Edge and Line Detection Contents: Point and line detection Advanced edge detection: Canny Local/regional edge processing Global processing: Hough transform BMIA 15 V. Roth

### Image Restoration and Reconstruction

Image Restoration and Reconstruction Image restoration Objective process to improve an image Recover an image by using a priori knowledge of degradation phenomenon Exemplified by removal of blur by deblurring

### K-Means and Gaussian Mixture Models

K-Means and Gaussian Mixture Models David Rosenberg New York University June 15, 2015 David Rosenberg (New York University) DS-GA 1003 June 15, 2015 1 / 43 K-Means Clustering Example: Old Faithful Geyser

### Multiple Motion and Occlusion Segmentation with a Multiphase Level Set Method

Multiple Motion and Occlusion Segmentation with a Multiphase Level Set Method Yonggang Shi, Janusz Konrad, W. Clem Karl Department of Electrical and Computer Engineering Boston University, Boston, MA 02215

### The Largest Pure Partial Planes of Order 6 Have Size 25

The Largest Pure Partial Planes of Order 6 Have Size 5 Yibo Gao arxiv:1611.00084v1 [math.co] 31 Oct 016 Abstract In this paper, we prove that the largest pure partial plane of order 6 has size 5. At the

### Image Restoration and Reconstruction

Image Restoration and Reconstruction Image restoration Objective process to improve an image, as opposed to the subjective process of image enhancement Enhancement uses heuristics to improve the image

### THE HALF-EDGE DATA STRUCTURE MODELING AND ANIMATION

THE HALF-EDGE DATA STRUCTURE MODELING AND ANIMATION Dan Englesson danen344@student.liu.se Sunday 12th April, 2011 Abstract In this lab assignment which was done in the course TNM079, Modeling and animation,