Low complexity M-hypotheses detection: M vectors case

Mohammed Nafie and Ahmed H. Tewfik
Dept. of Electrical Engineering, University of Minnesota, Minneapolis, MN
mnafie,tewfik@ece.umn.edu

Abstract

Low complexity algorithms are essential in many applications that require low power implementation. In this paper we present a low complexity technique for solving M-hypotheses detection problems that involve vector observations. Our technique applies when the number of vectors is equal to or smaller than the dimensionality of the vectors. It attempts to optimally trade off complexity against probability of error by solving the problem in a lower dimension.

1. Introduction

M-ary hypotheses testing arises in many applications. The optimal detector has a complexity that is linear in M. However, M can be large, or we may operate under a low power constraint; hence the need for low power and low complexity detection algorithms. In [1] we presented a progressive refinement approach to M-ary detection problems. This approach can lead to a logarithmic reduction in the complexity of the detector. Ideally, we would hope to group the hypotheses into two large groups and subsequently split these two sets in two recursively. Unfortunately, binary partitions cannot always capture the exact boundary between two groups of hypotheses. Therefore, we cannot group the hypotheses into disjoint groupings if we wish to achieve detection performance close to optimal. On the other hand, we want to minimize overlap between the groupings so as to minimize the number of comparisons required to make a decision. Our problem is then one of approximating the partitions of the decision plane with the minimal number of binary partitions, or equivalently of designing a tree of minimal depth.
In this paper we discuss a low power detection algorithm for M-hypotheses detection when the hypotheses involve vector observations and the number of hypotheses, M, is smaller than the dimensionality, N, of the vectors. This case arises in many practical applications. Rather than performing the detection in a tree-like structure as in [1], the approach we present here projects the received vector onto a lower dimensional space, where the detection can be performed using a relatively smaller number of operations. The projection is done so as to maximize the minimum distance between the projections of the different hypotheses. The rest of this paper is organized as follows: in the next section we formulate the problem we are trying to solve. In the following section, we explain our proposed projection approach, as applied to orthogonal and nonorthogonal vectors. We then end with the conclusion.

2. Problem Formulation

Assume that we have M possible hypotheses described by N dimensional vectors. Under hypothesis i, we receive the vector r = v_i + noise, 1 <= i <= M. Assume that the noise is white Gaussian noise. We want to select one of the M hypotheses so as to minimize the probability of error. It can be shown [2] that the optimal receiver selects the v_i that achieves

    max_i ( 2 r^T v_i - v_i^T v_i )    (1)

where r is the received vector. Therefore, we have M matched filter operations and M - 1 comparisons to finally decide on a hypothesis out of the M hypotheses. We have two cases:

Case 1: The M hypotheses lie in an N dimensional space, where M <= N. This case is solved through a projection approach that finds the optimal subspace to project the received vector onto.

Case 2: The M hypotheses lie in an N dimensional space, where M > N. In this case, we use the progressive refinement approach presented in [1].
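As an illustration of the optimal receiver in Eq. (1), the following sketch computes the M matched-filter metrics 2 r^T v_i - v_i^T v_i and picks the maximizer. The hypothesis vectors and the noise level here are made-up examples, not taken from the paper:

```python
import numpy as np

# Illustrative sketch of the optimal receiver in Eq. (1): compute the M
# matched-filter metrics 2 r^T v_i - v_i^T v_i and select the largest.
# The vectors V and noise level 0.1 are hypothetical examples.

def optimal_detector(r, V):
    """V holds the hypothesis vectors v_i as rows; returns the chosen index."""
    metrics = 2.0 * (V @ r) - np.sum(V * V, axis=1)   # M matched filters
    return int(np.argmax(metrics))                    # M - 1 comparisons

rng = np.random.default_rng(0)
M, N = 4, 8
V = rng.standard_normal((M, N))
r = V[2] + 0.1 * rng.standard_normal(N)               # observe hypothesis index 2
print(optimal_detector(r, V))
```

With a noise-free observation r = v_i the detector always returns i, since the metric of any other hypothesis j falls short by exactly ||v_i - v_j||^2.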
3. M-ary hypothesis testing problems in larger dimensional spaces

In this case, we want to solve the problem using a complexity, P, that is lower than M. We first present a general formulation of the problem, and then we show how to solve it in several cases. We want to divide our space into K = 2^P regions bounded by P hyperplanes. Each region is indicated by being above or below each of the P hyperplanes. To minimize the probability of error, we solve the following minimization problem

    min_{x_1, ..., x_P, th_1, ..., th_P}  sum_{i=1}^{M} sum_{k=1}^{K} (1 - delta(i - j_k)) Prob(r in R_k | H_i)    (2)

where

    j_k = argmax_i Prob(r in R_k | H_i)    (3)

the x_p's are unit norm vectors orthogonal to the hyperplanes, the th_p's are thresholds determining the exact placement of the hyperplanes, and R_k is the kth region bounded by these hyperplanes.

3.1. Solution for white Gaussian noise

The above formulation holds for any noise density function, but as stated it does not admit an easy solution. In the rest of this paper we assume white Gaussian noise. Since the noise is white, it can be shown that assuming the hyperplanes to be orthogonal does not sacrifice any performance. Hence, we have

    Prob(r in R_k | H_i) = prod_{p=1}^{P} Q( (-1)^h (th_p - x_p^T v_i) )    (4)

where we define

    Q(x) = integral_{x}^{infinity} (1 / sqrt(2 pi sigma^2)) exp( -u^2 / (2 sigma^2) ) du    (5)

and h is either 1 or -1 depending on whether the region R_k is above or below the hyperplane designated by x_p and th_p. Now assume that the vectors v_i are orthogonal and that M = 2^P. We can simplify the above optimization problem, since we can now pre-assign each of the M regions to any one hypothesis without any loss in performance. Since the vectors v_i are orthogonal, we can always transform them to any other set of orthogonal vectors; the vectors are therefore "interchangeable". The problem now becomes

    max_{x_p, th_p}  sum_{i=1}^{M} prod_{p=1}^{P} Q( (-1)^h (th_p - x_p^T v_i) )    (6)

subject to

    x_p^T x_p = 1,   x_m^T x_n = 0 for m != n    (7)

where the h's are chosen according to the region to which hypothesis i is assigned. We now make the following remark: the way we divide the space, above each hyperplane we have M/2 vectors and the same number of vectors below it. Therefore, at high signal to noise ratio, we choose each plane so as to maximize the minimum distance between these two groups of vectors. Assume, without any loss of generality, that these vectors are represented by the unit vectors v_i = e_i, where e_i is zero everywhere except in the ith place, where it is 1. We can write the maxmin problem, and it can be shown to be convex [3]. However, it can easily be seen that to maximize the minimum distance, every vector has to be projected such that its distance from the plane equals that of all the other vectors: if this is not the case, and we increase the distance of one of the vectors, we have to decrease the distance of some other vector, leading to a decrease in the minimum distance. The vector that we project onto now becomes

    [ y  y  ...  y  z  z  ...  z ]^T    (8)

where one group of vectors is projected onto y and the other group onto z. Therefore we write the problem as

    max_{y,z} (y - z)   subject to   (M/2)(y^2 + z^2) = 1    (9)

We differentiate the Lagrangian

    y - z + lambda( (M/2)(y^2 + z^2) - 1 )    (10)

with respect to y, z, and the Lagrange multiplier and equate to zero. We get

    y = -z = 1 / sqrt(M)    (11)

Dividing the space into these optimal regions is equivalent to a projection algorithm, where we map the vectors onto the optimal M-sized constellation in P dimensions, where P is the order to which we want to reduce our computations. If the vectors are orthogonal, this constellation is such that we need only perform P matched filter operations and P comparisons to reach a detection decision.
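The constrained maximization in Eqs. (9)-(11) can be checked numerically: parametrizing the constraint circle (M/2)(y^2 + z^2) = 1 and scanning for the maximum of y - z should recover y = -z = 1/sqrt(M). The value M = 8 below is an arbitrary example:

```python
import numpy as np

# Numerical check of Eqs. (9)-(11): maximize y - z subject to
# (M/2)(y^2 + z^2) = 1 by scanning the constraint circle.

M = 8
radius = np.sqrt(2.0 / M)              # (M/2) r^2 = 1  =>  r = sqrt(2/M)
theta = np.linspace(0.0, 2.0 * np.pi, 400_000)
y = radius * np.cos(theta)
z = radius * np.sin(theta)
best = np.argmax(y - z)
print(y[best], z[best])                # optimum sits at y = -z = 1/sqrt(M)
```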
In the case of nonorthogonal vectors, we require that we only do P matched filter operations, while we allow ourselves more than P comparisons. Therefore, we require that the vectors corresponding to the differences of the projections of the M hypotheses in the P dimensional space lie in no more than P directions. In that case we need only project the received vector, r, onto these directions, and through some comparisons we can decide on the sent hypothesis.

3.2. Special Cases

We now consider some special cases. Assume that M is a power of 2. For example, let us consider the case of M = 4 vectors in a 4 dimensional space. We have two cases:

1. Orthogonal signal set: Our M vectors are orthogonal, i.e.

    v_i^T v_j = 0    (12)

for all i != j. We also assume that the vectors are all equal in norm. We want to project these vectors onto a P = 2 dimensional space. The optimal size four constellation in a 2 dimensional space is a square. Therefore, we project our 4 vectors onto the vertices shown in Fig. 1. Then we project the received vector, r, onto the x axis and y axis. We then have 2 comparisons to make: compare the x and y components of this projection to zero.

2. Nonorthogonal signal set: The M vectors are general. We only assume that they form an independent signal set. We form a matrix with these M vectors as its columns. If we multiply this matrix by its inverse from the left, we have projected the M hypotheses to the same vertices as in Fig. 1. The problem is that the noise is no longer white, and hence the nearest neighbor technique would not be useful here. Therefore, we have to whiten the noise. As a result the vectors now lie on the vertices of Fig. 2.
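The orthogonal M = 4 case above can be sketched concretely. Taking v_i = e_i and two Walsh-like projection directions (an illustrative choice consistent with the square constellation, not a construction spelled out in the paper), detection reduces to two matched filters and two sign comparisons:

```python
import numpy as np

# Sketch of the M = 4, N = 4 orthogonal case: project onto P = 2 directions
# and decide with two sign comparisons instead of four matched filters.
# The rows of U are an illustrative choice of projection directions.

U = 0.5 * np.array([[1.0,  1.0, -1.0, -1.0],
                    [1.0, -1.0,  1.0, -1.0]])   # orthonormal rows

def detect(r):
    """Map the two sign bits of the projection to a hypothesis index."""
    p = U @ r                       # 2 matched-filter operations
    b0 = 1 if p[0] < 0.0 else 0     # comparison 1: x component vs 0
    b1 = 1 if p[1] < 0.0 else 0     # comparison 2: y component vs 0
    return 2 * b0 + b1              # e_1 -> 0, e_2 -> 1, e_3 -> 2, e_4 -> 3

rng = np.random.default_rng(1)
for i in range(4):
    r = np.eye(4)[i] + 0.05 * rng.standard_normal(4)
    print(i, detect(r))
```

Each e_i lands on a distinct vertex (±1/2, ±1/2) of the square, matching the y = -z = 1/sqrt(4) = 1/2 spacing from Eq. (11).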
We note that different arrangements of the vectors as columns in the matrix will give different performance results; to obtain the best performance, all the possibilities have to be compared. Also, notice that the number of comparison operations might now exceed P. In the figure shown, notice that if a received vector is closer to hypothesis 1 than to 2, then it is closer to 3 than to 4, and hence we only need to compare between 3 and 4. While if it is closer to 2, then we still have to compare 2 and 4, and then the winner of these with 3. Therefore the number of comparisons is lower bounded by P and upper bounded by M - 1. In all cases we only need to do P matched filter operations instead of M.

[Figure 1. Orthogonal Signal Set]

[Figure 2. Nonorthogonal Signal Set]

Orthogonal signal sets where the number of vectors is a power of 2, though seemingly restrictive, have many practical applications. For example, in CDMA communications, a user sends one out of 64 orthogonal Walsh functions [4] of length 64. A receiver needs to perform matched filter operations with each of these functions to decide on the received vector. With the increased usage of antenna arrays, this operation might be repeated several times, making it a large computational burden. Through our technique, we can decrease the computations. As we will show shortly, we can also trade off complexity and performance. Perhaps a receiver designer would opt to achieve the maximum performance when the battery is fully charged, but would tend to sacrifice some performance for the sake of extending battery life when the battery is almost discharged.

If we have 8 hypotheses in an 8 dimensional space, and we wish to solve the problem using 3 operations, we map the hypotheses to the vertices of the cube shown in Fig. 3. If we have a signal set of size smaller than 8, and we want to solve the problem in order 3 operations, we map the hypotheses onto a subset of these 8 vertices. This guarantees that, starting from an orthogonal signal set, the projected noise is still white. Therefore our algorithm can be stated as follows:

1.
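The whitening step for the nonorthogonal case can be sketched as follows: reduce the hypotheses to the square vertices with B = U A^{-1} (A holding the v_i as columns), then whiten the projected noise with the inverse Cholesky factor of B B^T. All matrix names and the random signal set are illustrative assumptions:

```python
import numpy as np

# Sketch of the nonorthogonal M = 4 procedure: B = U A^{-1} sends v_i to the
# ith square vertex; the inverse Cholesky factor of B B^T then whitens the
# projected noise. A, U, B, Wh are illustrative names, not from the paper.

rng = np.random.default_rng(3)
M = 4
A = rng.standard_normal((M, M))            # columns are the nonorthogonal v_i
U = 0.5 * np.array([[1.0,  1.0, -1.0, -1.0],
                    [1.0, -1.0,  1.0, -1.0]])
B = U @ np.linalg.inv(A)                   # maps v_i onto the square vertices
L = np.linalg.cholesky(B @ B.T)
Wh = np.linalg.inv(L)                      # whitener: Wh B B^T Wh^T = I
C = Wh @ B                                 # overall P x N transform
vertices = (C @ A).T                       # whitened constellation (as in Fig. 2)
print(np.round(C @ C.T, 6))                # projected-noise covariance is now I
```

After whitening, the constellation is a sheared version of the square, which is why the comparison count can exceed P as discussed above.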
Map the M hypotheses to the vertices of a parallelepiped of dimension P, where log2(M) <= P <= M.

2. Solve the detection problem in the reduced dimensional space, using P matched filter operations. We can use the nearest neighbor algorithm to solve the problem in the reduced dimensional space. We can
prove that the nearest neighbor in the reduced dimensional space is the optimal receiver in that space: we have M vectors, and we receive r = v_i + noise, where i is an integer between 1 and M. We project the received vector, r, by multiplying it by P row vectors u_j, 1 <= j <= P. Therefore we get

    r_p = v_i^p + n    (13)

where r_p is the projected received vector and v_i^p is the projection of v_i. If the u_j's are orthogonal, n will be a P-component white Gaussian noise vector. Therefore the problem is equivalent to a smaller problem, and the optimal receiver is the nearest neighbor in the new space.

3. With some additions and comparisons, we can decide on the true hypothesis.

Notice that in the case of orthogonal hypotheses, the parallelepiped is a cube.

[Figure 3. Projection of a size 8 signal set onto a 3 dimensional space; the cube vertices are (±1/sqrt(8), ±1/sqrt(8), ±1/sqrt(8)).]

3.3. Higher Complexity Solutions

In some applications, we need to trade off complexity and performance, so we want to increase the complexity in steps if doing so achieves a desired gain in performance. Our projection approach allows this by letting us progressively project onto subspaces of dimension P >= log2(M). Each added projection vector is chosen to give the largest increase in the minimum distance between the projected hypotheses, and so on. Fig. 4 shows how the minimum distance changes with increasing dimensionality at several values of M. Notice that although the flat regions in the graphs correspond to no increase in the minimum distance, we still gain some performance by increasing the dimensionality in these regions, as the number of nearest neighbors decreases as we project onto more vectors. To see how this is done, assume, without loss of generality, that we have an orthogonal signal set composed of M unit vectors in an M dimensional space.
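The claim around Eq. (13), that orthonormal projection rows u_j leave the noise white in the P-dimensional space, can be verified empirically by estimating the covariance of projected white noise. The dimensions and sample count below are arbitrary:

```python
import numpy as np

# Empirical check of the whiteness claim around Eq. (13): projecting white
# Gaussian noise through a matrix with orthonormal rows yields white Gaussian
# noise in P dimensions, so nearest-neighbor detection stays optimal there.

rng = np.random.default_rng(2)
N, P, trials = 8, 3, 200_000
Q, _ = np.linalg.qr(rng.standard_normal((N, P)))   # N x P, orthonormal columns
U = Q.T                                            # P x N, orthonormal rows u_j

noise = rng.standard_normal((trials, N))           # white noise samples in R^N
proj = noise @ U.T                                 # projected noise samples
cov = proj.T @ proj / trials                       # sample covariance
print(np.round(cov, 2))                            # close to the P x P identity
```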
To project this onto a P = log2(M) dimensional space, we choose the subset of the M length-M Walsh functions corresponding to the optimal parallel constellation. The Walsh functions are an orthogonal set of bipolar vectors with entries ±1/sqrt(M), e.g.

    (1/sqrt(M)) [ 1  1  1  ...  1 ]

When we want to incrementally increase the performance, we project onto the Walsh vector that gives the largest gain in minimum distance.

[Figure 4. Minimum distance (in dB loss from the optimum) versus dimension in an orthogonal signal set.]

For nonorthogonal signal sets, we can proceed as with orthogonal unit vectors, by multiplying the matrix of the hypotheses by its inverse. We have an extra step at the end, namely that of whitening the noise. Fig. 5 shows how the performance increases with increased dimensionality for several values of M. The original hypotheses here were chosen to be a random set of unit norm vectors. For other values of M that are not powers of 2, we can always choose a subset of the projected power-of-2 hypotheses. For example, assume that we have M = 3 unit vectors in a 3 dimensional space. We want to project them onto a 2 dimensional space. We have two options: the constellation shown in Fig. 6, and the one shown in Fig. 7. In Fig. 6, we project the 3 vectors onto 3 corners of the unit square (the one whose vertices correspond to vectors of norm 1), while in Fig. 7, we project the vectors onto the vertices of the unit equilateral triangle. Comparing the minimum distance achieved by these 2 constellations, we found that the latter provides a higher minimum distance. For M = 5 projected onto a 3 dimensional space, we can use a pyramid constellation, or use 5 vertices of the
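The length-M Walsh vectors referred to above can be generated, for M a power of 2, by Sylvester's Hadamard recursion; this construction is standard, though the paper does not spell it out. Scaling by 1/sqrt(M) gives unit-norm rows to project onto:

```python
import numpy as np

# Illustrative construction of length-M Walsh (Hadamard) vectors via
# Sylvester's recursion; projecting onto P of the scaled rows gives the
# incremental scheme described in the text.

def hadamard(m):
    """Sylvester construction: m must be a power of 2."""
    H = np.array([[1.0]])
    while H.shape[0] < m:
        H = np.block([[H, H], [H, -H]])   # [[H, H], [H, -H]] doubles the order
    return H

M = 8
W = hadamard(M) / np.sqrt(M)              # rows: unit-norm bipolar Walsh vectors
print(np.round(W @ W.T, 6))               # orthonormal: equals the identity
```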
cube shown in Fig. 3. For M = 6, we can use 2 parallel constellations of 3. For general M, our projection algorithm for projecting onto P = ceil(log2(M)) dimensions would be:

1. Divide M into parts that are as close together as possible (e.g. 13 = 6 + 7, 7 = 4 + 3, 6 = 3 + 3, etc.).

2. Form the constellation in 2 dimensions using the largest part first, then add on parallel constellations as you increase the dimension.

3. Whiten the noise.

If we want to use more dimensions, we choose an M sized subset of the power-of-2 constellation in that dimension. This subset is chosen according to the parts of M. Fig. 8 shows how we choose the 13 points in 5 dimensions out of the original 16. The solid dots correspond to the 16 points, while the larger circles correspond to the 13 points.

[Figure 5. Minimum distance (in dB loss from the optimum) versus dimension in a nonorthogonal signal set.]

[Figure 6. Constellation 1 for 3 vectors projected onto a plane]

[Figure 7. Constellation 2 for 3 vectors projected onto a plane]

[Figure 8. Constellation of 13 points in 5 dimensions]

4. Conclusion

In this paper we presented a low power detection approach that leads to an almost logarithmic reduction in complexity in solving hypotheses detection problems when these hypotheses are represented by vectors. This approach relies on projecting the hypotheses onto an optimal constellation in a lower dimensional space. It also allows us to trade off complexity and performance by increasing the dimension of the space where we solve our detection problem.

References

[1] M. Nafie and A. Tewfik, "Reduced Complexity M-ary Hypotheses Testing in Wireless Communications," Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, Seattle, Washington.
[2] H. Van Trees, Detection, Estimation and Modulation Theory, Wiley, NY.
[3] P. Gill, W. Murray, and M. Wright, Practical Optimization, Academic Press, CA.
[4] M. Zoltowski, Talk given in EE Dept. Colloquium Series, University of Minnesota, Winter 1998.
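Step 1 of the general-M procedure above amounts to repeatedly splitting a count into two nearly equal parts; a minimal sketch (the helper name is ours, not the paper's):

```python
# Sketch of step 1 of the general-M procedure: split a count into two parts
# that are as close together as possible, matching the examples in the text
# (13 -> 6 + 7, 7 -> 3 + 4, 6 -> 3 + 3).

def split(m):
    """Return (floor(m/2), ceil(m/2))."""
    return m // 2, m - m // 2

print(split(13))
print(split(7))
```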
More informationÇANKAYA UNIVERSITY Department of Industrial Engineering SPRING SEMESTER
TECHNIQUES FOR CONTINOUS SPACE LOCATION PROBLEMS Continuous space location models determine the optimal location of one or more facilities on a two-dimensional plane. The obvious disadvantage is that the
More informationThe Simplex Algorithm
The Simplex Algorithm Uri Feige November 2011 1 The simplex algorithm The simplex algorithm was designed by Danzig in 1947. This write-up presents the main ideas involved. It is a slight update (mostly
More informationBumptrees for Efficient Function, Constraint, and Classification Learning
umptrees for Efficient Function, Constraint, and Classification Learning Stephen M. Omohundro International Computer Science Institute 1947 Center Street, Suite 600 erkeley, California 94704 Abstract A
More information6 Randomized rounding of semidefinite programs
6 Randomized rounding of semidefinite programs We now turn to a new tool which gives substantially improved performance guarantees for some problems We now show how nonlinear programming relaxations can
More informationRobust Kernel Methods in Clustering and Dimensionality Reduction Problems
Robust Kernel Methods in Clustering and Dimensionality Reduction Problems Jian Guo, Debadyuti Roy, Jing Wang University of Michigan, Department of Statistics Introduction In this report we propose robust
More informationCS 534: Computer Vision Segmentation II Graph Cuts and Image Segmentation
CS 534: Computer Vision Segmentation II Graph Cuts and Image Segmentation Spring 2005 Ahmed Elgammal Dept of Computer Science CS 534 Segmentation II - 1 Outlines What is Graph cuts Graph-based clustering
More informationTHE CAMERA TRANSFORM
On-Line Computer Graphics Notes THE CAMERA TRANSFORM Kenneth I. Joy Visualization and Graphics Research Group Department of Computer Science University of California, Davis Overview To understanding the
More informationIntroduction to Machine Learning
Introduction to Machine Learning Maximum Margin Methods Varun Chandola Computer Science & Engineering State University of New York at Buffalo Buffalo, NY, USA chandola@buffalo.edu Chandola@UB CSE 474/574
More informationScanning Real World Objects without Worries 3D Reconstruction
Scanning Real World Objects without Worries 3D Reconstruction 1. Overview Feng Li 308262 Kuan Tian 308263 This document is written for the 3D reconstruction part in the course Scanning real world objects
More informationMATH 890 HOMEWORK 2 DAVID MEREDITH
MATH 890 HOMEWORK 2 DAVID MEREDITH (1) Suppose P and Q are polyhedra. Then P Q is a polyhedron. Moreover if P and Q are polytopes then P Q is a polytope. The facets of P Q are either F Q where F is a facet
More informationComputer vision: models, learning and inference. Chapter 13 Image preprocessing and feature extraction
Computer vision: models, learning and inference Chapter 13 Image preprocessing and feature extraction Preprocessing The goal of pre-processing is to try to reduce unwanted variation in image due to lighting,
More informationAn optimization method for generating self-equilibrium shape of curved surface from developable surface
25-28th September, 2017, Hamburg, Germany Annette Bögle, Manfred Grohmann (eds.) An optimization method for generating self-equilibrium shape of curved surface from developable surface Jinglan CI *, Maoto
More informationInteger Programming Theory
Integer Programming Theory Laura Galli October 24, 2016 In the following we assume all functions are linear, hence we often drop the term linear. In discrete optimization, we seek to find a solution x
More informationComputer Graphics. The Two-Dimensional Viewing. Somsak Walairacht, Computer Engineering, KMITL
Computer Graphics Chapter 6 The Two-Dimensional Viewing Somsak Walairacht, Computer Engineering, KMITL Outline The Two-Dimensional Viewing Pipeline The Clipping Window Normalization and Viewport Transformations
More information1 The range query problem
CS268: Geometric Algorithms Handout #12 Design and Analysis Original Handout #12 Stanford University Thursday, 19 May 1994 Original Lecture #12: Thursday, May 19, 1994 Topics: Range Searching with Partition
More informationThe Curse of Dimensionality
The Curse of Dimensionality ACAS 2002 p1/66 Curse of Dimensionality The basic idea of the curse of dimensionality is that high dimensional data is difficult to work with for several reasons: Adding more
More information(Refer Slide Time: 00:02:02)
Computer Graphics Prof. Sukhendu Das Dept. of Computer Science and Engineering Indian Institute of Technology, Madras Lecture - 20 Clipping: Lines and Polygons Hello and welcome everybody to the lecture
More informationDesign and Analysis of Algorithms 演算法設計與分析. Lecture 7 April 15, 2015 洪國寶
Design and Analysis of Algorithms 演算法設計與分析 Lecture 7 April 15, 2015 洪國寶 1 Course information (5/5) Grading (Tentative) Homework 25% (You may collaborate when solving the homework, however when writing
More informationGeometric Considerations for Distribution of Sensors in Ad-hoc Sensor Networks
Geometric Considerations for Distribution of Sensors in Ad-hoc Sensor Networks Ted Brown, Deniz Sarioz, Amotz Bar-Noy, Tom LaPorta, Dinesh Verma, Matthew Johnson, Hosam Rowaihy November 20, 2006 1 Introduction
More information2. CNeT Architecture and Learning 2.1. Architecture The Competitive Neural Tree has a structured architecture. A hierarchy of identical nodes form an
Competitive Neural Trees for Vector Quantization Sven Behnke and Nicolaos B. Karayiannis Department of Mathematics Department of Electrical and Computer Science and Computer Engineering Martin-Luther-University
More informationLectures 19: The Gauss-Bonnet Theorem I. Table of contents
Math 348 Fall 07 Lectures 9: The Gauss-Bonnet Theorem I Disclaimer. As we have a textbook, this lecture note is for guidance and supplement only. It should not be relied on when preparing for exams. In
More informationRobust PDF Table Locator
Robust PDF Table Locator December 17, 2016 1 Introduction Data scientists rely on an abundance of tabular data stored in easy-to-machine-read formats like.csv files. Unfortunately, most government records
More informationICA. PCs. ICs. characters. E WD -1/2. feature. image. vector. feature. feature. selection. selection.
A Study of Feature Extraction and Selection Using Independent Component Analysis Seiichi Ozawa Div. of System Function Science Graduate School of Science and Technology Kobe University Kobe 657-851, JAPAN
More informationColor Dithering with n-best Algorithm
Color Dithering with n-best Algorithm Kjell Lemström, Jorma Tarhio University of Helsinki Department of Computer Science P.O. Box 26 (Teollisuuskatu 23) FIN-00014 University of Helsinki Finland {klemstro,tarhio}@cs.helsinki.fi
More informationICS 161 Algorithms Winter 1998 Final Exam. 1: out of 15. 2: out of 15. 3: out of 20. 4: out of 15. 5: out of 20. 6: out of 15.
ICS 161 Algorithms Winter 1998 Final Exam Name: ID: 1: out of 15 2: out of 15 3: out of 20 4: out of 15 5: out of 20 6: out of 15 total: out of 100 1. Solve the following recurrences. (Just give the solutions;
More informationAnnouncements. Edges. Last Lecture. Gradients: Numerical Derivatives f(x) Edge Detection, Lines. Intro Computer Vision. CSE 152 Lecture 10
Announcements Assignment 2 due Tuesday, May 4. Edge Detection, Lines Midterm: Thursday, May 6. Introduction to Computer Vision CSE 152 Lecture 10 Edges Last Lecture 1. Object boundaries 2. Surface normal
More informationOn Covering a Graph Optimally with Induced Subgraphs
On Covering a Graph Optimally with Induced Subgraphs Shripad Thite April 1, 006 Abstract We consider the problem of covering a graph with a given number of induced subgraphs so that the maximum number
More informationRectangular Partitioning
Rectangular Partitioning Joe Forsmann and Rock Hymas Introduction/Abstract We will look at a problem that I (Rock) had to solve in the course of my work. Given a set of non-overlapping rectangles each
More informationHyperplane Ranking in. Simple Genetic Algorithms. D. Whitley, K. Mathias, and L. Pyeatt. Department of Computer Science. Colorado State University
Hyperplane Ranking in Simple Genetic Algorithms D. Whitley, K. Mathias, and L. yeatt Department of Computer Science Colorado State University Fort Collins, Colorado 8523 USA whitley,mathiask,pyeatt@cs.colostate.edu
More informationImage Processing. Image Features
Image Processing Image Features Preliminaries 2 What are Image Features? Anything. What they are used for? Some statements about image fragments (patches) recognition Search for similar patches matching
More informationLifting Transform, Voronoi, Delaunay, Convex Hulls
Lifting Transform, Voronoi, Delaunay, Convex Hulls Subhash Suri Department of Computer Science University of California Santa Barbara, CA 93106 1 Lifting Transform (A combination of Pless notes and my
More information5. GENERALIZED INVERSE SOLUTIONS
5. GENERALIZED INVERSE SOLUTIONS The Geometry of Generalized Inverse Solutions The generalized inverse solution to the control allocation problem involves constructing a matrix which satisfies the equation
More informationProgramming, numerics and optimization
Programming, numerics and optimization Lecture C-4: Constrained optimization Łukasz Jankowski ljank@ippt.pan.pl Institute of Fundamental Technological Research Room 4.32, Phone +22.8261281 ext. 428 June
More informationCSE 417 Dynamic Programming (pt 4) Sub-problems on Trees
CSE 417 Dynamic Programming (pt 4) Sub-problems on Trees Reminders > HW4 is due today > HW5 will be posted shortly Dynamic Programming Review > Apply the steps... 1. Describe solution in terms of solution
More informationRASTERIZING POLYGONS IN IMAGE SPACE
On-Line Computer Graphics Notes RASTERIZING POLYGONS IN IMAGE SPACE Kenneth I. Joy Visualization and Graphics Research Group Department of Computer Science University of California, Davis A fundamental
More informationChapter 11 Arc Extraction and Segmentation
Chapter 11 Arc Extraction and Segmentation 11.1 Introduction edge detection: labels each pixel as edge or no edge additional properties of edge: direction, gradient magnitude, contrast edge grouping: edge
More informationUsing Genetic Algorithms to Solve the Box Stacking Problem
Using Genetic Algorithms to Solve the Box Stacking Problem Jenniffer Estrada, Kris Lee, Ryan Edgar October 7th, 2010 Abstract The box stacking or strip stacking problem is exceedingly difficult to solve
More informationConvexization in Markov Chain Monte Carlo
in Markov Chain Monte Carlo 1 IBM T. J. Watson Yorktown Heights, NY 2 Department of Aerospace Engineering Technion, Israel August 23, 2011 Problem Statement MCMC processes in general are governed by non
More informationS(x) = arg min s i 2S kx
8 Clustering This topic will focus on automatically grouping data points into subsets of similar points. There are numerous ways to define this problem, and most of them are quite messy. And many techniques
More information2 Geometry Solutions
2 Geometry Solutions jacques@ucsd.edu Here is give problems and solutions in increasing order of difficulty. 2.1 Easier problems Problem 1. What is the minimum number of hyperplanar slices to make a d-dimensional
More informationCS 5540 Spring 2013 Assignment 3, v1.0 Due: Apr. 24th 11:59PM
1 Introduction In this programming project, we are going to do a simple image segmentation task. Given a grayscale image with a bright object against a dark background and we are going to do a binary decision
More information