Spatial-Spectral Dimensionality Reduction of Hyperspectral Imagery with Partial Knowledge of Class Labels


Spatial-Spectral Dimensionality Reduction of Hyperspectral Imagery with Partial Knowledge of Class Labels

Nathan D. Cahill, Selene E. Chew, and Paul S. Wenger

Center for Applied and Computational Mathematics, School of Mathematical Sciences, Rochester Institute of Technology, Rochester, NY 14623, USA

ABSTRACT

Laplacian Eigenmaps (LE) and Schroedinger Eigenmaps (SE) are effective dimensionality reduction algorithms that are capable of integrating both the spatial and spectral information inherent in a hyperspectral image. In this paper, we consider how to extend LE- and SE-based spatial-spectral dimensionality reduction algorithms to situations where partial knowledge of class labels exists, for example, when a subset of pixels has been manually labeled by an expert user. This partial knowledge is incorporated through the use of cluster potentials, turning each underlying algorithm into an instance of SE. Using publicly available data, we show that incorporating this partial knowledge improves the performance of subsequent classification algorithms.

Keywords: Dimensionality reduction, Laplacian eigenmaps, Schroedinger eigenmaps, spatial-spectral fusion

1. INTRODUCTION

Dimensionality reduction algorithms have been used in a variety of hyperspectral imaging analysis applications to provide low-dimensional representations of the image data. Since the original image data cannot be assumed to be linear and may implicitly reside on a nonlinear manifold in a high-dimensional space, it is important that dimensionality reduction algorithms yield representations that preserve the structure of the manifold. A variety of nonlinear approaches to dimensionality reduction have been investigated with respect to applications in hyperspectral imaging, including Locally Linear Embedding (LLE), 2 Isometric Feature Mapping (ISOMAP), 3 Kernel Principal Components Analysis (KPCA), 4 Laplacian Eigenmaps (LE), 5 and Schroedinger Eigenmaps (SE).
Many nonlinear dimensionality reduction algorithms are capable of integrating or fusing both the spatial and spectral information inherent in hyperspectral imagery to provide low-dimensional representations that can be effectively used as input for clustering, segmentation, and classification of hyperspectral imagery. In this paper, we consider how to extend LE- and SE-based spatial-spectral dimensionality reduction algorithms 7-12 to situations where partial knowledge of class labels exists, for example, when a subset of pixels has been manually labeled by an expert user. This partial knowledge is incorporated through the use of cluster potentials, turning each underlying algorithm into an instance of SE. Using publicly available hyperspectral image data (Indian Pines), we manually identify small subsets of pixels with ground-truth labels to be incorporated as partial knowledge in various spatial-spectral dimensionality reduction algorithms. With the resulting low-dimensional representations (generated both with and without the partial knowledge), we carry out Support Vector Machine (SVM)-based classification to predict class labels for all pixels. Our analysis shows that incorporating partial knowledge of class labels in the low-dimensional representations yields classification results that are competitive with or superior to those based on using low-dimensional representations generated without partial knowledge.
The remainder of this paper is organized in the following manner: Section 2 provides mathematical preliminaries describing the LE and SE dimensionality reduction algorithms, and how spatial and spectral information can be fused within these algorithms; Section 3 shows how partial knowledge of class labels can be incorporated into LE- and SE-based spatial-spectral dimensionality reduction; Section 4 describes how the partial knowledge can be propagated in local neighborhoods on the manifold; Section 5 uses the resulting reduced-dimensional representations to perform classification experiments on publicly available data, illustrating improved results when partial knowledge is incorporated; and Section 6 provides conclusions and future work. Send correspondence to Nathan D. Cahill: nathan.cahill@rit.edu

2. MATHEMATICAL PRELIMINARIES

In this section, we will describe the LE and SE algorithms, and we will detail various approaches that have been developed to incorporate or fuse spatial and spectral components of the high-dimensional data in the dimensionality reduction process. We will denote X = {x_1, ..., x_k} to be a set of points on a manifold M ⊂ R^n, where n is assumed to be large. Each dimensionality reduction algorithm will identify a set of corresponding points Y = {y_1, ..., y_k} in R^m, where m ≪ n, so that the relationships of points in Y are similar to the relationships of the corresponding points in X.

2.1 Laplacian Eigenmaps

Laplacian Eigenmaps 13 (LE) is a popular graph-based dimensionality reduction algorithm that was introduced by Belkin and Niyogi in 2003 and involves the following three steps:

1. Construct an undirected graph G = (X, E) whose vertices are the points in X and whose edges E are defined based on proximity between vertices.

2. Define a weight W_{i,j} for each edge between x_i and x_j in E.

3. Compute the smallest m+1 eigenvalues and eigenvectors of the generalized eigenvector problem Lf = λDf, where D is the diagonal weighted degree matrix defined by D_{i,i} = Σ_j W_{i,j}, and L = D − W is the Laplacian matrix.

If the resulting eigenvectors f_0, f_1, ..., f_m are ordered so that 0 = λ_0 ≤ λ_1 ≤ ... ≤ λ_m, then the points y_1^T, y_2^T, ..., y_k^T are defined to be the rows of F = [f_1 f_2 ... f_m].

One of the great strengths of LE is its flexibility in allowing different ways to define edges and edge weights. Common ways to define edges use ε-neighborhoods or (mutual) k-nearest neighbors search in some metric space. To define edge weights, the heat kernel is a common choice; i.e., the weight W_{i,j} is defined to be exp(−‖x_i − x_j‖²/σ) if an edge exists between x_i and x_j, or zero otherwise.

2.2 Schroedinger Eigenmaps

Schroedinger Eigenmaps 6,14 (SE), a straightforward, yet powerful, generalization of LE, incorporates a potential matrix V that encodes extra information about the data that may be available.
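The three LE steps above can be sketched in a few lines of code. This is a toy illustration under our own choices of names and default parameters, not the authors' implementation:

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def laplacian_eigenmaps(X, m, k=5, sigma=1.0):
    """Toy LE: symmetrized k-NN graph, heat-kernel weights, then the
    generalized eigenproblem L f = lambda D f."""
    D2 = cdist(X, X, "sqeuclidean")
    # Step 1: k-nearest-neighbour edges (symmetrized so G is undirected)
    nbrs = np.argsort(D2, axis=1)[:, 1:k + 1]      # skip self at column 0
    A = np.zeros(D2.shape, dtype=bool)
    A[np.repeat(np.arange(len(X)), k), nbrs.ravel()] = True
    A |= A.T
    # Step 2: heat-kernel weights on the edges
    W = np.where(A, np.exp(-D2 / sigma), 0.0)
    # Step 3: smallest eigenvectors of L f = lambda D f
    D = np.diag(W.sum(axis=1))
    L = D - W
    vals, vecs = eigh(L, D)            # ascending eigenvalues
    return vecs[:, 1:m + 1]            # drop the trivial constant eigenvector
```

The rows of the returned array play the role of the points y_1^T, ..., y_k^T described above.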
The potential matrix includes barrier potentials for points in X that pull the corresponding points in Y towards the origin, and/or cluster potentials for points in X that pull the corresponding points in Y towards each other. Barrier potentials are created by defining V to be a nonnegative diagonal matrix, with V_{i,i} defined to be positive for each of the selected x_i's. Cluster potentials are created by defining V to be a weighted sum of nondiagonal matrices V^{(i,j)} that encode individual cluster potentials between x_i and x_j:

V^{(i,j)}_{k,l} =  1, (k,l) ∈ {(i,i), (j,j)};  -1, (k,l) ∈ {(i,j), (j,i)};  0, otherwise.    (1)

With a potential matrix defined, SE proceeds in the same manner as LE, but with the generalized eigenvector problem in step 3 replaced by the problem (L + αV)f = λDf, where α is a parameter chosen to relatively weight the contributions of the Laplacian matrix and potential matrix. It is the ability of SE to incorporate cluster potentials that we will exploit in Section 3 to encode partial knowledge of class labels. We will define cluster potentials for pairs of points in X that have the same known class label.

2.3 Spatial-Spectral Fusion

The structure of the manifold M is influenced by the spectral (intensity) information at each pixel in an image as well as the spatial relationships between the spectra of neighboring pixels. A variety of techniques have been used to incorporate both spectral and spatial information in LE- and SE-based dimensionality reduction. To summarize these techniques in a common framework, we first consider each manifold point x_i to be represented by concatenating the pixel's spectral information x_i^f with its spatial location x_i^p; i.e., x_i = [ (x_i^f)^T (x_i^p)^T ]^T.
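As a concrete check of (1) (a sketch; the function name is our own), note that the quadratic form of V^{(i,j)} satisfies f^T V^{(i,j)} f = (f_i − f_j)², which is exactly the penalty that pulls y_i and y_j together in the SE objective:

```python
import numpy as np

def cluster_potential(k, i, j):
    """V^(i,j) from Eq. (1): +1 at (i,i) and (j,j), -1 at (i,j) and (j,i)."""
    V = np.zeros((k, k))
    V[i, i] = V[j, j] = 1.0
    V[i, j] = V[j, i] = -1.0
    return V

# The quadratic form reduces to a pairwise squared difference:
f = np.array([0.2, -1.0, 0.5, 3.0])
V = cluster_potential(4, 1, 3)
assert np.isclose(f @ V @ f, (f[1] - f[3]) ** 2)
```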

LE-Based Spatial-Spectral Fusion

Various references propose different ways of incorporating spatial and spectral information into LE by defining specific ways to determine graph edges and compute their weights:

Shi-Malik (SM): 7,8 Edge between x_i and x_j if ‖x_i^p − x_j^p‖ < ε; edge weights defined by:

W_{i,j} = exp( −‖x_i^f − x_j^f‖²/σ_f² − ‖x_i^p − x_j^p‖²/σ_p² ), (x_i, x_j) ∈ E;  0, otherwise.    (2)

Gillis-Bowles (GB): 9 Edge between x_i and x_j if ‖x_i^p − x_j^p‖ < ε; edge weights defined by:

W_{i,j} = exp( −cos⁻¹( ⟨x_i^f, x_j^f⟩ / (‖x_i^f‖ ‖x_j^f‖) ) − ‖x_i^p − x_j^p‖²/σ_p² ), (x_i, x_j) ∈ E;  0, otherwise.    (3)

Hou-Zhang-Ye-Zheng (HZYZ): 10 Edge between x_i and x_j if x_i and x_j are mutually in the k-nearest neighbors of each other according to the measure:

d(x_i, x_j) = exp( ‖x_i^f − x_j^f‖²/(2σ_f²) ) ( 2 − exp( −‖x_i^p − x_j^p‖²/(2σ_p²) ) );    (4)

edge weights defined by:

W_{i,j} = 1, (x_i, x_j) ∈ E;  0, otherwise.    (5)

Benedetto et al. Fused Metric (BM): 11 Edge between x_i and x_j if x_i and x_j are mutually in the k-nearest neighbors of each other according to the measure:

d_β(x_i, x_j) = ( β ‖x_i^f − x_j^f‖²/σ_f² + (1 − β) ‖x_i^p − x_j^p‖²/σ_p² )^{1/2},    (6)

where 0 ≤ β ≤ 1; edge weights defined by:

W^{(β)}_{i,j} = exp( −d_β(x_i, x_j)² ), (x_i, x_j) ∈ E_β;  0, otherwise.    (7)

A slightly different approach involves constructing two graphs:

Benedetto et al. Fused Eigenvectors (BE): 11

1. Construct graphs G_0 and G_1 so that the sets of edges E_0 and E_1 are defined based on mutual k-nearest neighbors according to the metrics d_0 and d_1 from (6), respectively.

2. Define edge weights for G_0 and G_1 according to (7) with β = 0 and 1, respectively.

3. Choose m_0 and m_1 so that m_0 + m_1 = m. Compute the smallest m_0 + 1 eigenvalues and eigenvectors of L^(0) f^(0) = λ D^(0) f^(0), and compute the smallest m_1 + 1 eigenvalues and eigenvectors of L^(1) f^(1) = λ D^(1) f^(1). Assuming each set of eigenvectors is sorted so that the eigenvalues are increasing, the points y_1^T, y_2^T, ..., y_k^T are defined to be the rows of F = [ f_1^(0) ... f_{m_0}^(0) f_1^(1) ... f_{m_1}^(1) ].
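The fused metric (6) interpolates between a purely spatial distance (β = 0) and a purely spectral distance (β = 1). A minimal sketch, with function and variable names of our own choosing:

```python
import numpy as np

def d_beta(xf_i, xf_j, xp_i, xp_j, beta, sigma_f, sigma_p):
    """Fused metric of Eq. (6): convex combination of scaled squared
    spectral and spatial distances, followed by a square root."""
    spec = np.sum((xf_i - xf_j) ** 2) / sigma_f ** 2
    spat = np.sum((xp_i - xp_j) ** 2) / sigma_p ** 2
    return np.sqrt(beta * spec + (1.0 - beta) * spat)

f1, f2 = np.array([1.0, 2.0]), np.array([4.0, 6.0])
p1, p2 = np.array([0.0, 0.0]), np.array([3.0, 4.0])
# beta = 1 recovers the scaled spectral distance used for G_1 in BE;
# beta = 0 recovers the scaled spatial distance used for G_0.
assert np.isclose(d_beta(f1, f2, p1, p2, 1.0, 1.0, 1.0), 5.0)
assert np.isclose(d_beta(f1, f2, p1, p2, 0.0, 1.0, 1.0), 5.0)
```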

SE-Based Spatial-Spectral Fusion

Cahill et al. 12 proposed the Spatial-Spectral Schroedinger Eigenmaps (SSSE) algorithm, which defines graphs with spectral information and uses cluster potentials to encode spatial proximity. Edges are defined based on proximity between the spectral components of the vertices, and edge weights are defined according to:

W_{i,j} = exp( −‖x_i^f − x_j^f‖²/σ_f² ), (x_i, x_j) ∈ E;  0, otherwise.    (8)

A cluster potential matrix V is defined to encode proximity between the spatial components of the vertices:

V = Σ_{i=1}^{k} Σ_{x_j ∈ N_ε^p(x_i)} γ_{i,j} exp( −‖x_i^p − x_j^p‖²/σ_p² ) V^{(i,j)},    (9)

where N_ε^p(x_i) is the set of points in X whose spatial components are in an ε-neighborhood of the spatial components of x_i; i.e.,

N_ε^p(x_i) = { x ∈ X \ {x_i} s.t. ‖x_i^p − x^p‖ ≤ ε },    (10)

V^{(i,j)} is defined as in (1), and γ_{i,j} can be chosen in a manner that provides greater influence for spatial neighbors having nearby spectral components. Cahill et al. 12 proposed two versions of SSSE: SSSE1, with γ_{i,j} = exp( −‖x_i^f − x_j^f‖²/σ_f² ), and SSSE2, with γ_{i,j} = exp( −cos⁻¹( ⟨x_i^f, x_j^f⟩ / (‖x_i^f‖ ‖x_j^f‖) ) ). With these choices for γ_{i,j}, the coefficients of each V^{(i,j)} in (9) are equivalent to the Shi-Malik edge weights in (2) or the Gillis-Bowles edge weights in (3), respectively.

3. PARTIAL KNOWLEDGE OF CLASS LABELS AS CLUSTER POTENTIALS

In many scenarios, an expert user or analyst can be queried to identify a small set of pixels or regions of an image that should have the same class labels. Wagstaff et al. 15 define this type of partial knowledge of class labels as must-link constraints. Normalized Cuts, 7,8 a graph-based algorithm for data clustering and image segmentation, solves the same generalized eigenvector problem as LE; it has been generalized to handle must-link constraints, either as hard 16,17 or soft 18 constraints.
We extend the idea of Chew and Cahill 18 to show how soft must-link constraints in data clustering have a natural analog in cluster potentials for dimensionality reduction. To formalize this idea for graph-based dimensionality reduction, suppose that each of the ordered pairs in the set M = {(x_{i_1}, x_{j_1}), (x_{i_2}, x_{j_2}), ..., (x_{i_m}, x_{j_m})} represents two graph vertices that should ultimately be given the same class label. By introducing cluster potentials for each of these ordered pairs, we can bias the dimensionality reduction process to yield lower dimensional representations in which the distances ‖y_{i_k} − y_{j_k}‖, k = 1, 2, ..., m, are small. These cluster potentials can be encoded in a matrix M given by:

M = Σ_{(x_{i_k}, x_{j_k}) ∈ M} η_{i_k,j_k} V^{(i_k,j_k)},    (11)

where V^{(i,j)} is defined as in (1), and η_{i,j} can be chosen to weight each must-link constraint individually, if desired.

All of the LE-based dimensionality reduction techniques described in Section 2.3 are based on solving generalized eigenvector problems of the form Lf = λDf. By adding a scalar multiple of the cluster potential matrix M to the graph Laplacian so that the generalized eigenvector problems take the form (L + βM)f = λDf, we can bias the representations toward satisfying the must-link constraints. This modification of the generalized eigenvector problems turns each original LE-based algorithm into an instance of Schroedinger Eigenmaps. The SSSE algorithm in Section 2.3 can also be extended to incorporate partial knowledge in the form of must-link constraints by changing the generalized eigenvector problem (L + αV)f = λDf to (L + αV + βM)f = λDf.
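The effect of (L + βM)f = λDf can be seen on a toy graph. In this sketch (our own setup, not the paper's data), the two endpoints of a path graph are must-linked with a single cluster potential; a large β pulls their embedded coordinates together:

```python
import numpy as np
from scipy.linalg import eigh

def embed(L, D, M, beta, m):
    """Solve (L + beta*M) f = lambda * D f; return m nontrivial eigenvectors."""
    vals, vecs = eigh(L + beta * M, D)
    return vecs[:, 1:m + 1]

# Path graph on six vertices
W = np.diag(np.ones(5), 1)
W = W + W.T
D = np.diag(W.sum(axis=1))
L = D - W

# Must-link the two endpoints: Eq. (11) with a single pair and eta = 1
M = np.zeros((6, 6))
M[0, 0] = M[5, 5] = 1.0
M[0, 5] = M[5, 0] = -1.0

y_free = embed(L, D, M, beta=0.0, m=1)     # plain LE
y_link = embed(L, D, M, beta=100.0, m=1)   # strong must-link constraint
# The constraint shrinks the gap between the endpoint embeddings:
assert abs(y_link[0, 0] - y_link[5, 0]) < abs(y_free[0, 0] - y_free[5, 0])
```

Without the constraint, the leading nontrivial eigenvector of a path graph separates the endpoints maximally; the penalty term β (f_0 − f_5)² forces them to nearly coincide.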

Figure 1: Example of incorporating partial knowledge in dimensionality reduction: (a) Indian Pines image (spectral bands 29, 5, 2) with zoomed-in region, (b) ground truth class labels (orange = corn-notill, blue = soybean-mintill), (c) SVM-based class labels after LE dimensionality reduction, with corn-notill region incorrectly labeled as soybean-mintill, (d) orange points manually labeled by an expert user indicating pixels that have the same class label, and (e) SVM-based class labels after dimensionality reduction that incorporates the partial knowledge indicated in (d), with corn-notill region correctly labeled.

Choosing the appropriate weights η_{i_k,j_k} and β can be tricky. In fact, there is some redundancy: multiplying each η_{i_k,j_k} by c and dividing β by c yields exactly the same generalized eigenvector problem for any c > 0. One way to enable a consistent interpretation of β is to define the weights η_{i_k,j_k} so that tr(M) = tr(L). Hence, if all ordered pairs in M are to be given equal weight (which is reasonable if all ordered pairs should be assigned the same class label), one should define η_{i_k,j_k} = tr(L)/(2m). β can initially be set to 1, and it can be increased or decreased logarithmically until the resulting representation appears optimal to the user.

More generally, consider the possibility that the ordered pairs in M can be partitioned into l different groups (corresponding to l different class labels), with m_1 ordered pairs assigned to label 1, m_2 ordered pairs assigned to label 2, etc., and m_1 + m_2 + ... + m_l = m. If the must-link constraints should be equally weighted across class labels, then the weight for each ordered pair corresponding to class label ν should be defined as η_{i_k,j_k} = tr(L)/(2l m_ν).

To illustrate the impact of incorporating partial knowledge in dimensionality reduction, consider the publicly available Indian Pines hyperspectral image, shown rendered in RGB in Figure 1a.
The image has an associated set of ground truth pixels encompassing 16 distinct classes, as shown in Figure 1b. Performing LE-based dimensionality reduction (with SM graph construction), followed by SVM-based classification, yields the predicted class labels shown in Figure 1c. In the zoomed-in region, we see that a rectangular subregion of orange (corn-notill) pixels has been mislabeled as belonging to the blue class (soybean-mintill). By allowing an expert user to use a paintbrush tool to manually identify a few small sets of pixels across the image (shown highlighted in orange in Figure 1d) as belonging to the same class, a cluster potential matrix can be defined for these manually labeled pixels. Incorporating this cluster potential matrix into the dimensionality reduction algorithm and then performing SVM-based classification yields the predicted class labels shown in Figure 1e, which correctly identify the corn-notill and soybean-mintill classes in the zoomed-in region.

4. KNOWLEDGE PROPAGATION

In some situations, cluster potentials that are generated from small sets of manually provided labels may impact dimensionality reduction in a brittle manner. The cluster potentials may correctly bias dimensionality

reduction so that the low-dimensional representations of the manually labeled points are close together, but this behavior may not propagate adequately to other points that are nearby in the high-dimensional space.

Figure 2: Example of the influence of knowledge propagation: (a) ground-truth class labels for the Indian Pines image with zoomed-in regions (orange = corn-notill, light green = grass-pasture); (b) SVM-based class labels after LE dimensionality reduction, with both corn-notill and grass-pasture regions incorrectly labeled; (c) orange and light green points manually labeled by an expert user indicating pixels that have the same class labels; (d) SVM-based class labels after dimensionality reduction incorporating the partial knowledge indicated in (c) without knowledge propagation, illustrating the brittle nature of the grass-pasture cluster potential; and (e) SVM-based class labels after dimensionality reduction with knowledge propagation, yielding correct class labels for both corn-notill and grass-pasture regions.

Consider the example illustrated in Figure 2. The Indian Pines ground truth class labels are shown again in Figure 2a. We will focus specifically on two subregions: the corn-notill (orange) region in the center of the lower zoomed region, and the rectangular grass-pasture (light green) region in the center of the upper zoomed region. Performing LE-based dimensionality reduction (with GB graph construction), followed by SVM-based classification, yields the predicted class labels shown in Figure 2b. Comparing the results to the ground truth, we see that, as in Figure 1c, the corn-notill (orange) region is incorrectly labeled as soybean-mintill (blue); in addition, the grass-pasture (light green) region is incorrectly labeled as a mixture of other classes.
To try to improve on these results, we manually identify six sets of pixels in the orange corn-notill regions and four sets of pixels in the light green grass-pasture regions, as shown in Figure 2c. Constructing cluster potentials between every pair of manually labeled like-colored pixels, incorporating the cluster potential matrix in dimensionality reduction, and performing SVM-based classification yields class labels shown in Figure 2d. A portion of the corn-notill region that was incorrectly labeled in Figure 2b has now been classified correctly; however, the grass-pasture region has now been correctly labeled only in the pixels that were manually identified. This example illuminates the problem that can occur when the provided manual labels are sparse and two or more clusters of points may be present in the high-dimensional space. If manual labels are provided that indicate a point in one of the high-dimensional clusters should be identified with points in a different high-dimensional cluster, dimensionality reduction may yield a result in which the manually labeled points are pulled together in the low-dimensional representation without pulling together the separate clusters.
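The remedy developed in Section 4 conjugates each cluster potential by the normalized affinity W D⁻¹ so that constraints on labeled points spread to their graph neighbors. A minimal sketch with toy matrices of our own choosing; note how a potential linking vertices 0 and 3 of a path graph induces, after smoothing, a linking term between their neighbors 1 and 2:

```python
import numpy as np

def propagate(V, W):
    """Smooth a cluster potential through S = W D^{-1}, giving S V S^T."""
    deg = W.sum(axis=1)
    S = W / deg                       # scales column j of W by 1/d_j
    return S @ V @ S.T

# Path graph on four vertices, potential linking the endpoints 0 and 3
W = np.diag(np.ones(3), 1)
W = W + W.T
V = np.zeros((4, 4))
V[0, 0] = V[3, 3] = 1.0
V[0, 3] = V[3, 0] = -1.0

Vp = propagate(V, W)
assert np.allclose(Vp, Vp.T)              # still symmetric
assert np.allclose(Vp.sum(axis=1), 0.0)   # rows still sum to zero
assert Vp[1, 2] < 0                       # constraint now links 1 and 2 too
```

Because the row sums of the smoothed potential remain zero, the constant eigenvector of the generalized eigenproblem is preserved, just as with the raw potentials.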

This problem has previously been identified in the spectral clustering literature by Yu and Shi. 16 They suggest that for the related clustering problem, constraints on individual points can be smoothed, or propagated, to nearby points. In dimensionality reduction, this can be achieved by replacing the cluster potential matrix M in (11) with:

M = Σ_{(x_{i_k}, x_{j_k}) ∈ M} η_{i_k,j_k} W D⁻¹ V^{(i_k,j_k)} D⁻¹ W,    (12)

where W and D are the weighted adjacency matrix and degree matrix of the graph, respectively. Like the constraint propagation in Yu and Shi, 16 this modification of the cluster potentials encourages points that are close to the manually labeled points in the high-dimensional space to have low-dimensional representations that are also close. Hence, we refer to the use of (12) as opposed to (11) in dimensionality reduction as knowledge propagation.

Referring back to Figure 2, incorporating cluster potentials with knowledge propagation in dimensionality reduction and performing SVM-based classification yields the class labels shown in Figure 2e. Comparing the results to those in Figure 2d, in which knowledge propagation was not used, we see that both the corn-notill and grass-pasture regions have now been classified correctly.

5. CLASSIFICATION EXPERIMENTS

In order to determine the effect of incorporating partial knowledge of class labels in LE- and SE-based dimensionality reduction, we explore the publicly available Indian Pines image and compute various low-dimensional representations without partial knowledge, with partial knowledge, and with partial knowledge and knowledge propagation. Then, using the representations as input, we perform multi-class SVM-based classification. The data, experiments, and results are described in this section.

5.1 Data

The Indian Pines image, shown in Figures 1 and 2, was captured by an AVIRIS spectrometer over the rural Indian Pines test site in Northwestern Indiana, USA.
The image contains 145 × 145 pixels with a spatial resolution of approximately 20 meters per pixel, with 224 spectral bands, 4 of which we have discarded due to noise and water absorption. A set of 10249 ground truth pixels associated with 16 classes has been manually identified by an expert analyst for use in training and testing classification algorithms.

5.2 Experimental Setup

To compare the various dimensionality reduction algorithms without partial knowledge, with partial knowledge, and with partial knowledge and knowledge propagation, we manually label various subsets of pixels to mimic the type of partial knowledge that could be provided by an expert user. Figure 3 shows larger versions of the original Indian Pines image (rendered in RGB) and the corresponding ground truth class labels. In addition, Figure 3 shows four different sets of partial knowledge to be tested: six sets of pixels in the orange corn-notill regions (Case C), four sets of pixels in the light green grass-pasture regions (Case G), the combined sets of orange corn-notill and light green grass-pasture regions (Case CG), and a set of pixels selected from each of the connected ground-truth regions (Case All).

Using these four cases, both with and without knowledge propagation, we compute low-dimensional embeddings using all of the dimensionality reduction algorithms. The low-dimensional embeddings are provided as input to a classifier, through which class labels are predicted and compared to the ground truth. The classifier we use is based on one-versus-all SVMs with Gaussian RBF kernels as implemented in MATLAB's fitcsvm function. We employ the heuristic procedure available in fitcsvm in order to automatically determine the kernel scaling. We select % of the ground truth pixels from each class for training, and we use the remaining ground truth pixels for testing. From the resulting confusion matrices, we compute per-class accuracy, precision, sensitivity, and specificity, as defined in Table 1.
A few notes about data treatment and parameter choices:

Figure 3: Indian Pines image (spectral bands 29, 5, 2), ground truth class labels, and manually provided class labels representing different test cases (Case C: corn-notill; Case G: grass-pasture; Case CG: corn + grass; Case All: all classes) for evaluating dimensionality reduction algorithms.

Measure      Definition
Accuracy     (TP + TN) / (TP + FP + FN + TN)
Precision    TP / (TP + FP)
Sensitivity  TP / (TP + FN)
Specificity  TN / (FP + TN)

Table 1: Definitions of per-class classification performance measures, in terms of true positives (TP), false positives (FP), false negatives (FN), and true negatives (TN).
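The Table 1 measures follow directly from a per-class reading of the confusion matrix. A sketch (rows = true class, columns = predicted class; the function name is our own):

```python
import numpy as np

def per_class_measures(C):
    """Per-class accuracy, precision, sensitivity, specificity (Table 1)."""
    C = np.asarray(C, dtype=float)
    total = C.sum()
    tp = np.diag(C)
    fp = C.sum(axis=0) - tp           # predicted as the class, but not it
    fn = C.sum(axis=1) - tp           # the class, but predicted otherwise
    tn = total - tp - fp - fn
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "precision": tp / (tp + fp),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (fp + tn),
    }

m = per_class_measures([[5, 1], [2, 4]])
assert np.allclose(m["accuracy"], [0.75, 0.75])
```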

Prior to dimensionality reduction, the spectral components of the data in X are normalized so that (1/k) Σ_{i=1}^{k} ‖x_i^f‖² = 1. We also assume that the components of x^p are in units of pixels.

For all algorithms, we choose the reduced dimension to be m = 50. This value was chosen to ensure that the resulting accuracy of representation is high; i.e., that with other parameters fixed, increasing m does not result in significant changes in accuracy.

For algorithms requiring graph construction via k-nearest neighbors (SSSE, HZYZ, BE, BM), we select k = 2. This value was chosen to guarantee that the resulting graphs are connected.

For parameter choices in each dimensionality reduction algorithm, we use the values that were reported optimal in Cahill et al.: 12 SM: ε = 5, σ_f = ., σ_p = ; GB: ε = 5, σ_f = 0.2, σ_p = ; HZYZ: σ_f = , σ_p = ; BE: 8 spatial / 42 spectral eigenvectors, σ_f = , σ_p = ; BM: σ_f = , σ_p = , β = 0.8; SSSE: α = 7.78 tr(L)/tr(V).

5.3 Results

Figures 4-10 illustrate per-class classification performance measures after dimensionality reduction and SVM-based classification. The sequence of figures corresponds to the SM, GB, HZYZ, BM, BE, SSSE1, and SSSE2 dimensionality reduction methods. In each figure, subplots show accuracy, precision, sensitivity, and specificity measures for each partial-knowledge case (C, G, CG, and All). Within each subplot, three symbols indicate how partial knowledge is used: red circles indicate dimensionality reduction was performed without partial knowledge, blue crosses indicate dimensionality reduction was performed with partial knowledge but without knowledge propagation, and black triangles indicate dimensionality reduction was performed with partial knowledge and with knowledge propagation.

A number of observations can be drawn from these figures. For nearly all situations in cases C, CG, and All, each dimensionality reduction algorithm improves all classification measures in the corn-notill regions when prior knowledge is used.
The largest improvements appear to be in SM- and GB-based dimensionality reduction

Figure 4: Per-class classification performance after SM-based dimensionality reduction using: no partial knowledge (red circles), partial knowledge without propagation (blue crosses), and partial knowledge with propagation (black triangles). Vertical dashed lines indicate classes for which partial knowledge is provided.

Figure 5: Per-class classification performance after GB-based dimensionality reduction using: no partial knowledge (red circles), partial knowledge without propagation (blue crosses), and partial knowledge with propagation (black triangles). Vertical dashed lines indicate classes for which partial knowledge is provided.

Figure 6: Per-class classification performance after HZYZ-based dimensionality reduction using: no partial knowledge (red circles), partial knowledge without propagation (blue crosses), and partial knowledge with propagation (black triangles). Vertical dashed lines indicate classes for which partial knowledge is provided.

Figure 7: Per-class classification performance after BM-based dimensionality reduction using: no partial knowledge (red circles), partial knowledge without propagation (blue crosses), and partial knowledge with propagation (black triangles). Vertical dashed lines indicate classes for which partial knowledge is provided.

Figure 8: Per-class classification performance after BE-based dimensionality reduction using: no partial knowledge (red circles), partial knowledge without propagation (blue crosses), and partial knowledge with propagation (black triangles). Vertical dashed lines indicate classes for which partial knowledge is provided.

Figure 9: Per-class classification performance after SSSE1-based dimensionality reduction using: no partial knowledge (red circles), partial knowledge without propagation (blue crosses), and partial knowledge with propagation (black triangles). Vertical dashed lines indicate classes for which partial knowledge is provided.

Figure 10: Per-class classification performance after SSSE2-based dimensionality reduction using: no partial knowledge (red circles), partial knowledge without propagation (blue crosses), and partial knowledge with propagation (black triangles). Vertical dashed lines indicate classes for which partial knowledge is provided.

algorithms. The few situations where there do not appear to be improvements when prior knowledge is used occur when knowledge is not propagated.

For the cases G, CG, and All, classification performance in the grass-pasture regions is already quite good without the use of prior knowledge in dimensionality reduction. Incorporating prior knowledge can minimally improve various performance measures when knowledge propagation is used. However, if knowledge propagation is not used, results can be problematic. Note in particular the SSSE1 and SSSE2 dimensionality reduction algorithms for case G when partial knowledge is used without propagation: all classification performance measures are worse than when partial knowledge is not used at all. In these cases, the pixel regions that were labeled as grass-pasture were all classified incorrectly into a common class. When the partial knowledge is used with propagation, the pixel regions become classified correctly.

Also of interest is how partial knowledge of specific classes impacts classification performance of other classes. When the corn-notill regions are provided as partial knowledge with propagation in SM-, GB-, BM-, and BE-based dimensionality reduction, classification performance is also improved in the soybean-mintill regions (class 11). This observation confirms the results obtained visually in Figure 1e.

6. CONCLUSION

In this paper, we showed how the dimensionality reduction algorithms Laplacian Eigenmaps and Schroedinger Eigenmaps can be extended to incorporate partial knowledge of class labels, such as pixels or regions that have been manually labeled by an expert user. Using cluster potentials to encode the partial knowledge turns each algorithm into an instance of Schroedinger Eigenmaps. With publicly available data, we illustrated that incorporating this partial knowledge yields low-dimensional representations of the data that can improve the performance of subsequent SVM-based classification.
APPENDIX

Prototype implementations of the algorithms presented in this paper are available for download at MATLAB Central under File ID #589.

ACKNOWLEDGEMENTS

The authors would like to thank Prof. Landgrebe (Purdue University, USA) for providing the Indian Pines data. Selene Chew was supported in part by Rochester Institute of Technology's 2014 Honors Summer Undergraduate Research Fellowship program.

REFERENCES

[1] Prasad, S. and Bruce, L., Limitations of principal components analysis for hyperspectral target recognition, IEEE Geoscience and Remote Sensing Letters 5, (Oct 2008).
[2] Kim, D. and Finkel, L., Hyperspectral image processing using locally linear embedding, in [Proc. IEEE EMBS Conference on Neural Engineering], (March 2003).
[3] Bachmann, C., Ainsworth, T., and Fusina, R., Exploiting manifold geometry in hyperspectral imagery, IEEE Transactions on Geoscience and Remote Sensing 43, (March 2005).
[4] Fauvel, M., Chanussot, J., and Benediktsson, J., Kernel principal component analysis for the classification of hyperspectral remote sensing data of urban areas, EURASIP Journal on Advances in Signal Processing 2009(78394), (2009).
[5] Halevy, A., Extensions of Laplacian Eigenmaps for Manifold Learning, PhD thesis, University of Maryland, College Park (2011).
[6] Benedetto, J., Czaja, W., Dobrosotskaya, J., Doster, T., Duke, K., and Gillis, D., Semi-supervised learning of heterogeneous data in remote sensing imagery, in [Proc. SPIE Independent Component Analyses, Compressive Sampling, Wavelets, Neural Net, Biosystems, and Nanoengineering X], 844, (June 2012).

[7] Shi, J. and Malik, J., "Normalized cuts and image segmentation," in [Proc. IEEE Conference on Computer Vision and Pattern Recognition] (1997).
[8] Shi, J. and Malik, J., "Normalized cuts and image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence 22(8) (2000).
[9] Gillis, D. B. and Bowles, J. H., "Hyperspectral image segmentation using spatial-spectral graphs," in [Proc. SPIE Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII], 83900Q (2012).
[10] Hou, B., Zhang, X., Ye, Q., and Zheng, Y., "A novel method for hyperspectral image classification based on Laplacian eigenmap pixels distribution-flow," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 6(3) (2013).
[11] Benedetto, J., Czaja, W., Dobrosotskaya, J., Doster, T., Duke, K., and Gillis, D., "Integration of heterogeneous data for classification in hyperspectral satellite imagery," in [Proc. SPIE Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII], 83927, 2 (June 2012).
[12] Cahill, N. D., Czaja, W., and Messinger, D. W., "Schroedinger eigenmaps with nondiagonal potentials for spatial-spectral clustering of hyperspectral imagery," in [Proc. SPIE Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XX], 9884, 3 (June 2014).
[13] Belkin, M. and Niyogi, P., "Laplacian eigenmaps for dimensionality reduction and data representation," Neural Computation 15 (June 2003).
[14] Czaja, W. and Ehler, M., "Schroedinger eigenmaps for the analysis of biomedical data," IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (May 2013).
[15] Wagstaff, K., Cardie, C., Rogers, S., and Schrödl, S., "Constrained k-means clustering with background knowledge," in [Proceedings of the Eighteenth International Conference on Machine Learning], ICML (2001).
[16] Yu, S. X. and Shi, J., "Segmentation given partial grouping constraints," IEEE Transactions on Pattern Analysis and Machine Intelligence 26 (Feb 2004).
[17] Eriksson, A., Olsson, C., and Kahl, F., "Normalized cuts revisited: A reformulation for segmentation with linear grouping constraints," Journal of Mathematical Imaging and Vision 39(1), 45-61 (2011).
[18] Chew, S. E. and Cahill, N. D., "Normalized cuts with soft must-link constraints for image segmentation and clustering," in [Proc. IEEE Western New York Image and Signal Processing Workshop], 1-6 (November 2014).


More information

Fuzzy Entropy based feature selection for classification of hyperspectral data

Fuzzy Entropy based feature selection for classification of hyperspectral data Fuzzy Entropy based feature selection for classification of hyperspectral data Mahesh Pal Department of Civil Engineering NIT Kurukshetra, 136119 mpce_pal@yahoo.co.uk Abstract: This paper proposes to use

More information

A (somewhat) Unified Approach to Semisupervised and Unsupervised Learning

A (somewhat) Unified Approach to Semisupervised and Unsupervised Learning A (somewhat) Unified Approach to Semisupervised and Unsupervised Learning Ben Recht Center for the Mathematics of Information Caltech April 11, 2007 Joint work with Ali Rahimi (Intel Research) Overview

More information

Design of Orthogonal Graph Wavelet Filter Banks

Design of Orthogonal Graph Wavelet Filter Banks Design of Orthogonal Graph Wavelet Filter Banks Xi ZHANG Department of Communication Engineering and Informatics The University of Electro-Communications Chofu-shi, Tokyo, 182-8585 JAPAN E-mail: zhangxi@uec.ac.jp

More information

Classifying Depositional Environments in Satellite Images

Classifying Depositional Environments in Satellite Images Classifying Depositional Environments in Satellite Images Alex Miltenberger and Rayan Kanfar Department of Geophysics School of Earth, Energy, and Environmental Sciences Stanford University 1 Introduction

More information

CIS 520, Machine Learning, Fall 2015: Assignment 7 Due: Mon, Nov 16, :59pm, PDF to Canvas [100 points]

CIS 520, Machine Learning, Fall 2015: Assignment 7 Due: Mon, Nov 16, :59pm, PDF to Canvas [100 points] CIS 520, Machine Learning, Fall 2015: Assignment 7 Due: Mon, Nov 16, 2015. 11:59pm, PDF to Canvas [100 points] Instructions. Please write up your responses to the following problems clearly and concisely.

More information

Kernel spectral clustering: model representations, sparsity and out-of-sample extensions

Kernel spectral clustering: model representations, sparsity and out-of-sample extensions Kernel spectral clustering: model representations, sparsity and out-of-sample extensions Johan Suykens and Carlos Alzate K.U. Leuven, ESAT-SCD/SISTA Kasteelpark Arenberg B-3 Leuven (Heverlee), Belgium

More information

A Framework of Hyperspectral Image Compression using Neural Networks

A Framework of Hyperspectral Image Compression using Neural Networks A Framework of Hyperspectral Image Compression using Neural Networks Yahya M. Masalmah, Ph.D 1, Christian Martínez-Nieves 1, Rafael Rivera-Soto 1, Carlos Velez 1, and Jenipher Gonzalez 1 1 Universidad

More information

Spatially variant dimensionality reduction for the visualization of multi/hyperspectral images

Spatially variant dimensionality reduction for the visualization of multi/hyperspectral images Author manuscript, published in "International Conference on Image Analysis and Recognition, Burnaby : Canada (2011)" DOI : 10.1007/978-3-642-21593-3_38 Spatially variant dimensionality reduction for the

More information

An Optimized Pixel-Wise Weighting Approach For Patch-Based Image Denoising

An Optimized Pixel-Wise Weighting Approach For Patch-Based Image Denoising An Optimized Pixel-Wise Weighting Approach For Patch-Based Image Denoising Dr. B. R.VIKRAM M.E.,Ph.D.,MIEEE.,LMISTE, Principal of Vijay Rural Engineering College, NIZAMABAD ( Dt.) G. Chaitanya M.Tech,

More information

A Fuzzy C-means Clustering Algorithm Based on Pseudo-nearest-neighbor Intervals for Incomplete Data

A Fuzzy C-means Clustering Algorithm Based on Pseudo-nearest-neighbor Intervals for Incomplete Data Journal of Computational Information Systems 11: 6 (2015) 2139 2146 Available at http://www.jofcis.com A Fuzzy C-means Clustering Algorithm Based on Pseudo-nearest-neighbor Intervals for Incomplete Data

More information

Texture Image Segmentation using FCM

Texture Image Segmentation using FCM Proceedings of 2012 4th International Conference on Machine Learning and Computing IPCSIT vol. 25 (2012) (2012) IACSIT Press, Singapore Texture Image Segmentation using FCM Kanchan S. Deshmukh + M.G.M

More information

Image Similarities for Learning Video Manifolds. Selen Atasoy MICCAI 2011 Tutorial

Image Similarities for Learning Video Manifolds. Selen Atasoy MICCAI 2011 Tutorial Image Similarities for Learning Video Manifolds Selen Atasoy MICCAI 2011 Tutorial Image Spaces Image Manifolds Tenenbaum2000 Roweis2000 Tenenbaum2000 [Tenenbaum2000: J. B. Tenenbaum, V. Silva, J. C. Langford:

More information

The K-modes and Laplacian K-modes algorithms for clustering

The K-modes and Laplacian K-modes algorithms for clustering The K-modes and Laplacian K-modes algorithms for clustering Miguel Á. Carreira-Perpiñán Electrical Engineering and Computer Science University of California, Merced http://faculty.ucmerced.edu/mcarreira-perpinan

More information

Time Series Clustering Ensemble Algorithm Based on Locality Preserving Projection

Time Series Clustering Ensemble Algorithm Based on Locality Preserving Projection Based on Locality Preserving Projection 2 Information & Technology College, Hebei University of Economics & Business, 05006 Shijiazhuang, China E-mail: 92475577@qq.com Xiaoqing Weng Information & Technology

More information

Head Frontal-View Identification Using Extended LLE

Head Frontal-View Identification Using Extended LLE Head Frontal-View Identification Using Extended LLE Chao Wang Center for Spoken Language Understanding, Oregon Health and Science University Abstract Automatic head frontal-view identification is challenging

More information

PRINCIPAL COMPONENT ANALYSIS IMAGE DENOISING USING LOCAL PIXEL GROUPING

PRINCIPAL COMPONENT ANALYSIS IMAGE DENOISING USING LOCAL PIXEL GROUPING PRINCIPAL COMPONENT ANALYSIS IMAGE DENOISING USING LOCAL PIXEL GROUPING Divesh Kumar 1 and Dheeraj Kalra 2 1 Department of Electronics & Communication Engineering, IET, GLA University, Mathura 2 Department

More information