A Comparison of Three Image Classification Techniques for Satellite Remote Sensing


University of Alabama in Huntsville
ATS 670 Final Project

A Comparison of Three Image Classification Techniques for Satellite Remote Sensing

Author: Brian Freitag
April 21

Abstract

High-resolution multi-spectral satellite observations have increased the capabilities of satellite remote sensing. The Visible Infrared Imaging Radiometer Suite (VIIRS), aboard a polar-orbiting satellite, provides daily global coverage with sub-kilometer resolution (at nadir) multi-spectral imagery that can be used to perform more accurate image classification. This project evaluates the performance of three classification techniques: maximum likelihood supervised classification, migrating means unsupervised classification, and a hybrid classification technique, both qualitatively and quantitatively, for a single VIIRS image from 21 March 2014 at 18:36 UTC. The selected image covers west-central South America and contains a variety of complex land and cloud features that challenge the classification techniques. Qualitative analysis using a VIIRS true color composite from the red (M5), green (M4), and blue (M3) channels provided mixed results, with each technique having strengths and weaknesses. Quantitative analysis from a confusion matrix showed very good performance from all three classification methods, with mean producer accuracies greater than 86%. The maximum likelihood supervised classification had the highest average producer accuracy at 91.1%, but since it took the longest to implement, applications beyond a single satellite image may be difficult to achieve in a timely manner.

1 Introduction

Since the launch of polar-orbiting and geostationary satellites, surface spatial resolution has steadily increased. Increased spatial resolution has expanded satellite remote sensing from basic cloud detection using threshold-based algorithms [Saunders and Kriebel, 1988] to full pixel-level classification using a number of algorithms [Comber et al., 2012]. The accuracy of pixel-level classification is a complex issue that depends strongly on the classification technique, pixel contamination, the number of classes, the user's understanding of the scene, and other factors [Comber et al., 2012]. Image classification has become an integral part of satellite remote sensing, as classification accuracy is paramount for understanding global land use and cloud coverage [Kontoes et al., 2009]. Choosing an appropriate classification technique requires careful consideration of the imaged scene and the imaging sensor used.

Typically, image classification is achieved using empirically derived thresholds obtained from spectral signature analysis. While initially developed to identify the locations of clouds in a given image, later studies have extended this technique to land, vegetation, and snow surfaces, among others. The increased spatial resolution of newer imaging sensors now provides an opportunity for remote sensing scientists to study smaller-scale features such as forest fires and burn scars, as in Kontoes et al. [2009]. Threshold-based classification algorithms tend to be simpler and computationally inexpensive compared to other classification techniques; however, they have drawbacks as well. Empirically derived thresholds usually perform well for simple scenes but can struggle in regions with complex features or sharp edges, and therefore do not perform well on a global scale. Even on the regional scale, developing a threshold algorithm for classification requires some a priori knowledge of the spectral signatures of the prominent features in that domain. Additionally, threshold classification algorithms can leave unclassified pixels in the image, particularly when confidence thresholds are applied, which can bias areal coverage calculations [Richards, 2013].

More complex classification techniques have since been developed. Generally, these techniques can be divided into three categories: supervised, unsupervised, and a combination of the two. Like threshold algorithms, supervised classification techniques require a priori knowledge of the scene, while unsupervised classification requires none. There are a number of supervised classification techniques that can be applied, but two of the most common are the minimum distance and maximum likelihood techniques. For both of these techniques, training samples are obtained for a set of classes preselected by the user based on pre-existing knowledge of the scene (e.g., Level-2 satellite products, true-color images). The training samples are a collection of pixels the user defines to be a certain class. Statistics from these training samples are then collected for each band. Thus, the spatial distribution and number of training samples collected tend to have significant impacts on the accuracy of the image classification. The statistics collected from the training data are then passed to the supervised classification algorithm, which subsequently classifies all of the pixels in the image [Richards, 2013].
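To make the training-statistics step concrete, the short sketch below (Python/NumPy; the array names and shapes are assumptions for illustration, not the code used for this project) computes the per-class mean vector and covariance matrix from user-selected training pixels:

```python
import numpy as np

def training_statistics(bands, training_masks):
    """Compute per-class mean vectors and covariance matrices.

    bands: array of shape (n_bands, n_rows, n_cols) with the selected band values
    training_masks: dict mapping class name -> boolean mask (n_rows, n_cols)
                    marking the user-selected training pixels for that class
    """
    stats = {}
    for name, mask in training_masks.items():
        # Gather the training pixels as an (n_pixels, n_bands) matrix
        samples = bands[:, mask].T
        mean_vec = samples.mean(axis=0)              # mean vector, shape (n_bands,)
        cov_mat = np.cov(samples, rowvar=False)      # covariance matrix, (n_bands, n_bands)
        stats[name] = (mean_vec, cov_mat)
    return stats
```

These per-class means and covariances are exactly the quantities the supervised classifiers described next consume.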
For the minimum distance technique, the means from the training data are used to determine which element in the mean array is closest to the pixel to be classified, based on the pixel's value in each band. The maximum likelihood classification uses a discriminant function, taking the array of means and a covariance matrix to determine the most likely class for each pixel in the image (a more technical discussion is provided in the next section). Supervised classification techniques tend to be more computationally expensive and time consuming because the user is required to collect the training data that are applied to the selected algorithm. That said, supervised classification is one of the more widely used classification approaches [Richards, 2013], and applications can be seen in Rozenstein and Karnieli [2011], Congalton [1991], and Yüksel et al. [2008].

Unsupervised classification is also commonly used because it does not require a priori knowledge of the scene and can be applied to any scene. Unsupervised classification uses any one of a number of clustering algorithms to assign pixels to classes. One of the more common clustering algorithms in remote sensing is the migrating means method. The migrating means method uses an initial set of cluster centers defined by the user (note: the user also defines the initial number of clusters) and uses a minimum distance algorithm to assign pixels to clusters [Richards, 2013]. The migrating component of the scheme refers to the redefinition of the cluster centers after the initial clustering assignment and each subsequent assignment. The iterative process stops once the change in the cluster mean vectors between consecutive iterations is zero or falls within some threshold value.

Once clustering is complete, the user then classifies the image based on true color imagery and the results of the clustering algorithm. The user can merge or remove clusters at their discretion to develop a final result. Unsupervised classification is used as a stand-alone technique, but it can also be used to supplement supervised classification techniques by helping the user identify distinct classes in the image [Richards, 2013]. This combined approach is referred to as a hybrid classification scheme. In the hybrid classification scheme, the user first runs a clustering algorithm on part of the satellite image to identify potential classes. Using the results from the unsupervised clustering, the user then selects the training sample data for the supervised classification algorithm. The supervised classification algorithm then assigns pixels to classes based on their spectral information [Richards, 2013]. Richards [2013] provides a step-by-step explanation of the hybrid classification scheme and how it is applied. Applications of unsupervised and hybrid classification techniques are given in Rozenstein and Karnieli [2011], Congalton [1991], Lo and Choi [2003], and Kumar et al. [2013].

The purpose of this project is to assess and compare the performance of three different classification techniques: unsupervised, supervised, and hybrid. Each classification scheme is applied to a single National Polar-orbiting Operational Environmental Satellite System (NPOESS) Visible Infrared Imaging Radiometer Suite (VIIRS) image obtained over west-central South America on 21 March 2014 at 18:36 UTC. Performance of the classification techniques is measured quantitatively using a confusion matrix and qualitatively using a true-color image for visual comparison.

2 Data and Methodology

2.1 Domain of Interest

VIIRS imagery was obtained for 21 March 2014 over western South America near northern Chile and Argentina. A true color composite of the full VIIRS image is provided in Figure 1, which combines VIIRS bands M3, M4 (0.555 µm), and M5 (0.672 µm), covering the blue, green, and red portions of the electromagnetic (EM) spectrum respectively. This region was selected because the domain is quite complex, with a variety of land types such as ocean, complex topography, open desert, and vegetation, as well as a complicated cloud scene. VIIRS data files consist of 3200 x 3072 pixels for a single image and can therefore be quite computationally expensive to process. To alleviate computational costs, only a 640 x 512 portion of the original 21 March image was used for image classification. The sub-sampled region was selected to capture as much complexity as possible, and true color imagery for this domain is given in Figure 2. The selected image is essentially bisected by the north/south-oriented Andes mountain range, with sparse vegetation on the leeward side and more dense vegetation on the windward side. Along the leeward side of the mountains, there are many distinct surface features within the sparsely vegetated regions, suggesting a complex soil structure in this part of the image. The clouds in the bottom right corner of the image were associated with a precipitation event that occurred on this day, as shown by the NASA Giovanni output displayed in Figure 3 [Acker and Leptoukh, 2007].
West-central South America is governed by a subtropical climate regime but is strongly influenced by the Andes mountain range that traverses the region. Typically, the mean atmospheric flow is easterly, which supports the vegetation distribution pattern discussed above and sustains the coastal deserts seen in northern Chile in Figure 2. On the eastern side of the domain, topographically influenced cloud formations persist, which can be affected by urban emissions upstream of the topography. The primary urban pollution source for the sub-sampled region is San Miguel de Tucumán, which is located just east of the selected domain. Also on the eastern side of the topography, a monsoonal rain pattern is observed, with most of the precipitation occurring in the summer (November-March) and the other months remaining dry [Grau et al., 2008]. Off the coast, stratocumulus decks are common because relatively cool sea surface temperatures support cloud development beneath the inversion during the summer months. These stratocumulus decks have a significant influence on regional and global radiation budgets because of the contrast between the dark ocean surface and the bright clouds in the visible portion of the EM spectrum.
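As an illustration of how true color composites such as Figures 1 and 2 can be built from the M5 (red), M4 (green), and M3 (blue) reflectances, here is a minimal sketch; the gamma value and the commented subsetting indices are assumptions, not the processing actually used for this project:

```python
import numpy as np
import matplotlib.pyplot as plt

def true_color_composite(red_m5, green_m4, blue_m3, gamma=2.2):
    """Stack three reflectance arrays into an RGB image for display.

    red_m5, green_m4, blue_m3: 2-D reflectance arrays for VIIRS bands M5, M4, M3
    gamma: simple gamma stretch to brighten dark land/ocean features (assumed value)
    """
    rgb = np.dstack([red_m5, green_m4, blue_m3])
    rgb = np.clip(rgb, 0.0, 1.0)      # discard unphysical reflectances for display only
    return rgb ** (1.0 / gamma)       # gamma correction

# Example usage for a 640 x 512 sub-sampled region (row/column bounds are illustrative):
# rgb = true_color_composite(m5[r0:r0 + 640, c0:c0 + 512],
#                            m4[r0:r0 + 640, c0:c0 + 512],
#                            m3[r0:r0 + 640, c0:c0 + 512])
# plt.imshow(rgb); plt.axis("off"); plt.show()
```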

2.2 Data Source

NPOESS VIIRS data are publicly accessible and can be ordered with an account through the NOAA data site. For this project, VIIRS sensor data record (SDR) files were obtained. The VIIRS instrument, flown on the Suomi National Polar-orbiting Partnership satellite and planned for the Joint Polar Satellite System, was developed to sample Earth's radiation budget and other atmospheric variables [Cao, 2013]. VIIRS is a whiskbroom scanning radiometer equipped with 22 spectral bands spanning the visible through thermal infrared portions of the EM spectrum. Its 3,040 km swath width is larger than that of its predecessors and therefore provides full daily global coverage without gaps near the equator. For this project, the 16 moderate resolution (M) bands from VIIRS were used, which have a resolution of approximately 750 meters at nadir [Cao, 2013]. More technical information about the VIIRS instrument is given in Table 2, and specific band information can be seen in Table 3.

Because the along-track scan width varies with scan angle (the bow-tie effect), there are overlapping pixels that need to be accounted for in the VIIRS data. The bow-tie effect increases with scan angle, and therefore there are more overlapping pixels at the edges of the swath. VIIRS SDR files have been processed to remove some of the overlap with on-board deletion of two M-band pixels beyond one scan-angle threshold and four M-band pixels beyond a larger scan angle, but this does not account for all of the overlap in the image [Cao, 2013]. The bow-tie deletion that occurs when the raw data record is processed into the sensor data record leaves missing pixels that collectively appear as stripes across the image. This visible striping needs to be accounted for when analyzing VIIRS SDR images. For this project, striping was handled by replacing the first two rows of each stripe with a nearby value above the stripe and the last two rows with a nearby value below the stripe. This process is repeated at larger scan angles to remove all of the stripes from the image, as seen in Figures 1 and 2.

2.3 Band Selection

Another non-trivial component of photo interpretation and image classification is selecting which bands to use for the classification algorithms. Since band statistics are used to classify pixels, band selection is an important task that requires ample consideration. Band selection for this project followed a three-step approach: (1) assess empirically derived spectral signature relationships for discernible features in the image, (2) examine the correlation between bands to maximize the amount of distinct information used for classification, and (3) analyze the band statistics in physical units and identify bad or missing data.

1. Spectral Signatures: The spectral information for each band is given in Table 3 (note: the final bands selected are highlighted in yellow in all band information tables). Using the empirically derived spectral signature curves in Richards [2013] and Jedlovec [2009], as well as the idealized curves in Figure 4, phase 1 of the band selection process was performed. Looking at the true color image in Figure 2, we can see some of the classes discussed above. First, there are at least two distinct cloud types in the image: a low stratus deck over the ocean and towering cumulus east of the mountain range. To differentiate between these two cloud classes, thermal bands should be used.
Jedlovec [2009] shows that water and ice clouds tend to have drastically different spectral signatures in the near-IR (~1.6 µm) portion of the EM spectrum, with ice clouds absorbing more strongly, and therefore appearing less reflective, than water clouds. Additionally, in the thermal portion of the EM spectrum we can exploit the temperature difference between the two cloud types. The land surface in the image is quite complex, with various soil types, shadows, and vegetation strewn across the land. To differentiate between the vegetated and non-vegetated portions of the image, we can exploit the dynamic spectral signature of vegetation and the relatively static spectral signature of non-vegetative land surfaces. In the visible portion of the EM spectrum, chlorophyll in vegetation absorbs much of the incident energy, whereas in the near-IR portion of the EM spectrum vegetation is highly reflective. Therefore, at least one visible and one near-IR band should be selected. To differentiate between the soil types in the image, thermal bands will be used to discern temperature differences. Additionally, since the soils appear visually different, spectral differences will be exploited, particularly the reflectance differences of sand between the near-IR and visible portions of the EM spectrum. Even though sand and vegetation have similar signatures in the visible/near-IR portion of the EM spectrum, temperature differences in the thermal bands will be used to differentiate them. While the soil types are still not fully understood, the presence of a high coastal desert within the domain leads one to believe that there is a sandy component to the soil.

2. Correlation Matrix: The correlation matrix was computed using the original digital count values for the 16 moderate resolution bands and is given in Table 4 (a short computational sketch of this step is provided at the end of this subsection). The correlation matrix shows that the data obtained in the visible bands (M1-M5) are fairly highly correlated. In the previous section, we established that the green band (M4) would be useful for vegetation classification. Among the other four bands in the visible spectrum, the weakest correlation with M4 is found for band M1. The near-IR bands (M6-M7) are not nearly as correlated, suggesting some differentiation between the information in the two bands. However, band M6 is used to obtain atmospheric correction information and is continually re-calibrated over the South Pacific Gyre to force aerosol type retrievals to Tahiti AERONET measurements [Franz et al., 2007]. This could explain why there is a weaker correlation between the two near-IR bands. The shortwave-IR bands (M8-M11) are fairly well correlated (> 0.90) except for band M9, which has weak correlation with the other three. This is because band M9 is located within a water vapor absorption band (1.378 µm) and is used to identify high cirrus clouds [Cao, 2013]. The mid-IR bands (M12-M13) show virtually no correlation for this image, and they are removed from consideration because there are no active fires visible in the original image. The thermal bands (M14-M16) are very highly correlated (> 0.988), suggesting they contain similar information.

3. Band Statistics: Simple band statistics were computed from the converted physical values for the 640x512 sub-sampled image and can be seen in Table 5. These statistics include the minimum, maximum, mean, and number of bad data points in the image. First, we see that band M13 should not be used for image classification, even if there were fires, because more than 3% of its pixels are bad data. The remaining bands do not have negative pixel contamination, but unphysical reflectances are observed in the visible through mid-IR bands: in theory reflectance values should not exceed one, yet values greater than one appear here, likely because of atmospheric effects. These bands will still be used, but rather than the converted values, the original digital count values will be used. It should also be noted that bands M15 and M16 have nearly identical statistics; however, these two bands are selected because they tend to be very effective at differentiating surface temperatures between classes, as shown in Figure 4.

The final bands selected are highlighted in Tables 3-5. The final selection includes two visible bands (M1 and M4), a near-IR band (M7), two shortwave-IR bands (M8 and M10), and two thermal bands (M15 and M16). The data within these bands are sufficient to pull out the features of this particular image, but may not perform well in other regions. A similar band selection approach should be applied for different seasons over this location or for different locations altogether.
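The correlation step described in item 2 above can be computed directly from the band digital counts; a minimal NumPy sketch follows (the array layout is an assumption, not a description of the original processing):

```python
import numpy as np

def band_correlation_matrix(bands):
    """Correlation matrix between bands from digital count values.

    bands: array of shape (n_bands, n_rows, n_cols)
    returns: (n_bands, n_bands) matrix of Pearson correlation coefficients
    """
    flat = bands.reshape(bands.shape[0], -1)   # one row of pixel values per band
    return np.corrcoef(flat)

# Example: band pairs with |r| > 0.9 carry largely redundant information
# corr = band_correlation_matrix(m_bands)
# redundant_pairs = np.argwhere(np.triu(np.abs(corr) > 0.9, k=1))
```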
2.4 Classification Schemes

Results can vary greatly based simply on the classification algorithm selected. Some of the most common classification techniques were discussed above; here, the techniques used in this project are described. The methodology follows closely that of Rozenstein and Karnieli [2011] in that the maximum likelihood, migrating means, and hybrid classification techniques are used and performance analysis is provided in a confusion matrix.

2.4.1 Maximum Likelihood Classification Scheme

The maximum likelihood classification scheme is the supervised classification technique used for this project. Explicit steps for performing supervised classification are provided in Richards [2013], and the procedures used here are defined in that format.

1. Photo-interpretation of the true color imagery in Figure 2 was used to decide on an initial set of classes: clouds, water, vegetation, soil type 1, and soil type 2. The true color image is used to visually identify an array of classes based on how features appear in the image. For example, clouds are bright, water is dark blue, vegetation is green, and the two soil types are light and dark respectively.

2. Next, representative pixels were selected for each class to develop the training data. Training samples are collected to form statistics about the pre-selected classes. For each class, 700 representative pixels were selected to develop the training data set and were used to compute the mean vector and the covariance matrix.

The mean vector and covariance matrix are used to represent the spectral signature profile of the class across the selected bands.

3. Using the spectral signature profile of each class, each pixel is classified using a discriminant function. In this step, the whole 640x512 image is classified, unlike the training step, where only 4900 pixels were identified for each class. Since we are not given additional information about the scene, prior probabilities are assumed to be equal, which gives the following discriminant (note: variable definitions are given in Appendix A):

g_i(\mathbf{x}) = -\tfrac{1}{2}\ln|\Sigma_i| - \tfrac{1}{2}(\mathbf{x} - \boldsymbol{\mu}_i)^t \, \Sigma_i^{-1} (\mathbf{x} - \boldsymbol{\mu}_i)    (1)

The decision rule is then applied to classify each pixel in the image:

\mathbf{x} \in \omega_i \quad \text{if } g_i(\mathbf{x}) > g_j(\mathbf{x}) \;\; \forall j \neq i    (2)

Therefore, all pixels are given a unique classification. A threshold value can also be applied to the classification algorithm using \chi^2 values (e.g., \chi^2 = 14.1 when N = 7); however, thresholds were not applied in this project.

4. Next, a thematic map is produced and statistics are computed for the initial image classification, summarizing the areal coverage of each class. For accurate area calculations, panoramic distortion and Earth curvature effects need to be accounted for, and they were for this project.

5. The accuracy of the classification algorithm is displayed in a confusion matrix.

6. If the classification statistics and the results are not representative of the actual scene, the process is iterated until the classification scheme performs satisfactorily. For this project, the process was iterated 10 times before the image classification was satisfactory.

2.4.2 Migrating Means Classification

The migrating means classification scheme is the unsupervised classification scheme used for this project. The migrating means technique is explained explicitly in Richards [2013], and the procedures used to obtain the migrating means classification are defined in that format.

1. Unlike the maximum likelihood technique, rather than defining classes, the user defines a number of initial cluster centers. Common practice is to select 2-3 spectral classes per information class and to merge or delete clusters after the clustering algorithm finishes. For this project, 12 initial cluster centers were defined. Cluster centers were evenly distributed between the minimum and maximum digital count values for each band; thus, the initial cluster means are not consistent across the different bands. This technique allowed for a smaller number of initial cluster centers and a quicker clustering solution.

2. Each pixel is assigned to the candidate cluster with the nearest mean using the discriminant function for a Euclidean minimum distance classifier (note: variable definitions are given in Appendix A):

g_i(\mathbf{x}) = 2\boldsymbol{\mu}_i^t \mathbf{x} - \boldsymbol{\mu}_i^t \boldsymbol{\mu}_i    (3)

The decision rule is then applied to classify each pixel in the image:

\mathbf{x} \in \omega_i \quad \text{if } g_i(\mathbf{x}) > g_j(\mathbf{x}) \;\; \forall j \neq i    (4)

3. If a cluster has fewer than 20 pixels in it, the cluster is removed from the mean vector and the iteration process continues with one fewer cluster center.

4. A new set of cluster means is computed based on the clusters developed in Step 2. Additionally, the standard error is computed for each of the clusters as a metric for the performance of the clustering.

5. The standard error, computed from digital count values, is used to terminate the clustering algorithm: when the difference between the sums of the standard errors for two consecutive iterations is less than 0.025, the clustering algorithm terminates. The threshold was found by trial-and-error for this particular image and will vary with location. Standard error was selected over the sum of squared error because the size of each class is included in the error statistic. The clustering algorithm for this image completed after 10 iterations.

6. Each cluster is color-coded after the clustering algorithm is completed. The user assigns each cluster a class through photo-interpretation and can then use separability measures to decide whether merging clusters is appropriate for the scene; however, this is optional.

7. Once the final classes have been established, statistics are computed for the image classification, including the areal coverage of each class. Earth curvature and panoramic distortion effects were accounted for in the area calculations. The accuracy of the unsupervised classification is displayed in a confusion matrix.

2.4.3 Hybrid Classification

The hybrid classification scheme is a combination of unsupervised and supervised classification. The hybrid classification technique is explained in Richards [2013], and the procedures used to obtain the final hybrid classification are presented in that format.

1. Migrating means is used to determine the spectral classes into which the image resolves. Rather than running the clustering algorithm over the entire image, a subset of the image is selected. In this case, the image was divided into six distinct sections and the clustering algorithm was run on three of them.

2. Using available reference data, associate the spectral clusters with information classes. As with the migrating means procedure described above, there is commonly more than one spectral class for each information class.

3. Use the results from the migrating means algorithm to develop a training data set for the maximum likelihood portion of the hybrid classification.

4. Using the maximum likelihood algorithm, classify the entire image into the set of spectral classes.

5. Label each pixel in the classification with the information class corresponding to the spectral class assigned by the algorithm.

6. As with the other algorithms, statistics are computed for the classification, particularly the areal coverage of each class. Panoramic distortion and Earth curvature effects are accounted for in the area calculations. Classification accuracy for the hybrid classification algorithm is displayed in a confusion matrix.

3 Results

3.1 Maximum Likelihood Classification (Supervised)

From the VIIRS true color composite images in Figures 1 and 2, water, vegetation, land, and cloud pixels are easily visible in the domain. At least two distinct land surface types can be seen in the images: a lighter land cover near the coast and a darker land cover near the topography in the center of the image. While there is no discernible difference in the true color image, during the training sample retrieval discernible differences were noticed between the cloud field over the ocean and the clouds over the land. Therefore, six classes were defined for the training sample procedure. For each class, seven 10x10 clusters of pixels were selected to develop the training field and class signature for each band.
The training data samples are overlaid on the VIIRS true color composite image in Figure 5 and color-coded as follows: light blue = water, beige = vegetation, red = land 1, white = land 2, dark gray = cloud 1, light gray = cloud 2.
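As a concrete illustration of how these class statistics feed the classifier, the sketch below applies the maximum likelihood discriminant of Equations 1 and 2 (equal priors) to every pixel. It is a minimal NumPy version under assumed array shapes, not the exact code used for this project:

```python
import numpy as np

def maximum_likelihood_classify(bands, class_stats):
    """Classify every pixel with the maximum likelihood discriminant (Eqs. 1-2).

    bands: array of shape (n_bands, n_rows, n_cols)
    class_stats: list of (mean_vector, covariance_matrix) tuples, one per class
    returns: (n_rows, n_cols) array of winning class indices
    """
    n_bands, n_rows, n_cols = bands.shape
    pixels = bands.reshape(n_bands, -1).T                    # (n_pixels, n_bands)
    scores = np.empty((len(class_stats), pixels.shape[0]))

    for i, (mu, cov) in enumerate(class_stats):
        inv_cov = np.linalg.inv(cov)
        _, log_det = np.linalg.slogdet(cov)
        diff = pixels - mu
        # g_i(x) = -0.5*ln|Sigma_i| - 0.5*(x - mu_i)^t Sigma_i^-1 (x - mu_i)
        mahalanobis = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)
        scores[i] = -0.5 * log_det - 0.5 * mahalanobis

    # Decision rule: assign each pixel to the class with the largest discriminant
    return scores.argmax(axis=0).reshape(n_rows, n_cols)
```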

The training samples were used to develop the class signatures for each band, and the statistics for the training samples are given in Tables 6 and 7. Table 6 provides information about the purity of the samples, while Table 7 can be compared with empirically understood spectral signatures. The covariance matrix in Table 6 was calculated using digital counts, and the purity of the samples generally decreases moving down the table. As expected, clouds generally produce the highest covariance values because of the complexities in cloud structure and the variability of the background surface. The mean and standard deviation values in Table 7 are largely consistent with our current understanding of these classes' spectral signatures. Water has consistently low reflectance values, and clouds have consistently high reflectance values in bands M1-M10. Cloud 1 includes cloud edge and therefore has lower reflectance than cloud 2, as expected. In addition, the vegetation has higher reflectance in the near-IR band (M7) than in the visible bands (M1 and M4), as is seen in observations. The training data do not show significant differences between the vegetation pixels and the land type 1 pixels in the reflectance bands. The main differences between these two classes are seen in the thermal bands (M15 and M16). While this thermal-band separation is consistent with observations, the vegetation class does not reproduce the magnitude of the visible-to-near-IR contrast observed in Jedlovec [2009]. This can possibly be explained by the fact that the vegetation in the domain is located on the downslope of the topography at a time when the transition from summer to fall is ongoing. It is therefore possible that, for this image, the chlorophyll in the green vegetation is starting to break down, reducing the spectral signature observed for healthy green vegetation. This could also explain why the covariance increases through bands M1-M10.

The statistics from the training field were used in Equation 1 and passed through the decision rule defined in Equation 2 to obtain an information classification for the entire image. The same statistics as in Table 7, but for the final supervised classification, are given in Table 8. The final statistics are broadly consistent with those obtained from the training data set, but there are some differences. The vegetation and land pixels maintained the spectral signatures observed in the training field for bands M1-M10, but the temperature difference between the two classes is reduced when the classification is applied to the full image. This could be explained by the misclassification of vegetation pixels as land type 1 or vice versa. Similarly, land type 2 trends toward the spectral signature calculated for vegetation after classification of the full image. This further underscores the algorithm's difficulty in differentiating between these classes, most likely in transition regions. Cloud type 2 had the greatest deviation from the training samples, and that is reflected in the broadening of the class distribution: in each of the bands the standard deviation for cloud type 2 increased, in the thermal bands by nearly 300%.
The broadening of the cloud type 2 distribution further underscores that spectral signatures for clouds are highly variable because of their complex three-dimensional structures, particularly in the presence of mixed-phase cloud particles. The final supervised classification in Figure 6 can be compared visually with the VIIRS true color composite image in Figure 2. The algorithm did very well representing the cloud field over the ocean and establishing the coastline in the image. At the top left of the image, the algorithm effectively picks up on the cloud striations over the ocean, likely because of the distinct differences between the two classes. While cloud edges are typically difficult to detect in classification algorithms, the maximum likelihood supervised classification scheme performed very well with cloud edge over the ocean for this scene. Over the land surface, cloud detection was a bit more difficult. The most visible differences between the classification and the true color image for the cloud field over land are in the bottom right corner of the image. In this region, there is a thin cloud in the bottom middle of the image and some very small pop-up clouds beneath the convective showers in the bottom right that are not classified correctly in the supervised classification. This is likely because of contamination from the background pixels, particularly by land surface 1. The vegetation appears fairly well represented on the right side of the image, but there is some misclassification near the coast, where more complex land features in the true color image are classified as vegetation. There are other locations in the image where misclassification occurs between the two land types and the vegetation, particularly near their boundaries. This supports the idea that the complex land surface features in this image are a significant factor in the classification error.

Spatial statistics for the classification are also useful for assessing the performance of the algorithm. The areal coverage of each class is given in Table 9, as well as fractional coverage. The land surface features make up over 85% of the image for the supervised classification, which is consistent with the true color image in Figure 2.

Fractional coverage of ocean and clouds is nearly equal, and the two cloud fields are equally represented in the final classification. Land surface 2 has the highest areal coverage of the six classes, and while this appears consistent with the true color image, its location is not entirely consistent. Classification accuracy can also be evaluated using a confusion matrix, which for this image and the maximum likelihood supervised classification is provided in Table 10. To develop the confusion matrix, fifteen ground truth samples were selected for each class. The producer accuracy offers insight into the accuracy of the algorithm, while the user accuracy offers insight into the accuracy of the ground truth samples selected by the user. The producer accuracy for the supervised classification shows positive results: for all classes, the algorithm correctly classifies over 75% of the pixels. Water, land surface 1, and cloud type 1 were perfectly classified by the algorithm for this small sample size, while land surface 2 and cloud type 2 were the two worst performers. The user accuracy statistics suggest misidentification was common among the land features (land 1, land 2, and vegetation) and for cloud type 1. These results speak to the complexity of the surface in the visible portion of the electromagnetic spectrum and support the use of multi-spectral classification techniques that include visible through thermal IR wavelengths. In general, the maximum likelihood classification performed well in all of the assessments in this section and provides a reasonably accurate classification of the image.
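The producer and user accuracies quoted here and in the following sections come directly from the confusion matrix; the sketch below shows one way to compute them from ground-truth samples (the variable names and the commented usage are illustrative assumptions):

```python
import numpy as np

def confusion_matrix(truth, predicted, n_classes):
    """Rows = ground-truth class, columns = predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(truth, predicted):
        cm[t, p] += 1
    return cm

def producer_user_accuracy(cm):
    """Producer accuracy: correct / total ground-truth pixels of that class (row sums).
    User accuracy: correct / total pixels assigned to that class (column sums)."""
    correct = np.diag(cm).astype(float)
    producer = correct / cm.sum(axis=1)
    user = correct / cm.sum(axis=0)
    return producer, user

# Example with the fifteen ground-truth samples per class described above:
# cm = confusion_matrix(truth_labels, classified_map[truth_rows, truth_cols], n_classes=6)
# producer, user = producer_user_accuracy(cm)
# print("mean producer accuracy:", producer.mean())
```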
3.2 Migrating Means Classification (Unsupervised)

For the migrating means classification, training data are not collected and the user has no a priori knowledge of the image. Instead, initial classification clusters are defined and the algorithm converges on a solution based on those clusters. The benefit of an unsupervised classification algorithm is that user bias is removed from the training sample selection, where an inaccurate training field could otherwise be passed to a supervised classification algorithm. For this classification, twelve initial cluster centers were defined based on the minimum and maximum digital counts for each band. The mean vector for each cluster is given in Table 11 in units of reflectance and temperature. For all of the bands, the value of the cluster center increases as the cluster number increases down the table. For cluster 12, there are reflectance values greater than one for bands M1, M4, M7 (clusters 11 and 12), and M8. It is unlikely that these are physically measured values from the surface; therefore, a threshold was applied such that only clusters with more than twenty pixels were retained. After ten iterations, the clustering algorithm produced a final product with nine cluster centers retained. The clustering algorithm converged on the cluster centers listed in Table 12 before the standard error satisfied the established threshold. The values of the cluster center means suggest that the clustering algorithm was more heavily influenced by the reflectance bands (M1-M10) than by the thermal bands (M15 and M16). For example, while cluster 1 started with the lowest initial temperature and reflectance values, the clustering algorithm adjusted the means such that the reflectance values were still the lowest but the temperature values had risen to fourth warmest. This is most likely a result of the unequal number of reflectance and temperature bands selected for classification.

The results of the clustering algorithm have been placed on a color-coded thematic map in Figure 7. The clustering algorithm represented the water well based on the VIIRS true color image in Figure 2. In addition, there are significantly more features visible in the cloud fields, better illustrating the variability of the cloud features. For comparison, of the nine clusters, five represent some portion of the cloud field, each of which is significant in size. In this case, the clustering algorithm suggests the possibility of mixed-phase or ice clouds represented by cluster 9. The mean reflectance values for cluster 9 are all greater than 0.80 and the temperatures are less than 273 K, suggesting that the mean temperature for the red pixels in the image is below freezing. The areal coverage for each class is given in Table 14. The first four classes all take up more than 5% of the image, while the five classes that represent the cloud field are responsible for less than 10%.

An important part of the unsupervised classification procedure is assigning information classes to spectral classes. Above, we essentially divided the image into two groups: cloud and non-cloud. The five cloud classes were reduced to two by merging clusters 5-6 and clusters 7-9. The decision to merge these clusters was based on the fact that clusters 5/6 and 7/8 have very similar spectral signatures, particularly in the thermal bands. Cluster 9 was merged into the 7/8 cluster because its areal coverage was less than 1% of the image.

Assigning information classes to the remaining clusters requires spectral analysis of the clusters. The spectral signature for cluster 1 contains low reflectances and relatively high temperatures for the scene. Comparison with the maximum likelihood results and training samples shows that cluster 1 very nearly resembles the spectral signature of water from the supervised classification; thus, cluster 1 is assigned a water classification. The remaining three clusters, 2, 3, and 4, settled on a solution that is not consistent with the signatures observed in the previous section. Therefore, analysis of the true color image in Figure 2 is used to assign information classes to these spectral clusters. In the true color image, there is a funnel-shaped feature in the right-hand corner of the image that was selected as land surface 1 for the maximum likelihood classification. In the unsupervised clustering, this feature is assigned to cluster 4; therefore, cluster 4 is assigned to land surface 1. The vegetation in the true color image is located on the right-hand side of the image, which correlates well with cluster 3 from the clustering results; thus, cluster 3 is assigned to vegetation and cluster 2 is assigned to land surface 2.

The final migrating means unsupervised classification thematic map in Figure 8 can be compared visually with the VIIRS true color image in Figure 2. Since the signatures of the three land classes were similar, the algorithm had a difficult time classifying the land surface, resulting in a large overestimation of vegetation. Since the region to the west of the topography is a coastal desert, it is unlikely that the pixels classified as vegetation near the coast are actually vegetation pixels. The vegetation on the right side of the image is fairly consistent with the true color image, except for the lower right corner. The clustering algorithm performed exceptionally well with the cloud field in this part of the image, picking up on many small-scale cloud features. However, the background there is assigned to land surface 1, which the true color image shows is vegetation. The cloud field over the water struggles near the cloud edge, but the cloud features are well defined despite being misclassified at the edges. Cloud edge detection also struggles in the center of the image, where there are pixels classified as water south of the clouds. The physical water boundary at the coast is well represented, which is expected since the mean vectors for water are consistent between the two classification schemes.

Spatial statistics for the unsupervised classification are provided in Table 14. The over-representation of vegetation in the classification is evident, accounting for over 40% of the image. The land features account for nearly 85% of the image as a whole, which is consistent with the true color image and the results of the maximum likelihood classification. Cloud and water areal coverage statistics are consistent with the true color image, and smaller cloud features are represented quite well relative to the maximum likelihood algorithm. The confusion matrix in Table 15 shows positive results, with producer accuracy exceeding 70% for all classes. Consistent with the previous analysis, the land classification performed the worst relative to the other classes.
The user accuracies for land 1 and vegetation were also low relative to the other classes, supporting the idea that the complex land features in this image make classification challenging. The water and cloud 1 classifications produced perfect producer accuracy for the small sample of ground truth observations. The cloud 2 field performed very well in the unsupervised classification, which suggests that the clustering algorithm was better able to capture the complexities within the 3-D cloud structure. The unsupervised classification as a whole overestimated the amount of vegetation in the image and underperformed at cloud edge detection, but it did well with water features and interior cloud structures away from cloud edge.

3.3 Hybrid Classification (Both)

The hybrid classification procedure utilizes both unsupervised and supervised classification techniques to classify an image. For the unsupervised portion of the hybrid classification, twelve cluster centers were defined as in the migrating means classification given in Table 11. Rather than applying unsupervised classification to the entire image, the clustering algorithm was applied to three equal-sized segments of the image. The same threshold of a minimum of twenty pixels per cluster was applied, and the output from the unsupervised portion of the hybrid classification procedure is given in Figure 9. The clustering algorithm converged on a solution with eight cluster centers in the top and middle sections and seven in the bottom section. The clustering results show the complex structure of the clouds in the image, as also observed in the stand-alone unsupervised classification. Additionally, the clustering algorithm identifies more than three land surface features in each section, further illustrating the complexity of the land surface in this image.
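For reference, the migrating means clustering used here and in Section 3.2 amounts to iterative nearest-mean assignment (Equations 3 and 4) with cluster-mean updates, removal of clusters smaller than twenty pixels, and a standard-error-based stopping rule. The sketch below is a minimal, assumed implementation of that procedure (in particular, the exact standard-error formula is one possible interpretation), not the code used for this project:

```python
import numpy as np

def migrating_means(pixels, initial_centers, min_pixels=20, tol=0.025, max_iter=50):
    """Iterative migrating means clustering.

    pixels: (n_pixels, n_bands) array of digital counts
    initial_centers: (n_clusters, n_bands) array of starting cluster means
    """
    centers = initial_centers.astype(float)
    prev_error = np.inf

    for _ in range(max_iter):
        # Assign each pixel to the nearest cluster mean (Eqs. 3-4)
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)

        # Drop clusters with fewer than min_pixels members, then re-assign
        counts = np.bincount(labels, minlength=len(centers))
        centers = centers[counts >= min_pixels]
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)

        # Update cluster means and compute a standard-error convergence metric
        centers = np.stack([pixels[labels == k].mean(axis=0) for k in range(len(centers))])
        error = sum(pixels[labels == k].std(axis=0).sum() / np.sqrt((labels == k).sum())
                    for k in range(len(centers)))
        if abs(prev_error - error) < tol:
            break
        prev_error = error

    return labels, centers
```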

The results from the unsupervised component of the hybrid classification are then used as a guide for the supervised classification component. The supervised classification now uses the true color image in Figure 2 and the unsupervised clustering result in Figure 9 to select training samples for each class. As with the other two classification schemes, six information classes were selected. For each class, eight 10x10 clusters of pixels were selected to develop the training field and class signature for each band. The training samples for the supervised component of the hybrid classification are shown in Figure 10 and color-coded as in the maximum likelihood classification.

The training samples were used to develop the class signatures for each band, and statistics for the training samples are given in Tables 16 and 17. The results in Table 16 are produced using digital counts and are similar to what was observed for the maximum likelihood classification algorithm: the water samples tend to be the most pure and the cloud fields tend to have the highest covariance. The complex spectral signatures of the cloud field, evident in both the original migrating means statistics in Table 12 and the results from the unsupervised component of the hybrid classification in Figure 9, illustrate the complexity of cloud fields caused by their 3-D structure and potential mixed-phase composition. The means and standard deviations are consistent with the values obtained for the maximum likelihood classification and with the current understanding of the spectral signatures for these classes. The vegetation training data statistics do not match the magnitude of the spectral signature curve for healthy green vegetation in Jedlovec [2009]. Additionally, the spectral differences among the three land classes, particularly between vegetation and land surface 2, are not as distinct as in idealized cases. As discussed above, the scene in this image is not an idealized case, especially for the land surface. Recall that the scene falls on the March equinox, with the domain transitioning from summer to fall, and it is possible that some of the chlorophyll in the vegetation has broken down, thereby affecting the spectral signatures for the vegetation and land surface features.

The statistics from the training field were then input into Equation 1 and passed through the decision rule defined in Equation 2 to obtain an information classification for the entire image. The statistics for the finished hybrid classification are provided in Table 18. The final statistics are consistent with the means and standard deviations obtained from the training field, with the exception of the land surface features. The computed spectral signatures for vegetation and land surface 2 pixels are nearly identical for the hybrid classification, with the exception of band M10. Band M10 is centered at 1.61 µm, and Jedlovec [2009] shows a secondary peak in the spectral signature curves for vegetation and soil there, with soil reflectance consistently less than that of vegetation beyond the visible portion of the EM spectrum. While the magnitudes are lower than typically observed, the pattern exists in the spectral signature curves for these two land surfaces using the hybrid classification scheme. Cloud type 2 shows significant differences between the training sample data and the statistics computed after the final classification.
In Figure 9, we see the various cloud classes detected by the clustering algorithm that are now being forced into only two classes, thereby increasing the variation of the pixel values within each class. Performance of the hybrid classification scheme can be analyzed with a visual comparison of the final classification thematic map in Figure 11 and the true color composite in Figure 2. The classification overestimates the extent of cloud type 1, particularly over the ocean in the upper left corner of the image. In addition, there are pixels near the coast classified as vegetation and a large area of vegetation in the upper right portion of the image that is misclassified when compared to the true color image. The vegetation is fairly well represented throughout the rest of the image, and classification in transition regions is consistent with the true color image. The center of the thematic map is largely classified as land surface 2, which is fairly accurate. The cloud field over land is mostly classified as cloud type 2, which is also fairly accurate, but smaller-scale cloud features are not represented in the final classification.

Spatial statistics for the classification are provided in Tables 19 and 20. The overestimation of cloud type 1 over the ocean can be seen in the water column, where areal coverage decreased by nearly 1% and the fractional coverage of water is less than that of the two cloud classes combined. As with the other classification techniques, land features make up nearly 85% of the image. Land surface 2 has the highest areal coverage of the six classes, but the misclassification of land surface 2 pixels as vegetation pixels in the upper right corner of the image suggests its fractional coverage should be higher.

The confusion matrix for the hybrid classification is provided in Table 20. The producer accuracy for the hybrid classification scheme shows positive results: the algorithm correctly classifies over 70% of the pixels for all of the classes, and for five of the six the producer accuracy is over 93%. Water and cloud type 1 were well classified according to the confusion matrix despite appearing over-represented in the thematic map and fractional area analysis. The land features are very well classified in terms of producer accuracy; however, for each of the land classes, one pixel of the fifteen was misclassified as another land class, resulting in the same user accuracy for these three classes. The hybrid classification technique struggled with cloud type 2, and user accuracy was poor for cloud type 1. While spectral differences among various cloud types are more obvious in clustering techniques, discernible differences are more difficult to detect in the supervised component of the classification procedure. The hybrid classification performed well in all of the assessments in this section and, despite some obvious errors, provided a reasonably accurate classification of the image.

4 Discussion

The purpose of this project was to compare the performance of three different classification techniques: supervised, unsupervised, and hybrid. While the three classification techniques performed within 5% of one another on average, qualitative analysis showed that each classification technique had its strengths and weaknesses. For example, while the migrating means classification underperformed at classifying land features relative to the other two techniques, it proved to be very effective at detecting differences in the cloud field. A visual comparison of Figures 6, 8, and 11 shows distinct differences in the representation of the cloud field. While the maximum likelihood and hybrid classifications over-classify cloud type 2, the migrating means classification detects the transition between cloud types much more accurately (by nearly 10%), as seen when comparing the confusion matrices. Maximum likelihood classification performed very well with respect to cloud edge detection over water. While the other two classification schemes underperformed at cloud edge (over-classification by the hybrid and misclassification by migrating means), the cloud edge over water from maximum likelihood matches very well with the VIIRS true color image. While this represents a small fraction of the total area in the image, there are significant implications of misclassifying pixels at cloud edge, particularly over water, because of the stark contrast between the reflective properties of clouds and water.

A comparison of the areal coverage between supervised and unsupervised classification is provided in Figure ??. The areal coverages of the cloud and water features are consistent between the two techniques, while the land features show some deviation. The disparity between the two techniques in land class coverage is likely because there is little separation between the calculated spectral signatures of the classes. Statistically speaking, the hybrid classification technique performed the best with respect to the land features in the image compared to the other two classification techniques.
Several issues with the classification of land features using the hybrid technique were introduced above; however, based on the results tabulated in the confusion matrix, the hybrid classification handled the complexity of the land surface the best of the three techniques. The improved land classification with the hybrid approach is likely attributable to the use of the unsupervised classification to assist in selecting training samples for the supervised component. Using a clustering technique is helpful in highlighting different classes in scenes where the land features are complex and do not have distinct, obvious spectral differences in true color imagery.

All of the classification techniques had an average producer accuracy of 86% or higher, but accuracy is not the only consideration when choosing a classification scheme. The application of the classification is important to consider: is the classification technique going to be applied to a single location, a country, or globally? This matters because the time required to classify each image becomes increasingly important as domain size increases. Thus, it is beneficial to analyze the performance of the three algorithms with respect to the time required to implement them. For the supervised classification scheme, a significant amount of time is required to select pure training samples; the more time the user spends learning the image, the better the classification. While an exact timing was not obtained for selecting training samples, the process took about 5 minutes on average, and about five to ten iterations of training sample selection were required to get an image that represented the scene well. Therefore, it is assumed that the selection of training data took about 35 minutes for this image. Once the training samples are obtained, the time taken to run the maximum likelihood algorithm itself was much shorter: only one minute and 32 seconds.

The migrating means classification requires no a priori knowledge; therefore, no preparatory work is needed before the clustering algorithm is initialized. The total time to run the migrating means clustering algorithm was four minutes and 55 seconds; however, this number can vary greatly depending on how the user decides to end the iterative process. For example, requiring the standard error to be exactly equal between consecutive iterations took the clustering algorithm over 40 minutes to converge on a solution. For the hybrid classification technique, the unsupervised component took over eight minutes and thirty seconds, while the supervised component took one minute and thirty seconds (excluding the selection of training data). While the time taken to collect training samples was not measured directly, the number of iterations required to collect pure samples was reduced to two. Even though the clustering algorithm took the least amount of time to apply information classes to the image, based on the confusion matrix as a whole it performed the worst of the three algorithms tested.

5 Conclusion

All three classification techniques produced thematic maps that aligned relatively well with the VIIRS true color imagery for 21 March 2014 at 18:36 UTC. Each of the classification schemes had strengths and weaknesses in the final representation of the scene, but the lowest average producer accuracy among the three was over 86%. Statistically speaking, the maximum likelihood supervised classification technique performed the best, but it also took the longest amount of time to implement. The time required to implement each technique varies depending on the number of training samples selected, the number of iterations required to obtain pure training data, the number of initial cluster centers, and the allowable difference in error between consecutive iterations in the unsupervised classification. The complex nature of the land surface in this image caused inconsistent classification of land features among the three classification algorithms, although the total area covered by land classes stayed approximately constant. Applying the hybrid classification algorithm provides an added benefit to the maximum likelihood classification technique because more of the distinct features in the image are highlighted by the unsupervised clustering algorithm. Therefore, the land surface was better represented using the hybrid approach, suggesting it might be beneficial for application in complex scenes. Choosing the best classification technique is an important consideration when trying to obtain the most accurate thematic map; however, there are a number of factors that can influence that decision. For this project, the best image classification was obtained from the maximum likelihood supervised technique, but it also took the longest to implement, suggesting that although it worked best for this application, it is not necessarily the best for other applications.

6 References

1. J. G. Acker and G. Leptoukh. Online Analysis Enhances Use of NASA Earth Science Data, 2007.
2. C. Cao. Visible Infrared Imaging Radiometer Suite (VIIRS) Sensor Data Record (SDR) User's Guide. NOAA Technical Report NESDIS 142, U.S. Department of Commerce, September.
3. A. Comber, P. Fisher, C. Brunsdon, and A. Khmag. Spatial analysis of remote sensing image classification accuracy. Remote Sensing of Environment, 127, 2012.
4. R. G. Congalton. A review of assessing the accuracy of classifications of remotely sensed data. Remote Sensing of Environment, 37(1):35-46, 1991.
5. B. A. Franz, S. W. Bailey, P. J. Werdell, and C. R. McClain. Sensor-independent approach to the vicarious calibration of satellite ocean color radiometry. Applied Optics, 46(22).
6. H. R. Grau, M. E. Hernández, J. Gutierrez, N. I. Gasparri, M. C. Casavecchia, E. E. Flores-Ivaldi, and L. Paolini. A peri-urban neotropical forest transition and its consequences for environmental services. Ecology and Society, 13(1).
7. G. Jedlovec. Automated Detection of Clouds in Satellite Imagery. Pages 1-14.
8. C. C. Kontoes, H. Poilvé, G. Florsch, I. Keramitsoglou, and S. Paralikidis. A comparative analysis of a fixed thresholding vs. a classification tree approach for operational burn scar detection and mapping. International Journal of Applied Earth Observation and Geoinformation, 11(5), 2009.
9. P. Kumar, B. K. Singh, and M. Rani. An efficient hybrid classification approach for land use/land cover analysis in a semi-desert area using ETM+ and LISS-III sensor. IEEE Sensors Journal, 13(6).
10. D. C. Leon. Observations of Drizzle Cells in Marine Stratocumulus. PhD thesis, University of Wyoming.
11. C. Lo and J. Choi. A hybrid approach to urban land use/cover mapping using Landsat 7 Enhanced Thematic Mapper Plus (ETM+) images. International Journal of Remote Sensing, 25(14).
12. J. Richards. Remote Sensing Digital Image Analysis: An Introduction. Springer, Berlin, 5th edition, 2013.
13. O. Rozenstein and A. Karnieli. Comparison of methods for land-use classification incorporating remote sensing and GIS inputs. Applied Geography, 31(2).
14. R. W. Saunders and K. T. Kriebel. An improved method for detecting clear sky and cloudy radiances from AVHRR data. International Journal of Remote Sensing, 9(1), 1988.
15. D. Tuia, F. Ratle, F. Pacifici, M. Kanevski, and W. Emery. Active Learning Methods for Remote Sensing Image Classification. IEEE Transactions on Geoscience and Remote Sensing, 47(7).
16. Unknown. VIIRS bands and bandwidths.
17. A. Yüksel, A. E. Akay, and R. Gundogan. Using ASTER Imagery in Land Use/cover Classification of Eastern Mediterranean Landscapes According to CORINE Land Cover Project. Sensors, 8(2), February.

Appendix A Variable List and Definitions

Table 1: Explanation of variables used in Equations 1-4 as defined in Richards [2013].

Variable    Definition
g           discriminant score
x           pixel (the pixel value array for all of the bands)
Σ_i         covariance matrix for class i
Σ_i^-1      inverse of the covariance matrix for class i
µ_i         mean vector for class i
µ_i^T       transpose of the mean vector for class i
ω           the array of classes
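For context, the variables in Table 1 are those of the per-class discriminant score evaluated for every pixel by the maximum likelihood classifier. Assuming Equations 1-4 take the standard form given in Richards [2013] and that equal prior probabilities are used for all classes (an illustrative assumption here), a pixel x is assigned to the class ω_i with the largest score

    g_i(x) = -ln|Σ_i| - (x - µ_i)^T Σ_i^{-1} (x - µ_i),

where the mean vector µ_i and covariance matrix Σ_i for each class are estimated from that class's training samples.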

Appendix B Figures and Tables

Appendix B.1 FIGURES

Figure 1: VIIRS true color image for 21 March 2014 at 18:36 UTC over western South America. True color composite using VIIRS channels M3, M4, and M5. The sub-sampled 640 x 512 pixel region used for classification is outlined in black at the bottom of the image.
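As an aside on how a composite like Figure 1 can be assembled, the sketch below stacks the three VIIRS moderate-resolution bands into an RGB array and cuts out a 640 x 512 sub-region; the band variable names, corner indices, and the NumPy-based approach are illustrative assumptions, not the procedure used to generate the figure.

    import numpy as np

    def true_color_composite(m5, m4, m3):
        # Stack reflectances as red = M5, green = M4, blue = M3 and clip to
        # [0, 1] so the result can be displayed directly (e.g., with imshow).
        return np.clip(np.dstack([m5, m4, m3]), 0.0, 1.0)

    # Hypothetical usage with pre-loaded 2-D band arrays m5, m4, m3 and a
    # placeholder upper-left corner (row0, col0) for the 640 x 512 subset:
    # rgb = true_color_composite(m5, m4, m3)
    # subset = rgb[row0:row0 + 512, col0:col0 + 640, :]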

Figure 2: VIIRS true color image for 21 March 2014 at 18:36 UTC over western South America for the sub-sampled region outlined in black in Figure 1. True color composite using VIIRS channels M3, M4, and M5.

Figure 3: Daily accumulated rainfall (in mm) for 21 March 2014 from the NASA Global Precipitation Measurement (GPM) satellite over west-central South America [Acker and Leptoukh, 2007].

Figure 4: Spectral signature curves for various classes based on 36 MODIS channels for the visible through thermal IR portion of the electromagnetic spectrum (distributed as part of class material).

Figure 5: VIIRS true color image for 21 March 2014 at 18:36 UTC with training sample locations for the maximum likelihood algorithm overlaid. The sample selections are color-coded by class: ocean = light blue, vegetation = beige, land 1 = red, land 2 = white, cloud 1 = dark gray, cloud 2 = light gray.

Figure 6: Thematic map produced by the maximum likelihood supervised classification scheme using 7 VIIRS bands and 6 training classes, each containing 700 pixels.

Figure 7: Thematic map from the migrating means clustering algorithm before information classes were assigned and clusters merged. The clustering algorithm was initialized with the 12 cluster-center mean vectors given in Table 2.

Figure 8: Thematic map of the migrating means unsupervised classification after merging clusters 5-7 and 8-9 from Figure 7 into two classes. Information classes were also applied to the final 6 clusters.

Figure 9: Thematic map of the migrating means clustering algorithm used for the hybrid classification. The clustering algorithm was applied independently to three equal-sized sections of the image with the 12 initial cluster centers provided in Table 2.

Figure 10: VIIRS true color image for 21 March 2014 at 18:36 UTC with training sample locations for the supervised component of the hybrid classification procedure overlaid. The sample selections are color-coded by class: ocean = light blue, vegetation = beige, land 1 = red, land 2 = white, cloud 1 = dark gray, cloud 2 = light gray.

Figure 11: Thematic map of the hybrid classification scheme using both unsupervised and supervised classification techniques for 7 VIIRS bands. The unsupervised component was assigned 12 initial cluster centers, and the supervised component used statistics from 6 training classes, each containing 800 pixels.
