Probably 3 Questions Each on:


1 Probably 3 Questions Each on: 1. Radiometric Enhancement 2. Geometric Enhancement 3. Temperature Retrieval 4. The Frequency Domain 5. Classification 6. Principal Component Analysis (PCA)

2 Effects of the Atmosphere

3 Data Quality and Characteristics Resolution Spatial → IFOV → GIFOV Resolution, Pixel Size Spectral → λ Range, Bandwidth, Δλ Temporal → Δt Radiometric → Dynamic Range, SNR Quality, Resolution

4 Distortion Radiometric Distortion: errors in pixel brightness values Instrumentation Wavelength dependence of solar radiation Effect of the atmosphere Geometric Distortion: errors in image geometry (location, dimensions, etc.) Platform and instrument relative motions Scan angles and scan patterns Rotation of the Earth Attitude and altitude variability

5 Radiometric Distortion Relative brightness differs from what exists on the ground The relative brightness of a single pixel from band to band can differ from the true spectral reflectance characteristics on the ground Primarily an effect of the atmosphere

6 Correcting for Atmospheric Effects Factors that influence atmospheric distortion effects: Humidity Temperature Pressure Aerosols, clouds, particulate matter, etc. These characteristics determine the optical thickness (τ) of the atmosphere: I/I₀ = e^(−τ/cos θ), i.e. the transmittance along the solar path is T_θ = e^(−τ/cos θ) and along the view path T_φ = e^(−τ/cos φ) Optical thickness and scattering mechanism (Mie, Rayleigh, and non-selective) in turn determine the radiance available to reach the sensor Corrections are typically theoretically derived with radiative transfer models

7 Bulk Atmospheric Correction Often it is sufficient to assume there are pixel values close to zero in the imagery (e.g. water) In this case, any brightness observed will be a result of atmospheric contributions (primarily path radiance L_P, but also diffuse irradiance E_D) Histograms of each channel will show an offset from zero as a result; the offset is wavelength dependent Subtracting this offset from the entire image will remove the vast majority of atmospheric effects
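
A minimal sketch of this bulk (dark-object subtraction) correction in NumPy, assuming an `image` array of shape (bands, rows, cols); the function and variable names are illustrative:

```python
import numpy as np

def dark_object_subtraction(image):
    """Bulk atmospheric correction: subtract each band's dark-object
    offset (its minimum brightness, e.g. over deep water) from the band."""
    corrected = np.empty_like(image)
    for b in range(image.shape[0]):
        offset = image[b].min()           # haze offset visible in the histogram
        corrected[b] = image[b] - offset  # shift the histogram back toward zero
    return corrected
```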

8 Histogram Matching Matching the histogram of one image to that of another, so the brightness distributions of the two are as close as possible Used in creating image mosaics so as to minimize the difference in brightness at the interface Also used to match a histogram to a pre-specified shape, e.g. a Gaussian function, which concentrates values in the middle range and minimizes extremes (blacks and whites) 2-step process: equalization followed by transformation to the desired shape If z = f(x) equalizes the histogram of the original image, and g is the transformation function of the reference image that creates a uniform histogram, the match is achieved by applying the inverse of g to the equalized values: y = g⁻¹(z) = g⁻¹(f(x))

9 Striping Transfer Characteristics Mismatches between detectors An ideal radiation detector has a consistent transfer function (radiation in → signal out) In reality, different detectors have different transfer functions The same irradiance causes different brightnesses in different detectors 6 detectors per band on MSS, 16 on TM, 6000 on SPOT HRV

10 Destriping Correction of radiometric mismatches can be made by adopting one detector as a reference and adjusting the offsets of the others to match it: y = (σ_d/σ_i) x + m_d − (σ_d/σ_i) m_i, where x = old brightness of a pixel, y = new (destriped) brightness, m_d = reference detector mean brightness, σ_d = reference detector standard deviation, m_i = mean brightness of the detector under consideration, σ_i = standard deviation of the detector under consideration Assumes brightness values don't change significantly over a distance equivalent to one scan of detectors (474 m for Landsats 1, 2, and 3)
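
A minimal sketch of this gain/offset matching, assuming `image` is a 2-D NumPy array whose row i was acquired by detector i % n_detectors; that cyclic layout is an assumption for illustration, not the only possibility:

```python
import numpy as np

def destripe(image, n_detectors, ref=0):
    """Match each detector's mean/std to a reference detector's statistics."""
    out = image.astype(float).copy()
    m_d = image[ref::n_detectors].mean()
    s_d = image[ref::n_detectors].std()
    for i in range(n_detectors):
        rows = image[i::n_detectors]
        m_i, s_i = rows.mean(), rows.std()
        # y = (s_d/s_i) * x + m_d - (s_d/s_i) * m_i
        out[i::n_detectors] = (s_d / s_i) * rows + m_d - (s_d / s_i) * m_i
    return out
```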

11 Image Histogram and Contrast Modification Best overall visual image quality occurs when the histogram is spread over the full range of brightness values Define a mapping function y = f(x) In some cases we want to focus our dynamic range on sections of the histogram

12 Contrast Enhancement (Radiometric) Linear stretch: expands the histogram of the image to span the full dynamic range (0-255) Linear stretch with saturation: expands a subset of the image brightnesses to span the full dynamic range; anything below the minimum subset value is set to 0, anything above the maximum is set to 255 2% contrast stretch: ignores values below the 2nd percentile and above the 98th percentile; a kind of linear stretch with saturation Logarithmic stretch: scales pixels by a log function (used to enhance dark images) Exponential contrast enhancement: scales pixels by an exponential function (used to enhance bright images) Histogram equalization: seeks to modify the image such that the histogram of the modified image conforms to a desired shape or distribution Histogram matching: matching the histogram of one image to that of another so the brightness distributions are as close as possible; used in creating image mosaics to minimize the difference in brightness at the interface; can also match a histogram to a pre-specified shape, e.g. a Gaussian (2-step process: equalization followed by transformation to the desired shape)
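
A sketch of a 2% linear stretch with saturation in NumPy; the percentile limits and the 8-bit output range are the assumed parameters here:

```python
import numpy as np

def percent_stretch(band, low=2, high=98):
    """Linear stretch with saturation: clip at the given percentiles,
    then map the remaining range linearly onto 0-255."""
    lo, hi = np.percentile(band, [low, high])
    stretched = (band.astype(float) - lo) / (hi - lo)  # 0-1 within the subset
    return (np.clip(stretched, 0, 1) * 255).astype(np.uint8)
```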

13 Linear Stretching y = (7/2)x − 7 Values between the minimum and maximum values are scaled linearly from 0 to 255: y = f(x) = ax + b A look-up table (LUT) maps old x values to new y values Not uniquely reversible This was done using IDL in exercise #1 with the tvscl or bytscl commands

14 Geometric Distortion Sources Earth rotation during image acquisition Sensor scan characteristics Wide field of view of some sensors Curvature of the Earth Sensor realities (not perfect) Variations in platform altitude, attitude and velocity Panoramic effects related to image geometry

15 Mathematical Modeling: Example Aspect Ratio Distortion Correction Aspect ratio: relative vertical and horizontal scales (width/height) [Figure: square pixels, AR = 1, vs. pixels half as wide, AR = 0.5] Samples are sometimes acquired too quickly across a scan line compared to the instrument IFOV e.g. Landsat MSS acquires pixels at 56 m intervals with an IFOV of 79 m Landsat effective pixel size is 79 m x 56 m (along-track x across-track) An image displayed on a square grid will be too wide for its height Have to compress in width by a factor of 79/56, or about 1.41 Another example is an aircraft moving too slowly or too quickly compared to the cross-track scan rate

16 Explicit Mapping Functions Ideally, we would like to know the functions that allow us to map from known locations to image locations (x,y to u,v) Usually we don't, so we have to determine them by matching distinct features in the image to known locations on a map, e.g. road intersections, bends in rivers, coastline features, etc. (ground-control points, GCPs) Generally chosen as simple polynomials of the first, second, or third degree Requires 3, 6, and 10 GCPs respectively u = a₀ + a₁x + a₂y + a₃xy + a₄x² + a₅y² v = b₀ + b₁x + b₂y + b₃xy + b₄x² + b₅y²
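
A sketch of estimating the second-degree mapping coefficients from GCPs by least squares (at least the 6 GCPs noted above are needed; the array names are illustrative):

```python
import numpy as np

def fit_mapping(x, y, u, v):
    """Fit u = a0 + a1*x + a2*y + a3*x*y + a4*x^2 + a5*y^2 (and likewise v)
    to ground-control points given as map coords (x, y) and image coords (u, v)."""
    G = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    a, *_ = np.linalg.lstsq(G, u, rcond=None)  # coefficients a0..a5
    b, *_ = np.linalg.lstsq(G, v, rcond=None)  # coefficients b0..b5
    return a, b
```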

17 Correction of Geometric Distortion Mapping Polynomials for Image Projection Assumes a geometrically correct map of the image region is available Need to map known locations on the map to the corresponding locations in the image: u = f(x,y), v = g(x,y)

18 Geometric Enhancement Template A 3 x 3 template positioned over a group of nine image pixels (Richards and Jia, 2006, Fig. 5.1)

19 Image Smoothing (Low Pass Filtering) Reduces high-variability values (such as noise) in an image by sliding a smoothing template across the image Example: mean value smoothing with a 3x3 mean smoothing kernel

20 Convolution A convolution filter applies a template t(m,n) across the image to produce a response r(i,j): the output brightness at a given pixel is a function of the brightness values of the neighboring pixels Convolution filters include: low pass, high pass, median, Sobel, Roberts, etc.
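
A sketch of template convolution in NumPy, shown with a 3 x 3 mean (low-pass) kernel; edge pixels are left unfiltered for simplicity, one of several common border policies, and for the symmetric kernels used here correlation and convolution coincide:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a template over the image; each output pixel is the
    weighted sum of its neighborhood (assigned to the center pixel)."""
    M, N = kernel.shape
    pm, pn = M // 2, N // 2
    out = image.astype(float).copy()
    for i in range(pm, image.shape[0] - pm):
        for j in range(pn, image.shape[1] - pn):
            window = image[i - pm:i + pm + 1, j - pn:j + pn + 1]
            out[i, j] = np.sum(window * kernel)
    return out

mean_kernel = np.full((3, 3), 1 / 9.0)  # 3x3 mean value smoothing
```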

21 Interpolation Nearest Neighbor Resampling: assigns the value of the actual pixel that is closest to the grid point in the image; the preferred method for classification, since it leaves the actual pixel values intact and just rearranges them in position to match the desired image geometry Bilinear Interpolation: uses 3 linear interpolations over the four pixels surrounding the image grid point to compute the value at the grid position Cubic Convolution: cubic polynomials are fit along four lines of four pixels, and a fifth is fit to the four interpolants
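
A sketch of bilinear interpolation at a fractional pixel position (row r, column c) in the image interior, built from the three linear interpolations described above:

```python
import numpy as np

def bilinear(image, r, c):
    """Interpolate a value at fractional position (r, c) from the four
    surrounding pixels: two interpolations along rows, one between them."""
    r0, c0 = int(np.floor(r)), int(np.floor(c))
    dr, dc = r - r0, c - c0
    top = (1 - dc) * image[r0, c0] + dc * image[r0, c0 + 1]             # top row
    bottom = (1 - dc) * image[r0 + 1, c0] + dc * image[r0 + 1, c0 + 1]  # bottom row
    return (1 - dr) * top + dr * bottom                                 # between rows
```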

22 Geometric Enhancement Enhances geometric detail in an image, as opposed to radiometric detail Changes in pixel brightness are driven by geometric considerations, and thus are directly influenced by the character of the surrounding pixels Spatial interdependence of pixel values leads to variations in the perceived image geometric detail Operations occur over neighborhoods Our current focus will be on the image domain as opposed to the spatial frequency domain Image domain: operations consider the characteristics of the image itself Spatial frequency domain: operations consider the rate at which image intensity values are changing in the image domain

23 Common Enhancement Techniques Smoothing: suppression of small, frequently occurring variability within an image Edge detection and enhancement: increasing contrast or highlighting edges in imagery through saturation Line detection: enhances single-pixel-width image features, increasing their contrast or highlighting them through saturation Template methods: a window is defined and moved over an image row by row and column by column, with operations performed on the pixels contained within that window; the value resulting from the operation is assigned to the center pixel of the window

24 Edge Detection Image Enhancement Edge enhancement increases geometric detail in an image Edges are very sharp gradients in brightness indicating boundaries of features in an image Accomplished by detecting edges and adding them back to the original image to increase contrast, or by using saturated overlays (black or white) on the original image to define borders Why do we care about edges? Three general approaches to edge detection: Use of an edge-detection template Calculating spatial derivatives (spatial gradients) Subtracting a smoothed image from its original

25 Image Smoothing (Low Pass Filtering) Reduces high-variability values (such as noise) in an image by sliding a smoothing template across the image Example: mean value smoothing, r(i,j) = (1/MN) Σ_(m=1..M) Σ_(n=1..N) f(m,n) Pixel r(i,j) is assigned the average of all values within the M x N template

26 Time Series Smoothing Using Moving Average [Figure: original precipitation time series (Precip., mm vs. Day), and the smoothed series using moving-average intervals of 3 and 5]
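
A sketch of the moving-average smoothing shown in the figure, assuming an odd interval so the window centers on each sample; the series endpoints are left unsmoothed:

```python
import numpy as np

def moving_average(series, interval=3):
    """Smooth a 1-D series: each value becomes the mean of the
    `interval` samples centered on it."""
    series = np.asarray(series, dtype=float)
    half = interval // 2
    out = series.copy()
    for t in range(half, len(series) - half):
        out[t] = series[t - half:t + half + 1].mean()
    return out
```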

27 Linear Edge Detecting Templates A template that detects vertical edges in an image is given by t(m,n) = [−1 0 1; −1 0 1; −1 0 1] The central value is the accumulated difference horizontally between pixels in 3 adjacent rows

28 Linear Edge Detecting Templates The template is referred to as a kernel The systematic sequential application of that template across the image is referred to as convolution of the kernel Other convolution kernels for different applications: Vertical (central value is the horizontally accumulated difference), Horizontal (central value is the vertically accumulated difference), Diagonal NW/SE edge (central value is the accumulated difference across a NW/SE line), Diagonal NE/SW edge (central value is the accumulated difference across a NE/SW line)

29 Spatial Derivatives Techniques: Gradient Operators For edge detection, we are typically only concerned with the magnitude of change, given by |∇f| = [(∂f/∂x)² + (∂f/∂y)²]^½, where ∇f = (∂f/∂x, ∂f/∂y) In other words, the magnitude of the gradient vector is the vector (Pythagorean) sum of the gradient in the x direction and the gradient in the y direction The above is for continuous gradients; for discrete gradients (i.e. across pixels in imagery), we replace the derivatives with differences Two difference-based spatial operators we will discuss are the Roberts operator and the Sobel operator (each is a function in ENVI)

30 The Roberts Operator The discrete components of the derivative on the previous chart are given by f₁ = f(i,j) − f(i+1,j+1) and f₂ = f(i+1,j) − f(i,j+1), evaluated at the point (i+½, j+½) In other words: we assess the gradient across the two diagonals as a means to determine the edges Since a local gradient is computed, it is necessary to specify a threshold value to accept edge gradients and suppress minor gradients Detects horizontal, vertical and diagonal edges and assigns them to the upper left sides of the edges

31 Application of Roberts Operator

32 The Sobel Operator Computes the discrete gradient in the horizontal and vertical directions at the pixel location (i, j) with a pair of 3 x 3 operators [Figure: 3 x 3 pixel neighborhood spanning rows i−1 to i+1 and columns j−1 to j+1] This approach places edge locations at pixels, but comes at greater computational cost

33 Sobel Operator Sobel Operator Templates: horizontal gradient [−1 0 1; −2 0 2; −1 0 1] and vertical gradient [−1 −2 −1; 0 0 0; 1 2 1] [Figure: original pixel values, and the values after applying the Roberts operator (or the above templates)]
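
A sketch of the Sobel operator using the standard templates above; the gradient magnitude is the Pythagorean sum of the two responses, and edge pixels of the image are skipped:

```python
import numpy as np

def sobel_magnitude(image):
    """Gradient magnitude from the two Sobel templates (interior pixels only)."""
    gx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gy = gx.T  # the vertical-gradient template is the transpose
    rows, cols = image.shape
    mag = np.zeros((rows, cols))
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            window = image[i - 1:i + 2, j - 1:j + 2]
            dx = np.sum(window * gx)      # horizontal difference
            dy = np.sum(window * gy)      # vertical difference
            mag[i, j] = np.hypot(dx, dy)  # sqrt(dx^2 + dy^2)
    return mag
```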

34 Subtractive Smoothing Edge Enhancement Smoothing removes the high-frequency aspects of an image Subtracting a smoothed image from its original leaves only the high-frequency information (edges and lines) Adding the high-frequency information back to the original image amplifies the effects of these high-frequency, more detailed aspects of the image
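
A sketch of this subtract-and-add-back enhancement (essentially unsharp masking), reusing the convolve2d/mean_kernel sketch above as the smoother; the weight k is an illustrative parameter:

```python
import numpy as np

def subtractive_edge_enhance(image, smooth_fn, k=1.0):
    """Edge enhancement: original + k * (original - smoothed)."""
    image = image.astype(float)
    high_freq = image - smooth_fn(image)  # edges and lines only
    return image + k * high_freq          # add the detail back, amplified

# e.g. enhanced = subtractive_edge_enhance(img, lambda x: convolve2d(x, mean_kernel))
```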

35 Geometric Properties Texture Texture is an expression of the local spatial structure in digital images, based on the spatial distribution of variation in tones or colors, e.g. rough or smooth It differs from pattern in that pattern is structure in an image that derives from spatial regularity in the variability Assessment of texture requires some means of quantifying the variability of tones or colors in an image A common approach is the Grey Level Co-occurrence Matrix (GLCM): a matrix defined over an image as the distribution of co-occurring values at a given offset; it employs a 2-dimensional histogram

36 Texture and the GLCM A GLCM is a two-dimensional histogram of grey levels for a pair of pixels (reference, neighbor) which are separated by a fixed spatial relationship It approximates the joint probability distribution of a pair of pixels [Figure: GLCM (right) for a 5x5 pixel image, showing the number of occurrences of the corresponding row and column grey values for all cases (left) in which the row value occurs one pixel beneath the corresponding column value] For L grey values the GLCM will be an L x L matrix

37 Texture In more general terms, g(f₁, f₂ | h, θ) refers to the relative occurrence of pixels with grey levels f₁ and f₂ separated by a distance of h pixels in the direction θ Relative occurrence is the number of times this combination occurs in a specified direction θ for a given distance h, divided by the total possible number of grey level pairs GLCM dimensions (L x L) can be large: 8-bit dynamic range: L x L = 255² = 65,025 elements for each value of h and θ; 10-bit dynamic range: L x L = 1024² = 1,048,576 elements for each value of h and θ It is helpful to describe/quantify the amount of texture with a single value or set of values that captures the variability of the GLCM: Entropy, Energy [Figure: ETM+ image of greater Canberra, Australia (Richards and Jia, Fig. 5.12a)]
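
A sketch of building a GLCM for one offset (here h = 1 pixel directly below, matching the earlier 5 x 5 example) and summarizing it with energy and entropy; it assumes integer grey levels in the range 0 to levels − 1:

```python
import numpy as np

def glcm(image, levels, dr=1, dc=0):
    """Count co-occurrences of (reference, neighbor) grey-level pairs
    for a fixed offset (dr, dc); default: neighbor one pixel below."""
    g = np.zeros((levels, levels))
    rows, cols = image.shape
    for i in range(rows - dr):
        for j in range(cols - dc):
            g[image[i, j], image[i + dr, j + dc]] += 1
    return g / g.sum()  # normalize to a joint probability estimate

def energy(p):
    return np.sum(p**2)

def entropy(p):
    nz = p[p > 0]
    return -np.sum(nz * np.log2(nz))
```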

38 Texture Energy decreases with lag (h) and entropy increases: texture diminishes with distance Forest, suburbs, grassland and mountains are distinguishable by their texture Greatest texture is in the grass Mountains and suburbs have similar textures and are difficult to distinguish at these scales

39 Apparent Surface Temperature Instrument Characteristics / Image Characteristics [Figure: transfer function mapping input radiant intensity (L_min to L_max) to output signal (DN, 0 to 255), with bias (offset) and gain defining the radiometric resolution/dynamic range] L = Bias + (Gain × DN)

40 Temperature Retrieval To calculate surface temperature: Correct the data for geometric distortions Correct the data for radiometric distortions Convert DN to radiance Calculate the temperature in kelvin Convert kelvin to degrees Celsius
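
A sketch of the chain from DN to temperature, using the linear transfer function above and the inverse Planck relation T = K2 / ln(K1/L + 1); gain, bias, K1, and K2 are sensor-specific calibration constants supplied with the data, so they are parameters here rather than fixed values:

```python
import numpy as np

def brightness_temperature(dn, gain, bias, k1, k2):
    """DN -> radiance via L = bias + gain*DN, then radiance -> at-sensor
    brightness temperature via the inverse Planck function."""
    L = bias + gain * dn                 # spectral radiance
    t_kelvin = k2 / np.log(k1 / L + 1)   # T = K2 / ln(K1/L + 1)
    return t_kelvin, t_kelvin - 273.15   # kelvin and degrees Celsius
```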

41 Sea Surface Temperature (SST) Various techniques: Split window: uses 2 channels (windows) Dual window: uses 2 channels Triple window: uses 3 channels AVHRR sea surface temperature: Global coverage Good resolution Nearly 30 years of data Reasonably stable surface emissivity characteristics Channel 3: 3.55-3.93 µm; Channel 4: 10.3-11.3 µm; Channel 5: 11.5-12.5 µm

42 Multi-Channel Sea Surface Temperature (MCSST) Algorithms AVHRR Split Window Algorithm: T_s = a₀ + a₁·band4 + a₂(band4 − band5) + a₃(band4 − band5)(sec(φ) − 1) a₀, a₁, a₂, and a₃ are constants that are theoretically and empirically derived; they differ for night and day, and from AVHRR instrument to AVHRR instrument φ is the satellite zenith angle Dual Window Algorithm: T_s = a₀ + a₁·band4 + a₂(band3 − band4) + a₃(sec(φ) − 1) Triple Window Algorithms: various combinations of bands 3, 4, and 5 for different instruments
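
A sketch of the split-window formula above; the coefficients are left as parameters, since the real a₀..a₃ depend on the specific AVHRR instrument and on day vs. night:

```python
import numpy as np

def mcsst_split_window(t4, t5, sat_zenith_rad, a0, a1, a2, a3):
    """Split-window MCSST: Ts = a0 + a1*T4 + a2*(T4 - T5)
    + a3*(T4 - T5)*(sec(phi) - 1), with brightness temps from bands 4 and 5."""
    sec_phi = 1.0 / np.cos(sat_zenith_rad)
    return a0 + a1 * t4 + a2 * (t4 - t5) + a3 * (t4 - t5) * (sec_phi - 1.0)
```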

43 Fourier Transforms: The Frequency Domain Any spatial or temporal signal has an equivalent frequency representation What do frequencies mean in an image? High frequencies correspond to pixel values that change rapidly across the image (text, texture, leaves, roads, etc.) Strong low frequency components correspond to large-scale features in the image (e.g. a single, homogeneous object that dominates the image, or slowly varying character)

44 Fourier Transforms: The Frequency Domain Image (or spatial) domain: each image value represents the intensity at that position, and positions are preserved between an original image and the result of an operation; operations consider the pixel brightness values of the image itself and are performed on the image directly Frequency domain: each value at a position in the transform represents the amount by which the intensity values in the image vary over a specific distance related to that position; operations consider the rate at which pixel brightness values change in the image domain Procedure: transform the image to its frequency representation, perform the image processing there, then compute the inverse transform back to the spatial domain

45 Fourier Transforms Transformations in the Frequency Domain Fourier theory states that any signal, in our case visual images, can be expressed as a sum of a series of sinusoids; in the case of imagery, these are sinusoidal variations in brightness across the image Each sinusoidal pattern can be captured in a single Fourier term that encodes (1) the spatial frequency, (2) the magnitude (positive or negative), and (3) the phase Spatial frequency: the number of transitions through the cycle from bright to dark across a given distance Magnitude: the contrast, or range between the darkest and brightest peaks of the image; negative magnitude is contrast reversal Phase: position along the black/white continuum [Figure: sinusoids in the horizontal direction with low and high spatial frequency]

46 Fourier Transform The Fourier transform is a mathematical technique for decomposing an image into its different spatial frequencies The Fourier theorem states that any function f(x) can be expressed as the sum of sines and/or cosines of different spatial frequencies: f(x) = a₀/2 + Σ_n [a_n cos(2πnx/T) + b_n sin(2πnx/T)]

47 Fourier Transforms Transformations in the Frequency Domain The Fourier transform encodes all of the spatial frequencies present in an image simultaneously; the component at the origin (zero frequency) corresponds to the mean image brightness A signal containing only a single spatial frequency f is plotted as a single peak at point f along the spatial frequency axis, the height of that peak corresponding to the amplitude, or contrast, of that sinusoidal signal The Fourier transform also plots a mirror image of the spatial frequency plot reflected across the origin, with spatial frequency increasing in both directions from the origin; these two plots are always mirror-image reflections of each other, with identical peaks at f and −f

48 Frequency and Image Domain Filters Low-pass Frequency domain: filter out the higher frequency components Image domain: convolve a smoothing kernel (mean, median, etc.) High-pass Frequency domain: filter out the lower frequency components Image domain: subtract the smoothed image from the original Band-pass/Band-cut Frequency domain: filter out all frequencies that fall outside a specified range (band-pass) or that fall within a specified frequency range (band-cut)
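
A sketch of frequency-domain low-pass filtering with NumPy's FFT: transform, zero out frequencies beyond a cutoff, and invert back to the image domain; the circular cutoff is an illustrative choice:

```python
import numpy as np

def fft_lowpass(image, cutoff):
    """Keep only spatial frequencies within `cutoff` of the origin."""
    F = np.fft.fftshift(np.fft.fft2(image))     # origin moved to array center
    rows, cols = image.shape
    r = np.arange(rows)[:, None] - rows // 2
    c = np.arange(cols)[None, :] - cols // 2
    mask = (r**2 + c**2) <= cutoff**2           # circular low-pass mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```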

49 There are Spectral Classes Within Clusters [Figure: representation of information classes by sets of spectral classes (Fig. 3.6 from Richards and Jia, 2006)] Rarely this clean Additional dimensions help discriminate further when there is overlap

50 Classification Unsupervised classification: the assigning of pixels of an image to spectral classes without prior knowledge of their existence or names Performed using clustering: the methods determine the location and the number of classes in the data and the class of each pixel The classes are then identified using reference data (maps, field observations, etc.) Useful for identifying the spectral classes of an image before further analysis (e.g. supervised classification) Supervised classification: a number of statistical and non-statistical methods are available Statistical methods assume that each spectral class has a particular probability distribution (e.g. Gaussian) in multispectral space Consists of three broad phases: (1) selection of training pixels (field data, maps, etc.), (2) computation of the mean vector and covariance matrix for each class, and (3) assignment of each pixel to the class with the highest probability

51 Supervised Classification Types of supervised classification: Maximum likelihood Minimum distance Parallelepiped (par-al-lel-e-pi-ped) Context classification Others Two underlying principles: Probability distribution models for the classes of interest Partitioning of multi-spectral space into class-specific regions using optimally located surfaces

52 Six Steps in Supervised Classification 1. Decide on the set of ground cover types into which the image is to be classified. 2. Choose representative pixels (training data) for each class, based on knowledge of the region acquired either through ancillary information or interpretation of the imagery. 3. Use the training data to estimate the parameters of the particular classifier algorithm to be used: the properties that define a probability model, or the equations that define partitions in multi-spectral space; this is the signature of that class. 4. Use the trained classifier to classify every pixel in the image into one of the information classes. 5. Produce tables or thematic maps that summarize the results of the classification. 6. Assess the accuracy of the classification using a testing dataset.

53 Maximum Likelihood Classification The most common supervised classification with remote sensing imagery We define a vector x that is the set of brightness values of a pixel in multi-spectral space (one component per band) This vector has a certain probability of belonging to each of the M spectral classes ω_i in the image: p(ω_i|x), i = 1, 2, ..., M x is classified as follows: x ∈ ω_i if p(ω_i|x) > p(ω_j|x) for all j ≠ i i.e. the pixel is assigned to the class for which its probability of membership is greatest
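
A sketch of Gaussian maximum likelihood classification: per-class means and covariances are assumed to have been estimated from training pixels, equal priors are assumed, and each pixel goes to the class with the largest log-likelihood:

```python
import numpy as np

def ml_classify(pixels, means, covs):
    """pixels: (n, bands); means[i]: (bands,); covs[i]: (bands, bands).
    Returns the index of the most likely class for each pixel."""
    n_classes = len(means)
    scores = np.empty((pixels.shape[0], n_classes))
    for i in range(n_classes):
        inv = np.linalg.inv(covs[i])
        _, logdet = np.linalg.slogdet(covs[i])
        d = pixels - means[i]
        # log of the Gaussian density, dropping constants shared by all classes
        maha = np.einsum('nj,jk,nk->n', d, inv, d)  # (x-m)^t S^-1 (x-m)
        scores[:, i] = -0.5 * (logdet + maha)
    return np.argmax(scores, axis=1)
```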

54 Minimum Distance Classification A large number of pixels is needed for maximum likelihood in order to calculate the mean vector and covariance matrix for each spectral class; it requires a sufficient number of training pixels for each class Minimum distance classification does not use covariance information; it relies instead on the mean positions of the spectral classes, so it can be performed when the number of training pixels is limited Training data are used to determine the class means; classification is performed by placing each pixel in the class of the nearest mean: x ∈ ω_i if d(x, m_i)² < d(x, m_j)² for all j ≠ i, where d(x, m_i)² = (x − m_i)ᵗ(x − m_i) Distance thresholds can also be applied [Figure: class means for water, vegetation, and soil in spectral space]
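
A sketch of the nearest-mean rule, computing the squared Euclidean distance d(x, m_i)² to every class mean:

```python
import numpy as np

def min_distance_classify(pixels, means):
    """pixels: (n, bands); means: (classes, bands).
    Assign each pixel to the class with the nearest mean."""
    # squared distances, shape (n, classes): (x - m)^t (x - m)
    d2 = ((pixels[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return np.argmin(d2, axis=1)
```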

55 Parallelepiped Classification A simple classification technique based on histograms of the individual spectral components in the training data [Figure: histograms for the two components of a 2-D set of training data corresponding to a single spectral class (Fig. 8.5 from Richards and Jia, 2006)] The upper and lower boundaries of the histograms define the edges of a 2-D parallelepiped in spectral space

56 Context Classification Maximum likelihood, minimum distance, and parallelepiped classifiers are all pixel-specific Context classification also considers the characteristics of neighboring pixels: Sensors acquire some energy from adjacent pixels The scale of ground cover variability is usually larger than the pixel size There is a greater probability that a pixel will be similar to its neighboring pixel than to one far away Can reduce misclassifications due to noisy data

57 Unsupervised Classification Unsupervised classification (clustering) is the partitioning of remote sensing data into different spectral classes Successful application of supervised classification depends on how correctly we delineate the training classes It is not easy to identify unimodal groups by hand, so clustering methods are practical alternatives; they are applied to identify the spectral classes in the data

58 Unsupervised Classification Methods Clustering: grouping of pixels in multispectral space so that the pixels in each cluster are spectrally similar Euclidean distance can be used to measure the similarity between pixels: d(x₁, x₂) = [Σ_(n=1..N) (x₁ₙ − x₂ₙ)²]^½, where N = the number of spectral bands (Fig. from Richards and Jia, 2006)

59 Unsupervised Classification Methods The sum of squared error (SSE) can be used to measure the quality of clusters: it accumulates the squared distance to the center within each cluster and then sums over all clusters A clustering is favorable when this total distance is small Evaluating all possible clusterings would require calculating a very large number of SSE values

60 Unsupervised Classification Methods Migrating means (iterative optimization, Isodata) Based on assigning the pixel vectors to candidate clusters and moving the cluster centers until a minimum SSE is reached

61 Unsupervised Classification Methods Steps for the migrating means method: Select N points in multispectral space to serve as initial cluster centers m_i, i = 1, 2, ..., N; the centers should be spaced uniformly over the data, and the number of clusters must be chosen beforehand Assign each pixel at location x to the nearest cluster, based on Euclidean distance Calculate new means m′_i, i = 1, 2, ..., N from the clustering that resulted from the previous step If m′_i = m_i the procedure is terminated; otherwise m_i is redefined as m′_i and the assignment step is repeated
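
A sketch of the migrating-means loop above (essentially k-means); random initial centers are an illustrative choice rather than the uniform spacing the slide recommends, and the empty-cluster edge case is ignored:

```python
import numpy as np

def migrating_means(pixels, n_clusters, max_iter=100, seed=0):
    """pixels: (n, bands). Iterate assign/update until the means stop moving."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), n_clusters, replace=False)].astype(float)
    for _ in range(max_iter):
        # assign each pixel to the nearest current center (Euclidean distance)
        d2 = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = np.argmin(d2, axis=1)
        new_centers = np.array([pixels[labels == i].mean(axis=0)
                                for i in range(n_clusters)])
        if np.allclose(new_centers, centers):  # m'_i == m_i: converged
            break
        centers = new_centers
    return labels, centers
```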

62 Unsupervised Classification Methods Once the clustering is completed, the clusters can be examined: Clusters that contain very few points will not be useful as training classes for supervised classification Clusters that are very close together may need to be merged [Figure: clustering by the Isodata method (Fig. from Richards and Jia, 2006)]

63 Spectral Unmixing

64 Spectral Unmixing M = the number of endmembers N = the number of spectral bands f_m = the fraction of coverage for a particular class m, m = 1, ..., M R_n = the observed reflectance in the nth spectral band, n = 1, ..., N a_n,m = the spectral reflectance of the nth band of the mth endmember R_n = Σ_(m=1..M) f_m a_n,m + ξ_n, n = 1, ..., N, where ξ_n is the error in band n The observed reflectance in each band is the linear sum of the reflectances of the endmembers (within the uncertainty expressed in the error term)

65 Spectral Unmixing The observed reflectance in each band is the linear sum of the reflectances of the endmembers (within the uncertainty expressed in the error term): R_n = Σ_(m=1..M) f_m a_n,m + ξ_n, n = 1, ..., N In matrix form: R = Af + ξ, where f is a column vector of size M (the fraction of the pixel occupied by each class), R and ξ are column vectors of size N (the reflectance of the pixel at each wavelength, and the error), and A is an N x M matrix whose columns are the endmember spectral signatures

66 Spectral Unmixing For the unmixing equation R = Af + ξ, we try to find values of f that minimize ξ If we assume we have the correct set of endmembers, the equation simplifies to R = Af We solve through error minimization using the (Moore-Penrose) pseudo-inverse: f = (AᵗA)⁻¹AᵗR A physically meaningful solution requires: (a) the sum of the f_m values is 1, and (b) 0 < f_m < 1 for all m
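
A sketch of the unconstrained least-squares inversion f = (AᵗA)⁻¹AᵗR; note it does not enforce the sum-to-one and 0-1 constraints, which in practice need a constrained solver:

```python
import numpy as np

def unmix(R, A):
    """R: (N,) observed reflectances; A: (N, M) endmember signatures by column.
    Returns the endmember fractions f minimizing ||R - A f||."""
    return np.linalg.inv(A.T @ A) @ A.T @ R  # Moore-Penrose pseudo-inverse of A

# equivalent and numerically safer: f, *_ = np.linalg.lstsq(A, R, rcond=None)
```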

67 Principal Components Analysis (PCA) PCA is a technique that transforms the original vector image data into a smaller set of uncorrelated variables The new variables represent most of the image information and are easier to interpret Principal components are derived such that the first PC accounts for as much of the variation in the original data as possible; the second (orthogonal to the first) accounts for most of the remaining variation PCA is useful in reducing the dimensionality (number of bands) used for analysis The minimum noise fraction (MNF) method can be used with hyperspectral data for noise reduction

68 Principal Components Seek a new coordinate system in vector space in which the data can be represented without correlation, i.e. the covariance matrix is diagonal y = Gx = Dᵗx

69 Eigenvalues and Eigenvectors Eigenvalues (λ) and eigenvectors (x) of a matrix M are scalar and vector terms such that multiplication of x by λ has the same result as the matrix transformation of x by M: Mx = λx (i.e. y = λx is equivalent to y = Mx) Mx − λx = 0 → (M − λI)x = 0, where I is the identity matrix For the above to be true with x ≠ 0, we need det(M − λI) = 0 This is the characteristic equation from which the eigenvalues (λ) can be determined When each λ is plugged back into (M − λI)x = 0, the eigenvectors (x) can be determined

70 Principal Component Transformation The eigenvectors determine the transformation matrix that produces each principal component The transformation matrix is the transposed matrix of eigenvectors (D is the matrix of eigenvectors): y = Gx = Dᵗx The eigenvalue describes the percentage of the variance that is contained within each principal component The higher the eigenvalue as a fraction of the sum of the eigenvalues, the more relative information is contained in the corresponding principal component The nth component (n = 1, ..., N) represents ζ_n of the variance, where ζ_n = λ_n / (λ₁ + λ₂ + ... + λ_N)

71 Principal Component Transformation Steps 1. Compute the covariance matrix of the data set in vector space. 2. Calculate the eigenvalues of the covariance matrix. 3. The diagonal matrix with the eigenvalues along the diagonal will be the covariance matrix of the transformed axes (principal component axes). 4. Find the eigenvector g_i for each individual λ of interest by solving [S_x − λ_i I]g_i = 0 for that λ. 5. Transpose the matrix of eigenvectors D to produce the principal component transformation matrix G. The number of rows in G will equal the number of spectral dimensions from which the eigenvalues and eigenvectors were calculated. 6. The original data values (in the original x coordinate system) are multiplied by the rows of G (g₁, g₂, ..., g_n, where n is the number of dimensions in vector space) to produce coordinates in the transformed dimensions (new y coordinate system); each axis in the transformed coordinate system (each principal component) comes from its corresponding row in G. 7. Steps 4-6 are repeated until the desired number of principal component transformations have been executed.
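
A sketch of the full transformation using NumPy's symmetric eigensolver, which carries out steps 1-6 at once; pixels are assumed to be stored one observation per row:

```python
import numpy as np

def principal_components(pixels):
    """pixels: (n, bands). Returns PC scores, eigenvalues,
    and the per-component variance fractions (zeta)."""
    m = pixels.mean(axis=0)
    S = np.cov(pixels, rowvar=False)             # covariance matrix S_x
    eigvals, D = np.linalg.eigh(S)               # columns of D are eigenvectors
    order = np.argsort(eigvals)[::-1]            # sort by decreasing variance
    eigvals, D = eigvals[order], D[:, order]
    y = (pixels - m) @ D                         # y = D^t x, applied row-wise
    variance_fraction = eigvals / eigvals.sum()  # zeta_n = lambda_n / sum(lambda)
    return y, eigvals, variance_fraction
```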

72 Mean Vector and Covariance The covariance matrix (S_x) is a matrix of covariance values that describes the scatter or spread between variables (computation of the covariance matrix: Table 8.1 from Richards and Jia, 2006) S_x = [1/(n − 1)] Σ_(i=1..n) (x_i − m)(x_i − m)ᵗ, where the mean vector m = (1/n) Σ_(i=1..n) x_i


More information

Computer Vision I - Filtering and Feature detection

Computer Vision I - Filtering and Feature detection Computer Vision I - Filtering and Feature detection Carsten Rother 30/10/2015 Computer Vision I: Basics of Image Processing Roadmap: Basics of Digital Image Processing Computer Vision I: Basics of Image

More information

POSITIONING A PIXEL IN A COORDINATE SYSTEM

POSITIONING A PIXEL IN A COORDINATE SYSTEM GEOREFERENCING AND GEOCODING EARTH OBSERVATION IMAGES GABRIEL PARODI STUDY MATERIAL: PRINCIPLES OF REMOTE SENSING AN INTRODUCTORY TEXTBOOK CHAPTER 6 POSITIONING A PIXEL IN A COORDINATE SYSTEM The essential

More information

CS4733 Class Notes, Computer Vision

CS4733 Class Notes, Computer Vision CS4733 Class Notes, Computer Vision Sources for online computer vision tutorials and demos - http://www.dai.ed.ac.uk/hipr and Computer Vision resources online - http://www.dai.ed.ac.uk/cvonline Vision

More information

This paper describes an analytical approach to the parametric analysis of target/decoy

This paper describes an analytical approach to the parametric analysis of target/decoy Parametric analysis of target/decoy performance1 John P. Kerekes Lincoln Laboratory, Massachusetts Institute of Technology 244 Wood Street Lexington, Massachusetts 02173 ABSTRACT As infrared sensing technology

More information

Part 3: Image Processing

Part 3: Image Processing Part 3: Image Processing Image Filtering and Segmentation Georgy Gimel farb COMPSCI 373 Computer Graphics and Image Processing 1 / 60 1 Image filtering 2 Median filtering 3 Mean filtering 4 Image segmentation

More information

Image Processing

Image Processing Image Processing 159.731 Canny Edge Detection Report Syed Irfanullah, Azeezullah 00297844 Danh Anh Huynh 02136047 1 Canny Edge Detection INTRODUCTION Edges Edges characterize boundaries and are therefore

More information

Reconstruction of Images Distorted by Water Waves

Reconstruction of Images Distorted by Water Waves Reconstruction of Images Distorted by Water Waves Arturo Donate and Eraldo Ribeiro Computer Vision Group Outline of the talk Introduction Analysis Background Method Experiments Conclusions Future Work

More information

A MAXIMUM NOISE FRACTION TRANSFORM BASED ON A SENSOR NOISE MODEL FOR HYPERSPECTRAL DATA. Naoto Yokoya 1 and Akira Iwasaki 2

A MAXIMUM NOISE FRACTION TRANSFORM BASED ON A SENSOR NOISE MODEL FOR HYPERSPECTRAL DATA. Naoto Yokoya 1 and Akira Iwasaki 2 A MAXIMUM NOISE FRACTION TRANSFORM BASED ON A SENSOR NOISE MODEL FOR HYPERSPECTRAL DATA Naoto Yokoya 1 and Akira Iwasaki 1 Graduate Student, Department of Aeronautics and Astronautics, The University of

More information

IMAGE ANALYSIS, CLASSIFICATION, and CHANGE DETECTION in REMOTE SENSING

IMAGE ANALYSIS, CLASSIFICATION, and CHANGE DETECTION in REMOTE SENSING SECOND EDITION IMAGE ANALYSIS, CLASSIFICATION, and CHANGE DETECTION in REMOTE SENSING ith Algorithms for ENVI/IDL Morton J. Canty с*' Q\ CRC Press Taylor &. Francis Group Boca Raton London New York CRC

More information

Classify Multi-Spectral Data Classify Geologic Terrains on Venus Apply Multi-Variate Statistics

Classify Multi-Spectral Data Classify Geologic Terrains on Venus Apply Multi-Variate Statistics Classify Multi-Spectral Data Classify Geologic Terrains on Venus Apply Multi-Variate Statistics Operations What Do I Need? Classify Merge Combine Cross Scan Score Warp Respace Cover Subscene Rotate Translators

More information

Lecture 7. Spectral Unmixing. Summary. Mixtures in Remote Sensing

Lecture 7. Spectral Unmixing. Summary. Mixtures in Remote Sensing Lecture 7 Spectral Unmixing Summary This lecture will introduce you to the concepts of linear spectral mixing. This methods is sometimes also called: Spectral Mixture Analysis (SMA: Wessman et al 1997)

More information

Quality assessment of RS data. Remote Sensing (GRS-20306)

Quality assessment of RS data. Remote Sensing (GRS-20306) Quality assessment of RS data Remote Sensing (GRS-20306) Quality assessment General definition for quality assessment (Wikipedia) includes evaluation, grading and measurement process to assess design,

More information

Chapter - 2 : IMAGE ENHANCEMENT

Chapter - 2 : IMAGE ENHANCEMENT Chapter - : IMAGE ENHANCEMENT The principal objective of enhancement technique is to process a given image so that the result is more suitable than the original image for a specific application Image Enhancement

More information

CS4442/9542b Artificial Intelligence II prof. Olga Veksler

CS4442/9542b Artificial Intelligence II prof. Olga Veksler CS4442/9542b Artificial Intelligence II prof. Olga Veksler Lecture 8 Computer Vision Introduction, Filtering Some slides from: D. Jacobs, D. Lowe, S. Seitz, A.Efros, X. Li, R. Fergus, J. Hayes, S. Lazebnik,

More information

COMPUTER AND ROBOT VISION

COMPUTER AND ROBOT VISION VOLUME COMPUTER AND ROBOT VISION Robert M. Haralick University of Washington Linda G. Shapiro University of Washington A^ ADDISON-WESLEY PUBLISHING COMPANY Reading, Massachusetts Menlo Park, California

More information

Image Enhancement in Spatial Domain (Chapter 3)

Image Enhancement in Spatial Domain (Chapter 3) Image Enhancement in Spatial Domain (Chapter 3) Yun Q. Shi shi@njit.edu Fall 11 Mask/Neighborhood Processing ECE643 2 1 Point Processing ECE643 3 Image Negatives S = (L 1) - r (3.2-1) Point processing

More information

ECG782: Multidimensional Digital Signal Processing

ECG782: Multidimensional Digital Signal Processing Professor Brendan Morris, SEB 3216, brendan.morris@unlv.edu ECG782: Multidimensional Digital Signal Processing Spring 2014 TTh 14:30-15:45 CBC C313 Lecture 10 Segmentation 14/02/27 http://www.ee.unlv.edu/~b1morris/ecg782/

More information

Introduction. Introduction. Related Research. SIFT method. SIFT method. Distinctive Image Features from Scale-Invariant. Scale.

Introduction. Introduction. Related Research. SIFT method. SIFT method. Distinctive Image Features from Scale-Invariant. Scale. Distinctive Image Features from Scale-Invariant Keypoints David G. Lowe presented by, Sudheendra Invariance Intensity Scale Rotation Affine View point Introduction Introduction SIFT (Scale Invariant Feature

More information

Image Enhancement. Digital Image Processing, Pratt Chapter 10 (pages ) Part 1: pixel-based operations

Image Enhancement. Digital Image Processing, Pratt Chapter 10 (pages ) Part 1: pixel-based operations Image Enhancement Digital Image Processing, Pratt Chapter 10 (pages 243-261) Part 1: pixel-based operations Image Processing Algorithms Spatial domain Operations are performed in the image domain Image

More information

Image Processing. Traitement d images. Yuliya Tarabalka Tel.

Image Processing. Traitement d images. Yuliya Tarabalka  Tel. Traitement d images Yuliya Tarabalka yuliya.tarabalka@hyperinet.eu yuliya.tarabalka@gipsa-lab.grenoble-inp.fr Tel. 04 76 82 62 68 Noise reduction Image restoration Restoration attempts to reconstruct an

More information