Digital Image Processing - Summary (6 February)

1 Point Operations (V1)

1.1 Definition / 1.2 Process
The image has M rows (x) and N columns (y). Homogeneous point operations do not depend on the pixel coordinates; inhomogeneous ones do.

1.3 Histogram (Gonzales p. 120)
h(i) = number of pixels in I with intensity value i

Cumulative histogram (integral): H(i) = Σ_{j=0}^{i} h(j) for 0 ≤ i < K

Increasing contrast:   f_c(a) = 1.5·a
Increasing brightness: f_b(a) = a + 10
Inverting the image:   f_i(a) = a_max − a

Optimizing the dynamic range (saturating the fractions s_low and s_high of the darkest and brightest pixels):

  â_low  = min{ i | H(i) ≥ M·N·s_low }
  â_high = min{ i | H(i) ≥ M·N·(1 − s_high) }

  f_mac(a) = a_min                                                 for a ≤ â_low
           = a_min + (a − â_low)·(a_max − a_min)/(â_high − â_low)  for â_low < a < â_high
           = a_max                                                 for a ≥ â_high

Histogram equalisation: choose a′ such that H_eq(a′) = H(a), where the ideal (linear) cumulative histogram is H_eq(i) = M·N·i/255. Solving gives the mapping

  a′ = H(a)·255/(M·N)
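A minimal pure-Python sketch of the histogram, cumulative histogram and equalisation mapping a′ = H(a)·(K−1)/(M·N) described above; the function names and the list-of-pixels representation are my own choices, not from the summary.

```python
def histogram(pixels, K=256):
    # h(i) = number of pixels with intensity value i
    h = [0] * K
    for a in pixels:
        h[a] += 1
    return h

def cumulative(h):
    # H(i) = sum_{j=0}^{i} h(j)
    H, total = [], 0
    for count in h:
        total += count
        H.append(total)
    return H

def equalize(pixels, K=256):
    # a' = H(a) * (K - 1) / (M * N) -- histogram equalisation mapping
    H = cumulative(histogram(pixels, K))
    mn = len(pixels)
    return [H[a] * (K - 1) // mn for a in pixels]
```

For example, `equalize([0, 0, 1, 1, 2, 2, 3, 3])` spreads the four occupied intensity levels over the full 0..255 range.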
2 Transformations (V2) (Gonzales p. 104)
Geometric transformations are computed with the inverse transformation T⁻¹, which means that the loop runs over the output space and not over the input space. Otherwise holes or overlaps are possible.

2.1 Affine Transformations (Gonzales p. 87)
  [x y] = [w z]·[a11 a12; a21 a22] + [b1 b2]
with (w, z) the input and (x, y) the output coordinates.

2.2 Interpolation (Gonzales)
Methods: nearest neighbour, bilinear, bicubic.

2.3 Image Registration
Image registration is the process of geometrically aligning (multiple) images, i.e. bringing them into one coordinate system:
1. Feature extraction
2. Find the transform to the reference image
3. Transform the image

3 Filtering in the Spatial Domain (V3) (Gonzales p. 144)
Operations on images working with the pixels in the neighbourhood; a convolution of image and filter matrix in the spatial domain: I′ = I * H. If H is a 3x3 matrix and the origin is the centre position:

  I′(u, v) = Σ_{i=−1}^{1} Σ_{j=−1}^{1} I(u + i, v + j)·H(i, j)

Beware: to keep the result in the same intensity range, the calculation should be normalised either by dividing every pixel by the sum of the structure element (SE, H) or by normalising the SE itself (Σ_{j,k} H_jk = 1).

3.1 Filter Types
Box lowpass filter (linear):
  H_3x3 = 1/9 · [1 1 1; 1 1 1; 1 1 1]
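The 3x3 convolution sum above can be sketched in pure Python for the interior pixels (the border problem is discussed below); the function name and nested-list image representation are assumptions of mine. Strictly speaking this computes a correlation, which equals the convolution for symmetric kernels such as the box filter.

```python
def convolve3x3(img, H):
    # I'(u, v) = sum_{i,j in -1..1} I(u+i, v+j) * H(i+1, j+1),
    # computed only at positions completely inside the image.
    rows, cols = len(img), len(img[0])
    s = sum(sum(r) for r in H) or 1  # normalise by coefficient sum (if nonzero)
    out = []
    for u in range(1, rows - 1):
        row = []
        for v in range(1, cols - 1):
            acc = 0
            for i in (-1, 0, 1):
                for j in (-1, 0, 1):
                    acc += img[u + i][v + j] * H[i + 1][j + 1]
            row.append(acc // s)
        out.append(row)
    return out
```

Note that the 3x3 mask shrinks the valid output region by one pixel on each side, exactly as described in the border-problem remarks.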
Gaussian lowpass filter (linear), the usual discrete approximations:
  H_3x3 = 1/16 · [1 2 1; 2 4 2; 1 2 1]
  H_5x5 = 1/256 · [1 4 6 4 1; 4 16 24 16 4; 6 24 36 24 6; 4 16 24 16 4; 1 4 6 4 1]

Rank filter (nonlinear): select the k-th element of the sorted neighbouring pixels.
Median filter: special form of a rank filter where the middle element is selected (removes salt-and-pepper noise): median(p_0, p_1, ..., p_k, ..., p_2k) = p_k
Minimum filter: special form of a rank filter where the first element is selected (eliminates white points, thickens dark regions).
Maximum filter: special form of a rank filter where the last element is selected (eliminates dark points, thickens bright regions).

Edge filters find edges in the directions of the zeros in the structure element (gradient operators, based on the first derivative):
  Roberts: H1_R = [0 1; −1 0],  H2_R = [1 0; 0 −1]
  Prewitt: Hx_P = [−1 0 1; −1 0 1; −1 0 1],  Hy_P = [−1 −1 −1; 0 0 0; 1 1 1]
  Sobel (better noise suppression/smoothing than Prewitt):
    Hx_S = [−1 0 1; −2 0 2; −1 0 1],  Hy_S = [−1 −2 −1; 0 0 0; 1 2 1]
  Compass edge filter (searches edges in a specified direction, e.g. Kirsch): eight kernels H0_K ... H7_K obtained by rotating the coefficients, e.g. H0_K = [−3 −3 5; −3 0 5; −3 −3 5]

Laplacian (p. 160), edge intensities / sharpening (based on the second derivative):
  Lap(f(x, y)) = ∂²f/∂x² + ∂²f/∂y²
Beware of the double-line effect, which creates two lines per edge (one negative and one positive, due to the definition of the 2nd derivative); eliminate it with zero-crossing detection.
  H_3x3, anisotropic = [0 1 0; 1 −4 1; 0 1 0]
  H_3x3, isotropic   = [1 1 1; 1 −8 1; 1 1 1]
A larger 5x5 anisotropic Laplacian kernel can be used analogously.

3.2 Additional Comments
Smoothing filters: all filter elements have to sum up to 1 in order to stay in the correct domain of intensity values. This is easily achievable by dividing all elements by the sum of all filter elements.
Derivative filters: the second derivative is very sensitive to noise (due to its highpass behaviour).
Border problems: it is not defined what happens at the borders. Therefore, computation is only possible at positions completely inside the picture. E.g. with a 3x3 filter mask, the image border is 1 pixel and the resolution is reduced by 2 in both directions (horizontally, vertically). Alternatively, the image can be extended with constant values, by replicating border pixels or by cyclic replication.
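The rank-filter family above (median, minimum, maximum) can be sketched as one pure-Python function parameterised by k; the function names and list-based image format are my own, illustrative choices.

```python
def rank_filter3x3(img, k):
    # Select the k-th element (0-based) of the sorted 3x3 neighbourhood,
    # for positions completely inside the image.
    rows, cols = len(img), len(img[0])
    out = []
    for u in range(1, rows - 1):
        row = []
        for v in range(1, cols - 1):
            neigh = sorted(img[u + i][v + j]
                           for i in (-1, 0, 1) for j in (-1, 0, 1))
            row.append(neigh[k])
        out.append(row)
    return out

def median3x3(img):   # k = 4: middle of 9 -> removes salt-and-pepper noise
    return rank_filter3x3(img, 4)

def minimum3x3(img):  # k = 0: eliminates white points, thickens dark regions
    return rank_filter3x3(img, 0)
```

A single bright outlier (salt noise) surrounded by darker pixels is removed by the median but kept by the maximum (k = 8).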
4 Filtering in the Frequency Domain (V4) (Gonzales)

4.1 Discrete Fourier Transform (DFT)

1-Dimensional (Gonzales p. 220):
  s(h) = Σ_{k=0}^{N−1} ĉ_k·e^{jhk·2π/N} = Σ_{k=0}^{N−1} [ â_k·cos(hk·2π/N) + b̂_k·sin(hk·2π/N) ]
with N as the period length and the coefficients:
  ĉ_k = (1/N)·Σ_{h=0}^{N−1} s(h)·e^{−jhk·2π/N} = â_k − j·b̂_k
  â_k = (1/N)·Σ_{h=0}^{N−1} s(h)·cos(hk·2π/N) = Re{ĉ_k}
  b̂_k = (1/N)·Σ_{h=0}^{N−1} s(h)·sin(hk·2π/N) = −Im{ĉ_k}

2-Dimensional (Gonzales p. 225):
  F(u, v) = (1/(MN))·Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y)·exp(−j2π(ux/M + vy/N))

4.2 Border Problems
Due to the periodicity of the Fourier transform, the borders are reproduced when no measures are taken:
  F(k + N) = Σ_{n=0}^{N−1} x_n·e^{−(2πj/N)(k+N)n} = Σ_{n=0}^{N−1} x_n·e^{−(2πj/N)kn}·e^{−2πjn} = Σ_{n=0}^{N−1} x_n·e^{−(2πj/N)kn} = F(k)
(since e^{−2πjn} = 1 for integer n).

Beware: the Fast Fourier Transform is exactly the same as the DFT, only faster. The phase contains important information and should not be discarded. Amplitude and phase:
  |F(u, v)| = sqrt(Re{F(u, v)}² + Im{F(u, v)}²),  φ(u, v) = arctan(Im{F(u, v)}/Re{F(u, v)})

Examples of 2D Fourier-transform pairs: Gonzales p. 243. Fourier-transform properties: Gonzales p. 253f. In 2D, the DC value is visible at coordinate (0, 0), but to better visualise the spectrum it is usually moved to the centre (Matlab: fftshift).

Padding
One way to avoid border problems is padding the original input image (Gonzales p. 263):
1. Obtain the padding parameters from the input image f(x, y) of size M×N: P = 2M and Q = 2N
2. Append zeros to the input image to obtain the padded image f_p(x, y)
3. Centre its transform: f_c(x, y) = f_p(x, y)·(−1)^{x+y}
4. Compute the DFT: f_c(x, y) → F(u, v)
5. Filter: F_f(u, v) = F(u, v)·H(u, v)
6. Compute the IDFT: F_f(u, v) → g_p(x, y)
7. Extract the original M×N region from g_p(x, y)
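The 1-D analysis formula for ĉ_k can be written directly as a (slow, O(N²)) sum — a sketch with the 1/N normalisation used in this summary, which differs from Matlab's unnormalised convention; the function names are mine.

```python
import cmath

def dft(s):
    # c_k = (1/N) * sum_h s(h) * exp(-j*h*k*2*pi/N)
    N = len(s)
    return [sum(s[h] * cmath.exp(-2j * cmath.pi * h * k / N) for h in range(N)) / N
            for k in range(N)]

def magnitude_phase(c):
    # |F| = sqrt(Re^2 + Im^2), phi = atan2(Im, Re)
    return [abs(z) for z in c], [cmath.phase(z) for z in c]
```

A constant signal puts all its energy into the DC coefficient ĉ_0, and a unit impulse spreads equally over all coefficients — handy sanity checks for the normalisation.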
Window
Another way is to use a windowing function that has nearly the size of the image and smoothly falls to zero at the border. In the spatial domain it is multiplied with the image (which corresponds to a convolution in the frequency domain).

4.3 Filter Types (Gonzales p. 269)
Periodic noise can easily be filtered in the frequency domain, e.g. with a band-reject (notch) filter.
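As a concrete example of such a windowing function, here is a 1-D Hann window applied per row; the choice of the Hann window and the function names are assumptions of mine (the summary does not name a specific window).

```python
import math

def hann(N):
    # w(n) = 0.5 * (1 - cos(2*pi*n/(N-1))): zero at both ends, one in the middle
    return [0.5 * (1 - math.cos(2 * math.pi * n / (N - 1))) for n in range(N)]

def window_rows(img):
    # Multiply every row with the window; multiplication in the spatial
    # domain corresponds to convolution in the frequency domain.
    w = hann(len(img[0]))
    return [[p * wi for p, wi in zip(row, w)] for row in img]
```

Because the window is zero at both ends, the periodic repetition of the image no longer produces discontinuities at the borders.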
5 Morphological Image Processing (V5, 6, 7) (Gonzales)

5.1 Overview (Gonzales p. 662)
Be aware of the hat (ˆ), which denotes the reflection of the structure element! Morphological approaches are nonlinear operations on binary or grey-level images.

5.2 Dilation (Gonzales p. 633)
Dilation grows or thickens objects in a binary image. The structure element H is replicated at every foreground pixel of I.
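Representing a binary image as a set of foreground coordinates makes dilation a one-liner — a sketch under that representation; the set-of-tuples encoding is my own choice, not from the summary.

```python
def dilate(A, H):
    # A, H: sets of (x, y) foreground coordinates; H is given relative
    # to its origin. The structure element is replicated (shifted) at
    # every foreground pixel of A.
    return {(x + i, y + j) for (x, y) in A for (i, j) in H}
```

With this encoding the commutativity property I ⊕ H = H ⊕ I below falls out immediately, since the sum of offsets is symmetric in A and H.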
5.2.1 Properties
  I ⊕ H = H ⊕ I                      (commutative)
  (I1 ⊕ I2) ⊕ I3 = I1 ⊕ (I2 ⊕ I3)    (associative)
  I ⊕ δ = δ ⊕ I = I                  (δ is a neutral object)
Border conditions: it is best to pad the pixels at the border with the minimum value of H.

5.3 Erosion (Gonzales p. 631)
Erosion shrinks or thins objects in a binary image. Only those pixels of the original image at which the structure element is completely contained in the foreground remain in the resulting image.

5.3.1 Properties
  I ⊖ H ≠ H ⊖ I                      (not commutative)
  (I1 ⊖ I2) ⊖ I3 = I1 ⊖ (I2 ⊕ I3)
Border conditions: it is best to pad the pixels at the border with the maximum value of H.

5.4 Duality of Erosion and Dilation (Gonzales p. 635)
  (A ⊖ B)ᶜ = Aᶜ ⊕ B̂
  (A ⊕ B)ᶜ = Aᶜ ⊖ B̂
with Aᶜ being the complement of A (binary inversion).

5.5 Opening and Closing (Gonzales p. 635)
Opening: A ∘ B = (A ⊖ B) ⊕ B. Foreground structures smaller than B are eliminated by the erosion; the dilation lets the remaining structures grow back to their original size.
Closing: A • B = (A ⊕ B) ⊖ B. Holes in the foreground structures are eliminated by the dilation; the erosion lets the structures shrink back to their original size.
Duality: (A ∘ B)ᶜ = Aᶜ • B̂,  (A • B)ᶜ = Aᶜ ∘ B̂
Properties: A ∘ B ⊆ A ⊆ A • B; both are idempotent: (A ∘ B) ∘ B = A ∘ B, (A • B) • B = A • B

5.6 Hit-or-Miss Transform (Gonzales p. 640)
Finds shapes that are bigger than some (small) structure element D and smaller than some (big) second structure element W:
  A ⊛ B = (A ⊖ D) ∩ (Aᶜ ⊖ (W − D))

5.7 Boundary Extraction (Gonzales p. 642)
The difference between the original image A and the erosion of A with B:
  β(A) = A − (A ⊖ B)

5.8 Hole Filling (Gonzales p. 643)
Fills any holes (background regions) starting from a starting point:
  X_k = (X_{k−1} ⊕ B) ∩ Aᶜ
with X_0 being an image of the same size as A with single pixels as starting points. This is an iterative approach.

5.9 Connected Components (Gonzales p. 645)
Finds connected objects:
  X_k = (X_{k−1} ⊕ B) ∩ A
The only difference to hole filling is that here we are looking for foreground pixels instead of background pixels.
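Erosion, opening and closing can be sketched on the same set-of-coordinates representation (dilate is repeated here so the block is self-contained); the encoding and function names are my own illustrative choices.

```python
def dilate(A, H):
    return {(x + i, y + j) for (x, y) in A for (i, j) in H}

def erode(A, H):
    # Keep only the pixels at which H, shifted there, fits completely inside A.
    return {(x, y) for (x, y) in A
            if all((x + i, y + j) in A for (i, j) in H)}

def opening(A, H):
    return dilate(erode(A, H), H)  # removes structures smaller than H

def closing(A, H):
    return erode(dilate(A, H), H)  # fills holes smaller than H
```

The sandwich property A ∘ B ⊆ A ⊆ A • B can be checked directly with Python's set-inclusion operators.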
5.10 Other Tools
Convex hull p. 647, thinning p. 649, thickening p. 650, skeletons p. 651, pruning p. 654.

5.11 Morphological Reconstruction (Gonzales p. 656, 676)
Used after an opening to grow back those pieces of the original image that are connected to the result of the opening. In addition to the structure element B, a marker image F with the starting points and a mask image G are required, with F ⊆ G.

Geodesic Dilation (Gonzales p. 656)
  Init:               D_G^(0)(F) = F
  First it. (size 1): D_G^(1)(F) = (F ⊕ B) ∩ G
  Recursion (size n): D_G^(n)(F) = D_G^(1)[ D_G^(n−1)(F) ]

Geodesic Erosion (Gonzales p. 657)
  Init:               E_G^(0)(F) = F
  First it. (size 1): E_G^(1)(F) = (F ⊖ B) ∪ G
  Recursion (size n): E_G^(n)(F) = E_G^(1)[ E_G^(n−1)(F) ]

Morphological Reconstruction by Dilation (Gonzales p. 658)
The iterative approach reconstructs the whole object out of a starting point: R_G^D(F) = D_G^(k)(F), with k chosen such that the iteration has become stable.

Morphological Reconstruction by Erosion (Gonzales p. 658)
Reconstruction by erosion reconstructs holes: R_G^E(F) = E_G^(k)(F)

Sample Applications (Gonzales p. 659)
Opening by reconstruction: the mask g is the input image. The structure element b can be selected so that it finds meaningful starting points (e.g. a line of 50 pixels length). The marker is then calculated as f = g ⊖ b. The iterative reconstruction then repeats (f ⊕ B) ∧ g, where ∧ is the pointwise minimum operator, until the image is stable (does not change anymore).
Hole filling: automatic filling of holes (including finding the starting points).
Border cleaning: e.g. OCR: all characters touching the image border can be removed.

5.12 Gray-Scale Morphology (Gonzales p. 665)
Uses function theory instead of set theory. In particular, the top-hat transformation (p. 672) is interesting for shading correction.
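Reconstruction by dilation is just the geodesic dilation iterated until stable — a binary sketch on the set-of-coordinates representation used above (for binary sets the pointwise minimum becomes an intersection); names and encoding are my own.

```python
def dilate(A, H):
    return {(x + i, y + j) for (x, y) in A for (i, j) in H}

def reconstruct_by_dilation(F, G, H):
    # Iterate the geodesic dilation D_G(F) = (F (+) H) ∩ G until the
    # result is stable; F (marker) must be a subset of G (mask).
    while True:
        nxt = dilate(F, H) & G
        if nxt == F:
            return F
        F = nxt
```

Starting from one marker pixel, exactly the connected component of G containing that marker is grown back; other components of the mask are left out.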
6 Segmentation (V8, V9)
Segmentation subdivides an image into its constituent regions or objects. It is one of the most difficult tasks in image processing. Usually, post-processing is required to identify and potentially label the objects. Problems: uneven illumination, noise.

6.1 Edge Based Segmentation (Gonzales p. 692)
Aim: finding borders or lines of objects; contours and the similarity of adjacent pixels can be detected. Three steps:
1. Preprocessing & smoothing: noise reduction, small-object removal, intensity transformations
2. Edge point detection: derivatives in x and y direction with 1st- and/or 2nd-order filters (Laplacian)
3. Postprocessing (localise edges): thresholding, thinning, zero crossings of the 2nd-order derivative

Point Detection (Gonzales p. 696)
Use the 2nd-order derivative (Laplacian):
  ∂²f/∂x² = f(x, y + 1) + f(x, y − 1) − 2f(x, y)
  ∂²f/∂y² = f(x + 1, y) + f(x − 1, y) − 2f(x, y)
  ∇²f(x, y) = ∂²f/∂x² + ∂²f/∂y²,  H_3x3 = [0 1 0; 1 −4 1; 0 1 0]

Line Detection (Gonzales p. 697)
See the spatial filters (Section 3.1): Laplacian, Sobel, Prewitt.

Marr-Hildreth Edge Detector (Gonzales p. 714)
Intensity changes are not independent of image scale, so their detection requires operators of different sizes; a sudden intensity change gives rise to a peak or trough in the first derivative, or equivalently a zero crossing in the second derivative. The operator combining smoothing and Laplacian is called the Laplacian of a Gaussian (LoG) or Mexican hat:
  ∇²G(x, y) = ((x² + y² − 2σ²)/σ⁴)·e^{−(x²+y²)/(2σ²)}
with σ being the standard deviation of the Gaussian. The size n×n of the filter should be the smallest odd integer with n ≥ 6σ.
Discussion: strong smoothing (sharp edges might be lost); mostly closed edges; sensitive to noise; post-processing (zero-crossing detection) required.

Canny Edge Detector (Gonzales p. 714)
Uses the first-order derivative in both directions:
1. Smooth the image with a Gaussian
2. Compute the gradient (Sobel, Prewitt, ...) in both directions
3. Apply non-maxima suppression to the gradient magnitude image (suppress everything that is not a local maximum)
4. Double thresholding:
   - Pixels with |∇f(x, y)| > T1 belong to an edge (strong edge pixels)
   - Pixels with T1 > |∇f(x, y)| > T2 belong to an edge when there is a strong edge pixel in their 8-neighbourhood (weak edge pixels)
5. If necessary, edge thinning
This algorithm is considerably slower than Marr-Hildreth but performs better.
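The double-thresholding step of the Canny detector can be sketched as follows; this single-pass version only checks the immediate 8-neighbourhood for a strong pixel (a full implementation propagates along chains of weak pixels), and the function name and list-based magnitude image are assumptions of mine.

```python
def double_threshold(mag, t1, t2):
    # Strong edge pixels: |grad f| > T1.
    # Weak pixels (T2 < |grad f| <= T1) are kept only if a strong edge
    # pixel lies in their 8-neighbourhood.
    rows, cols = len(mag), len(mag[0])
    strong = {(x, y) for x in range(rows) for y in range(cols)
              if mag[x][y] > t1}
    edges = set(strong)
    for x in range(rows):
        for y in range(cols):
            if t2 < mag[x][y] <= t1:
                if any((x + i, y + j) in strong
                       for i in (-1, 0, 1) for j in (-1, 0, 1)
                       if (i, j) != (0, 0)):
                    edges.add((x, y))
    return edges
```

A weak pixel next to a strong one survives; an isolated weak response is discarded as noise.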
Edge Linking and Boundary Detection (Gonzales p. 725)
The detected edges have to be linked together, as they are typically not connected.

Local processing:
1. Compute the gradient ∇f = [g_x, g_y]ᵀ (Sobel, Prewitt, ...)
2. Compute the magnitude M(x, y) = sqrt(g_x² + g_y²) ≈ |g_x| + |g_y| and the angle φ(x, y) = arctan(g_y/g_x)
3. Mark a neighbour pixel (s, t) (8-connected) of an edge pixel (x, y) as edge pixel when |M(s, t) − M(x, y)| ≤ m_0 and |φ(s, t) − φ(x, y)| ≤ φ_0
4. If necessary: thinning, and removal of isolated edge pixels

Global processing: find lines instead of simple edges — Hough transform (see Section 7).

6.2 Thresholding (Gonzales p. 738)

Definitions (Gonzales p. 738)
Global threshold: one threshold depending only on the intensity: T = T(f(x, y)).
Local threshold: the threshold may also depend on the neighbourhood (e.g. grey-level average, variance, ...): T = T(f(x, y), p(x, y)).
Dynamic/variable/adaptive threshold: additionally depending on the position: T = T(x, y, f(x, y)).
Hysteresis thresholding: 1. global threshold with T_H; 2. threshold with the lower threshold T_L in the k-connected (e.g. k = 4, 8) neighbourhood of all pixels detected in (1).
Problems of thresholding: noise, illumination.

Basic Method (Gonzales p. 741)
1. Choose an initial T (e.g. the midpoint between the two histogram maxima)
2. Segment the image with T into G1 (≤ T) and G2 (> T)
3. Compute the average intensity values m1, m2 in G1, G2
4. Compute the new threshold value T = (m1 + m2)/2
5. Repeat steps 2-4 until the difference between successive T's is smaller than a constant ε

Otsu's Method (Gonzales p. 742)
A statistical approach to find the best threshold:
1. Compute the normalized histogram of the input image; denote its components by p_i, i = 0, 1, ..., L−1
2. Compute the cumulative sums P_1(k) = Σ_{i=0}^{k} p_i for k = 0, 1, ..., L−1
3. Compute the cumulative means m(k) = Σ_{i=0}^{k} i·p_i for k = 0, 1, ..., L−1
4. Compute the global intensity mean m_G = Σ_{i=0}^{L−1} i·p_i
5. Compute the between-class variance σ_B²(k) = (m_G·P_1(k) − m(k))² / (P_1(k)·(1 − P_1(k))) for k = 0, 1, ..., L−1
6. Obtain the Otsu threshold k* = argmax_k σ_B²(k); if the maximum is not unique, average the maximising k's
Otsu is not always better than the basic approach!

Smoothing (Gonzales p. 747)
Again, smoothing might improve the results when noise is a problem.
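The Otsu steps above translate almost line by line into code — a pure-Python sketch that returns the first maximising k instead of averaging ties (a simplification); the function name is mine.

```python
def otsu_threshold(hist):
    # hist: raw histogram h(i). Normalise it, then maximise the
    # between-class variance
    #   sigma_B^2(k) = (m_G*P1(k) - m(k))^2 / (P1(k)*(1 - P1(k))).
    total = sum(hist)
    p = [h / total for h in hist]
    m_G = sum(i * pi for i, pi in enumerate(p))  # global mean
    best_k, best_var = 0, -1.0
    P1 = m = 0.0
    for k, pk in enumerate(p):
        P1 += pk          # cumulative sum  P1(k)
        m += k * pk       # cumulative mean m(k)
        if 0 < P1 < 1:    # both classes must be non-empty
            var = (m_G * P1 - m) ** 2 / (P1 * (1 - P1))
            if var > best_var:
                best_var, best_k = var, k
    return best_k
```

On a clearly bimodal histogram the maximiser falls between the two modes, as expected.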
Edges (Gonzales p. 749)
Improve the shape of the histogram by considering only those pixels that lie on or near the edges between the objects and the background → less dependency on the size of objects and background.
1. Find the edges
2. Threshold the edge image → binary image
3. Mask the original image with the thresholded edge image
4. Compute the histogram using only the pixels in the original image that correspond to the locations of the 1-valued pixels in the image from step 3; evaluate a threshold with the basic method, Otsu, etc.
5. Segment the original image using this threshold

Adaptive Thresholding (Gonzales p. 756)
Subdivide the image and compute a threshold for every region. Hint: each region must contain both object and background, otherwise the thresholding does not work and only distinguishes noise. Therefore the regions must be neither too big nor too small.

Multivariable Thresholds (Gonzales p. 761)
Use the RGB colour information for thresholding: z = [r, g, b]ᵀ. This is a 3D vector, which can also be referred to as a voxel (volume element).

6.3 Region Based Segmentation (Gonzales p. 763)
Find the boundaries between regions directly (in contrast to thresholding) according to some criteria (grey level, colour, texture, form, ...). This is more robust for noisy and blurred images.

Region Growing (Gonzales p. 763)
1. Find seed points in the input image f(x, y) (e.g. with a very high threshold): S(x, y)
2. Morphological erosion of S(x, y) to reduce the connected components to single pixels
3. Calculate the predicate, e.g. similarity in intensity (intensity threshold) on f(x, y); this leads to f_Q(x, y)
4. Append the found values to the seed image and label every connected area
Starting with the seed points, regions grow according to the criteria mentioned above.

Splitting and Merging (Gonzales p. 766)
Instead of growing regions from seed points, this strategy starts from arbitrary disjoint regions:
1. Split any region R_i for which Q(R_i) = FALSE (Q(R_i) is the predicate for the pixels belonging to an object) into four disjoint quadrants
2. Split all regions again into disjoint regions (if possible)
3. Stop splitting when no more splits are possible; then merge adjacent regions R_j, R_k whose union satisfies Q(R_j ∪ R_k) = TRUE

6.4 Watersheds (Gonzales p. 769)
The image can be viewed as a 3D relief with local minima. When flooding from these minima step by step, a dam is built wherever the water of two regions would merge. These dams (watersheds) are the dividers between the regions. Use watersheds when uniform regions are sought.
Problems: oversegmentation due to noise or non-uniform regions. Possible solutions:
- Smoothing (as always), but with relatively large kernels
- Apply the watershed transformation to the gradient image (often still oversegmentation → smoothing)
- Use markers as starting points (e.g. shortest distance between black and white pixels)
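The region-growing idea can be sketched as a flood fill from the seed points over 4-connected neighbours, accepting pixels for which the predicate holds; the function names, the callable predicate and the list-based image are my own illustrative choices.

```python
def region_grow(img, seeds, predicate):
    # Grow from the seed pixels over 4-connected neighbours as long as
    # the predicate (e.g. intensity similarity) holds.
    rows, cols = len(img), len(img[0])
    region, stack = set(), list(seeds)
    while stack:
        x, y = stack.pop()
        if (x, y) in region or not (0 <= x < rows and 0 <= y < cols):
            continue
        if predicate(img[x][y]):
            region.add((x, y))
            stack.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
    return region
```

Bright pixels that satisfy the predicate but are not connected to a seed stay outside the region, which is exactly the difference to plain thresholding.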
7 Description and Representation (V10, 11, 12) (Gonzales p. 795)

7.1 Representation (Gonzales p. 796)
A region can be represented in terms of its external characteristics (its boundary — basically for shape properties) or its internal characteristics (the pixels comprising the region — basically for regional properties such as colour and texture). Description is the extraction of information out of the representation, for example the length of a boundary. Basic methods: chain codes p. 798, polygonal approximations p. 801, skeletons p. 812.

Signatures (Gonzales p. 808)
A signature is a 1-D representation of a boundary that can be generated in various ways. Example: the distance from the centroid to the boundary as a function of the angle. This signature generation is translation invariant but not rotation or scaling invariant. Features that can be used further are e.g. the number of maxima or the distance between maximum and minimum.

Fourier Descriptors (Gonzales p. 818)
FDs represent the frequency content of a shape (of its border), which can also be viewed as the form of the object. It is possible to make them independent of various transformations:
- Translation: set F(0) = 0
- Scale: F′(u) = F(u)/F(1)
- Rotation, starting point, direction: use |F(u)|
FDs are insensitive to noise. However, this might be dangerous, as information (the phase) is thrown away and may lead to wrong assumptions.

7.2 Hough Transform (HT) (Gonzales p. 735)
Aim: find lines, circles or any free-form shapes in an image after edge detection.

HT for Lines
The objects are found using the following algorithm (example: the line y = a_k·x + b_k):
1. Define the mathematical equation which describes the shape: y = a·x + b
2. Define the parameter space with limits a ∈ [a_min, a_max], b ∈ [b_min, b_max]
3. For every edge point (x_i, y_i), solve b = y_i − a·x_i and increment each visited cell: H(a, b) = H(a, b) + 1
4. Find the maximum value in the parameter space; the parameters inserted into the equation lead to the most likely object
The problem with this parametrisation is that a vertical line leads to a → ∞. Therefore the normal form x·cos(θ) + y·sin(θ) = r should be used for finding lines.

Equations for circles:
Cartesian equation: r² = (x − x_0)² + (y − y_0)²
Parametric representation: x = x_0 + r·cos(θ), y = y_0 + r·sin(θ), with θ sweeping the circle (the accumulator parameters are x_0, y_0 and r, not θ).

Generalized Hough Transform (GHT)
For free-form shapes; rotation and scaling invariant, provided some a-priori knowledge of the shape is available. Here the gradient of the boundary is used instead of a parametric model. The space is now called reference space instead of parameter space.

Conclusions:
- Quite high computational effort ("brute force"), but controllable by the quantization of the parameter space
- Multiple objects can be detected
- The HT also tolerates objects with holes

Harris Corner Detection
Corners are reference landmarks in images and help in localization. A corner has two large eigenvalues of the structure tensor T_s.
1. If necessary (e.g. noise), low-pass filter the image
2. Compute the gradients G_x, G_y (Sobel)
3. Build the structure tensor: 4 (3 distinct) values per pixel
4. Low-pass filter each component of the structure tensor
5. Compute the Harris corner measure R(x, y) from the structure tensor
6. Threshold R(x, y) with T_th > 0 → potential corners
7. Use non-maxima suppression to find the maxima in R(x, y): sort the maxima in descending order, select the largest maximum, suppress all maxima within a radius r around it, select the remaining largest maximum, etc.
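The Hough voting scheme for lines in the normal form r = x·cos(θ) + y·sin(θ) can be sketched with a dictionary as sparse accumulator; the function name, the dictionary accumulator and the θ discretisation are assumptions of mine.

```python
import math

def hough_lines(points, n_theta=180):
    # Each edge point votes for every discretised theta; the cell
    # (theta_i, r) with the most votes describes the most likely line.
    acc = {}
    for theta_i in range(n_theta):
        theta = math.pi * theta_i / n_theta
        for (x, y) in points:
            r = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(theta_i, r)] = acc.get((theta_i, r), 0) + 1
    best = max(acc, key=acc.get)
    return best, acc
```

Five collinear points on the vertical line x = 3 all vote for the cell (θ = 0, r = 3), so that cell collects the full five votes — the case that would break the slope-intercept parametrisation.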
7.2.1 Simple Descriptors (Gonzales p. 822)
Simple descriptors include area, perimeter, minimum/maximum diameter and the area of the bounding rectangle; they should be independent of translation, rotation and scale.

7.3 Area Based Descriptors (Moments) (Gonzales p. 839)
Moments are statistical measures of the shape of a set of points. With suitable normalisation they are largely invariant to translation, rotation and scale.

2D moments of order p + q:
  m_pq = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} x^p·y^q·f(x, y)

2D central moments of order p + q:
  μ_pq = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} (x − x̄)^p·(y − ȳ)^q·f(x, y)
with x̄ = m_10/m_00 and ȳ = m_01/m_00, which are also the centroid coordinates.

Use principal components for normalising with respect to variations in size, translation and rotation. Moments provide:
- the centroid coordinates
- the principal axes of inertia, i.e. the directions of maximal and minimal variance (principal components)
- the minimal bounding box

7.4 Texture Based Descriptors (Gonzales)

Statistical Approaches
Based on the normalized histogram h(g) (global):
- Mean: m = Σ_{i=0}^{L−1} g_i·h(g_i)
- Variance: μ_2 = σ² = Σ_{i=0}^{L−1} (g_i − m)²·h(g_i)
- Skewness: μ_3 = Σ_{i=0}^{L−1} (g_i − m)³·h(g_i)
- Roughness: R(g) = 1 − 1/(1 + σ²/(L − 1)²); R ≈ 1: large σ, rough; R = 0: σ = 0, smooth
- Uniformity: U(g) = Σ_{i=0}^{L−1} h²(g_i)
- Average entropy: H(g) = −Σ_{i=0}^{L−1} h(g_i)·log₂(h(g_i))

Co-Occurrence Matrix (local)
The entry COO(i, j) counts how often a pixel with value i (the row index: value of the centre pixel) occurs together with a pixel of value j at the given position offset. The COO can also be reduced (instead of 256 grey intensities, only 64 or 8 could be evaluated). Measures derived from it:
- Energy: E = Σ_{i=0}^{L−1} Σ_{j=0}^{L−1} COO(i, j)²
- Contrast: C = Σ_{i=0}^{L−1} Σ_{j=0}^{L−1} (i − j)²·COO(i, j)
- Entropy: H = −Σ_{i=0}^{L−1} Σ_{j=0}^{L−1} COO(i, j)·log(COO(i, j))
- Homogeneity: S = Σ_{i=0}^{L−1} Σ_{j=0}^{L−1} COO(i, j)/(1 + |i − j|)

Spectral Approach
Energy, contrast and entropy can also be computed from the scaled spectrum S(u, v) = |FFT(f(x, y))|, normalised as S_n(u, v) = S(u, v)/Σ_{u=2}^{m} Σ_{v=2}^{n} S(u, v) (the sums exclude the DC components).
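The raw and central moments above can be sketched directly from their definitions; the function names and the nested-list image f(x, y) are my own illustrative choices.

```python
def moment(img, p, q):
    # m_pq = sum_x sum_y x^p * y^q * f(x, y)
    return sum((x ** p) * (y ** q) * v
               for x, row in enumerate(img) for y, v in enumerate(row))

def centroid(img):
    # (xbar, ybar) = (m_10/m_00, m_01/m_00)
    m00 = moment(img, 0, 0)
    return moment(img, 1, 0) / m00, moment(img, 0, 1) / m00

def central_moment(img, p, q):
    # mu_pq = sum_x sum_y (x - xbar)^p * (y - ybar)^q * f(x, y)
    xb, yb = centroid(img)
    return sum(((x - xb) ** p) * ((y - yb) ** q) * v
               for x, row in enumerate(img) for y, v in enumerate(row))
```

Central moments of a shifted shape stay the same, which is the translation invariance the section mentions.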
8 Object Recognition (V13, 14) (Gonzales p. 861)
Problems: Which features are to be used? Where are the class boundaries?
Definition of a pattern or feature vector: x = [x_1, ..., x_n]ᵀ; classes: x ∈ {K_1, ..., K_W}.

8.1 Decision Theory

Minimum Distance Classification (Gonzales p. 866)
A sample is assigned to the class whose mean is closest. When the Mahalanobis distance is taken instead of the Euclidean distance, the variance is also included in the minimum-distance classifier:
  D_j(x)² = (x − m_j)ᵀ·C_j⁻¹·(x − m_j)
with C_j being the covariance matrix of each class:
  C_j = (1/N_j)·Σ_{x∈K_j} x·xᵀ − m_j·m_jᵀ

Matching by Correlation (Gonzales p. 869)
Correlation is not robust with respect to the background and to free-form objects (it works only for roughly square objects).

Bayes Classifier (Gonzales p. 874)
With a binary loss function: d_j(x) = p(x|K_j)·p(K_j) for j = 1, ..., W. If p(x|K_j) is assumed to be Gaussian distributed:
  d_j(x) = ln p(K_j) − (1/2)·ln|C_j| − (1/2)·(x − m_j)ᵀ·C_j⁻¹·(x − m_j)

8.2 Neural Nets (Gonzales p. 882)
No assumptions about a statistical model; training via samples. A 2-layer perceptron handles linearly separable classes; a 3-layer perceptron with a nonlinear perceptron function handles non-linearly separable classes.

8.3 Structural Methods (Gonzales p. 903)
Classification based on common descriptors (ordered descriptors) according to the degree of similarity. Measure the similarity as the largest number k up to which the descriptors agree; the distance between two objects a and b is then D(a, b) = 1/k. From this, build a similarity matrix.

8.4 Grammar (Gonzales p. 904)
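The minimum-distance classifier with the Euclidean distance can be sketched in a few lines (the Mahalanobis variant additionally needs the inverse covariance matrices); the function names and the dict-of-samples format are assumptions of mine.

```python
def class_means(samples):
    # samples: {class_label: [feature vectors]}; m_j is the mean vector.
    means = {}
    for label, vecs in samples.items():
        n = len(vecs)
        means[label] = [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]
    return means

def classify(x, means):
    # Assign x to the class whose mean is closest (squared Euclidean
    # distance; the square root does not change the argmin).
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(means, key=lambda label: dist2(x, means[label]))
```

With two well-separated classes, samples near either mean are assigned as expected.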
Reference: Rafael C. Gonzalez, Richard E. Woods: Digital Image Processing, Third Edition, Pearson Prentice Hall.
More informationChapter 3: Intensity Transformations and Spatial Filtering
Chapter 3: Intensity Transformations and Spatial Filtering 3.1 Background 3.2 Some basic intensity transformation functions 3.3 Histogram processing 3.4 Fundamentals of spatial filtering 3.5 Smoothing
More informationImage features. Image Features
Image features Image features, such as edges and interest points, provide rich information on the image content. They correspond to local regions in the image and are fundamental in many applications in
More informationCHAPTER 1 Introduction 1. CHAPTER 2 Images, Sampling and Frequency Domain Processing 37
Extended Contents List Preface... xi About the authors... xvii CHAPTER 1 Introduction 1 1.1 Overview... 1 1.2 Human and Computer Vision... 2 1.3 The Human Vision System... 4 1.3.1 The Eye... 5 1.3.2 The
More informationLecture 7: Most Common Edge Detectors
#1 Lecture 7: Most Common Edge Detectors Saad Bedros sbedros@umn.edu Edge Detection Goal: Identify sudden changes (discontinuities) in an image Intuitively, most semantic and shape information from the
More informationImage Analysis Image Segmentation (Basic Methods)
Image Analysis Image Segmentation (Basic Methods) Christophoros Nikou cnikou@cs.uoi.gr Images taken from: R. Gonzalez and R. Woods. Digital Image Processing, Prentice Hall, 2008. Computer Vision course
More informationCS4733 Class Notes, Computer Vision
CS4733 Class Notes, Computer Vision Sources for online computer vision tutorials and demos - http://www.dai.ed.ac.uk/hipr and Computer Vision resources online - http://www.dai.ed.ac.uk/cvonline Vision
More informationBiometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong)
Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong) References: [1] http://homepages.inf.ed.ac.uk/rbf/hipr2/index.htm [2] http://www.cs.wisc.edu/~dyer/cs540/notes/vision.html
More informationSegmentation and Grouping
Segmentation and Grouping How and what do we see? Fundamental Problems ' Focus of attention, or grouping ' What subsets of pixels do we consider as possible objects? ' All connected subsets? ' Representation
More informationECG782: Multidimensional Digital Signal Processing
Professor Brendan Morris, SEB 3216, brendan.morris@unlv.edu ECG782: Multidimensional Digital Signal Processing Spring 2014 TTh 14:30-15:45 CBC C313 Lecture 10 Segmentation 14/02/27 http://www.ee.unlv.edu/~b1morris/ecg782/
More informationDigital Image Processing Chapter 11: Image Description and Representation
Digital Image Processing Chapter 11: Image Description and Representation Image Representation and Description? Objective: To represent and describe information embedded in an image in other forms that
More informationEdge and local feature detection - 2. Importance of edge detection in computer vision
Edge and local feature detection Gradient based edge detection Edge detection by function fitting Second derivative edge detectors Edge linking and the construction of the chain graph Edge and local feature
More informationLecture 6: Multimedia Information Retrieval Dr. Jian Zhang
Lecture 6: Multimedia Information Retrieval Dr. Jian Zhang NICTA & CSE UNSW COMP9314 Advanced Database S1 2007 jzhang@cse.unsw.edu.au Reference Papers and Resources Papers: Colour spaces-perceptual, historical
More informationLocal Image preprocessing (cont d)
Local Image preprocessing (cont d) 1 Outline - Edge detectors - Corner detectors - Reading: textbook 5.3.1-5.3.5 and 5.3.10 2 What are edges? Edges correspond to relevant features in the image. An edge
More informationEE 584 MACHINE VISION
EE 584 MACHINE VISION Binary Images Analysis Geometrical & Topological Properties Connectedness Binary Algorithms Morphology Binary Images Binary (two-valued; black/white) images gives better efficiency
More informationUNIVERSITY OF OSLO. Faculty of Mathematics and Natural Sciences
UNIVERSITY OF OSLO Faculty of Mathematics and Natural Sciences Exam: INF 4300 / INF 9305 Digital image analysis Date: Thursday December 21, 2017 Exam hours: 09.00-13.00 (4 hours) Number of pages: 8 pages
More informationBiomedical Image Analysis. Spatial Filtering
Biomedical Image Analysis Contents: Spatial Filtering The mechanics of Spatial Filtering Smoothing and sharpening filters BMIA 15 V. Roth & P. Cattin 1 The Mechanics of Spatial Filtering Spatial filter:
More informationComputer Vision I - Filtering and Feature detection
Computer Vision I - Filtering and Feature detection Carsten Rother 30/10/2015 Computer Vision I: Basics of Image Processing Roadmap: Basics of Digital Image Processing Computer Vision I: Basics of Image
More informationAssignment 3: Edge Detection
Assignment 3: Edge Detection - EE Affiliate I. INTRODUCTION This assignment looks at different techniques of detecting edges in an image. Edge detection is a fundamental tool in computer vision to analyse
More informationPerception. Autonomous Mobile Robots. Sensors Vision Uncertainties, Line extraction from laser scans. Autonomous Systems Lab. Zürich.
Autonomous Mobile Robots Localization "Position" Global Map Cognition Environment Model Local Map Path Perception Real World Environment Motion Control Perception Sensors Vision Uncertainties, Line extraction
More informationImage Processing Fundamentals. Nicolas Vazquez Principal Software Engineer National Instruments
Image Processing Fundamentals Nicolas Vazquez Principal Software Engineer National Instruments Agenda Objectives and Motivations Enhancing Images Checking for Presence Locating Parts Measuring Features
More informationBinary Image Processing. Introduction to Computer Vision CSE 152 Lecture 5
Binary Image Processing CSE 152 Lecture 5 Announcements Homework 2 is due Apr 25, 11:59 PM Reading: Szeliski, Chapter 3 Image processing, Section 3.3 More neighborhood operators Binary System Summary 1.
More informationMultimedia Computing: Algorithms, Systems, and Applications: Edge Detection
Multimedia Computing: Algorithms, Systems, and Applications: Edge Detection By Dr. Yu Cao Department of Computer Science The University of Massachusetts Lowell Lowell, MA 01854, USA Part of the slides
More informationOther Linear Filters CS 211A
Other Linear Filters CS 211A Slides from Cornelia Fermüller and Marc Pollefeys Edge detection Convert a 2D image into a set of curves Extracts salient features of the scene More compact than pixels Origin
More informationRegion-based Segmentation
Region-based Segmentation Image Segmentation Group similar components (such as, pixels in an image, image frames in a video) to obtain a compact representation. Applications: Finding tumors, veins, etc.
More informationFilters. Advanced and Special Topics: Filters. Filters
Filters Advanced and Special Topics: Filters Dr. Edmund Lam Department of Electrical and Electronic Engineering The University of Hong Kong ELEC4245: Digital Image Processing (Second Semester, 2016 17)
More informationLine, edge, blob and corner detection
Line, edge, blob and corner detection Dmitri Melnikov MTAT.03.260 Pattern Recognition and Image Analysis April 5, 2011 1 / 33 Outline 1 Introduction 2 Line detection 3 Edge detection 4 Blob detection 5
More informationDigital Image Processing
Digital Image Processing Part 9: Representation and Description AASS Learning Systems Lab, Dep. Teknik Room T1209 (Fr, 11-12 o'clock) achim.lilienthal@oru.se Course Book Chapter 11 2011-05-17 Contents
More informationBroad field that includes low-level operations as well as complex high-level algorithms
Image processing About Broad field that includes low-level operations as well as complex high-level algorithms Low-level image processing Computer vision Computational photography Several procedures and
More informationSegmentation algorithm for monochrome images generally are based on one of two basic properties of gray level values: discontinuity and similarity.
Chapter - 3 : IMAGE SEGMENTATION Segmentation subdivides an image into its constituent s parts or objects. The level to which this subdivision is carried depends on the problem being solved. That means
More informationDEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING DS7201 ADVANCED DIGITAL IMAGE PROCESSING II M.E (C.S) QUESTION BANK UNIT I 1. Write the differences between photopic and scotopic vision? 2. What
More informationRegion & edge based Segmentation
INF 4300 Digital Image Analysis Region & edge based Segmentation Fritz Albregtsen 06.11.2018 F11 06.11.18 IN5520 1 Today We go through sections 10.1, 10.4, 10.5, 10.6.1 We cover the following segmentation
More informationProcessing of binary images
Binary Image Processing Tuesday, 14/02/2017 ntonis rgyros e-mail: argyros@csd.uoc.gr 1 Today From gray level to binary images Processing of binary images Mathematical morphology 2 Computer Vision, Spring
More information(10) Image Segmentation
(0) Image Segmentation - Image analysis Low-level image processing: inputs and outputs are all images Mid-/High-level image processing: inputs are images; outputs are information or attributes of the images
More informationLecture 4: Spatial Domain Transformations
# Lecture 4: Spatial Domain Transformations Saad J Bedros sbedros@umn.edu Reminder 2 nd Quiz on the manipulator Part is this Fri, April 7 205, :5 AM to :0 PM Open Book, Open Notes, Focus on the material
More informationOutlines. Medical Image Processing Using Transforms. 4. Transform in image space
Medical Image Processing Using Transforms Hongmei Zhu, Ph.D Department of Mathematics & Statistics York University hmzhu@yorku.ca Outlines Image Quality Gray value transforms Histogram processing Transforms
More informationIntroduction to Medical Imaging (5XSA0)
1 Introduction to Medical Imaging (5XSA0) Visual feature extraction Color and texture analysis Sveta Zinger ( s.zinger@tue.nl ) Introduction (1) Features What are features? Feature a piece of information
More informationEECS490: Digital Image Processing. Lecture #22
Lecture #22 Gold Standard project images Otsu thresholding Local thresholding Region segmentation Watershed segmentation Frequency-domain techniques Project Images 1 Project Images 2 Project Images 3 Project
More informationComputer Vision I - Basics of Image Processing Part 2
Computer Vision I - Basics of Image Processing Part 2 Carsten Rother 07/11/2014 Computer Vision I: Basics of Image Processing Roadmap: Basics of Digital Image Processing Computer Vision I: Basics of Image
More informationMorphological Image Processing
Morphological Image Processing Morphology Identification, analysis, and description of the structure of the smallest unit of words Theory and technique for the analysis and processing of geometric structures
More informationReview for the Final
Review for the Final CS 635 Review (Topics Covered) Image Compression Lossless Coding Compression Huffman Interpixel RLE Lossy Quantization Discrete Cosine Transform JPEG CS 635 Review (Topics Covered)
More informationEdge detection. Gradient-based edge operators
Edge detection Gradient-based edge operators Prewitt Sobel Roberts Laplacian zero-crossings Canny edge detector Hough transform for detection of straight lines Circle Hough Transform Digital Image Processing:
More informationEE795: Computer Vision and Intelligent Systems
EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 10 130221 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Canny Edge Detector Hough Transform Feature-Based
More information2: Image Display and Digital Images. EE547 Computer Vision: Lecture Slides. 2: Digital Images. 1. Introduction: EE547 Computer Vision
EE547 Computer Vision: Lecture Slides Anthony P. Reeves November 24, 1998 Lecture 2: Image Display and Digital Images 2: Image Display and Digital Images Image Display: - True Color, Grey, Pseudo Color,
More informationComparison between Various Edge Detection Methods on Satellite Image
Comparison between Various Edge Detection Methods on Satellite Image H.S. Bhadauria 1, Annapurna Singh 2, Anuj Kumar 3 Govind Ballabh Pant Engineering College ( Pauri garhwal),computer Science and Engineering
More informationChapter 11 Representation & Description
Chain Codes Chain codes are used to represent a boundary by a connected sequence of straight-line segments of specified length and direction. The direction of each segment is coded by using a numbering
More informationEdge Detection. CS664 Computer Vision. 3. Edges. Several Causes of Edges. Detecting Edges. Finite Differences. The Gradient
Edge Detection CS664 Computer Vision. Edges Convert a gray or color image into set of curves Represented as binary image Capture properties of shapes Dan Huttenlocher Several Causes of Edges Sudden changes
More informationCS334: Digital Imaging and Multimedia Edges and Contours. Ahmed Elgammal Dept. of Computer Science Rutgers University
CS334: Digital Imaging and Multimedia Edges and Contours Ahmed Elgammal Dept. of Computer Science Rutgers University Outlines What makes an edge? Gradient-based edge detection Edge Operators From Edges
More informationDigital Image Processing Lecture 7. Segmentation and labeling of objects. Methods for segmentation. Labeling, 2 different algorithms
Digital Image Processing Lecture 7 p. Segmentation and labeling of objects p. Segmentation and labeling Region growing Region splitting and merging Labeling Watersheds MSER (extra, optional) More morphological
More informationEECS490: Digital Image Processing. Lecture #19
Lecture #19 Shading and texture analysis using morphology Gray scale reconstruction Basic image segmentation: edges v. regions Point and line locators, edge types and noise Edge operators: LoG, DoG, Canny
More informationPractical Image and Video Processing Using MATLAB
Practical Image and Video Processing Using MATLAB Chapter 14 Edge detection What will we learn? What is edge detection and why is it so important to computer vision? What are the main edge detection techniques
More informationNoise Model. Important Noise Probability Density Functions (Cont.) Important Noise Probability Density Functions
Others -- Noise Removal Techniques -- Edge Detection Techniques -- Geometric Operations -- Color Image Processing -- Color Spaces Xiaojun Qi Noise Model The principal sources of noise in digital images
More informationEdge detection. Convert a 2D image into a set of curves. Extracts salient features of the scene More compact than pixels
Edge Detection Edge detection Convert a 2D image into a set of curves Extracts salient features of the scene More compact than pixels Origin of Edges surface normal discontinuity depth discontinuity surface
More informationSECTION 5 IMAGE PROCESSING 2
SECTION 5 IMAGE PROCESSING 2 5.1 Resampling 3 5.1.1 Image Interpolation Comparison 3 5.2 Convolution 3 5.3 Smoothing Filters 3 5.3.1 Mean Filter 3 5.3.2 Median Filter 4 5.3.3 Pseudomedian Filter 6 5.3.4
More informationCS534: Introduction to Computer Vision Edges and Contours. Ahmed Elgammal Dept. of Computer Science Rutgers University
CS534: Introduction to Computer Vision Edges and Contours Ahmed Elgammal Dept. of Computer Science Rutgers University Outlines What makes an edge? Gradient-based edge detection Edge Operators Laplacian
More informationCITS 4402 Computer Vision
CITS 4402 Computer Vision A/Prof Ajmal Mian Adj/A/Prof Mehdi Ravanbakhsh, CEO at Mapizy (www.mapizy.com) and InFarm (www.infarm.io) Lecture 02 Binary Image Analysis Objectives Revision of image formation
More informationEEM 463 Introduction to Image Processing. Week 3: Intensity Transformations
EEM 463 Introduction to Image Processing Week 3: Intensity Transformations Fall 2013 Instructor: Hatice Çınar Akakın, Ph.D. haticecinarakakin@anadolu.edu.tr Anadolu University Enhancement Domains Spatial
More informationChapter 10: Image Segmentation. Office room : 841
Chapter 10: Image Segmentation Lecturer: Jianbing Shen Email : shenjianbing@bit.edu.cn Office room : 841 http://cs.bit.edu.cn/shenjianbing cn/shenjianbing Contents Definition and methods classification
More information9 length of contour = no. of horizontal and vertical components + ( 2 no. of diagonal components) diameter of boundary B
8. Boundary Descriptor 8.. Some Simple Descriptors length of contour : simplest descriptor - chain-coded curve 9 length of contour no. of horiontal and vertical components ( no. of diagonal components
More information2D Image Processing INFORMATIK. Kaiserlautern University. DFKI Deutsches Forschungszentrum für Künstliche Intelligenz
2D Image Processing - Filtering Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 What is image filtering?
More informationVivekananda. Collegee of Engineering & Technology. Question and Answers on 10CS762 /10IS762 UNIT- 5 : IMAGE ENHANCEMENT.
Vivekananda Collegee of Engineering & Technology Question and Answers on 10CS762 /10IS762 UNIT- 5 : IMAGE ENHANCEMENT Dept. Prepared by Harivinod N Assistant Professor, of Computer Science and Engineering,
More informationPractical Image and Video Processing Using MATLAB
Practical Image and Video Processing Using MATLAB Chapter 18 Feature extraction and representation What will we learn? What is feature extraction and why is it a critical step in most computer vision and
More informationDigital Image Processing. Image Enhancement - Filtering
Digital Image Processing Image Enhancement - Filtering Derivative Derivative is defined as a rate of change. Discrete Derivative Finite Distance Example Derivatives in 2-dimension Derivatives of Images
More informationLecture 18 Representation and description I. 2. Boundary descriptors
Lecture 18 Representation and description I 1. Boundary representation 2. Boundary descriptors What is representation What is representation After segmentation, we obtain binary image with interested regions
More informationImage Processing: Final Exam November 10, :30 10:30
Image Processing: Final Exam November 10, 2017-8:30 10:30 Student name: Student number: Put your name and student number on all of the papers you hand in (if you take out the staple). There are always
More information11/10/2011 small set, B, to probe the image under study for each SE, define origo & pixels in SE
Mathematical Morphology Sonka 13.1-13.6 Ida-Maria Sintorn ida@cb.uu.se Today s lecture SE, morphological transformations inary MM Gray-level MM Applications Geodesic transformations Morphology-form and
More informationFiltering and Enhancing Images
KECE471 Computer Vision Filtering and Enhancing Images Chang-Su Kim Chapter 5, Computer Vision by Shapiro and Stockman Note: Some figures and contents in the lecture notes of Dr. Stockman are used partly.
More informationImage Processing
Image Processing 159.731 Canny Edge Detection Report Syed Irfanullah, Azeezullah 00297844 Danh Anh Huynh 02136047 1 Canny Edge Detection INTRODUCTION Edges Edges characterize boundaries and are therefore
More informationLecture: Segmentation I FMAN30: Medical Image Analysis. Anders Heyden
Lecture: Segmentation I FMAN30: Medical Image Analysis Anders Heyden 2017-11-13 Content What is segmentation? Motivation Segmentation methods Contour-based Voxel/pixel-based Discussion What is segmentation?
More information