Digital Image Processing. Week 4


2 Morphological Image Processing Morphology deals with form and structure. Mathematical morphology is a tool for extracting image components that are useful in the representation and description of region shape, such as boundaries, skeletons, and the convex hull. In this chapter, the inputs are binary images but the outputs are attributes extracted from these images. A = { (i, j) ; I(i, j) = 1 (white) }

3 Preliminaries The reflection of a set B, denoted B̂, is defined as B̂ = { w ; w = -b, for b ∈ B }. The translation of a set B by a point z = (z1, z2), denoted (B)z, is defined as (B)z = { c ; c = b + z, for b ∈ B }.

4 Set reflection and translation are used in morphology to formulate operations based on so-called structuring elements (SE): small sets or subimages used to probe an image under

5 study for properties of interest. In addition to a definition of which elements are members of the SE, the origin of a structuring element also must be specified. The origin of the SE is usually indicated by a black dot. When the SE is symmetric and no dot is shown, the assumption is that the origin is at the center of symmetry. When working with images, it is required that structuring elements are rectangular arrays. This is accomplished by

6 appending the smallest possible number of background elements necessary to form a rectangular array.

7 Erosion and Dilation Many of the morphological algorithms are based on these two primitive operations: erosion and dilation. Erosion: Let A and B be two sets in Z². The erosion of A by B, denoted A ⊖ B, is defined as A ⊖ B = { z ; (B)z ⊆ A }. This definition indicates that the erosion of A by B is the set of all points z such that B, translated by z, is contained in A.

8 In the following, set B is assumed to be a structuring element. Because the statement that B has to be contained in A is equivalent to B not sharing any common elements with the background, erosion can be expressed equivalently as A ⊖ B = { z ; (B)z ∩ Aᶜ = ∅ }.

9

10 Equivalent definitions of erosion: A ⊖ B = { w ∈ Z² ; w + b ∈ A for every b ∈ B } and A ⊖ B = ⋂b∈B (A)-b. Erosion shrinks or thins objects in a binary image. We can view erosion as a morphological filtering operation in which image details smaller than the structuring element are filtered (removed) from the image.
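
The filtering view of erosion can be illustrated with a small script. This is a minimal sketch, assuming NumPy and SciPy are available; the image A and the 3×3 structuring element below are illustrative choices, not taken from the lecture figures.

```python
import numpy as np
from scipy.ndimage import binary_erosion

# Illustrative binary image: a 7x7 square plus a one-pixel protrusion.
A = np.zeros((15, 15), dtype=bool)
A[4:11, 4:11] = True          # main object
A[7, 11] = True               # thin detail, smaller than the SE

# 3x3 structuring element of 1s, origin at its center of symmetry.
B = np.ones((3, 3), dtype=bool)

# Erosion keeps only the pixels z where B, translated to z, fits inside A;
# the one-pixel protrusion and the object's outer layer are removed.
eroded = binary_erosion(A, structure=B)
print(A.sum(), eroded.sum())   # 50 -> 25 foreground pixels
```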

11 Dilation Let A and B be two sets in Z². The dilation of A by B, denoted A ⊕ B, is defined as A ⊕ B = { z ; (B̂)z ∩ A ≠ ∅ }. The dilation of A by B is the set of all displacements z such that B̂ and A overlap by at least one element. We assume that B is a structuring element. Equivalent definition of dilation: A ⊕ B = { w ∈ Z² ; w = a + b, for some a ∈ A and b ∈ B }.

12 A ⊕ B = ⋃b∈B (A)b. The basic process of rotating B about its origin and then successively displacing it so that it slides over the set (image) A is analogous to spatial convolution. Dilation, being based on set operations, is a nonlinear operation, whereas convolution is a linear operation. Unlike erosion, which is a shrinking or thinning operation, dilation grows or thickens objects in a binary image. The

13 specific manner and the extent of this thickening are controlled by the shape of the structuring element used. One of the simplest applications of dilation is for bridging gaps.
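
A hedged sketch of gap bridging by dilation, again assuming NumPy and SciPy are available; the broken line and the 1×3 structuring element below are illustrative choices.

```python
import numpy as np
from scipy.ndimage import binary_dilation

# Illustrative broken line: two segments separated by a 2-pixel gap.
A = np.zeros((9, 20), dtype=bool)
A[4, 2:8] = True
A[4, 10:18] = True

# A 1x3 horizontal structuring element grows each segment sideways,
# bridging gaps of up to two pixels in a single dilation.
B = np.ones((1, 3), dtype=bool)
bridged = binary_dilation(A, structure=B)

print(A[4, 8:10], bridged[4, 8:10])   # [False False] -> [ True  True]
```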

14

15 Duality Erosion and dilation are duals of each other with respect to set complementation and reflection: (A ⊖ B)ᶜ = Aᶜ ⊕ B̂ and (A ⊕ B)ᶜ = Aᶜ ⊖ B̂. The duality property is particularly useful when the structuring element is symmetric with respect to its origin, so that B̂ = B. Then we can obtain the erosion of an image by B simply by dilating its background (i.e. dilating Aᶜ) with the same structuring element and complementing the result.

16 Opening and Closing Opening generally smoothes the contour of an object, breaks narrow isthmuses, and eliminates thin protrusions. Closing also tends to smooth sections of contours but, as opposed to opening, it generally fuses narrow breaks and long thin gulfs, eliminates small holes, and fills gaps in the contour.

17 The opening of set A by structuring element B is defined as A ∘ B = (A ⊖ B) ⊕ B. Thus, the opening of A by B is the erosion of A by B, followed by a dilation of the result by B. Similarly, the closing of set A by structuring element B is defined as A • B = (A ⊕ B) ⊖ B, which says that the closing of A by B is the dilation of A by B, followed by an erosion of the result by B.
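
A short sketch of both operators on a synthetic image, assuming SciPy's binary_opening and binary_closing, which compose erosion and dilation as defined above; the test image with a small hole and isolated specks is illustrative.

```python
import numpy as np
from scipy.ndimage import binary_opening, binary_closing

# Illustrative noisy blob: a solid square with a one-pixel hole, plus specks.
A = np.zeros((40, 40), dtype=bool)
A[10:30, 10:30] = True
A[20, 20] = False                        # small hole inside the object
A[5, 5] = True                           # isolated one-pixel specks
A[33, 7] = True

B = np.ones((3, 3), dtype=bool)

opened = binary_opening(A, structure=B)  # erosion then dilation: removes specks
closed = binary_closing(A, structure=B)  # dilation then erosion: fills the hole

print(opened[20, 20], closed[20, 20], opened[5, 5], closed[5, 5])
# False True False True
```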

18 The opening operation has a simple geometric interpretation. Suppose that we view the structuring element B as a (flat) rolling ball. The boundary of A ∘ B is then established by the points in B that reach the farthest into the boundary of A as B is rolled around the inside of this boundary. The opening of A by B is obtained by taking the union of all translates of B that fit into A: A ∘ B = ⋃ { (B)z ; (B)z ⊆ A }.

19 Closing has a similar geometric interpretation, except that now we roll B on the outside of the boundary.

20

21

22 Opening and closing are duals of each other with respect to set complementation and reflection: (A • B)ᶜ = Aᶜ ∘ B̂ and (A ∘ B)ᶜ = Aᶜ • B̂. The opening operation satisfies the following properties: 1. A ∘ B ⊆ A; 2. if C ⊆ D, then C ∘ B ⊆ D ∘ B; 3. (A ∘ B) ∘ B = A ∘ B.

23 Similarly, the closing operation satisfies the following properties: 1) A ⊆ A • B; 2) if C ⊆ D, then C • B ⊆ D • B; 3) (A • B) • B = A • B. Condition 3 in both cases states that multiple openings or closings of a set have no effect after the operator has been applied once.

24

25 The Hit-or-Miss Transformation The morphological hit-or-miss transformation is a basic tool for shape detection. Consider the set A from Figure 9.12 consisting of three shapes (subsets) denoted C, D, and E. The objective is to locate one of the shapes, say, D. Let the origin of each shape be located at its center of gravity. Let D be enclosed by a small window, W. The local background of D with respect to W is defined as the set difference (W-D) (Figure 9.12(b)). Figure 9.12(c) shows the

26 complement of A. Figure 9.12(d) shows the erosion of A by D. Figure 9.12(e) shows the erosion of the complement of A by the local background set (W-D). From Figures 9.12(d) and (e) we can see that the set of locations for which D exactly fits inside A is the intersection of the erosion of A by D and the erosion of Aᶜ by (W-D), as shown in Figure 9.12(f). If B denotes the set composed of D and its background, the match (or the set of matches) of B in A, denoted A ⊛ B, is: A ⊛ B = (A ⊖ D) ∩ (Aᶜ ⊖ (W-D)).

27

28 We can generalize the notation by letting B = (B1, B2), where B1 is the set formed from elements of B associated with an object and B2 is the set of elements of B associated with the corresponding background (B1 = D, B2 = W-D in the preceding example). A ⊛ B = (A ⊖ B1) ∩ (Aᶜ ⊖ B2). The set A ⊛ B contains all the (origin) points at which, simultaneously, B1 found a match ("hit") in A and B2 found a

29 match in Aᶜ. Taking into account the definition and properties of erosion, we can rewrite the above relation as A ⊛ B = (A ⊖ B1) − (A ⊕ B̂2). The above three equations for A ⊛ B are referred to as the morphological hit-or-miss transform.
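
A minimal sketch of the hit-or-miss transform, assuming SciPy's binary_hit_or_miss is available; the structuring element pair (B1, B2) below detects isolated foreground pixels and is only an illustrative choice, not the shape-detection example of Figure 9.12.

```python
import numpy as np
from scipy.ndimage import binary_hit_or_miss

# Illustrative image: one isolated pixel and one 3x3 blob.
A = np.zeros((10, 10), dtype=bool)
A[2, 2] = True                 # isolated foreground pixel
A[6:9, 6:9] = True             # 3x3 blob

# B1 must hit the object, B2 must hit the local background (W - D).
B1 = np.array([[0, 0, 0],
               [0, 1, 0],
               [0, 0, 0]], dtype=bool)    # a single foreground pixel ...
B2 = np.array([[1, 1, 1],
               [1, 0, 1],
               [1, 1, 1]], dtype=bool)    # ... surrounded by background

hits = binary_hit_or_miss(A, structure1=B1, structure2=B2)
print(np.argwhere(hits))       # [[2 2]]  only the isolated pixel matches
```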

30 Some Basic Morphological Algorithms When dealing with binary images, one of the principal applications of morphology is in extracting image components that are useful in the representation and the description of shape. We consider morphological algorithms for extracting boundaries, connected components, the convex hull, and the skeleton of a region. The images are shown graphically with 1s shaded and 0s in white.

31 Boundary Extraction The boundary of a set A, denoted β(A), can be obtained by first eroding A by B and then performing the set difference between A and its erosion: β(A) = A − (A ⊖ B), where B is a suitable structuring element.
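
A sketch of boundary extraction implementing β(A) = A − (A ⊖ B) directly, assuming NumPy and SciPy; the square object is illustrative.

```python
import numpy as np
from scipy.ndimage import binary_erosion

# Illustrative solid square; its boundary is the one-pixel-thick outline.
A = np.zeros((12, 12), dtype=bool)
A[3:9, 3:9] = True

B = np.ones((3, 3), dtype=bool)            # suitable structuring element

# beta(A) = A - (A erosion B): set difference written as A AND NOT eroded.
boundary = A & ~binary_erosion(A, structure=B)
print(boundary.sum())                      # 20 pixels: the 6x6 square's outline
```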

32

33

34 Filling Holes A hole may be defined as a background region surrounded by a connected border of foreground pixels. We present an algorithm based on set dilation, complementation, and intersection for filling holes in an image. Let A denote a set whose elements are 8-connected boundaries, each boundary enclosing a background region (i.e. a hole). Given a point in each hole, the objective is to fill all the holes with 1s.

35 We form an array, X0, of 0s (the same size as the array containing A), except at the location in X0 corresponding to the given point in each hole, which is set to 1. The following procedure fills all the holes with 1s: Xk = (Xk-1 ⊕ B) ∩ Aᶜ, k = 1, 2, 3, ..., where B is the symmetric structuring element in Figure 9.15(c). The algorithm terminates at iteration step k if Xk = Xk-1. The set Xk then contains all the filled holes. The set union of Xk and A contains all the filled holes and their boundaries.
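
A sketch of this iterative hole-filling procedure, assuming NumPy and SciPy; the helper name fill_holes, the ring-shaped test image, and the seed point are illustrative, and the cross-shaped structuring element is assumed to match Figure 9.15(c).

```python
import numpy as np
from scipy.ndimage import binary_dilation

def fill_holes(A, seeds):
    """Morphological hole filling: X_k = (X_{k-1} dilated by B) AND A-complement,
    iterated until stable; the result united with A is the filled set.
    `seeds` is a boolean array with a 1 inside each hole (assumed given)."""
    # Cross-shaped (4-connected) structuring element.
    B = np.array([[0, 1, 0],
                  [1, 1, 1],
                  [0, 1, 0]], dtype=bool)
    X = seeds.copy()
    while True:
        X_next = binary_dilation(X, structure=B) & ~A
        if np.array_equal(X_next, X):       # terminate when X_k == X_{k-1}
            break
        X = X_next
    return X | A

# Illustrative example: a square ring (boundary) with a hollow interior.
A = np.zeros((11, 11), dtype=bool)
A[2:9, 2:9] = True
A[3:8, 3:8] = False            # the hole
seeds = np.zeros_like(A)
seeds[5, 5] = True             # one given point inside the hole

print(fill_holes(A, seeds).sum())   # 49: the full 7x7 square
```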

36

37 Extraction of Connected Components Extraction of connected components from binary images is important in many automated image analysis applications. Let A be a set containing one or more connected components. Form an array X0 (of the same size as the array containing A) whose elements are 0s (background values), except at each location known to correspond to a point in each connected component in A, which we set to 1

38 (foreground value). The objective is to start with X0 and find all the connected components. The procedure that accomplishes this task is the following: Xk = (Xk-1 ⊕ B) ∩ A, k = 1, 2, 3, ..., where B is a suitable structuring element. The procedure terminates when Xk = Xk-1, with Xk containing all connected components of the input image.
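
A sketch of the component-growing iteration, assuming NumPy and SciPy; the helper name extract_component and the two-blob test image are illustrative. SciPy's label is shown afterwards as a practical alternative that labels all components in one call.

```python
import numpy as np
from scipy.ndimage import binary_dilation, label

def extract_component(A, seed):
    """Grow a connected component from a known seed point:
    X_k = (X_{k-1} dilated by B) AND A, iterated until X_k == X_{k-1}."""
    B = np.ones((3, 3), dtype=bool)        # 8-connectivity
    X = np.zeros_like(A)
    X[seed] = True
    while True:
        X_next = binary_dilation(X, structure=B) & A
        if np.array_equal(X_next, X):
            return X
        X = X_next

# Illustrative image with two separate blobs.
A = np.zeros((12, 12), dtype=bool)
A[1:4, 1:4] = True
A[7:11, 6:10] = True

comp = extract_component(A, (2, 2))
print(comp.sum())                          # 9: only the first blob is grown

# In practice, all components can be labeled at once:
labels, n = label(A, structure=np.ones((3, 3)))
print(n)                                   # 2 components
```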

39

40

41 Figure 9.18(a) shows an X-ray image of a chicken breast that contains bone fragments. It is of considerable interest to be able to detect such objects in processed food before packing and/or shipping. In this case, the density of the bones is such that their normal intensity values are different from the background. This makes extraction of the bones from the background a simple matter by using a single threshold. The result is the binary image in Figure 9.18(b). We can erode the thresholded image so that only objects of significant size

42 remain. In this example, we define as significant any object that remains after erosion with a 5×5 structuring element of 1s. The result of erosion is shown in Figure 9.18(c). The next step is to analyse the objects that remain. We identify these objects by extracting the connected components in the image. There are a total of 15 connected components, with four of them being of dominant size. This is enough to determine that significant undesirable objects are contained in the original image.

43 Convex Hull A set A is said to be convex if the straight line segment joining any two points in A lies entirely within A. The convex hull H of an arbitrary set S is the smallest convex set containing S. The set difference H-S is called the convex deficiency of S. The convex hull and convex deficiency are useful for object description. We present a simple morphological algorithm for obtaining the convex hull C(A) of a set A.

44

45 Let B^i, i = 1, 2, 3, 4, represent the four structuring elements in Figure 9.19(a). The procedure consists of implementing the equation X^i_0 = A, X^i_k = (X^i_(k-1) ⊛ B^i) ∪ A, i = 1, 2, 3, 4 and k = 1, 2, 3, ... When the procedure converges (X^i_k = X^i_(k-1)), we let D^i = X^i_k. Then the convex hull of A is C(A) = D^1 ∪ D^2 ∪ D^3 ∪ D^4.

46 The method consists of iteratively applying the hit-or-miss transform to A with B^1; when no further changes occur, we perform the union with A and call the result D^1. The procedure is repeated with B^2 (applied to A) until no further changes occur, and so on. The union of the four resulting D's constitutes the convex hull of A.

47 Thinning, thickening The thinning of A by B is defined as A ⊗ B = A − (A ⊛ B) = A ∩ (A ⊛ B)ᶜ. Thinning is usually done with a sequence of structuring elements {B} = {B^1, B^2, ..., B^n}, where B^i is a rotated version of B^(i-1): A ⊗ {B} = ((...((A ⊗ B^1) ⊗ B^2)...) ⊗ B^n). Thickening: A ⊙ B = A ∪ (A ⊛ B), and A ⊙ {B} = ((...((A ⊙ B^1) ⊙ B^2)...) ⊙ B^n).

48

49

50

51

52 Color Image Processing Color is a very important characteristic of an image that in most cases simplifies object identification and extraction from a scene. The human eye can discern thousands of color shades and intensities but only about two dozen shades of gray. Color image processing is divided into two major areas: full-color processing (images acquired with a full-color sensor) and pseudocolor processing (gray-scale images to which color is assigned).

53 The colors that humans can perceive in an object are determined by the nature of the light reflected from the object. Visible light is composed of a relatively narrow band of frequencies in the electromagnetic spectrum (390 nm to 750 nm). A body that reflects light that is balanced in all visible wavelengths appears white to the observer. A body that favors reflectance in a limited range of the visible spectrum exhibits some shades of color.

54 For example, blue objects reflect light with wavelengths from 450 to 475 nm, while absorbing most of the energy of other wavelengths.

55 How to characterize light? If the light is achromatic (void of color) its only attribute is its intensity (or amount) determined by levels of gray (black-grays-white). Chromatic light spans the electromagnetic spectrum from approximately 400 to 720 nm. Three basic quantities are used to describe the quality of a chromatic light source: radiance, luminance, and brightness.

56 - Radiance is the total amount of energy that flows from the light source (usually measured in watts). - Luminance (measured in lumens, lm) gives a measure of the amount of energy an observer perceives from a light source. For example, the light emitted from a source operating in the infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it (the luminance is almost zero).

57 - Brightness is a subjective descriptor that cannot be measured; it embodies the achromatic notion of intensity and is a key factor in describing color sensation. Cones are the sensors in the eye responsible for color vision. It has been established that the 6 to 7 million cones in the human eye can be divided into three principal sensing categories, corresponding roughly to red, green, and blue.

58 Approximately 65% of all cones are sensitive to red light, 33% are sensitive to green light, and only about 2% are sensitive to blue (but the blue cones are the most sensitive).

59 Due to these absorption characteristics of the human eye, colors are seen as variable combinations of the so-called primary colors: red (R), green (G), and blue (B). For the purpose of standardization, the CIE (Commission Internationale de l'Eclairage) designated in 1931 the following specific wavelength values for the three primary colors: blue = 435.8 nm, green = 546.1 nm, and red = 700 nm. The CIE standards correspond only approximately with experimental data.

60 These three standard primary colors, when mixed in various intensity proportions, can produce all visible colors. The primary colors can be added to produce the secondary colors of light: magenta (red+blue), cyan (green+blue), and yellow (red+green). Mixing the three primaries, or a secondary with its opposite primary color, in the right intensities produces white light. We must differentiate between the primary colors of light and the primary colors of pigments. A primary color for

61 pigments is one that subtracts or absorbs a primary color of light and reflects or transmits the other two. Therefore, the primary colors of pigments are magenta, cyan, and yellow, and the secondary colors are red, green, and blue.

62 The characteristics usually used to distinguish one color from another are brightness, hue, and saturation.

63 Brightness embodies the achromatic notion of intensity. Hue is an attribute associated with the dominant wavelength in a mixture of light waves. Hue represents the dominant color as perceived by an observer (when we call an object red, orange, or yellow, we refer to its hue). Saturation refers to the relative purity or the amount of white light mixed with a hue. The pure spectrum colors are fully saturated. Colors such as pink (red+white) and lavender (violet+white) are less

64 saturated, with the degree of saturation being inversely proportional to the amount of white light added. Hue and saturation taken together are called chromaticity, and therefore a color may be characterized by its brightness and chromaticity. The amounts of red, green, and blue needed to form any particular color are called the tristimulus values and are denoted X, Y and Z, respectively. A color is specified by its trichromatic coefficients, defined as:

65 x = X / (X + Y + Z), y = Y / (X + Y + Z), z = Z / (X + Y + Z), and therefore x + y + z = 1.

66 For any wavelength of light in the visible spectrum, the tristimulus values needed to produce the color corresponding to that wavelength can be obtained from existing curves or tables. Another approach for specifying colors is to use the CIE chromaticity diagram, which shows color composition as a function of x (red) and y (green); z (blue) is obtained from the relation z = 1 - x - y.

67

68 The positions of the various spectrum colors (from violet at 380 nm to red at 780 nm) are indicated around the boundary of the tongue-shaped chromaticity diagram. The chromaticity diagram is useful for color mixing because a straight-line segment joining any two points in the diagram defines all the different color variations that can be obtained by combining these two colors. This procedure can be extended to three colors: the triangle determined by the three

69 color-points on the diagram embodies all the possible colors that can be obtained by mixing the three colors.

70 Color Models A color model (color space or color system) is a specification of a coordinate system and a subspace within that system where each color is represented by a single point. Most color models in use today are oriented either toward hardware (color monitors or printers) or toward applications where color manipulation is a goal.

71 The most commonly used hardware-oriented model is RGB (red-green-blue), used for color monitors and color video cameras. The CMY (cyan-magenta-yellow) and CMYK (cyan-magenta-yellow-black) models are used for color printing. The HSI (hue-saturation-intensity) model corresponds to the way humans describe and interpret colors. The HSI model has the advantage that it decouples the color and gray-scale information in an image, making it suitable for applying gray-scale image processing techniques.

72 The RGB Color Model In the RGB model, each color appears decomposed into its primary color components: red, green, and blue. This model is based on a Cartesian coordinate system. The color subspace of interest is the unit cube (Figure 6.7), in which the primary and the secondary colors are at the corners; black is at the origin, and white is at the corner farthest from the origin.

73 The gray scale (points of equal RGB values) extends from black to white along the line joining these two points. The different colors in this model are points on or inside the cube, and are defined by vectors extending from the origin.

74 Images represented in the RGB color model consist of three component images, one for each primary color. The number of bits used to represent each pixel in RGB space is called the pixel depth. Consider an RGB image in which each of the red, green, and blue images is an 8-bit image. In this case, each

75 RGB color pixel has a depth of 24 bits. The term full-color image is often used to denote a 24-bit RGB color image. The total number of colors in a 24-bit RGB image is (2^8)^3 = 16,777,216. A convenient way to view these colors is to generate color planes (faces or cross sections of the cube).

76 A color image can be acquired by using three filters, sensitive to red, green, and blue.

77 Because of the variety of systems in use, it is of considerable interest to have a subset of colors that are likely to be reproduced faithfully, reasonably independently of viewer hardware capabilities. This subset of colors is called the set of safe RGB colors, or the set of all-systems-safe colors. In Internet applications, they are called safe Web colors or safe browser colors. We assume that 256 colors is the minimum number of colors that can be reproduced faithfully by any system. Forty of

78 these 256 colors are known to be processed differently by various operating systems. We are left with 216 colors that are common to most systems; these are the safe colors, especially in Internet applications. Each of the 216 safe colors has an RGB representation with R, G, B ∈ {0, 51, 102, 153, 204, 255}. This gives (6)^3 = 216 possible color values. It is customary to express these values in the hexadecimal number system.

79 Each safe color is formed from three of the two-digit hex numbers from the above table. For example, purest red is FF0000. The values 000000 and FFFFFF represent black and white, respectively. Figure 6.10(a) shows the 216 safe colors, organized in descending RGB values. Figure 6.10(b) shows the hex codes for all the possible gray colors in the 216 safe-color system. Figure 6.11 shows the RGB safe-color cube.
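
The 216 safe colors can be enumerated directly from the six allowed channel values; a small illustrative sketch in Python:

```python
# Enumerate the 216 web-safe colors described above.
levels = [0x00, 0x33, 0x66, 0x99, 0xCC, 0xFF]   # 0, 51, 102, 153, 204, 255

safe = ["{:02X}{:02X}{:02X}".format(r, g, b)
        for r in levels for g in levels for b in levels]

print(len(safe))          # 216
print(safe[0], safe[-1])  # 000000 (black), FFFFFF (white)
print("FF0000" in safe)   # True: purest red is a safe color
```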

80

81 The CMY and CMYK Color Models Cyan, magenta, and yellow are the secondary colors of light but the primary colors of pigments. For example, when a surface coated with yellow pigment is illuminated with white light, no blue light is reflected from the surface. Yellow subtracts blue light from reflected white light (which is composed of equal amounts of red, green, and blue light). Most devices that deposit color pigments on paper, such as color printers and copiers, require CMY data input and

82 perform the RGB to CMY conversion internally. Assuming that the color values have been normalized to the range [0, 1], this conversion is C = 1 - R, M = 1 - G, Y = 1 - B. From this equation we can easily deduce that pure cyan does not reflect red, pure magenta does not reflect green, and pure yellow does not reflect blue.
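
A one-line sketch of this conversion, assuming NumPy and values normalized to [0, 1]; the function name rgb_to_cmy is illustrative.

```python
import numpy as np

def rgb_to_cmy(rgb):
    """CMY conversion for RGB values in [0, 1]: C = 1-R, M = 1-G, Y = 1-B."""
    return 1.0 - np.asarray(rgb, dtype=float)

print(rgb_to_cmy([1.0, 0.0, 0.0]))   # [0. 1. 1.]  pure red -> no cyan pigment
```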

83 Equal amounts of the pigment primaries cyan, magenta, and yellow should produce black. In practice, combining these colors for printing produces a muddy-looking black. In order to produce true black (which is the predominant color in printing), a fourth color, black, is added, giving rise to the CMYK color model.

84

85 The HSI Color Model The RGB, CMY, and other similar color models are not well suited for describing colors in terms that are practical for human interpretation. We (humans) describe a color by its hue, saturation and brightness. Hue is a color attribute that describes a pure color, saturation gives a measure of the degree to which a pure color is diluted by white light and brightness is a subjective descriptor that embodies the achromatic notion of intensity.

86 The HSI (hue, saturation, intensity) color model decouples the intensity component from the color information (hue and saturation) in a color image. What is the link between the RGB color model and the HSI color model? Consider again the RGB unit cube. The intensity axis is the line joining the black and the white vertices. Consider a color point in the RGB cube. Let P be a plane perpendicular to the intensity axis and containing the color point. The intersection of this plane with the intensity axis gives us the

87 intensity of the color point. The saturation (purity) of the considered color point increases as a function of distance from the intensity axis (the saturation of a point on the intensity axis is zero). In order to determine how hue can be linked to a given RGB point, consider a plane defined by black, white, and cyan. The intensity axis is also included in this plane. The intersection of this plane with the RGB cube is a triangle. All points contained in this triangle have the same hue (i.e. cyan).

88 The HSI space is represented by a vertical intensity axis and the locus of color points that lie on planes perpendicular to this axis. As a plane moves up and down the intensity axis, the boundary defined by its intersection with the faces of the cube has either a triangular or a hexagonal shape.

89

90 In the plane shown in Figure 6.13(a), the primary colors are separated by 120º. The secondary colors are 60º from the primaries. The hue of a point is determined by an angle from some reference point. Usually (but not always) an angle of 0º from the red axis designates 0 hue, and the hue increases counterclockwise from there. The saturation (distance from the vertical axis) is the length of the vector from the origin to the point. The origin is defined by the intersection of the color plane with the vertical intensity axis.

91 Converting colors from RGB to HSI: H = θ if B ≤ G, and H = 360º - θ if B > G, where θ = arccos{ [(R-G) + (R-B)] / [ 2 ( (R-G)² + (R-B)(G-B) )^(1/2) ] }; S = 1 - 3 min{R, G, B} / (R + G + B); I = (R + G + B) / 3.

92 It is assumed that the RGB values have been normalized to the range [0, 1] and that angle θ is measured with respect to the red axis of the HSI space, as in Figure 6.13. Hue can be normalized to the range [0, 1] by dividing it by 360º. The other two HSI components are already in this range if the RGB values are in the interval [0, 1]. Example: R = 100, G = 150, B = 200 gives H = 210º, S = 1/3, I = 150/255 = 0.588.
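
A sketch of the RGB-to-HSI formulas above for a single pixel, assuming NumPy; the small epsilon guarding division by zero for gray pixels is an implementation choice, not part of the lecture formulas.

```python
import numpy as np

def rgb_to_hsi(R, G, B):
    """RGB (in [0,1]) to HSI using the formulas above. H is returned in degrees."""
    num = 0.5 * ((R - G) + (R - B))
    den = np.sqrt((R - G) ** 2 + (R - B) * (G - B)) + 1e-12   # avoid 0/0 for grays
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    H = theta if B <= G else 360.0 - theta
    S = 1.0 - 3.0 * min(R, G, B) / (R + G + B + 1e-12)
    I = (R + G + B) / 3.0
    return H, S, I

# Reproduces the example above: R=100, G=150, B=200 (divide by 255 to normalize).
print(rgb_to_hsi(100/255, 150/255, 200/255))   # approx (210.0, 0.333, 0.588)
```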

93 Converting colors from HSI to RGB Given values of HSI, we now want to find the corresponding RGB values in the same range. RG sector (0º ≤ H < 120º): B = I(1 - S), R = I[1 + S cos H / cos(60º - H)], G = 3I - (R + B).

94 GB sector (120º ≤ H < 240º): H = H - 120º, R = I(1 - S), G = I[1 + S cos H / cos(60º - H)], B = 3I - (R + G). BR sector (240º ≤ H ≤ 360º): H = H - 240º, G = I(1 - S), B = I[1 + S cos H / cos(60º - H)], R = 3I - (G + B).
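
A sector-by-sector sketch of the inverse conversion, assuming NumPy and H given in degrees; the helper f and the final rounding are illustrative choices.

```python
import numpy as np

def hsi_to_rgb(H, S, I):
    """HSI to RGB, sector by sector, with H in degrees and S, I in [0, 1]."""
    H = H % 360.0

    def f(h):
        # Shared term I * [1 + S*cos(h) / cos(60 degrees - h)].
        h = np.radians(h)
        return I * (1.0 + S * np.cos(h) / np.cos(np.radians(60.0) - h))

    if H < 120.0:                     # RG sector
        B = I * (1.0 - S)
        R = f(H)
        G = 3.0 * I - (R + B)
    elif H < 240.0:                   # GB sector
        H -= 120.0
        R = I * (1.0 - S)
        G = f(H)
        B = 3.0 * I - (R + G)
    else:                             # BR sector
        H -= 240.0
        G = I * (1.0 - S)
        B = f(H)
        R = 3.0 * I - (G + B)
    return R, G, B

# Inverts the earlier example: H=210, S=1/3, I=150/255 gives back (100,150,200)/255.
print([round(v * 255) for v in hsi_to_rgb(210.0, 1/3, 150/255)])   # [100, 150, 200]
```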

95 Pseudocolor Image Processing Pseudocolor (also called false color) image processing consists of assigning colors to gray values based on a specified criterion. The main use of pseudocolor is for human visualization and interpretation of gray-scale events in an image or sequence of images.

96 Intensity (Density) Slicing If an image is viewed as a 3-D function, the method can be described as one of placing planes parallel to the coordinate plane of the image; each plane then slices the function in the area of intersection.

97 The plane at f(x, y) = li slices the image function into two levels. If a different color is assigned to each side of the plane, any pixel whose intensity level is above the plane will be coded with one color and any pixel below the plane will be coded with the other color. Levels that lie on the plane itself may be arbitrarily assigned one of the two colors. The result is a two-color image whose relative appearance can be controlled by moving the slicing plane up and down the intensity axis.

98 Let [0, L-1] represent the gray scale, let level l0 represent black (f(x, y) = 0), and let level lL-1 represent white (f(x, y) = L-1). Suppose that P planes perpendicular to the intensity axis are defined at levels l1, l2, ..., lP, with 0 < P < L-1. The P planes partition the gray scale into P+1 intervals, V1, V2, ..., VP+1. Intensity-to-color assignments are made according to the relation: f(x, y) = ck if f(x, y) ∈ Vk.
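
A sketch of intensity slicing with NumPy; the function name intensity_slice, the slice levels, and the four colors are illustrative.

```python
import numpy as np

def intensity_slice(gray, levels, colors):
    """Pseudocolor by intensity slicing: P planes at `levels` split [0, L-1]
    into P+1 intervals V1..VP+1; every pixel in Vk is assigned colors[k]."""
    gray = np.asarray(gray)
    colors = np.asarray(colors, dtype=np.uint8)        # shape (P+1, 3)
    k = np.digitize(gray, levels)                      # interval index 0..P
    return colors[k]

# Illustrative 4-level slicing of an 8-bit gradient into blue/green/yellow/red.
gray = np.tile(np.arange(256, dtype=np.uint8), (10, 1))
rgb = intensity_slice(gray, levels=[64, 128, 192],
                      colors=[(0, 0, 255), (0, 255, 0), (255, 255, 0), (255, 0, 0)])
print(rgb.shape, rgb[0, 0], rgb[0, 255])   # (10, 256, 3) [  0   0 255] [255   0   0]
```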

99 Measurements of rainfall levels with ground-based sensors are difficult and expensive, and total rainfall figures are even more difficult to obtain because a significant portion of

100 precipitation occurs over the ocean. One way to obtain these figures is to use a satellite. The TRMM (Tropical Rainfall Measuring Mission) satellite utilizes, among others, three sensors specially designed to detect rain: a precipitation radar, a microwave imager, and a visible and infrared scanner. The results from the various rain sensors are processed, resulting in estimates of average rainfall over a given time period in the area monitored by the sensors. From these estimates, it is not difficult to generate gray-scale images whose intensity values

101 correspond directly to rainfall, with each pixel representing a physical land area whose size depends on the resolution of the sensors.

102 Basics of Full-Color Image Processing A color image is a vector-valued function f : D → R³ (or R⁴): f(x, y) = c(x, y) = [cR(x, y), cG(x, y), cB(x, y)]ᵀ = [R(x, y), G(x, y), B(x, y)]ᵀ for RGB images, or f(x, y) = c(x, y) = [C(x, y), M(x, y), Y(x, y), K(x, y)]ᵀ for CMYK images.

103 Color Transformations - processing the components of a color image within the context of a single color model: g(x, y) = T[f(x, y)], si = Ti(r1, r2, ..., rn), i = 1, 2, ..., n (where f(x, y) = r and g(x, y) = s). Here ri, si are the color components of f(x, y) and g(x, y), n is the number of color components, and {T1, T2, ..., Tn} is a set of transformations or color mapping functions that operate on ri to produce si (n = 3 or n = 4).

104

105 In theory, any transformation can be performed in any color model. In practice, some operations are better suited to specific color models. Suppose we wish to modify the intensity of a color image using g(x, y) = k f(x, y), 0 < k < 1. In the HSI color space, this can be done with s1 = r1, s2 = r2, s3 = k r3.

106 In the RGB/CMY color models all components must be transformed: si = k ri, i = 1, 2, 3 (RGB), and si = k ri + (1 - k), i = 1, 2, 3 (CMY). Although the HSI transformation involves the fewest operations, the cost of converting an RGB or CMY(K) image to the HSI color space is much higher than the cost of the transformation itself.

107

108 Color Complements The hues directly opposite one another on the above color circle are called complements (analogous to the gray-scale negatives).

109

110 Unlike the intensity transformation, the RGB complement transformation functions used in this example do not have a straightforward HSI-space equivalent. The saturation component of the complement cannot be computed from the saturation component of the input image alone. Color Slicing Highlighting a specific range of colors in an image is useful for separating objects from their surroundings. The basic idea is either to:

111 - display the colors of interest so they stand out from the background - use the region defined by the colors as a mask for further processing. One of the simplest ways to slice a color image is to map the colors outside some range of interest to a neutral color. If the colors of interest are enclosed by a cube (or hypercube, if n>3) of width W and centered at a

112 prototypical (e.g. average) color with components a1, a2, ..., an, the set of transformations is: si = 0.5 if |rj - aj| > W/2 for any 1 ≤ j ≤ n, and si = ri otherwise, for i = 1, 2, ..., n. These transformations highlight the colors around the prototype by forcing all other colors to the midpoint of the reference color space (an arbitrarily chosen neutral point).

113 For the RGB color space, for example, a suitable neutral point is middle gray, or color (0.5, 0.5, 0.5). If a sphere is used to specify the colors of interest, the transformations become: si = 0.5 if Σj=1..n (rj - aj)² > R0², and si = ri otherwise, for i = 1, 2, ..., n, where R0 is the radius of the enclosing sphere and a1, a2, ..., an are the components of its center.
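
A sketch of the sphere-based color slicing transformation, assuming NumPy and RGB values in [0, 1]; the function name, prototype color, and radius are illustrative.

```python
import numpy as np

def color_slice_sphere(img, center, R0, neutral=0.5):
    """Keep colors within a sphere of radius R0 around `center`; map every
    other pixel to the neutral midpoint of the color space (e.g. middle gray)."""
    img = np.asarray(img, dtype=float)                 # values assumed in [0, 1]
    dist2 = np.sum((img - np.asarray(center)) ** 2, axis=-1)
    keep = dist2 <= R0 ** 2
    out = np.full_like(img, neutral)
    out[keep] = img[keep]
    return out

# Illustrative 1x3 image: a reddish pixel, a greenish pixel, and middle gray.
img = np.array([[[0.9, 0.1, 0.1], [0.1, 0.8, 0.2], [0.5, 0.5, 0.5]]])
print(color_slice_sphere(img, center=(1.0, 0.0, 0.0), R0=0.3))
# Only the reddish pixel is kept; the others become (0.5, 0.5, 0.5).
```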

114

115 Tone and Color Corrections The effectiveness of such transformations is judged ultimately in print. The transformations are developed and evaluated on monitors. It is necessary to have a high degree of consistency between the monitors and the output devices. This is best accomplished with a device-independent color model that relates the color gamut of the monitors and output devices, as well as any other devices being used, to one another. The model of choice for many color management systems (CMS)

116 is the CIE L*a*b* model, also called CIELAB. The L*a*b* color components are given by the following equations: L* = 116 h(Y/YW) - 16, a* = 500 [h(X/XW) - h(Y/YW)], b* = 200 [h(Y/YW) - h(Z/ZW)],

117 where h(q) = q^(1/3) if q > 0.008856 and h(q) = 7.787q + 16/116 otherwise. XW, YW, ZW are reference tristimulus values, typically the white of a perfectly reflecting diffuser under CIE standard D65 illumination (xW = 0.3127, yW = 0.3290, zW = 1 - xW - yW). The L*a*b* color space is colorimetric (i.e. colors perceived as matching are encoded identically), perceptually uniform (i.e. color differences among various hues are perceived

118 uniformly), and device independent. Like the HSI system, the L*a*b* system is an excellent decoupler of intensity (represented by lightness L*) and color (represented by a* for red minus green and b* for green minus blue), making it useful in both image manipulation (tone and contrast editing) and image compression applications.

119 Histogram Processing It is not advisable to histogram equalize the components of a color image independently. This can produce wrong colors. A more logical approach is to spread the color intensity uniformly, leaving the colors (e.g., hues) unchanged. The HSI color space is ideally suited for this type of approach.

120 The unprocessed image contains a large number of dark colors that reduce the median intensity. Histogram-equalizing the intensity component, without altering the hue and saturation, produced the image in Figure 6.37(c). The image is brighter. Figure 6.37(d) was obtained by also increasing the saturation component.

121 Color Image Smoothing Let Sxy denote a neighborhood centered at (x, y) in an RGB color image. The average of the RGB component vectors in this neighborhood is c̄(x, y) = (1/K) Σ(s,t)∈Sxy c(s, t) = [ (1/K) Σ(s,t)∈Sxy R(s, t), (1/K) Σ(s,t)∈Sxy G(s, t), (1/K) Σ(s,t)∈Sxy B(s, t) ]ᵀ, where K is the number of pixels in the neighborhood; smoothing the vector image is therefore equivalent to smoothing each RGB component independently.
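
A sketch of per-component neighborhood averaging, assuming SciPy's uniform_filter is available; the 5×5 window and the random test image are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def smooth_rgb(img, size=5):
    """Neighborhood averaging of an RGB image: filtering each component
    independently is equivalent to averaging the RGB vectors in S_xy."""
    img = np.asarray(img, dtype=float)
    # Filter only over the two spatial axes; leave the color axis untouched.
    return uniform_filter(img, size=(size, size, 1))

# Illustrative use on a random 32x32 RGB image.
rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))
print(smooth_rgb(img).shape)    # (32, 32, 3)
```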

122

123

124 Color Image Sharpening Sharpening with the Laplacian: g(x, y) = f(x, y) + c ∇²f(x, y), where for a full-color image the Laplacian is computed per component: ∇²[c(x, y)] = [∇²R(x, y), ∇²G(x, y), ∇²B(x, y)]ᵀ.
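
A sketch of per-component Laplacian sharpening under the reconstruction above, assuming SciPy's laplace; the choice c = -1 matches SciPy's negative-center Laplacian kernel, and the clipping to [0, 1] is an illustrative convention.

```python
import numpy as np
from scipy.ndimage import laplace

def sharpen_rgb(img, c=-1.0):
    """Laplacian sharpening applied to each RGB component:
    g = f + c * Laplacian(f); c = -1 because scipy.ndimage.laplace
    uses a kernel with a negative center coefficient."""
    img = np.asarray(img, dtype=float)
    lap = np.stack([laplace(img[..., i]) for i in range(img.shape[-1])], axis=-1)
    return np.clip(img + c * lap, 0.0, 1.0)

rng = np.random.default_rng(1)
img = rng.random((32, 32, 3))
print(sharpen_rgb(img).shape)   # (32, 32, 3)
```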

125

126 Other color spaces: YIQ, used by the NTSC (National Television System Committee) television system in the US, with Y = luminance and I (in-phase), Q (quadrature) = chrominance; YUV, used by the PAL (Phase Alternation Line) and SECAM (Séquentiel Couleur à Mémoire) television systems in Europe, where (I, Q) is obtained by rotating (U, V); YCbCr, used in digital video transmission. More about color spaces in: Andreas Koschan, Mongi Abidi, Digital Color Image Processing, Wiley, 2008.

127 Color Difference For RGB, CMY, and Lab the Euclidean distance can be used. For HSI, with F1 = (H1, S1, I1) and F2 = (H2, S2, I2): dHSI(F1, F2) = sqrt( (ΔI)² + (ΔC)² ), where ΔI = |I1 - I2| and (ΔC)² = S1² + S2² - 2 S1 S2 cos θ, with θ = |H1 - H2| if |H1 - H2| ≤ 180º and θ = 360º - |H1 - H2| otherwise.


More information

C E N T E R A T H O U S T O N S C H O O L of H E A L T H I N F O R M A T I O N S C I E N C E S. Image Operations II

C E N T E R A T H O U S T O N S C H O O L of H E A L T H I N F O R M A T I O N S C I E N C E S. Image Operations II T H E U N I V E R S I T Y of T E X A S H E A L T H S C I E N C E C E N T E R A T H O U S T O N S C H O O L of H E A L T H I N F O R M A T I O N S C I E N C E S Image Operations II For students of HI 5323

More information

UNIT-2 IMAGE REPRESENTATION IMAGE REPRESENTATION IMAGE SENSORS IMAGE SENSORS- FLEX CIRCUIT ASSEMBLY

UNIT-2 IMAGE REPRESENTATION IMAGE REPRESENTATION IMAGE SENSORS IMAGE SENSORS- FLEX CIRCUIT ASSEMBLY 18-08-2016 UNIT-2 In the following slides we will consider what is involved in capturing a digital image of a real-world scene Image sensing and representation Image Acquisition Sampling and quantisation

More information

Image Formation. Ed Angel Professor of Computer Science, Electrical and Computer Engineering, and Media Arts University of New Mexico

Image Formation. Ed Angel Professor of Computer Science, Electrical and Computer Engineering, and Media Arts University of New Mexico Image Formation Ed Angel Professor of Computer Science, Electrical and Computer Engineering, and Media Arts University of New Mexico 1 Objectives Fundamental imaging notions Physical basis for image formation

More information

EE 584 MACHINE VISION

EE 584 MACHINE VISION EE 584 MACHINE VISION Binary Images Analysis Geometrical & Topological Properties Connectedness Binary Algorithms Morphology Binary Images Binary (two-valued; black/white) images gives better efficiency

More information

EEM 463 Introduction to Image Processing. Week 3: Intensity Transformations

EEM 463 Introduction to Image Processing. Week 3: Intensity Transformations EEM 463 Introduction to Image Processing Week 3: Intensity Transformations Fall 2013 Instructor: Hatice Çınar Akakın, Ph.D. haticecinarakakin@anadolu.edu.tr Anadolu University Enhancement Domains Spatial

More information

Lecture 4: Spatial Domain Transformations

Lecture 4: Spatial Domain Transformations # Lecture 4: Spatial Domain Transformations Saad J Bedros sbedros@umn.edu Reminder 2 nd Quiz on the manipulator Part is this Fri, April 7 205, :5 AM to :0 PM Open Book, Open Notes, Focus on the material

More information

Basic relations between pixels (Chapter 2)

Basic relations between pixels (Chapter 2) Basic relations between pixels (Chapter 2) Lecture 3 Basic Relationships Between Pixels Definitions: f(x,y): digital image Pixels: q, p (p,q f) A subset of pixels of f(x,y): S A typology of relations:

More information

Interpolation is a basic tool used extensively in tasks such as zooming, shrinking, rotating, and geometric corrections.

Interpolation is a basic tool used extensively in tasks such as zooming, shrinking, rotating, and geometric corrections. Image Interpolation 48 Interpolation is a basic tool used extensively in tasks such as zooming, shrinking, rotating, and geometric corrections. Fundamentally, interpolation is the process of using known

More information

In this lecture. Background. Background. Background. PAM3012 Digital Image Processing for Radiographers

In this lecture. Background. Background. Background. PAM3012 Digital Image Processing for Radiographers PAM3012 Digital Image Processing for Radiographers Image Enhancement in the Spatial Domain (Part I) In this lecture Image Enhancement Introduction to spatial domain Information Greyscale transformations

More information

Filters. Advanced and Special Topics: Filters. Filters

Filters. Advanced and Special Topics: Filters. Filters Filters Advanced and Special Topics: Filters Dr. Edmund Lam Department of Electrical and Electronic Engineering The University of Hong Kong ELEC4245: Digital Image Processing (Second Semester, 2016 17)

More information

ECEN 447 Digital Image Processing

ECEN 447 Digital Image Processing ECEN 447 Digital Image Processing Lecture 7: Mathematical Morphology Ulisses Braga-Neto ECE Department Texas A&M University Basics of Mathematical Morphology Mathematical Morphology (MM) is a discipline

More information

Image Processing. Bilkent University. CS554 Computer Vision Pinar Duygulu

Image Processing. Bilkent University. CS554 Computer Vision Pinar Duygulu Image Processing CS 554 Computer Vision Pinar Duygulu Bilkent University Today Image Formation Point and Blob Processing Binary Image Processing Readings: Gonzalez & Woods, Ch. 3 Slides are adapted from

More information

IMAGE ENHANCEMENT IN THE SPATIAL DOMAIN

IMAGE ENHANCEMENT IN THE SPATIAL DOMAIN 1 Image Enhancement in the Spatial Domain 3 IMAGE ENHANCEMENT IN THE SPATIAL DOMAIN Unit structure : 3.0 Objectives 3.1 Introduction 3.2 Basic Grey Level Transform 3.3 Identity Transform Function 3.4 Image

More information

3D graphics, raster and colors CS312 Fall 2010

3D graphics, raster and colors CS312 Fall 2010 Computer Graphics 3D graphics, raster and colors CS312 Fall 2010 Shift in CG Application Markets 1989-2000 2000 1989 3D Graphics Object description 3D graphics model Visualization 2D projection that simulates

More information

Image Restoration and Reconstruction

Image Restoration and Reconstruction Image Restoration and Reconstruction Image restoration Objective process to improve an image, as opposed to the subjective process of image enhancement Enhancement uses heuristics to improve the image

More information

Anno accademico 2006/2007. Davide Migliore

Anno accademico 2006/2007. Davide Migliore Robotica Anno accademico 6/7 Davide Migliore migliore@elet.polimi.it Today What is a feature? Some useful information The world of features: Detectors Edges detection Corners/Points detection Descriptors?!?!?

More information

Color, Edge and Texture

Color, Edge and Texture EECS 432-Advanced Computer Vision Notes Series 4 Color, Edge and Texture Ying Wu Electrical Engineering & Computer Science Northwestern University Evanston, IL 628 yingwu@ece.northwestern.edu Contents

More information

CoE4TN4 Image Processing. Chapter 5 Image Restoration and Reconstruction

CoE4TN4 Image Processing. Chapter 5 Image Restoration and Reconstruction CoE4TN4 Image Processing Chapter 5 Image Restoration and Reconstruction Image Restoration Similar to image enhancement, the ultimate goal of restoration techniques is to improve an image Restoration: a

More information

Lecture 7: Morphological Image Processing

Lecture 7: Morphological Image Processing I2200: Digital Image processing Lecture 7: Morphological Image Processing Prof. YingLi Tian Oct. 25, 2017 Department of Electrical Engineering The City College of New York The City University of New York

More information

Midterm Exam! CS 184: Foundations of Computer Graphics! page 1 of 13!

Midterm Exam! CS 184: Foundations of Computer Graphics! page 1 of 13! Midterm Exam! CS 184: Foundations of Computer Graphics! page 1 of 13! Student Name:!! Class Account Username:! Instructions: Read them carefully!! The exam begins at 1:10pm and ends at 2:30pm. You must

More information

Computer Graphics. Bing-Yu Chen National Taiwan University

Computer Graphics. Bing-Yu Chen National Taiwan University Computer Graphics Bing-Yu Chen National Taiwan University Introduction The Graphics Process Color Models Triangle Meshes The Rendering Pipeline 1 INPUT What is Computer Graphics? Definition the pictorial

More information

EXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006,

EXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, School of Computer Science and Communication, KTH Danica Kragic EXAM SOLUTIONS Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, 14.00 19.00 Grade table 0-25 U 26-35 3 36-45

More information

EE795: Computer Vision and Intelligent Systems

EE795: Computer Vision and Intelligent Systems EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 WRI C225 Lecture 02 130124 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Basics Image Formation Image Processing 3 Intelligent

More information