CHAPTER 4 OPTIC CUP SEGMENTATION AND QUANTIFICATION OF NEURORETINAL RIM AREA TO DETECT GLAUCOMA


4.1 INTRODUCTION

Glaucoma damages the optic nerve cells that transmit visual information to the brain. Although IOP is the most significant risk factor for developing glaucoma, optic disc parameters are mainly used to discover early glaucomatous damage. There is therefore a real need for CAD methods that can identify early glaucomatous development and so avoid further progression. In this chapter, a technique for cup segmentation to detect glaucoma using Color Mathematical Morphology (CMM) is proposed. Structural features are then described to estimate quantitative parameters that classify fundus images into two classes. The cup to disc ratio and the rim area in the Inferior, Superior, Nasal and Temporal (ISNT) quadrants are the parameters extracted from the segmented disc and cup regions. A flow diagram of the proposed technique to detect glaucoma using structural features is provided in section 4.2. Section 4.3 describes an automatic optic cup segmentation algorithm using color space and mathematical morphology. Sections 4.4 and 4.5 describe the experimental results and performance analysis for the optic cup. Parameter estimation for glaucoma assessment, neuroretinal rim quantification and the rim area detection algorithm are presented in section 4.6. Performance analysis of structural features is explained in section 4.7. Conclusions are provided in section 4.8.

4.2 ASSESSMENT OF GLAUCOMA USING STRUCTURAL FEATURES

The optic cup is segmented using the CMM technique based on pallor in fundus images to differentiate the optic cup from the disc boundary. From the detected disc and cup boundaries, two parameters, namely the cup to disc ratio and the neuroretinal rim area, are calculated. The flow diagram to detect glaucoma using the structural features is shown in Figure 4.1.

Fundus image -> Preprocessing -> Optic disc segmentation -> Optic cup segmentation -> Segmented features (CDR, neuroretinal rim area) -> Identification of stages of glaucoma

Figure 4.1 Flow diagram to assess glaucoma using structural features

4.3 PROPOSED CMM METHOD TO DETECT OPTIC CUP

Optic cup detection is a challenging task because of the interweavement of the cup with the blood vessels. The optic disc is detected using

region of interest based segmentation, with the bounding rectangle enclosing the region of interest set to 1.5 times the disc width. The color mathematical morphology approach for the segmentation of the optic cup aims to detect the optic cup exactly, so that the neuroretinal rim area present between the disc and cup can be calculated. Unlike most of the previous methods discussed in the literature, the method proposed here performs an initial detection of the optic cup followed by the erasure of blood vessels to obtain higher accuracy. A new color model technique based on pallor in fundus images using K-means clustering, as shown in Figure 4.2, is proposed to differentiate the optic cup from the disc boundary.

Figure 4.2 Systematic representation of the CMM model to detect the optic cup

The optic cup is detected using the technique shown in Figure 4.3.

Input Image -> Optic Disc Masking -> K-Means Clustering -> Centroid Color Mapping -> Cup Color Identification -> Cup Boundary Extraction -> Morphological Operations -> Ellipse Fitting

Figure 4.3 Flow diagram for optic cup detection

The optic cup and disc areas usually differ in color, a difference known as pallor. This method makes use of this difference in pallor to delineate the cup-disc boundary. Observations on retinal images show that the actual cup pallor differs between patients, and even between images of the same retina, due to changes in lighting conditions. So the color intensity of the optic cup cannot be fixed with prior knowledge. To overcome the limitations of the RGB color space in high level processing, color spaces have been developed based on mathematical transformations of the original RGB color channels.
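As an illustration, the nonlinear RGB to CIE L*a*b* transform adopted later in this chapter can be sketched as follows. The sRGB-to-XYZ matrix and the D65 reference white point are standard values assumed here for illustration; the text does not state which illuminant was used.

```python
def f(t):
    # Piecewise cube-root compression used by CIE L*a*b* (Eqs. 4.1-4.3 below).
    delta = 6 / 29
    return t ** (1 / 3) if t > delta ** 3 else t / (3 * delta ** 2) + 4 / 29

def rgb_to_lab(r, g, b):
    # Normalize 8-bit channels, apply a linear RGB -> XYZ transform (sRGB
    # primaries assumed), then the nonlinear XYZ -> L*a*b* mapping.
    r, g, b = r / 255, g / 255, b / 255
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    xn, yn, zn = 0.9505, 1.0, 1.0890   # D65 reference white (assumed)
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    L = 116 * fy - 16
    a = 500 * (fx - fy)
    b_star = 200 * (fy - fz)
    return L, a, b_star
```

For white input (255, 255, 255) the ratios X/Xn, Y/Yn, Z/Zn are all 1, giving L* = 100 and a* = b* = 0, and black maps to L* = 0, which matches the 0 to 100 lightness range described below.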

In a color image, each pixel is a 3D vector with component values ranging from 0 to 255. RGB, YIQ, CMY, XYZ, HSV, LUV and L*a*b* are color models oriented towards image processing applications. Color fundus images in Red, Green and Blue (RGB) space are transformed into the Commission Internationale de l'Eclairage (CIE) L*a*b* color space, a color-opponent space with dimension L* for lightness and a* and b* for the color-opponent dimensions, based on nonlinearly compressed CIE XYZ color space coordinates. The Euclidean distance in L*a*b* space corresponds to the threshold at which humans distinguish a color difference. The CIE L*a*b* color space is perceptually uniform and device independent. It consists of a luminosity layer L*, which varies uniformly from 0 for black to 100 for white, a chromaticity layer a* indicating where the color falls along the red-green axis, and a chromaticity layer b* indicating where the color falls along the yellow-blue axis. The a* and b* values are expressed such that +a*/-a* denotes red/green and +b*/-b* denotes yellow/blue. This color space is an excellent decoupler of intensity and color. The non-linear relations described by William K. Pratt (2002) for L*, a* and b* are as follows:

L* = 116 f(Y/Yn) - 16 (4.1)

a* = 500 [f(X/Xn) - f(Y/Yn)] (4.2)

b* = 200 [f(Y/Yn) - f(Z/Zn)] (4.3)

where f(t) = t^(1/3) if t > (6/29)^3, and f(t) = (1/3)(29/6)^2 t + 4/29 otherwise.

The division of the f(t) function into two domains prevents an infinite slope at t = 0. Tristimulus values represent the relative quantities of the primary colors. The intensities of red, green and blue are

transformed into tristimulus values represented by X, Y and Z. Here Xn, Yn and Zn define the illuminant-dependent reference white point in the XYZ color space, where the subscript n denotes normalized values. In order to separate the optic cup region from the surrounding region, a segmentation algorithm based on color histogram analysis followed by morphological operations has been developed. The proposed CMM color model for the detection of the optic cup is explained in the following steps.

1. The input image to the optic cup segmentation process is the segmented optic disc in L*a*b* color space.

2. The optic disc is masked with a radius equal to or greater than the size of the optic disc.

3. The masked image is fed to the clustering process in order to group the pixel values into regions. The number of clusters for the clustering process is determined using anatomical knowledge as well as the Hill Climbing technique.

3.1 As the optic disc consists of four regions, viz. the optic cup, the interior optic disc, the exterior optic disc and the blood vessels, the number of clusters is selected as four (K = 4) using domain knowledge.

3.2 The number of clusters for K-means clustering is also determined automatically using the Hill Climbing technique described by Ohashi et al (2003). The clusters are found from the peaks obtained by projecting the image onto its three color components. Histograms are constructed by splitting the range of the data into equal-sized bins called classes. Then, for each bin, the number of points from the data set that fall into it is counted. Since the CIE L*a*b* feature space is three dimensional, each bin in the color histogram has 3^d - 1 neighbours, where B is the total

number of bins, chosen as 10 by trial and error, and d, the number of dimensions of the feature space, is 3. Peaks are identified by comparing each bin with its neighboring bins, and the number of peaks obtained indicates the value of K. The histogram bins are utilized, rather than the pixels themselves, in finding the peaks of the clusters. The 3-D color histogram of the image is computed, a non-zero bin of the color histogram is chosen as the starting point, and an uphill move is made until a peak is reached, as follows:

i. The number of pixels of the current histogram bin is compared with the number of pixels of the neighboring left and right bins.

ii. If the neighboring bins have different numbers of pixels, the algorithm makes an uphill move towards the neighboring bin with the larger number of pixels.

iii. If the immediate neighboring bins have the same numbers of pixels, the algorithm checks the next neighboring bins, and so on, until two neighboring bins with different numbers of pixels are found. Then, an uphill move is made towards the bin with the larger number of pixels.

The uphill climbing is continued until reaching a bin from which no uphill movement is possible; the current bin is then identified as a peak (local maximum). This step is repeated until all non-zero bins of the color histogram have been climbed to a peak. The identified peaks as shown in Figure 4.4

represent the initial number of clusters of the input image, and these peaks are saved. The number of clusters K is initialized as four (K = 4), and the values of these bins form the initial seeds, as shown in Figure 4.4. The seeds so formed are then passed on to K-means clustering. A lower value of K leads to an increase in the cup size, and a higher value results in the predominance of blood vessels.

Figure 4.4 Detected peaks in L*a*b* color space

Let x1, x2, ..., xM be the data points (vectors of observations). Each vector xi will be assigned to one and only one cluster. The M data points are grouped into K clusters such that similar items are grouped together in the same cluster. For each data point, the nearest centroid is found and the data point

is assigned to the cluster associated with the nearest centroid. The new centroid is then the mean of all points in the cluster. Each data point is mapped to the closest centroid using the distance between them. These two steps are iterated until convergence; when there are no new re-assignments, the iteration stops. For the current set of cluster means, each observation is assigned as in Equation (4.4).

C(i) = arg min over k of ||x_i - m_k||^2, i = 1, ..., M (4.4)

C(i) denotes the cluster number for the i-th observation, M denotes the number of data points, the number of clusters K is initialized as 4, and m_k is the mean vector of the k-th cluster, where k varies from 1 to 4. K-means minimizes the within-cluster point scatter shown in Equation (4.5).

W(C) = sum over k = 1, ..., K of sum over i with C(i) = k of ||x_i - m_k||^2 (4.5)

4. Using K-means clustering, all the peaks of the three channels are grouped into one cluster, and the valley pixels into separate clusters. K-means clustering groups the pixels within the optic disc into the four regions.

5. Each cluster has a centroid, and each region is filled with its centroid color. From these four regions, the region corresponding to the optic cup can be easily identified by its centroid color. Each pixel within a cluster is thus replaced by the corresponding cluster center color.

6. The difference between two colors is measured using the Euclidean distance metric. The colors {c1, c2, c3, c4} are extracted from the image. The first

cluster centre a1 in the color space is chosen as a1 = c1. Next, the color difference from c2 to a1 is computed. If this difference is greater than a threshold Th, a new cluster centre is created as a2 = c2; otherwise c2 is assigned to a1. Similarly, the color difference from each representative color to every established color centre is computed and thresholded. A new cluster is created if all of these distances exceed Th; otherwise the color is assigned to the class to which it is closest. The brightest centroid color corresponds to the optic cup. Thus an initial boundary of the optic cup is obtained, and the output of this clustering is the optic cup. The experiment is repeated for various bin values such as 5, 10 and 15, as shown in Figure 4.5.

(a) Input image (b) Clustered outputs for N = 5 (c) Clustered outputs for N = 10 (d) Clustered outputs for N = 15

Figure 4.5 Clustered outputs for various bin sizes
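The K-means grouping described above (seeded by the histogram peaks, with pixels re-colored by their centroid) can be sketched as follows. This is a minimal illustrative implementation, not the exact code used in the experiments; the seeding and iteration count are assumptions.

```python
import numpy as np

def kmeans(pixels, k=4, seeds=None, iters=50, rng=None):
    # pixels: (N, 3) array of L*a*b* values.
    # seeds: optional (k, 3) initial centroids, e.g. the hill-climbing peaks;
    # without seeds, k random pixels are used instead.
    if rng is None:
        rng = np.random.default_rng(0)
    if seeds is None:
        seeds = pixels[rng.choice(len(pixels), k, replace=False)]
    centers = seeds.astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest centroid (Eq. 4.4).
        labels = np.argmin(((pixels[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Recompute each centroid as the mean of its assigned pixels.
        new = np.array([pixels[labels == j].mean(0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Following step 6, the cluster whose centroid has the highest lightness L*
# (the brightest centroid color) is taken as the optic cup region.
```

Replacing every pixel by its cluster's centroid color then yields the centroid-color-mapped image of Figure 4.5.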

The impact of the blood vessel region within the cup is removed by morphological operations. This is performed with a closing operation, that is, a dilation to first remove the blood vessels, followed by an erosion to restore the boundaries to their former positions in the region of interest. As gray level morphological operations cannot be directly applied to color images, each color retinal image is described as a set of three independent vectors, each representing a gray scale image. In color morphology, each pixel must be considered as a vector of color components, and definitions of maximum and minimum operations on ordered vectors are necessary to perform the basic operations. Hence, for each arbitrary point x in the color space, the morphological dilation (I_d) and erosion (I_e) of the image I by a structuring element E are defined as in Equations (4.6) and (4.7).

Dilation of the image I by E:

I_d(x) = max { I(z) : z in E_x } (4.6)

Erosion is defined by:

I_e(x) = min { I(z) : z in E_x } (4.7)

A structuring element is a matrix of pixels, each with a value of zero or one; the pixels set to 1 define the neighborhood of a pixel. A structuring element is said to fit the image if, for each of its pixels set to 1, the corresponding image pixel is also 1, and to hit the image if, for at least one of its pixels set to 1, the corresponding image pixel is 1. The closing operator smoothes away small scale dark structures from the color retinal images. As closing eliminates image details smaller than the structuring element, the structuring element E must be set to cover all possible vascular structures. A circular window with the maximal vessel width as radius is used for dilation and

erosion. Of the various shapes such as square, rectangular, diamond and diagonal, a disc shaped structuring element is preferred. Its size depends on the features extracted from the image: since the blood vessels were determined to be not wider than 13 pixels, a symmetrical 13 x 13 disc structuring element is used. In the output image, each pixel is computed by taking the maximum of the input pixels in the neighborhood for dilation and the minimum for erosion. This step helps to reject outliers inside or outside the cup region and yields an approximate cup region. For images having a large traversal of blood vessels and a discontinuous cup region, an ellipse fitting technique is used. Ellipse fitting based on a least squares fitting algorithm smooths the cup boundary. The modified optic cup boundary obtained is then fitted with an ellipse, as shown in Figure 4.6.

(a) Input image (b) Mask image (c) Color model (d) Initial cup boundary (e) Image smoothing (f) Ellipse fitting

Figure 4.6 Steps in the detection of the optic cup
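The per-channel closing of Equations (4.6) and (4.7) can be sketched with a naive (deliberately slow but transparent) max/min filter; the 13 x 13 disc corresponds to a radius of 6. Real implementations would use an optimized library routine, and the border-padding choice here is an assumption.

```python
import numpy as np

def disc_se(radius):
    # Disc-shaped structuring element; radius 6 gives the 13x13 mask used here.
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x ** 2 + y ** 2 <= radius ** 2

def close_channel(img, se):
    # Grey-level closing = dilation (max filter) followed by erosion (min
    # filter), applied to one color channel, as Eqs. (4.6)-(4.7) describe.
    r = se.shape[0] // 2
    def run(src, op, pad_val):
        padded = np.pad(src, r, constant_values=pad_val)
        out = np.empty_like(src)
        for i in range(src.shape[0]):
            for j in range(src.shape[1]):
                win = padded[i:i + se.shape[0], j:j + se.shape[1]]
                out[i, j] = op(win[se])   # max/min over the disc neighborhood
        return out
    dilated = run(img, np.max, img.min())   # removes thin dark vessels
    return run(dilated, np.min, img.max())  # restores the boundaries
```

Applying this to each of the three channels independently gives the color closing described above: any dark structure narrower than the disc, such as a vessel, is filled in, while larger regions keep their boundaries.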

Further, the boundary of the cup is detected exactly in accordance with the markings of the experts, except in those regions where the optic cup boundary is severely occluded by the blood vessels. A comparison of color spaces for cup detection is shown in Figure 4.7. The Rand Index (RI), a statistical measure, is used to measure the quality of the segmentation. RI counts the fraction of pixels whose labelings are consistent between the computed segmentation and the ground truth. The RGB, YIQ, HSV and XYZ color spaces exhibit RI values of 0.883, 0.796, ... respectively. In these color spaces, the segmented cup does not show consistent results with the ground truth and hence the RI is lower. In the L*a*b* color space, the optic cup is well segmented and exhibits a high RI of 0.93 when compared with the other color spaces.

Figure 4.7 Optic cup detection in different color spaces
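The Rand Index used above can be computed from its standard pair-counting definition (the fraction of pixel pairs on which the two labelings agree), which reduces to counting a contingency table. This sketch assumes that definition; the text's simpler per-pixel description is a special case of it for binary masks.

```python
from math import comb

def rand_index(seg, gt):
    # seg, gt: flat sequences of per-pixel labels (e.g. 0 = background, 1 = cup).
    n = len(seg)
    labels_s, labels_g = set(seg), set(gt)
    # Contingency counts: pixels with label i in seg and label j in gt.
    cont = {(i, j): 0 for i in labels_s for j in labels_g}
    for s, g in zip(seg, gt):
        cont[(s, g)] += 1
    sum_ij = sum(comb(v, 2) for v in cont.values())
    sum_i = sum(comb(sum(v for (i, _), v in cont.items() if i == s), 2)
                for s in labels_s)
    sum_j = sum(comb(sum(v for (_, j), v in cont.items() if j == g), 2)
                for g in labels_g)
    total = comb(n, 2)
    # Agreeing pairs = together in both partitions + apart in both partitions.
    return (total + 2 * sum_ij - sum_i - sum_j) / total
```

Identical segmentations give RI = 1, and the index falls towards 0 as the computed cup mask and the expert ground truth diverge.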

4.4 EXPERIMENTAL RESULTS

A few input images are shown in Figure 4.8. Figure 4.9 illustrates the detected disc and cup boundaries, and Figure 4.10 shows the neuroretinal rim region present between the disc and the cup.

Figure 4.8 Few input images

Figure 4.9 Detected optic disc and optic cup in fundus images

Figure 4.10 Neuroretinal rim area detection in fundus images

4.5 PERFORMANCE ANALYSIS FOR OPTIC CUP DETECTION

i) To assess the area of overlap between the computed region and the ground truth of the optic cup, pixel-wise precision and recall values are computed using Equations (4.8) and (4.9).

Precision = TP / (TP + FP) (4.8)

Recall = TP / (TP + FN) (4.9)

where TP is the number of true positive, FP the number of false positive and FN the number of false negative pixels. The contours estimated by the ophthalmologist are taken as the ground truth.

ii) Another method of evaluating the performance uses the F score, given by Equation (4.10).

F = 2 x Precision x Recall / (Precision + Recall) (4.10)
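Equations (4.8) to (4.10) translate directly into code; a minimal sketch over flattened binary masks:

```python
def precision_recall_f(pred, truth):
    # pred, truth: flat sequences of 0/1 pixel labels; truth is the
    # expert-marked ground truth contour filled as a mask.
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if t and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0   # Eq. (4.8)
    recall = tp / (tp + fn) if tp + fn else 0.0      # Eq. (4.9)
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)             # Eq. (4.10)
    return precision, recall, f
```

An F score near 1 means the computed cup mask overlaps the expert mask almost exactly; the zero-denominator guards are an assumption for the degenerate empty-mask case.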

The value of the F score lies between 0 and 1, and the score is high for an accurate method. Table 4.1 presents the quantitative assessment of the cup segmentation results using the F score.

Table 4.1 F score for cup segmentation

Images  Threshold  Component analysis  CMM

The CMM method achieves an average F score of 0.9, compared to an average F score of 0.68 for thresholding and 0.74 for the component analysis approach. The proposed cup segmentation method thus provides a significant improvement over the other two methods.

4.6 PARAMETER ESTIMATION FOR GLAUCOMA ASSESSMENT

4.6.1 Determination of Cup to Disc Ratio

An important feature is the cup to disc ratio (CDR), which specifies the change in the cup area. Due to glaucoma, the cup area will

increase slowly due to intraocular pressure, resulting in dramatic visual loss. The area ratio is selected to assess the overall segmentation accuracy achieved in all directions, unlike the cup to disc diameter ratio, which reflects the accuracy only in the vertical direction. Measuring the cupping area in the optic disc is an aid to glaucoma diagnosis as well as a follow-up tool for monitoring the progression of the disease in a glaucomatous optic disc. The CDR is one of the significant measurements, found to be suspicious for glaucoma above a particular value, usually 0.3. An increase in the cup to disc ratio, or the enlargement of the cup over a period of time, is diagnostic of glaucomatous disc damage. The CDR is the ratio between the area of the optic cup and the area of the optic disc. A CDR greater than 0.3 is the most often reported sign of disc damage: CDR > 0.3 indicates a suspicion of glaucoma, while CDR <= 0.3 is considered a normal image, as shown in Figure 4.11. The areas of the optic disc and cup are computed using the formula given below.

Area = (pi/4) a b (4.11)

where

a = major axis length of the fitted boundary
b = minor axis length of the fitted boundary

(a) CDR <= 0.3 (b) CDR > 0.3

Figure 4.11 CDR for normal and abnormal images

The boundary of the detected cup, compared with the threshold based approach and color component analysis, is shown in Figure 4.12.
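The CDR computation from the fitted ellipses can be sketched as below. It assumes the ellipse area is taken as pi/4 times the product of the full major and minor axis lengths (equivalently pi times the product of the semi-axes); the 0.3 decision threshold is the one stated in the text.

```python
import math

def ellipse_area(major, minor):
    # major, minor: full axis lengths of the fitted ellipse,
    # so the semi-axes are major/2 and minor/2.
    return math.pi * (major / 2) * (minor / 2)

def cdr(cup_major, cup_minor, disc_major, disc_minor):
    # Cup to disc ratio as the ratio of the two ellipse areas.
    return (ellipse_area(cup_major, cup_minor)
            / ellipse_area(disc_major, disc_minor))

def is_glaucoma_suspect(ratio, threshold=0.3):
    # CDR > 0.3 flags the image as glaucoma-suspect, per the text.
    return ratio > threshold
```

Note that because the pi/4 factor appears in both numerator and denominator, the CDR itself depends only on the products of the axis lengths.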

(a) Test images (b) Threshold based approach (c) Color component analysis (d) Detection of the optic disc using the windowing technique and the optic cup using the color model

Figure 4.12 Disc and cup boundary extraction

For the left eye, the boundary estimated by the CMM method is closer to the ground truth at the nasal side. However, the boundaries at the temporal side differ slightly, because there is no obvious cup structure on the temporal side

and the main blood vessels also occlude that region. For the right eye, the boundary is closer at the temporal side and differs slightly at the nasal side. Clinical and computed CDR values for a few images are shown in Table 4.2.

Table 4.2 Comparison of CDR values for few real time images

Image  CDR (Clinical)  CDR (DW and CMM)

The CDR is calculated from the fundus images using the DW algorithm for the segmentation of the optic disc and the CMM technique for optic cup segmentation. To determine the performance of the approach, 40 retinal images are processed and their CDRs calculated. The images used as the database were obtained from Aravind Eye Hospital, Madurai. From the above analysis it is concluded that in a few of the images there is a difference between the computed and ground truth values. There is a misclassification of two normal and two abnormal images, and 90% accuracy is reported for the 40 images. This is because the position of the optic cup in different sectors is not considered. The CDR may be in the normal range, but if in the same image the cup position is closer to the inferior or superior sector, severe rim loss occurs and leads to loss of vision. The CDR measure is inadequate to assess local cup changes, and so sector-wise segmentation has to be performed.

4.6.2 Quantification of Neuroretinal Rim

In order to quantify focal cupping and to identify the stages of glaucoma, the neuroretinal rim region is considered. In glaucoma, the loss of axons in the eye is reflected as abnormalities of the neuroretinal rim. Identification of the width of the neuroretinal rim in all sectors of the optic disc, as shown in Figure 4.13, is of fundamental importance for the detection of diffuse and localized rim loss in glaucoma.

Figure 4.13 Representation of structural features in fundus image

The CDR is calculated regardless of the position of the cup and does not take into account the disc size. The optic disc is vertically oval and the optic cup is horizontally oval, resulting in the characteristic shape of the neuroretinal rim. The normal optic disc usually has a configuration in which the Inferior (I) neuroretinal rim is the widest portion of the rim, followed by the Superior (S) rim and then the Nasal (N) rim, with the Temporal (T) rim being the narrowest portion. Glaucoma frequently damages the superior and inferior optic nerve fibers before the temporal and nasal fibers, leading to thinning of the superior and inferior rims and violation of this rule. If the CDR is less than 0.3 the image appears to be normal, but if the cup region is closer to I or S, it is highly pathological and the patient suffers visual damage due to severe neuroretinal rim loss. If the CDR is greater than 0.3 and the cup region is closer to the nasal or temporal quadrant, the patient is suspected to be in the early stage, since rim loss has not yet occurred. So the calculation of the neuroretinal rim area is essential for the accurate diagnosis of glaucoma.

4.6.3 NRRA detection algorithm

The Neuro Retinal Rim Area (NRRA) algorithm is described below.

1. The boundary of the optic disc is extracted.

2. Optic cup segmentation is performed.

3. A binary image of the segmented disc and cup region is taken and cropped using a suitable mask.

4. The image is divided into four sectors (I, S, N, T), with adjacent lines making an angle of 90 degrees.

5. The neuroretinal rim area in each of the sectors is calculated by counting the number of white pixels in the rim region.

The neuroretinal rim area divided by the disc diameter is taken as ratio1. If ratio1 is greater than 0.2 mm in the I and S regions, the image is normal; if ratio1 is less than 0.2 mm in the inferior and superior regions, the image is abnormal. For a large disc:

if (ratio1 > 0.3)
    disp('Disk Damage - Stage 0a');
elseif (ratio1 > 0.2 && ratio1 <= 0.3)
    disp('Disk Damage - Stage 0b');
elseif (ratio1 > 0.1 && ratio1 <= 0.2)
    disp('Disk Damage - Stage 1');
elseif (ratio1 > 0.05 && ratio1 <= 0.1)
    disp('Disk Damage - Stage 2');
else
    disp('Disk Damage - Stage 3');
end
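The sector division and staging logic above can be sketched in Python. The mapping of 90-degree sectors to the I, S, N, T labels depends on which eye is imaged and on the image orientation, so the angle ranges below are an illustrative assumption, as are the large-disc stage thresholds taken from the listing above.

```python
import numpy as np

def quadrant_rim_counts(rim):
    # rim: binary image of the neuroretinal rim (disc minus cup),
    # assumed centred on the disc. Counts white pixels per 90-degree sector.
    h, w = rim.shape
    cy, cx = h / 2, w / 2
    y, x = np.mgrid[0:h, 0:w]
    ang = np.degrees(np.arctan2(cy - y, x - cx)) % 360
    # Sector-to-angle assignment assumed for a right eye, superior at top.
    sectors = {'S': (45, 135), 'N': (135, 225), 'I': (225, 315)}
    counts = {k: int(rim[(ang >= lo) & (ang < hi)].sum())
              for k, (lo, hi) in sectors.items()}
    counts['T'] = int(rim.sum()) - sum(counts.values())  # remaining sector
    return counts

def stage(ratio1):
    # Disc-damage staging from the rim-to-disc ratio (large-disc thresholds).
    if ratio1 > 0.3:
        return 'Stage 0a'
    if ratio1 > 0.2:
        return 'Stage 0b'
    if ratio1 > 0.1:
        return 'Stage 1'
    if ratio1 > 0.05:
        return 'Stage 2'
    return 'Stage 3'
```

Dividing each sector's white-pixel count by the disc diameter gives the per-sector ratio1 that drives the staging.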

This process is repeated to calculate the rim to disc ratio in different images for the left and right eye. Stages 0a and 0b refer to a normal eye and ocular hypertension; stages 1, 2 and 3 refer to early, established and advanced glaucoma. The neuroretinal rim area is the region between the optic disc and the optic cup. The rim area is measured in the inferior, superior, nasal and temporal quadrants. Usually, in normal images, the rim must be thicker in the superior and inferior regions than in the temporal and nasal regions. To obtain the thickness in all four quadrants, shown in Figure 4.14, a binary image of the neuroretinal rim is taken, as in Figure 4.15, and then cropped. A mask of the cropped image size is used to filter one quadrant; the mask is then rotated by 90 degrees to obtain the next quadrant. Figure 4.16 shows the masks used for identifying the rim area on the ISNT sides of the optic disc.

Figure 4.14 Image with I S N T regions

Figure 4.15 Binary image of the detected disc and cup

(a) Superior quadrant (b) Temporal quadrant (c) Inferior quadrant (d) Nasal quadrant

Figure 4.16 Rim area detection using mask

(a) Superior quadrant (b) Temporal quadrant (c) Inferior quadrant (d) Nasal quadrant

Figure 4.17 Rim areas in ISNT sectors

This neuroretinal rim configuration gives rise to a cup shape that is either round or horizontally oval. The neuroretinal rim area is calculated by subtracting the area of the optic cup from the area of the optic disc. Normally the rim is widest in the inferior temporal sector, followed by the superior temporal sector, the nasal sector and the temporal horizontal sector. The neuroretinal rim area in each of the sectors shown in Figure 4.17 is calculated by applying a mask to each region and counting the number of white pixels in each of the I, S, N, T regions. Large optic discs have large cups and often elevated CDR values, so they can be incorrectly labeled as glaucomatous. Similarly, small discs with a small CDR may actually be pathologic and be erroneously classified as normal. The ISNT rule states that the rim width is greatest in the Inferior region, followed by the superior, nasal and temporal regions. So the rim width is calculated by counting the number of white pixels in each of the sectors. The optic disc size varies from person to person: it could be a large, medium or small sized disc. If the ratio of the neuroretinal rim width to the disc diameter is greater than 0.2 mm in the superior or inferior region, the image is normal. For pathological subjects, the inferior and superior rim thickness will be minimal, which implies that the ratio of the neuroretinal rim width to the disc diameter is less than 0.2 mm. By calculating this ratio, the extent of disc damage can be identified. Detection of the rim width fails when the segmentation of the optic disc or optic cup is inaccurate. The Rim to Disc Ratio (RDR) in the ISNT regions is mapped into a nomenclature which details the stage of the disease.

4.7 PERFORMANCE ANALYSIS OF STRUCTURAL FEATURES

Discs are generally categorized as small, average and large. Small discs are less than 1.5 mm, average sized discs range between 1.5 mm and 2 mm, and large discs are greater than 2 mm. Stages 0a and 0b are the normal stages

and stages 1, 2 and 3 denote the extent of disc damage for a pathological subject. For example, when the rim to disc ratio ranges from 0.2 to 0.3, it refers to abnormal stage 2 in a small disc, stage 1 in an average sized disc and stage 0b in a large disc.

Table 4.3 Quantification of NRRA to identify stages of glaucoma

Image  I (µm)  S (µm)  N (µm)  T (µm)  Disc diameter (µm)  Cup area (µm²)  CDR  Rim to disc ratio (I)  Rim to disc ratio (S)  Stage of classification

Quantitative results for a few images are shown in Table 4.3. The asymmetry between the Left Eye (LE) and Right Eye (RE) is shown in Table 4.4, and the stages of disc damage in the left and right eye are presented in Table 4.5. With the detected optic disc and cup boundaries, the disc diameter, cup area and cup to disc ratio are calculated. The CDR does not consider the disc size. The stages, extending from no damage to far advanced damage, are based on the width of the neuroretinal rim. In the first image shown in Table 4.3, a CDR value greater than 0.3 indicates an abnormal image. In the same image, the neuroretinal rim area is largest in the I region, followed by the S, N and T regions; hence the ISNT rule is followed and the disc resembles a healthy one. Rim to disc ratios are then calculated for the I and S regions to identify the stage of the disease.

Table 4.4 Rim to disc ratio in different images for left and right eye

Images  S. Left (µm)  I. Left (µm)  S. Right (µm)  I. Right (µm)  Disc diameter (Left) (µm)  Disc diameter (Right) (µm)  Rim to disc ratio (S. Left)  Rim to disc ratio (S. Right)

Table 4.5 Stage of disc damage in left and right eye

Images  Rim to disc ratio (Left)  Rim to disc ratio (Right)  Stage of disc damage (Left)  Stage of disc damage (Right)

The procedure is repeated for both the left and right eye to assess the stage of disc damage. Based on the neuroretinal rim area, the extent of disc damage can be found in both eyes by taking into account the

asymmetry between the left and right eyes. The stage of the disease can be found by calculating the rim loss in the inferior and superior quadrants.

4.8 CONCLUSION

The optic cup segmented using CMM based on pallor provides a high Rand Index and a low displacement error when tested on a database of 40 images collected from Aravind Eye Hospital, Madurai. The segmented results achieved an average F score of 0.9, compared to an average F score of 0.68 for thresholding and 0.74 for the component analysis approach. The cup boundary detected using the CMM technique matched well with expert assessments of the fundus images. As glaucoma progresses, the optic cup becomes larger, and hence the cup to disc ratio increases. Thus the CDR is higher for glaucoma subjects than for normal subjects, leading to the differences in the respective fundus images. Most previous studies have addressed this issue by classifying images based on the cup to disc ratio. However, the CDR takes into consideration neither the diameter of the optic disc nor the focal changes in the neuroretinal rim. There is therefore a chance of misclassification: large discs, which tend to have a larger CDR but normal neuroretinal rims, are more likely to be classified as glaucomatous, while small discs with a small CDR are more likely to be classified as normal whether or not they actually have glaucoma. Severe visual damage occurs when the cup region is very close to the inferior or superior quadrant. With the incorporation of disc size and rim width into the clinical grading of the disc, the rim to disc ratio reduces misclassification and identifies the various stages of glaucoma by detecting rim loss in each quadrant. This feature also correlates strongly with the degree of glaucomatous visual

field damage. The estimated ratio of neuroretinal rim area to disc diameter, based on the detected disc and cup boundaries, showed good consistency when compared with the ground truth results. Experimental results showed that the rim to disc ratio offers an excellent approach to distinguish between normal and glaucomatous eyes, and it outperformed the CDR. By categorizing discs as small, medium or large, the expectation of rim thickness is adjusted. Progression of the disease can be identified using these structural features, which can be used in a Decision Support System (DSS) for the detection of glaucomatous optic nerves.


More information

CHAPTER 6 DETECTION OF MASS USING NOVEL SEGMENTATION, GLCM AND NEURAL NETWORKS

CHAPTER 6 DETECTION OF MASS USING NOVEL SEGMENTATION, GLCM AND NEURAL NETWORKS 130 CHAPTER 6 DETECTION OF MASS USING NOVEL SEGMENTATION, GLCM AND NEURAL NETWORKS A mass is defined as a space-occupying lesion seen in more than one projection and it is described by its shapes and margin

More information

Rasipuram , India. University, Mettu, Ethipio.

Rasipuram , India. University, Mettu, Ethipio. ISSN: 0975-766X CODEN: IJPTFI Available Online through Research Article www.ijptonline.com DETECTION OF GLAUCOMA USING NOVEL FEATURES OF OPTICAL COHERENCE TOMOGRAPHY IMAGE T. R. Ganesh Babu 1, Pattanaik

More information

Computer-aided Diagnosis of Retinopathy of Prematurity

Computer-aided Diagnosis of Retinopathy of Prematurity Computer-aided Diagnosis of Retinopathy of Prematurity Rangaraj M. Rangayyan, Faraz Oloumi, and Anna L. Ells Department of Electrical and Computer Engineering, University of Calgary Alberta Children's

More information

CHAPTER 6 MODIFIED FUZZY TECHNIQUES BASED IMAGE SEGMENTATION

CHAPTER 6 MODIFIED FUZZY TECHNIQUES BASED IMAGE SEGMENTATION CHAPTER 6 MODIFIED FUZZY TECHNIQUES BASED IMAGE SEGMENTATION 6.1 INTRODUCTION Fuzzy logic based computational techniques are becoming increasingly important in the medical image analysis arena. The significant

More information

Blood vessel tracking in retinal images

Blood vessel tracking in retinal images Y. Jiang, A. Bainbridge-Smith, A. B. Morris, Blood Vessel Tracking in Retinal Images, Proceedings of Image and Vision Computing New Zealand 2007, pp. 126 131, Hamilton, New Zealand, December 2007. Blood

More information

Extracting Layers and Recognizing Features for Automatic Map Understanding. Yao-Yi Chiang

Extracting Layers and Recognizing Features for Automatic Map Understanding. Yao-Yi Chiang Extracting Layers and Recognizing Features for Automatic Map Understanding Yao-Yi Chiang 0 Outline Introduction/ Problem Motivation Map Processing Overview Map Decomposition Feature Recognition Discussion

More information

Automatic Graph-Based Method for Classification of Retinal Vascular Bifurcations and Crossovers

Automatic Graph-Based Method for Classification of Retinal Vascular Bifurcations and Crossovers 6th International Conference on Computer and Knowledge Engineering (ICCKE 2016), October 20-21 2016, Ferdowsi University of Mashhad Automatic Graph-Based Method for Classification of Retinal Vascular Bifurcations

More information

CHAPTER 1 Introduction 1. CHAPTER 2 Images, Sampling and Frequency Domain Processing 37

CHAPTER 1 Introduction 1. CHAPTER 2 Images, Sampling and Frequency Domain Processing 37 Extended Contents List Preface... xi About the authors... xvii CHAPTER 1 Introduction 1 1.1 Overview... 1 1.2 Human and Computer Vision... 2 1.3 The Human Vision System... 4 1.3.1 The Eye... 5 1.3.2 The

More information

EECS490: Digital Image Processing. Lecture #17

EECS490: Digital Image Processing. Lecture #17 Lecture #17 Morphology & set operations on images Structuring elements Erosion and dilation Opening and closing Morphological image processing, boundary extraction, region filling Connectivity: convex

More information

ENHANCED GLAUCOMA DETECTION ON RETINAL FUNDUS IMAGES THROUGH REGION GROWING ALGORITHM

ENHANCED GLAUCOMA DETECTION ON RETINAL FUNDUS IMAGES THROUGH REGION GROWING ALGORITHM ENHANCED GLAUCOMA DETECTION ON RETINAL FUNDUS IMAGES THROUGH REGION GROWING ALGORITHM S.Nancy Evangeline Mary*1, Prof. B.ShakthivelM.E.,* 1-Research Scholar, Dept. of Computer Science, PSV College of Engineering&

More information

CITS 4402 Computer Vision

CITS 4402 Computer Vision CITS 4402 Computer Vision A/Prof Ajmal Mian Adj/A/Prof Mehdi Ravanbakhsh, CEO at Mapizy (www.mapizy.com) and InFarm (www.infarm.io) Lecture 02 Binary Image Analysis Objectives Revision of image formation

More information

CHAPTER 3 BLOOD VESSEL SEGMENTATION

CHAPTER 3 BLOOD VESSEL SEGMENTATION 47 CHAPTER 3 BLOOD VESSEL SEGMENTATION Blood vessels are an internal part of the blood circulatory system and they provide the nutrients to all parts of the eye. In retinal image, blood vessel appears

More information

Morphological Image Processing

Morphological Image Processing Morphological Image Processing Morphology Identification, analysis, and description of the structure of the smallest unit of words Theory and technique for the analysis and processing of geometric structures

More information

SUSSMAN ALGORITHM BASED DATA MINING FOR GLAUCOMA DETECTION ON RETINAL FUNDUS IMAGES THROUGH ANN

SUSSMAN ALGORITHM BASED DATA MINING FOR GLAUCOMA DETECTION ON RETINAL FUNDUS IMAGES THROUGH ANN SUSSMAN ALGORITHM BASED DATA MINING FOR GLAUCOMA DETECTION ON RETINAL FUNDUS IMAGES THROUGH ANN S. Nancy Evangeline Mary *1, Prof. B. Shakthivel *2 Research Scholar, Dept. of Computer Science, PSV College

More information

Effective Optic Disc and Optic Cup Segmentation for Glaucoma Screening

Effective Optic Disc and Optic Cup Segmentation for Glaucoma Screening Effective Optic Disc and Optic Cup Segmentation for Glaucoma Screening Ms.Vedvati S Zankar1 and Mr.C.G.Patil2 1,2 Sinhgad Academy of Engineering, Kondhwa, Pune Abstract Glaucoma is a chronic eye disease

More information

Processing of binary images

Processing of binary images Binary Image Processing Tuesday, 14/02/2017 ntonis rgyros e-mail: argyros@csd.uoc.gr 1 Today From gray level to binary images Processing of binary images Mathematical morphology 2 Computer Vision, Spring

More information

doi: /

doi: / Yiting Xie ; Anthony P. Reeves; Single 3D cell segmentation from optical CT microscope images. Proc. SPIE 934, Medical Imaging 214: Image Processing, 9343B (March 21, 214); doi:1.1117/12.243852. (214)

More information

Extraction of Features from Fundus Images for Glaucoma Assessment

Extraction of Features from Fundus Images for Glaucoma Assessment Extraction of Features from Fundus Images for Glaucoma Assessment YIN FENGSHOU A thesis submitted in partial fulfillment for the degree of Master of Engineering Department of Electrical & Computer Engineering

More information

Part 3: Image Processing

Part 3: Image Processing Part 3: Image Processing Image Filtering and Segmentation Georgy Gimel farb COMPSCI 373 Computer Graphics and Image Processing 1 / 60 1 Image filtering 2 Median filtering 3 Mean filtering 4 Image segmentation

More information

Clustering Part 4 DBSCAN

Clustering Part 4 DBSCAN Clustering Part 4 Dr. Sanjay Ranka Professor Computer and Information Science and Engineering University of Florida, Gainesville DBSCAN DBSCAN is a density based clustering algorithm Density = number of

More information

Morphological Image Processing

Morphological Image Processing Morphological Image Processing Binary image processing In binary images, we conventionally take background as black (0) and foreground objects as white (1 or 255) Morphology Figure 4.1 objects on a conveyor

More information

Fusing Geometric and Appearance-based Features for Glaucoma Diagnosis

Fusing Geometric and Appearance-based Features for Glaucoma Diagnosis Fusing Geometric and Appearance-based Features for Glaucoma Diagnosis Kangrok Oh a Jooyoung Kim a Sangchul Yoon b Kyoung Yul Seo b a School of Electrical and Electronic Engineering, Yonsei University 50

More information

Lecture: Segmentation I FMAN30: Medical Image Analysis. Anders Heyden

Lecture: Segmentation I FMAN30: Medical Image Analysis. Anders Heyden Lecture: Segmentation I FMAN30: Medical Image Analysis Anders Heyden 2017-11-13 Content What is segmentation? Motivation Segmentation methods Contour-based Voxel/pixel-based Discussion What is segmentation?

More information

[Suryawanshi, 2(9): September, 2013] ISSN: Impact Factor: 1.852

[Suryawanshi, 2(9): September, 2013] ISSN: Impact Factor: 1.852 IJESRT INTERNATIONAL JOURNAL OF ENGINEERING SCIENCES & RESEARCH TECHNOLOGY An Approach to Glaucoma using Image Segmentation Techniques Mrs Preeti Kailas Suryawanshi Department of Electronics & Telecommunication,

More information

Prime Time (Factors and Multiples)

Prime Time (Factors and Multiples) CONFIDENCE LEVEL: Prime Time Knowledge Map for 6 th Grade Math Prime Time (Factors and Multiples). A factor is a whole numbers that is multiplied by another whole number to get a product. (Ex: x 5 = ;

More information

AN ADAPTIVE REGION GROWING SEGMENTATION FOR BLOOD VESSEL DETECTION FROM RETINAL IMAGES

AN ADAPTIVE REGION GROWING SEGMENTATION FOR BLOOD VESSEL DETECTION FROM RETINAL IMAGES AN ADAPTIVE REGION GROWING SEGMENTATION FOR BLOOD VESSEL DETECTION FROM RETINAL IMAGES Alauddin Bhuiyan, Baikunth Nath and Joselito Chua Computer Science and Software Engineering, The University of Melbourne,

More information

C E N T E R A T H O U S T O N S C H O O L of H E A L T H I N F O R M A T I O N S C I E N C E S. Image Operations II

C E N T E R A T H O U S T O N S C H O O L of H E A L T H I N F O R M A T I O N S C I E N C E S. Image Operations II T H E U N I V E R S I T Y of T E X A S H E A L T H S C I E N C E C E N T E R A T H O U S T O N S C H O O L of H E A L T H I N F O R M A T I O N S C I E N C E S Image Operations II For students of HI 5323

More information

Segmentation and Localization of Optic Disc using Feature Match and Medial Axis Detection in Retinal Images

Segmentation and Localization of Optic Disc using Feature Match and Medial Axis Detection in Retinal Images Biomedical & Pharmacology Journal Vol. 8(1), 391-397 (2015) Segmentation and Localization of Optic Disc using Feature Match and Medial Axis Detection in Retinal Images K. PRADHEPA, S.KARKUZHALI and D.MANIMEGALAI

More information

Image Processing. Bilkent University. CS554 Computer Vision Pinar Duygulu

Image Processing. Bilkent University. CS554 Computer Vision Pinar Duygulu Image Processing CS 554 Computer Vision Pinar Duygulu Bilkent University Today Image Formation Point and Blob Processing Binary Image Processing Readings: Gonzalez & Woods, Ch. 3 Slides are adapted from

More information

Image Analysis Lecture Segmentation. Idar Dyrdal

Image Analysis Lecture Segmentation. Idar Dyrdal Image Analysis Lecture 9.1 - Segmentation Idar Dyrdal Segmentation Image segmentation is the process of partitioning a digital image into multiple parts The goal is to divide the image into meaningful

More information

University of Florida CISE department Gator Engineering. Clustering Part 4

University of Florida CISE department Gator Engineering. Clustering Part 4 Clustering Part 4 Dr. Sanjay Ranka Professor Computer and Information Science and Engineering University of Florida, Gainesville DBSCAN DBSCAN is a density based clustering algorithm Density = number of

More information

Foundation. Scheme of Work. Year 9. September 2016 to July 2017

Foundation. Scheme of Work. Year 9. September 2016 to July 2017 Foundation Scheme of Work Year 9 September 06 to July 07 Assessments Students will be assessed by completing two tests (topic) each Half Term. These are to be recorded on Go Schools. There will not be

More information

Babu Madhav Institute of Information Technology Years Integrated M.Sc.(IT)(Semester - 7)

Babu Madhav Institute of Information Technology Years Integrated M.Sc.(IT)(Semester - 7) 5 Years Integrated M.Sc.(IT)(Semester - 7) 060010707 Digital Image Processing UNIT 1 Introduction to Image Processing Q: 1 Answer in short. 1. What is digital image? 1. Define pixel or picture element?

More information

CoE4TN4 Image Processing

CoE4TN4 Image Processing CoE4TN4 Image Processing Chapter 11 Image Representation & Description Image Representation & Description After an image is segmented into regions, the regions are represented and described in a form suitable

More information

Blood Microscopic Image Analysis for Acute Leukemia Detection

Blood Microscopic Image Analysis for Acute Leukemia Detection I J C T A, 9(9), 2016, pp. 3731-3735 International Science Press Blood Microscopic Image Analysis for Acute Leukemia Detection V. Renuga, J. Sivaraman, S. Vinuraj Kumar, S. Sathish, P. Padmapriya and R.

More information

Gene Clustering & Classification

Gene Clustering & Classification BINF, Introduction to Computational Biology Gene Clustering & Classification Young-Rae Cho Associate Professor Department of Computer Science Baylor University Overview Introduction to Gene Clustering

More information

Superpixel Classification based Optic Cup Segmentation

Superpixel Classification based Optic Cup Segmentation Superpixel Classification based Optic Cup Segmentation Jun Cheng 1,JiangLiu 1, Dacheng Tao 2,FengshouYin 1, Damon Wing Kee Wong 1,YanwuXu 1,andTienYinWong 3,4 1 Institute for Infocomm Research, Agency

More information

Digital Image Processing

Digital Image Processing Digital Image Processing 7. Color Transforms 15110191 Keuyhong Cho Non-linear Color Space Reflect human eye s characters 1) Use uniform color space 2) Set distance of color space has same ratio difference

More information

Detecting Wedge Shaped Defects in Polarimetric Images of the Retinal Nerve Fiber Layer.

Detecting Wedge Shaped Defects in Polarimetric Images of the Retinal Nerve Fiber Layer. Detecting Wedge Shaped Defects in Polarimetric Images of the Retinal Nerve Fiber Layer. Koen Vermeer 1, Frans Vos 1,3, Hans Lemij 2, and Albert Vossepoel 1 1 Pattern Recognition Group, Delft University

More information

Region-based Segmentation

Region-based Segmentation Region-based Segmentation Image Segmentation Group similar components (such as, pixels in an image, image frames in a video) to obtain a compact representation. Applications: Finding tumors, veins, etc.

More information

Glossary Common Core Curriculum Maps Math/Grade 6 Grade 8

Glossary Common Core Curriculum Maps Math/Grade 6 Grade 8 Glossary Common Core Curriculum Maps Math/Grade 6 Grade 8 Grade 6 Grade 8 absolute value Distance of a number (x) from zero on a number line. Because absolute value represents distance, the absolute value

More information

EPSRC Centre for Doctoral Training in Industrially Focused Mathematical Modelling

EPSRC Centre for Doctoral Training in Industrially Focused Mathematical Modelling EPSRC Centre for Doctoral Training in Industrially Focused Mathematical Modelling More Accurate Optical Measurements of the Cornea Raquel González Fariña Contents 1. Introduction... 2 Background... 2 2.

More information

Gopalakrishna Prabhu.K Department of Biomedical Engineering Manipal Institute of Technology Manipal University, Manipal, India

Gopalakrishna Prabhu.K Department of Biomedical Engineering Manipal Institute of Technology Manipal University, Manipal, India 00 International Journal of Computer Applications (0975 8887) Automatic Localization and Boundary Detection of Optic Disc Using Implicit Active Contours Siddalingaswamy P. C. Department of Computer science

More information

Mapping Common Core State Standard Clusters and. Ohio Grade Level Indicator. Grade 5 Mathematics

Mapping Common Core State Standard Clusters and. Ohio Grade Level Indicator. Grade 5 Mathematics Mapping Common Core State Clusters and Ohio s Grade Level Indicators: Grade 5 Mathematics Operations and Algebraic Thinking: Write and interpret numerical expressions. Operations and Algebraic Thinking:

More information

Edge and local feature detection - 2. Importance of edge detection in computer vision

Edge and local feature detection - 2. Importance of edge detection in computer vision Edge and local feature detection Gradient based edge detection Edge detection by function fitting Second derivative edge detectors Edge linking and the construction of the chain graph Edge and local feature

More information

Biomedical Image Analysis. Mathematical Morphology

Biomedical Image Analysis. Mathematical Morphology Biomedical Image Analysis Mathematical Morphology Contents: Foundation of Mathematical Morphology Structuring Elements Applications BMIA 15 V. Roth & P. Cattin 265 Foundations of Mathematical Morphology

More information

Introduction. Computer Vision & Digital Image Processing. Preview. Basic Concepts from Set Theory

Introduction. Computer Vision & Digital Image Processing. Preview. Basic Concepts from Set Theory Introduction Computer Vision & Digital Image Processing Morphological Image Processing I Morphology a branch of biology concerned with the form and structure of plants and animals Mathematical morphology

More information

Interactive Math Glossary Terms and Definitions

Interactive Math Glossary Terms and Definitions Terms and Definitions Absolute Value the magnitude of a number, or the distance from 0 on a real number line Addend any number or quantity being added addend + addend = sum Additive Property of Area the

More information

8 th Grade Pre Algebra Pacing Guide 1 st Nine Weeks

8 th Grade Pre Algebra Pacing Guide 1 st Nine Weeks 8 th Grade Pre Algebra Pacing Guide 1 st Nine Weeks MS Objective CCSS Standard I Can Statements Included in MS Framework + Included in Phase 1 infusion Included in Phase 2 infusion 1a. Define, classify,

More information

Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong)

Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong) Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong) References: [1] http://homepages.inf.ed.ac.uk/rbf/hipr2/index.htm [2] http://www.cs.wisc.edu/~dyer/cs540/notes/vision.html

More information

Chapter 11 Representation & Description

Chapter 11 Representation & Description Chain Codes Chain codes are used to represent a boundary by a connected sequence of straight-line segments of specified length and direction. The direction of each segment is coded by using a numbering

More information

Topic 6 Representation and Description

Topic 6 Representation and Description Topic 6 Representation and Description Background Segmentation divides the image into regions Each region should be represented and described in a form suitable for further processing/decision-making Representation

More information

Creating an Automated Blood Vessel. Diameter Tracking Tool

Creating an Automated Blood Vessel. Diameter Tracking Tool Medical Biophysics 3970Z 6 Week Project: Creating an Automated Blood Vessel Diameter Tracking Tool Peter McLachlan - 250068036 April 2, 2013 Introduction In order to meet the demands of tissues the body

More information

CHAPTER 6 PERCEPTUAL ORGANIZATION BASED ON TEMPORAL DYNAMICS

CHAPTER 6 PERCEPTUAL ORGANIZATION BASED ON TEMPORAL DYNAMICS CHAPTER 6 PERCEPTUAL ORGANIZATION BASED ON TEMPORAL DYNAMICS This chapter presents a computational model for perceptual organization. A figure-ground segregation network is proposed based on a novel boundary

More information

Carmen Alonso Montes 23rd-27th November 2015

Carmen Alonso Montes 23rd-27th November 2015 Practical Computer Vision: Theory & Applications 23rd-27th November 2015 Wrap up Today, we are here 2 Learned concepts Hough Transform Distance mapping Watershed Active contours 3 Contents Wrap up Object

More information

Topic 4 Image Segmentation

Topic 4 Image Segmentation Topic 4 Image Segmentation What is Segmentation? Why? Segmentation important contributing factor to the success of an automated image analysis process What is Image Analysis: Processing images to derive

More information

3.3 Optimizing Functions of Several Variables 3.4 Lagrange Multipliers

3.3 Optimizing Functions of Several Variables 3.4 Lagrange Multipliers 3.3 Optimizing Functions of Several Variables 3.4 Lagrange Multipliers Prof. Tesler Math 20C Fall 2018 Prof. Tesler 3.3 3.4 Optimization Math 20C / Fall 2018 1 / 56 Optimizing y = f (x) In Math 20A, we

More information

International Journal of Advance Engineering and Research Development. Applications of Set Theory in Digital Image Processing

International Journal of Advance Engineering and Research Development. Applications of Set Theory in Digital Image Processing Scientific Journal of Impact Factor (SJIF): 4.72 International Journal of Advance Engineering and Research Development Volume 4, Issue 11, November -2017 Applications of Set Theory in Digital Image Processing

More information

Digital Image Processing Fundamentals

Digital Image Processing Fundamentals Ioannis Pitas Digital Image Processing Fundamentals Chapter 7 Shape Description Answers to the Chapter Questions Thessaloniki 1998 Chapter 7: Shape description 7.1 Introduction 1. Why is invariance to

More information

Color and Shading. Color. Shapiro and Stockman, Chapter 6. Color and Machine Vision. Color and Perception

Color and Shading. Color. Shapiro and Stockman, Chapter 6. Color and Machine Vision. Color and Perception Color and Shading Color Shapiro and Stockman, Chapter 6 Color is an important factor for for human perception for object and material identification, even time of day. Color perception depends upon both

More information

Data Mining. 3.5 Lazy Learners (Instance-Based Learners) Fall Instructor: Dr. Masoud Yaghini. Lazy Learners

Data Mining. 3.5 Lazy Learners (Instance-Based Learners) Fall Instructor: Dr. Masoud Yaghini. Lazy Learners Data Mining 3.5 (Instance-Based Learners) Fall 2008 Instructor: Dr. Masoud Yaghini Outline Introduction k-nearest-neighbor Classifiers References Introduction Introduction Lazy vs. eager learning Eager

More information

PPKE-ITK. Lecture

PPKE-ITK. Lecture PPKE-ITK Lecture 6-7. 2017.10.24. 1 What is on the image? This is maybe the most important question we want to answer about an image. For a human observer it is a trivial task, for a machine it is still

More information

CHAPTER 6 IDENTIFICATION OF CLUSTERS USING VISUAL VALIDATION VAT ALGORITHM

CHAPTER 6 IDENTIFICATION OF CLUSTERS USING VISUAL VALIDATION VAT ALGORITHM 96 CHAPTER 6 IDENTIFICATION OF CLUSTERS USING VISUAL VALIDATION VAT ALGORITHM Clustering is the process of combining a set of relevant information in the same group. In this process KM algorithm plays

More information

Basic relations between pixels (Chapter 2)

Basic relations between pixels (Chapter 2) Basic relations between pixels (Chapter 2) Lecture 3 Basic Relationships Between Pixels Definitions: f(x,y): digital image Pixels: q, p (p,q f) A subset of pixels of f(x,y): S A typology of relations:

More information

CHAPTER 4 DETECTION OF DISEASES IN PLANT LEAF USING IMAGE SEGMENTATION

CHAPTER 4 DETECTION OF DISEASES IN PLANT LEAF USING IMAGE SEGMENTATION CHAPTER 4 DETECTION OF DISEASES IN PLANT LEAF USING IMAGE SEGMENTATION 4.1. Introduction Indian economy is highly dependent of agricultural productivity. Therefore, in field of agriculture, detection of

More information

Time Stamp Detection and Recognition in Video Frames

Time Stamp Detection and Recognition in Video Frames Time Stamp Detection and Recognition in Video Frames Nongluk Covavisaruch and Chetsada Saengpanit Department of Computer Engineering, Chulalongkorn University, Bangkok 10330, Thailand E-mail: nongluk.c@chula.ac.th

More information

PHY 222 Lab 11 Interference and Diffraction Patterns Investigating interference and diffraction of light waves

PHY 222 Lab 11 Interference and Diffraction Patterns Investigating interference and diffraction of light waves PHY 222 Lab 11 Interference and Diffraction Patterns Investigating interference and diffraction of light waves Print Your Name Print Your Partners' Names Instructions April 17, 2015 Before lab, read the

More information

Lecture #13. Point (pixel) transformations. Neighborhood processing. Color segmentation

Lecture #13. Point (pixel) transformations. Neighborhood processing. Color segmentation Lecture #13 Point (pixel) transformations Color modification Color slicing Device independent color Color balancing Neighborhood processing Smoothing Sharpening Color segmentation Color Transformations

More information

Integers & Absolute Value Properties of Addition Add Integers Subtract Integers. Add & Subtract Like Fractions Add & Subtract Unlike Fractions

Integers & Absolute Value Properties of Addition Add Integers Subtract Integers. Add & Subtract Like Fractions Add & Subtract Unlike Fractions Unit 1: Rational Numbers & Exponents M07.A-N & M08.A-N, M08.B-E Essential Questions Standards Content Skills Vocabulary What happens when you add, subtract, multiply and divide integers? What happens when

More information

Visible Color. 700 (red) 580 (yellow) 520 (green)

Visible Color. 700 (red) 580 (yellow) 520 (green) Color Theory Physical Color Visible energy - small portion of the electro-magnetic spectrum Pure monochromatic colors are found at wavelengths between 380nm (violet) and 780nm (red) 380 780 Color Theory

More information

Chapter 10: Image Segmentation. Office room : 841

Chapter 10: Image Segmentation.   Office room : 841 Chapter 10: Image Segmentation Lecturer: Jianbing Shen Email : shenjianbing@bit.edu.cn Office room : 841 http://cs.bit.edu.cn/shenjianbing cn/shenjianbing Contents Definition and methods classification

More information

An Efficient Character Segmentation Based on VNP Algorithm

An Efficient Character Segmentation Based on VNP Algorithm Research Journal of Applied Sciences, Engineering and Technology 4(24): 5438-5442, 2012 ISSN: 2040-7467 Maxwell Scientific organization, 2012 Submitted: March 18, 2012 Accepted: April 14, 2012 Published:

More information

Mathematical Morphology and Distance Transforms. Robin Strand

Mathematical Morphology and Distance Transforms. Robin Strand Mathematical Morphology and Distance Transforms Robin Strand robin.strand@it.uu.se Morphology Form and structure Mathematical framework used for: Pre-processing Noise filtering, shape simplification,...

More information

Optical Verification of Mouse Event Accuracy

Optical Verification of Mouse Event Accuracy Optical Verification of Mouse Event Accuracy Denis Barberena Email: denisb@stanford.edu Mohammad Imam Email: noahi@stanford.edu Ilyas Patanam Email: ilyasp@stanford.edu Abstract Optical verification of

More information

Physical Color. Color Theory - Center for Graphics and Geometric Computing, Technion 2

Physical Color. Color Theory - Center for Graphics and Geometric Computing, Technion 2 Color Theory Physical Color Visible energy - small portion of the electro-magnetic spectrum Pure monochromatic colors are found at wavelengths between 380nm (violet) and 780nm (red) 380 780 Color Theory

More information

Data Term. Michael Bleyer LVA Stereo Vision

Data Term. Michael Bleyer LVA Stereo Vision Data Term Michael Bleyer LVA Stereo Vision What happened last time? We have looked at our energy function: E ( D) = m( p, dp) + p I < p, q > N s( p, q) We have learned about an optimization algorithm that

More information

The Detection of Faces in Color Images: EE368 Project Report

The Detection of Faces in Color Images: EE368 Project Report The Detection of Faces in Color Images: EE368 Project Report Angela Chau, Ezinne Oji, Jeff Walters Dept. of Electrical Engineering Stanford University Stanford, CA 9435 angichau,ezinne,jwalt@stanford.edu

More information

MET71 COMPUTER AIDED DESIGN

MET71 COMPUTER AIDED DESIGN UNIT - II BRESENHAM S ALGORITHM BRESENHAM S LINE ALGORITHM Bresenham s algorithm enables the selection of optimum raster locations to represent a straight line. In this algorithm either pixels along X

More information

Computer Vision 2. SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung. Computer Vision 2 Dr. Benjamin Guthier

Computer Vision 2. SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung. Computer Vision 2 Dr. Benjamin Guthier Computer Vision 2 SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung Computer Vision 2 Dr. Benjamin Guthier 1. IMAGE PROCESSING Computer Vision 2 Dr. Benjamin Guthier Content of this Chapter Non-linear

More information

Smarter Balanced Vocabulary (from the SBAC test/item specifications)
