Registration and Fusion of Retinal Images: An Evaluation Study


IEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. 22, NO. 5, MAY 2003

Registration and Fusion of Retinal Images: An Evaluation Study

France Laliberté, Langis Gagnon*, Member, IEEE, and Yunlong Sheng

Abstract: We present the results of a study on the application of registration and pixel-level fusion techniques to retinal images. The images are of different modalities (color, fluorescein angiogram), different resolutions, and taken at different times (from a few minutes during an angiography examination to several years between two examinations). We propose a new registration method based on global point mapping, with blood vessel bifurcations as control points and a search for control point matches that uses local structural information of the retinal network. Three transformation types (similarity, affine, and second-order polynomial) are evaluated on each image pair. Fourteen pixel-level fusion techniques have been tested and classified according to their qualitative and quantitative performance. Four quantitative fusion performance criteria are used to evaluate the gain obtained with the grayscale fusion.

Index Terms: Image fusion, image registration, medical imaging, ophthalmology.

I. INTRODUCTION

THE aim of this paper is to present the results of a study on 1) the development of a robust image registration algorithm and 2) the exploration of gray and color pixel-level fusion techniques for human retinal images of various types (i.e., with respect to color or angiographic modalities, resolution, disease, and image quality). Although image fusion is a standard topic in medical imaging, it has not been studied thoroughly for ophthalmic applications. Much work still has to be done in order to evaluate the potential benefits of the technique in ophthalmology.
To our knowledge, only one paper [1] presents a feature-based fusion method, in which the coarse contours of anatomical and pathological features (vessels, fovea, optic disc, scotoma, subretinal leakage) are extracted from scanning laser ophthalmoscope images and superposed on the same image. Ophthalmology is one of the many medical areas in which diagnosis implies the manipulation and analysis of a large number of images, even for a single patient. For instance, mass screening of diseases like diabetic retinopathy can involve a follow-up based on images acquired from different modalities (color, angiogram) over many years. Large parts of the image analysis process are still done by hand, which contributes to increasing the workload of ophthalmologists. Therefore, any automatic image manipulation procedure that could help reduce that workload, or even suggest new ways to present the visual information, is of potential interest. For instance, temporal registration can help to follow disease evolution, while multimodal registration can help to accurately identify the type of lesions.

Manuscript received May ; revised November 26, This work was supported in part by the Computer Research Institute of Montreal (CRIM), in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada, and in part by the Fonds québécois de la recherche sur la nature et les technologies. The Associate Editor responsible for coordinating the review of this paper and recommending its publication was M. W. Vannier. Asterisk indicates corresponding author. F. Laliberté is with CRIM, Montréal, QC H3A 1B9, Canada, and also with the Département de Physique, Université Laval, Laval, QC G1K 7P4, Canada. *L. Gagnon is with CRIM, rue Sherbrooke West, Suite 100, Montréal, QC H3A 1B9, Canada (lgagnon@crim.ca). Y. Sheng is with the Département de Physique, Université Laval, Laval, QC G1K 7P4, Canada. Digital Object Identifier /TMI
On the other hand, image fusion can be used to combine or enhance pathological information to facilitate the diagnosis. Our study involves the development of a robust registration method in order to cope with the presence of various types of lesions associated with retinal diseases, as well as the high variation in visual quality between images. The method is essentially a global point mapping one, but with a particular point-matching search that takes into account the local structural information of the retinal network. Parameters of three transformation types are tested and a quantitative performance measure is given. Other registration methods have been proposed in the literature for retinal images. Correlation methods have been tested for unimodal ophthalmic image registration (see [2]-[4]), but they cannot deal with the local intensity reversals of multimodal registration ([5] and [6]). Since ophthalmic image deformations are mainly global, elastic model methods are unnecessary. Fourier methods are not suited because rotation and scaling can be present. In this study, a point-matching method is developed for temporal and multimodal registration of ophthalmic images. A point-matching method has already been proposed for temporal and multimodal registration of mydriatic color images and fluorescein angiograms of the retina [7]. That method seems to give good results, but ours does not require the assumption of a Gaussian-shaped vessel intensity profile, which is inappropriate for low-resolution optical images [8], and is much easier to implement. Other control point detection methods proposed in the literature include edge detection [9] and color vessel reconstruction [10]. These methods have at least some drawbacks for our application, for example, coarse bifurcation point location or the requirement for color information. Fourteen pixel-level fusion techniques are tested: 12 grayscale [11] and two false color fusion methods ([12] and [13]).
All have their qualitative advantages and drawbacks depending on the underlying medical context (application, disease type, subjective color appreciation, etc.). It is not our intention to identify the best method based on those criteria. However, we use some quantitative criteria (standard deviation, entropy, cross-entropy, and spatial frequency) that offer interesting performance indications for the grayscale fusion

methods. Our tests are performed on a large set of image pairs of different modalities (color, fluorescein angiogram), different resolutions, and taken at different times (from a few minutes during an angiography examination to several years between two examinations).

The paper is organized as follows. Section II presents the data set used. Section III describes the registration technique we have developed and tested. Section IV describes the image fusion methods that have been selected and compared in this study. In Section V, we discuss the results obtained. Finally, conclusions and possible extensions of this work are given in the last section.

TABLE I: Technical characteristics of the five data sets used.

II. DATA SET

Retinal images can be obtained with a fundus camera or a scanning laser ophthalmoscope. Fundus cameras have been used for this study. Images can be acquired under many different conditions. First, the pupil can be dilated with mydriatic eye drops. Second, to enhance the blood vessel appearance, a dye (sodium fluorescein) can be injected into the patient's arm to produce angiograms. These are acquired under blue light because the dye has an absorption peak between 465 and 490 nm. Since the dye has an emission peak between 520 and 530 nm, a filter is placed in front of the camera to keep only the dye contribution. Images are acquired from about 10 s to 15 min after the injection, as the dye circulates through the retinal arteries, capillaries, and veins and is progressively eliminated from the vasculature, while staining the optic disc and lesions. Finally, images can be taken on film and digitized, or acquired directly with a digital camera.
Five data sets, described in Table I, covering many eye diseases (diabetic retinopathy, age-related macular degeneration, anterior ischemic optic neuropathy, retinal vein occlusion, choroidal neovascular membrane, cystoid macular edema, histoplasmosis retinitis, telangiectasia, cytomegalovirus retinitis), are used in this study. The first three sets provide 12 image pairs, the fourth set provides 42 image pairs, and the last two sets provide 16 image pairs. For the color images, only the green band is used because it has the best contrast. Angiograms are very different from green-band images: they have a better contrast, and some features (blood vessels, microaneurysms) are intensity reversed. They are useful to detect leaks and other artery abnormalities, occluded capillaries, macular edema, microaneurysms, and neovascularization. Color images are useful to detect exudates and hemorrhages. Acquiring angiograms at different phases is useful to follow the dye progression and staining, which indicate the state of the vasculature and lesions. Angiograms taken at different phases are also very different: vessels go from black to white and eventually back to black. It is important to say that our image data set has a very large variability in terms of fundus disease and image quality (see all figures in the paper for specific examples). Some image pairs are of such poor quality regarding the visibility of the retinal network and/or the presence of large lesions (see Figs. 18 and 19 for examples) that we feel no retinal network-based registration technique can work well on them. Our first idea was to discard those pairs, as ophthalmologists do prior to performing their diagnosis. Such an image quality check is often a critical step in the development of a reliable computer-aided diagnostic system [8].
Nevertheless, we finally decided to keep those bad-quality image pairs in the data set in order to get a clear idea of the robustness and limit performance of our registration algorithm.

III. REGISTRATION

Registration methods can be divided into four groups: elastic model, Fourier, correlation, and point matching methods [5]. Here we describe the point matching method we have developed. In retinal imagery, image distortion comes from different sources: 1) change in patient sitting position (large horizontal translation); 2) change in chin cup position (smaller vertical translation); 3) head tilting and ocular torsion (rotation); 4) distance change between the eye and the camera (scaling); 5) the three-dimensional retinal surface (spherical distortion); and 6) inherent aberrations of the eye and camera optical systems. Four different transformation types have been used in the literature to register retinal images: translations ([3] and [4]), similarity transformation ([1], [2], and [14]), affine transformation ([6] and [7]), and second-order polynomial transformation ([15] and [16]). We propose to test the similarity, affine, and second-order polynomial transformation types to determine which is the best (we reject the transformation composed only of translations because it is visually evident that it is not sufficient to correct the distortions). Our registration algorithm has ten parameters, of which seven depend on the image resolution. The remaining three are the main free parameters.

A. Control Point Detection

A good point matching registration process requires that a sufficient number of corresponding control points be present in both images, uniformly distributed, and not affected by lesions. For these reasons, blood vessel bifurcation points are a natural choice. The control point detection involves two steps: retinal

vessel centerline detection [Fig. 1 (left)] followed by bifurcation point detection [Fig. 1 (right)].

Fig. 1. Retinal network (left) and bifurcation point (right) detection algorithm.

To extract the retinal vessel centerline, we first correct for nonuniform illumination by dividing the image by its median-filtered version (the kernel size is one of the resolution-dependent parameters of Table II). Second, a threshold is applied to binarize the image. In order to take into account the average intensity difference between the images, we do not use a fixed threshold but a Gaussian constant false-alarm rate (CFAR) one satisfying

t = μ + cσ, (1)

where μ and σ are the mean and standard deviation of the image intensity and c is a constant [17]. The constant c is fixed for a given data set, allowing t to vary from image to image as μ and σ vary. Third, small circular objects like microaneurysms are eliminated by opening with a linear structuring element, with orientations ranging from 0° to 180° in steps of 15° [18]. Fourth, vessel segments are connected by dilating with a circular structuring element. Fifth, the image is thinned with a thinning algorithm which guarantees a one-pixel width. Finally, a mask is used to hide the optic disc, because it contains too many interlaced blood vessels, which would not provide reliable bifurcation points. The optic disc mask is automatically created offline using a Hausdorff-based template matching algorithm [19]. On this one-pixel-wide binary retinal network, bifurcation points are pixels with three or four neighbors [Fig. 2(a)]. Groups of three adjacent bifurcation points are fused because they correspond to the same bifurcation point [Fig. 2(b)]. Bifurcation points due to small vessels (below a maximum length) are eliminated [Fig. 2(c)]. Bifurcation point pairs formed by the juxtaposition of two vessels are also fused (below a maximum distance) [Fig. 2(d)].
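The CFAR binarization step in (1) can be sketched in a few lines of numpy. The toy image and the polarity convention (bright structures kept; dark vessels would require inverting the image first) are our assumptions, not details given in the paper:

```python
import numpy as np

def cfar_threshold(image, c):
    """Gaussian-CFAR binarization: keep pixels above t = mu + c*sigma.

    c is fixed for a data set; the threshold t still adapts to each
    image because mu and sigma are recomputed per image.
    """
    t = image.mean() + c * image.std()
    return image > t

# toy example: one bright "vessel" row on a dark background
img = np.zeros((5, 5))
img[2, :] = 10.0
mask = cfar_threshold(img, c=1.0)   # keeps only the bright row
```

Because the threshold is expressed in units of the image's own standard deviation, the same constant c yields a comparable false-alarm rate across images of different average brightness.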
Finally, the angles of the vessels surrounding each bifurcation point are obtained by computing the intersections between the vessels and a circle of fixed diameter centered on each point [Fig. 2(e)] [7].

Fig. 2. Bifurcation point detection steps: (a) identification of pixels with three or four neighbors, (b) fusion of the groups of three adjacent points, (c) elimination of the points due to small vessels, (d) fusion of the point pairs formed by the juxtaposition of two vessels, and (e) computation of the angles of the vessels surrounding each bifurcation point.

B. Point Matching and Transformation

In an automatic registration procedure, one does not want to impose which of the images is to be used as the reference. The algorithm proposed in [7] for ophthalmic images provides a registration result that differs according to which image is the reference. Our matching procedure (Fig. 3) gives results that are independent of the choice of the reference image. Each bifurcation point in image 1 is linked to the bifurcation points in image 2 that are located within a given distance, have the same number of surrounding vessels, and differ in angle by less than a given threshold [20]. Bifurcation points without matches are eliminated. We then use the following relaxation method to eliminate matches while estimating the transformation parameters. We have the choice of using the similarity, affine, or second-order polynomial transformation in the matching process. We have tried all of them and found that the affine transformation gives the best results. Affine parameters are computed in the two directions (to register image 1 to image 2 and vice versa) and applied to the appropriate bifurcation points. A root-mean-square error (RMSE)

between the transformed and the reference points is computed for each image, and the average RMSE is then computed. This is followed by an iterative process in which: 1) each initial match is removed in turn; 2) the affine parameters are recalculated; and 3) the RMSE is reestimated; then 4) the match whose removal gives the smallest average RMSE is eliminated. Initial matches are eliminated in this way until the average RMSE is smaller than a threshold or only three matches are left (the minimum number of matches needed to calculate an affine transformation). After this step, if some bifurcation points have more than one match, the worst is eliminated until only one is left. If there are more matches than needed to find an exact solution, the least-mean-square method is used to estimate the transformation parameters.

Fig. 3. Flow chart of our matching algorithm.

TABLE II: Parameter values used for two image resolutions.

Fig. 4. Fusion architecture of the MIT method. Step 1 is a within-band contrast enhancement and normalization. Step 2 is a between-band fusion.
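The leave-one-out match-elimination loop can be sketched as follows. This is a simplified single-direction version (the paper fits the affine transform in both directions and averages the RMSEs); the function names and the synthetic data are ours:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine fit dst ~ [src 1] @ P from matched N x 2 point arrays."""
    m = np.hstack([src, np.ones((len(src), 1))])        # rows [x y 1]
    params, *_ = np.linalg.lstsq(m, dst, rcond=None)    # 3 x 2 parameter matrix
    return params

def rmse(src, dst, params):
    pred = np.hstack([src, np.ones((len(src), 1))]) @ params
    return np.sqrt(np.mean(np.sum((pred - dst) ** 2, axis=1)))

def prune_matches(src, dst, tol=1.0):
    """Repeatedly drop the match whose removal most lowers the RMSE,
    until the RMSE falls below tol or only three matches remain."""
    idx = list(range(len(src)))
    while len(idx) > 3:
        if rmse(src[idx], dst[idx], fit_affine(src[idx], dst[idx])) < tol:
            break
        trials = []
        for i in idx:
            rest = [j for j in idx if j != i]
            trials.append((rmse(src[rest], dst[rest], fit_affine(src[rest], dst[rest])), i))
        _, worst = min(trials)      # match whose removal gives the smallest RMSE
        idx.remove(worst)
    return idx

# synthetic bifurcation points under a known affine map, plus one bad match
src = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.], [5., 2.], [3., 7.]])
dst = src @ np.array([[1.1, 0.0], [0.0, 0.9]]) + np.array([5.0, -3.0])
dst[0] += np.array([30.0, 40.0])            # corrupt the first match
kept = prune_matches(src, dst, tol=0.5)     # the outlier match is discarded
```

With the outlier removed, the remaining five matches fit the true affine transform almost exactly, so the loop stops at the RMSE tolerance rather than at the three-match floor.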

Fig. 5. Example of color nonmydriatic (top-left) and reversed angiographic (top-right) images, retinal networks, and control points (circles).

Fig. 6. Color (red) and angiographic registered retinal networks of Fig. 5 with no (top-left), similarity (top-right), affine (bottom-left), and polynomial (bottom-right) transformation.

Fig. 7. Checkerboard images showing the registration of Fig. 5 with no (top-left), similarity (top-right), affine (bottom-left), and polynomial (bottom-right) transformation.

We will now discuss the choice of the values for the ten parameters of the registration algorithm. The median filter size for the illumination correction must be large enough to eliminate the retinal structures and yield an average illumination image. The Gaussian-CFAR threshold constant in (1) is estimated with the following procedure: we first compute the threshold for each of the N images in a data set, for each candidate value of the constant. To determine which value must be kept, we compare, for a certain number of images, the retinal network detected by the algorithm for the different candidate values with the retinal network detected manually. The comparison criterion is the percentage of good vessels detected (true positives) minus the percentage of bad vessels detected (false positives). The length of the opening linear structuring element must be greater than the diameter of the largest circular feature to be removed (the largest microaneurysms have a diameter of 125 μm). The size of the dilating circular structuring element must be large enough to connect the vessel segments. The other parameters (small branch length, fusion distance, orientation circle diameter, search distance, orientation difference, RMSE threshold) have all been determined experimentally from a subset of the first three data sets. The values of all these parameters are given in Table II for two image data sets.
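The comparison criterion used above (percentage of good vessels detected minus percentage of bad vessels detected, against a manually traced network) can be sketched on binary masks. The exact pixel-level definitions of the two percentages are our assumption, since the paper does not spell them out:

```python
import numpy as np

def network_score(auto, manual):
    """True-positive percentage (manual network pixels found by the
    algorithm) minus false-positive percentage (algorithm pixels that
    are absent from the manual network)."""
    auto = auto.astype(bool)
    manual = manual.astype(bool)
    tp = 100.0 * (auto & manual).sum() / manual.sum()
    fp = 100.0 * (auto & ~manual).sum() / auto.sum()
    return tp - fp

manual = np.zeros((5, 5), dtype=bool)
manual[2, :] = True                  # hand-traced vessel centerline
perfect = manual.copy()              # ideal detection
noisy = manual.copy()
noisy[0, 0] = True                   # one spurious detection
```

A perfect detection scores 100; each spurious pixel lowers the score, so the candidate threshold constant maximizing this score balances vessel coverage against false detections.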
C. Image Registration Performance Assessment

In the fundus image literature, a visual performance assessment of registration is often done by creating a checkerboard of the reference and transformed images [6] or by superposing transformed extracted vessels on the reference image [14]. Visual evaluation is useful but may be insufficient, and it is too slow for comparing the registration performance of different transformation types. We therefore developed a quantitative criterion. A good registration is one with a good superposition of the blood vessel centerlines. However, the retinal networks may be slightly different because the source images are different or the blood vessel detection is incomplete. To minimize this effect, the retinal network with the fewest pixels is used as the reference. Superposition is defined as the presence of another network pixel within a fixed-size window centered on a reference network pixel. A window of size one corresponds to the true superposition definition. The larger the window is, the

larger the accepted registration error. We use a 3 × 3 window. The superposition percentage is our quantitative criterion of registration performance. Of course, a 100% superposition can never be reached in practice, because that would require all pixels of the reference retinal network to be present in the other network and to lie within a one-pixel distance.

TABLE III: Results for the 45 image pairs: (1) number of control points, (2) number of matches, (3) registration performance (in %) without transformation, (4) with similarity, (5) affine, and (6) polynomial transformation, (7) fusion performance for the SIDWT method evaluated with the standard deviation, (8) entropy, (9) cross-entropy, and (10) spatial frequency criteria.

IV. FUSION

A. Methods

Twelve classical grayscale image fusion methods have been implemented in a Matlab toolbox [11]. We refer to [11] for details about those methods; here, we only give a brief description. The classical methods can be classified into four groups: linear, nonlinear, image pyramid, and wavelet transform methods. Linear methods [average, principal component analysis (PCA)] consist of a weighted superposition of the source images. In nonlinear methods (select minimum, select maximum), nonlinear operators are applied that select the minimum- or maximum-intensity pixel between the images. In image pyramid methods [Laplacian, filter-subtract-decimate (FSD), ratio, contrast, gradient, morphological], each source image is decomposed into a pyramid representing the edge information at different resolutions, and the fused image is obtained by combining the two pyramids. Wavelet transform methods [discrete wavelet transform (DWT) with Daubechies

Fig. 8. Fused images of Fig. 5 with (from top to bottom and left to right) the average, PCA, select minimum, select maximum, Laplacian, FSD, ratio, contrast, gradient, and morphological pyramid, DWT, and SIDWT methods.

wavelets] are similar to image pyramid methods but constitute nonredundant image representations.

TABLE IV: Grayscale fusion methods classified in decreasing performance order for each quantitative evaluation criterion.

For the multiresolution methods, one has to choose the number of decomposition levels (1 to 7), the high-pass coefficient combination method (max, salience/match measure [21], or consistency check [22]), the area used for the high-pass coefficient combination (3, 5, 7, or 9), and the low-pass coefficient combination method (one of the source images or the average of both). We tested these 12 grayscale fusion methods as well as three additional approaches that generate color fused images. The first color approach consists of adding the red and blue bands to the grayscale fused image. The second and third are two recently proposed false color image fusion methods: the so-called Netherlands Organisation for Applied Scientific Research (TNO) and Massachusetts Institute of Technology (MIT) methods ([12] and [13]). The TNO method is a false color mapping of the unique component of each source image, where the unique component of the first image is that source image minus the common component of both images. The MIT method is based on a color perception model of the human visual system. Many different mappings have been proposed in the past according to the application. We choose the one proposed in [23] and illustrated in Fig. 4. This mapping is appropriate when fusion involves one image with a better contrast or a more natural appearance, as is the case for temporal and multimodal fusion. As a result, the better image is preserved by its mapping to the green band. Step 1 in Fig. 4 is a within-band contrast enhancement and normalization. Step 2 is a between-band fusion. The concentric circles in Fig.
4 represent a convolution operator defined by (2)-(4), in which the value of each pixel of the processed image depends on several constants and on Gaussian weighted averaging masks. The small circles with the plus sign in Fig. 4 represent the excitatory center. The large circles with the minus sign represent the inhibitory surround. Arrows indicate which image feeds the center and which image feeds the surround of the operator. Each of the four operators in Fig. 4 is characterized by its own center and surround inputs; the inputs of the two right-hand operators are the output images of the upper-left and lower-left operators, respectively. We use the parameter values suggested in [13].

B. Image Fusion Performance Assessment

In the literature, almost all image fusion evaluations are done qualitatively because of the user perception factor. This is particularly obvious for color image fusion, since it is not easy to find a criterion to evaluate color information. However, there are some quantitative criteria that can be used [24]. Some need an ideal fused image (RMSE, mutual information, difference entropy) while others do not (standard deviation, entropy, cross-entropy, spatial frequency). We are restricted to the second group in the current study. The standard deviation and the entropy measure, respectively, the contrast and the information content of an image. The cross-entropy measures the similarity in information content between the source and the fused images. We use these criteria to evaluate the 12 grayscale fusion methods. We also evaluate the false color fusion methods qualitatively on the basis of the brilliance contrast, the presence of complementary features without attenuation, the enhancement of common features, and the color contrast.

V. RESULTS

A. Registration

The registration algorithm has been tested on the 70 image pairs described in Section II.
Among those image pairs, 9 have been used as a training set to determine some of the algorithm

parameter values. From now on, the results in parentheses are those including this training set. The registration succeeded on 36 (45) image pairs. Results are detailed in Table III. Fig. 5 gives an example of a pair of source images along with the resulting thinned retinal networks and the detected control points. On average for each image pair, (90 ± 21) control points were detected, leading to 13 ± 7 (14 ± 7) good matches. The registration results for the images in Fig. 5 are shown in Fig. 6 (reference and transformed extracted vessels) and Fig. 7 (checkerboard of reference and transformed image). On average, the registration performance is 22 ± 6% (20 ± 7%) with no transformation, 54 ± 18% (52 ± 19%) with the similarity transformation, 56 ± 19% (54 ± 19%) with the affine transformation, and 55 ± 18% (54 ± 19%) with the polynomial transformation. We recall that to obtain a 100% score, all pixels of the reference retinal network must be present in the other network (which is highly unlikely) and lie within a one-pixel distance, which is not possible in practice. So the registration improves the image alignment by 32 ± 18% (32 ± 18%) (similarity), 34 ± 18% (34 ± 18%) (affine), and 34 ± 18% (35 ± 18%) (polynomial). This indicates that, on average, the three transformation types are equally good. The affine transformation is significantly (difference greater than ) better than the similarity one in 10% (10%) of the cases and vice versa in 8% (7%) of the cases. The polynomial transformation is better than the affine one in 8% (9%) of the cases and vice versa in 10% (10%) of the cases. The polynomial transformation is better than the similarity one in 10% (9%) of the cases and vice versa in 11% (11%) of the cases. Although the three transformation types are equally good on average, there is a significant number of cases [19% (19%)] in which one transformation type is better than another.

Fig. 9. Fused images of Fig. 5 with the SIDWT, red and blue bands added (top), TNO (middle), and MIT (bottom) methods.

Fig. 10. Examples of angiograms taken at different stages and a color image (bottom-right) of an eye with nonproliferative diabetic retinopathy.

B. Fusion

The 14 fusion algorithms have been tested on the 45 registered image pairs. For each pair, we select the transformation that gives the best registration performance. For the multiresolution methods, we obtain the best results with seven decomposition levels, the salience/match measure for high-pass coefficient combination, a value of three for the high-pass coefficient combination area, and the source image average for the low-pass coefficient combination (see [11] for details). Fig. 8 shows the fusion of the image pair of Fig. 5 for the 12 grayscale fusion methods. The 12 grayscale fusion methods are evaluated with the four quantitative criteria mentioned in Section IV-B. Table IV shows the classification of the methods in decreasing order of performance for each criterion. The classification varies between the criteria because they do not measure the same image characteristics. If we classify the methods according to the sum of their

order of performance for the four criteria, we obtain: Laplacian, shift-invariant discrete wavelet transform (SIDWT), contrast, morphological, DWT, PCA, FSD, select maximum, select minimum, ratio, gradient, and average. For example, for the SIDWT method, the average standard deviation is for the source images and for the fused images; thus, the contrast is improved by 4 ± 4. The average entropy is for the source images and for the fused images; thus, the information content is improved by . The average cross-entropy is for the fused images. The average spatial frequency is for the source images and 19 ± 7 for the fused images; thus, the spatial frequency is improved by 4 ± 2. Fig. 9 shows the color fusion of the same image pair. The fused images gather the exudates (yellow lesions) of the color image and the microaneurysms (black dot lesions) of the angiogram. The R and B band addition gives a realistic yellow color to the optic disc and exudates. The TNO-fused image shows in red what comes from the color image, in green what comes from the angiogram, and in blue the difference between the two images. The MIT-fused image also shows in red what comes from the color image, in blue what comes from the angiogram, and in green the normalized and contrast-enhanced angiogram.

Fig. 11. Fusion of images 1, 2, and 3 of Fig. 10 (top: 1 and 2; middle: 1 and 3; bottom: 2 and 3) with the SIDWT (left), TNO (middle), and MIT (right) methods.

Fig. 12. Fusion of images 3 and 4 of Fig. 10 with the SIDWT (top-left), R and B bands added (top-right), TNO (bottom-left), and MIT (bottom-right) methods.

Fig. 13. Examples of angiograms taken at different stages and a color image (bottom-right) of an eye with cystoid macular edema.
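The no-reference criteria used above can be sketched with standard definitions (histogram-based entropy and cross-entropy, and the row/column first-difference spatial frequency); the exact formulations used in [24] may differ in detail:

```python
import numpy as np

def entropy(img, bins=256, rng=(0, 256)):
    """Shannon entropy (bits) of the grayscale histogram: information content."""
    p, _ = np.histogram(img, bins=bins, range=rng)
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def cross_entropy(src, fused, bins=256, rng=(0, 256)):
    """Similarity of information content between a source and the fused
    image (Kullback-Leibler form here; lower means more similar)."""
    eps = 1e-12
    p, _ = np.histogram(src, bins=bins, range=rng)
    q, _ = np.histogram(fused, bins=bins, range=rng)
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    return np.sum(p * np.log2(p / q))

def spatial_frequency(img):
    """SF = sqrt(RF^2 + CF^2) from row and column first differences."""
    img = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)

flat = np.zeros((8, 8))                                    # no information, no detail
checker = (np.indices((8, 8)).sum(axis=0) % 2) * 255.0     # two gray levels, maximal detail
```

A constant image has zero entropy and zero spatial frequency, while the two-level checkerboard has an entropy of exactly one bit and the largest spatial frequency achievable with values 0 and 255, which matches the intuition that these criteria reward contrast and detail in the fused result.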

Fig. 14. Fusion of images 1, 2, and 3 of Fig. 13 (top: 1 and 2; middle: 1 and 3; bottom: 2 and 3) with the SIDWT (left), TNO (middle), and MIT (right) methods.

Fig. 15. Fusion of images 3 and 4 of Fig. 13 with the SIDWT (top-left), R and B bands added (top-right), TNO (bottom-left), and MIT (bottom-right) methods.

Fig. 10 shows four images of an eye with nonproliferative diabetic retinopathy from the fourth data set: angiograms taken 17, 25, and 640 s after the injection, plus a color image. Fusion results for the angiograms are shown in Fig. 11; the middle and bottom rows show the fusion results obtained, respectively, with the TNO and MIT methods. From left to right, we have the fusion of the first and second, first and third, and second and third angiograms. In the first case, we see the progression of the dye by the laminar filling of the veins. The second case allows us to see the progression of the dye and, through the evident staining, the lesions very well. The third case is the one in which the lesions are the best defined. Fig. 12 shows the fusion of the third angiogram and the color image with the SIDWT (without and with the R and B bands added), TNO, and MIT methods. Figs. 13-15 are the equivalent of the three former figures but for an eye with cystoid macular edema imaged at 18, 22, and 324 s after the injection. Similarly, Figs. 16 and 17 show the results for an eye with age-related macular degeneration. We can see in these figures that the TNO method provides a better color contrast than the MIT one.

VI. CONCLUSION

We have presented the results of a study on the application of image registration and pixel-level fusion techniques between

Fig. 16. Angiograms taken at different stages of an eye with age-related macular degeneration.

Fig. 17. Fusion of images 1, 2, and 3 of Fig. 16 (top: 1 and 2; bottom: 2 and 3) with the SIDWT (left), TNO (middle), and MIT (right) methods.

pairs of human retinal images of different modalities (color, fluorescein angiogram), different resolutions, and taken at different times (from a few minutes during an angiography examination to several years between two examinations). We proposed an improved registration algorithm for the temporal and multimodal registration of low- and high-resolution retinal images, which can be used with any global transformation type. We tested this algorithm on 61 (70) image pairs of different modalities and eye diseases, taken at different times. The registration performance was evaluated with an overlap-based criterion, which indicates that the similarity, affine, and second-order polynomial transformations are, on average, equally good at registering the images, although there is a significant number of cases for which one transformation type is better than another. We presented the results of 12 grayscale and two false-color fusion methods. The fusion performance of the grayscale methods was evaluated with four quantitative criteria. The fusion provides gains in average entropy, standard deviation (4.4), and spatial frequency (4.2), indicating that the fused images contain more information and have better contrast than the source images. We also found that the addition of the R and B bands to the grayscale fused images provides additional information of potential interest for medical diagnosis. Finally, our tests indicate that the TNO false color mapping method is better than the MIT one regarding the color contrast rendering.
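The conclusion compares three global transformation types fitted to matched blood-vessel bifurcations. As a minimal sketch of the most general of the three, the second-order polynomial model can be fitted to control-point matches by linear least squares; the function names, the monomial basis ordering, and the synthetic points below are our own illustrative choices, not the paper's exact parameterization:

```python
import numpy as np

def fit_poly2(src_pts, dst_pts):
    """Least-squares fit of a second-order polynomial transform.
    src_pts, dst_pts: (N, 2) arrays of matched control points, N >= 6.
    Returns a (2, 6) coefficient matrix mapping (x, y) -> (x', y')."""
    x, y = src_pts[:, 0], src_pts[:, 1]
    # Design matrix of monomials up to degree 2: [1, x, y, xy, x^2, y^2].
    A = np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])
    coeffs, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)
    return coeffs.T

def apply_poly2(coeffs, pts):
    """Map (N, 2) points through the fitted quadratic transform."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])
    return A @ coeffs.T
```

Because the similarity and affine models are special cases of this quadratic model (with the xy, x^2, and y^2 coefficients at zero), the same fitting routine recovers them exactly when the point correspondences happen to follow a lower-order motion.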
Although extensive testing has been done during this work, many issues remain open and should be addressed in the future. Let us mention two of them. The first is the robustness of the image registration. Retinal image appearance is highly variable in practice with regard to color, texture, structure, and lesions. This is due to many factors, such as human color pigmentation, age, diseases, eye movement, camera calibration, and film defects. Developing a robust registration algorithm that can cope with those practical constraints was the main challenge of this work. We failed to register 25 (25) of the 61 (70) image pairs because they are of very bad quality (very few vessels due to illumination problems, zoom on the macula, or vessel deterioration caused by the disease). An example is shown in Fig. 18. Also, images taken at the late angiography stage have vessels that are hardly visible because the dye has left them. Since the vessels have about the same intensity as the background, they cannot be extracted. An example is shown in Fig. 19. Although those image pairs are of bad quality in terms of network visibility, clinicians can still use them for diagnosis. Thus, there is room for improving the registration robustness, but other reference structures seem to be necessary when the network quality is poor. The second issue is the color information carried by the color fusion technique. Color information is an important feature for the diagnosis of some diseases (e.g., diabetic retinopathy). It is clear that the false colors created by the fusion process carry information that cannot be interpreted easily by the clinician at this point. One can then ask: How fused color

information can be adapted to be human readable, or used in further computer-aided diagnosis processing? This is an important issue that necessitates a close relationship with clinicians. We plan to address those issues, and others, in the near future.

Fig. 18. Image pair that the algorithm failed to register because one of the images shows very few vessels (bottom image).

Fig. 19. Image pair that the algorithm failed to register because one of the images was taken at the late stage (bottom image).

ACKNOWLEDGMENT

The authors would like to thank M.-C. Boucher from the Maisonneuve-Rosemont Hospital (Montreal, QC, Canada) for providing some of the images and clinical input. They would also like to thank the reviewers for their valuable comments, which helped improve the quality of the paper.

REFERENCES

[1] A. Pinz, S. Bernögger, P. Datlinger, and A. Kruger, "Mapping the human retina," IEEE Trans. Med. Imag., vol. 17, Aug.
[2] A. V. Cideciyan, "Registration of ocular fundus images by cross-correlation of triple invariant image descriptors," IEEE Eng. Med. Biol. Mag., vol. 14.
[3] E. Peli, R. A. Augliere, and G. T. Timberlake, "Feature-based registration of retinal images," IEEE Trans. Med. Imag., vol. MI-6, June.
[4] J. J.-H. Yu, B.-N. Hung, and C.-L. Liou, "Fast algorithm for digital retinal image alignment," in Proc. Int. Conf. IEEE Engineering in Medicine and Biology Society, vol. 11, 1989.
[5] L. G. Brown, "A survey of image registration techniques," ACM Computing Surveys, vol. 24, no. 4.
[6] N. Ritter, R. Owens, J. Cooper, R. H. Eikelboom, and P. P. van Saarloos, "Registration of stereo and temporal images of the retina," IEEE Trans. Med. Imag., vol. 18, May.
[7] F. Zana and J. C. Klein, "A multimodal registration algorithm of eye fundus images using vessels detection and Hough transform," IEEE Trans. Med. Imag., vol. 18, May.
[8] M. Lalonde, L. Gagnon, and M.-C. Boucher, "Non-recursive paired tracking for vessel extraction from retinal images," in Proc. Vision Interface, 2000.
[9] D. E. Becker, A. Can, J. N. Turner, H. L. Tanenbaum, and B. Roysam, "Image processing algorithms for retinal montage synthesis, mapping, and real-time location determination," IEEE Trans. Biomed. Eng., vol. 45, Jan.
[10] V. Rakotomalala, L. Macaire, J.-G. Postaire, and M. Valette, "Identification of retinal vessels by color image analysis," Machine Graph. Vis., vol. 7, no. 4.
[11] O. Rockinger and T. Fechner, "Pixel-level image fusion: the case of image sequences," Proc. SPIE, vol. 3374.
[12] A. Toet and J. Walraven, "New false color mapping for image fusion," Opt. Eng., vol. 35, no. 3.
[13] A. M. Waxman, D. A. Fay, A. N. Gove, M. C. Seibert, and J. P. Racamato, "Method and apparatus for generating a synthetic image by the fusion of signals representative of different views of the same scene," U.S. Patent.
[14] J. W. Berger, M. E. Leventon, N. Hata, W. M. Wells, and R. Kikinis, "Design considerations for a computer-vision-enabled ophthalmic augmented reality environment," presented at CVRMed/MRCAS, Grenoble, France.
[15] A. Can, C. V. Stewart, and B. Roysam, "Robust hierarchical algorithm for constructing a mosaic from images of the curved human retina," in Proc. Computer Vision and Pattern Recognition, 1999.
[16] P. Jasiobedzki, "Registration of retinal images using adaptive adjacency graphs," in Proc. Computer-Based Medical Systems, 1993.
[17] L. Gagnon, M. Lalonde, M. Beaulieu, and M.-C. Boucher, "Procedure to detect anatomical structures in optical fundus images," Proc. SPIE, vol. 4322.
[18] G. Yang, S. Wang, L. Gagnon, and M.-C. Boucher, "Algorithm for detecting micro-aneurysms in low-resolution color retinal images," in Proc. Vision Interface, 2001.
[19] M. Lalonde, M. Beaulieu, and L. Gagnon, "Fast and robust optic disk detection using pyramidal decomposition and Hausdorff-based template matching," IEEE Trans. Med. Imag., vol. 20, Nov.
[20] F. Laliberté, L. Gagnon, and Y. Sheng, "Registration and fusion of retinal images: a comparative study," in Proc. Int. Conf. Pattern Recognition, 2002.
[21] P. J. Burt and R. J. Kolczynski, "Enhanced image capture through fusion," in Proc. 4th Int. Conf. Computer Vision, 1993.
[22] H. Li, B. S. Manjunath, and S. K. Mitra, "Multisensor image fusion using the wavelet transform," Graph. Models Image Processing, vol. 57, no. 3.
[23] M. Aguilar and A. L. Garrett, "Neurophysiologically-motivated sensor fusion for visualization and characterization of medical imagery," presented at Fusion 2001, Montreal, QC, Canada.
[24] Y. Wang and B. Lohmann, "Multisensor image fusion: Concept, method, and applications," Univ. Bremen, Bremen, Germany, Tech. Rep., 2000.


More information

A NEW FEATURE BASED IMAGE REGISTRATION ALGORITHM INTRODUCTION

A NEW FEATURE BASED IMAGE REGISTRATION ALGORITHM INTRODUCTION A NEW FEATURE BASED IMAGE REGISTRATION ALGORITHM Karthik Krish Stuart Heinrich Wesley E. Snyder Halil Cakir Siamak Khorram North Carolina State University Raleigh, 27695 kkrish@ncsu.edu sbheinri@ncsu.edu

More information

Non-recursive paired tracking for vessel extraction from retinal images

Non-recursive paired tracking for vessel extraction from retinal images Non-recursive paired tracking for vessel extraction from retinal images Marc Lalonde, Langis Gagnon and Marie-Carole Boucher Centre de recherche informatique de Montréal 550 Sherbrooke W., Suite 100, Montréal,

More information

CHAPTER 1 Introduction 1. CHAPTER 2 Images, Sampling and Frequency Domain Processing 37

CHAPTER 1 Introduction 1. CHAPTER 2 Images, Sampling and Frequency Domain Processing 37 Extended Contents List Preface... xi About the authors... xvii CHAPTER 1 Introduction 1 1.1 Overview... 1 1.2 Human and Computer Vision... 2 1.3 The Human Vision System... 4 1.3.1 The Eye... 5 1.3.2 The

More information

IRIS SEGMENTATION OF NON-IDEAL IMAGES

IRIS SEGMENTATION OF NON-IDEAL IMAGES IRIS SEGMENTATION OF NON-IDEAL IMAGES William S. Weld St. Lawrence University Computer Science Department Canton, NY 13617 Xiaojun Qi, Ph.D Utah State University Computer Science Department Logan, UT 84322

More information

Performance Evaluation of Fusion of Infrared and Visible Images

Performance Evaluation of Fusion of Infrared and Visible Images Performance Evaluation of Fusion of Infrared and Visible Images Suhas S, CISCO, Outer Ring Road, Marthalli, Bangalore-560087 Yashas M V, TEK SYSTEMS, Bannerghatta Road, NS Palya, Bangalore-560076 Dr. Rohini

More information

[10] Industrial DataMatrix barcodes recognition with a random tilt and rotating the camera

[10] Industrial DataMatrix barcodes recognition with a random tilt and rotating the camera [10] Industrial DataMatrix barcodes recognition with a random tilt and rotating the camera Image processing, pattern recognition 865 Kruchinin A.Yu. Orenburg State University IntBuSoft Ltd Abstract The

More information

Development of an Automated Fingerprint Verification System

Development of an Automated Fingerprint Verification System Development of an Automated Development of an Automated Fingerprint Verification System Fingerprint Verification System Martin Saveski 18 May 2010 Introduction Biometrics the use of distinctive anatomical

More information

Segmentation of the Optic Disc, Macula and Vascular Arch

Segmentation of the Optic Disc, Macula and Vascular Arch Chapter 4 Segmentation of the Optic Disc, Macula and Vascular Arch M.Niemeijer, M.D. Abràmoff and B. van Ginneken. Segmentation of the Optic Disc, Macula and Vascular Arch in Fundus Photographs. IEEE Transactions

More information

Parametric Texture Model based on Joint Statistics

Parametric Texture Model based on Joint Statistics Parametric Texture Model based on Joint Statistics Gowtham Bellala, Kumar Sricharan, Jayanth Srinivasa Department of Electrical Engineering, University of Michigan, Ann Arbor 1. INTRODUCTION Texture images

More information

Midterm Examination CS 534: Computational Photography

Midterm Examination CS 534: Computational Photography Midterm Examination CS 534: Computational Photography November 3, 2016 NAME: Problem Score Max Score 1 6 2 8 3 9 4 12 5 4 6 13 7 7 8 6 9 9 10 6 11 14 12 6 Total 100 1 of 8 1. [6] (a) [3] What camera setting(s)

More information

CHAPTER 3 WAVELET DECOMPOSITION USING HAAR WAVELET

CHAPTER 3 WAVELET DECOMPOSITION USING HAAR WAVELET 69 CHAPTER 3 WAVELET DECOMPOSITION USING HAAR WAVELET 3.1 WAVELET Wavelet as a subject is highly interdisciplinary and it draws in crucial ways on ideas from the outside world. The working of wavelet in

More information

Visible and Long-Wave Infrared Image Fusion Schemes for Situational. Awareness

Visible and Long-Wave Infrared Image Fusion Schemes for Situational. Awareness Visible and Long-Wave Infrared Image Fusion Schemes for Situational Awareness Multi-Dimensional Digital Signal Processing Literature Survey Nathaniel Walker The University of Texas at Austin nathaniel.walker@baesystems.com

More information

INVARIANT CORNER DETECTION USING STEERABLE FILTERS AND HARRIS ALGORITHM

INVARIANT CORNER DETECTION USING STEERABLE FILTERS AND HARRIS ALGORITHM INVARIANT CORNER DETECTION USING STEERABLE FILTERS AND HARRIS ALGORITHM ABSTRACT Mahesh 1 and Dr.M.V.Subramanyam 2 1 Research scholar, Department of ECE, MITS, Madanapalle, AP, India vka4mahesh@gmail.com

More information

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor COSC160: Detection and Classification Jeremy Bolton, PhD Assistant Teaching Professor Outline I. Problem I. Strategies II. Features for training III. Using spatial information? IV. Reducing dimensionality

More information

ENHANCED IMAGE FUSION ALGORITHM USING LAPLACIAN PYRAMID U.Sudheer Kumar* 1, Dr. B.R.Vikram 2, Prakash J Patil 3

ENHANCED IMAGE FUSION ALGORITHM USING LAPLACIAN PYRAMID U.Sudheer Kumar* 1, Dr. B.R.Vikram 2, Prakash J Patil 3 e-issn 2277-2685, p-issn 2320-976 IJESR/July 2014/ Vol-4/Issue-7/525-532 U. Sudheer Kumar et. al./ International Journal of Engineering & Science Research ABSTRACT ENHANCED IMAGE FUSION ALGORITHM USING

More information

Bit-Plane Decomposition Steganography Using Wavelet Compressed Video

Bit-Plane Decomposition Steganography Using Wavelet Compressed Video Bit-Plane Decomposition Steganography Using Wavelet Compressed Video Tomonori Furuta, Hideki Noda, Michiharu Niimi, Eiji Kawaguchi Kyushu Institute of Technology, Dept. of Electrical, Electronic and Computer

More information

EE368 Project: Visual Code Marker Detection

EE368 Project: Visual Code Marker Detection EE368 Project: Visual Code Marker Detection Kahye Song Group Number: 42 Email: kahye@stanford.edu Abstract A visual marker detection algorithm has been implemented and tested with twelve training images.

More information

Local Feature Detectors

Local Feature Detectors Local Feature Detectors Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr Slides adapted from Cordelia Schmid and David Lowe, CVPR 2003 Tutorial, Matthew Brown,

More information

Lecture 8 Object Descriptors

Lecture 8 Object Descriptors Lecture 8 Object Descriptors Azadeh Fakhrzadeh Centre for Image Analysis Swedish University of Agricultural Sciences Uppsala University 2 Reading instructions Chapter 11.1 11.4 in G-W Azadeh Fakhrzadeh

More information

Multiscale Blood Vessel Segmentation in Retinal Fundus Images

Multiscale Blood Vessel Segmentation in Retinal Fundus Images Multiscale Blood Vessel Segmentation in Retinal Fundus Images Attila Budai 1, Georg Michelson 2, Joachim Hornegger 1 1 Pattern Recognition Lab and Graduate School in Advanced Optical Technologies(SAOT),

More information

Short Survey on Static Hand Gesture Recognition

Short Survey on Static Hand Gesture Recognition Short Survey on Static Hand Gesture Recognition Huu-Hung Huynh University of Science and Technology The University of Danang, Vietnam Duc-Hoang Vo University of Science and Technology The University of

More information

Automated Model Based Segmentation, Tracing and Analysis of Retinal Vasculature from Digital Fundus Images

Automated Model Based Segmentation, Tracing and Analysis of Retinal Vasculature from Digital Fundus Images Chapter 1 Automated Model Based Segmentation, Tracing and Analysis of Retinal Vasculature from Digital Fundus Images Abstract Quantitative morphometry of the retinal vasculature is of widespread interest,

More information

MEDICAL IMAGE ANALYSIS

MEDICAL IMAGE ANALYSIS SECOND EDITION MEDICAL IMAGE ANALYSIS ATAM P. DHAWAN g, A B IEEE Engineering in Medicine and Biology Society, Sponsor IEEE Press Series in Biomedical Engineering Metin Akay, Series Editor +IEEE IEEE PRESS

More information

Panoramic Image Stitching

Panoramic Image Stitching Mcgill University Panoramic Image Stitching by Kai Wang Pengbo Li A report submitted in fulfillment for the COMP 558 Final project in the Faculty of Computer Science April 2013 Mcgill University Abstract

More information

A Review of Methods for Blood Vessel Segmentation in Retinal images

A Review of Methods for Blood Vessel Segmentation in Retinal images A Review of Methods for Blood Vessel Segmentation in Retinal images Sonal S. Honale Department of Computer Science and Engineering Tulsiramji Gaikwad Patil College of Engineering & Technology, Nagpur,

More information

Digital Image Processing

Digital Image Processing Digital Image Processing Third Edition Rafael C. Gonzalez University of Tennessee Richard E. Woods MedData Interactive PEARSON Prentice Hall Pearson Education International Contents Preface xv Acknowledgments

More information

Performance Evaluation of Biorthogonal Wavelet Transform, DCT & PCA Based Image Fusion Techniques

Performance Evaluation of Biorthogonal Wavelet Transform, DCT & PCA Based Image Fusion Techniques Performance Evaluation of Biorthogonal Wavelet Transform, DCT & PCA Based Image Fusion Techniques Savroop Kaur 1, Hartej Singh Dadhwal 2 PG Student[M.Tech], Dept. of E.C.E, Global Institute of Management

More information

Learning to Recognize Faces in Realistic Conditions

Learning to Recognize Faces in Realistic Conditions 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050

More information

A Laplacian Based Novel Approach to Efficient Text Localization in Grayscale Images

A Laplacian Based Novel Approach to Efficient Text Localization in Grayscale Images A Laplacian Based Novel Approach to Efficient Text Localization in Grayscale Images Karthik Ram K.V & Mahantesh K Department of Electronics and Communication Engineering, SJB Institute of Technology, Bangalore,

More information

A biometric iris recognition system based on principal components analysis, genetic algorithms and cosine-distance

A biometric iris recognition system based on principal components analysis, genetic algorithms and cosine-distance Safety and Security Engineering VI 203 A biometric iris recognition system based on principal components analysis, genetic algorithms and cosine-distance V. Nosso 1, F. Garzia 1,2 & R. Cusani 1 1 Department

More information

Data Fusion Virtual Surgery Medical Virtual Reality Team. Endo-Robot. Database Functional. Database

Data Fusion Virtual Surgery Medical Virtual Reality Team. Endo-Robot. Database Functional. Database 2017 29 6 16 GITI 3D From 3D to 4D imaging Data Fusion Virtual Surgery Medical Virtual Reality Team Morphological Database Functional Database Endo-Robot High Dimensional Database Team Tele-surgery Robotic

More information