VYSOKÉ UČENÍ TECHNICKÉ V BRNĚ
BRNO UNIVERSITY OF TECHNOLOGY (VYSOKÉ UČENÍ TECHNICKÉ V BRNĚ)
FACULTY OF ELECTRICAL ENGINEERING AND COMMUNICATION (FAKULTA ELEKTROTECHNIKY A KOMUNIKAČNÍCH TECHNOLOGIÍ)
DEPARTMENT OF BIOMEDICAL ENGINEERING (ÚSTAV BIOMEDICÍNSKÉHO INŽENÝRSTVÍ)

MULTIMODAL REGISTRATION OF FUNDUS CAMERA AND OCT RETINAL IMAGES
(MULTIMODÁLNÍ REGISTRACE RETINÁLNÍCH SNÍMKŮ Z FUNDUS KAMERY A OCT)

MASTER'S THESIS (DIPLOMOVÁ PRÁCE)
AUTHOR (AUTOR PRÁCE): Bc. ONDŘEJ BĚŤÁK
SUPERVISOR (VEDOUCÍ PRÁCE): Ing. JIŘÍ GAZÁREK

BRNO 2011
BRNO UNIVERSITY OF TECHNOLOGY
Faculty of Electrical Engineering and Communication
Department of Biomedical Engineering

Master's Thesis
Master's follow-up study field: Biomedical and Ecological Engineering
Student: Bc. Ondřej Běťák, ID:
Year of study: 2, Academic year: 2010/2011

TOPIC: Multimodal registration of fundus camera and OCT retinal images

INSTRUCTIONS: Get acquainted with the available literature on the processing of colour retinal images, in particular images acquired with a digital fundus camera and with OCT. From the available literature, study current image registration methods and, following the supervisor's instructions, implement a selected method in the MATLAB environment. Test the method on the fundus camera and OCT image data available at the Department of Biomedical Engineering (ÚBMI).

RECOMMENDED LITERATURE:
[1] JAN, J.: Medical Image Processing, Reconstruction and Restoration: Concepts and Methods. CRC, New York.
[2] ZITOVÁ, B., FLUSSER, J.: Image registration methods: a survey. Image and Vision Computing.

Assignment date:   Submission deadline:
Supervisor: Ing. Jiří Gazárek
prof. Ing. Ivo Provazník, Ph.D., Chairman of the field board

NOTICE: When writing the thesis, the author must not infringe the copyrights of third parties; in particular, he must not interfere in an unauthorized manner with the personal copyrights of others, and he must be fully aware of the consequences of violating Section 11 et seq. of Copyright Act No. 121/2000 Coll., including the possible criminal consequences arising from Part Two, Chapter VI, Section 4 of Criminal Code No. 40/2009 Coll.
LICENCE AGREEMENT ON THE RIGHT TO USE A SCHOOL WORK, concluded between the contracting parties:

1. The author: Ondřej Běťák, residing at Střítež nad Bečvou 40, born in Valašské Meziříčí (hereinafter "the author").

2. Brno University of Technology, Faculty of Electrical Engineering and Communication, with its registered office at Technická 10, Brno, acting on the basis of a written authorization by the dean of the faculty through prof. Ing. Ivo Provazník, Ph.D., chairman of the board of the field Biomedical and Ecological Engineering (hereinafter "the acquirer").

Article 1: Specification of the school work
1. The subject of this agreement is a university qualification work, here a master's thesis (the form also lists doctoral, bachelor's or other work).
Title: Multimodal registration of fundus camera and OCT retinal images
Supervisor: Ing. Jiří Gazárek
Department: Department of Biomedical Engineering
Defence date: 7 or 8 June 2011
The author submitted the work to the acquirer in printed form (2 copies) and in electronic form (2 copies).
2. The author declares that he created the work described above by his own independent creative activity. He further declares that in preparing the work he did not come into conflict with the Copyright Act and related regulations, and that the work is an original work.
3. The work is protected as a work under the Copyright Act as amended.
4. The author confirms that the printed and electronic versions of the work are identical.
Article 2: Grant of the licence
1. By this agreement the author grants the acquirer a licence to exercise the right to use the above work non-commercially, to archive it and to make it accessible for study, teaching and research purposes, including the making of extracts, transcripts and copies.
2. The licence is granted worldwide, for the entire duration of the author's moral and economic rights to the work.
3. The author agrees to the publication of the work in a database accessible on the international network immediately after the conclusion of this agreement (the form also offers publication 1, 3, 5 or 10 years after conclusion, for reasons of confidentiality of the information contained in the work).
4. Non-commercial publication of the work by the acquirer in accordance with Section 47b of Act No. 111/1998 Coll., as amended, does not require a licence; the acquirer is obliged and entitled to it by law.

Article 3: Final provisions
1. The agreement is drawn up in three counterparts with the validity of an original; the author and the acquirer each receive one counterpart and the third is inserted into the thesis.
2. Relations between the contracting parties arising from and not regulated by this agreement are governed by the Copyright Act, the Civil Code, the Higher Education Act and the Archives Act, as amended, and by other applicable legal regulations.
3. The licence agreement was concluded on the basis of the free and true will of the contracting parties, with full understanding of its text and consequences, not under duress or under conspicuously disadvantageous conditions.
4. The licence agreement comes into force and effect on the day of its signature by both contracting parties.

Brno, 3 May 2011. The acquirer. The author.
Declaration

I declare that I have written my master's thesis on the topic of multimodal registration of fundus camera and OCT retinal images independently, under the guidance of my thesis supervisor and using the technical literature and other information sources that are all cited in the thesis and listed in the bibliography at its end. As the author of this master's thesis I further declare that, in connection with its creation, I have not infringed the copyrights of third parties; in particular, I have not interfered in an unauthorized manner with the personal copyrights of others, and I am fully aware of the consequences of violating Section 11 et seq. of Copyright Act No. 121/2000 Coll., including the possible criminal consequences arising from Section 152 of Criminal Code No. 140/1961 Coll.

Brno, 3 May 2011. Author's signature
Acknowledgement

I thank my thesis supervisor Ing. Jiří Gazárek for his effective methodological, pedagogical and professional help and for other valuable advice during the preparation of my master's thesis.

Brno, 3 May 2011. Author's signature
INTRODUCTION

This thesis deals with multimodal registration of retinal images from different scanning devices. Multimodal registration makes it possible to enhance details in the images that are crucial for detecting various eye diseases (such as glaucoma, nerve fibre layer degradation, or vessel degradation). The fundus camera is very common in ophthalmology, and fundus devices are cheaper than OCT or SLO devices. Fundus imaging is therefore important even though it is not the best method for detecting retinal diseases. The OCT, on the other hand, is a very sophisticated system, often combined with an SLO device, which is very helpful in finding retinal diseases but also more expensive.

Chapter 1 covers the theoretical background of the retina itself, retinal imaging, image processing, and image registration. Chapter 2 applies this knowledge practically in MATLAB. The first step was a manual registration of fundus and SLO images; registration of these two modalities is quite common. The next step was an automatic registration of fundus and SLO images. This registration requires an algorithm that can evaluate the similarity of the two images, so correlation was used. Combining the automatic and manual approaches made it possible to create a hybrid, semi-automatic registration. In the author's opinion this is the most effective type: it is very accurate and relatively fast for a large number of images. The final task was an algorithm that matches the OCT B-scans to the corresponding lines in the fundus image; registration of fundus and OCT is not very common. The results are discussed in the conclusion.
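The automatic registration step mentioned above rests on a window-similarity measure. As an illustrative sketch only (written in Python, whereas the thesis itself works in MATLAB, and with hypothetical names), normalized cross-correlation of two equally sized grey-level windows can be computed like this:

```python
# Illustrative sketch, not the thesis's MATLAB code: normalized
# cross-correlation (NCC) as a similarity measure between two image windows,
# the kind of measure the automatic fundus/SLO registration relies on.
def ncc(a, b):
    """Normalized cross-correlation of two equally sized value lists."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    num = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    den = (sum((x - mean_a) ** 2 for x in a)
           * sum((y - mean_b) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0

# Identical windows correlate perfectly; an inverted window anti-correlates.
w = [10, 20, 30, 40]
print(ncc(w, w))                # 1.0
print(ncc(w, [40, 30, 20, 10])) # -1.0
```

In practice one window from the first image is slid over the second image and the position with the highest correlation is taken as the match.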
CONTENTS

1 BACKGROUND INFORMATION
  1.1 Biological background
    1.1.1 Vision system
    1.1.2 Anatomy of retina
  1.2 Devices for retinal scanning
    1.2.1 Fundus camera (retinal camera)
    1.2.2 OCT (optical coherence tomography), time domain
    1.2.3 OCT (optical coherence tomography), spectral domain
    1.2.4 SLO (scanning laser ophthalmoscopy)
  1.3 Image processing
    1.3.1 Edge detection
    1.3.2 Gradient
    1.3.3 Median filtering, noise reduction
    1.3.4 Skeletonization
    1.3.5 Correlation
  1.4 Image registration
    1.4.1 Background information
    1.4.2 Image acquisition methods
    1.4.3 Most common procedure of image registration
    1.4.4 Spatial transformation types
2 PRACTICAL APPLICATION
  2.1 Pre-processing of fundus and SLO images
    2.1.1 Input image
    2.1.2 Pre-processing
    2.1.3 Output image
  2.2 Manual registration of fundus and SLO images
    2.2.1 Choosing the input points
    2.2.2 Choosing the reference (base) points
    2.2.3 Registration result
    2.2.4 Manual registration block diagram
  2.3 Automatic registration of fundus and SLO images
    2.3.1 Choosing the reference points
    2.3.2 Finding the similar windows
    2.3.3 Finding the minimal value of found windows
    2.3.4 Automatic registration result
    2.3.5 Evaluation of automatic registration result
    2.3.6 Automatic registration block diagram
  2.4 Semi-automatic registration of fundus and SLO images
    2.4.1 Semi-automatic registration result
    2.4.2 Semi-automatic registration block diagram
  2.5 The fundus image and OCT B-scans matching
    2.5.1 Pre-processing of OCT and fundus images
    2.5.2 B-scans and fundus matching
    2.5.3 Matching result
    2.5.4 Block diagram of B-scans matching with fundus image
CONCLUSION
REFERENCES
SUPPLEMENTS
1 BACKGROUND INFORMATION

1.1 Biological background

1.1.1 Vision system

The vision system includes the eyeball (bulbus oculi) and its auxiliary structures (organa oculi accessoria). [4] The shape of the bulbus oculi approximately corresponds to a sphere, with a diameter of about 25 mm in an adult human. The poles of the eye surface are connected by lines known as meridians, along which the vessels run. The linea visus is the axis which connects the observed point with the centre of the fovea centralis retinae (yellow spot).

Fig. 1.1: 1. cornea, 2. sclera, 3. choroid (chorioidea), 4. iris, 5. ciliary apparatus (corpus ciliare), 6. anterior and posterior eye chamber, 7. lens, 8. retina, 9. yellow spot (fovea centralis), 10. blind spot (locus caecus), 11. optic nerve (nervus opticus), 12. vitreous body (corpus vitreum)
The eye movements are provided by the extraocular muscles. The eyelids, conjunctiva and lacrimal apparatus protect the eye from mechanical damage, provide moistening and prevent inflammation. The lacrimal gland (glandula lacrimalis) is placed laterally under the upper eyelid. Tears flow out through the lacrimal duct to the nasal cavity.

Rods and cones are the photosensitive elements of the retina. The yellow spot (fovea centralis) is the point of sharpest vision, where the photoreceptors are most densely packed. Cones serve daylight (photopic) vision and rods serve monochromatic vision at dusk or night (scotopic vision). The eye is able to perceive the visible part of the spectrum, which spans roughly 400 to 700 nm. There are three different types of cones in the eye, each sensitive to a different range of wavelengths: red, green and blue. Different intensities of stimulation of the three cone types produce the perception of the whole colour spectrum.

1.1.2 Anatomy of retina

The retina is a light-sensitive tissue lining the inner surface of the eye. It is formed as a pouch of the diencephalon (the rear of the forebrain). The main function of the retina is to capture the light incident on it through the lens. The rods and cones are placed on the inner side of the retina. It also includes the yellow and blind spots (Fig. 1.2).
Fig. 1.2 Cross section of the eye showing retinal vasculature: 1. vessels and exit of the optic nerve (blind spot), 2. yellow spot [4]

The retina can be further divided into these layers:
- stratum pigmentosum (pigmentary epithelium of the retina)
- stratum neuroepitheliale (layer of rods and cones)
- stratum ganglionare retinae (layer of bipolar neurons)
- stratum ganglionare nervi optici (layer of multipolar neurons whose axons create the nervus opticus (optic nerve)) [2]

The retina covers the back of the eye, making the optical magnification constant 3.5 degrees of scan angle of the eye per millimetre. It allows pre-processing of the captured image in the form of edge detection and colour analysis before the information is transferred along the optic nerve to the brain. The retina contains four main types of cells, which are arranged in layers as described above. The dendrites of these cells occupy no more than 1 to 2
mm² of the retina, and their content is limited by the spatial distribution of the layers. In the first layer there are about 125 million receptors. They contain the photosensitive pigment responsible for converting photons into chemical energy. Receptor cells, as already mentioned, are divided into two groups: rods and cones. Cones are responsible for colour vision and work only in daylight or good lighting of the scanned image; in the dark, the rods are excited instead. When a single photon is captured by the eye, it increases the membrane potential, which can be measured in the rods. This reaction of the eye (its sensitivity) is the result of a chemical cascade which works similarly to a photomultiplier, in which a single photon generates a cascade of electrons. All rods use the same pigment, whereas cones use three different types of pigments.

On the inner side of the retina is a layer of ganglion cells (stratum ganglionare retinae), whose axons form the optic nerve, the output of the retina. There are approximately one million of these cells. When information is transferred through the area between the receptors and the ganglion cells, it is compressed. Bipolar cells represent the first level of information processing in the visual system; they lie between the receptors and the ganglion cells throughout this area. Their response to light represents the centre, or alternatively the surroundings, of the captured scene: even a small dot on the retina evokes a specific reaction, whereas the surrounding area evokes the opposite reaction. If the centre and the surroundings are illuminated at the same time, there is no response. Bipolar cells thus exist in two types: on-centre and off-centre. The on-centre type reacts to bright illumination, the off-centre type to dark. The on-centre response of bipolar cells comes from direct contact with the receptors.
The off-centre response is delivered by horizontal cells, which are parallel to the retina surface and can be found between the receptor and bipolar layers. Thanks to this, the off-centre layer has the opposite influence to the on-centre cells. Amacrine cells are also parallel to the retina surface; they can be found in a different layer, between the bipolar cells and the ganglion cells, and they detect motion in the captured image. Since the ganglion cells are triggered by bipolar cells, they also have on-centre and off-centre receptive fields. Illumination of the on-centre field of a ganglion cell increases the excitation of the cell, whereas illumination of the off-centre field decreases it. If the whole area is illuminated, then there is
little or no excitation, because the mutual effects cancel each other out. The fibres of the optic nerve use frequency encoding to represent scalar quantities. Several ganglion cells can receive information from the same receptor, because the receptive fields of these cells overlap. The maximal resolution is in the yellow spot. To recognize two different points in the captured scene, the distance between them has to be at least 0.5 minute of visual angle. This separation corresponds to a distance of 2.5 µm on the retina, which is approximately the center-to-center spacing between cones.

1.2 Devices for retinal scanning

1.2.1 Fundus camera (retinal camera)

The fundus camera is a specialized microscope with an attached camera designed to photograph the interior surface of the eye: the retina, the blind spot and the yellow spot. These photographs allow medical professionals to monitor the progression of diseases such as glaucoma or macular degeneration. The design is based on the principle of monocular indirect ophthalmoscopy; it replaced the previously used ophthalmoscope [7].

Fig. 1.3 Principle of the fundus camera: 1 halogen lamp, 2 shade, 3 optics for focusing, 4 aperture for illumination, 5 green filter, 6 red filter, 10 (CCD) camera, 17 optics for focusing the image, 18 CCD sensor, 19 aperture for reflection measurements, 20 diode for aiming the eye [7]
The principle of scanning by the fundus camera is shown in figure 1.3. Image acquisition starts by fixing the patient's head in a special holder, which fixes the position of the eye. The device then detects the pupil centre, either automatically or manually. In the next step the device automatically focuses on the retina: the shade opens (Fig. 1.3) and several images are scanned by the CCD sensor (Fig. 1.3) with a synchronous shift of the lens (Fig. 1.3). The system then evaluates the amount of high frequencies in the scanned image; a well-focused image contains many high frequencies. Next, the device automatically sets the intensity of illumination of the patient's eye: the images scanned in the previous step are evaluated and their average intensity is calculated. The final image is generated by scanning several images at different shifts around the best-focus position. It is also possible to scan images with horizontal shifting. For better vessel resolution, scanning through a green filter is used.

The illumination is usually around 1 mW/cm². The total exposure lasts approximately 250 ms. The field of view (FOV) is most often between 20° and 45°. The device can compensate dioptric deviations of the patient's eye in the range of +/- 20 D. For focusing, a pupil diameter of at least 4 mm is needed. The resolution of the CCD sensor is usually larger than 2.1 MPx. The overall magnification of the output image compared to the patient's eye is ten to thirty times. The output image is usually stored as an RGB image in standard JPEG or in a manufacturer-specific format.

Fig. 1.4 Fundus camera image
1.2.2 OCT (optical coherence tomography), time domain

OCT is a non-invasive tomographic imaging and diagnostic method which scans images of biological tissue (skin, mucosa, eye, teeth) in high resolution. [5] It belongs to the group of optical tomographic techniques. OCT scans images in transversal (cross) sections. The method uses infrared radiation, which is able to penetrate to a depth of 1-3 mm, and it has a high resolution: depending on the characteristics of the infrared radiation source, the resolution can reach microns. From this radiation the computer can reconstruct the physical structures in two or three dimensions.

Fig. 1.5 Component blocks of an OCT system [5]

Figure 1.5 shows the basic components of the OCT system. One of the most important blocks is the interferometer, which is illuminated by a broadband light source. In this block the signal is stripped of all components unnecessary for further analysis. The interferometer divides the broadband source field into a reference field $E_r$ and a sample field $E_s$. [5] The sample field is focused through the scanning optics and objective lens to some point below the surface of the
tissue. After scattering back from the tissue, the modified sample field $E_s'$ mixes with $E_r$ on the surface of the photodetector. Under the assumption that the photodetector captures all light from the reference and sample arms, the intensity impacting the photodetector is

$I_d = \langle |E_d|^2 \rangle = 0.5\,(I_r + I_s') + \mathrm{Re}\,\langle E_s'^{*}(t+\tau)\, E_r(t) \rangle$    (1)

where $I_r$ and $I_s'$ are the mean intensities returning from the reference and sample arms of the interferometer and $I_d$ is the intensity impacting the photodetector. The second term in equation (1), which depends on the optical time delay $\tau$ set by the position of the reference mirror, represents the amplitude of the interference fringes that carry information about the tissue structure. The nature of the interference fringes, or whether any fringes form at all, depends on the degree to which the temporal and spatial characteristics of $E_s'$ and $E_r$ match. Under the assumption that the tissue behaves as an ideal mirror that leaves the sample beam unaltered, the correlation amplitude depends on the temporal-coherence characteristics:

$\mathrm{Re}\,\langle E_s^{*}(t+\tau)\, E_s'(t) \rangle = |G(\tau)| \cos\!\left[2\pi \nu_0 \tau - \Phi(\tau)\right]$    (2)

where $\nu_0$ is the middle frequency of the source and $G(\tau)$ is the complex temporal-coherence function with argument $\Phi(\tau)$. According to the Wiener-Khinchin theorem, $G(\tau)$ is related to the power spectral density of the source, $S(\nu)$, as

$G(\tau) = \int_0^{\infty} S(\nu)\, e^{-j 2\pi \nu \tau}\, d\nu$    (3)

It follows that the shape and the width of the emission spectrum of the light source are important variables in OCT because of their influence on the sensitivity of the interferometer. The relationship between $S(\nu)$ and $G(\tau)$ is:
$S(\nu) = \frac{2\sqrt{\ln 2/\pi}}{\Delta\nu}\; e^{-4\ln 2\left(\frac{\nu-\nu_0}{\Delta\nu}\right)^2}$    (4)

$G(\tau) = e^{-\frac{(\pi\,\Delta\nu\,\tau)^2}{4\ln 2}}\; e^{-j 2\pi \nu_0 \tau}$    (5)

In these equations, the half-power bandwidth $\Delta\nu$ represents the spectral width of the source in the optical frequency domain. The corresponding measure of correlation width, derived from (5), is the coherence length, given by:

$l_c = \frac{2 c \ln 2}{\pi\,\Delta\nu} = 0.44\, \frac{\bar{\lambda}^2}{\Delta\lambda}$    (6)

where $c$ is the speed of light, $\bar{\lambda}$ is the centre wavelength of the source and $\Delta\lambda$ is the full width of the coherence function at half maximum measured in wavelength units. [5]

The principle described above is the so-called time-domain OCT. The newest OCT devices function in the spectral domain.

Fig. 1.6 OCT image (B-scan)
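A short worked example of equation (6), with illustrative source parameters that are not taken from the thesis (a centre wavelength of 840 nm and a 50 nm spectral FWHM are typical for ophthalmic OCT):

```python
# Worked example of eq. (6): coherence length of a broadband OCT source.
# The parameter values below are illustrative assumptions, not thesis data.
lam0 = 840e-9   # centre wavelength [m]
dlam = 50e-9    # spectral full width at half maximum [m]

l_c = 0.44 * lam0**2 / dlam   # coherence length, eq. (6)
print(round(l_c * 1e6, 2), "um")   # about 6.21 um
```

This micrometre-scale coherence length is what gives OCT its axial resolution.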
1.2.3 OCT (optical coherence tomography), spectral domain

OCT enables micron-scale, cross-sectional imaging of internal tissue microstructure in real time. [13] Commercial ophthalmic OCT imaging systems have axial resolutions on the order of micrometres. Spectral (Fourier) domain detection methods for OCT were suggested in the 1990s and became an active area of investigation around 2005. Studies have shown that spectral domain detection can enable dramatic improvements in imaging speed or detection sensitivity compared to standard detection techniques. Spectral domain OCT detects the echo time delay and magnitude of light by measuring the interference spectrum of the light signal from the tissue. The principle is shown in figure 1.7.

Fig. 1.7 Schematic of a high-speed, ultrahigh-resolution OCT system using spectral (Fourier) domain detection; the echo time delay and magnitude of reflected light are detected by measuring and Fourier transforming the spectrum of the interference signal [13]

The detection uses a spectrometer in conjunction with a high-speed, high-dynamic-range CCD camera, photodiode array, or line-scan camera. The camera records the spectrum of the interference pattern, and the echo time delays of the light signal can be extracted by Fourier transform. Because spectral domain OCT measures all of the reflected light at once, rather than
light which returns at a given echo delay, there is a dramatic increase in detection sensitivity. Thanks to this, high-speed retinal imaging is possible.

The newest types of OCT devices combine OCT and SLO scanning. First the SLO image is captured, then the user chooses the area in the SLO image to be scanned with OCT.

1.2.4 SLO (scanning laser ophthalmoscopy)

Scanning laser ophthalmoscopes provide non-invasive image sequences of the retina with high spatial and temporal resolution. [14] SLO is a method of examination of the eye which uses the technique of confocal laser scanning microscopy for diagnostic imaging of the retina or cornea of the human eye. SLO scans images with a laser beam and typically provides standard video output (30 Hz) that can be captured by computer systems. The SLO imaging system focuses the laser onto a spot at the retina, on the order of 10 µm in diameter, that lies about 24 mm beyond the pupil.

Fig. 1.8 Representative SLO geometry [14]

A laser beam of 1 mm diameter is pre-shaped by a lens (L1) so as to be in focus at the retina (figure 1.9). The beam passes through a 2 mm aperture (M1) which is used as a beam separator
between the illuminating and reflected light. The laser beam is then deflected horizontally by a rotating polygon mirror (M2) to form a line scan. The number of facets of the polygon determines the angle of deflection which, in turn, determines the angle scanned in the horizontal direction. A galvanometric mirror (M3) deflects the beam vertically to form a two-dimensional raster. The two-dimensional raster is focused by a spherical mirror (M4) to a single spot at the position of the patient's lens. The optics of the eye then focuses this onto the retina. The light reflected from the retina (dotted line in figure 1.9) emerges from a larger exit aperture, travels back along the same path as the illumination beam and is descanned by the two scanning mirrors. The light is collected by the beam separator and focused by a lens (L2) onto a photodetector. The signal from the photodetector can then either be recorded on video tape or fed to a frame grabber interfaced to a computer. [15]

Fig. 1.9 An optical diagram of the SLO
1.3 Image processing

Image processing is based on signal processing. In this case the input is an image and the output is an image with marked or highlighted elements; these elements are the reason why the image is processed. There are many different types of methods, so only the methods used in this thesis are described here.

1.3.1 Edge detection

Detecting edges in an image is one of the fundamental pillars of the whole field of image processing [1]. An edge in the image can be defined as a quick change of intensity between neighbouring pixels. The edge can be a part (an edge) of a bigger object in the image, a feature, or it can indicate an object in itself (e.g. a line or a point). Unfortunately, it often happens that local changes in intensity are due to noise and get detected as edges; various image processing methods make it possible to suppress this noise. Applying global edge detection to the image creates a secondary binary image. This binary image contains only two values, ones and zeroes; ones are represented as white pixels, zeroes as black pixels. Generally speaking, an edge operator puts edges at the places of maximum response (derivative: responds to rapid change, and it is necessary to differentiate along x and y to respond to horizontal and vertical edges; Laplacian: the edge is where the function crosses zero).

Gradient-based detectors

These detectors are based on evaluating intensity changes between neighbouring pixels and can be described as follows [1]:

$g_{i,k} = \sqrt{(\Delta_x f_{i,k})^2 + (\Delta_y f_{i,k})^2}$    (7)

where $g_{i,k}$ is a pixel of the edge representation (with coordinates $i$ and $k$) and $f_{i,k}$ is the image pixel from which $g_{i,k}$ is calculated. This operator is based on the absolute value, which means that it is isotropic (independent of direction). Equation (7) can be further approximated as [1]:
$g_{i,k} = \max\left(|\Delta_x f_{i,k}|,\, |\Delta_y f_{i,k}|\right), \qquad g_{i,k} = |\Delta_x f_{i,k}| + |\Delta_y f_{i,k}|$    (8)

As seen from equation (8), these operators are slightly anisotropic (direction-dependent). The first operator emphasizes horizontal and vertical edges, while the second operator highlights angled edges. The decision whether a pixel belongs to a certain edge is based on comparing the local gradient with a selected threshold, which is the only parameter of edge detection given by the absolute gradient formula. The threshold determines whether the pixel belongs to the edge representation of the image or not. A higher threshold causes only the significant (brighter or darker) edges to be highlighted. If no suitable threshold is known in advance, it is set based on visual evaluation of the resulting edge representation.

A result similar to the operators in (8) is provided by the Roberts operator, which calculates the difference in both diagonal directions and takes the bigger value as the result. This operator uses the convolution masks given by [1]:

$\max\left(|h_1 * f|,\, |h_2 * f|\right), \qquad h_1 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad h_2 = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$    (9)

The absolute gradient usually provides relatively thick edges, which can be post-processed with operators that thin the edges (erosion operators). If the partial differences are known, it is possible to determine the direction of the local edge using [1]:

$\varphi_{i,k} = \arctan\left(\frac{\Delta_y f_{i,k}}{\Delta_x f_{i,k}}\right)$    (10)
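The absolute-gradient detector of equations (7)-(8) can be sketched in a few lines. This is an illustrative Python version (the thesis itself works in MATLAB), using the sum-of-absolute-differences approximation from (8):

```python
# Illustrative sketch of the absolute-gradient edge detector, eq. (8):
# |dx| + |dy| compared with a threshold gives a binary edge map.
def edge_map(img, threshold):
    """Binary edge representation of a 2-D grey-level image (list of lists)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h - 1):
        for k in range(w - 1):
            dx = img[i][k + 1] - img[i][k]   # horizontal difference
            dy = img[i + 1][k] - img[i][k]   # vertical difference
            out[i][k] = 1 if abs(dx) + abs(dy) > threshold else 0
    return out

# A vertical step edge: only pixels just left of the step respond.
img = [[0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 9, 9]]
print(edge_map(img, 5))   # [[0, 1, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0]]
```

Raising the threshold suppresses weak (often noise-induced) edges, exactly as described above.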
The calculation of formula (10) can be approximated by the corresponding value in a two-dimensional LUT (lookup table). As a result, the parametric edge representation becomes a vector: for every positively detected edge pixel it is possible to determine the local direction, which can be useful at a higher level of analysis, where edge consistency tests are based on the context. The edge detection, including a rough estimate of the edge directions, can also be made by so-called compass detectors. They are based on repeated convolution of the image with a set of eight directional masks, which approximate directional derivatives of the weighted average difference. These masks may look like this [1]:

$h_1 = \begin{pmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{pmatrix}, \quad h_2 = \begin{pmatrix} 2 & 1 & 0 \\ 1 & 0 & -1 \\ 0 & -1 & -2 \end{pmatrix}, \quad h_3 = \begin{pmatrix} 1 & 0 & -1 \\ 2 & 0 & -2 \\ 1 & 0 & -1 \end{pmatrix}, \quad h_4 = \begin{pmatrix} 0 & -1 & -2 \\ 1 & 0 & -1 \\ 2 & 1 & 0 \end{pmatrix}$    (11)

Fig. 1.10 Original image and its rough edge representation [1]

Each mask gives a different parametric image, which highlights a different orientation of the edge. With these masks it is possible to find the maximum for every pixel (i, k) and further
threshold it [1]:

$g_{i,k} = \left(\max_j |g^{(j)}_{i,k}| > T\right), \qquad \varphi_{i,k} = j_{\max} \cdot 45°$    (12)

where $j_{\max}$ is the order of the mask which gave the maximum absolute response and $T$ is the chosen threshold. The masks in equation (11) represent the so-called Sobel operator. Other commonly used operators are e.g. the Prewitt, Kirsch or Canny operators.

Laplacian-based zero-crossing detectors

Another commonly used method for edge detection is Laplacian-based detectors. The usual procedure is to transform the image into a Laplacian image in the first step and to find the zero-crossings in the next step. The procedure provides thinner and usually more precise edges than gradient-based methods.

Fig. 1.11 Raw edge representation based on detection of zero-crossings in the Laplacian [1]
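The compass detection of equations (11)-(12) can be sketched as follows. This is an illustrative Python version, not the thesis's MATLAB code; it uses the four mask shapes of (11), and taking absolute responses covers the four opposite directions of the full eight-mask set:

```python
# Illustrative sketch of a compass edge detector, eqs. (11)-(12):
# convolve the 3x3 neighbourhood with directional Sobel-type masks,
# keep the maximum absolute response and the direction j_max * 45 degrees.
MASKS = [
    [[ 1,  2,  1], [ 0,  0,  0], [-1, -2, -1]],   # 0 deg (horizontal edge)
    [[ 2,  1,  0], [ 1,  0, -1], [ 0, -1, -2]],   # 45 deg
    [[ 1,  0, -1], [ 2,  0, -2], [ 1,  0, -1]],   # 90 deg (vertical edge)
    [[ 0, -1, -2], [ 1,  0, -1], [ 2,  1,  0]],   # 135 deg
]

def compass(img, i, k, threshold):
    """Return (is_edge, direction_deg) for the inner pixel (i, k)."""
    responses = []
    for m in MASKS:
        r = sum(m[a][b] * img[i - 1 + a][k - 1 + b]
                for a in range(3) for b in range(3))
        responses.append(abs(r))
    j_max = responses.index(max(responses))
    return max(responses) > threshold, j_max * 45

img = [[0, 0, 9],
       [0, 0, 9],
       [0, 0, 9]]
print(compass(img, 1, 1, 10))   # vertical edge -> (True, 90)
```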
To make this method effective enough, instead of searching for the places where the Laplacian crosses zero, a mask is used which looks for neighbouring places with opposite signs. This can be done with the mask [1]:

$m = \begin{pmatrix} 0 & 1 & 0 \\ 1 & \times & 1 \\ 0 & 1 & 0 \end{pmatrix}$    (13)

This mask selects the four neighbouring pixels of the tested pixel ($\times$). The further operations are logical:

1. if at least one of the neighbours has a different sign than the others, and
2. if the difference between the extreme differently signed values of the neighbours is greater than a chosen threshold T, and
3. if the central value at the cross position lies in between the extreme differently signed values of the neighbours,

then the pixel (in a corresponding new matrix of the edge representation) is marked as an edge (e.g. 1). Once any of the conditions is not fulfilled, the pixel is abandoned as a non-edge (marked 0). The procedure again has a single threshold parameter. [1] The success of this method can be verified by checking for a sufficiently high value of the gradient of the original image. Another test consists in effective control of the slope between the neighbouring extremes of the Laplacian image with regard to the expected sharp edges to be detected.

1.3.2 Gradient

In vector calculus, the gradient of a scalar field is a vector field which points in the direction of the greatest rate of increase of the scalar field, and whose magnitude is that greatest rate of change [9]. The gradient (or gradient vector field) of a scalar function $f(x_1, x_2, x_3, \ldots, x_n)$ is denoted $\nabla f$ or $\mathrm{grad}(f)$, where $\nabla$ (the nabla operator) denotes the vector differential operator. The gradient $\nabla f$ is defined to be the vector field
whose components are the partial derivatives of f. That is [9]:

∇f = ( ∂f/∂x_1, ..., ∂f/∂x_n )   (14)

Here the gradient is written as a row vector, but it is often taken to be a column vector. When a function also depends on a parameter such as time, the gradient often refers simply to the vector of its spatial derivatives only. The gradient of a vector f = (f_1, f_2, f_3) is [9]:

∇f = Σ_{i,j} (∂f_j/∂x_i) e_i e_j   (15)

or the transpose of the Jacobian matrix [9]:

∇f = ( ∂(f_1, f_2, f_3) / ∂(x_1, x_2, x_3) )^T   (16)

In image processing the gradient is often used as a pre-step for the edge representation. The gradient (row-wise or column-wise) is applied to an image and then a threshold is set. The values which are smaller (or bigger) than the current threshold are denoted as 1, the other values as 0. This method is described in detail in Chapter 1.3.1.
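The gradient-plus-threshold pre-step can be sketched as follows. This is a hedged Python illustration of the idea (the thesis uses MATLAB's gradient function); the relative-threshold choice `frac` is an assumption introduced here:

```python
import numpy as np

def gradient_binary(img, frac=0.5):
    """Row-wise/column-wise gradient followed by a threshold set
    relative to the maximum gradient magnitude, yielding the binary
    edge representation described above."""
    gy, gx = np.gradient(img.astype(float))   # per-axis partial derivatives
    mag = np.hypot(gx, gy)                    # gradient magnitude
    T = frac * mag.max()
    return (mag > T).astype(np.uint8)
```

Applied to an intensity step, only the columns around the step survive the threshold.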
Fig. Original OCT (SLO) image (left) and its gradient representation (right), created by the MATLAB gradient function

1.3.3 Median filtering noise reduction

Every real image is affected by some noise. This noise leads to inaccuracies in further image processing. For example, if edge detection is run on a very noisy image, the edge detector may detect false edges, i.e. edges caused by noise which are not real edges. There are several types of noise. For example, speckle noise is typical for images from ultrasound scanning devices; roughly speaking, this noise is caused by diffuse re-processing of signals from multiple distributed targets. For suppressing the noise, the median filter is often used. Generally speaking, the median filter removes extreme values. The median is computed as follows: first the elements are sorted by value, then the value which is exactly in the middle of the sorted vector is picked, and this value is called the median. If the vector has an even number of elements, the arithmetical mean of the two values in the middle is taken.

Example a) odd number of elements: for example, the vector v1 = [2 100 1 3 4] is sorted as v1 = [1 2 3 4 100]; the median is 3, whereas the arithmetical mean is 22.
Example b) even number of elements: for example, the vector v2 = [5 2 1 3 5 4] is sorted as v2 = [1 2 3 4 5 5]; the median is (3 + 4)/2 = 3.5, the arithmetical mean approximately 3.3 [10].

The median filter runs through the signal entry by entry, replacing each entry with the median of the neighbouring entries. The pattern of neighbours is called the window, which slides, entry by entry, over the entire image (or signal). The median operation is nonlinear: the median filter does not possess the superposition property, and traditional impulse response analysis is not strictly applicable. The impulse response of the median filter is zero.

Fig. The noisy original image (left), the image filtered by the median filter (right) [8]

1.3.4 Skeletonization

The skeleton [11] is important for recognition and representation of various shapes and features in an image. It contains the features and topological structure of the original image, but compared to the original image, the skeleton representation is a binary image: its values are only 1 and 0. The process of creating the skeleton image is called skeletonization, because its main goal is to find the object's skeleton. The skeleton usually emphasizes geometrical and topological properties of the shape, such as its connectivity, topology, length, direction, and width. Together with the distance of its points to the shape boundary, the skeleton can also serve as a representation of the shape (together they contain all the information necessary to reconstruct the shape) [11].
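A minimal sketch of the sliding-window median filter follows. This is a Python illustration of the principle (the thesis uses MATLAB's medfilt2; note that border handling is a free choice, and edge replication is used here as an assumption):

```python
import numpy as np

def median_filter(img, k=3):
    """Slide a k-by-k window over the image entry by entry and replace
    each pixel with the median of its window; borders are handled by
    edge replication."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty(img.shape, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.median(padded[i:i+k, j:j+k])
    return out
```

An isolated impulse is removed entirely, which is exactly the "zero impulse response" property mentioned above.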
Fig. The original image (left) and its skeleton representation (right) [8]

1.3.5 Correlation

Correlation is a mutual relationship between two variables or processes: if one of them changes, the second one changes correlatively. If two processes show a correlation, they are likely to depend on each other. The relationship between the variables can be positive, if y = k·x applies, or negative, if y = -k·x. A correlation coefficient of -1 indicates an inverse dependence (anti-correlation): the more the value in the first group of variables increases, the more the value in the second group decreases, such as the relationship between elapsed and remaining time. A correlation coefficient of +1 indicates a direct dependence, such as the relationship between the speed of a bicycle and the rotation frequency of its wheel. If the correlation coefficient is 0, there is no statistically detectable linear dependence between the variables. But even with a zero correlation coefficient the variables may depend on each other; it only means that the relationship cannot be expressed as a linear function. In this thesis, the variables A and B represent 100x100 px matrices of the original image (SLO image) and the correlated image (fundus image). The correlation coefficient is computed using [8]:
r = Σ_m Σ_n (A_mn - Ā)(B_mn - B̄) / sqrt( [Σ_m Σ_n (A_mn - Ā)²] · [Σ_m Σ_n (B_mn - B̄)²] )   (17)

where Ā is the mean of the values of matrix A and B̄ is the mean of the values of matrix B. The MATLAB function corr2 is used for computing the correlation coefficients in this thesis.

Fig. Image of a correlation coefficient matrix with the highest value marked (the red pixels represent values nearing 1, the blue pixels values nearing -1), which is used for finding corresponding image features in this thesis
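Equation (17) translates almost verbatim into code. The following Python sketch mirrors what MATLAB's corr2 computes (the Python function name is introduced here for illustration):

```python
import numpy as np

def corr2(A, B):
    """2-D correlation coefficient of two equally sized matrices,
    following equation (17)."""
    A = A - A.mean()   # subtract the mean of A
    B = B - B.mean()   # subtract the mean of B
    return (A * B).sum() / np.sqrt((A**2).sum() * (B**2).sum())
```

As expected from the discussion above, identical matrices give +1, sign-flipped matrices give -1, and any positive linear rescaling leaves the coefficient at +1.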
1.4 Image registration

1.4.1 Background information [6]

Image registration is the process of overlaying two or more images of the same scene. These images are captured at different times, from different angles, by different devices or from different locations. Thanks to this, it is possible to reconstruct or highlight necessary information which cannot be adequately obtained from one image alone. Image registration is a crucial step in those image analysis tasks in which the final information is obtained by combining data from different sources, such as image fusion, change detection, and multichannel image restoration. Typically, registration is required in remote sensing (environmental monitoring, weather forecasting, creating super-resolution images, integrating information into geographic information systems (GIS)), in medicine (combining computed tomography (CT) and NMR data to obtain more complete information about the patient, monitoring tumour growth, verification of treatment), etc. Generally speaking, image registration can be divided into four subcategories according to the manner of image acquisition.

1.4.2 Image acquisition methods

Registration of images taken from various points (multi-view analysis)

Images of the same scene are taken from various points (places). The aim is to get a bigger 2D view or a 3D representation of the scene. It is used for example in remote sensing from satellites or in computer vision (face recognition).

Registration of images taken at different times (multi-temporal analysis)

Images of the same scene are recorded several times over time, from the same position, with only the environmental conditions changing (light, seasons, etc.). The aim is to find changes that appear on an object recorded over time (e.g. decomposition of a material, monitoring the growth of tumours, etc.).

Registration of images with different scanning techniques (multi-modal analysis)

Images of the same scene are taken by different sensors. The aim is to combine
information from different sources in order to obtain a more comprehensive and detailed representation of the visual scene. For example, different combinations of sensors for recording the anatomy of the human body (MRI, ultrasound or CT), or the combination of OCT and fundus camera images to detect diseases of the eye.

Registration of the model and the scene

In this case a model of the scene is registered with the real image of the scene. The model may be a computer template of the scene, an environment map, etc. The aim is to align the obtained image with the model and compare them. This is used for example in remote sensing from satellites to create maps, or layers in a GIS (geographic information system). In medicine it is used, for example, for comparing the state of the patient's organs with models of the corresponding healthy organs.

1.4.3 Most common procedure of image registration

Given the diversity of the captured images which are required to be registered, it is not possible to use one universal method that would suit every registration requirement. Therefore, all of these methods find their application in practice. The most common procedure for image registration consists of the following steps [6]:

1. Feature detection. Salient and distinctive objects (closed-boundary regions, edges, contours, line intersections, corners, etc.) are manually or automatically detected. For further processing, these features can be represented by their point representatives (centres of gravity, line endings, distinctive points), which are called control points (CPs) in the literature.

2. Feature matching. In this step, the correspondence between the features detected in the sensed image and those detected in the reference image is established. Various feature descriptors and similarity measures, along with spatial relationships among the features, are used for that purpose.
3. Transform model estimation. In this step, the type and parameters of the so-called mapping functions, which align the scanned image with the reference image, are estimated. The parameters of the mapping functions are calculated from the established feature correspondences between both images.

4. Image resampling and transformation. The scanned image is transformed by means of the mapping functions. Image values at non-integer coordinates are computed by an appropriate interpolation technique.

1.4.4 Spatial transformation types

The spatial transform [12] represents the spatial mapping of pixel values from the moving image space to points in the fixed image space. The type of the transform is chosen in order to compensate for the distortion between the images. There can be several types of distortion due to sequential image scanning; there can also be distortions caused by eye movement during the scanning process [8].

There are many transformation types for image registration. Because the practical part of this thesis is implemented in MATLAB, MATLAB's cp2tform.m function was used. This function takes pairs of control points and uses them to infer a spatial transformation. The input points form a matrix containing the x and y coordinates of control points in the image which is transformed. The base points form a matrix containing the x and y coordinates of control points specified in the base image. The transform type specifies the type of spatial transformation to infer. Only the transformation types which are relevant to this thesis are mentioned below.

Non-reflective similarity [8]

This type of transformation is used when shapes in the input image are unchanged, but the image is distorted by some combination of translation, rotation, and scaling. Straight lines remain straight, and parallel lines are still parallel. A non-reflective similarity transformation can include rotation, scaling and translation. At least two pairs of control points are needed.
The transformation equation is:

[u v] = [x y 1] * [ s·cos(θ)  -s·sin(θ) ;  s·sin(θ)  s·cos(θ) ;  t_x  t_y ]   (18)

where (x, y) are control-point coordinates in the output space, (u, v) are control-point coordinates in the input space, s is the scaling, t_x and t_y are the translations along the x and y axes, and θ is the angle of rotation.

Similarity [8]

The similarity type of transformation has the same features as the non-reflective similarity type, with the addition of an optional reflection. Thus, this type of transformation can include rotation, scaling, translation and reflection. At least three pairs of control points are needed. The transformation equation is:

[u v] = [x y 1] * [ s·cos(θ)  -a·s·sin(θ) ;  s·sin(θ)  a·s·cos(θ) ;  t_x  t_y ]   (19)

where a determines whether there is a reflection (for a = -1 reflection is included, for a = 1 reflection is not included in the transformation).

Affine [8]

The affine transformation is used when shapes in the input image exhibit shearing. The x and y dimensions can be scaled or sheared independently, and there can be a translation. Linear conformal transformations are a subset of affine transformations. Straight lines remain straight, and parallel lines remain parallel, but rectangles become parallelograms. At least three control-point pairs are needed to solve for the six unknown coefficients (equation 20).
The equation is [8]:

[u v] = [x y 1] * [ A D ; B E ; C F ]   (20)

where A, B, C, D, E, F are the unknown coefficients. Equation (20) can be rewritten as:

U = X · T   (21)

where T is the matrix of the 6 unknowns (A, B, C, D, E, F), X is the matrix of the input control-point pairs and U is the matrix of the [u v] pairs. With three or more correspondence points it is possible to solve for T as T = X \ U (in the least-squares sense), which gives the first two columns of the full 3x3 transformation matrix; its third column must be [0 0 1]^T.

Projective [8]

This type of transformation can be used when the scene appears tilted. Quadrilaterals map to quadrilaterals, and straight lines remain straight. Affine transformations are a subset of projective transformations. At least four control-point pairs are needed. The equation for this transformation is:

[u' v' w'] = [x y w] * [ A D G ; B E H ; C F I ]   (22)

where u' = u·w' and v' = v·w', and T is the 3x3 matrix of the unknown elements (A, B, ..., I) of equation (22). Assuming

u = (Ax + By + C)/(Gx + Hy + I),   v = (Dx + Ey + F)/(Gx + Hy + I)

it is possible to solve for the nine elements of T. The detailed procedure can be found in [8].
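The least-squares solution of U = X·T from equation (21) can be sketched as follows. This is a Python/NumPy illustration of the estimation step that cp2tform performs for the affine case (the function names are introduced here, not taken from the thesis):

```python
import numpy as np

def fit_affine(xy, uv):
    """Solve U = X*T (eq. (21)) in the least-squares sense for the six
    affine coefficients, using the row-vector convention
    [u v] = [x y 1] * T from equation (20). Needs >= 3 point pairs."""
    X = np.column_stack([xy[:, 0], xy[:, 1], np.ones(len(xy))])
    T, *_ = np.linalg.lstsq(X, uv, rcond=None)   # T is 3x2: [[A, D], [B, E], [C, F]]
    return T

def apply_affine(T, xy):
    """Map points with the estimated transformation."""
    X = np.column_stack([xy[:, 0], xy[:, 1], np.ones(len(xy))])
    return X @ T
```

With exact (noise-free) correspondences and at least three non-collinear points, the original coefficients are recovered.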
Polynomial [8]

This transformation type is needed when objects in the image are curved. The higher the order of the polynomial, the better the fit, but the result can contain more curves than the base image. Polynomial functions of x and y determine the mapping. For the second order, at least six control-point pairs are needed; for the third order, at least ten. The following equation is for the second-order polynomial function:

[u v] = [1 x y x·y x² y²] * [ A G ; B H ; C I ; D J ; E K ; F L ]   (23)

Considering the fact that the polynomial transformation type is not used in this thesis, further details are not given here; they can be found in [8].
2 PRACTICAL APPLICATION

Part of this thesis is to design an application in MATLAB for image registration according to the thesis assignment. The first step was to create a program for pre-processing the images from the fundus camera and the SLO image.

2.1 Pre-processing of fundus and SLO images

The pre-processing of the fundus image is performed by the fce_cteni_fundus_man.m function in MATLAB for manual registration and by fce_cteni_fundus.m for automatic registration.

2.1.1 Input image

First of all, the fundus image is loaded. Because fundus images are often taken through a red filter, it is necessary to remove the red (R) component of the RGB image. When the red component is not removed and the RGB fundus image is transformed to a grayscale image, the image becomes very bright and lacks contrast. That is why only the green and blue components of the RGB image are used.

2.1.2 Pre-processing

The image is divided by 255 to get the image values between 0 and 1. The next step is to find the edges of the eye in the fundus image. This is performed by finding big differences between the values of two neighbouring pixels. After the edges are found, the image is cut to show only the square of the eye. When the registration program starts, the user is asked whether to keep the original round shape of the eye or to use the inner square of the eye (figure 2.2). If the user presses 1, the round shape is used (figure 2.6); otherwise the inner square of the eye is cut and used (figure 2.5).

2.1.3 Output image

The output fundus image is a square grayscale image with only the G and B components of the RGB spectrum, resized to 1000x1000 pixels. The image is shown in figure 2.2. The SLO image is not modified, only resized to 1000x1000 pixels.
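The channel-handling part of this pre-processing can be sketched as follows. This is a hedged Python illustration (the thesis's actual implementation is the MATLAB function named above); averaging the G and B channels is an assumption here, since the text only states that these two components are used:

```python
import numpy as np

def preprocess_fundus(rgb):
    """Sketch of the fundus pre-processing: the red channel is
    discarded, a grayscale image is built from the G and B components
    only, and the values are scaled into [0, 1] by dividing by 255.
    The equal-weight average of G and B is an illustrative choice."""
    g = rgb[:, :, 1].astype(float)
    b = rgb[:, :, 2].astype(float)
    return (g + b) / 2.0 / 255.0
```

A saturated red channel has no influence on the result, which is exactly the point of dropping the R component.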
Fig. 2.1 Fundus image with highlighted edges

Fig. 2.2 Final form of the pre-processed fundus image
2.2 Manual registration of fundus and SLO images

In this step, the SLO image and the fundus image are registered. First of all, the number of registration points is set. The default setting is 4 (because of the used spatial transformation method), and the spatial transformation method is affine (for details see chapter 1.4.4).

2.2.1 Choosing the input points

The input points have to be chosen very carefully. It is good to mark the most significant points in the image (for example vessel crossings, branchings of vessels, etc.). It is not recommended to mark points in the blind spot (loco caecus), because in the fundus image the blind spot is represented mostly by white points, whereas in the SLO image it is represented mostly by black points, and thus it is difficult to choose exactly the same points.

2.2.2 Choosing the reference (base) points

The base points have to match the input points exactly (in the significance of the points), otherwise the registration result would not be accurate. This means the exact same crossings have to be marked in the input and reference images (see figure 2.3).

Fig. 2.3 Choosing the input and reference (base) points of the fundus image (left) and the SLO image (right)

2.2.3 Registration result

After the spatial transformation, which is provided by the cp2tform function (for details see
chapter 1.4.4), the images are registered, as figure 2.4 shows. The fundus image represents a smaller area than the SLO image; that is why the fundus image is registered to the SLO image and not vice versa. The left image in figure 2.4 shows the connection of the vessels at the edge of the fundus image, whereas the right image shows the overlap of both images. A better view of the registration result is provided by figure 2.5. For this image, the function MakeChessBoard.m was used, which was created by Doc. Kolář of the Department of Biomedical Engineering of the Faculty of Electrical Engineering and Communication of Brno University of Technology. This function creates a chessboard from both images; the size of each window of the chessboard is 100x100 pixels. Figure 2.5 shows how well the vessels of both images are connected in the different parts of the image, i.e. how far the registration was successful. The possible error is caused by the user not selecting exactly corresponding points in both images.

Fig. 2.4 Registration result: vessel connection (left), vessel coverage (right)
Fig. 2.5 The chessboard representation of the result using the inner square of the fundus image

Fig. 2.6 The chessboard representation of the result using the original round shape of the eye
2.2.4 Manual registration block diagram

Fig. 2.7 Manual registration block diagram
2.3 Automatic registration of fundus and SLO images

In automatic registration, the input and base points of the images are selected automatically. The first step is the same as in manual registration: the images have to be pre-processed by the functions mentioned in chapter 2.1.

Choosing the reference points

The first step is to automatically select the significant points which will be the input points. This is done by the following procedure. The gradient of the reference image is created (more details about the gradient can be found in chapter 1.3.2). A threshold is set according to the maximal intensity of the image; the values above this threshold are set to 1 and those below it to 0. In this way, a binary representation of the reference image (SLO) is created. To find the required input points, it is necessary to thin the vessels and other structures represented in the binary image as ones. This can be done by applying skeletonization to the reference image (chapter 1.3.4); in this program the skeletonization is performed by MATLAB's bwmorph.m function. After the skeletonization the image can be very noisy, so it is recommended to filter it by median filtering (chapter 1.3.3); for this, the medfilt2.m function is used. Then different regions of the reference image are searched to find significant points (such as vessels, vessel crossings, etc.). Four areas of approximately 100x100 pixels are searched, and the algorithm marks the points which neighbour each other. The algorithm for searching one area looks like this:

for i = 100:200
    for j = 300:400
        if (((oct_skelet(i,j) == 1) && (oct_skelet(i+2,j) == 1)) || ...
            ((oct_skelet(i,j) == 1) && (oct_skelet(i,j+1) == 1))) && sw == 1
            cnt = cnt + 1;
            if cnt == round(cnt1/2)
                sw = 0;
                plot(j,i,'g+');
                oct_points(1,2) = j - win_size/2;
                oct_points(1,1) = i - win_size/2;
            end
        end
    end
end
Fig. 2.8 The binary image with marked reference points (red pluses)

Finding the similar windows

After the points are found, 100x100 pixel windows of the SLO image are cut around them; the four found points are always in the middle of the cut windows. In the next step, these windows are compared by correlation (chapter 1.3.5) with all areas of the fundus image. Depending on where the correlation coefficient is maximal, the most similar window is found. The window size of 100x100 pixels was chosen because it is the best compromise between the speed of the algorithm and the ability to find an exact match. The user can change the size of the window in the program: if the window is bigger, the algorithm is faster, but it may have difficulties finding exact matches. The algorithm which performs the correlation looks like this:
for i = 1:step:X-win_size
    for j = 1:step:Y-win_size
        if counter2 == Y/step
            counter2 = 0;
            counter = counter + 1;
        end
        counter2 = counter2 + 1;
        % picking up the correlation coefficients
        cor_coef(counter,counter2) = corr2(oct(oct_points(cyk,1):oct_points(cyk,1)+win_size-1, ...
            oct_points(cyk,2):oct_points(cyk,2)+win_size-1), ...
            fundus(i:i+win_size-1, j:j+win_size-1));
    end
end

Fig. 2.9 The window selected by the algorithm in the SLO image (right), and the window found by correlation in the fundus image (left)
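The exhaustive correlation search that the MATLAB loop above performs can be sketched more compactly in Python (with step = 1 and without the bookkeeping counters; `best_match` and `_corr2` are names introduced here for illustration, not functions from the thesis):

```python
import numpy as np

def _corr2(A, B):
    """2-D correlation coefficient (cf. eq. (17)); 0 for flat windows."""
    A = A - A.mean(); B = B - B.mean()
    d = np.sqrt((A**2).sum() * (B**2).sum())
    return 0.0 if d == 0 else (A * B).sum() / d

def best_match(template, image):
    """Slide `template` over `image` and return the top-left corner
    with the highest correlation coefficient (the coarse matching
    step; the thesis then refines the result with a second, smaller
    20x20 search)."""
    th, tw = template.shape
    H, W = image.shape
    best, pos = -2.0, (0, 0)
    for i in range(H - th + 1):
        for j in range(W - tw + 1):
            r = _corr2(template, image[i:i+th, j:j+tw])
            if r > best:
                best, pos = r, (i, j)
    return pos, best
```

Cutting a template out of an image and searching for it recovers the original position with a coefficient of essentially 1.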
Fig. 2.10 Detail of the original window (right, SLO) and the found window (left, fundus)

Finding the minimal value of found windows

Because the size of the correlation window is relatively big, there can be a relatively big error in the correlation, and the spatial transform algorithm needs very precise matches in both images. That is why another correlation is needed. In the small 100x100 pixel SLO window, the minimal value is chosen (figure 2.10), and another smaller window (20x20 pixels) surrounding the minimal value is created. This 20x20 pixel window is again correlated with 20x20 pixel windows inside the 100x100 pixel fundus window (figure 2.11). This operation is repeated four times, once in each found 100x100 window (because at least 4 points are needed for the spatial transformation). Thanks to this, the found points match the reference points more closely.

Fig. 2.11 100x100 pixel window of the SLO image with the minimal value marked
Fig. 2.12 100x100 pixel window with a marked 20x20 pixel window surrounding the minimal value (right) and its marked match (left) in the 100x100 pixel fundus window

Automatic registration result

All needed points are now found and set, so the next step is the spatial transformation. The results of the automatic image registration are very similar to those of manual registration, but in some cases, for example when the images are damaged or heavily distorted, it is better to use manual registration. If figures 2.14 and 2.5 are compared, it is possible to see that manual registration gives slightly better results in the top right corner of the result image. It is possible to enhance the automatic registration by choosing smaller correlation windows or by lowering the thresholds for the binary representation. Another option is to try a different type of spatial transformation, but this turned out not to be so effective. The evaluation of the automatic registration is in supplement A; if the average deviation is higher than 2 %, the registration was not successful. The best results are provided by the so-called semi-automatic registration.

Evaluation of automatic registration result

The evaluation of the automatic registration result can be found in supplement A. For the evaluation, the manual and automatic registrations were used, with the manual registration as the reference. First of all, the automatic registration result is subtracted from the manual
registration result. If there is a deviation at the same pixel in both images, its absolute value is transformed to a percentage, and then the average and maximum deviations are found. If the average deviation is bigger than 2 %, the registration was not successful. The maximal deviation determines the biggest error in the registration result, but it does not determine whether the registration was successful or not. The equation for computing the deviation can be written as:

C = |A - B|   (24)

where C is the result matrix of the subtracted images, A is the matrix of the manual registration result and B is the matrix of the automatic registration result.
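Equation (24) and the derived statistics can be sketched as follows. This Python illustration assumes the registered images are scaled to [0, 1] (as in the pre-processing step, where the values are divided by 255), so that a difference of 0.01 corresponds to a 1 % deviation; the function name is introduced here:

```python
import numpy as np

def registration_deviation(A, B):
    """Evaluate an automatic registration B against the manual
    reference A: C = |A - B| (eq. (24)), expressed in per cent of the
    [0, 1] intensity range; returns (average, maximum) deviation."""
    C = np.abs(A.astype(float) - B.astype(float))
    pct = 100.0 * C
    return pct.mean(), pct.max()
```

By the criterion above, an average deviation at or below 2 % would count as a successful registration.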
Fig. 2.13 Registration result: vessel connection (left), vessel coverage (right)

Fig. 2.14 The chessboard representation of the automatic registration result
2.3.5 Automatic registration block diagram

Fig. 2.15 Automatic registration block diagram
2.4 Semi-automatic registration of fundus and SLO images

2.4.1 Semi-automatic registration result

This type of registration combines the advantages of the two previous types (chapter 2.2 and chapter 2.3). The user manually selects the reference points in the SLO image, and the input points are found automatically. Thanks to this, the match of the points is nearly 100 %, because the error of the user not selecting the exact points is eliminated. It is recommended to mark points which represent vessel crossings or thick vessels. The user only has to be aware of the limits of the fundus image, which represents a smaller area than the SLO. Because the semi-automatic registration is a combination of the two previous methods, the procedure is practically the same and will not be described again. This method is the most accurate.

Fig. 2.16 Selecting the reference points in the SLO image
Fig. 2.17 Registration result of semi-automatic registration: vessel connection (left), vessel coverage (right)

Fig. 2.18 The chessboard representation of the semi-automatic registration result
2.4.2 Semi-automatic registration block diagram

Fig. 2.19 Semi-automatic registration block diagram
2.5 The fundus image and OCT B-scan matching

The final task was to match the fundus image with the OCT B-scans. Each B-scan represents approximately one line of the fundus image; B-scans are in sectional view (see figure 1.6). From these B-scans it is possible to reconstruct an image which is similar to the fundus image, and thus it is possible to register these images.

Pre-processing of OCT and fundus images

For successful matching, the pre-processing is crucial. First of all, the B-scans have to be transformed to another view. For this transformation the function vyrovnani_upravene.m is used; it was made by Ing. Gazárek of the Department of Biomedical Engineering of the Faculty of Electrical Engineering and Communication of Brno University of Technology. In this function the lower edge of the retina is detected (see figure 2.21). Then about 10 pixels below each pixel of the edge are picked and averaged. This average represents one line of the reconstructed image, which is similar to the fundus or SLO image (see figure 2.22).

Fig. 2.20 OCT B-scan
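The averaging step that produces one reconstructed line per B-scan can be sketched as follows. This is a hedged Python illustration (the thesis uses the MATLAB function named above); the data layout (scans x depth x width) and the pre-computed per-column edge positions `edge_rows` are assumptions introduced here:

```python
import numpy as np

def enface_from_bscans(bscans, edge_rows, band=10):
    """Sketch of the fundus-like reconstruction: for each B-scan,
    average a band of ~10 pixels below the detected lower retinal
    edge in every column. Each B-scan then yields one line of the
    reconstructed (en-face) image. `edge_rows[s, c]` gives the edge
    row of column c in scan s, assumed to come from a separate
    edge-detection step."""
    n_scans, depth, width = bscans.shape
    out = np.zeros((n_scans, width))
    for s in range(n_scans):
        for c in range(width):
            r = edge_rows[s, c]
            out[s, c] = bscans[s, r:min(r + band, depth), c].mean()
    return out
```

On a synthetic volume with a constant band below the edge, every line of the reconstruction takes exactly that value.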
More informationCHAPTER 3 RETINAL OPTIC DISC SEGMENTATION
60 CHAPTER 3 RETINAL OPTIC DISC SEGMENTATION 3.1 IMPORTANCE OF OPTIC DISC Ocular fundus images provide information about ophthalmic, retinal and even systemic diseases such as hypertension, diabetes, macular
More informationModeling the top neuron layers of the human retina. Laurens Koppenol ( ) Universiteit Utrecht
Modeling the top neuron layers of the human retina Laurens Koppenol (3245578) 23-06-2013 Universiteit Utrecht 1 Contents Introduction... 3 Background... 4 Photoreceptors... 4 Scotopic vision... 5 Photopic
More informationMEASUREMENT OF THE WAVELENGTH WITH APPLICATION OF A DIFFRACTION GRATING AND A SPECTROMETER
Warsaw University of Technology Faculty of Physics Physics Laboratory I P Irma Śledzińska 4 MEASUREMENT OF THE WAVELENGTH WITH APPLICATION OF A DIFFRACTION GRATING AND A SPECTROMETER 1. Fundamentals Electromagnetic
More informationChapter 2: Wave Optics
Chapter : Wave Optics P-1. We can write a plane wave with the z axis taken in the direction of the wave vector k as u(,) r t Acos tkzarg( A) As c /, T 1/ and k / we can rewrite the plane wave as t z u(,)
More informationLecture 6: Edge Detection
#1 Lecture 6: Edge Detection Saad J Bedros sbedros@umn.edu Review From Last Lecture Options for Image Representation Introduced the concept of different representation or transformation Fourier Transform
More informationLocal Image preprocessing (cont d)
Local Image preprocessing (cont d) 1 Outline - Edge detectors - Corner detectors - Reading: textbook 5.3.1-5.3.5 and 5.3.10 2 What are edges? Edges correspond to relevant features in the image. An edge
More informationENY-C2005 Geoinformation in Environmental Modeling Lecture 4b: Laser scanning
1 ENY-C2005 Geoinformation in Environmental Modeling Lecture 4b: Laser scanning Petri Rönnholm Aalto University 2 Learning objectives To recognize applications of laser scanning To understand principles
More informationEdge Detection (with a sidelight introduction to linear, associative operators). Images
Images (we will, eventually, come back to imaging geometry. But, now that we know how images come from the world, we will examine operations on images). Edge Detection (with a sidelight introduction to
More informationComputer Vision. The image formation process
Computer Vision The image formation process Filippo Bergamasco (filippo.bergamasco@unive.it) http://www.dais.unive.it/~bergamasco DAIS, Ca Foscari University of Venice Academic year 2016/2017 The image
More informationDigital Image Processing COSC 6380/4393
Digital Image Processing COSC 6380/4393 Lecture 21 Nov 16 th, 2017 Pranav Mantini Ack: Shah. M Image Processing Geometric Transformation Point Operations Filtering (spatial, Frequency) Input Restoration/
More informationLenses: Focus and Defocus
Lenses: Focus and Defocus circle of confusion A lens focuses light onto the film There is a specific distance at which objects are in focus other points project to a circle of confusion in the image Changing
More informationRange Sensors (time of flight) (1)
Range Sensors (time of flight) (1) Large range distance measurement -> called range sensors Range information: key element for localization and environment modeling Ultrasonic sensors, infra-red sensors
More informationRepresenting the World
Table of Contents Representing the World...1 Sensory Transducers...1 The Lateral Geniculate Nucleus (LGN)... 2 Areas V1 to V5 the Visual Cortex... 2 Computer Vision... 3 Intensity Images... 3 Image Focusing...
More informationDraft SPOTS Standard Part III (7)
SPOTS Good Practice Guide to Electronic Speckle Pattern Interferometry for Displacement / Strain Analysis Draft SPOTS Standard Part III (7) CALIBRATION AND ASSESSMENT OF OPTICAL STRAIN MEASUREMENTS Good
More informationDigital Image Processing
Digital Image Processing Third Edition Rafael C. Gonzalez University of Tennessee Richard E. Woods MedData Interactive PEARSON Prentice Hall Pearson Education International Contents Preface xv Acknowledgments
More informationNeurophysical Model by Barten and Its Development
Chapter 14 Neurophysical Model by Barten and Its Development According to the Barten model, the perceived foveal image is corrupted by internal noise caused by statistical fluctuations, both in the number
More informationProduct information. Hi-Tech Electronics Pte Ltd
Product information Introduction TEMA Motion is the world leading software for advanced motion analysis. Starting with digital image sequences the operator uses TEMA Motion to track objects in images,
More informationCHAPTER 2: THREE DIMENSIONAL TOPOGRAPHICAL MAPPING SYSTEM. Target Object
CHAPTER 2: THREE DIMENSIONAL TOPOGRAPHICAL MAPPING SYSTEM 2.1 Theory and Construction Target Object Laser Projector CCD Camera Host Computer / Image Processor Figure 2.1 Block Diagram of 3D Areal Mapper
More informationApplications of Piezo Actuators for Space Instrument Optical Alignment
Year 4 University of Birmingham Presentation Applications of Piezo Actuators for Space Instrument Optical Alignment Michelle Louise Antonik 520689 Supervisor: Prof. B. Swinyard Outline of Presentation
More information11. Image Data Analytics. Jacobs University Visualization and Computer Graphics Lab
11. Image Data Analytics Motivation Images (and even videos) have become a popular data format for storing information digitally. Data Analytics 377 Motivation Traditionally, scientific and medical imaging
More informationAnno accademico 2006/2007. Davide Migliore
Robotica Anno accademico 6/7 Davide Migliore migliore@elet.polimi.it Today What is a feature? Some useful information The world of features: Detectors Edges detection Corners/Points detection Descriptors?!?!?
More informationPerception. Autonomous Mobile Robots. Sensors Vision Uncertainties, Line extraction from laser scans. Autonomous Systems Lab. Zürich.
Autonomous Mobile Robots Localization "Position" Global Map Cognition Environment Model Local Map Path Perception Real World Environment Motion Control Perception Sensors Vision Uncertainties, Line extraction
More informationTwo slit interference - Prelab questions
Two slit interference - Prelab questions 1. Show that the intensity distribution given in equation 3 leads to bright and dark fringes at y = mλd/a and y = (m + 1/2) λd/a respectively, where m is an integer.
More informationLecture 16: Geometrical Optics. Reflection Refraction Critical angle Total internal reflection. Polarisation of light waves
Lecture 6: Geometrical Optics Reflection Refraction Critical angle Total internal reflection Polarisation of light waves Geometrical Optics Optics Branch of Physics, concerning the interaction of light
More informationImage Formation. Antonino Furnari. Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania
Image Formation Antonino Furnari Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania furnari@dmi.unict.it 18/03/2014 Outline Introduction; Geometric Primitives
More informationSobel Edge Detection Algorithm
Sobel Edge Detection Algorithm Samta Gupta 1, Susmita Ghosh Mazumdar 2 1 M. Tech Student, Department of Electronics & Telecom, RCET, CSVTU Bhilai, India 2 Reader, Department of Electronics & Telecom, RCET,
More informationDetection of Edges Using Mathematical Morphological Operators
OPEN TRANSACTIONS ON INFORMATION PROCESSING Volume 1, Number 1, MAY 2014 OPEN TRANSACTIONS ON INFORMATION PROCESSING Detection of Edges Using Mathematical Morphological Operators Suman Rani*, Deepti Bansal,
More informationCOSC579: Scene Geometry. Jeremy Bolton, PhD Assistant Teaching Professor
COSC579: Scene Geometry Jeremy Bolton, PhD Assistant Teaching Professor Overview Linear Algebra Review Homogeneous vs non-homogeneous representations Projections and Transformations Scene Geometry The
More informationComputational Perception. Visual Coding 3
Computational Perception 15-485/785 February 21, 2008 Visual Coding 3 A gap in the theory? - - + - - from Hubel, 1995 2 Eye anatomy from Hubel, 1995 Photoreceptors: rods (night vision) and cones (day vision)
More informationDigital Image Processing COSC 6380/4393
Digital Image Processing COSC 6380/4393 Lecture 4 Jan. 24 th, 2019 Slides from Dr. Shishir K Shah and Frank (Qingzhong) Liu Digital Image Processing COSC 6380/4393 TA - Office: PGH 231 (Update) Shikha
More informationSIMULATION AND VISUALIZATION IN THE EDUCATION OF COHERENT OPTICS
SIMULATION AND VISUALIZATION IN THE EDUCATION OF COHERENT OPTICS J. KORNIS, P. PACHER Department of Physics Technical University of Budapest H-1111 Budafoki út 8., Hungary e-mail: kornis@phy.bme.hu, pacher@phy.bme.hu
More informationColor and Shading. Color. Shapiro and Stockman, Chapter 6. Color and Machine Vision. Color and Perception
Color and Shading Color Shapiro and Stockman, Chapter 6 Color is an important factor for for human perception for object and material identification, even time of day. Color perception depends upon both
More informationCritique: Efficient Iris Recognition by Characterizing Key Local Variations
Critique: Efficient Iris Recognition by Characterizing Key Local Variations Authors: L. Ma, T. Tan, Y. Wang, D. Zhang Published: IEEE Transactions on Image Processing, Vol. 13, No. 6 Critique By: Christopher
More informationAssignment 3: Edge Detection
Assignment 3: Edge Detection - EE Affiliate I. INTRODUCTION This assignment looks at different techniques of detecting edges in an image. Edge detection is a fundamental tool in computer vision to analyse
More informationChapter 37. Interference of Light Waves
Chapter 37 Interference of Light Waves Wave Optics Wave optics is a study concerned with phenomena that cannot be adequately explained by geometric (ray) optics These phenomena include: Interference Diffraction
More informationKeywords: Thresholding, Morphological operations, Image filtering, Adaptive histogram equalization, Ceramic tile.
Volume 3, Issue 7, July 2013 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Blobs and Cracks
More informationPractical Image and Video Processing Using MATLAB
Practical Image and Video Processing Using MATLAB Chapter 14 Edge detection What will we learn? What is edge detection and why is it so important to computer vision? What are the main edge detection techniques
More informationUNIT 102-9: INTERFERENCE AND DIFFRACTION
Name St.No. - Date(YY/MM/DD) / / Section Group # UNIT 102-9: INTERFERENCE AND DIFFRACTION Patterns created by interference of light in a thin film. OBJECTIVES 1. Understand the creation of double-slit
More informationImage Processing Fundamentals. Nicolas Vazquez Principal Software Engineer National Instruments
Image Processing Fundamentals Nicolas Vazquez Principal Software Engineer National Instruments Agenda Objectives and Motivations Enhancing Images Checking for Presence Locating Parts Measuring Features
More information3D Modeling of Objects Using Laser Scanning
1 3D Modeling of Objects Using Laser Scanning D. Jaya Deepu, LPU University, Punjab, India Email: Jaideepudadi@gmail.com Abstract: In the last few decades, constructing accurate three-dimensional models
More informationPHY 222 Lab 11 Interference and Diffraction Patterns Investigating interference and diffraction of light waves
PHY 222 Lab 11 Interference and Diffraction Patterns Investigating interference and diffraction of light waves Print Your Name Print Your Partners' Names Instructions April 17, 2015 Before lab, read the
More informationLens Design I. Lecture 11: Imaging Herbert Gross. Summer term
Lens Design I Lecture 11: Imaging 2015-06-29 Herbert Gross Summer term 2015 www.iap.uni-jena.de 2 Preliminary Schedule 1 13.04. Basics 2 20.04. Properties of optical systrems I 3 27.05. 4 04.05. Properties
More informationBroad field that includes low-level operations as well as complex high-level algorithms
Image processing About Broad field that includes low-level operations as well as complex high-level algorithms Low-level image processing Computer vision Computational photography Several procedures and
More informationGeometry-Based Optic Disk Tracking in Retinal Fundus Videos
Geometry-Based Optic Disk Tracking in Retinal Fundus Videos Anja Kürten, Thomas Köhler,2, Attila Budai,2, Ralf-Peter Tornow 3, Georg Michelson 2,3, Joachim Hornegger,2 Pattern Recognition Lab, FAU Erlangen-Nürnberg
More informationCSE 167: Lecture #6: Color. Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2011
CSE 167: Introduction to Computer Graphics Lecture #6: Color Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2011 Announcements Homework project #3 due this Friday, October 14
More informationChapter 7: Geometrical Optics
Chapter 7: Geometrical Optics 7. Reflection at a Spherical Surface L.O 7.. State laws of reflection Laws of reflection state: L.O The incident ray, the reflected ray and the normal all lie in the same
More informationMeasurements using three-dimensional product imaging
ARCHIVES of FOUNDRY ENGINEERING Published quarterly as the organ of the Foundry Commission of the Polish Academy of Sciences ISSN (1897-3310) Volume 10 Special Issue 3/2010 41 46 7/3 Measurements using
More informationBabu Madhav Institute of Information Technology Years Integrated M.Sc.(IT)(Semester - 7)
5 Years Integrated M.Sc.(IT)(Semester - 7) 060010707 Digital Image Processing UNIT 1 Introduction to Image Processing Q: 1 Answer in short. 1. What is digital image? 1. Define pixel or picture element?
More informationUnit - I Computer vision Fundamentals
Unit - I Computer vision Fundamentals It is an area which concentrates on mimicking human vision systems. As a scientific discipline, computer vision is concerned with the theory behind artificial systems
More informationCapturing, Modeling, Rendering 3D Structures
Computer Vision Approach Capturing, Modeling, Rendering 3D Structures Calculate pixel correspondences and extract geometry Not robust Difficult to acquire illumination effects, e.g. specular highlights
More informationVisual Distortions in Macular Degeneration: Quantitative Diagnosis and Correction
Visual Distortions in Macular Degeneration: Quantitative Diagnosis and Correction Walter Kohn, Professor Emeritus of Physics & Chemistry, UC Santa Barbara Jim Klingshirn, Consulting Engineer, Santa Barbara
More informationComputer Vision I - Image Matching and Image Formation
Computer Vision I - Image Matching and Image Formation Carsten Rother 10/12/2014 Computer Vision I: Image Formation Process Computer Vision I: Image Formation Process 10/12/2014 2 Roadmap for next five
More informationIRIS SEGMENTATION OF NON-IDEAL IMAGES
IRIS SEGMENTATION OF NON-IDEAL IMAGES William S. Weld St. Lawrence University Computer Science Department Canton, NY 13617 Xiaojun Qi, Ph.D Utah State University Computer Science Department Logan, UT 84322
More informationEdge and corner detection
Edge and corner detection Prof. Stricker Doz. G. Bleser Computer Vision: Object and People Tracking Goals Where is the information in an image? How is an object characterized? How can I find measurements
More informationCS4733 Class Notes, Computer Vision
CS4733 Class Notes, Computer Vision Sources for online computer vision tutorials and demos - http://www.dai.ed.ac.uk/hipr and Computer Vision resources online - http://www.dai.ed.ac.uk/cvonline Vision
More informationDigital Image Processing
Digital Image Processing SPECIAL TOPICS CT IMAGES Hamid R. Rabiee Fall 2015 What is an image? 2 Are images only about visual concepts? We ve already seen that there are other kinds of image. In this lecture
More informationImage Processing: Final Exam November 10, :30 10:30
Image Processing: Final Exam November 10, 2017-8:30 10:30 Student name: Student number: Put your name and student number on all of the papers you hand in (if you take out the staple). There are always
More informationSURVEY ON IMAGE PROCESSING IN THE FIELD OF DE-NOISING TECHNIQUES AND EDGE DETECTION TECHNIQUES ON RADIOGRAPHIC IMAGES
SURVEY ON IMAGE PROCESSING IN THE FIELD OF DE-NOISING TECHNIQUES AND EDGE DETECTION TECHNIQUES ON RADIOGRAPHIC IMAGES 1 B.THAMOTHARAN, 2 M.MENAKA, 3 SANDHYA VAIDYANATHAN, 3 SOWMYA RAVIKUMAR 1 Asst. Prof.,
More informationMultimedia Computing: Algorithms, Systems, and Applications: Edge Detection
Multimedia Computing: Algorithms, Systems, and Applications: Edge Detection By Dr. Yu Cao Department of Computer Science The University of Massachusetts Lowell Lowell, MA 01854, USA Part of the slides
More information3. Image formation, Fourier analysis and CTF theory. Paula da Fonseca
3. Image formation, Fourier analysis and CTF theory Paula da Fonseca EM course 2017 - Agenda - Overview of: Introduction to Fourier analysis o o o o Sine waves Fourier transform (simple examples of 1D
More information(A) Electromagnetic. B) Mechanical. (C) Longitudinal. (D) None of these.
Downloaded from LIGHT 1.Light is a form of radiation. (A) Electromagnetic. B) Mechanical. (C) Longitudinal. 2.The wavelength of visible light is in the range: (A) 4 10-7 m to 8 10-7 m. (B) 4 10 7
More informationEE795: Computer Vision and Intelligent Systems
EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 WRI C225 Lecture 04 130131 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Histogram Equalization Image Filtering Linear
More informationSYSTEM FOR VIDEOMETRIC MEASUREMENT OF THE VERGENCE AND ACCOMMODATION
SYSTEM FOR VIDEOMETRIC MEASUREMENT OF THE VERGENCE AND ACCOMMODATION T. Jindra, J. Dušek, M. Kubačák, L. Dibdiak, K. Dušek Institute of Biophysics and Informatics, First Faculty of Medicine, Charles University
More informationLaser sensors. Transmitter. Receiver. Basilio Bona ROBOTICA 03CFIOR
Mobile & Service Robotics Sensors for Robotics 3 Laser sensors Rays are transmitted and received coaxially The target is illuminated by collimated rays The receiver measures the time of flight (back and
More informationMedical Image Processing using MATLAB
Medical Image Processing using MATLAB Emilia Dana SELEŢCHI University of Bucharest, Romania ABSTRACT 2. 3. 2. IMAGE PROCESSING TOOLBOX MATLAB and the Image Processing Toolbox provide a wide range of advanced
More informationImproving the 3D Scan Precision of Laser Triangulation
Improving the 3D Scan Precision of Laser Triangulation The Principle of Laser Triangulation Triangulation Geometry Example Z Y X Image of Target Object Sensor Image of Laser Line 3D Laser Triangulation
More informationDYNAMIC ELECTRONIC SPECKLE PATTERN INTERFEROMETRY IN APPLICATION TO MEASURE OUT-OF-PLANE DISPLACEMENT
Engineering MECHANICS, Vol. 14, 2007, No. 1/2, p. 37 44 37 DYNAMIC ELECTRONIC SPECKLE PATTERN INTERFEROMETRY IN APPLICATION TO MEASURE OUT-OF-PLANE DISPLACEMENT Pavla Dvořáková, Vlastimil Bajgar, Jan Trnka*
More informationMinimizing Noise and Bias in 3D DIC. Correlated Solutions, Inc.
Minimizing Noise and Bias in 3D DIC Correlated Solutions, Inc. Overview Overview of Noise and Bias Digital Image Correlation Background/Tracking Function Minimizing Noise Focus Contrast/Lighting Glare
More informationGeometric Rectification of Remote Sensing Images
Geometric Rectification of Remote Sensing Images Airborne TerrestriaL Applications Sensor (ATLAS) Nine flight paths were recorded over the city of Providence. 1 True color ATLAS image (bands 4, 2, 1 in
More informationComparison between 3D Digital and Optical Microscopes for the Surface Measurement using Image Processing Techniques
Comparison between 3D Digital and Optical Microscopes for the Surface Measurement using Image Processing Techniques Ismail Bogrekci, Pinar Demircioglu, Adnan Menderes University, TR; M. Numan Durakbasa,
More informationUNIT VI OPTICS ALL THE POSSIBLE FORMULAE
58 UNIT VI OPTICS ALL THE POSSIBLE FORMULAE Relation between focal length and radius of curvature of a mirror/lens, f = R/2 Mirror formula: Magnification produced by a mirror: m = - = - Snell s law: 1
More information10/5/09 1. d = 2. Range Sensors (time of flight) (2) Ultrasonic Sensor (time of flight, sound) (1) Ultrasonic Sensor (time of flight, sound) (2) 4.1.
Range Sensors (time of flight) (1) Range Sensors (time of flight) (2) arge range distance measurement -> called range sensors Range information: key element for localization and environment modeling Ultrasonic
More informationOutlines. Medical Image Processing Using Transforms. 4. Transform in image space
Medical Image Processing Using Transforms Hongmei Zhu, Ph.D Department of Mathematics & Statistics York University hmzhu@yorku.ca Outlines Image Quality Gray value transforms Histogram processing Transforms
More informationComplex Sensors: Cameras, Visual Sensing. The Robotics Primer (Ch. 9) ECE 497: Introduction to Mobile Robotics -Visual Sensors
Complex Sensors: Cameras, Visual Sensing The Robotics Primer (Ch. 9) Bring your laptop and robot everyday DO NOT unplug the network cables from the desktop computers or the walls Tuesday s Quiz is on Visual
More information7.3 Refractive Index Profiling of Fibers and Fusion Splices
7.3 Refractive Index Profiling of Fibers and Fusion Splices 199 necessary for measuring the reflectance of optical fiber fusion splices. Fig. 7.10 schematically depicts an OFDR containing a Michelson interferometer
More information