MATLAB tool for evaluating the symmetry of the forehead in patients with unicoronal synostosis before and after operation.


Master of Science Thesis in Radiation Physics
Author: Annelie Lindström, 1/23/2011
Supervisors: Peter Bernhardt, Lars Kölby and Jacob Heydorn-Lagerlöf
Department of Radiation Physics, University of Gothenburg, Gothenburg, Sweden

Abstract

Unicoronal synostosis is a condition where one of the coronal sutures in an infant's skull has prematurely fused. This results in a deformed skull where the forehead is flattened on the ipsilateral side and bulges outwards on the contralateral side. The craniofacial unit at the Department of Plastic Surgery at Sahlgrenska University has continuously improved craniofacial surgery over the years. The aim of this project was to develop a MATLAB tool that evaluates the symmetry of the forehead in patients with unicoronal synostosis before and after treatment. The program will be used as a quantitative measure in a study for craniofacial surgery. The program evaluates CT images and cephalometry images of the skull. The cranium is segmented, leaving only the uttermost edge of the cranium, and the symmetry between the left and right side of the forehead is evaluated. By calculating the symmetry of the forehead before and after the operation, a quantitative measure of the outcome is achieved and the treatment can be evaluated. The symmetry of symmetrical phantoms, both circular and elliptical, has been evaluated, and the results show that the program rates them as completely symmetrical. Fifteen pre-operative and 15 post-operative images have also been evaluated, and it is shown that the relative standard deviation increases for a more symmetrical cranium. The results achieved for the CT images have a lower relative standard deviation than those achieved for the cephalometry images. The success of the operations has been evaluated with two different equations: one calculates the absolute change and the other the relative change. The relative change is demonstrated to be the better evaluation of the operation.

Table of Contents

1 Introduction
2 Theory
  2.1 Digital image
  2.2 Digital image processing
    2.2.1 Histogram processing
    2.2.2 Spatial filtering
    2.2.3 Segmentation
  2.3 MATLAB
3 Method
  3.1 CT images
    3.1.1 Definition of center-point, end-point, and unfused side
    3.1.2 Segmentation
  3.2 Cephalometry images
    3.2.1 Contrast enhancement
    3.2.2 Definition of center-point, end-point, and unfused side
    3.2.3 Interactive definition of the cranium
  3.3 Dividing the skull into two parts, one left and one right
  3.4 Calculation of the symmetry between the two sides
  3.5 Evaluation of the program
    3.5.1 Symmetry evaluation
    3.5.2 Test on symmetrical phantoms
    3.5.3 The certainty of the program
4 Results
  4.1 Symmetry evaluation
  4.2 Test on symmetrical phantoms
  4.3 The certainty of the program
5 Discussion
6 Conclusions
Acknowledgements
References

Appendix I
Appendix II

1 Introduction

The bones in an infant's skull are joined together by six sutures, a type of fibrous joint [1]. These sutures make it possible for the skull to compress when passing through the birth canal. They also allow the skull to expand when the brain grows. The infant's skull grows rapidly, and at one year of age the brain has tripled its volume. The rapid growth requires the cranium to be able to expand. As the child gets older and the growth of the brain decreases, the sutures fuse one by one. In a condition called craniofacial synostosis the sutures can fuse prematurely, days or months before birth. For these children the skull will not be able to grow at the affected suture. To compensate for the prematurely fused suture, the unaffected sutures will grow in a different pattern compared to a normal skull, resulting in an abnormal shape. Depending on which suture has fused, the shape of the skull will have special features. Usually one suture closes too early, but sometimes several sutures close simultaneously. The coronal suture separates the parietal and the frontal bones (Figure 1). For patients with unicoronal synostosis one of the coronal sutures is prematurely closed [2] [3]. Unicoronal synostosis is characterized by an asymmetric forehead. On the ipsilateral side, the side where the suture has fused, the forehead is flattened, and on the contralateral side the forehead bulges outwards. The face is also affected due to the abnormal shape of the forehead; the eye and the eyebrow may be pushed downward by the bulging forehead [4].

Figure 1. Image of the cranium and its sutures, where the arrow points to the coronal suture [5].

The craniofacial unit at the Department of Plastic Surgery at Sahlgrenska University has a large material of patients with synostosis. At the department, many new methods in craniofacial surgery have been developed over the years. An example of a new treatment modality is stainless steel wires for implantation, which were introduced by the Gothenburg craniofacial team eleven years ago. These wires affect and guide the dynamic growth of the infant's skull. When the growth of the brain is directed and affected by the wires, the skull shape is forced into normality. Today the result of an operation is evaluated by studying the patient's skull and the x-ray images of the patient. However, this is a result that will depend on the evaluator. Therefore a quantitative measure of the result, independent of the viewer, is preferable. A quantitative measure of the symmetry of the skull would make it possible to compare and evaluate, for example, different surgical procedures. The aim of this project was to develop a MATLAB tool that calculates the symmetry of the forehead in patients with unicoronal synostosis before and after treatment. This should be done by comparing the left and the right side of the forehead in both computed tomography (CT) and cephalometry images. The program will be used as a quantitative measure in a study where two different treatments are compared.

2 Theory

2.1 Digital image

An image can be defined as a two-dimensional function f(x,y), where x and y are spatial coordinates. The intensity or gray level at the point (x,y) is the amplitude of f. It is a positive number whose value is determined by the source of the image. A digital image is made up of a finite number of elements; each element has a particular location, determined by the coordinates x and y, and a finite value. These elements are called picture elements, also known as pixels.
A digital image is stored as a two-dimensional array, a matrix, where the columns and rows correspond to the spatial coordinates x and y respectively [6]. A digital image of size M x N can be represented as shown in equation 1.

    f(x,y) = [ f(0,0)     f(0,1)     ...  f(0,N-1)
               f(1,0)     f(1,1)     ...  f(1,N-1)
               ...        ...             ...
               f(M-1,0)   f(M-1,1)   ...  f(M-1,N-1) ]        (1)
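As a minimal illustration of this matrix representation (sketched in Python rather than the thesis's MATLAB, purely for illustration):

```python
# A 3 x 4 "digital image": rows correspond to y, columns to x,
# and each element is a gray level.
image = [
    [0,  50, 100, 150],
    [10, 60, 110, 160],
    [20, 70, 120, 170],
]

M = len(image)      # number of rows
N = len(image[0])   # number of columns

# The gray level at spatial coordinate (x=2, y=1) is element image[1][2].
print(M, N, image[1][2])   # 3 4 110
```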

2.2 Digital image processing

Digital image processing refers to processing digital images with a computer using mathematical algorithms. The basic aim of digital image processing is to enhance an image so that the result suits a specific application better than the original. Spatial processing refers to direct manipulation of the pixels in the image. There are two main categories: intensity transformation and filtering. These processes are methods whose inputs and outputs are images. There are other processes, like segmentation, where the inputs are images and the outputs are constituents extracted from those images.

2.2.1 Histogram processing

Histogram processing is the basis of numerous spatial domain processing techniques. It can be used to enhance the contrast, and for other image processing applications such as image compression and segmentation. A histogram of a digital image is a bar graph of the distribution of all gray levels. The x-axis shows the gray levels, and the y-axis shows the number of pixels with a particular gray level (figure 2). The histogram is often normalized by dividing each of its components by the total number of pixels.

Figure 2. A histogram of a digital image, where the x-axis shows the gray levels and the y-axis shows the number of pixels with a particular gray level.

Histogram equalization is a histogram processing technique where a transformation function is automatically determined. The transformation function maps one pixel value to another

pixel value so that the output image produced has a uniform histogram with a predefined number of discrete gray levels (figure 3). Histogram equalization can be performed on the entire image as well as on small regions in the image. For small regions, the histogram of the points in the neighborhood is computed and histogram equalization is performed for each location. This enhances details over small areas in the image which may not be enhanced in a global process, where the number of pixels in these areas may have negligible influence.

Figure 3. The equalized histogram after performing histogram equalization on the histogram in figure 2. The histogram has 64 discrete gray levels.

Another way to process a histogram is to map intensity values in a manually defined range in the input image to intensity values in a manually defined range in the output image. The relationship between these ranges can be specified. Both processing techniques can be used to enhance the contrast in an image. Histogram equalization is simple to implement, but it is not always the best approach to base the enhancement on a uniform histogram; instead of enhancing the desirable details in the image, the noise might be enhanced.

2.2.2 Spatial filtering

Spatial filtering operates by working in a neighborhood of every pixel in an image and performing a predefined operation on the pixels encompassed by the neighborhood. The neighborhood is a matrix much smaller than the image. Filtering creates a new pixel at the same location as the pixel in the center of the neighborhood. The filter moves from pixel to pixel so that the center of the filter operates on each pixel in the input image. The value of

the new pixel depends on the filtering operation and the size of the filter. The operations can be linear or nonlinear. An output image that has been filtered with a linear filter is a linear function of the input image. Each pixel in the output image is a weighted sum of the neighborhood pixels encompassed by the filter in the input image. Examples of linear filters are smoothing and sharpening filters. A nonlinear filter ranks the pixels contained in the area encompassed by the filter. An example is a median filter. The median filter calculates the median value of the intensity in the pixels encompassed in the filter area and replaces the intensity in the center pixel with this value. The size of the filter decides how many neighborhood pixels are encompassed. Median filters are effective for noise reduction, particularly on so-called salt-and-pepper noise.

2.2.3 Segmentation

Segmentation partitions the image into its constituent parts. The segmentation algorithms are based on basic properties of intensity values, like discontinuity and similarity. Discontinuity algorithms partition the image based on rapid changes in the intensity values. This is used for, among other things, identifying edges in the image. The assumption is that the intensity values of the boundaries and the background differ sufficiently from each other. In a low-contrast image it can be difficult to identify boundaries. The second category, similarity, uses a set of predefined criteria to partition the image into regions that are similar according to the criteria. Examples of this approach are thresholding, region growing, and region splitting and merging.

2.3 MATLAB

MATrix LABoratory, MATLAB, is a high-performance technical computing tool optimized for matrix and vector calculations [7]. It is designed for developing algorithms, visualizing and analyzing data, and numeric computation.
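The median filtering described in section 2.2.2 can be sketched in a few lines. The sketch below is in Python rather than MATLAB and uses a 3 x 3 neighborhood, purely as an illustration of the technique:

```python
from statistics import median

def median_filter_3x3(img):
    """Replace each interior pixel with the median of its 3x3 neighborhood."""
    M, N = len(img), len(img[0])
    out = [row[:] for row in img]          # copy; border pixels left unchanged
    for y in range(1, M - 1):
        for x in range(1, N - 1):
            neighborhood = [img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = median(neighborhood)
    return out

# A flat gray image with one salt-and-pepper outlier in the middle:
img = [[10, 10, 10],
       [10, 255, 10],
       [10, 10, 10]]
print(median_filter_3x3(img))   # the outlier is replaced by the median, 10
```

Because the median of the nine neighborhood values is insensitive to a single extreme value, the isolated bright pixel disappears without blurring the rest of the image.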
MATLAB can be used in a wide range of applications, for example signal, image and video processing, control systems, test and measurement, technical computing, mechatronics, financial modeling and analysis, and computational biology.

3 Method

In this project MATLAB (version 2010b) was used to create a program for evaluating the symmetry of the forehead. The created program is designed for CT and cephalometry images in jpg format.

A flow chart easily illustrates the overall steps in the program (figure 4). An image is loaded by choosing the file containing the particular image. As can be seen in figure 5, CT and cephalometry images have different properties and therefore have to be processed differently.

Figure 4. The flow chart above illustrates the overall steps in the program.

Figure 5. Illustration of the different properties of a CT image (left) and a cephalometry image (right).

3.1 CT images

For the automatic segmentation, the CT images need to have good contrast. Bone needs to have an intensity value over 250 in the gray-scale format. It is also important that there are no larger open areas in the edge of the cranium (figure 6). The contrast can be enhanced by changing the window level in any CT image evaluation program before the images are saved as jpg files. Changing the window level will also decrease, or increase, the size of the open areas in the edge of the cranium.

Figure 6. Illustration of an open area that is too big for the segmentation to be automatic.

In the program, a median filter of size 4 x 4 is used to overcome the problem with small open areas in the edge of the cranium.

3.1.1 Definition of center-point, end-point, and unfused side

To be able to compare the left and the right side of the forehead, a center-point has to be defined. This is done by hand. The point is created by clicking on the spot where the user wants to divide the two sides. The coordinates are saved and used later in the program. The user also has to define how much of the skull is of interest for the comparison, the end-point. The point is defined interactively, in the same way as the center-point, by clicking on the cranium where the comparison should end. This point also defines the unfused side.

3.1.2 Segmentation

The program segments the image to get the uttermost edge of the cranium. The segmentation is done by thresholding the intensity values in the image. Pixel values below 105 get the intensity value zero, pixel values above 250 get the intensity value one, and pixel values between 105 and 250 are mapped to values between zero and one. This improves the contrast in the image. Bone gets the value one and soft tissue gets an intensity value closer to zero. The image format is then converted from gray-scale to binary. A binary image has two discrete intensity values, zero and one. The output image replaces all pixels in the input image with intensities greater than graythresh with the intensity value one, and replaces all other pixels with the intensity value zero. graythresh uses Otsu's method to automatically determine the threshold that suits that gray-scale image best. The value depends on the histogram belonging to the input image [8]. When the image format has been converted, the MATLAB command imfill is used to fill the area that is enclosed by the cranium. The output image contains the cranium, the filled area enclosed by the cranium, and sometimes parts of the table the patient lies on when the image is taken.
To exclude the table, a ROI is placed manually around the skull (figure 7).

Figure 7. The skull with the ROI used to exclude the table.

To get the uttermost edge of the cranium, the MATLAB command edge, with its default settings, is used to detect the edge (figure 8).

Figure 8. The uttermost edge of the cranium in a CT image.
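The intensity mapping and binarization described in section 3.1.2 can be sketched roughly as follows. This is an illustrative Python sketch, not the thesis's MATLAB code; a fixed threshold of 0.5 stands in for the Otsu threshold that graythresh computes from the image histogram:

```python
def rescale(v, lo=105, hi=250):
    """Map gray levels: below lo -> 0.0, above hi -> 1.0, linear in between."""
    if v <= lo:
        return 0.0
    if v >= hi:
        return 1.0
    return (v - lo) / (hi - lo)

def binarize(img, thresh):
    """Convert a rescaled gray-scale image to a binary (0/1) image."""
    return [[1 if rescale(v) > thresh else 0 for v in row] for row in img]

row = [50, 105, 180, 250, 255]               # soft tissue ... bone
print([round(rescale(v), 2) for v in row])   # [0.0, 0.0, 0.52, 1.0, 1.0]
print(binarize([row], 0.5))                  # [[0, 0, 1, 1, 1]]
```

After this step, bone (value one) is cleanly separated from soft tissue and background (value zero), which is what makes the subsequent imfill and edge-detection steps reliable.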

3.2 Cephalometry images

The cephalometry images are originally analogue images that have been digitized by taking pictures of them with a digital camera. The images are resized with a scale factor, from size M x N to size 400 x (N · scale factor). The value of the scale factor depends on M. The image needs to be resized because the original matrix is too large for the memory of the computer.

3.2.1 Contrast enhancement

In many of the cephalometry images there are very small differences in the intensity values between the cranium and the background. The MATLAB command adapthisteq, using its default settings, enhances the contrast in the image by performing local histogram equalization.

3.2.2 Definition of center-point, end-point, and unfused side

The definition of center-point, end-point, and unfused side is performed in the same way as in section 3.1.1.

3.2.3 Interactive definition of the cranium

The cranium cannot be segmented automatically by the program due to poor contrast. Therefore, it has to be done by hand. By clicking along the edge of the cranium, points are created and their coordinates are saved in a vector. A MATLAB command for creating lines reads the vector and creates lines between the locations of the points. The points cannot be placed further apart than eight pixels; if they are, the program deletes the latest point and the user has to click closer to the previous point.

3.3 Dividing the skull into two parts, one left and one right

The coordinates of the defined center-point are used to divide the skull into two parts, one left and one right side. The two parts are handled as two images. The unfused side is also cropped at the point that defines how much of the skull is of interest for the comparison, the end-point (figure 9).

Figure 9. The segmented skull is divided into two parts, the fused side (left) and the unfused side (right). The unfused side is cropped at the end-point.

The image containing the left part is mirrored in a left-right direction, about a vertical axis. To be able to perform the calculations, the starting points of the two curves representing the uttermost edge of the cranium have to be in the middle of the matrix. The matrices should also have the same size. This is done by adding matrices of zeros so that the output images have the same size and the starting point is in the center of the matrix. The output matrix contains the input image H and the matrices I and J (figure 10). I and J are zero matrices where J has the same size as H, and I is of size d x c. c is determined by calculating a - b. The distances a and b are calculated by finding the coordinates of o in the input image H.

Figure 10. Schematic image of how zero matrices are added to get the center-point, o, in the middle. H is the input matrix, and I and J are matrices of zeros.
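The mirroring and zero-padding steps can be illustrated with a small sketch (Python rather than MATLAB; pad_to_center is a hypothetical helper, a simplified one-dimensional stand-in for the zero-matrix construction of figure 10):

```python
def mirror_lr(img):
    """Mirror an image left-right, about a vertical axis."""
    return [row[::-1] for row in img]

def pad_to_center(img, o_col, width):
    """Pad each row with zeros so that column o_col of the input lands at
    column width // 2 of the output (hypothetical helper for illustration)."""
    shift = width // 2 - o_col
    return [[0] * shift + row + [0] * (width - shift - len(row))
            for row in img]

left = [[1, 2, 3]]
print(mirror_lr(left))               # [[3, 2, 1]]
print(pad_to_center(left, 1, 7))     # [[0, 0, 1, 2, 3, 0, 0]]
```

After padding, the center-point column of both half-images sits at the middle of equally sized matrices, so corresponding points on the two curves can be compared directly.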

3.4 Calculation of the symmetry between the two sides

The symmetry can be calculated when: 1) the two sides have been segmented, 2) the left side has been mirrored, 3) the curves start at the same point in the middle of the matrix, and 4) the matrices have the same size. The program finds a point, p1, on the first curve and a point, p2, on the other curve that have the same distance, l2, to the center-point, o (figure 11). The points are found by adding a circle with radius l2 to each image. The coordinates where the circle and the curve intersect are stored as p1 and p2. If the circle intersects with more than one pixel on the curve, a mean of those coordinates is stored. The distance, d, between p1 and p2 is calculated and added to the variable s. l2 starts at one and increases in steps of one until the program has reached the end-point.

Figure 11. Schematic image of the points p1 and p2, and the distance d.

At each step the program finds p1 and p2 and calculates d. The variable s is the sum of all the calculated distances. The total sum is saved, the curve representing the fused side is rotated one degree, and the calculations are repeated. The MATLAB command for rotating the image is imrotate, which uses bilinear interpolation. For each angle, the total sum is compared with the total sum achieved for the previous angle. The curve is rotated in steps of one degree as long as the sum decreases. Both clockwise and anti-clockwise rotation is tested. At the position where the smallest sum is found, the number of pixels, n, between the two curves is calculated.
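The distance-summing step can be sketched as follows. This is a simplified Python illustration: the real program finds p1 and p2 by intersecting circles of radius l2 with the rasterized curves, whereas here each curve is assumed to be pre-sampled as a mapping from radius to point:

```python
from math import dist

def asymmetry_sum(curve_a, curve_b):
    """Sum of distances d between points p1, p2 that lie at the same radius
    l2 from the center-point (curves pre-sampled as {radius: (x, y)} dicts,
    a simplification of the circle-intersection search in the text)."""
    s = 0.0
    for l2 in sorted(curve_a):
        if l2 in curve_b:
            s += dist(curve_a[l2], curve_b[l2])
    return s

# Two identical (mirrored) curves -> perfectly symmetric, s = 0.
a = {1: (0.0, 1.0), 2: (0.5, 1.9), 3: (1.2, 2.7)}
print(asymmetry_sum(a, a))     # 0.0
# Shift one curve sideways -> the sum grows with the asymmetry.
b = {l2: (x + 0.5, y) for l2, (x, y) in a.items()}
print(asymmetry_sum(a, b))     # 1.5
```

The rotation search then simply repeats this sum for the fused-side curve rotated in one-degree steps and keeps the orientation that minimizes it.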

If the left and the right side of the forehead are symmetrical, n will be zero. The more asymmetrical the sides are, the larger the value. The value does not only depend on the symmetry but also on the size of the head and the size of the pixels. To be able to compare the symmetry, this value has to be independent of these two components. It is therefore divided by the number of pixels in the area denoted K, which also depends on the size of the head and the size of the pixels (figure 12). The normalized value is called the symmetry ratio, N. N is multiplied by 1000 so that the numbers won't be so small (equation 2).

Figure 12. The gray area represents the area K. The area K is expressed in pixels and is used to normalize the result. The point p has the same distance, l2, to the center-point, o, as the end-point.

    N = 1000 · n / K        (2)

Two methods, relative change (equation 3) and absolute change (equation 4), were tested as an evaluation of how well the surgical operation has succeeded.

    C = (N_pre - N_post) / N_pre        (3)

N_pre is the symmetry ratio for the pre-operative image, and N_post is the symmetry ratio for the post-operative image of the same patient. The closer C gets to one, the better the operation has succeeded. C can also have a negative value, which shows that the forehead is more asymmetrical after the operation.

    B = N_pre - N_post        (4)

The larger B is, the better the operation has succeeded. As for C, a negative value shows that the forehead is more asymmetrical after the operation. The MATLAB code is found in appendix I and a user manual is found in appendix II.

3.5 Evaluation of the program

The program has been run several times to evaluate how reliable the results are.

3.5.1 Symmetry evaluation

The main aim of the program is to calculate the symmetry of the left and the right side of the forehead. It should give a lower value for a more symmetrical forehead. The symmetry ratio, N, was calculated for two images where it can be seen that one of the foreheads is more symmetrical than the other (figure 13). The center-point and end-point were placed at the same structures in both images.

Figure 13. Two images where it can be seen that the forehead in the left image is more asymmetrical than the forehead in the right image.
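The two outcome measures of equations 3 and 4 can be written as a short sketch (Python, for illustration only; N_pre and N_post denote the pre- and post-operative symmetry ratios):

```python
def relative_change(n_pre, n_post):
    """Equation 3: C = (N_pre - N_post) / N_pre."""
    return (n_pre - n_post) / n_pre

def absolute_change(n_pre, n_post):
    """Equation 4: B = N_pre - N_post."""
    return n_pre - n_post

# Example values: a large but incomplete improvement (70 -> 40)
# versus a smaller improvement to perfect symmetry (20 -> 0).
print(relative_change(70, 40), absolute_change(70, 40))   # C ~ 0.43, B = 30
print(relative_change(20, 0), absolute_change(20, 0))     # C = 1.0,  B = 20
```

Note how the two measures rank these example operations differently: B favors the first, while C correctly identifies the second as a complete correction.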

3.5.2 Test on symmetrical phantoms

A few tests were performed on a CT image of a cylindrical phantom to see if the result is zero when the two sides are symmetrical, and if the result is independent of the rotation of the head when the image is taken (figure 14). The symmetry ratio for this image was calculated ten times. At every run, the center-point and the end-point were placed at different places along the uttermost edge of the circle to simulate that the head is rotated.

Figure 14. The cylindrical phantom.

The program was also tested on an image of a symmetrical elliptical phantom (figure 15). The image was created with the command phantom(512) in MATLAB. The image of the phantom was run through the program twenty times. The aim for the first ten runs was to place the center-point and the end-point at the same points on the uttermost edge of the cranium. The aim for the last ten runs was to place the center-point at the same place for each run and vary the places where the end-point was placed.

Figure 15. The elliptical phantom.

3.5.3 The certainty of the program

The symmetry of the forehead for 15 patients, both before and after operation, was evaluated. For seven of these patients the images were CT images, and for eight they were cephalometry images. Each image was run ten times to see how reliable the results are. For each run, the aim was to put the center-point and the end-point on the same structures in the image. For the cephalometry images, an additional aim was to follow the edge of the cranium as thoroughly as possible. The relative standard deviation and the mean of the symmetry ratio, N, were calculated for each image. C and B in equations 3 and 4, respectively, were also calculated ten times for each patient, and the mean and the relative standard deviation were determined.

4 Results

4.1 Symmetry evaluation

The symmetry ratio was calculated to be 68.1 for the more asymmetrical forehead in figure 13, and 4.7 for the more symmetrical forehead in the same figure. The numbers show that the forehead in the left image is more asymmetrical.

4.2 Test on symmetrical phantoms

For every run on the circular phantom, the program calculated the symmetry ratio to be zero. That is, the program evaluates the two sides to be completely symmetrical, independent of where the points are placed along the uttermost edge.

For all twenty runs on the elliptical phantom, the program calculated the symmetry ratio to be zero, which means that the program evaluates the phantom to be completely symmetrical.

4.3 The certainty of the program

Figure 16 shows that the relative standard deviation increases when the forehead is more symmetrical. It also shows that the cephalometry images (continuous line) have a higher relative standard deviation than the CT images (dashed line).

Figure 16. The dashed line shows how the relative standard deviation changes with the symmetry of the forehead in the CT images, and the continuous line shows how the relative standard deviation changes with the symmetry of the forehead in the cephalometry images.

The mean and the relative standard deviation, SD, from evaluating the operations with equations 3 and 4 are shown in tables 1 and 2. Table 1 shows the results for the CT images and table 2 shows the results for the cephalometry images. A more successful operation is represented by a high value of B and a value close to one of C.

Table 1. The results, for CT images, from evaluating the operations by calculating C in equation 3 and B in equation 4, and the relative standard deviation.

    C    SD (%)    B    SD (%)

Table 2. The results, for cephalometry images, from evaluating the operations by calculating C in equation 3 and B in equation 4, and the relative standard deviation.

    C    SD (%)    B    SD (%)

The negative number in table 2 shows that the forehead is evaluated as more asymmetrical after the operation than before. It can be seen in both table 1 and table 2 that the two equations do not evaluate an operation to be equally successful. Patient six, in table 1, is evaluated to have the most successful operation of the seven patients when evaluated with equation 4, while it is only ranked as third best when evaluated using equation 3.

5 Discussion

Today the result of a surgical operation on a patient with unicoronal synostosis is evaluated by studying the patient's skull and the x-ray images of the patient. There are no clear criteria on how to evaluate the result, which gives an evaluation that is dependent on the evaluator. Therefore, a program has been created as a quantitative measure to evaluate the symmetry of the forehead in CT and cephalometry images of these patients, and in the end to evaluate the result of the operation. The reliability of the program has been tested on images of symmetrical phantoms and images of 15 patients before and after the operation. The poor contrast in some of the cephalometry images has caused some problems in automatically detecting the uttermost edge of the cranium. In the end, the solution to this problem was to do this step manually. How the points are placed depends on the user and will affect the result. If the time had not been limited, it might have been possible to come up with a method that enhances the contrast in each image separately. This is an area in which the program can be further developed. However, in some images the contrast is so poor that no enhancement technique would make it possible to segment the uttermost edge. Even if the contrast could be enhanced so that the edge could be segmented automatically, in some of the pictures the jaw is seen outside of the uttermost edge of the cranium. The edge detected in such an image would not be the uttermost edge of the cranium but the jaw. If the program is developed to enhance the contrast in each image separately, the problem with the jaw might be overcome by a combination of automatic and manual segmentation of the edge. As can be seen in figure 16, the relative standard deviation increases when the symmetry ratio decreases.
If the center-point and the end-point are not placed at exactly the same place every time, the number of pixels between the curves representing the uttermost edge of the cranium will vary. If the curves are more symmetrical, the number of pixels between them will be lower and this variation will have a higher impact on the result. Figure 16 also shows that the uncertainty for the cephalometry images is considerably higher and varies more than the uncertainty for the CT images. As mentioned, the relative standard deviation increases when the symmetry ratio decreases. The curve representing the cephalometry images in figure 16 shows that this is not always the case. However, it can be seen that, even though the values differ from what might have been expected, the variation is only a few percent, and an increase of the standard deviation at lower values of the symmetry ratio can be distinguished. These variations can depend on three things: 1) there are some distinct structures where the center-point and the end-point are placed, which makes it easier to place the points at exactly the same point every time, 2) the edge is not defined as thoroughly for each run, and 3) if there are big changes in the curves' appearances around the center-point and the end-point, a variation in the placement of

these points will affect the result more than if the curves are more constant around the points. Number one and number three in the previous section might also be a reason for the variations shown in the results for the CT images in figure 16. If the skull is completely symmetrical, the place where the end-point is placed is less important than the place where the center-point is placed. In that case, if the center-point is placed so that the forehead is divided into two completely symmetrical parts, the placement of the end-point will not affect the result at all. This is shown by the results from the test on the elliptical phantom, where it is calculated that the two sides are completely symmetrical even if the end-point is placed at different places each time. As mentioned in the results, the two evaluation methods presented in equations 3 and 4 do not evaluate an operation to be equally successful. An operation evaluated to be successful by one of the equations might be evaluated to be less successful by the other. It would have been interesting to compare the results with previous clinical evaluations, but no such evaluations have been done before. If the symmetry ratio is calculated to be 70 before the operation and 40 after the operation, equation 4 evaluates the operation to be more successful than if the symmetry ratio is calculated to be 20 before the operation and 0 after the operation, even though the last operation is as successful as it can be. Equation 3, on the other hand, would evaluate the last operation to be more successful than the first. By dividing the difference between the symmetry ratio before the operation and after the operation by the symmetry ratio before the operation, the result shows the relative change. The relative change allows the results for different patients to be compared even if the symmetry ratio before the operation differs between the patients.
If equation 4 is used, an operation where the symmetry ratio before the operation is high will be evaluated as a more successful operation. It seems that equation 3 is preferable for the evaluation of clinical material, since it has an upper limit of success: C = 1 means a perfect operation, C = 0 means no difference, and C less than zero means that the forehead is more asymmetrical after the surgery. As discussed above, equation 4 does not give this information in a clear way. In the evaluation of patient 11 in table 2, it is shown that the forehead is more asymmetrical after the operation than before. The surgical operation has not succeeded; in fact it has worsened the symmetry of the forehead. This is shown, as discussed in the previous section, as a negative value in both evaluation methods. Both methods show a rather small difference. Therefore, small variations between each run will affect the result more, which is shown by the high relative standard deviation. As can be seen in figure 16, the relative standard deviation is relatively high for a low value of the symmetry ratio. When the absolute or the relative change is calculated, the result is

dominated by the higher symmetry ratio, resulting in a relatively low relative standard deviation. An attempt has been made to enlarge the displayed image, not the matrix, so that the edge can be distinguished more easily when it is defined manually. When this is done, parts of the edge are lost; the reason for this has not been established. Another aspect of the program that could be further developed is the processing time. At present, one image takes approximately ten minutes to process, which might be a problem for longer patient series. Future studies should therefore focus on optimizing the present MATLAB code.
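Under the reading given in the discussion above — equation 3 as the relative change (before − after)/before, and equation 4 as the absolute change before − after — the two numerical cases can be sketched as follows. This is a hedged illustration in Python rather than the thesis's MATLAB, and the exact form of the equations should be checked against the Method section:

```python
def absolute_change(s_pre, s_post):
    """Equation 4 (as read from the discussion): absolute change in symmetry ratio."""
    return s_pre - s_post

def relative_change(s_pre, s_post):
    """Equation 3 (as read from the discussion): relative change C.
    C = 1 means a perfect operation, C = 0 no change, C < 0 a worsening."""
    return (s_pre - s_post) / s_pre

# The two cases discussed above: 70 -> 40 versus 20 -> 0.
print(absolute_change(70, 40), round(relative_change(70, 40), 2))  # 30 0.43
print(absolute_change(20, 0), round(relative_change(20, 0), 2))    # 20 1.0
```

Equation 4 rates the first operation higher (30 > 20), while equation 3 rates the second, fully successful operation higher (1.0 > 0.43), which is exactly the disagreement the discussion describes.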

6 Conclusions

A MATLAB tool to evaluate the symmetry of the forehead in patients with unicoronal synostosis before and after treatment has been developed for both CT and cephalometry images, which was the main aim of this project. The method is also independent of the head's rotation at the time the image is taken. It is shown that the results are more reliable when CT images are evaluated than when cephalometry images are evaluated. The recommendation for evaluating an operation is to calculate the relative change.

Acknowledgements

I would like to take the opportunity to thank everyone who has helped me complete this project. First of all I would like to thank my supervisors Peter Bernhardt at the Department of Radiation Physics, and Lars Kölby at the craniofacial unit at the Department of Plastic Surgery, for their guidance and quick answers to my endless questions. Also a big thank you to my supervisor Jacob Heydorn-Lagerlöf at the Department of Radiation Physics for all his help with MATLAB, and for his guidance through everything that this computing tool has to offer. Thanks to Patrik Sund at the Department of Radiation Physics, and Giovanni Maltese at the craniofacial unit at the Department of Plastic Surgery, for all their help along the way. Finally, special thanks to my classmates, friends and family who have helped me, in their own special ways, to end this project as well as to pass my studies.


Appendix

Appendix I symmetry.m

clear all
clc
%Loads the image
nf = uigetfile('*.jpg','Choose an image.');
Af = imread(nf);
Af = rgb2gray(Af);
o = menu('Choose type of image','CT','Cephalometry');
if o==1;
    figure(1)
    set(1,'Name','Define center-point and unfused side','NumberTitle','off');
    imshow(Af)
    %Defines the center-point
    h1 = impoint;
    wait(h1);
    pos1 = h1.getPosition();
    %Defines the unfused side and how much of the cranium to use in the
    %calculations
    h2 = impoint;
    wait(h2);
    pos2 = h2.getPosition();
    close
    %Filters the image and fills small open areas in the cranium
    A1 = medfilt2(Af,[4 4]);
    %Adjusts the histogram
    Img = imadjust(A1,[105/255; 250/255],[]);
    %Makes the image into a binary image and fills the skull.
    bw = im2bw(Img, graythresh(Img));
    bw = imfill(bw, 'holes');
    figure(3)
    set(3,'Name','Make a ROI around the skull.','NumberTitle','off');
    imshow(bw)
    %Make a ROI around the skull
    bw2 = roipoly(bw);
    %Multiplies the two images
    h = immultiply(bw,bw2);
    close
    eh = edge(h); %Segments the edge
elseif o==2;
    Af = adapthisteq(Af);
    g = 400/size(Af,1);
    Ah = imresize(Af,g); %Resizes the image
    clear g

    figure(1)
    set(1,'Name','Define center-point, unfused side and the cranium.','NumberTitle','off');
    imshow(Ah);
    h = impoint; %Defines the center-point
    wait(h);
    pos1 = h.getPosition();
    i = impoint; %Defines the unfused side and how much of the cranium to use
                 %in the calculations
    wait(i);
    pos2 = i.getPosition();
    delete(h);
    delete(i);
    %Segments the cranium by hand
    h = impoint;
    t = 1;
    while 1;
        i = impoint;
        if isempty(i); %Push esc to end the loop
            break
        end
        posh = h.getPosition();
        posi = i.getPosition();
        %The distance between the points shouldn't be too far.
        while sqrt((posh(1)-posi(1))^2+(posh(2)-posi(2))^2)>8;
            ts = text(size(Ah,2)/2-100, size(Ah,1)/2,'Place the point closer to the previous point.');
            delete(i)
            i = impoint;
            posi = i.getPosition;
            delete(ts)
        end
        x(t) = posi(1);
        y(t) = posi(2);
        t = t+1;
        h = impoint(gca,posi);
    end
    close
    %Creates a zero-matrix
    z = zeros(size(Ah));
    z = im2bw(z);
    figure, imshow(z);
    k = 1;
    BW1 = z;
    %Makes a line between each point.
    while k<t-1;
        f = imline(gca,[x(k) x(k+1)], [y(k) y(k+1)]);
        BW = createMask(f); %Creates a mask so that the line is burned into a matrix.
        k = k+1;
        BW2 = imadd(logical(BW1),logical(BW));
        BW1 = BW2;
    end
    close %Closes the figure
    clear f t z BW1 BW k h i %Clears variables
    eh = BW2;
end

eh = im2bw(eh);
%Changes the size of the image if 2*pos1(1)>size(eh,2)
if (pos1(1)*2)>size(eh,2);
    j1 = ((2*pos1(1))-size(eh,2));
    j21 = zeros(size(eh,1),j1);
    eh = [eh j21];
end
%Changes the size of the image if pos1(1)<size(eh,2)/2
if pos1(1)<size(eh,2)/2;
    j5 = (size(eh,2)-2*pos1(1));
    j51 = zeros(size(eh,1),j5);
    eh = [j51 eh];
    pos1(1) = pos1(1)+size(j51,2);
    pos2(1) = pos2(1)+size(j51,2);
end
%Crops the image into one half and one "quarter" of the skull
if pos2(1)<pos1(1);
    ceh1 = eh((pos1(2)-pos1(2)/2):pos2(2), 1:pos1(1));
    ceh2 = eh((pos1(2)-pos1(2)/2):size(eh,1), pos1(1):2*pos1(1)-1);
else
    ceh1 = eh((pos1(2)-pos1(2)/2):size(eh,1), 1:pos1(1));
    ceh2 = eh((pos1(2)-pos1(2)/2):pos2(2), pos1(1):2*pos1(1)-1);
end
p4 = fliplr(ceh1); %Mirrors the left side
%Makes the matrices the same size
if size(p4,1)~=size(ceh2,1);
    dy = sqrt((size(p4,1)-size(ceh2,1))^2);
    yz = zeros(dy,size(p4,2));
    if size(p4,1)>size(ceh2,1);
        ceh2 = [ceh2;yz];
    else
        p4 = [p4;yz];
    end
end
%Puts the center-point in the middle of a matrix
a1 = p4(:,1);
a2 = find(a1==1);
a3 = a2(1);
a4 = size(p4,1)-(2*a3);
%Adds matrices of zeros to get the center-point in the middle of a matrix
B1 = zeros(size(p4,1), size(p4,2));
B2 = zeros(a4, size(p4,2));
E = [B2 B2; B1 p4];
I = [B2 B2; B1 ceh2];
B3 = zeros(size(E,1),100);
E1 = [B3 E B3];
I1 = [B3 I B3];
%Decides which side to rotate

if pos2(1)<pos1(1);
    Ik = I1;
    E3 = E1;
    b = 1;
else
    E3 = I1;
    Ik = E1;
    b = 0;
end
s1 = 0;
s2 = 0;
v = 0;
j2 = 0;
f = 0.24;
while s1<=s2;
    I3 = imrotate(Ik,v,'bilinear','crop');
    I3(find(I3>f)) = 1;
    I3 = im2bw(I3);
    s2 = s1; %Stores the latest value of s1 in s2
    s1 = 0;
    t = size(E1,1)/2+1;
    l2 = 1;
    while t<=size(E1,1); %Runs the while-loop until the line in E3 ends.
        %Makes a circle with radius l2
        T = zeros(size(E1));
        T2 = T;
        x0 = size(T,2)/2;
        y0 = size(T,1)/2;
        for i = 1:size(T,2);
            for j = 1:size(T,1);
                r = sqrt((i-x0)^2+(j-y0)^2);
                T(j,i) = r;
            end
        end
        L = find(T<=l2);
        T2(L) = 1;
        L1 = edge(T2);
        L1 = medfilt2(L1,[1 2]); %Widens the edge of the circle so that the
        %circle won't miss the curve even if the curve hasn't ended.
        L2 = immultiply(L1,I3); %Finds a point in I3 that has the same radius
        %as a point in E3.
        L3 = immultiply(L1,E3); %Finds a point in E3 that has the same radius
        %as a point in I3.
        [row, col] = find(L2); %Finds the coordinates of every pixel with
        %value one
        [row1, col1] = find(L3); %Finds the coordinates of every pixel with
        %value one
        if isempty(row1) || isempty(row); %Breaks the loop when it reaches
            %the end of the curve representing the unfused side
            break
        end
        R = L1;
        %Finds the points where the curves and the circle intersect
        x1 = mean(col1);
        x2 = mean(col);

        y1 = mean(row1);
        y2 = mean(row);
        if v==0;
            r1 = max(row);
            r2 = max(row1);
        end
        d = sqrt((x1-x2)^2+(y1-y2)^2); %Calculates the distance between the
        %points, which have the same radius, on the curves I3 and E3
        t = t+1;
        s1 = s1+d;
        l2 = l2+1;
    end
    if s2==0; %Extra to get around the first loop
        s2 = s1;
    end
    v = v+1;
    j2 = j2+1;
end
if j2==2; %If the previous loop ended at j2=2, the other direction is tested.
    v = -2; %Starts at v=-2 so as not to miss v=0 due to the extra command added.
    s1 = 0;
    s2 = 0;
    j3 = 0;
    while s1<=s2;
        I3 = imrotate(Ik,-v,'bilinear','crop');
        I3(find(I3>f)) = 1;
        I3 = im2bw(I3);
        s2 = s1;
        s1 = 0;
        t = size(E1,1)/2+1;
        l2 = 1;
        while t<=size(E1,1)
            %Makes a circle with radius l2
            T = zeros(size(E1));
            T2 = T;
            x0 = size(T,2)/2;
            y0 = size(T,1)/2;
            for i = 1:size(T,2);
                for j = 1:size(T,1);
                    r = sqrt((i-x0)^2+(j-y0)^2);
                    T(j,i) = r;
                end
            end
            L = find(T<=l2);
            T2(L) = 1;
            L1 = edge(T2);
            L1 = medfilt2(L1,[1 2]); %Widens the edge of the circle so that the
            %circle won't miss the curve even if the curve hasn't ended.
            L2 = immultiply(L1,I3); %Finds a point in I3 that has the same radius
            %as a point in E3.
            L3 = immultiply(L1,E3); %Finds a point in E3 that has the same radius
            %as a point in I3.

            [row, col] = find(L2); %Finds the coordinates of every pixel with
            %value one
            [row1, col1] = find(L3); %Finds the coordinates of every pixel with
            %value one
            if isempty(row1) || isempty(row); %Breaks the loop when it reaches
                %the end of the curve representing the unfused side
                break
            end
            R = L1;
            %Finds the points where the curves and the circle intersect
            x1 = mean(col1);
            x2 = mean(col);
            y1 = mean(row1);
            y2 = mean(row);
            if v==0;
                r1 = max(row);
                r2 = max(row1);
            end
            d = sqrt((x1-x2)^2+(y1-y2)^2); %Calculates the distance between the
            %points, which have the same radius, on the curves I3 and E3
            t = t+1;
            s1 = s1+d;
            l2 = l2+1;
        end
        if s2==0; %Extra to get around the first lap
            s2 = s1;
        end
        v = v+1;
        j3 = j3+1;
    end
end
%Calculates the number of pixels between the two curves
if j2>2;
    I3 = imrotate(Ik,(v-2),'bilinear','crop');
    I3(find(I3>f)) = 1;
else
    I3 = imrotate(Ik,-(v-2),'bilinear','crop');
    I3(find(I3>f)) = 1;
end
I3 = im2bw(I3);
p5 = imadd(I3,logical(E3));
p5 = im2bw(p5);
R1 = immultiply(I3,R);
[row10 col10] = find(R1==1);
y21 = mean(row10);
x21 = mean(col10);
figure, imshow(p5);
li = imline(gca,[x1 x21], [y1 y21]);
cm = createMask(li);
close
cm1 = imadd(cm,logical(p5));
cm1 = im2bw(cm1);
p8 = imfill(cm1,'holes');

p9 = imsubtract(p8,p5);
n1 = find(p9==1);
n3 = length(n1); %The number of pixels between the two curves
%Calculates the number of pixels in the area denoted K
N = fliplr(E1);
if b==1;
    f3 = N(r2,:);
    col4 = find(f3==1);
    z3 = col4(1);
    f4 = I1(r1,:);
    col5 = find(f4==1);
    z4 = col5(1);
else
    f3 = I1(r2,:);
    col4 = find(f3==1);
    z3 = col4(1);
    f4 = N(r1,:);
    col5 = find(f4==1);
    z4 = col5(1);
end
N4 = imadd(N,I1);
g = imline(gca,[z3 z4], [r2 r1]);
bw1 = createMask(g);
close
k = imadd(bw1,logical(N4));
k1 = imfill(k,'holes');
k2 = imsubtract(k1,k);
figure, imshow(k2);
n = find(k2==1);
n2 = length(n); %Number of pixels in the area K
sum2 = round(n3/n2*1000); %The symmetry ratio is calculated and multiplied
%by 1000 so that the numbers won't be so small.
sum = num2str(sum2);
resultatruta(sum)

Appendix II User manual

Install program
1. Run Symmetry_pkg.
2. Run the program Symmetry.

Run program
1. Choose an image (it is important that the image is in the same folder as the program).
2. Verify what type of image it is.

CT
3. Define the center-point and how much of the cranium the program should use in the calculations; this point is placed on the unfused side. Using the mouse, you select the points by left-clicking. A point can be moved until you verify its location by double-clicking on it.
4. Place a ROI around the skull. Using the mouse, you specify the region by selecting vertices around the skull by left-clicking. To close the ROI, move the pointer over the initial vertex of the polygon that you selected; the pointer changes to a circle. Click either mouse button. You can move or resize the ROI using the mouse. When you are finished positioning and sizing the ROI, double-clicking or right-clicking inside the region starts the calculations.

Cephalometry
3. Define the center-point and how much of the cranium the program should use in the calculations; this point is placed on the unfused side. Using the mouse, you select the points by left-clicking. A point can be moved until you verify its location by double-clicking on it. Also define the cranium by interactively placing points along the whole edge of the cranium. Once a point is placed it cannot be moved. When the cranium is defined, push Esc to proceed.


More information

CS4442/9542b Artificial Intelligence II prof. Olga Veksler

CS4442/9542b Artificial Intelligence II prof. Olga Veksler CS4442/9542b Artificial Intelligence II prof. Olga Veksler Lecture 2 Computer Vision Introduction, Filtering Some slides from: D. Jacobs, D. Lowe, S. Seitz, A.Efros, X. Li, R. Fergus, J. Hayes, S. Lazebnik,

More information

Histograms. h(r k ) = n k. p(r k )= n k /NM. Histogram: number of times intensity level rk appears in the image

Histograms. h(r k ) = n k. p(r k )= n k /NM. Histogram: number of times intensity level rk appears in the image Histograms h(r k ) = n k Histogram: number of times intensity level rk appears in the image p(r k )= n k /NM normalized histogram also a probability of occurence 1 Histogram of Image Intensities Create

More information

Image Processing

Image Processing Image Processing 159.731 Canny Edge Detection Report Syed Irfanullah, Azeezullah 00297844 Danh Anh Huynh 02136047 1 Canny Edge Detection INTRODUCTION Edges Edges characterize boundaries and are therefore

More information

RT_Image v0.2β User s Guide

RT_Image v0.2β User s Guide RT_Image v0.2β User s Guide RT_Image is a three-dimensional image display and analysis suite developed in IDL (ITT, Boulder, CO). It offers a range of flexible tools for the visualization and quantitation

More information

CHAPTER 6 DETECTION OF MASS USING NOVEL SEGMENTATION, GLCM AND NEURAL NETWORKS

CHAPTER 6 DETECTION OF MASS USING NOVEL SEGMENTATION, GLCM AND NEURAL NETWORKS 130 CHAPTER 6 DETECTION OF MASS USING NOVEL SEGMENTATION, GLCM AND NEURAL NETWORKS A mass is defined as a space-occupying lesion seen in more than one projection and it is described by its shapes and margin

More information

Performance Evaluations for Parallel Image Filter on Multi - Core Computer using Java Threads

Performance Evaluations for Parallel Image Filter on Multi - Core Computer using Java Threads Performance Evaluations for Parallel Image Filter on Multi - Core Computer using Java s Devrim Akgün Computer Engineering of Technology Faculty, Duzce University, Duzce,Turkey ABSTRACT Developing multi

More information

CoE4TN4 Image Processing. Chapter 5 Image Restoration and Reconstruction

CoE4TN4 Image Processing. Chapter 5 Image Restoration and Reconstruction CoE4TN4 Image Processing Chapter 5 Image Restoration and Reconstruction Image Restoration Similar to image enhancement, the ultimate goal of restoration techniques is to improve an image Restoration: a

More information

EE 584 MACHINE VISION

EE 584 MACHINE VISION EE 584 MACHINE VISION Binary Images Analysis Geometrical & Topological Properties Connectedness Binary Algorithms Morphology Binary Images Binary (two-valued; black/white) images gives better efficiency

More information

Image Processing. BITS Pilani. Dr Jagadish Nayak. Dubai Campus

Image Processing. BITS Pilani. Dr Jagadish Nayak. Dubai Campus Image Processing BITS Pilani Dubai Campus Dr Jagadish Nayak Image Segmentation BITS Pilani Dubai Campus Fundamentals Let R be the entire spatial region occupied by an image Process that partitions R into

More information

A MORPHOLOGY-BASED FILTER STRUCTURE FOR EDGE-ENHANCING SMOOTHING

A MORPHOLOGY-BASED FILTER STRUCTURE FOR EDGE-ENHANCING SMOOTHING Proceedings of the 1994 IEEE International Conference on Image Processing (ICIP-94), pp. 530-534. (Austin, Texas, 13-16 November 1994.) A MORPHOLOGY-BASED FILTER STRUCTURE FOR EDGE-ENHANCING SMOOTHING

More information

Digital Image Processing Chapter 11: Image Description and Representation

Digital Image Processing Chapter 11: Image Description and Representation Digital Image Processing Chapter 11: Image Description and Representation Image Representation and Description? Objective: To represent and describe information embedded in an image in other forms that

More information

Global Journal of Engineering Science and Research Management

Global Journal of Engineering Science and Research Management ADVANCED K-MEANS ALGORITHM FOR BRAIN TUMOR DETECTION USING NAIVE BAYES CLASSIFIER Veena Bai K*, Dr. Niharika Kumar * MTech CSE, Department of Computer Science and Engineering, B.N.M. Institute of Technology,

More information

ELEC Dr Reji Mathew Electrical Engineering UNSW

ELEC Dr Reji Mathew Electrical Engineering UNSW ELEC 4622 Dr Reji Mathew Electrical Engineering UNSW Review of Motion Modelling and Estimation Introduction to Motion Modelling & Estimation Forward Motion Backward Motion Block Motion Estimation Motion

More information

A Simple Automated Void Defect Detection for Poor Contrast X-ray Images of BGA

A Simple Automated Void Defect Detection for Poor Contrast X-ray Images of BGA Proceedings of the 3rd International Conference on Industrial Application Engineering 2015 A Simple Automated Void Defect Detection for Poor Contrast X-ray Images of BGA Somchai Nuanprasert a,*, Sueki

More information

Image Segmentation. 1Jyoti Hazrati, 2Kavita Rawat, 3Khush Batra. Dronacharya College Of Engineering, Farrukhnagar, Haryana, India

Image Segmentation. 1Jyoti Hazrati, 2Kavita Rawat, 3Khush Batra. Dronacharya College Of Engineering, Farrukhnagar, Haryana, India Image Segmentation 1Jyoti Hazrati, 2Kavita Rawat, 3Khush Batra Dronacharya College Of Engineering, Farrukhnagar, Haryana, India Dronacharya College Of Engineering, Farrukhnagar, Haryana, India Global Institute

More information

Understanding Tracking and StroMotion of Soccer Ball

Understanding Tracking and StroMotion of Soccer Ball Understanding Tracking and StroMotion of Soccer Ball Nhat H. Nguyen Master Student 205 Witherspoon Hall Charlotte, NC 28223 704 656 2021 rich.uncc@gmail.com ABSTRACT Soccer requires rapid ball movements.

More information

CS443: Digital Imaging and Multimedia Binary Image Analysis. Spring 2008 Ahmed Elgammal Dept. of Computer Science Rutgers University

CS443: Digital Imaging and Multimedia Binary Image Analysis. Spring 2008 Ahmed Elgammal Dept. of Computer Science Rutgers University CS443: Digital Imaging and Multimedia Binary Image Analysis Spring 2008 Ahmed Elgammal Dept. of Computer Science Rutgers University Outlines A Simple Machine Vision System Image segmentation by thresholding

More information

Face Recognition with Local Binary Patterns

Face Recognition with Local Binary Patterns Face Recognition with Local Binary Patterns Bachelor Assignment B.K. Julsing University of Twente Department of Electrical Engineering, Mathematics & Computer Science (EEMCS) Signals & Systems Group (SAS)

More information

Segmentation of Images

Segmentation of Images Segmentation of Images SEGMENTATION If an image has been preprocessed appropriately to remove noise and artifacts, segmentation is often the key step in interpreting the image. Image segmentation is a

More information

Digital Image Analysis and Processing

Digital Image Analysis and Processing Digital Image Analysis and Processing CPE 0907544 Image Segmentation Part II Chapter 10 Sections : 10.3 10.4 Dr. Iyad Jafar Outline Introduction Thresholdingh Fundamentals Basic Global Thresholding Optimal

More information

Course Number: Course Title: Geometry

Course Number: Course Title: Geometry Course Number: 1206310 Course Title: Geometry RELATED GLOSSARY TERM DEFINITIONS (89) Altitude The perpendicular distance from the top of a geometric figure to its opposite side. Angle Two rays or two line

More information

Lecture 5 2D Transformation

Lecture 5 2D Transformation Lecture 5 2D Transformation What is a transformation? In computer graphics an object can be transformed according to position, orientation and size. Exactly what it says - an operation that transforms

More information

Edge Detection for Dental X-ray Image Segmentation using Neural Network approach

Edge Detection for Dental X-ray Image Segmentation using Neural Network approach Volume 1, No. 7, September 2012 ISSN 2278-1080 The International Journal of Computer Science & Applications (TIJCSA) RESEARCH PAPER Available Online at http://www.journalofcomputerscience.com/ Edge Detection

More information

2D and 3D Transformations AUI Course Denbigh Starkey

2D and 3D Transformations AUI Course Denbigh Starkey 2D and 3D Transformations AUI Course Denbigh Starkey. Introduction 2 2. 2D transformations using Cartesian coordinates 3 2. Translation 3 2.2 Rotation 4 2.3 Scaling 6 3. Introduction to homogeneous coordinates

More information

Big Mathematical Ideas and Understandings

Big Mathematical Ideas and Understandings Big Mathematical Ideas and Understandings A Big Idea is a statement of an idea that is central to the learning of mathematics, one that links numerous mathematical understandings into a coherent whole.

More information

Image Compression With Haar Discrete Wavelet Transform

Image Compression With Haar Discrete Wavelet Transform Image Compression With Haar Discrete Wavelet Transform Cory Cox ME 535: Computational Techniques in Mech. Eng. Figure 1 : An example of the 2D discrete wavelet transform that is used in JPEG2000. Source:

More information

Introducing Robotics Vision System to a Manufacturing Robotics Course

Introducing Robotics Vision System to a Manufacturing Robotics Course Paper ID #16241 Introducing Robotics Vision System to a Manufacturing Robotics Course Dr. Yuqiu You, Ohio University c American Society for Engineering Education, 2016 Introducing Robotics Vision System

More information

In this lecture. Background. Background. Background. PAM3012 Digital Image Processing for Radiographers

In this lecture. Background. Background. Background. PAM3012 Digital Image Processing for Radiographers PAM3012 Digital Image Processing for Radiographers Image Enhancement in the Spatial Domain (Part I) In this lecture Image Enhancement Introduction to spatial domain Information Greyscale transformations

More information

Ch 22 Inspection Technologies

Ch 22 Inspection Technologies Ch 22 Inspection Technologies Sections: 1. Inspection Metrology 2. Contact vs. Noncontact Inspection Techniques 3. Conventional Measuring and Gaging Techniques 4. Coordinate Measuring Machines 5. Surface

More information

CS 664 Segmentation. Daniel Huttenlocher

CS 664 Segmentation. Daniel Huttenlocher CS 664 Segmentation Daniel Huttenlocher Grouping Perceptual Organization Structural relationships between tokens Parallelism, symmetry, alignment Similarity of token properties Often strong psychophysical

More information

Lecture 4: Spatial Domain Transformations

Lecture 4: Spatial Domain Transformations # Lecture 4: Spatial Domain Transformations Saad J Bedros sbedros@umn.edu Reminder 2 nd Quiz on the manipulator Part is this Fri, April 7 205, :5 AM to :0 PM Open Book, Open Notes, Focus on the material

More information

AN EFFICIENT BINARIZATION TECHNIQUE FOR FINGERPRINT IMAGES S. B. SRIDEVI M.Tech., Department of ECE

AN EFFICIENT BINARIZATION TECHNIQUE FOR FINGERPRINT IMAGES S. B. SRIDEVI M.Tech., Department of ECE AN EFFICIENT BINARIZATION TECHNIQUE FOR FINGERPRINT IMAGES S. B. SRIDEVI M.Tech., Department of ECE sbsridevi89@gmail.com 287 ABSTRACT Fingerprint identification is the most prominent method of biometric

More information

School District of Marshfield Mathematics Standards

School District of Marshfield Mathematics Standards MATHEMATICS Counting and Cardinality, Operations and Algebraic Thinking, Number and Operations in Base Ten, Measurement and Data, and Geometry Operations and Algebraic Thinking Represent and Solve Problems

More information

Univariate Statistics Summary

Univariate Statistics Summary Further Maths Univariate Statistics Summary Types of Data Data can be classified as categorical or numerical. Categorical data are observations or records that are arranged according to category. For example:

More information

Lecture #5. Point transformations (cont.) Histogram transformations. Intro to neighborhoods and spatial filtering

Lecture #5. Point transformations (cont.) Histogram transformations. Intro to neighborhoods and spatial filtering Lecture #5 Point transformations (cont.) Histogram transformations Equalization Specification Local vs. global operations Intro to neighborhoods and spatial filtering Brightness & Contrast 2002 R. C. Gonzalez

More information

Points Lines Connected points X-Y Scatter. X-Y Matrix Star Plot Histogram Box Plot. Bar Group Bar Stacked H-Bar Grouped H-Bar Stacked

Points Lines Connected points X-Y Scatter. X-Y Matrix Star Plot Histogram Box Plot. Bar Group Bar Stacked H-Bar Grouped H-Bar Stacked Plotting Menu: QCExpert Plotting Module graphs offers various tools for visualization of uni- and multivariate data. Settings and options in different types of graphs allow for modifications and customizations

More information

Ulrik Söderström 16 Feb Image Processing. Segmentation

Ulrik Söderström 16 Feb Image Processing. Segmentation Ulrik Söderström ulrik.soderstrom@tfe.umu.se 16 Feb 2011 Image Processing Segmentation What is Image Segmentation? To be able to extract information from an image it is common to subdivide it into background

More information

Digital Image Processing. Image Enhancement - Filtering

Digital Image Processing. Image Enhancement - Filtering Digital Image Processing Image Enhancement - Filtering Derivative Derivative is defined as a rate of change. Discrete Derivative Finite Distance Example Derivatives in 2-dimension Derivatives of Images

More information