Model Based Segmentation of Clinical Knee MRI


Model Based Segmentation of Clinical Knee MRI

Tina Kapur 1,2, Paul A. Beardsley 2, Sarah F. Gibson 2, W. Eric L. Grimson 1, William M. Wells 1,3

corresponding author: tkapur@ai.mit.edu

Abstract
A method for model based segmentation of 3D Magnetic Resonance Imaging (MRI) scans of the human knee is presented. A probabilistic model describing the spatial relationships between features of the human knee is constructed from 3D manually segmented data. In conjunction with feature detection techniques from low-level computer vision, this model is used to segment knee MRI scans in a Bayesian framework.

1. Motivation
We are interested in segmentation of clinical MRI scans for pre- as well as intra-operative visualization and modeling. Specifically, segmentation will be necessary to provide patient-specific anatomical models for a surgical simulation system [4]. In this system, computer-based knee models are combined with physically-based anatomical modeling, high quality visual rendering, and force-reflective interface devices to provide the surgeon with a virtual testbed for planning and rehearsing patient-specific procedures. Current knee models for the surgical simulator have been painstakingly hand-segmented by medical professionals. The hand segmentation has required up to 6 hours per knee model, an effort that would not be economical or practical for patient-specific models. We are investigating techniques to fully automate the segmentation process wherever possible, and to support and accelerate manual segmentation where full automation breaks down. An automated segmentation system would also be beneficial to studies that evaluate the role of specific drugs and/or therapies for cartilage regrowth by facilitating comparison between the pre- and post-therapy cartilage volumes.
1 Massachusetts Institute of Technology, Artificial Intelligence Laboratory, Cambridge MA
2 Mitsubishi Electric Research Laboratory, Cambridge MA
3 Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston MA

Figure 1. High resolution knee scan. Resolution is 0.25 x 0.25 x 1.5 mm.

Why is Knee Segmentation Difficult: Figure 1 is a single slice from a 3D MRI image of a human knee. This high quality image has an in-plane resolution of approximately 0.25 x 0.25 mm and a between-plane resolution of 1.5 mm. The image was generated for research purposes with a special high resolution receiver coil and a scan time of 1 hour. While specific anatomical structures in this image are very apparent to the trained eye, it is important to note that even in this high quality image, many standard automatic segmentation methods fail. For example, simple intensity based classification will not provide a reasonable segmentation because the different tissues have overlapping image intensity values and because there is a

strong 3D gain field that modulates the image intensity [14]. In addition, edge- and surface-based segmentation methods fail to provide boundaries between tissues such as cortical bone and ligaments because these tissues have similar intensities and no clear separation in the scanned image.

Figure 2. Clinical knee scan. Resolution is 0.2 x 0.2 x 2 mm.

Figure 2 is a slice through a 3D MRI knee image taken in a typical clinical scan. Because it is difficult for patients to remain still during a long imaging session and because of the cost of scan time, most clinical scans are limited to 20 minutes. This results in a poorer signal-to-noise ratio and lower image resolution than the image in Figure 1, exacerbating the segmentation difficulties discussed above. Trained clinicians who interpret or segment clinical MRI scans rely on implicit models of knee anatomy and pathology that they have built up through their education and practical experience. It is reasonable to assume that a computer-based segmentation system would also benefit from such a model. In this paper, we discuss one method to use anatomical models to assist in the automation of medical image segmentation.

2. Previous Work
General Model Based Methods for Segmentation of Medical Images: Most existing medical image analysis techniques that incorporate anatomical models into the segmentation process use a parameterized shape-based approach. In these methods, statistical parameters for parameterized shape models of the desired structures are generated from hand-segmented data and used to influence the computer-based segmentation [1, 3, 8, 9, 11, 12]. A recent review of various deformable models that have been used for segmentation can be found in [7]. A different type of statistical model is proposed by Kamber et al. [5] for brain MRI scans. It is constructed by registering and averaging the voxel intensities of normal brain scans to provide spatial prior probabilities for intensity based classification.
Model-Based Methods in Knee Segmentation: Solloway et al. [8] use a parameterized shape-based method for segmenting femoral cartilage. Their method uses principal component analysis to encode shape and intensity variations across a training set of landmark locations. During segmentation, the mean locations of the landmarks from the training data are superimposed on the image. These locations are subsequently adjusted, within an allowable range of motion determined from the principal component analysis, to maximize the similarity between local intensities around the landmark locations in the model and the image. Warfield [13] is currently developing a model driven segmentation method for cartilage segmentation from clinical knee MRI images.

3. Segmentation of Knee MRI
Our overall strategy is to treat knee segmentation as a sequential process, segmenting first the large scale and most easily obtainable structures, and using the results to guide the segmentation of subsequent structures. The sequential approach allows us to employ a specific algorithm for a specific anatomical structure at each new stage. It also allows us to accumulate and utilize constraints as the segmentation proceeds. For example, by starting with the automatic segmentation of the femur, we obtain spatial constraints on neighbouring structures such as femoral cartilage (utilization of spatial constraints to guide segmentation is described below in detail). Furthermore, the segmentation of large-scale structures at the start of the process offers the possibility of calibrating the gain field and applying a correction to the pixel intensities before continuing (see [6] for details). Our initial work on this approach has looked at the segmentation of just four structures - the femur and the femoral cartilage, and the tibia and tibial cartilage - but we present the work in terms of the overall framework for a full knee segmentation. We next describe the atlas, which is our

repository for a priori information to be used in the segmentation, and then the algorithms themselves.

Figure 3. Flowchart for Model Based Segmentation of Knee.

The Atlas
Our atlas is a manually segmented knee data set with some annotated information. The annotations are of two types: (a) the sequence of algorithms to be used during segmentation, each with an associated location where it is to be applied, and (b) a priori data used by the algorithms. The first step in the processing is to align the atlas with the data. This is done manually with an interactive tool which allows the atlas to be rotated, translated and scaled relative to the data set. The user interface is shown in Figure 4. The aligned atlas is used to provide seed regions in the central parts of the femur and the tibia (a coarse manual alignment of atlas and data is sufficient for this purpose).

Segmentation of the femur/tibia
This is a two-step process which uses as input the interactively obtained seed regions. The first step is an adaptive region growing which analyzes the local texture properties of each seed region and uses them as a homogeneity criterion for growing the bone. As the region grows, the statistics of the local texture are updated, hence making the process adaptive. To prevent leaks in the texture-based region growing, connected contours of edges are used as a heterogeneity criterion, i.e. they are used to stop the growth of the region. In the second step, the boundary of the bone is localized using a snake-like regularizer that is discussed in [6]. Figure 5 shows the seed region on a clinical MRI scan, the result of the adaptive region growing from that seed, and the result of regularizing that boundary.

Figure 4. Alignment of the atlas with the data. Three orthogonal cross-sections are obtained for a selected location in the data set and are shown in 2D (left column) and 3D (right side). The atlas is also shown on the 3D display. The 3D display allows change of overall viewpoint and translation/rotation/scaling of the atlas relative to the data. The central column shows the slices through the atlas made by the cross-sections, for the current alignment.

Our feature detector finds the boundary of soft bone. In clinical scans, such as this one, the contrast between cortical bone, cartilage and ligaments is poor. Therefore, it is difficult to differentiate between cortical bone and cartilage. We are investigating different imaging protocols that have been adjusted for improved contrast.

Segmentation of the femoral/tibial cartilage
(a) Modelling spatial relationships
We utilize a model which encodes the spatial relationship between the femur and femoral cartilage, and the tibia and tibial cartilage. Figure 6 illustrates this relationship for a segmented slice with these four structures annotated. We observe that the spatial relationship between the femur and femoral cartilage is well described by the distance (d) between the cartilage and the surface of the femur, as well as the local orientation (n) of the femur where the cartilage connects to it. Thus we describe the relationship between the femoral cartilage and the femur by the class conditional probability density function:

P(n_i, d_i | x_i ∈ C_F)    (1)

where
x_i are the spatial coordinates of the i-th data voxel,
C_F is the set of all voxels belonging to femoral cartilage,
F is the set of all voxels belonging to the femur,
d_i is short for d_i(F), which is the distance from x_i to the surface of the femur, and
n_i is short for n_i(F), which is the direction of the surface normal at p_i, where p_i is the point on the femur surface that is closest to x_i.

A non-parametric estimate for this joint density function is constructed from examples in which both the cartilage and the femur have been manually labelled by experts. The training procedure is implemented using the following four steps, applied sequentially to each image I_t in the training set:

1. For each point on the surface of the femur in image I_t, compute the direction of the local normal. Quantize the direction into 5 uniformly spaced bins between -π and π.
2. Compute the Chamfer [1] distance from each point in the image I_t to the femur. Saturate the distance at 25 (an empirically determined value).
3. For each cartilage point in I_t, look up two numbers: the distance d_i to the closest femur point (using the chamfer map computed in step 2) and the orientation n_i of the femur at that point (using the precomputed normals in step 1).
4. Histogram the values d_i and n_i jointly. If I_t is the last image in the training set, normalize the histogram to obtain an empirical estimate of the joint density of d_i and n_i for cartilage.

We have used a histogram to estimate the density from sample points. Other methods, such as Parzen windowing for density estimation (discussed in [2]), could be used effectively as well. Figure 7 shows an estimate of the joint density constructed using the above method. Note that this estimated density function is localized, i.e. it has high information content (or low entropy), indicating that d and n are reasonable indicators of the relationship between the femur and its cartilage.
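The four training steps above can be sketched in a minimal 2-D form. The code below is an illustrative sketch, not the authors' implementation: it assumes numpy/scipy are available, substitutes scipy's exact Euclidean distance transform for the chamfer map of [1], and estimates surface normals from the gradient of a smoothed femur mask.

```python
import numpy as np
from scipy import ndimage

N_BINS_N, MAX_D = 5, 25  # 5 normal bins over [-pi, pi]; distances saturated at 25


def train_joint_density(femur, cartilage):
    """Empirical P(n, d | cartilage) from one labelled 2-D slice (Eq. 1).

    femur, cartilage : binary 2-D masks.
    Returns a (MAX_D + 1, N_BINS_N) histogram normalized to sum to 1.
    """
    # Step 1: normal directions, from the gradient of a smoothed femur mask.
    smooth = ndimage.gaussian_filter(femur.astype(float), sigma=2.0)
    gy, gx = np.gradient(smooth)
    angle = np.arctan2(gy, gx)                                  # in [-pi, pi]
    n_bin = np.clip(((angle + np.pi) / (2 * np.pi) * N_BINS_N).astype(int),
                    0, N_BINS_N - 1)

    # Step 2: distance from every voxel to the femur, plus the index of the
    # nearest femur point (Euclidean, standing in for the chamfer map of [1]).
    dist, (iy, ix) = ndimage.distance_transform_edt(~femur.astype(bool),
                                                    return_indices=True)
    d = np.minimum(dist.astype(int), MAX_D)                     # saturate at 25

    # Steps 3-4: for each cartilage voxel, look up (d_i, n_i) of the nearest
    # femur surface point and histogram the pairs jointly.
    cy, cx = np.nonzero(cartilage)
    d_i = d[cy, cx]
    n_i = n_bin[iy[cy, cx], ix[cy, cx]]
    hist = np.zeros((MAX_D + 1, N_BINS_N))
    np.add.at(hist, (d_i, n_i), 1.0)
    return hist / hist.sum()
```

On a synthetic slice with cartilage hugging the femur surface, the resulting histogram concentrates its mass at small distances, mirroring the localized density of Figure 7.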
We construct P(n_i(T), d_i(T) | x_i ∈ C_T), an estimate of the pdf describing the relationship between the tibial cartilage (C_T) and the tibia (T), in a similar fashion. This pdf is shown in Figure 8. Note that this model is a combination of heuristics and training from examples. The heuristic used is that surface normals and distance are pertinent parameters of the relationship between the femur and its cartilage, while examples are used to estimate probability distributions on these parameters. We believe that information-theoretic schemes could be devised for automatically deducing the relationships from examples as well.

(b) Bayesian Classification
Our next step is to use the model and the features (F, T) that have already been labelled to identify cartilage in the data. We do this in two stages:

The Inference Stage: Compute the posterior probability that a voxel location should be classified as femoral cartilage based on observations of its intensity and spatial relation to the femur.

The Decision Stage: Classify each voxel as cartilage or not-cartilage using the probabilities computed in the inference stage.

Bayes rule allows us to express the posterior probability that a voxel should be classified as femoral cartilage based on observations of its intensity and spatial relation to the femur, P(x_i ∈ C_F | n_i(F), d_i(F), I_i), as a product of the prior probability that a given voxel belongs to cartilage, P(x_i ∈ C_F), and the class conditional density P(n_i(F), d_i(F), I_i | x_i ∈ C_F), as follows:

P(x_i ∈ C_F | n_i, d_i, I_i) = P(n_i, d_i, I_i | x_i ∈ C_F) P(x_i ∈ C_F) / P(n_i, d_i, I_i)    (2)

where x_i, n_i, d_i, C_F and F represent the same quantities as in Equation 1, and I_i is the intensity at x_i.

This can be rewritten, assuming independence between the intensity at a voxel and its spatial relationship to the femur, as:

P(x_i ∈ C_F | n_i, d_i, I_i) = P(n_i, d_i | x_i ∈ C_F) P(I_i | x_i ∈ C_F) P(x_i ∈ C_F) / P(n_i, d_i, I_i)    (3)

The first term in the numerator on the right hand side is the class conditional density for the model parameters, and is estimated using the method described in the previous section. The second term in the numerator is the class conditional density function that conventional classifiers use. It can either be stored as part of the model for each relevant imaging protocol (since the appearance of structures varies depending on the protocol), or be obtained from samples of cartilage from the data. We use a stored parametric (Gaussian) model as shown in Figure 9. The third term in the numerator, P(x_i ∈ C_F), is the prior probability that a voxel belongs to cartilage. This is computed as the ratio of the cartilage volume to the total volume of the knee in the scan. The denominator is a scaling term that we ignore.

The decision stage currently thresholds the probabilities using hysteresis to decide between cartilage and non-cartilage. As more spatial relationships are incorporated in the model, a maximum a posteriori (MAP) decision rule will be used.

4. Results and Discussion
Currently we have a database of seven clinical scans and three high resolution healthy knee scans that have been manually segmented. The results presented in this section were obtained using one of the high resolution scans (size 512x512x87 voxels and resolution 0.25 x 0.25 x 1.4 mm). We trained and tested our system using leave-k-out cross-validation, i.e. in a typical run we trained the system on several (usually 6) two-dimensional slices from a single three-dimensional scan and tested on the remaining few slices of the same scan. We repeated this for different partitions of the scan. The mean distance between the cartilage segmentation generated by our system and the segmentation generated manually by experts was 1.25 pixels, or 0.32 mm. This distance was computed over three trials, each testing different sets of images. We used manually segmented models of the femur and the tibia to train the system. This allows us to focus on the performance of our modeling of spatial relationships independent of the performance of the feature detector. We show results for three different slices (shown in Figure 10) of the scan, which were not used in the training process.
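The inference stage (Eq. 3, dropping the ignored evidence term) and the hysteresis decision stage might be sketched as follows. This is an illustrative sketch under stated assumptions, not the paper's code: function names, thresholds, and the table layout are hypothetical, and scipy's connected-component labelling stands in for whatever hysteresis implementation was used.

```python
import numpy as np
from scipy import ndimage


def cartilage_posterior(p_nd, d, n, intensity, mu, sigma, prior):
    """Per-voxel posterior of Eq. 3, up to the ignored evidence term P(n, d, I).

    p_nd      : trained joint table P(n, d | cartilage), indexed as [d, n]
    d, n      : arrays of saturated distances and quantized normal bins of the
                nearest femur surface point, one entry per voxel considered
    mu, sigma : parameters of the Gaussian intensity model P(I | cartilage)
    prior     : scalar P(x in cartilage) = cartilage volume / knee volume
    """
    p_spatial = p_nd[d, n]                                      # P(n, d | C_F)
    p_intensity = (np.exp(-0.5 * ((intensity - mu) / sigma) ** 2)
                   / (sigma * np.sqrt(2.0 * np.pi)))            # P(I | C_F)
    return p_spatial * p_intensity * prior


def hysteresis_decide(posterior, lo, hi):
    """Decision stage: keep weakly probable voxels (>= lo) only where their
    connected component contains at least one strongly probable voxel (>= hi)."""
    weak = posterior >= lo
    strong = posterior >= hi
    labels, _ = ndimage.label(weak)        # connected components of the weak mask
    keep = np.unique(labels[strong])       # components containing a strong voxel
    return np.isin(labels, keep[keep > 0])
```

The two thresholds trade off false positives against dropped cartilage: isolated weak responses are discarded, while weak responses attached to a confident core survive.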
Figure 12 shows the segmentation of femoral cartilage generated by our system for these three slices. Figure 11 shows the corresponding manual segmentation for visual comparison. Note that the location of the cartilage generated by our model compares well with the manual segmentation. However, three weaknesses of the current implementation are apparent:

1. False positives in the first two slices in Figure 12 indicate the need for finer quantization of orientation in the estimated joint density of distances and normals. The optimal quantization may require more training data to estimate the joint density (currently we estimate the joint density over 25x5=125 bins using approximately 5-1 cartilage points). We defer this task until multiple three-dimensional scans are registered to construct the model, thereby providing a sufficiently large sample size.
2. Competing intensity models for structures such as muscle, ligament, and meniscus are needed in addition to the existing intensity model for cartilage. This would prevent some of the false positives in the second image in Figure 12.
3. Heuristic post-processing of the data using techniques such as mathematical morphology and connected component analysis could improve the results.

5. Future Work
This paper has described initial work on a system for knee segmentation. The work has focussed on creating a framework in which to place a variety of algorithms specific to segmenting individual parts of the knee. Within this framework we have investigated the segmentation of the femur and tibia, and the femoral and tibial cartilage. Our short-term goal is to look at these algorithms more closely, finding ways to vary the parameters and thresholds which inevitably arise in the processing, and automatically assessing what values give a good segmentation.
While our assumptions about the relevant spatial relationships between the femur and its associated cartilage were based on heuristics, future work will include research into techniques to automatically generate such relationships from a database of pre-segmented images. Our long-term goal is the generation of patient-specific models of knee anatomy for use in a surgical simulator. This will involve not only the development of a range of algorithms for segmenting healthy knees and knees with pathologies, but also a user interface which readily facilitates visual assessment of the quality of the segmentation, and allows user guidance and modifications to the segmentation process.

Acknowledgements: The authors thank Dr. Ron Kikinis and Dr. Jens Richolt of the Surgical Planning Lab at Brigham and Women's Hospital for the data used in this paper, as well as for discussions that were helpful in developing the ideas presented here. Thanks to Dr. Shin Nakajima and Dr. Akira Sawada as well for providing some of the manual segmentations.

References
[1] G. Borgefors, Distance transformations in digital images, Computer Vision, Graphics, and Image Processing, No. 34, pp. 344-371, 1986.
[2] R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, John Wiley and Sons, 1973.

[3] D. Fritsch, S. Pizer, L. Yu, V. Johnson, E. Chaney, Segmentation of Medical Image Objects Using Deformable Shape Loci, Proc. Information Processing in Medical Imaging, pp. 127-140, 1997.
[4] S. Gibson, C. Fyock, E. Grimson, T. Kanade, R. Kikinis, H. Lauer, N. McKenzie, A. Mor, S. Nakajima, H. Ohkami, R. Osborne, J. Samosky, A. Sawada, Volumetric object modeling for surgical simulation, accepted for publication, Medical Image Analysis, 1998.
[5] M. Kamber, D. Collins, R. Shinghal, G. Francis, and A. Evans, Model-based 3D segmentation of multiple sclerosis lesions in dual-echo MRI data, Proc. Visualization in Biomedical Computing, 1992.
[6] T. Kapur, P. A. Beardsley, S. F. Gibson, Segmentation of Bone in Knee MRI using Adaptive Region Growing, MERL Tech Report.
[7] T. McInerney and D. Terzopoulos, Deformable models in medical image analysis: a survey, Medical Image Analysis, Vol. 1, No. 2, pp. 91-108, 1996.
[8] S. Solloway, C. Hutchinson, J. Waterton, and C. J. Taylor, The Use of Active Shape Models for Making Thickness Measurements of Articular Cartilage from MR Images, MRM 37:943-952, 1997.
[9] M. Sonka, S. Tadikonda, S. Collins, Knowledge-based interpretation of MR brain images, IEEE Trans. Med. Imaging, Vol. 15, No. 4, pp. 443-452, 1996.
[10] L. Staib and J. Duncan, Boundary finding with parametrically deformable models, IEEE Trans. PAMI, Vol. 14, No. 11, pp. 1061-1075, 1992.
[11] G. Szekely, A. Kelemen, C. Brechbuhler, and G. Gerig, Segmentation of 2D and 3D objects from MRI volume data using constrained elastic deformations of flexible Fourier contour and surface models, Medical Image Analysis, Vol. 1, No. 1, pp. 19-34, 1996.
[12] B. Vemuri, A. Radisavljevic, and C. Leonard, Multiresolution stochastic 3D shape models for image segmentation, Proc. Information Processing in Medical Imaging, pp. 62-75, 1993.
[13] S. Warfield, Brigham and Women's Hospital, personal communication, July 1997.
[14] W. Wells, R. Kikinis, E. Grimson, and F. Jolesz, Statistical Intensity Correction and Segmentation of Magnetic Resonance Image Data, Proceedings of the Third Conference on Visualization in Biomedical Computing, Rochester, MN, 1994.

Figure 5. The top image shows a slice from a clinical MRI scan along with the seed region that was obtained interactively. The middle image shows the result of adaptive region growing using the seed. The bottom image shows the result of running a snake-like regularizer on the boundary obtained from region growing.
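The adaptive region-growing step illustrated in Figure 5 can be sketched in a simplified 2-D form. This is a hedged stand-in, not the method of [6]: it uses running intensity statistics in place of the paper's local texture statistics, and the tolerance parameter k is an assumption introduced here.

```python
import numpy as np
from collections import deque


def adaptive_region_grow(image, seed_mask, edge_mask, k=2.5):
    """Grow a region from a seed, accepting 4-neighbours whose intensity lies
    within k standard deviations of the running region statistics.

    edge_mask marks connected edge contours that act as the heterogeneity
    criterion, i.e. they stop growth (preventing leaks). The statistics are
    updated as voxels are accepted, which is what makes the process adaptive.
    k is an assumed tolerance parameter, not taken from the paper.
    """
    region = seed_mask.astype(bool).copy()
    vals = image[region].astype(float)
    mean, var = vals.mean(), vals.var() + 1e-6          # avoid a zero variance
    queue = deque(zip(*np.nonzero(region)))
    H, W = image.shape
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if not (0 <= ny < H and 0 <= nx < W):
                continue
            if region[ny, nx] or edge_mask[ny, nx]:
                continue                # already grown, or blocked by an edge
            if abs(image[ny, nx] - mean) <= k * np.sqrt(var):
                region[ny, nx] = True
                queue.append((ny, nx))
                # online update of the running statistics -> "adaptive"
                n = region.sum()
                delta = image[ny, nx] - mean
                mean += delta / n
                var += (delta * (image[ny, nx] - mean) - var) / n
    return region
```

On a toy image with two flat intensity regions and a seed in one of them, the grower fills the homogeneous region and refuses the other; in practice the snake-like regularizer of [6] would then smooth the recovered boundary.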

Figure 6. A manually segmented slice from a knee MRI which shows four structures: femur, tibia, femoral cartilage and tibial cartilage.

Figure 7. P(n_i(F), d_i(F) | x_i ∈ C_F). Empirical estimate of the joint density for normals n_i and distances d_i for femoral cartilage.

Figure 8. P(n_i(T), d_i(T) | x_i ∈ C_T). Empirical estimate of the joint density for normals n_i and distances d_i for tibial cartilage.

Figure 9. P(I | C_F). Empirical estimate of the probability density for pixel intensity of femoral cartilage.

Figure 10. Three slices from a knee MRI scan. These slices were used to test our cartilage modelling system, and were not included in the set of images used to train the system. Figure 11. Manually generated cartilage segmentation is overlaid in white on the greyscale slices. This figure is presented for visual comparison with the results generated by our system in Figure 12.

Figure 12. The cartilage segmentation generated by our system is overlaid in white on the greyscale slices. Note that in all cases the cartilage is well localized. False positives in the middle slice (compare with corresponding manual segmentation in Figure 11) are discussed in the text.