Validation of New Gibbs Priors for Bayesian Tomographic Reconstruction Using Numerical Studies and Physically Acquired Data

S. J. Lee, Member, IEEE, Y. Choi, Member, IEEE, and G. Gindi, Member, IEEE

Department of Electronic Engineering, Paichai University, Taejon, Korea
Department of Nuclear Medicine, Samsung Biomedical Research Institute, Samsung Medical Center, Seoul, Korea
Department of Radiology, SUNY at Stony Brook, Stony Brook, NY

This work was supported in part by a grant (HMP-98-E-1-0008) of the Good Health R&D Project from the Ministry of Health and Welfare, Korea. Y. Choi was supported in part by Samsung Grant SBRI C-97-023.

Abstract

The variety of Bayesian MAP approaches to emission tomography proposed in recent years can both stabilize reconstructions and lead to improved bias and variance. In our previous work [1, 2], we showed that the thin-plate (TP) prior, which is less sensitive to variations in first spatial derivatives than the conventional membrane (MM) prior, yields improved reconstructions in the sense of low bias. Despite the several advantages of such quadratic smoothing priors, they remain less than ideal due to their limitations in edge preservation. In this paper we use a convex but nonquadratic (CNQ) potential function, which provides a degree of edge preservation. As in the case of quadratic priors, a class of two-dimensional smoothing splines with first and second partial derivatives is applied to the new potential function. To reduce difficulties such as oversmoothing for MM and edge overshooting for TP, we also generalize the prior energy definition to a linear combination of MM and TP using a control parameter [3], and observe its transition between the two extreme cases. To validate the advantages of our new priors, we first perform extensive numerical studies using a digital phantom to compare the bias/variance behavior of CNQ priors with that of quadratic priors. We also use physically acquired PET emission and transmission data from phantoms to observe the efficacies of our new priors. Our numerical studies and results using physical phantoms show that a combination of first and second partial derivatives applied to the CNQ potential yields improved quantitative results, in terms of scalar metrics of image quality computed from independent noise trials, and good qualitative results for both emission and transmission images.

I. INTRODUCTION

Over the last decade, Bayesian maximum a posteriori (MAP) approaches have been an active topic in image reconstruction for emission computed tomography (ECT). This is due mainly to the fact that the MAP approach not only models the imaging system and the statistical character of the data in a natural way, but also allows the incorporation of a priori information on the underlying source distribution. As in the case of emission images, statistical reconstruction methods for transmission images in ECT have also been of interest, since they can reduce major problems that arise in filtered backprojection (FBP), such as singularities and systematic biases for low-count scans [4]. Many statistical methods are based on the original transmission ML-EM (maximum-likelihood expectation-maximization) algorithm proposed by Lange and Carson [5], and some have been extended to MAP approaches by incorporating Bayesian smoothing priors [4, 6].
Most common MAP approaches involve assumptions on the local spatial characteristics of the underlying source and model the prior probability as a Gibbs distribution, which is equivalent to a Markov random field model. In these approaches, neighboring pixels in the underlying source are assumed to have similar intensities. A host of different formulations for Gibbs priors have been proposed in the literature. In particular, nonquadratic smoothing priors that allow spatial discontinuities have led to good performance [7, 8, 9, 1]. In our early work [1], a nonquadratic prior (the weak plate [10]) that imposed piecewise smoothness on the first derivative of the solution led to much improved bias/variance behavior relative to results obtained using a more conventional nonquadratic prior (the weak membrane) that imposed piecewise smoothness on the zeroth derivative. In spite of the good performance of the weak plate, it suffered difficulties in optimization due to the nonconvexity of its potential function. To overcome these problems, we proposed [2] a new quadratic smoothing prior, the thin plate, and showed that, by relaxing the requirement of imposing spatial discontinuities and using instead a quadratic (no discontinuities) smoothing prior, the algorithms become easier to analyze, the solutions easier to compute, and hyperparameter calculation less of a problem. In this case, the bias/variance advantages of the weak plate over the weak membrane were retained for the quadratic versions of the priors, the thin plate vs. the membrane. Nevertheless, this model is still less than ideal due to its oversmoothing action in edge regions. In this paper we use a convex-nonquadratic potential function [11]. Our motivation for choosing such a potential function stems from the comparison of the quantitative performance of the weak plate and the thin plate reported in [1, 2]; the advantages of each prior model in our previous work encouraged us to investigate whether those advantages might be merged in a single prior by choosing a convex version of the nonquadratic potential function.

We note that, by choosing a potential function which is nonquadratic but still convex, both the nonconvexity involved in some nonquadratic priors and the oversmoothing in edge regions of quadratic priors may be avoided. In this case, the new potential function may be considered a compromise between the quadratic function of the thin plate and the nonquadratic broken-parabola function [1] of the weak plate. Since the new potential function is nonquadratic, large jumps in pixel intensity can be preserved. In addition, the convexity of the function not only provides for a globally convergent optimization algorithm, but also supports a variety of useful theoretical noise analyses [12]. In this work, we focus on the validation of our new priors using numerical simulations as well as physically acquired PET (positron emission tomography) data from several phantoms. The remainder of this paper describes our new prior models, develops the reconstruction algorithms, and presents the results of numerical studies and experiments showing good performance on real PET emission/transmission data.

II. CONVEX-NONQUADRATIC PRIORS

The general form of the Gibbs distribution which we will use is given by

    Pr(F = f) = \frac{1}{Z} \exp[-\lambda E(f)],

where F is the two-dimensional (2-D) random field, f the 2-D source distribution comprising pixel components, \lambda the positive hyperparameter that weights the prior relative to the likelihood term, and Z a normalization of no concern here. The function E(f) is the Gibbs prior energy, which is defined by the sum of energies of individual cliques in a local neighborhood N_{ij} of a pixel located at (i, j):

    E(f) = \sum_{ij} \sum_{(i',j') \in N_{ij}} \psi(f_{ij} - f_{i'j'}).

The argument \Delta of the potential function \psi(\Delta) is usually defined as a discrete first spatial derivative. In order to model gradual transition regions, however, the argument can also take the form of a discrete second spatial derivative. One widely used potential function takes a simple quadratic form: \psi(\Delta) = \Delta^2. In this case, the corresponding quadratic prior energies used in our previous work [2] are

    E_{MM}(f) = \sum_{ij} \left[ f_h^2(i,j) + f_v^2(i,j) \right]   and
    E_{TP}(f) = \sum_{ij} \left[ f_{hh}^2(i,j) + 2 f_{hv}^2(i,j) + f_{vv}^2(i,j) \right]

for the membrane and the thin plate, respectively, where f_h(i,j) and f_v(i,j) are the discrete first partial derivatives of the 2-D source distribution in the horizontal and vertical directions, respectively, f_{hh}(i,j) and f_{vv}(i,j) the discrete second partial derivatives in the horizontal and vertical directions, respectively, and f_{hv}(i,j) the second partial cross derivative. Note that, for the thin-plate prior, second partial derivative terms are used as the argument of the Gibbs potential function.

[Figure 1: Noiseless MAP reconstructions using quadratic smoothing priors, which show oversmoothing and overshooting errors in edge regions: (a) phantom; (b) MAP with membrane prior; (c) MAP with hybrid prior; (d) MAP with thin-plate prior.]

One could also consider a generalized form of the quadratic prior energy given by a linear combination of the membrane (MM) and the thin plate (TP):

    E(f) = (1 - \tau) E_{MM}(f) + \tau E_{TP}(f),                    (1)

where \tau \in [0, 1] is the weight parameter; \tau = 0 reduces (1) to the membrane energy and \tau = 1 to the thin-plate energy. It is important to point out that, while the solution obtained from the MM prior oversmoothes discontinuities and incurs large bias error, the TP solution often exhibits overshoots around discontinuities, which may be the major drawback of the TP prior [3].
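To make these energy definitions concrete, the following minimal numpy sketch evaluates E_MM, E_TP, and the hybrid energy of Eq. (1) with simple finite differences; the function name, the difference scheme, and the interior-only boundary handling are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

def quadratic_prior_energies(f, tau=0.5):
    """Membrane (MM), thin-plate (TP), and hybrid Gibbs prior energies
    for a 2-D image f, using forward finite differences. Each np.diff
    shrinks the array by one sample, so the derivatives are evaluated
    on the interior only (one of several reasonable boundary choices)."""
    f_h = np.diff(f, axis=1)                    # first differences, horizontal
    f_v = np.diff(f, axis=0)                    # first differences, vertical
    f_hh = np.diff(f, n=2, axis=1)              # second differences
    f_vv = np.diff(f, n=2, axis=0)
    f_hv = np.diff(np.diff(f, axis=0), axis=1)  # cross difference

    e_mm = np.sum(f_h ** 2) + np.sum(f_v ** 2)                              # membrane
    e_tp = np.sum(f_hh ** 2) + 2.0 * np.sum(f_hv ** 2) + np.sum(f_vv ** 2)  # thin plate
    e_hybrid = (1.0 - tau) * e_mm + tau * e_tp                              # Eq. (1)
    return e_mm, e_tp, e_hybrid

print(quadratic_prior_energies(np.random.rand(64, 64), tau=0.5))
```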
On the other hand, it has been shown [3] that the hybrid model, obtained by combining MM and TP using (1) with \tau = 0.5, alleviates such problems (see Fig. 1). Although the quadratic priors have several advantages and can be further improved by the hybrid model, they are still less than ideal due to their fundamental limitations in edge preservation. In order to overcome this problem, we use a convex-nonquadratic (CNQ) potential function, given by [11]

    \psi(\Delta) = |\Delta|^{\alpha},                                (2)

where 1 < \alpha < 2. One can easily see that the function in (2) provides a degree of edge preservation by penalizing large pixel differences less than small pixel differences. Figure 2 shows the derivative potential function, \partial\psi(\Delta)/\partial\Delta, whose magnitude specifies the strength of smoothing [13]. Unlike quadratic priors, for which the increment of the penalty (the strength of smoothing) with respect to the increment of the pixel difference is constant for all pixel differences, CNQ priors show a smaller increment of the penalty for large pixel differences, as shown in Fig. 2.
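For reference, here is a small sketch of the CNQ potential of Eq. (2) and the derivative potentials plotted in Fig. 2; the function names are ours.

```python
import numpy as np

def cnq_potential(delta, alpha):
    """Convex-nonquadratic potential psi(delta) = |delta|**alpha, 1 < alpha < 2."""
    return np.abs(delta) ** alpha

def cnq_potential_derivative(delta, alpha):
    """d psi / d delta = alpha * |delta|**(alpha - 1) * sign(delta).

    For 1 < alpha < 2 this grows sublinearly in |delta|, so large pixel
    differences (edges) are penalized proportionally less than under a
    quadratic prior."""
    return alpha * np.abs(delta) ** (alpha - 1.0) * np.sign(delta)

deltas = np.linspace(0.0, 5.0, 6)
for alpha in (1.25, 1.5, 1.75):                 # the exponents shown in Fig. 2
    print(alpha, np.round(cnq_potential_derivative(deltas, alpha), 3))
```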

[Figure 2: Derivative potential functions for \Delta > 0 and \alpha = 1.25, 1.5, 1.75. The abscissa is \Delta and the ordinate is \partial\psi(\Delta)/\partial\Delta.]

Similarly to the quadratic priors, the class of 2-D smoothing splines with first and second partial derivatives can be applied to the above potential function. Using the equality |\Delta| = \sqrt{\Delta^2}, the corresponding new energy functions are given by

    E_1(f) = \sum_{i,j} \left[ \sqrt{ f_h^2(i,j) + f_v^2(i,j) } \right]^{\alpha}                            (3)
    E_2(f) = \sum_{i,j} \left[ \sqrt{ f_{hh}^2(i,j) + 2 f_{hv}^2(i,j) + f_{vv}^2(i,j) } \right]^{\alpha},   (4)

where E_1 and E_2 are comparable to the membrane and the thin plate, respectively, in the quadratic case. For the CNQ prior energy with second partial derivatives in (4), there exist alternate forms that use a separate potential function for each second partial derivative [14]. As in the quadratic case, we may write a linear combination of the above two energy functions:

    E(f) = (1 - \tau) E_1(f) + \tau E_2(f),                          (5)

where \tau \in [0, 1] is the weight parameter; \tau = 0 reduces (5) to (3) and \tau = 1 to (4).

A variety of algorithms can be used to minimize the above objective functions. In this work, for simplicity, we used ICM (iterated conditional modes) for quadratic priors [2] and OSL (one-step-late) [8] for CNQ priors. The OSL algorithm indeed converges to the MAP solution if it converges at all. (The experiments reported here are results of convergent iterations.) The OSL algorithm for emission, using the conventional Poisson statistics for the likelihood, is given by

    \hat{f}_{ij}^{n+1} = \frac{ \hat{f}_{ij}^n }{ \sum_{t\theta} H_{t\theta,ij} + \lambda \left. \frac{\partial E(f)}{\partial f_{ij}} \right|_{f_{ij} = \hat{f}_{ij}^n} } \; \sum_{t\theta} H_{t\theta,ij} \, \frac{ g_{t\theta} }{ \sum_{kl} H_{t\theta,kl} \hat{f}_{kl}^n },

where \hat{f}_{ij}^n is the object estimate at location (i, j) and iteration n, g_{t\theta} the number of detected counts in the detector bin indexed by t at angle \theta, and H_{t\theta,ij} the probability that a photon (or a pair of photons) emitted from location (i, j) is recorded in the detector bin at (t, \theta). For transmission, the associated OSL algorithm is given by

    \hat{\mu}_{ij}^{n+1} = \frac{ \sum_{t\theta \in J_{ij}} \left( M_{t\theta,ij} - N_{t\theta,ij} \right) }{ \frac{1}{2} \sum_{t\theta \in J_{ij}} \left( M_{t\theta,ij} + N_{t\theta,ij} \right) l_{t\theta,ij} + \lambda \left. \frac{\partial E(\mu)}{\partial \mu_{ij}} \right|_{\mu_{ij} = \hat{\mu}_{ij}^n} },

where \hat{\mu}_{ij}^n is the estimated attenuation coefficient at location (i, j) and iteration n, and M_{t\theta,ij} and N_{t\theta,ij} are the expected numbers of photons entering and leaving pixel (i, j) along the projection ray indexed by t at angle \theta, respectively. Here, J_{ij} is the set of projections to which pixel (i, j) contributes, with chord lengths l_{t\theta,ij}. The OSL algorithm for transmission is based on the EM algorithm of Lange and Carson [5], in which a second-order Taylor series approximation in the derivation of the M-step is implicit.
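The emission update above can be rendered compactly in numpy. The sketch below is our illustrative version of the one-step-late update (dense system matrix, prior gradient passed as a callable), not the authors' code, and it includes only a crude positivity safeguard on the denominator.

```python
import numpy as np

def osl_em_update(f_hat, H, g, lam, prior_gradient):
    """One one-step-late (OSL) MAP-EM update for emission data.

    f_hat          : current image estimate, flattened, shape (n_pixels,)
    H              : system matrix, shape (n_bins, n_pixels); H[t, j] is the
                     probability an emission from pixel j is detected in bin t
    g              : measured counts per detector bin, shape (n_bins,)
    lam            : prior weight lambda
    prior_gradient : callable returning dE/df at the current estimate
    """
    eps = 1e-12
    forward = H @ f_hat + eps                 # expected counts per bin
    backproj = H.T @ (g / forward)            # back-projection of count ratios
    sensitivity = H.sum(axis=0)               # sum over bins of H[t, j]
    denom = sensitivity + lam * prior_gradient(f_hat)
    return f_hat * backproj / np.maximum(denom, eps)  # crude positivity safeguard

# Toy demo (purely illustrative): a zero prior gradient reduces OSL to ML-EM.
rng = np.random.default_rng(1)
H = rng.random((200, 64)); H /= H.sum(axis=0, keepdims=True)
g = rng.poisson(H @ (10.0 * rng.random(64))).astype(float)
f = np.ones(64)
for _ in range(50):
    f = osl_em_update(f, H, g, lam=0.1, prior_gradient=lambda x: np.zeros_like(x))
```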
III. NUMERICAL STUDIES

We first evaluated and compared the quantitative performance of our new priors using a realistic digital phantom derived from primate autoradiography [15], which contains a variety of edge structures (see Fig. 3). We scaled the phantom so that the noise level corresponded to 500,000 counts. For the given phantom and noise level, we generated 50 Monte Carlo noise trials by adding independent realizations of Poisson noise to the noiseless projection data, as sketched below. To focus only on the quantitative performance of our new priors, we ignored physical factors such as attenuation and scatter in the following numerical studies. For the iterative MAP algorithms used in our experiments, we chose a sufficient number of iterations (150) after which the change in the reconstruction was negligible; iteration number was thus removed as a parameter in all comparisons of MAP reconstructions. However, since ML-EM diverges in root-mean-squared error (RMSE), we chose two stopping criteria and designated the resulting reconstructions EM-1 and EM-2, respectively. EM-1 was chosen by observing the iteration number at which reconstructions minimized RMSE, and EM-2 was based on the simple heuristic of choosing the iteration number that optimized qualitative resemblance between reconstruction and phantom. The iteration numbers for EM-1 and EM-2 were 20 and 30, respectively. For the choice of the values of the hyperparameters (\lambda and \alpha) in our initial tests, we simply chose the values that led to the best reconstruction in terms of minimum RMSE relative to the phantom. Of course, this method of fixing hyperparameters to enable comparisons could be replaced by possibly better criteria, such as task-specific criteria like those in [12] or maximum-likelihood estimation [16, 17]. A similar strategy was also used for FBP; a Hanning window was adjusted to yield minimum RMSE. We tested eight reconstruction algorithms: FBP; EM; MAP with the MM prior (referred to hereafter as MAP-MM); MAP with the hybrid prior using \tau = 0.5 (MAP-HB); MAP with the TP prior (MAP-TP); MAP with the CNQ prior in (3) (MAP-CNQ1); MAP with the hybrid CNQ prior in (5) using \tau = 0.5 (MAP-CNQ2); and MAP with the CNQ prior in (4) (MAP-CNQ3). The figures from our initial performance evaluations use an empirically chosen value of the hyperparameter \lambda for the various MAP algorithms, as well as an empirically chosen \alpha. The two stopping criteria corresponding to EM-1 and EM-2 are used for the ML-EM algorithm. We include further results, however, comparing performance across a range of hyperparameters.
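The noise-trial generation amounts to drawing independent Poisson realizations around the noiseless sinogram; a minimal sketch, with sinogram dimensions and seed as arbitrary stand-ins for the actual simulation geometry:

```python
import numpy as np

rng = np.random.default_rng(0)
noiseless = np.ones((192, 128))            # stand-in noiseless sinogram (angles x bins)
noiseless *= 500_000 / noiseless.sum()     # scale so the expected total is 500,000 counts
trials = rng.poisson(noiseless, size=(50,) + noiseless.shape)  # 50 independent trials
```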

[Figure 3: The autoradiograph phantom and anecdotal reconstructions for FBP, EM-2 (30 iterations), MAP-MM (\lambda = 2.5), MAP-HB (\lambda = 1.7), MAP-TP (\lambda = 1.7), MAP-CNQ1 (\lambda = 1.3, \alpha = 1.3), MAP-CNQ2 (\lambda = 1.0, \alpha = 1.2), and MAP-CNQ3 (\lambda = 0.8, \alpha = 1.3).]

Figure 3 shows anecdotal reconstructions for each of the eight estimators. For the MAP results in the second and third rows of Fig. 3, the transition from left to right shows the qualitative effects of using higher-order spatial derivatives with each of the potential functions. In addition, comparison of the MAP results in the second row with those in the third row shows that the CNQ potential functions enhance edges in the reconstructed images. To evaluate the reconstructions quantitatively, we computed bias and standard deviation (STD) images. A bias image, b_{ij}, is defined as

    b_{ij} \triangleq \frac{1}{K} \sum_{k=1}^{K} \hat{f}_{ij}^k - f_{ij},                (6)

where \hat{f}_{ij}^k is the reconstruction from the kth noise trial of phantom f at location (i, j) and the summation is over K = 50 independent noise trials. To display the bipolar bias image, an intermediate grey-scale value of 128 out of 256 levels was used as zero bias. A standard deviation image, s_{ij}, is defined as

    s_{ij} \triangleq \sqrt{ \frac{1}{K-1} \sum_{k=1}^{K} \left( \hat{f}_{ij}^k - \bar{f}_{ij} \right)^2 },    (7)

where \bar{f}_{ij} is the mean of \hat{f}_{ij} over the noise trials, defined as \bar{f}_{ij} \triangleq \frac{1}{K} \sum_{k=1}^{K} \hat{f}_{ij}^k. In the STD images, intensity (lighter means greater) codes the positive STD value.

[Figure 4: Pointwise bias images calculated from 50 independent noise realizations (FBP, EM-1, EM-2, MAP-MM, MAP-HB, MAP-TP, MAP-CNQ1, MAP-CNQ2, MAP-CNQ3).]

[Figure 5: Pointwise STD images calculated from 50 independent noise realizations (FBP, EM-1, EM-2, MAP-MM, MAP-HB, MAP-TP, MAP-CNQ1, MAP-CNQ2, MAP-CNQ3).]

Figures 4 and 5 show our pointwise bias and STD images, respectively. Each figure comprises images displayed on the same grey scale to allow visual comparison. In general, the bias images show a negative bias in high-signal regions and a positive bias in low-signal regions.
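Equations (6) and (7) translate directly into a few lines of numpy; here recons stacks the K reconstructions and phantom is the ground truth (the names are ours):

```python
import numpy as np

def bias_std_images(recons, phantom):
    """Pointwise bias (Eq. 6) and standard-deviation (Eq. 7) images.

    recons  : array of shape (K, ny, nx), one reconstruction per noise trial
    phantom : array of shape (ny, nx), the ground-truth image
    """
    bias = recons.mean(axis=0) - phantom     # Eq. (6)
    std = recons.std(axis=0, ddof=1)         # Eq. (7): sample STD about the trial mean
    return bias, std
```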

The results for EM-1 and EM-2 show the usual bias/variance tradeoff inherent in ML-EM: fewer iterations lead to lower variance but larger bias, and the opposite holds for a larger number of iterations. Note that the FBP algorithm exhibits a patterning artifact in its bias image and spreads variance across the entire image area. The patterning artifact in the FBP bias image arises because different pixels exhibit different detector responses due to interpolation effects in our chord-weighted projector. The transition of the bias images for the MAP-CNQ reconstructions as \tau varies from 0 to 1 (i.e., CNQ1 to CNQ3) is similar to that of the bias images for the quadratic priors (i.e., MM to TP): the first-order model (\tau = 0) yields relatively larger bias error than the second-order model (\tau = 1), and the combination (\tau = 0.5) of the first- and second-order models compromises between the bias errors for \tau = 0 and \tau = 1. Interesting results are seen in the STD images. The STD image for MAP-CNQ1 shows high variance in sharp edge regions. This effect appears to be due to the fact that the CNQ1 prior yields unstable estimates of edge location; in different noise realizations, the locations of continuity breaks can shift. A similar effect in STD images was also observed when the weak membrane prior was used in our early work [1]. Compared to the results from the weak membrane prior, however, the variance of CNQ priors in edge regions is not as high as that of nonconvex priors, since CNQ priors allow limited smoothing at discontinuities.

Table 1: Mean (\bar{r}) and STD (\sigma) of RMSEs and TSE (t^2) over noise trials. Values in parentheses are the STDs of the RMSEs.

    Algorithm   | Mean (STD) of RMSE | TSE
    ------------|--------------------|--------
    FBP         | 0.1924 (0.0037)    | 164.586
    EM-1        | 0.2299 (0.0027)    | 234.139
    EM-2        | 0.2580 (0.0036)    | 296.309
    MAP-MM      | 0.1700 (0.0029)    | 127.779
    MAP-HB      | 0.1658 (0.0030)    | 121.644
    MAP-TP      | 0.1663 (0.0030)    | 122.299
    MAP-CNQ1    | 0.1660 (0.0031)    | 121.931
    MAP-CNQ2    | 0.1609 (0.0032)    | 114.596
    MAP-CNQ3    | 0.1656 (0.0034)    | 121.432

In order to compare the performance of the algorithms more quantitatively, we computed the mean (\bar{r}) and STD (\sigma) of the RMSEs over noise trials, defined as

    \bar{r} \triangleq \frac{1}{K} \sum_{k=1}^{K} r_k   and   \sigma \triangleq \sqrt{ \frac{1}{K-1} \sum_{k=1}^{K} (r_k - \bar{r})^2 },

respectively, where the RMSE of the kth reconstruction, r_k, is given by r_k = \left[ \frac{1}{N} \sum_{ij} ( \hat{f}_{ij}^k - f_{ij} )^2 \right]^{1/2}. Here, N is the number of pixels in the 2-D object indexed by (i, j). To summarize the pointwise bias and STD results in Figs. 4 and 5, we also computed a different metric, the total squared error (TSE) over noise trials, t^2, defined by

    t^2 \triangleq \sum_{ij} \left( b_{ij}^2 + s_{ij}^2 \right),

where b_{ij} and s_{ij} are the bias and standard deviation quantities defined in (6) and (7), respectively. The difference between the mean RMSE (\bar{r}) and the TSE (t^2) is that, while \bar{r} is obtained by first calculating the RMSE over space for each noise trial and then averaging over noise trials, t^2 is obtained by first calculating b_{ij}^2 and s_{ij}^2 at each location (i, j) over noise trials and then summing over space. Numerical results for RMSE and TSE are listed in Table 1. For the calculation of both RMSE and TSE, we used a mask that excluded the background of the image, so that the free parameters are optimized for the object only.
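The scalar summaries in Table 1 follow from the same ensemble; a sketch under the same naming assumptions, with a boolean mask to exclude the background as described above:

```python
import numpy as np

def ensemble_metrics(recons, phantom, mask):
    """Mean and STD of per-trial RMSE (r_k), and total squared error t^2.

    recons  : shape (K, ny, nx); phantom : shape (ny, nx)
    mask    : boolean, shape (ny, nx), True on object pixels
    """
    err = recons[:, mask] - phantom[mask]      # shape (K, n_masked_pixels)
    rmse = np.sqrt((err ** 2).mean(axis=1))    # r_k for each noise trial
    bias = recons.mean(axis=0) - phantom       # Eq. (6)
    std = recons.std(axis=0, ddof=1)           # Eq. (7)
    tse = np.sum(bias[mask] ** 2 + std[mask] ** 2)   # t^2
    return rmse.mean(), rmse.std(ddof=1), tse
```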
In this case, the error due to the patterning artifact in FBP is excluded, and FBP fares well. Note that, for a given potential function, setting \tau = 0.5 yields the smallest values of both RMSE and TSE among the MAP reconstructions. This is presumably because the linear combination of the first- and second-order models compromises well between the two extreme constituent models, reducing the oversmoothing and overshooting errors inherent in the first- and second-order models, respectively. Note also that the CNQ2 prior exhibits the smallest values of both RMSE and TSE among all the algorithms used in our experiments. In particular, comparison of the MAP-CNQ2 and MAP-HB results in RMSE shows that the performance of the CNQ2 prior is superior to that of the HB prior (0.1609 vs. 0.1658), while the CNQ2 prior yields an equally small STD (around 0.003). The results above are predicated on a single value of each hyperparameter chosen by a minimum-RMSE criterion. Since the hyperparameters control performance, we also include a study involving a range of hyperparameters. Figure 6 shows, in an ensemble sense, the effects of the hyperparameters (\lambda for both MAP-HB and MAP-CNQ2, and \alpha for MAP-CNQ2) in terms of TSE over a range of values, where each parameter was swept around the values of \lambda and \alpha chosen by our minimum-RMSE criterion (designated \lambda_0 and \alpha_0, respectively).

[Figure 6: Total squared errors for MAP-HB and for MAP-CNQ2 (at \alpha_{-1}, \alpha_0, and \alpha_1), evaluated over a range of hyperparameters. The abscissa indexes \lambda_n, where n = -2, -1, 0, 1, 2; the ordinate is TSE.]

The values of \lambda_0 used in our experiments were 1.7 and 1.0 for MAP-HB and MAP-CNQ2, respectively, and \alpha_0 = 1.2 for MAP-CNQ2. For the range of \lambda, we chose five values:

\lambda_{-2} = 0.6\lambda_0, \lambda_{-1} = 0.8\lambda_0, \lambda_0, \lambda_1 = 1.2\lambda_0, and \lambda_2 = 1.4\lambda_0. For the range of \alpha, we used three values: \alpha_{-1} = 0.9\alpha_0, \alpha_0, and \alpha_1 = 1.2\alpha_0. The variation of each parameter was thus 20% of \lambda_0 and \alpha_0, except for \alpha_{-1}, which was -10% since \alpha > 1 is required for our CNQ priors. Note that the overall behavior of the effect of \lambda on TSE is similar for MAP-HB and MAP-CNQ2, but MAP-CNQ2 appears more sensitive to the variation of \lambda. This is presumably because, unlike MAP-HB, which involves only one hyperparameter (\lambda), MAP-CNQ2 has two hyperparameters (\lambda and \alpha), and the variation of one parameter affects the other. For example, as \lambda decreases below \lambda_0, MAP-CNQ2 not only gets noisier but also creates more discontinuities for a given value of \alpha, thereby exhibiting higher variance and larger TSE. As \lambda increases above \lambda_0, on the other hand, MAP-CNQ2 exhibits larger bias error around the discontinuities. According to our numerical results summarized in Table 1, however, MAP-CNQ2 with properly chosen hyperparameters yields smaller TSE than MAP-HB.

IV. RESULTS USING PHYSICAL PHANTOMS

To observe qualitatively the efficacies of our new CNQ priors, we acquired physical data using a GE Advance PET scanner, which contains 18 detector rings yielding 35 slices at 4.25-mm center-to-center slice separation. We acquired 2-D data using the scanner's high-sensitivity mode with septa in. The phantoms used in our studies were the Hoffman brain phantom and an elliptical lung-spine body phantom (Data Spectrum, Chapel Hill, NC). The Hoffman brain phantom (referred to as phantom A) was used mainly for the evaluation of our priors in the reconstruction of emission images; the body phantom (referred to as phantom B) was used mainly for transmission. For phantom A in Fig. 7, we acquired emission data from an 18F-FDG scan for 10 minutes, transmission data for 15 minutes, and blank data for 30 minutes. The corresponding number of detected coincidences for emission was approximately 1 million. The sinogram dimensions for both emission and transmission scans were 281 bins and 336 angles, and the reconstructed images were 128 × 128 with 2.92-mm pixels. Since the transmission image of phantom A contains no edge structures inside the object, we used a conventional attenuation correction technique, which uses the ratio of the measurements in the blank and transmission scans. We have not yet included factors to model scattered and random coincidences in our reconstruction algorithms; in this work, we simply used standard randoms-subtraction techniques and the method of Bergstrom et al. [18] for scatter correction. In this case, the measurement is not strictly Poisson. Figure 7 shows reconstructions of the 18F-FDG emission images, comparing FBP, ML-EM, MAP-HB (\tau = 0.5), and MAP-CNQ2 (\tau = 0.5). For the MAP reconstructions, we ran the six MAP algorithms used in our numerical studies but report here the MAP-HB and MAP-CNQ2 results only.

[Figure 7: Reconstructions of 18F-FDG emission images using the Hoffman brain phantom: FBP, ML-EM, MAP-HB, and MAP-CNQ2.]

For these results using phantoms, we set the values of the hyperparameters empirically by considering the degree of smoothness in the reconstructed images. For all MAP results in Fig. 7, we set \lambda = 0.75, and \alpha = 1.25 for the CNQ priors. The overall quality of the MAP reconstructions, particularly in regions with low activity, appears qualitatively superior to FBP, as expected.
Edges in the results using quadratic priors are further enhanced by replacing the potential function with the CNQ function in (2); comparison of MAP-HB with MAP-CNQ2 shows the edge-preserving smoothing behavior of the CNQ priors. For phantom B in Fig. 8, we acquired 30-minute blank data and two transmission data sets; the duration of one transmission scan was 12 hours and that of the other was 10 minutes. For emission reconstructions, we also acquired emission data from an 18F-FDG scan for 10 minutes. The sinogram dimensions in this case were the same as those for phantom A, but the pixel size of the reconstructed images was 3.44 mm. Figure 8 shows transaxial sections through reconstructions of the transmission and 18F-FDG emission images, comparing transmission reconstructions using FBP, ML-EM, MAP-HB, and MAP-CNQ2 in (a)-(e), and emission reconstructions using FBP and MAP-CNQ2 with different attenuation correction techniques in (f)-(h). We used the FBP reconstruction from the 12-hour transmission scan in Fig. 8(a) as a reference for the other transmission reconstructions. Several qualitative observations may be made from Fig. 8. The MAP-HB image in (d) exhibits very low noise but reveals oversmoothing around discontinuities, due to the limitation of quadratic priors in edge preservation. The CNQ priors, on the other hand, exhibit significantly sharper boundary definition, as shown in (e). Figures 8(f) and (g) show FBP and MAP-CNQ2 reconstructions, respectively, of the emission images using standard attenuation correction (smoothed blank/transmission attenuation correction factors).
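The blank/transmission attenuation correction just mentioned amounts to a binwise ratio of the blank sinogram to the (smoothed) transmission sinogram; a minimal sketch, where the Gaussian smoothing and its width are our stand-ins for whatever smoothing was actually applied:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def attenuation_correction_factors(blank, transmission, sigma=1.5):
    """ACF per sinogram bin: blank counts divided by smoothed transmission counts.

    Smoothing the noisy transmission sinogram before taking the ratio is a
    common stabilization; sigma = 1.5 is an arbitrary illustrative value.
    """
    smoothed = gaussian_filter(transmission.astype(float), sigma)
    return blank / np.maximum(smoothed, 1e-6)   # guard against empty bins

# corrected_sinogram = emission * attenuation_correction_factors(blank, trans)
```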

The MAP reconstruction in (g) can be further improved by using reprojection of the MAP-CNQ2 transmission reconstruction in (e) for attenuation correction, as shown in Fig. 8(h).

[Figure 8: Reconstructions of transmission and emission images using the body phantom: (a) FBP from the 12-hour transmission scan; (b) FBP from the 10-min transmission scan; (c) ML-EM from the 10-min transmission scan, 30 iterations; (d) MAP-HB from the 10-min transmission scan; (e) MAP-CNQ2 from the 10-min transmission scan; (f) FBP emission image using smoothed blank/transmission attenuation correction factors (ACFs); (g) MAP-CNQ2 emission image using smoothed blank/transmission ACFs; (h) MAP-CNQ2 emission image using reprojection of the MAP-CNQ2 transmission reconstruction in (e).]

Figure 9 shows histograms of the transmission images, where the abscissa is pixel intensity and the ordinate is the number of pixels. The histograms, while obtained from the anecdotal reconstructions, show that MAP-CNQ2 outperforms MAP with quadratic smoothing priors, as well as ML-EM and FBP, in that the MAP-CNQ2 reconstruction yields a better separation of lung and soft tissue.

[Figure 9: Histograms of transmission images (MAP-CNQ2, MAP-HB, ML-EM, and FBP). The abscissa is pixel intensity and the ordinate is the number of pixels.]

V. SUMMARY AND CONCLUSION

We have considered new Gibbs priors whose potential function is convex and nonquadratic, with arguments of the potential function comprising membranes or nonstandard plates; the resulting objective functions contain linear combinations of these two types of prior. The essence of our new priors is that they not only permit the recovery of discontinuities in the reconstructed image, but also provide for globally convergent optimization algorithms. Although the CNQ priors considered here do not preserve edges as well as priors derived from nonconvex penalty functions, such as the weak membrane and the weak plate, their bias/variance behavior turns out to be a good compromise between that of nonconvex priors and that of quadratic priors. A reasonable conclusion from our numerical studies is that, since the CNQ potentials do not entirely prohibit smoothing at discontinuities, the variance of CNQ priors around edges over different noise realizations is not as high as that of nonconvex priors, whereas their bias behavior is desirably similar to that of quadratic priors. Overall, the hybrid (membrane + plate) priors combined with the CNQ potential yielded the best results in our quantitative simulations. Our experimental results using physically acquired PET data show that our new priors yield good qualitative results for reconstructing transmission images as well as emission images. In particular, the CNQ priors exhibit improved boundary definition in both emission and transmission images. Recently, several parallelizable algorithms have been introduced in the literature [4] which can easily accommodate convex-nonquadratic penalties with a moderate number of iterations.

Integrating our new priors into such algorithms, and developing a systematic way of determining the hyperparameters, will make our new priors more practical for clinical application.

VI. REFERENCES

[1] S.J. Lee, A. Rangarajan, and G. Gindi, "Bayesian Image Reconstruction in SPECT Using Higher Order Mechanical Models as Priors," IEEE Trans. Med. Imaging, MI-14(4), pp. 669-680, Dec. 1995.
[2] S.J. Lee, I.T. Hsiao, and G.R. Gindi, "The Thin Plate as a Regularizer in Bayesian SPECT Reconstruction," IEEE Trans. Nuclear Science, NS-44(3), pp. 1381-1387, Jun. 1997.
[3] M. Gokmen and A.K. Jain, "λτ-Space Representation of Images and Generalized Edge Detector," IEEE Trans. Patt. Anal. Mach. Intell., 19(6), pp. 545-563, Jun. 1997.
[4] J.A. Fessler, E.P. Ficaro, N.H. Clinthorne, and K. Lange, "Grouped-Coordinate Ascent Algorithms for Penalized-Likelihood Transmission Image Reconstruction," IEEE Trans. Med. Imaging, MI-16, pp. 166-175, Apr. 1997.
[5] K. Lange and R. Carson, "EM Reconstruction Algorithms for Emission and Transmission Tomography," J. Comput. Assist. Tomogr., 8, pp. 306-316, 1984.
[6] E.U. Mumcuoglu, R. Leahy, S.R. Cherry, and Z. Zhou, "Fast Gradient-Based Methods for Bayesian Reconstruction of Transmission and Emission PET Images," IEEE Trans. Med. Imaging, MI-13, pp. 687-701, Dec. 1994.
[7] T. Hebert and R. Leahy, "A Generalized EM Algorithm for 3-D Bayesian Reconstruction for Poisson Data Using Gibbs Priors," IEEE Trans. Med. Imaging, MI-8(2), pp. 194-202, Jun. 1989.
[8] P.J. Green, "Bayesian Reconstructions from Emission Tomography Data Using a Modified EM Algorithm," IEEE Trans. Med. Imaging, MI-9(1), pp. 84-93, Mar. 1990.
[9] V.E. Johnson, W.H. Wong, X. Hu, and C.-T. Chen, "Image Restoration Using Gibbs Priors: Boundary Modeling, Treatment of Blurring, and Selection of Hyperparameter," IEEE Trans. Patt. Anal. Mach. Intell., PAMI-13(5), pp. 413-425, May 1991.
[10] A. Blake and A. Zisserman, Visual Reconstruction, MIT Press, Cambridge, MA, 1987.
[11] C. Bouman and K. Sauer, "A Generalized Gaussian Image Model for Edge-Preserving MAP Estimation," IEEE Trans. Image Processing, 2, pp. 296-310, Jul. 1993.
[12] W. Wang and G. Gindi, "Noise Analysis of MAP-EM Algorithms for Emission Tomography," Phys. Med. Biol., 42(11), pp. 2215-2232, 1997.
[13] S.Z. Li, "Closed-Form Solution and Parameter Selection for Convex Minimization-Based Edge-Preserving Smoothing," IEEE Trans. Patt. Anal. Mach. Intell., 20(9), pp. 916-932, Sep. 1998.
[14] D. Geman and G. Reynolds, "Constrained Restoration and the Recovery of Discontinuities," IEEE Trans. Patt. Anal. Mach. Intell., PAMI-14(3), pp. 367-383, Mar. 1992.
[15] G. Gindi, D. Dougherty, I. Hsiao, and A. Rangarajan, "Autoradiograph Based Phantoms for Emission Tomography," in Proc. SPIE Symposium on Medical Imaging - Image Processing, pp. 403-414, 1997.
[16] S.S. Saquib, C.A. Bouman, and K. Sauer, "ML Parameter Estimation for Markov Random Fields with Applications to Bayesian Tomography," IEEE Trans. Image Processing, 7(7), pp. 1029-1044, Jul. 1998.
[17] Z. Zhou, R. Leahy, and J. Qi, "Approximate Maximum Likelihood Hyperparameter Estimation for Gibbs Priors," IEEE Trans. Image Processing, 6(6), pp. 844-861, Jun. 1997.
[18] M. Bergstrom, L. Eriksson, C. Bohm, G. Blomqvist, and J. Litton, "Correction for Scattered Radiation in a Ring Detector Positron Camera by Integral Transformation of the Projections," J. Comput. Assist. Tomogr., 10, pp. 845-850, 1983.