Noise weighting with an exponent for transmission CT


doi:10.1088/2057-1976/2/4/045004
RECEIVED 13 January 2016 | REVISED 4 June 2016 | ACCEPTED FOR PUBLICATION 21 June 2016 | PUBLISHED 27 July 2016

PAPER
Noise weighting with an exponent for transmission CT

Gengsheng L Zeng 1,2 and Wenli Wang 3
1 Department of Engineering, Weber State University, Ogden, UT 84408, USA
2 Department of Radiology, University of Utah, Salt Lake City, UT 84108, USA
3 Toshiba Medical Research Institute USA, Inc., Vernon Hills, IL 60061, USA
E-mail: larryzeng@weber.edu

Keywords: computed tomography, iterative reconstruction, filtered backprojection algorithm, noise weighted image reconstruction

Abstract

It is widely believed that the correct weighting function is the reciprocal of the noise variance of the associated measurement, and many researchers make great efforts to find accurate variances for the measurements of imaging systems in the hope of achieving an optimal reconstruction. An optimal solution in the context of this paper is the image that is best according to a criterion or criteria among a group of candidates, regardless of how the images in the group are obtained. This optimal solution is not a theoretical concept; it is simply the best of the bunch. The goal of this paper is to investigate how the weighting function affects image noise when the image contrast is pre-specified in an iterative algorithm for x-ray CT. The paper makes some interesting observations: there is no universal optimal weighting function; the noise weighting function can introduce artifacts; and the optimal noise weighting varies with the object to be reconstructed and with the targeted image contrast, both in an iterative image reconstruction algorithm and in a filtered backprojection algorithm that incorporates the projection noise. It is suggested that an exponent be used in the weighting function so that the artifacts caused by the weighting function can be reduced.

1. Introduction

One of the advantages of using iterative algorithms to reconstruct a tomographic image is the ability to model and suppress the measurement noise (Kuhl and Edwards 1963, Shepp and Vardi 1982, Lange and Carson 1984, Fessler 1994). Recently we have shown that filtered backprojection (FBP) can be extended to model and suppress the measurement noise too (Zeng and Zamyatin 2013, Zeng 2014). In all these algorithms, the noise-control weighting function is normally set to the reciprocal of the noise variance associated with the measurement (Geman and McClure 1987). This assignment is supported by general maximum likelihood theory, and its underlying principle can be explained intuitively as follows.

When the measurements are over-determined, there are more measurements than unknowns. Figure 1 shows a simple example with two unknowns x1 and x2 and three measurements. In the x1-x2 solution space, the three measurements are represented as three lines. If the measurements are consistent (i.e., noiseless), the three lines meet at a single point in the solution space, and this point is the unique solution. Due to noise, the three measurements are inconsistent and no exact solution exists. A maximum likelihood method selects an approximate solution that is closer to the less noisy measurements and farther from the noisier measurements. If the number of unknowns is the same (or almost the same) as the number of measurements, a unique solution can always be obtained regardless of whether the measurements are noisy or not.
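To make the weighted maximum-likelihood intuition concrete, the following is a minimal numerical sketch (illustrative, not from the paper): three noisy line measurements of two unknowns are combined by weighted least squares with weights equal to the reciprocal of the noise variance, and the solution lands close to the intersection of the two reliable lines. All names and values are hypothetical.

```python
import numpy as np

# Over-determined toy system: two unknowns, three measurements (cf. figure 1).
# Rows of A are the measurement directions; x_true is the unknown "image".
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])

rng = np.random.default_rng(0)
sigma = np.array([0.01, 0.01, 0.5])       # the third measurement is much noisier
p = A @ x_true + rng.normal(0.0, sigma)   # inconsistent (noisy) measurements

# Maximum-likelihood (Gaussian) solution: weight each row by 1/variance.
W = np.diag(1.0 / sigma**2)
x_wls = np.linalg.solve(A.T @ W @ A, A.T @ W @ p)

# Unweighted least squares trusts the noisy third line just as much.
x_ls = np.linalg.lstsq(A, p, rcond=None)[0]

print("weighted  :", x_wls)   # close to x_true: the noisy line is down-weighted
print("unweighted:", x_ls)    # pulled away by the noisy third measurement
```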
Figure 2 shows a simple case of two unknowns (x1 and x2) and two noisy measurements. The unique solution in this case is most likely not the true solution, because the measurements are noisy and deviate from their true values. This wrong, noisy, and unique solution can be obtained by running an iterative algorithm with a large enough iteration number that the algorithm converges. It can also be obtained by an analytical algorithm such as the FBP algorithm, which is popular in the image reconstruction community (Radon 1917, Bracewell 1956, Shepp and Logan 1974).

Figure 1. An over-determined system with two unknowns and three measurements. (a) The measurements are noiseless and the system is consistent; a unique solution exists. (b) The measurements are noisy and the system is inconsistent; an approximate solution can be determined by the relative noise variances of the measurements.

Figure 2. A well-determined system with two unknowns and two noisy measurements. (a) Even though the measurements are noisy, the system is consistent and a unique noisy solution exists. (b) If an iterative algorithm is stopped before convergence, many pseudo solutions can be obtained. These pseudo solutions are influenced by the weighting function and may be closer to the true solution than the unique noisy solution.

In terms of image reconstruction, a wrong solution here is a solution to the system of linear equations that is corrupted by noise. If the system matrix is invertible, the solution exists and is unique; due to the noise, however, this solution is no longer the same as the original image. If an iterative algorithm is stopped before convergence, a sequence of pseudo solutions is obtained. A pseudo solution is any intermediate solution at any iteration of an iterative algorithm, regardless of whether the algorithm has converged. These pseudo solutions are influenced by the weighting function and may be closer to the true solution than the unique noisy solution (see figure 2(b)).

It is important to notice that the weighting function is effective before the algorithm converges. In other words, the weighting function determines the path of the algorithm towards the converged noisy solution. Because of the weighting function, some pseudo solutions (i.e., the intermediate solutions) may be closer to the true solution than the ultimate noisy solution. We argue that the situation described in figure 2 applies to most tomographic medical imaging systems, such as x-ray computed tomography (CT), positron emission tomography (PET), and single photon emission CT (SPECT). This is the reason that, when an iterative algorithm is used to reconstruct an image, the algorithm must stop early; otherwise a very noisy and useless image results. This paper considers iterative algorithms that stop before convergence and investigates how the weighting function influences the early solutions. The results also apply to the extended FBP algorithm that models and suppresses the noise.
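The early-stopping behavior described above is easy to demonstrate. The sketch below (illustrative, not from the paper) runs plain gradient descent on the least-squares objective of a small, ill-conditioned, noisy 2 x 2 system; intermediate iterates come measurably closer to the true solution than the converged one. The system and the fixed noise realization are hypothetical.

```python
import numpy as np

# Well-determined toy system (cf. figure 2): two unknowns, two noisy measurements.
A = np.array([[1.0, 0.9],
              [0.9, 1.0]])
x_true = np.array([2.0, 3.0])
noise = np.array([0.1, -0.1])          # a fixed noise realization
p = A @ x_true + noise                 # noisy but consistent measurements

x_noisy = np.linalg.solve(A, p)        # unique converged solution (wrong and noisy)

# Gradient descent (Landweber) on the least-squares objective, started from zero.
x = np.zeros(2)
alpha = 0.5
for k in range(1, 1001):
    x = x + alpha * A.T @ (p - A @ x)
    if k in (5, 20, 100, 1000):
        print(f"iter {k:4d}: ||x - x_true|| = {np.linalg.norm(x - x_true):.3f}")

print(f"converged : ||x - x_true|| = {np.linalg.norm(x_noisy - x_true):.3f}")
# The early iterates (pseudo solutions) pass closer to x_true than the
# converged solution does, which is exactly what early stopping exploits.
```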

This paper introduces a parameter into the weighting function; according to the theoretical derivation, this parameter should be 1. Our motivation for introducing it is as follows. The theoretically derived weighting function assumes convergence of the algorithm, whereas in practice the iterative algorithm is terminated before reaching convergence. This small perturbation of early stopping may therefore require a small deviation of the parameter from its theoretical value in order to obtain a better image. Here, "better" is task dependent; the task in this paper is to find the solution with the smallest noise standard deviation for a given image contrast. As expected, the deviation from the theoretical value of 1 is small in our numerical studies.

2. Methods

An iterative algorithm is used to minimize or maximize an objective function. The objective function usually consists of two parts: a data fidelity term and a regularization term. One popular way to regularize is to control the number of iterations (i.e., a stopping rule). Our stopping rule is that the iteration terminates when the image contrast reaches a pre-specified value. When the iteration stops, the reconstructed image depends on the noise weighting factors. State-of-the-art iterative image reconstruction algorithms use a fixed weighting factor that is the reciprocal of the measurement variance. This paper points out that if a control parameter is introduced into the weighting factor, the resulting reconstruction can be further improved in terms of noise reduction.

2.1. The gradient descent iterative algorithm

The gradient descent iterative algorithm considered in this paper has the following form (Elbakri and Fessler 2002):

$$X_i^{(k+1)} = X_i^{(k)} - \alpha \, \frac{\sum_j A_{ji} w_j \left( \sum_n A_{jn} X_n^{(k)} - P_j \right)}{\sum_j A_{ji} w_j \sum_n A_{jn}}, \qquad (1)$$

where $X_i^{(k)}$ is the ith image pixel at the kth iteration, $P_j$ is the jth line-integral (ray-sum) measurement, $A_{ji}$ is the contribution of the ith image pixel to the jth measurement, $w_j$ is the weighting factor for the jth measurement, and $\alpha$ is a constant that prevents the algorithm from diverging. The purpose of the denominator $\sum_j A_{ji} w_j \sum_n A_{jn}$ is to normalize the step size so that it is independent of the system matrix A and the weighting function $w_j$; thus the value of $\alpha$ is always 1 in (Elbakri and Fessler 2002). However, this scaled step size does not always work, and the algorithm may diverge in many situations. We set $\alpha$ to 0.1 in this paper. The summation over the index n is the projector, and the summation over the index j is the backprojector.

Figure 3. A computer-generated torso phantom is used for transmission CT data generation. The linear attenuation coefficients are labeled in units of mm⁻¹. Two regions of interest (ROIs) are defined for image evaluation and for the image contrast calculation.
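For readers who prefer code, here is a direct transcription of update (1) as a sketch, assuming a dense system matrix A, a post-log sinogram P, and per-ray weights w. The tiny random system is purely hypothetical.

```python
import numpy as np

def weighted_gd_step(X, A, P, w, alpha=0.1):
    """One iteration of update (1): the weighted backprojection of the
    forward-projection residual, normalized per pixel by sum_j A_ji w_j sum_n A_jn."""
    residual = A @ X - P                     # forward projection minus measurement
    numerator = A.T @ (w * residual)         # weighted backprojection of the residual
    denominator = A.T @ (w * A.sum(axis=1))  # step-size normalization of (1)
    return X - alpha * numerator / denominator

# Tiny hypothetical example: a 16-pixel image probed by 24 rays.
rng = np.random.default_rng(2)
A = rng.uniform(0.0, 1.0, size=(24, 16))
X_true = rng.uniform(0.01, 0.03, size=16)        # attenuation values per mm
P = A @ X_true + rng.normal(0.0, 1e-3, size=24)  # noisy post-log ray sums
w = np.exp(-P)                                   # conventional weighting, cf. (5)

X = np.zeros(16)
for k in range(100):                             # stopped early, per the paper
    X = weighted_gd_step(X, A, P, w)
```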
2.2. The noise-weighted FBP algorithm

A noise-weighted FBP algorithm was recently developed to model and suppress noise (Zeng and Zamyatin 2013, Zeng 2014). This algorithm emulates the gradient descent algorithm and contains a control index k, which plays a role similar to the iteration number in an iterative algorithm. The noise-weighted FBP algorithm is almost the same as the conventional FBP algorithm except for the ramp filter. In a conventional FBP algorithm, the ramp filter is $|\omega|$, where $\omega$ is the frequency. In the noise-weighted FBP algorithm, the ramp filter is modified by a window function and is expressed as

$$H(\omega) = |\omega| \left[ 1 - \left( 1 - \frac{\alpha w_j}{|\omega|} \right)^{k} \right] \ \text{for } \omega \neq 0, \qquad H(0) = 0, \qquad (2)$$

where $\alpha$ is a positive constant that prevents the algorithm from diverging; in this paper $\alpha$ is set to 0.1. The filter (2) is implemented in the Fourier domain of the sinogram. The weighting factors $w_j$ in (2) for all projection bins are quantized into 11 discrete values, and each of the 11 quantized weighting factors produces a filtered sinogram. A combined sinogram is then formed from these filtered sinograms point-by-point according to the variance of the original sinogram. The details of the implementation can be found in (Zeng and Zamyatin 2013).

2.3. Data generation and noise model

The computer simulations in this paper are based on a scaled-down x-ray CT fan-beam imaging geometry with a curved detector. The image array was 256 × 256, the pixel size was 1.52 mm × 1.52 mm, the number of views was 400 over 360°, the number of detection channels was 400, and the focal length was 240 mm. The x-ray source flux was I₀ = 10⁴ counts, which corresponds to a low-dose imaging setup.
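A sketch of the modified ramp filter follows, assuming the reconstruction of equation (2) above, with frequencies in cycles per sample from np.fft.fftfreq; the sinogram, weight levels, and choice of k are placeholders. Note that the window as reconstructed here is well behaved only where $\alpha w_j / |\omega| < 2$, so the demo uses small weights.

```python
import numpy as np

def modified_ramp(n, w, k, alpha=0.1):
    """Window-modified ramp filter as reconstructed in equation (2):
    H(omega) = |omega| * (1 - (1 - alpha*w/|omega|)**k), with H(0) = 0.
    Stability of this reconstruction requires alpha*w/|omega| < 2."""
    omega = np.abs(np.fft.fftfreq(n))              # frequencies, cycles/sample
    H = np.zeros(n)
    nz = omega > 0
    H[nz] = omega[nz] * (1.0 - (1.0 - alpha * w / omega[nz]) ** k)
    return H

# One filtered sinogram per quantized weight level (the paper uses 11 levels);
# a combined sinogram is then formed point-by-point from the local variance.
rng = np.random.default_rng(3)
sino = rng.normal(size=(400, 400))                 # placeholder post-log sinogram
k = 9100                                           # control index, cf. figure 10
filtered = [np.real(np.fft.ifft(np.fft.fft(sino, axis=1)
                                * modified_ramp(400, w, k), axis=1))
            for w in np.linspace(0.001, 0.04, 11)] # small weights keep it stable
```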

Figure 4. Iterative reconstructions with various values of the parameter γ and the normalized standard deviation in ROI 2 for Case (A): the default phantom shown in figure 3 is used, and I₀ = 10⁴. The target contrast is 0.53.

The phantom shown in figure 3 is a 355 mm × 187 mm ellipse with a water background (μ = 0.02 mm⁻¹), three high-contrast regions (μ = 0.032 mm⁻¹) with diameter 48 mm, and two low-contrast regions (μ = 0.0194 mm⁻¹) with diameter 36 mm, surrounded by outer layers of fat (μ = 0.019 mm⁻¹) and skin (μ = 0.021 mm⁻¹). ROI 1 (a high-contrast object) and ROI 2 (the water background) are used to evaluate image quality.

The projection data were generated in the pre-log format with a Poisson noise model and Gaussian electronic noise; no beam-hardening effects were simulated. The pre-log data were then converted into post-log data for image reconstruction. If a pre-log value is less than one, it is changed to one before taking the logarithm, to avoid negative post-log sinogram values.

The iterative algorithm was implemented according to (1), and it stops when a pre-specified image contrast is reached. This value was set to 0.53 or 0.57 (where the true contrast is 0.6). The reconstructed images are compared by the normalized standard deviation in ROI 2, i.e., the standard deviation divided by the mean value.

A popular approach to assigning the weighting factor is to let $w_j$ be the reciprocal of the noise variance of the ray-sum measurement. This approach is justified by using the likelihood function as the objective function of an optimization problem (Aitken 1935). The philosophy is that we should trust the less noisy measurements more than the noisier ones.
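The noise model just described is straightforward to reproduce. Below is a sketch under the stated parameters (I₀ = 10⁴, Poisson counts plus Gaussian electronic noise, pre-log values clamped at one before the logarithm); the true sinogram used here is a stand-in, not the torso phantom.

```python
import numpy as np

def simulate_postlog(p_true, I0=1.0e4, sigma2=6.32, seed=0):
    """Pre-log transmission data with Poisson counting noise plus Gaussian
    electronic noise, converted to post-log line integrals (section 2.3)."""
    rng = np.random.default_rng(seed)
    I_bar = I0 * np.exp(-p_true)                  # Beer's law mean intensity
    I = rng.poisson(I_bar) + rng.normal(0.0, np.sqrt(sigma2), size=p_true.shape)
    I = np.maximum(I, 1.0)                        # clamp values below one
    return np.log(I0 / I)                         # post-log ray sums

# Placeholder sinogram of true line integrals (400 views x 400 channels).
p_true = np.abs(np.random.default_rng(4).normal(3.0, 1.0, size=(400, 400)))
p_noisy = simulate_postlog(p_true)
```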

Figure 5. Iterative reconstructions with various values of the parameter γ and the normalized standard deviation in ROI 2 for Case (B): the phantom is changed to an obese version, and I₀ = 10⁴. The target contrast is 0.53.

In x-ray CT imaging, the noise in the measured transmission data can be approximately described by a Poisson distribution, i.e., var(I) ≈ I, where I denotes an x-ray intensity transmission measurement (Hsieh 1998). If the additive electronic noise variance σ² of the detection system is also considered, the total variance of the pre-log transmission measurement is var(I) ≈ I + σ². Here σ² = 6.32 was chosen in our simulated low-count x-ray CT data generation. After log conversion, the noise variance is approximately var(I)/I² ≈ (I + σ²)/I². If the x-ray source flux I₀ is stable and consistent, the measured intensity can be written as I = I₀ exp(−p) according to Beer's law, where p is the ray sum, i.e., the total attenuation along the ray. The variance of the post-log measurement p can thus be expressed as

$$\operatorname{var}(p) \approx \frac{I + \sigma^2}{I^2} = \frac{I_0 \exp(-p) + \sigma^2}{I_0^2 \exp(-2p)}. \qquad (4)$$

According to our previous experiments, the image noise was not reduced by including the electronic noise variance σ² in the weighting factors, because σ² is the same for every measurement. Other researchers (Sauer and Bouman 1993) also choose not to include the electronic noise variance in the weighting function. We suggest discarding from the weighting function any components that are common to all measurements. Even though σ² = 6.32 was used in the data generation, we assume σ² = 0 during image reconstruction.
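Equation (4) reduces to a one-line helper, shown here with the paper's simulation values. Note that with σ² = 0 its reciprocal is proportional to exp(−p), which leads directly to the conventional weighting (5) below.

```python
import numpy as np

def var_postlog(p, I0=1.0e4, sigma2=6.32):
    """Approximate variance of a post-log measurement, equation (4):
    var(p) ~ (I + sigma^2) / I^2 with I = I0 * exp(-p)."""
    I = I0 * np.exp(-p)
    return (I + sigma2) / I**2

# With sigma2 = 0, 1/var_postlog(p) = I0 * exp(-p), proportional to exp(-p).
```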

Figure 6. Iterative reconstructions with various values of the parameter γ and the normalized standard deviation in ROI 2 for Case (C): the phantom is changed to a slightly thinner version, and I₀ = 10⁴. The target contrast is 0.53.

The conventional weighting factor is inversely proportional to the noise variance; it can thus be assigned as

$$w = \exp(-p). \qquad (5)$$

This paper introduces a new parameter γ into the weighting function:

$$w = \exp(-\gamma p). \qquad (6)$$

In our implementation of the weighting function (6), the post-log data p are first smoothed by a five-point running-average low-pass filter in the detector-channel direction, in order to reduce noise propagation from the weighting function into the reconstruction. The post-log projections used in (1), however, are not pre-filtered.

The image quality is evaluated by the normalized standard deviation in ROI 2, defined as the ratio of the standard deviation to the mean value. Performing systematic comparison studies with scanned data is difficult because the ground truth is unknown. However, we did apply our method of using an artificial parameter γ in real patient studies; see (Zeng and Zamyatin 2013), where γ = 0.3 was used.

3. Results

Resolution is closely related to contrast, and higher resolution usually leads to a larger noise standard deviation. If the resolution is fixed, a better reconstruction method produces less noisy images; this is the whole point of the paper.
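Here is a sketch of the weighting factor (6) with the five-point running-average smoothing described above; the input sinogram is a placeholder.

```python
import numpy as np

def weights_with_exponent(p_postlog, gamma):
    """Weighting factor of equation (6), w = exp(-gamma * p), computed from a
    post-log sinogram smoothed by a five-point running average along the
    detector-channel direction (the projections in (1) stay unfiltered)."""
    kernel = np.ones(5) / 5.0
    p_smooth = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode='same'), 1, p_postlog)
    return np.exp(-gamma * p_smooth)

# gamma = 1 reproduces the conventional weighting (5); gamma = 0 weights all
# rays equally; the paper reports gamma = 0.3 in a patient study.
w = weights_with_exponent(np.random.default_rng(5).normal(3.0, 0.5, (400, 400)), 0.3)
```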

Figure 7. Iterative reconstructions with various values of the parameter γ and the normalized standard deviation in ROI 2 for Case (D): the phantom is changed to a very thin version, and I₀ = 10⁴. The target contrast is 0.53.

By fixing the image contrast, we vary a control parameter in the weighting function and observe how the image noise and artifacts change accordingly.

3.1. Iterative reconstructions

The effects of the exponent γ are illustrated with the following variations (a sketch of the sweep procedure follows at the end of this subsection).

A. The default phantom shown in figure 3 is used, and I₀ = 10⁴. The target contrast is 0.53.
B. The phantom is changed to an obese version.
C. The phantom is changed to a slightly thinner version.
D. The phantom is changed to a very thin version.
E. Same as (A), but the target contrast is changed to 0.57.
F. Same as (A), but I₀ is increased to 1.5 × 10⁴.

The computer simulation results are summarized in figures 4-9. In all of these figures, one can make the following observations. When γ is small, there are severe noise-induced streaking artifacts. As the value of γ increases, the streaking artifacts are gradually suppressed. Past the optimal value of γ, a larger γ causes severe low-frequency shadowing artifacts, which may sometimes be mistaken for beam-hardening artifacts. The cause of the shadowing artifacts is the extremely small values of the weighting factors $w_j$ that result from large p values. The shadowing artifacts are caused by improper weighting factors, not by discarding negative sinogram values; they can appear even with an ideally generated, noiseless line-integral sinogram when improper weighting factors are used during reconstruction. When the weighting factors $w_j$ are too small, some important tomographic information is neglected, resulting in limited-data artifacts (similar to metal artifacts). The optimal parameter γ depends on the object shape and the image contrast (and perhaps less on the dose I₀). The main idea of using the new exponent parameter γ is to reduce the large p values to some extent, so that the over-suppressed tomographic information becomes available again for image reconstruction.
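The γ sweep behind figures 4-9 can be outlined as follows. This sketch assumes a contrast definition of (mean ROI1 − mean ROI2)/mean ROI2, which reproduces the stated true contrast of 0.6 for μ = 0.032 versus 0.02 mm⁻¹, but the paper does not spell out its definition; the ROI masks, system matrix, and data are placeholders.

```python
import numpy as np

def contrast(img, roi1, roi2):
    # Assumed definition: (mean ROI1 - mean ROI2) / mean ROI2.  With
    # mu = 0.032 and 0.02 per mm this gives the stated true contrast of 0.6.
    return (img[roi1].mean() - img[roi2].mean()) / img[roi2].mean()

def reconstruct_to_contrast(A, P, w, roi1, roi2, target=0.53, max_iter=50000):
    """Run update (1) until the pre-specified contrast is reached; report the
    normalized standard deviation (std/mean) in ROI 2."""
    X = np.zeros(A.shape[1])
    norm = A.T @ (w * A.sum(axis=1))               # step normalization of (1)
    for k in range(max_iter):
        X = X - 0.1 * (A.T @ (w * (A @ X - P))) / norm
        if contrast(X, roi1, roi2) >= target:      # stopping rule of section 2
            break
    return X, X[roi2].std() / X[roi2].mean()

# Sweep the exponent gamma and keep the value giving the smallest noise:
# for gamma in np.linspace(0.0, 1.0, 11):
#     w = np.exp(-gamma * p_smooth)                # weighting (6)
#     X, nstd = reconstruct_to_contrast(A, P, w, roi1, roi2)
```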

Figure 8. Iterative reconstructions with various values of the parameter γ and the normalized standard deviation in ROI 2 for Case (E): the phantom is as in figure 3, and I₀ = 10⁴. The target contrast is changed to 0.57.

3.2. FBP reconstructions

Image reconstruction results using the noise-weighted FBP algorithm with the modified ramp filter (2) are shown in figure 10. The FBP results show the same trend as the iterative reconstruction results: for a small parameter γ we see streaking artifacts; the streaking artifacts are suppressed with a larger γ; and when γ is too large, low-frequency shadowing artifacts appear. An optimal parameter γ should be used; thus only three representative images are shown. Similar to the iteration number of the iterative algorithm, the parameter k is selected so that the pre-specified image contrast is reached. We must point out that the parameter k in the FBP algorithm and the iteration number k in the iterative algorithm are, in general, not the same, although they are closely related. Similarly, the γ values in the two algorithms are not the same, but they follow the same trend. All images are displayed with the same linear gray scale.

Figure 9. Iterative reconstructions with various values of the parameter γ and the normalized standard deviation in ROI 2 for Case (F): the phantom is as in figure 3, but I₀ is increased to 1.5 × 10⁴. The target contrast is 0.53.

Figure 10. FBP reconstructions with various values of the parameter γ and the normalized standard deviation in ROI 2 for Case (C): the phantom is the slightly thinner version as in figure 6, and I₀ = 10⁴. The target contrast is 0.53. The values of the parameter k are 9100, 103 900, and 431 000, respectively.

4. Conclusions

This paper uses an exactly known noise model to investigate the effects of the weighting function. It suggests a weighting function that is a power function of the reciprocal of the noise variance, $w = (1/\text{variance})^{\gamma}$. When γ = 0, the weighting function is a constant with no variation; a larger γ gives a larger variation of the weighting function; and when γ = 1, the weighting function is the so-called correct weighting that is widely used among researchers. Both an iterative gradient descent algorithm and an analytic noise-weighted FBP algorithm were used for the investigation. In order for the weighting function to be effective, the iteration number k in the iterative algorithm, or the noise-control parameter k in the FBP algorithm, must be small enough that the algorithm has not yet converged.

Our computer simulations show that the optimal weighting scheme depends on the object and on the pre-specified image contrast. Therefore, there is no universal optimal weighting function, and the so-called correct weighting function is sub-optimal. For a given object, the optimal weighting function depends on the image contrast, which in turn is determined by the noise. The power-function weighting suggested in this paper is only one of many options that can help achieve a desired image quality.

References

Aitken A C 1935 On least squares and linear combinations of observations Proc. R. Soc. Edinburgh 55 42-8
Bracewell R N 1956 Strip integration in radio astronomy Aust. J. Phys. 9 198-217
Elbakri I A and Fessler J A 2002 Statistical image reconstruction for polyenergetic x-ray computed tomography IEEE Trans. Med. Imaging 21 89-99
Fessler J A 1994 Penalized weighted least-squares image reconstruction for positron emission tomography IEEE Trans. Med. Imaging 13 290-300
Geman S and McClure D E 1987 Statistical methods for tomographic image reconstruction Bull. Int. Stat. Inst. LII-4 5-21
Hsieh J 1998 Adaptive streak artifact reduction in computed tomography resulting from excessive x-ray photon noise Med. Phys. 25 2139-47
Kuhl D E and Edwards R Q 1963 Image separation radioisotope scanning Radiology 80 653-62
Lange K and Carson R 1984 EM reconstruction algorithms for emission and transmission tomography J. Comput. Assist. Tomogr. 8 302-16
Radon J 1917 Über die Bestimmung von Funktionen durch ihre Integralwerte längs gewisser Mannigfaltigkeiten Ber. Verh. Sächs. Akad. Wiss. Leipzig, Math.-Phys. Kl. 69 262-77
Sauer K and Bouman C 1993 A local update strategy for iterative reconstruction from projections IEEE Trans. Signal Process. 41 534-48
Shepp L A and Logan B F 1974 The Fourier reconstruction of a head section IEEE Trans. Nucl. Sci. 21 21-43
Shepp L A and Vardi Y 1982 Maximum likelihood reconstruction for emission tomography IEEE Trans. Med. Imaging 1 113-22
Zeng G L and Zamyatin A 2013 A filtered backprojection algorithm with ray-by-ray noise weighting Med. Phys. 40 031113
Zeng G L 2014 Model-based filtered backprojection algorithm: a tutorial Biomed. Eng. Lett. 4 3-18