Iterative SPECT reconstruction with 3D detector response

Jeffrey A. Fessler and Anastasia Yendiki

COMMUNICATIONS & SIGNAL PROCESSING LABORATORY
Department of Electrical Engineering and Computer Science
The University of Michigan
Ann Arbor, Michigan 48109-2122

July 19, 2001; revised September 4, 2001

Technical Report No. ???

Approved for public release; distribution unlimited.
Iterative SPECT reconstruction with 3D detector response

Jeffrey A. Fessler and Anastasia Yendiki
4240 EECS Bldg., 1301 Beal Ave.
University of Michigan
Ann Arbor, MI 48109-2122
734-763-1434, FAX: 734-763-8041
email: fessler@umich.edu

Original version: July 2001
Revised: September 4, 2001

Technical Report #???
Communications and Signal Processing Laboratory
Dept. of Electrical Engineering and Computer Science
The University of Michigan

1 Introduction

We have recently implemented a SPECT forward projector / back projector pair that accommodates arbitrary nonuniform attenuation maps and depth-dependent blur. The form of the blur is entirely arbitrary (so long as it has an odd number of samples in the x and z directions). The user supplies samples of a 2D PSF kernel for each plane in the object parallel to the detector; i.e., for an object sampled to n_x × n_y × n_z = 64 × 64 × 60, the user will supply n_y = 64 kernels, each of which might be sampled to, say, 11 × 11 pixels. The user may also supply a separate set of PSF kernels for each projection angle, as would be appropriate for SPECT with a noncircular orbit. The default is to use the same PSF kernels for every projection angle, which corresponds to a circular orbit.

Our approach is similar to that of Zeng et al. [1]. However, since we are ultimately interested in regularized reconstruction (and hence in iterating until convergence), we have taken great pains to implement a back projector that is the exact transpose of the forward projector (to within numerical precision, of course), whereas Zeng et al. have explored unmatched pairs [2]. This document summarizes results of testing this projector/backprojector pair to confirm that it outperforms the in-plane (2D) system model used previously for SPECT reconstruction in our group.

2 Experiment 1: Digital Point Source

The first experiment was a computer simulation of a point source in a uniform attenuating cylinder.
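Whether a candidate back projector really is the exact transpose of the forward projector can be checked numerically with an inner-product (adjoint) test: for a matched pair, ⟨Ax, y⟩ = ⟨x, Bу⟩ for random x and y. The sketch below uses a small random dense matrix as a stand-in for the real matrix-free projector; all names and sizes are illustrative, not the actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in linear system model: a real SPECT projector would be applied
# matrix-free, but the adjoint test is identical in either case.
n_vox, n_bins = 50, 40
A = rng.standard_normal((n_bins, n_vox))  # "forward projector"
B = A.T                                   # an exactly matched back projector

x = rng.standard_normal(n_vox)   # random image
y = rng.standard_normal(n_bins)  # random projection data

# <A x, y> must equal <x, B y> (to numerical precision) iff B = A^T.
lhs = np.dot(A @ x, y)
rhs = np.dot(x, B @ y)
assert abs(lhs - rhs) <= 1e-8 * max(1.0, abs(lhs))
```

Running this test over many random (x, y) pairs is a cheap way to catch mismatches introduced by, e.g., asymmetric interpolation in the forward and back projection code.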
Three cases were studied, with projections spanning 360° in each case.

Case 1. The object was 32 × 32 × 8 and there were 30 projection views of size 32 × 8. The voxel size was (14 mm)³.
Case 2. The object was 64 × 64 × 8 and there were 60 projection views of size 64 × 8. The voxel size was (7 mm)³.

Case 3. This case was similar to Case 2, except that the projection measurements were generated analytically by erf-based sampling of a Gaussian PSF for an ideal point source. The location of this point source in object space is as follows. Consider the central four voxels in the middle slice, in this case slice 4 of slices (0, ..., 7). Consider the line segment that runs axially along the center of rotation where those four voxels touch. The point source is located exactly at the midpoint of this line segment. Thus the point source is exactly centered on the middle slice; however, it is not centered in any individual voxel, but rather lies along the central edge of the four central voxels. This case tests robustness to some model mismatch, since the projections did not come from any discrete system matrix. If the object volume has coordinates ranging over [-n_x/2, n_x/2] × [-n_y/2, n_y/2] × [0, n_z], where those coordinates cover the extent of each voxel (not just the voxel centers), then the analytical point source coordinates were (x, y, z) = (0, 0, (n_z + 1)/2).

We compared five reconstruction methods:
- FBP with a ramp filter and 1st-order Chang attenuation correction [3].
- EM with a 2D strip-integral (not depth-dependent) system model.
- EM with a 2D depth-dependent Gaussian blur system model.
- EM with a 3D depth-dependent Gaussian blur system model.
- OSEM¹ with a 3D depth-dependent Gaussian blur system model (Case 2 only). I used 5 subsets of 12 views each.

Fig. 1, Fig. 2, and Fig. 3 show the voxel value at location (n_x/2, n_y/2, n_z/2), which is where the point source was located in Cases 1 and 2, as a function of EM iteration. FBP with a ramp filter recovers only a fraction of the ideal value due to the poor resolution at the center of the object.
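The erf-based sampling used to generate the Case 3 measurements amounts to integrating a unit-area Gaussian PSF exactly over each detector bin via the Gaussian CDF, rather than sampling it at bin centers. A minimal 1D sketch (the function name and bin layout are illustrative, not the actual code used):

```python
import math
import numpy as np

def gauss_bin_integrals(edges, center, fwhm):
    """Exact integrals of a unit-area Gaussian over detector bins.

    edges  : bin edges, length n+1 (monotone increasing)
    center : point-source projection coordinate
    fwhm   : Gaussian FWHM, same units as edges
    """
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))  # FWHM -> sigma
    cdf = np.array([0.5 * (1.0 + math.erf((e - center) / (sigma * math.sqrt(2.0))))
                    for e in edges])
    return np.diff(cdf)  # erf-based bin integrals, not midpoint samples

# A point source sitting exactly on a bin edge, as in Case 3, splits its
# counts evenly between the two adjacent bins.
edges = np.arange(-16.0, 17.0)            # 32 unit-width bins
p = gauss_bin_integrals(edges, 0.0, 3.0)  # source on the central edge
```

Because the bins integrate the continuous PSF exactly, these projections do not correspond to any discrete system matrix, which is precisely the model mismatch Case 3 probes.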
EM with a 2D model, whether strip integrals or in-plane depth-dependent response, partially recovers the point-source value, but cannot fully recover it since the true blur is three-dimensional. When we include the full 3D detector response, then naturally EM converges to the correct value as it iterates, since there is no noise in this simulation.

Curiously, in the n_x = 32 case the strip integrals outperformed the in-plane (2D) detector blur model: the strips have width 14 mm, whereas the actual detector response has FWHM = 10 mm at the center of the FOV, so the wider strips are over-correcting in-plane. And, as Les Rogers says, simulations are doomed to succeed; that is particularly true in Case 2, since the projection measurements were computed with exactly the model used for reconstruction.

In Case 3 we recover 25% of the value, because the counts are spread over the four central voxels (the ideal point source lies on the corner between them). So 25% is in fact the best we could hope to recover in this case, and the method worked better than might have been expected given the model mismatch. Summing the four central voxels would give recovery of nearly unity. In Case 1 the central slice had 88% of the counts (for the n_x = 32 case), so it is interesting that the 2D approach achieved less than 60% recovery.

Not surprisingly, convergence is slower in the 64 case, since the PSFs are wider (in terms of number of voxels) and there are more parameters (voxels) to estimate. Fig. 2 confirms that OSEM with 5 subsets converges about 5 times faster than ordinary ML-EM, which is why it is so popular in the depth-dependent response compensation community!

¹Actually, this is a slightly modified form of OSEM that is faster than the classical OSEM algorithm yet has better convergence properties.
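For reference, classical OS-EM simply cycles the ML-EM multiplicative update over subsets of the projection views, which is where the roughly subset-fold speedup comes from. The sketch below is a generic dense-matrix version under assumed names and sizes; the actual implementation is matrix-free and uses the modified OSEM variant mentioned in the footnote, not this code.

```python
import numpy as np

def osem(A, y, n_subsets, n_iter):
    """Classical OS-EM sketch for y ~ Poisson(A x), dense nonnegative A."""
    x = np.ones(A.shape[1])
    subsets = [np.arange(s, A.shape[0], n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for rows in subsets:                 # one EM-like update per subset
            As = A[rows]
            sens = np.maximum(As.sum(axis=0), 1e-12)     # A_s' 1 (sensitivity)
            ratio = y[rows] / np.maximum(As @ x, 1e-12)  # measured / predicted
            x = x * (As.T @ ratio) / sens    # multiplicative EM update
    return x

# Toy consistency check: noiseless data from a known nonnegative object.
rng = np.random.default_rng(1)
A = rng.random((30, 10))        # stand-in nonnegative system matrix
x_true = rng.random(10) + 0.1
y = A @ x_true
x_hat = osem(A, y, n_subsets=5, n_iter=50)
```

With n_subsets = 1 this reduces to ordinary ML-EM; with noiseless, consistent data the iterates drive the projection residual toward zero, mirroring the convergence seen in Figs. 1-3.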
Figure 1: Case 1. Digital point source results for the 32 case.

Figure 2: Case 2. Digital point source results for the 64 case with discrete system matrix projections.

Figure 3: Case 3. Digital point source results for the 64 case with analytical Gaussian projections.

(Each figure plots point source recovery versus iteration for EM with the 2D strip, 2D blur, and 3D blur models; Fig. 2 also includes OSEM with the 3D blur model.)
3 Experiment 2: Monte-Carlo Point Source

The second set of experiments used a single 64 × 19 projection view of a point source in air, created by Yuni's Monte Carlo program. As in Case 3 above, the point source is located at the center of the middle slice and is centered within the transaxial plane, so it does not fall on the center of any voxel. For the purposes of verifying the resolution recovery aspects, I considered only the geometric component of the set of detected photons in the primary energy window. I replicated that view 60 times to form 60 artificial projection views over 360°. The radius of rotation was 26 cm, and the pixel size was 0.36 cm.

To determine the set of PSFs, I fit a Gaussian PSF to the MC data. The FWHM of the fitted Gaussian was 6.32 pixels, which is 2.275 cm. (Gee, that is pretty lousy spatial resolution.) Note that the MC simulation uses square collimator holes, yet the Gaussian fit is remarkably good (see Fig. 4) and gives remarkably good reconstructions too. For reconstruction, I simply varied the width of the Gaussian linearly from 0 at the detector face to 2.275 cm at the center of rotation (26 cm from the detector). This ignores the detector crystal response, but that is probably a small effect, and since this is just a point source at the center it does not matter anyway.

In the interest of speed I ran only the FBP and OSEM algorithms; there is little point in using ordinary ML-EM. Fig. 5 shows the results, which again confirm the significant benefits of using a full 3D depth-dependent system model. Fig. 6 shows profiles along the axial (z) direction through the reconstructed images, which show clearly that the full 3D depth-dependent system model does a much better job of recovering the resolution. I took Yuni's data and normalized it so that the single projection summed to 100 counts, so I think a peak value of 25 is what we should expect, since the counts will be spread over the central four voxels.
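The linear blur model above (FWHM growing from 0 at the detector face to 2.275 cm at the center of rotation) can be turned into the per-plane stack of odd-sized Gaussian kernels that the projector expects. The sketch below is illustrative only: the function name, kernel half-width, and the per-plane distance formula are assumptions, not the actual code; it uses FWHM = 2√(2 ln 2)·σ to convert widths.

```python
import numpy as np

def depth_dependent_kernels(n_planes, pixel_cm=0.36, radius_cm=26.0,
                            fwhm_at_cor_cm=2.275, half=5):
    """Build one (2*half+1) x (2*half+1) Gaussian PSF kernel per object plane.

    FWHM grows linearly from 0 at the detector face to fwhm_at_cor_cm at
    the center of rotation (radius_cm from the detector). The per-plane
    distance below is a toy stand-in for the real orbit geometry.
    """
    fwhm_to_sigma = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    u = np.arange(-half, half + 1) * pixel_cm     # odd number of samples
    uu, vv = np.meshgrid(u, u)
    kernels = []
    for p in range(n_planes):
        dist = radius_cm * (p + 0.5) / n_planes   # plane depth (illustrative)
        sigma = max(fwhm_at_cor_cm * (dist / radius_cm) * fwhm_to_sigma, 1e-3)
        k = np.exp(-(uu**2 + vv**2) / (2.0 * sigma**2))
        kernels.append(k / k.sum())               # normalize to unit DC gain
    return np.stack(kernels)

K = depth_dependent_kernels(64)   # one 11 x 11 kernel per plane
```

Planes near the detector get a nearly delta-like kernel while planes at the far side of the orbit get the widest blur, which is exactly the depth dependence the 3D system model compensates for.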
4 Acknowledgement

Many thanks to Ken Koral, Yuni Dewaraja, and Edward Ficaro for their continuing input on SPECT.

References

[1] G. L. Zeng and G. T. Gullberg. Frequency domain implementation of the three-dimensional geometric point response correction in SPECT imaging. IEEE Tr. Nuc. Sci., 39(5):1444-53, October 1992.

[2] G. L. Zeng and G. T. Gullberg. Unmatched projector/backprojector pairs in an iterative reconstruction algorithm. IEEE Tr. Med. Im., 19(5):548-55, May 2000.

[3] L. T. Chang. A method for attenuation correction in radionuclide computed tomography. IEEE Tr. Nuc. Sci., 25(1):638-643, February 1978.
Figure 4: Gaussian fit to Monte-Carlo point source projection view (left: Monte Carlo PSF; right: radially symmetric Gaussian fit, FWHM = 6.3231 samples).
Figure 5: Monte-Carlo point source results (point source recovery versus iteration for OSEM with the 2D strip, 2D blur, and 3D blur models).
Figure 6: Profiles through reconstructions of the Monte-Carlo point source data (pixel value versus z index for OSEM with the 2D strip, 2D blur, and 3D blur models).