Macro-step Monte Carlo Methods and their Applications in Proton Radiotherapy and Optical Photon Transport


Macro-step Monte Carlo Methods and their Applications in Proton Radiotherapy and Optical Photon Transport

by

Dustin J. Jacqmin

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Medical Physics) at the University of Wisconsin-Madison

2012

Date of final oral examination: 5/23/2012

The dissertation is approved by the following members of the Final Oral Committee:
Thomas R. Mackie, Professor, Medical Physics and Human Oncology
Paul M. DeLuca, Jr., Professor, Medical Physics
Bhudatt R. Paliwal, Professor, Medical Physics and Human Oncology
Paul P. Wilson, Associate Professor, Engineering Physics
Kevin R. Kozak, Assistant Professor, Human Oncology

Copyright by Dustin J. Jacqmin 2012
All Rights Reserved

For James and Tina Jacqmin, without whom I would not have grown so much nor gone so far.

Abstract

Monte Carlo modeling of radiation transport is considered the gold standard for radiotherapy dose calculations. However, highly accurate Monte Carlo calculations are very time consuming, and the use of Monte Carlo dose calculation methods is often not practical in clinical settings. With this in mind, a variation on the Monte Carlo method called macro Monte Carlo (MMC) was developed in the 1990s for electron beam radiotherapy dose calculations. To accelerate the simulation process, the electron MMC method used larger step-sizes in regions of the simulation geometry where the size of the region was large relative to the size of a typical Monte Carlo step. These large steps were pre-computed using conventional Monte Carlo simulations and stored in a database featuring many step-sizes and materials. The database was loaded into memory by a custom electron MMC code and used to transport electrons quickly through a heterogeneous absorbing geometry. The purpose of this thesis work was to apply the same techniques to proton radiotherapy dose calculation and light propagation Monte Carlo simulations. First, the MMC method was implemented for proton radiotherapy dose calculations. A database composed of pre-computed steps was created using MCNPX for many materials and beam energies. The database was used by a custom proton MMC code called PMMC to transport protons through a heterogeneous absorbing geometry. The PMMC code was tested against MCNPX for a number of different proton beam energies and geometries and proved to be accurate and much more efficient.

The MMC method was also implemented for light propagation Monte Carlo simulations. The widely accepted Monte Carlo for multilayered media (MCML) code was modified to incorporate the MMC method. The original MCML uses basic scattering and absorption physics to transport optical photons through multilayered geometries. The MMC version of MCML was tested against the original MCML code using a number of different geometries and proved to be just as accurate and more efficient. This work has the potential to accelerate light modeling for both photodynamic therapy and near-infrared spectroscopic imaging.

Acknowledgments

"If you want to build a ship, don't drum up people together to collect wood and don't assign them tasks and work, but rather teach them to long for the endless immensity of the sea."
Antoine de Saint-Exupéry

The family members, friends and colleagues who have helped me most during the last five years have done far more than assist me with my thesis work. Together, they have fostered a longing for knowledge within me that will continue to drive me for the rest of my life. They have given me opportunities to learn and explore that I could not have dreamed of five years ago. It is with a grateful heart that I write these acknowledgements to those who have enabled me to become the person I am.

First of all, I would like to thank my two thesis advisors: Thomas "Rock" Mackie and Paul M. DeLuca, Jr. Rock Mackie developed the macro Monte Carlo method many years ago, and it has been an honor working with him to extend the macro Monte Carlo method to proton therapy. In addition to generously sharing his expertise on Monte Carlo radiation transport with me, Rock taught me a number of valuable lessons during the course of my research. Rock encouraged me to search for solutions in unexpected places by seeking connections between medical physics and other mathematical and scientific fields. Furthermore, Rock helped me become a well-rounded student by allowing me to pursue a rather non-traditional path during graduate school.

Paul DeLuca, you might say, was both my first and last boss during college. I discovered the field of medical physics while working as an office assistant in Paul DeLuca's office at the medical school as an undergraduate. During my Master's studies, I worked with Paul on a neutron backscatter health physics project. Through this project, Paul taught me to be a diligent researcher and encouraged me to ask a lot of questions. This ultimately made me better at solving problems in my thesis work. I am very grateful for Paul's guidance and mentoring over the course of my entire college career.

I owe a debt of gratitude to many of my fellow graduate students in Rock Mackie's group. Xiaohu Mo has always been willing to lend a helping hand, and I cannot thank him enough for all the questions he has helped me answer over the years. I also learned a great deal about clinical physics working with Xiaohu and Miao Zhang as a member of Team Tomo. Finally, Patrick Hill and Ryan Flynn have always been there to give me sound advice on my research and the field of medical physics.

The Brittingham Viking Organization has figured prominently in my life during my PhD studies and has been a source of immeasurable personal growth. I would like to thank the organization and the Ehrnrooth family for the generous scholarship that allowed me to study abroad in Finland. The experience proved to be beneficial to my thesis work in a number of unexpected ways. Most importantly, it gave me an opportunity to pursue a summer guest researcher position at Aalto University. I would like to thank Risto Ilmoniemi and Ilkka Nissilä for giving me the opportunity to work with them in the Biomedical Engineering and Computational Science Department at Aalto University. The research I did in Finland became the inspiration for the light propagation macro Monte Carlo algorithm discussed in this dissertation.
I would not have made it through the past five years without the support and encouragement of my friends. As my roommate for three years of college, Nicole Rybeck saw me through some of the best years of my life and was always game to watch Mean Girls and eat

Pokey Stix. The members of the Badger Ballroom Dance Team and the Hoofer's Outing Club taught me that there is nothing you can't learn to do if you try hard enough. My fellow Brittingham Vikings are some of the most steadfast friends I have ever made and have kept these last three years very, very interesting. Finally, I owe my current roommate and fellow medical physicist Jessica Snow an enormous debt of gratitude. She has been an inexhaustible well of encouragement and support during what has been a very challenging year, and a top-notch roommate to boot.

Above all, I would like to thank my family. My grandparents, sister, aunts, uncles and cousins are a source of inspiration, and I strive hard to make them proud. My parents Jim and Tina Jacqmin have always believed in me and encouraged me to grow. They have always given me sound advice (even if I have failed to recognize it in the moment). They have encouraged me to see the world and to develop myself inside and outside of school. I would not be the person I am today without their support.

Contents

Abstract
Acknowledgments
List of Figures
List of Tables

I  Proton Macro Monte Carlo

1  Introduction
   1.1  Proton Radiotherapy Overview
   1.2  Proton Radiotherapy Dose Calculation
        1.2.1  Monte Carlo Methods
        1.2.2  Pencil-Beam Methods
        1.2.3  Macro Monte Carlo

2  The Proton Macro Monte Carlo Library
   2.1  Introduction
   2.2  The Local Geometry
   2.3  Monte Carlo Simulations in the Local Geometry
        2.3.1  MCNPX Simulations
        2.3.2  Processing the MCNPX Output
   2.4  Creation of Pre-computed Steps
        2.4.1  Histogram-based Pre-computed Steps
        2.4.2  List-based Pre-computed Steps
   2.5  The MMC Library

3  The Proton Macro Monte Carlo Method
   3.1  Introduction
   3.2  Initialization of the Proton MMC Code
   3.3  Simulation of a Proton History
        3.3.1  Initialization of the Proton
        3.3.2  Determining the Pre-computed Step Material, Energy and Step-size
        3.3.3  Sampling a History from the MMC Library
        3.3.4  Updating the Proton Phase Space
        3.3.5  Deposition of Energy Loss in the Dose Grid
        3.3.6  Secondary Particle Production
   3.4  Processing of the Output

4  Results
   4.1  Introduction
   4.2  Validation Procedure
        4.2.1  Validation Geometries
        4.2.2  Validation Method
   4.3  Results from PMMC Validation
        4.3.1  Uncertainty Analysis of MCNPX and PMMC
        4.3.2  Problem 1: Homogeneous Water Phantom
        4.3.3  Problem 2: Fat-Muscle-Bone Phantom
        4.3.4  Problem 3: Water Phantom with Multiple Gaussian Beams
        4.3.5  Performance of PMMC
   4.4  Conclusions

5  Implications and Future Work
   5.1  Introduction
   5.2  Improvements to Local Geometry
        5.2.1  Storing Dose Deposition Data
        5.2.2  Optimizing Energies and Step-sizes
        5.2.3  Adding Heavy Charged Particles to the MMC Library
        5.2.4  Histograms for the One-Proton Histories
   5.3  Improvements to Global Geometry
        5.3.1  Dose Deposition across Boundaries
        5.3.2  Dose Deposition through Variable Density Regions
        5.3.3  Full Transport of Additional Particles
        5.3.4  Uncertainty Analysis
   5.4  Implications of PMMC
        5.4.1  PMMC in Proton Radiotherapy Treatment Planning
        5.4.2  MMC on a Graphics Processing Unit Computer Architecture
        5.4.3  MMC for Other Medical Physics Problems

II  Light Propagation Macro Monte Carlo

6  Introduction
   6.1  Current Uses of Light Propagation Monte Carlo
   6.2  Light Propagation Monte Carlo
        6.2.1  Description of the Transport Medium
        6.2.2  Optical Photon Transport
   6.3  Monte Carlo in Multilayered Media (MCML)
   6.4  Prior Work on Accelerating Light Propagation Monte Carlo

7  The Light Propagation Macro Monte Carlo Library
   7.1  Introduction
   7.2  Monte Carlo Simulations
   7.3  Processing the Photon Packet Exit Parameters
   7.4  The MMC Library

8  The Light Propagation Macro Monte Carlo Method
   8.1  Introduction
   8.2  Modifications to MCML Data Structures
   8.3  Modifications to MCML Input/Output Functions
   8.4  The Macro Monte Carlo Photon Transport Algorithm
        8.4.1  Determining whether a Macro-step is Feasible
        8.4.2  Sampling Parameters from the MMC Library
        8.4.3  Updating the Position of the Photon Packet
        8.4.4  Updating the Trajectory of the Photon Packet
        8.4.5  Updating the Weight of the Photon Packet
        8.4.6  Depositing Weightloss in the Absorption Array

9  Results of MMC-MCML
   9.1  Validation Procedure
        9.1.1  Validation Model
        9.1.2  Validation Method
   9.2  Results from Validation Tests
        9.2.1  Accuracy of MMC-MCML
        9.2.2  Performance of MMC-MCML
   9.3  Conclusions

10 Implications and Future Work
   10.1  Optimizing the MMC Library
   10.2  Expanding to More Complex Geometries
   10.3  Tailoring Photon MMC to Particular Problems

Bibliography

List of Figures

1.1 Comparison of a high energy photon beam, proton beam and a spread-out Bragg peak
Diagram of the local geometry used by MCNPX
The relationship between the Cartesian and cylindrical coordinates for exit position
The relationship between the cylindrical geometry and the alternate frame-of-reference
Relationship between the alternate frame-of-references and exit trajectory in spherical coordinates
Example of uniformly spaced bins and equiprobable bins using the normal distribution
Algorithm used to create the four-dimensional histograms with equiprobable bins
A graphical representation of the pre-computed step data structure: Four-dimensional histogram with equiprobable bin structure
A sample phase space probability density function
Examples of the fractal bin structure created using the normal distribution
A graphical representation of the pre-computed step data structure: Four-dimensional histogram with fractal bin structure
A graphical representation of the pre-computed step data structure: Lists of individual phase space elements
Algorithm used to create a pre-computed step that is stored as a list of the exit phase space parameters
A graphical representation of the pre-computed step data structure: Lists of individual histories
Algorithm used to create a pre-computed step that is stored as a list of the histories
Partial diagram of the macro Monte Carlo library of pre-computed steps
Algorithm used to generate the MMC library of pre-computed steps
A two-dimensional representation of the macro Monte Carlo process
Diagram of the overall flow of the PMMC program
Diagram of the logic flow used for a single proton history

4.1 A graphical representation of the uniform water phantom geometry used for the first test problem
A graphical representation of the multilayered fat-bone geometry used for the second test problem
A graphical representation of the multiple-beam geometry used for the third test problem
Distribution of relative error as a function of position using the homogeneous water phantom geometry and 10^4 proton histories
Distribution of relative error as a function of position using the homogeneous water phantom geometry and 10^6 proton histories
Relative error as a function of the number of proton histories simulated in the homogeneous water phantom geometry
Problem 1: Central-axis depth-dose distributions for pencil beams produced by MCNPX and PMMC
Problem 1: Integral depth-dose distributions for pencil beams produced by MCNPX and PMMC
Problem 1: Lateral profiles at the Bragg peak produced by MCNPX and PMMC
Problem 2: Central-axis depth-dose distributions for pencil beams produced by MCNPX and PMMC
Problem 2: Integral depth-dose distributions for pencil beams produced by MCNPX and PMMC
Problem 2: Lateral profiles at the depth of maximum dose produced by MCNPX and PMMC
Problem 3: Central-axis depth-dose distributions for Gaussian beams produced by MCNPX and PMMC
Problem 3: Two-dimensional isodose profile produced by MCNPX and PMMC
A diagram of the Cartesian coordinate system used in MCML
A sample Monte Carlo simulation showing light propagation in a sphere
The spherical coordinate system used to define the exit position
The alternate frame-of-reference used to compute trajectory in spherical coordinates
A diagram of the spherical coordinates of the exit trajectory with respect to the alternate frame-of-reference
The algorithm used to generate the MMC library
A partial schematic of the MMC library used for light propagation MMC
The beginning of a sample MMC library file
A flow diagram of the modified MCML algorithm
Flow diagram for a macro Monte Carlo step
Distribution of relative error as a function of position using the skin model: 10^5 photons

9.2 Distribution of relative error as a function of position using the skin model: 10^8 photons
Relative error as a function of the number of photon packets simulated in the skin model
Relative error as a function of the MMC library size in the homogeneous absorber model
Comparison of the isofluence lines produced by MCML and MMC-MCML using 100 million photon packets and the brain model
Comparison of the diffuse reflectance and transmittance profiles produced by MCML and MMC-MCML

List of Tables

2.1 List of parameters contained in the ssr file for each particle
List of parameters that describe the behavior of a single exiting particle
Implementations of Proton Macro Monte Carlo
Material properties of the multilayered geometry used for the second test problem
Properties of the beamlets used for the third test problem
Specifications of the test platform
Runtime comparison (in seconds) of MCNPX and PMMC for 10^6 proton histories
Optical properties for the one-layer test problem
Optical properties of the four-layer brain model for 800-nm light
Optical properties of the five-layer skin model for 633-nm light
Specifications of the test platform
Runtime comparison (in seconds) of MCML and MMC-MCML for 100 million photon packets

Part I

Proton Macro Monte Carlo

Chapter 1

Introduction

1.1 Proton Radiotherapy Overview

The history, advantages and drawbacks of proton radiotherapy have been covered at great length in the scientific literature [1, 2]. Briefly, the advantages of using accelerated protons for radiotherapy were first realized by Robert Wilson in 1946 [1]. The first patients were treated with this modality in the mid-1950s using protons accelerated by a cyclotron at Lawrence Berkeley Laboratory. Since then, the number of facilities using accelerated protons to treat cancer has proliferated to more than 30 worldwide [2].

Accelerated protons and high energy photons interact with matter in significantly different ways [3]. The major distinction between protons and high energy photons is that protons are charged particles while photons are uncharged. As charged particles, protons interact approximately continuously via the Coulomb force with the atomic electrons in the media through which they are traveling. This causes the protons to slow down, and they do so with little lateral scatter or spread. As a result, a beam of protons follows a relatively straight path through tissue. This makes them useful for targeting well-localized tumors [3, 4]. In addition, the depth at which a proton beam will come to a stop and cease depositing energy can be easily estimated [3]. This depth is the location of the Bragg peak, a steep peak in the proton depth-dose distribution that occurs because protons deposit most of their energy in the last few millimeters of their range (see Fig. 1.1). Given these properties, a beam of protons can be specified to stop inside a tumor, deposit a considerable fraction of its energy in that tumor, and deposit no dose beyond the margins of the tumor [1, 3, 4].

High energy photons are quite different. As uncharged particles, photons interact randomly via electromagnetic interactions with the media through which they are traveling [3].
A photon passing through a patient can interact anywhere along its initial trajectory (near the surface, inside a tumor, beyond the tumor, etc.) or pass through the patient without interacting at all. However, while individual photons interact in an unpredictable way, a photon beam composed of a large number of high energy photons achieves a predictable and reproducible average behavior. For instance, a monodirectional beam of photons impinging on a homogeneous medium always produces a dose distribution that decreases approximately exponentially with depth, as shown in Fig. 1.1 [3]. This reproducible average behavior makes them a suitable source of radiation for safely treating cancer.

Figure 1.1: Comparison of a high energy photon beam, proton beam and a spread-out Bragg peak. The 10 MV photon beam exhibits a maximum dose at a shallow depth and decreases thereafter. The 200 MeV proton beam deposits a fraction of its peak dose at the entrance and most of its energy in the last few millimeters of travel. The spread-out Bragg peak is composed of many superimposed monoenergetic beams and has a higher entrance dose than the 200 MeV proton beam.

The differences in the physical properties of high energy photons and protons have consequences in radiotherapy. First, the peak dose of a proton beam can be positioned anywhere in a patient by changing the beam energy, because this changes the depth of the Bragg peak [4]. In contrast, the peak dose for photons always occurs near the entrance surface [3]. This makes protons better suited for sparing shallow structures. Second, while proton beams have a finite depth of penetration, a beam of high energy photons deposits dose at all depths along the beam's direction. This makes protons better suited for sparing deep structures as well [4]. All told, proton radiotherapy has the potential to provide better tissue sparing and permit higher tumor doses relative to photon radiotherapy [2]. These advantages are summarized graphically in Fig. 1.1. In addition to showing high energy photon and proton depth-dose curves, Fig. 1.1 displays a spread-out Bragg peak produced by varying the energy of a proton beam to produce a flat dose distribution over a range of depths [1]. This range of depths typically corresponds to the location of a tumor [4].

That is not to say protons are without disadvantages. Proton radiotherapy technology is vastly more expensive than high energy photon radiotherapy equipment [2]. Furthermore, proton radiotherapy is very susceptible to tumor motion due to the precision with which proton beams must be placed [5, 6]. At greater depths, the lateral uncertainty in the proton dose distribution can be larger than that of photon beams due to motion and tissue inhomogeneity. Whether the advantages of proton radiotherapy outweigh the disadvantages is the subject of considerable debate. With the continuing proliferation of proton radiotherapy facilities, greater attention is being given to some of the underdeveloped components and poorly understood aspects of the proton radiotherapy treatment process [2]. Part I of this dissertation will focus on the development of a fast, accurate algorithm for proton beam dose calculation.

1.2 Proton Radiotherapy Dose Calculation

The goal of a proton radiotherapy dose calculation algorithm is simple: Given a set of proton beams and a patient's anatomical data, determine the resulting dose distribution in the patient as quickly and accurately as possible. The most accurate way to do this would involve simulating the behavior of individual protons on an interaction-by-interaction basis, as is done for photons [7]. However, so-called single-scatter methods are problematic because high-energy protons deposit a minute fraction of their energy in each interaction they undergo and have correspondingly short mean free paths [3, 8].
For instance, a 10-MeV proton has a mean-free-path of approximately 100 nm and deposits on the order of 20 eV per interaction [8]. The immense number of interactions that protons undergo before coming to a stop makes simulating the behavior of individual protons in an interaction-by-interaction manner very time consuming. Luckily, the individual interactions that protons undergo in tissue-like materials result in only minute changes in the trajectories of these protons [8]. In fact, the average effect of multiple sequential interactions can be calculated, and this makes it possible to perform proton transport with step-sizes much larger than one mean-free-path [9]. One common implementation of this involves Monte Carlo radiation transport.

1.2.1 Monte Carlo Methods

Monte Carlo is a computational tool with a wide array of applications. In proton radiotherapy, Monte Carlo radiation transport is used as a dose calculation algorithm [9–14]. Given a set of proton beams and a patient's anatomical data, a Monte Carlo radiation transport code can be used to simulate the behavior of the proton beams and generate a statistically likely dose distribution. This distribution is generated by simulating the behavior of individual protons in a step-by-step manner. The simulation of a single proton trajectory is called a history, and the uncertainty in the Monte Carlo-calculated dose distribution improves as the number of histories increases [9]. Millions of proton histories are needed to generate a dose distribution with a level of uncertainty that is considered acceptable for clinical purposes [12].

As protons pass through matter, they slow down and interact via two types of interactions. First, protons interact with atomic electrons via the Coulomb force, resulting in continuous energy loss [3]. Second, protons undergo discrete interactions with atomic nuclei that result in larger energy transfers [15–17]. Different Monte Carlo codes model these two types of interactions differently, but a few generalizations can be made.

First, most Monte Carlo radiation transport codes model Coulomb interactions using condensed-history algorithms [9, 12, 18]. Condensed-history algorithms approximate the continuous energy loss using discrete steps. The energy lost in a single step depends upon the step-size, the medium and the proton energy, and can be calculated using tabulated stopping powers [10]. The stopping power of a material is the average energy loss of a charged particle per unit path length in that material. It is typically expressed in MeV/cm. The stopping power of a particle in a given material depends upon the charge, mass and energy of the particle and the atomic composition of the material. Stopping power tables yield an expected energy loss; in reality, the true energy loss for a given step is randomly distributed around the expected value. This phenomenon is called energy straggling and is accounted for with mathematical models [9, 18]. The position and direction of the proton after each step are typically sampled using a Gaussian distribution and multiple-scattering theory, respectively [9, 18].

While the treatment of the continuous Coulomb interactions is roughly the same in the most common Monte Carlo radiation transport codes, the methods used to model discrete nuclear interactions can be quite different from code to code. Each code begins modeling these discrete interactions before each condensed-history step by sampling probability distributions to determine whether a discrete interaction will occur [9, 18]. These probability distributions may be based on data derived from experiments or based on physics models. For instance, the MCNPX Monte Carlo code uses the experimentally determined LA150 nuclear reaction cross-sections for protons below the maximum energy of the experimental data (usually 150 MeV). A combination of several physics models is used for higher energies [9, 13].
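The condensed-history stepping just described can be sketched in a few lines of code. This is a toy illustration, not the algorithm of any particular code: the stopping-power function, the straggling width, and the scattering sigma below are placeholder assumptions rather than tabulated physics data.

```python
import random

def stopping_power(E):
    """Placeholder stopping power in MeV/cm for a water-like medium.
    Real codes interpolate tabulated values; this rough 1/E trend only
    mimics the rise of S(E) as the proton slows down."""
    return 5.0 + 150.0 / max(E, 1.0)

def condensed_history_step(E, step_cm, rng):
    """One condensed-history step: mean energy loss from the stopping
    power, Gaussian energy straggling around that mean, and a small
    Gaussian polar angle standing in for multiple-scattering theory."""
    mean_loss = stopping_power(E) * step_cm
    loss = max(0.0, rng.gauss(mean_loss, 0.05 * mean_loss))  # straggling
    loss = min(loss, E)                 # cannot lose more than it carries
    theta = abs(rng.gauss(0.0, 0.02))   # assumed scattering sigma (rad)
    return E - loss, theta

# Transport one 150-MeV proton in 1-mm steps until it has nearly stopped.
rng = random.Random(1)
E, n_steps = 150.0, 0
while E > 1.0:
    E, theta = condensed_history_step(E, 0.1, rng)
    n_steps += 1
```

Each iteration aggregates thousands of individual Coulomb interactions into a single step, which is precisely the time saving that condensed-history algorithms provide.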
In contrast, the GEANT4 Monte Carlo code can be instructed to use a number of different physics models but does not use the LA150 cross-section data [18].

Monte Carlo radiation transport codes are capable of modeling dose distributions in proton radiotherapy with an accuracy unmatched by other dose calculation methods [10, 12]. The primary drawback of Monte Carlo is that the calculation process is very time consuming. Future advances in computer technology will eventually make it possible to routinely use Monte Carlo as a dose calculation tool. For the time being, it is simply too time consuming and resource intensive to see widespread use in clinical treatment planning systems, particularly during the treatment plan optimization process.

1.2.2 Pencil-Beam Methods

As discussed above, modeling the behavior of protons in an interaction-by-interaction manner would be prohibitively time consuming. Condensed-history Monte Carlo methods reduce calculation times by modeling the aggregate behavior of many interactions over comparatively large steps. Pencil-beam algorithms take this a step further by modeling the aggregate behavior of millions of protons in the form of small diameter beams, or beamlets [10, 19]. A proton radiotherapy treatment plan can then be modeled as a superposition of many beamlets with different sizes and energies that are stretched or compressed longitudinally and widened laterally as they pass through different structures in the patient [19].

Before a pencil-beam algorithm models the dose deposition process inside a patient, it must model the effects of the beam line elements on the nearly monoenergetic proton beam that enters the treatment machine [19]. This is accomplished by changing the spot size and angular spread of the beam as it travels through the beam line. The final spot size and angular spread of the beam are used in the dose calculation algorithm that determines the dose to the patient [19].

As stated above, a pencil-beam algorithm models the effects of a full proton beam using numerous small diameter beamlets. The algorithm does this in two parts. First, the dose deposited by each beamlet is calculated at every point of interest. Then, the contributions of each beamlet are added together to compute the total dose at each point [19]. The dose, d(x, y, z), at a point (x, y, z) due to a single pencil-beam is calculated using Eq. (1.1):

\[ d(x, y, z) = C(z)\, O(x, y, z) \tag{1.1} \]

Eq. (1.1) shows that the dose due to a single beam is separated into a central-axis term, C(z), and an off-axis term, O(x, y, z). The origin of the coordinate system is at the beam source, and the z axis runs along the beam axis. In this representation, z is not the depth of the beam in the patient but rather the depth plus the source-to-surface distance [19]. With that in mind, the central-axis term is defined as shown in Eq. (1.2):

\[ C(z) = DD(d_{\mathrm{eff}}) \left( \frac{ssd_0 + d_{\mathrm{eff}}}{z} \right)^{2} \tag{1.2} \]

The term DD(d_eff) refers to the central-axis depth-dose distribution of a monoenergetic broad beam in a water phantom, and ssd_0 is the source-to-surface distance from the measured large, open beam data. The effective depth, d_eff, is the water-equivalent depth of the beam inside the patient [19].

The off-axis term accounts for the lateral flux of the beam due to angular spread from beam-line elements and from scattering inside the patient [19]. The lateral flux distribution is taken to be Gaussian based on multiple-scattering theory. Eq. (1.3) shows the form of the off-axis term:

\[ O(x, y, z) = \frac{1}{2\pi\,[\sigma_{\mathrm{tot}}(z)]^{2}} \exp\!\left( -\frac{x^{2} + y^{2}}{2\,[\sigma_{\mathrm{tot}}(z)]^{2}} \right) \tag{1.3} \]

The spread of the Gaussian distribution, σ_tot(z), is called the radial emittance standard deviation, and it is related to the total angular spread of the beam [19]. The dose distribution due to a full beam is determined by integration over all of the pencil beams. In practice, this integration process is converted to a weighted summation over the beamlets. The weight of a beamlet depends upon its location in the intensity profile of the full beam [19].
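Eqs. (1.1)–(1.3) and the weighted beamlet summation translate directly into code. In this sketch the depth-dose lookup dd() and the emittance model sigma_tot() are illustrative stand-ins for the measured broad-beam water-phantom data that a real implementation would interpolate.

```python
import math

def dd(d_eff):
    """Stand-in for measured central-axis depth-dose DD(d_eff); a toy
    Gaussian bump playing the role of a Bragg-peak-like curve."""
    return math.exp(-((d_eff - 20.0) ** 2) / 8.0)

def sigma_tot(z):
    """Assumed radial emittance standard deviation (cm), growing with z."""
    return 0.3 + 0.005 * z

def pencil_beam_dose(x, y, z, ssd0=100.0):
    """Single-beamlet dose per Eq. (1.1): central-axis term, Eq. (1.2),
    times the Gaussian off-axis term, Eq. (1.3). Since z includes the
    SSD, the effective depth here is simply z - ssd0 (water-equivalent
    depth in general)."""
    d_eff = z - ssd0
    c = dd(d_eff) * ((ssd0 + d_eff) / z) ** 2
    s2 = sigma_tot(z) ** 2
    o = math.exp(-(x * x + y * y) / (2.0 * s2)) / (2.0 * math.pi * s2)
    return c * o

def total_dose(x, y, z, beamlets):
    """Weighted sum over beamlets [(weight, x_offset, y_offset), ...],
    the discrete form of the integration over all pencil beams."""
    return sum(w * pencil_beam_dose(x - bx, y - by, z) for w, bx, by in beamlets)
```

Evaluating total_dose on a grid of tens of thousands of points with many beamlets is the whole cost of a pencil-beam calculation, which is why it is so much faster than particle-by-particle transport.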

Pencil-beam algorithms work best in geometries that are minimally heterogeneous [10, 19]. This is the case for a number of reasons. First, the data used in pencil-beam algorithms are generated in homogeneous water phantoms. Approximations must be used to apply these data to heterogeneous situations. Second, the multiple-scattering theory used to generate the off-axis term treats the beam as if it were traveling through semi-infinite homogeneous slabs. As a result, pencil-beam algorithms perform poorly near heterogeneity boundaries that run along the beam axis [19]. The effects of proton scatter along such boundaries are too complex to be modeled using a simple Gaussian distribution for the lateral scatter. Another disadvantage of pencil-beam algorithms is the way discrete nuclear interactions are treated. Pencil-beam algorithms use measured depth-dose data, and these data include the local effects of nuclear interactions. However, protons with larger scattering angles and secondary neutrons are not explicitly accounted for in pencil-beam algorithms [10, 19]. Conversely, these nuclear interactions can easily be included in Monte Carlo calculations [9, 12, 18].

In exchange for the aforementioned disadvantages, pencil-beam algorithms offer shorter calculation times that cannot be matched by Monte Carlo [10]. Pencil-beam algorithm calculations are by no means instantaneous; the dose distribution must be calculated on a grid with tens of thousands of points, with numerous pencil beams contributing dose at each point. However, this task is much less onerous than simulating the behavior of proton beams on a proton-by-proton basis as is done in Monte Carlo radiation transport.

1.2.3 Macro Monte Carlo

Selecting a dose calculation algorithm for proton radiation therapy treatment planning involves balancing two competing needs: accuracy and speed.
As discussed above, Monte Carlo represents the most accurate algorithm but Monte Carlo simulations take too long to be practical for regular use in a radiotherapy clinic [10,12]. On the other end of the spectrum

are pencil-beam algorithms. They are much faster because the calculations are not based on the behavior of individual particles. Rather, these algorithms model a full proton beam as the sum of many small beamlets [19]. This method struggles in heterogeneous media because pencil-beam algorithms cannot actively take heterogeneities into account. In particular, lateral scattering near heterogeneities is poorly modeled using these methods [10, 19]. Macro Monte Carlo may represent a compromise between the two in terms of accuracy and speed. Macro Monte Carlo (MMC) simulates the behavior of individual particles but does so in a way that is more efficient than traditional Monte Carlo [7, 20-22]. The weakness of traditional Monte Carlo is that looking up cross-sections and making physics calculations after each transport step is time consuming. A MMC algorithm uses data generated by traditional Monte Carlo so that it does not have to carry out these operations. These data are generated by running a Monte Carlo simulation on a limited number of small geometries called local geometries. Specifically, each simulation consists of particles (protons, electrons, etc.) entering a geometric object (a homogeneous cylinder or sphere, for example) with a fixed initial energy. The phase space of particles leaving the local geometry is tabulated for several initial energies and materials [7, 20, 22]. These tabulated datasets are assembled into a MMC library of pre-computed steps. The MMC library can be sampled using random numbers to transport particles over a heterogeneous geometry (for instance, a CT dataset) in a step-by-step manner, using the output phase space of one step as the initial conditions for the next step [20]. The MMC method was originally conceived and implemented for electron beam radiotherapy dose calculation [7, 20-22].
However, as electron beam radiotherapy use decreased and faster electron Monte Carlo codes became available, the need for algorithms based on the MMC method diminished and research on the MMC method has stalled since then. Part I of this dissertation discusses the application of the MMC method to proton beam radiotherapy.
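The library-driven, step-by-step transport loop described above can be sketched in a few lines of Python. Everything here is a toy illustration: the library layout (keyed by material and energy bin), the (energy_fraction, step_length) outcome records, and the nearest-bin lookup are assumptions made for the sketch, not the data structures of any published MMC code.

```python
import random

def nearest_bin(energy, bins):
    """Pick the tabulated energy bin closest to the particle's energy."""
    return min(bins, key=lambda b: abs(b - energy))

def mmc_transport(energy, depth, library, material_at, cutoff=1.0):
    """Chain pre-computed steps: the output phase space of one step
    becomes the initial condition for the next."""
    while energy > cutoff:
        material = material_at(depth)          # look up the global geometry
        bins = sorted({e for (m, e) in library if m == material})
        if not bins:
            break                              # no tabulated data here
        outcomes = library[(material, nearest_bin(energy, bins))]
        e_frac, step_cm = random.choice(outcomes)
        energy *= e_frac                       # sampled pre-computed outcome
        depth += step_cm
    return energy, depth

random.seed(1)
# Toy library: 1-cm steps in water, tabulated at 10 MeV intervals.
lib = {("water", e): [(0.90, 1.0), (0.88, 1.0)] for e in range(10, 110, 10)}
final_e, final_depth = mmc_transport(100.0, 0.0, lib, lambda d: "water")
```

Each loop iteration consumes one pre-computed step, so the expensive cross-section lookups and physics calculations of conventional Monte Carlo are replaced by table sampling.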

Chapter 2

The Proton Macro Monte Carlo Library

2.1 Introduction

Using prior macro Monte Carlo (MMC) implementations as a model, MMC was implemented for proton radiotherapy dose calculation. Before proceeding, it is necessary to introduce some terminology that is specific to the study of MMC. The local geometry is a well-defined simple geometry through which particle paths are computed. Conventional Monte Carlo (MC) simulations are used to compute particle paths through the local geometry, and these data are used to create a MMC library. A MMC program uses the MMC library to transport particles through a complex, heterogeneous geometry called the global geometry, taking large steps called pre-computed steps. To tie it all together: MC simulations of particles through small local geometries are used to create the MMC library, and the MMC library is used by a MMC program to transport particles via pre-computed steps through a global geometry [7]. The following subsections discuss the creation of the MMC library for protons. First, a local geometry is selected. Second, MCNPX simulations of protons are used to create pre-computed steps. Finally, pre-computed steps for many proton energies and materials are compiled into a MMC library.

2.2 The Local Geometry

The first step in creating the MMC library is to choose the shape of the geometry that will be used for the pre-computed steps in the MMC program. The local geometry can take on virtually any three-dimensional shape; however, it is best to choose a shape that is simple. Simple shapes are preferred because they can be manipulated more easily in the global code. Simple shapes may also make the data library smaller by allowing the results of the conventional MC simulations to be summarized in terms of a smaller number of parameters. It is also important to choose a shape that takes advantage of the physical

behavior of the particles that are being modeled. For proton radiotherapy simulations, a cylindrical geometry satisfies both of these requirements. Cylinders are simple to describe mathematically and are appropriate for protons because high-energy protons undergo only small amounts of lateral deflection [1]. The shape of a long cylinder conforms very well to the trajectory of high-energy protons.

2.3 Monte Carlo Simulations in the Local Geometry

MCNPX Simulations

The pre-computed steps used in the proton macro Monte Carlo algorithm that will be discussed in Ch. 3 were created using the Monte Carlo N-Particle extended (MCNPX) code, developed by Los Alamos National Laboratory [9, 13]. The MCNPX code is capable of transporting photons, neutrons, electrons and a wide variety of light and heavy ions through complex, user-specified geometries. The version used in this work was MCNPX. The following subsections describe the geometry, physics settings and tallies used by MCNPX during these simulations.

Simulation Geometry

The simulation geometry used in MCNPX consisted of a right circular cylinder of radius R_local [cm] and length L_local [cm]. One base of the right circular cylinder was located in the xy-plane and centered at the origin. The central axis of the right circular cylinder ran along the z-axis such that the opposite base of the cylinder was located in the z = L_local plane. The cylindrical geometry was composed of a single material, usually a material of anatomical interest such as water, lung, bone, soft tissue or fat, characterized by a density ρ_local [g/cm³]. The source used for the MCNPX simulations consisted of a monoenergetic proton beam. The initial energy of the protons was E_initial. The source was located at the origin and was

oriented such that the proton beam impinged normally on one base of the right circular cylinder. In other words, the initial direction of the proton beam was along the z-axis and the central axis of the right circular cylinder. The simulation geometry is shown in Fig. 2.1.

Figure 2.1: Diagram of the local geometry used by MCNPX.

Physics Settings

The mode card used for the MCNPX simulation included six particle designators: n, h, d, t, s and a. The source particles used in the simulations were protons (h). The other five designators were included to instruct MCNPX to completely track secondary particle production and to transport these particles as well. The secondary particles that were

tracked were neutrons (n), deuterons (d), tritons (t), ³He nuclei (s) and alpha particles (a). The neutron physics card (phys:n) used the MCNPX default settings, with the exception that the upper energy limit was set to E_initial. The proton physics card (phys:h) used the MCNPX default settings with two exceptions. First, the upper energy limit was set to E_initial. Doing so improves the accuracy of the simulation by forcing MCNPX to use the finest bin structure possible for the stopping power tables, which are used to compute energy loss. Second, the light ion recoil feature for protons was turned on so that secondary particle production would be modeled completely. The interaction probability data used by MCNPX come from evaluated nuclear libraries for particle energies up to 150 MeV. Above this energy, a number of physics models are used to estimate interaction cross sections for nuclear interactions. These models include the Bertini, ISABEL, CEM03, INCL4 and FLUKA models. The Bertini intranuclear cascade (INC) model is the default model used by MCNPX for protons, neutrons and pions and can be used up to particle energies of 3.5 GeV [23]. The ISABEL INC model can be used in place of the Bertini model for particles with atomic masses less than 4 and energies up to 1.0 GeV [24, 25]. The CEM03 INC model, upgraded from the CEM2K model used in prior MCNPX versions, is normally used for heavy ion production at energies from 100 MeV to 5 GeV [13, 26]. The INCL4 model can be used for nucleon-nucleus interactions above 200 MeV [27]. Two physics models were used in this work. The Bertini model was used for transporting protons and neutrons. However, the Bertini model is not capable of modeling the other secondary particles of interest. For this reason, the ISABEL model was used for deuterons, tritons, ³He nuclei and alpha particles. The maximum energy used in this work was 250 MeV, suitable for both models.

Tally Settings

Creating pre-computed steps for the MMC library requires detailed information about the behavior of protons during the MCNPX simulation. Specifically, the behavior of the protons that exit the cylindrical geometry is of interest. It is necessary to characterize this behavior in terms of the position, trajectory and energy of the exiting protons. It is possible to do this using conventional MCNPX tallies and additional features like the tally segment card, energy card and cosine card. However, using these features does not specifically allow tracking of protons and secondary particle production on a history-by-history basis. Rather, the conventional tallies provide a summary of the simulation after all of the protons have been simulated, and the behavior of individual particles is lost. Instead of using conventional tallies, information about the behavior of the protons during the simulation was obtained by using the surface source write (ssr) card. The ssr card instructs MCNPX to create a file containing the position, trajectory and energy of particles as they cross a user-specified surface. In the simulations used to create pre-computed steps, the ssr card was used to collect information about protons leaving the homogeneous right circular cylinder through the base in the z = L_local plane and through the side of the cylinder at radius R_local from the z-axis. The product of the ssr card is a binary file from MCNPX that contains the exit location, trajectory and energy of every particle that crossed the two surfaces specified above. This file is designed to be read by MCNPX and used as a source in another simulation. However, Siebers (2002) [28] has written a code called mcnp_strip that can access the data in the binary file and allows the exiting particles to be read one at a time. This permits the simulation data to be processed into a pre-computed step, as discussed in the following sections.
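The value of this history-by-history record stream, as opposed to summary tallies, can be illustrated with a short sketch. This is not a reader for the actual binary surface-source format (the real file must be decoded with a tool such as mcnp_strip); it assumes the records have already been parsed into plain Python dictionaries, and the field values are invented for illustration.

```python
# Hypothetical, already-decoded surface-source records. The real file is
# binary; field names follow the per-particle parameters described later.
records = [
    {"nocas": 1, "E": 92.1, "x": 0.01, "y": 0.00,  "z": 10.0},
    {"nocas": 2, "E": 91.8, "x": 0.03, "y": -0.02, "z": 10.0},
    {"nocas": 3, "E": 45.0, "x": 0.00, "y": 0.50,  "z": 6.2},  # left via side
]

L_LOCAL = 10.0  # cylinder length [cm], an assumed value for this sketch

def exited_through_base(rec, L_local=L_LOCAL, eps=1e-9):
    """True if the particle left through the base at z = L_local."""
    return abs(rec["z"] - L_local) < eps

# Per-particle processing like this is impossible with summary tallies:
base_exits = [r for r in records if exited_through_base(r)]
side_exits = [r for r in records if not exited_through_base(r)]
```

Sorting particles by exit surface in this way is exactly the kind of operation the pre-computed-step processing performs on every record.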
In addition to the ssr card, the simulation also determines the average energy deposition in the right circular cylinder using the f6 tally card. The total energy deposition is tallied

separately for the six particle types to determine the relative contribution of each to the dose deposited in the cylinder.

Processing the MCNPX Output

Structure of the Output File

The ssr output file has two sections. First, there is a header section that contains information about the simulation parameters. This includes the geometry, source specification, number of particles and data about the surfaces that were used to produce the ssr file. Some of this information, such as the radius and length of the right circular cylinder, is used during the output processing operations. The remainder of the ssr output is a list of every particle that exited the cylindrical geometry through the surfaces specified on the ssr card. The number of particles may be equal to the number of histories run by MCNPX. This happens when all of the particles exit the geometry without undergoing nuclear interactions and is common for protons with lower initial energies. If protons are absorbed or lose all of their energy, there may be fewer particles in the ssr file than the number of histories. If the initial energy of the protons is high enough for secondary particle production to occur, there may be more particles in the ssr file than the number of histories. The ssr file contains a wealth of information about each particle that has exited the geometry. Table 2.1 summarizes these quantities and shows the variable names used by mcnp_strip to identify them.

Table 2.1: List of parameters contained in the ssr file for each particle

    Variable Name    Description
    nocas            History number
    nx               Particle type and sign of z-component of trajectory
    wt               Weight of the particle upon exiting
    E                Energy of the particle upon exiting
    tme              Time that the particle exited
    x, y, z          Exit position in Cartesian coordinates
    u, v             Components of exit trajectory in the x- and y-direction
    cs               Cosine of the exit angle with respect to the exit surface normal

Processing the Output File

The raw data in the ssr file must be processed before they can be used to produce a pre-computed step. Specifically, the exit position and trajectory are stored in the ssr file with respect to the coordinate system used in the MCNPX simulation. However, when pre-computed steps are used in a MMC simulation, the position and orientation of the cylindrical pre-computed step is arbitrary. Therefore, the exit position and trajectory need to be computed relative to the right circular cylinder itself rather than the MCNPX coordinate system. This allows the cylindrical pre-computed steps to be used in arbitrary positions and orientations. In addition to casting the exit position and trajectory into different coordinate systems, some of the other parameters in the ssr file are modified before the data are used to create a pre-computed step. The ssr file describes the exit position and trajectory in terms of five parameters: x, y, z, u, and v. In addition to u and v, there is a third component of trajectory, w. This is the component of trajectory parallel to the z-axis and is calculated using Eq. (2.1):

    w = sign(nx) · sqrt(1 − u² − v²)        (2.1)

where nx is one of the parameters stored for each particle in the ssr file and u and v are the other components of trajectory. The sign of nx is defined as the sign of w, which is why it is used in Eq. (2.1): sign(nx) returns -1 if nx is negative and 1 if nx is positive. The complete exit trajectory can now be expressed as a vector:

    u = (u, v, w)

Figure 2.2: A diagram depicting the relationship between the Cartesian coordinates used by MCNPX and the cylindrical coordinates defined relative to the right circular cylinder. Two exit positions (x, y, z) are shown using black dots. One exit position is on the side of the right circular cylinder while the other is on the opposite base. The radius of the exit position is labeled r_exit, the azimuthal position is labeled α_exit and the lateral position is labeled z_exit.

The exit position (x, y, z) is expressed in Cartesian coordinates and must be converted to cylindrical coordinates relative to the cylindrical geometry. Fig. 2.2 shows the relationship between the Cartesian and cylindrical coordinates. The cylindrical coordinates (r_exit, α_exit, z_exit) can be calculated using the expressions in Eq. (2.2):

    r_exit = sqrt(x² + y²)    where 0 ≤ r_exit ≤ R_local
    α_exit = tan⁻¹(y/x)       where 0 ≤ α_exit ≤ 2π        (2.2)
    z_exit = z                where 0 ≤ z_exit ≤ L_local

Computing the exit location in cylindrical coordinates is as easy as using the definitions above. However, it is not strictly necessary to compute all three values. Consider that the exit location, by definition, is always on the surface of the right circular cylinder. This means either the particle exited through the side of the cylinder and r_exit = R_local, or it exited through the opposite base and z_exit = L_local. Given this, it is possible to describe the exit location in just two parameters: a numerical value and a binary value that depends on the exit surface. Doing so allows the particles to be easily sorted based on their exit surface and reduces the size of the MMC library. Furthermore, consider that the azimuthal angle α_exit is defined in a plane perpendicular to the initial direction of the protons. The axial symmetry of the cylindrical geometry with respect to α_exit guarantees that α_exit will be independent of r_exit and z_exit and uniformly distributed over 0 ≤ α_exit ≤ 2π. This means it is not necessary to compute and store α_exit. Instead, a MMC transport algorithm can randomly sample a value for α_exit whenever it is needed. As a result, it is possible to describe the exit location with just one numerical parameter, p_exit, and one binary parameter, p_side. The exit surface p_side is defined as shown in Eq. (2.3):

    p_side = 0    if z = L_local
    p_side = 1    if sqrt(x² + y²) = R_local        (2.3)

In other words, p_side = 0 when the particle exits through the opposite base of the right circular cylinder and p_side = 1 when the particle exits through the side of the cylinder. Once p_side has been determined, p_exit is calculated with Eq. (2.4):

    p_exit = sqrt(x² + y²)    if p_side = 0
    p_exit = z                if p_side = 1        (2.4)

Next, the trajectory of the exiting particles is converted to spherical coordinates. Instead of using the standard frame-of-reference defined by the x-, y- and z-axes, an alternate frame-of-reference is used to take advantage of the cylindrical geometry. This alternate frame-of-reference is not static; rather, its location and orientation are different for each particle. The origin of the alternate frame-of-reference is at (x, y, z), the location where the particle exited the geometry. The alternate frame-of-reference is defined by a set of mutually orthogonal vectors: d, r and c. Fig. 2.3 shows the relationship between the cylindrical geometry and the vectors that define the alternate frame-of-reference. The vector d is the initial direction of the protons in the MCNPX simulation and is defined using Eq. (2.5):

    d = ẑ        (2.5)

where ẑ is a unit vector that points along the z-axis. When a particle exits through the opposite base of the right circular cylinder (p_side = 0), then d is the exit surface normal vector. When a particle exits through the side of the cylinder (p_side = 1), then d points along the length of the cylinder in the positive z-direction. The vector r is a unit vector in the xy-plane that points from the z-axis radially towards the exit location, as shown in Fig. 2.3. It is defined by Eq. (2.6):

    r = (x / sqrt(x² + y²)) x̂ + (y / sqrt(x² + y²)) ŷ        (2.6)

where x̂ and ŷ are the unit vectors that point along the x-axis and y-axis, respectively. When a particle exits through the opposite base of the right circular cylinder (p_side = 0),

then r is in the plane of the exit surface. When a particle exits through the side of the cylinder (p_side = 1), then r is the surface normal vector at the exit location. If x = 0 and y = 0, r may be defined as any vector in the xy-plane. The vector c is perpendicular to both d and r and is computed using Eq. (2.7):

    c = d × r        (2.7)

Figure 2.3: A diagram depicting the relationship between the cylindrical geometry and the alternate frame-of-reference defined by the vectors d, r and c. Two exit positions (x, y, z) are shown using black dots. One exit position is on the side of the right circular cylinder while the other is on the opposite base. The location and orientation of the alternate frame-of-reference is different for each exit position.

The vector c is not explicitly used now, but it is used later (Ch. 3) by the MMC transport algorithm to reconstruct the trajectory of particles after each pre-computed step. In the alternate frame-of-reference, the exit trajectory can be described in terms of two parameters: θ_exit and φ_exit. Fig. 2.4 shows the relationship between the two exit trajectory parameters and the alternate frame-of-reference.

Figure 2.4: A diagram showing the relationship between the alternate frame-of-reference and the trajectory parameters θ_exit and φ_exit in spherical coordinates. The diagram shows an exit position (x, y, z) on the opposite base of the right circular cylinder. The cylinder has been made partially transparent for clarity. The polar angle of trajectory θ_exit is defined as the angle between the initial direction d and the final trajectory vector u. The azimuthal angle of trajectory φ_exit is defined as the angle between the radial vector r and u_proj, the projection of u onto the xy-plane.

The exit polar angle, θ_exit, is defined as the

angle between the initial direction d and the exit trajectory. Rather than computing θ_exit explicitly, the cosine of θ_exit is computed and ultimately used to create the pre-computed steps. It is computed with Eq. (2.8):

    cos θ_exit = d · u = w        (2.8)

The exit azimuthal angle φ_exit is defined relative to r in the plane perpendicular to d. Before φ_exit can be computed, it is necessary to project the trajectory u into the plane perpendicular to d. Formally, this can be computed using u_proj = u − (u · d)d, and then normalizing u_proj. This ultimately yields Eq. (2.9):

    u_proj = (u / sqrt(u² + v²)) x̂ + (v / sqrt(u² + v²)) ŷ        (2.9)

Note that Eq. (2.9) for u_proj is only defined for u² + v² > 0. If u² + v² = 0, then there is no component of the exit trajectory in the xy-plane and u_proj = 0, where 0 is the null vector. The exit azimuthal angle φ_exit is defined as the angle between u_proj and r. This angle is also stored in terms of its cosine and computed using Eq. (2.10):

    cos φ_exit = r · u_proj        (2.10)

If x² + y² > 0 and u² + v² > 0, then the dot product can be expanded and Eq. (2.11) can be used to compute cos φ_exit:

    cos φ_exit = (xu + yv) / (sqrt(x² + y²) · sqrt(u² + v²))        (2.11)

If x² + y² = 0 or u² + v² = 0, then cos φ_exit may be arbitrarily assigned a value of 1.0. In practice, x² + y² = 0 corresponds to r_exit = 0 and u² + v² = 0 corresponds to cos θ_exit = ±1.0.

In both these circumstances, a MMC transport algorithm will not use cos φ_exit because it is undefined. Note that θ_exit and φ_exit were defined the same way for both exit surfaces. This can lead to a bit of confusion because the interpretation of θ_exit and φ_exit is different for each surface. For example, consider how θ_exit is defined with respect to d regardless of the exit surface. For p_side = 0, d is the exit surface normal, so the cosine of θ_exit is the true exit cosine. This means cos θ_exit is equal to the cs parameter stored in the ssr file (see Table 2.1). For p_side = 1, d is not the surface normal; rather, r is the surface normal in this situation. As a result, the cosine of θ_exit is not the true exit cosine cs. In spite of this confusion, a single set of definitions for θ_exit and φ_exit is used because it simplifies the trajectory reconstruction process in the MMC transport algorithm. Before proceeding with the creation of pre-computed steps, two of the other parameters in Table 2.1 will be changed. First, the history number of a given particle, nocas, will be relabeled n_hist for clarity. Second, the parameter nx, which contains information about the particle type and boundary crossing direction, will be discarded. The particle type is calculated using Eq. (2.12) and stored as n_part:

    n_part = ⌊nx / 10⁶⌋        (2.12)

where ⌊w⌋ is the floor operator and returns the largest integer that does not exceed a floating-point number w. The particle type information is stored in the millions and ten-millions digits, which is why nx undergoes division by 10⁶. In these simulations, n_part will have one of six values: 1 for neutrons, 9 for protons, 31 for deuterons, 32 for tritons, 33 for ³He nuclei and 34 for alpha particles. Next, the fraction of initial energy, E_frac, carried by an exiting particle is computed and stored for each particle. Computing and storing E_frac rather than the actual exit energy E

makes it easier to use the pre-computed steps in the MMC transport algorithm discussed in Ch. 3. The remaining fraction of energy E_frac is computed using Eq. (2.13):

    E_frac = E / E_initial        (2.13)

Finally, the exit time tme and the exit weight wt are discarded because they are not used to create pre-computed steps. The parameters that are used to create pre-computed steps are shown in Table 2.2.

Table 2.2: List of parameters that describe the behavior of a single exiting particle

    Variable Name    Description
    n_hist           History number
    n_part           Particle type (1 for neutrons, 9 for protons, etc.)
    E_frac           Fraction of initial energy carried by an exiting particle
    p_side           Exit surface (0 for the opposite base, 1 for the side)
    p_exit           Exit position on base (p_side = 0) or side (p_side = 1)
    θ_exit           Deflection of the particle with respect to the initial direction
    φ_exit           Azimuthal deflection with respect to the radial direction

Creation of Pre-computed Steps

The product of the MCNPX simulation and processing operations done in the last section is a long list of particles for a given simulation geometry and initial proton energy. This long list of particles constitutes the phase space of exiting particles for the simulation. Creating a pre-computed step involves converting the long list of particles into a data structure that accurately describes the resulting phase space. There are a number of ways that this can be done, and four methods were explored during the course of this thesis work. Three unsuccessful methods will be described to give insight into the difficulty of creating pre-computed steps for protons. A fourth method that proved successful will be discussed in greater detail.
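The per-particle processing described above, Eqs. (2.1) through (2.13), can be collected into one routine that maps a raw exit record onto the stored step parameters. The sketch below is a toy re-implementation for illustration, not code from the dissertation; in particular, taking the absolute value of nx before extracting the type code is an assumption here, since the sign of nx only encodes the direction of w.

```python
import math

def process_exit_record(x, y, z, u, v, nx, E, E_initial,
                        R_local, L_local, eps=1e-9):
    """Map one raw exit record onto the per-particle step parameters."""
    # Eq. (2.1): z-component of the trajectory; nx carries the sign of w.
    w = math.copysign(math.sqrt(max(0.0, 1.0 - u * u - v * v)), nx)

    # Eq. (2.12): particle type from the millions digits of nx.
    n_part = abs(nx) // 10**6

    # Eqs. (2.3)-(2.4): exit surface flag and exit position parameter.
    r = math.hypot(x, y)
    if abs(z - L_local) < eps:       # left through the opposite base
        p_side, p_exit = 0, r
    else:                            # left through the side of the cylinder
        p_side, p_exit = 1, z

    # Eq. (2.8): cos(theta_exit) relative to the initial direction d = z-hat.
    cos_theta = w

    # Eqs. (2.10)-(2.11): cos(phi_exit) relative to the radial vector r.
    uv = math.hypot(u, v)
    cos_phi = (x * u + y * v) / (r * uv) if (r > eps and uv > eps) else 1.0

    # Eq. (2.13): energy stored as a fraction of the initial energy.
    e_frac = E / E_initial
    return n_part, e_frac, p_side, p_exit, cos_theta, cos_phi

# A 180 MeV proton (type code 9) leaving the far base of a 10 cm cylinder,
# deflected slightly toward +x:
params = process_exit_record(x=0.05, y=0.0, z=10.0, u=0.1, v=0.0,
                             nx=9_000_000, E=180.0, E_initial=250.0,
                             R_local=0.5, L_local=10.0)
```

For this example the proton exits the base (p_side = 0) at radius 0.05 cm with cos φ_exit = 1.0, since its lateral deflection points directly away from the axis.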

Each particle in the long list of particles is characterized by a unique set of values for the parameters in Table 2.2. The vast majority of particles in the list are primary protons that have traversed the local geometry, but secondary protons, secondary neutrons and charged particles are present as well. As discussed in Sec. 2.2, the cylindrical local geometry is characterized by a radius R_local, a length L_local and a single material of uniform density ρ_local. Additionally, the long list of particles is specific to the initial proton energy E_initial. There are a number of criteria to consider when deciding how to convert the long list of particles into a pre-computed step data structure. First, the data structure should be as small as possible. The size of the data structure is important because pre-computed steps are created for many proton energies, materials and step-sizes. The MMC library may contain hundreds of pre-computed steps, so keeping the data structure size low will allow the MMC library to be more portable and reduce loading times. Second, the pre-computed step data structure must accurately describe the phase space of particles escaping the local geometry. For a given storage method, improving the accuracy generally involves storing more data about the phase space, and these additional data increase the size of the data structure. Striking the right balance between storage size and accurate representation of the phase space is challenging and requires significant experimentation. Finally, the data structure should be easy to sample using random numbers. The MMC method uses random numbers to sample values from the pre-computed step data structure in order to perform the radiation transport process.
Therefore, creating a data structure that is easy to sample will make the pre-computed steps more efficient.

Histogram-based Pre-computed Steps

Histogram Dimensionality and Bin Structure

Past implementations of MMC have used histograms to represent the phase space of particles leaving the local geometry [7, 21, 22]. These histograms serve as probability density

functions for the phase space. This methodology proved successful for electron MMC and was attempted as part of this thesis work for proton MMC as well. There are multiple ways one can use histograms to represent a phase space. One consideration is the dimensionality of the histograms. For instance, protons escaping the cylindrical geometry can be described using four parameters: one for exit position, one for exit energy and two for exit trajectory. At one extreme, the phase space can be represented by four independent one-dimensional histograms, one for each phase space parameter. At the other extreme, the phase space can be represented by one four-dimensional histogram. The advantages and disadvantages of each are discussed below. The most common method used in MMC thus far is the creation of one-dimensional histograms for exit position, energy and trajectory [7, 20-22]. In the case of electron MMC, this involved creating one histogram for the exit position, one for the exit energy and one for each of the two components of the exit trajectory. In breaking the phase space into four separate parameters, each with its own histogram, one implicitly assumes that the four parameters are independent of one another. In reality, this is not true for electrons [7]. Indeed, the exit position, trajectory and energy of electrons leaving a local geometry are highly correlated. To address this, the developers of electron MMC broke the phase space into several so-called bands. The bands were defined as ranges of exit position within which the histograms for each phase space parameter would be almost independent. Past work on electron MMC has shown that electrons can be accurately transported via MMC methods using pre-computed steps that break the phase space into 4 to 8 bands. Four histograms are created for each band, one for each phase space component. Increasing the number of bands increases the size of the MMC library of pre-computed steps.
However, as the number of bands increases (and the size of each individual band gets smaller), the phase space parameters within a single band become more independent of one another. The number of bands that are used is a compromise between library size

and the accuracy with which the histograms model the true phase space. Given the number of bins n used for each phase space parameter and the number of bands m, the size of the database S_hist,1(m, n) scales linearly with both m and n. This is stated in big O notation in Eq. (2.14):

    S_hist,1(m, n) = O(mn)        (2.14)

An alternative to using one-dimensional histograms is using histograms of a higher dimensionality. Multi-dimensional histograms are probability distributions that describe the phase space in terms of multiple parameters simultaneously. For example, Svatos (1998) [7] noted that the exit position and trajectory of electrons leaving a local geometry are highly correlated with the exit energy. She proposed creating separate histograms for many small ranges of exit energy. This is equivalent to creating three two-dimensional histograms: one for energy versus position, one for energy versus the exit trajectory polar angle, and one for energy versus the exit trajectory azimuthal angle. Svatos (1998) abandoned this idea due to the large amount of memory required to use such a histogram structure. Given the number of bins n used for each phase space parameter, the size of the database S_hist,2(n) would have scaled with the square of n. This is stated in big O notation in Eq. (2.15):

    S_hist,2(n) = O(n²)        (2.15)

The strength of using multi-dimensional histograms is that the correlation between different phase space parameters is explicitly taken into account. For instance, using the two-dimensional histogram structures described above preserves the relationship between energy and position as well as energy and trajectory. However, position and trajectory are still assumed to be independent because they are tracked using separate histograms. It is possible to create a histogram that takes the correlation between

all four phase space parameters into account. Such a histogram would be four-dimensional, one dimension for each phase space parameter. The histogram would be the equivalent of a four-dimensional probability density function for the phase space of interest. This structure would be even larger than the structure suggested by Svatos (1998) [7]. Given the number of bins n used for each phase space parameter, the size of the database S_hist,4(n) would have scaled as n⁴. This is stated in big O notation in Eq. (2.16):

    S_hist,4(n) = O(n⁴)        (2.16)

A four-dimensional histogram would be able to represent the highly correlated phase space parameters with high fidelity, but at the cost of a very large data structure relative to the other choices. The transition from one-dimensional histograms to four-dimensional histograms is not simply a matter of space and organization. As the dimensionality of the histograms is increased from one to four, the components of the phase space (exit radius, energy, escape trajectory) go from being completely independent to completely coupled. Recall that the phase space parameters are highly correlated, so the coupling of the phase space components is very important. This means using higher dimensional histograms will greatly increase the accuracy of the phase space model. Indeed, a four-dimensional histogram with a sufficient number of bins is equivalent to the true phase space. Histograms with a lower dimensionality can only approximate the true phase space. In addition to choosing the dimensionality of the histograms used to represent the phase space of particles leaving a local geometry, the structure and arrangement of the histogram bins must be selected as well. There are two common arrangements: (1) uniformly spaced bins and (2) equiprobable bins. When using uniformly spaced bins, the range of possible values for each phase space component (energy, exit polar angle, etc.)
is divided into many bins such that each bin has the same width and the bin boundaries are evenly spaced.

Figure 2.5: Example histograms that describe the normal distribution for µ = 0 and σ = 1. Each histogram was generated using 10^5 samples and 25 bins: (a) a histogram of the sampled values created using uniformly spaced bins and (b) a histogram of the sampled values created using equiprobable bins.

When using equiprobable bins, the bins are not of equal size and the bin boundaries are not evenly spaced. Instead, the bin boundaries are carefully selected so that each bin has the same probability as all of the other bins. A simple example is shown in Fig. 2.5. A normal distribution with µ = 0 and σ = 1 was randomly sampled 10^5 times and the resulting data were used to make two histograms. Fig. 2.5.(a) shows a histogram with 25 uniformly spaced bins. The probabilities associated with each bin are unequal and the histogram resembles a plot of the normal distribution. Fig. 2.5.(b) shows a histogram with 25 equiprobable bins, and the probabilities associated with each bin are equal. The bins are smallest near the center of the distribution and larger on either side.

There are strengths and weaknesses associated with each structure. Histograms with uniformly spaced bins are easier to create. Furthermore, they can be populated one particle at a time, which allows them to be improved later with additional simulations. A disadvantage of histograms with uniformly spaced bins is that they are less efficient to sample than histograms with equiprobable bins. Histograms with equiprobable bins are easier to sample because each bin has the same probability; no sophisticated sampling algorithm or alias sampling tables are necessary. However, histograms with equiprobable bins are much more difficult to create. The full results of a simulation are needed to determine the locations of the bin boundaries, meaning much more data must be stored in order to create them. Once they have been created, histograms with equiprobable bins require less data storage than histograms with uniformly spaced bins. This is because histograms with equiprobable bins are characterized only by the bin boundary locations, while histograms with uniformly spaced bins are characterized by bin boundaries and probabilities for each bin.

Four-dimensional Histograms with Equiprobable Bin Structure

The earliest implementations of proton MMC developed during this thesis work used one-dimensional histograms with an equiprobable bin structure. These early attempts to model the phase space using one-dimensional histograms proved unsuccessful. Specifically, the depth-dose profiles produced by proton MMC were too wide in the region of the Bragg peak. It appeared that the high correlation between exit radius and energy was not being modeled correctly by the one-dimensional histograms, which resulted in an overestimation of the amount of energy straggling for each pre-computed step. This overestimation led to too much range straggling of the protons in the overall simulation, explaining why the Bragg peaks were too wide.

Neuenschwander et al. (1992) and Svatos (1998) [7, 21, 22] were limited to using one-dimensional histograms because of computational, data storage and computer memory limitations that were present at the time of their work. Such limitations are no longer a concern due to advances in computer technology, and this allowed other histogram structures to be attempted. Specifically, a library composed of pre-computed steps represented by four-dimensional histograms with equiprobable bins was created. These histograms were created using right circular cylinders with an infinite radius, effectively creating a semi-infinite slab geometry. Each four-dimensional bin in the histogram was represented by eight bin boundaries: 2 bounds for the exit radius, 2 bounds for the exit energy, 2 bounds for the exit polar angle of trajectory and 2 bounds for the exit azimuthal angle of trajectory. Every four-dimensional bin was equally probable. The bins concentrated near the peak exit radius and energy had relatively narrow boundaries. Far away from these peaks, the four-dimensional bins had relatively wide boundaries. The four-dimensional histograms were created as follows. First, an MCNPX simulation was run for many particles, typically at least 10^2 to 10^3 times the total number of bins. If there are too few protons in each bin, the locations of the bin boundaries may not be statistically valid. Running an MCNPX simulation of more than 10 million protons per pre-computed step is impractical in terms of both calculation time and the large size of the ssr phase space file that must be processed. This places a strong upper limit on the number of bins per histogram dimension. Only 15 bins were used per dimension, a relatively small number compared to the 100 bins per dimension used by Svatos (1998) for one-dimensional histograms.
Once the MCNPX output ssr file had been processed and all of the neutrons and heavy charged particles were excluded, the remaining protons were sorted by their fractional exit energy E frac. Once sorted, the list was cut into 15 energy sections, each with the same number of protons. Since each energy section was equal in size, the boundaries of each

energy section formed the bin boundaries for an equiprobable bin structure for energy. Next, this process was repeated for each individual energy section. First, the protons in each energy section were sorted by exit radius p exit. Then each energy section was divided into 15 radial sections. The radial section boundaries formed the bin boundaries for an equiprobable bin structure for exit radius for that particular energy section. A set of radial bin boundaries was created for each of the 15 energy sections. Once each of the 15 energy sections had been divided into 15 radial sections, this process was repeated for each radial section. The protons in each of the 225 radial sections were sorted by exit polar angle θ exit and then divided into 15 polar sections. The polar section boundaries formed the bin boundaries for an equiprobable bin structure for the exit polar angle for that particular radial section. A set of polar angle bin boundaries was created for each of the 225 radial sections. Finally, the protons in each of the 3375 polar sections were sorted by their exit azimuthal angle φ exit and then divided into 15 azimuthal sections. The azimuthal section boundaries formed the bin boundaries for an equiprobable bin structure for the exit azimuthal angle for that particular polar section. A set of azimuthal angle bin boundaries was created for each of the 3375 polar sections. The result of this process is a four-dimensional histogram composed of 50,625 bins with 57,856 bin boundaries. Since the bins are equiprobable, the locations of the 57,856 bin boundaries completely describe the four-dimensional histogram. The pre-computed step data structure consists of the 57,856 bin boundaries arranged in a logical and easily accessible manner. Fig. 2.6 summarizes the algorithm for generating a four-dimensional histogram with equiprobable bins. Fig. 2.7 shows a graphical representation of how the pre-computed step data structure is organized.
Proton MMC was implemented using pre-computed steps that were stored as four-dimensional histograms. This code was called PMMC-4DH. In contrast, the previously discussed proton MMC code that used one-dimensional histograms was called PMMC-1DH.

Run MCNPX simulation
Create long list of particles
Exclude neutrons and heavy charged particles
Choose number of bins per dimension n
Sort list by exit energy
Divide list into n energy sections
Record energy section boundaries
for (each of the n energy sections)
    Sort list by exit radius
    Divide list into n radial sections
    Record radial section boundaries
    for (each of the n radial sections)
        Sort list by exit polar angle
        Divide list into n polar sections
        Record polar section boundaries
        for (each of the n polar sections)
            Sort list by exit azimuthal angle
            Divide list into n azimuthal sections
            Record azimuthal section boundaries
        end
    end
end

Figure 2.6: Algorithm used to create the four-dimensional histograms with equiprobable bins. The number of bins per dimension n was 15 in the example discussed in the text, but n may take on any integer value so long as there are enough protons to divide the long list of protons into n^4 parts.

Pre-computed Step: Four-dimensional Histogram with Equiprobable Bins
    n - number of bins per phase space dimension
    (n+1)-by-1 vector of bin boundaries for E frac
    (n+1)-by-n matrix of bin boundaries for p exit
    (n+1)-by-n-by-n matrix of bin boundaries for θ exit
    (n+1)-by-n-by-n-by-n matrix of bin boundaries for ϕ exit

Figure 2.7: A graphical representation of the pre-computed step data structure using four-dimensional histograms with equiprobable bin structure. The data structure stores the phase space parameters using four multi-dimensional arrays, one for each phase space parameter.

Table 2.3: Implementations of Proton Macro Monte Carlo

Designation   Phase Space Storage Structure
PMMC-1DH      One-dimensional histograms with equiprobable bins
PMMC-4DH      Four-dimensional histograms with equiprobable bins
PMMC-F        Four-dimensional histograms with fractal bins
PMMC-P        List of phase space parameters for each exiting particle
PMMC-H 1      List of histories sorted by the number of exiting protons

1 PMMC-H produced the best results and was redesignated PMMC. This implementation is discussed in greater detail in Ch. 3 and 4.

Please see Table 2.3 for a full list of the proton macro Monte Carlo codes implemented in this work. Pre-computed steps were created for PMMC-4DH using the algorithm described above for initial proton energies ranging from 2 MeV to 200 MeV. The initial energy spacing was 2 MeV from 2 to 100 MeV and 4 MeV from 104 MeV to 200 MeV. The PMMC-4DH code was tested for many proton beam energies in a water phantom. The code produced much more accurate results than the briefly discussed PMMC-1DH. Specifically, the range and energy straggling errors were vastly reduced. This demonstrated that the four-dimensional histograms were successful in accurately representing the highly correlated phase space variables. Unfortunately, the PMMC-4DH code still produced unsatisfactory simulation results in another respect. While the PMMC-4DH code was able to model the range of protons accurately, it did a poor job modeling the lateral spread of protons as they penetrated into the water phantom. Specifically, the code overestimated the lateral spreading of the beam and produced dose distributions with lateral profiles that were too wide. The cause of this problem has to do with the shape of the proton phase spaces. The phase space for a typical pre-computed step is heavily peaked at a particular radius, energy, and polar angle. This is especially true for higher energy protons. Consider Fig.
2.8, which shows a sample phase space probability density function in terms of exit radius and exit energy. The figure was generated using protons with an initial energy of 100 MeV and a water cylinder that was 1.5 cm long with a radius of 0.5 cm. The peaked region of the phase space in Fig. 2.8 consists primarily of

Figure 2.8: A sample phase space probability density function in terms of exit radius and exit energy. The phase space was generated using protons with an initial energy of 100 MeV and a homogeneous water cylinder with a length of 1.5 cm and a radius of 0.5 cm.

primary protons that have traversed the local geometry without undergoing discrete nuclear interactions. The remainder of the phase space away from the peak is populated mostly by secondary protons produced in nuclear interactions. The logarithmic scale used in Fig. 2.8 shows that a typical phase space probability density function spans many orders of magnitude. The phase space is so heavily peaked that it is very difficult to model accurately using a histogram with equiprobable bins. This is because the region of the phase space away from the peak (the part populated by secondary protons) is generally covered by only 1 or 2 of the 15 equiprobable bins in each dimension. Since all radii, energies and angles within one bin are treated as equally probable, the large bins at the periphery of the phase space tend to overestimate the frequency of protons escaping at large radii with large energy losses and large deflection angles. This is what ultimately produces the overestimation of lateral spread. A clear solution to this problem is increasing the number of bins per phase space dimension so that there are more bins covering the periphery of the phase space. Unfortunately, this is impractical due to the time needed to run MCNPX simulations with enough histories to populate histograms with more bins. Using uniformly spaced bins would alleviate this problem, but the bin spacing would be too wide to accurately resolve the peak region of the phase space. This would lead to poor modeling of range and energy straggling and produce results similar to those obtained for PMMC-1DH. For these reasons, the PMMC-4DH code and the four-dimensional equiprobable bin structure were abandoned.

Multi-dimensional Histograms with Fractal Bin Structure

The pre-computed steps stored as four-dimensional histogram data structures led to poor results when using equiprobable bins. This was due to poor resolution of the phase space away from the peak energy, radius and escape angle.
Conversely, uniformly spaced bins would have modeled the peak region of the phase space poorly while modeling the periphery more accurately. A third, novel bin structure was developed as a compromise between equiprobable and uniformly spaced bins. This new structure is called the fractal bin structure. The fractal bin structure is similar to the equiprobable bin structure but improves the resolution of the phase space at the periphery. The structure of a histogram with fractal bins is determined by two parameters: the number of bins per level n and the number of fractal levels d. When the number of levels d is zero, the fractal bin histogram reduces to a histogram with n equiprobable bins. When d = 1, the outermost two bins are each split into n additional bins, each with an equal probability that is 1/n of that of the original bin. When d = 2, the two outermost bins produced by the split made for d = 1 are each split again into n more equiprobable bins. This splitting of the outermost bins is repeated d times. After each split, the probability of each newly created bin is 1/n the probability of the bin that was split. Therefore, splitting the exterior bins both improves the resolution of the histogram at the periphery and reduces the probability associated with events at the periphery relative to the bins in the middle. Three sample histograms with fractal bin structures are shown in Fig. 2.9. A normal distribution with µ = 0 and σ = 1 was randomly sampled 10^5 times and the resulting data were used to make the three histograms. Fig. 2.9.(a) shows a histogram with 16 fractal bins produced using n = 2 bins per level and d = 7 levels. Fig. 2.9.(b) shows a histogram with 15 fractal bins produced using n = 3 bins per level and d = 3 levels. Fig. 2.9.(c) shows a histogram with 16 fractal bins produced using n = 4 bins per level and d = 2 levels. Histograms with fractal bins combine some of the advantages of uniformly spaced bins and equiprobable bins. Like uniformly spaced bins, fractal bins resolve low-probability parts of a probability distribution better than equiprobable bins.
Furthermore, although it is not clear from the sample histograms shown in Fig. 2.9, it turns out that fractal bins are better than uniformly spaced bins at modeling sharply peaked distributions like those being modeled in proton MMC. Finally, the fractal bin structure is based on the equiprobable

Figure 2.9: Example histograms with fractal bin structures that describe the normal distribution for µ = 0 and σ = 1. Each histogram was generated using 10^5 samples: (a) a histogram with a total of 16 fractal bins created using n = 2 bins per level and d = 7 levels, (b) a histogram with a total of 15 fractal bins created using n = 3 bins per level and d = 3 levels and (c) a histogram with a total of 16 fractal bins created using n = 4 bins per level and d = 2 levels.

bin structure, making it efficient to store and easy to randomly sample for phase space parameters. One disadvantage of using fractal bins is that the bin boundaries of the smaller bins found at the periphery are determined using fewer samples than those near the middle. As a result, more samples may be required initially to produce statistically valid bin boundaries. In the context of proton MMC, this requires more MCNPX histories to be run for each pre-computed step. Another proton MMC code was implemented using pre-computed steps that were stored as four-dimensional histograms with fractal bin structure. This code was called PMMC-F. Pre-computed steps were created for PMMC-F using the algorithm described in Fig. 2.6, except that fractal bins were created rather than equiprobable bins. Fig. 2.10 shows a graphical representation of how the pre-computed step data structure is organized.

Pre-computed Step: Four-dimensional Histogram with Fractal Bins
    n - number of bins per fractal level
    d - number of fractal levels (depth of the fractal)
    b = n + 2d(n-1) - number of bins per dimension
    (b+1)-by-1 vector of bin boundaries for E frac
    (b+1)-by-b matrix of bin boundaries for p exit
    (b+1)-by-b-by-b matrix of bin boundaries for θ exit
    (b+1)-by-b-by-b-by-b matrix of bin boundaries for ϕ exit

Figure 2.10: A graphical representation of the pre-computed step data structure using four-dimensional histograms with fractal bin structure. The data structure stores the phase space parameters using four multi-dimensional arrays, one for each phase space parameter.

Pre-computed steps were created for initial proton energies ranging from 2 MeV to 250 MeV. The PMMC-F code was tested for many proton beam energies in a water phantom. The results produced by PMMC-F were better than those produced by PMMC-1DH and PMMC-4DH. Relative to PMMC-4DH, PMMC-F modeled the lateral spread of the beam better but modeled the range and energy straggling of the beam slightly worse. The poorer modeling of range and energy straggling is due to the lower bin resolution near the peak region of the phase space. However, PMMC-F still performed better than PMMC-1DH in this regard. It became clear while experimenting with PMMC-1DH, PMMC-4DH and PMMC-F that the only way to improve the results from these codes would be to increase the number of bins in the histograms. Since this was not feasible due to computational limitations, other pre-computed step storage methods were investigated.

List-based Pre-computed Steps

The results produced by the proton MMC codes that used histogram-based pre-computed steps demonstrated the difficulty of accurately modeling the phase space of particles exiting a local geometry. It became clear during the course of this thesis work that the MCNPX simulations produced phase spaces composed of many types of histories. The vast majority of histories consist of protons undergoing only Coulomb scattering. In other words, there are no nuclear interactions or secondary particles produced by these histories. In these histories, protons exit the local geometry in a relatively narrow range of radii with a relatively low amount of angular deflection and with a relatively narrow range of energies. These histories are responsible for the highly peaked portion of the phase space discussed in the preceding subsections. The remaining histories consist of protons that undergo one or more nuclear interactions. In some histories, the protons are completely absorbed. In other histories, these interactions produce multiple protons.
Sometimes neutrons or secondary heavy charged particles are produced as well. Consequently, the phase space of particles exiting

a local geometry is actually two superimposed phase spaces. One phase space is heavily peaked and consists of protons that undergo no nuclear interactions. The other phase space consists of protons, neutrons and heavy charged particles escaping over a wide range of positions with a wide range of energies and trajectories. The key to performing accurate MMC with protons is finding a way to describe both components of the phase space well. The reason histogram-based pre-computed steps failed to model the phase space accurately is that the histogram structures used could not simultaneously model both components of the phase space. Equiprobable bins worked well for the peak region while uniformly spaced bins modeled the low-probability secondary particle production well. Neither could model both components well at the same time. One possible solution would be to create two sets of histograms for each pre-computed step, one for the histories in the peak region and one for histories with secondary particle production. However, a typical four-dimensional histogram data structure with 15 bins per dimension is approximately the same size as an ssr file containing 10,000 proton histories. This raises the question: why go to the trouble of creating complex histograms when you can simply use the original phase space for the same storage cost? The subsections that follow discuss creating list-based pre-computed steps using two different methods of organizing the phase space values from the MCNPX simulation.

Simple List of Exit Phase Space Parameters

Storing a pre-computed step as a simple list of the exit phase space parameters is much easier than creating histograms. After the ssr file output has been processed, there are only three additional things that must be done to prepare the list for use in proton MMC. First, secondary particles such as neutrons and heavy charged particles have to be excluded from the list.
Second, the energy lost due to secondary particle production needs to be accounted for. This is done by adding the energy of all of the secondary particles together

Pre-computed Step: Lists of Individual Phase Space Elements
    p - number of proton phase space elements stored in the list
    w - fraction of energy lost to secondary particles
    s - protons produced per source proton
    List of Phase Elements
        Phase 1: n part, E frac, p side, p exit, θ exit, ϕ exit
        Phase 2: n part, E frac, p side, p exit, θ exit, ϕ exit
        ...
        Phase p: n part, E frac, p side, p exit, θ exit, ϕ exit

Figure 2.11: A graphical representation of the pre-computed step data structure using a simple list of phase space elements. The data structure stores the phase space parameters using a simple array.

and determining the fraction of the initial energy that is lost due to secondary particle production. This is stored as the parameter w, where w is the average energy lost per step to secondary particles, expressed as a fraction of the initial energy. Third, the gain or loss of protons must be accounted for as well. At higher energies, the number of protons in the list of exit phase space parameters is not usually equal to the number of protons initially simulated by MCNPX. The gain or loss is stored as the parameter s, the number of protons produced per source proton. Fig. 2.11 shows a graphical representation of how the pre-computed step data structure is organized. A proton MMC code was implemented using pre-computed steps that were stored as lists of the exit phase space parameters. This code was called PMMC-P. Pre-computed steps were created for PMMC-P using the algorithm described in Fig. 2.12. Pre-computed steps were created for initial proton energies ranging from 2 MeV to 250 MeV. Like the

Run MCNPX simulation
Create long list of particles
Exclude neutrons and heavy charged particles from the list
Determine p, the number of protons in the list
Compute w, the average energy lost to secondary particle production per source particle
Compute s, the number of protons produced per source proton

Figure 2.12: Algorithm used to create a pre-computed step that is stored as a list of the exit phase space parameters.

prior implementations of proton MMC discussed earlier, the PMMC-P code was tested for many proton beam energies in a water phantom. The results produced by PMMC-P were a significant improvement over those produced by PMMC-F and PMMC-4DH. The PMMC-P code was able to model the lateral spread of the proton beams and the range straggling of the beams as well as or better than all of the earlier implementations of proton MMC. In spite of these improvements, PMMC-P often produced dose distributions that had the correct shape but were not scaled properly. For instance, the dose at the Bragg peak was often too high or too low. Given that the shape of the distribution was correct and only the scaling was wrong, it was inferred that PMMC-P was estimating the amount of dose deposition incorrectly. This was attributed to the fact that PMMC-P does not explicitly model secondary particle production, which directly affects the amount of dose deposition. Rather, PMMC-P accounts for secondary particle production indirectly using the s and w parameters discussed above. Accounting for secondary particle production using s and w is somewhat clumsy because it attempts to model something that varies considerably from history to history using averages over many histories. The pre-computed step data structure discussed in the next section addresses this issue and allows for explicit modeling of secondary particle production.

Simple List of Individual Histories

The various pre-computed step data structures discussed above have made little use of n hist, the history number for each particle. This parameter can be used to reconstruct the entire exit phase space in the ssr file on a history-by-history basis. Reconstructing the histories of each source proton is significantly more challenging than using the simple list of phases, as was done in the previous subsection. However, reconstructing the histories allows the energy loss to be modeled on a history-by-history basis. This has the potential to significantly improve the modeling of dose deposition by the proton MMC transport algorithm. Fig. 2.13 shows the data structure of a pre-computed step stored as a collection of histories. This data structure is hierarchical and much more complicated than the structure used for pre-computed steps stored as a list of phase space parameters (see Fig. 2.11). The long list of phase space parameters is divided into groups with the same n hist. Each one of these groups constitutes one history. As seen in Fig. 2.13, the histories are divided into three groups: histories with zero exiting protons, histories with one exiting proton and histories with more than one exiting proton. The histories are divided this way because the proton MMC transport algorithm treats each kind of history a bit differently, as will be discussed in a later section. The pre-computed step data structure stores the number of each kind of history. Specifically, h 0, h 1 and h m are the number of histories producing zero, one and many protons, respectively. The pre-computed step data structure also stores p 0, p 1 and p m, the relative probabilities of histories with zero, one and many exiting protons, respectively. They are computed using Eq. (2.17):

p 0 = h 0 / (h 0 + h 1 + h m)
p 1 = h 1 / (h 0 + h 1 + h m)
p m = h m / (h 0 + h 1 + h m) (2.17)

Pre-computed Step: Lists of Individual Histories
    h 0 - number of histories with zero exiting protons
    h 1 - number of histories with one exiting proton
    h m - number of histories with many exiting protons
    p 0 - probability of a history with zero exiting protons
    p 1 - probability of a history with one exiting proton
    p m - probability of a history with many exiting protons
    List of zero-proton histories
    List of one-proton histories
    List of many-proton histories

Individual History Data Structure
    n p - number of protons
    n n - number of neutrons
    n c - number of heavy charged particles
    E loss - fractional energy loss to local geometry
    List of proton phase elements
    List of neutron phase elements
    List of heavy charged particle phase elements
    Each phase element: n part, E frac, p side, p exit, θ exit, ϕ exit

Figure 2.13: A partial graphical representation of the pre-computed step data structure stored as a collection of histories. The data structure stores individual histories in arrays that sort them based on the number of protons produced during the history. The individual histories themselves sort particles into separate arrays based on the type of particle.

The histories are divided into three arrays based on the number of protons in each history. These arrays of histories constitute the second level of the data structure hierarchy. The arrays are populated by data structures that store the individual histories. The number of histories in each array varies depending on the initial energy of the protons. Low energy protons produce very few secondary particles, so the zero-proton and many-proton arrays are sometimes empty. High energy protons produce many secondary particles, so the zero-proton and many-proton arrays contain many histories. The one-proton array is always the largest because most of the histories involve no nuclear interactions or secondary particle production. The data structures that store individual histories constitute the third level of the hierarchy. The individual history data structure stores n p, n n and n c, which are the number of protons, neutrons and heavy charged particles for a history, respectively. The individual history data structure also stores E loss, the fraction of the initial energy of the source proton that was deposited in the local geometry. This value is calculated using Eq. (2.18):

E loss = 1 − Σ E frac (2.18)

where the sum is taken over all of the particles for history n hist. Finally, the individual history data structure contains three arrays: one for exiting protons, one for exiting neutrons and one for exiting heavy charged particles. Depending on the type of history, some of the arrays may be empty. In some cases, the source proton and secondary particles are completely absorbed in the local geometry and all three arrays are empty. The particles are divided into three groups because the proton MMC code treats each type of particle differently. Protons are tracked and transported by the proton MMC code. Neutrons are not transported, but the neutron production may be tracked to estimate neutron dose.
In the current implementation, the energy of the heavy charged particles is deposited locally. However, the proton MMC method can easily be extended to allow these particles to be

Run MCNPX simulation
Create long list of particles
Allocate array to temporarily store history
h 0 = 0, h 1 = 0, h m = 0
for (each history run by MCNPX)
    for (each particle in the long list)
        if (n hist for this particle = current history)
            Add this particle to the temporary array
        end
    end
    Count the number of protons in the temporary array
    if (there are zero protons)
        h 0 = h 0 + 1
    else if (there is one proton)
        h 1 = h 1 + 1
    else
        h m = h m + 1
    end
end
Use h 0, h 1 and h m to compute p 0, p 1 and p m
Use h 0, h 1 and h m to allocate arrays for zero-proton, one-proton and many-proton histories
for (each history run by MCNPX)
    Recreate temporary array of particles for this history
    Count number of protons (n p), neutrons (n n) and heavy charged particles (n c) in the array
    Use n p, n n and n c to allocate arrays for protons, neutrons and heavy charged particles for this history
    if (n p = 0)
        Add the particles from this history to the next empty entry in the zero-proton array
    else if (n p = 1)
        Add the particles from this history to the next empty entry in the one-proton array
    else
        Add the particles from this history to the next empty entry in the many-proton array
    end
    Compute E loss for this history
end

Figure 2.14: Algorithm used to create a pre-computed step that is stored as a list of individual histories.

tracked and transported in the future. The arrays of protons, neutrons and heavy charged particles constitute the fourth and bottom level of the hierarchy. These arrays contain the phase space parameters for the protons, neutrons and heavy charged particles that exited the local geometry during one history. This bottom level of the hierarchy contains all of the data originally found in the long list of particles. Fig. 2.14 shows the algorithm used to create the pre-computed steps stored as lists of histories. Notice that the algorithm loops over the histories twice. This allows the arrays in the pre-computed step data structure to be dynamically allocated.
During the first loop, the algorithm cycles through the phase space elements and histories to determine h_0, h_1 and h_m. These values are used to calculate p_0, p_1 and p_m. They are also used to allocate space for the zero-proton, one-proton and many-proton arrays. During the second loop, the

algorithm determines n_p, n_n and n_c for each history. Finally, the algorithm allocates space for the arrays of protons, neutrons and heavy charged particles for each history and fills them with the particles from the appropriate history.

A proton MMC code was implemented using pre-computed steps stored as lists of individual histories. This code was called PMMC-H (please refer to Table 2.3 for a full list of PMMC codes). Pre-computed steps were created for PMMC-H using the algorithm described in Fig. 2.14. Pre-computed steps were created for initial proton energies ranging from 2 MeV to 250 MeV. Like the prior implementations of proton MMC discussed earlier, the PMMC-H code was tested for many proton beam energies in a water phantom. The results produced by PMMC-H were better than those produced by all of the other implementations of proton MMC. The PMMC-H code was ultimately redesignated as PMMC and served as the final implementation of the code. The MMC transport algorithm used by PMMC is discussed in Ch. 3. The results produced by PMMC are discussed in Ch. 4.

2.4 The MMC Library

Creating the MMC library involves running many MCNPX simulations, creating many pre-computed steps using the algorithms described in Sec. 2.3 and organizing them into a library. This entire process is accomplished with a single C++ program. Fig. 2.15 shows how the pre-computed steps are stored in a hierarchical data structure. The figure shows that the MMC library is a data structure that contains an array of n_m materials. A single material i (i = 1, 2, ..., n_m) in the library is a data structure that is characterized by a material name and a density ρ(i). The atomic composition of the material is not stored in the data library. While the atomic composition is used to create the pre-computed steps that populate the MMC library, it is not used during the MMC transport process and does not need to be stored.
In addition to the material name and density, each material data structure contains a stopping power table that describes how the proton stopping power of

    MMC library (array of n_m materials)
        Material i: ρ(i) [g/cm^3], n_e(i) = # of energies, table of stopping power data, array of energies
            Initial Energy (i, j): E(i, j) [MeV], n_s(i, j) = # of step-sizes, array of step-sizes
                Step-size (i, j, k): R_local(i, j, k) [cm], L_local(i, j, k) [cm], pre-computed step data structure

Figure 2.15: Partial diagram of the macro Monte Carlo library of pre-computed steps. The hierarchical data structure has three levels. The top level consists of n_m materials. Each material i has an array of n_e(i) initial proton energies. The data structures for the initial proton energies constitute the second level of the library. Each initial energy j for material i has an array of n_s(i, j) step-sizes. The data structures for the step-sizes constitute the third level of the library. The pre-computed step data structures are stored in the step-size data structures.

the material changes with energy. This is used by the MMC transport algorithm to improve the dose deposition process. Each material data structure i also contains an array of initial proton energies. The array of initial proton energies for material i contains n_e(i) entries. The individual initial energies in the arrays constitute the second level of the hierarchy, as shown in Fig. 2.15. Each initial proton energy j (j = 1, 2, ..., n_e(i)) for material i is a data structure that contains three items: E(i, j), n_s(i, j) and an array of step-sizes. The value E(i, j) is the initial proton energy in MeV used in the MCNPX simulations that generated the pre-computed steps created for initial energy j of material i. The value n_s(i, j) is the number of different step-sizes available for initial energy j of material i. The array of step-sizes has n_s(i, j) step-size data structures. These step-size data structures constitute the third level of the MMC library hierarchy.

Each step-size k (k = 1, 2, ..., n_s(i, j)) of initial energy j and material i is characterized by two values: R_local(i, j, k) and L_local(i, j, k). The value R_local(i, j, k) is the radius of the cylindrical local geometry for material i, initial energy j and step-size k. Similarly, the value L_local(i, j, k) is the length of the cylindrical local geometry for material i, initial energy j and step-size k. Each step-size data structure also contains one pre-computed step data structure. This pre-computed step data structure can be generated using any of the algorithms described in the preceding subsections (see Figs. 2.6, 2.12 and 2.14). The MCNPX simulation used to produce the pre-computed step for material i, initial energy j and step-size k uses the appropriate physical dimensions (R_local(i, j, k) and L_local(i, j, k)), proton energy E(i, j) and material properties (ρ(i)). Refer to Fig. 2.1 for a diagram of the geometry used to generate the pre-computed steps.
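The three-level hierarchy can be sketched as nested C++ structures. The names here are assumptions for illustration (the actual PMMC layout is not reproduced in the text), and the pre-computed step payload is elided:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Bottom payload: the pre-computed step itself, i.e. the lists of
// zero-, one- and many-proton histories (contents elided here).
struct PreComputedStep { /* history lists omitted in this sketch */ };

// Third level: one step-size of one initial energy of one material.
struct StepSize {
    double R_local = 0.0;      // cylinder radius of the local geometry [cm]
    double L_local = 0.0;      // cylinder length of the local geometry [cm]
    PreComputedStep step;      // exactly one pre-computed step per step-size
};

// Second level: one initial proton energy of one material.
struct InitialEnergy {
    double E = 0.0;                     // initial proton energy [MeV]
    std::vector<StepSize> stepSizes;    // n_s(i, j) entries
};

// Top level: one material, with its stopping power table.
struct Material {
    std::string name;
    double rho = 0.0;                       // density [g/cm^3]
    std::vector<double> stoppingPowerE;     // energy grid of the table
    std::vector<double> stoppingPowerS;     // stopping power values S(E)
    std::vector<InitialEnergy> energies;    // n_e(i) entries
};

using MMCLibrary = std::vector<Material>;   // n_m materials
```

With this layout the lookup in the transport code is just `lib[m].energies[e].stepSizes[l].step`, mirroring the (m, e, l) indexing described later in Ch. 3.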
Fig. 2.16 summarizes the algorithm used to populate the MMC library with pre-computed steps.

    Create an array for n_m materials
    for i = 1 to n_m
        Choose material properties (material name and ρ)
        Create an array for n_e(i) energies
        Create a stopping power table for material i
        for j = 1 to n_e(i)
            Choose initial proton energy E(i, j)
            Create an array for n_s(i, j) step-sizes
            for k = 1 to n_s(i, j)
                Choose local geometry cylinder radius R_local(i, j, k)
                Choose local geometry cylinder length L_local(i, j, k)
                Create pre-computed step using desired algorithm
            end
        end
    end

Figure 2.16: Algorithm used to generate the MMC library of pre-computed steps.

Chapter 3

The Proton Macro Monte Carlo Method

3.1 Introduction

Macro Monte Carlo radiation transport is performed by taking large spatial steps through homogeneous areas of a transport medium. Sometimes these steps may also cross multiple transport regions composed of different materials. Fig. 3.1 shows a two-dimensional representation of the MMC method used for protons.

Figure 3.1: A two-dimensional representation of the macro Monte Carlo process. Large spatial steps are taken through homogeneous parts of the mesh geometry. Near the interface between two areas, smaller steps are used to model the boundary crossing accurately.

Ch. 2 discussed how to create pre-computed steps using a number of different algorithms. Ultimately, pre-computed steps stored as lists of histories were determined to be the best storage method. Ch. 2 also discussed how to create a MMC library composed of pre-computed steps for different materials, proton energies and step-sizes. This chapter discusses how the proton macro Monte Carlo code (called PMMC) uses the MMC library to perform efficient proton transport through a larger, global geometry composed of many regions and materials. The PMMC code discussed here is the code originally called PMMC-H in Sec. 2.3.

3.2 Initialization of the Proton MMC Code

The PMMC code is run from the UNIX command line prompt. The program takes the name of an input file as its only argument. The input file allows the user to specify a number of input parameters. The input file begins with a list of file locations for the global geometry, proton source specification, proton stopping power tables and the MMC library file. Next, the user specifies the output file name and the number of protons to be simulated per beamlet. Finally, the user specifies the number of voxels in the dose grid to be created by the PMMC simulation, as well as the resolution of the voxels in centimeters.

The global geometry file describes the heterogeneous geometry that will be used in the PMMC simulation. It specifies the number of voxels per dimension and the resolution of the voxels for the global geometry. It also lists the material number, density and appropriate step-size for protons for each voxel. The material number corresponds to the material with the same index in the MMC library.

The source specification file was designed to reflect the fact that proton therapy is delivered from a fixed number of angles with many individual beamlets per angle. For this reason, proton beamlets are organized by beam direction in the file. First, the file lists the number of beam directions. Then, the first beam direction is given using the directional cosines. Next, the number of beamlets for the first direction is listed, followed by descriptions of each beamlet. Beamlets are described in terms of their initial position, the Gaussian spread of the beamlet in each direction, the beamlet energy, and the relative weight of that beamlet. Once all the beamlets have been listed for the first direction, the file proceeds with the second direction and continues until the source is completely described.

The proton stopping power file is generated using PSTAR data available online [29].
For each material in the MMC library, the proton stopping power file contains information about how the proton stopping power for that material varies with proton energy. The data

in this file are used to improve the dose deposition algorithm, as discussed later in this chapter. The last file that is loaded is the MMC library, which is discussed in Sec. 2.4.

The PMMC program initializes by loading all of the files and other user-specified parameters into memory. The program uses dynamic allocation of memory, allowing the user to vary the size and composition of the global geometry and the description of the source without recompiling the code. Once the input file is loaded, PMMC begins the proton transport process. Fig. 3.2 shows the overall flow of the PMMC program. After initialization, PMMC cycles through each beam direction one at a time and simulates all of the beamlets for each direction before moving on to the next. Note that Fig. 3.2 does not expand on how protons are transported. The simulation of a single proton history is discussed in the next section.

3.3 Simulation of a Proton History

Fig. 3.3 shows the algorithm used to simulate a single proton history. Each of the steps of the algorithm is discussed in detail below. Briefly, a proton is initialized using the current source and direction. Next, using the current energy and location of the proton, PMMC chooses a pre-computed step from the MMC library and randomly samples exit location, exit energy and exit trajectory parameters. These parameters are used to update the position, trajectory and energy of the proton in the global geometry. The energy loss is deposited in the three-dimensional dose grid. If secondary protons are produced, they are added to a storage array to be transported later. Finally, PMMC checks to see if the proton has escaped or run out of energy. If the proton has not escaped and has not run out of energy, another pre-computed step is chosen and the process described above repeats itself until the proton runs out of energy or escapes the global geometry.
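The control flow just described — transport the primary, queue any secondaries, then drain the queue — can be sketched in C++ with a toy stand-in for the pre-computed step. The step "physics" below is invented purely for illustration; only the queue-driven loop structure reflects the PMMC design:

```cpp
#include <cassert>
#include <queue>

// A proton reduced to its energy, for this sketch only.
struct Proton { double E; };

// Toy stand-in for sampling a pre-computed step: lose half the energy,
// and spawn one secondary when the energy crosses 50 MeV. Real PMMC
// samples these outcomes from the MMC library instead.
void takeStep(Proton& p, std::queue<Proton>& secondaries) {
    double before = p.E;
    p.E *= 0.5;
    if (before > 50.0 && p.E <= 50.0)
        secondaries.push(Proton{p.E * 0.2});
}

// Transport the primary, then every queued secondary, until each proton
// falls below the low-energy threshold (the "dead" condition). Escape
// checks are omitted from this sketch.
int simulateHistory(double E0, double threshold) {
    int stepsTaken = 0;
    std::queue<Proton> queue;
    queue.push(Proton{E0});
    while (!queue.empty()) {
        Proton p = queue.front();
        queue.pop();
        while (p.E > threshold) {   // proton still alive
            takeStep(p, queue);     // may push secondaries onto the queue
            ++stepsTaken;
        }
    }
    return stepsTaken;
}
```

Because secondaries can themselves produce secondaries, a queue (rather than a fixed list) is the natural structure: the history ends only when the queue is empty.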

    Start PMMC
    Load input file
    Load global geometry file, source specification file, MMC library and stopping power data
    for (each beam direction)
        for (each beamlet in the current direction)
            for (each proton in the current beamlet)
                Simulate a proton history
            end
        end
    end
    Convert energy loss grid to dose grid
    End PMMC

Figure 3.2: Diagram of the overall flow of the PMMC program. The program loads the input file and all the associated files containing data about the source and geometry. The MMC library is loaded as well. Then the program cycles through each beam direction and simulates each beamlet for each beam direction in sequence. After all the beamlets have been simulated, the program processes the output. For the details of the simulation of a single proton history, see Fig. 3.3.

    Start one MMC history
    Initialize proton position, trajectory and energy using the current direction and source beamlet structures
    Simulate Gaussian spread of the beamlet
    Move proton to the global geometry and update position indices
    repeat
        Determine the appropriate material m, energy e and step-size l and select pre-computed step (m, e, l) from the MMC library
        Determine whether to select a zero-proton, one-proton or many-proton history:
            Zero-proton history: create a pseudo-proton to deposit dose
            One-proton history: get phase space parameters for the exiting proton
            Many-proton history: create a pseudo-proton to deposit dose and get phase space parameters for all exiting protons; send secondary protons to the secondary proton queue
        Use p_side, p_exit, E_frac, cos(θ_exit) and cos(φ_exit) to update the position, trajectory and energy of all protons and pseudo-protons
        Deposit E_loss for the history in the dose grid
    until the proton is dead or escaped
    If there are secondary protons in the queue, begin transporting the next secondary proton
    End MMC history

Figure 3.3: Diagram of the logic flow used for a single proton history. First, a proton is initialized using source parameters. Then the proton enters a loop consisting of three steps: sampling new phase space parameters, updating the proton phase space and depositing dose. This process is repeated until the primary proton runs out of energy. Then, the process is repeated until all of the secondary protons have been transported as well.

3.3.1 Initialization of the Proton

The first step in initializing the phase space of a new proton simply involves reading data stored in the source data structure for the current direction and beamlet. The initial position of the proton is set to the (x_source, y_source, z_source) location of the current beamlet. The trajectory of the proton, (u_x, u_y, u_z), is initialized using the direction cosines for the current direction. The beam energy E is initialized using the energy for the current beamlet.

A number of other quantities are initialized as well. Two Boolean variables called dead and escaped are initialized to 0. The variable dead tracks whether the proton has dropped below the lowest energy in the MMC library. It is set to 1 if the energy of the proton drops below this threshold and can no longer be transported. The variable escaped tracks whether the proton is still located within the global geometry. It is set to 1 if the proton escapes the geometry. Together, dead and escaped are used to determine when the proton history is finished.

The second step in initializing the phase space of the newly created proton is simulating the Gaussian spread of the beam. Gaussian spread of the source is simulated by moving the proton to a new position in the plane perpendicular to the source direction. The Gaussian spread for the current beamlet, σ, is in centimeters. The spread is simulated by randomly selecting a radius r_spread from a Gaussian distribution with mean µ = 0 and variance σ^2. This is accomplished using a Mersenne Twister random number generator with a normal distribution function developed by Matsumoto and Nishimura (1998) [30]. Next, a vector r is created perpendicular to the beam direction. If u_y ≠ 0, then r is created using Eq. (3.1):

    r_x = sqrt( u_y^2 / (u_x^2 + u_y^2) )
    r_y = −(u_x / u_y) r_x                                    (3.1)
    r_z = 0

If u_y = 0, then r is simply (0, 1, 0). Next, a second vector c is constructed that is perpendicular to both r and the direction vector u = (u_x, u_y, u_z). This is constructed using the vector cross product, c = u × r, which has been expanded in Eq. (3.2):

    c_x = u_y r_z − u_z r_y
    c_y = r_x u_z − r_z u_x                                   (3.2)
    c_z = u_x r_y − u_y r_x

Finally, an angle α is randomly sampled on the interval [0, 2π]. This random angle is used to select a point in the plane defined by r and c that is a distance r_spread from the current proton location (x_source, y_source, z_source). The proton is moved to its new source location (x, y, z) using Eq. (3.3):

    x = x_source + r_spread (r_x cos α + c_x sin α)
    y = y_source + r_spread (r_y cos α + c_y sin α)           (3.3)
    z = z_source + r_spread (r_z cos α + c_z sin α)

Since r_spread is randomly sampled from a Gaussian distribution, the source locations of the protons for a given beamlet will ultimately take on a Gaussian shape centered at (x_source, y_source, z_source).

The third step in the proton initialization process is moving the proton to the global geometry. The source location (x, y, z) is generally outside of the boundaries of the global geometry. Before proton transport can begin, the proton must be moved to the global geometry. Specifically, the proton is moved along its initial trajectory (u_x, u_y, u_z) until a voxel of the global geometry is encountered that has a density ρ > 0. This is accomplished by taking small steps that are much smaller than the voxel dimensions. Given a step-size s, the equations in Eq. (3.4) are used repeatedly until the proton is inside a voxel with a non-zero density:

    x = x + s u_x
    y = y + s u_y                                             (3.4)
    z = z + s u_z

The step-size s is generally set to be one-tenth the size of the smallest voxel dimension.

The last step in the proton initialization process is determining the current voxel location of the proton. The current and previous voxel locations of the proton are stored and updated often during the MMC transport process. These voxel locations are used in Sec. 3.3.2 to determine the material and step-size to use for the pre-computed steps. The voxel locations are also used later to determine where to deposit energy loss in the dose grid. Given global geometry voxel dimensions (l_x,g, l_y,g, l_z,g) and the number of voxels in each dimension (n_x,g, n_y,g, n_z,g), the voxel location of a proton in the global geometry, (i_x,g, i_y,g, i_z,g), is found using Eq. (3.5):

    i_x,g = ⌊ x / l_x,g + n_x,g / 2 ⌋
    i_y,g = ⌊ y / l_y,g + n_y,g / 2 ⌋                         (3.5)
    i_z,g = ⌊ z / l_z,g + n_z,g / 2 ⌋

where ⌊w⌋ is the floor operator and returns the largest integer less than or equal to w. Similarly, the voxel location of a proton in the dose grid, (i_x,d, i_y,d, i_z,d), can be computed by using the dose grid voxel dimensions (l_x,d, l_y,d, l_z,d), the number of voxels in each dimension (n_x,d, n_y,d, n_z,d) and a variation of Eq. (3.5) in which the global geometry index g is replaced with the dose grid index d.
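The beam-spread geometry of Eqs. (3.1)–(3.3) can be sketched in C++. In this sketch r_spread and α are passed in by the caller rather than sampled, so the geometry can be checked deterministically; the function names are assumptions:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Eq. (3.1): a unit vector r perpendicular to the beam direction u.
Vec3 perpendicular(const Vec3& u) {
    if (u[1] == 0.0) return {0.0, 1.0, 0.0};   // the u_y = 0 special case
    double rx = std::sqrt(u[1] * u[1] / (u[0] * u[0] + u[1] * u[1]));
    return {rx, -(u[0] / u[1]) * rx, 0.0};
}

// Eq. (3.2): the cross product c = u x r, expanded component-wise.
Vec3 cross(const Vec3& u, const Vec3& r) {
    return {u[1] * r[2] - u[2] * r[1],
            u[2] * r[0] - u[0] * r[2],
            u[0] * r[1] - u[1] * r[0]};
}

// Eq. (3.3): offset the source point by r_spread at azimuthal angle
// alpha in the plane spanned by r and c.
Vec3 spreadPosition(const Vec3& src, const Vec3& u,
                    double rSpread, double alpha) {
    Vec3 r = perpendicular(u);
    Vec3 c = cross(u, r);
    Vec3 p;
    for (int i = 0; i < 3; ++i)
        p[i] = src[i] + rSpread * (r[i] * std::cos(alpha) + c[i] * std::sin(alpha));
    return p;
}
```

In PMMC the caller would draw r_spread from a zero-mean Gaussian with variance σ^2 (e.g. via a Mersenne Twister with a normal distribution) and α uniformly on [0, 2π].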

3.3.2 Determining the Pre-computed Step Material, Energy and Step-size

Once the phase space of a new proton is initialized, the proton propagates through the global geometry using pre-computed steps until it escapes the geometry or runs out of energy. The pre-computed steps are stored in the MMC library and are sorted by material, proton energy and step-size. Choosing the appropriate pre-computed step involves determining the indices of the material, proton energy and step-size that are most appropriate.

The index m for the appropriate material can be found using the current voxel location of the proton in the global geometry. This location, (i_x,g, i_y,g, i_z,g), can be used to retrieve the material of the current voxel from the global geometry data. As discussed in Sec. 3.2, the global geometry file stores the material index for each voxel, which corresponds to a particular material in the MMC library. The material index m obtained from the global geometry grid tells PMMC to use the mth material in the MMC library for the pre-computed step.

The index e for the appropriate proton energy can be found by comparing the current proton energy E to all of the initial energies available for material m. First, PMMC determines E_l and E_h. Among all the pre-computed step energies for material m, E_l is the largest initial energy that is less than E. It corresponds to a set of pre-computed steps with an energy index e_l. Similarly, E_h is the smallest initial energy that is greater than or equal to E. It corresponds to a set of pre-computed steps with an energy index e_h. The index e for the appropriate proton energy is determined using Eq. (3.6):

    e = e_l  if ξ ≤ (log E_h − log E) / (log E_h − log E_l)
    e = e_h  if ξ > (log E_h − log E) / (log E_h − log E_l)   (3.6)

where ξ is a random number sampled on the interval [0, 1].
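Eq. (3.6) is a stochastic interpolation between the two bracketing library energies on a logarithmic scale. A minimal sketch, with the random number ξ supplied by the caller (the function name is an assumption):

```cpp
#include <cassert>
#include <cmath>

// Eq. (3.6): choose between the bracketing energy indices e_l and e_h.
// E_l <= E < E_h are the bracketing library energies; xi is a uniform
// random number in [0, 1]. The closer E is to E_l on a log scale, the
// more likely e_l is chosen.
int chooseEnergyIndex(double E, double El, double Eh,
                      int el, int eh, double xi) {
    double w = (std::log(Eh) - std::log(E)) / (std::log(Eh) - std::log(El));
    return (xi <= w) ? el : eh;   // e_l is selected with probability w
}
```

Averaged over many protons, this reproduces a logarithmic interpolation between the two tabulated energies without ever mixing data from two pre-computed steps within a single step.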

Finally, the index l for the step-size can be found using the current voxel location, (i_x,g, i_y,g, i_z,g), of the proton in the global geometry grid. As discussed in Sec. 3.2, the global geometry file stores the step-size for each voxel. The step-size is stored as an index which corresponds to a particular step-size in the MMC library. The step-size index l obtained from the global geometry grid tells PMMC to use the lth step-size of initial energy e and material m for the pre-computed step. The current method of using one step-size for all energies is a bit inefficient. This could be resolved by specifying the appropriate step-size for each energy in the global geometry file. Alternatively, PMMC could determine the appropriate step-size automatically during the PMMC initialization process.

3.3.3 Sampling a History from the MMC Library

The material index m, initial proton energy index e and step-size index l uniquely correspond to one step-size data structure in the MMC library, as shown in Fig. 2.15. This data structure contains the radius R_local and length L_local of a pre-computed step created using the mth material and eth proton energy. The pre-computed step is stored as a data structure in the step-size data structure (see Fig. 2.15) and consists of lists of different kinds of histories, as shown in Fig. 2.13. Now that the appropriate pre-computed step has been identified, a single history from this pre-computed step must be randomly sampled from one of the lists. This history will be used to take a step through the global geometry and update the phase space of the proton being simulated.

Before randomly sampling an individual history, PMMC must determine which list of histories to use. As shown in Fig. 2.13, there are three types of histories: those that produce no exiting protons, those that produce one exiting proton and those that produce multiple exiting protons.
The probability of each type of history is stored in the pre-computed step data structure as p_0, p_1 and p_m, respectively. Since each type of history is handled

differently by PMMC, each type of history will be discussed separately in the subsections below.

Histories with One Exiting Proton

First, PMMC checks to see if a one-proton history is selected. One-proton histories are investigated first because they are the most likely. A random number ξ is sampled on the interval [0, 1]. If ξ < p_1, then PMMC randomly chooses a history from the one-proton history list. PMMC randomly chooses an integer η between 1 and n_1, the number of one-proton histories. The number η corresponds to an individual history in the one-proton history list. This history contains one exiting proton and may contain secondary particles as well. The single exiting proton has phase space parameters E_frac, p_side, p_exit, cos θ_exit and cos φ_exit. The individual history selected also contains the energy loss, E_loss, for that history. These six parameters are used to update the phase space of the proton and deposit the energy loss in the dose grid. This process is discussed in the subsections below.

Histories with Zero Exiting Protons

If ξ ≥ p_1, then the history that is selected will come from the zero-proton history list or the many-proton history list. PMMC determines which using the conditional statement in Eq. (3.7):

    ξ < p_0 / (p_0 + p_m)                                     (3.7)

where ξ is a random number sampled on the interval [0, 1]. If the statement is true, then PMMC randomly chooses a history from the zero-proton history list. PMMC randomly chooses an integer η between 1 and n_0, the number of zero-proton histories. The number η corresponds to an individual history in the zero-proton history list. This history contains no protons but may contain secondary particles.
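The two-stage selection — test the most likely one-proton case first, then split the remainder between zero- and many-proton histories per Eq. (3.7) — can be sketched as follows. The enum and function names are assumptions, and the two uniform random numbers are supplied by the caller:

```cpp
#include <cassert>

enum class HistoryType { Zero, One, Many };

// Select the history type given the stored probabilities p0, p1, pm
// (which sum to 1). xi1 and xi2 are uniform random numbers in [0, 1].
HistoryType selectHistoryType(double p0, double p1, double pm,
                              double xi1, double xi2) {
    if (xi1 < p1) return HistoryType::One;       // most likely case first
    // Eq. (3.7): renormalize over the remaining zero- and many-proton cases
    return (xi2 < p0 / (p0 + pm)) ? HistoryType::Zero : HistoryType::Many;
}
```

Once the type is known, an individual history is drawn uniformly from the corresponding list by sampling an integer η between 1 and the list length.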

Although the individual history selected contains no exiting protons, the energy loss, E_loss, for that history must still be deposited over the length of the pre-computed step. To achieve this, a pseudo-proton is created with specially selected values for the five parameters that are used to update the proton phase space (E_frac, p_side, p_exit, cos θ_exit and cos φ_exit). The pseudo-proton exits at the exact center of the opposite face of the cylinder (p_side = 0 and p_exit = 0) with no energy (E_frac = 0) and a straight trajectory (cos θ_exit = cos φ_exit = 1). The sole purpose of the pseudo-proton is to deposit the energy loss E_loss across the length of the pre-computed step. The E_frac = 0 value selected for the remaining energy fraction ensures that the transport process for the pseudo-proton will end after the energy loss is deposited. The subsections below discuss how the phase space of the pseudo-proton is updated and how the energy loss is deposited in the dose grid.

Histories with Many Exiting Protons

If the conditional statement in Eq. (3.7) is false, then PMMC randomly chooses a history from the many-proton history list. PMMC randomly chooses an integer η between 1 and n_m, the number of many-proton histories. The number η corresponds to an individual history in the many-proton history list. This history contains more than one proton and may contain secondary particles as well.

A number of things happen when a many-proton history is selected. First, PMMC ends the current history using the same process used to end a zero-proton history. A pseudo-proton is created that exits at the exact center of the opposite face of the cylinder (p_side = 0 and p_exit = 0) with no energy (E_frac = 0) and a straight trajectory (cos θ_exit = cos φ_exit = 1). This pseudo-proton is used to deposit the energy loss E_loss for the current step. Again, the subsections below discuss how the phase space is updated and the energy loss is deposited.
In addition to creating the pseudo-proton, PMMC also creates n_p secondary protons (where n_p is the number of protons for the selected many-proton history). Each of the

n_p secondary protons is defined by five exit parameters: E_frac, p_side, p_exit, cos θ_exit and cos φ_exit. These parameters are used to determine the exit position, trajectory and energy of each secondary proton produced during the current pre-computed step. This is achieved using the process described below. Once the phase space for each secondary proton has been updated, the protons are temporarily stored in a queue data structure.

As discussed above, the transport of the primary proton is ended as part of simulating a many-proton history. The primary proton is replaced by multiple secondary protons whose phase spaces are updated immediately. Then the secondary protons are added to a queue. The first secondary proton in the queue is removed and treated like a new primary proton. This proton may also produce secondary protons, and these secondary protons are added to the queue as well. A single proton history will continue until the primary proton and all of the secondary protons run out of energy or escape the geometry.

Updating the Proton Phase Space

Regardless of whether a zero-proton, one-proton or many-proton history is chosen, the resulting primary proton, secondary protons and/or pseudo-protons must have their phase spaces updated using the sampled parameters. In addition, the energy lost during the pre-computed step, E_loss, must be deposited in the dose grid. The exit phase space of a proton is defined by five parameters: E_frac, p_side, p_exit, cos θ_exit and cos φ_exit. The local geometry of the pre-computed step is defined by two parameters: R_local and L_local. These parameters are used to update the position of the proton (x, y, z), the trajectory of the proton (u_x, u_y, u_z) and the energy of the proton (E). The new position and energy are used to determine if a proton should continue to be transported. The new position and energy are also used to determine where to deposit the energy loss, E_loss, in the dose grid.
Each of these processes is discussed in detail in the subsections below.

Determination of the Exit Position

Recall from Ch. 2 that the phase space of a proton exiting the cylindrical geometry is defined with respect to an alternate coordinate system defined by three vectors: d, r and c. The first step in determining the exit position and trajectory is to determine d, r and c. Recall that d is a vector that is parallel to the axis of the cylindrical geometry. It is equal to the current (yet-to-be-updated) trajectory of the proton, as shown in Eq. (3.8):

    d_x = u_x
    d_y = u_y                                                 (3.8)
    d_z = u_z

The vector r is defined as any vector that is perpendicular to d. Recall from Sec. 3.3.1 that a similar vector was created as part of simulating the Gaussian spread of the proton beam. Indeed, if u_y ≠ 0, then r is created using Eq. (3.1). This set of equations is reproduced below for reference:

    r_x = sqrt( u_y^2 / (u_x^2 + u_y^2) )
    r_y = −(u_x / u_y) r_x
    r_z = 0

If u_y = 0, then r is simply (0, 1, 0). Finally, the vector c is created using the vector cross product, c = d × r, which has been expanded in Eq. (3.9):

    c_x = d_y r_z − d_z r_y
    c_y = r_x d_z − r_z d_x                                   (3.9)
    c_z = d_x r_y − d_y r_x

Next, the sampled phase space parameters are used to determine the exit radius R_exit and the exit length L_exit. Recall that p_side tracks the exit surface of the proton. If p_side = 0, then

the proton exited through the opposite face of the cylinder and L_exit = L_local by definition. Furthermore, for p_side = 0 the value stored in p_exit is the exit radius (i.e. R_exit = p_exit). If p_side = 1, then the proton exited through the side of the cylinder and R_exit = R_local by definition. Furthermore, for p_side = 1 the value stored in p_exit is the exit length (i.e. L_exit = p_exit).

The symmetry of the cylindrical geometry means that all exit locations with an exit radius R_exit and exit length L_exit are equally likely. The set of possible exit locations is defined by a circle of radius R_exit centered on the center axis of the cylinder at a distance L_exit from the entrance surface. For reference, recall the exit position geometry diagram in Ch. 2; the circular dotted lines at each exit position show the circles that define the set of equally likely exit positions. A single exit position is selected by randomly sampling an exit azimuthal angle α_exit on the interval [0, 2π].

Before the proton position (x, y, z) is updated, the current position is stored in a separate position vector (x_last, y_last, z_last). This is done because the energy deposition process discussed below uses both the initial and final position of the proton to deposit energy in the dose grid. Finally, the exit position is updated using Eq. (3.10):

    x = x + L_exit d_x + R_exit (r_x cos α_exit + c_x sin α_exit)
    y = y + L_exit d_y + R_exit (r_y cos α_exit + c_y sin α_exit)     (3.10)
    z = z + L_exit d_z + R_exit (r_z cos α_exit + c_z sin α_exit)

Determination of the Exit Trajectory

The exit trajectory is determined using d, r and c and the sines and cosines of α_exit, θ_exit and φ_exit. The values for cos θ_exit and cos φ_exit are included in the exit phase space parameters sampled from the MMC library. The sine of the exit polar angle, θ_exit, is by definition always positive. It is calculated using Eq. (3.11):

    sin θ_exit = sqrt(1 − cos^2 θ_exit)                       (3.11)

The sine of the exit azimuthal angle, φ_exit, can be either positive or negative. Recall that φ_exit is defined with respect to r. Since azimuthal deflection away from r is equally likely in both directions, positive and negative values for sin φ_exit are equally likely for a given cos φ_exit. The value of sin φ_exit is computed using Eq. (3.12):

    sin φ_exit = +sqrt(1 − cos^2 φ_exit)  for ξ > 0.5
    sin φ_exit = −sqrt(1 − cos^2 φ_exit)  for ξ ≤ 0.5         (3.12)

where ξ is a random number sampled on the interval [0, 1]. Finally, the exit trajectory can be computed using d, r, c, α_exit, θ_exit and φ_exit. The vectors d, r and c have coefficients that quantify their relative contribution to the new trajectory. They are computed using Eq. (3.13):

    a_d = cos θ_exit
    a_r = sin θ_exit (cos φ_exit cos α_exit − sin φ_exit sin α_exit)      (3.13)
    a_c = sin θ_exit (cos φ_exit sin α_exit + sin φ_exit cos α_exit)

The new trajectory is computed using Eq. (3.14):

    u_x = a_d d_x + a_r r_x + a_c c_x
    u_y = a_d d_y + a_r r_y + a_c c_y                          (3.14)
    u_z = a_d d_z + a_r r_z + a_c c_z
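Eqs. (3.11)–(3.14) rotate the sampled deflection into the global frame spanned by d, r and c. A sketch, with the sign choice for sin φ_exit and the sampled α_exit passed in by the caller (function and parameter names are assumptions):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Build the exit trajectory from the sampled angles. d, r, c must be
// the orthonormal frame described in the text (d along the old
// trajectory, r and c perpendicular to it).
Vec3 exitTrajectory(const Vec3& d, const Vec3& r, const Vec3& c,
                    double cosTheta, double cosPhi,
                    bool phiPositive, double alpha) {
    double sinTheta = std::sqrt(1.0 - cosTheta * cosTheta);   // Eq. (3.11)
    double sinPhi = std::sqrt(1.0 - cosPhi * cosPhi);         // Eq. (3.12)
    if (!phiPositive) sinPhi = -sinPhi;
    // Eq. (3.13): weights of d, r and c in the new direction
    double ad = cosTheta;
    double ar = sinTheta * (cosPhi * std::cos(alpha) - sinPhi * std::sin(alpha));
    double ac = sinTheta * (cosPhi * std::sin(alpha) + sinPhi * std::cos(alpha));
    Vec3 u;
    for (int i = 0; i < 3; ++i)                               // Eq. (3.14)
        u[i] = ad * d[i] + ar * r[i] + ac * c[i];
    return u;
}
```

Because (a_d, a_r, a_c) has unit norm whenever d, r and c are orthonormal, the exit trajectory remains a unit vector, so no renormalization step is needed after the rotation.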

Determination of Exit Energy and Energy Loss

The sampled parameters E_frac and E_loss are expressed as fractions rather than in MeV. This allows them to be used with arbitrary proton energies. The energy loss in MeV, E_loss,MeV, is computed using Eq. (3.15):

E_loss,MeV = E_loss E    (3.15)

The new proton energy is computed by multiplying the current proton energy E in MeV by E_frac, as shown in Eq. (3.16):

E ← E_frac E    (3.16)

Recall that the pseudo-protons used to deposit dose are assigned E_frac = 0. This means the pseudo-protons have zero energy after the transport process.

Check for Escape and Low Energy Threshold

After the position, trajectory and energy of the proton are updated, PMMC checks to see if the proton has escaped the global geometry. Using the global geometry voxel dimensions (l_x,g, l_y,g, l_z,g) and the number of voxels in each dimension (n_x,g, n_y,g, n_z,g), the extent of the global geometry is given by Eq. (3.17):

−n_x,g l_x,g / 2 ≤ x ≤ n_x,g l_x,g / 2
−n_y,g l_y,g / 2 ≤ y ≤ n_y,g l_y,g / 2    (3.17)
−n_z,g l_z,g / 2 ≤ z ≤ n_z,g l_z,g / 2

If the new proton position (x, y, z) is not in this range, then the proton is stepped backward toward (x_last, y_last, z_last) until the proton position satisfies the conditions in Eq. (3.17). This is accomplished by taking small steps that are much smaller than the voxel dimensions.
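The energy update of Eqs. (3.15)–(3.16) and the containment test of Eq. (3.17) are simple enough to sketch directly (hypothetical helper functions; the names and NumPy usage are assumptions, not the PMMC implementation):

```python
import numpy as np

def update_energy(E, E_frac, E_loss):
    """Apply the fractional exit parameters to the current energy E [MeV].

    Returns the new proton energy (Eq. (3.16)) and the energy loss in MeV
    (Eq. (3.15))."""
    return E_frac * E, E_loss * E

def inside_global_geometry(pos, n_voxels, voxel_dims):
    """Eq. (3.17): the global geometry is centered on the origin and spans
    n_voxels[i] * voxel_dims[i] in each dimension."""
    half_extent = 0.5 * np.asarray(n_voxels) * np.asarray(voxel_dims)
    return bool(np.all(np.abs(np.asarray(pos)) <= half_extent))
```

A 100 MeV proton with E_frac = 0.96 and E_loss = 0.04, for example, exits the step with 96 MeV after depositing 4 MeV along the way.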

Given a step-size s, the equations in Eq. (3.18) are used repeatedly until the proton is inside a voxel with a non-zero density:

x ← x − s (x − x_last)
y ← y − s (y − y_last)    (3.18)
z ← z − s (z − z_last)

The proton is moved back into the global geometry so that the dose deposition process can be completed. The Boolean variable escaped is set to 1. After the dose deposition process, MMC transport for this proton is halted due to the escape.

PMMC also checks to see if the energy of the proton has fallen below the initial energies in the MMC library. If the energy of a proton falls below this threshold, the proton can no longer be transported using pre-computed steps. At this point, the Boolean variable dead is set to 1. The remaining energy of the proton is deposited locally. After the dose deposition process, MMC transport for this proton is halted because the energy of the proton has reached zero. The MMC transport of pseudo-protons used to deposit energy in zero-proton and many-proton histories is halted in this manner as well.

Deposition of Energy Loss in the Dose Grid

The energy lost during a pre-computed step, E_loss,MeV, is deposited in the dose grid between (x_last, y_last, z_last) and (x, y, z). The energy loss is deposited at n_dep different locations between these two points. The locations of the energy deposition points are randomly sampled rather than analytically selected. Selecting dose deposition locations randomly has two functions. First, using random numbers to select the energy deposition locations eliminates artifacts caused by superimposing the local geometries on the voxel structure of the global geometry. The most common manifestation of this is double deposition of energy at particular locations. These locations correspond to voxels where steps frequently end and begin.

Energy is deposited at these locations on two consecutive steps, leading to spikes in the resulting dose distribution. Second, using random numbers to select the energy deposition locations allows PMMC to make use of stopping power data to bias the energy deposition locations and simulate the change of stopping power over the length of a step. Given a stopping power (de/dx)_last [MeV/cm] at (x_last, y_last, z_last) and a stopping power (de/dx) [MeV/cm] at (x, y, z), the energy deposition locations are selected using a simple linear probability density function [31]. First, a relative position between (x_last, y_last, z_last) and (x, y, z) is randomly sampled using Eq. (3.19):

f_pos = ξ      if η < F_1 / (F_1 + F_2)
f_pos = √ξ    if η ≥ F_1 / (F_1 + F_2)    (3.19)

Where ξ and η are both random numbers sampled on [0, 1], F_1 = (de/dx)_last and F_2 = (1/2)((de/dx) − (de/dx)_last). An energy deposition location (x_dep, y_dep, z_dep) is then calculated using Eq. (3.20):

x_dep = x_last + f_pos (x − x_last)
y_dep = y_last + f_pos (y − y_last)    (3.20)
z_dep = z_last + f_pos (z − z_last)

Next, this position is converted to a set of indices (i_x,d, i_y,d, i_z,d) in the dose grid using Eq. (3.5) and the dose grid parameters. Finally, a fraction of the energy loss is added to the dose grid D in the corresponding voxel, as shown in Eq. (3.21):

D(i_x,d, i_y,d, i_z,d) ← D(i_x,d, i_y,d, i_z,d) + E_loss,MeV / n_dep    (3.21)

As stated above, this process is repeated n_dep times to simulate the energy deposition for a single pre-computed step. The number n_dep is typically set to 10.
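The deposition loop of Eqs. (3.19)–(3.21) can be sketched as follows. This is an illustrative reading, not the PMMC source: the √ξ branch samples the triangular component of the linear density by inversion, the sketch assumes the stopping power does not decrease over the step, and to_index stands in for the position-to-voxel mapping of Eq. (3.5).

```python
import numpy as np

def deposit_step_energy(dose, pos_last, pos, E_loss_MeV, dedx_last, dedx,
                        to_index, n_dep=10, rng=None):
    """Spread a step's energy loss over n_dep randomly placed points,
    biased by a linear stopping-power model along the step."""
    if rng is None:
        rng = np.random.default_rng()
    pos_last = np.asarray(pos_last, float)
    pos = np.asarray(pos, float)
    F1 = dedx_last                   # uniform component of the linear pdf
    F2 = 0.5 * (dedx - dedx_last)    # triangular component
    for _ in range(n_dep):
        xi, eta = rng.random(), rng.random()
        # Eq. (3.19): choose which pdf component to sample
        f_pos = xi if eta < F1 / (F1 + F2) else np.sqrt(xi)
        p_dep = pos_last + f_pos * (pos - pos_last)     # Eq. (3.20)
        dose[to_index(p_dep)] += E_loss_MeV / n_dep     # Eq. (3.21)
    return dose
```

Averaged over many steps the deposited energy density rises linearly from (de/dx)_last toward (de/dx), while the total deposited per step is exactly E_loss,MeV.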

There are two problems with the dose deposition method used in PMMC. First, the dose deposition method assumes that each step occurs entirely in one material, such that the stopping power varies continuously across the step. This is not true if there is a sudden change in density or a material boundary. This problem is discussed in Sec. Second, there is another problem specific to zero-proton histories. During these histories, the incident proton is absorbed completely somewhere in the local geometry. This location can be anywhere along the length of the cylinder. However, the dose deposition method assumes that the dose is spread along the entire length of the cylinder. As a result, the dose deposition method overestimates dose at slightly deeper depths during zero-proton histories.

Finally, as discussed in Sec. , the remaining proton energy E may also be deposited in the dose grid. This occurs if the energy of the proton is lower than the lowest initial proton energy available in the MMC library for the current material. First, the final position of the proton (x, y, z) is used to compute a set of indices (i_x,d, i_y,d, i_z,d) in the dose grid using Eq. (3.5). Then, the remaining energy of the proton E is added to the dose grid D in the corresponding voxel, as shown in Eq. (3.22):

D(i_x,d, i_y,d, i_z,d) ← D(i_x,d, i_y,d, i_z,d) + E    (3.22)

Secondary Particle Production

There are six secondary particles that can be generated during the PMMC transport process: protons, neutrons, deuterons, tritons, ³He nuclei and alpha particles. The MMC library contains the exit phase spaces for every type of particle. Although it is possible to extend the MMC library to allow the transport of additional particles, PMMC only has the capacity to transport protons at this time. The methods used to simulate secondary protons are discussed in Sec. At present, secondary neutrons are assumed to escape the global geometry and their energy is not deposited locally.
However, the neutron production is

tracked by recording the energy of the secondary neutrons in a separate three-dimensional grid that is the same size as the dose grid. This allows the user to get an idea of where neutrons are created and the energy fluence in each voxel. The secondary charged particles (deuterons, tritons, ³He nuclei and alpha particles) are not transported, but their energy is deposited locally in a third grid that is the same size as the dose grid. The user may choose whether to add the charged particle dose to the proton dose after the simulation.

3.4 Processing of the Output

The output of PMMC is a dose grid D. The dose grid resolution is given by the size of the voxel dimensions (l_x,d, l_y,d, l_z,d). The size of the dose grid is given by the number of voxels in each dimension (n_x,d, n_y,d, n_z,d). After all of the beamlets have been calculated for all the directions, the dose grid D contains energy deposition in MeV for each voxel, not dose in J/kg. Before the output file is printed, the values in the dose grid are converted to dose. The mass m(i_x, i_y, i_z) of a single voxel (i_x, i_y, i_z) is given by Eq. (3.23):

m(i_x, i_y, i_z) = l_x,d l_y,d l_z,d ρ(i_x, i_y, i_z) [g] × (1 [kg] / 1000 [g])    (3.23)

Where ρ(i_x, i_y, i_z) is the average density of the voxel, which is determined by averaging many randomly selected point densities within the voxel. Given that the conversion coefficient for changing MeV to Joules is 1.602 × 10⁻¹³ J/MeV, the dose D(i_x, i_y, i_z) for a single voxel (i_x, i_y, i_z) is given by Eq. (3.24):

D(i_x, i_y, i_z) ← 1.602 × 10⁻¹³ × D(i_x, i_y, i_z) / m(i_x, i_y, i_z)  [J/kg]    (3.24)

Where there is a change in the exponent of the leading coefficient due to the conversion from grams to kilograms in Eq. (3.23).
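The conversion in Eqs. (3.23)–(3.24) amounts to the following sketch. The 1.602 × 10⁻¹³ J/MeV factor is the standard MeV-to-Joule conversion; the function name and NumPy vectorization are assumptions. Voxels with zero density would need to be masked before the division.

```python
import numpy as np

MEV_TO_J = 1.602e-13  # Joules per MeV

def energy_grid_to_dose(energy_MeV, density_g_cm3, voxel_dims_cm):
    """Convert per-voxel deposited energy [MeV] to dose [Gy = J/kg].

    density_g_cm3 holds the average density of each voxel (Eq. (3.23));
    voxel_dims_cm is (l_x, l_y, l_z) in cm."""
    lx, ly, lz = voxel_dims_cm
    mass_kg = lx * ly * lz * density_g_cm3 / 1000.0  # Eq. (3.23): grams -> kg
    return energy_MeV * MEV_TO_J / mass_kg           # Eq. (3.24)
```

As a sanity check, one MeV deposited in a 1 cm³ water voxel (mass 10⁻³ kg) corresponds to about 1.6 × 10⁻¹⁰ Gy.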

Chapter 4

Results

Introduction

The PMMC code outlined in Ch. 3 was tested against the MCNPX radiation transport software. Recall that MCNPX was used to create the MMC library of pre-computed steps. The purpose of testing PMMC against MCNPX was to see if PMMC could accurately reproduce the results generated by conventional condensed-history Monte Carlo simulations. PMMC and MCNPX were compared in terms of two-dimensional planar dose distributions, pencil-beam central-axis depth-dose distributions, integral depth-dose distributions, lateral profiles and Bragg peak positions. The uncertainty in the results produced by PMMC and MCNPX is also evaluated. This is done by comparing the results of each code as the number of proton histories is varied. The performance of the two programs in terms of calculation time was analyzed as well. The PMMC code was tested using a variety of beam energies from 10 to 200 MeV and a number of different geometries, including homogeneous phantoms, layered geometries and geometries with source beamlets coming from many directions.

4.2 Validation Procedure

Validation Geometries

Several test problem geometries were used to validate and measure the performance of the PMMC. The first problem was a simple, homogeneous absorber composed entirely of water. The rectangular box water phantom had a uniform density of 1.0 g/cm³. The phantom was tested with proton pencil beams ranging from 10 MeV to 200 MeV directed into the center of the phantom along the z-axis. The size of the phantom and the resolution of the dose grid depended upon the beam energy. For beams from 10 MeV to 100 MeV, the phantom was 6 cm × 6 cm × 10 cm with a voxel size of 1 mm × 1 mm × 1 mm (that is, l_x,d = l_y,d = l_z,d = 0.1 cm, n_x,d = n_y,d = 60 and n_z,d = 100). For beams

Figure 4.1: A graphical representation of the uniform water phantom geometry used for the first test problem. The origin of the coordinate system is at the exact center of the rectangular box.

from 110 MeV to 200 MeV, the phantom was 6 cm × 6 cm × 30 cm with a voxel size of 2 mm × 2 mm × 2 mm (that is, l_x,d = l_y,d = l_z,d = 0.2 cm, n_x,d = n_y,d = 30 and n_z,d = 150). The larger size allows beams with longer ranges to be simulated, while the larger voxel size reduces the total number of voxels and the size of the resulting output file. Fig. 4.1 depicts the geometry used for this problem. A variation on this geometry is used in the procedure that measures the statistical uncertainty of PMMC and MCNPX. In this geometry, the pencil beam is replaced by a single 100 MeV Gaussian beam with a spread σ = 1 cm.

The second problem was a layered absorber composed of layers of fat, muscle and bone. The rectangular box consists of five layers that are 6 cm × 6 cm in the x- and y-directions and of varying thickness. Table 4.1 shows the order, material, density and thickness of each layer of the phantom. The multilayered geometry was tested with 80-, 100- and 120-MeV proton pencil beams directed into the center of the phantom along the z-axis. The size of the geometry was 6 cm × 6 cm × 10 cm with a voxel size of 1 mm × 1 mm × 1 mm (that is,

Table 4.1: Material properties of the multilayered geometry used for the second test problem

Layer   Material   Density [g/cm³]   Thickness [cm]
1       Fat
2       Muscle
3       Bone
4       Muscle
5       Fat

Figure 4.2: A graphical representation of the multilayered geometry used for the second test problem. The origin of the coordinate system is at the exact center of the rectangular box.

l_x,d = l_y,d = l_z,d = 0.1 cm, n_x,d = n_y,d = 60 and n_z,d = 100). Fig. 4.2 depicts the geometry used for this problem.

For the third problem, we return to the simple, homogeneous absorber composed entirely of water (ρ = 1.0 g/cm³). This time, multiple beams of varying energy, Gaussian spread and direction are directed at a single point in the xz-plane. Table 4.2 shows the source parameters for each beamlet. The size of the geometry was 6 cm × 6 cm × 10 cm with a voxel size of 1 mm × 1 mm × 1 mm (that is, l_x,d = l_y,d = l_z,d = 0.1 cm, n_x,d = n_y,d = 60 and n_z,d = 100). Fig. 4.3 shows a graphical depiction of the source arrangement.

Table 4.2: Properties of the beamlets used for the third test problem

Beamlet   Location [cm]   Direction Cosines       Energy [MeV]   Gaussian Spread [cm]
1         (−1, 0, −5)     (0, 0, 1)
2         (1, 0, −5)      (0, 0, 1)
3         (−3, 0, 1)      (1, 0, 0)
4         (3, 0, 1)       (−1, 0, 0)
5         (−4, 0, −2)     (1/√2, 0, 1/√2)
6         (4, 0, 6)       (−1/√2, 0, −1/√2)

Figure 4.3: A graphical representation of the multiple-beam geometry used for the third test problem. The origin of the coordinate system is at the exact center of the rectangular box.

Validation Method

The validation of PMMC consisted of two phases and followed some of the methods used by Lo et al. (2009) [32]. The first phase involved assessing the uncertainty in the output of PMMC simulations against conventional MCNPX simulations. Monte Carlo simulations are inherently non-deterministic, and it was necessary to separate the statistical uncertainty in the results from the error introduced by the MMC algorithm. Lo et al. (2009) achieved this by comparing the variance of the output. For PMMC and MCNPX, the variance of the output was measured using a single 2-D plane from the dose grids produced by each code. The 2-D plane was located at y = 0 and spanned the x- and z-dimensions. The differences between the two 2-D dose planes were quantified using a relative error E[i_x][i_z] between corresponding elements. The relative error was computed using Eq. (4.1):

E[i_x][i_z] = (D_s[i_x][i_z] − D_t[i_x][i_z]) / D_s[i_x][i_z]    (4.1)

Where D_s is the gold standard 2-D dose plane produced by a conventional MCNPX simulation of 10 million protons, and D_t is a corresponding 2-D dose plane of the same size produced by PMMC. The absorption array D_t may also be a product of MCNPX, which allows the statistical uncertainty between conventional MCNPX simulation runs to be compared to the uncertainty for PMMC runs. The indices i_x and i_z specify a grid location in terms of x and z, respectively. The distribution of relative error was visualized using a 2-D color map that showed the relative error as a function of x- and z-position. The number of proton histories was varied from 10⁴ to 10⁷ for both MCNPX and PMMC.

Lo et al. (2009) also outlined a method to summarize the effect of varying the number of proton histories. They defined a mean relative error that is computed by averaging the relative error in all the elements of the 2-D dose plane with values above a predefined threshold. The purpose of this threshold is to exclude relative error values that are undefined when the gold standard output array D_s[i_x][i_z] reaches zero. For these simulations, a threshold of 5% of the maximum dose was used. Computing the mean relative error allowed the quantification of the error introduced by varying the number of histories. The mean relative error was computed using Eq. (4.2):

E_ave = ( Σ_{i_z=1}^{n_z,d} Σ_{i_x=1}^{n_x,d} E[i_x][i_z] ) / (n_x,d n_z,d)    (4.2)

Where E_ave is the mean relative error, E[i_x][i_z] is the relative error for each element defined in Eq. (4.1), and n_x,d and n_z,d are the number of elements in the x- and z-directions in the 2-D dose plane, respectively.

The second phase of the validation process involved direct comparison of the output of conventional MCNPX simulations and PMMC. This was done using the three test problems

described in Sec. Several metrics are used to compare MCNPX and PMMC, including two-dimensional planar dose distributions, central-axis depth-dose distributions, integral depth-dose distributions, lateral profiles and Bragg peak positions. Two-dimensional planar dose distributions were compared using isodose maps. Pencil-beam central-axis depth-dose distributions were created by plotting the dose in the voxels along the central axis of the beams used in each of the problems. For the first two geometries, this is the z-axis. There are three different central axes used in the third problem. For the first two problems, integral depth-dose distributions were created by summing the dose in all of the voxels with the same depth position z. Lateral profiles were created at the depth of the Bragg peak to examine the lateral spread of the simulated beams. The magnitude and location of the Bragg peaks are directly compared as well.

4.3 Results from PMMC Validation

Uncertainty Analysis of MCNPX and PMMC

Fig. 4.4 and Fig. 4.5 show the distribution of relative error for simulations of 10⁴ and 10⁶ proton histories, respectively, using the homogeneous water phantom geometry discussed in Sec. A 100-MeV proton beam with a Gaussian spread of 1.0 cm was used for this test.
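The error metric of Eqs. (4.1)–(4.2) with the 5%-of-maximum threshold can be sketched in Python (a hypothetical helper, not the analysis code used in this work; the absolute value prevents over- and under-estimates from cancelling in the average, and the mean is taken over the thresholded elements only):

```python
import numpy as np

def mean_relative_error(D_s, D_t, threshold_frac=0.05):
    """Mean relative error between a gold-standard 2-D dose plane D_s and a
    test plane D_t, averaged over elements of D_s above the threshold."""
    mask = D_s > threshold_frac * D_s.max()          # exclude (near-)zero dose
    E = np.abs(D_s[mask] - D_t[mask]) / D_s[mask]    # Eq. (4.1), element-wise
    return E.mean()                                  # Eq. (4.2)
```

For example, if every element of the test plane is 10% below the gold standard, the metric returns 0.1 regardless of the plane's shape.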

Figure 4.4: Distribution of relative error as a function of position using the homogeneous water phantom geometry and 10⁴ proton histories. The beam energy was 100 MeV and the Gaussian spread of the beam was 1 cm: (a) PMMC with 10 thousand proton histories versus the MCNPX gold standard simulation (10 million proton histories) and (b) MCNPX with 10 thousand proton histories versus the MCNPX gold standard simulation (10 million proton histories). The color bar represents percent error from 0 to 10%. Values above 10% are represented using the same color as the color scale maximum.

The accuracy of PMMC was comparable to that of conventional MCNPX, as demonstrated by the two error distributions in Fig. 4.4. Fig. 4.5 shows that the statistical uncertainty decreased significantly for the simulations that used 10 million proton histories, as indicated by the increased number of voxels in the xz-plane showing less than 10% error.

This is expected because, as the number of proton histories n in a MC simulation increases, the uncertainty decreases at a rate of 1/√n [33]. Fig. 4.4 only shows a PMMC plot generated using a medium step-size (roughly 4% energy loss per step). Fig. 4.5 shows the results from PMMC using three step-sizes (roughly 0.4%, 4% and 40% energy loss per step). It is clear from Fig. 4.5(c) that using the largest step-size leads to greater errors and a larger level of uncertainty. This may be because using large step-sizes involves depositing energy loss at fewer locations during each history, leading to poorer statistics in each voxel. This issue is less apparent for the smaller step-sizes shown in Fig. 4.5(a) and Fig. 4.5(b). These error distributions resembled those for MCNPX shown in Fig. 4.5(d) much more closely. Fig. 4.5(a), which shows the error for the smallest step-size, has lower levels of uncertainty, but there is clearly some kind of artifact present in the dose distribution.
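The 1/√n scaling quoted above can be checked with a toy Monte Carlo estimator (illustrative only; the estimator and its parameters are assumptions, not related to the PMMC or MCNPX code):

```python
import numpy as np

def estimator_std(n_histories, trials=200, seed=0):
    """Empirical standard deviation of a simple MC mean estimator
    built from n_histories uniform samples, repeated over many trials."""
    rng = np.random.default_rng(seed)
    means = [rng.random(n_histories).mean() for _ in range(trials)]
    return np.std(means)

# Quadrupling the number of histories should roughly halve the uncertainty,
# since the statistical error scales as 1/sqrt(n).
ratio = estimator_std(1000) / estimator_std(4000)
```

The ratio comes out close to 2, the value predicted by the 1/√n law for a four-fold increase in histories.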

Figure 4.5: Distribution of relative error as a function of position using the homogeneous water phantom geometry and 10⁶ proton histories. The beam energy was 100 MeV and the Gaussian spread of the beam was 1 cm: (a) PMMC using the smallest available step-size versus the MCNPX gold standard simulation, (b) PMMC using a medium step-size versus the MCNPX gold standard simulation, (c) PMMC using the largest available step-size versus the MCNPX gold standard simulation and (d) a second MCNPX run with 10 million proton histories versus the MCNPX gold standard simulation. The color bar represents percent error from 0 to 10%.

The effect of the number of proton histories on the accuracy of the simulations was analyzed by computing the mean relative error for MCNPX and PMMC. Fig. 4.6 shows the mean relative error for MCNPX and PMMC for three step-sizes using the geometry and source specification discussed in the previous paragraph. The mean relative error of PMMC using the smallest step-size tracked the mean relative error of MCNPX most closely. The mean relative errors were larger for the other step-sizes, which was consistent with the results seen in Fig. As shown in Fig. 4.6(b) and 4.6(c), using the larger step-sizes can double the uncertainty levels found for the smallest step-size.

As discussed above, the uncertainty in the results of a Monte Carlo simulation should decrease at a rate of 1/√n. Therefore, the slope of the lines that pass through the data in Figure 4.6 should be −1/2 on the log-log plot. The lines for MCNPX have a slope of approximately −1/2. The lines for PMMC initially have a slope of −1/2, but the data values ultimately level off near 2-3%. This suggests there is a systematic error of roughly 2-3% between MCNPX and PMMC.

Figure 4.6: Relative error as a function of the number of proton histories simulated in the homogeneous water phantom geometry. The plots use a log-log scale: x-shaped markers show the relative error between MCNPX and the MCNPX gold standard, while square markers show the relative error between PMMC and the MCNPX gold standard. (a) PMMC using the smallest step-size, (b) PMMC using a medium step-size and (c) PMMC using the largest step-size.

Problem 1: Homogeneous Water Phantom

The second phase of the validation process involved direct comparison of the results from conventional MCNPX and PMMC. This subsection discusses the results from the homogeneous water phantom problem. Fig. 4.7 shows the central-axis depth-dose distributions of proton pencil beams from 60 to 100 MeV and 160 to 200 MeV. Fig. 4.8 shows the integral depth-dose distributions of proton pencil beams from 10 to 50 MeV and 110 to 150 MeV. Fig. 4.9 shows the lateral profiles at the Bragg peak for proton pencil beams from 60 to 100 MeV and 160 to 200 MeV. Figs. 4.7, 4.8 and 4.9 show that PMMC reproduces the results from MCNPX with a high degree of accuracy. In particular, the depth, shape and magnitude of the Bragg peak are modeled correctly to within 1% or 1 mm. The lateral profiles show that PMMC can accurately model the lateral spread of the beam. The integral depth-dose distributions show that there is greater disagreement for higher energy proton beams at moderate depths. This discrepancy is thought to be due to inaccurate modeling of dose deposition. It is most likely linked to secondary particle production, given that it is greater where proton energies are higher. Specifically, the amount of secondary particle production may be overestimated, resulting in too much energy leaving the geometry via secondary particles. The exact nature of this error is unknown at this time.

Problem 2: Fat-Muscle-Bone Phantom

Fig. 4.10 shows the central-axis depth-dose distributions for 80-, 100- and 120-MeV proton pencil beams in the fat-muscle-bone phantom. Fig. 4.11 shows the integral depth-dose distributions for the same pencil beams. Fig. 4.12 shows the lateral profiles at the Bragg peaks. Figs. 4.10, 4.11 and 4.12 show that PMMC reproduces the results from MCNPX with a high degree of accuracy. In particular, the depth, shape and magnitude of the Bragg peak are modeled correctly to within 1% or 1 mm.
The lateral profiles show that PMMC can accurately model the lateral spread of the beam in the multilayered phantom. Once again,

Figure 4.7: Comparison of the central-axis depth-dose distributions for pencil beams produced by MCNPX and PMMC using 10 million proton histories and the homogeneous water phantom. The solid lines are results from MCNPX. The circular markers show the results from PMMC: (a) pencil beams from 60-100 MeV, (b) pencil beams from 160-200 MeV.

Figure 4.8: Comparison of the integral depth-dose distributions for pencil beams produced by MCNPX and PMMC using 10 million proton histories and the homogeneous water phantom. The solid lines are results from MCNPX. The circular markers show the results from PMMC: (a) pencil beams from 10-50 MeV, (b) pencil beams from 110-150 MeV.

Figure 4.9: Comparison of the lateral profiles at the Bragg peak produced by MCNPX and PMMC using 10 million proton histories and the homogeneous water phantom. The solid lines are results from MCNPX. The circular markers show the results from PMMC: (a) pencil beams from 60-100 MeV, (b) pencil beams from 160-200 MeV. The profiles broaden as the initial energy of the protons increases.

Figure 4.10: Comparison of the central-axis depth-dose distributions for pencil beams produced by MCNPX and PMMC using 10 million proton histories and the fat-muscle-bone layered phantom. The solid lines are results from MCNPX. The other markers show the results from PMMC. The proton pencil beams have energies of 80, 100 and 120 MeV.

the integral depth-dose distributions show that there is greater disagreement for higher energy proton beams at moderate depths. There is also a small discontinuity in the dose distribution at the interfaces between layers. Currently, PMMC simulates boundary crossings by using the smallest available step-size. No attempt is made to distribute energy loss based on density or to correct for the different stopping powers on each side of the interface. For larger step-sizes, this can result in the Bragg peak being shifted to a different depth. This issue is an area for future development and is addressed in Sec. 5.3.

Figure 4.11: Comparison of the integral depth-dose distributions for pencil beams produced by MCNPX and PMMC using 10 million proton histories and the fat-muscle-bone layered phantom. The solid lines are results from MCNPX. The other markers show the results from PMMC. The proton pencil beams have energies of 80, 100 and 120 MeV.

Figure 4.12: Comparison of the lateral profiles at the depth of maximum dose produced by MCNPX and PMMC using 10 million proton histories and the fat-muscle-bone layered phantom. The solid lines are results from MCNPX. The other markers show the results from PMMC. The proton pencil beams have energies of 80, 100 and 120 MeV.

Table 4.3: Specifications of the test platform

Processor          Intel(R) Core(TM)2 Duo CPU 2.13 GHz
Memory             4 GBytes RAM
Cache              3072 kBytes
Operating system   Ubuntu LTS
Compiler           gcc

Problem 3: Water Phantom with Multiple Gaussian Beams

Fig. 4.13 shows the central-axis depth-dose distributions for the three beam axes used in the third problem. Fig. 4.14 shows the isodose lines from MCNPX and PMMC using 2-D planar dose distributions in the y = 0 plane. Figs. 4.13 and 4.14 show that PMMC reproduces the results from MCNPX with a high degree of accuracy. Fig. 4.13 shows that beams can be accurately modeled from many angles by PMMC. Fig. 4.14 shows that the isodose contours are in agreement to better than 1 mm down to dose levels as low as 10% of the maximum dose. The discrepancy away from the Bragg peak present in the previous two problems is still present here, but is much less noticeable due to the relatively high dose at the beam intersection location.

4.4 Performance of PMMC

The execution speed of PMMC was compared to the execution speed of MCNPX. The two programs were executed on an identical test platform, the specifications of which are shown in Table 4.3. For a complete end-to-end runtime comparison, the runtime for PMMC includes file I/O, the MC simulation, and the post-processing operations that create the output file. PMMC was compiled using full compiler optimization and the g++ linked libraries. Table 4.4 shows the total simulation execution times for the test problem used in the uncertainty analysis in Sec. Table 4.4 also shows the total simulation execution times for the bone-muscle-fat phantom problem using the 120-MeV pencil beam source. Each simulation used 10⁶ protons, which was enough to guarantee a statistically significant

Figure 4.13: Comparison of the central-axis depth-dose distributions produced by MCNPX and PMMC using 10 million proton histories and the water phantom problem with multiple Gaussian beams. The solid lines are results from MCNPX. The circular markers show the results from PMMC: (a) profile along the axis defined by x = 1, y = 0, (b) profile along the axis defined by y = 0, z = 1 and (c) profile along the axis defined by y = 0, z = x + 2.

Figure 4.14: Comparison of the two-dimensional isodose profiles produced by MCNPX and PMMC using 10 million proton histories and the water phantom problem with multiple sources: (a) a view of the entire phantom showing isodose lines above 10% of the maximum dose and (b) a close-up view of the peak region. The solid lines are results from MCNPX. The x-shaped markers are from PMMC.

Table 4.4: Runtime comparison (in seconds) of MCNPX and PMMC for 10⁶ proton histories.

            Homogeneous Absorber   Bone-Muscle-Fat Model
MCNPX       4502 (1.00)            4939 (1.00)
PMMC (S)    2986 (1.51)            2187 (2.26)
PMMC (M)    336 (13.4)             249 (19.8)
PMMC (L)    59 (76.3)              42 (117.6)

Three step-sizes were tested: a small (S) step-size with approximately 0.4% energy loss per step, a medium (M) step-size with approximately 4% energy loss per step and a large (L) step-size with approximately 40% energy loss per step. The number outside parentheses is the runtime in seconds. The number in parentheses is the speedup and is equal to the MCNPX runtime divided by the PMMC runtime.

Monte Carlo solution. Simulation execution times are shown for PMMC using three step-sizes (roughly 0.4%, 4% and 40% energy loss per step). As one would expect, the largest step-size produces the largest speedup while the smallest step-size produces the smallest. As shown in the table, PMMC performed best in the multilayered bone-muscle-fat phantom geometry, producing speedups of up to 117-fold over conventional MCNPX simulations. The speedup was slightly smaller in the homogeneous water phantom.

4.5 Conclusions

Using the MCNPX program as the gold standard, the proton macro Monte Carlo (PMMC) code achieved a speed gain of over 100-fold compared to conventional MCNPX simulations. The system was tested on three geometries: a homogeneous water phantom, a multilayered model composed of fat, muscle and bone, and a water phantom with multiple Gaussian beams. Depth-dose distributions, lateral dose profiles and two-dimensional isodose profiles generated by MCNPX and PMMC were compared at 10 million proton histories for each geometry. The resulting distributions from PMMC agreed with MCNPX to better than 1% or 1 mm in the high dose region near the Bragg peaks of the proton beams. A maximum error of 5-10% was seen away from the Bragg peak for higher energy beams.

Chapter 5 Implications and Future Work

5.1 Introduction

The final chapter of Part I of this dissertation discusses areas for future development of proton macro Monte Carlo. First, improvements to the system used to generate the MMC library are discussed. These include improving the estimation of energy loss, optimization of the library, the inclusion of data for secondary particle transport, and reducing the size of the library by using histogram structures for parts of the phase space. Second, areas for improvement to the global code are discussed. These include implementing a more sophisticated boundary-crossing algorithm, allowing for transport in regions of variable density, implementing particle transport for secondary charged particles, and improving the estimate of the statistical uncertainty of the code. Finally, the implications of PMMC are discussed for proton therapy and other physics problems.

5.2 Improvements to Local Geometry

Storing Dose Deposition Data

As discussed in Sec. , the energy loss E_loss during a given pre-computed step is not a value produced by MCNPX and stored in the ssr file. Rather, E_loss is inferred using energy conservation: it is computed with Eq. 2.18, which assumes the energy loss equals the fraction of the initial energy remaining after accounting for all the escaping particles. One point not mentioned in Sec. is that MCNPX periodically yields histories for which E_loss is negative, implying that MCNPX does not attempt to conserve energy when sampling the energies of secondary particles. As a result, computing E_loss from the E_frac values of the escaping particles is flawed. It is somewhat surprising that PMMC works as well as it does given how E_loss is computed; the error must be small for the one-proton histories that are most common, given that the results in Ch. 4 are quite good.
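In code, the conservation-based bookkeeping looks something like the following sketch (the function and history format are hypothetical, not PMMC's actual data structures; it only illustrates how a negative E_loss can arise):

```python
def infer_energy_loss(e_initial, escaping_e_fracs):
    """Infer energy deposited in the local geometry by energy conservation:
    whatever fraction of the initial energy does not escape is assumed lost.
    Because MCNPX does not strictly conserve energy when sampling secondary
    particles, the result can occasionally be negative."""
    e_escaping = e_initial * sum(escaping_e_fracs)
    return e_initial - e_escaping

# A well-behaved history: 92% of the energy escapes, so 8% was deposited.
print(round(infer_energy_loss(100.0, [0.90, 0.02]), 6))  # 8.0

# A pathological history: the sampled secondaries carry out more energy
# than went in, so the inferred energy loss is negative.
print(infer_energy_loss(100.0, [0.95, 0.10]) < 0)  # True
```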

There are a number of ways to address this problem. Some radiation transport codes like FLUKA [34, 35] and GEANT [18] allow the user to score this information on a history-by-history basis or to infer it from more complete history data that include interaction locations in addition to the exit phase space. Unfortunately, the native capabilities of MCNPX do not allow for this sort of analysis on a history-by-history basis. One option is to modify the MCNPX source code to produce files with this information. Another option is to use an energy deposition tally, divide the cylindrical local geometry into many sections, and determine the relative energy deposition in each section. This information could be stored as a vector in the pre-computed step data structure and used to bias the dose deposition algorithm in the same way that stopping powers are used now. Perhaps the best option would be to use FLUKA or GEANT for the production of the pre-computed steps. This option would require the most work but would give the developer total control of the information needed to estimate E_loss accurately.

Optimizing Energies and Step-sizes

Creating a library of pre-computed steps is a balancing act between limiting the size of the library and ensuring it contains enough information to produce accurate results. The libraries used in this thesis work generally used 75 different energies and 3 different step-sizes per material. The choice of the energies and step-sizes was not optimized. Rather, a trial-and-error process was employed until a suitably accurate MMC library was produced. Given the limited amount of experimentation used to create the libraries thus far, it is possible that a fully optimized library would be much smaller and more accurate. Only further experimentation will determine if this is the case. The MCNPX input files used to generate the library may not be fully optimized either.
For instance, little effort was put into determining how many histories should be used to describe the phase space. The pre-computed steps were generated using one million histories,

of which a maximum of thirty thousand were saved in the pre-computed step data structure. The reason one million histories were used (and not thirty thousand) was to improve the estimate of the p_0, p_1 and p_m values that determine the relative likelihood of each type of history. Further investigation would determine the ideal number of histories to store in a pre-computed step and how many are needed to accurately estimate p_0, p_1 and p_m. Finally, there are a number of physics settings that can be varied in MCNPX. Several of these settings were explored as part of creating the MMC library, but it is possible that they could be optimized as well.

Adding Heavy Charged Particles to the MMC Library

At present, PMMC is only capable of transporting primary source protons and secondary protons produced during pre-computed steps. However, creating macro Monte Carlo libraries for other secondary charged particles could be done with few modifications to the existing system, which was designed with this sort of extension in mind. First, one would need to add an additional level of hierarchy to the MMC library: above the level for materials would be a level for particle types, with each type of particle represented by a unique data structure. In addition, one would need to change the scripts that automatically create MCNPX input files to allow for different source particles. Finally, this modification would also require stopping-power data tables for each combination of particle and material. These tables are used to determine how long the local geometries should be to obtain a particular level of energy loss; they are also used during the energy deposition process (see Sec. ).

Histograms for the One-Proton Histories

While the list-of-individual-histories method for storing a pre-computed step proved to be very successful, it requires the most storage space of all of the methods discussed.
One way to reduce the size of the MMC library would be to use histograms to describe part of the exiting proton phase space. As discussed in Sec. , the phase space of protons exiting the local geometry is composed of multiple superimposed phase spaces. The component that is heavily peaked and consists of protons that undergo no nuclear interactions is relatively simple: all of these protons exit through the opposite face of the cylinder in a narrow range of energies and radii. This component of the phase space is well suited to being modeled with the one-dimensional histograms used in electron MMC, and the one-dimensional histogram structure is significantly more efficient in terms of storage space. The resulting pre-computed step would be a hybrid structure: the most common type of history (one proton with no nuclear interactions) would be represented by one-dimensional histograms, while the other types of histories (zero protons, one proton with nuclear interactions, multiple protons) would continue to be represented using lists of histories.

5.3 Improvements to Global Geometry

Dose Deposition across Boundaries

The current boundary-crossing method used by PMMC involves switching to very small step-sizes at the boundary, then resuming larger steps once the boundary is crossed. The method is very effective so long as the minimum step-size is much smaller than the voxel dimensions of the dose and global geometry grids. However, it is also highly inefficient: in geometries with frequent boundary crossings, this method slows the MMC transport process considerably. Svatos (1998) [7] was correct in saying that finding an effective energy deposition algorithm for crossing multiple regions is the single greatest obstacle to fast, efficient MMC transport. Proton transport through true anatomical geometries features frequent boundary crossings, so modeling steps that cross boundaries accurately is essential to producing

accurate dose distributions. Finding a way to accurately transport particles across multiple regions in a single step is challenging because the exit phase space for a particular pre-computed step is only valid for one material. Using pre-computed steps created for one material in another material inevitably produces errors. These errors can be mitigated by using charged-particle physics to make corrections. The method used by Svatos (1998) involved determining the proton pathlength in each region and using the stopping power in each region to estimate the true energy loss during the pre-computed step. Such a method may work very well for PMMC, because the path of a proton through a cylindrical step is relatively straight, which makes accurately estimating the pathlength of a proton through different regions feasible. Implementing accurate boundary crossing for PMMC should be the first priority for future development.

Dose Deposition through Variable-Density Regions

At present, PMMC is only capable of simulating proton transport through regions with uniform density. In realistic anatomical geometries, the density of the voxels within one region can vary considerably. Accurate transport through such regions will require an algorithm capable of accounting for the inhomogeneity of the density over an MMC step. Such an algorithm might account for variable density by stretching or compressing the cylindrical geometry of an MMC step to reflect increased or decreased energy loss. Alternatively, the step could remain the same size but account for the changed energy loss by using stopping-power data. Determining the average density along the proton path can be done using ray tracing or random sampling. Like accurate boundary crossing, accurate dose deposition through regions of variable density should be a high priority for future development as well.
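A pathlength-weighted correction of the kind Svatos (1998) describes could be sketched as follows (the function, its inputs and the numbers are illustrative assumptions, not the dissertation's implementation):

```python
def boundary_corrected_energy_loss(e_loss_ref, s_ref, segments):
    """Correct a pre-computed step's energy loss when the step crosses
    material boundaries. The step was pre-computed in a single reference
    material with stopping power s_ref; here the (assumed nearly straight)
    proton path is split into segments, each a (pathlength_cm,
    stopping_power) pair for the material actually traversed."""
    total_path = sum(length for length, _ in segments)
    # Pathlength-weighted mean stopping power along the actual path.
    s_eff = sum(length * s for length, s in segments) / total_path
    # Scale the reference energy loss by the effective stopping power.
    return e_loss_ref * s_eff / s_ref

# A 2 cm step pre-computed in water (stopping power 1.0, arbitrary units)
# that actually spends 1 cm in water and 1 cm in bone (stopping power 1.6):
print(round(boundary_corrected_energy_loss(10.0, 1.0, [(1.0, 1.0), (1.0, 1.6)]), 6))  # 13.0
```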

Full Transport of Additional Particles

As discussed in Sec. , implementing the transport of secondary heavy charged particles like alpha particles and deuterons will be relatively easy because PMMC was designed with multi-particle transport in mind. The primary modifications would involve updating the MMC library data structure to allow for multiple particles and updating the MMC library loading functions to permit the expanded library to be loaded. The PMMC transport algorithm would need to be updated as well: instead of depositing the energy of secondary charged particles locally, the updated algorithm would treat secondary charged particles like secondary protons and add them to the queue used to store secondary particles for future transport. The rest of the MMC algorithm can be used for heavy charged particles without further modification.

Uncertainty Analysis

The uncertainty analysis method used in Sec. did not involve assessing the absolute uncertainty of PMMC. Rather, a relative uncertainty was computed with respect to MCNPX. An important part of performing Monte Carlo is determining the absolute uncertainty of the simulation results. The uncertainty can be used to determine how many particles per beamlet must be run to obtain the statistically significant results needed for treatment planning. Furthermore, performing uncertainty analysis will allow for a better estimate of the speed gain associated with using PMMC. At present, the performance analysis is done using PMMC and MCNPX simulations with the same number of histories. However, it is unlikely that the uncertainties of the PMMC and MCNPX results are equal for the same number of particles. A better measure of speed gain would involve comparing the calculation times for the two codes at equal levels of uncertainty rather than at equal numbers of histories.
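One standard way to compare codes at equal uncertainty is through the Monte Carlo efficiency ε = 1/(σ²T), which is asymptotically independent of the number of histories; a sketch with hypothetical numbers (the uncertainties below are invented for illustration):

```python
def efficiency(rel_uncertainty, runtime_s):
    """Monte Carlo efficiency 1/(sigma^2 * T). Because sigma^2 scales as
    1/N while T scales as N, this figure of merit is (asymptotically)
    independent of the number of histories, so two codes can be compared
    without matching their uncertainties first."""
    return 1.0 / (rel_uncertainty**2 * runtime_s)

# Hypothetical: MCNPX reaches 1% relative uncertainty in 4939 s; PMMC
# reaches 2% relative uncertainty in 42 s for the same problem.
eps_mcnpx = efficiency(0.01, 4939.0)
eps_pmmc = efficiency(0.02, 42.0)

# The speed gain at equal uncertainty is the efficiency ratio, which here
# is smaller than the raw 117x runtime ratio because PMMC's result is
# noisier for the same number of histories.
print(round(eps_pmmc / eps_mcnpx, 1))  # 29.4
```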

5.4 Implications of PMMC

PMMC in Proton Radiotherapy Treatment Planning

The PMMC code as it exists now is not capable of performing proton transport in clinically realistic geometries, because PMMC cannot perform proton transport through regions with variable density. However, once this capability has been implemented, PMMC will be able to compute dose distributions for clinically realistic problems. All of the other aspects of computerized treatment planning have been implemented, including the use of clinically relevant materials; the ability to load a complex, voxel-based geometry like a CT dataset; and the ability to use many proton beams with variable positions, directions and energies. It is too soon to say whether PMMC will ever prove fast and accurate enough to replace conventional Monte Carlo simulations or pencil-beam algorithms for clinical treatment planning. That said, given the promising speed gain found in this thesis work, it is possible that MMC will prove a viable alternative to other methods of proton therapy dose calculation in the future. Furthermore, PMMC does not need to replace conventional Monte Carlo in order to find use in clinical treatment planning. Instead, PMMC could be used in parallel with traditional Monte Carlo during the optimization process. Many treatment planning software packages use dual dose calculation methods to perform treatment planning optimization: a fast algorithm is used for most of the optimization iterations to speed up the overall process, while a slower, more accurate algorithm is used for the final dose calculation. Proton MMC and traditional Monte Carlo could work together in this manner to perform optimization for intensity-modulated proton therapy.

MMC on a Graphics Processing Unit Computer Architecture

In addition to being faster than traditional Monte Carlo, macro Monte Carlo is different in another key way. Traditional Monte Carlo algorithms make frequent use of conditional logic

(if-then-else statements, while loops, etc.) during the particle transport process [36]. By comparison, the MMC algorithm is largely composed of serial, non-branching operations. This difference is of little importance when computations are performed on traditional CPU-based computer systems. However, computer systems based on graphics processing units (GPUs) operate differently: GPUs are designed to execute long sequences of non-branching operations very quickly but are slowed considerably when branching operations are introduced. Such operations result from the conditional logic statements that are frequently used in traditional Monte Carlo. It is likely that MMC could be implemented on a GPU-based architecture more easily than traditional MC. Furthermore, the speed gain achieved on a GPU system would likely be larger for MMC.

MMC for Other Medical Physics Problems

Thus far, macro Monte Carlo methods have only been applied to electron beam and proton beam radiotherapy dose calculation. However, the macro Monte Carlo method can be applied to any problem that uses Monte Carlo. In fact, there are a number of problems in which MMC would perform even better than it does for dose calculation. Using MMC for dose calculation involves a trade-off: the large, pre-computed steps improve the simulation speed at the cost of losing information about what happens during the steps. The problem is that the information lost during a pre-computed step (the pattern of energy deposition inside the local geometry) is precisely the information needed to accurately compute dose deposition. In electron and proton MMC, this information is partially recreated by using the energy of the exiting particles to infer the energy loss during the pre-computed step. However, even if the energy loss is computed accurately, it is impossible to determine the true dose deposition pattern for a given history.
Rather, the dose deposition pattern is modeled through some other means, such as using stopping-power data to distribute the energy loss in a physically realistic manner.

While the loss of information during an MMC step is a problem in dose calculation, there are a number of medical physics problems in which the lost information is of no consequence. One example is modeling the beam line for a proton radiotherapy system. The beam line consists of a proton source from a particle accelerator and a number of beam-monitoring and beam-shaping devices, including beam-scattering inserts, range modulators, collimators, jaws and beam compensators. When modeling the beam line, one is not concerned with the dose deposited in the beam-shaping devices. Rather, the goal of simulating a beam line is to determine the final phase space of protons at the end of the beam line, which means the information lost during the pre-computed steps is not needed at all. This thesis work has demonstrated that PMMC can model beam range, lateral spread and energy very accurately; therefore, PMMC would be ideally suited for beam line physics calculations. In fact, one could even replace the cylindrical local geometry with separate local geometries for each of the beam line elements. Each MMC step would cross one beam line element, and the entire transport process for one history would involve only a few MMC steps. Such an algorithm would be very fast and accurate. There are a number of other problems that can be modeled this way. For instance, Hill (2012) [37] developed a fan-beam proton therapy delivery system using a unique base-2 binary multileaf collimator array. One could make a library of pre-computed steps using the collimator leaves as the local geometry for the problem. The library could then be used to quickly transport protons through the multileaf collimator assembly one leaf at a time. Again, the behavior of protons within the leaves is not of interest.
Only the final phase space of protons exiting the fan-beam assembly is required.

Another example of a problem where MMC would perform well is modeling neutron scatter inside a radiotherapy vault. In this problem, the local geometry would consist of neutrons impinging on semi-infinite concrete walls. The phase space of neutrons exiting the walls would be stored in the MMC library, which would then be used to transport neutrons from wall to wall as if the neutrons were bouncing around the room. The behavior of the neutrons inside the walls of the vault is not of interest; only the neutron fluence distribution at the exit to the vault is desired. Once again, the ability to ignore the behavior of neutrons during the pre-computed steps allows MMC to rapidly model neutron scatter without compromising accuracy.

Finally, there are a number of emerging applications that use visible or near-infrared light to perform medical imaging and therapy. Monte Carlo simulations are used to model the propagation of light through living tissues, and there are a number of problems in light propagation MC that are well suited for MMC. The use of MMC in light propagation MC simulations is the subject of Part II of this dissertation.
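The surface-to-surface transport pattern shared by the beam-line, collimator and vault examples can be caricatured in a few lines of code; the library format and its contents below are invented for illustration, but they show the essential MMC move of replacing all in-material physics with a single draw from a pre-computed exit phase space:

```python
import random

# Hypothetical pre-computed library: for each (material, energy bin), a
# list of exit records sampled once with a full Monte Carlo code. Each
# record is (survived, exit_energy_fraction); the physics inside the wall
# or leaf is never re-simulated at run time.
library = {
    ("concrete", "fast"): [(True, 0.4), (True, 0.3), (False, 0.0), (True, 0.5)],
}

def bounce(material, energy_bin, energy, rng):
    """One macro step: sample an entire wall interaction from the library."""
    survived, e_frac = rng.choice(library[(material, energy_bin)])
    return survived, energy * e_frac

# Follow one history from wall to wall until it is absorbed or degraded.
rng = random.Random(1)
energy, alive, n_bounces = 2.0, True, 0
while alive and energy > 0.01:
    alive, energy = bounce("concrete", "fast", energy, rng)
    n_bounces += 1
print(alive, n_bounces)  # the whole history takes only a handful of macro steps
```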

Part II Light Propagation Macro Monte Carlo

Chapter 6 Introduction

6.1 Current Uses of Light Propagation Monte Carlo

Ever since Wilson and Adam (1983) [38] introduced the use of Monte Carlo (MC) simulations to model light propagation through tissues, the method has been used to investigate a wide variety of therapeutic and diagnostic applications. For instance, MC has been used to model the delivery of photodynamic therapy (PDT) to treat both cancerous and non-cancerous targets [39-41]. Photodynamic therapy begins with the administration of a chemical photosensitizer, followed by irradiation of a target volume using light of the specific wavelength needed to activate the photosensitizer. The activation of the photosensitizer leads to a number of biological effects that ultimately cause cell death and the destruction of the target tissue. Advances have allowed the technique to be used in a growing number of complex anatomical locations. In addition to external treatments for skin lesions and other dermatological diseases, a number of interstitial, intracavitary and intra-operative treatments have been developed for a wide variety of cancer sites, including basal-cell carcinoma and cancers of the cervix, lungs, esophagus, stomach, head and neck, bladder, brain and prostate [41]. Regardless of the treatment site, it is essential to accurately model the propagation of light through complex treatment geometries as part of the treatment planning process. Monte Carlo simulations have been shown to accurately model light propagation in such geometries [38, 42].

Monte Carlo simulation of light propagation has also been used extensively in the fields of near-infrared spectroscopy (NIRS) and imaging (NIRSI) [43-47]. A common application of NIRS and NIRSI is spectroscopic imaging of the brain to measure tissue oxygenation in a noninvasive manner. Another technique related to NIRS is diffuse optical tomography, the goal of which is to determine the absorption and scattering properties inside a tissue [48].
This is accomplished by sending pulses of light into a tissue and measuring the intensity and phase of the light that escapes at different locations. The measurements are used to reconstruct an image of the underlying tissue. The infrared light used in NIRS and NIRSI is strongly scattered in most human tissues, resulting in highly diffusive light propagation. This makes good reconstruction difficult and places a high burden on accurate modeling of light propagation for image reconstruction. Monte Carlo is typically used because of its ability to accurately model light propagation in complex geometries. However, the image reconstruction requires iterative, nonlinear optimization techniques, so light propagation inside the tissue of interest must be modeled many times before a reconstructed image can be made. Therefore, fast MC simulations are a priority as well.

6.2 Light Propagation Monte Carlo

Description of the Transport Medium

Monte Carlo simulations of light propagation are based on the methods introduced by Wilson and Adam (1983) [38]. Compared to the physics used to describe the behavior of x-ray photons and gamma rays, visible-light propagation physics is much simpler. For instance, x-ray photons change energy as they undergo Compton scattering [3]. Visible-light photons do not undergo such energy losses: during an interaction, a visible-light photon is either scattered or absorbed. This means the physical properties of the scattering and absorbing media can be described by relatively few parameters. Each material in a light propagation MC simulation is described by four parameters: an absorption coefficient µ_a [cm^-1], a scattering coefficient µ_s [cm^-1], an anisotropy factor g [unitless] and an index of refraction n [unitless]. The absorption and scattering coefficients describe how far a photon will travel before undergoing one of these two types of interactions: the mean distance between scattering events is 1/µ_s and the mean distance to absorption is 1/µ_a. The anisotropy factor g describes the extent to which scattering is forward-directed (g close to 1), reverse-directed (g close to -1) or isotropic (g = 0). Specifically, the Henyey-Greenstein probability density function describes the distribution of the scattering deflection angle θ using Eq. (6.1):

    P(cos θ) = (1 - g^2) / [2 (1 + g^2 - 2g cos θ)^(3/2)]    (6.1)

The index of refraction n describes how light propagates through the medium; it is equal to c/v, where c is the speed of light in a vacuum and v is the speed of light in the medium. As discussed in Sec. , n is used to compute the direction of a photon packet after refraction and to compute the total internal reflection angle for an interface.

Optical Photon Transport

Transport within a Homogeneous Region

Optical photons are not transported individually in light propagation Monte Carlo. Rather, so-called photon packets are used. A photon packet can be thought of as many photons traveling together and following the exact same path. Each photon packet begins with a weight w equal to 1.0 and loses weight as it is transported through the geometry; the lost weight accounts for the loss of photons due to absorption, so the weight of the photon packet is proportional to photon fluence. This transport technique allows the photon packets to propagate farther than individual light photons and ultimately reduces the variance of the MC simulation. Each photon packet is described by seven parameters: three for position (x, y, z), three for trajectory (u_x, u_y, u_z) and one for weight (w). The transport of photon packets takes place in a complex geometry composed of many homogeneous regions of varying size. Within a homogeneous region, photon packets are transported using a three-step process that Wang et al. (1995) [42] call hop-drop-spin. The first step in this process is the hop, in which the photon packet takes a step along its

current trajectory. The length of this step is randomly sampled with Eq. (6.2):

    s = -ln(ξ) / µ_t    (6.2)

where s [cm] is the step length, ξ is a random number sampled on the interval [0, 1], and µ_t = µ_a + µ_s is the total photon interaction coefficient. The derivation of this formula can be found in Wang et al. (1995). The position of the photon packet is updated using Eq. (6.3):

    x ← x + u_x s
    y ← y + u_y s    (6.3)
    z ← z + u_z s

It is also prudent to define the step length as a dimensionless quantity, since µ_t varies from region to region and photon packets periodically cross from one region into another. The dimensionless step length s_dim is sampled using Eq. (6.4):

    s_dim = -ln(ξ)    (6.4)

such that s_dim = s µ_t. The position of the photon packet would then be updated using Eq. (6.5):

    x ← x + u_x s_dim / µ_t
    y ← y + u_y s_dim / µ_t    (6.5)
    z ← z + u_z s_dim / µ_t

The next step in the three-step process is the drop, which accounts for absorption. The new position (x, y, z)

represents an interaction site where either absorption or scattering has occurred. The weight of the photon packet is decremented after each step to account for absorption using Eq. (6.6):

    w ← [µ_s / (µ_a + µ_s)] w = (µ_s / µ_t) w    (6.6)

where µ_s / µ_t is the probability that a photon packet has not been absorbed. Finally, the last step in the three-step process is the spin, which accounts for scattering. Scattering is modeled by changing the trajectory of the photon packet. The trajectory change is characterized by a deflection angle θ (0 ≤ θ ≤ π) and an azimuthal angle ψ (0 ≤ ψ ≤ 2π). These two angles are sampled statistically. The deflection angle θ is described by the Henyey-Greenstein function shown in Eq. (6.1). It can be randomly sampled to obtain a value for cos(θ) using Eq. (6.7):

    cos(θ) = [1 / (2g)] { 1 + g^2 - [ (1 - g^2) / (1 - g + 2gξ) ]^2 }    if g ≠ 0
    cos(θ) = 2ξ - 1                                                      if g = 0    (6.7)

Recall that ξ is a random number sampled on the interval [0, 1]. The azimuthal angle is uniformly distributed on the interval [0, 2π] and is sampled using Eq. (6.8):

    ψ = 2πξ    (6.8)

Once the deflection angle and azimuthal angle have been sampled, the trajectory is updated using Eq. (6.9):

    u_x' = sin θ (u_x u_z cos ψ - u_y sin ψ) / sqrt(1 - u_z^2) + u_x cos θ
    u_y' = sin θ (u_y u_z cos ψ + u_x sin ψ) / sqrt(1 - u_z^2) + u_y cos θ    (6.9)
    u_z' = -sqrt(1 - u_z^2) sin θ cos ψ + u_z cos θ

If the trajectory is nearly parallel to the z-axis (i.e., |u_z| very close to 1), then Eq. (6.10) should be used instead:

    u_x' = sin θ cos ψ
    u_y' = sin θ sin ψ    (6.10)
    u_z' = SIGN(u_z) cos θ

where SIGN(u_z) returns 1 when u_z is positive and -1 when u_z is negative. The hop-drop-spin process is repeated until the weight w drops below a pre-defined threshold or until the photon packet encounters a boundary. If the weight of the photon packet drops below the threshold value, it is declared dead and the hop-drop-spin process is halted; at this point, the light propagation MC code begins transporting a new photon packet. The next section discusses what happens when a photon packet encounters a boundary.

Transport at the Interface between Two Regions

As a photon packet propagates through an absorber geometry with multiple regions, it will periodically encounter boundaries between different regions. A boundary encounter occurs when the dimensionless step length s_dim produced by Eq. (6.4) is greater than the dimensionless distance along the current trajectory to the nearest boundary, d_b µ_t. The distance to the nearest boundary, d_b, is in centimeters but is rendered dimensionless by multiplying it by µ_t for the current region. The methods used to determine d_b depend upon the geometry being used in the MC simulation. For the sake of example, let us consider the geometry used in MCML, a widely accepted light propagation MC code that propagates photons through simple, layered geometries. The geometry used in MCML consists of layers that are semi-infinite in the x- and y-directions. The interfaces between layers lie in planes parallel to the xy-plane and perpendicular to the z-axis. A diagram of the MCML geometry is

shown in Fig. 6.1.

Figure 6.1: A diagram of the Cartesian coordinate system used in MCML, showing a photon beam incident on Layers 1 through n. The y-direction is outward. Adapted from Wang et al. (1995) [42].

In this geometry, the distance along the current trajectory to the nearest boundary, d_b, is computed using Eq. (6.11):

    d_b = (z_0 - z) / u_z    if u_z < 0
    d_b = ∞                  if u_z = 0    (6.11)
    d_b = (z_1 - z) / u_z    if u_z > 0

where z_0 and z_1 are the upper and lower boundaries, respectively, of the current layer. If d_b µ_t < s_dim, then a boundary encounter has occurred. First, the photon packet is transported to the boundary using Eq. (6.12):

    x ← x + u_x d_b
    y ← y + u_y d_b    (6.12)
    z ← z + u_z d_b
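Eqs. (6.11) and (6.12) translate directly into code; a minimal sketch for the MCML layer geometry (function and variable names are assumed):

```python
import math

def distance_to_boundary(z, uz, z0, z1):
    """Distance (cm) along the current trajectory to the nearest layer
    boundary, per Eq. (6.11). z0 and z1 are the upper and lower
    boundaries of the current layer."""
    if uz < 0:
        return (z0 - z) / uz
    if uz > 0:
        return (z1 - z) / uz
    return math.inf  # moving parallel to the layer: no boundary ahead

def move_to_boundary(pos, direction, d_b):
    """Transport the photon packet to the boundary, per Eq. (6.12)."""
    x, y, z = pos
    ux, uy, uz = direction
    return (x + ux * d_b, y + uy * d_b, z + uz * d_b)

# A packet at depth z = 0.5 cm heading straight down in a layer spanning
# [0, 1] cm is 0.5 cm from the lower boundary:
d_b = distance_to_boundary(0.5, 1.0, 0.0, 1.0)
print(d_b)  # 0.5
print(move_to_boundary((0.0, 0.0, 0.5), (0.0, 0.0, 1.0), d_b))  # (0.0, 0.0, 1.0)
```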

Next, the dimensionless step length s_dim is updated using Eq. (6.13):

    s_dim ← s_dim - d_b µ_t    (6.13)

Now that the photon packet is at the boundary, it is necessary to determine whether the photon packet will be reflected from the boundary or transmitted to the next region. It is also possible that the photon packet will undergo refraction as a result of transmission. First, it is necessary to determine the probability of internal reflection R(α_i), where α_i is the current angle of incidence onto the boundary. This angle is calculated using Eq. (6.14):

    α_i = cos^-1(|u_z|)    (6.14)

If the photon packet is transmitted, Snell's law can be used to determine the angle of transmission using Eq. (6.15):

    n_i sin α_i = n_t sin α_t    (6.15)

One consequence of Snell's law is that there is a critical angle α_c = sin^-1(n_t / n_i) beyond which refraction cannot occur. If α_i ≥ α_c, total internal reflection occurs and R(α_i) = 1. If α_i < α_c, then the probability of internal reflection R(α_i) is computed using Fresnel's formula in Eq. (6.16):

    R(α_i) = (1/2) [ sin^2(α_i - α_t) / sin^2(α_i + α_t) + tan^2(α_i - α_t) / tan^2(α_i + α_t) ]    (6.16)

The photon packet is either internally reflected or transmitted at the boundary. To determine which occurs, a random number ξ is sampled on the interval [0, 1] and compared to the internal reflectance probability R(α_i). If ξ ≤ R(α_i), then the photon packet is reflected internally. The direction of the photon packet must be updated such that the

angle of incidence is equal to the angle of reflection. For the layered geometry used in MCML, the new direction is calculated using Eq. (6.17):

    u_x ← u_x
    u_y ← u_y    (6.17)
    u_z ← -u_z

If ξ > R(α_i), then the photon packet is transmitted to the region on the other side of the interface. The direction of the refracted photon packet must be updated using Snell's law with n_i and n_t, the indices of refraction for the previous medium and current medium, respectively. For the layered geometry used in MCML, the new direction is calculated using Eq. (6.18):

    u_x ← u_x n_i / n_t
    u_y ← u_y n_i / n_t    (6.18)
    u_z ← SIGN(u_z) cos α_t

After the reflection or transmission process, the light propagation continues with the hop-drop-spin process using the residual step length s_dim computed in Eq. (6.13).

Monte Carlo in Multilayered Media (MCML)

MCML is a Monte Carlo code that models steady-state light transport in multilayered tissues [42]. The code models photon pencil beams impinging on semi-infinite layers stacked perpendicularly to the beam axis. Each layer is characterized by a thickness d and the optical properties discussed in Sec. (µ_a, µ_s, g and n). More complicated planar sources can be modeled by convolving the photon distribution obtained for a single pencil beam. The code can also be modified to allow other sources, such as point sources embedded in the layered geometry. The output of the code consists of three quantities that are scored during the MC transport process: reflectance, transmittance and absorption. Diffuse reflectance and

transmittance are tabulated in terms of escape radius r and angle α and measured in units of cm⁻² steradian⁻¹. Absorption is tabulated in terms of the radial location r and depth location z and measured in units of cm⁻³. Reflectance, transmittance and absorption are all normalized by the total number of photon packets simulated. MCML transports photon packets using the three-step hop-drop-spin process described in Sec. When photon packets encounter a boundary between layers, the methods discussed in Sec. are used to determine if reflection or transmission occurs. These methods are also used at the upper and lower boundaries of the layered geometry to determine if a photon packet escapes from the geometry.

6.3 Prior Work on Accelerating Light Propagation Monte Carlo

To date, attempts to accelerate MC simulations for modeling light propagation in tissues have been limited to using specialized computational architectures. A number of studies have focused on software parallelization schemes that divide the simulation process into many parts, and then run the individual parts on many computers simultaneously [49, 50]. Other studies have investigated the use of specialized hardware architectures to accelerate the simulation process, including hardware designs based on field-programmable gate arrays (FPGA) [32] and graphics processing units (GPU) [51]. However, each of these methods still uses the same underlying Monte Carlo modeling process. Part II of this thesis explores the use of a modified Monte Carlo algorithm based on the macro Monte Carlo (MMC) method. The MMC method uses large, precomputed steps in order to accelerate the MC modeling of light propagation. The advantage of this modified algorithm is that it can be customized for specific problems and can be implemented on multiple architectures. As a result, MMC can be used to accelerate FPGA-, GPU- and parallelization-based Monte Carlo systems in addition to systems that run on CPU-based

desktop computers. In Ch. 7, the transport methods used in MCML are used to create pre-computed steps for a light propagation MMC algorithm. Ch. 8 discusses how MCML was modified to use the MMC method. Ch. 9 discusses the results obtained by the MMC version of MCML. Finally, Ch. 10 discusses the implications of MMC for light propagation and proposes a number of ways to improve the method in the future.
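As a recap of the boundary-crossing rules described earlier in this chapter, the internal-reflectance test of Eqs. (6.14)-(6.16) can be sketched in C++ as follows. This is a minimal illustration, not the MCML source; the function name and the 1e-6 normal-incidence cutoff are my own choices.

```cpp
#include <cassert>
#include <cmath>

// Sketch of the boundary test: internal reflectance R(alpha_i) for a packet
// with direction cosine u_z hitting a boundary between media with refractive
// indices n_i (current) and n_t (next).
double internal_reflectance(double uz, double n_i, double n_t) {
    double alpha_i = std::acos(std::fabs(uz));       // Eq. (6.14)
    if (alpha_i < 1e-6) {                            // normal-incidence limit
        double r = (n_i - n_t) / (n_i + n_t);
        return r * r;
    }
    double sin_t = n_i * std::sin(alpha_i) / n_t;    // Snell's Law, Eq. (6.15)
    if (sin_t >= 1.0) return 1.0;                    // alpha_i >= alpha_c: total internal reflection
    double alpha_t = std::asin(sin_t);
    double sm = std::sin(alpha_i - alpha_t), sp = std::sin(alpha_i + alpha_t);
    double tm = std::tan(alpha_i - alpha_t), tp = std::tan(alpha_i + alpha_t);
    // Fresnel's formula, Eq. (6.16), averaged over the two polarizations
    return 0.5 * (sm * sm / (sp * sp) + tm * tm / (tp * tp));
}
```

A random number ξ sampled on [0, 1] is then compared to the returned value: ξ ≤ R(α_i) means internal reflection, otherwise the packet is transmitted with refraction.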

Chapter 7
The Light Propagation Macro Monte Carlo Library

7.1 Introduction

Using prior MMC implementations as a model, MMC was implemented for simulations of light propagation. The following sections discuss the creation of data libraries for visible light photons. Before proceeding, it is necessary to introduce some terminology that is specific to the study of MMC. The local geometry is a simple, well-defined geometry where many particle paths are computed. Conventional MC simulations are used to compute particle paths through the local geometry, and these data are used to create an MMC library. An MMC program transports particles through a complex, heterogeneous geometry called the global geometry. The MMC algorithm uses the MMC library to take large steps called macro-steps through the global geometry. To tie it all together, MC simulations of particles through local geometries are used to create the MMC library. The MMC library is used by an MMC program to transport particles via macro-steps through a global geometry.

7.2 Monte Carlo Simulations

The simulation geometry consisted of a sphere of radius R_sphere composed of a single material and centered at the origin. This material was characterized by an absorption coefficient µ_a, a scattering coefficient µ_s and an anisotropy factor g. These values are specific to the material and the wavelength of the light being simulated. Photons were transported using the methodology introduced by Wilson and Adam (1983) [38] and discussed in Sec. Photon packets were used in the local geometry calculations and the weight of a photon packet was designated w_local. Recall that each photon packet is described by seven parameters: three for position (x, y, z), three for trajectory (u_x, u_y, u_z) and one for weight (w_local). These parameters were initialized to a position of (0, 0, 0), a trajectory of (0, 0, 1) and a weight of 1.0 before each simulation. Each MC simulation began with a step in the direction of the initial trajectory.

The length of this step was randomly sampled with Eq. (6.2), then the position of the photon packet was updated using Eq. (6.3). Next, the MC code determined whether the photon packet was still inside the sphere using Eq. (7.1):

R²_sphere < x² + y² + z² (7.1)

If the statement in Eq. (7.1) was true, then the photon packet had escaped the sphere and the simulation of the photon packet ended. If it was false, then the photon was still inside the sphere and the simulation continued. The simulation continued taking additional steps to move the photon packet until the photon packet left the sphere. Before another step was taken, the simulation accounted for absorption and scattering. The weight of the photon packet was decremented after each step to account for absorption using Eq. (6.6). Scattering was accounted for by changing the trajectory of the photon packet using Eq. (6.7), Eq. (6.8), Eq. (6.9) and Eq. (6.10). After accounting for absorption and scattering, the step length was sampled again using Eq. (6.2) and the position of the photon packet was updated using Eq. (6.3). After the step, Eq. (7.1) was used to determine whether the photon packet was still inside the sphere. If the photon packet had exited, the simulation ended. If the photon packet had not exited, absorption and scattering were accounted for again and another step was taken. This was repeated until the photon packet exited the sphere. Fig. 7.1 shows the MC simulation of three photon packets using the algorithm described above. The sphere was composed of a material with µ_a = 0.1 cm⁻¹, µ_s = 1.0 cm⁻¹ and g = 0.0. The radius was R_sphere = 4/µ_t cm, 4 times the mean free path of photons in the material. The small circles show collision sites and are connected by line segments that show the path of the photon packets through the sphere. Each path terminates outside the sphere. The long gray arrow in the figure shows the initial direction of the photon packet along the z-axis.
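A minimal sketch of this local-geometry loop is shown below, simplified to isotropic scattering (g = 0) instead of the anisotropic phase-function sampling used in the actual code; the Packet struct and function name are illustrative, not taken from the thesis code.

```cpp
#include <cassert>
#include <cmath>
#include <random>

// One photon packet in a homogeneous sphere of radius R_sphere, per Sec. 7.2.
// Returns the packet's final position, trajectory and weight once it escapes.
struct Packet { double x, y, z, ux, uy, uz, w; };

Packet simulate_local(double mu_a, double mu_s, double R_sphere, std::mt19937& rng) {
    std::uniform_real_distribution<double> U(0.0, 1.0);
    const double PI = std::acos(-1.0);
    const double mu_t = mu_a + mu_s;
    Packet p{0, 0, 0, 0, 0, 1, 1.0};                     // origin, +z, weight 1
    while (true) {
        double s = -std::log(1.0 - U(rng)) / mu_t;       // hop: sample step length, Eq. (6.2)
        p.x += s * p.ux; p.y += s * p.uy; p.z += s * p.uz;   // move, Eq. (6.3)
        if (p.x*p.x + p.y*p.y + p.z*p.z > R_sphere*R_sphere) // escape test, Eq. (7.1)
            return p;
        p.w *= 1.0 - mu_a / mu_t;                        // drop: absorption, Eq. (6.6)
        // spin (simplified): new direction uniform on the unit sphere
        double cz = 2.0 * U(rng) - 1.0, phi = 2.0 * PI * U(rng);
        double sz = std::sqrt(1.0 - cz * cz);
        p.ux = sz * std::cos(phi); p.uy = sz * std::sin(phi); p.uz = cz;
    }
}
```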

Figure 7.1: A sample Monte Carlo simulation for a sphere with the following material properties: µ_a = 0.1 cm⁻¹, µ_s = 1.0 cm⁻¹, g = 0.0 and R_sphere = 4/µ_t cm. The lines trace the path of three photon packets as they scatter inside the spherical geometry. Each circle-shaped marker denotes a collision site, while a second marker denotes the exit location of each photon packet. The initial trajectory u_initial of the photon packets is labeled.

7.3 Processing the Photon Packet Exit Parameters

The Monte Carlo simulation determined the final position and trajectory of the photon packet in Cartesian coordinates. This consists of six parameters: x, y, z, u_x, u_y, u_z. The final weight w_local was also recorded. These exit parameters need to be processed before they are used to create the macro Monte Carlo data library. The library stores the exit location of the photon packet rather than the final location. Furthermore, the library stores the exit location and trajectory in terms of cosines of angles in spherical coordinates. The use of spherical coordinates allows the exit parameters to be stored more efficiently in terms of just four parameters: cos β_exit, cos θ_exit, cos φ_exit and w_macro. Spherical coordinates also allow

the global code to use the macro Monte Carlo data library more efficiently, as discussed in Sec. The location of the final interaction (x, y, z) and the trajectory (u_x, u_y, u_z) of the photon packet were used to determine the location where the photon packet exited the sphere (x_exit, y_exit, z_exit). The equation to calculate this location, and many other equations that follow, can be expressed more concisely in vector notation. The following three-dimensional vectors shown in Eq. (7.2) summarize some of the quantities already introduced:

u = [u_x, u_y, u_z]ᵀ, u_initial = [0, 0, 1]ᵀ, p = [x, y, z]ᵀ, p_exit = [x_exit, y_exit, z_exit]ᵀ (7.2)

The vector u is the final trajectory of the photon packet, u_initial is the initial trajectory of the photon packet, p is the final position of the photon packet, and p_exit is the location where the photon packet exited the sphere. The vector p_exit was calculated using Eq. (7.3):

p_exit = p − d u (7.3)

Where p and u are defined above. The value d is the distance between the final position of the photon packet p and the exit location p_exit. It was calculated using Eq. (7.4) [52]:

d = u · p − √( (u · p)² − p · p + R²_sphere ) (7.4)

Where R_sphere is the radius of the sphere. Next, the exit location was converted to spherical coordinates. Fig. 7.2 shows the relationship between the Cartesian and spherical coordinates. The spherical coordinates (r_exit, α_exit, β_exit) were calculated using the expressions in Eq. (7.5):

Figure 7.2: The spherical coordinate system used to define the exit position (x_exit, y_exit, z_exit). The exit radius is r_exit, the exit polar position is β_exit and is defined with respect to the z-axis, and the exit azimuthal angle is α_exit and is defined with respect to the x-axis.

r_exit = √(x²_exit + y²_exit + z²_exit), where r_exit ≥ 0
α_exit = tan⁻¹(y_exit/x_exit), where 0 ≤ α_exit ≤ 2π (7.5)
β_exit = cos⁻¹(z_exit/r_exit), where 0 ≤ β_exit ≤ π

Computing the exit location in spherical coordinates is as easy as using the definitions above. However, it is not strictly necessary to compute all three values. The radial exit position r_exit is always equal to R_sphere because the exit location is always on the surface of the sphere. Furthermore, the azimuthal angle α_exit is defined in a plane perpendicular to the initial direction of the photon packet. The symmetry of the spherical geometry with respect to α_exit guarantees that α_exit will be independent of r_exit and β_exit and uniformly distributed over 0 ≤ α_exit ≤ 2π. This means it is not necessary to compute and store α_exit. Instead, the global code can randomly sample a value for α_exit when it is needed. Lastly,

Figure 7.3: The alternate frame-of-reference used to compute the trajectory in spherical coordinates. The vector u is the trajectory vector. The vector n_exit is the surface normal vector at the exit location. The vector f is the projection of the initial direction vector u_initial on the plane perpendicular to n_exit. The vector h is orthogonal to n_exit and f.

the global code does not use β_exit explicitly; rather, it uses the cosine of β_exit. This value is calculated using a modified version of Eq. (7.5) shown in Eq. (7.6):

cos β_exit = z_exit/r_exit = z_exit/R_sphere, where −1 ≤ cos β_exit ≤ 1 (7.6)

Using spherical coordinates, the three-dimensional exit location in Cartesian coordinates is reduced to one parameter in spherical coordinates. Only cos β_exit is stored in the MMC library. Next, the trajectory of the exiting photon packet is converted to spherical coordinates. Instead of using the standard frame-of-reference defined by the x-, y- and z-axes, an alternate frame-of-reference is used to take advantage of the symmetry of the spherical geometry. This frame-of-reference is defined by three vectors: n_exit, f and h. The frame-of-reference is shown in Fig. 7.3. The vector n_exit is the surface normal vector at the exit location of

the photon packet and is calculated using Eq. (7.7):

n_exit = p_exit / |p_exit| = p_exit / R_sphere (7.7)

The vector f is the projection of u_initial on the surface tangent plane at the point p_exit. This vector is computed using Eq. (7.8):

f = n_exit × (u_initial × n_exit) / |n_exit × (u_initial × n_exit)| (7.8)

The vector h is perpendicular to both n_exit and f and is computed using Eq. (7.9):

h = n_exit × f (7.9)

In the alternate frame-of-reference, the exit trajectory can be described in terms of two parameters: θ_exit and φ_exit. Fig. 7.4 shows the relationship between the two exit trajectory parameters and the alternate frame-of-reference. The polar angle of escape, θ_exit, is defined as the angle between the surface normal n_exit and the exit trajectory u. Like β_exit, the MMC transport algorithm does not use θ_exit explicitly. Rather, the cosine of θ_exit is computed and stored in the MMC library. It is computed with Eq. (7.10):

cos θ_exit = n_exit · u, where −1 ≤ cos θ_exit ≤ 1 (7.10)

Before the azimuthal angle of escape, φ_exit, can be computed, it is necessary to define u_proj. This vector is the projection of u on the surface tangent plane at p_exit (the same plane used to define f). It is computed using Eq. (7.11):

u_proj = u − n_exit (u · n_exit) (7.11)

Figure 7.4: A diagram of the spherical coordinates of the exit trajectory with respect to the alternate frame-of-reference. The angle θ_exit is the exit trajectory polar angle defined with respect to the exit normal n_exit. The angle φ_exit is the exit trajectory azimuthal angle defined with respect to the vector f.

The azimuthal angle of escape, φ_exit, is defined as the angle between u_proj and f. This angle is also stored in terms of its cosine and computed using Eq. (7.12):

cos φ_exit = (u_proj · f) / |u_proj|, where −1 ≤ cos φ_exit ≤ 1 (7.12)

A little vector algebra yields Eq. (7.13), which uses values that have already been determined:

cos φ_exit = (u_z − cos β_exit cos θ_exit) / (√(1 − cos²β_exit) √(1 − cos²θ_exit)) = (u_z − cos β_exit cos θ_exit) / (sin β_exit sin θ_exit) (7.13)

The final parameter that is defined is w_macro. The parameter w_macro is equal to the final weight w_local of the photon packet after the Monte Carlo simulation has terminated.
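The processing chain of Eqs. (7.3)-(7.13) can be condensed into a short routine. The sketch below assumes the final position p lies outside the sphere and the initial trajectory is (0, 0, 1), as in the local geometry; the helper names are hypothetical.

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;
static double dot(const Vec3& a, const Vec3& b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

// Given final position p, final trajectory u and sphere radius R, return
// {cos beta_exit, cos theta_exit, cos phi_exit} per Eqs. (7.3)-(7.13).
std::array<double, 3> exit_parameters(const Vec3& p, const Vec3& u, double R) {
    double up = dot(u, p);
    double d = up - std::sqrt(up*up - dot(p, p) + R*R);        // Eq. (7.4)
    Vec3 pe{p[0] - d*u[0], p[1] - d*u[1], p[2] - d*u[2]};      // Eq. (7.3)
    double cos_beta = pe[2] / R;                               // Eq. (7.6)
    Vec3 n{pe[0]/R, pe[1]/R, pe[2]/R};                         // Eq. (7.7)
    double cos_theta = dot(n, u);                              // Eq. (7.10)
    double sin_beta = std::sqrt(1.0 - cos_beta*cos_beta);
    double sin_theta = std::sqrt(1.0 - cos_theta*cos_theta);
    double cos_phi = (sin_beta * sin_theta > 0.0)
        ? (u[2] - cos_beta*cos_theta) / (sin_beta*sin_theta)   // Eq. (7.13)
        : 1.0;                                                 // degenerate on-axis exit
    return {cos_beta, cos_theta, cos_phi};
}
```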

Create an array for m materials
for i = 1 to m
    Choose material properties µ_a,i, µ_s,i, g_i
    Create an array for q_i step-sizes
    for j = 1 to q_i
        Choose sphere radius r_i,j
        Create an array for h_i,j individual Monte Carlo histories
        for k = 1 to h_i,j
            Run one Monte Carlo simulation for material i, step-size r_i,j
            Convert p and u to cos(β), cos(θ) and cos(φ)
            Store cos(β), cos(θ), cos(φ) and w_macro in the individual history array
        end
    end
end

Figure 7.5: The algorithm used to generate the MMC library

The parameter w_macro is expressed as a value between 0 and 1. It is used later by the MMC program to update the weight of a photon packet after a macro-step is taken.

7.4 The MMC Library

Each MC simulation of a single photon packet yields a final position vector p = [x y z]ᵀ, a final trajectory vector u = [u_x u_y u_z]ᵀ and a final weight w_local. The exit parameters are then processed and the total number of parameters is reduced from seven to four. The remaining four-parameter set (cos β_exit, cos θ_exit, cos φ_exit and w_macro) is sufficient to describe the exit position, trajectory and weight of the photon packet at the end of the simulation. For the sake of brevity, a four-parameter set will hereafter be called a history. This term is often used in MC to describe the behavior of an individual simulated particle [9, 13, 18]. Creating the MMC library involves running many simulations, computing many histories and organizing them into a library. Fig. 7.5 summarizes the algorithm used to create the MMC library. This entire process is accomplished with a single C++ program. Fig. 7.6 shows how the MC-generated histories are stored as a hierarchical data structure.

Figure 7.6: A partial schematic of the MMC Library. The library is organized as a hierarchical data structure. The topmost level consists of m different materials, each with a unique set of µ_a, µ_s and g values. Each material has an array of possible step-sizes. These step-sizes, each represented by different sphere radii R, make up the second level of the hierarchy. Associated with each step-size is an array of histories. Each array is a list of MC-generated histories and these histories make up the bottom level of the hierarchy.

The figure shows that the MMC library is a data structure that contains an array of m materials. A single material i (i = 1, 2, ..., m) in the library is a data structure that contains a unique set of values for the absorption coefficient µ_a,i, scattering coefficient µ_s,i and anisotropy factor g_i. Note that the values of µ_a, µ_s and g for a single substance (water, glass, etc.) or anatomical tissue (fat, gray matter, etc.) can vary considerably with the wavelength of light.
As a result, a single substance or tissue may require multiple material entries, one for each wavelength of light being simulated. Each material data structure i also contains an array of step-sizes. The array of step-sizes for material i contains q_i entries. The individual step-sizes in the arrays constitute the second level of the hierarchy, as shown in Fig. 7.6. Each step-size j (j = 1, 2, ..., q_i)

for material i is a data structure that contains three items: R_i,j, h_i,j and an array of individual histories. The value R_i,j is the radius of the sphere used in the MC simulation that generated data for this step-size. For each material i, the step-size data structures are ordered by radius such that R_i,1 < R_i,2 < ... < R_i,j < ... < R_i,q(i). This ordering allows the MMC transport algorithm to search for and select an appropriate step-size more easily. The value h_i,j is the number of photon packets simulated in the MC simulation used to generate data for step-size j of material i. The value h_i,j is also the number of entries in the array of individual histories because one photon packet always yields one history. The MC simulations are run using the geometry discussed in Sec. 7.2. Specifically, for step-size j of material i, the simulation calculates the histories of h_i,j photon packets inside a homogeneous sphere of radius R_i,j composed of material i (absorption coefficient µ_a,i, scattering coefficient µ_s,i and anisotropy factor g_i). The results of the MC simulations are stored in the array of individual histories. These individual histories constitute the bottom level of the hierarchy. An individual history is a data structure that contains values for cos β_exit, cos θ_exit, cos φ_exit, and w_macro. Each individual history data structure is given an index k (k = 1, 2, ..., h_i,j). Unlike the step-size data structures, the history data structures are unordered; the index serves only to identify a unique history. Fig. 7.7 shows the beginning of an MMC library file to illustrate how the library is structured. Three MMC library files were generated for the simulations done in this work, one for each of the three geometries that were used. The number of materials in each MMC library depended upon the number of materials used in each geometry. Five step-sizes were created for each material.
The step-sizes for each material i were scaled such that the radius of the smallest step-size R_i,1 was always 3.5/µ_t, or 3.5 times the mean free path of photons in that material. The largest radius R_i,5 was 10/µ_t for every material. The number of individual histories h_i,j was 10,000 for every step-size j and material i.
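One plausible C++ rendering of the three-level hierarchy in Fig. 7.6 is sketched below; the exact structures in the thesis code are not reproduced here, so the type and field names are illustrative.

```cpp
#include <cassert>
#include <vector>

// Bottom level: one MC-generated history (four-parameter set)
struct History {
    double cos_beta;              // exit position
    double cos_theta, cos_phi;    // exit trajectory
    double w_macro;               // fraction of weight surviving the step
};

// Middle level: one step-size (sphere radius R_{i,j}) and its histories
struct StepSize {
    double radius;                        // [cm]
    std::vector<History> histories;       // h_{i,j} entries
};

// Top level: one material, i.e. one (mu_a, mu_s, g) combination
struct Material {
    double mu_a, mu_s, g;
    std::vector<StepSize> step_sizes;     // ordered so radii increase with j
};

using MMCLibrary = std::vector<Material>; // m materials
```

The ordering invariant R_i,1 < R_i,2 < ... lets the transport code scan or bisect the step-size array when choosing the largest radius that fits.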

4 # number of materials
# Starting Material #1
# mus, mua, g
# number of radii
# Radii values [cm]
# Starting Radius #
# number of histories

Figure 7.7: The beginning of a sample MMC library file

Chapter 8
The Light Propagation Macro Monte Carlo Method

8.1 Introduction

Creating an MMC version of MCML required many modifications to the MCML code. These included modifications to the data structures, input/output functions and photon transport process. Each group of modifications is described below.

8.2 Modifications to MCML Data Structures

The first set of modifications involved changing and augmenting the data structures used in the photon transport process. The most important modification is the creation of the hierarchical data structure that holds the MMC library discussed in Sec. 7.4. The MMC library is stored using nested C++ structures and organized as shown in Fig. 7.6. Specifically, the data library consists of an array of material data structures. The material data structures contain material properties and an array of step-size data structures. The step-size data structures store a sphere radius for that step-size and an array of history data structures. The history data structures contain values for cos β_exit, cos θ_exit, cos φ_exit, and w_macro generated by MC simulations of individual photon packets. Accessing an individual element in the data library is easy. For instance, consider the kth history of the jth step-size of the ith material. The absorption coefficient for the material used to generate that history can be recalled with a statement like this:

MaterialArray[i].mua

The sphere radius for that history would be:

MaterialArray[i].StepSizeArray[j].radius

The cos β_exit for that history can be recalled with a statement like this:

MaterialArray[i].StepSizeArray[j].HistoryArray[k].cosbeta

The other parameters shown in Fig. 7.6 can be accessed just as easily. In addition to creating the MMC library, some of the existing MCML data structures need to be modified as well. Two parameters must be added to InputStruct, the data structure that contains information about the user-created input file. The first is a string that contains the name of the MMC library file. The second parameter is an array of materials that constitutes the MMC library. This array is filled using the data in the MMC library file. Another data structure called LayerStruct also needs to be modified. This structure contains information about the layers that make up the user-specified geometry. For each layer, two additional parameters are added. The first is a Boolean variable that tells the modified MCML code whether the MMC algorithm should be used in the layer. Some layers are so thin that MMC steps are not practical, and the Boolean variable informs the transport algorithm when this is the case. The second parameter is an index that stores the material i (i = 1, 2, ..., m) of which that layer is composed. The number of layers and the number of materials may not be the same because multiple layers may be composed of the same material. This indexing system allows the MMC library to be smaller by avoiding duplicate materials when a single material is used in more than one layer.

8.3 Modifications to MCML Input/Output Functions

The primary modification to the I/O functions consists of adding functions that load the MMC library file and use the text in that file to populate the MMC library data structures. The MMC library does not have a fixed number of materials, step-sizes or histories. As a

result, the functions that populate the MMC library data structures must be flexible and capable of dynamically allocating memory. The order of operations was tailored to the structure of the MMC library file (see Fig. 7.7). The new I/O functions read the MMC library file one line at a time. First, the program reads the number of materials in the file and creates an array of material data structures containing the specified number of materials. Then the program focuses exclusively on one material at a time, starting with the first. The material properties and number of step-sizes are read from the file. Next, the program creates an array of step-size data structures containing the specified number of step-sizes. The radii values for these step-sizes are loaded into the array. After this, the program focuses on one step-size at a time, starting with the first. The program reads the number of histories for that step-size from the file and creates an appropriately-sized array of histories. The array of histories is populated one line at a time until all of the histories are loaded. Then, the program continues by loading the histories for the next step-size. Once all of the step-sizes are loaded for one material, the program starts loading data for another material. This process continues until all of the materials, step-sizes and histories have been loaded into the MMC library. A number of smaller changes are needed as well. The input files for the MMC version of MCML must specify the name of the MMC library file. The program's I/O functions must read this filename and open the file to load the stored information into the MMC library using the process described above. Furthermore, once the I/O functions read the MCML input file and load the user-specified layers into the LayerStruct array, the newly added LayerStruct parameters must be determined as well.
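A sketch of such a dynamically-allocating loader is given below. It assumes a simplified whitespace-separated layout in the spirit of Fig. 7.7, with '#' starting a comment that runs to the end of the line; the structure and function names are hypothetical, not the thesis code.

```cpp
#include <cassert>
#include <istream>
#include <sstream>
#include <stdexcept>
#include <string>
#include <vector>

struct History { double cos_beta, cos_theta, cos_phi, w_macro; };
struct StepSize { double radius; std::vector<History> histories; };
struct Material { double mu_s, mu_a, g; std::vector<StepSize> step_sizes; };

// Read the next numeric token, skipping '#'-to-end-of-line comments.
static double next_value(std::istream& in) {
    std::string tok;
    while (in >> tok) {
        if (tok[0] == '#') { std::string rest; std::getline(in, rest); continue; }
        return std::stod(tok);
    }
    throw std::runtime_error("unexpected end of MMC library file");
}

// Populate the library, allocating each array from the counts in the file.
std::vector<Material> load_library(std::istream& in) {
    std::vector<Material> lib(static_cast<size_t>(next_value(in)));  // # of materials
    for (Material& m : lib) {
        m.mu_s = next_value(in); m.mu_a = next_value(in); m.g = next_value(in);
        m.step_sizes.resize(static_cast<size_t>(next_value(in)));    // # of radii
        for (StepSize& s : m.step_sizes) s.radius = next_value(in);  // radii list
        for (StepSize& s : m.step_sizes) {
            s.histories.resize(static_cast<size_t>(next_value(in))); // # of histories
            for (History& h : s.histories) {
                h.cos_beta = next_value(in); h.cos_theta = next_value(in);
                h.cos_phi = next_value(in);  h.w_macro = next_value(in);
            }
        }
    }
    return lib;
}
```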
To determine the material composition of each layer, the program cycles through the user-specified layers and materials until a match is found for each layer. If a match is not found, the user is notified that they must create a new MMC library file with the correct materials. The program also determines

automatically whether MMC steps are permitted in the layer. A layer l will permit MMC steps if the conditional statement in Eq. (8.1) is true:

d_l > M/µ_t,l (8.1)

Where d_l is the thickness of layer l, µ_t,l is the total interaction coefficient for the material in layer l and M is a multiplicative factor. Since 1/µ_t,l is the mean free path of a photon in layer l, M specifies how many times thicker than the mean free path d_l must be in order to use MMC steps. In this work, M was set to 4.5. This is slightly larger than the smallest step-size used in the MMC library, which is always 3.5 times the mean free path.

8.4 The Macro Monte Carlo Photon Transport Algorithm

Fig. 8.1 shows how the MCML transport algorithm was modified to allow the use of an MMC algorithm. The new algorithm differs from the original only where the boxes are outlined with a dotted line. Fig. 8.1 shows that the modified MCML attempts a macro-step before transporting a photon packet via the hop-drop-spin process described in Sec. Macro-steps are optional and the program ascertains whether a macro-step is allowed during each step. A macro-step is only attempted when the current step length s is zero. When s is greater than zero, as happens when a photon packet crosses a boundary and has some residual step length left in a new region, then a macro-step is not attempted.
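The per-layer eligibility test of Eq. (8.1) amounts to a one-line predicate; a minimal sketch with the thesis value M = 4.5 as the default (function name illustrative):

```cpp
#include <cassert>

// Eq. (8.1): a layer admits macro-steps only when its thickness d_l exceeds
// M mean free paths (mean free path = 1/mu_t_l).
bool layer_permits_mmc(double d_l, double mu_t_l, double M = 4.5) {
    return d_l > M / mu_t_l;
}
```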

Figure 8.1: A flow diagram of the modified MCML algorithm. The algorithm differs from the original only where diamonds and boxes are surrounded with a dotted line.

Determining whether a Macro-step is Feasible

Fig. 8.2 shows the algorithm used to perform a macro-step. First, the program determines d_min, the distance between the photon packet and the nearest boundary. This is done by comparing the depth of the photon packet z with the depths of the layer boundaries for the current layer. Next, the program recalls the radius of the smallest step-size for the current layer, R_i,1, from the MMC library. If d_min is less than R_i,1, then it is not possible to take a macro-step because the photon packet is too close to a boundary. In this case, the macro-step attempt is halted and the program continues with the hop-drop-spin process. If d_min is greater than R_i,1, then a macro-step is possible and the macro-step algorithm continues.
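For the layered geometry, the feasibility test and the subsequent choice of the largest usable step radius can be sketched together; the function name and the return convention (-1 for "no macro-step possible") are illustrative.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Given depth z inside a layer spanning [z_top, z_bottom] and the layer's
// step radii sorted ascending (R_{i,1} < R_{i,2} < ...), return the index j
// of the largest radius smaller than d_min, or -1 if no macro-step fits.
int choose_step_size(double z, double z_top, double z_bottom,
                     const std::vector<double>& radii) {
    double d_min = std::min(z - z_top, z_bottom - z);   // distance to nearest boundary
    if (radii.empty() || d_min <= radii.front()) return -1;  // too close to a boundary
    int j = 0;
    while (j + 1 < static_cast<int>(radii.size()) && radii[j + 1] < d_min) ++j;
    return j;
}
```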

Figure 8.2: Flow diagram for a macro Monte Carlo step

Sampling Parameters from the MMC Library

The next step is determining R_i,j, the largest step-size radius for material i that is smaller than d_min. This is done by comparing each step-size radius with d_min until the largest is found. Once the largest radius is found, the program randomly samples an integer η between 1 and h_i,j, where h_i,j is the number of histories that have been computed and stored for step-size j for material i. The random number η corresponds to a single history

for step-size j and material i. The values i, j and η are used to look up this history in the MMC library. The MMC library returns values for cos β_exit, cos θ_exit, cos φ_exit, and w_macro. These values are used to update the position, trajectory and weight of the photon packet.

Updating the Position of the Photon Packet

The sampled parameter cos β_exit is used to update the photon packet's position. This is done by creating a unit vector r that points from the current position to a new position. The new position is somewhere on a sphere of radius R_i,j centered at the current position (x, y, z). The vector r is defined in terms of α_exit, the azimuthal angle of escape relative to the current photon packet direction u, and β_exit, the polar angle of escape relative to u. As discussed in Sec. 7.2, α_exit is independent of β_exit and uniformly distributed over 0 ≤ α_exit ≤ 2π. Therefore α_exit can be sampled using ξ, a random number sampled on the interval [0, 1], as shown in Eq. (8.2):

α_exit = 2πξ (8.2)

The value cos β_exit has already been sampled from the library and is used to compute sin β_exit using Eq. (8.3):

sin β_exit = √(1 − cos²β_exit) (8.3)

The components of r, (r_x, r_y, r_z), are computed with Eq. (8.4) using α_exit, β_exit and the photon packet's current direction u with components (u_x, u_y, u_z):

r_x = sin β_exit (u_x u_z cos α_exit − u_y sin α_exit)/√(1 − u_z²) + u_x cos β_exit
r_y = sin β_exit (u_y u_z cos α_exit + u_x sin α_exit)/√(1 − u_z²) + u_y cos β_exit (8.4)
r_z = −√(1 − u_z²) sin β_exit cos α_exit + u_z cos β_exit

If the exit direction is nearly forward (cos β_exit very close to 1), then Eq. (8.5) should be used for speed:

r_x = u_x
r_y = u_y (8.5)
r_z = u_z

The position of the photon packet (x, y, z) is updated using Eq. (8.6):

x ← x + r_x R_i,j
y ← y + r_y R_i,j (8.6)
z ← z + r_z R_i,j

Updating the Trajectory of the Photon Packet

Next, the sampled parameters cos θ_exit and cos φ_exit are used to update the trajectory of the photon packet u. Recall from Sec. 7.2 that the sampled parameters cos θ_exit and cos φ_exit define the exit trajectory of the photon packet relative to three vectors: n_exit, f and h. The exit normal n_exit has already been computed as r. In order to compute the new trajectory of the photon packet, it is necessary to compute f and h as well. The vector f is computed using the formula shown in Eq. (7.8). A little vector algebra yields Eq. (8.7) for the components of f, (f_x, f_y, f_z):

f_x = (u_x − r_x cos β_exit)/sin β_exit
f_y = (u_y − r_y cos β_exit)/sin β_exit (8.7)
f_z = (u_z − r_z cos β_exit)/sin β_exit

When sin β_exit is zero, f can be any vector that is perpendicular to u. The final vector h is calculated using Eq. (7.9). Expanding the cross product yields Eq. (8.8) for the components of h, (h_x, h_y, h_z):

h_x = r_y f_z − f_y r_z
h_y = f_x r_z − r_x f_z    (8.8)
h_z = r_x f_y − f_x r_y

Now that r, f and h are known, the new direction can be computed with cos θ_exit and cos φ_exit. First, the values for sin θ_exit and sin φ_exit must be computed. For the polar angle θ_exit, the sine is computed using Eq. (8.9):

sin θ_exit = √(1 − cos² θ_exit)    (8.9)

The azimuthal angle of escape is handled a bit differently. The angle φ_exit is defined such that positive and negative values for sin φ_exit are equally likely. Therefore, sin φ_exit is computed using Eq. (8.10):

sin φ_exit = +√(1 − cos² φ_exit)  for ξ > 0.5
sin φ_exit = −√(1 − cos² φ_exit)  for ξ ≤ 0.5    (8.10)

where ξ is a random number sampled on the interval [0, 1]. The vectors r, f and h have coefficients that quantify their relative contribution to the new trajectory. They are computed using Eq. (8.11):

c_r = cos θ_exit
c_f = sin θ_exit cos φ_exit    (8.11)
c_h = sin θ_exit sin φ_exit

Finally, the new trajectory is computed using Eq. (8.12):

u = c_r r + c_f f + c_h h    (8.12)
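The full trajectory update, Eqs. (8.7) through (8.12), can be sketched as follows. This is a hedged illustration under my own naming, not the thesis code; the degenerate sin β_exit = 0 branch picks an arbitrary perpendicular vector, as the text permits.

```python
import math
import random

def perpendicular_to(u):
    """Any unit vector perpendicular to u (used when sin(beta_exit) = 0)."""
    ax = (0.0, 0.0, 1.0) if abs(u[2]) < 0.9 else (1.0, 0.0, 0.0)
    c = (u[1] * ax[2] - u[2] * ax[1],
         u[2] * ax[0] - u[0] * ax[2],
         u[0] * ax[1] - u[1] * ax[0])
    n = math.sqrt(c[0] ** 2 + c[1] ** 2 + c[2] ** 2)
    return (c[0] / n, c[1] / n, c[2] / n)

def exit_trajectory(u, r, cos_beta, cos_theta, cos_phi):
    """New packet direction from Eq. (8.7)-(8.12): u is the old direction,
    r the exit normal, and the cosines are the sampled exit angles."""
    sin_beta = math.sqrt(max(0.0, 1.0 - cos_beta ** 2))
    if sin_beta < 1e-6:
        f = perpendicular_to(u)                                           # degenerate case
    else:
        f = tuple((u[i] - r[i] * cos_beta) / sin_beta for i in range(3))  # Eq. (8.7)
    h = (r[1] * f[2] - f[1] * r[2],
         f[0] * r[2] - r[0] * f[2],
         r[0] * f[1] - f[0] * r[1])                                       # Eq. (8.8): h = r x f
    sin_theta = math.sqrt(max(0.0, 1.0 - cos_theta ** 2))                 # Eq. (8.9)
    sin_phi = math.copysign(math.sqrt(max(0.0, 1.0 - cos_phi ** 2)),
                            random.random() - 0.5)                        # Eq. (8.10)
    c_r, c_f, c_h = cos_theta, sin_theta * cos_phi, sin_theta * sin_phi   # Eq. (8.11)
    return tuple(c_r * r[i] + c_f * f[i] + c_h * h[i] for i in range(3))  # Eq. (8.12)
```

Because r, f and h form an orthonormal triad, the returned direction is a unit vector whose angle from the exit normal r is exactly θ_exit.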

Updating the Weight of the Photon Packet

After the position and direction of the photon packet have been updated, the weight of the packet must be updated using the sampled w_macro value. This value describes how much weight was lost during the macro-step. The weight loss during the macro-step, w_loss, is also recorded so that it can be deposited in the 2-D absorption array. The weight loss w_loss is calculated using Eq. (8.13):

w_loss = (1 − w_macro) w    (8.13)

The photon packet's weight is updated with Eq. (8.14):

w ← w − w_loss = w_macro w    (8.14)

Depositing Weight Loss in the Absorption Array

The weight lost during the macro-step, w_loss, must be deposited in the absorption array. Three different methods were implemented to perform this task: simple weight deposition, complete path reconstruction weight deposition and partial path reconstruction weight deposition.

Simple Weight Deposition

The simplest method involves randomly depositing the weight between the initial and final positions of the photon packet after a macro-step. This method is called simple weight deposition. First, a deposition position is randomly sampled using Eq. (8.15):

x_dep = x − r_x R_{i,j} ξ
y_dep = y − r_y R_{i,j} ξ    (8.15)
z_dep = z − r_z R_{i,j} ξ

where ξ is a random number sampled on the interval [0, 1]. Then r_dep = √(x_dep² + y_dep²) is calculated, and the full weight loss, w_loss, is deposited in the absorption array in the voxel that corresponds to (r_dep, z_dep). Alternatively, the weight can also be divided into n equal parts and deposited at n randomly selected locations.

Complete Path Reconstruction Weight Deposition

The simple weight deposition method is computationally efficient but is not physically realistic. A more accurate method, called complete path reconstruction (CPR) weight deposition, was implemented. The CPR deposition method computes the location of every interaction site for the history being used during the current macro-step. Once each location is computed, an appropriate amount of weight is deposited at the corresponding position in the absorption array. For the largest step-sizes, there may be more than 100 interaction sites per history. In order to use the CPR weight deposition method, the MMC library must be expanded to include the photon interaction sites associated with each history. The storage of the interaction sites increases the size of the MMC library substantially.

Partial Path Reconstruction Weight Deposition

A modified version of CPR deposition called partial path reconstruction (PPR) weight deposition was also implemented. Like CPR weight deposition, the PPR weight deposition method uses interaction site data to reconstruct where a photon packet interacted during a macro-step. However, the PPR method does not repeat the process for every single interaction site stored in the expanded MMC library. Instead, the PPR method sets an upper limit on the number of interaction sites that are reconstructed. In this work, a maximum of 10 interaction sites were reconstructed when using the PPR weight deposition method. As a result, the PPR weight deposition method is equally efficient for all step-sizes.
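The simple weight deposition scheme can be sketched as a single function. This is an illustrative sketch with assumed names (the absorption array A, indexed [i_r][i_z], and its bin widths dr, dz are my own parameterization, not the thesis interface):

```python
import math
import random

def simple_weight_deposition(pos, r, R_ij, w, w_macro, A, dr, dz):
    """Simple weight deposition, Eq. (8.13)-(8.15): after a macro-step of
    length R_ij along r that ended at pos, drop the lost weight at one
    uniformly sampled point on the chord back toward the start point."""
    w_loss = (1.0 - w_macro) * w                      # Eq. (8.13)
    w = w - w_loss                                    # Eq. (8.14): w <- w_macro * w
    xi = random.random()
    x_dep = pos[0] - r[0] * R_ij * xi                 # Eq. (8.15)
    y_dep = pos[1] - r[1] * R_ij * xi
    z_dep = pos[2] - r[2] * R_ij * xi
    r_dep = math.sqrt(x_dep ** 2 + y_dep ** 2)        # cylindrical radius
    ir = min(int(r_dep / dr), len(A) - 1)             # clamp overflow to the
    iz = min(max(int(z_dep / dz), 0), len(A[0]) - 1)  # edge bins of the array
    A[ir][iz] += w_loss
    return w
```

Splitting w_loss into n parts deposited at n independently sampled ξ values, as the text mentions, would only require looping the sampling and scoring lines.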

The macro-step algorithm is finished once the weight loss deposition process is completed. As shown in Fig. 8.1, the program proceeds with the rest of the hop-drop-spin process after the macro-step is attempted, regardless of whether it was successful or halted. After the hop-drop-spin, another macro-step will be attempted unless the photon packet is at a boundary.

Chapter 9

Results of MMC-MCML

Table 9.1: Optical properties (µ_a [cm⁻¹], µ_s [cm⁻¹], g, n, thickness) for the one-layer test problem; the single layer is 100.1 mfp thick. The thickness is expressed both in terms of centimeters and the number of mean free path lengths (mfp), where one mean free path length is (µ_a + µ_s)⁻¹.

Table 9.2: Optical properties (µ_a [cm⁻¹], µ_s′ [cm⁻¹], g, n, thickness) of the four-layer brain model for 800-nm light. The layers and their thicknesses in mfp are: Scalp & Skull (20.4), Cerebrospinal Fluid (0.022), Gray Matter (10.1), and White Matter (84.07). The thickness is expressed both in terms of centimeters and the number of mean free path lengths (mfp), where one mean free path length is (µ_a + µ_s)⁻¹.

Validation Procedure

Validation Model

Three test problems were used to validate and test the performance of the MMC version of the MCML code (MMC-MCML). The first problem was a simple, homogeneous absorber. The tissue optical parameters for the simple absorber are presented in Table 9.1. The second problem is a layered model of the brain that consists of four layers. The tissue optical parameters for the brain model are presented in Table 9.2 and were chosen from the reported data on the optical properties of brain tissue [47, 53]. Instead of using the photon scattering coefficient µ_s, this geometry uses the reduced photon scattering coefficient µ_s′ = µ_s(1 − g). The reduced scattering coefficient is used in diffusion theory calculations of light propagation and combines µ_s and g into a single parameter [54]. In MC simulations, µ_s′ can be used in place of µ_s by setting g = 0. This method has been used in many studies of the brain [47, 53–57].

The third problem is a layered model of the skin that consists of five layers. The tissue optical parameters for the skin model are presented in Table 9.3 and are based on the values determined by Tuchin [58].

Table 9.3: Optical properties (µ_a [cm⁻¹], µ_s [cm⁻¹], g, n, thickness) of the five-layer skin model for 633-nm light. The layers and their thicknesses in mfp are: Epidermis (1.11), Dermis (3.79), Dermis with plexus superficialis (3.91), Dermis (17.07), and Dermis with plexus profundus (11.84). The thickness is expressed both in terms of centimeters and the number of mean free path lengths (mfp), where one mean free path length is (µ_a + µ_s)⁻¹.

Validation Method

The validation of MMC-MCML consisted of three phases and closely follows the methods used by Lo et al. (2009) [32]. The first phase involved verifying the output of MMC-MCML simulations against conventional MCML simulations. Monte Carlo simulations are inherently non-deterministic, so it was necessary to separate the statistical uncertainty in the results from the error introduced by the MMC algorithm. Lo et al. (2009) achieved this by considering the variance in the output of the MCML simulations. The output used for these comparisons was the 2-D absorption array that scores the absorbed photon probability density as a function of radius and depth. The differences between two arrays were quantified using a relative error E[i_r][i_z] between corresponding elements. The relative error was computed using Eq. (9.1):

E[i_r][i_z] = |A_s[i_r][i_z] − A_h[i_r][i_z]| / A_s[i_r][i_z]    (9.1)

where A_s is the gold standard absorption array produced by a conventional MCML simulation of 100 million photon packets, and A_h is a corresponding absorption array of the same size produced by MMC-MCML. The absorption array A_h may also be a product of MCML, which allows the statistical uncertainty between conventional MCML simulation runs to be compared to the uncertainty for MMC-MCML runs. The indices i_r and i_z specify a location in terms of radius and depth, respectively. The distribution of relative error was visualized using a 2-D color map that showed the relative error as a function of depth and radial position. Photon packet numbers ranging from 10⁵ to 10⁸ were simulated for both conventional MCML and MMC-MCML. Lo et al. (2009) also outlined a method to summarize the effect of varying the number of photon packets. They defined a mean relative error that is computed by averaging the relative error in all the elements of the absorption array with values above a predefined threshold.
The purpose of this threshold is to exclude relative error values that are undefined

when the gold standard output array A_s[i_r][i_z] reaches zero. For this work, a threshold of 10⁻⁵ cm⁻³ was used. Computing the mean relative error allowed the quantification of the error introduced by varying a number of parameters. These parameters included the number of photon packets, the size of the MMC library, and the use of various MMC-MCML settings. The mean relative error was computed using Eq. (9.2):

E_ave = (1 / (n_r n_z)) Σ_{i_z=1}^{n_z} Σ_{i_r=1}^{n_r} E[i_r][i_z]    (9.2)

where E_ave is the mean relative error, E[i_r][i_z] is the relative error for each element defined in Eq. (9.1), and n_z and n_r are the number of depth and radius elements in the absorption array, respectively.

The second phase of the validation process involved investigating how changing the size of the MMC library affects the accuracy of MMC-MCML. The size of the database scales linearly with the number of histories created for each step-size. A collection of histories for a given step-size is, in effect, a probability density function for that step-size. The more histories that are included for each step-size, the better resolved this probability density function will be. The number of histories per step-size was systematically varied from 10 to 100,000 to investigate the effect of the MMC library size on the accuracy of MMC-MCML.

The third phase of validating MMC-MCML involved direct comparison of the output of conventional MCML and MMC-MCML. Isofluence maps were generated from MMC-MCML simulations that ran 10⁸ photon packets. The isofluence lines produced by MMC-MCML were compared to those for the gold standard MCML simulations. The diffuse reflectance and transmittance probability density profiles for MMC-MCML and the gold standard MCML simulations were compared as well.
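The error metrics of Eqs. (9.1) and (9.2) with the thresholding described above can be sketched in a few lines. This is an illustrative sketch (the function name and plain-list interface are my own); following the text, the average is taken only over elements whose gold-standard value exceeds the threshold.

```python
def relative_error_map(A_s, A_h, threshold=1e-5):
    """Relative-error map (Eq. 9.1) and mean relative error (Eq. 9.2),
    averaging only over elements where the gold-standard array A_s
    exceeds the threshold, per the thresholding described in the text."""
    n_rows, n_cols = len(A_s), len(A_s[0])
    E = [[0.0] * n_cols for _ in range(n_rows)]
    kept = []
    for i in range(n_rows):
        for j in range(n_cols):
            if A_s[i][j] > threshold:  # skip elements with undefined ratios
                E[i][j] = abs(A_s[i][j] - A_h[i][j]) / A_s[i][j]  # Eq. (9.1)
                kept.append(E[i][j])
    return E, sum(kept) / len(kept)    # Eq. (9.2) over the kept elements
```

Passing two independent MCML arrays as A_s and A_h gives the purely statistical baseline against which the MMC-MCML error maps are judged.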

Results from Validation Tests

Accuracy of MMC-MCML

Fig. 9.1 and Fig. 9.2 show the distribution of relative error for simulations of 10⁵ and 10⁸ photon packets, respectively, using the skin model whose parameters are shown in Table 9.3.

Figure 9.1: Distribution of relative error as a function of position using the skin model: (a) MMC-MCML using CPR deposition with 100 thousand photon packets versus the MCML gold standard simulation (100 million photon packets) and (b) MCML with 100 thousand photon packets versus the MCML gold standard simulation (100 million photon packets). The color bar represents percent error from 0 to 10%. Values above 10% are represented using the same color as the color scale maximum.

The accuracy of MMC-MCML was similar to that of conventional MCML, as demonstrated by the two error distributions in Fig. 9.1. Similar results were obtained for the brain model and the homogeneous absorber. Fig. 9.2 shows that the statistical uncertainty decreased significantly for the simulations that used 100 million photon packets, as indicated by the increased number of voxels in the rz-plane showing less than 10% error. This is expected because as the number of photon packets n in a MC simulation increases, the uncertainty decreases at a rate of 1/√n [33]. Fig. 9.1 only shows an MMC-MCML plot generated using the CPR weight deposition method. Fig. 9.2 shows the results from all three deposition methods. It is clear from Fig. 9.2(a) that the use of the simple weight deposition technique led to regions of higher error near the interfaces between two layers. Using PPR or CPR weight deposition (Fig. 9.2(b) and Fig. 9.2(c), respectively) nearly eliminated this problem and resulted in error distributions that closely resembled those for MCML (Fig. 9.2(d)).
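The 1/√n scaling cited above can be checked with a toy experiment, shown here purely for illustration (the tally and function names are my own, not part of the thesis code):

```python
import math
import random

def mc_spread(n, trials=200):
    """Standard deviation, over repeated runs, of a toy Monte Carlo tally
    (the mean of n uniform samples); it should shrink roughly as 1/sqrt(n)."""
    means = [sum(random.random() for _ in range(n)) / n for _ in range(trials)]
    mu = sum(means) / trials
    return math.sqrt(sum((m - mu) ** 2 for m in means) / trials)

random.seed(0)
# quadrupling the number of samples per run should roughly halve the spread
ratio = mc_spread(100) / mc_spread(400)
```

The same scaling is why moving from 10⁵ to 10⁸ photon packets shrinks the statistical component of the error maps by roughly a factor of 30.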

Figure 9.2: Distribution of relative error as a function of position using the skin model. All simulations used 100 million photon packets. (a) MMC-MCML using simple deposition versus the MCML gold standard simulation, (b) MMC-MCML using PPR deposition versus the MCML gold standard simulation, (c) MMC-MCML using CPR deposition versus the MCML gold standard simulation, and (d) a second MCML run with 100 million photon packets versus the MCML gold standard simulation. The color bar represents percent error from 0 to 10%.


More information

UNIVERSITY OF SOUTHAMPTON

UNIVERSITY OF SOUTHAMPTON UNIVERSITY OF SOUTHAMPTON PHYS2007W1 SEMESTER 2 EXAMINATION 2014-2015 MEDICAL PHYSICS Duration: 120 MINS (2 hours) This paper contains 10 questions. Answer all questions in Section A and only two questions

More information

Code characteristics

Code characteristics The PENELOPE Computer code M.J. Anagnostakis Nuclear Engineering Department National Technical University of Athens The PENELOPE code system PENetration and Energy LOss of Positrons and Electrons in matter

More information

Effects of the difference in tube voltage of the CT scanner on. dose calculation

Effects of the difference in tube voltage of the CT scanner on. dose calculation Effects of the difference in tube voltage of the CT scanner on dose calculation Dong Joo Rhee, Sung-woo Kim, Dong Hyeok Jeong Medical and Radiological Physics Laboratory, Dongnam Institute of Radiological

More information

Particle track plotting in Visual MCNP6 Randy Schwarz 1,*

Particle track plotting in Visual MCNP6 Randy Schwarz 1,* Particle track plotting in Visual MCNP6 Randy Schwarz 1,* 1 Visual Editor Consultants, PO Box 1308, Richland, WA 99352, USA Abstract. A visual interface for MCNP6 has been created to allow the plotting

More information

An Investigation of a Model of Percentage Depth Dose for Irregularly Shaped Fields

An Investigation of a Model of Percentage Depth Dose for Irregularly Shaped Fields Int. J. Cancer (Radiat. Oncol. Invest): 96, 140 145 (2001) 2001 Wiley-Liss, Inc. Publication of the International Union Against Cancer An Investigation of a Model of Percentage Depth Dose for Irregularly

More information

Simulation of Diffuse Optical Tomography using COMSOL Multiphysics

Simulation of Diffuse Optical Tomography using COMSOL Multiphysics Simulation of Diffuse Optical Tomography using COMSOL Multiphysics SAM Kirmani *1 L Velmanickam 1 D Nawarathna 1 SS Sherif 2 and IT Lima Jr 1 1 Department of Electrical and Computer Engineering North Dakota

More information

Monte Carlo simulation of photon and electron transport

Monte Carlo simulation of photon and electron transport First Barcelona Techno Week Course on semiconductor detectors ICCUB, 11-15th July 2016 Monte Carlo simulation of photon and electron transport Francesc Salvat Monte Carlo 1 Simulations performed with the

More information

OPTIMIZATION OF MONTE CARLO TRANSPORT SIMULATIONS IN STOCHASTIC MEDIA

OPTIMIZATION OF MONTE CARLO TRANSPORT SIMULATIONS IN STOCHASTIC MEDIA PHYSOR 2012 Advances in Reactor Physics Linking Research, Industry, and Education Knoxville, Tennessee, USA, April 15-20, 2012, on CD-ROM, American Nuclear Society, LaGrange Park, IL (2010) OPTIMIZATION

More information

Measurement of depth-dose of linear accelerator and simulation by use of Geant4 computer code

Measurement of depth-dose of linear accelerator and simulation by use of Geant4 computer code reports of practical oncology and radiotherapy 1 5 (2 0 1 0) 64 68 available at www.sciencedirect.com journal homepage: http://www.rpor.eu/ Original article Measurement of depth-dose of linear accelerator

More information

Graphical User Interface for High Energy Multi-Particle Transport

Graphical User Interface for High Energy Multi-Particle Transport Graphical User Interface for High Energy Multi-Particle Transport Final Report November 30 th 2008 PREPARED BY: P.O. Box 1308 Richland, WA 99352-1308 PHONE: (509) 539-8621 FAX: (509) 946-2001 Email: randyschwarz@mcnpvised.com

More information

Dose Point Kernel calculation and modelling with nuclear medicine dosimetry purposes

Dose Point Kernel calculation and modelling with nuclear medicine dosimetry purposes Dose Point Kernel calculation and modelling with nuclear medicine dosimetry purposes LIIFAMIRX - Laboratorio de Investigación e Instrumentación en Física Aplicada a la Medicina e Imágenes por Rayos X -

More information

Investigation of tilted dose kernels for portal dose prediction in a-si electronic portal imagers

Investigation of tilted dose kernels for portal dose prediction in a-si electronic portal imagers Investigation of tilted dose kernels for portal dose prediction in a-si electronic portal imagers Krista Chytyk MSc student Supervisor: Dr. Boyd McCurdy Introduction The objective of cancer radiotherapy

More information

Validation of GEANT4 Monte Carlo Simulation Code for 6 MV Varian Linac Photon Beam

Validation of GEANT4 Monte Carlo Simulation Code for 6 MV Varian Linac Photon Beam Validation of GEANT4 Monte Carlo Code for 6 MV Varian Linac Photon Beam E. Salama ab*, A.S. Ali c, N. Emad d and A. Radi a a Physics Department, Faculty of Science, Ain Shams University, Cairo, Egypt;

More information

Outline. Outline 7/24/2014. Fast, near real-time, Monte Carlo dose calculations using GPU. Xun Jia Ph.D. GPU Monte Carlo. Clinical Applications

Outline. Outline 7/24/2014. Fast, near real-time, Monte Carlo dose calculations using GPU. Xun Jia Ph.D. GPU Monte Carlo. Clinical Applications Fast, near real-time, Monte Carlo dose calculations using GPU Xun Jia Ph.D. xun.jia@utsouthwestern.edu Outline GPU Monte Carlo Clinical Applications Conclusions 2 Outline GPU Monte Carlo Clinical Applications

More information

PSG2 / Serpent a Monte Carlo Reactor Physics Burnup Calculation Code. Jaakko Leppänen

PSG2 / Serpent a Monte Carlo Reactor Physics Burnup Calculation Code. Jaakko Leppänen PSG2 / Serpent a Monte Carlo Reactor Physics Burnup Calculation Code Jaakko Leppänen Outline Background History The Serpent code: Neutron tracking Physics and interaction data Burnup calculation Output

More information

Disclosure 7/24/2014. Validation of Monte Carlo Simulations For Medical Imaging Experimental validation and the AAPM Task Group 195 Report

Disclosure 7/24/2014. Validation of Monte Carlo Simulations For Medical Imaging Experimental validation and the AAPM Task Group 195 Report Validation of Monte Carlo Simulations For Medical Imaging Experimental validation and the AAPM Task Group 195 Report Ioannis Sechopoulos, Ph.D., DABR Diagnostic Imaging Physics Lab Department of Radiology

More information

REAL-TIME ADAPTIVITY IN HEAD-AND-NECK AND LUNG CANCER RADIOTHERAPY IN A GPU ENVIRONMENT

REAL-TIME ADAPTIVITY IN HEAD-AND-NECK AND LUNG CANCER RADIOTHERAPY IN A GPU ENVIRONMENT REAL-TIME ADAPTIVITY IN HEAD-AND-NECK AND LUNG CANCER RADIOTHERAPY IN A GPU ENVIRONMENT Anand P Santhanam Assistant Professor, Department of Radiation Oncology OUTLINE Adaptive radiotherapy for head and

More information

specular diffuse reflection.

specular diffuse reflection. Lesson 8 Light and Optics The Nature of Light Properties of Light: Reflection Refraction Interference Diffraction Polarization Dispersion and Prisms Total Internal Reflection Huygens s Principle The Nature

More information

I Introduction 2. IV Relative dose in electron and photon beams 26 IV.A Dose and kerma per unit incident fluence... 27

I Introduction 2. IV Relative dose in electron and photon beams 26 IV.A Dose and kerma per unit incident fluence... 27 Notes on the structure of radiotherapy depth-dose distributions David W O Rogers Carleton Laboratory for Radiotherapy Physics Physics Department, Carleton University, Ottawa, Canada drogers at physics.carleton.ca

More information

Advanced Image Reconstruction Methods for Photoacoustic Tomography

Advanced Image Reconstruction Methods for Photoacoustic Tomography Advanced Image Reconstruction Methods for Photoacoustic Tomography Mark A. Anastasio, Kun Wang, and Robert Schoonover Department of Biomedical Engineering Washington University in St. Louis 1 Outline Photoacoustic/thermoacoustic

More information

Tomographic Reconstruction

Tomographic Reconstruction Tomographic Reconstruction 3D Image Processing Torsten Möller Reading Gonzales + Woods, Chapter 5.11 2 Overview Physics History Reconstruction basic idea Radon transform Fourier-Slice theorem (Parallel-beam)

More information

Dose Calculation and Optimization Algorithms: A Clinical Perspective

Dose Calculation and Optimization Algorithms: A Clinical Perspective Dose Calculation and Optimization Algorithms: A Clinical Perspective Daryl P. Nazareth, PhD Roswell Park Cancer Institute, Buffalo, NY T. Rock Mackie, PhD University of Wisconsin-Madison David Shepard,

More information

DUAL energy X-ray radiography [1] can be used to separate

DUAL energy X-ray radiography [1] can be used to separate IEEE TRANSACTIONS ON NUCLEAR SCIENCE, VOL. 53, NO. 1, FEBRUARY 2006 133 A Scatter Correction Using Thickness Iteration in Dual-Energy Radiography S. K. Ahn, G. Cho, and H. Jeon Abstract In dual-energy

More information

G4beamline Simulations for H8

G4beamline Simulations for H8 G4beamline Simulations for H8 Author: Freja Thoresen EN-MEF-LE, Univ. of Copenhagen & CERN Supervisor: Nikolaos Charitonidis CERN (Dated: December 15, 2015) Electronic address: frejathoresen@gmail.com

More information

Modelling of non-gaussian tails of multiple Coulomb scattering in track fitting with a Gaussian-sum filter

Modelling of non-gaussian tails of multiple Coulomb scattering in track fitting with a Gaussian-sum filter Modelling of non-gaussian tails of multiple Coulomb scattering in track fitting with a Gaussian-sum filter A. Strandlie and J. Wroldsen Gjøvik University College, Norway Outline Introduction A Gaussian-sum

More information

The OpenGATE Collaboration

The OpenGATE Collaboration The OpenGATE Collaboration GATE developments and future releases GATE Users meeting, IEEE MIC 2015, San Diego Sébastien JAN CEA - IMIV sebastien.jan@cea.fr Outline Theranostics modeling Radiobiology Optics

More information

SHIELDING DEPTH DETERMINATION OF COBALT PHOTON SHOWER THROUGH LEAD, ALUMINUM AND AIR USING MONTE CARLO SIMULATION

SHIELDING DEPTH DETERMINATION OF COBALT PHOTON SHOWER THROUGH LEAD, ALUMINUM AND AIR USING MONTE CARLO SIMULATION Research Article SHIELDING DEPTH DETERMINATION OF COBALT PHOTON SHOWER THROUGH LEAD, ALUMINUM AND AIR USING MONTE CARLO SIMULATION 1 Ngadda, Y. H., 2 Ewa, I. O. B. and 3 Chagok, N. M. D. 1 Physics Department,

More information

Towards patient specific dosimetry in nuclear medicine associating Monte Carlo and 3D voxel based approaches

Towards patient specific dosimetry in nuclear medicine associating Monte Carlo and 3D voxel based approaches Towards patient specific dosimetry in nuclear medicine associating Monte Carlo and 3D voxel based approaches L.Hadid, N. Grandgirard, N. Pierrat, H. Schlattl, M. Zankl, A.Desbrée IRSN, French Institute

More information

IMPLEMENTATION OF SALIVARY GLANDS IN THE BODYBUILDER ANTHROPOMORPHIC PHANTOMS

IMPLEMENTATION OF SALIVARY GLANDS IN THE BODYBUILDER ANTHROPOMORPHIC PHANTOMS Computational Medical Physics Working Group Workshop II, Sep 30 Oct 3, 2007 University of Florida (UF), Gainesville, Florida USA on CD-ROM, American Nuclear Society, LaGrange Park, IL (2007) IMPLEMENTATION

More information

D&S Technical Note 09-2 D&S A Proposed Correction to Reflectance Measurements of Profiled Surfaces. Introduction

D&S Technical Note 09-2 D&S A Proposed Correction to Reflectance Measurements of Profiled Surfaces. Introduction Devices & Services Company 10290 Monroe Drive, Suite 202 - Dallas, Texas 75229 USA - Tel. 214-902-8337 - Fax 214-902-8303 Web: www.devicesandservices.com Email: sales@devicesandservices.com D&S Technical

More information

Development a simple point source model for Elekta SL-25 linear accelerator using MCNP4C Monte Carlo code

Development a simple point source model for Elekta SL-25 linear accelerator using MCNP4C Monte Carlo code Iran. J. Radiat. Res., 2006; 4 (1): 7-14 Development a simple point source model for Elekta SL-25 linear accelerator using MCNP4C Monte Carlo code. Mesbahi * Department of Medical Physics, Medical School,

More information

Muon imaging for innovative tomography of large volume and heterogeneous cemented waste packages

Muon imaging for innovative tomography of large volume and heterogeneous cemented waste packages Muon imaging for innovative tomography of large volume and heterogeneous cemented waste packages This project has received funding from the Euratom research and training programme 2014-2018 under grant

More information

ROBUST OPTIMIZATION THE END OF PTV AND THE BEGINNING OF SMART DOSE CLOUD. Moe Siddiqui, April 08, 2017

ROBUST OPTIMIZATION THE END OF PTV AND THE BEGINNING OF SMART DOSE CLOUD. Moe Siddiqui, April 08, 2017 ROBUST OPTIMIZATION THE END OF PTV AND THE BEGINNING OF SMART DOSE CLOUD Moe Siddiqui, April 08, 2017 Agenda Background IRCU 50 - Disclaimer - Uncertainties Robust optimization Use Cases Lung Robust 4D

More information

Dose Calculations: Where and How to Calculate Dose. Allen Holder Trinity University.

Dose Calculations: Where and How to Calculate Dose. Allen Holder Trinity University. Dose Calculations: Where and How to Calculate Dose Trinity University www.trinity.edu/aholder R. Acosta, W. Brick, A. Hanna, D. Lara, G. McQuilen, D. Nevin, P. Uhlig and B. Slater Dose Calculations - Why

More information

Validation of GEANT4 for Accurate Modeling of 111 In SPECT Acquisition

Validation of GEANT4 for Accurate Modeling of 111 In SPECT Acquisition Validation of GEANT4 for Accurate Modeling of 111 In SPECT Acquisition Bernd Schweizer, Andreas Goedicke Philips Technology Research Laboratories, Aachen, Germany bernd.schweizer@philips.com Abstract.

More information

Philip E. Plantz. Application Note. SL-AN-08 Revision C. Provided By: Microtrac, Inc. Particle Size Measuring Instrumentation

Philip E. Plantz. Application Note. SL-AN-08 Revision C. Provided By: Microtrac, Inc. Particle Size Measuring Instrumentation A Conceptual, Non-Mathematical Explanation on the Use of Refractive Index in Laser Particle Size Measurement (Understanding the concept of refractive index and Mie Scattering in Microtrac Instruments and

More information

DESIGN AND SIMULATION OF A PASSIVE-SCATTERING NOZZLE IN PROTON BEAM RADIOTHERAPY. A Thesis FADA GUAN

DESIGN AND SIMULATION OF A PASSIVE-SCATTERING NOZZLE IN PROTON BEAM RADIOTHERAPY. A Thesis FADA GUAN DESIGN AND SIMULATION OF A PASSIVE-SCATTERING NOZZLE IN PROTON BEAM RADIOTHERAPY A Thesis by FADA GUAN Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the

More information

Graphical User Interface for Simplified Neutron Transport Calculations

Graphical User Interface for Simplified Neutron Transport Calculations Graphical User Interface for Simplified Neutron Transport Calculations Phase 1 Final Report Instrument No: DE-SC0002321 July 20, 2009, through April 19, 2010 Recipient: Randolph Schwarz, Visual Editor

More information

A fast method for estimation of light flux in fluorescence image guided surgery

A fast method for estimation of light flux in fluorescence image guided surgery A fast method for estimation of light flux in fluorescence image guided surgery 1. Introduction In this document, we present a theoretical method to estimate the light flux in near-infrared fluorescence

More information

ADVANCING CANCER TREATMENT

ADVANCING CANCER TREATMENT 3 ADVANCING CANCER TREATMENT SUPPORTING CLINICS WORLDWIDE RaySearch is advancing cancer treatment through pioneering software. We believe software has un limited potential, and that it is now the driving

More information

2017 Summer Course on Optical Oceanography and Ocean Color Remote Sensing. Monte Carlo Simulation

2017 Summer Course on Optical Oceanography and Ocean Color Remote Sensing. Monte Carlo Simulation 2017 Summer Course on Optical Oceanography and Ocean Color Remote Sensing Curtis Mobley Monte Carlo Simulation Delivered at the Darling Marine Center, University of Maine July 2017 Copyright 2017 by Curtis

More information

EXTERNAL PHOTON BEAMS: PHYSICAL ASPECTS

EXTERNAL PHOTON BEAMS: PHYSICAL ASPECTS EXTERNAL PHOTON BEAMS: PHYSICAL ASPECTS E.B. PODGORSAK Department of Medical Physics, McGill University Health Centre, Montreal, Quebec, Canada 6.1. INTRODUCTION Radiotherapy procedures fall into two main

More information

Quantifying the Dynamic Ocean Surface Using Underwater Radiometric Measurement

Quantifying the Dynamic Ocean Surface Using Underwater Radiometric Measurement DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Quantifying the Dynamic Ocean Surface Using Underwater Radiometric Measurement Lian Shen Department of Mechanical Engineering

More information