Interactive deformable registration visualization and analysis of 4D computed tomography


Northeastern University, Department of Electrical and Computer Engineering, Electrical and Computer Engineering Master's Theses, January 01, 2008. Interactive deformable registration visualization and analysis of 4D computed tomography. Burak Erem, Northeastern University. Recommended Citation: Erem, Burak, "Interactive deformable registration visualization and analysis of 4D computed tomography" (2008). Electrical and Computer Engineering Master's Theses. Paper 9. This work is available open access, hosted by Northeastern University.

NORTHEASTERN UNIVERSITY Graduate School of Engineering

Thesis Title: Interactive Deformable Registration Visualization And Analysis Of 4D Computed Tomography
Author: Burak Erem
Department: Electrical and Computer Engineering
Approved for Thesis Requirements of the Master of Science Degree
Thesis Adviser: Professor David Kaeli, Date
Thesis Reader: Professor Dana Brooks, Date
Thesis Reader: Gregory C. Sharp, Date
Department Chair: Professor Ali Abur, Date
Graduate School Notified of Acceptance: Director of the Graduate School, Yaman Yener, Date

INTERACTIVE DEFORMABLE REGISTRATION VISUALIZATION AND ANALYSIS OF 4D COMPUTED TOMOGRAPHY

A Thesis Presented by Burak Erem to The Department of Electrical and Computer Engineering in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering in the field of Computer Engineering.

Northeastern University, Boston, Massachusetts, July 2008

© Copyright 2008 by Burak Erem. All Rights Reserved.

Abstract

Radiation therapy is a method for treating patients with various types of cancerous tumors. A major challenge in radiation treatment planning is to treat tumors while avoiding irradiating healthy tissue and organs. The problem is that some tumors in the body are in areas where motion occurs (e.g., due to respiration or other normal functions). Radiation treatment plans must try to estimate the position of the moving organ inside the body, since clinicians cannot see inside the body during treatment. Even given 2-D and 3-D X-ray images of the patient, it can be very difficult to understand the complex motion of a tumor. This thesis presents an interactive method for analyzing 4-D X-Ray Computed Tomography (4DCT) images for patient care and research. The four dimensions comprise the three spatial dimensions of a rendered volume plus time. Our 4DCT visualization tools have been developed using the SCIRun Problem Solving Environment. Deformable registration is one way to observe the motion of anatomy in images from one respiratory phase to another. Our system provides users with the capability to visualize these trajectories while simultaneously viewing rendered anatomical volumes, which can greatly improve the accuracy of deformable registration as a means of analysis.

Acknowledgements

For my mother and father, Halise and Mehmet, forever my best friends. For the unconditional love and support they have given me in the face of every imaginable obstacle throughout the years. I can't thank them enough for believing in me unlike anyone else could. Thank you, again and again. Many thanks to my advisor, Dr. David Kaeli, as well as my mentors and collaborators at Massachusetts General Hospital (MGH): Drs. Gregory C. Sharp, George T.Y. Chen, and Ziji Wu. Also thanks to Dr. Dana Brooks for his help with SCIRun and for his contact with collaborators at the University of Utah. This work was supported in part by Gordon-CenSSIS, the Bernard M. Gordon Center for Subsurface Sensing and Imaging Systems, under the Engineering Research Centers Program of the National Science Foundation (Award Number EEC ). This work was made possible in part by software from the NIH/NCRR Center for Integrative Biomedical Computing, P41-RR.

Contents

Abstract iv
Acknowledgements v
1 Introduction
  1.1 Contributions of Thesis
  1.2 Organization of Thesis
2 Background
  2.1 4D X-Ray Computed Tomography
    Image Acquisition
    Image Reconstruction
    Volume Visualization
    Radiotherapy Treatment Planning
  2.2 Deformable Registration
  2.3 SCIRun Problem Solving Environment
    Development
    Volume Rendering

3 View Trajectory Loop Tool
  Motivation for the View Trajectory Loop Tool
  Development of a Trajectory Viewing Cursor
  Description of Visual Elements
  User Interaction
4 Edit Point Path Tool
  Motivation for the Edit Point Path Tool
  Development of a Trajectory Editor
  Materials and Methods
  Description of Visual Elements
  User Interaction
5 Related Work
  3D/4D Medical Visualization
    SCIRun
    Fovia
    OsiriX
    3D Slicer
  Motion Analysis
    Fluid Dynamics
    Anatomical Motion
6 Contributions and Future Work
  Future Work
Bibliography 105

List of Figures

2.1 An illustration of how planar X-ray imaging works [52].
2.2 The basic orientation of the patient to the scanner in X-ray Computed Tomography (CT) and an example CT slice of a patient's head [52].
2.3 Several generations of CT scanner designs that serve to illustrate the concept of rotating the X-ray source and detectors around the object [52].
2.4 An example of a visualization of a single respiratory phase of a 4DCT visualization showing lung, bone, and skin.
2.5 Example of four beams administered in the Anterior, Posterior, Right, and Left directions, forming the shape of a box (source: a7www.igd.fhg.de).
2.6 A simplified direct volume rendering SCIRun dataflow network with added modules, the focus of this research, at the bottom.
2.7 The 15 possible surface combinations for the contents of each cube in the Marching Cubes algorithm [40].
2.8 An example of adjacent cubes, each containing explicit surfaces, combining to form a volume [13].

2.9 An example of a direct volume rendering of bone and muscle tissue, two different ranges of isovalues that were combined with gradient magnitude for the look-up table.
(a) Visualization of bone and lung tissue. Although it is possible to analyze trajectories within this type of visualization, or (b) one showing a cropped version of the branches of the lungs, we provide examples of each tool showing only bony anatomy for visual clarity.
Viewing several trajectories in the lung while visualizing surrounding bony anatomy (right) and while zoomed in (left). Trajectories are represented as line loops that make a smooth transition from blue to red in even increments across each of the respiratory phases.
(a) The editing tool shown alone with the current phase highlighted in green and (b) the same editing tool shown with the trajectory loop superimposed to demonstrate the relationship between the two tools. The point highlighted in green is edited while the others remain fixed to prevent accidental changes.
A zoomed out (left) and more detailed (right) perspective of editing a point path while observing changes using the trajectory loop tool.

Chapter 1 Introduction

Radiation therapy is a method of treating patients with various types of cancerous tumors. The goal of the treatment, as discussed in this thesis, is to kill cancerous cells by exposing them to radiation. However, when exposed to enough radiation, this treatment method will kill healthy tissue as well, a loss that proper treatment planning attempts to minimize. The case of tumors that are located very close to vital organs serves to illustrate the importance of minimizing the radiation exposure of healthy tissue. Even while successfully removing the cancerous cells from the area, the treatment may inflict irreparable damage on those organs and put the patient at even greater risk. Thus the goal is to target cancerous cells, but always at a minimal cost of healthy tissue to the patient. This becomes more of a concern for a physician planning a patient's treatment when the tumor moves significantly due to cardiac activity or respiration, and such motion can often lead to lower treatment success rates. Furthermore, imaging methods often used for this type of treatment planning, such as 4D X-Ray Computed Tomography (4DCT), are imperfect in their ability to capture all of the information about internal anatomical motion. For this reason, much research in this area focuses

on minimizing exposure of healthy tissue to radiation, maximizing the coverage of the intended target, and also improving the usefulness of 4DCT imaging for analysis. With a better understanding of internal anatomical motion, physicians can improve the accuracy and efficiency of the treatment of their patients. One attempt at characterizing such motion is to use a deformable registration algorithm on 4DCT data to map, voxel by voxel, movement from one respiratory phase to another. Because it is based on splines, this model of voxel trajectories can produce undesirable results if the algorithm's parameters are not appropriately set. Furthermore, it can be difficult to determine what the proper parameters should be without some visual feedback and experimentation. This thesis discusses several new ideas for medical visualization that can help address some of these issues. For the evaluation of the validity of visualizations, we present an interactive measurement tool. For the visualization of anatomical motion in 4DCT image sets, we present the ability to display point trajectories. Specifically, we have developed a toolset that can simultaneously visualize vector fields and anatomy, provides interactive browsing of point trajectories, and allows for the improved identification of the current trajectory position using node color. We also describe some additional interactive capabilities of our work, such as editing of deformation fields, which can enable automatic and interactive registration. We present the major contributions of this work in the next section and then describe the organization of the remainder of the thesis.
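As a concrete sketch of the node-color idea (elsewhere the thesis renders trajectories as loops transitioning from blue to red in even increments across the respiratory phases; the function name and RGB convention here are ours, not the thesis implementation):

```cpp
// Linear blue-to-red ramp over nPhases respiratory phases:
// phase 0 maps to pure blue, phase nPhases-1 to pure red (RGB in [0,1]).
struct Color { double r, g, b; };

Color phaseColor(int phase, int nPhases) {
    double f = (nPhases > 1)
                   ? static_cast<double>(phase) / (nPhases - 1)
                   : 0.0;
    return Color{f, 0.0, 1.0 - f};   // red rises as blue falls
}
```

Coloring each trajectory node this way lets a viewer read off the respiratory phase of any point directly from its hue.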

1.1 Contributions of Thesis

The main contributions of this thesis are summarized as follows. We implemented, in the C++ programming language for the SCIRun [1] Problem Solving Environment,¹ several visualization tools to perform the following tasks:

(¹ All implementations in this thesis were done within the SCIRun Problem Solving Environment.)

Trajectory Viewing Tool
- Display trajectories as line loops with transitioning colors
- Visualize vector fields for interactively chosen voxels with respect to simultaneously visualized anatomy

Edit Point Path Tool
- Display trajectories as sequences of points
- Highlight which respiratory phase of the reference anatomy is being visualized by changing the node color of the aforementioned vector field visualization
- Edit visualized vector fields to make changes to the deformation fields used to produce them

1.2 Organization of Thesis

The central focus of this thesis is on applying multiple interactive visualization techniques simultaneously to a single patient's medical data in order to facilitate more efficient analysis of anatomical motion that is relevant to radiotherapy treatment planning. The remainder of the thesis is organized as follows: Chapter 2 presents

background information about 4D X-Ray Computed Tomography, Deformable Registration, and the SCIRun Problem Solving Environment. In Chapter 3 we present the View Trajectory Loop Tool, explaining the design and implementation of an interactive cursor that displays the results of deformable registration relative to anatomy. In Chapter 4 we present the Edit Point Path Tool similarly, highlighting its ability to make changes to trajectories interactively. We use Chapter 5 to discuss work related to this thesis, past and present. Finally, in Chapter 6 we summarize our contributions and present directions for future work. The Appendix holds source code relevant to the tools presented in Chapters 3 and 4.

Chapter 2 Background

2.1 4D X-Ray Computed Tomography

Image Acquisition

X-ray imaging is a transmission-based technique in which X-rays from a source pass through the patient and are detected on the other side. In planar X-ray imaging, as shown in Figure 2.1, a simple two-dimensional projection of the tissues lying between the X-ray source and the detecting medium produces the image. In planar X-ray images, overlapping layers of soft tissue or complex bone structures can often be difficult to interpret, even for a skilled radiologist. In these cases, X-ray computed tomography (CT) is used [52]. In CT, the X-ray source and detectors rotate together around the patient, as shown in Figure 2.2, producing a series of one-dimensional projections at a number of different angles [52]. When rotated around a fixed axis, within a fixed plane as illustrated

Figure 2.1: An illustration of how planar X-ray imaging works [52].

Figure 2.2: The basic orientation of the patient to the scanner in X-ray Computed Tomography (CT) and an example CT slice of a patient's head [52].

Figure 2.3: Several generations of CT scanner designs that serve to illustrate the concept of rotating the X-ray source and detectors around the object [52].

in Figure 2.3, these one-dimensional projections are reconstructed to form a two-dimensional image that is a cross section, or slice, of the imaged patient in that plane. In some methods of acquisition, several slices can be acquired at the same time (multislice CT), but in general the acquisition of these slices leads to three-dimensional image volumes that are composed of stacks of two-dimensional slices. However, since this method of imaging is based on several projections that are reconstructed later to form an image, its accuracy depends on the absence of patient organ motion during this image acquisition step. In order to take potential motion into account, patients are imaged with four-dimensional X-ray computed tomography (4DCT) instead. The dimensions of 4DCT are the three spatial dimensions that are also a part of CT, represented relative to the fourth, temporal dimension. The 4D images are typically acquired as 1D projections and reconstructed into a series of 3D volumes that each represent a stage of respiratory movement. Although other references for reconstruction could be considered, such as the stages of cardiac motion, this method

of imaging is too slow to capture cardiac motion and represents respiration more reliably. This movement is accounted for by acquiring an external signal that measures respiration in some way, and then using the assumption that respiration is more or less periodic to perform the desired reconstruction. This subject will be discussed in more detail in the following image reconstruction subsection, but the images are typically reconstructed according to respiration because respiration is considered to be the most prominent source of movement for which random noise cannot form a good approximation.

Image Reconstruction

The reconstruction of 4DCT image sets after image acquisition typically depends on having simultaneously acquired a signal that is thought to accurately represent the stages of respiration of the patient. Plainly stated, in order to put the independently acquired pieces of the puzzle back together, some assumptions about the dependence of some of those pieces need to be made. While there are several approaches to reconstructing an image set from these pieces in conjunction with a respiratory signal, the respiratory signal itself is generally acquired using an external marker on the surface of the patient's skin, preferably near the diaphragm, which is tracked for motion. It should be noted that this respiratory tracking records a one-dimensional signal representing the rise and fall of the skin's surface at that point, and it is expected to characterize the varying internal anatomical motion of the patient. Although there are infinitely many possible variations of the motion associated with respiration, this represents it with a discrete, undersampled (for example, the data presented in this work has ten phases), and periodic signal whose samples correspond to averaged

generalizations of the stages of that motion. While this is a somewhat unfairly critical view of this process, given the physical constraints of the situation, it is important to describe the situation accurately in order to capture the enormous difficulty of analyzing motion under these conditions. Nonetheless, with this signal, images are arranged according to the physical location and the stage of respiratory motion at which they were acquired. Typically, a major assumption involved with this step is that the process of respiration is a periodically occurring sequence that can be divided into well-defined bins. In succession, the images sorted into these bins form a piecewise representation of the image as it would look over one full sequence of respiratory phases. One way to do this is to separate the respiratory signal into bins according to its amplitude. So, if it were decided that there should be ten bins, every period of respiration would be broken into ten possible amplitude ranges, and images would be sorted into these bins according to the amplitude of the respiratory signal at the time each image was recorded. An alternative approach to separating the respiratory signal into bins is by phase. Once again viewing the signal as periodic, bins are defined by dividing each cycle of the respiratory signal evenly in time. While other methods of reconstruction certainly exist, the concept of putting the puzzle together using assumptions made about a one-dimensional signal is prevalent among them and serves to illustrate why robust methods of analyzing these results are so essential [33].

Volume Visualization

While humans are capable of viewing three-dimensional structure in the real world, the most common form of viewing media still tends to be two-dimensional. For

example, most computer monitors are two-dimensional viewing surfaces that represent three-dimensional structures by projecting them onto those two-dimensional surfaces. While this may seem obvious, it is an important consideration when faced with the task of visualizing information of even higher dimensionality, such as 4DCT. One way to look at this challenge is to compare it to that faced by motion photography or film. In some sense, movies are a form of 3D imaging, as they represent two-dimensional projections of the three-dimensional world, captured over time. When viewing this information, it usually is sufficient for one to watch a sequence of those two-dimensional images over time in the same order in which they were captured in order for the necessary information to be conveyed. However, in medical imaging, passing over the information two dimensions at a time in succession can be an insufficient method of conveying the proper information. Clearly representing the aspect of the information the user needs to analyze, or, in other words, finding the right method by which the user wishes to traverse the information, is one of the greatest challenges and also one of the most important considerations of this type of visualization. With respect to medical imaging, volume visualization is generally considered a way of viewing the structure of the anatomy in 3D. Thus, as mentioned earlier, the main goal of volume visualization is to represent higher-dimensional data on a two-dimensional computer screen for visual inspection. Unlike other kinds of information of similar dimensionality, however, it is best for the user to decide which two-dimensional perspective is desired for such inspection. In the case of the work done in this thesis, we use visualizations of the same 4DCT datasets which we have used for deformable registration calculations, providing a superimposed anatomical frame of reference for analysis. An example visualization can be seen in Figure 2.4, a

Figure 2.4: An example of a visualization of a single respiratory phase of a 4DCT visualization showing lung, bone, and skin.

rendering of the bone and lung tissue that has been cropped to show a cross section. While it is common to see 3D renderings of human anatomy in this field, it is important to note that there are several methods of obtaining these visualizations, with important distinctions between them. We separate these into two categories: 1) direct volume rendering and 2) explicit volume rendering. With explicit volume rendering, the boundaries of the structure being visualized are explicitly

defined, calculated, and then projected onto the 2D viewing plane of the user. On the other hand, direct volume rendering only calculates the surfaces which will be projected onto the 2D viewing plane, making it a faster alternative. We chose to work with direct volume rendering in our analysis because of its inherent speed advantage. We note that there is no loss of information from the user's perspective with this method, especially from the standpoint of analyzing and editing deformable registration parameters. It is because the renderings act as a reference for visual comparison to the independent registration calculations that explicit surfaces are not necessary.

Radiotherapy Treatment Planning

In this context, we refer to radiation therapy as a method of treating patients with various types of cancerous tumors. The goal of the treatment is to kill cancerous cells by exposing them to ionizing radiation. Ionizing radiation refers to high-energy particles that cause atoms to lose an electron, or ionize. Traditionally, the view has been that exposure to this type of radiation can be characterized in terms of its effect on DNA and leads to a number of different possible outcomes for cells:

- DNA damage is detectable and repairable by the cells' own internal mechanisms
- DNA damage is irreparable and cells go through apoptosis, thus killing the cells
- DNA mutation occurs, potentially causing cancer

More recently, however, alternative insights into the process of cell death after exposure to ionizing radiation have been presented which question this first perspective

because cell-death pathways, in which direct relations between cell killing and DNA damage diverge, have been reported. These pathways include membrane-dependent signaling pathways and bystander responses (when cells respond not to direct radiation exposure but to the irradiation of their neighboring cells). New insights into the mechanisms of these responses, coupled with technological advances in targeting of cells in experimental systems with microbeams, have led to a reassessment of the model of how cells are killed by ionizing radiation [34]. However, from the perspective of this work, it is clear that when exposed to enough ionizing radiation, this treatment method will kill healthy tissue as well as cancerous tissue. Minimizing such damage is an obvious goal; tumors that are located very close to vital organs are a good example of why this is the case. While it may successfully remove the cancerous cells from the area, the treatment may inflict irreparable damage on those organs and put the patient at equal or even greater risk. To complicate things further, this becomes more of a concern for a physician planning a patient's treatment when the tumor moves significantly due to cardiac activity or respiration, and such motion can often lead to lower treatment success rates. Certainly, this already poses a significant challenge for physicians from a clinical standpoint, but it is worth noting that the current task of planning such treatment is also a difficult process that inefficiently handles the high-dimensional data that is available, compounding the overall difficulty. Specifically, treatment planning in this field is done by experts for whom working with two-dimensional information has become commonplace. In effect, this requires looking at four-dimensional information by only working with a single, two-dimensional subset of the total image at one time. One can draw the analogy that this is similar to viewing a movie over its entire

duration only one pixel at a time before going back to the beginning to view the next pixel. The absurdity of this analogy should serve to illustrate the corresponding degree of inefficiency of planning treatments only two dimensions at a time. Of course, it is only fair to note that a part of this inefficiency stems from the fact that imaging methods often used for this type of treatment planning, such as 4D X-Ray Computed Tomography (4DCT), are imperfect in their ability to capture all of the information about internal anatomical motion. As mentioned above, image reconstruction is imperfect, and thus there is rightly inherent distrust of additional processing that may amplify existing noise or even introduce new noise.

Four-Field Box Technique

The Four-Field Box technique [8] is a radiation therapy method in which radiation is administered in four directions: Anterior-Posterior, Posterior-Anterior, Right-Lateral, and Left-Lateral. In the Anterior-Posterior direction, the beam goes from the anterior, or front, of the patient toward the posterior, or back. The reverse is true for the Posterior-Anterior direction; the beam goes from the back toward the front of the patient. The Right-Lateral and Left-Lateral directions are also relative to the patient, where the Right-Lateral beam goes from the patient's own right side to the left side. Similarly, the Left-Lateral beam goes from the patient's own left side to the right side [7]. The Four-Field Box technique gets its name from the intersection of the four beams, which forms a box shape. An example is shown in Figure 2.5. Here, the letters A, P, R, and L specify the Anterior, Posterior, Right, and Left sides of the patient (and

Figure 2.5: Example of four beams administered in the Anterior, Posterior, Right, and Left directions, forming the shape of a box (source: a7www.igd.fhg.de).

directions of the beams), respectively [7]. In addition to direction, each beam is administered with a specific energy, which refers to its wavelength (or, conversely, its frequency). Shorter wavelengths are associated with higher energy, which can penetrate deeper into the tissue. Typical energies are between 6 MV and 18 MV. The amount of radiation that the linear accelerator outputs is measured in monitor units (MU). One monitor unit corresponds to one centigray of radiation¹ [7]. Treatment planning for this type of technique is done such that physicians first segment the individual slices of the CT image set to highlight the location of cancerous cells, then plan the proper dosages according to the expected density of the tissue in the way of each beam before it reaches the tumor. Due to the number of beams used in this technique, it is clear that even small errors in the planning physician's understanding of any potential motion involved can have severe consequences. We will briefly characterize respiratory motion in the following section.

¹ The gray (symbol: Gy) is the SI unit of absorbed dose. One gray is the absorption of one joule of radiation energy by one kilogram of matter.
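As a small worked example of the units just described, and assuming only the one-MU-per-centigray calibration stated above (the helper names are ours; real planning systems fold in depth dose, field size, and other correction factors):

```cpp
// Unit helpers under the stated calibration: 1 MU == 1 cGy == 0.01 Gy.
// Illustrative only; not a clinical dose calculation.
double doseGyFromMU(double mu) { return mu * 0.01; }   // MU -> absorbed dose in Gy
double muForDoseGy(double gy)  { return gy * 100.0; }  // prescribed Gy -> MU
```

For instance, under this calibration a 200 MU beam corresponds to 2 Gy of absorbed dose.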

Respiratory Motion

Motion of internal anatomy due to respiration is a significant challenge for radiation therapy. Specifically, if a tumor is located in or near the lung, its motion is very difficult to characterize. The general field of image-guided radiotherapy aims to tackle this very difficult problem. One method of handling motion is to turn the radiation on and off when the tumor is expected to be correctly targeted by the beam. This has several problems associated with it, because even if the patient breathes exactly the same way each time, it isn't necessarily true that the relevant internal anatomical motion will be identical for each respiratory cycle [14]. A more ambitious method is to synchronize the movement of the beams to match that of the target [14]. While this would be an ideal approach if the tumor could be imaged appropriately in real time, even then there would be a need for better models that could more accurately characterize the motion such that target-tracking algorithms could perform correctly.

2.2 Deformable Registration

Given all of the challenges, described above, that come as a result of imaging anatomy in motion, one proposed solution is to employ image analysis methods to allow for a better understanding of that motion. With a better understanding of this motion, improved methods for image reconstruction and even treatment planning could be conceived. One such vein of research in this area attempts to address this type of analysis by using image registration.

Image registration is a process to determine a transformation that can relate the position of features in one image with the position of the corresponding features in another image. For example, the features that one would use to perform this matching could be anything from simple but specific pixel values to edges detected by more complicated processing. In this case, we wish to relate the features in one time instant to those in the next, for example. Among our considerations, we note that we do not wish to make too many assumptions about the contents of medical images. We consider every voxel of the imaged volume, as opposed to a subset that we assume corresponds to a tumor, for example, and thus we use more general models of deformation, not specific to this problem, that account for these kinds of features. These considerations and design decisions each have various tradeoffs. One such approach, spline-based free-form registration, is capable of modeling a wide variety of deformations [21]. Also, by definition, it is constrained such that it ensures a smooth deformation field. A deformation field is represented as a weighted sum of spline basis functions, which have parameters that adjust such smoothness. B-splines are one of the most widely used basis functions for this purpose.

B-spline Transformation Model

In the B-spline transformation model [36], the deformation vectors are computed using B-spline interpolation from the deformation values of points located on a coarse grid, which is usually referred to as the B-spline grid. The parameter space of the B-spline deformation is composed of the set of all the deformations associated with

the nodes of the B-spline grid. A cubic B-spline in matrix form is:

$$
S_i(t) = \begin{bmatrix} t^3 & t^2 & t & 1 \end{bmatrix}
\frac{1}{6}
\begin{bmatrix} -1 & 3 & -3 & 1 \\ 3 & -6 & 3 & 0 \\ -3 & 0 & 3 & 0 \\ 1 & 4 & 1 & 0 \end{bmatrix}
\begin{bmatrix} p_{i-1} \\ p_i \\ p_{i+1} \\ p_{i+2} \end{bmatrix},
\qquad t \in [0, 1]
\tag{2.1}
$$

where the $p_j$ are the control points, the parameter $t$ determines the progression along the knot vector (the row vector of the powers of $t$ from 3 to 0), and the middle matrix is the standard uniform cubic B-spline basis matrix. As a result, one can follow the spline $S_i(t)$ to the next time phase to find where the model places a specific point with respect to the control points. Note that B-splines have a finite support region, and thus changing the weight or contribution of each basis function affects only a specific portion of the overall deformation. By increasing the resolution of the B-spline grid, more complex and localized deformations can be modeled.

Landmark-based Splines

An alternative to the B-spline deformation model is landmark-based splines, typically implemented using thin-plate splines [12] or other radial basis functions. In this approach, a set of landmark correspondence matches is formed between points in a pair of images. The displacements of the correspondences are used to define a deformation map, which smoothly interpolates or approximates the point pairs. One approach of particular interest is radial basis functions that have finite support, such as the Wendland functions [25]. Because these functions only deform a small region of the image, the deformations can be quickly computed and updated for interactive applications. Given $N$ control points, located at $x_i$ and displaced by an amount $\lambda_i$,

the deformation $\nu$ at location $x$ is given as:

$$
\nu(x) = \sum_{i=1}^{N} \lambda_i \, \phi(\lVert x - x_i \rVert),
\tag{2.2}
$$

where $\phi$ is an appropriate Wendland function, such as:

$$
\phi(r) =
\begin{cases}
\left(1 - \dfrac{r}{\sigma}\right)^{2}, & r \le \sigma \\[4pt]
0, & \text{otherwise.}
\end{cases}
\tag{2.3}
$$

In this method, the function $\phi$ serves as a weight whose effect on the current deformation $\nu$ changes based on the distance to the control points. To be more specific, the variable $\sigma$ controls the width of the adjustment, usually on the order of one to two centimeters for human anatomy, and the weight that enters the deformation calculation is based on the input $r$, defined as the Euclidean distance between the current point $x$ and the control point $x_i$. Another way to describe the deformation $\nu$ is that it maps any point $x$ in one time phase to a point $\nu(x)$ in the time phase that corresponds to the control points in the calculation. Several of these Wendland functions are used together to form a complete vector field, which defines the motion of organs of the anatomy [21].

2.3 SCIRun Problem Solving Environment

Developed by the Scientific Computing and Imaging (SCI) Institute at the University of Utah, SCIRun is a problem solving environment designed to allow researchers the freedom to build programs to perform various scientific computing tasks [1]. In our particular application, a dataflow network of modules already existed that allowed

us to do direct volume rendering. The network is a simplified version of the SCIRun PowerApp called BioImage [2]. Enhancements were made to that network to allow visualization of 4DCT datasets and point paths by cycling through them one phase at a time. Building on the existing tools, we provided more efficient and interactive ways of analyzing tumor motion. As shown in Figure 2.6, the visual representation of the dataflow network allows us to make a connection to the base system by dragging a pipe from our module to the relevant module in the existing network. The viewing window, the central module to which almost all dataflow eventually leads, is especially useful for our application. This graphical viewport allows navigation of the 3D environment in which we work by zooming, panning, and rotating. Furthermore, the viewing window passes event callbacks back to certain classes of objects, which lets module developers create interactive, draggable, clickable tools. However, movement of such tools is limited to the viewing plane; by rotating the viewing plane, one can change the directions of motion of the interactive tools.

Development

Development for SCIRun is done by connecting the dataflow of independently functioning modules into fully functioning programs. These programs are called dataflow networks and are created in a visual editing environment in which dataflow connections can be made in a point-and-click manner. Special-purpose dataflow networks, called PowerApps, can also be made to perform collections of application-specific tasks, accessed via a single user interface.
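Although the tools in this chapter are built inside SCIRun, the deformation models of the previous section are easy to prototype in isolation. The following sketch, written in Python purely for illustration (none of it is SCIRun code, and all numbers are made up), evaluates one uniform cubic B-spline segment as in equation (2.1) and a Wendland-weighted displacement as in equations (2.2) and (2.3):

```python
import math

def bspline_segment(t, p):
    """Evaluate one uniform cubic B-spline segment S_i(t), t in [0, 1],
    from four scalar control points p = (p_{i-1}, p_i, p_{i+1}, p_{i+2})."""
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6.0
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0
    b3 = t**3 / 6.0
    return b0 * p[0] + b1 * p[1] + b2 * p[2] + b3 * p[3]

def wendland(r, sigma):
    """Finite-support weight phi(r): nonzero only within radius sigma."""
    return (1.0 - r / sigma) ** 2 if r <= sigma else 0.0

def deform(x, controls, displacements, sigma):
    """nu(x) = sum_i lambda_i * phi(||x - x_i||) over 3D control points."""
    v = [0.0, 0.0, 0.0]
    for xi, lam in zip(controls, displacements):
        w = wendland(math.dist(x, xi), sigma)  # Euclidean distance ||x - x_i||
        for j in range(3):
            v[j] += w * lam[j]
    return tuple(v)

# The B-spline basis weights sum to one, so equal control points are
# reproduced (approximately, up to floating-point rounding):
print(bspline_segment(0.5, (2.0, 2.0, 2.0, 2.0)))
# A control point displaced 5 mm in z has full effect at its own location
# and no effect outside the sigma = 15 mm support:
print(deform((0, 0, 0), [(0, 0, 0)], [(0, 0, 5.0)], 15.0))   # -> (0.0, 0.0, 5.0)
print(deform((30, 0, 0), [(0, 0, 0)], [(0, 0, 5.0)], 15.0))  # -> (0.0, 0.0, 0.0)
```

The finite support of the Wendland weight is what makes interactive updates cheap: moving one control point only requires re-evaluating points within σ of it.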

Figure 2.6: A simplified direct volume rendering SCIRun dataflow network with added modules, the focus of this research, at the bottom.

BioImage PowerApp

Specifically, the BioImage PowerApp has the goal of providing a unified source for all of the built-in medical image visualization support that comes with SCIRun. While many basic and advanced visualization features are supported, certain general classes of analysis tools are lacking, so the underlying modules and dataflow network serve as a good starting point for developing such tools.

Modules

Modules are written in the C++ programming language and function as independent entities so long as sufficient inputs and settings are provided. This is facilitated by each new module inheriting from the general C++ Module class, provided as part of the SCIRun headers, and thus having the same familiar interfaces by which SCIRun knows how to handle its operation. The most important of these is the execute function, which is analogous to the main function in any C or C++ program. Additional generic hooks for user interface connections exist as well, although these are not mandatory. SCIRun is made aware of each module's capabilities by the parameters in the module's XML definition file. As stated earlier, while each module functions independently, it is obligated to adhere to the supported input and output types specified in this file. Additionally, if a module is specified to have its own user interface, it must implement the corresponding functions inherited from the Module class to handle such interaction. At the time this was written, user interface development for SCIRun modules was done in the TCL/TK scripting language as a file independent from the module

C++ source code and XML definition file. While plans to move to either a GTK or an OpenGL-based user interface scheme had been discussed as possible replacements, this discussion covers the current TCL/TK setup. Most importantly, SCIRun facilitates interaction between modules and their user interfaces either by connecting the execution of a specific TCL/TK function to that of a module's C++ member function, or by marrying the values of variables in each language such that a change in one corresponds to a change in the other.

Dataflow Networks

As mentioned earlier, dataflow networks are the connections of modules that form more meaningfully functioning applications on a larger scale. Development of dataflow networks is primarily done within the base SCIRun application's visual editing environment. Modules are chosen, dropped into this environment, and can be dragged to any desired position. Each module's input ports can be connected to applicable output ports by clicking and dragging from one port to the other, and likewise for connecting output ports to input ports. Within this environment, user interface fields can be edited, changing the function parameters of the corresponding modules, and each module can be executed separately or the entire network can be executed as a whole. To make reproduction of networks easy, the ability to save and load dataflow networks is provided. While the above is the most common way to develop SCIRun dataflow networks, a lesser-known method is one taken advantage of by several SCIRun PowerApps: using the TCL/TK user interface scripts to dynamically add modules, edit input/output port connections, and edit the user interface parameters of each module. This is a considerably more advanced method that is not documented and was discovered as a part

of this research when attempting to assimilate our own modules into an independent version of the BioImage PowerApp. The disadvantage is that, while this allows dynamically reconfigured dataflow networks, the ease with which dataflow networks are intended to be created and edited is considerably diminished.

Volume Rendering

We refer to the means by which volume visualization is achieved as volume rendering. Here we provide background on two algorithms used for volume rendering. As mentioned in the section on volume visualization, these two algorithms correspond to the two methods of visualization addressed in this work: explicit volume rendering (marching cubes) and direct volume rendering.

Marching Cubes

Also referred to as isosurface extraction, the Marching Cubes algorithm and its variants (such as Marching Tetrahedra) are used to extract explicit surfaces for a volume, typically summarized by the voxels in an image set whose values fall within a specified range of an identifying voxel value, the isovalue. In the case of Marching Cubes, this is achieved by analyzing the eight vertices of a cube and, based on how their voxel values lie relative to the specified isovalue, determining whether each vertex is classified as belonging within the volume or not. Based on these classifications, one or more surfaces are defined within the cube. Once classification is done, this step can be simplified considerably, to only 15 possible surface combinations within each cube, as shown in Figure 2.7. Connecting all of the adjacent cubes in the image set yields isosurfaces, as can be seen in Figure 2.8, that correspond to the isovalue for which the algorithm

Figure 2.7: The 15 possible surface combinations for the contents of each cube in the Marching Cubes algorithm [40].
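The per-cube classification step maps naturally onto bit operations. As a minimal illustrative sketch (not the thesis implementation), the inside/outside test for the eight corners of one cube can be packed into an 8-bit case index; a full implementation uses that index to select one of the precomputed surface configurations, which symmetry reduces to the 15 combinations of Figure 2.7:

```python
def cube_index(corner_values, isovalue):
    """Pack the inside/outside classification of the eight cube corners
    into an 8-bit index (one bit per corner); a full Marching Cubes
    implementation uses this index to look up the cube's surface case."""
    index = 0
    for bit, value in enumerate(corner_values):
        if value >= isovalue:  # corner classified as inside the volume
            index |= 1 << bit
    return index

# All corners below the isovalue: empty cube, no surface.
print(cube_index([10, 20, 15, 12, 18, 11, 9, 14], isovalue=100))   # -> 0
# One corner above the isovalue: a single triangle clips that corner.
print(cube_index([10, 20, 15, 12, 18, 11, 9, 200], isovalue=100))  # -> 128
```
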

Figure 2.8: An example of adjacent cubes, each containing explicit surfaces, combining to form a volume [13].

was run. As explained above, this creates explicit surfaces within each cube and hence defines explicit surfaces for each volume corresponding to the specified isovalue. The benefit is that it is easy to determine when a point is outside, inside, or intersecting the surface of a volume, because that surface is piecewise flat and has well-defined vertices that were already calculated for the visualization. However, calculating these vertices can be time-consuming, so specific applications may prefer a faster alternative, as described below.

Direct Volume Rendering

Using a substantially different approach, direct volume rendering exploits the fact that three dimensions are projected onto two-dimensional viewing surfaces anyway

to eliminate the need to calculate explicit vertices and surfaces for visualization. Another way to look at this concept: isosurface extraction starts with the image set, creates a three-dimensional representation, and then projects it onto the two-dimensional viewing surface, whereas direct volume rendering starts from the viewing surface and determines what the projection should look like by looking up the projection results directly from the image set. How this reverse process is done is application specific, but one approach is to use look-up tables. For example, the gradient of the image set is a relatively fast calculation whose magnitude contains information about where the surfaces within the image lie. A gradient can also be calculated locally very quickly, requiring little computational overhead when following a projection from the viewing surface back into the volume as described above. Thus, one such look-up table method is to create a colormap indexed by gradient magnitude and isovalue. In practice, this achieves a visual effect very similar to that of Marching Cubes, and provides a fast alternative when explicit surfaces are not needed. An additional benefit of this method is the ability to combine the visualizations for multiple isovalues, as seen in Figure 2.9, with very little additional calculation cost due to the efficiency of the look-up table.
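As an illustrative sketch of that look-up idea (a toy Python fragment, not the SCIRun renderer; the volume and table here are invented), a local central-difference gradient is computed on demand, and the pair (voxel value, gradient magnitude) indexes a precomputed table of colors and opacities:

```python
def gradient_magnitude(vol, x, y, z):
    """Local central-difference gradient magnitude at one voxel
    (cheap to compute on demand while following a projection ray)."""
    gx = (vol[x + 1][y][z] - vol[x - 1][y][z]) / 2.0
    gy = (vol[x][y + 1][z] - vol[x][y - 1][z]) / 2.0
    gz = (vol[x][y][z + 1] - vol[x][y][z - 1]) / 2.0
    return (gx * gx + gy * gy + gz * gz) ** 0.5

def classify(value, grad_mag, table):
    """2D transfer-function lookup: (voxel value, gradient magnitude)
    indexes a precomputed color/opacity table."""
    vi = min(int(value), len(table) - 1)
    gi = min(int(grad_mag), len(table[0]) - 1)
    return table[vi][gi]

# A tiny 3x3x3 volume with one bright neighbor produces a gradient
# magnitude of 5 at the center voxel:
vol = [[[0.0] * 3 for _ in range(3)] for _ in range(3)]
vol[2][1][1] = 10.0
g = gradient_magnitude(vol, 1, 1, 1)
print(g)  # -> 5.0

# Toy opacity table: opaque only where the gradient magnitude is high,
# which makes surfaces (large gradients) visible, mimicking an isosurface.
table = [[1.0 if gm >= 3 else 0.0 for gm in range(8)] for _ in range(16)]
print(classify(vol[1][1][1], g, table))  # -> 1.0 (opaque: on a surface)
```
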

Figure 2.9: An example of a direct volume rendering of bone and muscle tissue: two different ranges of isovalues that were combined with gradient magnitude for the look-up table.

Chapter 3

View Trajectory Loop Tool

3.1 Motivation for the View Trajectory Loop Tool

Some paths of motion, like the swing of a pendulum, are easy to follow with the human eye just by observing a few iterations of the behavior. Other trajectories, like the flight of a bee, can be exceptionally difficult to understand by the same kind of observation. Furthermore, it can be difficult to understand the trajectories of even several simultaneously swinging pendulums all at once. While the complexity of internal anatomical motion, as interpreted by purposely smoothed deformable registration results, may not be quite as complex as the flight of a bee, the scenario with more than one pendulum is a perfect illustration of why it can be difficult to follow many simple behaviors simultaneously. Within 4DCT image sets, it is of interest to researchers to understand the movements of various regions in close proximity. As a consequence, it is of additional interest to understand the ways in which their modeling of this motion (in the case of this work, via deformable registration) succeeds and fails at helping them understand this type

of behavior. The trajectories of individual voxels over the respiratory cycle are not necessarily very complicated and, in fact, we have observed that they almost never are. However, understanding this motion for the entire image set simultaneously is very difficult, because it invariably requires interpreting the trajectories via some form of complicated animation. It may not always be the case that one wishes to observe the trajectories of all of the voxels in an image set. Instead, it is reasonable to expect that one may wish to analyze either the trajectories of several loosely selected voxels or the trajectory of one very specifically selected voxel over all of the respiratory phases. For this type of interaction, traditional visualization methods for 4DCT image sets are not suitable. Thus the motivation for this work comes from the desire to view only a select few trajectories at the same time, without the need to view an animation. In other words, the View Trajectory Loop Tool enables one to visually analyze the trajectories of a few selected voxels over all of the available respiratory phases in a single, static visualization.

3.2 Development of a Trajectory Viewing Cursor

Given the motivation for this tool, we encountered several design considerations that were important to address. The primary goal being visualization of one or more trajectories, we designed the tool to be scalable to the number of trajectory visualizations the user desires. With this in mind, and with the benefit of a flexible SCIRun development environment, we were able to create the tool in such

a way that it operates independently of the traditional three-dimensional anatomical visualization capabilities of SCIRun while still providing for user interaction. Specifically, we took advantage of a predefined visual component of the Widget class called the PointWidget. This object, regardless of which module in the SCIRun dataflow network creates it, can be selected, dragged, and dropped by the user in the viewing window. Furthermore, when properly used via inheritance, this object triggers feedback to the module that created it for exactly the events that correspond to being selected, dragged, and dropped. This allowed us to make an independently functioning module which displays only the one trajectory corresponding to the voxel nearest its cursor, the underlying PointWidget. If the user wants to view N trajectories, all that is required is to insert and connect N separate modules for this purpose and then interact with them all together in the viewing window. The deformable registration results were read from external vector field files as the relevant data was requested by the visualization tool. This was facilitated by the point path application developed by Gregory C. Sharp, available in the Appendix. This application, created specifically for this visualization project, parsed and traversed the deformable registration results to form a point-by-point trajectory for every requested voxel. In summary, when supplied the coordinates of the voxel for which a trajectory was desired, the application's output was an ASCII text file with as many coordinate locations as respiratory phases, from which we extracted the relevant trajectory information by reading the file into SCIRun as a matrix and parsing it row by row. The agreed-upon ASCII data format was defined as

0      x_0      y_0      z_0
1      x_1      y_1      z_1
...
N-1    x_{N-1}  y_{N-1}  z_{N-1}

where there are N rows, one for each respiratory phase, and the first column holds the index of each respiratory phase. The remaining elements in each row form a tuple (x_i, y_i, z_i) giving the coordinates of the voxel at the i-th respiratory phase. In the file, columns were delimited by white space and every new line started a new row. Once these values were read, a small amount of processing was still required before the data was ready to use. Because the coordinate systems of the visualization environment and the deformable registration results agreed in scale but not in translation, each coordinate needed to be shifted by a constant amount that we calculated by comparing reference points. To ensure that this would not introduce errors, we compared several stationary reference points as well as several anatomical reference points to make sure that the resulting translation was correct. This discrepancy was believed to be caused by inconsistent handling of the coordinate system by the SCIRun visualization software, so we needed to accommodate the shift internally within our software. After the shift, in order to obtain trajectory vectors from the voxel coordinate locations p_i, we performed the simple calculation for each vector v_i:

v_i = p_i - p_{i-1}    (3.1)

where the respiratory phases are assumed to repeat circularly, making that calculation possible since

p_0 = p_N    (3.2)

or, in other words,

p_{-1} = p_{N-1}    (3.3)

given, once again, that there are N respiratory phases. These vectors v_i were then displayed by this tool, rather than the shifted output of the point path application.

Description of Visual Elements

To represent a 4D trajectory in a 3D graphical environment, we have developed a cursor that displays the path of movement of a single voxel over time. A user can move the cursor by clicking and dragging it in a motion plane parallel to the viewing plane. At its new location, the cursor displays the trajectory of the voxel at that point by showing a line path. The direction and magnitude of the motion during each time phase are indicated by a color transition from blue to red. All trajectories start and end at the same shades of blue and red, but may display less of certain intermediate shades due to very low magnitude movements during those time phases. This can be very useful when comparing two trajectories of similar shape but very different color patterns, indicating that despite having followed similarly shaped paths, each voxel followed its path at a different speed.
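Returning to the point path data underlying this display: the ASCII format and the circular difference of equations (3.1)-(3.3) can be sketched as follows (an illustrative Python fragment with made-up coordinates, not the SCIRun matrix reader itself):

```python
def parse_point_path(text):
    """Parse point path rows of the form 'phase x y z' (whitespace-delimited)."""
    points = []
    for line in text.strip().splitlines():
        phase, x, y, z = line.split()
        points.append((float(x), float(y), float(z)))
    return points

def trajectory_vectors(points):
    """v_i = p_i - p_{i-1}, with phases wrapping circularly (p_{-1} = p_{N-1});
    Python's negative indexing provides the wrap-around for free at i = 0."""
    return [tuple(a - b for a, b in zip(points[i], points[i - 1]))
            for i in range(len(points))]

sample = """0 10.0 20.0 30.0
1 10.5 20.0 29.0
2 10.0 20.0 30.0"""
vecs = trajectory_vectors(parse_point_path(sample))
print(vecs[1])  # -> (0.5, 0.0, -1.0)
# Because the loop is closed (equation 3.2), the vectors sum to zero.
```
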

Figure 3.1: (a) Visualization of bone and lung tissue ("Lung Branches and Bone") and (b) a cropped version of the branches of the lungs ("Cropped Lung Branches"). Although it is possible to analyze trajectories within these types of visualization, we provide examples of each tool showing only bony anatomy, for visual clarity.

User Interaction

The visual nature of these tools provides a definite improvement in the way tumor motion analysis is performed. The user has a rich set of visualization capabilities in our system; volume rendering of 4DCT datasets is capable of showing many different kinds of tissue. Figure 3.1 shows two examples of different kinds of tissue that can be visualized. In Figure 3.1(a) we show how the lungs and bone can be displayed simultaneously, demonstrating that our visualization tools are not strictly limited to bone. Figure 3.1(b) shows the branches of a set of lungs that have been cropped to show a different perspective; lung tissue is an important kind of tissue whose motion must be understood in order to treat tumors located within it. This illustrates the ability to create helpful perspectives of the data by methods such as cropping and by visualizing other types of tissue that the user wishes to see. For the rest of the figures, however, we use renderings of bony anatomy only, to avoid cluttering the view of our tools. This is less of a concern when viewing them together in an interactive environment.

Figure 3.2: Viewing several trajectories in the lung while visualizing surrounding bony anatomy (right) and while zoomed in (left). Trajectories are represented as line loops that make a smooth transition from blue to red in even increments across the respiratory phases.
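The even blue-to-red increments in the figure amount to a linear interpolation over the phase index. A minimal sketch (illustrative only, not the actual SCIRun colormap code):

```python
def phase_color(i, n):
    """Color of the trajectory segment for phase i of n: blue at the first
    phase, red at the last, in even increments; returns (r, g, b) in [0, 1]."""
    t = i / (n - 1)
    return (t, 0.0, 1.0 - t)

n_phases = 5
print(phase_color(0, n_phases))             # -> (0.0, 0.0, 1.0): pure blue
print(phase_color(n_phases - 1, n_phases))  # -> (1.0, 0.0, 0.0): pure red
```

Because the increments are even per phase rather than per unit of distance, a segment traversed slowly (small displacement) occupies little of the loop's length, which is exactly what makes the color pattern reveal speed differences between similarly shaped trajectories.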

The trajectory loop tool's purpose is to facilitate rapid analysis of trajectories within the visual environment. We are able to leave the trajectory loop rendered at every position the cursor has visited (see Figure 3.2), showing a trail of the several loops that have been visualized. In this figure, the tool was used to analyze the extent to which the registration algorithm detected motion at various spatial locations within the lung. As expected, movements became smaller as the cursor was used to inspect areas closer to bone. On the other hand, trajectory loops closer to the lower lung showed significant motion.

Chapter 4

Edit Point Path Tool

4.1 Motivation for the Edit Point Path Tool

In order to fully interact with the deformable registration results, simply viewing individual trajectories may not always be enough. There may be times when the user identifies errors by analyzing the deformable registration visualizations and wishes to make changes to the results within the visual environment that can be reviewed later. In such a case, while the View Trajectory Loop Tool would be useful for finding the initial point of analysis that raised concern, it would be unable to perform any edits, since it is limited to visualization. The motivation for the Edit Point Path Tool is that, given a corresponding and simultaneously visualized anatomical background, the best way for a user to correct observed anomalous trajectory results is to mark and edit them in place, within the same visual setting in which they were witnessed. Furthermore, because results like deformable registration are purposely smoothed, changes should be reflected over an entire region of influence determined by some radius, removing the

need to edit every individual trajectory within the bounds of that radius one by one.

4.2 Development of a Trajectory Editor

This tool takes advantage of the same development components used by the View Trajectory Loop Tool; the major difference is that several movable cursors were needed per editing tool, so the PointWidget cursors had to be organized accordingly. Maintaining a list of the visual elements, one for each respiratory phase, thus became important, as did finding a way to visually distinguish the cursors corresponding to each of the respiratory phases. Specifically, this visualization challenge required finding and editing the internal parameters of each PointWidget object so that we could change its color at the appropriate times. When a normal cursor is selected and moved, its selection is indicated by a change in color from gray to red, and its release changes the color back. To prevent confusion during interaction with this tool, we decided it was best to color the cursor that corresponds to the respiratory phase currently being visualized green instead of gray. Furthermore, we decided to ignore the select, drag, and drop events of all cursors that did not correspond to the current respiratory phase. Thus only one cursor can be moved at a time, somewhat limiting the editing ability of the tool, but more elegantly solving the organizational problem of distinguishing between tightly packed cursors in the visualization. As with the trajectory viewing tool, the deformable registration results were read from external vector field files as the relevant data was requested by the visualization tool. This was facilitated by the point path application developed by Gregory C.

Sharp, available in the Appendix. In summary, when supplied the coordinates of the voxel for which a trajectory was desired, the application's output was an ASCII text file with as many coordinate locations as respiratory phases, from which we extracted the relevant trajectory information by reading the file into SCIRun as a matrix and parsing it row by row.

4.3 Materials and Methods

4.3.1 Description of Visual Elements

The Edit Point Path Tool is a collection of points, or cursors, that indicate the locations of a specified voxel over all of the available respiratory phases in the data. While each cursor is editable as mentioned above, only one cursor is editable at each respiratory phase of the background anatomical visualization. The easiest way to interpret the information shown by the tool is to imagine that, for a specified voxel, one can view all of the frames of a movie of its motion simultaneously. In this analogy, each frame of the movie corresponds to a respiratory phase in the visualization. The visual effect is as if one can view, at one time, all of the places the voxel visits over its trajectory. For the user, a comfortable understanding of the nature of this visualization allows appropriate edits to be made. An improperly interpreted visual element can lead to confusion about which cursor represents which respiratory phase of the trajectory or, even worse, about which voxel is being edited by the tool at that time. Once the visualization is properly interpreted, interaction and editing are intuitively learned and utilized, as described next.

4.3.2 User Interaction

Once a user has identified a region of interest using our tool, they can explore the region in greater detail. Instead of displaying a line path, this tool displays several cursors to convey similar information without using lines. To prevent confusion about the ordering, the module connects to the same tool that allows the user to select the 4DCT phase currently being viewed, and highlights the corresponding cursor with a different color. At each respiratory phase, the path of a voxel can be followed through this tool and a volume visualization simultaneously. If it is observed that the trajectory and the visualization do not agree, the user has the option of editing the trajectory by moving the cursors. It should be noted that this does not modify the 4DCT data itself; it only supplements the output of the registration algorithm. Also, moving the cursor will not only affect the voxel whose trajectory is being viewed, but will also have an attenuated effect on the surrounding area. To view the extent of this effect, the user can use several of the previously described tools to view the updated trajectory loops. If unsatisfied with the analysis of the trajectories when compared to the visualization, the user can make adjustments within this environment to improve the registration. Figure 4.1(a) shows the path editing tool, where each of the individual points can be moved independently to adjust the path to the user's specifications. The point colored green highlights the current phase of the 4DCT being visualized. Thus, if the rest of the anatomy were visible, one could see the voxel to which that specific point path belongs. While Figure 4.1(a) shows the editing tool alone, Figure 4.1(b) shows the trajectory loop tool and the path editing tool used at the same point. This may not normally be a desired way to edit a path, but in this

Figure 4.1: (a) The editing tool shown alone ("Zoomed In") with the current phase highlighted in green, and (b) the same editing tool shown with the trajectory loop superimposed ("With Loop") to demonstrate the relationship between the two tools. The point highlighted in green is edited while the others remain fixed to prevent accidental changes.

case it serves to illustrate the relationship between the two tools. Each has its own purpose for different intended uses, but this demonstrates that both represent the same registration information. When changes to the point path are committed, the tool appends the modifications to the previous registration results and refreshes the visualization. Thus, if desired, after several rounds of changes, one can go back to the modified deformable registration results and analyze what was incorrectly or insufficiently specified in the first attempt at characterizing the motion. While this work does not do so, one particularly useful extension of this tool would be to infer the appropriate adjustments to the deformable registration parameters from the interactive modifications made to the results using this set of tools.
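The attenuated region of influence mentioned earlier can be sketched with the same finite-support weighting idea used by the registration model in Chapter 2. This is an illustrative Python fragment with an assumed quadratic falloff and made-up coordinates, not the thesis implementation:

```python
import math

def falloff(r, radius):
    """Attenuation weight: 1 at the edited point, 0 beyond the influence radius."""
    return (1.0 - r / radius) ** 2 if r <= radius else 0.0

def propagate_edit(points, edited, delta, radius):
    """Apply the edit `delta` fully at points[edited] and an attenuated
    fraction of it to nearby points, pushing neighbors along with the edit."""
    center = points[edited]
    return [tuple(c + falloff(math.dist(p, center), radius) * d
                  for c, d in zip(p, delta))
            for p in points]

pts = [(0.0, 0.0, 0.0), (5.0, 0.0, 0.0), (20.0, 0.0, 0.0)]
moved = propagate_edit(pts, 0, delta=(0.0, 0.0, 2.0), radius=10.0)
print(moved[0])  # -> (0.0, 0.0, 2.0): the edited point gets the full shift
print(moved[2])  # -> (20.0, 0.0, 0.0): outside the radius, unchanged
```

The finite support keeps edits local, so only trajectories within the influence radius need to be recomputed and redrawn after each drag.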

Figure 4.2: A zoomed out (left) and more detailed (right) perspective of editing a point path while observing changes using the trajectory loop tool.

An additional point to note is that changing the visible path also affects the surrounding paths, which may or may not be visualized as well, in a way similar to how smudging tools work in image editing software. Typically, image editing software includes a tool that distorts the pixels under the cursor and, as a consequence, around the cursor by a smearing effect. Similarly, although not identically, the editing tool uses changes in the path being edited to push surrounding paths out of its own way. Intuitively, this makes sense because one would not expect internal anatomy to cross paths during its motion, and thus potential changes that may cause such effects are best dealt with in this way. By pushing aside the adjacent trajectories that might interfere with the changes being made, the tool aims to prevent such an undesirable conflict. The effect of this range of influence can be seen by using the path editing tool and several trajectory loop tools simultaneously, as shown in Figure 4.2. While in some cases pushing adjacent trajectories out of the way may be desired,


More information

Image Acquisition Systems

Image Acquisition Systems Image Acquisition Systems Goals and Terminology Conventional Radiography Axial Tomography Computer Axial Tomography (CAT) Magnetic Resonance Imaging (MRI) PET, SPECT Ultrasound Microscopy Imaging ITCS

More information

Volume Illumination & Vector Field Visualisation

Volume Illumination & Vector Field Visualisation Volume Illumination & Vector Field Visualisation Visualisation Lecture 11 Institute for Perception, Action & Behaviour School of Informatics Volume Illumination & Vector Vis. 1 Previously : Volume Rendering

More information

Computational Medical Imaging Analysis Chapter 4: Image Visualization

Computational Medical Imaging Analysis Chapter 4: Image Visualization Computational Medical Imaging Analysis Chapter 4: Image Visualization Jun Zhang Laboratory for Computational Medical Imaging & Data Analysis Department of Computer Science University of Kentucky Lexington,

More information

Scalar Visualization

Scalar Visualization Scalar Visualization 5-1 Motivation Visualizing scalar data is frequently encountered in science, engineering, and medicine, but also in daily life. Recalling from earlier, scalar datasets, or scalar fields,

More information

Volume Illumination, Contouring

Volume Illumination, Contouring Volume Illumination, Contouring Computer Animation and Visualisation Lecture 0 tkomura@inf.ed.ac.uk Institute for Perception, Action & Behaviour School of Informatics Contouring Scaler Data Overview -

More information

Effective Medium Theory, Rough Surfaces, and Moth s Eyes

Effective Medium Theory, Rough Surfaces, and Moth s Eyes Effective Medium Theory, Rough Surfaces, and Moth s Eyes R. Steven Turley, David Allred, Anthony Willey, Joseph Muhlestein, and Zephne Larsen Brigham Young University, Provo, Utah Abstract Optics in the

More information

ANATOMIA Tutorial. Fig. 1 Obtaining CT scan data

ANATOMIA Tutorial. Fig. 1 Obtaining CT scan data ANATOMIA Tutorial Step 1: Get CT scan data from Hospital Go to the hospital where you received CT scan, and request the CT scan data copied to CD-ROM media. CT scan data is personal information, and therefore,

More information

Isosurface Rendering. CSC 7443: Scientific Information Visualization

Isosurface Rendering. CSC 7443: Scientific Information Visualization Isosurface Rendering What is Isosurfacing? An isosurface is the 3D surface representing the locations of a constant scalar value within a volume A surface with the same scalar field value Isosurfaces form

More information

Digital Image Processing

Digital Image Processing Digital Image Processing SPECIAL TOPICS CT IMAGES Hamid R. Rabiee Fall 2015 What is an image? 2 Are images only about visual concepts? We ve already seen that there are other kinds of image. In this lecture

More information

Lecture notes: Object modeling

Lecture notes: Object modeling Lecture notes: Object modeling One of the classic problems in computer vision is to construct a model of an object from an image of the object. An object model has the following general principles: Compact

More information

MA 323 Geometric Modelling Course Notes: Day 21 Three Dimensional Bezier Curves, Projections and Rational Bezier Curves

MA 323 Geometric Modelling Course Notes: Day 21 Three Dimensional Bezier Curves, Projections and Rational Bezier Curves MA 323 Geometric Modelling Course Notes: Day 21 Three Dimensional Bezier Curves, Projections and Rational Bezier Curves David L. Finn Over the next few days, we will be looking at extensions of Bezier

More information

Shadow casting. What is the problem? Cone Beam Computed Tomography THE OBJECTIVES OF DIAGNOSTIC IMAGING IDEAL DIAGNOSTIC IMAGING STUDY LIMITATIONS

Shadow casting. What is the problem? Cone Beam Computed Tomography THE OBJECTIVES OF DIAGNOSTIC IMAGING IDEAL DIAGNOSTIC IMAGING STUDY LIMITATIONS Cone Beam Computed Tomography THE OBJECTIVES OF DIAGNOSTIC IMAGING Reveal pathology Reveal the anatomic truth Steven R. Singer, DDS srs2@columbia.edu IDEAL DIAGNOSTIC IMAGING STUDY Provides desired diagnostic

More information

Data Representation in Visualisation

Data Representation in Visualisation Data Representation in Visualisation Visualisation Lecture 4 Taku Komura Institute for Perception, Action & Behaviour School of Informatics Taku Komura Data Representation 1 Data Representation We have

More information

6. Parallel Volume Rendering Algorithms

6. Parallel Volume Rendering Algorithms 6. Parallel Volume Algorithms This chapter introduces a taxonomy of parallel volume rendering algorithms. In the thesis statement we claim that parallel algorithms may be described by "... how the tasks

More information

Scientific Visualization Example exam questions with commented answers

Scientific Visualization Example exam questions with commented answers Scientific Visualization Example exam questions with commented answers The theoretical part of this course is evaluated by means of a multiple- choice exam. The questions cover the material mentioned during

More information

Dynamic Reconstruction for Coded Aperture Imaging Draft Unpublished work please do not cite or distribute.

Dynamic Reconstruction for Coded Aperture Imaging Draft Unpublished work please do not cite or distribute. Dynamic Reconstruction for Coded Aperture Imaging Draft 1.0.1 Berthold K.P. Horn 2007 September 30. Unpublished work please do not cite or distribute. The dynamic reconstruction technique makes it possible

More information

Intraoperative Prostate Tracking with Slice-to-Volume Registration in MR

Intraoperative Prostate Tracking with Slice-to-Volume Registration in MR Intraoperative Prostate Tracking with Slice-to-Volume Registration in MR Sean Gill a, Purang Abolmaesumi a,b, Siddharth Vikal a, Parvin Mousavi a and Gabor Fichtinger a,b,* (a) School of Computing, Queen

More information

Lofting 3D Shapes. Abstract

Lofting 3D Shapes. Abstract Lofting 3D Shapes Robby Prescott Department of Computer Science University of Wisconsin Eau Claire Eau Claire, Wisconsin 54701 robprescott715@gmail.com Chris Johnson Department of Computer Science University

More information

Semi-Automatic Segmentation of the Patellar Cartilage in MRI

Semi-Automatic Segmentation of the Patellar Cartilage in MRI Semi-Automatic Segmentation of the Patellar Cartilage in MRI Lorenz König 1, Martin Groher 1, Andreas Keil 1, Christian Glaser 2, Maximilian Reiser 2, Nassir Navab 1 1 Chair for Computer Aided Medical

More information

Image Guidance and Beam Level Imaging in Digital Linacs

Image Guidance and Beam Level Imaging in Digital Linacs Image Guidance and Beam Level Imaging in Digital Linacs Ruijiang Li, Ph.D. Department of Radiation Oncology Stanford University School of Medicine 2014 AAPM Therapy Educational Course Disclosure Research

More information

CSC Computer Graphics

CSC Computer Graphics // CSC. Computer Graphics Lecture Kasun@dscs.sjp.ac.lk Department of Computer Science University of Sri Jayewardanepura Polygon Filling Scan-Line Polygon Fill Algorithm Span Flood-Fill Algorithm Inside-outside

More information

AUTOMATED 4 AXIS ADAYfIVE SCANNING WITH THE DIGIBOTICS LASER DIGITIZER

AUTOMATED 4 AXIS ADAYfIVE SCANNING WITH THE DIGIBOTICS LASER DIGITIZER AUTOMATED 4 AXIS ADAYfIVE SCANNING WITH THE DIGIBOTICS LASER DIGITIZER INTRODUCTION The DIGIBOT 3D Laser Digitizer is a high performance 3D input device which combines laser ranging technology, personal

More information

Volume Illumination and Segmentation

Volume Illumination and Segmentation Volume Illumination and Segmentation Computer Animation and Visualisation Lecture 13 Institute for Perception, Action & Behaviour School of Informatics Overview Volume illumination Segmentation Volume

More information

Ch. 4 Physical Principles of CT

Ch. 4 Physical Principles of CT Ch. 4 Physical Principles of CT CLRS 408: Intro to CT Department of Radiation Sciences Review: Why CT? Solution for radiography/tomography limitations Superimposition of structures Distinguishing between

More information

Prostate Detection Using Principal Component Analysis

Prostate Detection Using Principal Component Analysis Prostate Detection Using Principal Component Analysis Aamir Virani (avirani@stanford.edu) CS 229 Machine Learning Stanford University 16 December 2005 Introduction During the past two decades, computed

More information

Modern Medical Image Analysis 8DC00 Exam

Modern Medical Image Analysis 8DC00 Exam Parts of answers are inside square brackets [... ]. These parts are optional. Answers can be written in Dutch or in English, as you prefer. You can use drawings and diagrams to support your textual answers.

More information

Is deformable image registration a solved problem?

Is deformable image registration a solved problem? Is deformable image registration a solved problem? Marcel van Herk On behalf of the imaging group of the RT department of NKI/AVL Amsterdam, the Netherlands DIR 1 Image registration Find translation.deformation

More information

HOUGH TRANSFORM CS 6350 C V

HOUGH TRANSFORM CS 6350 C V HOUGH TRANSFORM CS 6350 C V HOUGH TRANSFORM The problem: Given a set of points in 2-D, find if a sub-set of these points, fall on a LINE. Hough Transform One powerful global method for detecting edges

More information

8/3/2017. Contour Assessment for Quality Assurance and Data Mining. Objective. Outline. Tom Purdie, PhD, MCCPM

8/3/2017. Contour Assessment for Quality Assurance and Data Mining. Objective. Outline. Tom Purdie, PhD, MCCPM Contour Assessment for Quality Assurance and Data Mining Tom Purdie, PhD, MCCPM Objective Understand the state-of-the-art in contour assessment for quality assurance including data mining-based techniques

More information

Scalar Algorithms: Contouring

Scalar Algorithms: Contouring Scalar Algorithms: Contouring Computer Animation and Visualisation Lecture tkomura@inf.ed.ac.uk Institute for Perception, Action & Behaviour School of Informatics Contouring Scaler Data Last Lecture...

More information

UvA-DARE (Digital Academic Repository) Motion compensation for 4D PET/CT Kruis, M.F. Link to publication

UvA-DARE (Digital Academic Repository) Motion compensation for 4D PET/CT Kruis, M.F. Link to publication UvA-DARE (Digital Academic Repository) Motion compensation for 4D PET/CT Kruis, M.F. Link to publication Citation for published version (APA): Kruis, M. F. (2014). Motion compensation for 4D PET/CT General

More information

Learning-based Neuroimage Registration

Learning-based Neuroimage Registration Learning-based Neuroimage Registration Leonid Teverovskiy and Yanxi Liu 1 October 2004 CMU-CALD-04-108, CMU-RI-TR-04-59 School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 Abstract

More information

Critique: Efficient Iris Recognition by Characterizing Key Local Variations

Critique: Efficient Iris Recognition by Characterizing Key Local Variations Critique: Efficient Iris Recognition by Characterizing Key Local Variations Authors: L. Ma, T. Tan, Y. Wang, D. Zhang Published: IEEE Transactions on Image Processing, Vol. 13, No. 6 Critique By: Christopher

More information

Constrained Diffusion Limited Aggregation in 3 Dimensions

Constrained Diffusion Limited Aggregation in 3 Dimensions Constrained Diffusion Limited Aggregation in 3 Dimensions Paul Bourke Swinburne University of Technology P. O. Box 218, Hawthorn Melbourne, Vic 3122, Australia. Email: pdb@swin.edu.au Abstract Diffusion

More information

Quantifying Three-Dimensional Deformations of Migrating Fibroblasts

Quantifying Three-Dimensional Deformations of Migrating Fibroblasts 45 Chapter 4 Quantifying Three-Dimensional Deformations of Migrating Fibroblasts This chapter presents the full-field displacements and tractions of 3T3 fibroblast cells during migration on polyacrylamide

More information

Digital Volume Correlation for Materials Characterization

Digital Volume Correlation for Materials Characterization 19 th World Conference on Non-Destructive Testing 2016 Digital Volume Correlation for Materials Characterization Enrico QUINTANA, Phillip REU, Edward JIMENEZ, Kyle THOMPSON, Sharlotte KRAMER Sandia National

More information

Respiratory Motion Compensation for Simultaneous PET/MR Based on Strongly Undersampled Radial MR Data

Respiratory Motion Compensation for Simultaneous PET/MR Based on Strongly Undersampled Radial MR Data Respiratory Motion Compensation for Simultaneous PET/MR Based on Strongly Undersampled Radial MR Data Christopher M Rank 1, Thorsten Heußer 1, Andreas Wetscherek 1, and Marc Kachelrieß 1 1 German Cancer

More information

A Multiple-Layer Flexible Mesh Template Matching Method for Nonrigid Registration between a Pelvis Model and CT Images

A Multiple-Layer Flexible Mesh Template Matching Method for Nonrigid Registration between a Pelvis Model and CT Images A Multiple-Layer Flexible Mesh Template Matching Method for Nonrigid Registration between a Pelvis Model and CT Images Jianhua Yao 1, Russell Taylor 2 1. Diagnostic Radiology Department, Clinical Center,

More information

Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images

Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images 1 Introduction - Steve Chuang and Eric Shan - Determining object orientation in images is a well-established topic

More information

Object Identification in Ultrasound Scans

Object Identification in Ultrasound Scans Object Identification in Ultrasound Scans Wits University Dec 05, 2012 Roadmap Introduction to the problem Motivation Related Work Our approach Expected Results Introduction Nowadays, imaging devices like

More information

The Ball-Pivoting Algorithm for Surface Reconstruction

The Ball-Pivoting Algorithm for Surface Reconstruction The Ball-Pivoting Algorithm for Surface Reconstruction 1. Briefly summarize the paper s contributions. Does it address a new problem? Does it present a new approach? Does it show new types of results?

More information

Respiratory Motion Estimation using a 3D Diaphragm Model

Respiratory Motion Estimation using a 3D Diaphragm Model Respiratory Motion Estimation using a 3D Diaphragm Model Marco Bögel 1,2, Christian Riess 1,2, Andreas Maier 1, Joachim Hornegger 1, Rebecca Fahrig 2 1 Pattern Recognition Lab, FAU Erlangen-Nürnberg 2

More information

Seminar on. A Coarse-Grain Parallel Formulation of Multilevel k-way Graph Partitioning Algorithm

Seminar on. A Coarse-Grain Parallel Formulation of Multilevel k-way Graph Partitioning Algorithm Seminar on A Coarse-Grain Parallel Formulation of Multilevel k-way Graph Partitioning Algorithm Mohammad Iftakher Uddin & Mohammad Mahfuzur Rahman Matrikel Nr: 9003357 Matrikel Nr : 9003358 Masters of

More information

Basics of treatment planning II

Basics of treatment planning II Basics of treatment planning II Sastry Vedam PhD DABR Introduction to Medical Physics III: Therapy Spring 2015 Dose calculation algorithms! Correction based! Model based 1 Dose calculation algorithms!

More information

Deviceless respiratory motion correction in PET imaging exploring the potential of novel data driven strategies

Deviceless respiratory motion correction in PET imaging exploring the potential of novel data driven strategies g Deviceless respiratory motion correction in PET imaging exploring the potential of novel data driven strategies Presented by Adam Kesner, Ph.D., DABR Assistant Professor, Division of Radiological Sciences,

More information

Scientific Visualization. CSC 7443: Scientific Information Visualization

Scientific Visualization. CSC 7443: Scientific Information Visualization Scientific Visualization Scientific Datasets Gaining insight into scientific data by representing the data by computer graphics Scientific data sources Computation Real material simulation/modeling (e.g.,

More information

TomoTherapy Related Projects. An image guidance alternative on Tomo Low dose MVCT reconstruction Patient Quality Assurance using Sinogram

TomoTherapy Related Projects. An image guidance alternative on Tomo Low dose MVCT reconstruction Patient Quality Assurance using Sinogram TomoTherapy Related Projects An image guidance alternative on Tomo Low dose MVCT reconstruction Patient Quality Assurance using Sinogram Development of A Novel Image Guidance Alternative for Patient Localization

More information

X-ray Target Reconstruction for Cyber Knife Radiosurgery Assignment for CISC/CMPE 330

X-ray Target Reconstruction for Cyber Knife Radiosurgery Assignment for CISC/CMPE 330 X-ray Target Reconstruction for Cyber Knife Radiosurgery Assignment for CISC/CMPE 330 We will perform radiosurgery of multipole liver tumors under X-ray guidance with the Cyber Knife (CK) system. The patient

More information

PURE. ViSION Edition PET/CT. Patient Comfort Put First.

PURE. ViSION Edition PET/CT. Patient Comfort Put First. PURE ViSION Edition PET/CT Patient Comfort Put First. 2 System features that put patient comfort and safety first. Oncology patients deserve the highest levels of safety and comfort during scans. Our Celesteion

More information

Robust PDF Table Locator

Robust PDF Table Locator Robust PDF Table Locator December 17, 2016 1 Introduction Data scientists rely on an abundance of tabular data stored in easy-to-machine-read formats like.csv files. Unfortunately, most government records

More information

Dose Distributions. Purpose. Isodose distributions. To familiarize the resident with dose distributions and the factors that affect them

Dose Distributions. Purpose. Isodose distributions. To familiarize the resident with dose distributions and the factors that affect them Dose Distributions George Starkschall, Ph.D. Department of Radiation Physics U.T. M.D. Anderson Cancer Center Purpose To familiarize the resident with dose distributions and the factors that affect them

More information

Transform Introduction page 96 Spatial Transforms page 97

Transform Introduction page 96 Spatial Transforms page 97 Transform Introduction page 96 Spatial Transforms page 97 Pad page 97 Subregion page 101 Resize page 104 Shift page 109 1. Correcting Wraparound Using the Shift Tool page 109 Flip page 116 2. Flipping

More information

INTRODUCTION TO MEDICAL IMAGING- 3D LOCALIZATION LAB MANUAL 1. Modifications for P551 Fall 2013 Medical Physics Laboratory

INTRODUCTION TO MEDICAL IMAGING- 3D LOCALIZATION LAB MANUAL 1. Modifications for P551 Fall 2013 Medical Physics Laboratory INTRODUCTION TO MEDICAL IMAGING- 3D LOCALIZATION LAB MANUAL 1 Modifications for P551 Fall 2013 Medical Physics Laboratory Introduction Following the introductory lab 0, this lab exercise the student through

More information

UGviewer: a medical image viewer

UGviewer: a medical image viewer Appendix A UGviewer: a medical image viewer As a complement to this master s thesis, an own medical image viewer was programmed. This piece of software lets the user visualize and compare images. Designing

More information

Advanced Image Reconstruction Methods for Photoacoustic Tomography

Advanced Image Reconstruction Methods for Photoacoustic Tomography Advanced Image Reconstruction Methods for Photoacoustic Tomography Mark A. Anastasio, Kun Wang, and Robert Schoonover Department of Biomedical Engineering Washington University in St. Louis 1 Outline Photoacoustic/thermoacoustic

More information

Structural and Syntactic Pattern Recognition

Structural and Syntactic Pattern Recognition Structural and Syntactic Pattern Recognition Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr CS 551, Fall 2017 CS 551, Fall 2017 c 2017, Selim Aksoy (Bilkent

More information

MR IMAGE SEGMENTATION

MR IMAGE SEGMENTATION MR IMAGE SEGMENTATION Prepared by : Monil Shah What is Segmentation? Partitioning a region or regions of interest in images such that each region corresponds to one or more anatomic structures Classification

More information

Simultaneous Model-based Segmentation of Multiple Objects

Simultaneous Model-based Segmentation of Multiple Objects Simultaneous Model-based Segmentation of Multiple Objects Astrid Franz 1, Robin Wolz 1, Tobias Klinder 1,2, Cristian Lorenz 1, Hans Barschdorf 1, Thomas Blaffert 1, Sebastian P. M. Dries 1, Steffen Renisch

More information

icatvision Quick Reference

icatvision Quick Reference icatvision Quick Reference Navigating the i-cat Interface This guide shows how to: View reconstructed images Use main features and tools to optimize an image. REMINDER Images are displayed as if you are

More information

Enhanced material contrast by dual-energy microct imaging

Enhanced material contrast by dual-energy microct imaging Enhanced material contrast by dual-energy microct imaging Method note Page 1 of 12 2 Method note: Dual-energy microct analysis 1. Introduction 1.1. The basis for dual energy imaging Micro-computed tomography

More information

4-D Modeling of Displacement Vector Fields for Improved Radiation Therapy

4-D Modeling of Displacement Vector Fields for Improved Radiation Therapy Virginia Commonwealth University VCU Scholars Compass Theses and Dissertations Graduate School 2010 4-D Modeling of Displacement Vector Fields for Improved Radiation Therapy Elizabeth Zachariah Virginia

More information

Getting Started. What is SAS/SPECTRAVIEW Software? CHAPTER 1

Getting Started. What is SAS/SPECTRAVIEW Software? CHAPTER 1 3 CHAPTER 1 Getting Started What is SAS/SPECTRAVIEW Software? 3 Using SAS/SPECTRAVIEW Software 5 Data Set Requirements 5 How the Software Displays Data 6 Spatial Data 6 Non-Spatial Data 7 Summary of Software

More information

ADAPTIVE TILE CODING METHODS FOR THE GENERALIZATION OF VALUE FUNCTIONS IN THE RL STATE SPACE A THESIS SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL

ADAPTIVE TILE CODING METHODS FOR THE GENERALIZATION OF VALUE FUNCTIONS IN THE RL STATE SPACE A THESIS SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL ADAPTIVE TILE CODING METHODS FOR THE GENERALIZATION OF VALUE FUNCTIONS IN THE RL STATE SPACE A THESIS SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL OF THE UNIVERSITY OF MINNESOTA BY BHARAT SIGINAM IN

More information

Fluorescence Tomography Source Reconstruction and Analysis

Fluorescence Tomography Source Reconstruction and Analysis TECHNICAL NOTE Pre-clinical in vivo imaging Fluorescence Tomography Source Reconstruction and Analysis Note: This Technical Note is part of a series for Fluorescence Imaging Tomography (FLIT). The user

More information

Interactive Treatment Planning in Cancer Radiotherapy

Interactive Treatment Planning in Cancer Radiotherapy Interactive Treatment Planning in Cancer Radiotherapy Mohammad Shakourifar Giulio Trigila Pooyan Shirvani Ghomi Abraham Abebe Sarah Couzens Laura Noreña Wenling Shang June 29, 212 1 Introduction Intensity

More information

ROBUST OPTIMIZATION THE END OF PTV AND THE BEGINNING OF SMART DOSE CLOUD. Moe Siddiqui, April 08, 2017

ROBUST OPTIMIZATION THE END OF PTV AND THE BEGINNING OF SMART DOSE CLOUD. Moe Siddiqui, April 08, 2017 ROBUST OPTIMIZATION THE END OF PTV AND THE BEGINNING OF SMART DOSE CLOUD Moe Siddiqui, April 08, 2017 Agenda Background IRCU 50 - Disclaimer - Uncertainties Robust optimization Use Cases Lung Robust 4D

More information

(Refer Slide Time 00:17) Welcome to the course on Digital Image Processing. (Refer Slide Time 00:22)

(Refer Slide Time 00:17) Welcome to the course on Digital Image Processing. (Refer Slide Time 00:22) Digital Image Processing Prof. P. K. Biswas Department of Electronics and Electrical Communications Engineering Indian Institute of Technology, Kharagpur Module Number 01 Lecture Number 02 Application

More information

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS Cognitive Robotics Original: David G. Lowe, 004 Summary: Coen van Leeuwen, s1460919 Abstract: This article presents a method to extract

More information

Dynamic digital phantoms

Dynamic digital phantoms Dynamic digital phantoms In radiation research the term phantom is used to describe an inanimate object or system used to tune the performance of radiation imaging or radiotherapeutic devices. A wide range

More information

Volume visualization. Volume visualization. Volume visualization methods. Sources of volume visualization. Sources of volume visualization

Volume visualization. Volume visualization. Volume visualization methods. Sources of volume visualization. Sources of volume visualization Volume visualization Volume visualization Volumes are special cases of scalar data: regular 3D grids of scalars, typically interpreted as density values. Each data value is assumed to describe a cubic

More information

Gamepad Controls. Figure 1: A diagram of an Xbox controller. Figure 2: A screenshot of the BodyViz Controller Panel. BodyViz 3 User Manual 1

Gamepad Controls. Figure 1: A diagram of an Xbox controller. Figure 2: A screenshot of the BodyViz Controller Panel. BodyViz 3 User Manual 1 BodyViz User Manual Gamepad Controls The first step in becoming an expert BodyViz user is to get acquainted with the Xbox gamepad, also known as a controller, and the BodyViz Controller Panel. These can

More information

SETTLEMENT OF A CIRCULAR FOOTING ON SAND

SETTLEMENT OF A CIRCULAR FOOTING ON SAND 1 SETTLEMENT OF A CIRCULAR FOOTING ON SAND In this chapter a first application is considered, namely the settlement of a circular foundation footing on sand. This is the first step in becoming familiar

More information

CLOTH - MODELING, DEFORMATION, AND SIMULATION

CLOTH - MODELING, DEFORMATION, AND SIMULATION California State University, San Bernardino CSUSB ScholarWorks Electronic Theses, Projects, and Dissertations Office of Graduate Studies 3-2016 CLOTH - MODELING, DEFORMATION, AND SIMULATION Thanh Ho Computer

More information

CMSC 425: Lecture 10 Skeletal Animation and Skinning

CMSC 425: Lecture 10 Skeletal Animation and Skinning CMSC 425: Lecture 10 Skeletal Animation and Skinning Reading: Chapt 11 of Gregory, Game Engine Architecture. Recap: Last time we introduced the principal elements of skeletal models and discussed forward

More information

Available Online through

Available Online through Available Online through www.ijptonline.com ISSN: 0975-766X CODEN: IJPTFI Research Article ANALYSIS OF CT LIVER IMAGES FOR TUMOUR DIAGNOSIS BASED ON CLUSTERING TECHNIQUE AND TEXTURE FEATURES M.Krithika

More information

Human Character Animation in 3D-Graphics: The EMOTE System as a Plug-in for Maya

Human Character Animation in 3D-Graphics: The EMOTE System as a Plug-in for Maya Hartmann - 1 Bjoern Hartman Advisor: Dr. Norm Badler Applied Senior Design Project - Final Report Human Character Animation in 3D-Graphics: The EMOTE System as a Plug-in for Maya Introduction Realistic

More information

coding of various parts showing different features, the possibility of rotation or of hiding covering parts of the object's surface to gain an insight

coding of various parts showing different features, the possibility of rotation or of hiding covering parts of the object's surface to gain an insight Three-Dimensional Object Reconstruction from Layered Spatial Data Michael Dangl and Robert Sablatnig Vienna University of Technology, Institute of Computer Aided Automation, Pattern Recognition and Image

More information

Clipping. CSC 7443: Scientific Information Visualization

Clipping. CSC 7443: Scientific Information Visualization Clipping Clipping to See Inside Obscuring critical information contained in a volume data Contour displays show only exterior visible surfaces Isosurfaces can hide other isosurfaces Other displays can

More information

CS 378: Computer Game Technology

CS 378: Computer Game Technology CS 378: Computer Game Technology Dynamic Path Planning, Flocking Spring 2012 University of Texas at Austin CS 378 Game Technology Don Fussell Dynamic Path Planning! What happens when the environment changes

More information

This chapter explains two techniques which are frequently used throughout

This chapter explains two techniques which are frequently used throughout Chapter 2 Basic Techniques This chapter explains two techniques which are frequently used throughout this thesis. First, we will introduce the concept of particle filters. A particle filter is a recursive

More information

doi: /

doi: / Yiting Xie ; Anthony P. Reeves; Single 3D cell segmentation from optical CT microscope images. Proc. SPIE 934, Medical Imaging 214: Image Processing, 9343B (March 21, 214); doi:1.1117/12.243852. (214)

More information

Medical Image Processing: Image Reconstruction and 3D Renderings

Medical Image Processing: Image Reconstruction and 3D Renderings Medical Image Processing: Image Reconstruction and 3D Renderings 김보형 서울대학교컴퓨터공학부 Computer Graphics and Image Processing Lab. 2011. 3. 23 1 Computer Graphics & Image Processing Computer Graphics : Create,

More information

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical

More information

Automated segmentation methods for liver analysis in oncology applications

Automated segmentation methods for liver analysis in oncology applications University of Szeged Department of Image Processing and Computer Graphics Automated segmentation methods for liver analysis in oncology applications Ph. D. Thesis László Ruskó Thesis Advisor Dr. Antal

More information

Data Visualization (DSC 530/CIS )

Data Visualization (DSC 530/CIS ) Data Visualization (DSC 530/CIS 60-01) Scalar Visualization Dr. David Koop Online JavaScript Resources http://learnjsdata.com/ Good coverage of data wrangling using JavaScript Fields in Visualization Scalar

More information