GEANT4 particle simulations in support of the neutron interrogation project


Defence Research and Development Canada / Recherche et développement pour la défense Canada

GEANT4 particle simulations in support of the neutron interrogation project

ANS Technologies Inc.

Contract Scientific Authority: A.A. Faust, DRDC Suffield

The scientific or technical validity of this Contract Report is entirely the responsibility of the contractor and the contents do not necessarily have the approval or endorsement of Defence R&D Canada.

Defence R&D Canada Contract Report DRDC Suffield CR December 2007


GEANT4 particle simulations in support of the neutron interrogation project

ANS Technologies Inc.
University of Montreal, Lab. Rene-J.A.-Levesque
P.O. Box 6128, Station CV
Montreal QC H3C 3J7

Contract Number: W R108/001/EDM
Contract Scientific Authority: A.A. Faust

The scientific or technical validity of this Contract Report is entirely the responsibility of the contractor and the contents do not necessarily have the approval or endorsement of Defence R&D Canada.

Defence R&D Canada Suffield
Contract Report DRDC Suffield CR
December 2007

Her Majesty the Queen as represented by the Minister of National Defence, 2007
Sa majesté la reine, représentée par le ministre de la Défense nationale, 2007

Geant4 particle simulation in support of the neutron interrogation project
Progress Report, by ANS Technologies Inc. for Dr. Anthony Faust, Threat Detection Group, Suffield
project anst-c

Objective

In this report, we provide an explanation of, and propose a solution to, the event-limit problem encountered in the previous report. Data persistency was included using Reflex with the ROOT I/O method; an analysis tool was written to extract a ROOT-readable file from the Reflex/ROOT format into an independent ROOT ntuple file; pile-up analysis tools are also provided. In addition, we have run a full simulation with a reasonable event number to test the new cluster configuration.

1 Introduction

We have received a 64-bit quad-core Supermicro server with 8 GB of memory from the Threat Detection Group. In its final setup, this server will be configured as a head node. In our facility we also have a dual-core Dell server with 2 GB of memory, which can be used as a head or worker node depending on the needs. Originally, in our understanding, the plan was to set up a Beowulf-type cluster using the Clustermatic distribution (BProc). Such a cluster can run a 32- or 64-bit OS, depending on which OS is better for the TNA simulations. Since we already knew how the TNA simulations run on a 32-bit cluster, it was worth testing them on a 64-bit cluster to see whether this could boost simulation performance. A test Beowulf cluster using a 64-bit OS with our 2 GB Dell server as head node was therefore set up, and the TNA simulation programs were recompiled for 64-bit under Scientific Linux CERN 4 (SLC4). The test seems to work fine; however, the memory used increases very rapidly. This behaviour has an explanation, which we discuss in detail later in the report. We later learned that DND had changed their distribution to a Rocks distribution. To adapt to this new situation, a test Rocks cluster was configured using the 64-bit CentOS 4 OS.
The Rocks cluster no longer requires the BProc option to be compiled into OpenMPI. Communication between the head node and the worker nodes uses the ssh protocol. In this case, we needed to reconfigure TOP-C and the other related external software packages to be able to run the TNA simulation on the Rocks cluster. OpenMPI is used as the default MPI and will be updated regularly. In previous reports we worked with geant4.8.1.p01; since then, another version of Geant4 has been released. The latest version at the time of writing is geant4.9.0.p01. In this version there are several changes that affect ParTNAmain; for instance, the hadron physics has been moved and the neutron data file is new. In principle, we can run a 32-bit application as well as a 64-bit application on a 64-bit OS. To do so, we need to compile all the external packages and ParTNAmain in 32-bit. For this task, it is not
always easy to put the right flags in the right places. Persistency using ROOT I/O has been included in ParTNAmain (the parallel version of the TNA simulation) as well as in TNAmain (the serial version). The pulse pile-up analysis of the simulated energy deposition in the TNA detector will be presented. We will also present the results of a simulation of 10 million events, which is 100 times more than in the last report.

2 Clusters setup

In this section, we give a short description of the setup of a 64-bit Beowulf cluster with BProc and of a 64-bit Rocks cluster using CentOS.

2.1 Beowulf cluster (BProc)

We had already reported a working 64-bit Beowulf cluster in the previous report. There, we had problems compiling CPPGDML using the gcc 4.x series compiler; we found that this problem can be solved by using the gcc 3.4 series compiler. The OS kernel version is 2.6.9, which is similar to the version in CentOS 4.5. We note that the Supermicro's ethernet controller is an Intel Corporation 80003ES2LAN Gigabit Ethernet Controller (Copper); the default driver from the kernel did not work at all, but we found that the e driver worked fine. In order to test the Supermicro server as a worker node, we needed to compile the ethernet driver as a module in the BProc kernel and then load it as a stage-2 module. The easy method is to compile the ethernet driver module, copy it into the appropriate kernel library directory, and then produce a second-stage boot image and initrd using the following command line:

beoboot -2 -n -i -o stage2

Once this is done, one can load the image from the worker node using PXEboot. It is also possible to load the ethernet driver module by modifying the original Clustermatic distribution and then booting the worker node from the boot CD. Job submission using mpirun worked as before; no modifications to the submission scripts were needed.
However, we noted that the master process's memory utilization increased very rapidly. This can be explained by the fact that the 64-bit program allocates more memory than the 32-bit version and runs much faster; in the same amount of simulation time, the memory needs therefore grow faster than in the 32-bit OS simulation. More details on the memory utilization of ParTNAmain are given in the Rocks cluster section.

2.2 Rocks cluster

By following the installation guide for the Rocks 4.3 distribution from the Rocks webpage [1], installing the Rocks cluster was simple. However, several parameters related to the network were
not configured correctly, and manual intervention was required after the installation. In Rocks 4.3, we have a choice of operating systems; we understood DND preferred CentOS (in this case, OS version 4.5). In this setup, we used the Supermicro server as the head node and the Dell server as a worker node. Later, when running TNA simulations on the Rocks cluster, we discovered intermittent network interruptions on the front ethernet card (eth0). We first thought it was an IP conflict, since the internal ethernet card (eth1) was working. After debugging, we found that there was no IP conflict; instead, the problem came from the ethernet driver. A new driver was installed as a module in Rocks 4.3 in order to have stable network communication between the nodes. Since then, we have experienced only two or three network disconnections under heavy load. Since the communication protocol between the head node and the worker nodes is ssh, we had to reconfigure OpenMPI and TOP-C, and recompile ParTNAmain. For the Rocks cluster configuration, we had a hard time figuring out how to make mpirun work. It seems that not all the local environment variables are exported to the worker node, as is done in the case of the Beowulf cluster. In particular, we needed to pass the following variables to mpirun (orterun) as options:

#geant4.8.1
mpirun --prefix $MPI_DIR -np 3 -machinefile /home/chen/machines-a \
  -x G4LEVELGAMMADATA -x G4RADIOACTIVEDATA -x G4LEDATA \
  -x G4ELASTICDATA -x NeutronHPCrossSections -x ROOTSYS \
  /home/chen/geant4/bin/linux-g++/ParTNAmain $TOPC_opt1 ./$macroFile > output1.out

We note that the --prefix option was necessary, as was the -x option, and each variable had to be exported individually. For geant4.9.0.p01, we needed to replace NeutronHPCrossSections by G4NEUTRONHPDATA.
In addition, LD_LIBRARY_PATH cannot be passed by the user; we have to set it in /etc/ld.so.conf.d/pargeant4.conf as the root user. This limitation is very inconvenient for the user; perhaps later we can find a way to overcome it. The pargeant4 file looks like (for 64-bit):

/opt/software64/clhep/lib
/opt/software64/xerces-c-src_2_7_0/lib
/opt/software64/aidajni-3.2.3/lib/linux-g++
/opt/software64/jdk1.6.0_02/jre/lib/amd64/server
/opt/software64/root/lib
#geant4.8.1
/opt/software64/cppgdml/build/linux-g++/lib
/opt/software64/geant4.8.1.p01/lib/linux-g++
#geant4.9.0
/opt/software64/cppgdml-g4.9.0/build/linux-g++/lib
/opt/software64/geant4.9.0.p01/lib/linux-g++

and for 32-bit:

/opt/software32/clhep/lib
/opt/software32/cppgdml/build/linux-g++/lib
/opt/software32/aidajni-3.2.3/lib/linux-g++
/opt/software32/jdk1.6.0_03/jre/lib/i386/server
/opt/software32/root/lib
/opt/software32/geant4.9.0.p01/lib/linux-g++
/opt/software32/python-2.4.4/lib
/opt/software32/root/lib

In a Rocks cluster, we need to tell mpirun which machines to use and how many slots are available, using the -machinefile option. The machine file looks like:

ansq1.local slots=1
compute-0-0.local slots=2

This launches one master process on ansq1.local and two slave processes on compute-0-0.local. We can also run the slave processes on the head node by setting the machine file to:

ansq1.local slots=3

It is not recommended to run slave processes on the master node of a Beowulf cluster, since the master controls the slaves and the slaves can be rebooted; this is what we want to avoid in a running Beowulf cluster. Overall, on the 64-bit Rocks cluster we have been able to compile TNAmain and ParTNAmain in both 64-bit and 32-bit. For 32-bit, we had to force the compiler into 32-bit mode with the option -m32; the main problem was putting this option in the right place. We have also experienced problems with AIDA when an older Java version was used; the working version is jdk and higher.

3 Persistency

Persistency was included as a module in RootIO.cc, using the lcgdict tool to create the Reflex dictionary for the Geant4 classes and ROOT I/O to save them as ROOT files. ROOT I/O also has the ability to read ROOT persistency files during the simulations. In our case, we produce files for the TNA detector, the TNT and the soil separately, because the soil (defined as a neutron-sensitive detector) ROOT file could become huge for large-event simulations. The selection file for producing the dictionary looks like:

<lcgdict>
  <class name="std::basic_string<char>" />
  <class name="G4String"/>
  <class name="G4VHit"/>
  <class name="TNATntHit"/>
  <class name="std::vector<TNATntHit*>"/>
  <class name="TNASoilHit"/>
  <class name="std::vector<TNASoilHit*>"/>
  <class name="TNADetectorHit"/>
  <class name="std::vector<TNADetectorHit*>"/>
  <class name="CLHEP::Hep3Vector"/>
</lcgdict>

The dictionary library is created by simply typing

make dictionary

This produces a library file libclassesDict.so and an executable readtnahits. The executable allows one to extract Reflex/ROOT persistency data and create ntuple files readable by ROOT; libclassesDict.so is required by readtnahits for this extraction.

4 Data Collection and Simulation Results

4.1 Limitation

First, let us clarify an issue from the last report: we had reported that we had successfully run 60 million events on Gimili, but also reported the results for events, saying that we would report on the 60 million later. The Gimili calculation included only the sensitive TNA detector. Adding the TNT and the soil as neutron-sensitive detectors and storing ntuples changes the memory used in the simulation considerably. In the following section we try to understand the memory utilization of TNAmain and ParTNAmain. This can help us solve the event-limit problem and decide how to handle future data collection when we want to keep more data for post-simulation analysis. In pargeant4, there is a memory allocation for the random seeds to ensure the slaves use different seeds, so that the Monte Carlo events are independent. The global seed allocation is done using:

// Setup random seeds for each slave
g_seeds = (long*)calloc(n_event, sizeof(long));

where n_event is the number of events N from BeamOn(N). Naturally, this kind of allocation has
an upper limit on memory utilization, which the original code omitted to test. A correct check can be written as:

// Check whether the g_seeds allocation ran out of memory
if (g_seeds == NULL) {
    G4cout << "g_seeds: out of memory\n" << G4endl;
    exit(1);
}

On 64-bit, a long is an 8-byte segment; on 32-bit, it is 4 bytes. For example, when N = 100 million, ParTNAmain has a footprint of 900M for the master node and 891M for the slave nodes after the geometry has been loaded. When N = 10 million, ParTNAmain has a footprint of 227M for the master and 207M for the slaves after the geometry has been loaded. This illustrates how N affects the footprint. Once the simulation starts to run events, ParTNAmain collects data via the sensitive detectors; at the moment we have three sensitive detectors (TNADetectorSD, TNATntSD and TNASoilSD), and the AIDA histograms and ntuples also hold data in memory until the simulation program finishes and frees it. We are not aware of any option in AIDA that allows one to flush memory to file continuously, as ROOT I/O persistency does. The global memory utilization for storing simulated data is huge; it can reach 1.5 GB to 2 GB for the soil sensitive detector alone for 100 million events. We have also hunted for memory leaks using Valgrind, which includes memcheck and massif; running ParTNAmain in debugging mode under Valgrind did not reveal any memory leak. Another important factor contributing to the memory increase is the TOP-C marshalling, which involves copying data back and forth between the slaves and the master. To isolate this, we need to run the TNA simulation in serial mode (TNAmain). In this case the TOP-C part is removed, and the footprint of TNAmain at N = 100 million is 143M, much smaller than that of ParTNAmain.
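As a sanity check, the footprint figures above and the per-million-events rates quoted in the next paragraph can be reproduced with two back-of-the-envelope formulas. This is a sketch: seedArrayMB and footprintMB are illustrative helpers, not code from pargeant4, and 1 MB is taken as 10^6 bytes.

```cpp
// Seed-array size in MB: one long (8 bytes on a 64-bit build) per event.
double seedArrayMB(double nEvents) { return nEvents * 8.0 / 1e6; }
// seedArrayMB(100e6) = 800 MB and seedArrayMB(10e6) = 80 MB, which tracks the
// measured master footprints: (900 MB - 227 MB) over the 90 million extra
// events is about 7.5 bytes/event, close to sizeof(long).

// Total-footprint model: measured baseline plus a per-million-events rate (MB).
double footprintMB(double baselineMB, double ratePerMillionEvents, double millionEvents) {
    return baselineMB + ratePerMillionEvents * millionEvents;
}
// footprintMB(143, 77, 100)  = 7843 MB, roughly the 7.8 GB quoted for TNAmain;
// footprintMB(900, 200, 100) = 20900 MB, roughly the 21 GB quoted for ParTNAmain.
```

The agreement between the measured per-event growth and sizeof(long) supports the conclusion that the up-front seed allocation dominates the N-dependence of the footprint.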
The rate of memory occupation increases from 77M per million events for TNAmain to 200M per million events for ParTNAmain. To simulate 100 million events, TNAmain therefore needs about 7.8 GB of memory, and ParTNAmain about 21 GB. Part of this memory could be virtual memory (swap). The easy solution to this problem is to increase the head node's memory to 32 GB and set a large swap partition. We can also divide the simulation into several runs with different seeds and add up the results at the end.

4.2 Optimization and Performance

According to our observations, the performance of the TNA simulations is enhanced by running on a Rocks cluster. We note that on the Beowulf cluster the slave processes could not run at full capacity, contrary to the Rocks cluster, where all processes run at almost 99% all the time. We present below the simulation time needed to calculate one event per node (s/event):

ParTNAmain:
  Gimili: (used 9 nodes as slaves)
  Rocks: (two slaves run on the Supermicro (master))
  Rocks: (two slaves run on the Dell (Pentium D 3.0 GHz))
TNAmain:
  Rocks: (one process on the master node)

The Supermicro server is 15 times as fast as Gimili (if the calculation is based on a single node) and 2.5 times as fast as the Dell server.

4.3 Simulation

We present an update of the previous parallel Geant4 simulation of the full TNA detector geometry, for 10 million events using ParTNAmain, for the energy deposition with geant4.8.1.p01 and geant4.9.0.p01. The upgrade from p0 to p01 clearly presented a problem: as mentioned earlier, there is a new neutron data set and the hadron physics list has been moved. This problem needs to be solved if we wish to use the latest version of Geant4. We note that there is a yet newer version of Geant4, released during the writing of this report, which has a new neutron data set; this could be the first thing to try. All the results below were simulated using geant4.8.1.p01, since we are having problems with geant4.9.0.p01. The neutron flux distribution for the TNA detector (histogram 20), soil (histogram 21) and TNT (histogram 22) is updated in figure 2. The TNT is positioned at 5 cm in the z-direction. Fig. 3 shows an update of the neutron fluence for the soil (left figure) and the TNT (right figure) as a function of the distance defined in the previous report. Fig. 4 shows an update of the event-time distribution of the photons hitting the sensitive detectors. Fig. 5 shows the neutron distribution in the soil in XYZ, plotted with ROOT. With OpenGL we can view the neutron distribution from different viewpoints, as shown in Fig. 6. Fig. 7 shows the neutron flux at different surfaces.

4.4 Pile-up

The pulse pile-up calculation is done using the Reflex ROOT/IO format file, which stores the time, t, and the deposited energy, E_d, of an event hitting the TNA detector. For the pile-up treatment algorithm, we follow the work of Palomba et al. [2].
We first define the pulse rise time in the preamplifier as τ_R and the time width of the shaped output pulse as τ_P. Assume two events with times t_1 and t_2 hit the TNA detector. If the two events occur at almost the same time, a pile-up occurs. If t_2 - t_1 < τ_R, the pulse shapes are indistinguishable by any filter: we accept the pile-up, add the two event energies and record the sum. If the pile-up occurs with τ_R <= t_2 - t_1 < τ_P, the time difference is large enough that the rejection filter can detect the pile-up; both events are rejected and no particle is stored. If τ_P < t_2 - t_1, the pulses are distinguishable by the filter, there is no pile-up occurrence, and both particles are stored. Fig. 8 shows the pile-up analysis using this algorithm, with τ_R = 100 ns and τ_P = 3 µs. The red histogram represents the original energy deposition distribution and the green one includes the pile-up calculation.
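The three-window logic above can be sketched as follows. This is a minimal illustration with a hypothetical Hit struct and pileUp function; the actual analysis tool reads t and E_d from the Reflex/ROOT persistency files, and this sketch treats time-ordered hits pairwise for simplicity.

```cpp
#include <vector>

struct Hit { double t; double e; };  // hit time (ns) and deposited energy

// Three-window pile-up classification after Palomba et al. [2]:
//   dt <  tauR        -> unresolvable pile-up: energies sum into one recorded pulse
//   tauR <= dt < tauP -> pile-up detected by the rejection filter: both events dropped
//   dt >= tauP        -> pulses fully resolved: both recorded
std::vector<double> pileUp(const std::vector<Hit>& hits, double tauR, double tauP) {
    std::vector<double> out;
    std::size_t i = 0;
    while (i < hits.size()) {
        if (i + 1 < hits.size()) {
            double dt = hits[i + 1].t - hits[i].t;
            if (dt < tauR) {            // summed pile-up
                out.push_back(hits[i].e + hits[i + 1].e);
                i += 2;
                continue;
            }
            if (dt < tauP) {            // rejected pile-up
                i += 2;
                continue;
            }
        }
        out.push_back(hits[i].e);       // isolated, resolved pulse
        ++i;
    }
    return out;
}
```

Called with tauR = 100 and tauP = 3000 (in ns), this reproduces the acceptance windows used for Fig. 8.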

Figure 1: Energy deposition calculated using ParTNAmain for the full TNA detector geometry (blue with geant4.8.1.p01, red with geant4.9.0.p01). The TNT is placed at 5 cm in the z-direction.

5 Conclusion

A 64-bit Beowulf cluster and a 64-bit Rocks cluster were built, which allowed us to further test the TNA simulations. On these clusters, both 32-bit and 64-bit TNA simulations can be performed. The Rocks cluster seems to be much faster than the Beowulf cluster. A detailed analysis of the event-limit problem has been done; adding more memory can help solve the problem. We have also included the ROOT persistency and the pile-up analysis, as well as provided new updated simulations of the TNA detector. We also found that geant4.9.0.p01 presents some problems which need to be solved, unless we update to the newest version of Geant4.

Figure 2: Neutron flux (in mm⁻²) as a function of energy for the detector (20), soil (21) and TNT (22).

Appendix

Problems encountered:

- ulimit -a: the stack size is set at 10240; this should be set to unlimited when running a 32-bit application on a 64-bit cluster.
- Java memory limit for jas3: the default -Xmx is set to 256M in the jas3 script; this option needs to be modified for larger memory sizes.
- Rocks OS behaviour: Rocks crashed when all the memory was used.

References

[1]
[2] M. Palomba, G. D'Erasmo, A. Pantaleo, "An application of the CSSE code: analysis of pulse pile-up in NaI(Tl) detectors used for TNA", Nucl. Instr. and Meth. A 498 (2003).

Figure 3: Neutron fluence as a function of the distance r defined in the previous report, for the soil (left figure) and TNT (right figure).

Figure 4: Event-time distribution of the photons hitting the sensitive detectors. Thermal neutrons of less than 0.5 MeV have been cut off for this figure.

Figure 5: Neutron distribution in the soil (plotted with ROOT). The cylinder-shaped distribution corresponds to reactions with the TNT. A version of this figure with more events is available; due to the size of the file, it must be downloaded from projet with username Faust and password x1f5z.

Figure 6: Neutron distribution in the soil: OpenGL views of figure 5 from different angles. The simulation used for this figure contains fewer events than the preceding figure, which makes the images easier to handle.

Figure 7: Neutron flux distribution in the soil near the 0 surface (top figure), near the 5 cm surface (middle figure) and near the 30 cm surface (bottom figure). The top part of this figure was produced with a limited number of events in order to reduce its size. The full version is available at projet with username Faust and password x1f5z.

Figure 8: Analysis of the energy distribution including the pile-up effect on the detector.

UNCLASSIFIED
SECURITY CLASSIFICATION OF FORM (highest classification of Title, Abstract, Keywords)

DOCUMENT CONTROL DATA (Security classification of title, body of abstract and indexing annotation must be entered when the overall document is classified)

1. ORIGINATOR (the name and address of the organization preparing the document. Organizations for whom the document was prepared, e.g. Establishment sponsoring a contractor's report, or tasking agency, are entered in Section 8.)
ANS Technologies Inc., University of Montreal, Lab. Rene-J.A.-Levesque, P.O. Box 6128, Station CV, Montreal QC H3C 3J7

2. SECURITY CLASSIFICATION (overall security classification of the document, including special warning terms if applicable)
UNCLASSIFIED

3. TITLE (the complete document title as indicated on the title page. Its classification should be indicated by the appropriate abbreviation (S, C or U) in parentheses after the title.)
GEANT4 Particle Simulations in Support of the Neutron Interrogation Project

4. AUTHORS (Last name, first name, middle initial. If military, show rank, e.g. Doe, Maj. John E.)
Atlantic Nuclear Services Ltd.

5. DATE OF PUBLICATION (month and year of publication of document)
December 2007

6a. NO. OF PAGES (total containing information, include Annexes, Appendices, etc.)
16

6b. NO. OF REFS (total cited in document)
2

7. DESCRIPTIVE NOTES (the category of the document, e.g. technical report, technical note or memorandum. If appropriate, enter the type of report, e.g. interim, progress, summary, annual or final. Give the inclusive dates when a specific reporting period is covered.)
Progress report

8. SPONSORING ACTIVITY (the name of the department project office or laboratory sponsoring the research and development. Include the address.)
Defence R&D Canada Suffield, PO Box 4000, Station Main, Medicine Hat, Alberta, Canada T1A 8K6

9a. PROJECT OR GRANT NO. (If appropriate, the applicable research and development project or grant number under which the document was written.
Please specify whether project or grant.)

9b. CONTRACT NO. (If appropriate, the applicable number under which the document was written.)
W R108

10a. ORIGINATOR'S DOCUMENT NUMBER (the official document number by which the document is identified by the originating activity. This number must be unique to this document.)
DRDC Suffield CR

10b. OTHER DOCUMENT NOs. (Any other numbers which may be assigned this document either by the originator or by the sponsor.)

11. DOCUMENT AVAILABILITY (any limitations on further dissemination of the document, other than those imposed by security classification)
( x ) Unlimited distribution
( ) Distribution limited to defence departments and defence contractors; further distribution only as approved
( ) Distribution limited to defence departments and Canadian defence contractors; further distribution only as approved
( ) Distribution limited to government departments and agencies; further distribution only as approved
( ) Distribution limited to defence departments; further distribution only as approved
( ) Other (please specify):

12. DOCUMENT ANNOUNCEMENT (any limitation to the bibliographic announcement of this document. This will normally correspond to the Document Availability (11). However, where further distribution (beyond the audience specified in 11) is possible, a wider announcement audience may be selected.)

13. ABSTRACT (a brief and factual summary of the document. It may also appear elsewhere in the body of the document itself. It is highly desirable that the abstract of classified documents be unclassified. Each paragraph of the abstract shall begin with an indication of the security classification of the information in the paragraph (unless the document itself is unclassified), represented as (S), (C) or (U). It is not necessary to include here abstracts in both official languages unless the text is bilingual.)

DRDC Suffield is developing nuclear-based technologies that will provide the Canadian Forces (CF) and other Public Security agencies with advanced explosives detection capabilities, for example to aid in the identification and evaluation of vehicle-borne Improvised Explosive Devices (IEDs). As part of this effort, they require a high-precision, flexible neutron transport simulation program consistent with their existing analysis tools. Geant4 is a recently developed open-source, C++ based radiation simulation package created by a world-wide collaboration of universities, and provides a complete set of tools for all areas of detector simulation. As the Geant4 code base is relatively new and not widely used outside the physics community, only a few attempts have been made to create a parallel version capable of working on a commodity-component computer cluster. A previous contract, W R108, developed a parallel Geant4 simulation framework, TNAG4Sim, running on the DRDC Suffield parallel Linux cluster. However, that code suffered from an artificial limitation on the total event number, limiting the statistical accuracy of the output data. This report provides an explanation of, and proposes a solution to, the event-limit problem. Data persistency was included using Reflex with the ROOT I/O method, and data analysis tools were written to extract a ROOT-readable file from the Reflex/ROOT format into an independent ROOT ntuple.
Lastly, a pile-up analysis tool is provided. We report on a successful full simulation run, with a reasonable event number, for testing this new cluster configuration.

14. KEYWORDS, DESCRIPTORS or IDENTIFIERS (technically meaningful terms or short phrases that characterize a document and could be helpful in cataloguing the document. They should be selected so that no security classification is required. Identifiers, such as equipment model designation, trade name, military project code name or geographic location, may also be included. If possible, keywords should be selected from a published thesaurus, e.g. Thesaurus of Engineering and Scientific Terms (TEST), and that thesaurus identified. If it is not possible to select indexing terms which are Unclassified, the classification of each should be indicated as with the title.)

Simulation, GEANT4, Fast Neutron Analysis, Thermal Neutron Analysis, Explosive, Detection


Defence R&D Canada: Canada's Leader in Defence and National Security Science and Technology
R & D pour la défense Canada : Chef de file au Canada en matière de science et de technologie pour la défense et la sécurité nationale


More information

Comsics: the parallel computing facility in the school of physics, USM.

Comsics: the parallel computing facility in the school of physics, USM. Comsics: the parallel computing facility in the school of physics, USM. Yoon Tiem Leong Talk given at theory group weekly seminar, School of Physics, Universiti Sains Malaysia Tues, 19 October 2010 Abstract

More information

8 Novembre How to install

8 Novembre How to install Utilizzo del toolkit di simulazione Geant4 Laboratori Nazionali del Gran Sasso 8 Novembre 2010 2010 How to install Outline Supported platforms & compilers External software packages and tools Working area

More information

REAL-TIME IDENTIFICATION USING MOBILE HAND-HELD DEVICE : PROOF OF CONCEPT SYSTEM TEST REPORT

REAL-TIME IDENTIFICATION USING MOBILE HAND-HELD DEVICE : PROOF OF CONCEPT SYSTEM TEST REPORT REAL-TIME IDENTIFICATION USING MOBILE HAND-HELD DEVICE : PROOF OF CONCEPT SYSTEM TEST REPORT Prepared by: C/M Tien Vo Royal Canadian Mounted Police Scientific authority: Pierre Meunier DRDC Centre for

More information

RMP Simulation User Guide

RMP Simulation User Guide Richard Sorensen Kihomac DRDC CORA CR 2011 099 October 2011 Defence R&D Canada Centre for Operational Research and Analysis National Defence Défense nationale Prepared By: Richard Sorensen Kihomac 5501

More information

Coyote: all IB, all the time draft. Ron Minnich Sandia National Labs

Coyote: all IB, all the time draft. Ron Minnich Sandia National Labs Coyote: all IB, all the time draft Ron Minnich Sandia National Labs Acknowledgments Andrew White, Bob Tomlinson, Daryl Grunau, Kevin Tegtmeier, Ollie Lo, Latchesar Ionkov, Josh Aune, and many others at

More information

CASE STUDY: Using Field Programmable Gate Arrays in a Beowulf Cluster

CASE STUDY: Using Field Programmable Gate Arrays in a Beowulf Cluster CASE STUDY: Using Field Programmable Gate Arrays in a Beowulf Cluster Mr. Matthew Krzych Naval Undersea Warfare Center Phone: 401-832-8174 Email Address: krzychmj@npt.nuwc.navy.mil The Robust Passive Sonar

More information

Starting with an example.

Starting with an example. Starting with an example http://geant4.cern.ch PART I Set your environment up and get a Geant4 example Getting started First, you have to access the common PC where Geant4 is installed, and set the environment

More information

CMPE 655 Fall 2016 Assignment 2: Parallel Implementation of a Ray Tracer

CMPE 655 Fall 2016 Assignment 2: Parallel Implementation of a Ray Tracer CMPE 655 Fall 2016 Assignment 2: Parallel Implementation of a Ray Tracer Rochester Institute of Technology, Department of Computer Engineering Instructor: Dr. Shaaban (meseec@rit.edu) TAs: Akshay Yembarwar

More information

Introduction to Geant4

Introduction to Geant4 Introduction to Geant4 Release 10.4 Geant4 Collaboration Rev1.0: Dec 8th, 2017 CONTENTS: 1 Geant4 Scope of Application 3 2 History of Geant4 5 3 Overview of Geant4 Functionality 7 4 Geant4 User Support

More information

Geant4 Installation Guide

Geant4 Installation Guide Geant4 Installation Guide For setting up Geant4 in your computing environment Version: geant4 9.0 Published 29 June, 2007 Geant4 Collaboration Geant4 Installation Guide : For setting up Geant4 in your

More information

SLHC-PP DELIVERABLE REPORT EU DELIVERABLE: Document identifier: SLHC-PP-D v1.1. End of Month 03 (June 2008) 30/06/2008

SLHC-PP DELIVERABLE REPORT EU DELIVERABLE: Document identifier: SLHC-PP-D v1.1. End of Month 03 (June 2008) 30/06/2008 SLHC-PP DELIVERABLE REPORT EU DELIVERABLE: 1.2.1 Document identifier: Contractual Date of Delivery to the EC Actual Date of Delivery to the EC End of Month 03 (June 2008) 30/06/2008 Document date: 27/06/2008

More information

Brutus. Above and beyond Hreidar and Gonzales

Brutus. Above and beyond Hreidar and Gonzales Brutus Above and beyond Hreidar and Gonzales Dr. Olivier Byrde Head of HPC Group, IT Services, ETH Zurich Teodoro Brasacchio HPC Group, IT Services, ETH Zurich 1 Outline High-performance computing at ETH

More information

Geant4 Computing Performance Benchmarking and Monitoring

Geant4 Computing Performance Benchmarking and Monitoring Journal of Physics: Conference Series PAPER OPEN ACCESS Geant4 Computing Performance Benchmarking and Monitoring To cite this article: Andrea Dotti et al 2015 J. Phys.: Conf. Ser. 664 062021 View the article

More information

OBTAINING AN ACCOUNT:

OBTAINING AN ACCOUNT: HPC Usage Policies The IIA High Performance Computing (HPC) System is managed by the Computer Management Committee. The User Policies here were developed by the Committee. The user policies below aim to

More information

Validation of the MODTRAN 6 refracted geometry algorithms in the marine boundary layer and development of EOSPEC modules

Validation of the MODTRAN 6 refracted geometry algorithms in the marine boundary layer and development of EOSPEC modules Validation of the MODTRAN 6 refracted geometry algorithms in the marine boundary layer and development of EOSPEC modules Vincent Ross Aerex Avionics Inc. Prepared By: Aerex Avionics Inc. 324, St-Augustin

More information

ATLAS NOTE. December 4, ATLAS offline reconstruction timing improvements for run-2. The ATLAS Collaboration. Abstract

ATLAS NOTE. December 4, ATLAS offline reconstruction timing improvements for run-2. The ATLAS Collaboration. Abstract ATLAS NOTE December 4, 2014 ATLAS offline reconstruction timing improvements for run-2 The ATLAS Collaboration Abstract ATL-SOFT-PUB-2014-004 04/12/2014 From 2013 to 2014 the LHC underwent an upgrade to

More information

DETECT2000: An Improved Monte-Carlo Simulator for the Computer Aided Design of Photon Sensing Devices

DETECT2000: An Improved Monte-Carlo Simulator for the Computer Aided Design of Photon Sensing Devices DETECT2000: An Improved Monte-Carlo Simulator for the Computer Aided Design of Photon Sensing Devices François Cayouette 1,2, Denis Laurendeau 3 and Christian Moisan 3 1 Montreal Neurological Institute,

More information

ENDF/B-VII.1 versus ENDFB/-VII.0: What s Different?

ENDF/B-VII.1 versus ENDFB/-VII.0: What s Different? LLNL-TR-548633 ENDF/B-VII.1 versus ENDFB/-VII.0: What s Different? by Dermott E. Cullen Lawrence Livermore National Laboratory P.O. Box 808/L-198 Livermore, CA 94550 March 17, 2012 Approved for public

More information

2017 Resource Allocations Competition Results

2017 Resource Allocations Competition Results 2017 Resource Allocations Competition Results Table of Contents Executive Summary...3 Computational Resources...5 CPU Allocations...5 GPU Allocations...6 Cloud Allocations...6 Storage Resources...6 Acceptance

More information

System upgrade and future perspective for the operation of Tokyo Tier2 center. T. Nakamura, T. Mashimo, N. Matsui, H. Sakamoto and I.

System upgrade and future perspective for the operation of Tokyo Tier2 center. T. Nakamura, T. Mashimo, N. Matsui, H. Sakamoto and I. System upgrade and future perspective for the operation of Tokyo Tier2 center, T. Mashimo, N. Matsui, H. Sakamoto and I. Ueda International Center for Elementary Particle Physics, The University of Tokyo

More information

SGE Roll: Users Guide. Version 5.3 Edition

SGE Roll: Users Guide. Version 5.3 Edition SGE Roll: Users Guide Version 5.3 Edition SGE Roll: Users Guide : Version 5.3 Edition Published Dec 2009 Copyright 2009 University of California and Scalable Systems This document is subject to the Rocks

More information

The DETER Testbed: Overview 25 August 2004

The DETER Testbed: Overview 25 August 2004 The DETER Testbed: Overview 25 August 2004 1. INTRODUCTION The DETER (Cyber Defense Technology Experimental Research testbed is a computer facility to support experiments in a broad range of cyber-security

More information

Working on the NewRiver Cluster

Working on the NewRiver Cluster Working on the NewRiver Cluster CMDA3634: Computer Science Foundations for Computational Modeling and Data Analytics 22 February 2018 NewRiver is a computing cluster provided by Virginia Tech s Advanced

More information

A Novel Approach to Explain the Detection of Memory Errors and Execution on Different Application Using Dr Memory.

A Novel Approach to Explain the Detection of Memory Errors and Execution on Different Application Using Dr Memory. A Novel Approach to Explain the Detection of Memory Errors and Execution on Different Application Using Dr Memory. Yashaswini J 1, Tripathi Ashish Ashok 2 1, 2 School of computer science and engineering,

More information

Fedora Core: Made Simple

Fedora Core: Made Simple Table of Contents Installing Fedora...2 Before you begin...2 Compatible Hardware...2 Minimum Requirements...2 Disk Space Requirements...2 Help! Booting from the CD ROM Drive Fails!...2 Installing Fedora

More information

Introduction to GALILEO

Introduction to GALILEO Introduction to GALILEO Parallel & production environment Mirko Cestari m.cestari@cineca.it Alessandro Marani a.marani@cineca.it Domenico Guida d.guida@cineca.it Maurizio Cremonesi m.cremonesi@cineca.it

More information

Contractor Report / User Manual for the Acoustic Tracking System (ATS) for Autonomous Underwater Vehicles (AUV) Project

Contractor Report / User Manual for the Acoustic Tracking System (ATS) for Autonomous Underwater Vehicles (AUV) Project Autonomous Underwater Vehicles (AUV) Project Tim Moorhouse Prepared By: 1120 Finch Ave West, 7 th Floor Toronto, ON Canada M3J 3H7 PWGSC Contract Number: W7707-115263/001/HAL CSA: Nicos Pelavas, 902-426-3100

More information

Simulation of the CRIPT detector

Simulation of the CRIPT detector Simulation of the CRIPT detector Christopher Howard Prepared by: excelitr Bank Street, 3rd Floor, Ottawa, ON Canada K1P 5N4 Contract Number: W7714-45898 Contract Scientific Authority: David Waller, Defence

More information

Simulation Techniques Using Geant4

Simulation Techniques Using Geant4 IEEE Nuclear Science Symposium and Medical Imaging Conference Short Course Simulation Techniques Using Geant4 Maria Grazia Pia (INFN Genova, Italy) MariaGrazia.Pia@ge.infn.it Dresden, 18 October 2008 http://www.ge.infn.it/geant4/events/nss2008/geant4course.html

More information

IBM Spectrum LSF Version 10 Release 1. Readme IBM

IBM Spectrum LSF Version 10 Release 1. Readme IBM IBM Spectrum LSF Version 10 Release 1 Readme IBM IBM Spectrum LSF Version 10 Release 1 Readme IBM Note Before using this information and the product it supports, read the information in Notices on page

More information

300x Matlab. Dr. Jeremy Kepner. MIT Lincoln Laboratory. September 25, 2002 HPEC Workshop Lexington, MA

300x Matlab. Dr. Jeremy Kepner. MIT Lincoln Laboratory. September 25, 2002 HPEC Workshop Lexington, MA 300x Matlab Dr. Jeremy Kepner September 25, 2002 HPEC Workshop Lexington, MA This work is sponsored by the High Performance Computing Modernization Office under Air Force Contract F19628-00-C-0002. Opinions,

More information

Deliverable D10.2. WP10 JRA04 INDESYS Innovative solutions for nuclear physics detectors

Deliverable D10.2. WP10 JRA04 INDESYS Innovative solutions for nuclear physics detectors MS116 Characterization of light production, propagation and collection for both organic and inorganic scintillators D10.2 R&D on new and existing scintillation materials: Report on the light production,

More information

Inversion of water clouds lidar returns using the azimuthal dependence of the cross-polarization signal

Inversion of water clouds lidar returns using the azimuthal dependence of the cross-polarization signal CAN UNCLASSIFIED Inversion of water clouds lidar returns using the azimuthal dependence of the cross-polarization signal Xiaoying Cao Lidar Consultant Gilles Roy DRDC Valcartier Research Centre Gregoire

More information

Upgrading your GEANT4 Installation

Upgrading your GEANT4 Installation your GEANT4 Installation Michael H. Kelsey SLAC National Accelerator Laboratory GEANT4 Tutorial, Jefferson Lab 13 Jul 2012 Where Are Upgrades? http://www.geant4.org/ Michael H. Kelsey GEANT4 July 2012

More information

High Performance Beowulf Cluster Environment User Manual

High Performance Beowulf Cluster Environment User Manual High Performance Beowulf Cluster Environment User Manual Version 3.1c 2 This guide is intended for cluster users who want a quick introduction to the Compusys Beowulf Cluster Environment. It explains how

More information

User s Manual of Interactive Software for Predicting CPF Bow-Flare Impulsive Loads

User s Manual of Interactive Software for Predicting CPF Bow-Flare Impulsive Loads Copy No. Defence Research and Development Canada Recherche et développement pour la défense Canada DEFENCE & DÉFENSE User s Manual of Interactive Software for Predicting CPF Bow-Flare Impulsive Loads J.M.

More information

AASPI Software Structure

AASPI Software Structure AASPI Software Structure Introduction The AASPI software comprises a rich collection of seismic attribute generation, data conditioning, and multiattribute machine-learning analysis tools constructed by

More information

Introduction to Red Hat Linux I: Easy Reference Index Page

Introduction to Red Hat Linux I: Easy Reference Index Page Introduction to Red Hat Linux I: Easy Reference Index Page Easy Reference Topics Module Page Common installation troubleshooting issues 2 2 Installation classes 2 3 Contents of the usr directory 4 5 File

More information

Before We Start. Sign in hpcxx account slips Windows Users: Download PuTTY. Google PuTTY First result Save putty.exe to Desktop

Before We Start. Sign in hpcxx account slips Windows Users: Download PuTTY. Google PuTTY First result Save putty.exe to Desktop Before We Start Sign in hpcxx account slips Windows Users: Download PuTTY Google PuTTY First result Save putty.exe to Desktop Research Computing at Virginia Tech Advanced Research Computing Compute Resources

More information

Development of the GEANT4 Simulation for the Compton Gamma-Ray Camera

Development of the GEANT4 Simulation for the Compton Gamma-Ray Camera Development of the GEANT4 Simulation for the Compton Gamma-Ray Camera Ryuichi Ueno Prepared by: Calian Technologies Ltd. 340 Legget Dr. Suite 101, Ottawa, ON K2K 1Y6 Project Manager: Pierre-Luc Drouin

More information

CTECS Connect 2.2 Release Notes December 10, 2009

CTECS Connect 2.2 Release Notes December 10, 2009 (Formerly VTECS) CTECS Connect 2.2 Release Notes December 10, 2009 This document contains information that supplements the CTECS Connect 2.2 documentation. Please visit the CTECS Connect Support area of

More information

MCNP CLASS SERIES (SAMPLE MCNP INPUT) Jongsoon Kim

MCNP CLASS SERIES (SAMPLE MCNP INPUT) Jongsoon Kim MCNP CLASS SERIES (SAMPLE MCNP INPUT) Jongsoon Kim Basic constants in MCNP Lengths in cm Energies in MeV Times in shakes (10-8 sec) Atomic densities in units of atoms/barn*-cm Mass densities in g/cm 3

More information

Parallel Programming Pre-Assignment. Setting up the Software Environment

Parallel Programming Pre-Assignment. Setting up the Software Environment Parallel Programming Pre-Assignment Setting up the Software Environment Author: B. Wilkinson Modification date: January 3, 2016 Software The purpose of this pre-assignment is to set up the software environment

More information

INTRODUCTION TO THE CLUSTER

INTRODUCTION TO THE CLUSTER INTRODUCTION TO THE CLUSTER WHAT IS A CLUSTER? A computer cluster consists of a group of interconnected servers (nodes) that work together to form a single logical system. COMPUTE NODES GATEWAYS SCHEDULER

More information

TRUEGRID WINDOWS INSTALLATION/LICENSING/UPGRADES

TRUEGRID WINDOWS INSTALLATION/LICENSING/UPGRADES TRUEGRID WINDOWS INSTALLATION/LICENSING/UPGRADES PLEASE NOTE: We have tried to be as complete as possible with these instructions. In most cases, there is no need to read all of this. Just call us at (925)

More information

Advanced Topics in High Performance Scientific Computing [MA5327] Exercise 1

Advanced Topics in High Performance Scientific Computing [MA5327] Exercise 1 Advanced Topics in High Performance Scientific Computing [MA5327] Exercise 1 Manfred Liebmann Technische Universität München Chair of Optimal Control Center for Mathematical Sciences, M17 manfred.liebmann@tum.de

More information

Install your scientific software stack easily with Spack

Install your scientific software stack easily with Spack Install your scientific software stack easily with Spack Les mardis du développement technologique Florent Pruvost (SED) Outline 1. Context 2. Features overview 3. In practice 4. Some feedback Florent

More information

DEVELOPING A REMOTE ACCESS HARDWARE-IN-THE-LOOP SIMULATION LAB

DEVELOPING A REMOTE ACCESS HARDWARE-IN-THE-LOOP SIMULATION LAB DEVELOPING A REMOTE ACCESS HARDWARE-IN-THE-LOOP SIMULATION LAB FINAL REPORT SEPTEMBER 2005 Budget Number KLK214 N05-03 Prepared for OFFICE OF UNIVERSITY RESEARCH AND EDUCATION U.S. DEPARTMENT OF TRANSPORTATION

More information

System Manager Unit (SMU) Hardware Reference

System Manager Unit (SMU) Hardware Reference System Manager Unit (SMU) Hardware Reference MK-92HNAS065-02 Notices and Disclaimer Copyright 2015 Hitachi Data Systems Corporation. All rights reserved. The performance data contained herein was obtained

More information

Product Support Notice

Product Support Notice PSN # PSN027012u Product Support Notice 2015 Avaya Inc. All Rights Reserved. Avaya Proprietary Use pursuant to the terms of your signed agreement or company policy. Original publication date: 11-Feb-15.

More information

A Distributed Parallel Processing System for Command and Control Imagery

A Distributed Parallel Processing System for Command and Control Imagery A Distributed Parallel Processing System for Command and Control Imagery Dr. Scott E. Spetka[1][2], Dr. George O. Ramseyer[3], Dennis Fitzgerald[1] and Dr. Richard E. Linderman[3] [1] ITT Industries Advanced

More information

InfiniPath Drivers and Software for QLogic QHT7xxx and QLE7xxx HCAs. Table of Contents

InfiniPath Drivers and Software for QLogic QHT7xxx and QLE7xxx HCAs. Table of Contents InfiniPath 2.2.1 Drivers and Software for QLogic QHT7xxx and QLE7xxx HCAs This software license applies only to QLogic customers. QLogic Corporation. All rights reserved. Table of Contents 1. Version 2.

More information

Parallel computation performances of Serpent and Serpent 2 on KTH Parallel Dator Centrum

Parallel computation performances of Serpent and Serpent 2 on KTH Parallel Dator Centrum KTH ROYAL INSTITUTE OF TECHNOLOGY, SH2704, 9 MAY 2018 1 Parallel computation performances of Serpent and Serpent 2 on KTH Parallel Dator Centrum Belle Andrea, Pourcelot Gregoire Abstract The aim of this

More information

Quick Installation Guide for RHV/Ovirt

Quick Installation Guide for RHV/Ovirt Quick Installation Guide for RHV/Ovirt 2017 Chengdu Vinchin Technology Co. Ltd. All rights reserved. CONTENTS 1. Create New Virtual Machine...2 2. Install Backup Server ( as master)...5 3. Install Backup

More information

Solution of Exercise Sheet 2

Solution of Exercise Sheet 2 Solution of Exercise Sheet 2 Exercise 1 (Cluster Computing) 1. Give a short definition of Cluster Computing. Clustering is parallel computing on systems with distributed memory. 2. What is a Cluster of

More information

High Performance Computing (HPC) Prepared By: Abdussamad Muntahi Muhammad Rahman

High Performance Computing (HPC) Prepared By: Abdussamad Muntahi Muhammad Rahman High Performance Computing (HPC) Prepared By: Abdussamad Muntahi Muhammad Rahman 1 2 Introduction to High Performance Computing (HPC) Introduction High-speed computing. Originally pertaining only to supercomputers

More information

TECHNICAL WHITE PAPER. Using Stateless Linux with Veritas Cluster Server. Linux

TECHNICAL WHITE PAPER. Using Stateless Linux with Veritas Cluster Server. Linux TECHNICAL WHITE PAPER Using Stateless Linux with Veritas Cluster Server Linux Pranav Sarwate, Assoc SQA Engineer Server Availability and Management Group Symantec Technical Network White Paper Content

More information

Linux Clusters for High- Performance Computing: An Introduction

Linux Clusters for High- Performance Computing: An Introduction Linux Clusters for High- Performance Computing: An Introduction Jim Phillips, Tim Skirvin Outline Why and why not clusters? Consider your Users Application Budget Environment Hardware System Software HPC

More information

Cornell Theory Center 1

Cornell Theory Center 1 Cornell Theory Center Cornell Theory Center (CTC) is a high-performance computing and interdisciplinary research center at Cornell University. Scientific and engineering research projects supported by

More information

Guest Operating System Installation Guide. February 25, 2008

Guest Operating System Installation Guide. February 25, 2008 Guest Operating System Installation Guide February 25, 2008 Guest Operating System Installation Guide Guest Operating System Installation Guide Revision: 20080225 Item: GSTOS-ENG-Q108-198 You can find

More information

Grid Engine Users Guide. 7.0 Edition

Grid Engine Users Guide. 7.0 Edition Grid Engine Users Guide 7.0 Edition Grid Engine Users Guide : 7.0 Edition Published Dec 01 2017 Copyright 2017 University of California and Scalable Systems This document is subject to the Rocks License

More information

InfoBrief. Platform ROCKS Enterprise Edition Dell Cluster Software Offering. Key Points

InfoBrief. Platform ROCKS Enterprise Edition Dell Cluster Software Offering. Key Points InfoBrief Platform ROCKS Enterprise Edition Dell Cluster Software Offering Key Points High Performance Computing Clusters (HPCC) offer a cost effective, scalable solution for demanding, compute intensive

More information

Limitations in the PHOTON Monte Carlo gamma transport code

Limitations in the PHOTON Monte Carlo gamma transport code Nuclear Instruments and Methods in Physics Research A 480 (2002) 729 733 Limitations in the PHOTON Monte Carlo gamma transport code I. Orion a, L. Wielopolski b, * a St. Luke s/roosevelt Hospital, Columbia

More information

Monte Carlo programs

Monte Carlo programs Monte Carlo programs Alexander Khanov PHYS6260: Experimental Methods is HEP Oklahoma State University November 15, 2017 Simulation steps: event generator Input = data cards (program options) this is the

More information

Control Software centralized HEM. User Manual

Control Software centralized HEM. User Manual Control Software centralized HEM User Manual Page: - 2-1. Product Overview This software (HEM-HyperElectronicsMappers) for remote control is designed to be installed into a PC for use within a surveillance

More information

Console Redirection on VMware ESX Server Software and Dell PowerEdge Servers

Console Redirection on VMware ESX Server Software and Dell PowerEdge Servers Console Redirection on VMware ESX Server Software and Dell PowerEdge Servers October 2005 Notes, Notices, and Cautions NOTE: A NOTE indicates important information that helps you make better use of your

More information

BCOM-USB Device. User Manual.

BCOM-USB Device. User Manual. BCOM-USB Device User Manual www.kalkitech.com Version 2.1.2, December 2017 Copyright Notice 2017 Applied Systems Engineering, Inc. All Rights reserved. This user manual is a publication of Applied Systems

More information

KDev-Valgrind : User Documentation

KDev-Valgrind : User Documentation KDev-Valgrind : User Documentation Damien Coppel Anthony Corbacho Lionel Duc Mathieu Lornac Sébastien Rannou Lucas Sarie This document is for developers wishing to use the plugin. It enables to understand

More information

Guest Operating System Installation Guide. March 14, 2008

Guest Operating System Installation Guide. March 14, 2008 Guest Operating System Installation Guide March 14, 2008 Guest Operating System Installation Guide Guest Operating System Installation Guide Revision: 20080314 Item: GSTOS-ENG-Q108-198 You can find the

More information

CPM. Quick Start Guide V2.4.0

CPM. Quick Start Guide V2.4.0 CPM Quick Start Guide V2.4.0 1 Content 1 Introduction... 3 Launching the instance... 3 CloudFormation... 3 CPM Server Instance Connectivity... 3 2 CPM Server Instance Configuration... 4 CPM Server Configuration...

More information

Lab 1: Accessing the Linux Operating System Spring 2009

Lab 1: Accessing the Linux Operating System Spring 2009 CIS 90 Linux Lab Exercise Lab 1: Accessing the Linux Operating System Spring 2009 Lab 1: Accessing the Linux Operating System This lab takes a look at UNIX through an online experience on an Ubuntu Linux

More information

Test Lab Introduction to the Test Lab Linux Cluster Environment

Test Lab Introduction to the Test Lab Linux Cluster Environment Test Lab 1.0 - Introduction to the Test Lab Linux Cluster Environment Test lab is a set of three disposable cluster environments that can be used for systems research. All three environments are accessible

More information

Programming with MPI on GridRS. Dr. Márcio Castro e Dr. Pedro Velho

Programming with MPI on GridRS. Dr. Márcio Castro e Dr. Pedro Velho Programming with MPI on GridRS Dr. Márcio Castro e Dr. Pedro Velho Science Research Challenges Some applications require tremendous computing power - Stress the limits of computing power and storage -

More information

Clearswift SECURE Gateways

Clearswift SECURE Gateways Clearswift SECURE Gateways Virtual Deployment Guidelines Issue 1.1 December 2015 Copyright Version 1.1, December, 2015 Published by Clearswift Ltd. 1995 2015 Clearswift Ltd. All rights reserved. The materials

More information

Servicing HEP experiments with a complete set of ready integreated and configured common software components

Servicing HEP experiments with a complete set of ready integreated and configured common software components Journal of Physics: Conference Series Servicing HEP experiments with a complete set of ready integreated and configured common software components To cite this article: Stefan Roiser et al 2010 J. Phys.:

More information

Cluster Clonetroop: HowTo 2014

Cluster Clonetroop: HowTo 2014 2014/02/25 16:53 1/13 Cluster Clonetroop: HowTo 2014 Cluster Clonetroop: HowTo 2014 This section contains information about how to access, compile and execute jobs on Clonetroop, Laboratori de Càlcul Numeric's

More information

Beginner's Guide for UK IBM systems

Beginner's Guide for UK IBM systems Beginner's Guide for UK IBM systems This document is intended to provide some basic guidelines for those who already had certain programming knowledge with high level computer languages (e.g. Fortran,

More information

SGE Roll: Users Guide. Version Edition

SGE Roll: Users Guide. Version Edition SGE Roll: Users Guide Version 4.2.1 Edition SGE Roll: Users Guide : Version 4.2.1 Edition Published Sep 2006 Copyright 2006 University of California and Scalable Systems This document is subject to the

More information

Army Research Laboratory

Army Research Laboratory Army Research Laboratory Arabic Natural Language Processing System Code Library by Stephen C. Tratz ARL-TN-0609 June 2014 Approved for public release; distribution is unlimited. NOTICES Disclaimers The

More information

Intel Cache Acceleration Software (Intel CAS) for Linux* v2.9 (GA)

Intel Cache Acceleration Software (Intel CAS) for Linux* v2.9 (GA) Intel Cache Acceleration Software (Intel CAS) for Linux* v2.9 (GA) Release Notes June 2015 Revision 010 Document Number: 328497-010 Notice: This document contains information on products in the design

More information

Hitachi Gloabal Storage Products. Hints and tips. BIOS 33.8GB limitation

Hitachi Gloabal Storage Products. Hints and tips. BIOS 33.8GB limitation Hints and Tips Deskstar 7K250 UltraATA 100 Hard disk drive HDS722504VLAT20 HDS722508VLAT20 HDS722512VLAT20 HDS722512VLAT80 HDS722516VLAT20 HDS722516VLAT80 HDS722525VLAT80 Hints and tips This document provides

More information

Isilon InsightIQ. Version Installation Guide

Isilon InsightIQ. Version Installation Guide Isilon InsightIQ Version 4.1.0 Installation Guide Copyright 2009-2016 EMC Corporation All rights reserved. Published October 2016 Dell believes the information in this publication is accurate as of its

More information

DEBUGGING ON FERMI PREPARING A DEBUGGABLE APPLICATION GDB. GDB on front-end nodes

DEBUGGING ON FERMI PREPARING A DEBUGGABLE APPLICATION GDB. GDB on front-end nodes DEBUGGING ON FERMI Debugging your application on a system based on a BG/Q architecture like FERMI could be an hard task due to the following problems: the core files generated by a crashing job on FERMI

More information

Guide to the RDAQ. How to enter descriptions of fonds and collections into the Réseau de diffusion des archives du Québec (RDAQ) database

Guide to the RDAQ. How to enter descriptions of fonds and collections into the Réseau de diffusion des archives du Québec (RDAQ) database Guide to the RDAQ How to enter descriptions of fonds and collections into the Réseau de diffusion des archives du Québec (RDAQ) database Table of contents What is the Réseau de diffusion des archives du

More information