XSEDE and XSEDE Resources


1 XSEDE and XSEDE Resources
October 26, 2012
Dan Stanzione, Deputy Director, Texas Advanced Computing Center; Co-Director, iPlant Collaborative

2 Welcome to XSEDE! XSEDE is an exciting cyberinfrastructure, providing large scale computing, data, and visualization resources. XSEDE is the evolution of the NSF TeraGrid. Today's session is a general overview of XSEDE for new XSEDE/TeraGrid users; it is not going to teach you computational science or programming! Use the webcast chat window to ask questions!

3 Outline
What is XSEDE?
How do I get started?
XSEDE User Portal
User Responsibility & Security
Applying for an Allocation
Accessing Resources
Managing Data and File Transfers
Other Resources
Picking the right resource for your job
Running Jobs
Managing Your Software Environment
Getting Help
Next Steps

4 What is XSEDE? The Extreme Science and Engineering Discovery Environment (XSEDE):
The most powerful integrated advanced digital resources and services in the world. Funded by NSF.
A single virtual system that scientists can use to interactively share computing resources, data, and expertise.
9 supercomputers, 3 visualization systems, and 9 storage systems provided by 16 partner institutions (Service Providers or SPs).

5 What is XSEDE? The successor to the TeraGrid, XSEDE is an NSF-funded, advanced, nationally distributed open cyberinfrastructure, consisting of:
Supercomputing (and other computing)
Storage
Visualization
Data Collections
Network
Science Gateways
Unified Policies and Programs

6 XSEDE Service Providers
NCSA, Illinois
PSC, Pitt/Carnegie Mellon
NICS, Tennessee/ORNL
TACC, Texas
SDSC, UC San Diego
OSC, Ohio State
Cornell
Virginia
Indiana
Purdue
Rice
Shodor Foundation
Argonne
UC-Berkeley
U Chicago
SURA
Open Science Grid

7 Allocation of XSEDE Resources XSEDE resources are allocated through a peer-reviewed process. Open to any US open science researcher (or collaborators of US researchers) regardless of funding source. XSEDE resources are provided at NO COST to the end user through NSF funding (~$100M/year).

8 How do I get started using XSEDE? To get started using XSEDE, a researcher needs to:
Apply for an allocation, or
Get added to an existing allocation.
To do either of these things, you should start with the XSEDE User Portal.

9 XSEDE User Portal (XUP) Web-based single point of contact that provides:
Continually updated information about your accounts.
Access to your XSEDE accounts and allocated resources: a single location from which to access XSEDE resources. One can access all accounts on various machines from the Portal.
Interfaces for data management, data collections, and other user tasks and resources.
Access to the Help Desk.

10 The XSEDE.org Home Page From here, you can create a web account at any time!

11 User Responsibilities and Security The first time you log in to the Portal, at the beginning of each allocation term, you will be asked to accept the User Responsibilities form:
Explains acceptable use to protect shared resources and intellectual property.
Acknowledgment in publications, etc.
You are responsible for your account: do not share accounts.
The user is responsible for protecting the passwords: this includes not sharing passwords, not writing passwords down where they can be easily found, and not using tools which expose passwords on the network. This also applies to private keys: make sure they are password-protected.
Appropriate behavior: protecting computing, closing SSH terminals when done, logging out of the User Portal when done, etc.
Report suspicious activity: if you have any suspicion that your account or personal computer has been compromised, send email to help@xsede.org or call the Help Desk (24/7) immediately.
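On Linux or Mac, one way to create a password-protected key pair is with ssh-keygen; this is only a sketch, and the key file name below is illustrative:
# Generate a new RSA key pair; enter a non-empty passphrase when prompted
ssh-keygen -t rsa -f ~/.ssh/id_rsa_xsede
# Add or change the passphrase on an existing private key
ssh-keygen -p -f ~/.ssh/id_rsa_xsede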

12 Getting an Allocation If you do not yet have an allocation, you can use the portal to acquire one. If you are a first-time investigator, request a startup allocation.

13 Creating an Allocation

14 Once your allocation is approved: The PI (principal investigator), Co-PI, or Allocation Manager can add users to an existing allocation through the portal.
XSEDE User Portal: My XSEDE -> Add/Remove User
Takes the portal name of the user you want to add/remove.
Accounts at certain Service Providers need to be activated before they can be accessed.

15 Accessing XSEDE Resources Several methods are possible:
Direct login access
Single Sign On (SSO) through the portal
SSO between resources
Through Science Gateways
Your choice of method may vary with how many resources you use and how much you want to automate file transfers, job submission, etc.

16 Accessing Resources (2) SSO is the default method; you'll need to file a ticket to request a direct access password to the machine.
Direct access: use a secure shell (ssh) client.
From a Linux or Mac terminal window: ssh -l <username> <machinename>, e.g., ssh -l dstanzi ranger.tacc.utexas.edu
From Windows: download one of many ssh clients. Free ones include PuTTY; most campuses have a site license for a fancier one.

17 Single Sign On Single Sign-On (SSO) allows you to use just one username and password (your User Portal one) to log into every digital service on which you have an account. The easiest way to use SSO is via the XSEDE User Portal, but you can also use SSO via a desktop (stand-alone) client or with an X.509 certificate. After you authenticate using SSO with your User Portal username and password, you will be recognized by all XSEDE services on which you have an account, without having to enter your login information again for each resource.

18 SSO through the User Portal
Make sure you are logged into the XSEDE User Portal.
Go to the My XSEDE tab.
Go to the Accounts link.
Resources you have access to will be indicated by a login link.
Click on the login link of the resource you would like to access.

19 SSO Through the User Portal A Java applet will start; you may be asked for permission to allow it to run. After the applet starts, a blank terminal window will appear in your web browser. The window will fill with text indicating that you have been successfully logged into the resource of your choice. You can now work on this machine, and connect to other machines from this terminal, using the command gsissh machine-name.

20 Another Access Path: Science Gateways There are many sites that give you web-based, domain-specific access to applications running on XSEDE. Collectively, we call them Science Gateways. View a list of them on the User Portal, in the Resources tab. Access methods vary; click on the specific gateway to find out more (dozens are available, across many fields!). iPlant's DE is one of these gateways!

21 The Mobile User Portal Allows browsing of all XSEDE systems, file downloading, and third-party transfers. It provides several features for mobile users, such as one-touch file publishing to the user's public folder, simple creation of shared groups for any file/folder, and one-click permission management.

22 XUP Resource Monitor View system information: TFLOPS, memory, today's load, jobs running in the queue. Status (up or down): takes you to the news announcements that tell you when the machine is expected to come back up.

23 User Portal: User Forums The User Forums are a great place to ask questions, get help, or discuss ideas about XSEDE.

24 Running Jobs Each system in XSEDE has some local options you will need to know about to run jobs. To learn about the specifics of each system, check out the user guides: in the portal, under Documentation, select User Guides. Pay particular attention to:
File Systems
Batch job submission

25 File Systems on XSEDE Resources Knowing where your data resides on XSEDE, and choosing the appropriate storage, is your responsibility. In general, all resources provide:
HOME: permanent space, but small. A good choice for building software and working file collections of small to medium sized files, where a medium sized file is less than 50 MB.
SCRATCH: more space, but temporary; use it while you are running your jobs. Scratch space is not backed up, has limited redundancy, and is periodically purged of old files!
Archival Storage: long-term storage of large amounts of data (often tape); slower access, accessible from all sites.
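As a quick illustration of how these spaces are typically used (a sketch only; the environment variables $HOME and $SCRATCH and the lfs quota command are conventions on Lustre-based, TACC-style systems and may differ elsewhere):
# Build your code in the small, permanent home file system
cd $HOME
# Run jobs and keep large intermediate files in the temporary scratch file system
cd $SCRATCH
# Check your usage against quota on a Lustre file system
lfs quota -u $USER $SCRATCH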

26 Batch Jobs All XSEDE compute resources use some form of batch scheduler. Compute jobs *cannot* be run on the login nodes (they are no faster than a normal workstation!). There are several batch systems in use, but all work basically the same way:
Request the number/type of nodes you need.
Specify how long you need to run.
Specify where your output files go.
Jobs are typically described with a job script:

27 Sample Job Script for Grid Engine on TACC Lonestar:
#!/bin/bash
#$ -N mympi               # Job name
#$ -j y                   # Combine stderr and stdout
#$ -o $JOB_NAME.o$JOB_ID  # Name of the output file
#$ -pe 12way 24           # Requests 12 tasks/node, 24 cores total
#$ -q normal              # Queue name "normal"
#$ -l h_rt=01:30:00       # Run time (hh:mm:ss)
ibrun ./a.out             # Run the MPI executable named "a.out"

28 Submitting/manipulating batch jobs Submit the script that you have created. Actual commands are machine specific, but they follow general principles:
qsub jobname
qstat -a
qstat -u username
qdel jobid
man qsub
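Putting those commands together, a typical session on a Grid Engine-style system might look like the sketch below (the script file name is hypothetical, and flags vary between schedulers):
# Submit the job script from the previous slide
qsub mympi.job
# Watch your jobs in the queue
qstat -u $USER
# Remove a job, using the job ID reported by qsub/qstat
qdel 123456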

29 Managing Your Environment: Modules Allows you to manipulate your environment.
module list shows currently loaded modules.
module avail shows available modules.
module show <name> describes a module.
% module load gcc/3.1.1
% which gcc
/usr/local/gcc/3.1.1/linux/bin/gcc
% module switch gcc/3.1.1 gcc/3.2.0
% which gcc
/usr/local/gcc/3.2.0/linux/bin/gcc
% module unload gcc
% which gcc
gcc not found

30 File Transfers: Small Files (<2GB) To transfer small files between XSEDE resources and/or your own workstation, you can use scp or sftp. From Linux or Mac, you can run these commands directly from the terminal. From Windows, use your ssh client to do this (PuTTY has free downloads for these tools, too; just Google "putty sftp"). These are easy to use and secure, but provide poor performance for large files.
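For example, from a Linux or Mac terminal (the username, hostname, and file names below are illustrative only):
# Copy a local file to your home directory on an XSEDE system
scp results.dat dstanzi@ranger.tacc.utexas.edu:~/
# Or open an interactive sftp session to upload and download files
sftp dstanzi@ranger.tacc.utexas.edu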

31 File Transfer: User Portal
Log into the XSEDE User Portal.
Select the Resources tab.
Select the File Manager tab (now wait for the Java applet to load). You may need to allow the applet to run by clicking OK.
You will see a list of all machines. This includes your local machine, and XSEDE$Share: 2GB of space to collaborate, which allows you to share files with your collaborators.

32 Transferring Large Files with the User Portal For large file transfers, we need to set a few parameters. Before clicking on the resource, right-click on the resource you're going to transfer data from and select Edit. This will bring up the file transfer parameters:
Click the checkbox next to Stripe Transfers, then click OK.
Repeat for the other panel using the destination resource.
Repeat this every time you change resources.
Drag and drop the file from source to destination to transfer.

33 Large Files: Command Line Transfers Within XSEDE, you can use the command line to transfer large files with uberftp or globus-url-copy.
Example, from PSC Blacklight to TACC Ranger, optimized for large files:
globus-url-copy -stripe -tcp-bs <buffer size> gsiftp://gridftp.blacklight.psc.teragrid.org/scratcha/joeuser/file gsiftp://gridftp.ranger.tacc.teragrid.org/scratch/joeuser
Look here for the names of the GridFTP servers at each site: support/transfer_location
speedpage.psc.edu provides information on the transfer speeds you can expect using globus-url-copy with the optimized parameters above.

34 What is Globus Online? The initial implementation of XSEDE User Access Services (XUAS).
A reliable data movement service:
High performance: move terabytes of data in thousands of files
Automatic fault recovery
Across multiple security domains
Designed for researchers:
Easy "fire and forget" file transfers
No client software installation
New features automatically available
Consolidated support and troubleshooting
Works with existing GridFTP servers
Ability to move files to any machine (even your laptop) with ease
"We have been using Globus Online to move files to a TeraGrid cluster where we analyze and store tens of terabytes of data... I plan to continue using GO to access these resources within XSEDE to easily get my files where they need to go." -- University of Washington user
"The service is reliable and easy to use, and I look forward to continuing to use it with XSEDE. I've also used the Globus Connect feature to move files from TeraGrid sites to other machines -- this is a very useful feature which I'm sure XSEDE users will want to take advantage of." -- NCSA user

35 ECSS: Extended Collaborative Support Services Expertise available in a wide range of areas:
Performance analysis
Petascale optimization
Gateways and web portals
Specialized scientific software
You can solicit ECSS support at any time through the Allocations tab in the XSEDE User Portal. It requires written justification and a project plan. Inquire at help@xsede.org.

36 ECSS can include:
Porting applications to new resources
Providing help for portal and gateway development
Implementing algorithmic enhancements
Implementing parallel math libraries
Improving scalability of codes to higher processor counts
Optimizing codes to efficiently utilize specific resources
Assisting with visualization, workflow, data analysis, and data transfer

37 Questions? Need Help? First, try searching the knowledge base or other documentation. Next, submit a ticket: portal.xsede.org -> My XSEDE -> Tickets. Or send email to help@xsede.org, or call the Help Desk.

38 Need more training? portal.xsede.org -> Training
Course Calendar
On-line training

39 Selecting the right XSEDE resource
October 26, 2012
Dan Stanzione, Deputy Director, TACC

40 Overview
What kinds of architectures are out there?
Will my program run (well) on them? If not, what can I do about it?
What are the current XSEDE systems?
What's coming down the pipe?

41 Parallel Computing Architectures
Shared memory - threads
Distributed memory - tasks
Something in between (most everything)

42 Shared Memory Parallelism
Processors have access to the same memory
Limited number of processors
Fast communication via memory access

43 Shared Memory Model Tasks share a common address space, which they read and write asynchronously.
Advantage: no need for explicit data exchange between tasks.
Disadvantage: understanding performance and managing data locality become more difficult.
Implementation: the native compiler translates user program variables into actual memory addresses, which are global. No common distributed memory platform implementations currently exist.

44 Distributed Memory Parallelism
Processors do not have access to each other's memory
Many processors available (large scale parallelism possible)
Communication speed determined by network (switch)

45 Distributed Memory Model
Advantages:
Easy to scale with the number of processors
No need to maintain cache coherency
Faster local access to memory; no interference or overhead
Cost effective; can use commodity, off-the-shelf processors and networking
Disadvantages:
The programmer is responsible for data exchange between processors
Non-uniform memory access (NUMA) times
Difficulty mapping some data structures to this memory architecture

46 Parallel Programming Models
Shared memory: create *threads*, primarily with OpenMP (or Pthreads).
Distributed memory: create *tasks*, primarily with MPI, using a Single Program Multiple Data model.
None of this applies exactly to accelerators; we will get back to that.
All of this assumes that C/C++ and Fortran are the only languages in the universe...
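As a rough sketch of what the two models look like in practice (compiler names, the ibrun launcher, and the thread count below are typical of TACC-style systems but vary by site):
# Shared memory: compile an OpenMP code and run it with 12 threads on one node
gcc -fopenmp mycode.c -o mycode
export OMP_NUM_THREADS=12
./mycode
# Distributed memory: compile an MPI code with the MPI compiler wrapper and launch it
mpicc mympi.c -o a.out
ibrun ./a.out    # other sites use mpirun or mpiexec instead of ibrun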

47 What if my application is not C/C++ or Fortran? Your mileage will definitely vary. In response to the Java question, very senior architects at very large chip makers answer, "I thought we were talking about performance." The same kinds of answers apply to Python, and they get worse for less mainstream things. Some systems will not support these at all: e.g., Cray systems have no Java runtime on compute nodes (Kraken, Blue Waters). Most will support them at some level, but you will have difficulty getting maximum performance. If you just need a high throughput system, you may well not care!

48 Accelerators and Co-Processors Architectures are becoming much more heterogeneous, particularly with the wide use of:
GPUs (NVIDIA, AMD Fusion)
Intel MIC (Many Integrated Core)
FPGAs (to a lesser extent, e.g. Convey)
Some custom networks

49 Accelerators and Co-Processors The biggest problem with the new accelerated models is the lack of standard programming models. No existing MPI/OpenMP codes will work out of the box; a few have been ported to one system or the other. Options:
CUDA (most stable, most popular): NVIDIA extensions to C to support a streaming model (C-only; PGI provides a subset in Fortran).
OpenCL: an attempt at a standard for offload extensions (C-only).
OpenACC: a fork off of OpenMP to do offload-like things for OpenMP (much to their dismay).
All of these models do two things:
Partition your code between a piece that runs on the host and a piece that runs on the accelerator.
Express your accelerated code in a highly parallel, long-vector, streaming model to make memory access efficient.
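For a rough sense of what these options look like from the command line (a sketch only; the toolchains and flags depend on what is installed on a given system, and the source file names are illustrative):
# CUDA: compile a CUDA C source file with NVIDIA's nvcc compiler
nvcc saxpy.cu -o saxpy_cuda
# OpenACC: compile C code containing #pragma acc directives with the PGI compiler
pgcc -acc saxpy_acc.c -o saxpy_acc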

50 XSEDE Resource Picking Rules of Thumb
Do I want to write code? (If yes, you have more options.)
Is my code parallel? (What language is it in? Does it have parallel support?)
If YES: do I need shared memory or distributed memory?

51 XSEDE Resource Picking Rules of Thumb
Does it use MPI? Probably distributed memory systems.
Does it use OpenMP? Probably shared memory systems.
If both, probably distributed memory, but: how much memory do I need per task? (If more than ~32GB, back to shared memory.) This is per task; total memory can be 200TB, and this is an important distinction!
Does it support CUDA, or anything else custom? (Then you can think about accelerators.)

52 XSEDE Resource Picking Rules of Thumb
NO, my code isn't parallel:
Do you want to make it parallel? This may be a big undertaking.
Do you have to do a lot of runs? If no, you may be in the wrong place. If yes, how big is one run?
If a single run fits in 24-32GB of memory and runs in hours, you may be able to use a distributed memory resource in high throughput mode.
If you need more RAM than that, we have a few high memory resources. But remember, running a non-parallel code on, say, Blacklight to get 2TB of RAM means leaving 511 cores empty on a $4M taxpayer-funded machine to support you at only 0.2% of peak, at best!

53 Current XSEDE Resources
Compute (shared and distributed memory)
Visualization
Storage
Special purpose

54 Ranger: World Class Supercomputing Capability!

55 Ranger System Summary
Peak performance: Teraflops
3,936 Sun four-socket blades; 15,744 AMD Barcelona processors (quad-core, four flops/clock cycle)
Total memory: Terabytes (2 GB/core, 32 GB/node); 123 TB/s aggregate bandwidth
Interconnect: 1 GB/s, sec latency; Sun Data Center Switches (2), InfiniBand, up to x ports each; 7.8 TB/s backplane
Ranger will be decommissioned in 2013 (after more than 3 million jobs for more than 5,000 users).

56 Kraken The next of the Track 2 systems, awarded the year after Ranger. Oak Ridge/NICS. So: newer, faster cores and even more peak performance (~1PF); still the largest system in XSEDE! A Cray, with AMD processors and single-socket nodes; >100,000 total cores. Cray Linux on the nodes; really good for C/Fortran MPI codes, less support for much else.

57 Forge The first XSEDE resource targeted exclusively at production GPU computing. ~150TF peak, 4TB total RAM. 36 nodes, with 16 AMD cores per node plus 6 NVIDIA 2070 GPUs per node (216 total). Really relies on CUDA apps! Leaves the XSEDE pool September 1 (Keeneland will bring new GPU capability).

58 Queen Bee/Steele (LSU/Purdue) Older, but still solid, distributed memory Linux clusters. 50TF (Queen Bee) / 60TF (Steele) of Intel quad-core processors (~5k/6k processor cores). Steele has a GigE connection; good for throughput or coarse-grain parallel implementations. Queen Bee has an InfiniBand interconnect (like Ranger), better for MPI jobs.

59 Trestles An Appro Linux distributed memory cluster. AMD Magny-Cours processors; 324 nodes, 10,368 cores; 100TF peak performance. Local flash disks on the nodes. QDR InfiniBand (40Gbps). Focused more on interactive jobs, with support for jobs generated by web gateways (shorter max run times and shorter queue waits).

60 Lonestar 4
Dell Intel 64-bit Xeon Linux cluster
22,656 CPU cores (302 TFlops)
44 TB memory, 2.4 PB disk

61 Lonestar: The Stats
302 TeraFlops (trillion floating point operations per second)
22,656 Intel Westmere processor cores in 1,888 Dell blades
2GB RAM per core, 24GB per node, 44.3 TB total
40Gbps QDR InfiniBand interconnect in a fully non-blocking fat tree (Mellanox)
1.2 Petabytes of disk in a parallel filesystem (DataDirect, Lustre)

62 Specialized Subsystems Lonestar is a comprehensive system for science and engineering:
High Performance Computing
High Throughput Computing
Visualization subsystem
Large memory subsystem
GPU/Accelerated Computing subsystem

63 Blacklight The premier shared memory resource in XSEDE: 32TB of shared RAM. An SGI UltraViolet system with 4,096 cores and 37TF peak performance. Peak is not as high as other systems, but it is focused on running large shared memory problems that will not run on the big systems. Pittsburgh Supercomputing Center.

64 Gordon San Diego Supercomputer Center. Focus on data intensive applications; just started production.
1,024 nodes / 16,384 cores of Intel Sandy Bridge
341TF peak performance
Dual-rail QDR interconnect
Features for data intensive applications: 300TB of flash storage as a fast cache, and ScaleMP to provide virtual shared memory of up to 2TB (slower than real shared memory, but much faster than not having it!).

65 Nautilus, Longhorn, Spur: Visualization/Data Analysis Systems
Longhorn: GPUs used for the novel purpose of graphics (at TACC). A system for remote visualization; 2,048 Intel Nehalem cores (Dell) and 512 NVIDIA GPUs; 4GB or 12GB per core, QDR InfiniBand.
Nautilus: an SGI UV system at NICS (like Blacklight) for larger memory, with 8 NVIDIA GPUs (less visualization than Longhorn, more statistical analysis).
Spur: the original remote visualization system, 8 high memory nodes with GPUs integrated on Ranger.

66 Condor Pool Purdue University. A specialized resource to take advantage of a server farm. Focus on throughput computation: large numbers of single node jobs (similar to what you might run on a commercial cloud, e.g. Amazon). ~150TF peak; ~4,500 CPUs of various speeds, with 0.5GB to 16GB of memory.

67 Storage Systems A variety of large scale storage systems, mostly focused on archive or on swapping data between resources:
Lustre-WAN (PSC), Data Capacitor (Indiana): online Lustre storage
Data Replication Service (TACC)
HPSS (6PB, NICS), NCSA Tape (10PB, NCSA), Ranch (60PB, TACC): all long-term, primarily tape, archives
New Data SuperCell at PSC (disk-based archive)

68 Coming XSEDE Resources
Keeneland: Fall 2012
Stampede: end of 2012/start of 2013
Blue Waters: mid-2012/start of 2013

69 Keeneland A Track 2D experimental system: Georgia Tech, Oak Ridge/NICS. The first system to bring GPU computing to production at large scale, with lots of focus on evolving the GPU programming model. Supporting CUDA and OpenCL; developing the Ocelot model.

70 Keeneland Architecture
KID: Keeneland Initial Delivery, in production now. 7 racks, 200TFLOPS, 3 NVIDIA 2070 GPUs per node. In the early user phase now.
Production system: about triple the peak performance of KID, with NVIDIA 2090 GPUs. Full specs available soon.

71 Stampede A two petaflop, 100,000+ core Xeon-based cluster to be the new workhorse of the NSF/XSEDE open science community. Eight additional petaflops, several hundred thousand cores, of Intel Many Integrated Core (MIC) processors to provide a revolutionary, innovative capability.

72 Stampede: Programming Models
A 2PF Xeon-only system (MPI, OpenMP)
An 8PF MIC-only system (MPI, OpenMP)
A 10PF heterogeneous system (MPI, OpenMP)
Run separate MPI tasks on Xeon vs. MIC, or use OpenMP extensions for offload for hybrid programs.

73 Power/Physical Stampede will physically use U cabinets. Power density (after upgrade in 2015) will exceed 40kW per rack. Estimated 2015 peak power is 6.2MW.

74 Blue Waters Not technically an XSEDE resource, but too big not to talk about! A Cray (like a newer, much larger Kraken), but also with a large number of GPUs (>3,000). Detailed specs are not yet public:
~25,000 nodes, AMD processors/NVIDIA GPUs
4GB per core
Lots and lots of I/O
11.5PF peak performance
Focus on a handful of full-system, sustained-petaflops apps (requiring 100s of thousands of cores).

75 Thanks for listening, and welcome to XSEDE!
