1 TLC2 Overview
Lennart Johnsson, Director
Cullen Distinguished Professor of Computer Science, Mathematics and Electrical and Computer Engineering

2 TLC2 Overview - Outline
- Mission
- Services
- Resources
- Innovation program
- Member organizations

3 TLC2 Mission: to foster and support collaborative, multidisciplinary research, education and training in Computational Science and Engineering and other disciplines that can benefit from information technology, and in Computer Science and Information Technology

4 TLC2 Services
- Assistance in proposal preparation and research award management
  - proposals/yr, M$ requested annually
  - ~25 awards annually, 15-30 M$ in new awards annually
- Assistance with arrangements of workshops, symposia, conferences, and external relations
  - ~10 conferences/symposia annually
  - ~10 seminars
  - ~5 training sessions (web development, HPC)
- Outreach and promotional efforts (web, flyers, posters, ...)
  - Develops 5-10 new websites annually
  - Maintains and hosts web sites

5 TLC2 Services
Provide expertise in:
- Database development
- Web tools and web development
- High-performance computation
- High-performance networking
- Visualization
- AV tools

6 TLC2 Resources
- State-of-the-art communication tools and electronic classrooms
- Facilitate and lead initiatives for high-end computing and storage capabilities
- Facilitate and lead initiatives for high-end network capabilities
- Facilitate and lead initiatives for high-end visualization capabilities

7 Communication Facilities
AG node from InSORS
- Used regularly for HiPCAT meetings: 50% domestic, 50% international
AV Facilities
- Video conference capabilities in rooms PGH-200, PGH-218, and PGH-232
- Ability to direct rendering cluster output to PGH-200, PGH-218 and PGH-232 in addition to displays in PGH-216

8 Classrooms PGH 200
48 seats with
- 15 flat panel displays
- AMD Athlon 3800 dual-core PCs
- individual teacher-student interaction facilities
- Dual boot Linux / Windows XP server
- AV for video conferencing
- Dual projection capability
- Streaming video from visualization theatre

9 Classrooms PGH
seats with
- Video conference facilities
- Stereographic projection capabilities
- Power/Network port at every desk

10 Linux Clusters
Atlantis Cluster (atlantis.tlc2.uh.edu)
- 152 Itanium2 nodes: dual 1.3 GHz/3MB processors per node
- 4 GB per node
- Myrinet: 152 host ports, 304 spine ports
- GigE for System Area Networking, 4:1 oversubscription
- Management network
- Second most powerful computer at a Texas educational institution
- Ranked 102 among the world's 500 most powerful computers (2003)
- Acquired as part of an Intel/HP/TLC2 partnership
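As a rough check on Atlantis's headline figures, the sketch below estimates the cluster's aggregate memory and peak floating-point rate from the per-node numbers on this slide; the 4 floating-point operations per clock for the Itanium 2 is an assumption not stated here.

```python
# Rough aggregate estimate for the Atlantis cluster from the per-node figures above.
nodes = 152
cpus_per_node = 2            # dual-processor nodes
clock_ghz = 1.3              # Itanium2 1.3 GHz
flops_per_cycle = 4          # assumption: two FMA units -> 4 FP ops per clock
mem_per_node_gb = 4

peak_gflops = nodes * cpus_per_node * clock_ghz * flops_per_cycle
total_mem_gb = nodes * mem_per_node_gb

print(f"Peak performance: {peak_gflops:.0f} GFLOPS")   # ~1581 GFLOPS
print(f"Total memory:     {total_mem_gb} GB")          # 608 GB
```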

11 Linux Clusters
Lemuria (coming soon)
- 48 Itanium2 4-way nodes: four 1.3 GHz/3MB processors per node
- 32 GB per node
- GigE for node interconnect
- GigE for System Area Networking
- Management network

12 Linux Clusters
Archen
- 46 dual Pentium III nodes: dual 1 GHz processors per node
- 2 GB per node
- Fast Ethernet for node interconnect
- GigE for System Area Networking
- Management network
Pala
- 44 AMD Athlon nodes: single 1.4 GHz processor per node
- 1 GB per node
- GigE for node interconnect
- GigE for System Area Networking
- Management network

13 Linux SMPs
Gimli
- 16-way Itanium2 SMP: 1.5 GHz/3MB processors
- 32 GB
- GigE for System Area Networking
- Management network
Medusa
- Three 4-way Opteron SMPs: 1.6 GHz processors per SMP
- 16 GB memory per SMP
- InfiniBand SMP interconnect
- GigE for System Area Networking
- Management network

14 Linux SMPs
Flexo & Bender
- 64-way Itanium2 SMPs: 1.3 GHz/3MB processors each
- 512 GB memory each
- NUMAlink interconnect
- GigE for System Area Networking
- Management network

15 ACRL Resource - Eldorado
60-node Itanium2 cluster
- 6 nodes with dual 0.9 GHz/1.5MB processors, 4 GB
- 2 nodes with dual 1.3 GHz/3MB processors, 4 GB
- 52 nodes with 1.5 GHz/3MB processors, 4 GB
- SCI interconnect
- GigE for System Area Networking
- Management network

16 ACRL Resource - Medusa
16-node AMD Opteron cluster
- 16 nodes with dual 2.0 GHz processors, 4 GB
- InfiniBand interconnect
- GigE for System Area Networking
- Management network

17 ACRL SMPs
Eldorado
- 4-way Itanium2 SMP: four 1.5 GHz/6MB Itanium2 processors
- 16 GB
- SCI interconnect
- GigE for System Area Networking
- Management network
Medusa
- 4-way Opteron SMP
- 16 GB memory
- InfiniBand SMP interconnect
- GigE for System Area Networking
- Management network

18 UH Bioinformatics Lab & IMD Resource - Nibbler
- 64-way Itanium2 SMP: 1.3 GHz/3MB processors
- 512 GB memory
- NUMAlink interconnect
- GigE for System Area Networking
- Management network

19 IDIA Resource - Archimedes
40-node AMD Athlon cluster
- 40 nodes with dual 1.5 GHz processors, 2 GB
- Fast Ethernet interconnect
- GigE for System Area Networking
- Management network

20 Distributed Storage Network
- Provides NFS-based filesystems to clients in Chemistry, Geosciences, Computer Science, and TLC2
- ~70 TB
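A minimal client-side sketch for checking capacity on one of these NFS filesystems; the mount point below is hypothetical, since the actual export paths are not listed on the slide.

```python
import os

# Hypothetical NFS mount point; real export names are not given here.
MOUNT = "/nfs/tlc2/scratch"

st = os.statvfs(MOUNT)
total_tb = st.f_frsize * st.f_blocks / 1e12
free_tb = st.f_frsize * st.f_bavail / 1e12
print(f"{MOUNT}: {total_tb:.1f} TB total, {free_tb:.1f} TB available")
```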

21 Coming Soon...
- Two Sony 4096 x 2160 projectors
- 16 x 9 projection screen
- Active stereo
- 42-seat theatre
- Rendering cluster with 4 AMD Opteron dual-socket, dual-core nodes
  - 4 GB memory per node
  - Dual NVIDIA Quadro FX4500 video cards per node
  - GigE interconnect
  - Management network
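For a sense of the data rates such a theatre implies, the sketch below works out the pixel count and uncompressed video bandwidth for one 4096 x 2160 projector; the 24-bit color depth and 60 Hz refresh (doubled for active stereo) are assumptions, not figures from the slide.

```python
# Uncompressed bandwidth estimate for one 4096 x 2160 projector.
width, height = 4096, 2160
bits_per_pixel = 24          # assumption: 8-bit RGB
refresh_hz = 60              # assumption
stereo_factor = 2            # active stereo doubles the frame rate

pixels = width * height                                   # ~8.85 Mpixels
gbps = pixels * bits_per_pixel * refresh_hz * stereo_factor / 1e9
print(f"{pixels/1e6:.2f} Mpixels, ~{gbps:.1f} Gbit/s uncompressed")
```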

22 System summary (original columns: system, no. of nodes, sockets per node, cores per socket, processor, memory per node (GB), total no. of cores, total memory (GB), peak perf. (GF), interconnect, location)
- Atlantis: Itanium2 1.3 GHz, Myrinet, PGH 576
- Gimli: Itanium2 1.5 GHz, PGH 576
- Medusa: Opteron 1.6 GHz, IB, PGH 576
- Flexo: Itanium2 1.3 GHz, RCC
- Bender: Itanium2 1.3 GHz, RCC
- Lemuria: Itanium2 1.3 GHz, GigE, RCC
- Archen: Pentium III 1.0 GHz, Fast Ethernet, RCC
- Pala: Athlon 1.4 GHz, Fast Ethernet, RCC
- Eldorado (ACRL): Itanium2 0.9/1.3/1.5 GHz, SCI
- Medusa (ACRL): Opteron 1.6/2.0 GHz, IB
- Nibbler (Bioinformatics/IMD): Itanium2 1.3 GHz, RCC
- Archimedes (IDIA): Athlon 1.5 GHz, Fast Ethernet, RCC

23 Atlantis Usage Statistics: FY node hours and percentage of total by department (Engineering, Chemistry, Computer Science, TLC2, Physics, BioChem)

24 TLC2 Computing, Storage and Visualization Facilities (UCC: 40 TB)

25 RENoH: network map with LEARN (12 strands) and NLR (24 strands) links to College Station, Dallas, Kansas City, Chicago, San Antonio, Baton Rouge, Jacksonville, El Paso, and Los Angeles

26 LEARN, NLR, and Internet2 NewNet (August 2007)

27 TIGRE Portal and Testbed

28 TIGRE / UltraScan Demo: centrifuge data, TIGRE Portal

29 A Potential Application

30 Proton treatment

31 Proton treatment

32 RENoH Dark Fiber Network: connected sites include UHD, RCF, Research, Baylor, MD Anderson, Rice, TAMU HSC, Texas Heart Institute, Texas Woman's U, UTHSCH, UH Pharmacy, Rice TMC, UH, UHCL, NASA, hospitals (Hermann, Methodist, St. Luke's, Shriners, Texas Children's), UHCC, A&M, UTMB

33 National LambdaRail - NLR

34 Internet2

35 ESnet, Internet2, LEARN, SETG, NLR

36 Genomic Sequencing

37 Genomic Sequencing

38 Genomic Sequencing

39 IMD Solexa Sequencer computing and storage environment (data-flow diagram)
- Sequencer output to Instrument PC (1 TB storage); control via a small Dell PC
- 1 Gbps via UH campus network; 10 Gbps via TLC2 network
- Data switch (231 SR2); optional fat node (5 TB storage?)
- Single-mode fiber (1 Gbps) via distributed storage network (private) to the 231 SR2 Cluster Generation Station
- 10 Gbps via TLC2 network to Foundry (576 PGH), Cisco (576 PGH), and switch (RCC)
- 1 Gbps SETG / Internet2 to SDSC
- Computation: Atlantis cluster, Gimli (16-way), SGI Altix (Nibbler); 10 Gbps via TLC2 network / NLR
- Storage and web server (data access / management)
- Analysis pipeline: Firecrest (image analysis), Bustard (base calling), GERALD (sequence analysis); error/histogram images from GERALD; scripts needed for raw images
- Diagram legend: data flow vs. computation flow; TLC2 network, UH campus network, new fiber pull
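To put the diagram's link speeds in perspective, the sketch below estimates how long it would take to move a 1 TB data set (the instrument PC's local capacity) over the 1 Gbps campus path versus the 10 Gbps TLC2 path; the ~80% effective link utilization is an assumption.

```python
# Transfer-time estimate for moving sequencer output off the instrument PC.
data_tb = 1.0                 # instrument PC holds ~1 TB (slide figure)
efficiency = 0.8              # assumption: ~80% of nominal link rate achieved

def transfer_hours(link_gbps):
    effective_gbps = link_gbps * efficiency
    return data_tb * 8e12 / (effective_gbps * 1e9) / 3600

print(f"1 Gbps campus link: {transfer_hours(1):.1f} h")    # ~2.8 h
print(f"10 Gbps TLC2 link:  {transfer_hours(10):.2f} h")   # ~0.28 h
```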

40 Keck/IMD NMR Center at UH
Bruker 800 NMR features:
- 18.8-Tesla magnetic field
- 800 MHz proton frequency
- ~2.0 K
- 70 Gauss/cm z-axis pulsed field gradient (PFG)
- Two probes:
  - TXI: three-channel inverse probe (1H/13C/15N)
  - QXI: four-channel inverse probe (1H/13C/15N/31P)

41 Keck/IMD NMR Center at UH
Bruker 600 NMR features:
- 14.1-Tesla magnetic field
- 600 MHz proton frequency
- 4.2 K
- 35 Gauss/cm xyz-axis pulsed field gradient (PFG)
- Two probes:
  - TXI: three-channel inverse probe (1H/13C/15N)
  - QXI: four-channel inverse probe (1H/13C/15N/31P)
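The stated proton frequencies follow from the field strengths via the Larmor relation; a quick check using the proton gyromagnetic ratio of about 42.58 MHz/T:

```python
# Larmor frequency check: f = gamma * B, with gamma(1H) ~ 42.577 MHz/T.
GAMMA_MHZ_PER_T = 42.577

for field_t in (18.8, 14.1):
    freq_mhz = GAMMA_MHZ_PER_T * field_t
    print(f"{field_t} T -> {freq_mhz:.0f} MHz")   # ~800 MHz and ~600 MHz
```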

42 TLC2 Innovation Program
TLC2 Innovative Research Awards, FY03 - FY06: $2,357,348
Departments:
- Biology & Biochemistry
- Chemistry
- Computer Science
- Economics
- Geosciences
- HATAC
- Health & Human Performance
- Inform. & Logistics Tech. (CTL)
- Mathematics
- Mechanical Engineering
- Physics
- Psychology

43 TLC2 - Current initiatives and centers of excellence
- Abramson Family Center for the Future of Health (joint with the Methodist Hospital Research Institute and Technion-Israel Institute of Technology)
- Advanced Computing Research Laboratory (ACRL)
- Houston Louis Stokes Alliance for Minority Participation (H-LSAMP)
- Institute for Digital Informatics and Analysis (IDIA)
- Institute for Molecular Design (IMD)
- Mission Oriented Seismic Research Program (M-OSRP)
- Southwest Public Safety Technology Center (SWTC)
- Texas Institute for Measurement, Evaluation and Statistics (TIMES)

44 TLC2 Colleges and Departments
Colleges:
- Engineering
- Liberal Arts and Social Sciences
- Natural Sciences and Mathematics
- Pharmacy
- Technology
Departments:
- Anthropology
- Biology and Biochemistry
- Chemistry
- Chemical Engineering
- Computer Science
- Economics
- Geosciences
- Human Development & Consumer Sciences
- Mathematics
- Mechanical Engineering
- Pharmacy
- Physics
- Pharmacological and Pharmaceutical Sciences
