Scientific Visualization at JSC


Slide 1: Scientific Visualization at JSC
Jens Henrik Göbbert, Jülich Supercomputing Centre, Forschungszentrum Jülich GmbH, Germany
Cross-Sectional Team Visualization, j.goebbert@fz-juelich.de

Slide 2: Visualization at JSC
The Cross-Sectional Team Visualization provides domain-specific user support and research at JSC:
- Scientific Visualization: R&D and support for the visualization of scientific data
- Virtual Reality: VR systems for analysis and presentation
- Multimedia: multimedia productions for websites, presentations, or TV

Slide 3: Visualization at JSC, General Hardware Setup
JURECA provides 12 visualization nodes with 2 Nvidia Tesla K40 GPUs per node (12 GB RAM on each GPU):
- 2 vis login nodes: jurecavis.fz-juelich.de
- 10 vis batch nodes; keep in mind: 8 nodes with 512 GB RAM, 2 nodes with 1024 GB RAM
Visualization is NOT limited to the visualization nodes (incl. GPUs) only; software rendering is possible on any node.
[Diagram: 2 vis login nodes, 10 vis batch nodes, 12 login nodes, and 1872 compute nodes connected via InfiniBand to the GPFS data storage]
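A minimal sketch of reaching this hardware, assuming a valid JURECA account and a registered SSH key; the Mesa environment variable for software rendering depends on the installed OpenGL stack:

    # log in to one of the two visualization login nodes
    ssh <USERID>@jurecavis.fz-juelich.de

    # software rendering on any node, e.g. via Mesa's llvmpipe driver
    LIBGL_ALWAYS_SOFTWARE=1 glxgears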

Slide 4: Outline
- Visualization at JSC: Cross-Sectional Team Visualization; setup (software & hardware)
- Simplify your life: remote 3D visualization
- In situ visualization: introduction, coupling strategies, example

Slide 5: Simplify your life: Remote 3D Visualization on JURECA with VNC

Slide 6: Motivation, lowering the barriers to visualization
Why remote visualization makes sense for the user:
- Do not spend time installing and configuring visualization software.
- Do not move your data.
- Do not be limited by your local hardware.
... and why it makes sense for the CS-Team Visualization:
- Install and configure visualization software once, for all users.
- Any new feature is available to all users.
- Fix bugs once, for all users.
- More time left for the interesting things.

Slide 7: Motivation, lowering the barriers to visualization
[Screenshot of a remote desktop session: desktop symbols for vis apps, LLview, the blue JSC background, a MOTD window, a running visualization application, and gauges for CPU/memory utilization, GPU utilization, VNC utilization, plus a clock counting up/down]

Slide 8: Remote 3D Visualization, General Setup
The user's workstation connects through the firewall to JURECA's vis login or vis batch nodes (a sketch of both access paths follows below).
Vis login node:
- direct user access
- no accounting
- shared with other users
- no parallel jobs (no srun)
Vis batch node:
- access via the batch system
- accounting
- exclusive usage
- parallel jobs possible
[Diagram: user's workstation, firewall, 2 vis login nodes, 10 vis batch nodes, 12 login nodes, 1872 compute nodes, GPFS data storage on InfiniBand]
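As a hedged illustration of the two access paths, assuming Slurm defaults and a partition name such as "vis" (the actual partition name may differ, check the site documentation):

    # vis login node: direct, shared, no accounting
    ssh <USERID>@jurecavis.fz-juelich.de

    # vis batch node: exclusive allocation through the batch system
    salloc --partition=vis --nodes=1 --time=02:00:00
    srun --pty /bin/bash -i    # interactive shell on the allocated node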

Slide 9: Remote 3D Visualization with X Forwarding + Indirect Rendering
Traditional approach (X forwarding + indirect rendering): ssh -X <USERID>@<SERVER>
- uses the GLX extension to the X Window System
- the X display runs on the user's workstation
- OpenGL commands are encapsulated inside the X11 protocol stream
- OpenGL commands are executed on the user's workstation
Disadvantages:
- The user's workstation requires a running X server.
- The user's workstation requires a graphics card capable of the required OpenGL.
- The user's workstation defines the quality and speed of the visualization.
- The user's workstation requires all data needed to visualize the 3D scene.
Try to AVOID this for 3D visualization.
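For reference, a sketch of the traditional approach; glxinfo is a standard X utility and reports which OpenGL stack a display actually uses:

    # X forwarding: the display and the OpenGL execution stay on the workstation
    ssh -X <USERID>@<SERVER>

    # on the server: check the renderer behind the forwarded display
    glxinfo | grep "OpenGL renderer"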

Slide 10: Remote 3D Visualization with VNC (Virtual Network Computing) + GLX Forking
State-of-the-art approach (VNC with VirtualGL): vncserver, vncviewer
- platform independent
- application independent
- session sharing possible
Advantages:
- No X server is required on the user's workstation (the X display runs on the server, one per session).
- No OpenGL is required on the user's workstation (only images are sent).
- The quality of the visualization does not depend on the user's workstation.
- The amount of data sent is independent of the size of the 3D scene.
Try to USE this for 3D visualization.

Slide 11: Remote 3D Visualization with VNC (Virtual Network Computing) + GLX Forking
GLX forking with vglrun (VirtualGL):
- OpenGL applications send both GLX and X11 commands to the same X display.
- Once VirtualGL is preloaded into an OpenGL application, it intercepts the GLX function calls from the application and rewrites them. The corresponding GLX commands are then sent to the X display of the 3D X server, which has a 3D hardware accelerator attached.
Disadvantages:
- Multiple users on a login node share the same graphics cards.
- No resource management for the memory/compute of the graphics cards.
- The CPU encodes the VNC stream; currently there is no hardware-accelerated image encoding.
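A minimal usage sketch of GLX forking, assuming the GPU-attached 3D X server runs on display :0 (site configurations differ):

    # preload VirtualGL and redirect the GLX calls to the 3D X server
    vglrun -d :0 glxgears    # quick sanity check
    vglrun -d :0 paraview    # works the same for any OpenGL application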

Slide 12: Remote 3D Visualization, Simplify the Access
Manual startup of a VNC connection:
1. ssh to the HPC system
2. authenticate via SSH key pair
3. look for an existing VNC server (you may want to reconnect)
4. submit a job via Slurm
5. wait for the job to start, then start the VNC server on the node
6. establish an SSH tunnel
7. set up SSH agent forwarding
8. start the TurboVNC client and connect
This is far too time consuming (a shell sketch of these steps follows below).
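The steps above as a hedged shell sketch; node name, port, partition, and job limits are placeholders rather than the exact JURECA configuration:

    # (1-3) log in with your SSH key and check for a running VNC server
    ssh <USERID>@jurecavis.fz-juelich.de

    # (4-5) submit a job that starts a VNC server on a vis batch node
    sbatch --partition=vis --nodes=1 --time=04:00:00 \
           --wrap="vncserver -geometry 1920x1080 -fg"
    squeue -u <USERID>    # wait for the job to start, note the node name

    # (6-7) from the workstation: tunnel the VNC port (-A forwards the agent)
    ssh -A -L 5901:<VIS_NODE>:5901 <USERID>@jurecavis.fz-juelich.de

    # (8) connect the TurboVNC client through the tunnel
    vncviewer localhost:5901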

Slide 13: Remote 3D Visualization, Simplify the Access
Automatic VNC startup: ScienTific Remote Desktop Launcher (Strudel)
- Windows, OS X, Linux
- written in Python
- developed by Monash University, Melbourne, Australia
A one-click solution.

Slide 14: Invitation: Try JURECA Visualization Nodes
Jens Henrik Göbbert, Jülich Supercomputing Centre, Forschungszentrum Jülich GmbH, Germany
Cross-Sectional Team Visualization, j.goebbert@fz-juelich.de

Slide 15: Invitation, Try JURECA Visualization Nodes
VNC with VirtualGL (using Strudel):
- Download & install: SSH (PuTTY for Windows), TurboVNC, Strudel
- Configure: download the SSH keys, get the passphrase for one of the SSH keys (contact j.goebbert@fz-juelich.de), load the SSH key into your SSH agent
- Run & play: start Strudel and connect
Please follow the provided instructions.
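A small sketch of the key handling Strudel relies on; the key file name is an example, not the actual name of the distributed key:

    eval "$(ssh-agent -s)"      # start an SSH agent if none is running
    ssh-add ~/.ssh/id_jureca    # load the key; enter the passphrase you received
    ssh-add -l                  # verify the key is loaded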

Slide 16: Invitation, Try JURECA Visualization Nodes
VNC with VirtualGL (using Strudel). Keep in mind:
- An SSH key agent must be running on your system.
- The SSH key for JURECA must be loaded into the SSH key agent.
Please send us your feedback: j.goebbert@fz-juelich.de

Slide 17: Try JURECA Visualization Nodes, OSPRay in ParaView
OSPRay is a CPU ray tracing framework for scientific visualization:
- efficient rendering on CPUs
- ray tracing / high-fidelity rendering
- made for scientific visualization
- built on top of Embree (Intel ray tracing kernels) and the Intel SPMD Program Compiler (ISPC)
- integrated into ParaView, VMD, VisIt, VL3, EasternGraphics, ...
[Image: FIU ground water simulation, Texas Advanced Computing Center (TACC) and Florida International University]

Slide 18: Try JURECA Visualization Nodes, OSPRay with ParaView
Ray tracing within ParaView: built in by default since ParaView 5.2.
Why ray tracing?
- gives more realistic results
- adds depth to your image
- can be faster on large data
Requirement: any SSE4-capable CPU or newer (in particular, including Intel Xeon Phi Knights Landing).
[Images: the same scene with the ParaView standard renderer and with OSPRay; cooperation with Electrochemical Process Engineering (IEK-3), Forschungszentrum Jülich GmbH, Germany]
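Whether a given Linux machine meets the SSE4 requirement can be checked from the CPU flags, e.g.:

    grep -c sse4 /proc/cpuinfo    # a non-zero count means SSE4 is available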

Slide 19: Try JURECA Visualization Nodes, Parallel ParaView
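A hedged sketch of what a parallel ParaView session on the vis batch nodes could look like; partition name, task count, node name, and port are assumptions, and the ParaView versions of client and server must match:

    # on JURECA: start a parallel pvserver on the vis batch nodes
    srun --partition=vis --nodes=2 --ntasks-per-node=12 \
         pvserver --server-port=11111

    # on the workstation: tunnel the port, then use File -> Connect
    # in the ParaView client to reach localhost:11111
    ssh -L 11111:<VIS_NODE>:11111 <USERID>@jurecavis.fz-juelich.de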

Slide 20: In Situ Visualization
Jens Henrik Göbbert, Jülich Supercomputing Centre, Forschungszentrum Jülich GmbH, Germany
Cross-Sectional Team Visualization, j.goebbert@fz-juelich.de

Slide 21: Why do we need In Situ Visualization? Visualization the old way
[Diagram: a simulation at FZ Jülich writes its data to the Jülich Storage Cluster; the data is then moved elsewhere for analysis, e.g. to AIA, RWTH Aachen University; image credits: wikipedia.org, Chevron]
Why change a running system?

Slide 22: Why do we need In Situ Visualization? Bottlenecks for Visualization
Example: AIA, RWTH Aachen University, large eddy simulation analysis of chevron geometries: 123 TB (= 30 days to download at 50 MB/s).
Moving data between computing centers becomes a major bottleneck.

Slide 23: Why do we need In Situ Visualization? Bottlenecks for Visualization
Moving data to the storage devices will become a major bottleneck.

Slide 24: What is In Situ Visualization? Basics
Visualizing the results of a running simulation without the need to copy the data to a storage device.
- Data compression: reduce I/O traffic and volume; access to more data.
- In situ processing (instead of post-processing data dumps): enable real-time simulation exploration, reduce the latency to the first result, avoid large-scale post-processing.

Slide 25: What is In Situ Visualization? Basics
Visualizing the results of a running simulation without the need to copy the data to a storage device. But:
Simulation and visualization codes
- might have different internal data structures.
- might scale best with different parallelization strategies.
Simulation and visualization developers
- might have different priorities.
- might not work in the same team.
Coupling simulation and visualization codes is a challenge.

Slide 26: What is In Situ Visualization? Staged vs. On-Node
- Staged in situ visualization (also called in transit visualization)
- On-node in situ visualization
[Diagrams of both setups]

Slide 27: What is In Situ Visualization? Staged vs. On-Node
Staged in situ visualization:
- Advantages: no need to share hardware resources (CPU/memory) between simulation and visualization.
- Disadvantages: data needs to be copied between nodes or even clusters; no zero-copy possible; more difficult to implement.
On-node in situ visualization:
- Advantages: no need to copy data between nodes or clusters; zero-copy possible; simpler to implement.
- Disadvantages: hardware resources (CPU/memory) need to be shared between simulation and visualization.

Slide 28: What is In Situ Visualization? Loose vs. Tight
Loose coupling:
- Advantages: fewer dependencies at compile/link time; two separate applications.
- Disadvantages: data transfer between simulation and visualization is more difficult to implement; zero-copy is difficult.
Tight coupling:
- Advantages: simpler to implement; a single executable; zero-copy possible.
- Disadvantages: a bug in the visualization code might affect the simulation; on-node in situ visualization by design.

Slide 29: Lowering the barriers to in situ visualization, What are the major barriers?
The major barriers to in situ visualization are:
- first, the individual implementation, optimization, and coupling costs of integrating the needed functionality into each simulation code and setup can often not be justified;
- second, using in situ visualization requires much training for scientists whose research in general does not focus on visualization in the first place.

Slide 30: Direct Approach, coupling simulation code to in situ visualization
The simulation developer and the visualization developer couple the simulation code directly to VisIt/Libsim or ParaView/Catalyst.
[Diagram: simulation connected directly to VisIt/Libsim and ParaView/Catalyst]
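On the usage side, a hedged sketch of how such directly coupled runs are typically attached to; the .sim2 contact file location is VisIt's default, the file name depends on the simulation, and the Catalyst-enabled binary and its command line are hypothetical:

    # VisIt/Libsim: open the contact file the running simulation wrote
    visit -o ~/.visit/simulations/<timestamp>.<simname>.sim2

    # ParaView/Catalyst: the simulation takes a Python pipeline at launch
    # (command line is simulation-specific; binary name is hypothetical)
    srun -n 48 ./my_simulation --catalyst pipeline.py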

Slide 31: Adapter Approach, coupling simulation code to in situ visualization
An adapter layer sits between the simulation and VisIt/Libsim or ParaView/Catalyst, e.g. Damaris, SENSEI, Strawman, GLEAN, JUSITU, ...
[Diagram: simulation coupled through the adapter to VisIt/Libsim and ParaView/Catalyst]

Slide 32: Adapter Approach, coupling simulation code to in situ visualization
[Diagram: the adapter consists of a data adapter, a bridge, and an analyze adapter between the simulation and VisIt/Libsim or ParaView/Catalyst]

Slide 33: Adapter Approach, coupling simulation code to in situ visualization
[Diagram: data adapter, bridge, analyze adapter in detail]

Slide 34: In Situ Visualization, Example
Jens Henrik Göbbert, Jülich Supercomputing Centre, Forschungszentrum Jülich GmbH, Germany
Cross-Sectional Team Visualization, j.goebbert@fz-juelich.de

Slide 35: JUSITU, Applications lately coupled with VisIt/Libsim
psOpen, flow solver (DNS):
- highly resolved turbulence, pseudo-spectral approach
- Institute for Combustion Technology, RWTH Aachen University, Germany; Chair of Num. Thermo-Fluid Dyn., TU Freiberg, Germany
- Fortran90 + MPI + OpenMP; JUQUEEN BigWeek participant
CIAO, flow solver (LES, DNS):
- multiphysics, multiscale; structured finite difference method; level-set for surface tracking; level-set/volume-of-fluid interface; Lagrange particle solver; tabulated/finite rate chemistry; overset mesh refinement; moving meshes
- Institute for Combustion Technology, RWTH Aachen University, Germany
- Fortran90 + MPI; JUQUEEN BigWeek participant
ZFS, flow solver (LES, DNS):
- multiphysics, multiscale; finite volume method; lattice-Boltzmann method; discontinuous Galerkin method; level-set for surface tracking; Lagrangian particle solver
- Institute of Aerodynamics Aachen, RWTH Aachen University, Germany
- C + MPI + OpenMP + GPU

Slide 36: Why In Situ Visualization, An example: spray injection & droplet formation

Slide 37: Why In Situ Visualization, An example: spray injection & droplet formation
- The geometry of injection systems is increasingly complex.
- Emission and pollutant formation strongly depend on the injection system.
- Experimental investigation of the geometry impact is very challenging.
High-fidelity spray simulations are required!
[Figure: ratio of soot transmission intensity for a smaller vs. a larger k-factor]
Baumgarten, C., Mixture Formation in Internal Combustion Engines, Springer.
Aye, M.M. et al., Studying the influence of k-factor of different nozzles on spray and combustion under diesel engine-like conditions in a high pressure spray chamber (submitted).

Slide 38: Why In Situ Visualization, An example: spray injection & droplet formation
- Experimental investigation of injection systems is very challenging due to the small length scales (~100 μm) and high velocity scales (~600 m/s).
- Highly resolved simulations result in a large amount of data, which can be reduced with in situ visualization.
- Predictive simulations using in situ visualization and studying injection systems and the resulting cavitation give more insight.
Le Chenadec, V. and Pitsch, H., A 3D Unsplit Forward/Backward Volume-of-Fluid Approach and Coupling to the Level Set Method, Journal of Computational Physics 233:10-33, 2013.
Le Chenadec, V. and Pitsch, H., A monotonicity preserving conservative sharp interface flow solver for high density ratio two-phase flows, Journal of Computational Physics 249, 2013.

Slide 39: Why In Situ Visualization, An example: spray injection & droplet formation
Dumping data with high temporal and spatial resolution is needed.

Slide 40: Why In Situ Visualization, An example: spray injection & droplet formation
Application CIAO on JUQUEEN (specific setup):
- excellent scaling of the solver
- file output is the bottleneck at scale
- dumping data with high temporal resolution is not possible
=> In situ visualization helps!

Slide 41: Summary & Conclusion
Try the visualization nodes at JSC and send me your feedback.
In situ visualization:
- The size of simulation data is rising constantly, and I/O becomes a major bottleneck.
- A huge number of simulation codes and scientists will benefit from in situ visualization (in situ processing).
- But decisions have to be made on how to couple simulation and analysis tools in the future.

Slide 42: Questions?
[Image rendered with Blender from a DNS of a diesel injection spray, ITV, RWTH Aachen University]
