Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures 2018 Call for Proposal of Joint Research Projects


The Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures (JHPCN) calls for joint research projects for fiscal year 2018. JHPCN is a "Network-Type" Joint Usage/Research Center, certified by the Minister of Education, Culture, Sports, Science and Technology, and comprises eight supercomputer-equipped centers affiliated with Hokkaido University, Tohoku University, The University of Tokyo, Tokyo Institute of Technology, Nagoya University, Kyoto University, Osaka University, and Kyushu University, among which the Information Technology Center at The University of Tokyo functions as the core institution of JHPCN.

Each JHPCN member institution will provide its research resources for joint research projects. Researchers of an adopted joint research project can use the provided research resources within the granted amount free of charge. In addition, expenses for publication or presentation of research results, such as overseas travel expenses, may be supported. Every eligible project should use research resources provided by JHPCN member institutions, or its research team should include researchers affiliated with JHPCN member institutions. The available research resources are computers, storage systems, visualization systems, and so on. Some of them can be connected to facilities inside and outside of JHPCN member institutions via high-speed networks for the purposes of exchange, accumulation, and processing of data. It is also possible to connect research resources of JHPCN member institutions with SINET5 L2VPN. Part of the High Performance Computing Infrastructure (HPCI) system, referred to as the HPCI-JHPCN system, is also available for joint research projects. Applications for research projects that use the HPCI-JHPCN System are invited via the HPCI online application system and will be selected in line with our reviewing policy (see Section 6, "Research Project Reviews").

The aim of JHPCN is to contribute to the advancement and sustained development of the academic and research infrastructure of Japan by implementing joint research projects that require large-scale information infrastructures and that address Grand Challenge-type problems, thus far considered extremely difficult to solve, in the following fields: Earth environment, energy, materials, genomic information, web data, academic information, time-series data from sensor networks, video data, program analysis, application of large-capacity networks, and other fields in information technology. Since the JHPCN member institutions have enrolled leading researchers, acceleration of joint research projects is anticipated through collaboration with these researchers. These joint research projects (for the fiscal year 2018) will be implemented from April 2018 to March 2019.

1. Joint Research Areas

This call for joint research projects will adopt interdisciplinary research projects in the following four areas: very large-scale numerical computation, very large-scale data processing, very large-capacity network technology, and very large-scale information systems. Approximately 60 joint research projects will be adopted. (Figure: total number of JHPCN member institutions used by the adopted research projects.)

(1) Very large-scale numerical computation
This includes scientific and technological simulations in scientific/engineering fields such as Earth environment, energy, and materials, as well as modeling, numerical analysis algorithms, visualization techniques, information infrastructure, etc., to support these simulations.

(2) Very large-scale data processing
This includes processing of genomic information, web data (including from Wikipedia, as well as news sites and blogs), academic information contents, time-series data from sensor networks, high-level multimedia information processing needed for streaming data such as video footage, program analysis, access and search, information extraction, statistical and semantic analysis, data mining, machine learning, etc.

(3) Very large-capacity network technology
This includes control and assurance of network quality for very large-scale data sharing, monitoring and management required for the construction and operation of very large-capacity networks, assessment and maintenance of the safety of such networks, and large-scale stream-processing frameworks taking advantage of very large-capacity networks, as well as the development of various technologies for the support of research. Research dealing with large-scale data processing using the large-capacity network is classified in this area.

(4) Very large-scale information systems
This combines each of the above-mentioned areas, entailing exascale computer architectures, software for high-performance computing infrastructures, grid computing, virtualization technology, cloud computing, etc.

Please pay special attention to the following points when applying.

1. We will only be accepting interdisciplinary joint research proposals that involve cooperation among researchers from a wide range of disciplines. For example, we presume that a research team in the very large-scale numerical computation area will consist of researchers from computer and computational sciences who work together in a cooperative and complementary manner.

In this area, therefore, we invite joint research projects with researchers who will be solving problems in application fields using computers and those who will be conducting research in computer science, such as on algorithms, modeling, and parallel processing. It is not mandatory to use the HPCI-JHPCN system or other research resources provided by JHPCN member institutions; in other words, a joint research project that uses no provided research resources can be acceptable.

2. We particularly welcome the following categories of joint research projects.

Research projects in close cooperation with multiple JHPCN member institutions
Taking advantage of the JHPCN network, a project in this category should use research resources and/or involve researchers of multiple JHPCN member institutions. For example, relevant research topics may include, but are not limited to, large-scale and geographically distributed information systems and multi-platform implementations of applications using research resources provided by multiple JHPCN member institutions.

Research projects using both large-scale data and large-capacity networks
The available research resources include those that can be directly connected to a very wide-bandwidth network provided by SINET5, including L2VPN, in cooperation with the National Institute of Informatics. Therefore, research that depends on a very wide-bandwidth network can be conducted. A project in this category should require massive data transfer, using the very wide-bandwidth network, between research facilities located at involved researchers' workplaces and at JHPCN member institutions, or between those at JHPCN member institutions. Please refer to Attachment 2 for possible examples of research topics in this category.

2. Types of Joint Research Projects

Under the premise of the interdisciplinary joint research project structure mentioned above, the types of joint research projects to be invited are as follows.

(1) General Joint Research Projects (approximately 80% of the total number of accepted projects)

(2) International Joint Research Projects (approximately 10% of the total number of accepted projects)
International joint research projects are conducted in conjunction with foreign researchers to address challenging problems that may not be resolved or clarified with only the help of researchers within Japan. For such research projects, a certain amount of subsidies will be paid to cover travel expenses incurred for holding meetings with foreign joint researchers during the fiscal year of commencement.

For details, please contact our office once your research project has been accepted.

(3) Industrial Joint Research Projects (approximately 10% of the total number of accepted projects)
Industrial joint research projects are interdisciplinary projects focused on industrial applications.

A research proposal submitted as a (2) International Joint Research Project or (3) Industrial Joint Research Project might be adopted as a (1) General Joint Research Project. Further, within each of these three types, we invite applications both for research projects that use the HPCI-JHPCN System and for research projects that do not use the HPCI-JHPCN System.

If you require the assistance and cooperation of researchers and research groups affiliated with JHPCN member institutions in the computer science field, please complete Section 7 of the application form, describing the specific requirements in detail. We will arrange for as much assistance and cooperation as possible. In the application, please designate the university (JHPCN member institution) with which you seek collaboration; you may also name several institutions. If this is difficult to designate, it is possible for us to decide the corresponding research center(s) on your behalf after considering the research proposal. Please be sure to discuss this with our contact person (listed in Section 10) in advance.

3. Application Requirements

Applications must be made by a Project Representative. Furthermore, the Project Representative and the Deputy Representative(s), as well as any other joint researchers of a project, must fulfill the following conditions.
1. The Project Representative must be affiliated with an institution in Japan (university, national laboratory, private enterprise, etc.) and must be in a position to obtain the approval of his/her institution (or its representative head).
2. At least one Deputy Representative must be a researcher in a different academic field from that of the Project Representative. There may be two or more Deputy Representatives, and one of them can be in charge of HPCI face-to-face identity vetting.
3. A graduate student can participate in a project as a joint researcher, but an undergraduate cannot. A graduate student cannot serve as the Project Representative or a Deputy Representative. If a non-resident member, as defined by the Foreign Exchange and Foreign Trade Act, is going to use computers, a researcher affiliated with the JHPCN member institution equipped with those computers must participate as a joint researcher.

International joint research projects must, in addition to conditions 1-3 above, fulfill the following conditions (4 and 5).
4. At least one researcher affiliated with a research institution outside Japan must be named as a Deputy Representative. Furthermore, the application must be made using the English application form.
5. A researcher affiliated with the JHPCN member institution(s) that you designate as "Desired University for Joint Research" must participate as a joint researcher.

Industrial joint research projects must, in addition to conditions 1-3 above, fulfill the following conditions (6 and 7).
6. The Project Representative must be affiliated with a private company; universities and national laboratories are excluded.
7. At least one researcher affiliated with the JHPCN member institution(s) that you designate as "Desired University for Joint Research" must be named as a Deputy Project Representative.

4. Joint Research Period

April 1, 2018 to March 31, 2019. Depending on conditions for preparing computer accounts, the commencement of computer use may be delayed.

5. Facility Use Fees

The research resources listed in Attachment 1 can be used within the granted amount free of charge.

6. Research Project Reviews

Reviews will be conducted by the Joint Research Project Screening Committee, which comprises faculty members affiliated with JHPCN member institutions as well as external members, and the HPCI Project Screening Committee, which comprises industry, academic, and government experts. Research project proposals will be reviewed, in both a general and a technical sense, for their scientific and technological validity, their facility/equipment requirements and the validity of those requirements, and their potential for development. The feasibility of resource requirements at, and cooperation/collaboration with, the JHPCN member institutions that you designate as "Desired University for Joint Research" will also be subject to review.

In addition, how relevant a proposal is to its type of joint research project will be considered. Furthermore, for projects continuing from the previous fiscal year and projects determined to have substantial continuity, an assessment of the previous year's interim report and previous usage of computer resources may be considered during the screening process.

7. Notification of Adoption

We expect to notify applicants of the review results by mid-March 2018.

8. Application Process

1. Please note that the application procedures differ for "Research projects that use the HPCI-JHPCN System" and "Research projects that do not use the HPCI-JHPCN System" (the HPCI-JHPCN system is described in Attachment 1 (1)).
2. In particular, for "Research projects that use the HPCI-JHPCN System", the Project Representative (and the Deputy Representative who will submit the proposal or who will be in charge of HPCI face-to-face identity vetting on behalf of the Project Representative) and all joint researchers who will use the HPCI-JHPCN system must have obtained their HPCI-IDs prior to the application.
3. For international joint research projects, an English application form must be completed.

(1) Application Procedure
After completing the Research Project Proposal Application Form obtained from the JHPCN website and completing the online submission of the electronic application, please print the application form, affix seals, and mail it to the Information Technology Center, The University of Tokyo (address provided in Section 10). For details on the application process, please consult the JHPCN website and the HPCI Quick Start Guide. A summary of the application process is shown below.

I: For Research Projects that use the HPCI-JHPCN System
1. Download the Research Project Proposal Application Form (MS Word format) from the JHPCN website and complete it. In parallel, the Project Representative (and the Deputy Representative who will submit the proposal or who will be in charge of HPCI face-to-face identity vetting on behalf of the Project Representative) and all joint researchers who will use the HPCI-JHPCN System must obtain their HPCI-IDs unless they already have them.

For online entries to be made as per the requirements of step 2, it is necessary to register the HPCI-IDs of the Project Representative, the Deputy Representative acting on behalf of the Project Representative, and all joint researchers who will use the HPCI-JHPCN System. Joint researchers who are not registered at this stage will not be issued computer user accounts (HPCI account, local account) and thus will not be able to use the HPCI-JHPCN system.
2. Access "Research Projects that use the HPCI-JHPCN System" on the Project Application page of the above-mentioned JHPCN website. You will be transferred to the HPCI Application Assistance System, where, after filling out the necessary items, you should upload a PDF file of the Research Project Proposal Application Form (without seals) created in step 1. For "Research Projects that use the HPCI-JHPCN System", the project application system on the JHPCN website will not be used.
3. Print the Research Project Proposal Application Form created in step 1, affix seals, and mail it to the address provided in Section 10, Contact Information. If your project is adopted, please follow the "Procedures after the awarding of the proposal" stipulated by the HPCI. In particular, face-to-face identity vetting must be handled responsibly by the Project Representative or a Deputy Representative, who may have to bring copies of photographic identification of all joint researchers who will use the HPCI-JHPCN system. (Note that contact persons such as secretaries are ineligible for face-to-face identity vetting.)

II: For Research Projects that do not use the HPCI-JHPCN System
1. Download the Research Project Proposal Application Form (MS Word format) from the JHPCN website and complete it.
2. Access "Research Projects that do not use the HPCI-JHPCN System" on the project application page of the above-mentioned JHPCN website. After filling out the necessary items, you should upload a PDF file of the Research Project Proposal Application Form (without seals) created in step 1. An e-mail notification of receipt will be sent to the address that you registered on the Research Project Application page. Applications for "Research Projects that do not use the HPCI-JHPCN System" do not require the HPCI Application Assistance System; therefore, it is not necessary to obtain HPCI-IDs.

3. Print the Research Project Proposal Application Form created in step 1, affix seals, and mail it to the address provided in Section 10, Contact Information.

(2) Application Deadline
- Online registration (submission of the PDF application form): January 9, 2018 (Tue.), 17:00 [mandatory]
- Submission of the printed application form: January 15, 2018 (Mon.), 17:00 [mandatory]
The above deadlines must be met for submission of the Research Project Proposal Application Forms. However, if submission of the printed documents created in steps (1)-I-3 and (1)-II-3 above will be delayed, please contact the JHPCN office (details in Section 10) in advance.

(3) Points to remember when creating the Research Project Proposal Application Form
1. Applicants must comply with the rules and regulations of the JHPCN member institutions whose computer resources will be used in the joint research project.
2. Research resources must be used only for the purpose of the adopted research project.
3. The proposal must be for a project with peaceful purposes.
4. The proposal should consider the protection of human rights and interests.
5. The proposal must comply with the Ministry of Education, Culture, Sports, Science and Technology's "Approach to Bioethics and Safety".
6. The proposal must comply with the Ministry of Health, Labour and Welfare's guidelines on medical and health research.
7. The proposal must comply with the Ministry of Economy, Trade and Industry's rules concerning security trade management.
8. Applicants must complete a course in the Responsible Conduct of Research (RCR). Programs on RCR, including lectures, e-learning, and training workshops, are conducted at each of the affiliated organizations (including CITI Japan e-learning). If your affiliated organization does not provide any RCR programs, please contact our office. Researchers who are eligible to apply for the Japanese Grant-in-Aid for Scientific Research awarded by the Ministry of Education, Culture, Sports, Science and Technology and the Japan Society for the Promotion of Science will be considered eligible if they note their Kakenhi Researcher Code in the application form. In addition, researchers who were recently awarded research grants that required RCR training will also be considered eligible if they present evidence.
9. After being accepted, applicants must comply with the deadlines for submitting a research introduction poster and various reports.

9. Other Important Notices

(1) Submission of a written oath
Research groups whose research projects have been adopted will be expected to submit a written oath pledging adherence to the contents of "(3) Points to remember when creating the Research Project Proposal Application Form" in Section 8, Application Process. The specific submission process will be provided post-adoption, but a sample is provided on the website for applicants who wish to familiarize themselves with the requirements of the oath.

(2) Regulations for use of facilities
While using the facilities, you are expected to follow the regulations for use pertaining to research resources as stipulated by the JHPCN member institutions with which you will work.

(3) Types of reports
Research groups are expected to submit research introduction posters, interim reports, and final reports. Each report must be submitted by the designated deadline. The research introduction posters and the final reports, in particular, are expected to be suitable for disclosure (see the JHPCN website for details). For International Joint Research Projects, these reports should in principle be submitted in English. Furthermore, you will be asked to introduce the research through a poster presentation at the JHPCN symposium (where the results of research projects from the previous year will be presented orally), which is expected to be held during the summer.

(4) Disclaimer
No JHPCN member institution assumes any responsibility for any inconveniences that affect applicants as a consequence of joint research projects.

(5) Handling of intellectual property rights
In principle, any intellectual property that arises as a result of a research project will belong to each of the research groups involved. However, it is presumed that any recognition of inventors will be in accordance with each institution's policy concerning intellectual property rights. Please contact each JHPCN member institution for details and the handling of other exceptional matters.

(6) RCR training
If a participant in an accepted project (excluding students) cannot be confirmed to have completed a program pertaining to RCR or equivalent (for example, eligibility for the Japanese Grant-in-Aid for Scientific Research that is awarded by the Ministry of Education, Culture, Sports, Science and Technology or the Japan Society for the Promotion of Science), this individual's use of computers may be suspended until the completion of such a program can be confirmed.

(7) Acknowledgement
Upon publication of the results of an accepted project, the author(s) should indicate in the Acknowledgements that the project was supported by JHPCN (see the JHPCN website for an example sentence).

(8) Other
Personal information provided in this proposal shall only be used for screening research projects and providing system access. After the adoption of a research project, however, the project name and the name/affiliation of the Project Representative will be disclosed. After the adoption of the research project, changes cannot be made to the JHPCN member institutions to work with or the computers to be used. If you wish to discuss your application, please contact us at the address listed in Section 10 (please note in advance that we are not able to respond to telephone inquiries).

10. Contact Information (for inquiries about application, etc.)

For inquiries about application:
Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures Office
E-mail address: jhpcn.adm@gs.mail.u-tokyo.ac.jp

For available resources, how to use resources, details of eligibility, faculty members who can participate in joint research projects, and the handling of intellectual property at each institution, please feel free to contact the following directly.

JHPCN member institutions and e-mail addresses:
- Information Initiative Center, Hokkaido University: kyodo@oicte.hokudai.ac.jp
- Cyberscience Center, Tohoku University: joint_research@cc.tohoku.ac.jp
- Information Technology Center, The University of Tokyo: jhpcn.adm@gs.mail.u-tokyo.ac.jp
- Global Scientific Information and Computing Center, Tokyo Institute of Technology: jhpcn-kyoten@gsic.titech.ac.jp
- Information Technology Center, Nagoya University: kyodo@itc.nagoya-u.ac.jp
- Academic Center for Computing and Media Studies, Kyoto University: kyoten-8gm@media.kyoto-u.ac.jp

- Cybermedia Center, Osaka University
- Research Institute for Information Technology, Kyushu University

Mailing address for the Research Project Proposal Application Form:
Yayoi, Bunkyo-ku, Tokyo
Information Technology Center, The University of Tokyo
Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures Office
(You may use the abbreviations "Information Technology Center Office" or "the JHPCN Office".)

Attachment 1
List of research resources available at the JHPCN member institutions for the Joint Research Projects

Research resources that can be directly connected via SINET5 L2VPN, provided by the National Institute of Informatics, are annotated as "L2VPN Ready".

(1) List of the HPCI-JHPCN system resources available for Joint Research Projects

Information Initiative Center, Hokkaido University
(Estimated number of projects adopted: items 1+2: 10; item 3: max. 2; items 4+5: 8; item 6: 4)

1. Supercomputer HITACHI SR16000/M1 (max. 4 nodes, four months per project; period limited due to system renewal)
168 nodes, 5,376 physical cores, total main memory capacity 21 TB, TFLOPS (shared with general users; MPI parallel processing of up to a maximum of 128 nodes per job is possible)
1) Calculation time: 27,000,000 seconds; file: 2.5 TB
2) Calculation time: 2,700,000 seconds; file: 0.5 TB
(Assumed amount of resources: 166,666,667 s and 48 TB when combining 1) and 2))
Software resources:
- Compilers: Optimized FORTRAN90 (Hitachi), XL Fortran (IBM), XL C/C++ (IBM)
- Libraries: MPI-2 (without the dynamic process creation function), MATRIX/MPP, MATRIX/MPP/SSS, MSL2, NAG SMP Library, ESSL, Parallel ESSL, BLAS, LAPACK, ScaLAPACK, ARPACK, FFTW 2.1.5/3.2.2
- Application software: Gaussian

2. Cloud System Blade Symphony BS (period limited due to system renewal; max. 4 units, seven months per project)
Virtual server (S / M / L server): 2 nodes, 80 cores; equal to 8 units of L server (S and M servers may also be used)
Physical server (XL server): 1 unit (40 cores, memory 128 GB, disk 2 TB)
Usage: L2VPN Ready (XL server only; VPN service is available for S / M / L servers via CloudStack)

3. Data Science Cloud System HITACHI HA8000
1) Physical server: 2 units (20 cores, memory 80 GB, hard disk 2 TB)

Usage: L2VPN Ready

4. New Supercomputer A (2019/ - /03; due to system renewal) (max. 32 node-years per project)
About 900 nodes, 36,000 physical cores, total main memory capacity 337 TB, 2.5 PFLOPS (shared with general users; MPI parallel processing of up to a maximum of 128 nodes per job is possible)
1) Calculation time: 15,000,000 seconds; file: 3 TB
(Assumed amount of resources: 3,000,000,000 s and 600 TB for multiple sets of 1))
Software resources:
- Compilers: Fortran, C/C++
- Libraries: MPI, numerical libraries

5. New Supercomputer B (2019/ - /03; due to system renewal) (max. 16 node-years per project)
About 400 nodes, 24,000 physical cores, total main memory capacity 43 TB, 1.2 PFLOPS (shared with general users; MPI parallel processing of up to a maximum of 64 nodes per job is possible)
1) Calculation time: 15,000,000 seconds; file: 3 TB
(Assumed amount of resources: 1,500,000,000 s and 600 TB for multiple sets of 1))
Software resources:
- Compilers: Fortran, C/C++
- Libraries: MPI, numerical libraries

6. New Cloud System (2018/ - 2019/3; due to system renewal)
1) Physical server: 5 nodes (40+ cores, memory 256+ GB, disk 2+ TB)
2) Intercloud package: 1 set (physical servers, one each installed at Hokkaido University, The University of Tokyo, Osaka University, and Kyushu University, connected via SINET VPN)
3) Virtual server (L server): 8 nodes (10 cores, memory 60+ GB, disk 500+ GB)
Usage: L2VPN Ready

Cyberscience Center, Tohoku University
(Estimated number of projects adopted: 10)

1. Supercomputer SX-ACE (2,560 nodes) (48 node-years per project)
About 707 TFLOPS, main memory 160 TB, maximum number of nodes 1,024, shared use
Software resources:
- Compilers: FORTRAN90/SX, C++/SX, NEC Fortran 2003
- Libraries: MPI/SX, ASL, ASLSTAT, MathKeisan

2. Supercomputer LX406Re-2 (68 nodes) (12 node-years per project)
About 31.3 TFLOPS, main memory 8.5 TB, maximum number of nodes 24, shared use
Software resources:
- Compilers: Intel Compiler (Fortran, C/C++)
- Libraries: MPI, IPP, TBB, MKL, NumericFactory
- Application software: Gaussian16

3. Storage: 4 PB (20 TB per project; possible to add)

Information Technology Center, The University of Tokyo
(Estimated number of projects adopted: 10)

1. Reedbush-U (Intel Broadwell-EP cluster, high-speed file cache system (DDN IME))
Maximum tokens for each project: 16 node-years (138,240 node-hours), storage 16 TB
Options: node-occupied service, customized login nodes, L2VPN Ready (negotiable)
Software resources:
- Compilers: Intel Fortran, C, C++
- Libraries: MPI, BLAS, LAPACK/ScaLAPACK, FFTW, PETSc, METIS/ParMETIS, SuperLU/SuperLU_DIST
- Application software: OpenFOAM, ABINIT-MP, PHASE, FrontFlow/Blue, FrontFlow/Red, REVOCAP, ppOpen-HPC, BioPerl, BioRuby, BWA, GATK, SAMtools, K MapReduce, Spark

2. Reedbush-H (Intel Broadwell-EP + NVIDIA P100 (Pascal) cluster (2 GPUs per node), high-speed file cache system (DDN IME))
Maximum tokens for each project: 8 node-years (69,120 node-hours), storage 32 TB
Software resources:
- Compilers: Intel Fortran, C, C++; PGI Fortran, C, C++ (with accelerator support); CUDA Fortran; CUDA C
- Libraries: GPUDirect for RDMA: Open MPI, MVAPICH2-GDR, cuBLAS, cuSPARSE, cuFFT, MAGMA, OpenCV, ITK, Theano, Anaconda, ROOT, TensorFlow
- Application software: Torch, Caffe, Chainer, GEANT4

3. Reedbush-L (Intel Broadwell-EP + NVIDIA P100 (Pascal) cluster (4 GPUs per node), high-speed file cache system (DDN IME))
Maximum tokens for each project: 4 node-years (34,560 node-hours), storage 16 TB
Options: node-occupied service, customized login nodes, L2VPN Ready (negotiable)
Software resources: same as Reedbush-H

4. Oakforest-PACS (Intel Xeon Phi 7250 (Knights Landing))

Maximum tokens for each project: 64 node-years (552,960 node-hours), storage 16 TB
Software resources:
- Compilers: Intel Fortran, C, C++
- Libraries: MPI, BLAS, LAPACK/ScaLAPACK, FFTW, PETSc, METIS/ParMETIS, SuperLU/SuperLU_DIST
- Application software (planned): OpenFOAM, ABINIT-MP, PHASE, FrontFlow/Blue, FrontFlow/Red, REVOCAP, ppOpen-HPC

Global Scientific Information and Computing Center, Tokyo Institute of Technology

1. Cloudy, Big-Data and Green Supercomputer TSUBAME3.0
The TSUBAME3.0 system includes 540 compute nodes, which provide 12.15 PFLOPS of performance (CPU: 15,120 cores, 0.70 PFLOPS + GPU: 2,160 slots, 11.45 PFLOPS) in total. The maximum portion of the system available at a time is 50% of the full system. (Shared use.)
The total provided resource is 230 units (= 230,000 node-hours; 1 unit = 1,000 node-hours). 27 units (in node-years) are the maximum resources for each project. Please specify not only the total amount of resources but also quarterly amounts of resources. Maximum storage is 300 TB for each project. Ensuring 1 TB of storage for one year requires 120 node-hours of resource; resources should be requested accordingly.
Software resources:
- OS: SUSE Linux Enterprise Server 12 SP2
- Languages/compilers: Intel Compiler (C/C++/Fortran), PGI Compiler (C/C++/Fortran, OpenACC, CUDA Fortran), Allinea FORGE, GNU C, GNU Fortran, CUDA, Python, Java SDK, R
- Libraries: OpenMP, MPI (Intel MPI, OpenMPI, SGI MPT), BLAS, LAPACK, cuDNN, NCCL, PETSc, FFTW, PAPI
- Application software: Gaussian, GaussView, AMBER (only for academic users), CST MW-STUDIO (only for Industrial Joint Research Projects), Caffe, Chainer, TensorFlow, Apache Hadoop, ParaView, POV-Ray, VisIt, GAMESS, CP2K, GROMACS, LAMMPS, NAMD, Tinker, OpenFOAM, ABINIT-MP

Information Technology Center, Nagoya University
(Estimated number of projects adopted for (1)+(2)+(3)+(4): 10)

(1) Fujitsu PRIMEHPC FX100 (2,880 nodes, 92,160 cores, 92 TiB memory)
Software resources:
- OS: FX100 dedicated OS (Red Hat Enterprise Linux for login nodes)
- Compilers: Fortran, C, C++, XPFortran
- Libraries: MPI, BLAS, LAPACK, ScaLAPACK, SSL II, C-SSL II, SSL II/MPI, NUMPAC, FFTW, HDF5
- Application software: Gaussian, LS-DYNA, AVS/Express, AVS/Express PCE (parallel version), EnSight Gold, IDL, ENVI, ParaView, Fujitsu Technical Computing Suite (HPC Portal)

(2) Fujitsu PRIMERGY CX400/2550 (384 nodes, 10,752 cores, 48 TiB memory)
Software resources:
- OS: Red Hat Enterprise Linux
- Programming languages: Fujitsu Fortran/C/C++/XPFortran, Intel Fortran/C/C++, Python (2.7, 3.3)
- Libraries: MPI (Fujitsu, Intel), BLAS, LAPACK, ScaLAPACK, SSL II (Fujitsu), C-SSL II, SSL II/MPI, MKL (Intel), NUMPAC, FFTW, HDF5, IPP (Intel)

- Application software: LS-DYNA, ABAQUS, ADF, AMBER, GAMESS, Gaussian, GROMACS, LAMMPS, NAMD, HyperWorks, OpenFOAM, STAR-CCM+, Poynting, AVS/Express Dev/PCE, EnSight Gold, IDL, ENVI, ParaView, Fujitsu Technical Computing Suite (HPC Portal)

(3) Fujitsu PRIMERGY CX400 with Xeon Phi (184 nodes, 4, ,488 cores, 23 TiB memory)
Software resources: same as (2) Fujitsu PRIMERGY CX400/2550

(4) SGI UV2000 (1,280 cores, 20 TiB shared memory, 10 TB memory-based file system)
3D visualization system: high-resolution 8K display system (185-inch, 16-panel tiled display), full-HD circularly polarized binocular vision system (150-inch screen, projector, circularly polarized eyeglasses, etc.), head-mounted display system (MixedReality, VICON, etc.), dome-type display system
Software resources:
- OS: SUSE Linux Enterprise Server
- Programming languages: Intel Fortran, C, C++, Python (2.6, 3.5)
- Libraries: MPT (SGI), MKL (Intel), PBS Professional, FFTW, HDF5, NetCDF, scikit-learn, TensorFlow
- Application software: AVS/Express Dev/PCE, EnSight DR, IDL, ENVI (SARscape), ParaView, POV-Ray, NICE DCV (SGI), ffmpeg, ffplay, osgviewer, VMD (Visual Molecular Dynamics)

Maximum resource allocation to one project: 15 units or 86 node-years. 1 unit is 50 thousand node-hours (equivalent to a 180-thousand-yen charge). You can use large-scale storage by converting 5,000 node-hours to 10 TB. All resources are shared with general users. Use of the 3D visualization system costs 100 thousand yen per research project (equivalent to an additional contribution).

Academic Center for Computing and Media Studies, Kyoto University
(Estimated number of projects adopted for 1+2: 5)

Cray XC40 (Camphor 2: Xeon Phi KNL per node)
1. 8,704 cores, 390.4 TFLOPS, available throughout the year (each project is assigned up to 48 nodes throughout the year)
2. 8,704 cores, 390.4 TFLOPS, available for 20 weeks (each project is assigned up to 128 nodes for 4 weeks; the available amount of computing resources is adjusted among the projects)
Software resources:
- Compilers: Fortran2003/C99/C++03 (Cray, Intel)
- Libraries: MPI, BLAS, LAPACK, ScaLAPACK
- Application software: Gaussian09
Usage: L2VPN Ready (negotiable)

Cybermedia Center, Osaka University
(Estimated number of projects adopted for (i)+(ii)+(iii): 10)

(i) Supercomputer SX-ACE

- Resource per project: up to 22 node-years
- Computational nodes: 512 nodes (141 TFLOPS) will be provided, up to 280,000 node-hours in shared use
- Storage per project: 20 TB
Software resources:
- Compilers: C++/SX, FORTRAN90/SX, NEC Fortran 2003 compiler
- Libraries: ASL, ASLQUAD, ASLSTAT, MathKeisan (including BLAS, BLACS, LAPACK, ScaLAPACK), NetCDF

(ii) PC cluster for large-scale visualization (possible to link with the large-scale visualization system)
- Resource per project: up to 6 node-years
- Computational nodes: 69 nodes (CPU: up to 31.1 TFLOPS; GPU: up to TFLOPS) will be provided, up to 53,000 node-hours in shared use. An arbitrary number of GPU resources can be attached to each node based on user requirements. Simultaneous use of this system and the large-scale visualization system is possible based on user requirements.
- Storage per project: 20 TB
Software resources:
- Compilers: Intel Compiler (C/C++, Fortran)
- Libraries: Intel MKL, Intel MPI (ver. 4.1)
- Application software: OpenFOAM, GROMACS (for CPU/for GPU), LAMMPS (for CPU/for GPU), Gaussian09, AVS/Express PCE, AVS/Express MPE (ver. 8.2)

(iii) OCTOPUS
- Resource per project:
  General-purpose CPU nodes: up to 35 node-years
  GPU nodes: up to 5 node-years
  Xeon Phi nodes: up to 6 node-years
  Large-scale shared-memory nodes: up to 0.3 node-years
- Computational nodes:
  General-purpose CPU nodes: 236 nodes (CPU: TFLOPS) will be provided, up to 310,000 node-hours in shared use. Because the processors on these nodes are the same as those of the GPU nodes, users can use 273 nodes for a job.
  GPU nodes: 37 nodes (CPU: 73.9 TFLOPS; GPU: TFLOPS) will be provided, up to 49,000 node-hours in shared use.
  Xeon Phi nodes: 44 nodes (CPU: TFLOPS) will be provided, up to 58,000 node-hours in shared use.
  Large-scale shared-memory nodes: 2 nodes (CPU: 16.4 TFLOPS, memory: 12 TB) will be provided, up to 2,600 node-hours in shared use.
- Storage per project: 20 TB
Software resources:
- Compilers: Intel Compiler (FORTRAN, C, C++), GNU Compiler (FORTRAN, C, C++), PGI Compiler (FORTRAN, C, C++), Python 2.7/3.5, R 3.3, Julia, Octave, CUDA, XcalableMP, Gnuplot
- Libraries: Intel MPI, OpenMPI, MVAPICH2, BLAS, LAPACK, FFTW, GNU Scientific Library, NetCDF 4.4.1, Parallel netCDF 1.8.1, HDF5
- Application software:

NICE Desktop Cloud Visualization, Gaussian16, GROMACS, OpenFOAM, LAMMPS, Caffe, Theano, Chainer, TensorFlow, Torch, GAMESS, ABINIT-MP, RELION

Research Institute for Information Technology, Kyushu University
(Estimated number of projects adopted: 1-1): 2; 1-2): 8; 2: 3; 3: 5)

1. ITO Subsystem A (Fujitsu PRIMERGY)
Hardware resources:
1) (Nearly dedicated use) The maximum resource allocated to one project is 32 nodes for a year; most of the resources are dedicated to the project. 32 nodes (1,152 cores), TFLOPS
2) (Shared use) Up to 64 nodes can be used at the same time per project; shared with general users. 64 nodes (2,304 cores), TFLOPS
Software resources:
- Compilers: Intel Cluster Studio XE (Fortran, C, C++), Fujitsu Compiler

2. ITO Subsystem B (Fujitsu PRIMERGY)
Hardware resources:
(Nearly dedicated use) The maximum resource allocation per project is 16 nodes for a year; most of the resources are dedicated to the project. 16 nodes (576 cores), CPU 42.39 TFLOPS + GPU 339.2 TFLOPS, including SSD
Software resources:
- Compilers: Intel Cluster Studio XE (Fortran, C, C++), Fujitsu Compiler, CUDA

3. ITO Frontend (virtual server / physical server)
Hardware resources:
- Standard Frontend: 1 node (36 cores), CPU 2.64 TFLOPS, memory 384 GiB, GPU (NVIDIA Quadro P4000)
- Large Frontend: 1 node (352 cores), CPU 12.39 TFLOPS, shared memory 12 GiB, GPU (NVIDIA Quadro M4000)
Software resources:
- Compilers: Intel Cluster Studio XE (Fortran, C, C++), CUDA

Storage per project: 10 TB (possible to add)
Regarding items 1 and 2, a server for pre/post-processing or visualization is available (Standard Frontend).

(2) Other facilities/resources available for Joint Research Projects

The following facilities/resources, despite not being part of the HPCI-JHPCN system, are available for Joint Research Projects.

Information Initiative Center, Hokkaido University
(1) Large-format printer

Cyberscience Center, Tohoku University
(1) Large-format printer
(2) 3D visualization system (12-screen large stereoscopic display system (full-HD 50-inch projection modules, 12 screens), visualization servers x 4, 3D visualization glasses)
Software resources: AVS/Express MPE (3D visualization software)
Usage: L2VPN Ready, on-demand L2VPN

Information Technology Center, The University of Tokyo
(1) FENNEL (real-time data analysis nodes) x 4
At maximum, four VMs or one bare-metal server are provided to each group; the provided VMs or bare-metal server are dedicated to the group.
(2) GPGPU (NVIDIA Tesla M60), available on request; 10 Gbps network access
(3) Network Attached Storage (NAS), 150 TB; 10 Gbps network access
Software resources:
- Analysis node OS: Ubuntu 16.04, Ubuntu 17.04, CentOS 7.4
- Programming languages: Python, Java, R
- Application software: Apache Hadoop, Apache Spark, Apache Hive, Facebook Presto, Elasticsearch, Chainer, TensorFlow
Usage: L2VPN Ready
(1) Analysis nodes: you can directly log in to the nodes with SSH via a remote network; you can also mount the NAS as a native file system.
(2) NAS: you can mount it with the iSCSI, NFS, and/or CIFS protocols via SINET L2VPN.
Other options: housing service, 20U rack space (arm-type racking)
(Estimated number of projects adopted: or 4)

- Network connectivity: network bandwidth 10 Gbps (consultation is necessary); power: AC 100 V / 15 A

Global Scientific Information and Computing Center, Tokyo Institute of Technology
(1) Remote GUI environment: the VDI (Virtual Desktop Infrastructure) system. If you are planning to use the VDI system, please contact us in advance.

Information Technology Center, Nagoya University
Hardware resources:
(1) Visualization system: remote visualization system, on-site use (for data translation)
(2) Network connection up to 40GBASE. Note: Nagoya University only provides SFP+/QSFP ports, so you have to prepare optical modules, including the module for the network switch of Nagoya University.
Software resources:
(1) Fujitsu PRIMERGY CX400/2550: Python + scikit-learn + TensorFlow, and other software executable on Linux
(2) Visualization system: AVS/Express Dev/PCE, EnSight HPC, 3DAVSPlayer, IDL, ENVI (SARscape), ParaView, osgviewer, EasyVR, POV-Ray, 3dsMAX, 3-matic, VMD, LS-PrePost
Usage: L2VPN Ready. You can use an L2VPN network connection when you connect your own system to the network via hardware resource (2). We configure a SINET L2VPN connection to external universities and an internal university VLAN (with a connection between them).

Academic Center for Computing and Media Studies, Kyoto University
(1) Virtual server hosting
Standard spec: CPU 2 cores, memory 8 GB, disk capacity 1 TB
Software resources:
- Hypervisor: VMware
- OS: CentOS 7
- Applications: basic software prepared at the center, with support for introducing various open-source software
Usage: L2VPN Ready; dedicated VLAN (on-campus) or SINET L2VPN (off-campus) connection

Cybermedia Center, Osaka University
Flat stereo visualization system
- Installation location: Umekita Office of the Cybermedia Center, Osaka University
- 50-inch FHD (1920 x 1080) stereo projection modules: 24 displays
- Image-processing PC cluster: 7 computing nodes
  * Visualization by users of the PC cluster for the large-scale visualization screen is prioritized.
Cylindrical stereo visualization system
- Installation location: main building of the Cybermedia Center, Suita Campus, Osaka University
- 46-inch WXGA (1366 x 768) LCDs: 15 displays

- Image-processing PC cluster: 6 computing nodes
  * Visualization by users of the PC cluster for the large-scale visualization is prioritized.
Software resources (both systems): AVS/Express MPE, IDL
Usage: L2VPN Ready

Research Institute for Information Technology, Kyushu University
(1) Visualization environment: visualization server (SGI Asterism), 4K projector, and 4K monitor
Usage: L2VPN Ready

Attachment 2

The following are possible examples of "Research projects using both large-scale data and large-capacity networks" described in point 2 of "1. Joint Research Areas" in the call for proposals. The purpose of this attachment is to present, by example, how research resources provided by JHPCN member institutions may be used. We welcome proposals of joint research projects using both large-scale data and large-capacity networks, not limited to these examples.

Information Initiative Center, Hokkaido University

Distributed systems (virtual private cloud systems) can be deployed using our cloud system / data science cloud system federated with computational resources of other universities or the applicant's laboratory, connected via SINET L2VPN or with software VPN solutions.

Available resources
Supercomputer system, cloud system, data science cloud system (cf. Attachment 1)

How to use
Dedicated systems can be developed for collaborative research projects employing physical and virtual servers as virtual private clouds. Distributed systems can also be developed by connecting our cloud resources with servers and storage in the users' laboratories as hybrid cloud systems. Users can access the systems not only via ssh/scp but also with the virtual console provided by the cloud middleware, through web browsers.

E-mail address for inquiries about resource usage and joint research
kyodo@oicte.hokudai.ac.jp

Details of anticipated projects
- Experiment data analysis platform in the intercloud environment: constructing a data storage, analysis, and sharing infrastructure employing virtual/physical machines and storage of the cloud system / data science cloud system of Hokkaido University, connected to computational resources of other universities via SINET L2VPN.
- Development of a large-scale pre/post-processing environment federating supercomputers and intercloud systems: developing a large-scale distributed processing environment, for example performing analysis of big data generated by supercomputers using Hadoop clusters and visualizing it on remote systems at other universities.
- An always-on platform to support network-oriented research projects: development of a nationwide distributed high-speed networking platform employing the cloud system / data science cloud system of Hokkaido University and private clouds of other universities connected via SINET5 L2VPN.

Cyberscience Center, Tohoku University

Cyberscience Center provides vector-parallel and scalar-parallel supercomputers, a distributed data-sharing environment through on-demand L2VPN, and a large-scale remote visualization environment shared with other centers. These environments allow users to share and analyze vast amounts of observed data obtained from sensors or IoT devices. We strongly invite proposals that try to exploit the potential of these environments: for example, joint research on real-time analytics using supercomputers, or on storage/network architectures for large-scale distributed data sharing.

Available resources
- Storage (500 TB per project)
- 3D visualization system
- Supercomputers (SX-ACE, LX406Re-2)
- On-demand L2VPN

Software resources
- OS: Red Hat Enterprise Linux 6
- Programming languages: LX406Re-2: Fortran, C, C++, Ruby, Python, Java, etc.; SX-ACE: Fortran, C, C++
- Application software: basic applications provided by Cyberscience Center and original codes developed by users. We also support installing/migrating required software to our systems (please contact us in advance).

How to use
- Supercomputers (LX406Re-2, SX-ACE): log in to the compute nodes using ssh; transfer files to the nodes using scp/sftp
- Network: possible to build an L2VPN on SINET5
- Storage: possible to use remote mounting by NFS through the L2VPN

E-mail address for inquiries about resource usage and joint research
joint_research@cc.tohoku.ac.jp

Details of anticipated projects (available in Japanese)

Information Technology Center, The University of Tokyo

Available resources and connectivity to SINET5

(1) Reedbush system
We offer the Reedbush-U, H, and L systems for big-data analyses using a burst buffer (DDN IME), which incorporates high-speed, high-capacity SSD storage. In addition, we offer a direct connection via SINET5 L2VPN to the login nodes of Reedbush-U and L (please ask us for the detailed configuration).

Available resources
Reedbush-U, H, L systems

Software resources
Refer to the description of the Reedbush-U, H, and L systems in Attachment 1.

How to use
- You can directly log in to the nodes with SSH via a remote network.
- You can transfer data via a remote network with SCP/SFTP.
- You can directly access the login nodes via SINET5 L2VPN (negotiable for Reedbush-U and L).

(2) FENNEL and Network Attached Storage
We, the Information Technology Center at The University of Tokyo, offer a big-data analysis infrastructure in which a researcher can take sole usage of its resources. The key design concept of the infrastructure is to facilitate real-time analysis and the accumulation of intermittently generated data. Further, we also offer an environment which can connect both your own resources and our resources with the SINET L2VPN service.

The anticipated uses of these resources are IoT big-data analysis, SNS message analysis, anomaly detection of network traffic, etc.

Overview of FENNEL

Available resources
- FENNEL (real-time data analysis nodes) x 4 nodes
  At maximum, four VMs or one bare-metal server are provided to each group; the provided VMs or bare-metal server are dedicated to the group.
  GPGPU (NVIDIA Tesla M60) is available on request.
- Network Attached Storage (150 TB)
- 10 GbE network access
- Rack space (with our housing service: you can bring your own devices into our data center and connect them to the network)

Software resources
- OS: Ubuntu 16.04, Ubuntu 17.04, CentOS 7.4
- Programming languages: Python, Java, R
- Application software: Apache Hadoop, Apache Spark, Apache Hive, Apache Impala, Presto, Elasticsearch, Chainer, TensorFlow
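To make the Python/Spark stack above concrete, here is a minimal sketch of the kind of aggregation job these analysis nodes are described as supporting (see also "How to use" below). It assumes only that PySpark is importable on an analysis node; the NAS path and the JSON field names (sensor, value) are hypothetical placeholders, not part of the FENNEL service.

```python
# Minimal PySpark sketch: summarize sensor records stored on the NAS.
# The path and record schema are hypothetical; this is not an official sample.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("fennel-sensor-summary").getOrCreate()

# Read newline-delimited JSON records, e.g. {"sensor": "s01", "value": 3.2}
records = spark.read.json("/mnt/nas/sensor-logs/*.json")

# Count records and average the measured value per sensor.
summary = (records.groupBy("sensor")
                  .agg(F.count("*").alias("n_records"),
                       F.avg("value").alias("mean_value")))

summary.show()
spark.stop()
```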

How to use
- FENNEL (dedicated, real-time data analysis nodes): you can directly log in to the nodes with SSH via a remote network; you can transfer data via a remote network with SCP/SFTP; you can use the GPGPU on a VM if you want.
- Network Attached Storage (NAS): you can mount it with the iSCSI, NFS, and/or CIFS protocols via SINET5 L2VPN; you can use it as a data proxy to store data that you obtain from your own sensor devices; the volumes provided by the storage can be mounted on VMs as a native file system.

E-mail address for inquiries about resource usage and joint research
jhpcn.adm@gs.mail.u-tokyo.ac.jp

Details of anticipated projects

GSIC, Tokyo Institute of Technology

Machine learning jobs, especially in deep learning, which has recently attracted great attention, require both storage resources for storing large-scale I/O data and high-performance computation resources. For these jobs, we provide an environment for large-scale, high-performance machine learning using many GPUs (more than 2,000 in the whole system) and large storage (up to 300 TB per user group) equipped on the TSUBAME3.0 supercomputer. By using the pre-installed frameworks that harness GPUs, acceleration of research projects on large-scale machine learning is expected.
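As a hedged illustration of such a GPU job, the minimal sketch below uses the TensorFlow 1.x API (the generation current at the time of this call) to place a matrix multiplication on a GPU and log device placement. It is not an official TSUBAME3.0 sample and assumes no site-specific configuration.

```python
# Minimal TensorFlow 1.x sketch: run a matrix multiplication on a GPU and
# report where each op was placed. Assumes TF 1.x; not a TSUBAME3.0 sample.
import tensorflow as tf

with tf.device("/gpu:0"):  # request the first GPU of the node
    a = tf.random_normal([4096, 4096])
    b = tf.random_normal([4096, 4096])
    c = tf.matmul(a, b)

# log_device_placement prints the device each op actually ran on.
config = tf.ConfigProto(log_device_placement=True)
with tf.Session(config=config) as sess:
    result = sess.run(c)
    print("result shape:", result.shape)
```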

Available resources
Refer to the description of TSUBAME3.0 in Attachment 1. In particular, 4 Tesla P100 GPUs per node are available.

Software resources
Refer to the description of TSUBAME3.0 in Attachment 1. The following items are especially relevant here:
- OS: SUSE Linux Enterprise Server 12 SP2
- Programming languages: Python, Java SDK, R
- Application software: Caffe, Chainer, TensorFlow

How to use TSUBAME3.0
Same as regular usage.

E-mail address for inquiries about resource usage and joint research
jhpcn-kyoten@gsic.titech.ac.jp

Details of anticipated projects

Information Technology Center, Nagoya University

For the very large-scale data processing research field, we provide the SGI UV2000 subsystem, a large-scale shared-memory system equipped with various storage systems. The UV2000 subsystem provides a 10 TB memory-based file system that exploits its large shared memory, an SSD file system that is superior in random-access performance, and a 2.9 PB HDD array connected as DAS. You have to include your planned quota in the proposal. The UV2000 subsystem provides the scikit-learn and TensorFlow libraries for machine learning and deep learning. Additionally, the Fujitsu PRIMERGY CX400 subsystem provides many CPU cores, including Intel Xeon Phi (Knights Corner), so it is suitable for data processing that requires much CPU power. As research candidates for these subsystems, we envision very large-scale data processing and real-time large-scale information collection and summarization from the Internet.

For the very large-capacity network technology research field, we provide network connections up to 40GBASE. You can connect your own system to the given network port (with your own optical modules) to create a very large-bandwidth network experiment environment by building an L2 flat network via SINET L2VPN between external universities and the internal university VLAN.

As research candidates for this resource, we envision large-scale parallel data processing utilizing remote computer systems, and speculative data caching via the large-bandwidth network.

For the very large-scale information system research field, we provide a visualization subsystem and a visualization software suite that can be used from remote environments. The subsystem is connected to the SGI UV2000 subsystem, and the visualization software suite is executed on the UV2000 subsystem. As research candidates for this subsystem, we envision remote visualization processing, VR teleconferencing systems that accelerate brainstorming, and high-performance communication protocols for real-time remote visualization.

(Figure: system diagram showing the UV2000-based subsystem for data-supply-speed-oriented very large-scale data processing (with SSD file system, 512 TB), the CX400-based subsystem for CPU-core-oriented processing (with Xeon Phi), HDD array storage (InfiniteStorage, 2.9 PB), the visualization subsystem (high-resolution 7680 x 4320 display, 4 m width x 2.3 m height, 16 tiles; handy and stationary HMD systems; dome-type display system), and border/core switches connected to SINET5 with 10G-40G links for 40GBASE (max.) and SINET L2VPN experiments.)

Available resources
1. SGI UV2000 subsystem (1,280 cores, 24 TF, 20 TiB shared memory) + memory-based file system + SSD file system + HDD array (2.9 PB)
2. Fujitsu PRIMERGY CX400 subsystem
3. Visualization subsystem: HMD systems (Canon HM-A1, nVisor ST50, Sony HMZ-T2), dome-type display system, 8K display system
4. Network connection up to 40GBASE (with internal university VLAN and SINET L2VPN configuration)

Software resources
1. SGI UV2000 subsystem
- OS: SUSE Linux Enterprise Server
- Compilers: Fortran/C/C++, Python (2.6, 3.5)
- Libraries: MPT (SGI), MKL (Intel), PBS Professional, FFTW, HDF5, NetCDF, scikit-learn, TensorFlow
- Application software: AVS/Express Dev/PCE, EnSight HPC, FieldView Parallel, IDL, ENVI (SARscape), ParaView, 3DAVSPlayer, POV-Ray, NICE DCV (SGI), ffmpeg, ffplay, LS-PrePost, osgviewer, VMD (Visual Molecular Dynamics)
2. Fujitsu PRIMERGY CX400 subsystem
- OS: Red Hat Enterprise Linux
- Compilers: Fujitsu Fortran/C/C++/XPFortran, Intel Fortran/C/C++, Python (2.7, 3.3)
- Libraries: MPI (Fujitsu, Intel), BLAS, LAPACK, ScaLAPACK, SSL II (Fujitsu), C-SSL II, SSL II/MPI, MKL (Intel), NUMPAC, FFTW, HDF5, IPP (Intel)
- Application software: LS-DYNA, ABAQUS, ADF, AMBER, GAMESS, Gaussian, GROMACS, LAMMPS, NAMD, HyperWorks, OpenFOAM, STAR-CCM+, Poynting, Fujitsu Technical Computing Suite (HPC Portal)
3. Visualization subsystem
- Visualization software suite: AVS/Express Dev/PCE, EnSight HPC, 3DAVSPlayer, IDL, ENVI (SARscape), ParaView, osgviewer, EasyVR, POV-Ray, 3dsMAX, 3-matic, VMD, ffmpeg, ffplay, LS-PrePost

How to use
- Remote login with ssh through the login node. You can use remote visualization via VisServer on the UV2000 subsystem.
- File transfer with scp/sftp through the login node.
- Job submission and job management through the Fujitsu Technical Computing Suite (HPC Portal).

E-mail address for inquiries about resource usage and joint research
kyodo@itc.nagoya-u.ac.jp
- Very large-scale numerical computation field: Takahiro Katagiri (High Performance Computing Division)
- Very large-scale data processing field: Takahiro Katagiri (High Performance Computing Division)
- Very large-bandwidth network technology field: Hajime Shimada (Advanced Networking Division)

- Very large-scale information system field: Masao Ogino (High Performance Computing Division)

Details of anticipated projects

Academic Center for Computing and Media Studies, Kyoto University

We will provide infrastructure for collecting large-scale data from laboratory equipment and observation equipment possessed by researchers, via a large-capacity network or the Internet such as the Kyoto University internal LAN (KUINS) or SINET5 L2VPN, 24 hours a day, 365 days a year; analyzing it with a supercomputer system in real time or periodically; and then offering information on the results on the Web.

Available resources
- Supercomputer system: Cray XC40 (Camphor 2: Xeon Phi KNL per node); each project is assigned up to 48 nodes throughout the year. DDN ExaScaler (SFA14K): each project is assigned up to 192 TB.
- Mainframe system, virtual server hosting: virtualized environment (VMware); standard spec: CPU 2 cores, memory 8 GB, disk capacity 1 TB

Software resources
- Supercomputer system: compilers Fortran2003/C99/C++03 (Cray, Intel); libraries MPI, BLAS, LAPACK, ScaLAPACK
- Mainframe system VM hosting: standard OS CentOS 7

How to use
- Supercomputer system: login with SSH (key authentication)
- Mainframe system VM hosting: login with SSH (root authority granted); access via various service ports such as HTTP (80/TCP) or HTTPS (443/TCP); multiple virtual domains are available
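The collect-analyze-publish cycle described above can be pictured with the following standard-library-only Python sketch: one thread polls an instrument endpoint periodically and maintains a running mean, while a small HTTP server publishes the current summary. The instrument URL, the JSON field name, and the port are hypothetical placeholders, not part of the Kyoto University service.

```python
# Minimal sketch of a collect-analyze-publish loop on a hosted VM.
# The instrument URL, field name, and port are hypothetical; stdlib only.
import json
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

INSTRUMENT_URL = "http://instrument.example.lab/measurement"  # hypothetical
summary = {"count": 0, "mean": 0.0}

def collect_forever(interval_s=60):
    """Poll the instrument periodically and update a running mean."""
    while True:
        try:
            with urllib.request.urlopen(INSTRUMENT_URL, timeout=10) as resp:
                value = json.load(resp)["value"]  # hypothetical field
            summary["count"] += 1
            # Incremental mean update avoids storing the full history.
            summary["mean"] += (value - summary["mean"]) / summary["count"]
        except Exception:
            pass  # a real service would log the failure here
        time.sleep(interval_s)

class SummaryHandler(BaseHTTPRequestHandler):
    """Serve the current summary as JSON, e.g. over HTTPS behind the VM."""
    def do_GET(self):
        body = json.dumps(summary).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

threading.Thread(target=collect_forever, daemon=True).start()
HTTPServer(("", 8080), SummaryHandler).serve_forever()
```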

The Kyoto University internal LAN (KUINS) VLAN and SINET5 L2VPN can be housed directly in a VM.

E-mail address for inquiries about resource usage and joint research

Cybermedia Center, Osaka University

Our center provides an L2VPN connection service between other sites and the Osaka University testbed network through SINET. It aims to construct an environment for sharing visualization contents using our systems, devices, and storage. Please contact us for details of our service, usage of our resources, or research collaboration.

E-mail address for inquiries about resource usage and joint research

Details of anticipated projects

Research Institute for Information Technology, Kyushu University

We provide a remote visualization and data analytics infrastructure that researchers can use from

33 remote sites. The provided system allows us to process generated large-scale data without moving, thus efficient processing is possible. Besides, if available, L2VPN enables the combined usage of resources between end users and bases. The provided resources are assumed to be used for research subjects that visualize and analyze large-scale parallel simulation and/or observation data. Available user scenarios are batch mode (use the back-end nodes), interactive mode (use the front-end nodes), and in-situ mode (use both the front-end and the back-end nodes simultaneously). If the data you generated does not correspond to the data format of the provided software or the supplied system does not have the analysis function you want, consultation is available. Available resources Hardware resources Subsystem A, Subsystem B, Standard Frontend (c.f. Attachment 1.) Software resources OS Linux Programming languages Python, R Application software Tensor Flow, OpenFOAM, HIVE(Visualization) How to use Batch environment 33
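As a minimal sketch of what a batch-mode analysis script might look like on this system, the following Python code loads simulation output and computes summary statistics with TensorFlow. The file name, the array layout, and the use of the TensorFlow 1.x Session API are assumptions for illustration, not specifications from this call.

```python
# Hypothetical batch-mode analysis sketch using the provided Python and
# TensorFlow stack. File name and array layout are placeholders; the
# Session-based TensorFlow 1.x API is assumed (2018-era software).
import numpy as np
import tensorflow as tf

# Placeholder: load a field from large-scale simulation output.
field = np.load("simulation_output.npy")   # e.g., shape (nz, ny, nx)

x = tf.placeholder(tf.float32, shape=field.shape)
mean = tf.reduce_mean(x)
rms = tf.sqrt(tf.reduce_mean(tf.square(x)))

with tf.Session() as sess:
    m, r = sess.run([mean, rms], feed_dict={x: field})
print("mean = %g, rms = %g" % (m, r))
```

In batch mode, a script like this would typically be submitted to the back-end nodes as a job rather than run interactively on the front-end nodes.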
