Use of the virtualized HPC infrastructure of the Novosibirsk Scientific Center for running production analysis for HEP experiments at BINP

S. Belov, V. Kaplin, K. Skovpen, A. Korol, A. Sukharev 1, A. Zaytsev
Budker Institute of Nuclear Physics SB RAS, Novosibirsk, Russia
E-mail: A.M.Suharev@inp.nsk.su

A. Adakin, D. Chubarov, V. Nikultsev
Institute of Computational Technologies SB RAS, Novosibirsk, Russia

N. Kuchin, S. Lomakin
Institute of Computational Mathematics and Mathematical Geophysics SB RAS, Novosibirsk, Russia

V. Kalyuzhny
Novosibirsk State University, Novosibirsk, Russia

Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including the Budker Institute of Nuclear Physics (BINP), the Institute of Computational Technologies, and the Institute of Computational Mathematics and Mathematical Geophysics (ICM&MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, several computing facilities are currently hosted by NSC institutes, each optimized for a particular set of tasks; the largest of them are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM&MG), and the Grid Computing Facility of BINP. A dedicated optical network with an initial bandwidth of 10 Gbps connecting these three facilities was built in order to make it possible to share the computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on the XEN and KVM platforms. This contribution gives a thorough review of the present status and future development prospects of the NSC virtualized computing infrastructure and of the experience gained while using it for running production data analysis jobs of the HEP experiments carried out at BINP, in particular the KEDR detector experiment at the VEPP-4M electron-positron collider.

The International Symposium on Grids and Clouds (ISGC) 2012
Academia Sinica, Taipei, Taiwan
February 26 - March 2, 2012

1 Speaker

Copyright owned by the author(s) under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike Licence. http://pos.sissa.it

1. Introduction

Over the last few years the computing infrastructure of the Novosibirsk Scientific Center (NSC), located in Novosibirsk Akademgorodok [1], has improved dramatically as new high performance computing facilities were established at Novosibirsk State University (NSU) [2] and at various institutes of the Siberian Branch of the Russian Academy of Sciences (SB RAS) [3] to be used as shared resources for scientific and educational purposes. The need to provide these facilities with a robust and reliable network infrastructure that would make it possible to share storage and computing resources across the sites emerged as soon as the facilities entered production.

In 2008 the following organizations formed a consortium with the primary goal of building such a network, later named the NSC supercomputer network (NSC/SCN), and of providing it with long term technical support:

- the Institute of Computational Technologies (ICT) [4], hosting all the centralized scientific network infrastructure of SB RAS and of Akademgorodok in particular,
- Novosibirsk State University (NSU), hosting the NSU Supercomputer Center (NUSC) [5],
- the Institute of Computational Mathematics and Mathematical Geophysics (ICM&MG) [6], hosting the Siberian Supercomputer Center (SSCC) [7],
- the Budker Institute of Nuclear Physics (BINP) [8], hosting a GRID computing facility (BINP/GCF) optimized for massively parallel data processing for HEP experiments, which is expected to serve as a BINP RDIG [9] and WLCG [10] site in the near future.

The first stage of the NSC/SCN infrastructure was deployed by ICT specialists in 2009 on top of the existing optical links of the SB RAS Data Communication Network [11] and has been maintained on a 24x7 basis ever since.

This contribution focuses on how the existing NSC/SCN infrastructure was used to build a virtualized computing environment on top of the NUSC and BINP/GCF facilities, which is now exploited for running massively parallel data processing jobs of the KEDR detector experiment [12] being carried out at BINP. Prospective ways of using this environment to serve the needs of other BINP detector experiments and locally maintained GRID sites are also discussed.

2. NSC supercomputer network design and implementation

The supercomputer network as implemented now has a star topology based on 10 Gigabit Ethernet technology. As shown in Figure 1, the central switch of the network is located at ICT and connected to each of the remote participating sites by means of two pairs of single mode fibers (SMF): two of the fibers are equipped with a pair of long range (LR) 10 GbE optical transceivers, while the remaining ones are used for two independent 1 Gbps auxiliary control and monitoring links based on wavelength-division multiplexing (WDM) technology. The only exception is the link between the ICT and SSCC facilities, which is less than 200 meters long and currently deployed over multi-mode fiber (MMF) equipped with short range (SR) 10 GbE transceivers.

Figure 1: Networking layout of the NSC supercomputer network. Links to user groups that are still to be established are represented by dashed lines/circles, and the primary 10 GbE links by green lines. MRTG network statistics gathering points are also shown.

NSC/SCN is designed as a well isolated private network that is not directly exposed to the general purpose networks of the organizations involved. Each site connected to the NSC/SCN infrastructure is equipped with an edge switch used for centralized access management and control, though the client side connections of the 10 Gbps uplinks are implemented individually on each site, reflecting the architectural differences between them. All network links are continuously monitored by means of Multi Router Traffic Grapher (MRTG) [13] instances deployed on the sites.

The following approaches and protocols are currently used for exposing computing and storage resources of the interconnected facilities to each other across the NSC supercomputer network:

- OSI Layer 3 (static) routing between the private IPv4 subnets,
- IEEE 802.1Q VLANs spanned across all the NSC/SCN switches,
- higher level protocols for storage interconnect, as well as InfiniBand and RDMA interconnect between the sites over the Ethernet links (experimental).

Round Trip Time (RTT) values between the sites are less than 0.2 ms. The maximum data transfer rate between the sites for a unidirectional bulk TCP transfer over the NSC/SCN network, as measured with Iperf [14], is 9.4 Gbps. No redundancy is implemented yet for the 10 Gbps links and the routing core of the network, but these features are planned to be added during future upgrades of the infrastructure, along with increasing the maximum bandwidth of each link to 20 Gbps while preserving the low RTT values. The prospects for extending the NSC/SCN network beyond Akademgorodok are also being considered.
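The very low RTT is what allows a single TCP stream to approach line rate without aggressive buffer tuning. As a back-of-the-envelope check (a minimal sketch based only on the figures quoted above; the socket buffer sizes in the loop are illustrative, not actual site settings), the required TCP window follows from the bandwidth-delay product:

```python
# Minimal sketch: bandwidth-delay product (BDP) estimate for a single TCP
# stream over an NSC/SCN 10 GbE link, using the figures quoted in the text
# (10 Gbps capacity, RTT below 0.2 ms). The buffer sizes below are
# illustrative, not taken from the actual site configuration.

def bdp_bytes(capacity_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bytes that must be in flight to fill the pipe."""
    return capacity_bps * rtt_s / 8.0

link_capacity = 10e9   # 10 Gbps nominal link capacity
rtt = 0.2e-3           # 0.2 ms worst-case round trip time between sites

required_window = bdp_bytes(link_capacity, rtt)
print(f"Required TCP window: {required_window / 1024:.0f} KiB")   # ~244 KiB

# A typical maximum TCP receive buffer of a few MiB comfortably exceeds this,
# which is consistent with the 9.4 Gbps single-stream Iperf result quoted above.
for buf in (64 * 1024, 256 * 1024, 4 * 1024 * 1024):
    achievable = min(link_capacity, buf * 8.0 / rtt)
    print(f"buffer {buf // 1024:>5d} KiB -> at most {achievable / 1e9:.1f} Gbps")
```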

3. NSC/SCN common virtualized computing environment

3.1 Generic Design Overview

Since the early days of its deployment, the NSC supercomputer network has interconnected three facilities that are quite different in terms of their primary field of application and the amount of resources available:

- NUSC at NSU: a high performance computing (HPC) oriented facility (an HP BladeSystem c7000 based solution running SLES 11 x86_64 [15] under control of the PBS Pro [16] batch system) providing 37 TFlops of combined computing power (including 8 TFlops delivered by GPUs), 108 TB of total storage system capacity, and a KVM [17] based virtualization solution,
- SSCC at ICM&MG: an HPC oriented facility with 116 TFlops of combined computing power (including 85 TFlops delivered by GPUs) plus 120 TB of total storage system capacity,
- GCF at BINP: a parallel data processing and storage oriented facility (0.2 TFlops of computing power running SL5 x86_64 [18] plus 96 TB of centralized storage system capacity) provided with both XEN [19] and KVM based virtualization solutions.

Considering the obvious imbalance of computing resources among the listed sites, an initiative emerged to use the NSC/SCN capabilities for sharing the storage resources of BINP/GCF with the NUSC and SSCC facilities and, at the same time, to allow BINP user groups to access the computing resources of these facilities, thus creating a common virtualized computing environment on top of them. The computing environment of the largest user group supported by BINP/GCF (described in the next section) was selected for prototyping, early debugging and implementation of such an environment.

3.2 Computing Environment of the KEDR Experiment

KEDR [12, 20] is a large scale particle detector experiment being carried out at the VEPP-4M electron-positron collider [21] at BINP. The offline software of the experiment has been under development since the late 1990s. After several migrations the standard computing environment was frozen on Scientific Linux CERN 3 i386 [22], and no further migrations are expected in the future. The software developed for the KEDR experiment amounts to about 350 kSLOC, as estimated by the SLOCCount [23] tool with default settings. The code is written mostly in Fortran (44%) and C/C++ (53%), and the combined development effort invested in it is estimated at more than 100 man-years. The overall amount of experimental data recorded by the KEDR detector since 2000 is 3.6 TB, stored in a private format. Sun Grid Engine (SGE) [24] is used as the experiment's standard batch system. All the specific features of the computing environment mentioned here make it extremely difficult to run KEDR event simulation and reconstruction jobs within the modern HPC environment of the NUSC and SSCC facilities, thus making the experiment an ideal candidate for testing the common virtualized environment infrastructure deployed on top of the BINP/GCF and NUSC resources.
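As a rough illustration of what running such a frozen SLC3 environment on a modern host involves (the KVM based solution eventually adopted is described in the next subsection), the sketch below assembles a qemu-kvm command line for a single legacy guest. The image path, memory size, device models and networking setup are illustrative assumptions, not the actual NUSC deployment scripts, which are driven through the PBS Pro batch system as described below.

```python
#!/usr/bin/env python3
# Minimal sketch (not the production deployment scripts): starting a legacy
# SLC3 guest with qemu-kvm on a NUSC-like worker node. The image path and
# device models are assumptions; a 2.4-series guest kernel generally needs
# emulated IDE disk and NIC devices rather than paravirtual (virtio) ones.

import subprocess

def start_vm(image, vcpus=2, mem_mb=2048, vnc_display=1):
    """Assemble and launch a qemu-kvm command line for one worker VM."""
    cmd = [
        "qemu-kvm",
        "-m", str(mem_mb),                    # guest memory in MB
        "-smp", str(vcpus),                   # dual-VCPU guests, as in the tests
        "-drive", "file=%s,if=ide" % image,   # IDE disk for the old guest kernel
        "-net", "nic,model=e1000",            # emulated NIC the legacy kernel can drive
        "-net", "tap",                        # attach to a host bridge (requires host-side setup)
        "-vnc", ":%d" % vnc_display,          # headless operation, console via VNC
        "-daemonize",
    ]
    subprocess.check_call(cmd)

if __name__ == "__main__":
    # Image exported from the BINP/GCF storage over NSC/SCN (hypothetical path).
    start_vm("/nusc/vmimages/kedr-slc3.img")
```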

3.3 Initial Implementation and Validation Procedures

The following candidate solutions were carefully evaluated at the NUSC facility while searching for an optimal configuration of the virtualized environment capable of providing both high efficiency in using the host system CPU power and long term virtual machine stability:

- a VMware Server [25] based solution, requiring minimal changes to the native OS of the NUSC cluster: we observed poor long term stability of the virtual machines as well as a significant performance penalty, incompatible with running production jobs of BINP detector experiments,
- a XEN based solution identical to the one deployed on the BINP/GCF resources: ruled out, as it required running a modified version of the Linux kernel that was not officially supported by the hardware vendor of the NUSC cluster,
- a KVM based solution, which showed the best performance and long term stability while running SLC3 based virtual machines among all the evaluated candidates and was therefore chosen for the final validation and for running the KEDR detector production jobs.

In addition, the KVM based solution was validated during large scale tests involving up to 512 dual-VCPU virtual machines of the KEDR experiment running up to 1024 experimental data processing and full detector simulation jobs in parallel, controlled by the KEDR experiment's SGE based batch system.

The final layout of the networking and storage interconnect schema developed for the KVM based virtualization environment deployed on the NUSC resources is shown in Figure 2. Note that all the virtual machine images and input/output data samples are exported to the nodes of the NUSC cluster directly from the BINP/GCF and KEDR experiment storage systems through the NSC/SCN infrastructure.

All the stages of deployment of the KVM based virtualization environment on the computing nodes of the NUSC cluster were automated and are now handled via the standard PBS Pro batch system user interface. Furthermore, a generic integration mechanism has been deployed on top of the virtualization system which handles all the message exchange between the batch systems of the KEDR experiment and the NUSC cluster, thus delivering a completely automated solution for the management of the NSC/SCN virtualized infrastructure. The generic layout of the batch system integration mechanism is shown in Figure 3. A dedicated service periodically checks whether new jobs are pending in the KEDR batch system. If necessary, it submits the corresponding number of so-called "VM jobs" to the NUSC batch system. A "VM job" occupies an entire physical node of the NUSC cluster and starts several VMs according to the number of CPU cores on the node. The VMs boot and report themselves to the KEDR batch system, and then user jobs are started. When there are no appropriate jobs left in the KEDR batch system queue, the VM shutdown procedure is initiated, freeing NUSC cluster resources for other users.
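A minimal sketch of this polling logic is given below; it is not the actual BINP service. It counts pending jobs on the SGE side and tops up whole-node VM jobs on the PBS Pro side. The account name, VM job script path, node size, resource request syntax and polling interval are all assumptions made for illustration, and in reality the two qstat calls would address two different batch servers (SGE for KEDR, PBS Pro for NUSC), which is glossed over here.

```python
#!/usr/bin/env python3
# Minimal sketch of the batch system bridge described above, NOT the actual
# BINP/NUSC integration service. All names and sizes below are illustrative.

import subprocess
import time

CORES_PER_NODE = 8                               # VMs started per NUSC node (assumed)
VM_JOB_SCRIPT = "/nusc/jobs/start_kedr_vms.sh"   # hypothetical PBS Pro job script
VM_JOB_USER = "kedrvm"                           # hypothetical account owning VM jobs

def count_job_lines(output: bytes) -> int:
    """Count lines starting with a numeric job id (true for SGE and PBS qstat output)."""
    return sum(1 for line in output.decode().splitlines()
               if line.strip()[:1].isdigit())

def pending_kedr_jobs() -> int:
    """Jobs waiting in the KEDR SGE queue (pending state only)."""
    return count_job_lines(subprocess.check_output(["qstat", "-s", "p"]))

def active_vm_jobs() -> int:
    """VM jobs already queued or running on the NUSC PBS Pro cluster."""
    return count_job_lines(subprocess.check_output(["qstat", "-u", VM_JOB_USER]))

def submit_vm_jobs(count: int) -> None:
    """Submit whole-node VM jobs; each boots one VM per CPU core on its node."""
    for _ in range(count):
        subprocess.check_call(
            ["qsub", "-l", "select=1:ncpus=%d" % CORES_PER_NODE, VM_JOB_SCRIPT])

def main() -> None:
    while True:
        needed = -(-pending_kedr_jobs() // CORES_PER_NODE)   # ceiling division
        deficit = needed - active_vm_jobs()
        if deficit > 0:
            submit_vm_jobs(deficit)
        # Shutting VMs down when the KEDR queue drains is left to the VM job
        # itself, mirroring the behaviour described in the text.
        time.sleep(60)

if __name__ == "__main__":
    main()
```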

Figure 2: Network, computing and storage resources integration layout for the NUSC and BINP/GCF computing facilities by means of the NSC/SCN supercomputer network.

3.4 Recent Developments and Future Prospects

The solution was thoroughly tested on a large scale within the computing environment of the KEDR experiment in 2011Q1 and has been used for running production jobs ever since, delivering up to 75% of the computing resources required by the KEDR experiment over the last 9 months of continuous operation and resulting in a significant speed-up of the physics analysis, e.g. [26-28]. Later on, in 2011Q3, the SND detector [29, 30] experiment being carried out at the VEPP-2000 collider [31, 32] at BINP successfully adopted the virtualization solution previously built for the KEDR detector in order to satisfy its own needs for HPC resources. Recently, in 2012Q1, the BINP user group doing data analysis for the ATLAS experiment [33] at the LHC machine [34] (CERN, Switzerland) within the framework of the ATLAS Exotics Working Group [35] joined the activity as well. A summary of the resource usage by these BINP user groups actively participating in the project is shown in Figure 4.

We also look forward to other BINP and NSC based user groups joining the activity, in particular:

- the VEPP-2000 collider (BINP),
- the CMD-3 detector [36] at the VEPP-2000 collider (BINP),
- the Russian National Nanotechnology Network (NNN) resource site of SB RAS (ICT).

A similar virtualization infrastructure is proposed to be deployed on the SSCC side, which would increase the amount of computing resources accessible via the NSC/SCN to approximately 150 TFlops of computing power and more than 300 TB of shared storage capacity by the end of 2012.

4. Summary

The supercomputer network of the Novosibirsk Scientific Center, based on 10 Gigabit Ethernet technology, was built by a consortium of institutes located in Novosibirsk Akademgorodok in 2009 on top of the existing optical network infrastructure in order to provide a robust, high bandwidth interconnect for the largest local computer centres devoted to scientific and educational purposes. It currently unites all the HPC communities and major computing facilities of the NSC. Although the NSC/SCN infrastructure is still geographically localized within a circle of 1.5 km in diameter, it may be extended to the regional level in the future in order to reach the next nearest Siberian supercomputing sites.

Once constructed, the NSC supercomputer network made it possible to build various computing environments spanning the resources of multiple participating computing sites, and in particular to implement a virtualization based environment for running typical HEP-specific massively parallel data processing jobs serving the needs of the detector experiments being carried out at BINP. The resulting solution is currently used for running production jobs of the KEDR and SND detector experiments (BINP), as well as the final stages of data analysis for BINP based user groups participating in the ATLAS detector experiment (LHC, CERN). It is foreseen to be used as a template for making a fraction of the NUSC computing resources available for the deployment of gLite [37] worker nodes of the BINP RDIG/WLCG resource site and of the Russian National Nanotechnology Network (NNN) resource site of SB RAS, and also for prototyping the future TDAQ and offline data processing farms for the detector experiment at the Super c-tau Factory electron-positron collider [38] proposed to be constructed at BINP over the upcoming 10 years.

The experience gained while building, testing, and validating a virtualization based solution suitable for running production jobs of HEP detector experiments and compatible with modern high density HPC environments, such as those deployed at the NUSC and SSCC computing facilities, might be of interest to other HEP experiments and WLCG sites across the globe.

5. Acknowledgements

This work is supported by the Ministry of Education and Science of the Russian Federation [39], by grants from the Russian Foundation for Basic Research [40] (RFBR, in particular grant No. 08-07-05031-B), and by the SB RAS Program "Telecommunications and Multimedia Resources of SB RAS".

Figure 3: Batch system integration layout between the detector experiment user groups on the BINP side and the NUSC computing facility. The four stages of deployment of a specific virtual machine group on the NUSC side are shown.

Figure 4: Summary of the usage of NUSC computing resources by the user groups of the detector experiments involved since the beginning of 2011. Currently three user groups based at BINP (the KEDR, SND and ATLAS detector user groups shown here) are running their production jobs within the virtualization infrastructure spanning the NUSC and BINP/GCF computing facilities.

References

[1] Novosibirsk Akademgorodok: http://en.wikipedia.org/wiki/akademgorodok
[2] Novosibirsk State University (NSU): http://www.nsu.ru
[3] Siberian Branch of the Russian Academy of Sciences (SB RAS): http://www.nsc.ru/en/
[4] Institute of Computational Technologies (ICT SB RAS): http://www.ict.nsc.ru
[5] Novosibirsk State University Supercomputer Center (NUSC): http://www.nusc.ru
[6] Institute of Computational Mathematics and Mathematical Geophysics (ICM&MG): http://www.sscc.ru
[7] Siberian Supercomputer Center (SSCC SB RAS): http://www2.sscc.ru
[8] Budker Institute of Nuclear Physics (BINP): http://www.inp.nsk.su
[9] Russian Data Intensive Grid Consortium (RDIG): http://www.egee-rdig.ru
[10] Worldwide LHC Computing Grid: http://cern.ch/lcg/
[11] SB RAS Data Communication Network: http://www.nsc.ru/en/net/
[12] KEDR detector experiment at BINP: http://kedr.inp.nsk.su
[13] Multi Router Traffic Grapher (MRTG): http://oss.oetiker.ch/mrtg/
[14] Iperf project homepage: http://sourceforge.net/projects/iperf/
[15] SUSE Linux Enterprise Server (SLES): http://www.novell.com/products/server/
[16] PBS Pro batch system: http://www.pbsgridworks.com
[17] KVM virtualization platform: http://www.linux-kvm.org
[18] Scientific Linux (SL): http://www.scientificlinux.org
[19] XEN virtualization platform: http://www.xen.org
[20] V.V. Anashin et al., Status of the KEDR detector, NIM A 478 (2002) 420-425
[21] VEPP-4M collider at BINP: http://v4.inp.nsk.su
[22] Scientific Linux CERN (SLC): http://linuxsoft.cern.ch
[23] SLOCCount tool homepage: http://www.dwheeler.com/sloccount/
[24] Sun Grid Engine: http://wikipedia.org/wiki/sun_grid_engine/
[25] VMware Server virtualization solution: http://www.vmware.com/products/server/
[26] V.V. Anashin et al., Measurement of ψ(3770) parameters, http://arxiv.org/abs/1109.4205
[27] V.V. Anashin et al., Measurement of main parameters of the ψ(2S) resonance, http://arxiv.org/abs/1109.4215
[28] E.M. Baldin (KEDR collaboration), Recent results from the KEDR detector at the VEPP-4M e+e- collider, Proceedings of Science (EPS-HEP2011) 174
[29] SND Collaboration website: http://wwwsnd.inp.nsk.su
[30] G.N. Abramov et al., Budker INP 2007-20, Novosibirsk, 2007
[31] VEPP-2000 collider at BINP: http://vepp2k.inp.nsk.su
[32] D.E. Berkaev et al., ICFA Beam Dyn. Newslett. 48 (2009) 235
[33] ATLAS Collaboration, The ATLAS Experiment at the CERN Large Hadron Collider, J. Instrum. 3 (2008) S08003, doi:10.1088/1748-0221/3/08/s08003, SISSA/IOP e-print: http://iopscience.iop.org/1748-0221/3/08/s08003/
[34] Large Hadron Collider (LHC) at CERN: http://cern.ch/lhc/
[35] Public results of the ATLAS Exotics working group: https://twiki.cern.ch/twiki/bin/view/atlaspublic/exoticspublicresults
[36] G.V. Fedotovich et al., Nucl. Phys. Proc. Suppl. 162 (2006) 332
[37] gLite Grid Middleware: http://glite.cern.ch
[38] I.A. Koop et al., The project of the Super c-tau factory with Crab Waist in Novosibirsk, Physics of Particles and Nuclei Letters, Vol. 5, No. 7, pp. 554-559
[39] Ministry of Education and Science of the Russian Federation: http://mon.gov.ru
[40] Russian Foundation for Basic Research: http://www.rfbr.ru