A DAQ Architecture for the Agata Experiment

1 A DAQ Architecture for the Agata Experiment
Gaetano Maron, INFN Laboratori Nazionali di Legnaro

2 Outline
- On-line Computing Requirements
- Event Builder
- Technologies for the On-line System
- Run Control and Slow Control
- Agata Demonstrator
- Project Organization
- Off-line Infrastructure

3 Agata Global On-line Computing Requirements
- Front-end electronics and pre-processing: 200 Gbps in (1 Gbps x 200); max 10 Gbps out (50 Mbps x 200)
- Pulse Shape Analysis: 1.5 x 10^6 SI95 (present algorithm)
- Event Builder: 5 x 10^3 SI95; 10 Gbps in, 1 Gbps out
- Tracking: 3 x 10^5 SI95 (no GLT); 3 x 10^4 SI95 (with GLT, at 30 kHz)
- Storage
- Units: SI95 = SpecInt95; 1 SI95 = 10 CERN Units = 40 MIPS. GLT = Global Level Trigger
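To keep these figures straight, a quick back-of-the-envelope check of the aggregate rates and the unit conversion quoted on this slide (the per-detector rates and the 200-detector count come from the slide; the rest is plain arithmetic):

```python
# Back-of-the-envelope check of the slide's aggregate rates and units.
N_DET = 200                  # detector channels
RAW_PER_DET_GBPS = 1.0       # front-end output per detector
PSA_OUT_PER_DET_MBPS = 50.0  # pre-processed output per detector

raw_total_gbps = N_DET * RAW_PER_DET_GBPS              # 200 Gbps into pre-processing
psa_total_gbps = N_DET * PSA_OUT_PER_DET_MBPS / 1000.0 # 10 Gbps into the event builder

# Unit conversion quoted on the slide: 1 SI95 = 10 CERN Units = 40 MIPS.
PSA_SI95 = 1.5e6             # present PSA algorithm
print(f"front-end aggregate: {raw_total_gbps:.0f} Gbps")
print(f"post-PSA aggregate : {psa_total_gbps:.0f} Gbps")
print(f"PSA farm           : {PSA_SI95:.1e} SI95 = {PSA_SI95 * 40:.1e} MIPS")
```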

4 Input to the DAQ System
- Per-detector chain (i = 1-200): preamplifier -> FADC -> DSPs -> FPGA multiplexer -> PC -> PSA farm
- Input to the PSA farm: 1 Gbps x 200; PSA computing need: 1.5 MSI95 (now); output: 50 Mbps x 200

5 Farming
- Commodity PC farms + Linux are a well-established technology
- Rack-mountable cases allow fitting hundreds of PCs in a few racks (with the blade form factor): about 16 kSI95 per rack in 2004, about 50 kSI95 per rack in 2007, about 150 kSI95 per rack in 2010
- (diagram: computational nodes, data servers, gateways)

6 Agata On-line System
- Front-end F1-F200 -> (1 Gbps links) PSA farm R1-R200 -> (builder network, 10 Gbps) Event Builder (HPCC builder, B1-B20) -> Tracking farm -> Data servers ds1-ds4 -> Storage (1000 TB)
- Link speeds in the diagram: 10 Gbps, 1 Gbps, 100 Mbps, > 1 Gbps

7 Event Building: Simple Case
- (diagram: clocked time slots 1-3; each PSA farm R1-R200 emits event fragments T01-T12 that are routed over the builder network (10 Gbps) to builder units BU1-BUn)
- n could range (now) from 10 to 15, depending on the efficiency of the event builder algorithm and on the communication protocol used
- In the final configuration (~2005) we could imagine having a single 10 Gbps output link and a single BU
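The mechanism is easy to sketch in a few lines: each fragment is tagged with its time-slot number, and the slot number alone decides which builder unit assembles it, so all fragments of one slot converge on one BU. A minimal sketch with hypothetical names (BuilderUnit, builder_for_slot); the round-robin slot-to-BU rule is an assumption for illustration:

```python
from collections import defaultdict

N_BU = 12  # builder units; the slide quotes 10-15 depending on algorithm/protocol

def builder_for_slot(slot: int) -> int:
    """Assumed round-robin rule: every fragment of a time slot goes to one BU."""
    return slot % N_BU

class BuilderUnit:
    def __init__(self, n_sources: int):
        self.n_sources = n_sources
        self.pending = defaultdict(dict)   # slot -> {source: fragment}

    def add(self, slot: int, source: int, fragment: bytes):
        self.pending[slot][source] = fragment
        if len(self.pending[slot]) == self.n_sources:
            return self.pending.pop(slot)  # complete time slot assembled
        return None

# Usage: 3 sources; all fragments of slot 7 converge on the same builder unit.
bus = [BuilderUnit(n_sources=3) for _ in range(N_BU)]
done = None
for src in range(3):
    done = bus[builder_for_slot(7)].add(7, src, b"frag")
print("slot 7 complete:", done is not None)
```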

8 Time Slot Assignment
- The TTC assigns the event number (timestamp)
- The MUX buffers events according to a given rule and then defines the time slot; it assigns a buffer number to this collection of events and distributes the buffers to the PSA farm according to their buffer number
- The PSA farm shrinks the incoming buffers, so a further buffering stage is needed; it assigns an EB (Event Builder) number to the new buffers and distributes them to the event builder farm according to that number (see the sketch below)
- All this is synchronous for all the detector slices (slice 1 to slice 200), so event merging in the EB farm is feasible
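The three synchronized numbers can be made concrete with two deterministic rules; a minimal sketch, assuming a fixed events-per-buffer grouping rule (the slide only says "a given rule") and hypothetical function names:

```python
# Sketch of the three synchronized numbers from this slide (names hypothetical).
# The TTC stamps each event; the MUX groups events into a buffer; the PSA stage
# re-buffers and tags the result with an event-builder (EB) number.
EVENTS_PER_BUFFER = 4   # ASSUMPTION: the slide's "given rule", taken as fixed here

def buffer_number(event_number: int) -> int:
    """MUX rule: consecutive timestamps fall into the same buffer."""
    return event_number // EVENTS_PER_BUFFER

def eb_number(buf_number: int, n_eb_nodes: int) -> int:
    """PSA rule: buffers are spread over the EB farm by buffer number."""
    return buf_number % n_eb_nodes

# Because every detector slice applies the same two rules to the same
# timestamps, fragments of the same events meet at the same EB node:
for ev in range(8):
    b = buffer_number(ev)
    print(f"event {ev} -> buffer {b} -> EB node {eb_number(b, 3)}")
```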

9 Agata Event Builder: Some More Requirements
- (diagram: same clocked time-slot picture as slide 7, with readout farms R1-R200 feeding builder units BU1-BU3 over the 10 Gbps builder network)
- Delayed coincidences can span more than one time slot
- Fragments of the same event may end up in different BUs

10 HPCC for the Event Builder
- High-speed links (> 2 Gbps)
- Low-latency switch
- Fast inter-processor communication
- Low-latency message passing
The builder network is a High Performance Computing and Communication (HPCC) system.

11 Technologies for the Agata On-line System
- On-line software
- Networking trends
- Event builder
- CPU trends
- Building blocks for the Agata farms
- Storage systems

12 On-line Software: a Common Framework
An environment for data acquisition applications (CMS):
- Communication over multiple network technologies concurrently, e.g. input on Myrinet, output on TCP/IP over Ethernet, Infiniband, etc.
- Configuration (parametrization) and control protocol, and bookkeeping of information
- Cross-platform deployment: write once, use on every supported platform (Unix, RTOS)
- High-level provision of system services: memory management, synchronized queues, tasks
- Built-in efficiency enablers: zero-copy and buffer-loaning schemes
- Usable by everyone
The aim is to create interoperable systems: PSA farm applications, event builder and tracking, commonly managed.
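Buffer loaning is worth a tiny illustration: instead of copying payloads between stages, a stage lends out slices of one pre-allocated pool and reclaims them when the consumer is done. This is a minimal sketch of the idea only, with hypothetical names, not the CMS framework API:

```python
# Minimal buffer-loaning sketch (hypothetical names, not the CMS framework API).
class BufferPool:
    def __init__(self, n_buffers: int, size: int):
        self._backing = bytearray(n_buffers * size)
        self._size = size
        self._free = list(range(n_buffers))

    def loan(self):
        """Hand out a zero-copy view of one slot; the caller must give it back."""
        idx = self._free.pop()
        view = memoryview(self._backing)[idx * self._size:(idx + 1) * self._size]
        return idx, view

    def give_back(self, idx: int):
        self._free.append(idx)

pool = BufferPool(n_buffers=8, size=4096)
idx, buf = pool.loan()
buf[:5] = b"event"         # producer fills the loaned slot in place
consumed = bytes(buf[:5])  # consumer reads it; no intermediate copies are made
pool.give_back(idx)        # slot returns to the pool once processing is done
print("consumed:", consumed)
```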

13 Networking Trends
- Local networking is not an issue: already now Ethernet fits the future needs of the NP experiments (link speed max 1 Gbps; switch aggregate bandwidth O(100) Gbps; O(100) Gbit ports per switch; O(1000) Fast Ethernet ports per switch)
- (chart: aggregate bandwidth within a single switch, 64 to 256 Gbps, for Myrinet, Infiniband, 10 GbEth and GigaEthernet)
- If HPCC is requested (e.g. the Agata builder farm), the options are Myrinet, Infiniband (= 4 x Myrinet) and 10 Gbit Ethernet
- Myrinet (2003): ~10 usec one-way latency; throughput in MB/s (chart)

14 Event Builder and Switch Technologies

15 CMS EVB Demonstrator 32x32 (CMS)

16 Myrinet EVB (with Barrel Shifter) (CMS)
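The barrel shifter named in the slide title is a classic way to keep a Myrinet-class switch free of output contention: traffic moves in fixed-length cycles, and in cycle t source i sends only to destination (i + t) mod N, so every output port receives from exactly one input per cycle. A toy illustration of the schedule (the CMS implementation details are not reproduced here):

```python
# Toy barrel-shifter schedule: N sources feeding N builder units through a
# switch with no output contention (one sender per destination per cycle).
N = 4

for cycle in range(N):
    pairs = [(src, (src + cycle) % N) for src in range(N)]
    dests = [d for _, d in pairs]
    assert len(set(dests)) == N   # every destination is hit exactly once
    print(f"cycle {cycle}: " + ", ".join(f"{s}->{d}" for s, d in pairs))
```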

17 Raw GbEth EVB (CMS)

18 GbEth EVB, Full Standard TCP/IP (CMS): CPU load 100%

19 TCP/IP CPU Off-loading - iSCSI
- Internet SCSI (iSCSI) is a standard protocol for encapsulating SCSI commands into TCP/IP packets, enabling block I/O data transport over IP networks
- iSCSI adapters combine NIC and HBA functions:
  1. take the data in block form
  2. handle the segmentation and processing with the TCP/IP processing engine
  3. send the IP packets across the IP network
- (diagram: application, driver and link layers compared for an FC server with a storage HBA, an IP server with a NIC, and an IP server with an iSCSI adapter; file vs block access; FC packets vs IP packets on Ethernet; example: Intel GE 1000 T IP Storage Adapter)
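The encapsulation idea fits in a few lines: a SCSI command descriptor block simply rides as the payload of an ordinary TCP stream. The sketch below builds a standard READ(10) CDB and frames it with a toy length prefix; this is NOT the real iSCSI PDU layout (real PDUs start with a 48-byte Basic Header Segment), and the host address is a placeholder:

```python
# Simplified illustration of the iSCSI idea: SCSI command blocks over TCP.
import socket
import struct

READ_10 = 0x28  # standard SCSI READ(10) opcode

def scsi_read10_cdb(lba: int, blocks: int) -> bytes:
    """Build a 10-byte READ(10) command descriptor block."""
    # opcode, flags, 4-byte LBA, group, 2-byte transfer length, control
    return struct.pack(">BBIBHB", READ_10, 0, lba, 0, blocks, 0)

def send_over_tcp(host: str, port: int, cdb: bytes):
    """Toy framing (length + CDB), not the real iSCSI PDU format."""
    with socket.create_connection((host, port)) as s:
        s.sendall(struct.pack(">H", len(cdb)) + cdb)

cdb = scsi_read10_cdb(lba=2048, blocks=16)
print("CDB bytes:", cdb.hex())
# send_over_tcp("192.0.2.10", 3260, cdb)  # 3260 is the registered iSCSI port
```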

20 Comments on the Agata Event Builder
- The Agata event builder is not an issue (even now): the CMS experiment has already shown the ability to work an order of magnitude beyond the Agata requirements
- Agata could work fully on standard TCP/IP, also in the prototype
- Agata could require an HPCC-based event builder. Technologies already exist for that (Myrinet now; 10 GEthernet on desktops now, but expensive; Infiniband soon), but they have never been applied to event-builder problems. It should not be a big issue, but it requires time to develop, debug, test, etc.
- All this does not mean the Agata event builder is an easy task: we have a two-stage event builder chain (single-detector level, global level) and three different numbers to be assigned (event number, buffer number, event builder number), all of which have to be kept synchronized
- All this means the Agata event builder is a feasible task, even now, provided the proper resources (human in particular) are available

21 Processor and Storage Trends
- (charts: SI95 per CPU by year; disk capacity by year)
- CPU: 1 CPU = 250 SI95 now; 1 CPU = 700 SI95 in 2007
- Disk: 250 GB now; 1 disk = 1 TByte in 2007

22 Building Blocks for the Agata Farms
- 1U CPU box with 2 processors, ~1700 euro now; 40 boxes per rack; (chart: SI95 per box by year)
- Configurations table: number of boxes and racks per farm type (PSA farm; builder farm; tracking farm, no GLT; tracking farm, GLT) for 1, 15 and 200 detectors
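The box and rack counts in the table follow from simple division. A sketch of the arithmetic, assuming an illustrative per-box rating of 500 SI95 (2 x 250 SI95, per the slide-21 "now" figure; the slide's own yearly per-box values are not reproduced here):

```python
import math

BOXES_PER_RACK = 40
SI95_PER_BOX = 500   # ASSUMPTION: dual-CPU box at the slide-21 "now" rating

def racks_needed(farm_si95: float):
    boxes = math.ceil(farm_si95 / SI95_PER_BOX)
    return boxes, math.ceil(boxes / BOXES_PER_RACK)

# Slide-3 requirements for the full 200-detector system:
for name, si95 in [("PSA farm", 1.5e6),
                   ("tracking, no GLT", 3e5),
                   ("tracking, GLT", 3e4)]:
    boxes, racks = racks_needed(si95)
    print(f"{name:18s}: {boxes:5d} boxes, {racks:3d} racks")
```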

23 Blade-Based Farms
- 1 blade box with 2 processors, ~2150 euro now; 14 blades per crate (7U), ~30000 euro per crate now; 6 blade crates per rack = 108 boxes; power = 16 kW per rack
- Gbps backplane with two switches (SW1, SW2) and 2 x 4 x 1 Gbps uplinks
- Configurations table: number of blades and racks per farm type for 1, 15 and 200 detectors; the builder farm and the GLT tracking farm each fit in less than one crate

24 On-line Storage
- On-line storage needs: 1-2 week experiments; max 100 TByte per experiment (no GLT); max 1000 TByte per year
- With 2007 disks (1 disk = 4 TByte), the Agata storage system needs 250 disks (+ 250 for mirroring)
- Archiving: O(1000) TB per year cannot be handled as normal flat files
- Not only physics data are stored: run conditions and calibrations too. The correlations between physics data, calibrations and run conditions are important for the off-line analysis
- Database technology already plays an important role in physics data archiving (BaBar, the LHC experiments, etc.); Agata can exploit their experience and developments
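The 250-disk figure is straightforward to reproduce from the numbers on this slide:

```python
# Disk count for the Agata on-line storage, from the slide's own figures.
MAX_TB_PER_YEAR = 1000
TB_PER_DISK_2007 = 4      # the slide's projection for a 2007 disk

data_disks = MAX_TB_PER_YEAR // TB_PER_DISK_2007   # 250 data disks
total_disks = 2 * data_disks                       # + 250 mirrors
print(f"{data_disks} data disks + {data_disks} mirrors = {total_disks} disks")
```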

25 Storage Technology Trends
- (diagram: application servers -> GEth/iSCSI -> data servers -> Infiniband -> SAN-enabled disk array gateway)
- A commodity Storage Area Network shared by all the farm nodes. The technologies of interest for us are iSCSI over Giga (or 10 Giga) Ethernet, and Infiniband
- Full integration between the SAN and the farm is achieved if a cluster file system is used. Examples of cluster file systems are LUSTRE and STORAGE TANK (IBM, to be officially released soon)

26 Example of an iSCSI SAN Available Today
- Application servers -> GEth/iSCSI -> data servers
- Host adapters: Intel GE 1000 T, Adaptec ASA, LSI, etc.
- Data server: 2 x GE; LSI Logic MegaRAID SATA iSCSI controller; RAID SATA controller; 16 disks = ~5 TByte per controller (SATA = Serial ATA)

27 Data Archiving: Advanced Parallel Server
- (diagram: input load-balancing switch -> data servers, coupled by a low-latency interconnection (e.g. HPCC) with shared data caching, attached to a Storage Area Network; Internet/Intranet access; scalability)
- Example: Oracle Real Application Cluster

28 Run Control and Slow Control
- (diagram: the Run Control and Monitor System and the Slow Control span the whole chain: front-end electronics and pre-processing, Pulse Shape Analysis, Event Builder, Tracking, Storage)

29 Run Control and Slow Control: Technological Trends
- Virtual counting room: take a shift from a distance; tele-presence (web cams, videoconferencing and chatting, etc.)
- Web-based technologies: SOAP, Web Services and Grid Services (Open Grid Services Architecture), databases
- Demonstrators in operation at the CMS test beam facilities
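As an illustration of the SOAP direction, a few lines that wrap a run-control command in a SOAP 1.1 envelope and POST it over HTTP. Everything here is hypothetical (endpoint URL, command name, namespace); it is a sketch of the style, not the RCMS demonstrator's actual interface:

```python
# Hedged sketch of a SOAP run-control message (all names hypothetical).
import urllib.request

def soap_command(command: str, run_number: int) -> bytes:
    return f"""<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <{command} xmlns="urn:agata:runcontrol">
      <runNumber>{run_number}</runNumber>
    </{command}>
  </soap:Body>
</soap:Envelope>""".encode()

def send(url: str, command: str, run_number: int):
    req = urllib.request.Request(url,
                                 data=soap_command(command, run_number),
                                 headers={"Content-Type": "text/xml",
                                          "SOAPAction": command})
    return urllib.request.urlopen(req)

print(soap_command("startRun", 42).decode())
# send("http://rc-demo.example.org/services/RunControl", "startRun", 42)
```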

30 RCMS: Present Demonstrators
- Java clients, SOAP messaging, Java Tomcat containers or Grid Services, MySQL or Oracle back-end (CMS)

31 Slow Control Trends
- Ethernet everywhere: Agata could be fully controlled by Ethernet connections, including the front-end electronics. This leads to a homogeneous network, avoiding bridges between buses, the software drivers to perform the bridging, etc.
- TINI system: embedded web servers and embedded Java virtual machines on the electronics. Embedded Java should guarantee a homogeneous development environment, portability, etc.
- Xilinx Virtex-II Pro

32 Agata Demonstrator
- Front-end F1-F15 -> PSA farm P1-P15: 14 blade centers
- Event builder (HPCC builder, B1-B2): 2 dual-processor servers + Myrinet; builder network: 15 x 2 Eth switch
- Tracking farm T1-T2: 3 full blade centers (no GLT); 4 blades (with GLT)
- Data servers on a SAN (Storage Area Network = iSCSI): iSCSI disk array + SATA disks, ~TByte storage
- Link speeds: 1 Gbps, 100 Mbps

33 Project Breakdown
1. Main Data Flow (front-end F1-F15, PSA P1-P15, builder network B1-B2, tracking T1-T2, SAN)
2. Run Control & Monitoring
3. Slow Control

34 Human Resources
- Agata has requirements very close to those of the LHC experiments. We should exploit the LHC experiments' technologies, in particular for: data acquisition (framework, readout, event builder, etc.), run control (framework for distributed systems, basic GUI, etc.), slow control, and data storage (storage techniques, DBs, etc.)
- The following table assumes the adoption of CMS (an LHC experiment) technologies for most of the above items. The red numbers indicate the FTEs required to build the Agata applications on top of the CMS frameworks
- Table (framework development and maintenance vs Agata's applications, in FTE x year): Main Data Flow (FE readout, event builders, storage, etc.): 5 FTE (CERN) framework, 1 FTE (LNL) + 3 FTE applications; Run Control & Monitoring: 4 FTE (LNL + CERN) framework, 2 FTE applications; Slow Control: -

35 Resources Distribution
- We could organize 3 development centers: Main Data Flow at Legnaro; Run Control at XX; Slow Control at YY
- Human resources should be concentrated in the related development centers: Legnaro: 4 FTE (1 from LNL, 3 external); XX: 2; YY: ?

36 Conclusions (I)
No fundamental technological issues for the final Agata on-line system:
- The experiment requirements and the present understanding of the PSA algorithm fit a final (2010) moderate-size on-line system (O(1000) machines). A mere 3x improvement in the PSA calculations would lead to a much more manageable system (3 racks)
- Both the networking and the event builder already fit the technologies available today
- The storage requirements (1000 TByte) fit the evolution of storage technologies
Demonstrator: same architecture as the final system, only scaled down to the foreseen number of detectors.

37 Conclusions (II)
Strong collaboration with the LHC experiments, and in particular with CMS, is suggested, as we can benefit from:
- Their developments
- Their (partial) support
- Their experience and advice

38 OFF-LINE INFRASTRUCTURE: A GRID APPROACH

39 LHC Off-line Infrastructure
- (diagram: T0 at the center, T1s connected by >= 10 Gbps links, T2s below)
- CERN Data Production Center (T0): on-line system, on-line storage (~PBytes), central archive
- Regional Computing Facilities (T1, country-bound): computing power for analysis, on-line storage, local archive
- Local (in-country) Computing Facilities (T2): departmental farm analysis
- Agata could exploit the LHC off-line infrastructure, in particular the Regional Computing (RC) centers. The Agata Data Production Center should be properly linked to such regional centers: a typical experiment will take about 1 day to copy the entire data set; no tape copy (tape only for backup)
- New model for off-line analysis: the Grid approach

40 World-Wide Farming: the GRID
- GRID is an emerging model of large-scale distributed computing. Its main aims are: transparent access to multi-petabyte distributed databases; easy plug-in; hidden complexity of the infrastructure
- GRID tools can help the computing of the experiments significantly, promoting the integration between the data centers and the regional computing facilities
- GRID technology fits the Agata experiment's off-line requests
- (diagram: LHC Computing Grid (LCG) infrastructure: T0 at CERN; T1s at FermiLab/USA, Brookhaven/USA, Lyon/FR, RAL/UK, CNAF/IT, Karlsruhe/GE)

41 DataGRID
The EU has funded the DataGrid project. The main activities of the project have been:
- Workload Management: services for grid scheduling and resource management (the Resource Broker finds free resources and submits locally submitted jobs to grid farms)
- Data Management: high-speed data mover and replica management
- Monitoring Services: global state and error monitoring
- Local Farm Management: tools for new automated system-management techniques for large computing farms
Based on the Globus (Argonne) toolkit. Evolution in EU FP6: EGEE.

42 Off-line Agata GRID: a Possible Scenario
- Data are produced by a given experiment in the Agata Data Taking Center
- The experiment data set is temporarily stored in the Center and optionally (if computing power is available) pre-processed with the aim of reducing its size. The original data set could be saved in the Center
- The (pre-processed) data set is then moved to the Regional Centers. It could be moved directly to its final destination (lab, department, etc.) if enough storage is available there
- Massive data processing (sorting, matrix production, etc.) can be performed at the level of the Regional Centers. Grid helps to locate such data sets, indicating the available computing power, its location, etc.
- All the data sets are grid-reachable and thus available for processing by the whole collaboration; the outcomes are in turn grid-reachable and available for further analysis
- The final analysis can be performed by exploiting all the Grid computing power available

43 On-line Agata Grid?
- Take control from a distance: user interfaces connect to a Virtual Control Room with supporting services (tele-presence service, video conference & chat service, data representation service)
- Inside the GRID domain: DAQ, trigger (TRG) and slow-control services; farm services (instrument farms T1-Tn); storage services; on/off-line DBs (e.g. calibrations)
- Raw data for on-line monitoring; special experiment line; processing farm extension?

44 Agata GRID: Conclusions
- GRID is becoming a serious and robust infrastructure, well funded and supported (see the EGEE EU project)
- All the LHC experiments (see the LCG project) will use it to integrate the computing power and storage capability provided by the Regional Centers and by the production center (CERN) into a cooperative environment
- Agata could join this infrastructure by providing the needed computing power and storage to the Regional Centers (or reaching an agreement with them)
- Agata, in adopting the GRID middleware proposed by LCG, has to take care of the impact on its developments, mainly on: the experiment's on/off-line databases and the way they are accessed; the local farms (department level); the Production Center farm (if any); the on-line software, as far as DB access is concerned
- GRID technology could also be used for real-time control of the experiment (controlling the experiment from a distance)
