iSCSI Testing at NASA GSFC


iSCSI Testing at NASA GSFC
Hoot Thompson
11030 Clara Barton Drive, Fairfax Station, VA 22039
Phone: +1-703-250-3754  FAX: +1-703-250-3742  e-mail: hoot@ptpnow.com
Presented at the THIC Meeting at the Raytheon ITS Auditorium, 1616 McCormick Dr., Upper Marlboro MD 20774-5301, November 5-6, 2002

Introduction

Some trends:
- Storage consolidation continues to be a popular architectural construct: Storage Area Networks (SANs)
- Minimizing costs while expanding storage capacity and access is an obvious driver
- Fibre Channel connected storage remains dominant, at least in the high-performance space
- Existing IP infrastructures, versus still-expensive Fibre Channel, are an attractive mechanism for connecting users to storage at the block level

Are IP-based transport technologies the answer?

By Way of Review: Traditional Infrastructure

Storage Area Network (SAN) Infrastructure

Definitions

- Fibre Channel: industry-standard, high-speed SCSI transport technology; 1 or 2 Gbit today, with 10 Gbit coming
- Internet SCSI (iSCSI): represents a "light switch" approach to storage networking
- Fibre Channel over IP (FCIP): a means of encapsulating Fibre Channel frames within TCP/IP, specifically for linking Fibre Channel SANs over wide areas
- Internet Fibre Channel Protocol (iFCP): a gateway-to-gateway protocol for providing Fibre Channel fabric services to Fibre Channel end devices over a TCP/IP network

Notes:
- The standards are largely the work of the Internet Engineering Task Force (IETF), www.ietf.org
- Definitions are from "IP SANs: A Guide to iSCSI, iFCP, and FCIP Protocols for Storage Area Networks" by Tom Clark

A sketch of how iSCSI carries SCSI commands over TCP follows.
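To make the encapsulation concrete, here is a minimal sketch that packs the 48-byte Basic Header Segment (BHS) of an iSCSI SCSI Command PDU around a SCSI READ(10) CDB, following the field layout from the IETF iSCSI drafts of the period (later standardized as RFC 3720). The field values, LUN addressing, and task parameters are illustrative assumptions, not taken from the GSFC tests.

import struct

def iscsi_scsi_cmd_bhs(lun, itt, cdb, edtl, cmdsn, expstatsn, read=True):
    # Pack the 48-byte Basic Header Segment of an iSCSI SCSI Command PDU.
    # Offsets follow the RFC 3720 layout; AHS and data segments are omitted.
    opcode = 0x01                      # SCSI Command opcode
    flags = 0x80                       # F (final PDU of the command) bit
    flags |= 0x40 if read else 0x20    # R (read) or W (write) bit
    flags |= 0x01                      # task attribute: SIMPLE
    dsl = (0).to_bytes(3, "big")       # DataSegmentLength: none for a read
    return struct.pack(">BBHB3sQIIII16s",
                       opcode, flags,
                       0,              # bytes 2-3: opcode-specific, unused here
                       0,              # TotalAHSLength
                       dsl,
                       lun << 48,      # LUN in the top 16 bits (simple addressing)
                       itt,            # Initiator Task Tag
                       edtl,           # Expected Data Transfer Length
                       cmdsn, expstatsn,
                       cdb.ljust(16, b"\x00"))

# A SCSI READ(10) CDB: read 64 blocks starting at LBA 0.
cdb = struct.pack(">BBIBHB", 0x28, 0, 0, 0, 64, 0)
bhs = iscsi_scsi_cmd_bhs(lun=0, itt=1, cdb=cdb,
                         edtl=64 * 512, cmdsn=1, expstatsn=1)
assert len(bhs) == 48                  # the BHS is always 48 bytes

The point of the sketch: the SCSI command rides unchanged inside a TCP byte stream, which is why ordinary IP infrastructure can substitute for a Fibre Channel fabric at the block level.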

IP-Based SAN Connections

GSFC Pilot SAN Configuration

[Diagram: pilot SAN spanning GSFC Buildings 28, 32, and 33, the University of Maryland, and California/Alaska sites. Components include RAID storage, JBOD, and a tape library behind Fibre Channel switches; iSCSI routers and FCIP bridges joining the Fibre Channel fabrics over IP connections and the GSFC inter-building network; and the hosts Linux_G1, Linux_G2, Linux_G3, Linux_MD, W2K_G1, Sun_G1, SGI_G1, and SGI_AK.]

IP Testing

- iSCSI in comparison to native Fibre Channel
  - Primarily Linux-based machines
  - A Sun platform was also looked at
  - SGI currently does not support iSCSI
  - Windows was not the platform of choice
- TCP off-load engine (TOE) card
  - Evaluated on a Windows machine

Benchmarks

- Large-file, large-block sequential transfers: lmdd
- Small-file, transaction-oriented tests: bonnie++, Postmark
- Application testing: composite generation from MODIS data by U of MD staff
- Different file systems:
  - ext2fs (native Linux file system)
  - cvfs, now StorNext File System (ADIC)
  - Global File System (Sistina)

Note: the benchmark numbers are for the most part out-of-the-box results and should be viewed as representative, not definitive. A sketch of the sequential-transfer measurement follows.
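For readers without lmbench at hand, this minimal sketch measures large-block sequential write and read throughput in the same spirit as the lmdd runs behind the following charts. The /mnt/san mount point and the fixed 512 MB file size are hypothetical stand-ins, not the actual GSFC parameters, and a real run would need to defeat the page cache (for example, by remounting between the write and read passes) for the read figures to be meaningful.

import os
import time

def sequential_write(path, block_size, count):
    # Write `count` blocks of `block_size` bytes sequentially and return MB/s,
    # including the time to flush dirty pages to the device.
    buf = os.urandom(block_size)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    start = time.time()
    for _ in range(count):
        os.write(fd, buf)
    os.fsync(fd)
    os.close(fd)
    return block_size * count / (time.time() - start) / 2**20

def sequential_read(path, block_size):
    # Read the file back in `block_size` chunks and return MB/s.
    fd = os.open(path, os.O_RDONLY)
    total = 0
    start = time.time()
    while True:
        chunk = os.read(fd, block_size)
        if not chunk:
            break
        total += len(chunk)
    os.close(fd)
    return total / (time.time() - start) / 2**20

if __name__ == "__main__":
    path = "/mnt/san/testfile"        # hypothetical SAN mount point
    for mb in (1, 2, 4, 8, 16, 32):   # the block sizes used in the charts
        bs = mb * 2**20
        count = 512 // mb             # keep total transfer fixed at 512 MB
        wr = sequential_write(path, bs, count)
        rd = sequential_read(path, bs)
        print(f"{mb:2d} MB blocks: write {wr:6.2f} MB/s, read {rd:6.2f} MB/s")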

Linux_G2 Write Performance
[Chart: transfer rate (MB/s) vs. block size (1, 2, 4, 8, 16, 32 MB) for ext2, gfs, and cvfs; y-axis 48-60 MB/s]

Linux_G2 Read Performance
[Chart: transfer rate (MB/s) vs. block size (1-32 MB) for ext2, gfs, and cvfs; y-axis 45-80 MB/s]

Linux_G3 Write Performance
[Chart: transfer rate (MB/s) vs. block size (1-32 MB) for ext2, gfs, and cvfs; y-axis 25-45 MB/s]

Linux_G3 Read Performance
[Chart: transfer rate (MB/s) vs. block size (1-32 MB) for ext2, gfs, and cvfs; y-axis 10-30 MB/s]

Linux_MD Write Performance
[Chart: transfer rate (MB/s) vs. block size (1-32 MB) for ext2fs, gfs, and cvfs; y-axis 12-26 MB/s]

Linux_MD Read Performance
[Chart: transfer rate (MB/s) vs. block size (1-32 MB) for ext2fs, gfs, and cvfs; y-axis 10-16 MB/s]

TOE Performance - Writes
[Chart: transfer rate (MB/s, 0-45) and CPU utilization (%, 0-30) vs. block size (1-32 MB), standard NIC versus TOE card]

TOE Performance - Reads
[Chart: transfer rate (MB/s, 0-50) and CPU utilization (%, 0-30) vs. block size (1-32 MB), standard NIC versus TOE card]

Bonnie++ Results (Number of Files = 10:120:80/44, Chunk Size = 2GB)

Sequential output, sequential input, and random seeks (KB/s with %CPU; seeks/s with %CPU):

System            FS        Out Per-Chr   Out Block    Rewrite      In Per-Chr   In Block     Seeks
Linux_G2*         ext2fs    9428 / 99     60735 / 52   3225 / 32    9012 / 98    70047 / 36   6716.5 / 45
Linux_G2          gfs       8145 / 97     40740 / 65   31937 / 49   8893 / 98    64626 / 39   494.6 / 4
Linux_G2*         cvfs      7584 / 86     32691 / 48   4848 / 32    7647 / 87    40208 / 38   260.5 / 15
Linux_G3          ext2fs    7042 / 99     41860 / 42   11340 / 18   6353 / 97    16164 / 12   385.2 / 4
Linux_G3          gfs       5640 / 97     25467 / 63   13737 / 34   6206 / 95    20683 / 19   1263.4 / 19
Linux_G3          cvfs      4289 / 68     10007 / 28   2114 / 23    3752 / 62    7967 / 19    237.5 / 22
Linux_MD-IDE      ext2fs    17626 / 97    45303 / 31   21270 / 14   16018 / 88   44266 / 13   258.1 / 1
Linux_MD-iSCSI    ext2fs    11179 / 58    20761 / 9    6345 / 3     10664 / 65   8678 / 3     91.1 / 0
Linux_MD-iSCSI    gfs       10839 / 64    23343 / 20   8680 / 7     11368 / 70   13551 / 5    339.5 / 2
Linux_MD-iSCSI    cvfs      3277 / 19     2762 / 2     753 / 3      3226 / 19    7618 / 7     188.8 / 7

Sequential and random create/read/delete (operations/s with %CPU; +++++ means too fast to measure):

System            FS        Seq Create    Seq Read      Seq Delete   Rnd Create   Rnd Read      Rnd Delete
Linux_G2*         ext2fs    17420 / 98    +++++         +++++        17621 / 99   +++++         +++++
Linux_G2          gfs       1330 / 35     +++++         938 / 55     1230 / 71    18745 / 98    952 / 53
Linux_G2*         cvfs      24 / 1        192 / 17      25 / 0       24 / 2       205 / 18      48 / 0
Linux_G3          ext2fs    11231 / 100   +++++         +++++        12182 / 99   +++++         +++++
Linux_G3          gfs       818 / 35      15870 / 100   767 / 52     437 / 23     12569 / 100   731 / 52
Linux_G3          cvfs      23 / 2        146 / 16      25 / 0       23 / 2       98 / 12       24 / 1
Linux_MD-IDE      ext2fs    12623 / 71    +++++         +++++        15112 / 90   +++++         8071 / 28
Linux_MD-iSCSI    ext2fs    +++++         +++++         15208 / 13   +++++        +++++         +++++
Linux_MD-iSCSI    gfs       369 / 6       +++++         283 / 4      690 / 11     +++++         281 / 4
Linux_MD-iSCSI    cvfs      18 / 1        82 / 3        24 / 0       18 / 0       74 / 3        24 / 0

Postmark Results (times in seconds; counts with per-second rates in parentheses)

Linux_MD:

File Range  Block  FS    Time tot/trans  Trans/s  Created (/s)  Crtd alone (/s)  Crtd w/trans (/s)  Read (/s)   Append (/s)  Deleted (/s)  Del alone (/s)  Del w/trans (/s)  MB Read (MB/s)    MB Written (MB/s)
16K - 1M    512    ext2  3 / 1           500      730 (243)     500 (250)        230 (230)          252 (252)   248 (248)    730 (243)     460 (460)       270 (270)         139.76 (46.59)    423.93 (141.31)
16K - 1M    512    gfs   42 / 15         33       730 (17)      500 (83)         230 (15)           252 (16)    248 (16)     730 (17)      460 (21)        270 (18)          139.76 (3.33)     423.93 (10.09)
16K - 1M    512    cvfs  204 / 92        5        730 (3)       500 (5)          230 (2)            252 (2)     248 (2)      730 (3)       460 (27)        270 (2)           139.76 (702 KB/s) 423.93 (2.08)
1M - 8M     4096   ext2  238 / 140       3        742 (3)       500 (5)          242 (1)            252 (1)     248 (1)      742 (3)       484 (484)       258 (1)           1210.73 (5.09)    3604.88 (15.15)
1M - 8M     4096   gfs   283 / 154       3        742 (2)       500 (4)          242 (1)            252 (1)     248 (1)      742 (2)       484 (26)        258 (1)           1210.73 (4.28)    3604.88 (12.74)
1M - 8M     4096   cvfs  1126 / 505      0        500 (0)       500 (0)          242 (0)            252 (0)     248 (0)      742 (0)       484 (25)        258 (0)           1210.73 (1.08)    3604.88 (3.2)

Linux_G2:

File Range  Block  FS    Time tot/trans  Trans/s  Created (/s)  Crtd alone (/s)  Crtd w/trans (/s)  Read (/s)   Append (/s)  Deleted (/s)  Del alone (/s)  Del w/trans (/s)  MB Read (MB/s)    MB Written (MB/s)
16K - 1M    512    ext2  6 / 2           250      730 (121)     500 (166)        230 (115)          252 (126)   248 (124)    730 (121)     460 (460)       270 (135)         139.76 (23.29)    423.93 (70.66)
16K - 1M    512    gfs   19 / 8          62       730 (38)      500 (83)         230 (28)           252 (31)    248 (31)     730 (38)      460 (92)        270 (33)          139.76 (7.36)     423.93 (22.31)
16K - 1M    512    cvfs  90 / 37         13       730 (8)       500 (13)         230 (6)            252 (6)     248 (6)      730 (8)       460 (27)        270 (7)           139.76 (1.55)     423.93 (4.71)
1M - 8M     4096   ext2  93 / 51         9        742 (7)       500 (12)         242 (4)            252 (4)     248 (4)      742 (7)       484 (484)       258 (5)           1210.73 (13.02)   3604.88 (38.76)
1M - 8M     4096   gfs   147 / 75        6        742 (5)       500 (7)          242 (3)            252 (3)     248 (3)      742 (5)       484 (121)       258 (3)           1210.73 (8.24)    3604.88 (24.52)
1M - 8M     4096   cvfs  318 / 176       2        742 (2)       500 (4)          242 (1)            252 (1)     248 (1)      742 (2)       484 (25)        258 (1)           1210.73 (3.81)    3604.88 (11.34)

A sketch of the workload behind these columns follows.
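For context on the columns: Postmark creates a pool of files in the configured size range, runs a mix of read/append/create/delete transactions against the pool, then deletes what remains, reporting counts and per-second rates for each phase. The following minimal sketch mimics that workload; the /mnt/san/postmark directory and the uniform operation mix are assumptions for illustration, not Postmark's exact implementation or the GSFC settings.

import os
import random
import time

def postmark_style(workdir, nfiles=500, ntrans=500,
                   size_lo=16 * 1024, size_hi=1024 * 1024, block=512):
    # Postmark-like workload: create a file pool, run mixed transactions,
    # then delete the remaining files, timing each phase.
    os.makedirs(workdir, exist_ok=True)
    files = []
    serial = 0
    start = time.time()
    for _ in range(nfiles):                       # creation phase ("alone")
        name = os.path.join(workdir, "pm%d" % serial)
        serial += 1
        with open(name, "wb") as f:
            f.write(os.urandom(random.randint(size_lo, size_hi)))
        files.append(name)
    t_create = time.time() - start
    start = time.time()
    for _ in range(ntrans):                       # transaction phase ("mixed")
        op = random.choice(("read", "append", "create", "delete"))
        if not files:                             # never operate on an empty pool
            op = "create"
        if op == "read":
            with open(random.choice(files), "rb") as f:
                while f.read(block):
                    pass
        elif op == "append":
            with open(random.choice(files), "ab") as f:
                f.write(os.urandom(block))
        elif op == "create":
            name = os.path.join(workdir, "pm%d" % serial)
            serial += 1
            with open(name, "wb") as f:
                f.write(os.urandom(random.randint(size_lo, size_hi)))
            files.append(name)
        else:                                     # delete a random file
            name = files.pop(random.randrange(len(files)))
            os.remove(name)
    t_trans = time.time() - start
    for name in files:                            # deletion phase ("alone")
        os.remove(name)
    print("create %.1fs, transactions %.1fs (%.0f/s)"
          % (t_create, t_trans, ntrans / t_trans))

postmark_style("/mnt/san/postmark")               # hypothetical mount point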

Application Testing (U of MD)

- Successfully recognized and mounted storage located ~6 miles away
- Generated MODIS composite data:
  - ext2fs: 1 hr 45 min, plus ftp overhead time of 45 min
  - gfs: 2 hr 8 min
  - cvfs: 3 hr 15 min

AK-to-MD FCIP Test

Conclusions

- The technology is straightforward to implement and test
- IP is more than just a viable SAN technology; it is poised to have a dramatic impact
- The market has yet to play out the options: iSCSI, FCIP, and/or iFCP
- Vendor commitment is still forming
- The gain in flexibility offsets the bandwidth loss for a potentially large category of users