OneCore Storage SDK 5.0

OneCore Storage SDK 5.0
SCST 16G FC Performance Report
March 28, 2014
© 2014 Emulex Corporation

Overview
This report contains performance results for the SCST target mode driver with a dual-port 16G FC LPe16002B-M6 host bus adapter.

Key Results
- ~1M IOPS (peak)
- >5.5 GB/s 50/50 bi-directional throughput

Target System Configuration
- Single LPe16002B-M6 dual-port 16G FC HBA
- 1 CPU / 8-core system
- SCST v2.2.1
- SDK 5.0 OneCore Storage Linux SCST target driver (ocs_fc_scst)
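The headline throughput number can be sanity-checked against the wire rate. Assuming roughly 1600 MB/s of usable data per 16GFC port per direction (a nominal line-rate figure, not one taken from this report), a dual-port HBA running 50/50 bi-directional traffic has a full-duplex ceiling of about 6.4 GB/s:

```python
# Back-of-envelope check of the >5.5 GB/s bi-directional result.
# Assumption: ~1600 MB/s usable data rate per 16GFC port per direction.
PORT_MBps = 1600
ports = 2

# 50/50 bi-directional traffic drives both directions of both ports.
ceiling_GBps = PORT_MBps * ports * 2 / 1000
print(ceiling_GBps)                   # 6.4 GB/s theoretical full-duplex ceiling
print(round(5.5 / ceiling_GBps, 2))   # measured 5.5 GB/s is ~0.86 of the ceiling
```

On these assumptions the reported 5.5 GB/s corresponds to roughly 86% of the theoretical link ceiling.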

Test Configuration

Software
- Test application: Iometer 1.1.0
- Host OS: Windows 2008 R2 x64
- Initiator driver: elxfc 2.72.012.001
- Target OS: RHEL 6.4 x64
- Target driver: SCST driver (ocs_fc_scst), R10.0.2_Build3 10.0.828.0
- Target firmware: DK4.8 Beta, v1.1.65.5

Test Environment Setup (illustration on following page)
- Initiator servers (4): 4 ports (4 initiators x 1 port), 8 LUNs per port
- Target server (1): 2 ports (1 target x 2 ports), 4 LUNs per port
- Switch: Brocade 16G switch (SW6510)
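A quick tally of the fan-out in this configuration (all counts taken from the setup above; the variable names are illustrative only):

```python
# Counts from the test configuration: 4 initiator servers x 1 port each,
# 8 LUNs per initiator port; 1 target server x 2 ports, 4 LUNs per target port.
initiator_ports = 4 * 1
luns_per_initiator_port = 8
target_ports = 1 * 2
luns_per_target_port = 4

lun_paths = initiator_ports * luns_per_initiator_port    # LUN paths Iometer drives
target_lun_exports = target_ports * luns_per_target_port # LUN instances exported
print(lun_paths, target_lun_exports)  # 32 8
```

So Iometer drives 32 initiator-side LUN paths against 8 exported LUN instances on the two target ports, which keeps both ports saturated with outstanding I/O.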

Test Environment Setup Illustration
[Diagram: four initiator servers, each driving 8 LUNs per FC port, connect through a Brocade 16G switch (SW6510) to the target server's two ports (4 LUNs per port). Each initiator runs an Iometer Dynamo worker; a remote server running Iometer coordinates the Dynamo workers over an Ethernet link.]

Initiator Server Information (identical for each of the 4 initiators)
- OS name: Microsoft Windows Server 2008 R2 Enterprise
- Version: 6.1.7601 Service Pack 1 Build 7601
- System manufacturer: Supermicro
- System model: X9SRW-F
- Processor: Intel Xeon CPU E5-2640 0 @ 2.50 GHz, 2501 MHz, 6 cores, 12 logical processors
- BIOS version/date: American Megatrends Inc. 3.00, 7/5/2013
- SMBIOS version: 2.7
- Hardware abstraction layer version: 6.1.7601.17514
- Installed physical memory (RAM): 16.0 GB
- Total physical memory: 16.0 GB
- Total virtual memory: 31.9 GB

Target System Information
- OS name: Red Hat Enterprise Linux Server 6.4 (Santiago)
- Kernel: 2.6.32-358.el6.x86_64
- System manufacturer: Supermicro
- System model: X9SRW-F
- Processor: Intel Xeon CPU E5-2643 0 @ 3.30 GHz, 8 cores
- BIOS version/date: American Megatrends Inc., version 3.00, 07/05/2013
- Total physical memory: 16.0 GB
- Adapter: LPe16002B-M6
- PCI slot: PCI Express 2 x8

SCST Configuration

# Automatically generated by SCST Configurator v2.2.0.
HANDLER vdisk_fileio {
    DEVICE disk1 {
        filename /mnt/disk1
        threads_num 1
        threads_pool_type shared
    }
    DEVICE disk2 {
        filename /mnt/disk2
        threads_num 1
        threads_pool_type shared
    }
    DEVICE disk3 {
        filename /mnt/disk3
        threads_num 1
        threads_pool_type shared
    }
    DEVICE disk4 {
        filename /mnt/disk4
        threads_num 1
        threads_pool_type shared
    }
}

TARGET_DRIVER ocs_scst {
    TARGET 10000090FA533DC4 {
        cpu_mask ff
        enabled 1
        rel_tgt_id 1
        LUN 0 disk1
        LUN 1 disk2
        LUN 2 disk3
        LUN 3 disk4
    }
    TARGET 10000090FA533DC5 {
        cpu_mask ff
        enabled 1
        rel_tgt_id 2
        LUN 0 disk1
        LUN 1 disk2
        LUN 2 disk3
        LUN 3 disk4
    }
}
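The cpu_mask ff entries in the SCST configuration spread each target's processing threads across CPUs 0-7, i.e. all eight cores of the target system. As an illustration of how such a hex affinity mask maps to CPU indices (the helper below is illustrative only, not part of SCST):

```python
# Build the hex affinity-mask string for a set of CPU indices:
# bit N of the mask corresponds to CPU N, so CPUs 0-7 give 0xff.
def cpu_mask(cpus):
    """Return the hex affinity mask string for an iterable of CPU indices."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return format(mask, "x")

print(cpu_mask(range(8)))  # "ff" -- all 8 cores, as in the config above
print(cpu_mask([4, 5, 6, 7]))  # "f0" -- restrict to the upper four cores
```

Narrowing the mask (for example to "f0") is the kind of adjustment the performance-tuning notes discuss when isolating target threads from other interrupt-heavy work.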

Sequential Read IOPS (chart; peak ~1M IOPS)

Random Read IOPS (chart)

Sequential Write IOPS (chart)

Random Write IOPS (chart)

Sequential Read Bandwidth (chart; >3 GB/s throughput)

Random Read Bandwidth (chart)

Sequential Write Bandwidth (chart)

Random Write Bandwidth (chart)

Sequential 50/50 Read/Write Bandwidth (chart; >5.5 GB/s throughput)

Random 50/50 Read/Write Bandwidth (chart)

Design Considerations
LPe16000B target performance is optimized for multiple initiators.

OneCore Storage Performance Tuning Application Note
Contains details on improving adapter performance when using the OneCore Storage Linux drivers in a multi-core CPU environment. It describes the CPU affinity and CPU frequency scaling performance settings, and includes a script (ocs_perf_config) that applies them. Available on the Developer Portal and in the SDK package.

LPe1600xB Target Performance Measurements Guidelines Application Note
Describes the test tools, setup, procedures, and guidelines for measuring the performance of the LightPulse LPe16000B/LPe16002B Gen 5 Fibre Channel (16GFC/8GFC/4GFC) HBA in target mode. Available on the Developer Portal.
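The affinity and frequency-scaling settings the tuning note describes might look like the following generic Linux commands. This is a sketch under assumptions only: the actual ocs_perf_config script ships with the SDK and is not reproduced in this report, and the IRQ number shown is hypothetical.

```shell
#!/bin/sh
# Illustrative sketch of the two setting classes named in the tuning note;
# not the contents of the actual ocs_perf_config script.

# CPU frequency scaling: keep all cores at full clock speed.
for gov in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    echo performance > "$gov"
done

# CPU affinity: steer the HBA's interrupt to a chosen core.
# (IRQ 64 is hypothetical; find the real one in /proc/interrupts.)
echo 1 > /proc/irq/64/smp_affinity
```

Settings like these reduce interrupt migration and clock ramp-up latency, which matters most at the small block sizes where the ~1M IOPS peak is measured.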
