Ultra-Low Latency Down to Microseconds: SSDs Make It Possible

DAL is a large ocean shipping company whose business covers ocean and land transportation, storage, cargo handling, and ship management. Every day its application system processes more than 100,000 transaction orders and performs data backup and integration activities. Every night, storage and ship scheduling plans are drawn up and must be completed before 6:00 a.m. the next day. In early 2013, DAL planned to add routes to North America, which would increase the number of transaction orders per day to 150,000. Its legacy system could not handle that many orders every day, yet a one-day delay would cause a loss of 100,000 euros. Analysis showed that a major performance bottleneck lay in the response latency of the databases and disk drives during peak hours: when system IOPS peaked at 200,000, I/O latency reached 8 ms. This high latency left 80% of the database operating time wasted waiting on I/O and prolonged the batch processing window. To resolve the performance issue, a storage system with ultra-low I/O latency was installed for DAL, ensuring that all 150,000 daily transaction orders were processed within the required time.

Performance bottlenecks and countermeasures

As enterprise data centers take on more applications, those applications place ever more demanding requirements on latency and service levels. A mature, high-performance IT system underpins enterprise operations, covering enterprise resource planning, customer relationship management, end-to-end manufacturing, and enterprise management. The public sector is seeing the same change. As cloud computing and server virtualization develop, each IT appliance must handle a diverse mix of applications, while users expect shorter waiting times and higher system availability. This challenges IT systems on response time, concurrent processing capability, and query latency. In the past, many methods were tried to improve IT system efficiency, such as adding servers, upgrading server configurations, and reducing server workloads.

Performance bottlenecks persisted nonetheless, and server computing requests remained starved of storage resources. Huawei has diagnosed hundreds of user systems suffering from this problem and found that 87% of system performance issues occur in the interaction between the storage system and the application database. In other words, the response latency and concurrency of the storage system determine those of the entire application system.

Response latency is the storage performance indicator that concerns users most. For mission-critical services in particular, response latency directly determines user experience. Stable, low latency improves user experience and also reduces the number of servers required, saving equipment room footprint and power consumption. Shorter latency also raises the IOPS a customer can demand of the system, which helps system providers increase profits: for example, if the data access latency of a data-intensive application is reduced by 90%, the IOPS it can drive rises roughly tenfold. This benefit is especially significant for applications such as OLTP, OLAP, high-performance computing, and virtual desktops.

One clarification: the latency discussed in this article is not the average system latency but the latency within which at least 99% of I/Os complete (the 99% latency). The 99% latency matters more to applications than the average latency because most of these applications are online, data-intensive workloads: a single application request triggers many data access operations, and the latency of the request is determined by the slowest of them. This is why we focus on the 99% latency rather than the average latency.
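To see why the tail dominates, consider a request that fans out into many storage operations. The short Python sketch below is a minimal illustration with made-up latency numbers (a ~200 µs common case and a 1% slow path), not measurements from any real system; it shows that the request latency, being the maximum over its operations, is governed by the per-operation tail rather than the mean.

    import random

    random.seed(42)

    def op_latency_us():
        """One storage operation: usually ~200 us, occasionally a slow outlier.
        The distribution is an assumption made purely for illustration."""
        if random.random() < 0.01:              # 1% of operations hit a slow path
            return random.uniform(2000, 8000)
        return random.gauss(200, 30)

    def request_latency_us(fan_out):
        """A request finishes only when its slowest operation finishes."""
        return max(op_latency_us() for _ in range(fan_out))

    def percentile(samples, p):
        s = sorted(samples)
        return s[int(p * (len(s) - 1))]

    ops = [op_latency_us() for _ in range(100_000)]
    reqs = [request_latency_us(20) for _ in range(10_000)]   # 20 operations per request

    print(f"per-op  mean {sum(ops)/len(ops):6.0f} us   99% {percentile(ops, 0.99):6.0f} us")
    print(f"request mean {sum(reqs)/len(reqs):6.0f} us   99% {percentile(reqs, 0.99):6.0f} us")
    # With a fan-out of 20 and a 1% slow path, about 1 - 0.99**20 (roughly 18%) of
    # requests contain at least one slow operation, so even the average request
    # latency is set by the per-operation tail.

Lowering the average alone therefore helps little; it is the 99% latency that has to come down.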

As shown in the following figure, when service pressure increases, the number of I/Os to be processed also rises. Maintaining stable, low latency under heavy I/O pressure (such as 1 million IOPS) is what keeps applications responsive.

[Figure: Comparative analysis of storage system performance]

Relationship between system latency and disk drives

Before designing a mechanism that provides low latency for massive numbers of concurrent access requests, one question must be answered: what latency are we actually aiming for? Experience shows that every order-of-magnitude reduction in latency brings a qualitatively new user experience, and that pushing latency below 1 ms delivers the best experience. Such low system latency requires microsecond-level processing from every part of the system, including hardware, software, architecture, and protocols. In traditional storage systems, the latency of hard disk drives (HDDs) slows the whole processing path and pushes system latency to at least 10 ms. With solid state drives (SSDs) in the storage system, most critical data can be kept on SSDs, which brings data access latency below 1 ms; with the most advanced DRAM-based SSDs, latency can fall below 100 µs, or even 10 µs. Unlike the older approach of aggregating large numbers of HDDs to raise system IOPS, SSDs raise system IOPS through their low latency, which improves performance while reducing infrastructure cost. Furthermore, the internal controllers of SSDs access the back-end NAND flash chips in parallel, so SSDs also accelerate the system's handling of concurrent requests.
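The link between low latency and high IOPS is Little's law: sustained IOPS is roughly the number of outstanding I/Os divided by the per-I/O latency. The sketch below uses illustrative device figures that are assumed here (about 5 ms per random HDD access served one at a time, about 100 µs per SSD access with a modest queue depth), not numbers from this paper, to estimate how many devices would be needed for the 200,000 IOPS peak in the DAL example.

    import math

    def device_iops(latency_s: float, concurrency: int) -> float:
        """Little's law: sustained IOPS = outstanding I/Os / per-I/O latency."""
        return concurrency / latency_s

    def devices_needed(target_iops: float, latency_s: float, concurrency: int) -> int:
        return math.ceil(target_iops / device_iops(latency_s, concurrency))

    TARGET_IOPS = 200_000  # peak load from the DAL example above

    # Assumed, illustrative device models (not figures from the paper):
    # an HDD's single actuator effectively serves one random request at a time,
    # ~5 ms each; an SSD serves many requests in parallel at ~100 us each.
    print("HDDs needed:", devices_needed(TARGET_IOPS, 5e-3, 1))      # -> 1000
    print("SSDs needed:", devices_needed(TARGET_IOPS, 100e-6, 32))   # -> 1

Under these assumptions, around a thousand spindles collapse to a handful of SSDs (redundancy aside), which is exactly the sense in which low latency, rather than drive count, delivers the IOPS.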

Disk drives are an important factor in determining system latency, but they are not the only one. Latency is the outcome of a complicated pipeline: every storage request, from entering the storage system to being returned to the user, is handled by multiple system resources (CPUs, locks, caches, disk drives, internal networks, and I/O interfaces) and is queued many times. Each processing and queuing step adds latency, and contention for resources and operating system scheduling prolong it further. The software processing path and the protocol stack overhead therefore demand just as much attention.

Using SSDs in storage systems for microsecond-level latency

Using SSDs to accelerate system performance is not a matter of simply replacing HDDs with SSDs; the system architecture must be redesigned. Traditional storage systems are built around caches, which use read-hit and write-back mechanisms to hide HDD read and write latency, and they use an index table designed to keep memory usage low. That index table is comparatively slow, however, and is inadequate for SSDs whose own latency is only tens or hundreds of microseconds: the lookup cost of the traditional index becomes a significant share of the total and hampers SSD performance. An SSD storage system therefore needs a new cache index structure with a much shorter lookup latency. Such a design may consume extra CPU and memory, but it deliberately trades some CPU and memory usage for high performance and frees up more system resources to process I/O requests. When designing SSD storage systems, latency reduction must be the foremost concern: traditional storage systems optimize for storage space utilization, whereas SSD storage systems optimize for latency.
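As a concrete, deliberately simplified illustration of that trade-off, the sketch below keeps the cache index as a flat hash map from logical block address to cache slot, so a lookup costs a single hash probe at the price of one in-memory entry per cached block. This is a minimal model written for this article under its own assumptions, not Huawei's actual index design.

    from dataclasses import dataclass

    @dataclass
    class CacheSlot:
        lba: int
        data: bytes
        dirty: bool = False

    class FlatCacheIndex:
        """Hash-map cache index: one entry per cached block, O(1) lookup.
        Uses more memory than a space-optimized tree index, but the lookup
        cost stays negligible next to a ~100 us SSD access."""

        def __init__(self) -> None:
            self._slots: dict[int, CacheSlot] = {}

        def lookup(self, lba: int) -> CacheSlot | None:
            return self._slots.get(lba)              # a single hash probe

        def insert(self, lba: int, data: bytes, dirty: bool = False) -> None:
            self._slots[lba] = CacheSlot(lba, data, dirty)

        def evict(self, lba: int) -> CacheSlot | None:
            return self._slots.pop(lba, None)

    # Usage: a read hit is served from memory; a miss goes to the SSD and is
    # then inserted, so the index lookup must never become the bottleneck.
    index = FlatCacheIndex()
    index.insert(42, b"\x00" * 4096)
    assert index.lookup(42) is not None and index.lookup(7) is None

A production index would also bound its memory with an eviction policy and handle concurrent access, but the design choice is the one described above: spend memory and CPU on the index so that its latency stays far below that of the SSD it fronts.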

Another difference between HDDs and SSDs is that an HDD serves random accesses roughly 100 times more slowly than sequential accesses, whereas an SSD serves random accesses only two to four times more slowly. That large gap is why traditional storage systems employ a variety of cache algorithms to turn accesses to the HDDs into sequential ones. Those cache algorithms are not suitable for SSDs. In SSD storage systems, attention shifts to issues such as flash page utilization, cache pollution, and data selection overhead, and a range of technologies has been developed specifically for SSDs. The SSD data selection algorithm effectively separates sequential data, temporary data, and hot data, and retires data that is no longer valuable. Huawei's proprietary SSD granularity feature matches the data granularity to the SSD's flash page and ECC granularities, reducing SSD write penalties and write amplification (a small write that is misaligned with the flash page otherwise forces the whole page to be read, modified, and rewritten).

Conclusion

With the development of cloud computing and server virtualization, a storage system must serve a diverse mix of applications. The storage industry is going through a revolution that is pushing storage systems to become converged and unified. SSDs are replacing HDDs as the mainstream storage medium, and they bring stable, ultra-low latency to storage systems. Huawei has introduced SSDs into its storage systems and developed a wide range of technologies to exploit their high-IOPS, low-latency advantages. With these technologies, Huawei storage systems can deliver millions of IOPS at microsecond-level latency, meeting the long-term requirements of enterprise data centers.