Rocksteady: Fast Migration for Low-Latency In-memory Storage. Chinmay Kulkarni, Aniraj Kesavan, Tian Zhang, Robert Ricci, Ryan Stutsman

1 Rocksteady: Fast Migration for Low-Latency In-memory Storage Chinmay Kulkarni, Aniraj Kesavan, Tian Zhang, Robert Ricci, Ryan Stutsman 1

2 Introduction Distributed low-latency in-memory key-value stores are emerging Predictable response times: ~10 µs median, ~60 µs 99.9th percentile Problem: Must migrate data between servers Minimize performance impact of migration: go slow? Quickly respond to hot spots, skew shifts, load spikes: go fast? Solution: Fast data migration with low impact Early ownership transfer of data, leverage workload skew Low priority, parallel and adaptive migration Result: Migration protocol for the RAMCloud in-memory key-value store Migrates 256 GB in 6 minutes, 99.9th percentile latency less than 250 µs Median latency recovers from 40 µs to 20 µs in 14 s 2

3 Why Migrate Data? Client 1 Client 2 multiget(A B) multiget(C D) 6 Million A B C D Server 1 Server 2 No Locality Fanout=7 Poor spatial locality High multiget() fan-out More RPCs 3

4 Migrate To Improve Spatial Locality Client 1 Client 2 multiget(A B) multiget(C D) 6 Million A B C D Server 1 Server 2 C No Locality Fanout=7 4

5 Spatial Locality Improves Throughput Client 1 Client 2 multiget(A B) multiget(C D) 25 Million 6 Million A B C D Server 1 Server 2 Full Locality Fanout=1 No Locality Fanout=7 Better spatial locality Fewer RPCs Higher throughput Benefits multiget(), range scans 5
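
To make the fan-out arithmetic concrete, here is a minimal Python sketch (not RAMCloud code; the placement map and server names are illustrative): each distinct server that owns one of the requested keys costs the client one RPC, so co-locating a client's keys drives the fan-out toward 1.

```python
# Sketch: count the RPCs a multiget must issue under an assumed
# key -> server placement map (illustrative, not RAMCloud code).

def multiget_fanout(keys, placement):
    """One RPC per distinct server owning a requested key."""
    return len({placement[k] for k in keys})

# Before migration: Client 1's keys are split across two servers.
placement = {"A": "server1", "B": "server2", "C": "server1", "D": "server2"}
print(multiget_fanout(["A", "B"], placement))  # 2 RPCs, no locality

# After migrating B next to A: full locality, a single RPC.
placement["B"] = "server1"
print(multiget_fanout(["A", "B"], placement))  # 1 RPC
```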

6 The RAMCloud Key-Value Store Client Client Client Client Coordinator Data Center Fabric All Data in RAM Kernel Bypass/DPDK 10 µs reads Master Master Master Master 6

7 The RAMCloud Key-Value Store Client Client Client Client Write RPC Coordinator Data Center Fabric Master Master Master Master 7

8 The RAMCloud Key-Value Store Client Client Client Client Write RPC Coordinator Data Center Fabric 1x in DRAM 3x on Disk Master Master Master Master 8

9 Fault-tolerance & Recovery In RAMCloud Client Client Client Client Coordinator Data Center Fabric Master Master Master Master 9

10 Fault-tolerance & Recovery In RAMCloud Client Client Client Client Coordinator Data Center Fabric 2 seconds to recover Master Master Master Master 10

11 Performance Goals For Migration Maintain low access latency: 10 µs median latency System extremely sensitive Tail latency matters at scale Even more sensitive Migrate data fast Workloads dynamic Respond quickly Growing DRAM storage: 512 GB per server Slow data migration: an entire day to scale the cluster 11

12 Rocksteady Overview: Early Ownership Transfer Problem: Loaded source can bottleneck migration Solution: Instantly shift ownership and all load to target Client 1 Client 2 Client 3 Client 4 Reads and Writes Source Server Target Server 12

13 Rocksteady Overview: Early Ownership Transfer Problem: Loaded source can bottleneck migration Solution: Instantly shift ownership and all load to target Client 1 Client 2 Client 3 Client 4 Instantly Redirected All future operations serviced at Target Source Server Target Server Creates headroom to speed migration 13
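
A hedged sketch of the early-transfer idea, assuming a coordinator-held ownership map (the class and method names are illustrative, not RAMCloud's actual API): ownership flips at migration start, before any data moves, so all client load lands on the target immediately.

```python
# Sketch of early ownership transfer (illustrative names): the
# coordinator flips the tablet's owner at migration *start*, not at
# its end, so every client operation is redirected to the target.

class Coordinator:
    def __init__(self):
        self.owner = {"tablet-7": "source"}

    def start_migration(self, tablet, target):
        # All reads and writes now go to the still-mostly-empty target,
        # which creates CPU headroom on the overloaded source.
        self.owner[tablet] = target

    def locate(self, tablet):
        return self.owner[tablet]

coord = Coordinator()
coord.start_migration("tablet-7", "target")
assert coord.locate("tablet-7") == "target"  # clients redirected instantly
```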

14 Rocksteady Overview: Leverage Skew Problem: Data has not arrived at target yet Solution: On-demand migration of unavailable data Client 1 Client 2 Client 3 Client 4 Read On-demand Pull Source Server Target Server 14

15 Rocksteady Overview: Leverage Skew Problem: Data has not arrived at target yet Solution: On-demand migration of unavailable data Client 1 Client 2 Client 3 Client 4 Read Hot keys move early Source Server Target Server Median Latency recovers to 20 µs in 14 s 15
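
A minimal sketch of the on-demand path, assuming the target can fetch a single record from the source on a miss (names are illustrative): under a skewed workload, the first access to each hot key pulls it over, so the popular records migrate early and the median recovers quickly.

```python
# Sketch (illustrative, not RAMCloud code): the target owns the data
# but may not hold a key yet; a miss triggers a priority pull of just
# that record from the source, so hot keys migrate first under skew.

class Target:
    def __init__(self, source):
        self.store = {}        # records that have arrived so far
        self.source = source   # handle to the source's copy

    def read(self, key):
        if key not in self.store:
            # Priority on-demand pull: fetch only this record now; the
            # background parallel pulls move the rest later.
            self.store[key] = self.source[key]
        return self.store[key]

source_copy = {"hot": 1, "cold": 2}
t = Target(source_copy)
print(t.read("hot"))   # first access pulls the record on demand
print(t.read("hot"))   # subsequent accesses are served locally
```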

16 Rocksteady Overview: Adaptive and Parallel Problem: Old single-threaded protocol limited to 130 MB/s Solution: Pipelined and parallel at source and target Client 1 Client 2 Client 3 Client 4 On-demand Pull Source Server Parallel Pulls Target Server 16

17 Rocksteady Overview: Adaptive and Parallel Problem: Old single-threaded protocol limited to 130 MB/s Solution: Pipelined and parallel at source and target Client 1 Client 2 Client 3 Client 4 On-demand Pull Target Driven Yields to On-demand Pulls Source Server Parallel Pulls Target Server Moves 758 MB/s 17

18 Rocksteady Overview: Eliminate Sync Replication Problem: Synchronous replication bottleneck at target Solution: Safely defer replication until after migration Client 1 Client 2 Client 3 Client 4 On-demand Pull Replication Source Server Parallel Pulls Target Server 18

19 Rocksteady Overview: Eliminate Sync Replication Problem: Synchronous replication bottleneck at target Solution: Safely defer replication until after migration Client 1 Client 2 Client 3 Client 4 Replication Source Server Target Server 19

20 Rocksteady: Putting it all together Instantaneous ownership transfer Immediate load reduction at overloaded source Creates headroom for migration work Leverage skew to rapidly migrate hot data Target comes up to speed with little data movement Adaptive, parallel, pipelined at source and target All cores avoid stalls, but yield to client-facing operations Safely defer replication at target Eliminates replication bottleneck and contention 20

21 Rocksteady Instantaneous ownership transfer Leverage skew to rapidly migrate hot data Adaptive, parallel, pipelined at source and target Safely defer synchronous replication at target 21

22 Evaluation Setup Client YCSB-B (95/5) Skew=0.99 Client YCSB-B (95/5) Skew=0.99 Client YCSB-B (95/5) Skew=0.99 Client YCSB-B (95/5) Skew=0.99 300 Million Records 45 GB Source Server Target Server 22

23 Evaluation Setup Client YCSB-B (95/5) Skew=0.99 Client YCSB-B (95/5) Skew=0.99 Client YCSB-B (95/5) Skew=0.99 Client YCSB-B (95/5) Skew=0.99 150 Million Records 22.5 GB 150 Million Records 22.5 GB 150 Million Records 22.5 GB Target Server 23

24 Instantaneous Ownership Transfer Source CPU Utilization: 80% Before Ownership Transfer, 25% Immediately After Transfer, Creating 55% Source CPU Headroom Before migration: Source over-loaded, Target under-loaded Ownership transfer creates Source headroom for migration 24

25 Rocksteady Instantaneous ownership transfer Leverage skew to rapidly migrate hot data Adaptive, parallel, pipelined at source and target Safely defer synchronous replication at target 25

26 Leverage Skew To Move Hot Data Before migration: Median = 10 µs, 99.9th = 60 µs During migration: Uniform (Low): 99.9th 240 µs, Median 155 µs; Skew=0.99: 99.9th 245 µs, Median 28 µs; Skew=1.5 (High): 99.9th 75 µs, Median 17 µs After ownership transfer, hot keys pulled on-demand More skew → Median restored faster (migrate fewer hot keys) 26

27 Rocksteady Instantaneous ownership transfer Leverage skew to rapidly migrate hot data Adaptive, parallel, pipelined at source and target Safely defer synchronous replication at target 27

28 Parallel, Pipelined, & Adaptive Pulls Target Hash Table Per-Core Buffers Dispatch Core NIC Worker Cores Polling replay replay read() Migration Manager Pull Buffers pulling Target-driven migration manager Co-partitioned hash tables, pull from partitions in parallel Replay pulled data into per-core buffers 28
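
A simplified sketch of the target-side replay step, assuming one buffer per core so replay appends without synchronizing on a shared log head (a thread pool stands in for worker cores; the real system also partitions the hash table, which this sketch glosses over):

```python
# Sketch of target-side replay into per-core buffers (assumed
# structure): the migration manager issues pulls, and idle worker
# cores replay pulled batches into their own buffers in parallel.
from concurrent.futures import ThreadPoolExecutor

NUM_CORES = 4
per_core_buffers = [[] for _ in range(NUM_CORES)]  # one side log per core
hash_table = {}

def replay(core_id, batch):
    # Each core appends only to its own buffer, so replay never
    # contends on a shared log head.
    for key, value in batch:
        per_core_buffers[core_id].append((key, value))
        hash_table[key] = value

batches = [[(f"k{b}-{i}", i) for i in range(3)] for b in range(NUM_CORES)]
with ThreadPoolExecutor(max_workers=NUM_CORES) as pool:
    for core_id, batch in enumerate(batches):
        pool.submit(replay, core_id, batch)
print(len(hash_table), "records replayed")
```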

29 Parallel, Pipelined, & Adaptive Pulls Source Hash Table Copy Addresses Dispatch Core NIC Polling read() pull(11) Gather List pull(17) Gather List Stateless, passive Source Granular 20 KB pulls 29
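
A sketch of what a stateless pull handler could look like, assuming the target drives migration by passing back a resume position on every pull; the record and pull sizes come from the talk's numbers (30 B keys, 100 B values, ~20 KB pulls), everything else is illustrative. Because the source keeps no per-migration state, a pull can be retried or reissued freely.

```python
# Sketch of a stateless source-side pull (illustrative names): scan a
# hash-table partition from the caller-supplied position, return ~20 KB
# of records plus the next position, and keep no state on the source.

RECORD_SIZE = 130        # assumed: 30 B key + 100 B value
PULL_BYTES = 20 * 1024   # granular ~20 KB pulls, as in the talk

def handle_pull(partition, resume_pos):
    batch, size, pos = [], 0, resume_pos
    while pos < len(partition) and size < PULL_BYTES:
        batch.append(partition[pos])
        size += RECORD_SIZE
        pos += 1
    return batch, pos    # the target passes `pos` back on its next pull

# Target-driven loop over one partition.
partition = [(f"key{i}", b"x" * 100) for i in range(1000)]
pos, pulls = 0, 0
while pos < len(partition):
    batch, pos = handle_pull(partition, pos)
    pulls += 1
print(f"{pos} records moved in {pulls} pulls")
```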

30 Parallel, Pipelined, & Adaptive Pulls Redirect any idle CPU for migration Migration yields to regular requests, on-demand pulls 30

31 Rocksteady Instantaneous ownership transfer Leverage skew to rapidly migrate hot data Adaptive, parallel, pipelined at source and target Safely defer synchronous replication at target 31

32 Naïve Fault Tolerance During Migration Each server has a recovery log distributed across the cluster Source A B C Target Source Recovery Log A B C A B C A B C Target Recovery Log 32

33 Naïve Fault Tolerance During Migration Migrated data needs to be triplicated to the target's recovery log Source A B C Target Source Recovery Log A B C A B C A B C Target Recovery Log 33

34 Naïve Fault Tolerance During Migration Migrated data needs to be triplicated to the target's recovery log Source A B C Target Source Recovery Log A B C A B C A B C Target Recovery Log 34

35 Synchronous Replication Bottlenecks Migration Synchronous replication cuts migration speed by 34% Source A B C Target Source Recovery Log A B C A B C A B C Target Recovery Log 35

36 Rocksteady: Safely Defer Replication At The Target Replicate at Target only after all data has been moved over Source A B C Target A B C Source Recovery Log A B C A B C A B C Target Recovery Log 36

37 Writes/Mutations Served By Target Mutations have to be replicated by the target Write Source A B C Target A B C Source Recovery Log A B C A B C A B C Target Recovery Log A B C A B C A B C 37
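
A hedged sketch of the deferred-replication rule (structure and names are assumptions, not RAMCloud code): records arriving via migration pulls are applied without synchronous backup RPCs, client writes are made durable immediately, and the deferred portion of the log is replicated in bulk once migration finishes.

```python
# Sketch (illustrative): a target-side log that defers replication of
# migrated records but synchronously replicates client mutations.

class TargetLog:
    def __init__(self):
        self.entries = []
        self.replicated_upto = 0

    def apply_migrated(self, record):
        self.entries.append(record)       # no synchronous backup RPCs

    def apply_write(self, record):
        self.entries.append(record)
        self.replicate(len(self.entries)) # mutations made durable now

    def replicate(self, upto):
        # Stand-in for 3-way backup replication of the unreplicated tail.
        self.replicated_upto = max(self.replicated_upto, upto)

    def finish_migration(self):
        self.replicate(len(self.entries)) # bulk-replicate deferred entries

log = TargetLog()
log.apply_migrated(("A", 1))
log.apply_write(("B", 2))
log.finish_migration()
assert log.replicated_upto == len(log.entries)
```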

38 Crash Safety During Migration Need both Source and Target recovery logs for data recovery Initial table state on Source recovery log Writes/Mutations on Target recovery log Transfer ownership back to Source in case of crash Migration cancelled Recovery involves both recovery logs Source takes a dependency on Target recovery log at migration start Stored reliably at the cluster coordinator Identifies position after which mutations are present 38

39 If The Source Crashes During Migration Recover Source, recover from Target recovery log Source A B C Target A B C Source Recovery Log A B C A B C A B C Target Recovery Log A B C A B C A B C 39

40 If The Target Crashes During Migration Recover from Source and Target recovery log, recover Target Source A B C Target A B C Source Recovery Log A B C A B C A B C Target Recovery Log A B C A B C A B C 40

41 Crash Safety During Migration Need both Source and Target recovery logs for data recovery Initial table state on Source recovery log Writes/Mutations on Target recovery log Safely Transfer Ownership At Migration Start Transfer ownership back to Source in case of crash Migration cancelled Recovery involves both recovery logs Safely Delay Replication Till All Data Has Been Moved Source takes a dependency on Target recovery log at migration start Stored reliably at the cluster coordinator Identifies position after which mutations are present 41
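
A minimal sketch of the resulting recovery rule, assuming the coordinator durably stores the dependency position recorded at migration start (all names are illustrative): whichever server crashes, both logs are replayed, with migration-era mutations taken from the target's log after that position, and ownership reverts to the source because the migration is cancelled.

```python
# Sketch of the recovery decision (illustrative, not RAMCloud's API).
# The coordinator reliably holds the position in the target's recovery
# log after which migration-era mutations appear.

def recover(crashed_server, dep_position):
    plan = {
        # Both logs are always needed: pre-migration state from the
        # source's log, later mutations from the target's log tail.
        "replay": ["source-recovery-log",
                   f"target-recovery-log[{dep_position}:]"],
        "rebuild": crashed_server,         # the crashed master is rebuilt
        "owner_after_recovery": "source",  # migration is cancelled
    }
    return plan

coordinator_state = {"dependency_position": 4096}
print(recover("source", coordinator_state["dependency_position"]))
print(recover("target", coordinator_state["dependency_position"]))
```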

42 Performance of Rocksteady YCSB-B, 300 Million objects (30 B key, 100 B value), migrate half Median Access Latency (µs) Rocksteady vs. Source Keeps Ownership + Sync Replication Time Since Experiment Start (Seconds) 42

43 Performance of Rocksteady YCSB-B, 300 Million objects (30 B key, 100 B value), migrate half Median latency better after ~14 seconds Median Access Latency (µs) Rocksteady vs. Source Keeps Ownership + Sync Replication Time Since Experiment Start (Seconds) 43

44 Performance of Rocksteady YCSB-B, 300 Million objects (30 B key, 100 B value), migrate half Median Access Latency (µs) Median latency better after ~14 seconds 28% faster migration Rocksteady vs. Source Keeps Ownership + Sync Replication Time Since Experiment Start (Seconds) 44

45 Related Work Dynamo: Pre-partition hash keys Spanner: Applications given control over locality (Directories) FaRM and DrTM: Re-use in-memory redundancy for migration Squall: Reconfiguration protocol for H-Store Early ownership transfer Paced background migration Fully partitioned, serial execution, no synchronization Each migration pull stalls execution Synchronous replication at the target 45

46 Conclusion Distributed low-latency in-memory key-value stores are emerging Predictable response times: ~10 µs median, ~60 µs 99.9th percentile Problem: Must migrate data between servers Minimize performance impact of migration: go slow? Quickly respond to hot spots, skew shifts, load spikes: go fast? Solution: Fast data migration with low impact Leverage skew: Transfer ownership before data, move hot data first Low priority, parallel and adaptive migration Result: Migration protocol for the RAMCloud in-memory key-value store Migrates at 758 MB/s with 99.9th percentile latency < 250 µs Source Code: 46

47 Backup Slides 47

48 Rocksteady Tail Latency Breakdown Rocksteady 48

49 Rocksteady Tail Latency Breakdown Rocksteady Disabling parallel pulls brings tail latency down to 160 µs 49

50 Rocksteady Tail Latency Breakdown Rocksteady Disabling parallel pulls brings tail latency down to 160 µs Making on-demand pulls synchronous further brings tail latency down to 135 µs 50
