Enlightening the I/O Path: A Holistic Approach for Application Performance
Transcription
1 Enlightening the I/O Path: A Holistic Approach for Application Performance. Sangwook Kim 1,3, Hwanju Kim 2, Joonwon Lee 3, and Jinkyu Jeong 3 (1 Apposha, 2 Dell EMC, 3 Sungkyunkwan University)
2 Data-Intensive Applications: relational, document, key-value, column, and search
3 Data-Intensive Applications, common structure: a client sends a request and receives a response; inside the application, multiple tasks (T1-T4) issue I/Os through the operating system to the storage device. Example: MongoDB, with the client (foreground), checkpointer, log writer, and eviction worker.
4 Data-Intensive Applications, common structure (continued): the background tasks (in MongoDB, the checkpointer, log writer, and eviction worker) are problematic for application performance.
5 Application Impact: illustrative experiment with a YCSB update-heavy workload against MongoDB
6 Application Impact: under CFQ, operation throughput (ops/sec) drops periodically over elapsed time whenever the regular checkpoint task runs, with 30-second latency at the ...th percentile.
7 Application Impact: the same experiment with CFQ-IDLE shows that I/O priority does not help.
8 Application Impact: the same experiment with SPLIT-A, SPLIT-D, and QASIO shows that state-of-the-art schedulers do not help much.
9-10 What's the Problem? Independent policies in multiple layers: each layer processes I/Os with limited information. I/O priority inversion: background I/Os can arbitrarily delay foreground tasks.
11-16 Abstraction: Multiple Independent Layers. I/O processing is independent along the path Application, Caching Layer, File System Layer, Block Layer, Storage Device: read()/write() calls go through the buffer cache, which applies its own admission control; the block layer applies admission control at its block-level queue and reorders foreground (FG) and background (BG) I/Os; the storage device applies admission control at its device-internal queue and reorders I/Os yet again. Each layer decides independently.
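To make the layer-local admission control concrete, here is a toy model in the spirit of Linux's dirty-page throttling; the numbers, names, and the wait_for_writeback_progress() helper are assumptions of this sketch, not the talk's code. The point is that the throttling decision has no notion of which writer is on a request's critical path.

    #include <stdbool.h>

    /* Stand-in for sleeping until background writeback frees up dirty pages. */
    static void wait_for_writeback_progress(void) { }

    static unsigned long dirty_pages;
    static unsigned long total_pages = 1000000;     /* illustrative size */
    static const unsigned dirty_ratio_pct = 20;     /* one global threshold */

    static bool over_dirty_threshold(void)
    {
        return dirty_pages * 100 > total_pages * dirty_ratio_pct;
    }

    /* Buffered write path: every writer is throttled against the same global
     * dirty ratio, whether it is serving a client request or running a
     * background checkpoint; the layer cannot tell them apart. */
    static void buffered_write(unsigned long npages)
    {
        while (over_dirty_threshold())
            wait_for_writeback_progress();
        dirty_pages += npages;                      /* data now sits in the page cache */
    }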
17 What's the Problem? Independent policies in multiple layers: each layer processes I/Os with limited information. I/O priority inversion: background I/Os can arbitrarily delay foreground tasks.
18-21 I/O Priority Inversion: task dependency. Dependencies arise from locks and condition variables: a foreground task can block on a lock whose background holder is waiting for its own I/O, or wait on a condition variable that will only be signaled by a background task that is itself waiting for I/O; such wait-wake chains can also pass through user-level synchronization.
22-23 I/O Priority Inversion: I/O dependency. A foreground task can also wait directly on an outstanding background I/O already queued in the block layer or the device, for ensuring consistency and/or durability.
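A user-space sketch of how such an I/O dependency typically arises, assuming the usual database layout in which a background checkpointer and a foreground request handler touch the same data file; the function names are illustrative, not from the talk.

    #include <sys/types.h>
    #include <unistd.h>

    /* Background checkpointer: large, bursty, non-critical writes. */
    void checkpoint_page(int datafd, const void *page, size_t len, off_t off)
    {
        pwrite(datafd, page, len, off);
    }

    /* Foreground request handler: a small write that must become durable. */
    void handle_update(int datafd, const void *rec, size_t len, off_t off)
    {
        pwrite(datafd, rec, len, off);
        /* fsync() waits for all outstanding writeback of this file, including
         * the checkpointer's pages already submitted to the block layer, so the
         * client-facing request ends up waiting on non-critical I/O. */
        fsync(datafd);
    }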
24 Our Approach: request-centric I/O prioritization (RCP). Critical I/O: I/O in the critical path of request handling. Policy: holistically prioritize critical I/Os along the I/O path. (Figure: operation throughput over elapsed time for CFQ, CFQ-IDLE, SPLIT-A, SPLIT-D, QASIO, and RCP; RCP achieves 100 ms latency at the ...th percentile.)
25 Challenges: how to accurately identify I/O criticality, and how to effectively enforce I/O criticality.
26 Critical I/O Detection: an enlightenment API (an interface for tagging foreground tasks) and I/O priority inheritance (handling task dependency and handling I/O dependency).
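A minimal sketch of what the enlightenment API could look like from the application side. The talk only states that there is an interface for tagging foreground tasks, so the syscall number and wrapper below are hypothetical and are not a real Linux interface.

    #include <unistd.h>
    #include <sys/syscall.h>

    /* Hypothetical syscall number, for illustration only; no such syscall
     * exists in mainline Linux. */
    #define SYS_set_io_critical 999

    static long set_io_critical(int on)
    {
        return syscall(SYS_set_io_critical, on);
    }

    /* A server would tag each request-handling (foreground) thread once at
     * startup; background threads (checkpointer, log writer, ...) stay untagged. */
    void *client_worker(void *arg)
    {
        set_io_critical(1);
        /* ... serve client requests; I/Os issued here are treated as critical ... */
        return arg;
    }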
27 I/O Priority Inheritance: handling task dependency. Locks: when a foreground task blocks on a lock held by a background task, the holder inherits the foreground criticality so that its I/O is submitted and completed as critical, and the inheritance is reverted on unlock. Condition variables: a waiting foreground task registers on the condition variable, and the background task that will wake it inherits criticality for its I/O until it signals and the I/O completes.
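An illustrative model of the lock side of this inheritance protocol (the condition-variable case works analogously via registration before waiting); the two-flag bookkeeping is an assumption of this sketch rather than the paper's implementation.

    #include <stddef.h>

    struct task { int critical; int inherited; };
    struct lock { struct task *owner; };

    /* Called when a waiter is about to block on a lock. */
    static void lock_block_on(struct task *waiter, struct lock *lk)
    {
        if (waiter->critical && lk->owner && !lk->owner->critical) {
            lk->owner->critical = 1;    /* holder's pending I/O is now submitted
                                           and handled as critical */
            lk->owner->inherited = 1;
        }
        /* ... sleep until the lock is released ... */
    }

    /* Called when the holder releases the lock: revert any inherited boost. */
    static void lock_release(struct task *holder, struct lock *lk)
    {
        if (holder->inherited) {
            holder->critical = 0;
            holder->inherited = 0;
        }
        lk->owner = NULL;
    }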
28 I/O Priority Inheritance: handling I/O dependency.
29-37 Non-critical I/O tracking: an I/O entering the block layer passes a queue-admission stage and then a scheduler queueing stage. Each non-critical I/O is tracked by an NCIO descriptor reachable from a per-device root through a location resolver indexed by sector number; the descriptor is allocated when the non-critical I/O is admitted, updated as the I/O moves through the admission and scheduler queueing stages, and deleted on completion. When a critical task later blocks on one of these outstanding I/Os, the descriptor can be found by sector number and the I/O reprioritized.
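An illustrative model of the NCIO bookkeeping; a flat linked list stands in for the per-device root and the sector-indexed location resolver, and all names are assumptions of the sketch.

    #include <stddef.h>
    #include <stdint.h>

    struct ncio {                 /* descriptor for one outstanding non-critical I/O */
        uint64_t sector;          /* location used as the lookup key */
        int      stage;           /* admission stage or scheduler queueing stage */
        int      promoted;        /* set when a critical task turns out to depend on it */
        struct ncio *next;
    };

    static struct ncio *ncio_list;       /* per-device root in the real design */

    static void ncio_add(struct ncio *n) /* allocate/register on admission */
    {
        n->next = ncio_list;
        ncio_list = n;
    }

    static struct ncio *ncio_lookup(uint64_t sector)
    {
        for (struct ncio *n = ncio_list; n; n = n->next)
            if (n->sector == sector)
                return n;
        return NULL;
    }

    /* Called when a critical task blocks on the I/O covering `sector`
     * (e.g., fsync waiting on a write already submitted by a background task). */
    static void ncio_promote(uint64_t sector)
    {
        struct ncio *n = ncio_lookup(sector);
        if (n)
            n->promoted = 1;      /* reprioritize: move to the critical queue */
    }

    static void ncio_complete(uint64_t sector)   /* delete on completion */
    {
        struct ncio **pp = &ncio_list;
        while (*pp && (*pp)->sector != sector)
            pp = &(*pp)->next;
        if (*pp)
            *pp = (*pp)->next;
    }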
38 Handling Transitive Dependency: possible states of a dependent task: blocked on another task, blocked on an I/O, or blocked at the admission stage.
39-42 Recording blocking status: each task records what it is currently blocked on, so inheritance can be applied transitively: if the recorded object is a task, criticality is passed along the chain; if it is an I/O, that I/O is reprioritized; if the task is blocked at the admission stage, it retries admission with the inherited criticality.
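A compact model of the recorded blocking status and the transitive walk; the helper functions and field names are assumed for illustration.

    #include <stdint.h>

    enum blocked_on { BLOCKED_NONE, BLOCKED_ON_TASK, BLOCKED_ON_IO, BLOCKED_AT_ADMISSION };

    struct io { uint64_t sector; int critical; };   /* an outstanding I/O, e.g. an NCIO */

    struct task {
        int critical;
        enum blocked_on state;          /* recorded blocking status */
        struct task *dep_task;          /* valid when BLOCKED_ON_TASK */
        struct io   *dep_io;            /* valid when BLOCKED_ON_IO */
    };

    /* Stubs standing in for the real block-layer actions. */
    static void reprioritize_io(struct io *io)  { io->critical = 1; }
    static void retry_admission(struct task *t) { t->state = BLOCKED_NONE; }

    /* Apply inheritance transitively, starting from whatever a critical task
     * just blocked on. */
    static void inherit_criticality(struct task *t)
    {
        while (t && !t->critical) {
            t->critical = 1;
            switch (t->state) {
            case BLOCKED_ON_TASK:       /* follow the chain to the next dependency */
                t = t->dep_task;
                break;
            case BLOCKED_ON_IO:         /* the recorded I/O gets reprioritized */
                reprioritize_io(t->dep_io);
                return;
            case BLOCKED_AT_ADMISSION:  /* the task retries admission as critical */
                retry_admission(t);
                return;
            default:
                return;
            }
        }
    }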
43 Challenges: how to accurately identify I/O criticality (enlightenment API, I/O priority inheritance, recording blocking status), and how to effectively enforce I/O criticality.
44 Criticality-Aware I/O Prioritization. Caching layer: apply a low dirty ratio to non-critical writes (1% by default). Block layer: isolate the allocation of block queue slots; maintain two FIFO queues and schedule critical I/Os first; limit the number of outstanding non-critical I/Os (1 by default); support queue promotion to resolve I/O dependency.
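An illustrative model of the block-layer side of this policy; the queue representation and function names are assumptions, and the isolation of block queue slot allocation at the admission stage is omitted for brevity.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct req { uint64_t sector; bool critical; struct req *next; };
    struct fifo { struct req *head, *tail; };

    static struct fifo crit_q, noncrit_q;          /* the two FIFO queues */
    static int outstanding_noncrit;
    static const int noncrit_limit = 1;            /* "1 by default" on the slide */

    static void fifo_push(struct fifo *q, struct req *r)
    {
        r->next = NULL;
        if (q->tail)
            q->tail->next = r;
        else
            q->head = r;
        q->tail = r;
    }

    static struct req *fifo_pop(struct fifo *q)
    {
        struct req *r = q->head;
        if (r) {
            q->head = r->next;
            if (!q->head)
                q->tail = NULL;
        }
        return r;
    }

    /* Pick the next request for the device: critical I/Os first, and at most
     * noncrit_limit non-critical I/Os outstanding at any time. */
    static struct req *dispatch(void)
    {
        struct req *r = fifo_pop(&crit_q);
        if (r)
            return r;
        if (outstanding_noncrit < noncrit_limit) {
            r = fifo_pop(&noncrit_q);
            if (r)
                outstanding_noncrit++;
        }
        return r;                                  /* NULL: hold back background I/O */
    }

    /* Queue promotion: a still-queued non-critical request that a critical task
     * now depends on is moved to the critical queue. */
    static void promote(uint64_t sector)
    {
        struct req *prev = NULL;
        for (struct req *r = noncrit_q.head; r; prev = r, r = r->next) {
            if (r->sector != sector)
                continue;
            if (prev)
                prev->next = r->next;
            else
                noncrit_q.head = r->next;
            if (noncrit_q.tail == r)
                noncrit_q.tail = prev;
            r->critical = true;
            fifo_push(&crit_q, r);
            return;
        }
    }

    /* Completion: keep the outstanding non-critical count accurate. */
    static void complete(struct req *r)
    {
        if (!r->critical)
            outstanding_noncrit--;
    }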
45 Evaluation: implementation on Linux 3.13 with ext4. Application studies: PostgreSQL (relational database), backend processes as foreground tasks, I/O priority inheritance applied to LWLocks (semop); MongoDB (document store), client threads as foreground tasks, inheritance applied to pthread mutexes and condition variables (futex); Redis (key-value store), master process as the foreground task.
46 Evaluation, experimental setup: two Dell PowerEdge R530 machines (server and client) with a 1TB Micron MX200 SSD. I/O prioritization schemes compared: CFQ (default), CFQ-IDLE, SPLIT-A (priority) and SPLIT-D (deadline) [SOSP 15], QASIO [FAST 15], and RCP.
47-51 Application Throughput: PostgreSQL with a TPC-C workload on 10GB, 60GB, and 200GB datasets. (Figure: transaction throughput in trx/sec for CFQ, CFQ-IDLE, SPLIT-A, SPLIT-D, QASIO, and RCP; RCP improves throughput by ...%, 31%, and 28%.)
52-54 Application Throughput, impact on the background task. (Figure: transaction log size in GB over elapsed time for CFQ, CFQ-IDLE, SPLIT-A, SPLIT-D, QASIO, and RCP.) Our scheme improves application throughput without penalizing background tasks.
55-56 Application Latency: PostgreSQL with the TPC-C workload. (Figure: CCDF P[X >= x] of transaction latency in msec, spanning the 0th to 99.99th percentiles, for CFQ, CFQ-IDLE, SPLIT-A, SPLIT-D, QASIO, and RCP.) The existing schemes exceed 2 seconds at the 99.9th percentile, while RCP keeps tail latency far lower; our scheme is effective for improving tail latency.
57 Summary of Other Results. Performance: MongoDB, 12%-201% higher throughput and 5x-20x better latency at the 99.9th percentile; Redis, 7%-49% higher throughput and 2x-20x better latency at the 99.9th percentile. Analysis: system latency analysis using LatencyTOP, system throughput vs. application latency, and the need for a holistic approach.
58 Conclusions. Key observation: for effective I/O prioritization, all layers in the I/O path should be considered as a whole, with I/O priority inversion in mind. Request-centric I/O prioritization enlightens the I/O path solely for application performance and improves the throughput and latency of real applications. Ongoing work: making the implementation practical and applying RCP to a database cluster with multiple replicas.
59 Thank You! Questions and comments.