RUBIK: FAST ANALYTICAL POWER MANAGEMENT FOR LATENCY-CRITICAL SYSTEMS


1 RUBIK: Fast Analytical Power Management for Latency-Critical Systems
Harshad Kasture, Davide Bartolini, Nathan Beckmann, Daniel Sanchez
MICRO 2015

2 Motivation
- Low server utilization in today's datacenters results in resource and energy inefficiency
- Stringent latency requirements of user-facing services are a major contributing factor
- Power management for these services is challenging:
  - Strict requirements on tail latency
  - Inherent variability in request arrival and service times
- Rubik uses statistical modeling to adapt to short-term variations:
  - Respond to abrupt load changes
  - Improve power efficiency
  - Allow colocation of latency-critical and batch applications

3-6 Understanding Latency-Critical Applications
[Figure, built up over slides 3-6: a client request enters the datacenter at a root node, which fans it out to many leaf nodes; each leaf takes roughly 1 ms to respond]
- The few slowest responses determine user-perceived latency
- Tail latency (e.g., 95th/99th percentile), not mean latency, determines performance
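As a quick illustration of why the tail, not the mean, matters (my own sketch, with made-up latency values, not data from the talk): a single slow response barely moves the mean but dominates the high percentiles.

import numpy as np

# Hypothetical per-request latencies in milliseconds (illustrative only).
latencies_ms = np.array([1.0, 1.2, 0.9, 1.1, 1.0, 1.3, 0.95, 8.5, 1.05, 1.15])

mean_ms = latencies_ms.mean()              # barely affected by the one slow request
p95_ms = np.percentile(latencies_ms, 95)   # 95th-percentile (tail) latency
p99_ms = np.percentile(latencies_ms, 99)   # 99th-percentile latency

print(f"mean = {mean_ms:.2f} ms, p95 = {p95_ms:.2f} ms, p99 = {p99_ms:.2f} ms")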

7 Prior Schemes Fall Short
- Traditional DVFS schemes (cpufreq, TurboBoost, ...):
  - React to coarse-grained metrics like processor utilization, oblivious to short-term performance requirements
- Power management for embedded systems (PACE, GRACE, ...):
  - Do not consider queuing
- Schemes designed specifically for latency-critical systems (PEGASUS [Lo ISCA'14], Adrenaline [Hsu HPCA'15]):
  - Rely on application-specific heuristics
  - Too conservative

8 Insight 1: Short-Term Load Variations
- Latency-critical applications have significant short-term load variations
[Plot: moses load (QPS) over time (s), showing large swings at short timescales]
- PEGASUS [Lo ISCA'14] uses feedback control to adapt the frequency setting to diurnal load variations:
  - Deduces server load from observed request latency
  - Cannot adapt to short-term variations

9 Insight 2: Queuing Matters!
- Tail latency is often determined by queuing, not the length of individual requests
- Adrenaline [Hsu HPCA'15] uses application-level hints to distinguish long requests from short ones:
  - Long requests are boosted (sped up)
  - Frequency settings must be conservative to handle queuing
[Plots for moses: response time (ms), service time (ms), and queue length over time (s)]

10 Rubik Overview
- Use queue length as a measure of instantaneous system load
- Update the core frequency whenever the queue length changes (see the sketch below)
- Adapt to short-term load variations
[Figure: core activity, queue length, and core frequency under Rubik over time, including idle periods]
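A minimal event-driven sketch of this idea (my own illustration, not the authors' runtime): recompute and apply a per-core frequency on every arrival and departure. The names QueueDrivenDVFS, apply_freq, freq_model, and f_min are all assumptions; apply_freq stands in for a platform-specific DVFS hook, and freq_model for a statistical model such as the one outlined under the math slide further below.

import collections
import time

class QueueDrivenDVFS:
    """Illustrative sketch (not the authors' runtime): react to every change in
    queue length by recomputing and applying a per-core frequency."""

    def __init__(self, apply_freq, freq_model, f_min=1.2e9):
        self.queue = collections.deque()   # (request, arrival-timestamp) pairs
        self.apply_freq = apply_freq       # platform DVFS hook (assumed to exist)
        self.freq_model = freq_model       # e.g. a model over queue waits (assumed)
        self.f_min = f_min                 # lowest setting, used when idle (assumed)

    def on_arrival(self, request):
        self.queue.append((request, time.monotonic()))
        self._update()

    def on_departure(self):
        self.queue.popleft()
        self._update()

    def _update(self):
        if not self.queue:
            self.apply_freq(self.f_min)    # core idle: drop to the lowest setting
            return
        now = time.monotonic()
        waits = [now - t0 for _, t0 in self.queue]   # time each request has queued
        self.apply_freq(self.freq_model(waits))      # queue length + waits drive f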

11 Goal: Reshaping Latency Distribution
[Figure: probability density of response latency]

12 Key Factors in Setting Frequencies
- Distribution of cycle requirements of individual requests
  - Larger variance → more conservative frequency setting
- How long has a request already spent in the queue?
  - Longer wait times → higher frequency
- How many requests are queued waiting for service?
  - Longer queues → higher frequency

13 There's Math!
- Remaining-cycle distribution of the in-service request after ω cycles of execution:
  P[S_0 = c] = P[S = c + ω | S > ω] = P[S = c + ω] / P[S > ω]
- Cycle distribution for completing the i-th queued request, via repeated convolution with the service distribution:
  P[S_i] = P[S_{i-1}] * P[S] = P[S_0] * P[S] * P[S] * ... * P[S]   (* denotes convolution, applied i times)
- Resulting frequency choice:
  f = max_{i=0..N} c_i^L / (t_i + m_i)
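As a rough illustration of how these pieces fit together (my own sketch under assumed units, not the authors' code): represent the service distribution as a discrete histogram over cycle-units, condition it on the cycles already executed, convolve once per queued request, read off the cycle budget at the target tail percentile, and divide by the remaining time to get a frequency. The names residual_dist, tail_cycle_budget, pick_frequency, p_cycles, target_tail, and cycle_unit are all illustrative.

import numpy as np

def residual_dist(p_cycles, omega):
    """P[S0 = c] = P[S = c + omega] / P[S > omega]: remaining-cycle distribution
    of the in-service request after it has already run omega cycle-units."""
    tail = p_cycles[omega + 1:]
    return tail / tail.sum()

def tail_cycle_budget(p_cycles, omega, queue_pos, target_tail=0.99):
    """Cycle-units needed, at the target tail percentile, to finish the request
    at position queue_pos: P[S_i] = P[S_0] * P[S] * ... * P[S] (i convolutions)."""
    dist = residual_dist(p_cycles, omega)
    for _ in range(queue_pos):
        dist = np.convolve(dist, p_cycles)   # one convolution per request ahead
    cdf = np.cumsum(dist)
    return int(np.searchsorted(cdf, target_tail)) + 1

def pick_frequency(p_cycles, omega, time_left_s, f_max=2.4e9, cycle_unit=1e6):
    """f = max_i c_i / t_i, capped at f_max. time_left_s[i] is the remaining time
    budget (seconds) of the request at queue position i; cycle_unit converts
    histogram bins to cycles (an assumed granularity)."""
    demands = [tail_cycle_budget(p_cycles, omega, i) * cycle_unit / max(t, 1e-6)
               for i, t in enumerate(time_left_s)]
    return min(max(demands), f_max)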

14 Efficient Implementation
- Pre-computed tables store most of the required quantities (see the lookup sketch below)
[Figure: target-tail tables, one row per bucket of already-executed cycles (ω = 0, ω < 25th pct, ω < 50th pct, ω < 75th pct, otherwise), holding pre-computed entries c_0 ... c_m; the tables are updated periodically and read on each request arrival/departure]
- Table contents are independent of system load!
- Implemented as a software runtime
- Hardware support: fast per-core DVFS, performance counters for CPI stacks
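A small sketch of what such a lookup might look like (illustrative, not the paper's runtime): the per-ω-bucket rows of tail cycle budgets are rebuilt periodically off the critical path, so the arrival/departure handler only does an array index and a division. TargetTailTable, the bucket boundaries, and the tail_cycle_budget_fn callback are assumptions.

import bisect
import numpy as np

class TargetTailTable:
    """Pre-computed tail cycle budgets, bucketed by cycles already executed (omega)."""
    def __init__(self, omega_bucket_bounds, max_queue_len):
        self.bounds = omega_bucket_bounds    # e.g. [1, p25, p50, p75]; bucket 0 holds omega == 0
        self.table = np.zeros((len(omega_bucket_bounds) + 1, max_queue_len))

    def rebuild(self, tail_cycle_budget_fn):
        # Done periodically, off the request path (contents do not depend on load).
        for bucket in range(self.table.shape[0]):
            for pos in range(self.table.shape[1]):
                self.table[bucket, pos] = tail_cycle_budget_fn(bucket, pos)

    def lookup(self, omega, queue_pos):
        # On each arrival/departure: bucket omega, then a single array read.
        bucket = bisect.bisect_right(self.bounds, omega)
        return self.table[bucket, queue_pos]

An event handler would then replace the per-event convolutions in the earlier sketch with one lookup() and one division per queued request.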

15 Evaluation
- Microarchitectural simulations using zsim
- Power model tuned to a real system
[Figure: simulated chip with six cores (core 0 through core 5) and a shared L3]
- Simulated system:
  - Westmere-like OOO cores
  - Fast per-core DVFS
  - CPI stack counters
  - Pin threads to cores
- Compare Rubik against two oracular schemes:
  - StaticOracle: pick the lowest static frequency that meets latency targets for a given request trace
  - AdrenalineOracle: assume oracular knowledge of long and short requests, use offline training to pick frequencies for each

16 Evaluation
- Five diverse latency-critical applications:
  - xapian (search engine)
  - masstree (in-memory key-value store)
  - moses (statistical machine translation)
  - shore-mt (OLTP)
  - specjbb (Java middleware)
- For each application, the latency target is set to the tail latency achieved at nominal frequency (2.4 GHz) and 50% utilization

17-18 Tail Latency
[Plots: offered load, tail latency (ms), and core frequencies (GHz) over time (s) for StaticOracle, Rubik, and AdrenalineOracle]

19-21 Core Power Savings
[Bar chart: core power savings (%) of StaticOracle, AdrenalineOracle, and Rubik at 30%, 40%, and 50% utilization for masstree, moses, shore, specjbb, xapian, and the mean]
- All three schemes save significant power at low utilization
- Rubik performs best, reducing core power by up to 66%
- Rubik's relative savings increase as short-term adaptation becomes more important
- Rubik saves significant power even at high utilization: 17% on average, and up to 34%

22 Real Machine Power Savings
- V/F transition latencies of >100 µs even with integrated voltage controllers
  - Likely due to inefficiencies in firmware
- Rubik successfully adapts to these higher V/F transition latencies
[Bar chart: core power savings (%) of StaticOracle and Rubik at 30%, 40%, and 50% utilization for masstree and moses]

23 Static Power Limits Efficiency
[Figure: datacenter utilization split between latency-critical nodes (largely idle) and batch nodes]

24 RubikColoc: Colocation Using Rubik
[Figure: RubikColoc colocates latency-critical and batch applications; Rubik sets the latency-critical frequencies, and the LLC is statically partitioned]

25 RubikColoc Savings
- RubikColoc saves significant power and resources over a segregated datacenter baseline:
  - At high load: 17% reduction in datacenter power consumption and 19% fewer machines
  - Up to 31% reduction in datacenter power consumption and up to 41% fewer machines overall

26 Conclusions
- Rubik uses fine-grained power management to reduce active core power consumption by up to 66%
- Rubik uses statistical modeling to account for various sources of uncertainty, and avoids application-specific heuristics
- RubikColoc uses Rubik to colocate latency-critical and batch applications, reducing datacenter power consumption by up to 31% while using up to 41% fewer machines

27 THANKS FOR YOUR ATTENTION! QUESTIONS?
