Levels of Representation / Interpretation


CMSC 411 Computer Systems Architecture
Lecture 25: Architecture Wrapup
12/12/2013, Fall 2013 -- Lecture #17

Levels of Representation / Interpretation

High Level Language Program (e.g., C):
    temp = v[k];
    v[k] = v[k+1];
    v[k+1] = temp;
        | Compiler
        v
Assembly Language Program (e.g., MIPS):
    lw  $t0, 0($2)
    lw  $t1, 4($2)
    sw  $t1, 0($2)
    sw  $t0, 4($2)
        | Assembler
        v
Machine Language Program (MIPS):
    0000 1001 1100 0110 1010 1111 0101 1000
    1010 1111 0101 1000 0000 1001 1100 0110
    1100 0110 1010 1111 0101 1000 0000 1001
    0101 1000 0000 1001 1100 0110 1010 1111
Anything can be represented as a number, i.e., data or instructions.
        | Machine Interpretation
        v
Hardware Architecture Description (e.g., block diagrams)
        | Architecture Implementation
        v
Logic Circuit Description (circuit schematic diagrams)

Synchronous Digital Systems

The hardware of a processor, such as the MIPS, is an example of a Synchronous Digital System.
- Synchronous: all operations coordinated by a central clock
  » the "heartbeat" of the system!
- Digital: represent all values by two discrete values
  » Electrical signals are treated as 1's and 0's
  » 1 and 0 are complements of each other
  » High/low voltage for true/false, 1/0

Switches

- Basic element of physical implementations
- Implementing a simple circuit (arrow shows the action if the wire changes to 1, i.e., is asserted):
  » Close the switch (if A is 1) and turn on the light bulb (Z)
  » Open the switch (if A is 0) and turn off the light bulb (Z)

Switches (cont.)

- Compose switches into more complex ones (Boolean functions):
  » AND: two switches in series -- Z = A and B
  » OR: two switches in parallel -- Z = A or B

Transistors

- High voltage (Vdd) represents 1, or true
- Low voltage (0 volts, or ground) represents 0, or false
- A threshold voltage (Vth) decides whether a signal is a 0 or a 1
- If switches control whether voltages can propagate through a circuit, we can build a computer
- Our switches: CMOS transistors
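The series/parallel composition of switches can be sketched as a minimal Boolean model (switch closed = True, light on = True); the function names here are illustrative, not from the slides:

```python
def series(a, b):
    """Two switches in series: current flows only if both are closed -> AND."""
    return a and b

def parallel(a, b):
    """Two switches in parallel: current flows if either is closed -> OR."""
    return a or b

# The light Z follows the switch settings:
print(series(True, False))    # False: an open switch in series blocks the path
print(parallel(True, False))  # True: one closed branch is enough
```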

CMOS Transistor Networks

- Modern digital systems are designed in CMOS
  » MOS: Metal-Oxide on Semiconductor
  » C for complementary: use pairs of normally-open and normally-closed switches
  » Used to be called COS-MOS, for complementary-symmetry MOS
- CMOS transistors act as voltage-controlled switches
  » Similar to, though easier to work with than, the relay switches of an earlier era
  » Use energy primarily when switching

CMOS Transistors

- Three terminals: source, gate, and drain
- Switch action: if the voltage on the gate terminal is (some amount) higher/lower than on the source terminal, a conducting path is established between the drain and source terminals (the switch is closed)
- A circle on the gate in the schematic symbol indicates NOT, or complement
- n-channel transistor:
  » open when the voltage at the Gate is low
  » closes when voltage(Gate) > voltage(Threshold)
  » (high resistance when gate voltage is low, low resistance when gate voltage is high)
- p-channel transistor:
  » closed when the voltage at the Gate is low
  » opens when voltage(Gate) > voltage(Threshold)
  » (low resistance when gate voltage is low, high resistance when gate voltage is high)

MOS Networks

- n-channel transistor: open when the voltage at the Gate is low; closes when voltage(Gate) > voltage(Source) + ε
- p-channel transistor: closed when the voltage at the Gate is low; opens when voltage(Gate) < voltage(Source) - ε
- What is the relationship between x and y?

    x              y
    0 volts (gnd)  3 volts (Vdd)
    3 volts (Vdd)  0 volts (gnd)

- Called an inverter, or NOT gate

Two Input Networks

- What is the relationship between x, y, and z?
- Called a NAND gate (NOT AND):

    x        y        z
    0 volts  0 volts  3 volts
    0 volts  3 volts  3 volts
    3 volts  0 volts  3 volts
    3 volts  3 volts  0 volts

- Called a NOR gate (NOT OR):

    x        y        z
    0 volts  0 volts  3 volts
    0 volts  3 volts  0 volts
    3 volts  0 volts  0 volts
    3 volts  3 volts  0 volts
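The inverter, NAND, and NOR tables can be checked with a small voltage-level model; this is a sketch assuming ideal switches at Vdd = 3 V (the function names are mine):

```python
VDD, GND = 3.0, 0.0  # 3 volts = logic 1, 0 volts = logic 0

def inverter(x):
    # p-channel pulls the output to VDD when the input is low;
    # n-channel pulls it to GND when the input is high
    return VDD if x == GND else GND

def nand(x, y):
    # pull-down network: two n-channel transistors in SERIES to GND,
    # so the output falls only when both inputs are high
    return GND if (x == VDD and y == VDD) else VDD

def nor(x, y):
    # pull-down network: two n-channel transistors in PARALLEL to GND,
    # so the output falls when either input is high
    return GND if (x == VDD or y == VDD) else VDD

print(inverter(GND))   # 3.0 -- matches the inverter table
print(nand(VDD, VDD))  # 0.0 -- the only low row in the NAND table
```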

Combinational Logic Symbols

- Common combinational logic systems have standard symbols called logic gates: Buffer, NOT; AND, NAND; OR, NOR
- Easy to implement with CMOS transistors (the switches we have available and use most)

Types of Circuits

- Synchronous Digital Systems consist of two basic types of circuits:
- Combinational Logic (CL) circuits
  » Output is a function of the inputs only, not of the history of its execution
  » E.g., circuits to add A and B (ALUs)
- Sequential Logic (SL) circuits
  » Circuits that remember or store information
  » aka State Elements
  » E.g., memories and registers

Design Hierarchy

(Block diagram: a system is built from a datapath and control; the datapath from registers, multiplexers, and comparators; control from state registers and combinational logic; these in turn from register logic and switching networks.)

A Conceptual MIPS Datapath

(Figure: block diagram of a conceptual MIPS datapath.)

Model for Synchronous Systems

- A collection of Combinational Logic blocks separated by registers
- Feedback is optional
- Clock signal(s) connect only to the clock input of the registers
- Clock (CLK): a steady square wave that synchronizes the system
- Register: several bits of state, sampled on the rising edge of CLK (positive edge-triggered) or the falling edge (negative edge-triggered)

Finite State Machines (FSM)

- Function can be represented with a state transition diagram
- With combinational logic and registers, any FSM can be implemented in hardware
- Also known as finite automata
- Example: a 2-bit counter
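The 2-bit counter FSM can be sketched in software: a register holds the state, and the next-state combinational logic is applied on every clock tick. This is a hypothetical illustration, not the slides' circuit:

```python
class TwoBitCounter:
    """FSM with four states: 00 -> 01 -> 10 -> 11 -> 00 -> ..."""

    def __init__(self):
        self.state = 0  # the register: 2 bits of state

    def tick(self):
        # next-state combinational logic, "sampled" on the clock edge
        self.state = (self.state + 1) % 4
        return self.state

counter = TwoBitCounter()
print([counter.tick() for _ in range(5)])  # [1, 2, 3, 0, 1]
```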

Digital Circuit Summary

- Multiple hardware representations
  » Analog voltages quantized to represent logic 0 and logic 1
  » Transistor switches form gates: AND, OR, NOT, NAND, NOR
- Truth tables mapped to gates for combinational logic design
- Boolean algebra for gate minimization
- State machines made from:
  » Stateless combinational logic, and
  » Stateful memory logic (aka registers)

Intel CPU Trends

- Number of transistors keeps increasing
- Clock speed plateaus at around 2-3 GHz around 2003
- Power use plateaus at around 100 watts around 2003
- ILP (instructions per cycle) plateaus at around 8/cycle by 2000
(Graph: transistors, clock speed, power, and ILP over time.)

Intel Microprocessor Designs

- Microarchitecture designs overlap in time
- Semiconductor technology continues improving
- Fabrication feature width drops from 180 nm to 32 nm
- 22 nm 3D Tri-Gate announced 2011
(Timeline chart of overlapping designs: Pentium Pro, 2, 3; Pentium 4 and D; Pentium M; Core; Core 2; Core i3, i7.)

Power-Aware Computing

(Photo: cooking an egg on the CPU.)

Power Issues in Microprocessors

- Capacitive (dynamic) power: changing a bit, e.g., from 1 to 0, charges or discharges the load capacitance C_L
- Static (leakage) power: I_Gate and I_Sub leakage currents flow even when not switching
- Temperature
- Di/Dt (Vdd/Gnd bounce): rapid changes in current demand make the supply voltage sag and bounce over tens of cycles

Power = ½ C V² f

- Dynamic energy (when switching) is proportional to Capacitance × Voltage²
- Since a pulse is 0 -> 1 -> 0 or 1 -> 0 -> 1, the energy of a single transition is proportional to ½ × Capacitance × Voltage²
- Power is just energy per transition times the frequency of transitions: proportional to ½ × Capacitance × Voltage² × Frequency
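The P = ½ C V² f relation can be turned into a quick calculator. The numbers below are illustrative, not measurements from the slides:

```python
def dynamic_power(capacitance_f, voltage_v, frequency_hz):
    """Dynamic (switching) power: P = 1/2 * C * V^2 * f."""
    return 0.5 * capacitance_f * voltage_v ** 2 * frequency_hz

# Voltage enters squared, so scaling V from 1.2 V to 0.9 V at the same
# frequency cuts dynamic power to (0.9/1.2)^2 = 56.25% of the original:
p_high = dynamic_power(1e-9, 1.2, 2e9)   # 1 nF of switched capacitance at 2 GHz
p_low  = dynamic_power(1e-9, 0.9, 2e9)
print(p_low / p_high)  # 0.5625
```

This quadratic dependence on voltage is why the voltage scaling mentioned later in the lecture is such an effective power lever.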

Architecture Features for Low Power

- Voltage scaling
  » Reduce voltage to save power when performance is not needed
- Processor-specific instruction sets
  » On ARM the default int type is ~20% more efficient than char or short (which require sign/zero extension)
- Processor-specific features
  » Shut off unused registers to save power

Intel Core i7 & Atom Specifications

                       i7 920                 Atom 230
  Clock Rate           2.66 GHz               1.66 GHz
  Power                130 W                  4 W
  Cache                L1 32 KB / 32 KB       L1 32/24 KB
                       L2 256 KB              L2 512 KB
                       L3 2-8 MB
  Memory Bandwidth     17 GB/sec              8 GB/sec
  Issue Rate           4 ops/cycle            2 ops/cycle
  Scheduling           Speculative,           In order
                       out of order
  Branch Prediction    Two-level dynamic      Two-level

Intel i7 Nehalem Core

- 731 million transistors
(Die photos of the i7 Nehalem core.)

Intel Atom

(Die photo of the Intel Atom.)

Intel Core i7 920 vs. Atom 230

- The i7 is 4-11x faster
- The i7 uses 1.5-3x more energy

Mainframe Era: 1950s-60s

- "Big Iron": IBM and UNIVAC build $1M computers (processor/CPU plus I/O) for businesses
- Timesharing OS (Multics)

Minicomputer Era: 1970s

- Using integrated circuits, Digital and HP build $10K computers for labs and universities
- UNIX OS

PC Era: Mid 1980s - Mid 2000s

- Using microprocessors, Apple and IBM build $1K computers for individuals
- Windows OS, Linux

PostPC Era: Late 2000s - ??

- Personal Mobile Devices (PMDs): relying on wireless networking, Apple and Nokia build $500 smartphones and tablet computers for individuals
  » Android OS
- Cloud Computing: using Local Area Networks, Amazon and Google build $200M Warehouse Scale Computers with 100,000 servers for Internet services for PMDs
  » MapReduce/Hadoop

Advanced RISC Machine (ARM) Processor

- Personal mobile devices use low-power integrated processors

ARM Processor

- 1 GHz ARM Cortex-A8
(Die photo: processor, memory, and I/O blocks.)

PC Clusters

- Commodity computers connected by commodity Ethernet switches
- 10s to 100s of computers
- Cheaper than multiprocessor servers: 20X cheaper for equivalent performance vs. the largest servers
- Few operators for 1000s of servers
  » Careful selection of identical HW/SW
  » Virtual Machine Monitors simplify operation
- Dependability via extensive redundancy

Warehouse Scale Computers

- Large-scale clusters: 100K computers vs. 1000 computers
- Significant cost-performance advantage
  » Economies of scale pushed down the cost of the largest datacenters by factors of 3X to 8X vs. clusters
- Can achieve better utilization: traditional datacenters were utilized 10-20%
- Can be profitable offering scalable pay-as-you-go use, at lower cost than operating a local datacenter
- Designed to support software as a service: cloud computing

Software as a Service: SaaS

- Traditional software (SW): binary code installed and run wholly on the client device
- SaaS delivers SW & data as a service over the Internet, via a thin program (e.g., a browser) running on the client device
  » Search, social networking, video
- Now also SaaS versions of traditional SW
  » E.g., Microsoft Office 365, TurboTax Online

6 Reasons for SaaS

1. No installation -- no worries about HW capability or OS
2. No worries about data loss (data is kept at the remote site)
3. Easy for groups to interact with the same data
4. If data is large or changes frequently, it is simpler to keep one copy at a central site
5. One copy of SW in a controlled HW environment => no compatibility hassles for developers
6. One copy => simplifies upgrades for developers, and no user upgrade requests

SaaS Infrastructure?

- SaaS demands on infrastructure:
- Communication
  » Allow customers to interact with the service
- Scalability
  » Handle fluctuations in demand, and let new services add users rapidly
- Dependability
  » Service and communication continuously available, 24x7

Utility Computing / Public Cloud Computing

- Offers computing, storage, and communication at pennies per hour
- No premium to scale: 1000 computers @ 1 hour = 1 computer @ 1000 hours
- Illusion of infinite scalability to the cloud user
  » As many computers as you can afford
- Leading examples: Amazon Web Services, Google App Engine, Microsoft Azure

2012 AWS Instances & Prices

  Instance                           Per Hour  Ratio to  Compute  Virtual  Compute    Memory  Disk   Address
                                               Small     Units    Cores    Unit/Core  (GB)    (GB)
  Standard Small                     $0.085    1.0       1.0      1        1.00       1.7     160    32 bit
  Standard Large                     $0.340    4.0       4.0      2        2.00       7.5     850    64 bit
  Standard Extra Large               $0.680    8.0       8.0      4        2.00       15.0    1690   64 bit
  High-Memory Extra Large            $0.500    5.9       6.5      2        3.25       17.1    420    64 bit
  High-Memory Double Extra Large     $1.200    14.1      13.0     4        3.25       34.2    850    64 bit
  High-Memory Quadruple Extra Large  $2.400    28.2      26.0     8        3.25       68.4    1690   64 bit
  High-CPU Medium                    $0.170    2.0       5.0      2        2.50       1.7     350    32 bit
  High-CPU Extra Large               $0.680    8.0       20.0     8        2.50       7.0     1690   64 bit
  Cluster Quadruple Extra Large      $1.300    15.3      33.5     16       2.09       23.0    1690   64 bit
  Eight Extra Large                  $2.400    28.2      88.0     32       2.75       60.5    1690   64 bit

Supercomputer for Hire

- Top 500 supercomputer competition
- 290 Eight Extra Large instances (@ $2.40/hour) = 240 TeraFLOPS
  » 42nd of the Top 500 supercomputers, at ~$700 per hour
- A credit card lets you use 1000s of computers
- FarmVille on AWS
  » The prior biggest online game had 5M users
  » What if the startup had had to build a datacenter? How big?
  » FarmVille grew to 1M users in 4 days, 10M in 2 months, 75M in 9 months
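The "supercomputer for hire" arithmetic follows directly from the price table; a back-of-envelope sketch:

```python
EIGHT_XL_RATE = 2.40   # $/hour for an Eight Extra Large instance (table above)
instances = 290        # enough for ~240 TeraFLOPS, per the slide

hourly_cost = instances * EIGHT_XL_RATE
print(hourly_cost)  # 696.0 -- the slide rounds this to ~$700/hour

# "No premium to scale": 1000 computers for 1 hour costs exactly the same
# as 1 computer for 1000 hours
assert 1000 * 1 * EIGHT_XL_RATE == 1 * 1000 * EIGHT_XL_RATE
```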

IBM Watson for Hire?

- Jeopardy champion IBM Watson
- Hardware: 90 IBM Power 750 servers
  » 3.5 GHz, 8 cores/server
- 90 @ ~$2.40/hour = ~$200/hour
- Comparable to the cost of a human lawyer or accountant
- For what tasks could AI be as good as a highly trained person at $200/hour?
- What would this mean for society?

E.g., Google's Oregon WSC

(Photo: Google's warehouse-scale computer in Oregon.)

Equipment Inside a WSC

- Server (in rack format): 1¾ inches high ("1U") x 19 inches x 16-20 inches
  » 8 cores, 16 GB DRAM, 4x1 TB disk
- 7-foot rack: 40-80 servers, plus an Ethernet local area network switch (1-10 Gbps) in the middle ("rack switch")
- Array (aka cluster): 16-32 server racks, plus a larger local area network switch ("array switch")
  » 10X faster => costs 100X: cost is proportional to n²

Server, Rack, Array

(Photos: a server, a rack, and an array.)

Coping with Performance in an Array

- Lower latency to DRAM in another server than to local disk
- Higher bandwidth to local disk than to DRAM in another server

                               Local    Rack     Array
  Racks                        --       1        30
  Servers                      1        80       2,400
  Cores (Processors)           8        640      19,200
  DRAM Capacity (GB)           16       1,280    38,400
  Disk Capacity (GB)           4,000    320,000  9,600,000
  DRAM Latency (microseconds)  0.1      100      300
  Disk Latency (microseconds)  10,000   11,000   12,000
  DRAM Bandwidth (MB/sec)      20,000   100      10
  Disk Bandwidth (MB/sec)      200      100      10

Coping with Workload Variation

- Online service: peak usage is 2X off-peak (workload climbs from midnight to a noon peak, then falls back by midnight)
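The table's numbers make both bullet points easy to verify in code (values copied from the table; latencies in microseconds, bandwidths in MB/sec):

```python
dram_latency_us = {"local": 0.1,    "rack": 100,    "array": 300}
disk_latency_us = {"local": 10_000, "rack": 11_000, "array": 12_000}
dram_bw_mbs     = {"local": 20_000, "rack": 100,    "array": 10}
disk_bw_mbs     = {"local": 200,    "rack": 100,    "array": 10}

# Lower latency to DRAM in another server than to local disk:
print(dram_latency_us["array"] < disk_latency_us["local"])  # True (300 us < 10,000 us)

# Higher bandwidth to local disk than to DRAM in another server:
print(disk_bw_mbs["local"] > dram_bw_mbs["rack"])  # True (200 MB/s > 100 MB/s)
```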

Impact of Latency, Bandwidth, Failure, and Varying Workload on WSC Software

- WSC software must take care where it places data within an array to get good performance
- WSC software must cope with failures gracefully
  » A 25-year MTTF per computer = 10 failures daily across the WSC
  » A 4% annual failure rate per disk = 1 failure per hour
- WSC software must scale up and down gracefully in response to varying demand
- The more elaborate hierarchy of memories, failure tolerance, and workload accommodation make WSC software development more challenging than software for a single computer

Power vs. Server Utilization

- Server power usage as load varies from idle to 100%:
  » Uses ½ of peak power when idle!
  » Uses ⅔ of peak power when only 10% utilized!
  » Uses 90% of peak power when 50% utilized!
- Most servers in a WSC are utilized 10% to 50%
- The goal should be energy proportionality: % of peak load = % of peak energy
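The failure-rate bullets can be reproduced with back-of-envelope arithmetic. The server count (~100,000, matching the WSC scale quoted earlier in the lecture) and the disks-per-server figure are my assumptions, not numbers from this slide:

```python
# Servers: a 25-year MTTF per machine, across an assumed ~100,000 machines
servers = 100_000                      # assumed WSC size
mttf_days = 25 * 365
server_failures_per_day = servers / mttf_days
print(round(server_failures_per_day))  # 11 -- the slide rounds to ~10/day

# Disks: a 4% annual failure rate across an assumed ~200,000 disks (2 per server)
disks = 200_000
disk_failures_per_hour = disks * 0.04 / (365 * 24)
print(round(disk_failures_per_hour, 2))  # 0.91 -- roughly 1 per hour
```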

Power Usage Effectiveness

- Overall WSC energy efficiency: the amount of computational work performed divided by the total energy used in the process
- Power Usage Effectiveness (PUE): total building power / IT equipment power
  » A power-efficiency measure for the WSC as a whole, not including the efficiency of the servers and networking gear themselves
  » 1.0 = perfection

PUE in the Wild (2007)

(Chart: measured PUE values across datacenters surveyed in 2007.)
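PUE is a simple ratio, so a helper makes the definition concrete (the wattage figures below are hypothetical):

```python
def pue(total_building_watts, it_equipment_watts):
    """Power Usage Effectiveness: total building power / IT equipment power.

    1.0 would mean every watt entering the building reaches the IT gear;
    anything above 1.0 is overhead (cooling, power distribution, etc.).
    """
    return total_building_watts / it_equipment_watts

# A facility drawing 1.24 MW to deliver 1 MW to its servers and switches:
print(pue(1_240_000, 1_000_000))  # 1.24
```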

High PUE: Where Does the Power Go?

- Uninterruptible Power Supply (battery)
- Chiller, which cools the warm water returning from the air conditioner
- Power Distribution Unit
- Servers + networking
- Computer Room Air Conditioner

Servers and Networking Power Only

(Chart: servers' and networking gear's power draw as a percentage of peak power.)

Containers in WSCs

(Photos: inside a WSC; inside a container.)

Google WSC A, PUE: 1.24

1. Careful air flow handling
   » Don't mix server hot-air exhaust with cold air (separate the warm aisle from the cold aisle)
2. Elevated cold-aisle temperatures
   » 81°F instead of the traditional 65-68°F
3. Measure vs. estimate PUE, publish PUE, and improve operation
- Note that PUE is a subject of marketing: an average on a good day with artificial load (Facebook's 1.07), or real load over a quarter (Google)

Google WSC PUE: Quarterly Average

(Chart: quarterly average PUE; see www.google.com/corporate/green/datacenters/measuring.htm)