ECE C61 Computer Architecture Lecture 2: Performance Prof. Alok N. Choudhary choudhar@ece.northwestern.edu 2-1
Today's Lecture
- Performance Concepts: Response Time, Throughput
- Performance Evaluation: Benchmarks
- Announcements
- Processor Design Metrics: Cycle Time, Cycles per Instruction
- Amdahl's Law: Speedup, what is important
- Critical Path
2-2
Performance Concepts 2-3
Performance Perspectives
- Purchasing perspective: given a collection of machines, which has the
  - Best performance?
  - Least cost?
  - Best performance / cost?
- Design perspective: faced with design options, which has the
  - Best performance improvement?
  - Least cost?
  - Best performance / cost?
- Both require a basis for comparison and a metric for evaluation
Our goal: understand the cost & performance implications of architectural choices 2-4
Two Notions of Performance

  Plane        DC to Paris   Speed      Passengers   Throughput (pmph)
  Boeing 747   6.5 hours     610 mph    470          286,700
  Concorde     3 hours       1350 mph   132          178,200

Which has higher performance?
- Execution time (response time, latency, ...): time to do a task
- Throughput (bandwidth, ...): tasks per unit of time
Response time and throughput are often in opposition 2-5
Definitions
- Performance is typically in units-per-second: bigger is better
- If we are primarily concerned with response time:
  performance = 1 / execution_time
- "X is n times faster than Y" means:
  ExecutionTime_Y / ExecutionTime_X = Performance_X / Performance_Y = n
2-6
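The "n times faster" definition above can be sketched as a small Python helper (a hypothetical function for illustration, not part of the lecture):

```python
def speedup(exec_time_y, exec_time_x):
    """Return n such that X is n times faster than Y.

    Since performance = 1 / execution_time,
    n = ExecutionTime_Y / ExecutionTime_X = Performance_X / Performance_Y.
    """
    return exec_time_y / exec_time_x

# If Y takes 10 s and X takes 2 s, X is 5 times faster than Y.
print(speedup(10.0, 2.0))  # 5.0
```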
Example
- Flying time of Concorde vs. Boeing 747?
  Concorde is 6.5 hours / 3 hours = 2.2 times faster
- Throughput of Concorde vs. Boeing 747?
  Concorde is 178,200 pmph / 286,700 pmph = 0.62 times "faster"
  Boeing is 286,700 pmph / 178,200 pmph = 1.6 times faster
- Boeing is 1.6 times (60%) faster in terms of throughput
- Concorde is 2.2 times (120%) faster in terms of flying time
We will focus primarily on execution time for a single job
Lots of instructions in a program => instruction throughput is important! 2-7
Benchmarks 2-8
Evaluation Tools
- Benchmarks, traces, and mixes
  - Macrobenchmarks and suites
  - Microbenchmarks
  - Traces
  - Workloads
- Simulation at many levels
  - ISA, microarchitecture, RTL, gate, circuit
  - Trade fidelity for simulation rate (levels of abstraction)
- Other metrics: area, clock frequency, power, cost, ...
- Analysis
  - Queuing theory, back-of-the-envelope
  - Rules of thumb, basic laws and principles
2-9
Benchmarks
- Microbenchmarks: measure one performance dimension
  - Cache bandwidth
  - Memory bandwidth
  - Procedure call overhead
  - FP performance
  - Give insight into the underlying performance factors
  - Not a good predictor of application performance
- Macrobenchmarks: application execution time
  - Measure overall performance, but on just one application
  - Need an application suite
2-10
Why Do Benchmarks?
- How we evaluate differences
  - Different systems
  - Changes to a single system
- Provide a target
  - Benchmarks should represent a large class of important programs
  - Improving benchmark performance should help many programs
- For better or worse, benchmarks shape a field
- Good benchmarks accelerate progress: a good target for development
- Bad benchmarks hurt progress: help real programs vs. sell machines/papers?
  Inventions that help real programs may not help the benchmark
2-11
Popular Benchmark Suites
- Desktop
  - SPEC CPU2000: CPU-intensive, integer & floating-point applications
  - SPECviewperf, SPECapc: graphics benchmarks
  - SysMark, Winstone, Winbench
- Embedded
  - EEMBC: collection of kernels from 6 application areas
  - Dhrystone: old synthetic benchmark
- Servers
  - SPECweb, SPECfs
  - TPC-C: transaction processing system
  - TPC-H, TPC-R: decision support systems
  - TPC-W: transactional web benchmark
- Parallel Computers
  - SPLASH: scientific applications & kernels
Most markets have specific benchmarks for design and marketing 2-12
SPEC CINT2000 (benchmark table omitted) 2-13
TPC benchmarks (table omitted) 2-14
Basis of Evaluation
- Actual Target Workload
  - Pros: representative
  - Cons: very specific; non-portable; difficult to run or measure; hard to identify cause
- Full Application Benchmarks
  - Pros: portable; widely used; improvements useful in reality
  - Cons: less representative
- Small Kernel Benchmarks
  - Pros: easy to run, early in design cycle
  - Cons: easy to fool
- Microbenchmarks
  - Pros: identify peak capability and potential bottlenecks
  - Cons: peak may be a long way from application performance
2-15
Programs to Evaluate Processor Performance
- (Toy) Benchmarks: 10-100 lines, e.g., sieve, puzzle, quicksort
- Synthetic Benchmarks: attempt to match average frequencies of real workloads, e.g., Whetstone, Dhrystone
- Kernels: time-critical excerpts
2-16
Announcements
- Website: http://www.ece.northwestern.edu/~kcoloma/ece361
- Next lecture: Instruction Set Architecture
2-17
Processor Design Metrics 2-18
Metrics of Performance (by level of abstraction)
- Application: seconds per program; useful operations per second
- Programming Language / Compiler
- ISA: MIPS (millions of instructions per second); MFLOP/s (millions of FP operations per second)
- Datapath / Control: megabytes per second
- Function Units / Transistors, Wires, Pins: cycles per second (clock rate)
2-19
Organizational Trade-offs
- The stack from application and compiler down to datapath, control, and function units determines the instruction mix, the CPI, and the cycle time
- CPI is a useful design measure relating the Instruction Set Architecture to the implementation of that architecture, and the program measured
2-20
Processor Cycles
(Diagram: clock driving combinational logic; one cycle per clock period)
Most contemporary computers have fixed, repeating clock cycles 2-21
CPU Performance 2-22
Cycles Per Instruction (Throughput)
- Cycles per instruction:
  CPI = (CPU Time x Clock Rate) / Instruction Count = Cycles / Instruction Count
- CPU time:
  CPU Time = CycleTime x sum_{j=1..n} (CPI_j x I_j)
- Instruction frequency:
  CPI = sum_{j=1..n} (CPI_j x F_j), where F_j = I_j / Instruction Count
2-23
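Both CPI formulas above can be sketched in a few lines of Python (hypothetical helper names, not from the lecture):

```python
def cpi_from_measurements(cpu_time_s, clock_rate_hz, instruction_count):
    """CPI = (CPU time * clock rate) / instruction count."""
    return cpu_time_s * clock_rate_hz / instruction_count

def cpi_from_mix(classes):
    """CPI = sum over classes j of CPI_j * F_j,
    where F_j = I_j / total instruction count."""
    return sum(cpi_j * f_j for cpi_j, f_j in classes)

# 2 s at 1 GHz over 10^9 instructions gives a CPI of 2.0.
print(cpi_from_measurements(2.0, 1e9, 1_000_000_000))
# Two classes: CPI 1 at 75% frequency, CPI 5 at 25% -> 0.75 + 1.25 = 2.0
print(cpi_from_mix([(1, 0.75), (5, 0.25)]))
```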
Principal Design Metrics: CPI and Cycle Time
  Performance = 1 / ExecutionTime
  Performance = 1 / (CPI x CycleTime)
  Performance = 1 / ((Cycles/Instruction) x (Seconds/Cycle)) = Instructions / Second
2-24
Example
Typical mix:
  Op      Freq   Cycles   CPI contribution
  ALU     50%    1        0.5
  Load    20%    5        1.0
  Store   10%    3        0.3
  Branch  20%    2        0.4
  Total CPI = 2.2

How much faster would the machine be if a better data cache reduced the average load time to 2 cycles?
  Load: 20% x 2 cycles = 0.4; total CPI drops from 2.2 to 1.6
  Relative performance is 2.2 / 1.6 = 1.38

How does this compare with reducing the branch instruction to 1 cycle?
  Branch: 20% x 1 cycle = 0.2; total CPI drops from 2.2 to 2.0
  Relative performance is 2.2 / 2.0 = 1.1
2-25
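The worked example above can be checked with a short script (a sketch; the dict layout and variable names are my own):

```python
# Base instruction mix: class -> (dynamic frequency, cycles per instruction)
mix = {"ALU": (0.50, 1), "Load": (0.20, 5),
       "Store": (0.10, 3), "Branch": (0.20, 2)}

def total_cpi(m):
    """Weighted CPI: sum of frequency * cycles over all classes."""
    return sum(f * c for f, c in m.values())

base = total_cpi(mix)                         # 2.2
better_cache = {**mix, "Load": (0.20, 2)}     # loads now take 2 cycles
better_branch = {**mix, "Branch": (0.20, 1)}  # branches now take 1 cycle

print(base / total_cpi(better_cache))   # ~1.38 (2.2 / 1.6)
print(base / total_cpi(better_branch))  # ~1.1  (2.2 / 2.0)
```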
Summary: Evaluating Instruction Sets and Implementations
- Design-time metrics: can it be implemented, in how long, at what cost? Can it be programmed? Ease of compilation?
- Static metrics: how many bytes does the program occupy in memory?
- Dynamic metrics: how many instructions are executed? How many bytes does the processor fetch to execute the program? How many clocks are required per instruction? How "lean" a clock is practical?
- Best metric: time to execute the program!
  NOTE: time depends on instruction count, CPI, and cycle time, i.e., on the instruction set, processor organization, and compilation techniques
2-26
Amdahl's Law: Make the Common Case Fast
- Speedup due to enhancement E:
  Speedup(E) = ExTime(without E) / ExTime(with E) = Performance(with E) / Performance(without E)
- Suppose enhancement E accelerates a fraction F of the task by a factor S, and the remainder of the task is unaffected. Then:
  ExTime(with E) = ExTime(without E) x ((1 - F) + F/S)
  Speedup(with E) = 1 / ((1 - F) + F/S)
- Performance improvement is limited by how much the improved feature is used: invest resources where time is spent
2-27
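Amdahl's Law as stated above is easy to encode (a minimal sketch with made-up numbers):

```python
def amdahl_speedup(f, s):
    """Overall speedup when a fraction f of execution time
    is accelerated by a factor s: 1 / ((1 - f) + f / s)."""
    return 1.0 / ((1.0 - f) + f / s)

# Speeding up half the program by 2x yields only ~1.33x overall...
print(amdahl_speedup(0.5, 2.0))
# ...and even a near-infinite speedup of that half stays below 2x.
print(amdahl_speedup(0.5, 1e12))
```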
Marketing Metrics
- MIPS = Instruction Count / (Time x 10^6) = Clock Rate / (CPI x 10^6)
  - What about machines with different instruction sets?
  - What about programs with different instruction mixes?
  - Dynamic frequency of instructions may be uncorrelated with performance
- MFLOP/s = FP Operations / (Time x 10^6)
  - Machine dependent
  - Often not where time is spent
2-28
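A hypothetical illustration of why MIPS misleads across instruction sets: machine B posts a higher MIPS rating on the same program yet finishes later, because it needs more (simpler) instructions. The numbers here are invented for the sketch:

```python
def mips(instruction_count, time_s):
    """MIPS = instruction count / (time * 10^6)."""
    return instruction_count / (time_s * 1e6)

# Same program on two machines (made-up counts and times):
a_mips = mips(1_000_000, 0.5)   # machine A: 2 MIPS, done in 0.5 s
b_mips = mips(3_000_000, 1.0)   # machine B: 3 MIPS, but takes 1.0 s
print(b_mips > a_mips)  # True: B "wins" on MIPS while losing on time
```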
Summary
- CPU time = Seconds/Program = (Instructions/Program) x (Cycles/Instruction) x (Seconds/Cycle)
- Time is the measure of computer performance!
- Good products are created when you have:
  - Good benchmarks
  - Good ways to summarize performance
- Without good benchmarks and summaries, the choice is between improving the product for real programs and improving it to get more sales; sales almost always wins
- Remember Amdahl's Law: speedup is limited by the unimproved part of the program
2-29
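The CPU-time equation in the summary can be sketched directly (hypothetical numbers):

```python
def cpu_time(instructions, cpi, cycle_time_s):
    """Seconds/Program = (Instructions/Program)
    * (Cycles/Instruction) * (Seconds/Cycle)."""
    return instructions * cpi * cycle_time_s

# 10^9 instructions at CPI 2.2 on a 1 GHz clock (1 ns cycle) -> 2.2 s
print(cpu_time(1_000_000_000, 2.2, 1e-9))
```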
Critical Path 2-30
Range of Design Styles
- Custom design: custom control logic, custom ALU, custom register file; highest performance and design complexity (design time); compact
- Standard cell: standard ALU, standard registers, gates, routing channels
- Gate Array/FPGA/CPLD: gates, routing channels; longer wires; lowest performance and design complexity
2-31
Implementation as Combinational Logic + Latch
(Diagram: clock driving combinational logic feeding a latch)
- Mealy machine
- Moore machine
2-32
Clocking Methodology
- All storage elements are clocked by the same clock edge (but there may be clock skew)
- The combinational logic block's:
  - Inputs are updated at each clock tick
  - Outputs MUST all be stable before the next clock tick
2-33
Critical Path & Cycle Time
- Critical path: the slowest path between any two storage devices
- Cycle time is a function of the critical path
2-34
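As a rough sketch (the delay numbers and flop parameters are assumed, not from the lecture), the minimum cycle time must cover the slowest register-to-register path plus the storage elements' overhead:

```python
def min_cycle_time(path_delays_ns, clk_to_q_ns, setup_ns, skew_ns):
    """Cycle time >= clk-to-Q delay + slowest combinational path
    + setup time + clock skew (all in ns)."""
    return clk_to_q_ns + max(path_delays_ns) + setup_ns + skew_ns

# Three register-to-register paths; the 5.5 ns path is critical.
print(min_cycle_time([3.0, 5.5, 4.2], clk_to_q_ns=0.2,
                     setup_ns=0.1, skew_ns=0.1))  # ~5.9 ns
```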
Tricks to Reduce Cycle Time
- Reduce the number of gate levels (restructure the logic over the same inputs)
- Pay attention to loading
  - One gate driving many gates is a bad idea
  - Avoid using a small gate to drive a long wire
  - Use multiple stages (e.g., cascaded buffers) to drive a large load
- Revise the design
2-35
Summary
- Performance Concepts: Response Time, Throughput
- Performance Evaluation: Benchmarks
- Processor Design Metrics: Cycle Time, Cycles per Instruction
- Amdahl's Law: Speedup, what is important
- Critical Path
2-36