CMSC 611: Advanced Computer Architecture


CMSC 611: Advanced Computer Architecture
Cost, Performance & Benchmarking

Some material adapted from Mohamed Younis, UMBC CMSC 611 Spr 2003 course slides. Some material adapted from David Culler, UC Berkeley CS252 Spr 2002 course slides, © 2002 UC Berkeley. Some material adapted from Hennessy & Patterson, © 2003 Elsevier Science.

Overview
Previous lecture: what computer architecture is and why it is important to study; the organization and anatomy of computers; the impact of microelectronics technology on computers; the evolution and generations of the computer industry.
This lecture: cost considerations in computer design; why measuring performance is important; different performance metrics; performance comparison.

Computer Engineering Methodology (figure: a design cycle): evaluate existing systems for bottlenecks; simulate new designs and organizations; implement the next-generation system. The cycle is driven by workloads, benchmarks, technology trends, and implementation complexity. Cost and performance are the main metrics for evaluating the quality of a design. Slide: Dave Patterson

Circuits
Circuits need connectors and switches; each computer generation is defined by its switch technology.

Year   Technology used in computers           Relative performance/unit cost
1951   Vacuum tube                            1
1965   Transistor                             35
1975   Integrated circuit                     900
1995   Very large-scale integrated circuit    2,400,000

Advances in IC technology affect both hardware and software design philosophy.

Integrated Circuits
Start with silicon (found in sand). Silicon does not conduct electricity well, hence the name "semiconductor." A chemical process can transform tiny areas into: excellent conductors of electricity (like copper); excellent insulators (like glass); or areas that conduct or insulate only under a special condition (a switch). A transistor is simply an on/off switch controlled by electricity. An integrated circuit combines dozens to millions of transistors on a single chip.

Microelectronics Process (figure: chip manufacturing flow)
Silicon ingot → slicer → blank wafers → fabrication (20 to 30 processing steps) → patterned wafers → dicer → individual dies (one wafer) → die tester → tested dies → bond die to package → packaged dies → part tester → tested packaged dies → ship to customers.

Integrated Circuit Costs
IC cost = (Die cost + Testing cost + Packaging cost) / Final test yield
Die cost = Wafer cost / (Dies per wafer × Die yield)
Die cost goes roughly with (die area)^4.
Slide: Dave Patterson

Die Cost
Die cost = Wafer cost / (Dies per wafer × Die yield)
Dies per wafer = [π × (Wafer diameter / 2)²] / Die area − [π × Wafer diameter] / √(2 × Die area)
Slide: Dave Patterson

Die Cost (cont.)
Die cost = Wafer cost / (Dies per wafer × Die yield)
Die yield = Wafer yield × [1 + (Defects per unit area × Die area) / α]^(−α)
Figure: Dave Patterson

Real World Examples

Chip          Layers  Wafer cost  Defects/cm²  Area (mm²)  Dies/Wafer  Yield  Die cost
386DX         2       $900        1.0          43          360         71%    $4
486DX2        3       $1200       1.0          81          181         54%    $12
PowerPC 601   4       $1700       1.3          121         115         28%    $53
HP PA 7100    3       $1300       1.0          196         66          27%    $73
DEC Alpha     3       $1500       1.2          234         53          19%    $149
SuperSPARC    3       $1700       1.6          234         48          13%    $272
Pentium       3       $1500       1.5          296         40          9%     $417

From "Estimating IC Manufacturing Costs," by Linley Gwennap, Microprocessor Report, August 2, 1993, p. 15. Slide: Dave Patterson
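
The die-cost model above is easy to play with in code. Below is a minimal Python sketch of the three formulas (dies per wafer, die yield, die cost). The 15 cm wafer diameter and α = 3 are illustrative assumptions, not values from the slides, so the output only roughly approximates the Pentium row of the table.

```python
import math

def dies_per_wafer(wafer_diam_cm, die_area_cm2):
    """Usable dies per wafer: wafer area / die area, minus dies lost at the edge."""
    return (math.pi * (wafer_diam_cm / 2) ** 2) / die_area_cm2 \
         - (math.pi * wafer_diam_cm) / math.sqrt(2 * die_area_cm2)

def die_yield(wafer_yield, defects_per_cm2, die_area_cm2, alpha=3.0):
    """Die yield = wafer yield * (1 + defect density * die area / alpha)^(-alpha)."""
    return wafer_yield * (1 + defects_per_cm2 * die_area_cm2 / alpha) ** (-alpha)

def die_cost(wafer_cost, wafer_diam_cm, die_area_cm2, defects_per_cm2, alpha=3.0):
    """Die cost = wafer cost / (dies per wafer * die yield)."""
    dpw = dies_per_wafer(wafer_diam_cm, die_area_cm2)
    return wafer_cost / (dpw * die_yield(1.0, defects_per_cm2, die_area_cm2, alpha))

# Roughly Pentium-like inputs (296 mm^2 die, 1.5 defects/cm^2, $1500 wafer);
# the 15 cm wafer and alpha = 3 are assumptions.
area = 2.96  # cm^2
print(round(dies_per_wafer(15.0, area)))        # ~40 dies per wafer
print(round(die_yield(1.0, 1.5, area), 2))      # ~0.07 yield
print(round(die_cost(1500, 15.0, area, 1.5)))   # die cost in dollars
```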

Aside: Geometric Reasoning
Accelerate triangle rendering by dividing the screen into W × H-pixel regions, one region per processor.

Aside: Geometric Reasoning
Only render the triangle in regions hit by the triangle's bounding box (of size w × h pixels).

Aside: Geometric Reasoning Triangle replicated for each region overlapping triangle bounding box Like having N extra triangles

Aside: Geometric Reasoning
Speedup is limited by the overlap factor (figure: depending on where the bounding-box center falls relative to the region grid, the triangle is replicated 1x, 2x, or 4x).
Eyles' formula for the average replication: ((w + W) / W) × ((h + H) / H)
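
To make the overlap reasoning concrete, here is a tiny Python sketch of Eyles' formula; the region and bounding-box sizes below are made-up examples, not values from the slides.

```python
def overlap_factor(w, h, W, H):
    """Average number of W x H regions touched by a w x h bounding box (Eyles' formula)."""
    return ((w + W) / W) * ((h + H) / H)

# Example: 32x32-pixel regions, triangles with 16x16-pixel bounding boxes.
# Each triangle is processed ~2.25x on average, so speedup from N regions
# is limited to roughly N / 2.25 rather than N.
print(overlap_factor(16, 16, 32, 32))   # 2.25
```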

Costs and Trends in Cost
Understanding trends in component costs (how they will change over time) is an important issue for designers. Component prices tend to drop over time even without major improvements in manufacturing technology.

What Affects Cost
1. Learning curve: the more experience in manufacturing a component, the better the yield. In general, a chip, board, or system with twice the yield will have half the cost. The learning curve differs for different components, which complicates design decisions for new systems.
2. Volume: larger volume both speeds progress down the learning curve and increases manufacturing efficiency. Doubling the volume typically reduces cost by about 10%.
3. Commodities: essentially identical products sold by multiple vendors in large volumes. The resulting competition drives efficiency up and thus cost down.

Cost Trends for DRAM (figure: price per DRAM chip over time)
A dollar in 1977 = $2.95 in 2001.
Cost/MB: $500 in 1997, $0.35 in 2000, $0.08 in 2001.
When demand exceeded supply, prices dropped only slowly.
Each generation drops in price by a factor of 10 to 30 over its lifetime.

Cost Trends for Processors (figure: Intel list price for 1,000-unit quantities of the Pentium III over time)
The price drop is due largely to yield enhancements.

Cost vs. Price (figure: breakdown of the list price)
As a fraction of the list price: average discount 25% to 40%, gross margin 34% to 39%, direct cost 6% to 8%, component cost 15% to 33%. The average selling price is the list price minus the average discount.
Component cost: raw material cost for the system's building blocks.
Direct cost (add 25% to 40%): recurring costs such as labor, purchasing, scrap, and warranty.
Gross margin (add 82% to 186%): nonrecurring costs such as R&D, marketing, sales, equipment maintenance, rental, financing cost, pretax profits, and taxes.
Average discount (add 33% to 66%): volume discounts and/or retailer markup.
Slide: Dave Patterson

Example: Price vs. Cost (figure: stacked bars showing average discount, gross margin, direct costs, and component costs as percentages of price for a minicomputer, a workstation, and a PC)

Chip prices (August 1993), for a volume of 10,000 units:

Chip          Area (mm²)  Total cost  Price   Comment
386DX         43          $9          $31
486DX2        81          $35         $245    No competition
PowerPC 601   121         $77         $280
DEC Alpha     234         $202        $1231   Recoup R&D?
Pentium       296         $473        $965

Slide: Dave Patterson

The Role of Performance Hardware performance is key to the effectiveness of the entire system Performance has to be measured and compared Evaluate various design and technological approaches Different types of applications: Different performance metrics may be appropriate Different aspects of a computer system may be most significant Factors that affect performance Instruction use and implementation, memory hierarchy, I/O handling

Defining Performance
Performance means different things to different people. An analogy from the airline industry: cruising speed (how fast), flight range (how far), passengers (how many).

Airplane          Passengers  Cruising range (miles)  Cruising speed (mph)  Passenger throughput (passengers × mph)
Boeing 777        375         4630                    610                   228,750
Boeing 747        470         4150                    610                   286,700
BAC/Sud Concorde  132         4000                    1350                  178,200
Douglas DC-8-50   146         8720                    544                   79,424

Criteria for performance evaluation differ among users and designers.

Performance Metrics Response (execution) time: Time between the start and completion of a task Measures user perception of the system speed Common in reactive and time critical systems, single-user computer, etc. Throughput: Total number of tasks done in a given time Most relevant to batch processing (billing, credit card processing, etc.) Mainly used for input/output systems (disk access, printer, etc.) Decreasing response time always improves throughput

Performance Metric Examples Replacing the processor of a computer with a faster version Both response time AND throughput Adding additional processors to a system that uses multiple processors for separate tasks (e.g. handling of airline reservations system) Throughput but NOT response time

Response-Time Metric
Maximizing performance means minimizing response (execution) time:
Performance = 1 / Execution time

Response-Time Metric (cont.)
Performance of processor P2 is better than that of P1 if, for a given workload L, P2 takes less time to execute L than P1:
Performance(P2) > Performance(P1) w.r.t. L  ⇔  Execution time(P2) < Execution time(P1)
Relative performance is the ratio for the same workload:
Speedup = Performance(P2) / Performance(P1) = Execution time(P1) / Execution time(P2)

Designer's Performance Metrics
Users and designers use different metrics; designers look at the bottom line of program execution:
CPU execution time for a program = CPU clock cycles for the program × Clock cycle time = CPU clock cycles for the program / Clock rate
To enhance hardware performance, designers focus on reducing the clock cycle time and the number of cycles per program. Many techniques that decrease the number of clock cycles also increase the clock cycle time or the average number of cycles per instruction (CPI).

Example
A program runs in 10 seconds on computer A, which has a 400 MHz clock. We want a computer B that runs the program in 6 seconds. A substantial increase in clock rate is possible, but it would cause computer B to require 1.2 times as many clock cycles as computer A. What should the clock rate of computer B be?

CPU time(A) = clock cycles(A) / clock rate(A)
10 sec = clock cycles(A) / (400 × 10^6 cycles/sec)
clock cycles(A) = 10 sec × 400 × 10^6 cycles/sec = 4 × 10^9 cycles

Using the same formula for the faster computer B:
6 sec = clock cycles(B) / clock rate(B) = 1.2 × clock cycles(A) / clock rate(B) = 4.8 × 10^9 cycles / clock rate(B)
clock rate(B) = 4.8 × 10^9 cycles / 6 sec = 800 × 10^6 cycles/sec = 800 MHz
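
A minimal Python check of this calculation, using exactly the numbers from the example:

```python
# Computer A: 10 s at 400 MHz; computer B needs 1.2x the cycles and must finish in 6 s.
time_a = 10.0            # seconds
clock_rate_a = 400e6     # cycles per second
cycles_a = time_a * clock_rate_a           # 4e9 cycles
cycles_b = 1.2 * cycles_a                  # 4.8e9 cycles
clock_rate_b = cycles_b / 6.0              # cycles per second needed for a 6 s run
print(f"{clock_rate_b / 1e6:.0f} MHz")     # 800 MHz
```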

Calculation of CPU Time
CPU time = Instruction count × CPI × Clock cycle time
CPU time = (Instruction count × CPI) / Clock rate
CPU time = (Instructions / Program) × (Clock cycles / Instruction) × (Seconds / Clock cycle)

Component of performance              Units of measure
CPU execution time for a program      Seconds for the program
Instruction count                     Instructions executed for the program
Clock cycles per instruction (CPI)    Average number of clock cycles per instruction
Clock cycle time                      Seconds per clock cycle
Clock rate                            Clock cycles per second

CPU Time (Cont.)
CPU execution time can be measured by running the program, and the clock rate is usually published by the manufacturer, but measuring CPI and instruction count is non-trivial. Instruction counts can be measured by software profiling, by an architecture simulator, or by hardware counters on some architectures. The CPI depends on many factors, including the processor structure, the memory system, the mix of instruction types, and the implementation of those instructions.

CPU Time (Cont.)
Designers sometimes use the following formula:
CPU clock cycles = Σ (i = 1 to n) CPI_i × C_i
where C_i is the count of instructions of class i executed, CPI_i is the average number of cycles per instruction for class i, and n is the number of instruction classes.

Example
We have two implementations of the same instruction set architecture. Machine A has a clock cycle time of 1 ns and a CPI of 2.0 for some program; machine B has a clock cycle time of 2 ns and a CPI of 1.2 for the same program. Which machine is faster for this program, and by how much?
Both execute the same instructions. Assume the number of instructions is N:
CPU clock cycles(A) = N × 2.0
CPU clock cycles(B) = N × 1.2
CPU time(A) = CPU clock cycles(A) × Clock cycle time(A) = N × 2.0 × 1 ns = 2 N ns
CPU time(B) = CPU clock cycles(B) × Clock cycle time(B) = N × 1.2 × 2 ns = 2.4 N ns
Therefore machine A is faster by the ratio:
CPU performance(A) / CPU performance(B) = CPU time(B) / CPU time(A) = 2.4 N ns / 2 N ns = 1.2
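
The same comparison as a short Python sketch using CPU time = instruction count × CPI × clock cycle time; N is arbitrary here, since the ratio does not depend on it.

```python
def cpu_time_ns(instr_count, cpi, cycle_time_ns):
    """CPU time (ns) = instruction count * CPI * clock cycle time."""
    return instr_count * cpi * cycle_time_ns

N = 1_000_000                        # any instruction count works; the ratio is independent of N
time_a = cpu_time_ns(N, 2.0, 1.0)    # machine A: CPI 2.0, 1 ns cycle
time_b = cpu_time_ns(N, 1.2, 2.0)    # machine B: CPI 1.2, 2 ns cycle
print(round(time_b / time_a, 2))     # 1.2 -> machine A is 1.2x faster
```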

Comparing Code Segments
A compiler designer is trying to decide between two code sequences for a particular machine. The hardware designers have supplied the following facts:

Instruction class    CPI for this instruction class
A                    1
B                    2
C                    3

For a particular high-level language statement, the compiler writer is considering two code sequences with the following instruction counts:

Code sequence    Instruction count per class: A  B  C
1                                             2  1  2
2                                             4  1  1

Which code sequence executes the fewest instructions? Which will be faster? What is the CPI for each sequence?

Instructions executed (Σ C_i):
Sequence 1: 2 + 1 + 2 = 5 instructions
Sequence 2: 4 + 1 + 1 = 6 instructions

Execution cycles (Σ CPI_i × C_i):
Sequence 1: (2 × 1) + (1 × 2) + (2 × 3) = 10 cycles
Sequence 2: (4 × 1) + (1 × 2) + (1 × 3) = 9 cycles

CPI (CPU clock cycles / Instruction count):
Sequence 1: 10 / 5 = 2.0 cycles per instruction
Sequence 2: 9 / 6 = 1.5 cycles per instruction

So sequence 1 executes fewer instructions, but sequence 2 is faster and has the lower CPI.
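
The same arithmetic as a small Python sketch, using the CPIs and instruction counts given in the problem:

```python
cpi = {"A": 1, "B": 2, "C": 3}               # cycles per instruction, by instruction class
sequences = {
    "sequence 1": {"A": 2, "B": 1, "C": 2},  # instruction counts per class
    "sequence 2": {"A": 4, "B": 1, "C": 1},
}

for name, counts in sequences.items():
    instructions = sum(counts.values())
    cycles = sum(cpi[c] * n for c, n in counts.items())
    print(f"{name}: {instructions} instructions, {cycles} cycles, CPI = {cycles / instructions}")
# sequence 1: 5 instructions, 10 cycles, CPI = 2.0
# sequence 2: 6 instructions, 9 cycles, CPI = 1.5
```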

Metrics of Performance (figure: levels of a computer system, from the user's view down to the designer's view, with the performance metric natural at each level)
Application, programming language, compiler: operations per second
ISA: (millions of) instructions per second, i.e. MIPS
Datapath, control, function units: (millions of) floating-point operations per second, i.e. MFLOP/s; megabytes per second
Transistors, wires, pins: cycles per second (clock rate)
Maximizing performance means minimizing response (execution) time: Performance = 1 / Execution time
Figure: Dave Patterson

Calculation of CPU Time (cont.)
CPU time = (Instruction count × CPI) / Clock rate
Which factors affect which terms:

                  Instr. count   CPI   Clock rate
Program           X
Compiler          X              X
Instruction set   X              X
Organization                     X     X
Technology                             X

CPU clock cycles = Σ (i = 1 to n) CPI_i × C_i
where C_i is the count of instructions of class i executed, CPI_i is the average number of cycles per instruction for that instruction class, and n is the number of different instruction classes.

Can Hardware-Independent Metrics Predict Performance? (figure: for several historical machines (B5500, KDF9, ICL 1907, ATLAS, CDC 6600, NU 1108), execution time is compared with instructions executed, code size in instructions, and code size in bits; these hardware-independent metrics do not track actual execution time well)

Performance Reports
The guiding principle is reproducibility: report the environment and the experiment setup. Example system description:

Hardware
  Model number: Powerstation 550
  CPU: 41.67-MHz POWER 4164
  FPU (floating point): Integrated
  Number of CPUs: 1
  Cache size per CPU: 64K data / 8K instruction
  Memory: 64 MB
  Disk subsystem: 2 400-MB SCSI
  Network interface: N/A
Software
  OS type and revision: AIX Ver. 3.1.5
  Compiler revision: AIX XL C/6000 Ver. 1.1.5; AIX XL Fortran Ver. 2.2
  Other software: None
  File system type: AIX
  Firmware level: N/A
System
  Tuning parameters: None
  Background load: None
  System state: Multi-user (single-user login)

Comparing & Summarizing Performance
A wrong summary can be confusing: is A 10x faster than B, or B 10x faster than A? Total execution time is a consistent measure, and relative execution times for the same workload can be informative.

                      Computer A   Computer B
Program 1 (seconds)   1            10
Program 2 (seconds)   1000         100
Total time (seconds)  1001         110

CPU performance(B) / CPU performance(A) = Total execution time(A) / Total execution time(B) = 1001 / 110 = 9.1

Execution time is the only valid and unimpeachable measure of performance.
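
A tiny sketch of the table's bottom line in Python: sum the per-program times and take the ratio of the totals.

```python
times_a = {"Program 1": 1, "Program 2": 1000}   # seconds on computer A
times_b = {"Program 1": 10, "Program 2": 100}   # seconds on computer B

total_a = sum(times_a.values())                 # 1001
total_b = sum(times_b.values())                 # 110
print(total_a, total_b, round(total_a / total_b, 1))   # 1001 110 9.1 -> B is 9.1x faster overall
```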

Performance Summary (Cont.)
Arithmetic mean (AM) = (1 / n) × Σ (i = 1 to n) Execution_Time_i
Weighted arithmetic mean (WAM) = Σ (i = 1 to n) w_i × Execution_Time_i, where Σ w_i = 1 and w_i ≥ 0

           Time on A   Time on B   Norm. to A (A, B)   Norm. to B (A, B)
Program 1  1           10          1, 10               0.1, 1
Program 2  1000        100         1, 0.1              10, 1
AM         500.5       55          1, 5.05             5.05, 1

The weighted arithmetic mean summarizes performance while tracking execution time; the weights can adjust for different running times, balancing the contribution of each benchmark. Never use the AM to summarize execution times normalized to a reference machine.
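
A small Python sketch of the arithmetic and weighted arithmetic means for the table above. The two weight vectors are illustrative assumptions, chosen to show that the weighting can change which machine looks better.

```python
times_a = [1, 1000]      # seconds for programs 1 and 2 on computer A
times_b = [10, 100]      # seconds on computer B

def am(times):
    """Arithmetic mean of execution times."""
    return sum(times) / len(times)

def wam(times, weights):
    """Weighted arithmetic mean; weights must be non-negative and sum to 1."""
    assert all(w >= 0 for w in weights) and abs(sum(weights) - 1.0) < 1e-9
    return sum(w * t for w, t in zip(weights, times))

print(am(times_a), am(times_b))                                    # 500.5 55.0
print(wam(times_a, [0.5, 0.5]), wam(times_b, [0.5, 0.5]))          # equal weights: same as the AM
print(wam(times_a, [0.999, 0.001]), wam(times_b, [0.999, 0.001]))  # mostly program 1: now A wins
```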

Performance Summary (Cont.)
The geometric mean is suitable for reporting average normalized execution time:
Geometric mean (GM) = (Π (i = 1 to n) Execution_Time_Ratio_i)^(1/n)
GM(X_i) / GM(Y_i) = GM(X_i / Y_i)

Adding a GM row to the previous table:

           Time on A   Time on B   Norm. to A (A, B)   Norm. to B (A, B)
Program 1  1           10          1, 10               0.1, 1
Program 2  1000        100         1, 0.1              10, 1
AM         500.5       55          1, 5.05             5.05, 1
GM         31.62       31.62       1, 1                1, 1
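
And a sketch of the geometric mean's key property: the GM of normalized times gives the same answer whichever machine is used as the reference (the data are the two programs from the table).

```python
import math

times_a = [1, 1000]     # seconds on computer A
times_b = [10, 100]     # seconds on computer B

def gm(values):
    """Geometric mean of a list of positive values."""
    return math.prod(values) ** (1 / len(values))

norm_to_a = [b / a for a, b in zip(times_a, times_b)]   # B normalized to A: [10, 0.1]
norm_to_b = [a / b for a, b in zip(times_a, times_b)]   # A normalized to B: [0.1, 10]
print(gm(times_a), gm(times_b))        # 31.62... 31.62...
print(gm(norm_to_a), gm(norm_to_b))    # 1.0 1.0 -> the comparison is reference-independent
```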

Amdahl's Law
A common theme in hardware design is to make the common case fast. For example, increasing the clock rate does not affect memory access time, and adding a floating-point unit does not speed up integer ALU operations. The performance enhancement possible with a given improvement is limited by the amount that the improved feature is used:
Execution time after improvement = (Execution time affected by the improvement / Amount of improvement) + Execution time unaffected

Amdahl's Law (cont.)
Execution time after improvement = (Execution time affected by the improvement / Amount of improvement) + Execution time unaffected
Example: floating-point instructions are improved to run 2x faster, but only 10% of the actual instructions are floating point:
Exec_Time_new = Exec_Time_old × (0.9 + 0.1 / 2) = 0.95 × Exec_Time_old
Speedup_overall = Exec_Time_old / Exec_Time_new = 1 / 0.95 = 1.053
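
The example as a small Amdahl's-law helper in Python; the 0.10 fraction and 2x factor are the numbers from the example, and the second call simply illustrates the limit for an arbitrarily large improvement.

```python
def amdahl_speedup(fraction_enhanced, enhancement_speedup):
    """Overall speedup when only a fraction of execution time benefits from an improvement."""
    new_time = (1 - fraction_enhanced) + fraction_enhanced / enhancement_speedup
    return 1 / new_time

print(round(amdahl_speedup(0.10, 2.0), 3))    # 1.053: 10% of the time made 2x faster
print(round(amdahl_speedup(0.10, 1e12), 3))   # ~1.111: even an enormous improvement is capped at 1/0.9
```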

Performance Benchmarks
Many widely used benchmarks are small programs that have significant locality of instruction and data references. Universal benchmarks can be misleading, since hardware and compiler vendors may optimize for ONLY these programs. The best benchmarks are real applications, since they reflect end-user interest. Architectures may perform well for some applications and poorly for others. Compilation can boost performance by taking advantage of architecture-specific features, and application-specific compiler optimizations are becoming more popular.

Effect of Compilation (figure: SPEC benchmark performance for gcc, espresso, spice, doduc, nasa7, li, eqntott, matrix300, fpppp, and tomcatv with the base compiler vs. an enhanced compiler)
Application- and architecture-specific optimization can dramatically impact performance.

The SPEC Benchmarks
SPEC (Standard Performance Evaluation Corporation) is a suite of benchmarks created by a group of companies to improve the measurement and reporting of CPU performance. SPEC CPU2000 is the latest CPU suite: 12 integer programs (written in C) and 14 floating-point programs (written in Fortran 77). There are also customized SPEC suites for other areas: graphics, mail, web, JVM, and so on.

The SPEC Benchmarks (cont.)
SPEC requires running the applications on real hardware, since the memory system has a significant effect; the report must include the exact configuration.
SPEC ratio = Execution time on a SUN SPARCstation 10/40 / Execution time on the measured machine
Bigger numeric values of the SPEC ratio indicate a faster machine (performance = 1 / execution time).

SPEC95 for Pentium and Pentium Pro (figure: SPECint95 and SPECfp95 ratings vs. clock rate, roughly 50 to 250 MHz, for the Pentium and Pentium Pro)
Comments & observations:
The performance measured may differ on otherwise identical hardware with different memory systems or compilers.
At the same clock rate, SPECint95 shows the Pentium Pro to be 1.4 to 1.5 times faster, and SPECfp95 shows it to be 1.7 to 1.8 times faster, mostly due to its enhanced internal architecture.
Processor performance increases less than the clock rate does, because of the memory system.
Large applications are more sensitive to the memory system.

MIPS as a Performance Metric
MIPS = Million Instructions Per Second, one of the simplest metrics, valid only in a limited context:
MIPS (native MIPS) = Instruction count / (Execution time × 10^6)
There are three problems with MIPS:
MIPS does not account for instruction capabilities.
MIPS can vary between programs on the same computer.
MIPS can vary inversely with performance (see the next example).
The appeal of MIPS is that it is simple and intuitive: faster machines have bigger MIPS.

Example
Consider a machine with the following three instruction classes and CPIs, and suppose we measure the code generated for the same program by two compilers:

Instruction class    CPI for this instruction class
A                    1
B                    2
C                    3

Code from     Instruction count (billions) per class: A   B   C
Compiler 1                                            5   1   1
Compiler 2                                            10  1   1

Assume the machine's clock rate is 500 MHz. Which code sequence executes faster according to execution time? According to MIPS?

Execution time = CPU clock cycles / Clock rate, where CPU clock cycles = Σ (i = 1 to n) CPI_i × C_i
Sequence 1: CPU clock cycles = (5 × 1 + 1 × 2 + 1 × 3) × 10^9 = 10 × 10^9 cycles; Execution time = (10 × 10^9) / (500 × 10^6) = 20 seconds
Sequence 2: CPU clock cycles = (10 × 1 + 1 × 2 + 1 × 3) × 10^9 = 15 × 10^9 cycles; Execution time = (15 × 10^9) / (500 × 10^6) = 30 seconds

MIPS = Instruction count / (Execution time × 10^6)
Sequence 1: (5 + 1 + 1) × 10^9 / (20 × 10^6) = 350 MIPS
Sequence 2: (10 + 1 + 1) × 10^9 / (30 × 10^6) = 400 MIPS

So the code from compiler 2 has the higher MIPS rating, yet the code from compiler 1 actually runs faster.
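
The full comparison in Python, using the instruction counts, CPIs, and 500 MHz clock from the example:

```python
cpi = {"A": 1, "B": 2, "C": 3}                     # cycles per instruction, by class
clock_rate = 500e6                                 # 500 MHz
code = {
    "compiler 1": {"A": 5e9, "B": 1e9, "C": 1e9},  # instruction counts
    "compiler 2": {"A": 10e9, "B": 1e9, "C": 1e9},
}

for name, counts in code.items():
    cycles = sum(cpi[c] * n for c, n in counts.items())
    exec_time = cycles / clock_rate                # seconds
    mips = sum(counts.values()) / (exec_time * 1e6)
    print(f"{name}: {exec_time:.0f} s, {mips:.0f} MIPS")
# compiler 1: 20 s, 350 MIPS  <- lower MIPS, but the faster program
# compiler 2: 30 s, 400 MIPS
```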

Native, Peak & Relative MIPS
Peak MIPS: choose the instruction mix that maximizes the MIPS rating (i.e., minimizes the CPI).
Relative MIPS: compares machines against an agreed-upon reference machine (e.g., the VAX 11/780), which allows comparisons even across different instruction sets. However, the reference machine may become obsolete and no longer exist. Relative MIPS is practical for tracking the evolving design of the same computer.
Relative MIPS = (Execution time_reference / Execution time_unrated) × MIPS_reference

Synthetic Benchmarks
Artificial programs constructed to match the characteristics of a large set of real programs. Whetstone and Dhrystone are the most popular:
Whetstone: scientific programs (in Algol, later Fortran); performance is reported in Whetstones per second, the number of executions of one iteration of the Whetstone benchmark per second.
Dhrystone: systems programs (in Ada, later C).

Synthetic Benchmarks (cont.)
Synthetic benchmarks suffer from the following drawbacks:
1. They may not reflect user interests, since they are not real applications.
2. They do not reflect real program behavior (e.g., memory access patterns).
3. Compilers and hardware can inflate the performance of these programs far beyond what the same optimizations achieve for real programs.

Final Remarks
Designing for performance only, without considering cost, is unrealistic. In supercomputing, performance is the primary and dominant goal, while low-end personal and embedded computers are extremely cost driven. Performance depends on three major factors: the number of instructions executed, the number of cycles consumed per instruction, and the clock rate. The art of computer design lies not in plugging numbers into a performance equation, but in accurately determining how design alternatives will affect performance and cost.

Conclusion
Summary: performance reports, summaries, and comparisons (reproducibility, arithmetic and weighted arithmetic means); widely used benchmark programs (SPEC, Whetstone, and Dhrystone); example industry metrics (e.g., MIPS, MFLOPS).
Increases in CPU performance can come from three sources:
1. Increases in clock rate
2. Improvements in processor organization that lower the CPI
3. Compiler enhancements that lower the instruction count or generate instructions with lower average CPI
Next lecture: instruction set architecture.