Reduced Instruction Set Computer


RISC - Reduced Instruction Set Computer

By reducing the number of instructions a processor supports, and thereby the complexity of the chip, it is possible to make individual instructions execute faster and achieve a net gain in performance, even though more instructions may be required to accomplish a task. RISC trades off instruction set complexity for instruction execution time.

RISC Features

- Large register set: having more registers allows memory accesses to be minimized.
- Load/store architecture: operating on data in memory directly is among the most expensive operations in terms of clock cycles, so only load and store instructions access memory; all other instructions are register-to-register.
- Fixed-length instruction encoding: almost all instructions have the same size and format, which simplifies instruction fetch and decode logic and allows easy implementation of pipelining.
- All instructions execute in a single cycle, except branch instructions, which require two.
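The load/store trade-off can be sketched numerically. The fragment below compares the same task (incrementing a value in memory) as one CISC-style memory-operand instruction versus a RISC-style load/add/store sequence; the cycle counts are illustrative assumptions for the sketch, not measurements of any real processor.

```python
# Assumed, illustrative cycle counts (not real data for any specific CPU):
# a CISC memory-operand add is a single multi-cycle instruction, while a
# RISC machine uses three register instructions of one cycle each.

# CISC: one "add [mem], 1" instruction, assumed to take 4 cycles
cisc_instructions = 1
cisc_cycles = cisc_instructions * 4

# RISC: lw r1, mem ; addi r1, r1, 1 ; sw r1, mem  -- one cycle each once pipelined
risc_instructions = 3
risc_cycles = risc_instructions * 1

print(f"CISC: {cisc_instructions} instruction, {cisc_cycles} cycles")
print(f"RISC: {risc_instructions} instructions, {risc_cycles} cycles")
```

Under these assumptions the RISC version needs more instructions (and more program memory) yet fewer total cycles, which is exactly the trade-off described above.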

CISC - Complex Instruction Set Computer

Philosophy: hardware is always faster than software. Objective: the instruction set should be as powerful as possible. With a powerful instruction set, fewer instructions (and less memory) are needed to complete the same task than on a RISC machine. CISC was developed at a time (the early 1960s) when memory technology was not so advanced: memory was small (measured in kilobytes) and expensive. For embedded systems, however, especially Internet appliances, memory efficiency comes into play again, particularly for chip area and power.

Comparison

CISC                                      RISC
Any instruction may reference memory      Only load/store reference memory
Many instructions & addressing modes      Few instructions & addressing modes
Variable instruction formats              Fixed instruction formats
Single register set                       Multiple register sets
Multi-clock-cycle instructions            Single-clock-cycle instructions
Micro-program interprets instructions     Hardware (FSM) executes instructions
Complexity is in the micro-program        Complexity is in the compiler
Little to no pipelining                   Highly pipelined
Small program code size                   Large program code size

Which is better, RISC or CISC?

Analogy (Chakravarty, 1994)

Construct a 5-foot wall. Method A: use a large number of small concrete blocks. Method B: use a few large concrete blocks. Which method is better? The amount of work done is the same either way. Method A has more blocks to stack, but each is easier and faster to carry. Method B has fewer blocks to stack, but the process is slowed by the weight of each block. The distinction lies in how the work is distributed.

RISC versus CISC

RISC machines: Sun SPARC, SGI MIPS, HP PA-RISC. CISC machines: Intel 80x86, Motorola 680x0. What really distinguishes RISC from CISC these days lies in the architecture, not in the instruction set. CISC arises whenever there is a disparity in speed between CPU operations and memory accesses, due to technology or cost. What about combining both ideas? Intel's P6 architecture (the Pentium Pro line, descended from the 8086) is externally CISC but internally RISC, and the Intel IA-64 executes many instructions in parallel.

Pipelining (Designing, M. J. Quinn, 1987)

Instruction pipelining is the use of pipelining to allow more than one instruction to be in some stage of execution at the same time. On the Ferranti ATLAS (1963), pipelining improved the average time per instruction by 375%. The ATLAS memory could not keep up with the CPU, so a cache was needed: cache memory is a small, fast memory unit used as a buffer between a processor and primary memory.
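The gain from instruction pipelining can be estimated with the standard idealized formula: with k stages, n instructions take roughly k + (n - 1) cycles instead of n * k. A minimal sketch (the stage and instruction counts below are illustrative):

```python
def pipeline_speedup(n_instructions: int, n_stages: int) -> float:
    """Ideal speedup of a k-stage pipeline over a non-pipelined machine,
    ignoring hazards, stalls, and branch penalties."""
    unpipelined = n_instructions * n_stages      # each instruction runs start to finish
    pipelined = n_stages + (n_instructions - 1)  # fill the pipe, then one result per cycle
    return unpipelined / pipelined

# With a 5-stage pipeline and 1000 instructions the speedup approaches 5x.
print(round(pipeline_speedup(1000, 5), 2))
```

As n grows, the speedup approaches the number of stages, which is why deeper pipelines were an attractive lever; in practice hazards and stalls keep real machines below this ideal.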

Memory Hierarchy

Faster (top) to cheaper, with more capacity (bottom):
- Registers (pipelining)
- Cache memory
- Primary (real) memory
- Virtual memory (disk, swapping)
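The cost of this hierarchy is commonly summarized by the average memory access time, AMAT = hit time + miss rate x miss penalty. The sketch below uses assumed, illustrative latencies (not figures from the source):

```python
def amat(hit_time: float, miss_rate: float, miss_penalty: float) -> float:
    """Average memory access time in cycles: every access pays the hit time,
    and the fraction of accesses that miss additionally pays the miss penalty."""
    return hit_time + miss_rate * miss_penalty

# Assumed numbers: 1-cycle cache hit, 5% miss rate, 100-cycle trip to primary memory.
print(amat(hit_time=1, miss_rate=0.05, miss_penalty=100))
```

Even a small cache pays off: under these assumptions the average access costs 6 cycles instead of the 100 cycles a cacheless trip to primary memory would take.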

Pipelining versus Parallelism (Designing, M. J. Quinn, 1987)

Most high-performance computers exhibit a great deal of concurrency, but it is not desirable to call every modern computer a parallel computer. Pipelining and parallelism are two methods used to achieve concurrency: pipelining increases concurrency by dividing a computation into a number of steps, while parallelism uses multiple resources to increase concurrency.