Microprocessors

Von Neumann architecture

The first computers used a single fixed program (like a numeric calculator). To change the program, one had to re-wire, re-structure, or re-design the computer. The people who did this were not called computer programmers, as they would be today, but computer architects. A Von Neumann computer uses a single memory to hold both instructions and data.

The program is written in an appropriate language rather than hardwired into the computer itself; the computer is re-programmable. In a Von Neumann computer, programs can be seen as data; as a consequence, a malfunctioning program can crash the computer.

In a Von Neumann processor an instruction is read from memory and decoded, the memory location the instruction refers to is fetched, the operation is performed, and the result is written back to memory. The term von Neumann architecture dates from June 1945 and is named after the mathematician John von Neumann, although the architecture was not designed by von Neumann alone.
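
As a minimal sketch of this cycle, the C program below models a toy Von Neumann machine in which a single array holds both the program and its data; the opcodes, addresses, and program are invented purely for illustration:

    #include <stdio.h>

    /* Toy Von Neumann machine: one memory array holds both code and data. */
    enum { HALT = 0, LOAD = 1, ADD = 2, STORE = 3 };   /* hypothetical opcodes */

    int main(void) {
        /* memory[0..6]: program, memory[10..12]: data (A, B, C) */
        int memory[16] = {0};
        memory[0] = LOAD;  memory[1] = 10;      /* acc  = mem[10]          */
        memory[2] = ADD;   memory[3] = 11;      /* acc += mem[11]          */
        memory[4] = STORE; memory[5] = 12;      /* mem[12] = acc           */
        memory[6] = HALT;
        memory[10] = 3; memory[11] = 4;         /* data: A = 3, B = 4      */

        int pc = 0, acc = 0;
        for (;;) {
            int opcode  = memory[pc];           /* fetch the instruction    */
            int operand = memory[pc + 1];       /* fetch its operand        */
            pc += 2;                            /* advance program counter  */
            if (opcode == HALT) break;          /* decode ...               */
            else if (opcode == LOAD)  acc = memory[operand];   /* ... execute */
            else if (opcode == ADD)   acc += memory[operand];
            else if (opcode == STORE) memory[operand] = acc;   /* write back */
        }
        printf("C = %d\n", memory[12]);         /* prints C = 7 */
        return 0;
    }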

The Von Neumann bottleneck

The separation between the CPU and memory leads to what is known as the von Neumann bottleneck. The throughput (data transfer rate) between the CPU and memory is low in comparison with the amount of memory available and the rate at which the CPU can work. As a result, the CPU is continuously forced to wait for data to be transferred to or from memory. Since CPU speed and memory size have increased much faster than the throughput between the two, the bottleneck has become more severe. A cache memory between the CPU and main memory helps to alleviate the problem.
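
A rough sketch of how a cache helps, again in C: a tiny direct-mapped cache sits in front of main memory, and repeated accesses to a small working set hit the cache instead of the slow memory. The sizes, line format, and access pattern are made up for illustration:

    #include <stdio.h>

    #define MEM_WORDS   1024
    #define CACHE_LINES   16          /* tiny direct-mapped cache, one word per line */

    static int  memory[MEM_WORDS];
    static int  cache_data[CACHE_LINES];
    static int  cache_tag[CACHE_LINES];
    static int  cache_valid[CACHE_LINES];
    static long hits, misses;

    /* Every read goes through the cache; only misses touch the slow main memory. */
    static int read_word(int addr) {
        int line = addr % CACHE_LINES;
        int tag  = addr / CACHE_LINES;
        if (cache_valid[line] && cache_tag[line] == tag) {
            hits++;
            return cache_data[line];            /* fast path: served from the cache */
        }
        misses++;                               /* slow path: fetch from main memory */
        cache_valid[line] = 1;
        cache_tag[line]   = tag;
        cache_data[line]  = memory[addr];
        return cache_data[line];
    }

    int main(void) {
        long sum = 0;
        for (int pass = 0; pass < 100; pass++)  /* repeatedly scan a small working set */
            for (int addr = 0; addr < CACHE_LINES; addr++)
                sum += read_word(addr);
        printf("hits=%ld misses=%ld sum=%ld\n", hits, misses, sum);  /* almost all hits */
        return 0;
    }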

The Harvard architecture

In the Harvard architecture there are separate storage and signal pathways for instructions and data. The word width, timing, implementation technology, and memory address structure can differ for program and data. Instruction memory is often wider than data memory. In some systems, instructions can be stored in read-only memory, while data memory generally requires random-access memory. Typically there is much more instruction memory than data memory, so instruction addresses are wider than data addresses.

In a Von Neumann architecture the CPU can be either reading an instruction or reading/writing data from/to memory, but not both at the same time, since instructions and data use the same signal pathways and memory. A computer following the Harvard architecture can be faster because it is able to fetch the next instruction at the same time it completes the current instruction (a phenomenon known as pipelining). Speed is gained at the expense of more complex electrical circuitry. Modern high-performance CPU designs incorporate aspects of both the Harvard and von Neumann architectures: on-chip cache memory is divided into an instruction cache and a data cache.
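
To make the contrast concrete, here is the same toy machine as above rearranged in the Harvard style, with separate (and differently sized) instruction and data memories; the opcodes, word widths, and program are again invented for illustration:

    #include <stdio.h>
    #include <stdint.h>

    /* Harvard-style toy machine: instructions and data live in separate memories,
     * so an instruction fetch and a data access never compete for the same path. */
    enum { HALT = 0, LOAD = 1, ADD = 2, STORE = 3 };

    uint16_t imem[8] = { LOAD, 0, ADD, 1, STORE, 2, HALT, 0 };  /* program memory (could be ROM, wider words) */
    uint8_t  dmem[8] = { 3, 4, 0 };                             /* data memory (RAM, narrower words)          */

    int main(void) {
        int pc = 0, acc = 0;
        for (;;) {
            uint16_t opcode  = imem[pc];        /* instruction fetch uses program memory ... */
            uint16_t operand = imem[pc + 1];
            pc += 2;
            if (opcode == HALT) break;
            if (opcode == LOAD)  acc = dmem[operand];           /* ... data accesses use data memory */
            if (opcode == ADD)   acc += dmem[operand];
            if (opcode == STORE) dmem[operand] = (uint8_t)acc;
        }
        printf("C = %d\n", dmem[2]);            /* prints C = 7 */
        return 0;
    }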

Complex instruction set computer (CISC)

A complex instruction set computer (CISC) is a microprocessor instruction set architecture in which each instruction can execute several low-level operations, such as a load from memory, an arithmetic operation, and a memory store. The terms register-memory and memory-memory refer to the same concept. In the early days of computers, compilers did not exist; programming was done either in machine code or in assembly language. To make programming easier, computer architects created more and more complex instructions, which were direct representations of high-level functions of high-level programming languages. The attitude at the time was that hardware design was easier than compiler design, so the complexity went into the hardware.

Another force that encouraged complexity was the scarcity of memory. Since every byte was precious (an entire system might have only a few kilobytes of storage), the industry moved to features such as highly encoded instructions, variable-sized instructions, instructions that performed multiple operations, and instructions that combined data movement with calculation. For these reasons, CPU designers tried to make each instruction do as much work as possible. This led to single instructions that did all of the work: load the two numbers to be added, add them, and then store the result back directly to memory. The compact nature of CISC results in smaller program sizes and fewer calls to main memory.
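
As a rough illustration of "doing all the work in a single instruction", the C sketch below models a hypothetical memory-to-memory ADD: one opcode that internally performs two loads, an addition, and a store (the addresses and encoding are made up):

    #include <stdio.h>

    static int memory[32];

    /* Hypothetical memory-to-memory instruction: ADD dst, src1, src2.
     * A single opcode triggers two loads, an addition, and a store inside the CPU. */
    static void exec_add_mem(int dst, int src1, int src2) {
        int a = memory[src1];          /* micro-operation 1: load first operand  */
        int b = memory[src2];          /* micro-operation 2: load second operand */
        memory[dst] = a + b;           /* micro-operations 3-4: add and store    */
    }

    int main(void) {
        memory[10] = 3;                /* A */
        memory[11] = 4;                /* B */
        exec_add_mem(12, 10, 11);      /* one "instruction": C = A + B           */
        printf("C = %d\n", memory[12]);
        return 0;
    }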

While many designs achieved the aim of higher throughput at lower cost and also allowed high-level language constructs to be expressed by fewer instructions, it was observed that programs did not actually take advantage of these features. This observation marks the point of departure from CISC to RISC. Examples of CISC processors include the Intel x86 CPUs, as well as the Intel 8051 microcontroller. The terms RISC and CISC have become less meaningful with the continued evolution of both CISC and RISC designs and implementations.

Reduced instruction set computer (RISC)

The reduced instruction set computer, or RISC, is a CPU design philosophy that favors a reduced and simpler instruction set. The term load-store refers to the same concept. The idea was originally inspired by the discovery that many of the features included in traditional CPU designs (i.e., CISC) to facilitate coding were being ignored by programs and programmers. In the late 1970s researchers demonstrated that the majority of the many addressing modes present in CISC microprocessors were ignored by most programs. This was a side effect of the increasing use of compilers to generate programs, as opposed to writing them in assembly language. In other words, compilers were not able to exploit the features of a CISC assembly language.

At about the same time, CPUs started to run faster than the memory they talked to. It became apparent that more registers (and later caches) would be needed to support these higher operating frequencies. These additional registers and cache memories would require sizeable chip or board area, which could be made available if the complexity of the CPU were reduced. Since real-world programs spent most of their time executing very simple operations, some researchers decided to focus on making those common operations as simple and as fast as possible. The goal of RISC was to make instructions so simple that each one could be executed in a single clock cycle.
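
For contrast with the CISC sketch above, here is a small, hypothetical load-store model in C: only LOAD and STORE touch memory, arithmetic happens only between registers, and the same C = A + B computation becomes a sequence of simple instructions (register numbers and addresses are invented):

    #include <stdio.h>

    static int memory[32];
    static int reg[8];                         /* small register file */

    /* Each routine models one simple, single-cycle RISC-style instruction. */
    static void op_load (int rd, int addr)       { reg[rd] = memory[addr]; }     /* LOAD  addr -> Rd  */
    static void op_add  (int rd, int rs, int rt) { reg[rd] = reg[rs] + reg[rt]; } /* ADD  Rs,Rt -> Rd */
    static void op_store(int rs, int addr)       { memory[addr] = reg[rs]; }     /* STORE Rs -> addr  */

    int main(void) {
        memory[10] = 3;                        /* A */
        memory[11] = 4;                        /* B */

        op_load (1, 10);                       /* LOAD  A, R1        */
        op_load (2, 11);                       /* LOAD  B, R2        */
        op_add  (3, 1, 2);                     /* ADD   R1, R2, R3   */
        op_store(3, 12);                       /* STORE R3, C        */

        printf("C = %d\n", memory[12]);        /* same result, more (but simpler) instructions */
        return 0;
    }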

However, RISC also had its drawbacks. Since a series of instructions is needed to complete even simple tasks, the total number of instructions read from memory is larger, and reading them therefore takes longer (see the Von Neumann bottleneck above). In the early 1980s it was thought that existing designs were reaching theoretical limits. Future improvements in speed would come primarily from improved semiconductor processes, that is, smaller features (transistors and wires) on the chip. The complexity of the chip would remain largely the same, but the smaller size would allow it to run at higher clock rates (Moore's law). The CDC 6600 supercomputer, designed in 1964 by Jim Thornton and Seymour Cray and often cited as a RISC precursor, had 74 op-codes, while the Intel 8086 has about 400.

RISC designs have led to a number of successful platforms and architectures, some of the larger ones being the PlayStation, PlayStation 2, PlayStation Portable and PlayStation 3, the Nintendo 64, Nintendo's GameCube and Wii, Microsoft's Xbox 360, and Palm PDAs.

Pipeline

An instruction is made up of micro-instructions. In a pipelined processor, the processor works on one micro-instruction of several different instructions at the same time. For example, the classic RISC pipeline is broken into five stages:

1. Instruction fetch
2. Instruction decode and register fetch
3. Execute
4. Memory access
5. Register write back

The key to pipelining is the observation that the processor can start reading the next instruction as soon as it finishes reading the last, meaning that it works on two instructions simultaneously: one is being read while the previous one is being decoded (two-stage pipelining). While no single instruction is completed any faster, the next instruction completes right after the previous one. The result is a much more efficient utilization of processor resources. Pipelining reduces the cycle time of a processor and hence increases instruction throughput, the number of instructions that can be executed per unit of time.

A typical CISC instruction to add two numbers might be ADD A, B, C, which adds the values found in memory locations A and B and puts the result in memory location C. In a pipelined processor the pipeline controller would break this into a series of instructions similar to:

    LOAD A, R1
    LOAD B, R2
    ADD R1, R2, R3
    STORE R3, C
    LOAD next instruction

The R locations are registers, temporary storage inside the CPU that is quick to access. The end result is the same: the numbers are added and the result placed in C, and the time taken to complete the addition is no different from the non-pipelined case (and possibly greater than in the CISC case). The key to understanding the advantage of pipelining is to consider what happens when this sequence is half-way done, at the ADD instruction for instance. At this point the circuitry responsible for loading data from memory is no longer being used and would normally sit idle. Instead, the pipeline controller fetches the next instruction from memory and starts loading the data it needs into registers. That way, when the ADD instruction completes, the data needed for the next ADD is already loaded and ready to go. The overall effective speed of the machine can be greatly increased because no part of the CPU sits idle. Virtually every microprocessor manufactured today uses at least two pipeline stages (the Atmel AVR and the PIC microcontrollers each have a two-stage pipeline).

Advantages of pipelining: the cycle time of the processor is reduced, thus increasing instruction bandwidth in most cases.
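
A back-of-the-envelope sketch of why this overlap pays off: assuming a hypothetical five-stage pipeline that completes one stage per clock and never stalls, N instructions take roughly N + 4 cycles instead of 5 * N. The numbers below are purely illustrative:

    #include <stdio.h>

    int main(void) {
        const int  stages = 5;                /* fetch, decode, execute, memory, write-back */
        const long n = 1000;                  /* instructions to execute (illustrative)     */

        long cycles_unpipelined = (long)stages * n;   /* each instruction runs start to finish alone */
        long cycles_pipelined   = n + (stages - 1);   /* ideal case: one instruction finishes per
                                                         cycle once the pipeline has filled          */

        printf("unpipelined: %ld cycles\n", cycles_unpipelined);   /* 5000 */
        printf("pipelined:   %ld cycles\n", cycles_pipelined);     /* 1004 */
        printf("speedup:     %.2fx\n",
               (double)cycles_unpipelined / (double)cycles_pipelined);  /* close to 5x */
        return 0;
    }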

Advantages of not pipelining: the processor executes only a single instruction at a time. This prevents branch delays (in effect, every branch is delayed) and problems caused by serial instructions being executed concurrently. Consequently the design is simpler and cheaper to manufacture. The instruction latency of a non-pipelined processor is slightly lower than that of a pipelined equivalent, because extra flip-flops must be added to the data path of a pipelined processor. A non-pipelined processor also has a stable instruction bandwidth, whereas the performance of a pipelined processor is much harder to predict and may vary more widely between different programs.

Many designs include pipelines as long as 7, 10, or even 31 stages (as in the Intel Pentium 4). The downside of a long pipeline is that when a program branches, the entire pipeline must be flushed, a problem that branch prediction helps to alleviate. The higher throughput of a pipeline falls short when the executed code contains many branches: the processor cannot know where to read the next instruction and must wait for the branch instruction to finish, leaving the pipeline behind it empty. After the branch is resolved, the next instruction has to travel all the way through the pipeline before its result becomes available and the processor appears to be working again. In the extreme case, the performance of a pipelined processor can theoretically approach that of an unpipelined processor, or even be slightly worse, if all but one of the pipeline stages are idle and a small overhead is present between stages.
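
Extending the same rough model, a branch flush can be approximated as a penalty of (pipeline depth - 1) cycles per mispredicted branch. The sketch below uses invented figures only to show how a deep pipeline loses its advantage when mispredictions are frequent:

    #include <stdio.h>

    /* Crude estimate of pipelined execution time in the presence of branch flushes. */
    static long cycles_with_branches(long n, int stages, double branch_frac, double mispredict_rate) {
        long ideal   = n + (stages - 1);                    /* perfectly filled pipeline       */
        long flushes = (long)(n * branch_frac * mispredict_rate);
        return ideal + flushes * (stages - 1);              /* each flush refills the pipeline */
    }

    int main(void) {
        const long n = 1000;                                /* illustrative instruction count  */
        /* Assume 20% of instructions are branches; compare a shallow and a deep
           pipeline with a poor (50%) and a good (5%) branch predictor. */
        printf("5-stage,  50%% mispredicted: %ld cycles\n", cycles_with_branches(n,  5, 0.2, 0.50));
        printf("5-stage,   5%% mispredicted: %ld cycles\n", cycles_with_branches(n,  5, 0.2, 0.05));
        printf("20-stage, 50%% mispredicted: %ld cycles\n", cycles_with_branches(n, 20, 0.2, 0.50));
        printf("20-stage,  5%% mispredicted: %ld cycles\n", cycles_with_branches(n, 20, 0.2, 0.05));
        return 0;
    }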

