
CODING and INFORMATION

We need encodings for data. How many questions must be asked to be certain where the ball is (among 16 positions)? Consider the cases: avg, worst, best.

P( Hit 1st ) = 1/16
P( Hit 2nd ) = P( Hit 2nd | Miss 1st ) P( Miss 1st ) = (1/15)(15/16) = 1/16
P( Hit 3rd ) = P( Hit 3rd | Miss 1st and 2nd ) P( Miss 1st and 2nd ) = (1/14)(14/15)(15/16) = 1/16

E( n ) = 1*(1/16) + 2*(1/16) + ... + 15*(1/16) + 15*(1/16)
       = (1 + 2 + 3 + ... + 15 + 15) / 16 = 135/16 = 8 1/2 - 1/16
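
A quick numeric check of that expected count (a minimal sketch; the 16-position guessing game is from the slide, the script is just the arithmetic):

    # Expected number of "is it here?" questions for a ball hidden
    # uniformly in one of 16 positions, probing one position at a time.
    from fractions import Fraction

    N = 16
    # Hitting on question i (i = 1..15) has probability 1/16; after 15
    # misses the 16th position is known without asking, so "question 15"
    # contributes twice.
    E = sum(Fraction(i, N) for i in range(1, N)) + Fraction(N - 1, N)
    print(E, float(E))  # 135/16 = 8.4375 = 8 1/2 - 1/16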

Expected (avg.) bits of information per message vs. avg. # bits sent using our code: we are sending more bits than the information content, but we are very close. MIN-length code ==> MAX compression ==> the most information bits in the least number of communicated bits.

Suppose there are n different "messages" to send, n = 2^k. Maximum entropy => equally likely: Prob( message-i ) = 1/n for every message-i. The expected information per message is

Sum_i [ -(1/n) log( 1/n ) ] = -n (1/n) log( 1/n ) = -log( 2^-k ) = k bits per message.

If we use a k-bit code for our messages, we are 100% compressed. (k-bit integers? Are they equally likely?)
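
A small sketch computing the entropy of n = 2^k equally likely messages (function name is mine; the formula is the slide's):

    import math

    def entropy_bits(probs):
        # Shannon entropy in bits: sum over messages of -p * log2(p).
        return -sum(p * math.log2(p) for p in probs if p > 0)

    k = 4
    n = 2 ** k
    uniform = [1 / n] * n
    print(entropy_bits(uniform))  # 4.0 bits per message: a k-bit code is optimal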

"message" could be a bit, a string of bits, a character, a page of characters,... Code words: 00 and 11 --- "0" and "1" Code words: 10 and 01 --- 1-bit errors: odd parity codeword indicates error. Works for k-bit messages w/ 1 parity bit, but only if 2-bit errors very unlikely (never occur?). 1-bit Error Correction w/ 3-bit code words: "0" ==> 000 "1" ==> 111 001 ==> "0" 011 ==> "1" 010 ==> "0" 101 ==> "1" 100 ==> "0" 110 ==> "1" 1-bit Correction, 2-bit Detection -- odd parity: 1-bit error corrected -- exactly two 1's: 2-bit error detected -- otherwise: no error How many extra bits are needed at minimum? Depends on noise in channel: Shannon Noisey Coding Theorem. Can you think of a scheme like the paritybit scheme that uses as few bits as possible? (See Reed-Solomon codes, for instance.) More bits, higher error probability.

d_i is a "digit", a symbol for a value: value( "d_i") Positional notation for numbers b is a value, the "base" of the number notation. There is a rule to find the value, given the symbols. unsigned 3-bit binary binary: --- digits = { "0", "1" } --- base = 2 Let's do some arithmetic. ADD: w/ Carry In: 0 0 0 0 1 1 1 0 1 1 1 0 0 0 1 0 1 0 1 0 0 1 1 1 Let's try SUBTRACTION

But first, let's look at a single column of subtraction.

All possible 1-bit subtractions:
0 - 0 = 0
0 - 1 = 1, borrow 1
1 - 0 = 1
1 - 1 = 0

All possible single columns (a - b - borrow-in = diff, borrow-out):

    a b bin | diff bout
    0 0  0  |  0    0
    0 0  1  |  1    1
    0 1  0  |  1    1
    0 1  1  |  0    1
    1 0  0  |  1    0
    1 0  1  |  0    0
    1 1  0  |  0    0
    1 1  1  |  1    1

3-bit, bit-wise NAND
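
The same table as a one-column full subtractor (a sketch, mirroring the full adder above):

    def full_sub(a, b, bin_):
        # One column of binary subtraction: returns (diff_bit, borrow_out).
        d = a - b - bin_
        return (d % 2, 1) if d < 0 else (d, 0)

    # Enumerate the whole table above.
    for a in (0, 1):
        for b in (0, 1):
            for bin_ in (0, 1):
                print(a, b, bin_, "->", full_sub(a, b, bin_))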

It's almost this simple in the LC3. This is a 3-bit version of the LC3 (sort of). Some function f converts the 4-bit opcode to the 2-bit ALU.ctl signal. In microcoded (µcoded) control, this function is implemented as control bits stored in a ROM.
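
A hedged sketch of that idea: the opcode-to-ALU.ctl function f as a lookup table, the way microcode stores control bits in a ROM. The opcode values and control encodings below are invented for illustration; the real LC3 encodings may differ.

    # Hypothetical "ROM": index by 4-bit opcode, read out 2-bit ALU control.
    ALU_ADD, ALU_AND, ALU_NOT, ALU_PASS = 0b00, 0b01, 0b10, 0b11

    ctl_rom = [ALU_PASS] * 16       # default for opcodes that don't use the ALU
    ctl_rom[0b0001] = ALU_ADD       # e.g. an ADD-style opcode
    ctl_rom[0b0101] = ALU_AND       # e.g. an AND-style opcode
    ctl_rom[0b1001] = ALU_NOT       # e.g. a NOT-style opcode

    def alu_ctl(opcode):
        # f: 4-bit opcode -> 2-bit ALU.ctl, as a ROM lookup.
        return ctl_rom[opcode & 0xF]

    print(bin(alu_ctl(0b0001)))     # ALU_ADD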

We have 8 possible 3-bit patterns (symbols). Choose an interpretation. What other number values are we interested in? NB--We represent the value using another encoding: base 10! Are other encodings useful?

2's-Complement Encoding: represent POSITIVE and NEGATIVE values. Sanity check: 0 - 1 = ? Which value makes sense?
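
A quick sketch listing the 8 possible 3-bit patterns under the unsigned and 2's-complement interpretations:

    for pattern in range(8):
        unsigned = pattern
        # 2's complement: the high bit carries weight -4 instead of +4.
        signed = pattern - 8 if pattern & 0b100 else pattern
        print(f"{pattern:03b}  unsigned={unsigned}  2's-comp={signed}")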

How do we do this simply, in general? Note: columns with borrows give a bit flip. Notice: the 1st non-zero (lowest set) bit of x gets copied to S; every column above it borrows:

    borrow       =  10
    subtract bit = - 1
    ------------------
    sum bit      =   1

NB -- above that bit, these are the negated bits of x. To get 2sComp( x ): negate the bits, add 1. This produces -x in 2's complement (regardless of whether x is + or -): negate the bits (aka 1's complement), then add 1. Simple logic: an inverter on each bit, carry-in to the lowest FA set to 1. ==> We can use the adder for signed subtraction.
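
A minimal sketch of negate-and-add-1 at a fixed width (the mask keeps the result to n bits):

    def twos_comp_neg(x, n_bits=3):
        # Compute -x in n-bit 2's complement: flip the bits, add 1.
        mask = (1 << n_bits) - 1
        return (~x + 1) & mask      # same as ((x ^ mask) + 1) & mask

    print(f"{twos_comp_neg(0b011):03b}")  # -3    -> 101
    print(f"{twos_comp_neg(0b101):03b}")  # -(-3) -> 011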

Instruction decoder: each opcode has its own 1-bit signal.

Overflow cases (the sign bits tell the story):

x > 0, y > 0 and x + y < 0:   0xxx + 0yyy = 1sss
x > 0, y < 0 and x - y < 0:   0xxx - 1yyy => 0xxx + 0zzz = 1sss
x < 0, y < 0 and x + y > 0:   1xxx + 1yyy = 0sss
x < 0, y > 0 and x - y > 0:   1xxx - 0yyy => 1xxx + 1zzz = 0sss
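
A sketch of those four cases as an n-bit overflow check (the sign-bit rule from the slide: overflow iff the operands' sign bits match and differ from the result's):

    def add_overflows(x, y, n_bits=4):
        # Detect signed overflow of x + y in n-bit 2's complement.
        mask = (1 << n_bits) - 1
        sign = 1 << (n_bits - 1)
        s = (x + y) & mask
        # Overflow iff x and y have the same sign bit and s has the other one.
        return (x & sign) == (y & sign) and (x & sign) != (s & sign)

    print(add_overflows(0b0111, 0b0001))  # 7 + 1 overflows 4 bits -> True
    print(add_overflows(0b0111, 0b1111))  # 7 + (-1) = 6, no overflow -> False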

x == 0 : S == carry in; carry out == 0.
x == 1 : S == carry in; carry out == 1.

3-bit, parallel-load, left-shift register:
Parallel write/load: we=1 and S=0: Q[2:0] <== D[2:0] after the next clock tick.
Shift left: we=1 and S=1: Q[2:0] <== { Q[1:0], IN } after the next tick.

What about signed numbers? Convert to unsigned. Multiply. Convert back.
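
A small behavioral sketch of that register (a Python stand-in for the hardware; names are mine):

    class ShiftReg3:
        # 3-bit register with parallel load (S=0) or left shift (S=1).
        def __init__(self):
            self.q = [0, 0, 0]              # Q[2], Q[1], Q[0]

        def tick(self, we, s=0, d=None, in_bit=0):
            if not we:
                return                      # hold
            if s == 0:
                self.q = list(d)            # Q[2:0] <== D[2:0]
            else:
                self.q = [self.q[1], self.q[2], in_bit]  # Q <== {Q[1:0], IN}

    r = ShiftReg3()
    r.tick(we=1, s=0, d=[0, 1, 1])  # load 011 (= 3)
    r.tick(we=1, s=1, in_bit=0)     # shift left: 011 -> 110 (doubling)
    print(r.q)                      # [1, 1, 0]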

x × 7 == ( xn xn-1 ... x1 x0 ) × ( 4 + 2 + 1 )
      == ( xn xn-1 ... x1 x0 ) × 1
       + ( xn xn-1 ... x1 x0 ) × 2
       + ( xn xn-1 ... x1 x0 ) × 4
      ==   xn xn-1 ... x1 x0
       +   xn xn-1 ... x1 x0 0
       +   xn xn-1 ... x1 x0 0 0

MULTIPLY: LSR: partial products, initially x. S: partial sum, initially 0. RSR: initially y. Z: all 0s. RSR's low bit MUXes Z or LSR into the adder.

What if y has a 0 bit? Then add 0 instead of shifted x: e.g., y = 0...101: add 0, not B.

        1 0 1 1
    ×   0 1 0 1
    -----------------
      0 0 0 0 0 0 0 0    (S initially 0)
    + 0 0 0 0 1 0 1 1    (y0 = 1: add x)
    + 0 0 0 0 0 0 0 0    (y1 = 0: add 0)
    + 0 0 1 0 1 1 0 0    (y2 = 1: add x shifted left 2)
    + 0 0 0 0 0 0 0 0    (y3 = 0: add 0)
    -----------------
      0 0 1 1 0 1 1 1

Can we simplify the multiplier? -- Get rid of the zero register and MUX. -- Use y_i to write-enable the S register. Can we speed up multiply? We currently iterate n times to multiply n-bit numbers. Add more hardware? How?
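
A sketch of that shift-add loop in software (the variable names echo the slide's registers):

    def shift_add_multiply(x, y, n_bits=4):
        # Unsigned shift-add multiply: n iterations, one per bit of y.
        s = 0                     # S: partial sum, initially 0
        lsr = x                   # LSR: partial product, initially x
        rsr = y                   # RSR: initially y
        for _ in range(n_bits):
            if rsr & 1:           # RSR's low bit selects LSR or 0
                s += lsr
            lsr <<= 1             # shift the partial product left
            rsr >>= 1             # next bit of y
        return s

    print(bin(shift_add_multiply(0b1011, 0b0101)))  # 0b110111 = 55 = 11 * 5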

left-shift( y ) == y × 2 ===> right-shift( y ) == y / 2. OK for division by a power of 2: r-shift by n == divide by 2^n. What if the divisor is not a power of 2? Divide by repeated subtraction (or repeated addition):

    def divbysubtraction(x, k):
        q = 0
        while x >= k:
            q += 1
            x = x - k
        return q          # x now holds the remainder

    def divbyaddition(x, k):
        q = 0
        y = 0
        while x - y >= k:
            q += 1
            y = y + k
        return q

Decimal long division does this a digit at a time, using multiples of powers of 10:
1. Try the n-th power of 10: subtract x - k × dn00...0.
2. Is the result non-negative? Yes: save dn00...0, x <== x - k × dn00...0.
3. Next power of 10: n <== n - 1.

INTEGER (unsigned) DIVISION: x = kq + r, where k = divisor, q = quotient, r = remainder (ignore r for now). FIND q. Move the (shifted) divisor RIGHT one bit position after each SUB; move q LEFT one bit position after each SUB (the new quotient bit is written into the lowest bit position).
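
A sketch of that shift-subtract (restoring) divider, under the assumption that we start with the divisor shifted left by n bits:

    def shift_subtract_divide(x, k, n_bits=4):
        # Unsigned restoring division: returns (quotient, remainder).
        q = 0
        kk = k << n_bits            # divisor shifted far left
        for _ in range(n_bits + 1):
            q <<= 1                 # quotient moves LEFT each step
            if x >= kk:             # does the trial subtraction succeed?
                x -= kk
                q |= 1              # write the new bit into the lowest position
            kk >>= 1                # divisor moves RIGHT each step
        return q, x

    print(shift_subtract_divide(13, 3))  # (4, 1): 13 = 3*4 + 1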

K-scaled integers: an n-bit integer x represents k*x; range = k*2^n. Can we get more consistent errors? We can't represent every number; we choose what type of errors to live with. In floating point, the part inside "( ... )" is essentially an integer; the exponent determines the scaling. ===> geometric-progression scaled integers.
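
A small sketch contrasting the two scalings (k-scaled integers have a constant absolute step k; exponent scaling makes the step proportional to the value):

    k = 0.01                        # scale factor for k-scaled integers

    def to_scaled(v):
        # Nearest representable k-scaled integer value.
        return round(v / k) * k

    # Absolute error is bounded by k/2 everywhere in range...
    print(abs(to_scaled(0.123) - 0.123))
    print(abs(to_scaled(12345.678) - 12345.678))
    # ...so the RELATIVE error is large for small values and tiny for
    # big ones. Floating point spends its bits the other way: roughly
    # constant relative error, because the exponent rescales the
    # integer fraction.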

How many bits do we need for 4 decimal digits of precision? 10^4 distinct values takes log2( 10^4 ) ≈ 13.3, so 14 bits. Well, maybe we can live with that? We have to stop somewhere.

Sorting is the most common operation on numerical data. Checking x > y seems hard for floats. Checking n > m for ints is easy: compute ( n - m ) and check the sign bit; if 0, then True. Can we check x > y using the integer hardware? That is, can we treat x and y as if they were integers, and do integer subtraction?

Since the exponent field sits above the fraction field, the float with the larger exponent looks larger as an integer, regardless of the fractional parts.
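
A sketch of the trick for IEEE-754 single precision via Python's struct: for non-negative floats, reinterpreting the bits as an unsigned integer preserves order.

    import struct

    def float_bits(x):
        # Reinterpret a 32-bit float's bit pattern as an unsigned int.
        return struct.unpack("<I", struct.pack("<f", x))[0]

    # For non-negative floats, integer comparison of the raw bits agrees
    # with float comparison: the biased exponent is in the high bits, so
    # a bigger exponent wins regardless of fraction.
    a, b = 1.5, 2.75
    print(float_bits(a) < float_bits(b), a < b)   # True True
    # (Negative floats need extra handling: sign/magnitude order flips.)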

E.G., 3-bit exponents: in 2's complement, negative exponents look LARGER than positive ones as unsigned ints, which breaks the integer-compare trick. Fix: store the exponent in excess-3, E = e + 011. Then negative exponents look smaller than positive exponents as unsigned ints.

    value e (2's comp)    E (excess-3)
    +3   011 + 011  ==>   110
    +2   010 + 011  ==>   101
    +1   001 + 011  ==>   100
     0   000 + 011  ==>   011
    -1   111 + 011  ==>   010
    -2   110 + 011  ==>   001
    -3   101 + 011  ==>   000 *
    -4   100 + 011  ==>   111 *

* These codes are reserved for special uses. The exponent values -3 and -4 are not allowed.
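
A tiny sketch of the bias conversion (mod 8 keeps the result to 3 bits):

    BIAS = 0b011   # excess-3

    def to_excess3(e):
        # Map a 2's-complement exponent e (-4..+3) to its excess-3 code.
        return (e + BIAS) % 8

    for e in range(3, -5, -1):
        print(f"e={e:+d}  E={to_excess3(e):03b}")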

So much for encoding data. We could go on to audio, video, ... But, back to noise and errors.

"message" could be a bit, a string of bits, a character, a page of characters, ...

Code words 00 and 11 are good data; 10 and 01 indicate 1-bit errors. The last bit is the "parity" bit; an odd-parity codeword indicates an error. Works for k-bit messages w/ 1 parity bit (if 2-bit errors are very unlikely).

1-bit Error Correction w/ 3-bit code words:
"0" ==> 000    "1" ==> 111
001 ==> "0"    011 ==> "1"
010 ==> "0"    101 ==> "1"
100 ==> "0"    110 ==> "1"

1-bit Correction, 2-bit Detection -- odd parity: 1-bit error, corrected -- exactly two 1's: 2-bit error, detected -- otherwise: no error.

How many extra bits, at minimum? Depends on the noise in the channel: Shannon's Noisy Coding Theorem. We use 4 bits for 1 bit of data.

1-bit error: can detect and correct. 2-bit error: cannot detect. What can we do about 2-bit errors? Add another parity bit. Then -- 1-bit error: detect + correct; 2-bit error: detect.

Hamming (7,4) code: find distances to all other code words.
GREEN parity covers data bits [ 3, 2, 0 ].
BLUE parity covers data bits [ 3, 1, 0 ].
RED parity covers data bits [ 2, 1, 0 ].
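
A sketch of Hamming (7,4) using exactly those three parity groups (the bit layout is my choice: data bits d3..d0 followed by the green, blue, red parity bits; even parity assumed):

    def parity(bits):
        return sum(bits) % 2

    def encode74(d):
        # d = [d3, d2, d1, d0] -> 7-bit word with green/blue/red parity.
        d3, d2, d1, d0 = d
        green = parity([d3, d2, d0])   # covers data bits 3, 2, 0
        blue  = parity([d3, d1, d0])   # covers data bits 3, 1, 0
        red   = parity([d2, d1, d0])   # covers data bits 2, 1, 0
        return d + [green, blue, red]

    def decode74(w):
        # Correct any single-bit error among the four data bits.
        d3, d2, d1, d0, g, b, r = w
        sg = parity([d3, d2, d0]) ^ g  # which parity checks fail?
        sb = parity([d3, d1, d0]) ^ b
        sr = parity([d2, d1, d0]) ^ r
        # Each data bit sits in a unique combination of the three groups,
        # so the failure pattern pinpoints the flipped bit.
        flip = {(1, 1, 0): 0, (1, 0, 1): 1, (0, 1, 1): 2, (1, 1, 1): 3}
        data = [d3, d2, d1, d0]
        if (sg, sb, sr) in flip:
            data[flip[(sg, sb, sr)]] ^= 1
        return data

    word = encode74([1, 0, 1, 1])
    word[1] ^= 1                       # corrupt data bit d2
    print(decode74(word))              # [1, 0, 1, 1] -- corrected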

ASCII (see the back cover of PP):

    HEX CODE   MEANING   Printable?
    00         NUL       no
    01         SOH       no
    ...
    20         space     yes
    ...
    30         "0"       yes
    31         "1"       yes
    ...
    41         "A"       yes
    42         "B"       yes
    ...
    61         "a"       yes
    62         "b"       yes
    ...
    7A         "z"       yes
    ...        (other stuff, non-standard)

Interpreting the same 4 bytes of memory different ways (starting memory address 0):

    What to print                     What is displayed (left-to-right)
    4-byte number (in hex notation)   6D412F32
    two 2-byte numbers (in hex)       2F32 6D41
    four 1-byte numbers (in hex)      32 2F 41 6D
    one 4-byte string                 2 / A m

(see "od" in unix)
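
A sketch reproducing that dump with Python's struct (little-endian byte order, as the table implies):

    import struct

    mem = bytes([0x32, 0x2F, 0x41, 0x6D])   # memory at addresses 0..3

    print(f"{struct.unpack('<I', mem)[0]:08X}")                      # 6D412F32
    print(" ".join(f"{h:04X}" for h in struct.unpack("<2H", mem)))   # 2F32 6D41
    print(" ".join(f"{b:02X}" for b in mem))                         # 32 2F 41 6D
    print(" ".join(chr(b) for b in mem))                             # 2 / A m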