Binary Representation. Jerry Cain CS 106AJ October 29, 2018 slides courtesy of Eric Roberts


Once upon a time...

Claude Shannon (1916-2001) Claude Shannon was one of the pioneers who shaped computer science in its early years. In his master's thesis, Shannon showed how it was possible to use Boolean logic and switching circuits to perform arithmetic calculations. That work led directly to the use of binary arithmetic inside computers, an idea that initially generated considerable resistance among more established scientists. Shannon is best known as the inventor of information theory, which provides the mathematical foundation for understanding data communication. To many, however, Shannon's great contribution was that he never lost his sense of fun throughout his long career.

Binary Representation

The Power of Bits The fundamental unit of memory inside a computer is called a bit, a term introduced in a paper by Claude Shannon as a contraction of the words binary digit. An individual bit exists in one of two states, usually denoted as 0 and 1. More sophisticated data can be represented by combining larger numbers of bits: two bits can represent four (2^2) values, three bits can represent eight (2^3) values, four bits can represent 16 (2^4) values, and so on. This laptop has 16GB of main memory and can therefore exist in 2^137,438,953,472 states, because 16GB is really 2^37 = 137,438,953,472 independent bits.
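Since CS 106AJ uses JavaScript, the doubling rule above can be sketched as a one-line function (the name valuesForBits is an illustrative assumption, not part of the course libraries):

```javascript
// Each additional bit doubles the number of distinct states,
// so n bits can represent 2^n different values.
function valuesForBits(n) {
   return 2 ** n;
}

console.log(valuesForBits(2));  // 4
console.log(valuesForBits(4));  // 16
```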

Leibniz and Binary Notation Binary notation is an old idea. It was described back in 1703 by the German mathematician Gottfried Wilhelm von Leibniz. Writing in the proceedings of the French Royal Academy of Science, Leibniz described his use of binary notation in a simple, easy-to-follow style. Leibniz's paper further suggests that the Chinese were familiar with binary arithmetic 2000 years earlier, as evidenced by the patterns of lines found in the I Ching.

Using Bits to Represent Integers Binary notation is similar to decimal notation but uses a different base. Decimal numbers use 10 as their base, which means that each digit counts for ten times as much as the digit to its right. Binary notation uses base 2, which means that each position counts for twice as much. The rightmost digit is the units place, the next digit gives the number of 2s, the next digit gives the number of 4s, and so on. For the bit pattern 0 0 1 0 1 0 1 0, working from the rightmost bit: 0 x 1 = 0, 1 x 2 = 2, 0 x 4 = 0, 1 x 8 = 8, 0 x 16 = 0, 1 x 32 = 32, 0 x 64 = 0, 0 x 128 = 0, for a total of 42.
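The place-value calculation above can be expressed as a short JavaScript function. This is a sketch; the name binaryToDecimal is an assumption, not something defined in the slides:

```javascript
// Converts a string of binary digits to the integer it represents.
// Scanning left to right, each step doubles the running total
// (shifting every earlier bit one place left) and adds the new digit.
function binaryToDecimal(bits) {
   let value = 0;
   for (let i = 0; i < bits.length; i++) {
      value = value * 2 + (bits.charAt(i) === "1" ? 1 : 0);
   }
   return value;
}

console.log(binaryToDecimal("00101010"));  // 42
```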

Numbers and Bases The calculation at the end of the preceding slide makes it clear that the binary representation 00101010 is equivalent to the number 42. When it is important to distinguish the base, the text uses a small subscript, like this: 00101010₂ = 42₁₀. Although it is useful to be able to convert a number from one base to another, it is important to remember that the number remains the same. What changes is how you write it down. The number 42 is what you get if you count how many stars are in the pattern at the right. The number is the same whether you write it in English as forty-two, in decimal as 42, or in binary as 00101010. Numbers do not have bases; representations do.

Octal and Hexadecimal Notation Because binary notation tends to get rather long, computer scientists often prefer octal (base 8) or hexadecimal (base 16) notation instead. Octal notation uses eight digits: 0 to 7. Hexadecimal notation uses sixteen digits: 0 to 9, followed by the letters A through F to indicate the values 10 to 15. The following calculations show how the number forty-two appears in both octal and hexadecimal notation: octal 52₈ (2 x 1 = 2, 5 x 8 = 40, total 42) and hexadecimal 2A₁₆ (A = 10, so 10 x 1 = 10; 2 x 16 = 32; total 42). The advantage of using either octal or hexadecimal notation is that doing so makes it easy to translate the number back to individual bits because you can convert each digit separately.
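JavaScript's standard number facilities perform these base conversions directly, which offers a quick way to experiment with the notations above:

```javascript
const n = 42;
// toString with a radix writes a number in another base.
console.log(n.toString(2));   // "101010"
console.log(n.toString(8));   // "52"
console.log(n.toString(16));  // "2a"
// parseInt with a radix reads a representation back in.
console.log(parseInt("52", 8));   // 42
console.log(parseInt("2A", 16));  // 42
```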

Exercises: Number Bases What is the decimal value for each of the following numbers? 10001₂, 177₈, AD₁₆.
10001₂: 1 x 1 = 1, 0 x 2 = 0, 0 x 4 = 0, 0 x 8 = 0, 1 x 16 = 16, total 17.
177₈: 7 x 1 = 7, 7 x 8 = 56, 1 x 64 = 64, total 127.
AD₁₆: D = 13, so 13 x 1 = 13; A = 10, so 10 x 16 = 160; total 173.
As part of a code to identify the file type, every Java class file begins with the following sixteen bits: 1 1 0 0 1 0 1 0 1 1 1 1 1 1 1 0. How would you express that number in hexadecimal notation? Grouping the bits four at a time gives 1100 = C, 1010 = A, 1111 = F, 1110 = E, so the answer is CAFE₁₆.
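The exercise answers can be checked with parseInt, passing the base as an explicit radix:

```javascript
console.log(parseInt("10001", 2));  // 17
console.log(parseInt("177", 8));    // 127
console.log(parseInt("AD", 16));    // 173
// The sixteen bits at the start of every Java class file:
const magic = parseInt("1100101011111110", 2);
console.log(magic.toString(16).toUpperCase());  // "CAFE"
```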

Exercise: Perfect Numbers in Binary Greek mathematicians took a special interest in numbers that are equal to the sum of their proper divisors, which are simply those divisors less than the number itself. They called such numbers perfect numbers. For example, 6 is a perfect number because it is the sum of 1, 2, and 3, which are the integers less than 6 that divide evenly into 6. The next three perfect numbers, all of which were known to the Greeks, are 28, 496, and 8128. Convert each of these numbers into its binary representation: 6 = 110₂, 28 = 11100₂, 496 = 111110000₂, 8128 = 1111111000000₂.
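A brute-force check of the definition, combined with toString(2), reproduces the binary answers above. The helper isPerfect is a hypothetical name introduced for illustration, not something from the slides:

```javascript
// Returns true if n equals the sum of its proper divisors
// (the divisors of n that are less than n itself).
function isPerfect(n) {
   let sum = 0;
   for (let d = 1; d < n; d++) {
      if (n % d === 0) sum += d;
   }
   return sum === n;
}

// The first four perfect numbers, shown in binary:
for (const n of [6, 28, 496, 8128]) {
   console.log(n + " is perfect: " + isPerfect(n) + ", binary: " + n.toString(2));
}
```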

Bits and Representation Sequences of bits have no intrinsic meaning except for the representation that we assign to them, both by convention and by building particular operations into the hardware. As an example, a 32-bit word represents an integer only because we have designed hardware that can manipulate those words arithmetically, applying operations such as addition, subtraction, and comparison. By choosing an appropriate representation, you can use bits to represent any value you can imagine: Characters are represented using numeric character codes. Floating-point representation supports real numbers. Two-dimensional arrays of bits represent images. Sequences of images represent video. And so on...

Representing Characters Computers use numeric encodings to represent character data inside the memory of the machine, in which each character is assigned an integral value. Character codes, however, are not very useful unless they are standardized. When different computer manufacturers used different coding sequences (as was indeed the case in the early years), it was harder to share such data across machines. The first widely adopted character encoding was ASCII (American Standard Code for Information Interchange). With room for at most 256 characters in a single byte, the ASCII system proved inadequate to represent the many alphabets in use throughout the world. It has therefore been superseded by Unicode, which allows for a much larger number of characters.

The ASCII Subset of Unicode The following table shows the first 128 characters in the Unicode character set, which are the same as those in the older ASCII scheme. The Unicode value for any character is the sum of the octal numbers at the beginning of its row and column. The letter A, for example, has the Unicode value 101₈, which is the sum of the row label 100 and the column label 1.

         0     1     2     3     4     5     6     7
00x   \000  \001  \002  \003  \004  \005  \006  \007
01x     \b    \t    \n  \013    \f    \r  \016  \017
02x   \020  \021  \022  \023  \024  \025  \026  \027
03x   \030  \031  \032  \033  \034  \035  \036  \037
04x   space   !     "     #     $     %     &     '
05x     (     )     *     +     ,     -     .     /
06x     0     1     2     3     4     5     6     7
07x     8     9     :     ;     <     =     >     ?
10x     @     A     B     C     D     E     F     G
11x     H     I     J     K     L     M     N     O
12x     P     Q     R     S     T     U     V     W
13x     X     Y     Z     [     \     ]     ^     _
14x     `     a     b     c     d     e     f     g
15x     h     i     j     k     l     m     n     o
16x     p     q     r     s     t     u     v     w
17x     x     y     z     {     |     }     ~  \177

Hardware Support for Characters The character 'a' has character code 97, which is stored in memory as the bit pattern 01100001.
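JavaScript exposes these character codes directly through charCodeAt and String.fromCharCode, so the mapping on this slide can be checked interactively:

```javascript
// 'a' has character code 97, which is 1100001 in binary
// (stored in a byte as 01100001 with a leading zero).
console.log("a".charCodeAt(0));        // 97
console.log((97).toString(2));         // "1100001"
console.log(String.fromCharCode(97));  // "a"
```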

Strings as an Abstract Idea Characters are most often used in programming when they are combined to form collections of consecutive characters called strings. As you will discover when you have a chance to look more closely at the internal structure of memory, strings are stored internally as a sequence of characters in sequential memory addresses. The internal representation, however, is really just an implementation detail. For most applications, it is best to think of a string as an abstract conceptual unit rather than as the characters it contains. JavaScript emphasizes the abstract view by providing a built-in string type that supports high-level operations on string values.

The End