Implementation of 64-Bit Pipelined Floating Point ALU Using Verilog


International Journal of Computer System (ISSN: 2394-1065), Volume 02, Issue 04, April 2015. Available at http://www.ijcsonline.com/

Megha Sharma, Dr. Gurjit Kaur
School of ICT, Gautam Buddha University, Greater Noida, U.P., India

Abstract

A pipelined double precision floating point arithmetic logic unit (ALU) designed in Verilog is introduced. The novelty of the ALU is that it achieves high performance through pipelining, a technique in which the execution of multiple instructions is overlapped. The double precision floating point unit provides five arithmetic operations: addition, subtraction, multiplication, division, and square root. Following a top-down design approach, four arithmetic modules (addition/subtraction, multiplication, division, and square root) are combined to form the double precision floating point ALU, and each module is further divided into smaller modules. A three-bit select signal determines which operation takes place at a particular time. The pipelined modules are independent of each other. The modules are realized and validated by Verilog simulation in QuestaSim SE 10.0b and synthesized using Xilinx ISE Design Suite 13.3.

Keywords: Floating point, Pipelining, Double precision, ALU

I. INTRODUCTION

Floating point describes a system for representing numbers that would be too large or too small to be represented as integers. Based on the IEEE 754 standard, floating-point representation for digital systems is platform-independent, and data can be interchanged freely among different digital systems. Pipelining is one of the popular methods of realizing a high-performance computing platform. A pipeline is a technique used in the design of computers and other digital electronic devices to increase their instruction throughput (the number of instructions that can be executed in a unit of time). The ALU is the block in a microprocessor that handles arithmetic operations. The double precision floating point unit implemented in this paper is a 64-bit processing unit that performs arithmetic operations on floating point numbers and complies fully with the IEEE 754 standard.

II. DESIGN AND METHODS

The main objective of this paper is to implement a double precision floating point unit capable of supporting the five basic operations: addition, subtraction, multiplication, division, and square root. The first sub-objective is to design a 64-bit floating point arithmetic logic unit operating on the IEEE 754 standard; the second sub-objective is to model the behavior of the double precision floating point unit in Verilog. The specifications for the 64-bit floating point unit design are:

i. Inputs op_a and op_b are 64-bit binary floating point operands.
ii. Outputs op_o_add_sub and op_o_div are 64 bits wide, for the addition/subtraction and division modules respectively.
iii. Output op_o_mul is 55 bits wide, for the multiplication module.
iv. Output op_o_sr is 21 bits wide, for the square root module.
v. A clock pulse is generated every 100 ns.
vi. The positive edge of the clock together with logic 1 on reset is used to initialize both operands op_a and op_b.
vii. A 3-bit select signal fpu_op_i selects the desired arithmetic operation as follows:

TABLE I. Operation Table

  fpu_op_i   Operation
  000        Addition
  001        Subtraction
  010        Multiplication
  011        Division
  100        Square root
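To make this interface concrete, the following is a minimal Verilog sketch of the top-level port list and the opcode decode of Table I, using the signal names from the specification above. The arithmetic sub-modules themselves are assumed and not instantiated; the enable signals (en_add, en_sub, and so on) are hypothetical names introduced only for illustration, not taken from the paper.

```verilog
// Sketch only: top-level ports per the specification, plus a registered
// decode of the 3-bit opcode into one-hot enables for the (assumed)
// arithmetic sub-modules. Outputs are left undriven here.
module fpu_alu_top (
    input             clk_i,
    input             rst_i,         // logic 1 initializes the operands
    input      [63:0] op_a,          // 64-bit IEEE 754 operand A
    input      [63:0] op_b,          // 64-bit IEEE 754 operand B
    input      [2:0]  fpu_op_i,      // operation select (Table I)
    output     [63:0] op_o_add_sub,  // add/subtract result
    output     [54:0] op_o_mul,      // multiply result
    output     [63:0] op_o_div,      // divide result
    output     [20:0] op_o_sr        // square-root result
);

    // One-hot enables derived from the opcode; each sub-module would be
    // driven by its enable and the registered operands.
    reg en_add, en_sub, en_mul, en_div, en_sqrt;

    always @(posedge clk_i) begin
        {en_add, en_sub, en_mul, en_div, en_sqrt} <= 5'b0;
        case (fpu_op_i)
            3'b000: en_add  <= 1'b1;  // addition
            3'b001: en_sub  <= 1'b1;  // subtraction
            3'b010: en_mul  <= 1'b1;  // multiplication
            3'b011: en_div  <= 1'b1;  // division
            3'b100: en_sqrt <= 1'b1;  // square root
            default: ;                // unused opcodes
        endcase
    end

endmodule
```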

III. FLOATING POINT NUMBERS

The floating point representation is one way to represent real numbers. A floating point number n is represented with an exponent e and a mantissa m as follows [7]:

n = b^e * m

where
  n - floating point number,
  b - base (here 2),
  e - exponent (power),
  m - mantissa/fraction.

Example: for n = 17 and base b = 2, the floating point representation of 17 is 17 = 2^4 * 1.0625.

NEED TO GO BEYOND INTEGERS

Extremely large values, such as the distance between the Sun and Pluto, and extremely small values, such as the mass of an electron, cannot be represented conveniently with integer types; floating point numbers are introduced to cover such ranges.

IV. FPU REPRESENTATION

The IEEE 754 standard defines how any value is written in floating point form. A floating point number consists of three parts: sign, exponent, and fraction. IEEE 754 defines two floating point representations, single precision and double precision.

TABLE II. Range for Single & Double Precision

                      Single precision number         Double precision number
Range for fraction    1 <= fraction <= 2 - 2^(-23)    1 <= fraction <= 2 - 2^(-52)
Range for exponent    -126 <= exponent <= 127         -1022 <= exponent <= 1023

Values for sign:
  0 - positive floating point number
  1 - negative floating point number

TABLE III. Bit Division

            Single precision number   Double precision number
SIGN        1 bit                     1 bit
EXPONENT    8 bits                    11 bits
FRACTION    23 bits                   52 bits

Example: representing the value 17 in the double precision floating point format (sign = 0, biased exponent = 1023 + 4 = 1027, fraction = 0001 followed by 48 zeros) gives:

0 10000000011 0001 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000

V. FLOATING POINT OPERATIONS AND FLOWCHARTS

A. Addition/Subtraction:

[(-1)^S1 * F1 * 2^E1] + [(-1)^S2 * F2 * 2^E2]

Suppose E1 > E2; then the sum can be written as

[(-1)^S1 * F1 * 2^E1] + [(-1)^S2 * F3 * 2^E1],  where F3 = F2 / 2^(E1-E2).

Fig 1. Flow Chart - Add/Sub: each operand is split into sign, exponent (e_A, e_B) and fraction (frac_A, frac_B); the larger exponent is taken as e_L and the smaller as e_S, with frac_L and frac_S the corresponding fractions; after aligning by the difference e_L - e_S, frac_O = frac_S +/- frac_L and e_O = e_L.
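As an illustration of the field layout in Table III, the small test-bench fragment below (a hypothetical helper, not part of the paper's design) unpacks a 64-bit operand into sign, exponent and fraction and prints the fields for the encoding of 17.0 given above.

```verilog
// Illustrative test bench: unpack the IEEE 754 double-precision fields of
// 17.0 = 1.0001b x 2^4 (sign 0, biased exponent 1023 + 4 = 1027).
module ieee754_fields_tb;
    reg  [63:0] op_a;

    wire        sign = op_a[63];      // 1 sign bit
    wire [10:0] expo = op_a[62:52];   // 11 exponent bits (bias 1023)
    wire [51:0] frac = op_a[51:0];    // 52 fraction bits (hidden leading 1)

    initial begin
        // 17.0: fraction = 0001 followed by 48 zeros
        op_a = {1'b0, 11'd1027, 4'b0001, 48'b0};
        #1;
        $display("sign=%b exponent=%d fraction=%h", sign, expo, frac);
        // true exponent = biased exponent - 1023 = 4
    end
endmodule
```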

Fig 2. Simulation window - Add/Sub

Fig 3. RTL window - Add/Sub module. In this window the green boxes represent registers in which values are stored and the red lines are the interconnects that connect the registers.

B. Multiplication

[(-1)^S1 * F1 * 2^E1] * [(-1)^S2 * F2 * 2^E2] = (-1)^(S1 xor S2) * (F1 * F2) * 2^(E1+E2)

Fig 4. Flow Chart - Multiplication: frac_O = frac_A * frac_B, sign_O = sign_A XOR sign_B, e_O = e_A + e_B - bias(127).

Fig 5. Simulation window - Multiplication

Fig 6. RTL window - Multiplication module. As before, the green boxes represent registers in which values are stored and the red lines are the interconnects between them.

C. Division

[(-1)^S1 * F1 * 2^E1] / [(-1)^S2 * F2 * 2^E2] = (-1)^(S1 xor S2) * (F1 / F2) * 2^(E1-E2)

Fig 7. Flow Chart - Division: the leading zeros z_A and z_B of both fractions are counted and frac_A and frac_B are shifted left by z_A and z_B bits respectively; then frac_O = frac_A / frac_B, sign_O = sign_A XOR sign_B, and e_O = e_A - e_B + bias(127) - z_A + z_B.
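The multiplication rule above reduces to three independent field operations. The following minimal, unpipelined Verilog sketch shows that idea for double precision (bias 1023); it is an assumed illustration, not the paper's module, and it ignores normalization, rounding and special cases (zero, infinity, NaN).

```verilog
// Simplified double-precision multiply datapath (sketch only):
//   sign  = sign_A XOR sign_B
//   expo  = expo_A + expo_B - bias
//   frac  = frac_A * frac_B   (with the hidden leading 1 restored)
module fp_mul_sketch (
    input  [63:0]  op_a,
    input  [63:0]  op_b,
    output         sign_o,
    output [11:0]  expo_o,   // one extra bit to hold the pre-normalization range
    output [105:0] frac_o    // 53 x 53 -> 106-bit raw product
);
    wire        sign_a = op_a[63];
    wire        sign_b = op_b[63];
    wire [10:0] expo_a = op_a[62:52];
    wire [10:0] expo_b = op_b[62:52];
    wire [52:0] frac_a = {1'b1, op_a[51:0]};   // restore hidden bit
    wire [52:0] frac_b = {1'b1, op_b[51:0]};

    assign sign_o = sign_a ^ sign_b;
    assign expo_o = {1'b0, expo_a} + {1'b0, expo_b} - 12'd1023;
    assign frac_o = frac_a * frac_b;
endmodule
```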

Fig 8. Simulation window - Division

Fig 9. RTL window - Division module. The green boxes represent registers in which values are stored and the red lines are the interconnects between them.

D. Square Root

The non-restoring square root algorithm using a non-redundant binary representation is given below.

1. Set q_16 = 0 and r_16 = 0, then iterate from k = 15 down to 0.
2. If r_{k+1} >= 0, r_k = r_{k+1}d_{2k+1}d_{2k} - q_{k+1}01; else r_k = r_{k+1}d_{2k+1}d_{2k} + q_{k+1}11.
3. If r_k >= 0, q_k = q_{k+1}1 (i.e., Q_k = 1); else q_k = q_{k+1}0 (i.e., Q_k = 0).
4. Repeat steps 2 and 3 until k = 0. If r_0 < 0, set r_0 = r_0 + q_0 1.

Here q_k = Q_15 Q_14 ... Q_k has (16 - k) bits, e.g., q_0 = Q = Q_15 Q_14 Q_13 ... Q_1 Q_0, and r_k has (17 - k) bits, e.g., r_0 = R = R_16 R_15 R_14 ... R_1 R_0. The notation r_{k+1}d_{2k+1}d_{2k} means r_{k+1} * 4 + D_{2k+1} * 2 + D_{2k}; similarly, q_{k+1}1 means q_{k+1} * 2 + 1. No multiplications or additions are needed for these terms; they are realized by shifts and concatenations through suitable wiring.

Fig 10. Simulation window - Square root

Fig 11. RTL window - Square root
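To clarify how steps 2 and 3 translate into hardware, here is a small behavioral Verilog sketch of the same non-restoring recurrence for a 32-bit radicand and a 16-bit root. It is an assumed illustration of the algorithm only, not the paper's pipelined square root module.

```verilog
// Behavioral sketch of the non-restoring square root recurrence
// (32-bit radicand D, 16-bit root Q, signed remainder R).
module sqrt_nonrestoring (
    input  [31:0]            d_i,   // radicand D = D31 ... D0
    output reg [15:0]        q_o,   // root Q = Q15 ... Q0
    output reg signed [17:0] r_o    // final remainder R
);
    integer k;
    reg signed [17:0] r;
    reg        [15:0] q;

    always @(*) begin
        q = 16'd0;                                   // q_16 = 0
        r = 18'sd0;                                  // r_16 = 0
        for (k = 15; k >= 0; k = k - 1) begin
            // Step 2: append the next two radicand bits (a shift by wiring),
            // then subtract q_{k+1}01 or add q_{k+1}11.
            if (r >= 0)
                r = {r[15:0], d_i[2*k+1], d_i[2*k]} - {q, 2'b01};
            else
                r = {r[15:0], d_i[2*k+1], d_i[2*k]} + {q, 2'b11};
            // Step 3: set the next root bit Q_k.
            if (r >= 0)
                q = {q[14:0], 1'b1};
            else
                q = {q[14:0], 1'b0};
        end
        // Final correction if the last remainder is negative.
        if (r < 0)
            r = r + {q, 1'b1};
        q_o = q;
        r_o = r;
    end
endmodule
```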

E. Top-Down Model

In this model all four operations (add/sub, multiplication, division, and square root) are combined into a single unit or module.

Fig 12. Simulation window - Top-down model

Fig 13. RTL window - Top-down module

VI. CONCLUSION

In this paper four functionalities (addition/subtraction, multiplication, division, and square root) have been implemented for double precision numbers, and a top-down model combining them has also been introduced. The Verilog coding follows the flow charts shown above. Further work can be done on power consumption: clock gating can be applied so that only the sub-module selected by the select pins performs its operation while all other sub-modules remain switched off.

REFERENCES

[1] Mamun Bin Ibne Reaz, Md. Shabiul Islam, and Mohd. S. Sulaiman, "Pipeline Floating Point ALU Design using VHDL," Proceedings of the IEEE International Conference on Semiconductor Electronics (ICSE), pp. 204-208, 2002.
[2] Rajit Ram Singh, Asish Tiwari, V. K. Singh, and G. S. Tomar, "VHDL Environment for Floating Point Arithmetic Logic Unit - ALU Design and Simulation," Proceedings of the International Conference on Communication Systems and Network Technologies, pp. 469-472, 2011.
[3] Shrivastava Purnima, Tiwari Mukesh, Singh Jaikaran, and Rathore Sanjay, "VHDL Environment for Floating Point Arithmetic Logic Unit - ALU Design and Simulation," July 2012.
[4] L. F. Rahman, Md. Mamun, and M. S. Amin, "VHDL Environment for Pipeline Floating Point Arithmetic Logic Unit Design and Simulation," Journal of Applied Sciences Research, 8(1): 611-619, 2012.
[5] Yamin Li and Wanming Chu, "Implementation of Single Precision Floating Point Square Root on FPGAs," April 1997.
[6] M. Daumas and C. Finot, "Division of Floating Point Expansions with an Application to the Computation of the Determinant," June 1999.
[7] ANSI/IEEE Std 754-1985, "IEEE Standard for Binary Floating-Point Arithmetic," IEEE, New York, 1985.
[8] S. Chen, B. Mulgrew, and P. M. Grant, "A clustering technique for digital communications channel equalization using radial basis function networks," IEEE Transactions on Neural Networks, vol. 4, no. 4, pp. 570-590, 1993. DOI: http://dx.doi.org/10.1109/72.238312.