VHDL IMPLEMENTATION OF FLOATING POINT MULTIPLIER USING VEDIC MATHEMATICS

I.V.VAIBHAV 1, K.V.SAICHARAN 1, B.SRAVANTHI 1, D.SRINIVASULU 2
1 Students, Department of ECE, SACET, Chirala, AP, India
2 Associate Professor, Department of ECE, SACET, Chirala, AP, India
3 Electronics and Communication Department, St. Ann's College of Engineering & Technology, Chirala, India
E-mail: innamurivaibhav1@gmail.com, kota.saicharan@gmail.com, sravanthi.408b@gmail.com, seenu_dasari@rediffmail.com
International Conference on Electrical, Electronics and Communications (ICEEC), 21 June 2014, ISBN 978-93-81693-66-03

Abstract: This project presents a binary floating point multiplier based on a Vedic algorithm. To improve power efficiency, the Urdhva Tiryakbhyam algorithm is implemented for the 24 x 24 bit multiplier design; this approach decreases the number of components and the complexity of the hardware circuit. In this project the Vedic multiplication technique is used to implement an IEEE 754 floating point multiplier. The Urdhva Tiryakbhyam sutra is used for the 24 x 24 bit multiplication of the mantissas, the sign bit of the result is calculated using one XOR gate, and a carry save adder is used for adding the two biased exponents. The underflow and overflow cases are handled. The inputs to the multiplier are provided in the IEEE 754 32-bit format. The multiplier is implemented in VHDL and targeted at a Spartan-3E FPGA. The result is a 32-bit binary number in which the MSB represents the sign, the next 8 bits represent the exponent and the remaining 23 bits represent the mantissa.

Keywords: Vedic Mathematics, Urdhva Tiryakbhyam sutra, Floating Point Multiplier, Field Programmable Gate Array (FPGA), Carry Save Adder

I. INTRODUCTION

Floating-point operations are useful for computations involving large dynamic range, but they require significantly more resources than integer operations. The rapid advance in Field Programmable Gate Array (FPGA) technology makes such devices increasingly attractive for implementing floating-point arithmetic. FPGAs offer reduced development time and cost compared to application-specific integrated circuits, and their flexibility enables field upgrades and adaptation of the hardware to run-time conditions. Our main aims in designing floating point units (FPUs) for reconfigurable hardware are: (a) to parameterise the precision and range of the floating-point format so that the FPUs can be optimized for specific applications, and (b) to support optional inclusion of features such as gradual underflow and rounding. An approach meeting these aims achieves an effective trade-off between performance and required resources. The binary floating-point representation has two formats: (i) single precision and (ii) double precision.

The following Figure I represents the format of the IEEE 754 standard single and double precision structures. The single precision format contains 32 bits: the mantissa is 23 bits long (a hidden 1 is added at the MSB during normalization), the exponent is 8 bits long and biased by 127 (i.e. stored in excess-127 format), and the MSB is reserved for the sign bit. If the sign bit is 1 the number is negative; if it is 0 the number is positive. The double precision format contains 64 bits: the mantissa is 52 bits long (again with a hidden 1 at the MSB), the exponent is 11 bits long and biased by 1023, and the MSB is reserved for the sign bit.

Multiplication of two floating point numbers represented in IEEE 754 format is performed by multiplying the normalized 24-bit mantissas, adding the biased 8-bit exponents and re-biasing the result to excess-127 format (a carry save adder is used for this biasing), and XORing the input sign bits to obtain the result sign.

II. FLOATING POINT MULTIPLIER

In order to enhance the performance of the multiplier, three pipelining stages are used to divide the critical path, thus increasing the maximum operating frequency of the multiplier. The pipelining stages are embedded at the following locations:
1) In the middle of the significand multiplier, and in the middle of the exponent adder (before the bias subtraction).
2) After the significand multiplier, and after the exponent adder.
3) At the floating point multiplier outputs (sign, exponent and mantissa bits).
Three pipelining stages mean that the output is delayed by three clock cycles. The synthesis tool's retiming option was used so that the synthesizer can re-place the pipelining registers across the critical path using its optimization logic.

The stored exponent field E can only represent the positive values 0 through 255. To represent both positive and negative exponents, a fixed value, called the bias, is subtracted from the exponent field to obtain the true exponent. With a bias of 127, e = E - 127, which gives an exponent range from Emin = -126 to Emax = 127. The mantissa is the set of 0s and 1s to the right of the radix point of the normalized binary number (normalized meaning that the digit to the left of the radix point is 1).
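As an illustration of the field handling described above, the following is a minimal VHDL sketch that unpacks two IEEE 754 single-precision operands, restores the hidden leading 1 of each mantissa, computes the result sign with an XOR, and forms the re-biased exponent sum EA + EB - 127. The entity and signal names are illustrative only, and an ordinary numeric_std addition stands in for the carry save adder used in the paper's design.

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity fp_unpack_sketch is
    port (
      a, b     : in  std_logic_vector(31 downto 0);   -- IEEE 754 single-precision operands
      sign_out : out std_logic;                       -- sign of the product (SA xor SB)
      exp_out  : out unsigned(8 downto 0);            -- EA + EB - 127, one bit wider to expose overflow
      man_a    : out std_logic_vector(23 downto 0);   -- 24-bit significands that feed the
      man_b    : out std_logic_vector(23 downto 0)    -- 24x24 Vedic multiplier
    );
  end entity fp_unpack_sketch;

  architecture rtl of fp_unpack_sketch is
    signal exp_a, exp_b : unsigned(7 downto 0);
  begin
    -- Bit 31 is the sign, bits 30..23 the biased exponent, bits 22..0 the fraction.
    exp_a <= unsigned(a(30 downto 23));
    exp_b <= unsigned(b(30 downto 23));

    -- Restore the hidden '1' so the mantissas are the full 24-bit significands.
    man_a <= '1' & a(22 downto 0);
    man_b <= '1' & b(22 downto 0);

    -- Sign of the result: XOR of the operand sign bits.
    sign_out <= a(31) xor b(31);

    -- Biased exponent of the result: add the stored exponents and remove one bias.
    -- (A plain adder is used here; the paper employs a carry save adder for this step.
    --  A result that wraps below zero corresponds to the underflow case handled later.)
    exp_out <= resize(exp_a, 9) + resize(exp_b, 9) - 127;
  end architecture rtl;

For the operands of the worked example later in the paper (both equal to 25.375, stored exponent 10000011), exp_out evaluates to 135, which the normalization step subsequently increments to 136.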

III. MULTIPLICATION STEPS OF URDHVA TIRYAKBHYAM

The Vedic multiplication system is based on 16 Vedic sutras, or aphorisms, which describe natural ways of solving a whole range of mathematical problems. Of these 16 sutras, the Urdhva Tiryakbhyam ("vertically and crosswise") sutra is the one suited to this purpose. In this method the partial products are generated simultaneously, which reduces delay and makes the method fast.

Consider the multiplication of two 3-bit numbers A and B, where A = a2a1a0 and B = b2b1b0. First the LSB of A is multiplied with the LSB of B:
s0 = a0b0
Then a0 is multiplied with b1, b0 is multiplied with a1, and the results are added together:
c1s1 = a1b0 + a0b1
where c1 is the carry and s1 is the sum. The next step adds c1 to the products of a0 with b2, a1 with b1 and a2 with b0:
c2s2 = c1 + a2b0 + a1b1 + a0b2
The next step adds c2 to the products of a1 with b2 and a2 with b1:
c3s3 = c2 + a1b2 + a2b1
Similarly, the last step is
c4s4 = c3 + a2b2
The final result of the multiplication of A and B is then c4s4s3s2s1s0.

Algorithm steps:
1. Multiply the significands (1.M1 x 1.M2).
2. Place the decimal point in the result.
3. Add the exponents, i.e. E1 + E2 - bias.
4. Obtain the sign, i.e. S1 XOR S2.
5. Normalize the result, i.e. obtain a 1 at the MSB of the result's significand.
6. Check for underflow/overflow.

The result of the significand multiplication (the intermediate product) must be normalized to have a leading 1 just to the left of the decimal point (i.e. at bit 46 of the intermediate product). Since the inputs are normalized numbers, the intermediate product has its leading one at bit 46 or bit 47:
1. If the leading one is at bit 46 (i.e. to the left of the decimal point), the intermediate product is already normalized and no shift is needed.
2. If the leading one is at bit 47, the intermediate product is shifted to the right and the exponent is incremented by 1.
The shift operation is done using combinational shift logic built from multiplexers. The figure shows a simplified normalizer with an 8-bit intermediate product input and a 6-bit intermediate exponent input.

After addition, the exponent result must again be expressed in excess-127 code, so 127 is subtracted from the sum; two's complement subtraction using addition is used for this purpose. If ER is the final resultant exponent, then ER = EA + EB - 127, where EA and EB are the exponent fields of operands A and B respectively. In this case ER = 10000110.

Mantissa multiplication is done using the 24-bit Vedic multiplier. The mantissa is expressed in 23 bits and is extended to 24 bits by adding a 1 at the MSB. The block diagram of the 24x24 bit multiplier is shown in Figure 5. The multiplier is modelled in VHDL using the structural style. First a 3x3 Vedic multiplier is implemented using the method described above; a 6x6 multiplier is then designed using four 3x3 multipliers, a 12x12 multiplier using four 6x6 multipliers, and finally the 24x24 multiplier using four 12x12 multipliers. The 24x24 bit multiplier therefore requires four 12x12 bit multipliers along with 24-bit and 36-bit ripple carry adders.
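To make the column-wise derivation above concrete, here is a small combinational VHDL sketch of the 3x3 Urdhva Tiryakbhyam block that the larger multipliers are built from. It follows the s0 .. c4s4 steps directly; the only liberty taken is that the middle column carries are kept two bits wide (the sums c1 + a2b0 + a1b1 + a0b2 and c2 + a1b2 + a2b1 can reach 4), which the single-letter carries of the prose gloss over. Entity and signal names are illustrative.

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity vedic_mult_3x3 is
    port (
      a, b : in  std_logic_vector(2 downto 0);
      p    : out std_logic_vector(5 downto 0)   -- c4 s4 s3 s2 s1 s0
    );
  end entity vedic_mult_3x3;

  architecture rtl of vedic_mult_3x3 is
    -- Single-bit partial product a(i)*b(j), zero-extended so the column sums
    -- (which can exceed 1) can be formed with numeric_std addition.
    function pp(x, y : std_logic) return unsigned is
    begin
      if (x and y) = '1' then
        return to_unsigned(1, 3);
      else
        return to_unsigned(0, 3);
      end if;
    end function;

    signal col1, col2, col3, col4 : unsigned(2 downto 0);  -- column sums including carry-in
  begin
    -- Urdhva Tiryakbhyam ("vertically and crosswise") column sums, mirroring the
    -- s0 .. c4s4 steps of the text.
    p(0) <= a(0) and b(0);                                           -- s0 = a0*b0
    col1 <= pp(a(1), b(0)) + pp(a(0), b(1));                         -- c1 s1
    col2 <= resize(col1(2 downto 1), 3)
            + pp(a(2), b(0)) + pp(a(1), b(1)) + pp(a(0), b(2));      -- c2 s2
    col3 <= resize(col2(2 downto 1), 3)
            + pp(a(2), b(1)) + pp(a(1), b(2));                       -- c3 s3
    col4 <= resize(col3(2 downto 1), 3) + pp(a(2), b(2));            -- c4 s4

    p(1) <= col1(0);
    p(2) <= col2(0);
    p(3) <= col3(0);
    p(4) <= col4(0);
    p(5) <= col4(1);
  end architecture rtl;

A 6x6 block can then be assembled from four of these 3x3 units in the same vertically-and-crosswise fashion, and so on up through 12x12 to 24x24.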

The 24x24 Vedic multiplier is built from four 12x12 Vedic multipliers, each based on Urdhva Tiryakbhyam. The first 12x12 block multiplies the lower halves x(11 downto 0) and y(11 downto 0); the second multiplies x(23 downto 12) with y(11 downto 0). The lower 12 bits of the first block's 24-bit output are separated out and form the lowest 12 bits of the final product, while its higher 12 bits are prefixed with twelve zeros and added to the second block's 24-bit output using a 24-bit ripple carry adder, giving a 24-bit sum. The third block multiplies y(23 downto 12) with x(11 downto 0), and its 24-bit output is prefixed with twelve zeros to form 36 bits; the fourth block multiplies x(23 downto 12) with y(23 downto 12), and its output is appended with twelve zeros to form another 36-bit value. These two 36-bit values are added, and the 24-bit sum from the first stage is then added in to produce the final upper 36 bits. The 48-bit result of the 24x24 multiplier thus consists of the 36 higher bits from the 36-bit adder output and the 12 lower-order bits separated earlier (36 + 12 = 48 bits).

The number of LUTs and slices required for the Vedic multiplier is low, which reduces the power consumption. The repetitive and regular structure of the multiplier also makes it easier to design, and the time required to compute the multiplication is less than with other multiplication techniques.
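The wiring just described amounts to the identity x*y = (XH*YH)*2^24 + (XH*YL + XL*YH)*2^12 + XL*YL for the 12-bit halves of x and y. The sketch below expresses that composition structurally in VHDL, under the assumption that a 12x12 block named vedic_mult_12x12 (itself built from 6x6 and 3x3 units) is available; the behavioural additions stand in for the 24-bit and 36-bit ripple carry adders of the paper, and the grouping of the additions differs slightly from the adder wiring above while computing the same 48-bit product.

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity vedic_mult_24x24 is
    port (
      x, y : in  std_logic_vector(23 downto 0);
      p    : out std_logic_vector(47 downto 0)
    );
  end entity vedic_mult_24x24;

  architecture structural of vedic_mult_24x24 is
    -- Assumed 12x12 building block (same port shape as the 3x3 unit above).
    component vedic_mult_12x12 is
      port (
        a, b : in  std_logic_vector(11 downto 0);
        p    : out std_logic_vector(23 downto 0)
      );
    end component;

    signal ll, hl, lh, hh : std_logic_vector(23 downto 0);  -- the four 24-bit partial products
    signal mid, acc       : unsigned(47 downto 0);
  begin
    u_ll : vedic_mult_12x12 port map (a => x(11 downto 0),  b => y(11 downto 0),  p => ll);  -- XL*YL
    u_hl : vedic_mult_12x12 port map (a => x(23 downto 12), b => y(11 downto 0),  p => hl);  -- XH*YL
    u_lh : vedic_mult_12x12 port map (a => x(11 downto 0),  b => y(23 downto 12), p => lh);  -- XL*YH
    u_hh : vedic_mult_12x12 port map (a => x(23 downto 12), b => y(23 downto 12), p => hh);  -- XH*YH

    -- Middle term (XH*YL + XL*YH), then the full product assembled with 12- and 24-bit shifts.
    mid <= resize(unsigned(hl), 48) + resize(unsigned(lh), 48);
    acc <= shift_left(resize(unsigned(hh), 48), 24)
           + shift_left(mid, 12)
           + resize(unsigned(ll), 48);

    p <= std_logic_vector(acc);
  end architecture structural;

Because the shifted terms contribute nothing below bit 12, the lowest 12 bits of ll pass straight through to p(11 downto 0), exactly as in the ripple carry wiring described in the text.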

An overflow or underflow case occurs when the result exponent rises above or falls below the 8-bit range, respectively. Overflow may occur during the addition of the two exponents, and it can be compensated when the bias is subtracted from the exponent result; when overflow occurs, the overflow flag is raised. Underflow can occur after the subtraction of the bias from the exponent, i.e. when the result goes below 0; this situation can be handled by adding 1 at the time of normalization. When the underflow case occurs, the underflow flag goes high.

Example of the 32-bit Multiplier

The mantissa calculation unit requires a 24-bit multiplier when the 32-bit IEEE 754 single precision format is used. In this paper the multiplication of the two 24-bit mantissas is done using the Vedic multiplier, and a 48-bit result is obtained. For the inputs

25.375 -> 0 10000011 10010110000000000000000
25.375 -> 0 10000011 10010110000000000000000

the 48-bit mantissa product is 101000001111100100000000000000000000000000000000, and the packed output is

0 10001000 01000001111100100000000

This result corresponds to A x B = 25.375 x 25.375 = (643.890625)10 = (1010000011.111001)2.

CONCLUSION AND RESULT

The multiplier is designed in VHDL and simulated using the ISim simulator. The design is synthesized using the Xilinx ISE 12.1 tool targeting the Xilinx Spartan-3E FPGA. A test bench is used to generate the stimulus and the multiplier operation is verified. The overflow and underflow flags are incorporated in the design in order to indicate the overflow and underflow cases. The results show that the Vedic multiplication method is an efficient way of multiplying two floating point numbers. The smaller number of LUTs shows that the hardware requirement is reduced, thereby reducing the power consumption; the power is reduced effectively while not compromising the delay very much. This project presents an implementation of a floating point multiplier that supports the IEEE 754 32-bit format; the multiplier does not implement rounding and simply presents the 48-bit significand multiplication result as is, which gives better precision if the whole 48 bits are utilized in another unit. The ripple carry addition design has been implemented on a Xilinx Spartan-3E FPGA.

SIMULATION RESULTS

3x3 Vedic multiplier: Table I (Comparison of results) shows the summary of the multiplier tested.
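The worked example above can be turned directly into a stimulus for the kind of test bench mentioned in the conclusion. The sketch below is only a minimal illustration: the top-level entity name fp_mult_vedic, its port list and the flag outputs are assumptions about the design's interface, not taken from the paper, and the 100 ns wait simply over-covers the three-stage pipeline latency.

  library ieee;
  use ieee.std_logic_1164.all;

  entity tb_fp_mult_vedic is
  end entity tb_fp_mult_vedic;

  architecture sim of tb_fp_mult_vedic is
    -- Assumed top-level port shape; the actual entity in the paper's design may differ.
    component fp_mult_vedic is
      port (
        clk       : in  std_logic;
        a, b      : in  std_logic_vector(31 downto 0);
        product   : out std_logic_vector(31 downto 0);
        overflow  : out std_logic;
        underflow : out std_logic
      );
    end component;

    signal clk                 : std_logic := '0';
    signal a, b, product       : std_logic_vector(31 downto 0);
    signal overflow, underflow : std_logic;
  begin
    clk <= not clk after 5 ns;   -- 100 MHz stimulus clock

    dut : fp_mult_vedic
      port map (clk => clk, a => a, b => b, product => product,
                overflow => overflow, underflow => underflow);

    stimulus : process
    begin
      -- 25.375 in IEEE 754 single precision: 0 10000011 10010110000000000000000 = x"41CB0000"
      a <= x"41CB0000";
      b <= x"41CB0000";
      wait for 100 ns;           -- generous wait to cover the three pipeline stages
      -- Expected: 25.375 * 25.375 = 643.890625 = 0 10001000 01000001111100100000000 = x"4420F900"
      assert product = x"4420F900"
        report "unexpected product for 25.375 * 25.375" severity error;
      assert overflow = '0' and underflow = '0'
        report "unexpected overflow/underflow flag" severity error;
      wait;
    end process;
  end architecture sim;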

REFERENCES

[1] IEEE 754-2008, IEEE Standard for Floating-Point Arithmetic, 2008.
[2] Brian Hickmann, Andrew Krioukov, Michael Schulte and Mark Erle, "A Parallel IEEE 754 Decimal Floating-Point Multiplier," in Proc. 25th International Conference on Computer Design (ICCD), Oct. 2007.
[3] Jagadguru Swami Sri Bharati Krisna Tirthaji Maharaja, Vedic Mathematics: Sixteen Simple Mathematical Formulae from the Veda, 1965.
[4] Kavita Khare, R. P. Singh and Nilay Khare, "Comparison of pipelined IEEE-754 standard floating point multiplier with unpipelined multiplier."
[5] Chetana Nagendra, Robert Michael Owens and Mary Jane Irwin, "Power-Delay Characteristics of CMOS Adders," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Vol. 2, No. 3, September 1994.
[6] S. S. Kerur, Prakash Narchi, Jayashree C N, Harish M Kittur and Girish V A, "Implementation of Vedic Multiplier for Digital Signal Processing," International Journal of Computer Applications (IJCA), 2011.