VLSI Implementation of High Speed and Area Efficient Double-Precision Floating Point Multiplier

Ramireddy Venkata Suresh 1, K. Bala 2
1 M.Tech, Dept. of ECE, Srinivasa Institute of Technology and Science, Ukkayapalli, Kadapa, India
2 Associate Professor, Dept. of ECE, Srinivasa Institute of Technology and Science, Ukkayapalli, Kadapa, India

Abstract - Floating-point arithmetic is ever-present in computer systems. Almost every programming language supports floating-point number types, compilers call upon floating-point algorithms to execute the floating-point arithmetic operations, and every operating system must respond to floating-point exceptions such as underflow and overflow. Double-precision floating-point arithmetic is used mainly in digital signal processing (filters, FFTs), numerical applications and scientific applications. The double-precision arithmetic operations are addition, subtraction, multiplication and division; among these, multiplication is the most widely used and the most complex. A double-precision (64-bit) floating-point number is divided into three fields: a sign field, an exponent field and a mantissa field. The most significant bit is the 1-bit sign field, the next 11 bits form the exponent field and the remaining 52 bits form the mantissa field. A double-precision floating-point multiplier therefore requires a large 52x52-bit mantissa multiplication, and the performance of the multiplier depends mainly on the area and speed of this operation. The proposed work presents a novel approach to reduce the cost of this large mantissa multiplication. The Tiryagbhyam technique permits the use of less multiplication hardware than the conventional method: in the traditional method the partial products are added in a separate step, which takes more time, whereas in the proposed method the partial products are added concurrently with the multiplication operation, which reduces the delay. The double-precision floating-point multiplier is implemented in Verilog HDL with the Xilinx ISE tools on a Virtex-5 FPGA.

Keywords- Double-precision, Floating point, Multiplication, Vedic, Tiryagbhyam, IEEE-754, Virtex-5 FPGA.

I. INTRODUCTION

Real numbers[2] represented in binary format are called floating-point numbers. The IEEE-754 standard divides floating-point numbers into two types: the binary interchange format and the decimal interchange format. The IEEE-754[8] binary formats are the single-precision (32-bit) format and the double-precision (64-bit) format. Fig. 1 shows the single-precision (32-bit) floating-point number format. A single-precision floating-point number is 32 bits long and is divided into three fields: bit 31 is the sign field (1 bit), bits 30 to 23 are the exponent field (8 bits) and bits 22 to 0 are the mantissa field (23 bits).

Fig. 1: Single-precision (32-bit) floating-point number format (1-bit sign S, 8-bit exponent E, 23-bit significand/fraction/mantissa).
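For a normalized operand in either format, the three fields encode the value

\[
x \;=\; (-1)^{S} \times 1.M \times 2^{\,E-\mathrm{bias}}, \qquad \mathrm{bias} = 2^{e-1}-1,
\]

where S is the sign bit, E the stored (biased) exponent, M the fraction/mantissa field and e the number of exponent bits, so the bias is 127 for single precision and 1023 for double precision.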
Fig. 2 shows the double-precision (64-bit) floating-point number format. A double-precision floating-point number is 64 bits long and is divided into three fields: bit 63 is the sign field (1 bit), bits 62 to 52 are the exponent field (11 bits) and bits 51 to 0 are the mantissa field (52 bits).

Fig. 2: Double-precision (64-bit) floating-point number format (1-bit sign S, 11-bit exponent E, 52-bit significand/fraction/mantissa).

Hardware implementations of the IEEE-754 floating-point arithmetic operations are important in all processors, and among these operations multiplication is the most crucial. The area of the implementation is a main constraint: it should be as small as possible while still providing high speed, so an area-efficient implementation of floating-point arithmetic, and of the floating-point multiplier in particular, is of major concern. Many high-performance computations can be implemented on FPGAs (Field Programmable Gate Arrays)[7]. FPGAs offer high speed, a large amount of logic resources (a large slice count and many LUTs) and readily accessible on-board intellectual property (IP)[4] cores, which together support a large set of applications. Nowadays FPGAs are used in a wide range of applications such as arithmetic and scientific computation, digital signal and image processing, communications and cryptographic computations.

The proposed work focuses on the implementation of double-precision floating-point multiplication on an FPGA platform. The mantissa multiplication is the most important part of double-precision floating-point multiplication and is the main performance bottleneck: with the implicit leading bit, the mantissa of a double-precision number is 53 bits long, so a direct implementation requires a 53x53-bit multiplier, which is very expensive. In this work an algorithm is proposed for double-precision floating-point multiplication that uses a reduced number of multiplications and achieves high speed at a comparatively low hardware cost. The results of the proposed method are compared with those of the Karatsuba-based algorithm of [1]. The implementation is concerned only with normalized numbers, and it is carried out with the Xilinx ISE synthesis tool, the ISim simulator and a Virtex-5[4] (xc5vlx110t-1ff1136, speed grade -1) FPGA.

The rest of the paper is organized as follows: Section II presents the proposed design approach, Section III describes the implementation, Section IV presents the results and Section V concludes the paper.

II. PROPOSED APPROACH OF DESIGN

The proposed multiplier is designed with the Urdhva Tiryagbhyam multiplication technique, one of the Vedic multiplication[5] principles (sutras). These principles were formulated for the multiplication of large decimal numbers; the same sutra can be applied to large binary numbers, as shown in fig. 5, and it maps well onto digital hardware. The Tiryagbhyam technique[4] takes (2n-1) steps to multiply two n-bit operands.

Following [1], the technique is applied after breaking each of the two mantissas into three parts, as shown in fig. 3. The 53-bit mantissa[1] is split into 17-bit and 18-bit signed fractions: w0 and w1 are the 18-bit fractions and w2 is the 17-bit fraction of one input operand, and in the same way x0 and x1 are the 18-bit fractions and x2 is the 17-bit fraction of the other operand.

Fig. 3: The 53-bit mantissa split into 17-bit and 18-bit fractions.

The 53x53-bit mantissa multiplication is then realized with 18-bit and 19-bit multipliers, shown in fig. 4. The 18-bit multiplier treats its operand as a 1-bit sign part and a 17-bit fractional part: the first step multiplies the two 17-bit parts of the input operands to generate the partial product P0, and then P1, P2 and P3 are generated. The 19-bit multiplier is designed in the same way from a 2-bit part and a 17-bit part. The sub-products of these fractions are combined with appropriate shifts to form the full mantissa product; one way to view this combination is sketched below.

Fig. 4: Structure of the 18-bit and 19-bit multipliers (18-bit operand: 1-bit and 17-bit parts, giving partial products P0 = 17x17, P1 = 1x17, P2 = 1x17 and P3 = 1x1; 19-bit operand: 2-bit and 17-bit parts, giving P0 = 17x17, P1 = 2x17, P2 = 2x17 and P3 = 2x2).
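As an illustration only (the fractions in [1] are signed, so the actual recombination differs in detail), treating the three fractions of each mantissa as unsigned limbs of weight 2^0, 2^18 and 2^36 gives

\[
W = w_2\,2^{36} + w_1\,2^{18} + w_0, \qquad X = x_2\,2^{36} + x_1\,2^{18} + x_0,
\]
\[
W\cdot X = w_2x_2\,2^{72} + (w_2x_1 + w_1x_2)\,2^{54} + (w_2x_0 + w_1x_1 + w_0x_2)\,2^{36} + (w_1x_0 + w_0x_1)\,2^{18} + w_0x_0 .
\]

Every term on the right is a product of 17-bit or 18-bit quantities, which is the order of operand width handled by the 18-bit and 19-bit sub-multipliers of fig. 4.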
The 4-bit binary multiplication using the Tiryagbhyam technique is shown in fig. 5; for n = 4 the technique takes 7 steps, with a 4-bit multiplier as one operand and a 4-bit multiplicand as the other. In each step the carry generated by the previous step is added to the current step, and the weighted step results together give the final product.

Fig. 5: Steps involved in multiplying two 4-bit binary numbers using the Tiryagbhyam technique.
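A minimal behavioural sketch of this 4-bit case is given below; the module name ut_mult4 and the flat combinational coding are illustrative assumptions rather than the paper's actual RTL. Each column sum c0..c6 corresponds to one of the seven vertical-and-crosswise steps of fig. 5, and the carries produced in one column are absorbed when the weighted columns are added.

module ut_mult4 (
    input  [3:0] a,   // 4-bit multiplicand
    input  [3:0] b,   // 4-bit multiplier
    output [7:0] y    // 8-bit product
);
    // Seven vertical-and-crosswise column sums (steps 1..7 of fig. 5).
    wire [2:0] c0 = (a[0] & b[0]);
    wire [2:0] c1 = (a[1] & b[0]) + (a[0] & b[1]);
    wire [2:0] c2 = (a[2] & b[0]) + (a[1] & b[1]) + (a[0] & b[2]);
    wire [2:0] c3 = (a[3] & b[0]) + (a[2] & b[1]) + (a[1] & b[2]) + (a[0] & b[3]);
    wire [2:0] c4 = (a[3] & b[1]) + (a[2] & b[2]) + (a[1] & b[3]);
    wire [2:0] c5 = (a[3] & b[2]) + (a[2] & b[3]);
    wire [2:0] c6 = (a[3] & b[3]);

    // Weight each column by its bit position and add them; the carry bits of
    // every column propagate into the higher columns through this addition.
    assign y = c0 + (c1 << 1) + (c2 << 2) + (c3 << 3)
                  + (c4 << 4) + (c5 << 5) + (c6 << 6);
endmodule

For example, a = 4'b1011 (11) and b = 4'b1101 (13) gives y = 8'd143, matching 11 x 13.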

III. IMPLEMENTATION

The floating-point multiplier handles the sign bit, the exponent bits and the mantissa bits separately.

A. Sign and Exponent Operations
The sign and exponent operations are performed in a straightforward manner. The sign bits of operand 1 and operand 2 are combined with a logical XOR:
Sign_out = Sign_in1 XOR Sign_in2
The resultant exponent field is the sum of the exponents of operand 1 and operand 2 minus the BIAS:
Exp_out = Exp_in1 + Exp_in2 - 1023
For a double-precision floating-point number the BIAS equals 1023, i.e. 2^(11-1) - 1; in general, the BIAS of any floating-point format is 2^(e-1) - 1, where e is the number of exponent bits. A compact sketch of this sign/exponent path, including the exception checks described next, is given at the end of this section.

B. Exceptional Case Handling
The IEEE standard defines several exceptional cases such as NaN (Not a Number), INFINITY, ZERO, UNDERFLOW and OVERFLOW, which can appear in any floating-point arithmetic operation. The main operation is therefore combined with the detection of all the exceptional cases, and the final output is determined as required by the standard; the exception handling runs in parallel with the IEEE-754 data path. Fig. 6 shows the flowchart for handling the exceptional cases. If one or both of the operands is infinity, an INFINITY output is generated (with the computed sign bit). If one of the input operands is a denormalized number, the output is zero (with the computed sign bit). If the output exponent is zero or below zero, the output exponent is forced to zero and UNDERFLOW is asserted; if the output exponent exceeds 11'h7fe (2046 in decimal), the output exponent is forced to 11'h7ff and OVERFLOW is asserted.

Fig. 6: Flowchart for handling of the exceptional cases: (a) infinity and denormalized inputs, (b) OVERFLOW and UNDERFLOW on the output exponent.

C. Normalization and Rounding
Normalization is mandatory to obtain the final result after the mantissa multiplication. After the mantissa multiplication the resultant number can have one extra bit in the most significant position before the binary point, and the same situation can also arise after rounding. Whenever such an additional carry bit is produced by the multiplication or by rounding, the product must be right-shifted by the necessary number of positions and the exponent adjusted correspondingly to obtain the normalized result. Rounding is necessary to bring the 106-bit mantissa multiplication result back to a 53-bit result. Only the round-to-nearest mode specified by the IEEE-754 standard is implemented in this paper; the remaining modes can be implemented as the application requires.
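The sign and exponent path of Sections III-A and III-B can be sketched as follows; the module and signal names are illustrative assumptions, and the exponent increment that normalization (Section III-C) may add is omitted for brevity.

module sign_exp_path (
    input              sign_a, sign_b,   // sign bits of the two operands
    input       [10:0] exp_a, exp_b,     // biased exponents of the two operands
    output             sign_out,
    output reg  [10:0] exp_out,
    output reg         overflow,
    output reg         underflow
);
    // Sign_out = Sign_in1 XOR Sign_in2
    assign sign_out = sign_a ^ sign_b;

    // Exp_out = Exp_in1 + Exp_in2 - 1023, computed wide and signed so the
    // checks of fig. 6(b) can see values below zero and above 11'h7fe.
    wire signed [13:0] exp_sum = $signed({3'b000, exp_a})
                               + $signed({3'b000, exp_b})
                               - 14'sd1023;

    always @* begin
        overflow  = 1'b0;
        underflow = 1'b0;
        if (exp_sum <= 0) begin                  // EXP_OUT <= 0 : underflow
            exp_out   = 11'd0;
            underflow = 1'b1;
        end else if (exp_sum > 14'sd2046) begin  // EXP_OUT > 11'h7fe : overflow
            exp_out   = 11'h7ff;
            overflow  = 1'b1;
        end else begin
            exp_out   = exp_sum[10:0];
        end
    end
endmodule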

IV. RESULTS

The FPGA implementation of the double-precision floating-point multiplier using the Tiryagbhyam technique is divided into an 18-bit multiplier, a 19-bit multiplier and a 53-bit mantissa multiplier; the 53-bit multiplier is designed by combining the 18-bit and 19-bit multipliers. The 18-bit, 19-bit and 53-bit mantissa multipliers are shown in figures 7-16.

The simulation result of the 18-bit multiplier using the Tiryagbhyam multiplication technique is shown in fig. 7. Here A_in and B_in are the two input signals: A_in is one operand, 18 bits long, and B_in is the other operand, also 18 bits long. The two operands are multiplied to produce a 36-bit result.
INPUTS: A_in[17:0] = 18'h00256; B_in[17:0] = 18'h00100;
OUTPUT: Y[35:0] = 36'h000025600;

Fig. 7: Simulation result of the 18-bit Tiryagbhyam multiplier.

Fig. 8 shows the RTL schematic of the 18-bit multiplier using the Tiryagbhyam multiplication technique. Here A_in and B_in are the input operands, each 18 bits long, and the output Y is 36 bits long.

Fig. 8: RTL schematic of the 18-bit Tiryagbhyam multiplier.

The simulation result of the 19-bit multiplier using the Tiryagbhyam multiplication technique is shown in fig. 9. A_in and B_in are the two input operands of the 19-bit multiplier and Y is the output produced by the multiplier.
INPUTS: A_in[18:0] = 19'h00019; B_in[18:0] = 19'h0000f;
OUTPUT: Y[37:0] = 38'h0000000177;

Fig. 9: Simulation result of the 19-bit Tiryagbhyam multiplier.

Fig. 10 shows the RTL schematic of the 19-bit multiplier using the Tiryagbhyam multiplication technique. A_in and B_in are the two operands given as inputs to the multiplier, and the output Y is 38 bits long.

Fig. 10: RTL schematic of the 19-bit Tiryagbhyam multiplier.

The simulation result of the 53x53-bit multiplier is shown in fig. 11. Here A_in and B_in are the inputs of the multiplier, each 53 bits long, and the output Y is 106 bits long.
INPUTS: A_in[52:0] = 53'h01234589756478; B_in[52:0] = 53'h05689745689523;
OUTPUT: Y[105:0] = 106'h0000000000000000005b36392f6;

Fig. 11: Simulation result of the 53-bit mantissa multiplier.

The mantissa of the double-precision floating-point multiplier is 53 bits wide. Because a direct 53-bit x 53-bit multiplier is very difficult to design, the 53 bits are divided into 18-bit and 19-bit parts, the corresponding multipliers are designed, and these multipliers are added together to obtain the 53-bit mantissa multiplier. Fig. 12 shows the RTL schematic of the 53x53-bit multiplier using the Tiryagbhyam multiplication technique. Here A_in and B_in are the input operands of the 53-bit multiplier and Y is the output of the multiplier.

Fig. 12: RTL schematic of the 53-bit mantissa multiplier.
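As a usage illustration, the 18-bit stimulus reported for fig. 7 can be reproduced with a small self-checking testbench such as the one below; the DUT module name ut_mult18 and its port names are assumptions, since the paper does not list its RTL.

module tb_ut_mult18;
    reg  [17:0] A_in, B_in;
    wire [35:0] Y;

    // Device under test: the 18-bit Tiryagbhyam multiplier (name assumed).
    ut_mult18 dut (.A_in(A_in), .B_in(B_in), .Y(Y));

    initial begin
        // Stimulus reported for fig. 7.
        A_in = 18'h00256;
        B_in = 18'h00100;
        #10;
        if (Y === 36'h000025600)
            $display("PASS: Y = %h", Y);
        else
            $display("FAIL: Y = %h (expected 36'h000025600)", Y);
        $finish;
    end
endmodule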

Fig. 13 shows the technology schematic of the 53-bit mantissa multiplier. The red wires are the interconnect network of the circuit, and the schematic contains adders, subtractors and multipliers; these components are connected internally through the wiring network to build the actual circuit.

Fig. 13: Technology schematic of the 53-bit mantissa multiplier.

The simulation waveform of the double-precision Tiryagbhyam multiplier is shown in fig. 14. The multiplier has two inputs, each 64 bits wide; the two 64-bit operands are multiplied and the result is 64 bits long. Overflow and underflow are exceptional conditions that can occur during floating-point multiplication, and they are checked by applying the inputs A[63:0] = 000a8954b8963b45 and B[63:0] = 0008df5678524569. The output displays all zeros because an exceptional case occurs: the resultant exponent is less than or equal to zero.

Fig. 14: Simulation result of the double-precision Tiryagbhyam multiplier.

The RTL schematic of the double-precision Tiryagbhyam multiplier is shown in fig. 15. The inputs are A[63:0], B[63:0], clock and reset, and the outputs are Y[63:0], underflow and overflow. A[63:0] is one 64-bit floating-point operand and B[63:0] is the other 64-bit floating-point operand; these two operands are applied to the inputs of the floating-point multiplier, and Y[63:0] is its 64-bit output.

Fig. 15: RTL schematic of the double-precision Tiryagbhyam multiplier.

The technology schematic in fig. 16 shows how the components of the double-precision multiplier are internally connected and how the circuit is actually arranged in the FPGA.

Fig. 16: Technology schematic of the double-precision Tiryagbhyam multiplier.

Table 1 gives the device utilization summary. The comparison parameters are the number of slice registers and the number of slice LUTs, compared for the Karatsuba multiplier of [1] and the Tiryagbhyam multiplier.

Table 1: Device utilization summary.

Device                          Slice Registers            Slice LUTs
                                Karatsuba  Tiryagbhyam     Karatsuba  Tiryagbhyam
Spartan-2 (xc2s15-6cs144)         1877       1970            3599       3607
Spartan-2 (xc2s200-6fg256)        1878       1970            3599       3607
Spartan-2e (xc2s600e-7fg676)      1933       1969            3599       3607
Spartan-3 (xc3s4000l-4fg900)       797        504            1384        900
Spartan-3a (xc3s1400a-5fg676)      798        526            1378        936
Spartan-3e (xc3s1600e-5fg484)      798        525            1378        936
Virtex-2 (xc2v6000-6ff1517)        796        506            1384        902
Virtex-2p (xc2vpx70-7ff1704)       796        506            1384        902
Virtex-4 (xc4vlx200-11ff1513)      798        441            1384        775
Virtex-5 (xc5vlx110t-1ff1136)      390        373            1456        617

As the target device moves from the Spartan family to the Virtex family, the number of slice registers used by the multipliers is reduced and the number of slice LUTs is also reduced. Overall, the Tiryagbhyam multiplier requires the minimum amount of logic resources to perform the double-precision floating-point multiplication.

Fig. 17 compares the delay of the Karatsuba and Tiryagbhyam multipliers, measured in nanoseconds. The delay of the Karatsuba multiplier is 18.139 ns and the delay of the Tiryagbhyam multiplier is 15.034 ns; the delay of the Tiryagbhyam multiplier is lower, so it is faster than the Karatsuba multiplier.

Fig. 17: Delay (ns) of the Karatsuba and Tiryagbhyam multipliers.

Fig. 18 shows the power consumption of the Karatsuba and Tiryagbhyam multipliers. The X-axis is the frequency in MHz and the Y-axis is the power in watts; the power is recorded at frequencies from 10 MHz to 600 MHz. Fig. 18 shows that the Karatsuba multiplier consumes more power than the Tiryagbhyam multiplier.

Fig. 18: Power consumption (W) versus frequency (MHz) of the Karatsuba and Tiryagbhyam multipliers.

V. CONCLUSION

The FPGA implementation of the double-precision floating-point multiplier using the Tiryagbhyam technique achieves higher speed and lower area consumption than the Karatsuba multiplier, and it is fully compatible with the binary interchange format of the IEEE-754 standard. The design also handles the exceptional conditions OVERFLOW and UNDERFLOW.

REFERENCES
[1] Manish Kumar Jaiswal and Ray C. C. Cheung, "VLSI Implementation of Double-Precision Floating-Point Multiplier Using Karatsuba Technique," Circuits, Systems, and Signal Processing, Springer Science+Business Media, vol. 32, pp. 15-27, July 2013.
[2] Purna Ramesh Addanki, Venkata Nagaratna Tilak Alapati and Mallikarjuna Prasad Avana, "An FPGA Based High Speed IEEE-754 Double Precision Floating Point Adder/Subtractor and Multiplier Using Verilog," International Journal of Advanced Science and Technology, vol. 52, pp. 61-74, March 2013.
[3] M. Al-Ashrafy, A. Salem and W. Anis, "An Efficient Implementation of Floating Point Multiplier," Saudi International Electronics, Communications and Photonics Conference (SIECPC), pp. 1-5, April 2011.
[4] Sandesh S. Saokar, R. M. Banakar and Saroja Siddamal, "High Speed Signed Multiplier for Digital Signal Processing Applications," IEEE International Conference on Signal Processing, Computing and Control (ISPCC), pp. 1-6, March 2012.
[5] Devika Jaina, Kabiraj Sethi and Rutuparna Panda, "Vedic Mathematics Based Multiply Accumulate Unit," International Conference on Computational Intelligence and Communication Systems, pp. 754-757, 2011.
[6] Deena Dayalan and S. Deborah Priya, "High Speed Energy Efficient ALU Design using Vedic Multiplication Techniques," ACTEA, IEEE, pp. 600-603, July 2009.
[7] P. Belanovic and M. Leeser, "A Library of Parameterized Floating-Point Modules and Their Use," in 12th International Conference on Field-Programmable Logic and Applications (FPL-02), London, UK: Springer-Verlag, pp. 657-666, September 2002.
[8] Xilinx, Xilinx Floating-Point IP Core, http://www.xilinx.com.

Author's Biography

RAMIREDDY VENKATA SURESH received his B.Tech degree in ECE from JNTUA, Ananthapur. He is pursuing his M.Tech at Srinivasa Institute of Technology and Science, Ukkayapalli, Kadapa, AP.

Mr. K. BALA is currently working as an Associate Professor in the ECE Department, Srinivasa Institute of Technology and Science, Ukkayapalli, Kadapa, AP. He received his M.Tech from Sri Kottam Tulasi Reddy Memorial College of Engineering, Kondair, Mahaboobnagar, AP.