Run-Time Reconfigurable Multi-Precision Floating Point Multiplier Design Based on Pipelining Technique Using Karatsuba-Urdhva Algorithms


1 Shruthi K.H., 2 Rekha M.G.
1 M.Tech, VLSI Design and Embedded Systems, RIT, Hassan, Karnataka, India
2 Assistant Professor, Dept. of E&C, RIT, Hassan, Karnataka, India
shruthikhbe@gmail.com, rekhamg.rit@gmail.com

Abstract - In numerous application fields, for example image processing, signal processing and so forth, floating point multiplication is one of the crucial operations. However, each application has different working requirements, and the IEEE-754 standard is not readily adjustable to all of them: some need high accuracy, while others need small power consumption and low delay, and the standard formats are intricate. An ideal run-time reconfigurable realization need not follow the IEEE-specified sizes, so it requires the use of custom FP structures. In this project, the implementation of a run-time-reconfigurable FP multiplier using custom FP structures on an FPGA is presented. Depending on the precision or application requirement, this FP multiplier offers six modes of operation. A better implementation is achieved by using optimal hardware built from custom IPs and by truncating the multiplicands before multiplication. To further improve the efficiency of the multiplier, a combination of the Karatsuba algorithm and the Urdhva-Tiryagbhyam algorithm is used to realize the unsigned parallel multiplier. Pipelining has been identified as an efficient method for reducing processing time, and it is applied here: the pipelined Karatsuba-Urdhva multiplier hardware speeds up the process roughly twofold compared to the unpipelined Karatsuba-Urdhva multiplier.

Key Words: Custom format, 6 modes, Karatsuba and Urdhva-Tiryagbhyam algorithm, Pipelining.

1. INTRODUCTION

Floating point multiplication units are essential Intellectual Properties (IP) for modern multimedia and high performance computing such as graphics acceleration, signal processing and image processing. A lot of effort has been made over the past few decades to improve the performance of floating point computations. Floating point units are not only complex, but also require more area, and hence consume more power, than fixed point multipliers. The complexity of the floating point unit increases as accuracy becomes a major issue: even a minute error in accuracy can cause major consequences. Such errors arise in floating point units mainly because of the discrete behavior of the IEEE-754 floating point representation, where a fixed number of bits is used to represent numbers. Due to the high computational requirements of scientific applications such as computational geometry, computational physics, etc., it is necessary to have extreme precision in floating point calculations, and this increased precision may not be provided by the single precision or double precision formats; supplying it further increases the complexity of the unit. But some applications do not require high precision; even an approximate value is sufficient for correct operation. For applications which require lower precision, the use of double precision or quadruple precision floating point units is a luxury: it wastes area and power and also increases latency.

2. PROPOSED METHODOLOGY

The proposed model is a reconfigurable multi-precision floating point multiplier which can be operated in six different modes according to the accuracy requirements. It can perform floating point multiplication with different mantissa sizes depending on the precision requirement. The basic unit is a double-precision floating point unit; according to the precision selected, the size of the mantissa is varied. Fig. 1 shows the floating-point multiplication format used in the proposed model.

Fig. 1 Floating point format used in the proposed model

The multiplier accepts two inputs, each 67 bits wide. The inputs are given in double-precision floating point format with the first 3 bits (bit 66 down to bit 64) used as mode select bits. The mode select bits of both inputs must be the same; otherwise a mode select error signal is generated and execution is stopped. The different mode select bit combinations for the different modes are shown in Table 1.
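As a concrete illustration of this input format, the following minimal sketch unpacks the two 67-bit operands and raises the mode mismatch error. The module and signal names are assumptions for illustration, not taken from the paper.

```verilog
// Hypothetical unpacking of the 67-bit operands described above:
// 3 mode-select bits on top of a 64-bit floating point word.
module mode_select_check (
    input  [66:0] in_a,
    input  [66:0] in_b,
    output [2:0]  mode,      // common mode of both operands
    output [63:0] op_a,      // 64-bit floating point payloads
    output [63:0] op_b,
    output        mode_err   // asserted when the mode bits disagree
);
    wire [2:0] mode_a = in_a[66:64];
    wire [2:0] mode_b = in_b[66:64];

    assign op_a     = in_a[63:0];
    assign op_b     = in_b[63:0];
    assign mode     = mode_a;
    assign mode_err = (mode_a != mode_b);  // controller halts execution on mismatch
endmodule
```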

Table 1 Different modes and their mode select bit combinations

Fig. 2 Block diagram of the proposed multiplier model

The different modes in the proposed multi-precision multiplier are:

Mode 1: Auto mode, i.e. the controller itself selects the optimum mode by analyzing the inputs and then starts execution. The optimum mode is selected by counting the number of zeroes after the leading 1. If there are 6 or more zeroes after the leading 1, the number of bits up to that leading 1 is counted. If the number of bits up to that leading 1 is less than 8, mode 2 (the 8-bit mantissa mode) is selected; if the number of bits before the leading 1 is less than 16, the 16-bit mantissa mode is selected, and so on.

Mode 2: A custom precision format with a basic mantissa size of 8 bits.

Mode 3: A custom precision format with a basic mantissa size of 16 bits.

Mode 4: A custom precision format with a basic mantissa size of 23 bits.

Mode 5: A custom precision format with a basic mantissa size of 32 bits.

Mode 6: A fully fledged double-precision floating point multiplier, at the cost of area and power.

The modes with fewer mantissa bits consume less power. These modes are best suited for integer multiplication and for applications where accuracy is not a big issue. Rounding of bits is done before multiplication in every mode except mode 6, and this reduces large variations in the results. A simple block diagram of the proposed model is shown in Fig. 2.

The custom precision formats with 8-bit and 16-bit mantissas are best suited for integer multiplications where fractional accuracy is not an issue. They can also be used for low-value fractional multiplications which require an integer result. Using the 8-bit and 16-bit multipliers instead of a fully-fledged double precision floating point multiplier saves a lot of power and increases speed. The binary unsigned multiplier used for mantissa multiplication is implemented using a combination of the Karatsuba algorithm and the Urdhva-Tiryagbhyam algorithm, which gives better optimization in terms of speed and area.

3. IMPLEMENTATION

A. Floating Point Multiplier

A floating point number is represented in IEEE-754 format [2] as ± s × b^e, i.e. ± significand × base^exponent. To multiply two floating point numbers ± s1 × b^e1 and ± s2 × b^e2, the significand (mantissa) parts are multiplied to get the product mantissa and the exponents are added to get the product exponent; the product is therefore ± (s1 × s2) × b^(e1 + e2). The hardware block diagram of the floating point multiplier is shown in Fig. 3.

Fig. 3 Floating Point Multiplier

The important blocks in the implementation of the proposed floating point multiplier are described below.

a. Sign Calculation

The MSB of a floating point number is the sign bit. The sign of the product is positive if both numbers have the same sign, and negative if their signs differ. So a simple XOR gate serves as the sign calculator.
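Before detailing the remaining blocks, the following minimal sketch previews how the sign (a), exponent (b) and mantissa (c) computations combine according to the product formula above. Single-precision field widths are used for concreteness, and all module and signal names are illustrative assumptions, not the paper's.

```verilog
// Sketch of the core floating point product: sign, exponent and
// mantissa paths, shown with single-precision widths.
module fp_mul_core (
    input         sign_a, sign_b,
    input  [7:0]  exp_a, exp_b,      // biased exponents
    input  [23:0] mant_a, mant_b,    // significands with the hidden 1 restored
    output        sign_p,
    output [8:0]  exp_p,             // biased product exponent, pre-normalization
    output [47:0] mant_p             // raw 48-bit significand product
);
    assign sign_p = sign_a ^ sign_b;                         // a. sign: simple XOR
    assign exp_p  = {1'b0, exp_a} + {1'b0, exp_b} - 9'd127;  // b. add exponents, remove one bias
                                                             //    (underflow/overflow handling omitted)
    assign mant_p = mant_a * mant_b;                         // c. mantissa multiplication
endmodule
```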

b. Addition of Exponents

To get the resultant exponent, the input exponents are added together. Since the floating point format stores a biased exponent, the bias must be subtracted from the sum of the exponents to get the correctly biased result exponent. The value of the bias is 127 (01111111 in binary) for the single precision format and 1023 (01111111111 in binary) for the double precision format. In the proposed custom precision formats, a bias of 127 is also used. The computational time of the mantissa multiplication is much longer than that of the exponent addition, so a simple ripple carry adder and ripple borrow subtractor are optimal for the exponent path. An 11-bit ripple carry adder adds the two input exponents, and the bias is subtracted using an array of ripple borrow subtractors.

c. Karatsuba - Urdhva-Tiryagbhyam multiplier

In floating point multiplication, the most important and complex part is the mantissa multiplication. Multiplication requires more time than addition, and as the number of bits increases it consumes more area and time. The double precision format needs a 53x53-bit multiplier and the single precision format needs a 24x24-bit multiplier; these operations take considerable time and are the major contributors to the delay of the floating point multiplier. To make the multiplication more area efficient and faster, the proposed model uses a combination of the Karatsuba algorithm and the Urdhva-Tiryagbhyam algorithm.

The Karatsuba algorithm uses a divide and conquer approach: it breaks the inputs into a most significant half and a least significant half, and this process continues until the operands are 8 bits wide. The Karatsuba algorithm is best suited for operands of higher bit length, but at lower bit lengths it is not as efficient. To eliminate this problem, the Urdhva algorithm is used at the lower stages. The combined Karatsuba Urdhva-Tiryagbhyam model is shown in Fig. 4.

The Urdhva-Tiryagbhyam algorithm is the best algorithm for binary multiplication in terms of area and delay. But as the number of bits increases, the delay also increases, because the partial products are added in a ripple manner: a 4-bit multiplication requires 6 adders connected in a ripple manner, an 8-bit multiplication requires 14 adders, and so on. Compensating for this delay increases the area, so the Urdhva-Tiryagbhyam algorithm is not optimal when the number of bits is large. Using the Karatsuba algorithm at the higher stages and the Urdhva-Tiryagbhyam algorithm at the lower stages compensates for the limitations of both algorithms, making the multiplier more efficient. The circuit is further optimized by using carry select and carry save adders instead of ripple carry adders, which reduces the delay to a great extent with a minimal increase in hardware.

Fig. 4 Karatsuba-Urdhva multiplier model
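To make the divide and conquer structure concrete, here is a minimal, scaled-down sketch: urdhva4 computes a 4x4 product with the vertical-and-crosswise column sums, and karatsuba8 builds an 8x8 multiplier from three such 4x4 units. The paper's actual design recurses down to 8-bit leaf operands and uses carry select/carry save adders, which this sketch does not model; all names are mine.

```verilog
// 4x4 Urdhva-Tiryagbhyam multiplier: column k sums every a[i]&b[j]
// with i + j = k (the "vertically and crosswise" pattern).
module urdhva4 (input [3:0] a, input [3:0] b, output [7:0] p);
    wire [2:0] s0 = a[0]&b[0];
    wire [2:0] s1 = (a[1]&b[0]) + (a[0]&b[1]);
    wire [2:0] s2 = (a[2]&b[0]) + (a[1]&b[1]) + (a[0]&b[2]);
    wire [2:0] s3 = (a[3]&b[0]) + (a[2]&b[1]) + (a[1]&b[2]) + (a[0]&b[3]);
    wire [2:0] s4 = (a[3]&b[1]) + (a[2]&b[2]) + (a[1]&b[3]);
    wire [2:0] s5 = (a[3]&b[2]) + (a[2]&b[3]);
    wire [2:0] s6 = a[3]&b[3];
    // Weighted sum of the columns yields the product.
    assign p = s0 + (s1<<1) + (s2<<2) + (s3<<3) + (s4<<4) + (s5<<5) + (s6<<6);
endmodule

// One Karatsuba step: a*b = ph*2^8 + ((ah+al)(bh+bl) - ph - pl)*2^4 + pl,
// needing only three 4x4 multiplications instead of four.
module karatsuba8 (input [7:0] a, input [7:0] b, output [15:0] p);
    wire [3:0] ah = a[7:4], al = a[3:0];
    wire [3:0] bh = b[7:4], bl = b[3:0];

    wire [3:0] as, bs;
    wire       ac, bc;                 // carries of the half-sums
    assign {ac, as} = ah + al;
    assign {bc, bs} = bh + bl;

    wire [7:0] ph, pl, pm;
    urdhva4 u_hi  (.a(ah), .b(bh), .p(ph));
    urdhva4 u_lo  (.a(al), .b(bl), .p(pl));
    urdhva4 u_mid (.a(as), .b(bs), .p(pm));

    // Rebuild the full (5-bit x 5-bit) middle product from pm plus the
    // carry corrections, then subtract ph and pl per the identity above.
    wire [9:0] mid_full = pm + (ac ? {2'b0, bs, 4'b0} : 10'd0)
                             + (bc ? {2'b0, as, 4'b0} : 10'd0)
                             + ((ac & bc) ? 10'd256 : 10'd0);
    wire [9:0] mid = mid_full - ph - pl;

    assign p = ({8'd0, ph} << 8) + ({6'd0, mid} << 4) + {8'd0, pl};
endmodule
```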
d. Normalization of the result

Floating point representations have a hidden bit in the mantissa, which always has the value 1 and hence is not stored in memory, saving one bit. The leading 1 of the mantissa, i.e. the 1 immediately to the left of the binary point, is taken as the hidden bit. Normalization is done by shifting so that the MSB of the mantissa becomes nonzero, and in radix 2 nonzero means 1. The binary point in the mantissa multiplication result is shifted left if the leading 1 is not immediately to its left, and for each left shift the exponent value is incremented by one. This is called normalization of the result. Since the value of the hidden bit is always 1, it is also called the hidden 1.

e. Representation of Exceptions

Some numbers cannot be represented with a normalized significand, so special codes are assigned to them. The proposed model uses four output signals, namely Zero, Infinity, NaN (Not-a-Number) and Denormal, to represent these exceptions. If the product has exponent + bias = 0 and significand = 0, the result is taken as Zero (±0). If the product has exponent + bias = 255 and significand = 0, the result is taken as Infinity. If the product has exponent + bias = 255 and significand ≠ 0, the result is taken as NaN. Denormalized values, or denormals, are numbers without a hidden 1 and with the smallest possible exponent; they are used to represent certain small numbers that cannot be represented as normalized numbers. If the product has exponent + bias = 0 and significand ≠ 0, the result is represented as a Denormal, whose value is ± 0.s × 2^-126, where s is the significand.
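The four exception flags described above reduce to simple comparisons on the biased exponent and significand of the product. A minimal sketch for an 8-bit exponent field, matching the conditions in the text; the names are illustrative assumptions:

```verilog
// Exception detection on the packed product fields.
module fp_exceptions (
    input  [7:0]  exp_biased,     // exponent + bias of the product
    input  [22:0] significand,    // fraction bits of the product
    output        zero, infinity, nan, denormal
);
    wire exp_all0 = (exp_biased == 8'd0);
    wire exp_all1 = (exp_biased == 8'd255);
    wire sig_zero = (significand == 23'd0);

    assign zero     = exp_all0 &  sig_zero;   // ±0
    assign infinity = exp_all1 &  sig_zero;
    assign nan      = exp_all1 & ~sig_zero;
    assign denormal = exp_all0 & ~sig_zero;   // value ± 0.s × 2^-126
endmodule
```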

B. Rounding and Truncating

The problem is that the multiplication result must fit into the same n bits as the multiplier and the multiplicand, which often leads to a loss of precision. Introducing rounding modes reduces this loss. The proposed multiplier supports all IEEE rounding modes: round to nearest even, round to zero, round to positive infinity, and round to negative infinity. In the proposed project, rounding is carried out twice, i.e. before and after multiplication. In rounding before multiplication, the input is taken initially without consideration of the binary point, so the only rounding method used there is truncation.

C. Pipelining

Pipelining is a standard approach to speeding up the clock frequency. It is an important technique used in several applications such as digital signal processing (DSP) systems, microprocessors, etc., and it shortens the critical path in most systems. The performance of the proposed multiplier is improved by pipelining; Fig. 5 illustrates the pipelined architecture. The aim of developing a pipelined floating point multiplier module is to produce a result at every clock cycle. To enhance the performance of the multiplier, three pipeline stages are used to divide the critical path, thus increasing the operating frequency. As the number of pipeline stages increases, the path delay of each stage decreases and the overall performance of the circuit improves. Pipelining the floating point multiplier increases the speed, but the area increases as well; different coding structures were tried in the VHDL code in order to minimize the size.

Multiplying two numbers in floating point format is done by three main modules, i.e. the multiplier module, the rounding module and the exceptions module. In this project, the pipelined floating point multiplier is organized in three stages. Stage 1 performs the addition of the exponents, the multiplication of the mantissas, and the exclusive-OR of the signs. Stage 2 takes the results of stage 1 as inputs and performs the rounding operations. Stage 3 checks the various exception conditions such as overflow, underflow, invalid and inexact. By performing the multiplication with this pipelined technique, the speed of the pipelined multiplier is doubled compared to the non-pipelined multiplier.

Fig. 5 Pipelined double precision floating point multiplier

Example of double precision floating point multiplication:

Input A = -18.0, Input B = 9.5

Binary representation of the operands:
A = -10010.0, B = +1001.1

Normalized representation of the operands:
A = -1.0010 × 2^4, B = +1.0011 × 2^3

IEEE representation of the operands:
A = 1 10000000011 0010000000000000000000000000000000000000000000000000 = 64'hC032000000000000
B = 0 10000000010 0011000000000000000000000000000000000000000000000000 = 64'h4023000000000000

Outputs:
A × B = -18.0 × 9.5 = -171.0
Out = 64'hC065600000000000
Underflow = 0, Overflow = 0, Inexact = 0, Exception = 0, Invalid = 0, Ready = 1
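The hex values in this example can be reproduced with Verilog's built-in real-number conversion system functions; a small sketch (the testbench name is mine):

```verilog
// Reproduces the worked example using IEEE 1364 system functions.
module tb_example;
    reg  [63:0] a, b;
    real        prod;
    initial begin
        a    = $realtobits(-18.0);   // expect 64'hC032000000000000
        b    = $realtobits(9.5);     // expect 64'h4023000000000000
        prod = $bitstoreal(a) * $bitstoreal(b);
        $display("A   = %h", a);
        $display("B   = %h", b);
        $display("A*B = %h (%f)", $realtobits(prod), prod);
        // expect 64'hC065600000000000, i.e. -171.0
    end
endmodule
```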

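Closing out section C, the three-stage pipeline organization can be sketched as follows. This is a minimal sketch under stated assumptions: stage 1 registers the sign/exponent/mantissa arithmetic, stage 2 normalizes with simple truncation standing in for the full rounding logic, and stage 3 packs the word, with exception checking omitted; all names and widths are mine, not the paper's.

```verilog
// Hedged sketch of a 3-stage pipelined double-precision multiplier core.
module fp_mul_pipe (
    input             clk,
    input             sign_a, sign_b,
    input      [10:0] exp_a, exp_b,      // biased double-precision exponents
    input      [52:0] mant_a, mant_b,    // significands with hidden 1
    output reg [63:0] result
);
    // Stage 1: sign XOR, exponent add (one bias removed), mantissa multiply.
    reg          s1_sign;
    reg  [11:0]  s1_exp;
    reg  [105:0] s1_mant;                // 53x53 -> 106-bit raw product
    always @(posedge clk) begin
        s1_sign <= sign_a ^ sign_b;
        s1_exp  <= exp_a + exp_b - 12'd1023;
        s1_mant <= mant_a * mant_b;      // the Karatsuba-Urdhva unit in the paper
    end

    // Stage 2: normalization, truncation in place of full rounding.
    reg         s2_sign;
    reg  [11:0] s2_exp;
    reg  [51:0] s2_frac;
    always @(posedge clk) begin
        s2_sign <= s1_sign;
        if (s1_mant[105]) begin          // product in [2,4): shift once, bump exponent
            s2_exp  <= s1_exp + 12'd1;
            s2_frac <= s1_mant[104:53];
        end else begin                   // product in [1,2): already normalized
            s2_exp  <= s1_exp;
            s2_frac <= s1_mant[103:52];
        end
    end

    // Stage 3: pack the result (exception checks omitted in this sketch).
    always @(posedge clk)
        result <= {s2_sign, s2_exp[10:0], s2_frac};
endmodule
```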
4. SIMULATION RESULTS

The main objective of this work is to design and implement a multi-precision floating point multiplier circuit such that the device can reconfigure itself according to the precision requirements, operate at high speed, and consume less power where accuracy is not an issue. Since mantissa multiplication is the most complex part of the floating point multiplier, we designed a multiplier which operates at high speed and whose increase in delay and area with a growing number of bits is significantly small. The floating point multipliers for the different modes, in both the IEEE-754 standard format and the custom precision formats, were implemented separately in Verilog HDL and tested. The binary multiplier unit (Karatsuba-Urdhva) is further optimized by replacing simple adders with efficient adders such as carry select adders and carry save adders. The proposed model was implemented, synthesized and simulated using Xilinx Synthesis Tools (ISE 14.7) targeting the Virtex-5 family. Some of the simulation results are shown in Fig. 6 to Fig. 11.

Fig. 7 64x64-bit pipelined Karatsuba-Urdhva multiplication
Fig. 8 Double precision 8-bit mantissa multiplication mode
Fig. 9 Double precision 16-bit mantissa multiplication mode
Fig. 10 Double precision 24-bit mantissa multiplication mode
Fig. 11 Double precision 32-bit mantissa multiplication mode

Fig. 12 Double precision 52-bit mantissa multiplication mode

5. CONCLUSION

This paper describes a method to effectively adjust the delay and power consumption of a floating point multiplier for different accuracy requirements. It also shows how to effectively reduce the percentage increase in delay and area with a growing number of bits by using an efficient combination of the Karatsuba and Urdhva-Tiryagbhyam algorithms together with a 3-stage pipelining technique.

ACKNOWLEDGMENT

I express my sincere thanks to Mrs. Rekha M.G., Assistant Professor, Department of Electronics and Communication Engineering, Rajeev Institute of Technology, Hassan, for her valuable guidance and continuous encouragement in the course of my work. I would also like to thank Mrs. Ambika K., Head of the Department, Assistant Professor, Department of Electronics and Communication Engineering, Rajeev Institute of Technology, Hassan, for her constant support.

REFERENCES

[1] Arish S, R.K. Sharma, "Run-time reconfigurable multi-precision floating point multiplier design for high speed, low-power applications", IEEE 2nd International Conference on Signal Processing and Integrated Networks (SPIN), pp. 902-907, 2015.
[2] IEEE 754-2008, IEEE Standard for Floating-Point Arithmetic, 2008.
[3] K. Manolopoulos, D. Reisis, V.A. Chouliaras, "An Efficient Multiple Precision Floating-Point Multiplier", 18th IEEE International Conference on Electronics, Circuits and Systems (ICECS), pp. 153-156, 2011.
[4] Guy Even, Silvia M. Mueller, Peter-Michael Seidel, "A dual precision IEEE floating-point multiplier", INTEGRATION, the VLSI Journal, vol. 29, pp. 167-180, 2000.
[5] Anand Mehta, C.B. Bidhul, Sajeevan Joseph, P. Jayakrishnan, "Implementation of Single Precision Floating Point Multiplier using Karatsuba Algorithm", International Conference on Green Computing, Communication and Conservation of Energy (ICGCE), pp. 254-256, 2013.
[6] B. Jeevan, S. Narender, C.V. Krishna Reddy, K. Sivani, "A High Speed Binary Floating Point Multiplier Using Dadda Algorithm", International Multi-Conference on Automation, Computing, Communication, Control and Compressed Sensing, pp. 455-460, 2012.
[7] Anna Jain, Baisakhy Dash, Ajit Kumar Panda, Muchharla Suresh, "FPGA Design of a Fast 32-bit Floating Point Multiplier Unit", International Conference on Devices, Circuits and Systems (ICDCS), pp. 545-547, 2012.
[8] P.S. Tulasiram, D. Vaithiyanathan, R. Seshasayanan, "Implementation of modified Booth Recoded Wallace Tree Multiplier for fast Arithmetic Circuits", Asia Pacific Conference on Postgraduate Research in Microelectronics & Electronics (PRIMEASIA), pp. 220-223, 2012.
[9] Poornima M, Shivaraj Kumar Patil, Shivukumar, Shridhar K P, Sanjay H, "Implementation of Multiplier using Vedic Algorithm", International Journal of Innovative Technology and Exploring Engineering (IJITEE), ISSN: 2278-3075, vol. 2, issue 6, pp. 219-223, May 2013.

BIOGRAPHY

Shruthi K.H. is pursuing an M.Tech in VLSI Design and Embedded Systems at Rajeev Institute of Technology, Hassan, Karnataka, India. Her areas of research interest are VLSI design, embedded systems and image processing.

Rekha M.G. is working as an Assistant Professor in the Department of E&C, Rajeev Institute of Technology, Hassan, Karnataka, India. Her areas of research interest are digital electronics, communication systems and digital signal processing.