UNIVERSITY OF CALGARY. Real-time Implementation of An Exponent-based Tone Mapping Algorithm. Prasoon Ambalathankandy A THESIS


UNIVERSITY OF CALGARY

Real-time Implementation of An Exponent-based Tone Mapping Algorithm

by

Prasoon Ambalathankandy

A THESIS SUBMITTED TO THE FACULTY OF GRADUATE STUDIES IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE

GRADUATE PROGRAM IN ELECTRICAL ENGINEERING

CALGARY, ALBERTA

MAY, 2016

© Prasoon Ambalathankandy 2016

Abstract

In this thesis, we present a real-time hardware implementation of the exponent-based tone mapping algorithm of Horé et al. Although there are several tone mapping algorithms available in the literature, most of them require manual tuning of their rendering parameters. In our implementation, by contrast, the algorithm has an embedded automatic key parameter estimation block that controls the brightness of the tone-mapped images. We also present the implementation of a Gaussian-based halo reducing filter. The hardware implementation is described in Verilog and synthesized for a field-programmable gate array (FPGA) device. Experimental results on different wide dynamic range (WDR) images show that we are able to obtain images of good visual quality with good brightness and contrast. The good performance of our hardware architecture is also confirmed quantitatively by high peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) index values.

Acknowledgements

Like every Master's thesis, this one too would not have been possible without the help, guidance and support of many people. While it is not possible to name everyone, I would like to acknowledge all who directly or indirectly helped to make this work a reality. First and foremost, I would like to thank Prof. Orly Yadid-Pecht for giving me this opportunity and then guiding and motivating me through my MSc studies. Many thanks to Dr. Alain Horé for being so patient, helping me throughout this thesis and shaping my research into a good manuscript. I would also like to thank Ms. Pauline Cummings and Ms. Ella Lok for their support. I would like to thank my labmates for all the interactions and the numerous coffee and lunch breaks we shared during these two and a half years. Special thanks to Nikhil, Tony, Ulian and Yuting for being my sounding board, giving valuable suggestions and making my stay at the University of Calgary a memorable one. I would like to acknowledge Dr. Bruce F. Cockburn (University of Alberta, Edmonton) and Dr. Svetlana Yanushkevich for reviewing my thesis and providing valuable feedback. Lastly, I would like to thank my parents and my family who have supported me throughout. This research was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), the Alberta Informatics Circle of Research Excellence (iCORE)/Alberta Innovates Technology Futures (AITF) and CMC Microsystems.

Table of Contents

Abstract
Acknowledgements
Table of Contents
List of Tables
List of Figures and Illustrations
List of Symbols, Abbreviations and Nomenclature
1. INTRODUCTION
2. THE EXPONENT-BASED TONE MAPPING ALGORITHM OF HORÉ ET AL.
   2.1 Introduction to the tone mapping algorithm of Horé et al.
   2.2 Estimation of the automatic rendering parameter k_1
   2.3 Objective quality assessment of Horé et al.'s algorithm
   2.4 A halo reducing filter
       2.4.1 Some known methods to reduce halo artifacts
       2.4.2 Reducing halo artifacts in Horé et al.'s algorithm
3. HARDWARE ARCHITECTURE OF THE TONE MAPPING ALGORITHM
   Background
   Hardware platforms for implementing image processing algorithms
   Register-transfer level (RTL) implementation of the tone mapping algorithm of Horé et al.
       Top-level module
       Mean and max module
       Sliding window and buffer
       Implementation of x∗h
       The automatic k_1 parameter estimation block
       Inverse exponential function
   Hardware implementation for the halo reducing filter of Horé et al.
       Convolution x∗h
4. EXPERIMENTAL RESULTS
   FPGA resource utilization for the hardware implementation
   Objective image quality assessment of our hardware implementation
   Visual quality assessment of the tone mapped images
   Comparison with other hardware implementations
5. CONCLUSION AND FUTURE WORK
REFERENCES

List of Tables

Table 1: Objective quality assessment of tone mapping operators
Table 2: Resource summary for our tone mapping implementation without using the halo reducing filter
Table 3: Resource summary for our tone mapping implementation using the halo reducing filter
Table 4: PSNR and SSIM values for 25 test images given in Fig. 23
Table 5: Comparison with other tone-mapping hardware implementations

List of Figures and Illustrations

Figure 1: Range of luminance intensity in the natural environment (adapted from [2])
Figure 2: (a) Example WDR image. (b) Global tone mapped image using the algorithm of Drago et al. [22]. (c) Local tone mapped image using the algorithm of Fattal et al. [5]. (d) Hybrid tone mapped image using the algorithm of Horé et al. [37]
Figure 3: Impact of parameter k_1 on tone mapped images. (a) k_1=0.05 (b) k_1=0.15 (c) k_1=0.25 (d) k_1=0.35 (e) k_1=0.45 (f) k_1=0.55 (g) k_1=0.65 (h) k_1=0.75 (i) k_1=0.85 (j) k_1=0.95
Figure 4: Plot of mean intensity versus parameter k_1 of tone mapped images
Figure 5: Image 1 tone-mapped using (a) Ashikhmin (b) Drago et al. (c) Durand et al. (d) Fattal et al. (e) Ferwerda et al. (f) Kuang et al. (g) Mantiuk et al. (h) Pattanaik et al. (i) Reinhard et al. (j) Tumblin et al. (k) Ward et al. (l) Horé et al.
Figure 6: Image 2 tone-mapped using (a) Ashikhmin (b) Drago et al. (c) Durand et al. (d) Fattal et al. (e) Ferwerda et al. (f) Kuang et al. (g) Mantiuk et al. (h) Pattanaik et al. (i) Reinhard et al. (j) Tumblin et al. (k) Ward et al. (l) Horé et al.
Figure 7: Image 3 tone-mapped using (a) Ashikhmin (b) Drago et al. (c) Durand et al. (d) Fattal et al. (e) Ferwerda et al. (f) Kuang et al. (g) Mantiuk et al. (h) Pattanaik et al. (i) Reinhard et al. (j) Tumblin et al. (k) Ward et al. (l) Horé et al.
Figure 8: (a) WDR image. (b) Tone-mapped image with halos highlighted. (c) Tone-mapped image with halos highlighted (grayscale)
Figure 9: Plot showing the weight function w(p, q) for x(p) =
Figure 10: (a) WDR image. (b) Tone-mapped image using Horé et al.'s algorithm. (c) Tone-mapped image using Horé et al.'s algorithm with a halo reducing filter. Image source:
Figure 11: Block diagram of the exponent-based tone mapping algorithm of Horé et al.
Figure 12: (a) Flowchart to compute the mean value. (b) Module to compute the maximum intensity value
Figure 13: (a) Sliding window architecture implementation. (b) Pixels light gray in color are the ones stored in the line buffer; the dark gray pixels are part of the 5×5 processing window
Figure 14: Implementation of a Gaussian filter convolution using adders and shifters only
Figure 15: Block diagram for k_1 parameter estimation
Figure 16: Block diagram for the exponential operation

Figure 17: Block diagram for computing the inverse exponential function
Figure 18: Taylor series approximation of e^(−f·ln 2)
Figure 19: Conceptual data flow diagram for the halo reduction filter
Figure 20: Architecture for computing the weight value
Figure 21: Dataflow diagram for parallel multiplication in the convolution term x∗h
Figure 22: Block diagram for the tone mapping algorithm of Horé et al. with a halo reducing filter
Figure 23: Test images used for the quality assessment of our hardware implementation of the tone-mapping algorithm with the halo reducing filter (see Table 4 for PSNR and SSIM values)
Figure 24: (a) WDR image (b) Tone mapped image (Verilog implementation, without the halo reducing filter) (c) Tone mapped image (MATLAB implementation with the halo reducing filter) (d) Tone mapped image (Verilog implementation with the halo reducing filter)
Figure 25: (a) WDR image (b) Tone mapped image (Verilog implementation, without the halo reducing filter) (c) Tone mapped image (MATLAB implementation with the halo reducing filter) (d) Tone mapped image (Verilog implementation with the halo reducing filter)
Figure 26: (a) WDR image (b) Tone mapped image (MATLAB implementation, without the halo reducing filter) (c) Tone mapped image (MATLAB implementation with the halo reducing filter) (d) Tone mapped image (Verilog implementation with the halo reducing filter)
Figure 27: (a) Tone-mapped image generated by the floating-point algorithm of Ofili et al. (b) Tone-mapped image using our tone mapping implementation using a halo reducing filter

List of Symbols, Abbreviations and Nomenclature

ALU: Arithmetic logic unit
ASIC: Application-specific integrated circuit
cd/m²: Candela per square meter
CPU: Central processing unit
CRT: Cathode ray tube
DSP: Digital signal processor
FIFO: First in, first out
FPGA: Field-programmable gate array
FPS: Frames per second
GPU: Graphics processing unit
HDL: Hardware description language
HDR: High dynamic range
LCD: Liquid crystal display
LDR: Low dynamic range
LE: Logic element
LUT: Lookup table
NRE: Non-recurring engineering
PSNR: Peak signal-to-noise ratio
RMSE: Root mean square error
SSIM: Structural similarity index
TMQI: Tone mapping quality index
VCD: Value change dump
WDR: Wide dynamic range

1. INTRODUCTION

The variation in luminance intensity in some natural environments is huge. For example, a scene captured under sunlight may have a luminance intensity 100 million times higher than the same scene captured under starlight. The human eye can instantaneously adapt across 3 to 5 orders of magnitude (log units) of luminance variation by means of local adaptation [1], [2]. Through the coordinated action of the pupil, the duplex retina of rods and cones, photopigment bleaching and neural gain controls, our eyes can, in a matter of a few minutes, adapt themselves to a luminance intensity variation of over 14 log units [3], [4]. Fig. 1 shows the range of luminance intensity that we encounter in nature [2].

Figure 1: Range of luminance intensity in the natural environment (adapted from [2]).

The dynamic range of a scene is described as the ratio of the luminance of the brightest object in a scene to that of the darkest object in the same scene. Wide dynamic range (WDR) images, which are also known as high dynamic range (HDR) images, are images that can exhibit large dynamic range variation; such images can be generated using dedicated software. The software synthesizes a WDR image of a scene from multiple low dynamic range (LDR) images of the same scene captured with different exposure times [5]. The advancements made in imaging technologies

have made it possible to directly capture wide dynamic range images. Commercial wide dynamic range sensors are available from CMOSIS (CMV12000) [6], ON Semiconductor (AR0231AT) [7], and Photonfocus (HD1-D G2) [8], to name a few. Novel techniques used to increase the dynamic range of image sensors on-the-fly are discussed in [9]–[11] and are an ongoing research subject in our lab [12], [13]. It is also possible to combine several image captures to obtain a WDR image [14], [15]. WDR images find diverse applications in digital cameras [16], medical imaging [17], automotive applications [18], [19] and night vision [20]. As indicated in Fig. 1, regular display devices can only reproduce luminance in a range of 2-3 orders of magnitude (e.g., a typical computer monitor: 50 cd/m² (CRT) to 150 cd/m² (high-quality LCD)). These regular display devices use 8 bits per pixel for each of the red, green and blue (RGB) channels; thus each channel can only represent 2^8 = 256 different shades of red, green and blue, respectively. However, these 2-3 orders of magnitude are not enough to faithfully represent the wide range of illumination in our natural environment. In order to display WDR images on a standard display device, we need to compress the dynamic range in such a way that there is ideally no loss of image detail. The technique used to match the dynamic range of captured images to the dynamic range of a standard display device is known as tone mapping or WDR rendering. Tone mapping methods can be classified into two groups: global tone mapping algorithms (also known as tone reproduction curves) and local tone mapping algorithms (also known as tone reproduction operators). Global tone mapping algorithms are spatially invariant, that is, they apply the same function to all pixels in the input image [21]–[25]. This results in one and only one output value for a particular input pixel value, irrespective of the pixels in its neighborhood.
Local tone mapping algorithms are spatially variant and apply different functions

based on the neighborhood of the input pixel [5], [26]–[31]. Thus, one input pixel could result in multiple output values based on its position. In general, global tone mapping algorithms are known to be fast and easier to implement than tone reproduction operators [32], whereas local tone mapping algorithms are computationally more expensive and time consuming. Local tone mapping methods generally produce better quality images as they preserve details, which may not be the case with global tone mapping methods [33]. However, one of the major drawbacks of local tone mapping algorithms is the creation of halo artifacts around high-contrast edges and the graying out of low-contrast areas [34], [35]. Fig. 2 (a) shows an example of a WDR scene captured under varying light (WDR image source [36]). It has areas under shade and areas under sunlight. When the image is displayed on a standard display device, it appears underexposed with very little visible detail. In Fig. 2 (b) we show a global tone mapped image, which reveals more information; for illustration, we have used Drago's algorithm given in [22]. When compared to a local tone mapped image (given in Fig. 2 (c) using Fattal's algorithm [5]), the global tone mapped image lacks the good contrast that makes Fig. 2 (c) more appealing. Fig. 2 (d) presents a hybrid tone mapped image produced using an exponent-based tone mapping algorithm [37], which is described in the following chapter. This algorithm makes use of both global and local information to produce images with more detail and contrast.

Various technologies are available for implementing image/video processing algorithms. The main concerns of design implementation are cost, speed and power, and the design methodology adopted for any hardware implementation depends on the application and the time to market. Tone mapping algorithms can be implemented in hardware or software. Many

algorithms have been successfully presented on software platforms [5], [22], [38]–[40]. Real-time performance with good tone-mapped image quality, however, requires a hardware implementation of the tone-mapping algorithm. A hardware-based implementation can be realized using one of these platforms: Application-Specific Integrated Circuits (ASIC), Field-Programmable Gate Arrays (FPGA), and Graphics Processing Units (GPU).

Figure 2: (a) Example WDR image. (b) Global tone mapped image using the algorithm of Drago et al. [22]. (c) Local tone mapped image using the algorithm of Fattal et al. [5]. (d) Hybrid tone mapped image using the algorithm of Horé et al. [37].

Hassan et al. reported an FPGA-based architecture for local tone-mapping of grayscale HDR images [41], and an extended algorithm for dealing with color images [42]. The described system operates at 60 frames per second. One of the important drawbacks of their design is a large memory requirement (about 3M bits of on-chip memory), which significantly increases the hardware cost. Chiu et al. presented a tone-mapping processor implementation on

an ASIC-based ARM core SoC (system-on-a-chip) [43]. The processor implements two tone mapping algorithms: the modified global photographic tone mapping algorithm of Reinhard et al. [40] and the block-based gradient compression method proposed by Fattal et al. [5]. The processor delivers images at 60 frames per second and runs at a 100 MHz clock. This approach provides a significant improvement in speed and area in comparison to the FPGA implementation of Hassan et al. A disadvantage of an ASIC implementation is the complexity and design time, and the inability to change the ASIC design afterwards. Ureña et al. reported two real-time architectures, targeting a GPU and an FPGA, for tone mapping [44]. Here, tone mapping and contrast enhancement were done using histogram adaptation of the brightness channel, without requiring logarithmic compression of the dynamic range or other time-consuming computations. A disadvantage of GPU implementations of tone-mapping algorithms is the considerable time required (about 10 ms) to transfer each frame from CPU memory to GPU memory. However, we cannot ignore the lower development time required for a GPU-based solution when compared to an FPGA implementation [45]. From our lab, Ofili et al. reported a real-time architecture for an exponent-based tone mapping algorithm, which used a Cyclone III FPGA as its hardware platform. The implementation could process images at 126 frames per second [46]. However, the algorithm is prone to halos [47]. Lapray et al. presented a complete tone mapping system which includes an LDR camera for image capture, a Reinhard et al.-like tone-mapping algorithm [40] and a display controller [48]. The system was implemented on a Xilinx Virtex-5 platform and processes video at 60 frames per second. In this thesis, we present an FPGA-based implementation of the automatic exponent-based tone mapping algorithm of Horé et al. [37] that was developed in our lab. The algorithm

utilizes both local and global image information to tone map WDR images. The algorithm is embedded with an automatic rendering parameter, which makes use of statistical analysis of WDR images to control the brightness of the tone mapped images. We also present a halo reducing filter which aims to reduce the appearance of halos in the tone-mapped images. For our FPGA-based hardware implementation, we have chosen the Altera Cyclone III low-cost FPGA. The architecture was implemented in Verilog HDL and, for verification purposes, software models were built in MATLAB.

The organization of the thesis is as follows: in Chapter 2 we present in detail the exponent-based tone mapping algorithm of Horé et al., discuss the automatic parameter estimation technique and also present the theoretical concept of the halo reducing filter. In Chapter 3, we focus on the hardware implementation of the tone mapping algorithm and all other modules presented in Chapter 2. In Chapter 4, we discuss the experimental results, which include the FPGA resource utilization of our hardware implementation and the assessment of the quality of the resulting tone-mapped images. Our assessment also includes comparisons with other similar tone mapping operator implementations. Concluding remarks and future work are presented in Chapter 5.
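Since the hardware output is verified against MATLAB software models using PSNR and SSIM, it is worth recalling how such a comparison is computed. The routine below is a generic Python sketch for reference, not the thesis's verification code; the peak value of 255 assumes an 8-bit output range.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference frame and a
    device-under-test frame; `peak` assumes an 8-bit output range."""
    mse = np.mean((np.asarray(ref, dtype=float) - np.asarray(test, dtype=float)) ** 2)
    return float('inf') if mse == 0.0 else float(10.0 * np.log10(peak ** 2 / mse))
```

In a typical flow, each fixed-point hardware output frame would be compared against the floating-point reference frame by frame; SSIM would be computed analogously, for example with an image-processing library.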

2. THE EXPONENT-BASED TONE MAPPING ALGORITHM OF HORÉ ET AL.

2.1 Introduction to the tone mapping algorithm of Horé et al.

The tone mapping algorithm developed in our group by Horé et al. [37] operates on the luminance channel (L_in) of the input WDR image, which can be obtained from a color WDR image using the following equation [49]:

L_in = 0.299·R_in + 0.587·G_in + 0.114·B_in    (1)

In the above equation, R_in, G_in and B_in represent the three color components of the input WDR image. The exponent-based tone mapping algorithm of Horé et al. performs both global and local compression on the input luminance intensity pixel. It makes use of the inverse exponential as a nonlinear mapping function to adapt the input WDR image to the dynamic range of LDR displays [33], [50]. The algorithm is defined by Eq. (2):

y(p) = x_max · (1 − e^(−x(p)/x_0(p))) / (1 − e^(−x_max/x_0(p)))
x_0(p) = k_1·μ_x + (x∗h)(p)/2    (2)

where x is the original WDR image, y is the final LDR image, p is a pixel, and x_max is the maximum value for the display device (for example, for an 8-bit display device, x_max = 2^8 − 1 = 255). x_0 is the adaptation factor, and it is computed as the sum of a global component k_1·μ_x and a local component based on x∗h. In the global component, k_1 is a parameter that plays a major role in setting the brightness of the tone-mapped image: when k_1 increases, the tone mapped image becomes darker, while it becomes brighter when k_1 decreases, as can be seen in Fig. 3 (WDR image source [36]). In Fig. 4 we have plotted the variation of the mean intensity value with respect to the parameter k_1.

μ_x is the average intensity of the WDR image x. Regarding the local component that is used to extract the local information, ∗ denotes the convolution operation and h is a low-pass filter (for example, a 2D Gaussian filter).

Figure 3: Impact of parameter k_1 on tone mapped images. (a) k_1=0.05 (b) k_1=0.15 (c) k_1=0.25 (d) k_1=0.35 (e) k_1=0.45 (f) k_1=0.55 (g) k_1=0.65 (h) k_1=0.75 (i) k_1=0.85 (j) k_1=0.95.

The parameter k_1 is used to modulate the amount of brightness in the tone mapped images. Fig. 3 illustrates how increasing the rendering parameter k_1 causes the tone-mapped image to grow darker, and how the image turns brighter when the rendering parameter is decreased. The parameter k_1 is adjusted between 0 and 1 depending upon the image key.
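For intuition, the mapping of Eq. (2) can be sketched in software. This is our illustrative Python model, not the thesis's Verilog implementation: the 5×5 binomial kernel stands in for the Gaussian h, the input image is synthetic, and the final clip to the display range is our addition.

```python
import numpy as np

def conv2_same(x, h):
    """'Same'-size 2D convolution with edge padding (software stand-in for x*h)."""
    kh, kw = h.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode='edge')
    out = np.zeros(x.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += h[i, j] * xp[i:i + x.shape[0], j:j + x.shape[1]]
    return out

def tone_map(x, k1, h, x_max=255.0):
    """Exponent-based mapping of Eq. (2); the clip to [0, x_max] is our addition."""
    x0 = k1 * x.mean() + conv2_same(x, h) / 2.0          # adaptation factor
    y = x_max * (1.0 - np.exp(-x / x0)) / (1.0 - np.exp(-x_max / x0))
    return np.clip(y, 0.0, x_max)

# 5x5 binomial kernel as a shift-and-add friendly Gaussian approximation
# (an assumption here, not necessarily the thesis's exact kernel)
b = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
G = np.outer(b, b) / 256.0

rng = np.random.default_rng(0)
wdr = np.exp(rng.uniform(0.0, 9.0, (32, 32)))            # synthetic WDR luminance
ldr = tone_map(wdr, k1=0.5, h=G)
```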

Figure 4: Plot of mean intensity versus parameter k_1 of tone mapped images.
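The darkening trend shown in Fig. 4 can be reproduced numerically. The sketch below deliberately simplifies the operator (both simplifications are ours: the display-range normalization term is dropped, and the local term is replaced by the pixel itself): increasing k_1 increases the adaptation factor x_0, which strictly decreases the output intensity.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.exp(rng.uniform(0.0, 9.0, (64, 64)))     # synthetic WDR luminance
x_max = 255.0

means = []
for k1 in (0.05, 0.25, 0.45, 0.65, 0.85):
    x0 = k1 * x.mean() + x / 2.0                # local term simplified to x(p) itself
    y = x_max * (1.0 - np.exp(-x / x0))         # non-normalized variant of the mapping
    means.append(float(y.mean()))

# larger k_1 -> larger adaptation factor -> darker tone-mapped image
assert all(a > b for a, b in zip(means, means[1:]))
```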

2.2 Estimation of the automatic rendering parameter k_1

From a quick survey of the available literature, we can find many tone mapping algorithms which employ local, global or combined local and global tone mapping schemes to make WDR images suitable for display on LDR monitors. However, not all of these tone mapping algorithms are suitable for hardware implementation, as they require some manual tuning of their rendering parameter [31], [40], [51]. Some tone mapping algorithms, like those described in [5], [39], [52], made use of a constant rendering parameter in order to avoid this manual adjustment. However, a constant rendering parameter may not result in optimal tone mapped images over an assortment of input WDR images, because the rendering parameter may need tuning based on the input image statistics. For a hardware implementation, an automatic rendering parameter that can handle a variety of images is therefore an attractive feature. In this thesis, we present the hardware implementation of an automatic computation of the k_1 rendering parameter. The parameter k_1 is determined from a statistical experiment which is explained in detail in [31]. In principle, the exponent-based tone mapping algorithm of Horé et al. derives the mean intensity of a coarsely tone mapped image: a simple tone mapping is performed on the WDR input, and the mean intensity value (λ) of the result is used to determine the parameter k_1. From the experiment of Horé et al., it is learned that the parameter k_1 can be approximated using a one-dimensional relationship with the mean intensity value of the tone mapped image generated by the simple tone mapping defined by Eq. (3).

y_i(p) = x_max · (1 − e^(−x(p)/x_0^i(p)))
x_0^i(p) = 0.5·μ_x + (x∗h)(p)/2    (3)

Here, y_i(p) is the tone mapped image used for determining the rendering parameter k_1, x_max is the maximum pixel intensity of the display device, and x_0^i(p) is an adaptation factor. For standard display devices, x_max = 255 (8-bit). For the filter h, we have made use of a 5×5 Gaussian-like filter G (Eq. (4)). The advantage of using such a filter is that, when implemented in hardware, it requires only shifters and adders. Based on their experiments, Horé et al. arrived at the cubic equation Eq. (5), which derives the parameter k_1 from λ (the mean intensity value of the tone mapped image given by Eq. (3)):

k_1 = a_3·λ³ + a_2·λ² + a_1·λ + a_0    (5)

where a_3, a_2, a_1 and a_0 are constants obtained from the fit. However, the hardware implementation of the above equation is a challenge. In the hardware implementation chapter (Chapter 3), we present an exponent-based equation which simplifies the hardware implementation while approximating the parameter k_1 very accurately.
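The estimation pipeline (coarse tone map, mean intensity λ, cubic fit) can be sketched as follows. This is an illustrative Python model: the binomial kernel is our stand-in for the filter G of Eq. (4), and the cubic coefficients are placeholders, the fitted constants of Horé et al. are not reproduced here.

```python
import numpy as np

# shift-and-add friendly 5x5 kernel used as a stand-in for G of Eq. (4)
# (a binomial approximation; an assumption, not the thesis's exact coefficients)
b = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
G = np.outer(b, b) / 256.0

def conv2_same(x, h):
    """'Same'-size 2D convolution with edge padding."""
    kh, kw = h.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode='edge')
    out = np.zeros(x.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += h[i, j] * xp[i:i + x.shape[0], j:j + x.shape[1]]
    return out

def estimate_k1(x, coeffs, x_max=255.0):
    """Coarse tone map (Eq. (3)) -> mean intensity lambda -> cubic of Eq. (5).
    `coeffs` = (a3, a2, a1, a0) should be the fitted constants of Hore et al.;
    any values passed here are placeholders."""
    x0 = 0.5 * x.mean() + conv2_same(x, G) / 2.0     # coarse adaptation factor
    lam = float((x_max * (1.0 - np.exp(-x / x0))).mean())
    a3, a2, a1, a0 = coeffs
    return a3 * lam**3 + a2 * lam**2 + a1 * lam + a0, lam

rng = np.random.default_rng(0)
img = np.exp(rng.uniform(0.0, 9.0, (32, 32)))        # synthetic WDR luminance
k1, lam = estimate_k1(img, coeffs=(0.0, 0.0, 0.0, 0.5))  # placeholder coefficients
```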

2.3 Objective quality assessment of Horé et al.'s algorithm

The tone mapping equation (Eq. (2)) operates on the luminance channel and produces a monochrome (single band) tone-mapped image. To restore the color in the final tone-mapped image, we make use of Eq. (6), obtained from [53]:

C_out = (C_in / L_in)^γ · L_out    (6)

In Eq. (6), C_in represents the original full color (RGB) WDR image and L_in is the luminance component of the input WDR image, for example given by Eq. (1). L_out represents the output luminance value of the tone-mapped image and C_out the three output color components of the tone mapped LDR image. The exponent γ is a color saturation factor for displaying color images, and its value is chosen between 0.4 and 0.6 [5].

To illustrate the performance of Horé et al.'s tone mapping algorithm, we compare a few test images with tone-mapped images generated by other popular tone mapping algorithms of Ashikhmin [54], Drago et al. [22], Durand et al. [55], Fattal et al. [5], Ferwerda et al. [4], Kuang et al. [56], Mantiuk et al. [39], Pattanaik et al. [57], Reinhard et al. [40], Tumblin et al. [25] and Ward et al. [23]. Implementations of these tone mapping algorithms were available through the HDR Toolbox [58] and Luminance HDR (version 2.4.0) [59]. For objective quality assessment, we have used the tone mapping quality index proposed by Yeganeh et al. [60]. From Figs. 5, 6 and 7 (WDR image source [36]) we can make the following observations: the operators of Ashikhmin, Fattal and Ferwerda produce darker images. The algorithm of Drago et al. produces blurred images, while the images produced using Mantiuk et al. and Reinhard et al. exhibit faded colors for some images. Durand et al.'s operator causes some loss of detail, which is prominent in images that contain objects with whitish colors.
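The luminance extraction of Eq. (1) and the color restoration of Eq. (6) are straightforward to model; the sketch below is our Python illustration (the epsilon guard against division by zero is our addition, not part of the equations).

```python
import numpy as np

def luminance(C):
    """Eq. (1): luminance from an RGB image of shape (H, W, 3)."""
    return 0.299 * C[..., 0] + 0.587 * C[..., 1] + 0.114 * C[..., 2]

def restore_color(C_in, L_in, L_out, gamma=0.5):
    """Eq. (6): C_out = (C_in / L_in)^gamma * L_out, applied per RGB channel.
    gamma in [0.4, 0.6] controls color saturation; the eps guard is ours."""
    eps = 1e-8
    ratio = C_in / (L_in[..., None] + eps)
    return np.power(ratio, gamma) * L_out[..., None]

# sanity check: an achromatic input maps to the output luminance in every channel
gray = np.full((4, 4, 3), 100.0)                       # R = G = B = 100
out = restore_color(gray, luminance(gray), np.full((4, 4), 42.0))
```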

Pattanaik et al.'s operator produces visible artifacts. Tumblin et al.'s operator manages to produce good quality images; however, it lacks consistency, as can be seen with the image shown in Fig. 7 (j), where it fails to match the displayable range. Ward et al.'s operator introduces visible artifacts like the one shown in Fig. 7 (k). The images produced by the algorithm of Horé et al. exhibit a good balance of brightness and contrast for the images tested. This good quality is also reflected in the TMQI scores presented in Table 1.

For an objective comparison of the different tone mapping algorithms, the tone mapping quality index proposed by Yeganeh et al. [60] is used, which assesses the effectiveness of the different tone mapping operators. The objective assessment algorithm produces three image quality scores that are used in evaluating the quality of a tone-mapped image: the structural similarity (S) between the tone-mapped image and the original WDR image, the naturalness (N) of the tone-mapped image, and the overall image quality measure, which the authors call the tone mapping quality index (TMQI). TMQI is computed as a non-linear combination of both the structural similarity score (S) and the naturalness score (N). It generally ranges from 0 to 1, where 1 is the highest in terms of image quality. The TMQI scores for the different tone mapping operators and for the images used in the tests are shown in Table 1. As can be seen, the TMQI scores of Horé et al.'s algorithm are high for the images tested, which confirms the good performance of that tone mapping operator. Among the other tone mapping operators, Durand et al.'s operator [55] and Mantiuk et al.'s operator [39] also deliver consistently high TMQI scores for WDR images captured under varying light conditions. However, for certain types of images, the algorithm of Horé et al. is known to introduce halo artifacts [47]. In the same reference article, Horé et al. proposed a new filter which can reduce

the halos. Fig. 8 (WDR image source [61]) shows a test image in which we can clearly note the appearance of halos. In Fig. 8, a grayscale tone-mapped image is included to clearly highlight the appearance of the halos.

Table 1: Objective quality assessment of tone mapping operators.

Tone mapping algorithm    Image 1    Image 2    Image 3
Ashikhmin [54]
Drago [22]
Durand [55]
Fattal [5]
Ferwerda [4]
Kuang [56]
Mantiuk [39]
Pattanaik [57]
Reinhard [40]
Tumblin [25]
Ward [23]
Horé [37]

Figure 5: Image 1 tone-mapped using (a) Ashikhmin (b) Drago et al. (c) Durand et al. (d) Fattal et al. (e) Ferwerda et al. (f) Kuang et al. (g) Mantiuk et al. (h) Pattanaik et al. (i) Reinhard et al. (j) Tumblin et al. (k) Ward et al. (l) Horé et al.

Figure 6: Image 2 tone-mapped using (a) Ashikhmin (b) Drago et al. (c) Durand et al. (d) Fattal et al. (e) Ferwerda et al. (f) Kuang et al. (g) Mantiuk et al. (h) Pattanaik et al. (i) Reinhard et al. (j) Tumblin et al. (k) Ward et al. (l) Horé et al.

Figure 7: Image 3 tone-mapped using (a) Ashikhmin (b) Drago et al. (c) Durand et al. (d) Fattal et al. (e) Ferwerda et al. (f) Kuang et al. (g) Mantiuk et al. (h) Pattanaik et al. (i) Reinhard et al. (j) Tumblin et al. (k) Ward et al. (l) Horé et al.

Figure 8: (a) WDR image. (b) Tone-mapped image with halos highlighted. (c) Tone-mapped image with halos highlighted (grayscale).

2.4 A halo reducing filter

2.4.1 Some known methods to reduce halo artifacts

Encountering halo artifacts when rendering WDR images is a well-known problem [35], [38], [54], [55], [62]. The appearance of halos in images that contain objects with very different illuminations, like the one in Fig. 8 (b), is due to the local filtering. In Fig. 8 (a), while processing, the dim areas around the lamp are influenced by the very bright pixels representing the lamp, causing a black halo around the lamp. As halo artifacts are a well-known problem, several solutions have been proposed to reduce or avoid them. In [26], Tumblin and Turk proposed a method called Low Curvature Image Simplifiers (LCIS) that increases the local contrast while avoiding halo artifacts. This method is based on anisotropic diffusion, which enhances boundaries while smoothing non-significant intensity variations. Anisotropic diffusion in image processing is a technique aimed at reducing image noise without removing significant parts of the image content, typically edges, lines or other details that are important for the interpretation of the image. Following this method of Tumblin and Turk, Durand and Dorsey rendered WDR images using bilateral filtering, which is a fast alternative to anisotropic diffusion [55]. The bilateral filter is a non-linear filter derived from the Gaussian filter, which prevents blurring across edges by decreasing the weight of pixels when the intensity difference is very high [55]. Other researchers have proposed novel techniques to reduce or solve the problem of halo artifacts in their tone mapping operators; some of these techniques are listed next. Fattal et al. compress the WDR image using a gradient attenuation function defined by a multiresolution edge detection scheme [5]. An LDR image is obtained by solving a Poisson equation on the

modified gradient field. This method generates good quality images, but it requires parameter tuning for every WDR image, which makes it less attractive for real-life applications. Reinhard et al. reported a local method based on the photographic dodging and burning technique [40]. They used a circular filter whose size is adapted for each pixel by computing a measure of local contrast. Ashikhmin proposed a method similar to that of Reinhard et al. [40], in which a measure of the neighborhood luminance is computed for each pixel [54]. This measure is then used for defining the tone mapping function. Both methods of Ashikhmin and Reinhard et al. provide an efficient way of compressing the dynamic range while reducing halo artifacts. However, their performance seems to depend upon the neighborhood sizes, which in consequence impacts the buffer size. DiCarlo and Wandell proposed a Gaussian-based operator that includes a second weight which depends on the intensity difference between the current pixel and its spatial neighbors [63]. This technique preserves the sharpness of large transitions.

2.4.2 Reducing halo artifacts in Horé et al.'s algorithm

The tone mapping operator of Horé et al. is reproduced below:

y(p) = x_max · (1 − e^(−x(p)/x_0(p))) / (1 − e^(−x_max/x_0(p)))
x_0(p) = k_1·μ_x + (x∗h)(p)/2    (7)
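The halo mechanism can be illustrated numerically. In the toy computation below (the constants are ours, and for clarity we use the non-normalized exponent mapping, i.e. without the denominator of Eq. (7)), a dim pixel is rendered progressively darker as its neighborhood average (x∗h)(p) grows:

```python
import numpy as np

x_max, k1, mu_x = 255.0, 0.5, 1000.0   # toy constants, ours
xp = 200.0                             # a dim pixel next to a bright lamp

ys = []
for xh in (200.0, 1e4, 1e6):           # (x*h)(p): similar, large, huge neighborhood
    x0 = k1 * mu_x + xh / 2.0          # adaptation factor as in Eq. (7)
    ys.append(x_max * (1.0 - np.exp(-xp / x0)))

# the brighter the neighborhood, the darker the rendered pixel: a black halo
assert ys[0] > ys[1] > ys[2] and ys[2] < 1.0
```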

smoothing images, the values of pixels in uniform areas close to edge pixels are altered by the quite different intensities of the edge pixels, and vice versa. Mathematically, a black halo will appear if y(p) → 0 (→ means "tends towards") for a pixel p having an intensity value x(p) quite different from 0, which can be written as:

1 − e^{−x(p)/x_0(p)} → 0    (8)

Rewriting Eq. (8) yields:

1 − e^{−x(p)/x_0(p)} → 0  ⟺  x(p)/x_0(p) → 0  ⟺  x(p) → 0 or x(p) ≪ x_0(p)    (9)

where ≪ means "much smaller than". The condition x(p) → 0 is not valid, since there are no black halos if the input intensity is already close to 0. Thus, only the second condition holds for getting black halos, i.e. x(p) ≪ x_0(p). By rewriting x_0(p), the condition for black halos becomes:

x(p) ≪ k_1·μ_x + (x ∗ h)(p)/2    (10)

From Eq. (10), we easily understand that when the convolution term (x ∗ h)(p) is too big in comparison to x(p), we will encounter black halos. This will happen, for example, when the pixel p is mostly surrounded by pixels that have an intensity value far higher than x(p). Consequently, to minimize the appearance of black halos, Horé et al. have proposed another filter to replace the Gaussian filter h. That filter is in fact the Gaussian filter multiplied by a weighting function w which accounts for how far the intensity values of neighbor pixels are from that of the central pixel. In fact, the function w acts as a classifier which, when applied to a pixel p, gives more weight to neighboring pixels having intensity values close to that of p. Thus, if we denote by g_σ the 2D Gaussian function of standard deviation σ, then for a pixel p and a neighbor pixel q, the new filter h is defined by:

h(p, q) = g_σ(p, q) · w(p, q)    (11)

For defining w, Horé et al. proposed to make use of the relative difference in intensities between pixel p and its neighboring pixel q in a 5 × 5 neighborhood, that is, to measure to which extent x(q) is far from x(p):

w(p, q) = 1 / ⌈ |x(p) − x(q)| / x(p) ⌉    (12)

Note that ⌈·⌉ denotes the upper integer part (ceiling). By using the upper integer part in w(p, q), it turns out that w(p, q) is not an injective function, which means that intensities belonging to some intervals will have the same weighted value [47]. In fact, as we can see from Fig. 9, w(p, q) has the shape of a staircase function. This is done on purpose: we want classes of intensity values x(q) that have the same weighted value expressing their coarse relative distance to x(p). Besides, using a staircase function can be advantageous for implementation on limited-resources devices, such as smart phones and digital cameras, since look-up tables can be used for programming the function. We note that in practice, to avoid a null denominator in Eq. (12), Horé et al. suggest adding a small value (for example 10^{−4}) to the numerator and the denominator. The advantage of Horé et al.'s new filter compared to a filter like the bilateral filter [55], [64] is that we do not have two Gaussian functions to manage, but just one, which is more practical for simulations. Moreover, for implementation on low-power devices such as smart phones and wireless cameras, having only one Gaussian function instead of two is advantageous, since the exponential function is power consuming. In Fig. 10 (a) we show a test WDR image; the tone-mapped image using Horé et al.'s algorithm [37] is presented in Fig. 10 (b), where we can clearly see a halo artifact appearing around the lamp. In Fig. 10 (c), we show the tone-mapped image using the halo reducing filter of Horé et al. [47], which has successfully reduced the halos around the lamp.
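To make the staircase behavior of Eqs. (11) and (12) concrete, the weight and the combined filter coefficient can be sketched in a few lines of Python. This is a floating-point illustration only (the thesis implementation is fixed-point hardware); the Gaussian is left unnormalized, and eps plays the role of the small value 10^{−4} suggested by Horé et al.:

```python
import math

def weight(xp, xq, eps=1e-4):
    """Staircase weight of Eq. (12): 1 / ceil(|x(p) - x(q)| / x(p)).

    eps is added to numerator and denominator to avoid a null denominator."""
    return 1.0 / math.ceil((abs(xp - xq) + eps) / (xp + eps))

def gaussian(dp, dq, sigma=1.0):
    """Unnormalized 2D Gaussian g_sigma at offset (dp, dq) from the centre."""
    return math.exp(-(dp * dp + dq * dq) / (2.0 * sigma * sigma))

def halo_filter(xp, xq, dp, dq, sigma=1.0):
    """New filter of Eq. (11): Gaussian coefficient times the staircase weight."""
    return gaussian(dp, dq, sigma) * weight(xp, xq)
```

A neighbour with the same intensity as the centre gets the full weight 1, while a much brighter neighbour (for example 100 against a centre of 10) is attenuated to 1/9, which is exactly the mechanism that suppresses black halos around bright objects.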
In the next chapter, we will present the hardware implementation of the exponent-based tone mapping algorithm of Horé et al. and the halo reducing filter.

Figure 9: Plot showing the weight function w(p, q) for x(p) =

Figure 10: (a) WDR image. (b) Tone-mapped image using Horé et al.'s algorithm. (c) Tone-mapped image using Horé et al.'s algorithm with a halo reducing filter. Image source:

3. HARDWARE ARCHITECTURE OF THE TONE MAPPING ALGORITHM

3.1 Background

In the last chapter, we presented the theoretical aspects of the tone mapping algorithm and tested the software implementation (in MATLAB), thereby obtaining a proof of concept. We also made comparisons with some well-known tone mapping operators and performed qualitative tests. Now we will present the hardware implementation of the tone mapping algorithm of Horé et al. Achieving high performance with image processing algorithms is a challenging task: even though the mathematical operations can be described compactly, they have to be repeated many times over a large data set. This scenario is becoming more demanding with advances in imaging technology, as modern image sensors are able to capture much larger images at higher speed.

3.2 Hardware platforms for implementing image processing algorithms

Various technologies are available for implementing image/video processing algorithms [33]. Primarily, there have been three varieties of hardware platforms: digital signal processors (DSPs), application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs). Of these three, implementation of image processing algorithms on reconfigurable hardware like FPGAs has become very attractive to researchers compared to fixed-architecture DSPs. Compared to an ASIC implementation, an FPGA implementation requires a shorter design time and is economically more viable; ASIC development is expensive because of its high unit and non-recurring engineering (NRE) costs [65]. GPUs (graphics processing units) have also been a popular choice for implementing image and video processing algorithms. The rise in popularity of smartphones embedded with GPUs has further driven the development of algorithms for GPU implementation. There are several

studies which have compared the performance of different data-intensive applications on CPUs (central processing units), FPGAs and GPUs [66]–[69]. From these studies we can conclude that there is no clear winner. For example, in [69], FPGAs deliver higher performance than GPUs for image processing applications like k-means clustering, while GPUs are faster for 2D convolution. Chase et al. found that an FPGA implementation required more design effort than a GPU implementation for similar performance [67]. Thus, we can conclude that for image processing implementations, both FPGAs and GPUs are viable platforms; depending on the application, performance, power consumption, cost and development time, one of them will be more suitable. For our implementation, performance and power consumption are the most important criteria, and an FPGA-based implementation is therefore more suitable for us.

3.3 Register-transfer level (RTL) implementation of the tone mapping algorithm of Horé et al.

The software model of the exponent-based algorithm of Horé et al. was developed in MATLAB as a floating-point algorithm. The software model is inherently serial, as it was developed for a serial processor: the program is executed sequentially, where a sequence of arithmetic and logical operations is performed on the ALU (arithmetic logic unit), and the CPU is assigned the task of updating the data required by the ALU. Most image processing algorithms have an underlying parallelism which can be exploited to accelerate the algorithm. There are two kinds of parallelism in image processing applications: spatial and temporal. Image processing algorithms are usually conceived as a sequence of image processing operations. By assigning a separate processor to each operation, we can establish a stream or pipeline of interconnected processors.
Thus, the task is partitioned in time; this is known as temporal parallelism, where each processor simultaneously operates on different pixels.

Applying temporal parallelism leads to a pipelined computing structure. The pipeline is operated by feeding in one new pixel at every clock cycle, and it produces one output pixel at every clock cycle after the initial latency of the pipeline. Here, by latency we mean the total time taken by the modules (processors) in the pipeline to operate on an input pixel in order to produce an output pixel. A typical example of temporal parallelism is an assembly line in a factory. Spatial parallelism works in a different way: here, one particular processor is replicated several times. Each processor unit operates independently and simultaneously as there is no data dependency; from an image processing point of view, such parallelism comes in handy when performing neighborhood operations. In Section 3.3.8, we present a module based on spatial parallelism. An everyday example of spatial parallelism is the rows of checkout counters at a supermarket. For our hardware implementation we use Verilog HDL (hardware description language) and follow an RTL level of abstraction. Abstraction makes it possible for designers to design and build complex systems. RTL abstraction can be viewed as a set of register transfers with optional operators (arithmetic or logical) as part of the transfers. With RTL abstraction we are able to capture the behavior of the system in terms of a datapath and a control unit, which are two interacting parts of the system. The datapath performs data processing using structures like adders, comparators, decoders and multiplexers, thus performing a specified task. The datapath also interprets the control signals it receives from the control unit and generates status signals for the controller. The control unit is responsible for data movements in the datapath. It performs its tasks by providing switching signals, like enabling or disabling registers or selecting a multiplexer output.
Thus it determines the sequence of operations

performed by the datapath. In the following section, we will present the hardware architecture of the tone mapping algorithm of Horé et al. [37].

3.3.1 Top-level module

The block diagram of our hardware implementation is given in Fig. 11. It consists of five main modules: the sliding window and buffer, the Gaussian filter, the module that estimates k_1, the module that computes the mean and maximum intensity values, and the tone-map block. Each of these blocks will be explained later in this chapter. As we target a real-time implementation of our tone-mapping algorithm, we want to operate at least at 30 frames per second on a 1024 × 768 pixel resolution image, which puts a timing constraint on our design: the processing pipeline must operate no slower than 42 nanoseconds per pixel. When operating in real-time, there is very little variation between successive image frames. Hence, image statistics (like the mean and maximum) acquired from one frame can be used to process the subsequent frame. The k_1 parameter used in Eq. (7) is derived from a mean value [37]; hence, when operating in real-time, the k_1 parameter can be used to process a subsequent image. This saves computation time and memory, as it is not required to store the whole image to compute the mean and maximum intensity values. In our implementation, we have used a 32-bit fixed-point notation for representing the pixel intensity, where 20 bits represent the integer part and 12 bits represent the fraction part.

3.3.2 Mean and max module

Recalling from Eq. (7), to compute the final tone-mapped pixel intensity value we require image statistics, namely the mean intensity value μ_x and the maximum pixel intensity value x_max. The mean intensity value of the input image is calculated using the mean value module

illustrated in Fig. 12 (a). This simple design makes use of an accumulator and a divider implemented using shifters only. The maximum intensity value is computed by making use of a comparator followed by a register which is refreshed upon finding a higher intensity value. The block diagram is given in Fig. 12 (b).

Figure 11: Block diagram of the exponent-based tone mapping algorithm of Horé et al.

3.3.3 Sliding window and buffer

From the tone-mapping equation, see Eq. (7), for computing x_0(p) we know that the filter h is a 5 × 5 Gaussian filter. Thus, the kernel (of size K × R) operates on a 5 × 5 window of pixels obtained from an M × N input image. To extract a window of neighborhood pixels, we have implemented the sliding window module, which is illustrated in Fig. 13. We draw inspiration from the work of Benedetti et al. [70], who present a pipelined processor which outputs one pixel every clock cycle after the initial processor latency. This implementation requires storing only (K − 1)·N + R pixels to extract the K × R neighborhood pixels (for our

implementation, K = R = 5, M = 768, N = 1024) with respect to the center pixel. We use four (K − 1) line buffers (FIFOs) of depth equivalent to one row (N) of the input image and R registers to store R pixels of the fifth row, as shown in Fig. 13 (a). With the pixels shown under the dark gray area, we have our first window of pixels with which to perform the convolution with our filter h. In the next clock cycle, the window slides to its next position, as shown in Fig. 13 (b). We require a memory buffer to store the input pixels temporarily. These pixels are buffered for 13 clock periods (equivalent to the latency of the convolution with filter h) and then synchronized with the corresponding Gaussian-filtered pixel intensity values for computing the parameter k_1 and the final tone-mapped pixel values.

Figure 12: (a) Flowchart to compute the mean value. (b) Module to compute the maximum intensity value.

Figure 13: (a) Sliding window architecture implementation. (b) Pixels light gray in color are the ones stored in the line buffer; the dark gray pixels are part of the 5 × 5 processing window.
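The buffering scheme can be emulated in software to check the indexing arithmetic. The sketch below (plain Python, with a deliberately small row length n for the demonstration) streams pixels row-major through a buffer of (K − 1)·N + R entries and reassembles the K × R window, as the line-buffer module does. Windows that straddle a row boundary are not filtered out here, for brevity:

```python
from collections import deque

K, R = 5, 5      # kernel height and width
N = 8            # row length (1024 in the thesis; small here for the demo)

def windows(pixels, n=N, k=K, r=R):
    """Stream pixels row-major and yield each k x r window, buffering only
    (k - 1) * n + r pixels, as in the sliding-window module of Fig. 13."""
    buf = deque(maxlen=(k - 1) * n + r)
    for p in pixels:
        buf.append(p)
        if len(buf) == buf.maxlen:
            # Row i of the window starts n pixels after row i - 1.
            flat = list(buf)
            yield [flat[i * n:i * n + r] for i in range(k)]
```

Streaming an 8 × 8 image of pixel values 0..63 through this buffer, the first window appears once (K − 1)·N + R = 37 pixels have arrived, and its five rows are spaced exactly one image row apart.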

3.3.4 Implementation of x ∗ h

For our hardware implementation we are using the following Gaussian filter h:

h = (1/256) ×
[ 1   4   6   4   1
  4  16  24  16   4
  6  24  36  24   6
  4  16  24  16   4
  1   4   6   4   1 ]    (13)

The advantage of using the above filter is that it can be realized by performing only add and shift operations; no multiplications are required (multipliers are expensive in hardware). Our convolution implementation is shown in Fig. 14. The inputs to our implementation are a window of pixels (P1 to P25); these pixels are multiplied by their respective coefficients using shifters only. For example, pixel P13 is multiplied by the coefficient 36 (= 32 + 4). This can easily be realized by using two left-shift operators (left shift by 5 and left shift by 2), and the results from these two operators are added to obtain the product P13 × 36. Following a similar approach, all 25 pixels in the window are multiplied by their coefficients, summed up and divided by the sum of the convolution mask, which is 256. However, from Fig. 14 we can notice that we are dividing (using a right-shift operator) the final sum by 512 (2^9). Recalling from our tone mapping Eq. (7) that the global component of our adaptation factor is (x ∗ h)(p)/2, this division by 2 is brought forward and included in our convolution implementation.
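A quick way to sanity-check the shift-and-add arithmetic is to model it with Python integers (a behavioral sketch, not the Verilog design):

```python
# 5x5 binomial kernel (outer product of [1, 4, 6, 4, 1]): its sum is 256 and its
# centre coefficient is 36, and every coefficient is a sum of powers of two.
ROW = [1, 4, 6, 4, 1]
KERNEL = [[a * b for b in ROW] for a in ROW]

def mul36(x):
    # 36*x realized with two left shifts and one add: 32*x + 4*x
    return (x << 5) + (x << 2)

def convolve_centre(window):
    """Convolve one 5x5 integer window with the kernel, then right-shift by 9:
    /256 for the mask sum, plus the extra /2 folded in from (x*h)(p)/2."""
    acc = 0
    for i in range(5):
        for j in range(5):
            acc += KERNEL[i][j] * window[i][j]
    return acc >> 9
```

Feeding a uniform window of value 512 returns 256, i.e. half the input, confirming that the folded-in division by 2 behaves as intended.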

Figure 14: Implementation of a Gaussian filter convolution using adders and shifters only.

3.3.5 The automatic k_1 parameter estimation block

In Chapter 2, Section 2.2, we described how the k_1 parameter is used to adjust the brightness of tone-mapped images. For computing the parameter k_1, recall that the original equation provided in [37] is a function of λ:

k_1 = f(λ)    (14)

where λ represents the average intensity of a tone-mapped image computed from an original WDR image. For our hardware implementation, we approximate that equation by an exponent-based function of the form:

k_1 = m · 2^{n·λ}    (15)

where m and n are positive real numbers to be computed. The goal is to find these unknown values by minimizing the squared error between the original k_1 and the approximated k_1; in other words, a least-squares approach can be used for computing m and n, which can be written as:

min_{m,n} Σ_λ ( f(λ) − m · 2^{n·λ} )^2    (16)

In MATLAB, m and n have been easily obtained by varying them with a step of 0.001 over a range up to 0.1 and by measuring each time the error between k_1 given by Eq. (14) and k_1 given by Eq. (15). The optimal values of m and n were determined as those generating the minimal approximation error: m = 0.020 and n = 0.036. Consequently, the equation used in our implementation for computing k_1 is:

k_1 = 0.020 · 2^{0.036·λ}    (17)
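As a sketch, the fitted approximation of Eq. (17) and the integer/fraction split of the exponent that the hardware relies on (Eqs. (18)–(20)) can be checked against each other in Python; the constants 0.020 and 0.036 are the fitted values above:

```python
import math

M_COEF, N_COEF = 0.020, 0.036     # fitted by the least-squares search above

def k1(lam):
    """Approximated key parameter, Eq. (17): k1 = 0.020 * 2**(0.036 * lam)."""
    return M_COEF * 2.0 ** (N_COEF * lam)

def k1_split(lam):
    """Same value via the hardware-friendly split 2**(I+F) = 2**I * e**(F*ln 2),
    for lam >= 0; 2**I needs only shifts in hardware."""
    t = N_COEF * lam
    i, f = int(t), t - int(t)     # integer and fraction parts of n * lam
    return M_COEF * (2 ** i) * math.exp(f * math.log(2.0))
```

Both forms agree to floating-point precision, which is the property the shift-plus-exponential datapath depends on.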

To assess the quality and reliability of this approximation, the root mean squared error (RMSE) and the coefficient of determination (R-square) were used. The RMSE measures the difference between the model's predicted values and the experimentally-derived values [71]; a better fit is indicated by an RMSE value closer to 0. The R-square measures how well the curve fit explains the variations in the experimental data [71], [72]. It can range from 0 to 1, where a better fit is indicated by a score closer to 1. Based on the values m = 0.020 and n = 0.036, the obtained RMSE and R-square values indicate a good approximation. Eq. (15), which is of the form:

y = m · 2^{n·x}    (18)

can be rewritten as:

y = m · 2^{I+F}    (19)

where I and F are the integer and fraction parts of n·x. The hardware implementation of the integer part (2^I) multiplied by m only requires shift registers. As the fractional part (2^F) can be expressed as e^{F·ln(2)}, we can rewrite Eq. (19) as follows:

y = m · 2^I · e^{F·ln(2)}    (20)

For implementing the exponential function e^{F·ln(2)}, we make use of an iterative digit-by-digit algorithm [73]. A conceptual overview of our implementation is given in Fig. 15. As shown in the figure, the parameter k_1 is estimated from the mean intensity value λ. With the first logic block of shifters, we realize the multiplication by the coefficient n in Eq. (18) by noting that n = 0.036 can be expressed as a sum of powers of two. This implementation technique avoids the

need for hardware multipliers. To obtain the integer and fraction parts, we take x · log_2(e) = I + F. We implement log_2(e), like our coefficient n, as a sum of powers of two. For the hardware implementation of the exponential operator (Fig. 16), we draw inspiration from the previous works of [43] and [73]. Here, an assumption is made that x is limited to the range [0, ln(2)). To approximate y, we use a data set x_i and y_i whose initial values x_0 and y_0 are set to the argument x and 1, respectively. The pair x_i and y_i always satisfies Eq. (21):

y_i · e^{x_i} = e^x    (21)

The value of x_i is updated by subtracting the normalization constant ln(b_i), as shown below:

x_{i+1} = x_i − ln(b_i)    (22)

Here,

b_i = 1 + a_i · 2^{−i}    (23)

and a_i ∈ {0, 1}. The value of a_i is set to 1 if x_i ≥ ln(1 + 2^{−i}) and to 0 in all other cases. Our fixed-point tone-mapping implementation has a 12-bit fraction part, thus we pre-store the 12 values of ln(1 + 2^{−i}), with 0 ≤ i < 12, in a ROM-based lookup table for obtaining the values of ln(b_i) given in Eq. (22). y_{i+1} and x_{i+1} are computed iteratively using Eqs. (24) and (25), respectively.

Figure 15: Block diagram for k_1 parameter estimation.

y_{i+1} = y_i · b_i if a_i = 1, y_i otherwise    (24)

x_{i+1} = x_i − ln(b_i) if a_i = 1, x_i otherwise    (25)

Notice in Eq. (21) that when x_i = 0, then y_i = e^x. In our case, the final exponential value is e^{F·ln(2)}, where 0 ≤ F·ln(2) < ln(2). Setting x_0 = F·ln(2) and following the iteration method discussed above, we can compute the exponential part of k_1 (Eq. (15)).
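The iteration of Eqs. (21)–(25) can be sketched in Python to verify its accuracy against the library exponential. This is a floating-point behavioral model; the hardware performs the same steps in Q20.12 fixed point:

```python
import math

# Pre-stored ROM table: ln(1 + 2**-i) for 0 <= i < 12 (cf. Eq. (22)).
LN_TABLE = [math.log(1.0 + 2.0 ** -i) for i in range(12)]

def exp_digit_by_digit(x):
    """Approximate e**x for 0 <= x < ln(2) using Eqs. (21)-(25): the invariant
    y_i * e**(x_i) = e**x is maintained while x_i is driven towards 0."""
    xi, yi = x, 1.0
    for i in range(12):
        if xi >= LN_TABLE[i]:            # a_i = 1
            yi *= 1.0 + 2.0 ** -i        # Eq. (24): y_{i+1} = y_i * b_i
            xi -= LN_TABLE[i]            # Eq. (25): x_{i+1} = x_i - ln(b_i)
    return yi
```

With 12 table entries, the residual x_i after the loop is on the order of 2^{−11}, so the result matches e^x to roughly three decimal places over the whole input range, consistent with the 12-bit fraction of the fixed-point datapath.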

Figure 16: Block diagram for the exponential operation.

The exponent value obtained above is to be multiplied by the integer part 2^I and the coefficient m given in Eq. (20). As noted earlier, multiplication by 2^I can easily be realized with logical shift operators. The coefficient m = 0.020 can also be realized as a sum of powers of two. This implementation technique has definite advantages, as the right-shifters here have a fixed number of required shifts; on synthesis this infers no logic and involves only routing of the input signals. If both operands

of the shifter had been signals, as in a >> b, a complex barrel shifter would have been inferred on synthesis. After computing k_1, its value is stored for processing the next frame.

3.3.6 Inverse exponential function

Recalling the original tone-mapping Eq. (7), part of this equation takes the form of an inverse exponential, with the numerator 1 − e^{−x(p)/x_0(p)} and the denominator 1 − e^{−x_max/x_0(p)}. We have implemented the inverse exponential function by modifying the digit-by-digit algorithm as mentioned in [46]. The logic behind this implementation is illustrated in Fig. 17. For any arbitrary number x, we can take x · log_2(e) = I + F (here I and F are the integer and fractional parts of x · log_2(e)) and rearrange as follows:

x = (I + F) · ln(2)    (26)

Therefore, any arbitrary value y = e^{−x} can be represented as follows:

y = e^{−x} = e^{−(I+F)·ln(2)} = e^{−I·ln(2) − F·ln(2)} = 2^{−I} · e^{−F·ln(2)}    (27)

From the above equation, we can identify that 2^{−I} can be directly implemented with shifters only. Since F satisfies 0 < F < 1, F·ln(2) satisfies 0 < F·ln(2) < ln(2). We compute e^{−F·ln(2)} using the Taylor series approximation:

e^{−F·ln(2)} = 1 − F·ln(2)/1! + (F·ln(2))^2/2! − (F·ln(2))^3/3! + ⋯    (28)

As stated above, our tone mapping algorithm has the form of an inverse exponential function y = 1 − e^{−x}. For our hardware implementation of the tone mapping algorithm, we have chosen 12 bits for representing the fraction part; thus the quantization error of the system is ε = 2^{−12} (≈ 0.000244). We notice that e^{−9} ≈ 0.000123, which is less than the quantization error ε. Based on this, we can make the following assumption:

y = 1 − e^{−x} = f(x) = { A, if x ≤ 8; 1, if x > 8 }    (29)

where A = 1 − 2^{−I} · e^{−F·ln(2)}. The final equation for computing the inverse exponential function using the Taylor series approximation is given below:

y = f(x) = { 1 − [ 2^{−I} · (1 − F·ln(2)/1! + (F·ln(2))^2/2! − (F·ln(2))^3/3!) ], if x ≤ 8; 1, if x > 8 }    (30)

Figure 17: Block diagram for computing the inverse exponential function.

Fig. 18 shows the block diagram of our Taylor series implementation of e^{−F·ln(2)}. We note that ln(2) = 0.6931…, which can be approximated accurately up to a precision of 10^{−4} using a sum of powers of two. The implementation of this expression requires no resources and can be realized only by rearranging the signals (example: x >> a is equivalent to x[WIDTH−1 : a]). DSP multiplication elements were used to implement the multiplications required to compute the higher-order terms of the Taylor series; for example, multipliers were used in computing (FXin · ln(2))^2 and (FXin · ln(2))^3. The divisions by 2! and 3! are realized using the signal-routing logic described above: division by 2 can be realized by simply dropping the last bit, and by noticing that 1/6 can also be written as a sum of powers of two, division by 3! = 6 can be realized by rearranging the signals as suggested above.
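Eq. (30) can be sketched in floating-point Python to see how close the 4-term Taylor series and the saturation at x > 8 stay to the exact 1 − e^{−x} (a behavioral model; the hardware uses Q20.12 fixed point and shifts for 2^{−I}):

```python
import math

def inv_exp(x):
    """Approximate y = 1 - e**(-x) as in Eq. (30): split x*log2(e) into an
    integer part I (2**-I is a shift in hardware) and a fraction part F, use a
    4-term Taylor series for e**(-F*ln 2), and saturate to 1 for x > 8 since
    e**-9 is already below the 2**-12 quantization step."""
    if x > 8:
        return 1.0
    t = x * math.log2(math.e)
    i, f = int(t), t - int(t)
    u = f * math.log(2.0)
    taylor = 1.0 - u + u * u / 2.0 - u ** 3 / 6.0    # e**(-u), 4 terms
    return 1.0 - (2.0 ** -i) * taylor
```

The truncation error of the series is bounded by u^4/4! with u < ln(2), i.e. under 1%, and the 2^{−I} factor scales it down further for larger arguments.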

Figure 18: Taylor series approximation of e^{−F·ln(2)}.

3.3.7 Hardware implementation of the halo reducing filter of Horé et al.

We will now present the hardware implementation of the halo reducing filter. The mathematical model of this halo reducing filter h was already given in Eq. (11):

h(p, q) = g_σ(p, q) · w(p, q)    (31)

w(p, q) = 1 / ⌈ |x(p) − x(q)| / x(p) ⌉    (32)

From this equation, we can easily identify that the filter h is in fact the Gaussian filter multiplied by a weighting function. The conceptual block diagram for the halo reducing filter is shown in Fig. 19. We will now explain the implementation of the halo reducing filter in detail.

Figure 19: Conceptual data flow diagram for the halo reduction filter.

We notice from Eq. (32) that the weighting function computes the relative difference in intensities between the central pixel p and one of its neighbors q (in a 5 × 5 neighborhood). We will describe the implementation with the help of Fig. 20. We note that in [47], to avoid a null denominator appearing in Eq. (32), Horé et al. add a small value (10^{−4}) to the numerator and the denominator. For our implementation, we increment the numerator and denominator in Eq. (32) by ε (ε = 2^{−12}, the smallest fractional value with a 12-bit fraction). We compute the absolute difference in pixel intensities as |(I_{3,3} + ε) − I_{i,j}|; here, I_{3,3} is the intensity value of the

center pixel (pixel p in Eq. (32)) in the 5 × 5 input window and I_{i,j} are the intensity values of the neighborhood pixels (pixel q in Eq. (32)) in the input window. From Fig. 20, we can notice that a multiplexer is used to obtain the absolute value from the two subtractors that compute the differences. A divider is used to implement the division Q = |(I_{3,3} + ε) − I_{i,j}| / (I_{3,3} + ε). In Eq. (32), ⌈·⌉ is a ceiling function. To implement the ceiling function, the quotient Q obtained from the divider is incremented by 1; this increment may not be required in every case, for example when the denominator (I_{3,3} + ε) divides the numerator |(I_{3,3} + ε) − I_{i,j}| exactly. However, if the numerator |(I_{3,3} + ε) − I_{i,j}| is smaller than the denominator (I_{3,3} + ε), Divider 1 will produce a quotient Q = 0, which would result in a null denominator for the next stage, Divider 2 (see Fig. 19). The increment by 1 in the ceiling function implementation is done on the fraction part of the quotient Q, and this causes negligible error in the final weight value. The last step in computing the weighting function requires a multiplicative inversion, which is realized by performing the division 1/(Q + 1). These weighting coefficients are calculated simultaneously for all twenty-five pixels in the neighborhood of the pixel I_{3,3}.
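The divider chain can be modeled with Python integers to see why the increment works (a behavioral sketch in which pixel values are raw Q20.12 integers and ε is one LSB; the real design keeps fraction bits of Q, which are dropped here):

```python
FRAC_BITS = 12

def weight_fixed(center, neighbour):
    """Staircase weight 1/ceil(|p - q| / p) computed the hardware way, with
    center and neighbour given as raw Q20.12 integers: integer Divider 1 gives
    Q = floor(num / den); adding 1 yields ceil(num / den) whenever den does
    not divide num exactly, and also guards Divider 2 (the inversion
    1 / (Q + 1)) against a null denominator when Q = 0."""
    eps = 1                      # 2**-12, the smallest Q20.12 fraction
    num = abs((center + eps) - neighbour)
    den = center + eps
    q = num // den               # Divider 1: integer part of the quotient
    return 1.0 / (q + 1)         # Divider 2: multiplicative inversion
```

For a neighbour equal to the centre, Q = 0 and the weight is 1; for a centre of 100.0 and a neighbour of 1000.0 (in Q20.12, 409600 and 4096000), Q = 8 and the weight is 1/9, matching the floating-point staircase of Eq. (32).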

Figure 20: Architecture for computing the weight value.

From Eq. (31) we note that the weighting function coefficients are to be multiplied by the coefficients of the Gaussian filter. For our implementation, we are using the 5 × 5 Gaussian filter given in Eq. (13).

3.3.8 Convolution x ∗ h

The new halo reducing filter h, obtained by multiplying the Gaussian coefficients by the weighting function, is now convolved with the WDR input pixels to obtain the halo-reduced pixels. As stated earlier, we are bound by a real-time constraint of 42 nanoseconds: this is the maximum amount of time we can spend computing a halo-reduced tone-mapped pixel from the time we receive a WDR input pixel x. To satisfy this runtime constraint, we use a spatial parallelism approach, i.e. we implement multiple copies of one particular operation. Here, we have chosen to implement 25 multiplication operations in parallel. The data flow model for this parallel operation is illustrated in Fig. 21.

Figure 21: Dataflow diagram for parallel multiplication in the convolution term x ∗ h.

In Fig. 22 we show the block diagram for reducing halos in tone-mapped images by using the algorithm of Horé et al. with the halo reducing filter. In the next chapter, we will present the hardware resource utilization of our implementation; we will also present some simulation results, along with the objective quality assessments that we have performed to measure the effectiveness of our hardware implementation.

Figure 22: Block diagram for the tone mapping algorithm of Horé et al. with a halo reducing filter.

4. EXPERIMENTAL RESULTS

4.1 FPGA resource utilization for the hardware implementation

The proposed hardware architecture for the tone-mapping algorithm was modelled in Verilog HDL and synthesized using Altera's Quartus II 13.1 toolset. An Altera Cyclone III FPGA (EP3C120F780) development kit was our target platform. In Tables 2 and 3 we present the compilation results for our two implementations of the tone-mapping algorithm: an implementation of the exponent-based tone-mapping algorithm without the halo-reducing filter and an implementation with the halo-reducing filter. From Table 2 we can see that our tone-mapping implementation without the halo filter requires about 22% of the FPGA resources (26,387 logic elements (LEs) of the available 119,088), 84 Kbits of memory and 28 multipliers. The tone-mapping implementation with the halo-reducing filter requires about 78% of the FPGA resources (93,989 LEs), 88 Kbits of memory and 28 multipliers (see Table 3). Regarding the implementation based on the halo-reducing filter, we recall that it computes pixel intensity differences in the 5 × 5 neighborhood using a parallel architecture in order to satisfy our real-time constraints. Consequently, this implementation is more hardware demanding, as can be seen from its larger footprint compared to the implementation without the halo-reducing filter. We have measured the power consumption of our tone-mapping implementations using the Quartus II PowerPlay Power Analyzer tool. To improve the accuracy of the power analysis, we supplied the signal activities of our designs while they were processing an image. The signal activities during simulation are recorded in a VCD (value-change dump) file, which is a record of all the signal toggles that occurred during the simulation. Quartus II PowerPlay improves the accuracy of its power estimation by deriving signal activities from the VCD file.

Total power dissipation for the tone-mapping implementation without the halo reducing filter was mW. Power dissipation for the tone-mapping system using the halo reducing filter was mW. The higher power consumption in our second implementation is a consequence of its larger implementation footprint and corresponding signal activities. By higher signal activities, we refer to the 5 × 5 neighborhood operations carried out by the halo-reducing filter; computing the weighting function requires two dividers per function (totaling 50 dividers for the 5 × 5 neighborhood). Also, we recall from Eq. (31) that the halo-reducing filter requires a parallel array of multipliers to meet the real-time timing constraints. Overall, we have traded power for faster performance. The hardware tone-mapping implementation with the halo-reducing filter requires 7.8 ms to tone-map a 1024 × 768 image while operating at a clock frequency of 100 MHz, thus making our implementation suitable for real-time applications, as it can process images at a rate of 126 frames per second.

Table 2: Resource summary for our tone mapping implementation without the halo reducing filter.

Cyclone III              Used     Available  Percentage
Combinatorial functions  14,…     119,088    12%
Total registers          11,…     119,088    …%
Memory bits              83,290   3,981,…    …%
Embedded multipliers     28       …          …%

Table 3: Resource summary for our tone mapping implementation using the halo reducing filter.

Cyclone III              Used     Available  Percentage
Combinatorial functions  67,…     119,088    57%
Total registers          26,…     119,088    21%
Memory bits              87,176   3,981,…    …%
Embedded multipliers     28       …          …%

4.2 Objective image quality assessment of our hardware implementation

To evaluate the performance of our hardware implementation, we compare the images generated by a floating-point MATLAB implementation of the tone-mapping algorithm with those generated by the Verilog HDL implementation. We have computed two objective quality measures, by which we can establish how much distortion was introduced by our fixed-point hardware implementation. The first is the PSNR (peak signal-to-noise ratio) and the second is the SSIM (structural similarity) index [74], [75]. In general, high values of the PSNR (in theory, the PSNR varies from 0 to infinity) indicate that two images may be very similar. As for the SSIM, its positive values vary between 0 and 1, and two images are similar when their SSIM is close to 1 (they are identical if SSIM = 1). Fig. 23 shows the images that we used in our experiments for image quality assessment (WDR image sources: [36], [61], [76]). The corresponding PSNR and SSIM values for these test images are listed in Table 4. Our implementation delivers a high average PSNR and a high average SSIM. Of the 25 test images considered, for only one image does the PSNR drop below 50 dB. These results show that the

59 hardware implementation produces acceptable results that can be well compared to the MATLAB implementation. Table 4: PSNR and SSIM values for 25 test images given in Fig. 23 Image PSNR (db) SSIM Image PSNR (db) SSIM Image Image Image Image Image Image Image Image Image Image Image Image Image Image Image Image Image Image Image Image Image Image Image Image Image Average (of 25 images) 51
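To make the two measures concrete, the following sketch computes the PSNR and a simplified global SSIM between a reference image and a test image. This is a software illustration only: the standard SSIM of [74], [75] averages the index over local windows, whereas this sketch uses a single global window, and the ramp test image is a made-up example rather than one of the 25 test images.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(ref, test, peak=255.0):
    """Simplified single-window SSIM over the whole image (the standard
    index averages SSIM over local windows instead)."""
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Toy example: a gray ramp and a copy with a +/-1 quantization-like error,
# mimicking the rounding introduced by a fixed-point datapath.
ref = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
noise = np.where(np.indices(ref.shape).sum(axis=0) % 2 == 0, 1, -1)
test = np.clip(ref.astype(np.int16) + noise, 0, 255).astype(np.uint8)

print(f"PSNR: {psnr(ref, test):.2f} dB")   # ~48 dB for a +/-1 error
print(f"SSIM: {ssim_global(ref, test):.4f}")
```

A uniform error of one gray level on 8-bit data already yields roughly 48 dB, which gives a feel for how small the fixed-point distortion must be for the per-image PSNR values reported in Table 4 to exceed 50 dB.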

Figure 23: Test images used for the quality assessment of our hardware implementation of the tone-mapping algorithm with the halo-reducing filter (see Table 4 for PSNR and SSIM values).

4.2.1 Visual quality assessment of the tone-mapped images

Here, we illustrate our tone-mapping system with three sets of images. In the first two sets (Fig. 24 and Fig. 25), we show some WDR images (image source [36]), the outputs generated by the Verilog implementation of the tone-mapping system without the halo-reducing filter, the output images from the MATLAB implementation (with the halo-reducing filter), and the output images from our Verilog implementation of the tone-mapping system with the halo-reducing filter. In the third set (see Fig. 26), we have included images in which halos clearly appear in the LDR output images after tone mapping. When our tone mapping is applied with the halo-reducing filter, these halos are barely noticeable. The visual quality of the output images from our hardware implementation is very similar to that of the MATLAB implementation, which is consistent with the high PSNR and SSIM values presented in Table 4.

Figure 24: (a) WDR image. (b) Tone-mapped image (Verilog implementation, without the halo-reducing filter). (c) Tone-mapped image (MATLAB implementation with the halo-reducing filter). (d) Tone-mapped image (Verilog implementation with the halo-reducing filter).

Figure 25: (a) WDR image. (b) Tone-mapped image (Verilog implementation, without the halo-reducing filter). (c) Tone-mapped image (MATLAB implementation with the halo-reducing filter). (d) Tone-mapped image (Verilog implementation with the halo-reducing filter).

Figure 26: (a) WDR image. (b) Tone-mapped image (MATLAB implementation, without the halo-reducing filter). (c) Tone-mapped image (MATLAB implementation with the halo-reducing filter). (d) Tone-mapped image (Verilog implementation with the halo-reducing filter).

4.3 Comparison with other hardware implementations

Our hardware implementation was designed primarily for speed, as we target real-time applications. The algorithm implementation exploits the FPGA architecture by performing many operations in parallel. By doing this, we have achieved a processing speed of 126 frames per second. The logic utilization of our hardware implementation (tone mapping with the halo-reducing filter) on a low-cost Cyclone III FPGA (119,088 logic elements) is about 78% of the resources available on this device. There are larger FPGAs on the market that offer much more logic: for example, the Cyclone V 5CGXC9 and 5CEA9 FPGAs offer 301,000 logic elements, and Stratix V SGXBB devices offer 952,000 logic elements [77].

To further assess our hardware implementation, we compared our work with three other hardware implementations (see Table 5). Ofili et al. [46] reported an exponent-based tone-mapping algorithm. Their design consumes 8,546 logic elements, 68 Kbits of memory and 250 mW of power while processing an image. The PSNR reported for a test image (the memorial image, Image 21 in our experiment; see Table 4) is dB. However, this algorithm is prone to halos and false colors, and its local contrast is not always good [78]. A tone-mapped image generated using this algorithm, in which halos clearly appear, is shown in Fig. 27. Hassan et al. reported an FPGA implementation of a local tone-mapping algorithm [41]. Their tone-mapping system operates at 60 frames per second. The design requires 34,806 logic elements and a relatively large on-chip memory (3 Mbits) while delivering a PSNR of dB for the memorial image. This value is clearly smaller than ours, which indicates a better match between our hardware implementation and our software implementation. Vytla et al. reported a real-time implementation of gradient-domain HDR compression [5] using a local Poisson solver [52]. This implementation can process 1-Megapixel images at 100 frames per second, and it consumes 9,019 logic elements and 300 Kbits of memory. For the memorial test image, this implementation reported a dB PSNR value. With our implementation, we can process images at 126 frames per second with an average PSNR of dB. The implementations of Hassan et al. and Vytla et al. use constant rendering parameters for tone mapping WDR images, which may not be acceptable for real-life applications. In our implementation, the rendering parameter is automatically adjusted based on the properties of the input image, making it practically more suitable for real-time and real-life applications.

Table 5: Comparison with other tone-mapping hardware implementations.

  Tone-mapping algorithm (type)              Image size            Speed (FPS)   LEs      Memory (bits)   PSNR (dB) (Image 21)
  Hassan et al. [41] (local tone mapping)    (Grayscale)           60            34,806   3,153,…         …
  Vytla et al. [52] (local tone mapping)     1 Megapixel (Color)   100           9,019    ≈300 Kbits      …
  Ofili et al. [46] (Hybrid)                 (Color)               …             8,546    68,…            …
  This work (Hybrid)                         (Color)               126           …,989    87,176          …
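The headroom argument above can be made explicit with a small utilization calculation. The sketch below uses the device capacities quoted in this section and approximates the used-LE count as 78% of the Cyclone III device; that figure is derived from the text rather than read from a synthesis report, so the exact percentages are estimates.

```python
# Logic-element capacities quoted in this section.
DEVICES = {
    "Cyclone III (119K-LE part)": 119_088,
    "Cyclone V 5CGXC9 / 5CEA9":   301_000,
    "Stratix V SGXBB":            952_000,
}

# Approximate LE usage of the full design (tone mapping + halo-reducing
# filter), taken as ~78% of the Cyclone III device per the text above.
used_les = round(0.78 * DEVICES["Cyclone III (119K-LE part)"])

for name, capacity in DEVICES.items():
    utilization = 100.0 * used_les / capacity
    print(f"{name:26s}: {utilization:5.1f}% of {capacity:,} LEs")
```

On the larger devices the same design would occupy only about 31% and 10% of the fabric, respectively, leaving ample room for additional processing stages such as color pipelines or multiple filter instances.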

Figure 27: (a) Tone-mapped image generated by the floating-point algorithm of Ofili et al. (b) Tone-mapped image generated by our tone-mapping implementation with the halo-reducing filter.


Broad field that includes low-level operations as well as complex high-level algorithms Image processing About Broad field that includes low-level operations as well as complex high-level algorithms Low-level image processing Computer vision Computational photography Several procedures and

More information

MRT based Fixed Block size Transform Coding

MRT based Fixed Block size Transform Coding 3 MRT based Fixed Block size Transform Coding Contents 3.1 Transform Coding..64 3.1.1 Transform Selection...65 3.1.2 Sub-image size selection... 66 3.1.3 Bit Allocation.....67 3.2 Transform coding using

More information

Characterizing and Controlling the. Spectral Output of an HDR Display

Characterizing and Controlling the. Spectral Output of an HDR Display Characterizing and Controlling the Spectral Output of an HDR Display Ana Radonjić, Christopher G. Broussard, and David H. Brainard Department of Psychology, University of Pennsylvania, Philadelphia, PA

More information

CSE 167: Lecture #7: Color and Shading. Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2011

CSE 167: Lecture #7: Color and Shading. Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2011 CSE 167: Introduction to Computer Graphics Lecture #7: Color and Shading Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2011 Announcements Homework project #3 due this Friday,

More information

Texture Image Segmentation using FCM

Texture Image Segmentation using FCM Proceedings of 2012 4th International Conference on Machine Learning and Computing IPCSIT vol. 25 (2012) (2012) IACSIT Press, Singapore Texture Image Segmentation using FCM Kanchan S. Deshmukh + M.G.M

More information

Low Contrast Image Enhancement Using Adaptive Filter and DWT: A Literature Review

Low Contrast Image Enhancement Using Adaptive Filter and DWT: A Literature Review Low Contrast Image Enhancement Using Adaptive Filter and DWT: A Literature Review AARTI PAREYANI Department of Electronics and Communication Engineering Jabalpur Engineering College, Jabalpur (M.P.), India

More information

Quo Vadis JPEG : Future of ISO /T.81

Quo Vadis JPEG : Future of ISO /T.81 Quo Vadis JPEG : Future of ISO 10918-1/T.81 10918/T.81 is still the dominant standard for photographic images An entire toolchain exists to record, manipulate and display images encoded in this specification

More information

FPGA Matrix Multiplier

FPGA Matrix Multiplier FPGA Matrix Multiplier In Hwan Baek Henri Samueli School of Engineering and Applied Science University of California Los Angeles Los Angeles, California Email: chris.inhwan.baek@gmail.com David Boeck Henri

More information

Digital Image Fundamentals

Digital Image Fundamentals Digital Image Fundamentals Image Quality Objective/ subjective Machine/human beings Mathematical and Probabilistic/ human intuition and perception 6 Structure of the Human Eye photoreceptor cells 75~50

More information

EE795: Computer Vision and Intelligent Systems

EE795: Computer Vision and Intelligent Systems EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 WRI C225 Lecture 02 130124 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Basics Image Formation Image Processing 3 Intelligent

More information

One category of visual tracking. Computer Science SURJ. Michael Fischer

One category of visual tracking. Computer Science SURJ. Michael Fischer Computer Science Visual tracking is used in a wide range of applications such as robotics, industrial auto-control systems, traffic monitoring, and manufacturing. This paper describes a new algorithm for

More information

Lecture 6: Edge Detection

Lecture 6: Edge Detection #1 Lecture 6: Edge Detection Saad J Bedros sbedros@umn.edu Review From Last Lecture Options for Image Representation Introduced the concept of different representation or transformation Fourier Transform

More information

FPGAs: FAST TRACK TO DSP

FPGAs: FAST TRACK TO DSP FPGAs: FAST TRACK TO DSP Revised February 2009 ABSRACT: Given the prevalence of digital signal processing in a variety of industry segments, several implementation solutions are available depending on

More information

Segmentation of Mushroom and Cap Width Measurement Using Modified K-Means Clustering Algorithm

Segmentation of Mushroom and Cap Width Measurement Using Modified K-Means Clustering Algorithm Segmentation of Mushroom and Cap Width Measurement Using Modified K-Means Clustering Algorithm Eser SERT, Ibrahim Taner OKUMUS Computer Engineering Department, Engineering and Architecture Faculty, Kahramanmaras

More information

Edge and corner detection

Edge and corner detection Edge and corner detection Prof. Stricker Doz. G. Bleser Computer Vision: Object and People Tracking Goals Where is the information in an image? How is an object characterized? How can I find measurements

More information

Image restoration. Restoration: Enhancement:

Image restoration. Restoration: Enhancement: Image restoration Most images obtained by optical, electronic, or electro-optic means is likely to be degraded. The degradation can be due to camera misfocus, relative motion between camera and object,

More information

Corona Sky Corona Sun Corona Light Create Camera About

Corona Sky Corona Sun Corona Light Create Camera About Plugin menu Corona Sky creates Sky object with attached Corona Sky tag Corona Sun creates Corona Sun object Corona Light creates Corona Light object Create Camera creates Camera with attached Corona Camera

More information

RISC IMPLEMENTATION OF OPTIMAL PROGRAMMABLE DIGITAL IIR FILTER

RISC IMPLEMENTATION OF OPTIMAL PROGRAMMABLE DIGITAL IIR FILTER RISC IMPLEMENTATION OF OPTIMAL PROGRAMMABLE DIGITAL IIR FILTER Miss. Sushma kumari IES COLLEGE OF ENGINEERING, BHOPAL MADHYA PRADESH Mr. Ashish Raghuwanshi(Assist. Prof.) IES COLLEGE OF ENGINEERING, BHOPAL

More information

CHAPTER 3 FACE DETECTION AND PRE-PROCESSING

CHAPTER 3 FACE DETECTION AND PRE-PROCESSING 59 CHAPTER 3 FACE DETECTION AND PRE-PROCESSING 3.1 INTRODUCTION Detecting human faces automatically is becoming a very important task in many applications, such as security access control systems or contentbased

More information

Experiments with Edge Detection using One-dimensional Surface Fitting

Experiments with Edge Detection using One-dimensional Surface Fitting Experiments with Edge Detection using One-dimensional Surface Fitting Gabor Terei, Jorge Luis Nunes e Silva Brito The Ohio State University, Department of Geodetic Science and Surveying 1958 Neil Avenue,

More information

Fast Evaluation of the Square Root and Other Nonlinear Functions in FPGA

Fast Evaluation of the Square Root and Other Nonlinear Functions in FPGA Edith Cowan University Research Online ECU Publications Pre. 20 2008 Fast Evaluation of the Square Root and Other Nonlinear Functions in FPGA Stefan Lachowicz Edith Cowan University Hans-Joerg Pfleiderer

More information

IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS /$ IEEE

IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS /$ IEEE IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS 1 Exploration of Heterogeneous FPGAs for Mapping Linear Projection Designs Christos-S. Bouganis, Member, IEEE, Iosifina Pournara, and Peter

More information

Ultrasonic Multi-Skip Tomography for Pipe Inspection

Ultrasonic Multi-Skip Tomography for Pipe Inspection 18 th World Conference on Non destructive Testing, 16-2 April 212, Durban, South Africa Ultrasonic Multi-Skip Tomography for Pipe Inspection Arno VOLKER 1, Rik VOS 1 Alan HUNTER 1 1 TNO, Stieltjesweg 1,

More information

Vertex Shader Design I

Vertex Shader Design I The following content is extracted from the paper shown in next page. If any wrong citation or reference missing, please contact ldvan@cs.nctu.edu.tw. I will correct the error asap. This course used only

More information