Fast and Adaptive Bidimensional Empirical Mode Decomposition for the Real-time Video Fusion


Maciej Wielgus
Institute of Micromechanics and Photonics, Warsaw University of Technology, Warsaw, Poland
maciek.wielgus@gmail.com

Adrian Antoniewicz, Michał Bartyś, Barbara Putz
Institute of Automatic Control and Robotics, Warsaw University of Technology, Warsaw, Poland
aantoniew@gmail.com, bartys@mchtr.pw.edu.pl, bputz@mchtr.pw.edu.pl

Abstract — The Bidimensional Empirical Mode Decomposition (BEMD) method has proved capable of producing high-quality fusion of infrared (IR) and visible (VIS) images. However, the large computational complexity of this algorithm currently rules out the real-time implementation necessary in many typical applications of VIS-IR fusion, e.g., environment monitoring. In contrast, the Fast and Adaptive Bidimensional Empirical Mode Decomposition (FABEMD), a variant of BEMD in which the signal envelopes are estimated by means of order-statistics filters rather than 2D spline interpolation, is able to overcome this shortcoming. We evaluate FABEMD outputs in the context of VIS-IR fusion and present a real-time VIS-IR video fusion system based on a single-chip Field Programmable Gate Array.

Keywords — real-time image fusion, multimodal image fusion, infrared (IR) image, Bidimensional Empirical Mode Decomposition (BEMD), Field Programmable Gate Array (FPGA)

I. INTRODUCTION

The method of Empirical Mode Decomposition (EMD) was introduced in [1] as a preprocessing step for Hilbert Spectral Analysis (HSA). With EMD, a signal is decomposed into a series of zero-mean, oscillatory subsignals, the so-called Intrinsic Mode Functions (IMFs). As an adaptive, data-driven technique able to deal with nonstationary and nonlinear input, EMD rapidly became a widely recognized tool of signal analysis.
It was soon introduced to the field of image processing [2], where the two-dimensional generalization of EMD is usually referred to as Bidimensional EMD (BEMD) and the IMFs are typically called Bidimensional IMFs (BIMFs). In [3] the EMD algorithm was proposed as an image fusion technique. However, being a 1D method applied row by row, it ignores the correlation between image rows and is more sensitive to noise than 2D methods. Further variants of EMD and BEMD were discussed in this context in [4-6]. In particular, the BEMD-based infrared (IR) and visible (VIS) image fusion presented in [6] produced results of high quality. In the comparison given in [7], BEMD clearly outperformed the often-favored discrete wavelet transform (DWT) for VIS-IR fusion. Unfortunately, the heavy computational load of the method gave little hope of real-time image fusion.

Recently, the Fast and Adaptive Bidimensional Empirical Mode Decomposition (FABEMD) algorithm was introduced in [8] and further developed in [9]. In [10] good quality of FABEMD-based fusion was demonstrated, although for multifocus rather than multimodal images. Among other advantages, discussed further below, FABEMD offers a significant reduction of computation time in comparison to BEMD. Together with a low-level implementation in a Field Programmable Gate Array (FPGA), this allowed us to develop a real-time video fusion system with VIS and IR inputs, intended for environment monitoring.

This paper is organized as follows. Section II gives a brief description of the EMD algorithm. Section III describes the FABEMD algorithm, contrasting it with the regular, interpolation-based EMD. Section IV discusses the fusion algorithm.

This work has been supported in part by the research project No O R00 0019 07 of the National Centre for Research and Development in Poland.
Section V presents the details of the fusion implementation on an FPGA circuit. Finally, results are discussed in Section VI and conclusions are given in Section VII.

II. EMD OVERVIEW

EMD consists of performing the so-called sifting procedure, during which signal maxima and minima are identified and serve as nodes for interpolation of the upper and lower signal envelopes, respectively. In the 1D case the interpolation is mainly based on cubic splines [1]. In image processing with BEMD, radial basis functions [11] or bicubic splines, possibly coupled with domain triangulation [12], are utilized. The mean envelope E_m is then calculated as the average of the upper and lower envelopes and subtracted from the data. If a certain condition is fulfilled, the result becomes the first decomposed element, IMF 1; otherwise the procedure is repeated until the condition is met (internal loop of EMD). The purpose of these iterations is to improve the symmetry between the upper and lower envelopes and to ensure that there is a zero-crossing between every two extrema, so that the extracted IMFs are truly zero-mean and oscillatory. The stop condition commonly takes the form of a small enough normalized difference between the results of consecutive iterations [1], as a perfect IMF would be a fixed point of this loop. Finally, the obtained IMF is subtracted from the initial data and the algorithm is iterated on the result (external loop of EMD). The number of extrema diminishes as the algorithm progresses, eventually leading to a monotonic residual signal r_N(x). The initial signal s(x) can therefore be represented as:

s(x) = Σ_{i=1}^{N} IMF_i(x) + r_N(x).          (1)

The typically small number of decomposition levels N, and often the physical meaningfulness of the IMFs for real data, constitute additional advantages of the EMD algorithm.

III. FABEMD METHOD DETAILS

A. Algorithm overview

The most significant difference between FABEMD and regular BEMD is that the former utilizes statistical MAX/MIN filters with additional smoothing by averaging to estimate the envelopes and, as argued in [8], does not demand additional iterations for a single IMF extraction (there is no internal loop of EMD). The FABEMD algorithm can therefore be summarized by the flowchart in Fig. 1. The order-statistics (MAX/MIN) and smoothing window size is calculated from the distribution of signal extrema and the distances between neighboring maxima (minima). Several methods of smoothing window size selection were proposed and discussed in [8-9]; e.g., it can be the rounded lowest Euclidean distance between extrema of the same type (Lowest Distance Order Statistics Filter Width, LD-OSFW). A simple unweighted average is typically used for smoothing.

B. Comparison with BEMD

There is no 2D spline interpolation on an irregular grid in the FABEMD method, which is the main reason for the improvement in time efficiency. Secondly, the method does not introduce overshooting and undershooting errors into the envelope estimation, which is the case with BEMD. Moreover, it reduces the typical problem of BEMD with interpolation at the image border. This was particularly troublesome with methods involving triangulation, in which an effective interpolation could only be obtained inside the convex hull of the extrema set. Only relatively basic operations, such as 2D convolution, are required to perform FABEMD. This is why the method is well suited to low-level FPGA implementation.
IMAGE FUSION WITH FABEMD

The EMD-based approach to image fusion benefits from the adaptivity of the algorithm, which recognizes characteristic scales and image features better than linear methods. The idea of EMD-based fusion is to decompose the images to be fused and, on each decomposition level, locally select (based on a certain decision rule) the signal that carries more valuable information. In [6-7] high quality of VIS-IR image fusion with BEMD was reported, favoring BEMD over more common methods such as the contrast pyramid or the discrete wavelet transform. However, with the computationally expensive radial basis function interpolation and the rather sophisticated decision rule of [6], that method could not be used in a real-time application. Such a possibility emerged with the introduction of FABEMD.

Figure 1. Flowchart of the FABEMD algorithm.

Fusion based on FABEMD can be summarized in the following steps:
1. Perform FABEMD of both initial images.
2. For each decomposition level, combine the values of the two respective BIMFs.
3. Combine the two residues.
4. Sum up all combined components to obtain the fusion result.

Clearly, for a meaningful comparison between the respective BIMFs extracted from different images, their scales should be matched. This is why the size of the statistical MAX/MIN filter used in step (1) has to be agreed for both images on every level of the decomposition. Some assumptions on the filter window size can be exploited to avoid redundant computations; e.g., with the LD-OSFW choice the first BIMF is, for almost any real data, calculated with the smallest possible window size. In Fig. 2 (c-h) exemplary BIMFs are shown, along with the residual images after extraction of 3 BIMFs in Fig. 2 (i-j).
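The four steps above can be sketched in a few lines of NumPy/SciPy. This is a minimal illustration under simplifying assumptions, not the paper's FPGA implementation: a fixed, shared window schedule stands in for the agreed per-level LD-OSFW sizes, BIMFs are combined by the MAX(ABS) rule discussed below, residues are averaged, and all function names are our own.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter, uniform_filter

def extract_bimf(img, w):
    """One FABEMD level: MAX/MIN order-statistics envelopes smoothed
    by an averaging filter of the same (agreed) window width w."""
    upper = uniform_filter(maximum_filter(img, size=w), size=w)
    lower = uniform_filter(minimum_filter(img, size=w), size=w)
    mean_env = 0.5 * (upper + lower)
    return img - mean_env, mean_env  # BIMF, remaining residue

def fabemd_fuse(vis, ir, windows=(3, 5, 9)):
    """Steps 1-4 of the fusion summary: decompose both inputs with a
    shared window schedule, combine BIMFs by MAX(ABS), average residues."""
    a, b = vis.astype(float), ir.astype(float)
    fused_levels = []
    for w in windows:                              # step 1: shared scales
        bimf_a, a = extract_bimf(a, w)
        bimf_b, b = extract_bimf(b, w)
        pick_a = np.abs(bimf_a) >= np.abs(bimf_b)  # step 2: MAX(ABS)
        fused_levels.append(np.where(pick_a, bimf_a, bimf_b))
    fused_residue = 0.5 * (a + b)                  # step 3: average residues
    return sum(fused_levels) + fused_residue       # step 4: sum components
```

Because each BIMF is defined as the input minus its mean envelope, the components of a single image sum back to that image exactly; fusing an image with itself therefore returns the image unchanged, which makes a convenient sanity check.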

Figure 2. Image "Street": VIS input (a), IR input (b); BIMF 1 for VIS and IR (c-d); BIMF 2 for VIS and IR (e-f); BIMF 3 for VIS and IR (g-h); residues for VIS and IR after subtraction of 3 BIMFs (i-j); results of combining BIMFs at 3 decomposition levels (k-m); result of combining residues (n) and the final result of FABEMD fusion (o).

The method of combining two BIMFs in step (2) has a significant influence on the final result. For multifocus image fusion ([5], [10]) the reasonable choice is to select the BIMF with the locally larger variance (or another measure of spatial activity), which is expected to be larger for the in-focus image. In the VIS-IR fusion case, however, the local variance method did not perform significantly better than the much less computationally expensive MAX(ABS) criterion given in (2):

C_i(x) = BIMF_i^{VIS}(x) if |BIMF_i^{VIS}(x)| ≥ |BIMF_i^{IR}(x)|, and BIMF_i^{IR}(x) otherwise,          (2)

where C_i(x) represents the combination of the IR and VIS image BIMFs on the i-th decomposition level. Examples of such combinations are given in Fig. 2 (k-m).

The residues should be combined as well, which is particularly important for multimodal image fusion, where the residues may differ significantly, unlike in multifocus fusion.
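As a concrete illustration of the two decision rules contrasted here, the sketch below implements both the MAX(ABS) criterion and a simple windowed-variance alternative; the window width and function names are our own choices, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(x, w=7):
    """Windowed variance E[x^2] - E[x]^2 over a w x w neighbourhood."""
    m = uniform_filter(x, size=w)
    return uniform_filter(x * x, size=w) - m * m

def combine_bimfs(bimf_vis, bimf_ir, rule="maxabs"):
    """Pointwise selection between two BIMFs of the same level."""
    if rule == "maxabs":
        # MAX(ABS): keep the sample with the larger absolute value
        pick_vis = np.abs(bimf_vis) >= np.abs(bimf_ir)
    else:
        # multifocus-style rule: keep the locally more active sample
        pick_vis = local_variance(bimf_vis) >= local_variance(bimf_ir)
    return np.where(pick_vis, bimf_vis, bimf_ir)
```

The MAX(ABS) branch needs only an absolute value and a comparison per pixel, while the variance branch requires two extra smoothing passes per BIMF, which is why the cheaper rule is attractive for a hardware pipeline.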

The MAX(ABS) criterion has no motivation in this case, as the residues are not oscillatory in nature. Therefore in step (3) we adopted a simple arithmetic average of the residues as the combination rule.

Experiments revealed that a full decomposition is not required for highly satisfactory fusion results. What is crucial is to transfer to the fused image the small but palpable details present in only one of the inputs. Fusion on the larger scales can be performed by taking the mean value without significant quality loss; see Fig. 2n. This is why we select only the first 3-5 BIMFs, treating the remaining parts as residues. The final result in step (4) is obtained as the sum of the C_i(x) components and the combination of residues, see Fig. 2o.

An issue with a huge impact on fusion quality is image alignment. The presented system is designed to work in a dynamic environment, potentially outdoors or on a moving vehicle, possibly tracking objects at varying distances. It is therefore exposed to vibrations and varying temperatures that influence the cameras' properties. These effects demand real-time software correction of image misalignment, in addition to the correction for the fixed relative positions of the cameras. The computational load of image alignment reduces the resources available for the fusion and must be taken into account when designing the real-time implementation.

V. REAL-TIME FUSION IMPLEMENTATION

A. Hardware

Implementations of real-time image registration and fusion have been announced in [13-15]. In [13] an FPGA-based system called Ad-FIRE is described. It is intended for military purposes, and no detailed information about the applied image fusion approach has been given. Reference [14] presents a bulky real-time image registration and fusion prototype implemented on an embedded PC hardware platform composed of off-the-shelf components.
Reference [15] demonstrates a real-time implementation of an image fusion system on the Octec ADEPT60 VME card, typically used in video-tracker appliances; the authors announce that a tailor-made image fusion card is under development.

Implementation of multispectral fusion at video frame rates requires high computational power. For portable and outdoor applications, and especially for high data-stream rates, either Field Programmable Gate Arrays (FPGAs) or CPU-based processors are to be considered. Compared with CPU-based processors, high-performance FPGAs offer a low power/heat dissipation ratio [15] combined with massive processing throughput. Our experiments show that the power dissipation of the FPGA image fusion system stays under 4 W; while running real-time FABEMD-based image fusion, the FPGA itself dissipates less than 1 W. This allows FPGA-based fusion systems to be deployed in battery-powered and/or passively cooled cabinets. The flexible, programmable architecture of an FPGA also makes the overall electronic system extremely compact, as it replaces a large amount of the external components and peripherals (glue logic) needed in processor-based systems.

Off-the-shelf hardware video development boards are commercially available. These boards, however, are useful only in the early development phases of image registration and fusion approaches. Typically they do not comply with electromagnetic compatibility (EMC) requirements and cannot be used over an extended ambient temperature range. Therefore, we decided to build a custom FPGA-based system called UFO, principally intended to manage real-time image alignment and fusion at 50 Hz and above. The structure of the UFO system is presented in Fig. 3.
Analogue CCIR-coded video streams of different modalities, from the visible and infrared spectrum cameras, are fed into a low-power multi-channel NTSC/PAL integrated video decoder via appropriate external passive filters, built from a few tiny discrete passive components. Video parameters such as hue, contrast, brightness, saturation and sharpness are programmed for each channel by means of the I2C serial interface. The video decoder generates digital video outputs and provides synchronization, blanking, lock and clock signals for the FPGA. Both luminance and chrominance are presented to the FPGA as 8-bit parallel standard-coded (ITU-R BT.656) digital video outputs, but only the luminance data streams are used for further processing.

The FPGA chosen for the UFO prototype board is a low-cost, low-power device providing 150K logic elements, 6.48 Mb of embedded memory, 360 18x18 multipliers, and 475 user I/O pins arranged in 11 banks. The FPGA runs at a 150 MHz clock frequency. Two banks of external DDRAM2 memories extend the available memory space up to 2 x 64M x 32b. Additionally, a synchronous burst static RAM (2M x 18b) completes the memory resources of the system. It should be mentioned, though, that the acceleration of the fusion processing was accomplished in a way that minimizes the total number of memory transfers; additionally, the FPGA embedded memory is used intensively to speed up the calculations.

Figure 3. Simplified block schematics of the prototype UFO board.

The fused image is fed to the external video encoder as a digital graphics output signal and is then displayed by means of the integrated video encoder. The encoder accepts a digital graphics signal and transmits the video stream through a TV output (S-Video). The device accepts data from the FPGA over a 12-bit-wide data port and outputs a TV-standard signal by means of 10-bit video Digital-to-Analogue Converters (DACs). The encoder's TV processor performs non-interlace to interlace conversion with scaling and flicker filters, and encodes the data into any of the NTSC or PAL video standards. It supports 8 graphics resolutions, up to 1024 by 768 pixels.

B. Software

Early implementations of FABEMD image fusion were carried out on the commercially available Altera Cyclone III Video Development Kit. First, a NIOS II processor was embedded in the FPGA of the kit, intended to manage the other computational blocks developed for the fusion processing. After power-up, the processor initializes the video inputs for acquisition of the video data streams. A block diagram of the FABEMD fusion implementation in the FPGA is shown in Fig. 4, where the Window blocks denote window size calculation operations and the Filter blocks represent MAX/MIN filtration and smoothing (Fig. 1). The fusion processor performs steps 2-4 of the fusion algorithm summary (Section IV).

TABLE I. FABEMD FUSION IMPLEMENTATION: ALTERA CYCLONE III REQUIREMENTS

Resource type | Used   | Available | Used [%]
Logic cells   | 17860  | 119088    | 15
Registers     | 7862   | 119088    | 7
Memory bits   | 617472 | 3981312   | 15
DSP elements  | 0      | 576       | 0

Buffered image frames from both video data streams are transferred by Direct Memory Access (DMA) channels directly to the FABEMD fusion processing block and then to the frame buffer of the digital graphics output channel. Each FABEMD decomposition level is computed in parallel in order to increase the overall system throughput. This was accomplished by means of special input image buffers.
The data from the DMA channels overwrite the image buffers while providing immediate access to each input data sample. The output coefficients calculated at each decomposition level (BIMFs) are processed by the fusion processor, which generates the output image. The implemented fusion processing structure, with a parallelized and pipelined processing flow, is capable of computing one pixel of the fused image per FPGA system clock cycle. The output data of the fusion process are saved in the fusion output frame buffer and displayed on the video output device. The resource requirements of the FABEMD image fusion implementation are shown in Table I. In comparison to other FPGA multilevel fusion implementations [16], the presented structure requires a small number of RAM blocks and achieves a higher clock speed (tested up to 150 MHz). This makes fusion of high-resolution images possible.

VI. DISCUSSION OF RESULTS

Fig. 2 presents an exemplary FABEMD decomposition into 3 BIMFs and a residue, as well as the partial combination results and the final fusion result (Fig. 2o). For quantitative evaluation of fusion quality, the objective image fusion performance measure (OIFPM) [17] was used. OIFPM reflects how accurately the information about image gradient magnitude and orientation is transferred to the fused image. The OIFPM values for the two exemplary images, presented in Fig. 2 and Fig. 5, are given in Table II. Comparison with several popular fusion methods (as in [6]) indicates the superiority of FABEMD.

TABLE II. OIFPM FUSION QUALITY MEASURE RESULTS

Image           | Mean   | Contrast pyramid | DWT (DBS(2,2)) | FABEMD
Plane and trees | 0.4535 | 0.6235           | 0.6393         | 0.6918
Street          | 0.4454 | 0.4831           | 0.5724         | 0.5986

Figure 4. Block diagram of the FABEMD fusion implementation in the FPGA.
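The one-pixel-per-clock-cycle throughput claim can be sanity-checked with back-of-envelope arithmetic (our own illustration; it ignores blanking intervals, alignment correction and memory-transfer overheads):

```python
# Illustrative throughput estimate for a one-pixel-per-clock pipeline.
clock_hz = 150e6                        # tested FPGA clock frequency
pixels_per_frame = 640 * 480            # 307,200 pixels per fused frame
max_fps = clock_hz / pixels_per_frame   # theoretical output frame rate
print(round(max_fps))                   # prints 488
```

A theoretical ceiling of roughly 488 fused frames per second leaves ample headroom over the 25 frame pairs per second the prototype actually processes, which is consistent with much of the clock budget being spent outside the fusion pipeline itself.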

Figure 5. Image "Plane and trees": VIS input (a), IR input (b), fusion by mean value (c), fusion by contrast pyramid (d), fusion by discrete wavelet transform (e), fusion by FABEMD (f).

In Fig. 5 the resulting images of fusion with four different algorithms are shown for the image "Plane and trees". Note how the hot plane engines, present exclusively in the IR image, are transferred to the fusion result, while the trees, which are out of focus in the IR image, remain unblurred. In the presented example the FABEMD result (Fig. 5f) represents the best fusion quality, providing better contrast than DWT (Fig. 5e), without introducing smoothing, as averaging does (Fig. 5c), or unnatural background inhomogeneity, as the contrast pyramid method does (Fig. 5d).

VII. CONCLUSIONS

We have presented an FPGA implementation of FABEMD-based image fusion for real-time video fusion applications. The proposed solution takes advantage of the FABEMD properties valuable for image fusion while ensuring the high processing speed demanded for real-time operation. In the performed tests, FABEMD proved more effective than several popular fusion methods for multimodal (VIS and IR) image fusion. The developed prototype of the real-time fusion system is able to process 25 pairs of image frames per second at resolutions up to 640x480 pixels, a highly satisfactory result considering the typically low resolution of IR cameras.

ACKNOWLEDGMENT

M. Wielgus thanks Professor K. Patorski for the valuable introduction to the methods of BEMD and FABEMD.

REFERENCES

[1] N. E. Huang, Z. Sheng, S. R. Long, M. C. Wu, W. H. Shih, Q. Zeng, N. C. Yen, C. C. Tung, and H. H. Liu, "The empirical mode decomposition and the Hilbert spectrum for non-linear and non-stationary time series analysis," Proc. Roy. Soc. Lond. A 454, pp. 903-995, 1998.
[2] A. Linderhed, "2-D empirical mode decompositions in the spirit of image compression," Proc. SPIE, Wavelet and Independent Component Analysis Applications IX, vol. 4738, pp. 1-8, 2002.
[3] H. Hariharan, A. Gribok, M. Abidi, and A. Koschan, "Image fusion and enhancement via empirical mode decomposition," Journal of Pattern Recognition Research, vol. 1, no. 1, pp. 16-32, 2006.
[4] D. P. Mandic, M. Golz, A. Kuh, D. Obradovic, and T. Tanaka, Signal Processing Techniques for Knowledge Extraction and Information Fusion, Springer, New York, 2008.
[5] D. Looney and D. P. Mandic, "Multiscale image fusion using complex extensions of EMD," IEEE Transactions on Signal Processing, vol. 57, no. 4, pp. 1626-1630, 2009.
[6] W. Liang and Z. Liu, "Region-based fusion of infrared and visible images using Bidimensional Empirical Mode Decomposition," Int. Conf. on Educ. and Inf. Techn. (ICEIT), pp. V3-358 - V3-363, 2010.
[7] X. Zhang, Q. Chen, and T. Men, "Comparison of fusion methods for the infrared and color visible images," Int. Conf. on Comp. Science and Inf. Techn. (ICCSIT), pp. 421-424, 2009.
[8] S. M. A. Bhuiyan, R. R. Adhami, and J. F. Khan, "A novel approach of fast and adaptive bidimensional empirical mode decomposition," IEEE Int. Conf. on Acoustics, Speech and Signal Processing, pp. 1313-1316, 2008.
[9] S. M. A. Bhuiyan, R. R. Adhami, and J. F. Khan, "Fast and adaptive bidimensional empirical mode decomposition using order-statistics filter based envelope estimation," EURASIP J. Adv. Signal Proc., ID 728356, pp. 1-18, 2008.
[10] M. U. Ahmed and D. P. Mandic, "Image fusion based on Fast and Adaptive Bidimensional Empirical Mode Decomposition," IEEE 13th Conference on Information Fusion (FUSION 2010), pp. 1-6, 2010.
[11] J. C. Nunes, Y. Bouaoune, E. Delechelle, O. Niang, and Ph. Bunel, "Image analysis by bidimensional empirical mode decomposition," Image Vis. Comput., vol. 21, pp. 1019-1026, 2003.
[12] C. Damerval, S. Meignen, and V. Perrier, "A fast algorithm for bidimensional EMD," IEEE Signal Process. Lett., vol. 12, pp. 701-704, 2005.
[13] T. Waters, L. Swan, and R. Rickman, "Real-time image registration and fusion in a FPGA architecture (Ad-FIRE)," Proc. SPIE 8042, 80420Y, 2011; http://dx.doi.org/10.1117/12.883807.
[14] J. P. Heather, M. I. Smith, J. Sadler, and D. Hickman, "Issues and challenges in the development of a commercial image fusion system," Proc. SPIE 7701, 77010A, 2010; http://dx.doi.org/10.1117/12.850018.
[15] D. Dwyer, M. Smith, J. Dale, and J. Heather, "Real time implementation of image alignment and fusion," in Electro-Optical and Infrared Systems: Technology and Applications, R. G. Driggers and D. A. Huckridge, Eds., Proc. SPIE vol. 5612, pp. 85-93, 2004.
[16] O. Sims and J. Irvine, "An FPGA implementation of pattern-selective pyramidal image fusion," IEEE Proceedings of FPL 2006, pp. 1-4.
[17] C. Xydeas and V. Petrovic, "Objective image fusion performance measure," Electronics Letters, vol. 36, pp. 308-309, 2002.