Image fusion is the process of combining the relevant information from two or more images into a single, highly informative image; the resulting fused image contains more information than any of the input images. In this paper, different methods for fusing images of different modalities (e.g., MRI and CT, or multispectral and panchromatic) are presented and compared, along with a technique for improving the fusion. The methods presented are the simple averaging method, the Principal Component Analysis (PCA) method, several wavelet transform methods, and the integration of these wavelet methods with the PCA method. The wavelet transforms used here are the Symlet, bi-orthogonal, discrete Meyer, and reverse bi-orthogonal wavelets. The fusion results obtained from the above methods are evaluated and compared according to the following measures: mean, standard deviation, entropy (H), correlation coefficient (CC), covariance, root mean square error (RMSE), and peak signal-to-noise ratio (PSNR).
Keywords: Image Fusion, Panchromatic, Multispectral, Root Mean Square Error (RMSE), Peak Signal-to-Noise Ratio (PSNR)
INTRODUCTION
In some cases high spectral and high spatial resolution are needed together, but the instruments available now cannot produce such data because of several limitations. In addition, several external effects blur images: atmospheric turbulence, the camera lens, relative camera-scene motion, and so on. Image fusion is a useful technique for combining similar-sensor and multi-sensor images to improve the information content present in the images. Image fusion techniques have been developed to fuse the complementary information of multi-source input images into a new image that is more suitable for human visual or machine perception. Image fusion has applications in various areas such as medical imaging, robotics, manufacturing, military and law enforcement, remote sensing, and so on. Computed Tomography (CT) and Magnetic Resonance (MR) imaging are the most important modalities in medical imaging, used for clinical diagnosis and computer-aided surgery. CT provides more information about bone structures and less information about soft tissues; MR imaging provides more information about soft tissues and less information about bone structures. A single medical imaging modality therefore cannot provide comprehensive and accurate information, so anatomical and functional medical images are combined through image fusion to provide much more useful information. In robotics, motion control is based on feedback from visual, tactile, force/torque, and other types of sensors, so image fusion is useful there as well. In manufacturing it is useful for electronic circuit and component inspection, product surface measurement and inspection, non-destructive material inspection, and manufacturing process monitoring. In military and law enforcement it is useful for detection, tracking, and identification of air, ground, and ocean targets or events, concealed weapon detection, battlefield monitoring, night pilot guidance, and so on. Different methods are available to implement image fusion, such as Principal Component Analysis (PCA) [1], the averaging method, and the wavelet transform method; to improve performance, any two methods can be integrated. The first two methods change the spectral characteristics considerably. The wavelet transform, due to its multi-resolution decomposition property, is more suitable for merging the panchromatic band and the multispectral bands of the same sensor.
LITERATURE SURVEY
Many image fusion methods have been proposed for combining two images: Intensity Hue Saturation (IHS) [12], Principal Component Analysis (PCA) [1][12], the Brovey Transform (BT) [12], High Pass Filtering (HPF) [8], High Pass Modulation (HPM) [8], Multiresolution Analysis Based Intensity Modulation (MRAIM) [8], Pixel Block Intensity Modulation (PBIM) [12], Smoothing Filter Based Intensity Modulation (SFIM) [12], the Wavelet Transform (WT) [2], and others. In this paper, a simple average of all corresponding pixels of the input images is taken to produce the averaged pixels from which the fused image is obtained. Two types of PCA-based image fusion are considered: 1) in a PCA multichannel image, replacement of the first principal component by a different image; and 2) PCA of all multi-image data channels. The basic idea of all wavelet-based fusion schemes is to combine the respective wavelet coefficients of the input images according to a specific fusion rule. The wavelet decomposition of each source image is performed, leading to a multiresolution representation. The actual fusion is performed by combining the corresponding wavelet decomposition coefficients of all input images to build a single fused wavelet decomposition. This combination takes place on all decomposition levels k (k = 1, 2, ..., L), where L is the maximum wavelet decomposition level. Two different fusion rules are applied to combine the most important features of the input images: a basic fusion rule is applied to the Lth-level approximation sub-band, and the three detail sub-bands (horizontal, vertical, and diagonal) are combined using a more sophisticated fusion algorithm [10]. Principal Component Analysis (PCA) is a vector space transform used for reducing multidimensional data sets to lower dimensions: the eigenvalues and eigenvectors are calculated, the eigenvectors are sorted in decreasing order of eigenvalue, and the best eigenvector is chosen to perform the fusion. To improve the fusion, any two methods can be integrated.
WEIGHTED AVERAGE METHOD
This is essentially pixel-based image fusion. The work presented in this paper assumes that the input images meet a number of requirements. Firstly, the input images must be of the same scene, i.e. the fields of view of the sensors must contain a spatial overlap. Furthermore, the inputs are assumed to be spatially registered and of equal size and spatial resolution. This paper addresses pixel-level fusion with two monochrome input images and a monochrome fused output image; however, all of the work can easily be extended to accommodate a higher number of inputs, and explanations of this are provided where appropriate [11]. The steps of the proposed pixel-based image fusion method are explained below:
Step 1: Read the set of multifocus images; the proposed algorithm considers two images of the same size (registered images).
Step 2: The alpha factor can be varied to change the proportion in which the two images are mixed: with alpha = 0.5 the two images are mixed equally; with alpha < 0.5 the background image contributes more; with alpha > 0.5 the foreground image contributes more. The proposed algorithm uses alpha = 0.5.
Step 3: Perform element-by-element multiplication of the foreground image array by the alpha factor, and of the background image array by the complement of the alpha factor.
Step 4: Perform a pixel-by-pixel comparison of the intensity values and find the maximum intensity value.
Step 5: This maximum value is taken for the final output image.
Step 6: Display the fused image, which contains the highest intensity value at each pixel.
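The steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's actual code; the names `average_fusion`, `foreground`, and `background` are chosen here for clarity.

```python
import numpy as np

def average_fusion(foreground, background, alpha=0.5):
    """Pixel-level weighted-average fusion sketch.

    Step 3: scale the foreground by alpha and the background by (1 - alpha).
    Steps 4-6: keep the larger of the two scaled intensities at each pixel.
    """
    fg = np.asarray(foreground, dtype=float)
    bg = np.asarray(background, dtype=float)
    if fg.shape != bg.shape:
        raise ValueError("input images must be registered and of equal size")
    return np.maximum(alpha * fg, (1.0 - alpha) * bg)
```

With alpha = 0.5, as in the proposed algorithm, each output pixel is simply half of the larger of the two input intensities.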
WAVELET TRANSFORM METHODS
The DWT can be interpreted as signal decomposition into a set of independent, spatially oriented frequency channels. The signal is passed through two complementary filters and emerges as two signals, the approximation and the details; this is called decomposition or analysis. The components can be assembled back into the original signal without loss of information; this process is called reconstruction or synthesis. The mathematical manipulations that implement analysis and synthesis are called the discrete wavelet transform and the inverse discrete wavelet transform.
In all wavelet-based image fusion techniques, the wavelet transforms W of the two registered input images I1(x, y) and I2(x, y) are computed, and these transforms are combined using some fusion rule Ø, as shown in the equation below:
I(x, y) = W⁻¹( Ø( W(I1(x, y)), W(I2(x, y)) ) )
The steps of the proposed wavelet-based image fusion method are explained below:
Step 1: Read the set of multifocus images; the proposed algorithm considers two images of the same size (registered images).
Step 2: Apply wavelet decomposition to both images using a Daubechies filter.
Step 3: Extract the horizontal, vertical, and diagonal detail coefficients from the wavelet decomposition structure [C, S].
Step 4: Average the approximation coefficients of the two decomposed images.
Step 5: Compare the horizontal, vertical, and diagonal coefficients of the two images and apply the maximum selection scheme, selecting the larger coefficient value at each position. Perform this for all m x n coefficient values of the image.
Step 6: Repeat the decomposition and fusion with different wavelet filters (namely the bi-orthogonal, discrete Meyer, and reverse bi-orthogonal wavelets).
Step 7: Display the final fused image.
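A single-level version of Steps 2-5 can be sketched with the PyWavelets package (`pywt`), assuming it is available. The function name `wavelet_fusion` and the specific Daubechies filter `db2` are illustrative choices; Step 6 of the method would simply repeat the call with other wavelet names (e.g. `bior1.3`, `dmey`, `rbio1.3`).

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def wavelet_fusion(img1, img2, wavelet="db2"):
    """One-level DWT fusion sketch: average the approximation sub-bands,
    apply maximum selection to the detail sub-bands, then reconstruct."""
    cA1, (cH1, cV1, cD1) = pywt.dwt2(np.asarray(img1, float), wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(np.asarray(img2, float), wavelet)
    cA = (cA1 + cA2) / 2.0                     # Step 4: average approximations
    details = tuple(np.maximum(a, b)           # Step 5: maximum selection
                    for a, b in zip((cH1, cV1, cD1), (cH2, cV2, cD2)))
    return pywt.idwt2((cA, details), wavelet)  # inverse DWT (synthesis)
```

Because the DWT/IDWT pair reconstructs without loss, fusing an image with itself returns the original image, which is a convenient sanity check.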
PRINCIPAL COMPONENT ANALYSIS (PCA) METHOD
PCA is a vector space transform used for reducing multidimensional data sets to lower dimensions. The PCA algorithm for the fusion of images is as follows:
Step 1: Generate a column vector from each input image matrix.
Step 2: Calculate the covariance matrix of the two column vectors.
Step 3: The diagonal elements of the 2x2 covariance matrix contain the variance of each column vector with itself.
Step 4: Calculate the eigenvalues and eigenvectors of the covariance matrix.
Step 5: Normalize the eigenvector corresponding to the larger eigenvalue by dividing each element by the mean of that eigenvector.
Step 6: The values of the normalized eigenvector act as the weights, which are multiplied with each pixel of the respective input images.
Step 7: The sum of the two scaled matrices calculated in the previous step is the fused image matrix.
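The steps above can be sketched with NumPy as follows. The names are chosen here for illustration, and one detail differs: the eigenvector is normalized by the sum of its elements so that the two weights sum to one (dividing a two-element vector by its mean, as in Step 5, gives weights summing to two).

```python
import numpy as np

def pca_fusion(img1, img2):
    """PCA-weighted image fusion sketch following the steps above."""
    a = np.asarray(img1, float)
    b = np.asarray(img2, float)
    v1, v2 = a.ravel(), b.ravel()              # Step 1: column vectors
    cov = np.cov(np.vstack((v1, v2)))          # Steps 2-3: 2x2 covariance matrix
    values, vectors = np.linalg.eigh(cov)      # Step 4: eigenvalues / eigenvectors
    principal = vectors[:, np.argmax(values)]  # eigenvector of the larger eigenvalue
    w = principal / principal.sum()            # Step 5: normalize to weights
    return w[0] * a + w[1] * b                 # Steps 6-7: weighted sum
```

For identical inputs the covariance matrix is symmetric with equal entries, the weights come out as 0.5 each, and the fused image equals the input.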
PROPOSED WORK
The main condition for successful fusion is that all visible information in the input images should also appear visible in the fused image. The simplest form is a weighted average of the registered inputs to give the fused image; the simplest of these image fusion methods just takes the pixel-by-pixel gray-level average of the source images. This simplistic approach often has serious side effects: pixel-level image fusion methods suffer from a blurring effect that directly reduces the contrast of the image.
The PCA technique places much more emphasis on spatial information than on spectral information. It can achieve higher spatial resolution but preserves less spectral fidelity.
In the wavelet transform method the input images are decomposed using the respective wavelets, and compared with the above two methods, the wavelet transform methods produce better results. Many of the proposed image fusion techniques aim at improving the fusion rate rather than enhancing the information content of the images. To improve the information content further, this paper proposes integrating the wavelet method with the PCA method.
The whole process consists of the following steps:
Step 1: Transform the multispectral image into its IHS or PCA components.
Step 2: Apply histogram matching between the panchromatic image and the intensity component to obtain a new panchromatic image.
Step 3: Decompose the histogram-matched panchromatic image and the intensity component into wavelet planes, respectively.
Step 4: Replace the LLP sub-band of the panchromatic decomposition with the LL1 sub-band of the intensity decomposition, add the detail images of the panchromatic decomposition to the corresponding detail images of the intensity decomposition to obtain LL1, LHP, HHP, and HLP, and perform an inverse wavelet transform to generate a new intensity component.
Step 5: Transform the new intensity together with the hue and saturation components (or PC1, PC2, PC3) back into RGB space [4].
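The wavelet part of the scheme (Steps 2-4) can be sketched as below, again using PyWavelets with a single decomposition level. Everything here is a simplification assumed for illustration: the rank-based `histogram_match` helper is one common way to implement Step 2, and the IHS/PCA forward and inverse transforms of Steps 1 and 5 are omitted.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def histogram_match(source, template):
    """Rank-based histogram matching: give each source pixel the
    template value of the same rank (images are the same size)."""
    s = np.asarray(source, float).ravel()
    t = np.asarray(template, float).ravel()
    ranks = np.argsort(np.argsort(s))
    return np.sort(t)[ranks].reshape(np.shape(source))

def fuse_intensity(pan, intensity, wavelet="db2"):
    """Step 2: histogram-match the panchromatic image to the intensity.
    Step 3: decompose both into wavelet planes.
    Step 4: keep the intensity approximation (LL1), add the detail
    planes, and inverse-transform to obtain the new intensity."""
    pan_matched = histogram_match(pan, intensity)
    _, details_p = pywt.dwt2(pan_matched, wavelet)
    cA_i, details_i = pywt.dwt2(np.asarray(intensity, float), wavelet)
    details = tuple(dp + di for dp, di in zip(details_p, details_i))
    return pywt.idwt2((cA_i, details), wavelet)
```

The returned array would then replace the intensity component before the inverse IHS (or PCA) transform of Step 5.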
PERFORMANCE MEASURES FOR IMAGE FUSION
Assessment of image fusion performance can be divided into two categories: with and without reference images. In reference-based assessment, the fused image is evaluated against a reference image that serves as ground truth; in the second case, only the fused image is considered. In the present work, the following performance measures are used to evaluate the image fusion algorithms.
A. ENTROPY (H)
The entropy (H) is a measure of the information content of an image. The maximum entropy is attained when every gray level in the whole range occurs with the same frequency. If the entropy of the fused image is higher than that of the parent images, the fused image contains more information.
B. MEAN (M)
The mean is the average gray level of the image. In MATLAB, M = mean(A) returns the mean values of the elements along different dimensions of an array: if A is a vector, mean(A) returns the mean value of A; if A is a matrix, mean(A) treats the columns of A as vectors and returns a row vector of mean values; if A is a multidimensional array, mean(A) treats the values along the first non-singleton dimension as vectors and returns an array of mean values.
C. STANDARD DEVIATION
The standard deviation measures how the data values are spread out around the mean. It can also be defined as the square root of the variance.
D. CO-VARIANCE
Covariance is a measure of how much two random variables change together. If the greater values of one variable mainly correspond with the greater values of the other variable, and the same holds for the smaller values, i.e. the variables tend to show similar behavior, the covariance is positive. In the opposite case, when the greater values of one variable mainly correspond to the smaller values of the other, i.e. the variables tend to show opposite behavior, the covariance is negative. The sign of the covariance therefore shows the tendency in the linear relationship between the variables.
E. CORRELATION CO-EFFICIENT (CC)
The correlation coefficient is a measure of the closeness or similarity of small-size structures between the original and the fused images. It can vary between -1 and +1: values closer to +1 indicate that the reference and fused images are highly similar, while values closer to -1 indicate that the images are highly dissimilar.
F. ROOT MEAN SQUARE ERROR (RMSE)
A commonly used reference-based assessment metric is the root mean square error (RMSE). The RMSE between a reference image R and a fused image F is given by the following equation:

RMSE = sqrt( (1 / (M · N)) · Σᵢ Σⱼ [R(i, j) − F(i, j)]² )

where R(i, j) and F(i, j) are the reference and fused images, respectively, and M and N are the image dimensions. The smaller the value of the RMSE, the better the performance of the fusion algorithm.
G. PEAK SIGNAL TO NOISE RATIO (PSNR)
PSNR is the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation. The PSNR of the fusion result is defined as follows:

PSNR = 20 · log₁₀( fmax / RMSE )

where fmax is the maximum gray-scale value of the pixels in the fused image. The higher the value of the PSNR, the better the performance of the fusion algorithm.
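The measures above can be sketched as small NumPy functions. These follow the usual conventions rather than any specific code from the paper; `fmax = 255` assumes 8-bit images.

```python
import numpy as np

def entropy(img):
    """Entropy H in bits, from the 256-bin gray-level histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # ignore empty gray levels
    return float(-np.sum(p * np.log2(p)))

def covariance(a, b):
    """Covariance between two images, flattened to vectors."""
    return float(np.cov(np.ravel(a), np.ravel(b))[0, 1])

def correlation_coefficient(ref, fused):
    """CC in [-1, +1]; values near +1 mean the images are highly similar."""
    return float(np.corrcoef(np.ravel(ref), np.ravel(fused))[0, 1])

def rmse(ref, fused):
    """Root mean square error between reference R and fused image F."""
    d = np.asarray(ref, float) - np.asarray(fused, float)
    return float(np.sqrt(np.mean(d ** 2)))

def psnr(ref, fused, fmax=255.0):
    """Peak signal-to-noise ratio in dB; higher is better."""
    e = rmse(ref, fused)
    return float("inf") if e == 0 else 20.0 * np.log10(fmax / e)
```

Note that a constant image has zero entropy, and PSNR diverges to infinity when the fused image matches the reference exactly.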
EXPERIMENTAL RESULTS AND COMPARISONS
Different images are taken as the source images, the different fusion methods are applied to fuse them, and the corresponding outputs are shown in the figures below.
(A) Experimental Results
(I) Take MRI and CT scanned images as the input images.
References |
- T. S. Anand, K. Narasimhan, P. Saravanan, "Performance Evaluation of Image Fusion Using the Multi-Wavelet and Curvelet Transforms", IEEE International Conference on Advances in Engineering, Science and Management (ICAESM-2012), March 30-31, 2012.
- Kulkarni, J. S., "Wavelet transform applications", 3rd International Conference on Electronics Computer Technology, vol. 1, pp. 11-17, 2011.
- Bhatnagar and Balasubramanian Raman, "A New Image Fusion Technique Based on Directive Contrast", Electronic Letters on Computer Vision and Image Analysis, vol. 8(2), pp. 18-38, 2009.
- Stavri N., Paul H., David B., Nishan C., "Wavelet for Image Fusion", Image Communication Group, U.K.
- R. Maruthi, K. Sankarasubramanian, "Multifocus Image based on the information level in the region of the images", JATIT, 2005-2007.
- Hongzhi Wang, Liying Qian, Jingtao Zhao, "An image denoising method based on fast discrete curvelet transform and Total Variation", International Conference on Signal Processing, pp. 1040-1043, 2010.
- Hai-Hui Wang, Jun Wang, Wei Wang, "Multispectral Image Fusion Approach based on GHM Multiwavelet Transform", Fourth International Conference on Machine Learning and Cybernetics, Guangzhou, vol. 8, pp. 5043-5049, 2005.
- Zhijun Wang, Djemel Ziou, Costas Armenakis, Deren Li, and Qingquan Li, "A Comparative Analysis of Image Fusion Methods", IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 6, pp. 1391-1402, June 2005.
- Yuanning Liu, Fei He, Yonghua Zhao and Ning Wang, "An Iris Recognition based on GHM multiwavelet transform", 2nd International Conference on Advanced Computer Control, vol. 2, pp. 264-268, 2010.
- Lahouari Ghouti, Ahmed Bouridane and Mohammad K. Ibrahim, "Improved Image Fusion Using Balanced Multiwavelets", Proceedings of the 12th European Signal Processing Conference, pp. 57-60, Sep. 2004.
- Ramesh, K. P., Gupta, S., Blasch, E. P., "Image Fusion Experiment for Information Content", Proc. 10th International Conference on Information Fusion, pp. 1-8, 2007.
- Wenquan Zhu, Xinhua Pang, Yaozhong Pan, Bin Jia and Xiaoqiong Yang, "A spectral preservation fusion method based on band ratio and weighted combination", Multispectral Image Processing, vol. 6787, 2007, doi: 10.1117/12.749474.