Gagandeep Kaur1, Anand Kumar Mittal2
ABSTRACT
With the availability of multi-sensor data in many fields, image fusion has been receiving increasing attention in research for a wide spectrum of applications. Image fusion is the process of combining information from multiple images of the same scene. These images may be captured by different sensors, acquired at different times, or have different spatial and spectral characteristics. The discrete wavelet transform (DWT) is performed on the source images, because DWT is the basic and simplest of the many multi-scale transforms, and other wavelet-based fusion schemes are usually similar to the DWT fusion scheme. In this paper, a hybrid method for image fusion is proposed. The method combines the DCT (Discrete Cosine Transform) with a variance-based rule and is compared with a hybrid DWT method.
Keywords
DCT, DWT, PCA
I. INTRODUCTION
Image fusion is the process of combining information from multiple images of the same scene. These images may be captured by different sensors, acquired at different times, or have different spatial and spectral characteristics. The objective of image fusion is to retain the most desirable characteristics of each image. It is basically a process of combining the relevant information from a set of images into a single image, where the resultant fused image is more informative and complete than any of the input images. Image fusion techniques can improve the quality and increase the applicability of these data.
Image fusion is a useful technique for merging single-sensor and multi-sensor images to enhance the information they carry. The objective of image fusion is to combine information from multiple images in order to produce an image that delivers only the useful information. Discrete cosine transform (DCT) based methods of image fusion are more suitable and time-saving in real-time systems. In this paper, an efficient approach for the fusion of multi-focus images is presented, based on the variance calculated in the DCT domain.
A. SINGLE SENSOR IMAGE FUSION SYSTEM
The basic single-sensor image fusion scheme is presented in Figure 1. The sensor shown could be a visible-band sensor or some matching-band sensor. This sensor captures the real world as a sequence of images. The sequence of images is then fused to generate a new image with optimum information content. For example, in an illumination-variant and noisy environment, a human operator may not be able to detect objects of interest that can be highlighted in the resultant fused image.
B. MULTI-SENSOR IMAGE FUSION SYSTEM
A multi-sensor image fusion scheme overcomes the limitations of single-sensor fusion by merging the images from several sensors to form a composite image. Figure 2 illustrates a multi-sensor image fusion system. Here, an infrared camera accompanies the digital camera, and their individual images are merged to obtain a fused image. The digital camera is suitable for daylight scenes; the infrared camera is appropriate in poorly illuminated environments.
II. IMAGE FUSION CATEGORIES
1. Multimodal Images: Multimodal image fusion is applied to images coming from different modalities, such as visible and infrared, CT and NMR, or panchromatic and multispectral satellite images. The goal of a multimodal image fusion system is to decrease the amount of data and to emphasize band-specific information.
2. Multifocal Images: In digital camera applications, when a lens focuses on a subject at a certain distance, only subjects at that distance are sharply focused. A possible way to solve this problem is image fusion: one acquires a series of pictures with different focus settings and fuses them to produce a single image with an extended depth of field.
3. Multi-view Images: In multi-view image fusion, a set of images of the same scene taken by the same sensor but from different viewpoints, or several 3D acquisitions of the same specimen taken from different viewpoints, are fused to obtain an image with higher resolution.
4. Multi-Temporal Images: In multi-temporal image fusion, images taken at different times (seconds to years apart) are fused into a single image in order to detect changes between them.
III. IMAGE FUSION METHODS
1. Spatial domain fusion methods: In spatial domain techniques, we deal directly with the image pixels. The pixel values are manipulated to achieve the desired result.
2. Transform domain fusion methods: In transform domain methods, the image is first transformed into the frequency domain, and fusion is performed on the transform coefficients.
IV. IMAGE FUSION TECHNIQUES/ALGORITHMS
1. Simple Average: It is a well-documented fact that regions of an image that are in focus tend to have higher pixel intensity. This algorithm is thus a simple way of obtaining an output image with all regions in focus. The values of the pixel P(i, j) of each image are taken and added, and the sum is divided by 2 to obtain the average. The average value is assigned to the corresponding pixel of the output image, as given by the equation K(i, j) = {X(i, j) + Y(i, j)}/2, where X(i, j) and Y(i, j) are the two input images. This is repeated for all pixel values.
2. Select Maximum: The greater the pixel value, the more in focus the image. This algorithm therefore chooses the in-focus regions from each input image by selecting the greater value for each pixel, resulting in a highly focused output. The value of the pixel P(i, j) of each image is taken and compared; the greater pixel value is assigned to the corresponding output pixel.
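As a minimal illustration of the two pixel-level rules above, the sketch below (written in Python with NumPy as an assumed dependency) takes two registered, equal-size grayscale images; the function names are illustrative only.

    import numpy as np

    def fuse_average(x, y):
        # Pixel-wise average: K(i, j) = (X(i, j) + Y(i, j)) / 2
        return (x.astype(np.float64) + y.astype(np.float64)) / 2.0

    def fuse_select_maximum(x, y):
        # Pixel-wise maximum: keep the larger (assumed more in-focus) value
        return np.maximum(x, y)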
3. Brovey Transform (BT): Brovey transform (BT), also known as color-normalized fusion, is based on the chromaticity transform and the concept of intensity modulation. The basic procedure of the Brovey transform first multiplies each MS band by the high-resolution PAN band, and then divides each product by the sum of the MS bands.
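A minimal sketch of the Brovey rule just described, assuming a co-registered (H, W, 3) multispectral array ms and an (H, W) panchromatic array pan resampled to the same grid; the small eps guard against division by zero is an implementation detail of this sketch, not part of the method.

    import numpy as np

    def brovey_fusion(ms, pan, eps=1e-6):
        ms = ms.astype(np.float64)
        pan = pan.astype(np.float64)
        band_sum = ms.sum(axis=2) + eps           # sum of the MS bands
        # Each MS band is multiplied by PAN and divided by the band sum
        return ms * (pan / band_sum)[..., None]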
4. IHS (Intensity-Hue-Saturation) Transform: IHS-based fusion converts the multispectral image into intensity, hue and saturation components and replaces the intensity component with the high-resolution PAN image. The basic steps are:
i. Perform image registration (IR) on PAN and MS, and resample MS.
ii. Convert MS from RGB space into IHS space.
iii. Match the histogram of PAN to the histogram of the I component.
iv. Replace the I component with PAN.
v. Convert the fused MS back to RGB space.
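The substitution in steps ii-v can be sketched with the linear IHS model, where I = (R + G + B)/3 and replacing I by PAN is equivalent to adding (PAN - I) to every band (the so-called fast IHS fusion). Histogram matching (step iii) is omitted for brevity, so this is only an approximation of the full procedure.

    import numpy as np

    def ihs_substitution_fusion(ms, pan):
        ms = ms.astype(np.float64)            # registered (H, W, 3) MS image
        pan = pan.astype(np.float64)          # (H, W) PAN image
        intensity = ms.mean(axis=2)           # I component of the linear IHS model
        # Replacing I with PAN and inverting the transform reduces to this:
        return ms + (pan - intensity)[..., None]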
5. Principal Component Analysis (PCA) Technique: Principal component analysis is a subspace method that reduces multidimensional data sets to lower dimensions for analysis. PCA involves a mathematical procedure that transforms a number of correlated variables into a number of uncorrelated variables called principal components. PCA is also called the Karhunen-Loève transform or the Hotelling transform.
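A common way to realize PCA-based fusion of two source images is to weight them by the leading eigenvector of their joint covariance matrix. The sketch below follows that scheme; it is one possible reading of PCA fusion, not a unique definition.

    import numpy as np

    def pca_fusion(x, y):
        data = np.stack([x.ravel(), y.ravel()]).astype(np.float64)  # 2 x N samples
        cov = np.cov(data)                        # 2 x 2 covariance matrix
        vals, vecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
        v = np.abs(vecs[:, np.argmax(vals)])      # principal component direction
        w = v / v.sum()                           # normalized fusion weights
        return w[0] * x.astype(np.float64) + w[1] * y.astype(np.float64)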
6. Discrete Wavelet Transform (DWT): Wavelet transforms are multi-resolution image decomposition tools that provide a variety of channels representing image features in different frequency sub-bands. The wavelet transform is a well-known technique for analyzing signals. The 2-D discrete wavelet transform (DWT) converts the image from the spatial domain to the frequency domain. At the first level of the DWT, the image is divided by vertical and horizontal filtering into four parts: LL1, LH1, HL1 and HH1 [i]. The most important step for fusion is the formation of the fusion pyramid, and it is difficult to decide on a uniform standard for the fusion principle.
7. Wavelet based image fusion: The standard image fusion techniques, such as the IHS-based method, the PCA-based method and the Brovey transform method, operate in the spatial domain. However, spatial domain fusion may produce spectral degradation. It has been found that wavelet-based fusion techniques outperform the standard fusion techniques in spatial and spectral quality, especially in minimizing color distortion. Schemes that combine the standard methods (IHS or PCA) with wavelet transforms produce better results than either the standard methods or simple wavelet-based methods alone. However, the trade-off is higher complexity and cost.
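A one-level DWT fusion along the lines of items 6 and 7 can be sketched with the PyWavelets package (an assumed dependency). The rule below averages the LL1 approximations and keeps the detail coefficient of larger magnitude; this is only one of many possible fusion pyramids.

    import numpy as np
    import pywt

    def dwt_fusion(x, y, wavelet="db1"):
        # One-level 2-D DWT: approximation LL1 and details (LH1, HL1, HH1)
        cA1, (cH1, cV1, cD1) = pywt.dwt2(x.astype(np.float64), wavelet)
        cA2, (cH2, cV2, cD2) = pywt.dwt2(y.astype(np.float64), wavelet)

        def pick(a, b):
            # Keep the coefficient with the larger absolute value
            return np.where(np.abs(a) >= np.abs(b), a, b)

        fused = ((cA1 + cA2) / 2.0,
                 (pick(cH1, cH2), pick(cV1, cV2), pick(cD1, cD2)))
        return pywt.idwt2(fused, wavelet)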
V. MOTIVATION
The aim of this research is to study the concept of image fusion in image processing. The discrete wavelet transform (DWT) is performed on the source images, because DWT is the basic and simplest of the many multi-scale transforms, and other wavelet-based fusion schemes are usually similar to the DWT fusion scheme.
The research is based on the following objectives:
1. The objective of image fusion is to represent relevant information from multiple individual images in a single image.
2. Apply the discrete wavelet transform to intensity images.
3. Combine multiple image signals into a single fused image using wavelet techniques.
4. Calculate the parameters Euclidean distance, PSNR and MSE (a brief sketch of these metrics follows this list).
5. Improve image fusion using three approaches: DCT, PCA and DSWT.
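For reference, the evaluation quantities named in objective 4 can be computed as below; this sketch assumes 8-bit grayscale images and is not specific to the proposed method.

    import numpy as np

    def mse(a, b):
        return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

    def psnr(a, b, peak=255.0):
        m = mse(a, b)
        return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

    def euclidean_distance(a, b):
        return np.sqrt(np.sum((a.astype(np.float64) - b.astype(np.float64)) ** 2))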
VI. RESEARCH METHODOLOGY
A. DISCRETE STATIONARY WAVELET TRANSFORM (DSWT)
The discrete stationary wavelet transform (DSWT) transforms a discrete-time signal into a discrete wavelet representation. Image multi-resolution analysis was introduced by Mallat for the decimated (critically sub-sampled) case. The DSWT has been extensively employed for remote sensing data fusion: couples of sub-bands with corresponding frequency content are merged together, and the fused image is synthesized by taking the inverse transform. Fusion schemes based on the 'à trous' wavelet algorithm and Laplacian pyramids (LP) have also been proposed in the literature. Unlike the decimated DWT, which is critically sub-sampled, the 'à trous' wavelet and the LP are oversampled.
Image fusion is implemented by a two-dimensional discrete wavelet transform. The resolution of an image, which is a measure of the amount of detail information in the image, is changed by the filtering operations of the wavelet transform, and the scale is changed by sampling. The DSWT analyses the image in different frequency bands at different resolutions by decomposing the image into coarse approximation and detail coefficients (Gonzalez and Woods, 1998).
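A minimal sketch of a stationary (undecimated) wavelet fusion using PyWavelets' swt2/iswt2 (an assumed dependency); the image sides must be divisible by 2**level, and the average/maximum rule is just one illustrative choice.

    import numpy as np
    import pywt

    def dswt_fusion(x, y, wavelet="db1", level=1):
        # Undecimated decomposition: list of (cA, (cH, cV, cD)) per level
        c1 = pywt.swt2(x.astype(np.float64), wavelet, level=level)
        c2 = pywt.swt2(y.astype(np.float64), wavelet, level=level)

        def pick(a, b):
            return np.where(np.abs(a) >= np.abs(b), a, b)

        fused = []
        for (a1, (h1, v1, d1)), (a2, (h2, v2, d2)) in zip(c1, c2):
            fused.append(((a1 + a2) / 2.0,                           # average approximations
                          (pick(h1, h2), pick(v1, v2), pick(d1, d2))))  # max-abs details
        return pywt.iswt2(fused, wavelet)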
B. FUSION RULES
Fusion rules determine how the source transforms will be combined:
- Fusion rules may be application dependent.
- Fusion rules can be the same for all sub-bands or depend on which sub-band is being fused.
There are two basic steps in determining the rules:
- compute salience measures corresponding to the individual source transforms;
- decide how to combine the coefficients after comparing the salience measures (selection or averaging).
Other rules involve more complicated operations, such as energy or edge measures. These methods require spatial filtering, such as an energy filter or a Laplacian edge operator.
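As an illustration of an energy-based salience measure, the sketch below compares the local energy of two corresponding sub-bands inside a small sliding window and selects the coefficient from the more salient source; the window size and the uniform filter are assumptions of this example.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def fuse_by_local_energy(band1, band2, window=3):
        # Local energy: windowed mean of squared coefficients
        e1 = uniform_filter(band1.astype(np.float64) ** 2, size=window)
        e2 = uniform_filter(band2.astype(np.float64) ** 2, size=window)
        # Select the coefficient whose neighbourhood has higher energy
        return np.where(e1 >= e2, band1, band2)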
The discrete cosine transform (DCT) is important in numerous applications in science and engineering, and in image compression standards such as MPEG. The DCT converts a spatial-domain image into the frequency domain. Figure 4.4 shows the process flow diagram for DCT-based fusion. The images to be fused are divided into blocks of size N×N, the DCT coefficients of each block are computed, and fusion rules are applied to obtain the fused DCT coefficients. The inverse DCT (IDCT) is then applied to produce the fused image [13].
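A hedged sketch of a block-DCT fusion with a variance-based selection rule, in the spirit of the process just described; the 8x8 block size, the whole-block selection rule, and the assumption that the image sides are multiples of the block size are choices of this sketch, not necessarily those of the paper.

    import numpy as np
    from scipy.fft import dctn, idctn

    def dct_variance_fusion(x, y, n=8):
        # Assumes grayscale inputs whose sides are multiples of n
        x = x.astype(np.float64)
        y = y.astype(np.float64)
        out = np.zeros_like(x)
        for i in range(0, x.shape[0], n):
            for j in range(0, x.shape[1], n):
                cx = dctn(x[i:i + n, j:j + n], norm="ortho")
                cy = dctn(y[i:i + n, j:j + n], norm="ortho")
                # For the orthonormal DCT, the block variance equals the
                # AC energy divided by the block size: (sum c^2 - DC^2) / n^2
                var_x = (np.sum(cx ** 2) - cx[0, 0] ** 2) / (n * n)
                var_y = (np.sum(cy ** 2) - cy[0, 0] ** 2) / (n * n)
                out[i:i + n, j:j + n] = idctn(cx if var_x >= var_y else cy,
                                              norm="ortho")
        return out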
Principal component analysis (PCA) is an important statistical tool that transforms multivariate data with correlated variables into data with uncorrelated variables [5]. PCA is used widely in all forms of analysis, from neuroscience to computer graphics, because it is a simple, non-parametric method of extracting relevant information from confusing data sets. This technique is applied to the multispectral bands.
C. FLOWCHART
VII. RESULTS
In this section, the proposed method has been implemented and the results obtained are presented.
VIII. CONCLUSION AND FUTURE SCOPE
In this work, we proposed a new hybrid method for image fusion. The method combines the DCT (Discrete Cosine Transform) with a variance-based rule, and we compared it with a hybrid DWT (Discrete Wavelet Transform) method. The proposed technique gives much better results in terms of PSNR and MSE than the other existing techniques compared. This modified method is efficient for many applications in image processing.
So far, the new hybrid technique we have built for image fusion works only on two images. In the future, fusion could be performed on two videos, on one video and one image, or on an audio signal and an image.
ACKNOWLEDGMENT
The paper has been written with the kind assistance, guidance and active support of my department, which has helped me in this work. I would like to thank all the individuals whose encouragement and support have made the completion of this work possible.