
Double Density DWT Based Fusion Technique for Mitigating Turbulence in Surveillance Videos

M. Nawaz Shareef1 and S. Swarna Latha2
  1. M.Tech Student, Dept. of Electronics & Communication Engineering, SVUCE, Tirupati, AP, India
  2. Associate Professor, Dept. of Electronics & Communication Engineering, SVUCE, Tirupati, AP, India

Abstract

In this paper, a novel turbulence mitigation algorithm is proposed based on a double-density discrete wavelet transform (DD-DWT) fusion technique. In this scheme, the DD-DWT is applied to the region of interest (ROI) of each selected frame of the video sequence to decompose it into sub-bands, and a fusion technique is then applied to each ROI frame so that the resulting output video is distortion free. We compare our method with a state-of-the-art approach, the DT-CWT-based fusion technique, and show that our technique yields better results.

Keywords

Double-density discrete wavelet transform (DD-DWT), dual-tree complex wavelet transform (DT-CWT), image restoration, quality metrics, region-level fusion.

INTRODUCTION

Atmospheric turbulence is a naturally occurring phenomenon that can severely degrade the visual quality of video during acquisition. Atmospheric distortions take several forms: fog or haze reduces contrast, while turbulence caused by temperature variations or aerosols produces blurring, wavering and warping of the objects in the scene. Video footage captured in public areas over long paths, in some cases up to 20 km, is affected by these distortions. Under strong turbulence, the imagery exhibits blurring, scintillation (small-scale intensity fluctuations) and shearing, which make it difficult to interpret the information behind the distorted layer. Turbulence arises from rapid micro-scale changes in the air's refractive index. When the ground is hotter than the air above it, the heated air forms horizontal layers; as the temperature difference between ground and air increases, these layers become thinner and move upwards, further perturbing the refractive index along the optical path. Hence, there has been significant research activity attempting to faithfully reconstruct the useful information in such footage using various methods.
We propose a new fusion method for reducing the effects of atmospheric turbulence. First, before fusion is applied, a subset of selected images or regions of interest (ROIs) must be registered. Because randomly distorted images do not provide reliable matching features, conventional feature-based registration cannot be used. Instead, we apply a morphological image-processing operation, namely erosion, to the ROI (or to the whole image), using only the most informative frames. These frames are selected with a quality metric based on sharpness, intensity similarity and ROI size.
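As an illustration only, the frame-selection step could be prototyped as in the sketch below. This is a minimal sketch and not the exact metric of the paper: it assumes OpenCV/NumPy, uses the variance of the Laplacian as the sharpness term, a simple normalised intensity difference against a reference (e.g. the temporal median) as the similarity term, and the ROI pixel count as the size term; all function names and weights are illustrative.

    import cv2
    import numpy as np

    def frame_quality(gray_roi, reference_roi):
        """Toy quality score for one ROI: sharper frames that are closer in
        intensity to the reference frame score higher."""
        sharpness = cv2.Laplacian(gray_roi, cv2.CV_64F).var()
        diff = np.mean(np.abs(gray_roi.astype(np.float64)
                              - reference_roi.astype(np.float64)))
        similarity = 1.0 / (1.0 + diff)
        size_term = np.log(1.0 + gray_roi.size)   # larger ROIs carry more information
        return sharpness * similarity * size_term

    def select_best_frames(rois, reference_roi, keep=10):
        """Return indices of the 'keep' most informative ROI frames."""
        scores = [frame_quality(r, reference_roi) for r in rois]
        return np.argsort(scores)[::-1][:keep]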

EXISTING METHOD

We then employ a region-based scheme to perform fusion at the feature level. This has advantages over pixel-based processing, since more intelligent semantic fusion rules can be considered based on actual features in the image, rather than on single or arbitrary groups of pixels. The fusion is performed in the Dual-Tree Complex Wavelet Transform (DT-CWT) domain, which employs two different real discrete wavelet transforms (DWT) to provide the real and imaginary parts of the CWT. Two fully decimated trees are produced, one for the odd samples and one for the even samples generated at the first level. This increases directional selectivity over the DWT and is able to distinguish between positive and negative orientations, giving six distinct sub-bands at each level. Additionally, the phase of a DT-CWT coefficient is robust to noise and temporal intensity variations, thereby providing an efficient tool for removing distorting ripples. Finally, the DT-CWT is near-shift-invariant, an important property for this application. After fusion, the effect of haze is reduced using locally-adaptive histogram equalization. For convenience, we refer to this algorithm as CLEAR (Complex wavelet fusion for Atmospheric turbulence). Details of each step in the algorithm are described below.
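A minimal sketch of the fusion idea is given below. It uses PyWavelets' critically sampled 2-D DWT as a stand-in for the DT-CWT (so it ignores the phase and shift-invariance properties discussed above), fuses two registered ROI frames with a simple choose-maximum-magnitude rule in the detail sub-bands, and finishes with OpenCV's CLAHE as the locally-adaptive histogram equalization step; the wavelet choice, level count and CLAHE parameters are assumptions, not the paper's settings.

    import cv2
    import numpy as np
    import pywt

    def fuse_two_frames(a, b, wavelet='db2', levels=3):
        """Fuse two registered grayscale frames: average the approximation
        band, keep the larger-magnitude coefficient in each detail band."""
        ca = pywt.wavedec2(a.astype(np.float64), wavelet, level=levels)
        cb = pywt.wavedec2(b.astype(np.float64), wavelet, level=levels)
        fused = [(ca[0] + cb[0]) / 2.0]                       # approximation band
        for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
            fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                               for x, y in ((ha, hb), (va, vb), (da, db))))
        out = pywt.waverec2(fused, wavelet)
        return np.clip(out, 0, 255).astype(np.uint8)

    # haze reduction with locally-adaptive histogram equalization (CLAHE)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    # enhanced = clahe.apply(fuse_two_frames(frame1, frame2))   # frame1, frame2: registered ROIs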
Capturing video in the presence of atmospheric turbulence, especially when using high-magnification lenses, may cause the ROI in each frame to become misaligned. The displacement between the distorted objects in successive frames may be too large for conventional image registration using non-rigid deformation to cope with. Equally, matching using feature detection is not suitable, since strong gradients within each frame are randomly distorted spatially. Hence, an approach using morphological image processing is proposed.
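The registration idea can be sketched as follows, under the assumption (not spelled out in the text) that once grayscale erosion suppresses the randomly displaced fine gradients, the remaining coarse structure is stable enough for a global shift estimate via phase correlation. This is an illustrative approximation, not the exact CLEAR procedure; kernel size and the use of cv2.phaseCorrelate are assumptions.

    import cv2
    import numpy as np

    def align_roi(roi, reference, kernel_size=5):
        """Estimate the global shift of 'roi' relative to 'reference' from
        their eroded versions, then compensate it with a translation warp."""
        kernel = np.ones((kernel_size, kernel_size), np.uint8)
        eroded_roi = cv2.erode(roi, kernel)
        eroded_ref = cv2.erode(reference, kernel)
        (dx, dy), _ = cv2.phaseCorrelate(eroded_roi.astype(np.float32),
                                         eroded_ref.astype(np.float32))
        # sign convention of the returned shift may need flipping,
        # depending on the OpenCV version in use
        warp = np.float32([[1, 0, dx], [0, 1, dy]])
        return cv2.warpAffine(roi, warp, (roi.shape[1], roi.shape[0]))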

PROPOSED MITIGATION SCHEME

A distinguished member of the family of overcomplete discrete wavelet transforms is the double-density (DD) DWT, which is based on an oversampled three-channel filter bank. The most important family of wavelets was discovered by Ingrid Daubechies and fully described in Daubechies (1992). This family is compactly supported with various degrees of smoothness. The formal derivation of Daubechies' wavelets goes beyond the scope of this paper, but the filter coefficients of some of its family members can be found from the following considerations.
For example, to derive the filter taps of a wavelet with N vanishing moments, or equivalently 2N filter taps, we use the following equations. The normalization property of the scaling function implies
Σn h0(n) = √2.
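For instance, the db2 filter (N = 2 vanishing moments, four taps) shipped with PyWavelets can be used to verify these properties numerically; this check is only illustrative and is not part of the original derivation.

    import numpy as np
    import pywt

    w = pywt.Wavelet('db2')
    h0 = np.array(w.dec_lo)                 # lowpass (scaling) filter, 4 taps
    h1 = np.array(w.dec_hi)                 # highpass (wavelet) filter, 4 taps
    n = np.arange(len(h1))

    print(h0.sum())                         # ~ sqrt(2)   (normalization)
    print(h1.sum(), (n * h1).sum())         # ~ 0, 0      (N = 2 vanishing moments)
    print(np.dot(h0, h0))                   # ~ 1         (orthonormality)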
The input signal is split into three channels, each decimated by a factor of two. The signal on the first channel is then processed by an identical filter bank, and so on at successive levels.
[Figure: iterated three-channel analysis filter bank of the double-density DWT]
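A minimal sketch of this three-channel, twofold-oversampled analysis is given below; the filters h0, h1 and h2 are assumed to be supplied from a published double-density design, since no standard Python package ships them.

    import numpy as np

    def dd_analysis_level(x, h0, h1, h2):
        """One 1-D analysis level of a double-density filter bank: filter the
        input with each of the three filters, then keep every second sample,
        producing three half-rate channels."""
        return [np.convolve(x, h, mode='full')[::2] for h in (h0, h1, h2)]

    def dd_dwt(x, h0, h1, h2, levels=3):
        """Iterate the filter bank on the lowpass channel only."""
        subbands = []
        low = np.asarray(x, dtype=np.float64)
        for _ in range(levels):
            low, b1, b2 = dd_analysis_level(low, h0, h1, h2)
            subbands.append((b1, b2))       # two bandpass channels per level
        return low, subbands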
The DD-DWT is expansive by a factor of two compared to the critically sampled DWT. A dual tree (DT) [2, 3] is formed by two wavelet transforms processing the same input signal and satisfying a certain relationship: one of the wavelets is an approximate Hilbert transform of the other.
The DT-DWT has several appealing properties, such as near shift invariance and directional selectivity in higher dimensions. Designed initially for the critically sampled DWT, the dual-tree concept can be extended to other types of DWTs. The conditions for two DD-DWTs to form a dual tree are as follows. Consider two filter banks with this structure, one (the primal) defined by the filters H0(z), H1(z) and H2(z), the other (the dual) defined by the filters G0(z), G1(z) and G2(z).
H0(z)H0(1/z) + H1(z)H1(1/z) + H2(z)H2(1/z) = 2
H0(z)H0(-1/z) + H1(z)H1(-1/z) + H2(z)H2(-1/z) = 0
Both the double-density DWT and the dual-tree DWT have their own distinct characteristics and advantages, and as such it was natural to combine the two into one transform, called the double-density complex DWT. To combine the properties of the double-density and dual-tree transforms we ensure that: (i) one pair of the four wavelets is offset from the other pair, so that the integer translates of one wavelet pair fall midway between the integer translates of the other pair; and (ii) one wavelet pair is designed to be the approximate Hilbert transform of the other pair. By doing this, we are able to use the double-density complex wavelet transform to implement complex and directional wavelet transforms.
To implement the double-density dual-tree transform, we must first design an appropriate filter bank structure, one that combines the characteristics of the double-density and dual-tree DWTs. The filter bank structure associated with the double-density DWT was described above, so we now turn to the dual-tree DWT, which is based on two critically sampled DWTs iterated in parallel. Consequently, the filter bank structure corresponding to the double-density complex DWT consists of two oversampled iterated filter banks operating in parallel on the same input data. The iterated oversampled filter bank pair, corresponding to the simultaneous implementation of the double-density and dual-tree DWTs, is shown below.
[Figure: iterated oversampled filter bank pair implementing the double-density dual-tree DWT]
There are two separate filter banks, denoted by hi(n) and gi(n), where i = 0, 1, 2. The filter banks hi(n) and gi(n) are designed in a specific way, so that the sub-band signals of the upper DWT can be interpreted as the real part of a complex wavelet transform and the sub-band signals of the lower DWT as the imaginary part. Equivalently, for specially designed sets of filters, the wavelets associated with the upper DWT are approximate Hilbert transforms of the wavelets associated with the lower DWT. When designed in this way, the double-density complex DWT can be used to implement 2-D oriented wavelet transforms, which are especially efficient in image processing. (Recall that for the double-density DWT, four out of the eight 2-D wavelets do not have a dominant orientation.) Because of this, the double-density complex DWT is expected to outperform the double-density DWT in applications such as image denoising and enhancement. The FIR perfect-reconstruction filter banks designed for this purpose are implemented using one set of filters for the first stage and a second set for the remaining stages. The filter bank for the remaining stages is designed such that the analysis filters of the first tree are the synthesis filters of the second tree, and vice versa; one filter set provides the analysis and synthesis filters for the first stage of the double-density oriented DWT, while the other provides them for the remaining stages of the transform.
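To illustrate the structure only, the double-density dual-tree analysis can be sketched as two such oversampled filter banks run in parallel on the same input, with a different filter set for the first stage than for the later stages, as described above. The helper dd_analysis_level() is the one defined earlier, and all filter arrays are assumed to come from an appropriate published design.

    import numpy as np

    def dd_dt_forward(x, h_first, h_later, g_first, g_later, levels=3):
        """Parallel double-density analysis with two filter sets: tree h gives
        the 'real' sub-bands, tree g the 'imaginary' ones. Each of h_* and g_*
        is a 3-tuple (f0, f1, f2); the first-stage filters differ from those
        used at the remaining stages."""
        low_h = low_g = np.asarray(x, dtype=np.float64)
        bands_h, bands_g = [], []
        for level in range(levels):
            fh = h_first if level == 0 else h_later
            fg = g_first if level == 0 else g_later
            low_h, b1h, b2h = dd_analysis_level(low_h, *fh)
            low_g, b1g, b2g = dd_analysis_level(low_g, *fg)
            bands_h.append((b1h, b2h))
            bands_g.append((b1g, b2g))
        return (low_h, bands_h), (low_g, bands_g)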

RESULTS AND DISCUSSION


CONCLUSION

In terms of image enhancement, the double-density dual-tree complex wavelet transform performed much better at suppressing noise than the double-density wavelet transform. However, to improve performance further it is necessary to use a different threshold for each sub-band, because the wavelets associated with different sub-bands of this transform have different norms. In this scheme, the DD-DWT is applied to the ROI of each selected frame of the video sequence to decompose it into sub-bands, and the fusion technique is then applied to each ROI frame so that the output video is distortion free.
We compared our method with a state-of-the-art approach, the DT-CWT-based fusion technique, and showed that our technique yields better results.

References

  1. L. C. Andrews, R. L. Phillips, C. Y. Hopen, and M. A. Al-Habash, “Theory of optical scintillation,” J. Opt. Soc. Amer. A, vol. 16, no. 6, pp. 1417–1429, Jun. 1999.
  2. H. S. Rana, “Toward generic military imaging adaptive optics,” Proc. SPIE, vol. 7119, p. 711904, Sep. 2008.
  3. Advanced Electro-Optical Systems. (2012) [Online]. Available:
  4. B. Davey, R. Lane, and R. Bates, “Blind deconvolution of noisy complex-valued image,” Opt. Commun., vol. 69, nos. 5–6, pp. 353–356, 1989.
  5. E. Y. Lam and J. W. Goodman, “Iterative statistical approach to blind image deconvolution,” J. Opt. Soc. Amer. A, vol. 17, no. 7, pp. 1177–1184, Jul. 2000.
  6. J. Delport, “Scintillation mitigation for long-range surveillance video,” in Proc. Sci. Real Relevant Conf., 2010, pp. 1–8.
  7. X. Zhu and P. Milanfar, “Removing atmospheric turbulence via space-invariant deconvolution,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 1, pp. 157–170, Jan. 2013.
  8. N. Joshi and M. Cohen, “Seeing Mt. Rainier: Lucky imaging for multi-image denoising, sharpening, and haze removal,” in Proc. IEEE Int. Conf. Comput. Photography, Mar. 2010, pp. 1–8.
  9. R. Gregory, “A technique of minimizing the effects of atmospheric disturbance on photographic telescopes,” Nature, vol. 203, pp. 274–295, May 1964.
  10. Z. Wen, D. Fraser, and A. Lambert, “Bicoherence used to predict lucky regions in turbulence affected surveillance,” in Proc. IEEE Int. Conf. Video Signal Based Surveill., Nov. 2006, p. 108.
  11. M. Aubailly, M. Vorontsov, G. Carhart, and M. Valley, “Automated video enhancement from a stream of atmospherically-distorted images: The lucky-region fusion approach,” Proc. SPIE, vol. 7463, p. 74630C, Aug. 2009.
  12. P. J. Kent, S. B. Foulkes, J. G. Burnett, S. C. Woods, and A. J. Turner, “Progress toward a real-time active lucky imaging system,” in Proc. Tech. Conf. Electro Magn. Remote Sens., 2010, pp. B3.1–B3.8.
  13. I. Selesnick, R. Baraniuk, and N. Kingsbury, “The dual-tree complex wavelet transform,” IEEE Signal Process. Mag., vol. 22, no. 6, pp. 123–151, Nov. 2005.
  14. Z. Wang, H. Sheikh, and A. Bovik, “No-reference perceptual quality assessment of JPEG compressed images,” in Proc. Int. Conf. Image Process., vol. 1, 2002, pp. 477–480.
  15. H. Sheikh, A. Bovik, and L. Cormack, “No-reference quality assessment using natural scene statistics: JPEG2000,” IEEE Trans. Image Process., vol. 14, no. 11, pp. 1918–1927, Nov. 2005.