
Image Compression Using PBO

Vikas Gupta1, Dhanvir Kaur2 and SimratPal Kaur3
  1. Assistant Professor, Dept. of ECE, Adesh Institute of Engineering & Technology, Faridkot, Punjab, India
  2. PG Scholar, Dept. of IT, Adesh Institute of Engineering & Technology, Faridkot, Punjab, India
  3. Assistant Professor, Dept. of IT, Adesh Institute of Engineering & Technology, Faridkot, Punjab, India

Abstract

The basic goal of image data compression is to reduce the bit rate for transmission and storage while either maintaining the original quality or providing an acceptable fidelity. JPEG is one of the most widely used techniques in image compression technology. JPEG is primarily a lossy method of compression: it converts the image from the spatial domain into the frequency domain using the DCT, which allows a highly compressed image to be produced. In this paper a new PBO based JPEG compression scheme is proposed to compress images and reduce their size. Four metrics, namely peak signal to noise ratio, mean squared error, entropy and the EPI value, are used to measure performance and to compare and analyze the results. The proposed model focuses on reducing the size of the image and the time elapsed in compression, with minimum distortion in the reconstructed image, and is practically implemented in the MATLAB 7.11.0 environment. The aim of compression is to achieve a good quality compressed image, making storage and transmission more efficient. The proposed method is implemented on a set of test images. The PBO based JPEG implementation obtains a higher PSNR value, and the higher the PSNR value, the better the quality of the image. A higher PSNR is obtained for images compressed with PBO based JPEG compression than with JPEG compression, which shows that PBO based JPEG compression is better than the JPEG technique for image compression.

Keywords

Image, Image Compression, JPEG, PBO

INTRODUCTION

Digital image processing is the use of computer algorithms to perform image processing on digital images. Interest in digital image processing methods stems from two principal application areas: improvement of pictorial information for human interpretation; and processing of image data for storage, transmission, and representation for autonomous machine perception. Image compression systems are composed of two distinct structural blocks: an encoder and a decoder.
Figure 1: Block diagram of a general image compression system, consisting of an encoder and a decoder.
As shown in Figure 1, the encoder is responsible for reducing the coding, interpixel and psychovisual redundancies of the input image. In the first stage, the mapper transforms the input image into a format designed to reduce interpixel redundancies. In the second stage, the quantizer block reduces the accuracy of the mapper's output in accordance with a predefined criterion. In the third and final stage, a symbol coder creates a code for the quantizer output and maps the output in accordance with that code. The decoder blocks perform, in reverse order, the inverse operations of the encoder's symbol coder and mapper blocks; since quantization is irreversible, an inverse quantizer is not included. In its raw state an image can occupy a large amount of memory, both in RAM and in storage. Image compression reduces the storage space required by an image and the bandwidth needed when streaming that image across a network.
The rest of the paper is organized as a sequence of sections. Section 1 introduces the basic fundamentals of image processing, image compression and the techniques available for image compression, and gives an overview of the proposed algorithm implemented in this research work. Section 2 provides a comprehensive survey of the research carried out during previous years and points to the grey areas where more work can be done to optimize the quality of image compression. The problem formulation is discussed in Section 3; the problem has been derived from the literature discussed in Section 2. The implementation of the research work is discussed in Section 4, which provides information regarding the software and the technique used. Section 5 presents the results, conclusions and the future scope.
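To make the three encoder stages described above concrete, the sketch below implements a toy version of the pipeline in MATLAB (the environment used later in this work): the mapper is a whole-image DCT, the quantizer is a uniform rounding step, and the symbol coder is a simple run-length coder. The test image, the step size q and the run-length scheme are illustrative assumptions, not the scheme proposed in this paper.

  % Toy three-stage encoder: mapper -> quantizer -> symbol coder
  % (assumes the Image Processing Toolbox for imread/dct2/idct2).
  img = im2double(imread('cameraman.tif'));    % sample grayscale image
  C   = dct2(img);                             % stage 1: mapper (reduces interpixel redundancy)
  q   = 0.02;                                  % arbitrary quantizer step size for the example
  Cq  = round(C / q);                          % stage 2: quantizer (irreversible)

  v       = Cq(:).';                           % stage 3: symbol coder (run-length coding)
  runEnds = [find(diff(v) ~= 0), numel(v)];    % positions where runs of equal values end
  runLens = diff([0, runEnds]);                % length of each run
  runVals = v(runEnds);                        % value of each run
  fprintf('Coefficients: %d, run-length symbols: %d\n', numel(v), numel(runVals));

  % Decoder: inverse symbol coding, rescaling by q, inverse DCT.
  % The rounding loss introduced by the quantizer cannot be undone.
  vRec = zeros(1, numel(v));
  idx  = 1;
  for k = 1:numel(runVals)
      vRec(idx:idx+runLens(k)-1) = runVals(k);
      idx = idx + runLens(k);
  end
  rec = idct2(reshape(vRec, size(Cq)) * q);
  fprintf('Reconstruction PSNR: %.2f dB\n', 10*log10(1 / mean((img(:) - rec(:)).^2)));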

LITERATURE SURVEY

Wide research has been carried out in the past on image processing.
Hanhart (2013), in "Subjective evaluation of HEVC intra coding for still image compression," demonstrated a significant improvement in the compression efficiency of High Efficiency Video Coding (HEVC) compared to H.264/MPEG-4 AVC, especially for video with resolution beyond HD, such as 4K UHDTV. One advantage of HEVC is the improved intra coding of video frames; hence it is natural to ask how such intra coding compares with state-of-the-art compression codecs for still images. The paper answers this question by providing a detailed analysis and performance comparison of HEVC intra coding with JPEG and JPEG 2000 (both 4:2:0 and 4:4:4 configurations) via a series of subjective and objective evaluations. The evaluation results demonstrate that HEVC intra coding outperforms standard still-image codecs, with average bit rate reductions ranging from 16% (compared to JPEG 2000 4:4:4) up to 43% (compared to JPEG). These findings imply that both still images and moving pictures can be efficiently compressed by the same coding algorithm with higher compression efficiency.
Hurtik (2013), in "Image Compression Methodology Based on Fuzzy Transform," aimed to develop an effective algorithm for image compression. The authors use both lossy and lossless compression to achieve the best result. Their compression technique is based on the direct and inverse fuzzy transform (F-transform), modified to work with a dynamical fuzzy partition. The essential features of the proposed algorithm are edge extraction, automatic thresholding and histogram adjustment. The article compares the algorithm with JPEG and with other existing algorithms based on the fuzzy transform.
Hagag (2013), in "Multispectral image compression with band ordering and wavelet transforms," presented a new compression technique aimed at reducing the storage size of multispectral images while maintaining high-quality reconstruction. An optimal multispectral band ordering process is applied before compression; then the dual-tree discrete wavelet transform is used in the spectral dimension and the 2D discrete wavelet transform in the spatial dimensions. Finally, a simple Huffman coder is used for compression. Landsat ETM+ images are used for experimentation. Experimental results demonstrate that the proposed technique performs better than JPEG, JPEG2000, SPIHT, and JPEG2000 with a 3D dual-tree transformation.
Jeevan (2013), in "Performance Comparison of DCT Based Image Compression on Hexagonal and Rectangular Sampling Grid," noted that the advantages of processing images on a hexagonal lattice are a higher degree of circular symmetry, uniform connectivity, greater angular resolution, and a reduced need for storage and computation in image processing operations. In that work, DCT-based image compression is performed on both the rectangular and hexagonal domains using an alternate pixel suppressal method, with mean squared error and peak signal to noise ratio considered for the performance analysis. Compression on the hexagonal domain gives better results than compression on the rectangular domain.
Ding (2013), in "Two-Dimensional Orthogonal DCT Expansion in Trapezoid and Triangular Blocks and Modified JPEG Image Compression," observed that in the conventional JPEG algorithm an image is divided into 8 by 8 blocks and the 2-D DCT is applied to encode each block. The authors find that, in addition to rectangular blocks, the 2-D DCT is also orthogonal in trapezoid and triangular blocks. Therefore, instead of 8 by 8 blocks, the JPEG algorithm can be generalized by dividing an image into trapezoid and triangular blocks according to the shapes of objects, achieving a higher compression ratio. Compared with existing shape-adaptive compression algorithms, since they do not try to match the shape of each object exactly, fewer bytes are needed for encoding the edges and the error caused by the high-frequency components at the boundary can be avoided. Simulations show that, at a fixed bit rate, the proposed algorithm achieves higher PSNR than the JPEG algorithm and other shape-adaptive algorithms. Furthermore, in addition to the 2-D DCT, their method can also generate the 2-D complete and orthogonal sine basis, Hartley basis, Walsh basis, and discrete polynomial basis in a trapezoid or triangular block.
Sreelakshmi (2013), in "Image compression using anti-forensics method," noted that a large number of image forensics methods are capable of identifying image tampering, but these techniques cannot address anti-forensics methods that hide the traces of tampering. In that paper an anti-forensics method for digital image compression is proposed which is capable of removing the traces of image compression. Additionally, the technique can remove the traces of blocking artifacts left by compression algorithms that divide an image into segments during the compression process. The method is targeted at removing the compression fingerprints of JPEG compression.

PROBLEM FORMULATION

Image compression is of two types, lossy and lossless. Different researchers have contributed work in the field of compression. The research problem addressed in this work is:
• To study JPEG image compression and enhance the results for lossy images.
• To compare the results in terms of PSNR, MSE, Entropy and EPI with the implementation of JPEG compression using Pollination Based Optimization (PBO).
PBO based JPEG compression is a new technique with which the results of compression are expected to be far better in comparison with effective lossless compression.
The purpose of this research work is to enhance the results for lossy images and to compare the results reported in previous research papers with the newly proposed and implemented technique, i.e. PBO. The problem taken up for this research work is divided into the following objectives:
• To reduce the size of compressed image.
• To reduce the elapsed time of compression by using PBO.
• Completing the above two objectives with minimum distortion in the reconstructed image.
• Comparing the techniques on the basis of parameters such as PSNR, MSE, entropy and EPI (a sketch of how these parameters can be computed is given after this list).
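The following MATLAB sketch shows one way these four parameters can be computed for an original image X and a reconstructed image Y. The file names are placeholders, and the EPI expression used here is the common correlation-of-Laplacians form of the edge preservation index; the paper does not state its exact formula, so this is an assumption.

  % Quality metrics for an original image X and a reconstructed image Y.
  % Both are assumed to be grayscale images of the same size with values in 0..255.
  X = double(imread('original.png'));         % placeholder file names
  Y = double(imread('reconstructed.png'));

  % Mean squared error and peak signal to noise ratio (8-bit peak value 255)
  mseVal  = mean((X(:) - Y(:)).^2);
  psnrVal = 10 * log10(255^2 / mseVal);

  % Entropy of the reconstructed image (bits per pixel) from its histogram
  p = histc(Y(:), 0:255);
  p = p(p > 0) / numel(Y);
  entVal = -sum(p .* log2(p));

  % Edge preservation index: correlation between Laplacian-filtered versions
  % of X and Y (one common definition, assumed here)
  lap = [0 1 0; 1 -4 1; 0 1 0];
  dX  = conv2(X, lap, 'same');  dY = conv2(Y, lap, 'same');
  dX  = dX - mean(dX(:));       dY = dY - mean(dY(:));
  epiVal = sum(dX(:) .* dY(:)) / sqrt(sum(dX(:).^2) * sum(dY(:).^2));

  fprintf('MSE = %.3f, PSNR = %.2f dB, Entropy = %.3f, EPI = %.4f\n', ...
          mseVal, psnrVal, entVal, epiVal);

PSNR follows directly from the MSE, so a higher PSNR corresponds to a lower reconstruction error.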
The methodology focuses on the above four objectives, which help improve the compression parameters, and is practically implemented in the MATLAB 7.11.0 environment. In this methodology, the pollination based optimization algorithm is used to optimize the compression and to form a new PBO based technique for image compression. This algorithm provides better results compared to previously implemented techniques.
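The paper does not reproduce the update equations of the pollination based optimization algorithm, so the sketch below uses a plain random-perturbation search as a hypothetical stand-in for the optimizer: it tunes the JPEG quality factor so that the PSNR stays high while the file size drops. The fitness weighting, the iteration count and the file names are illustrative assumptions, not the authors' PBO formulation.

  % Hypothetical stand-in for the optimization step: tune the JPEG quality factor
  % with a simple random-perturbation search (NOT the actual PBO update rules,
  % which are not given in this paper).
  img   = imread('cameraman.tif');            % sample grayscale image
  q     = 50;                                 % initial quality factor
  best  = -inf;  bestQ = q;
  for iter = 1:30
      cand = min(max(round(q + 10*randn), 1), 100);   % perturb the quality factor
      imwrite(img, 'tmp.jpg', 'Quality', cand);       % JPEG encode the candidate
      rec  = imread('tmp.jpg');
      mse  = mean((double(img(:)) - double(rec(:))).^2);
      psnrVal = 10*log10(255^2 / mse);
      info = dir('tmp.jpg');
      fitness = psnrVal - 0.002*info.bytes;   % illustrative quality/size trade-off
      if fitness > best
          best = fitness;  bestQ = cand;  q = cand;   % keep the improvement
      end
  end
  fprintf('Selected quality factor: %d (fitness %.2f)\n', bestQ, best);

Any population-based optimizer, including PBO, could replace the candidate-generation and acceptance steps in this loop; only the update rule changes.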

IMPLEMENTATION

JPEG uses a lossy form of compression based on the discrete cosine transform (DCT). This mathematical operation converts each block of the source image from the spatial (2-D) domain into the frequency domain.
A perceptual model based loosely on the human psycho-visual system discards high-frequency information, i.e. sharp transitions in intensity and color hue. In the transform domain, this process of reducing information is called quantization. In simpler terms, quantization is a method for optimally reducing a large number scale (with different occurrences of each number) to a smaller one, and the transform domain is a convenient representation of the image because the high-frequency coefficients, which contribute less to the overall picture than other coefficients, are characteristically small values with high compressibility. The quantized coefficients are then sequenced and losslessly packed into the output bit stream. Nearly all software implementations of JPEG permit user control over the compression ratio (as well as other optional parameters), allowing the user to trade picture quality for smaller file size. In embedded applications (such as miniDV, which uses a similar DCT compression scheme), the parameters are pre-selected and fixed for the application. The compression method is usually lossy, meaning that some original image information is lost and cannot be restored, possibly affecting image quality. There is an optional lossless mode defined in the JPEG standard; however, this mode is not widely supported in products.
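The sketch below illustrates this quantization step on a single 8×8 block, using the luminance quantization table given in Annex K of the JPEG standard; the test image and the particular block are arbitrary choices for the example, and entropy coding of the quantized coefficients is omitted.

  % DCT and quantization of one 8x8 block, as in baseline JPEG (luminance only).
  % Assumes the Image Processing Toolbox for imread/dct2/idct2.
  Q = [16 11 10 16  24  40  51  61;   % standard JPEG luminance quantization table
       12 12 14 19  26  58  60  55;
       14 13 16 24  40  57  69  56;
       14 17 22 29  51  87  80  62;
       18 22 37 56  68 109 103  77;
       24 35 55 64  81 104 113  92;
       49 64 78 87 103 121 120 101;
       72 92 95 98 112 100 103  99];

  img   = double(imread('cameraman.tif'));
  orig  = img(81:88, 81:88);              % arbitrary 8x8 block
  C     = dct2(orig - 128);               % level shift, then spatial -> frequency domain
  Cq    = round(C ./ Q);                  % quantization: small high-frequency
                                          % coefficients become zero
  fprintf('Nonzero coefficients: %d of 64\n', nnz(Cq));

  % Decoder side: dequantize and inverse transform (the rounding loss remains)
  rec = idct2(Cq .* Q) + 128;
  fprintf('Maximum reconstruction error in the block: %.1f\n', max(abs(rec(:) - orig(:))));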
The JPEG compression algorithm described here is designed to compress image files created using the Joint Photographic Experts Group (JPEG) standard. JPEG files are inherently difficult to compress further because of their built-in compression based on a combination of run-length and entropy coding techniques. The algorithm works by first unwinding this pre-existing compression and then recompressing the file using an improved method that typically results in a 20-25% saving in space. The algorithm is lossless and reversible, so when the file is decompressed the original entropy coding can be reapplied, resulting in a bit-for-bit match with the original. The diagram in the figure below illustrates the process:
Figure: The JPEG recompression process.
JPEG files are normally either JPEG File Interchange Format (JFIF) files or Exchangeable Image File Format (Exif) files, with the latter being used by most digital cameras. Both formats are based on the JPEG Interchange Format (JIF) as specified in Annex B of the standard. The differences between the two are small, relating to a subset of markers, and are inconsequential to the compression algorithm, so both formats are readily supported. The JPEG compression algorithm supports the following image types:
• Baseline and extended (sequential) encoding
• 8 or 12 bits/sample
• Scans with 1, 2, 3 or 4 components
• Interleaved and non-interleaved scans
Before a JPEG file is compressed, its contents are validated. Validation is necessary because the algorithm analyzes the structure of an image, so unsupported or corrupt JPEG images must be rejected.
Also, validation ensures that the original file can be reconstructed exactly when decompressed. As part of the validation process the metadata markers are parsed and verified. Several markers are required by the compression algorithm, and those not required are preserved. The primary marker that identifies a JPEG file, SOI (Start of Image), must be found within the first 128 bytes of the file. All data preceding the SOI marker is considered unknown metadata. To complete the validation process, the image scans are decoded to ensure there are no errors in the entropy encoding. After the last scan is decoded, the EOI (End of Image) marker is parsed. Any data beyond the EOI marker is considered unknown metadata. The overall algorithm design is shown in the figure below.
Figure: Design of the compression algorithm.
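A minimal sketch of the SOI/EOI marker checks described above is given below; it does not decode the scans, and the input file name is a placeholder.

  % Minimal JPEG marker validation: SOI must appear within the first 128 bytes,
  % and any data after the EOI marker is treated as unknown metadata.
  fid   = fopen('sample.jpg', 'r');           % placeholder input file
  bytes = fread(fid, inf, 'uint8')';          % file contents as a row vector of byte values
  fclose(fid);

  soiPos = strfind(bytes, [255 216]);         % SOI marker 0xFFD8
  eoiPos = strfind(bytes, [255 217]);         % EOI marker 0xFFD9

  if isempty(soiPos) || soiPos(1) > 128
      error('Not a supported JPEG: SOI not found within the first 128 bytes.');
  end
  if isempty(eoiPos)
      error('Not a supported JPEG: EOI marker missing.');
  end
  trailing = numel(bytes) - (eoiPos(end) + 1);
  fprintf('SOI at byte %d, %d bytes after EOI treated as unknown metadata.\n', ...
          soiPos(1), trailing);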
In the proposed work, a sample image is converted to grayscale and resized to 128×128. The image is then divided into small segments using wavelets. The figure below shows the original image and the segmented image. This segmented image is used in the further stages of the proposed system.
Figure: Original image and segmented image.
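The preprocessing step shown above can be sketched in MATLAB as follows, assuming the Image Processing Toolbox and the Wavelet Toolbox; the input image and the choice of the Haar wavelet are assumptions, since the paper does not name the wavelet used for segmentation.

  % Preprocessing: grayscale conversion, resizing to 128x128 and wavelet
  % segmentation (Image Processing Toolbox and Wavelet Toolbox assumed).
  rgb  = imread('peppers.png');               % sample colour image shipped with MATLAB
  gray = rgb2gray(rgb);
  gray = imresize(gray, [128 128]);

  % A single-level 2-D DWT splits the image into four 64x64 sub-band segments
  [cA, cH, cV, cD] = dwt2(double(gray), 'haar');

  figure;
  subplot(1,2,1); imshow(gray);                      title('Original 128x128 image');
  subplot(1,2,2); imshow(mat2gray([cA cH; cV cD]));  title('Wavelet segments');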
The figure below shows the original image and the compressed image obtained using the proposed PBO based JPEG compression technique. The compressed image shows less distortion with the PBO based JPEG compression scheme.
Figure: Original image and compressed image.
The basic GUI design for image compression is shown in the figure below.
Figure: GUI design for the image compression system.

CONCLUSION & FUTURE SCOPE

The compression process applies a sequence of transformations to the original image, and the inverse transformations are applied until the reconstructed image is retrieved. Original images and the corresponding reconstructed images after PBO based JPEG compression were examined, and the results of a comparative study of JPEG and PBO based JPEG compression are presented. Four metrics, namely peak signal to noise ratio, mean squared error, entropy and the EPI value, are used to measure performance and to compare and analyze the results. The PBO based JPEG implementation obtains a higher PSNR value, and the higher the PSNR value, the better the quality of the image. A higher PSNR is obtained for images compressed with PBO based JPEG compression than with JPEG compression, which shows that PBO based JPEG compression is better than the JPEG technique for image compression.
In the present work the PBO based JPEG compression has been implemented quite successfully, yet there is still scope for improvement. Using two or more optimization algorithms together, along with more image samples for the compression, could give better results than a single optimization algorithm. Future work could therefore move in the direction of hybrid systems that combine more than one algorithm.

References

  1. Talukder, K. H., and Harada, K., "Haar Wavelet Based Approach for Image Compression and Quality Assessment of Compressed Image," International Journal of Applied Mathematics, vol. 36, pp. 1-8, 2007.
  2. Ben Amar, C., and Jemai, O., "Wavelet Networks Approach for Image Compression," ICGST International Journal on Graphics, Vision and Image Processing, pp. 37-45, 2007.
  3. Khashman, A., and Dimililer, K., "Image Compression using Neural Networks and Haar Wavelet," WSEAS Transactions on Signal Processing, vol. 4, pp. 330-339, 2008.
  4. Senthilkumaran, N., and Suguna, J., “Neural Network Techniques for Lossless Image Compression using X-ray Images,” International Journal of Computer and Electrical Engineering, vol.3, no.1, 2011.
  5. Abood, K., Aboud, H., and Falih, H. A., "X-ray Image Compression using Neural Network," ISSN 2229-5518, vol. 3, Issue 10, 2012.
  6. Sindhu, M., and Rajkamal, R., "Images and Its Compression Techniques," International Journal of Recent Trends in Engineering, vol. 2, no. 4, 2009.
  7. Bhardwaj, A., and Ali, R., "Image Compression Using Modified Fast Haar Wavelet Transform," ISSN 1818-4952, vol. 7, no. 5, 2009.
  8. Ajay, S. K., Tiwari, S., and Shukla, P., "Wavelet based Multi Class Image Classification using Neural Network," International Journal of Computer Applications, vol. 37, no. 4, 2012.
  9. Nagaria, B., Hashmi, M. F., and Dhakad, P., "Comparative Analysis of Fast Transform for Image Compression for Optimal Image Quality and Higher Compression Ratio," International Journal of Engineering Science and Technology (IJEST), vol. 3, no. 5, 2011.
  10. Sivanandam, S. N., Sumathi, S., and Deepa, S. N., "Introduction to Neural Networks using Matlab 6.0," McGraw Hill, 2012.
  11. Gonzalez, R., and Woods, R., "Digital Image Processing," 3rd edition, Pearson Education, 2011.
  12. Sonal, and Kumar, D., "A Study of Various Image Compression Techniques," National Conference on Challenges & Opportunities in Information Technology, 2007.
  13. Hanhart, P., Rerabek, M., and Korshunov, P., "Subjective evaluation of HEVC intra coding for still image compression," Seventh International Workshop on Video Processing and Quality Metrics for Consumer Electronics (VPQM 2013), Scottsdale, Arizona, USA, January 30 - February 1, 2013.
  14. Hurtik, P., and Perfilieva, I., "Image Compression Methodology Based on Fuzzy Transform," International Joint Conference CISIS'12-ICEUTE'12-SOCO'12 Special Sessions, pp. 525-532, 2013.
  15. Hagag, A., Amin, M., and Abd El-Samie, F. E., "Multispectral image compression with band ordering and wavelet transforms," Signal, Image and Video Processing, ISSN 1863-1703, 2013.
  16. Jeevan, K. M., and Rahul, A. R., "Performance Comparison of DCT Based Image Compression on Hexagonal and Rectangular Sampling Grid," Journal of Image and Graphics, Vol. 1, No. 3, September 2013.
  17. Groach, M., and Garg, A., "DCSPIHT: Image Compression Algorithm," International Journal of Engineering Research and Applications (IJERA), ISSN 2248-9622, Vol. 2, Issue 2, pp. 560-567, Mar-Apr 2012.
  18. Xu, P. F., Zhang, B. H., Wang, F. X., and Yu, Y. Z., "Color Image Compression Using Block Singular Value Decomposition," Applied Mechanics and Materials, Volumes 303-306, pp. 2122-2125, February 2013.
  19. Ding, J. J., Huang, Y. W., Lin, P. Y., Pei, S. C., Chen, H. H., and Wang, Y. H., "Two-Dimensional Orthogonal DCT Expansion in Trapezoid and Triangular Blocks and Modified JPEG Image Compression," IEEE Transactions on Image Processing, 2013.
  20. Sreelakshmi, M. S., and Venkataraman, D., "Image Compression using Anti-Forensics Method," in Proc. IEEE Trans. Information Forensics and Security, Vol. 6, No. 3, pp. 1694-1697, Sep. 2011.
  21. O'Brien, J. W., "The JPEG Image Compression Algorithm," APPM-3310 Final Project, December 2, 2005.
  22. Jilani, S. A. K., "JPEG Image Compression using FPGA with Artificial Neural Networks," IACSIT International Journal of Engineering and Technology, Vol. 2, No. 3, June 2010.
  23. Uma, R., "FPGA Implementation of 2-D DCT for JPEG Image Compression," International Journal of Advanced Engineering Sciences and Technologies, Vol. 7, Issue 1, pp. 001-009, 2011.