ISSN: 2319-8753 (Online), 2347-6710 (Print)
Baljit Kaur1, Vijay Dhir2
In this paper we use different filters and methods to filter an image and analyse exactly what difference each makes when it comes to detecting the edges of the image. The image-processing part consists of the acquisition of a noisy image and several image-processing techniques. First, we introduce noise into the image at different density levels; then the Bacterial Foraging Optimization (BFO) Algorithm is used to calculate the threshold value that is applied with each filter to remove noise from the image. Here we use the Adaptive Median Filter, the Haar Denoising Method and a Hybrid Filter to remove noise. These filters are then applied with the BFO Algorithm and compared with one another, which helps us to calculate the parameters of the noisy images. The working parameters are noise level at different densities, noise suppression rate, Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR). A neural network approach is used, consisting of feed-forward and feed-backward layers; at the hidden-to-output layer the BFO neural network is used for classification of the image, and finally the edges are detected.
Keywords
Adaptive Median Filter, BFO, Haar Denoising Method.
INTRODUCTION
Edge detection is a type of image segmentation technique which determines the presence of an edge or line in an image and outlines it in an appropriate way. The main purpose of edge detection is to simplify the image data in order to minimize the amount of data to be processed. Noise is any undesired information that spoils an image. In a digital image, noise arises during the acquisition and/or transmission process. Images are corrupted during transmission because of interference in the channel used for transmission [1]. For example, an image transmitted over a wireless network may be corrupted as a result of lightning or other atmospheric disturbances. When acquiring an image with a CCD camera, light levels and sensor temperature are the major factors affecting the amount of noise in the resulting image. The performance of an image sensor is affected by a variety of factors, such as the environmental conditions during image acquisition and the quality of the sensing elements themselves. Edge detection is difficult in noisy images, since both the noise and the edges contain high-frequency content, which makes it a challenging task. Noisy images are corrupted images, and their parameters are difficult to analyze and detect. In this paper, a new neural-network-based algorithm is used which helps to calculate the parameters of noisy images. The working parameters are noise level at different densities, noise suppression rate, Mean Square Error and PSNR.
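Since MSE and PSNR are the figures of merit used throughout this work, the following minimal NumPy sketch shows how they are conventionally computed for 8-bit grayscale images; the function names and the peak value of 255 are our assumptions for illustration, not part of the proposed method:

```python
import numpy as np

def mse(original, processed):
    """Mean Square Error between two equally sized grayscale images."""
    diff = original.astype(np.float64) - processed.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(original, processed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; peak = 255 for 8-bit images."""
    error = mse(original, processed)
    if error == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / error)
```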
LITERATURE REVIEW
M. Rama Bai and Dr. V. Venkata Krishna, "A New Morphological Approach for Noise Removal cum Edge Detection", November 2010 [2], describe that edges play an important role in image processing and hence their detection is very important. The result of the final processed image depends on how effectively the edges have been extracted. The function of edge detection is to identify the boundaries of homogeneous regions in an image based on properties such as intensity and texture. Some early conventional methods for edge detection are the Sobel algorithm, the Prewitt algorithm and the Laplacian of Gaussian operator. However, these are high-pass filtering methods, which are not effective for noisy images because both noise and edges lie in the high-frequency range. In real-world applications, images contain object boundaries, object shadows and noise. Therefore, it may be difficult to distinguish the exact edge from noise or trivial geometric features. Many edge detection algorithms have been developed based on computation of the intensity gradient vector, which, in general, is sensitive to noise in the image. Another approach is to study the statistical distribution of intensity values. The idea is to examine the distribution of intensity values in the neighborhood of a given pixel and determine whether the pixel is to be classified as an edge. Although some works exist, less attention has been paid to statistical approaches than to gradient methods in image processing. As the performance of classic edge detectors degrades with noise, edge detection using morphological operators is studied. In this paper, a new morphological approach for noise removal cum edge detection is introduced for both binary and gray-scale images. For detecting edges in an image efficiently, the noise must first be removed. Noise in binary images is of two colors, black and white. The noise in gray-scale images manifests itself as light elements on a dark background and as dark elements on light regions. Noise is removed using morphological operations, and further morphological operations are applied to this image to extract the edges.
Tzu-Heng Henry Lee, Taipei, Taiwan, "Edge Detection Analysis", September 2012 [3], describes various edge detection algorithms and detector design methods. The binary edge maps produced by the Matlab programs that simulate the approximated version of first-order derivative edge detectors revealed the detectors' inability to localize the edges within only a few pixels. Expanding the size of the impulse response operators can mitigate but cannot eliminate the noise effects. The resulting edge maps produced by the approximation to second-order derivative edge detection indicate that this model does accurately position the real edges. However, the problem with this approach is its high sensitivity to image noise. The edge detector performance criteria and methods of evaluation provide a good understanding of possible ways of finding out the effectiveness of each developed detection model. Meanwhile, the improved algorithms pointed out are proved to be partially effective in precise ramp edge detection and reduction of noise-induced edges. The major research directions that can be followed and improvements to be made in future edge detection techniques fall into the following categories.
1) Image noise reduction.
2) Precise edge detection with a minimum error detection possibility.
3) Accurate edge localization that can detect edges within a single pixel.
Mitra Basu, "Gaussian-Based Edge-Detection Methods", August 2002 [4], presents a survey of Gaussian-based edge detection techniques. The detection of edges in an image has been an important problem in image processing for more than 50 years. In a gray-level image, an edge may be defined as a sharp change in intensity. Edge detection is the process which detects the presence and locations of these intensity transitions. The edge representation of an image drastically reduces the amount of data to be processed, yet it retains important information about the shapes of objects in the scene. This description of an image is easy to integrate into a large number of object recognition algorithms used in computer vision and other image processing applications. Over the years, many methods have been proposed for detecting edges in images. Some of the earlier methods, such as the Sobel and Prewitt detectors, used local gradient operators which only detected edges having certain orientations and performed poorly when the edges were blurred and noisy. It should be mentioned here that one can combine such directional operators to approximate the performance of a rotationally invariant operator. Since then, more sophisticated operators have been developed to provide some degree of immunity to noise, to be nondirectional and to detect the location of the edge more accurately. The majority of these are linear operators that are derivatives of some sort of smoothing filter. Shen and Castan used a symmetrical exponential filter in edge detection. However, since it was originally proposed by Marr and Hildreth in 1980, the Gaussian filter has been by far the most widely used smoothing filter in edge detection.
Mohamed A. El-Sayed, "A New Algorithm Based Entropic Threshold for Edge Detection in Images", September 2011 [5], suggests a hybrid entropic edge detector that uses both Shannon entropy and Tsallis entropy together. It is already pointed out in the introduction that the traditional methods give rise to an exponential increase in computational time. The proposed method, however, decreases the computation time while generating high-quality edge detection. Experimental results have demonstrated that the proposed scheme for edge detection works satisfactorily for different gray-level digital images. Another benefit comes from the easy implementation of this method.
Manimala Singha and K. Hemachandran, "Content Based Image Retrieval using Color and Texture", February 2012 [6], present a novel approach for Content Based Image Retrieval that combines color and texture features, called Wavelet-Based Color Histogram Image Retrieval (WBCHIR). Similarity between the images is ascertained by means of a distance function. The experimental results show that the proposed method outperforms the other retrieval methods in terms of Average Precision. Moreover, the computational steps are effectively reduced by the use of the Wavelet transformation. As a result, there is a substantial increase in retrieval speed. The whole indexing time for the 1000-image database takes 5-6 minutes.
Zhi-Hua Zhou, Ke-Jia Chen, and Yuan Jiang, "Exploiting Unlabeled Data in Content-Based Image Retrieval", February 2012 [7], survey the application of semi-supervised learning and active learning together in CBIR. As an example, the proposed Ssair (Semi-Supervised Active Image Retrieval) approach gracefully integrates the merits of these learning mechanisms in exploiting unlabeled data, and experiments show that it can effectively improve retrieval performance. Although the utility of Ssair has been verified by experiments, there is a lack of theoretical analysis, which might have encumbered the exertion of the full power of Ssair. For example, in the current form of Ssair, in each round of relevance feedback each learner labels only two images for the other learner, i.e. the most relevant/irrelevant images it judged to be. If theoretical analysis on the relationship between the performance of the learners and the possible noise in the labeling process were available, it might be found that letting each learner label more images is still safe, which may help improve the performance of Ssair. This is an important issue for future work. Moreover, in this paper images are described by simple color histogram features. Although it is anticipated that the retrieval performance could be further improved through utilizing stronger image features, there is a bare possibility that utilizing unlabeled images is not so beneficial when the features are strong. Therefore, investigating the performance of Ssair facilitated with strong image features is also an interesting issue for future work.
Peter Wilkins, Paul Ferguson, Alan F. Smeaton and Cathal Gurrin, "Text Based Approaches for Content Based Image Retrieval on Large Image Collections" [8], say that the described approach to reducing the search space for image retrieval has potential, as the results show that a fair degree of overlap can be achieved in a reduced subset that can be retrieved in a timely manner. As with any information retrieval task, the effectiveness of the system will be determined by what the user is attempting to retrieve. A system that employs the aforementioned mechanisms for rapid subset selection would be most applicable to an ad hoc retrieval scenario where a user is looking for some general answers that match their query and does not care about achieving 100% recall. A major issue confronting this system is that, by using a text-based approach, it retrieves documents that match only some part of the query document. However, existing similarity techniques such as L2 will rank documents as being very similar even if they do not share any terms in common. In these instances the approach will fail, as it requires an overlap for the document to be retrieved. However, as noted earlier, depending on the retrieval task and the size of the collection, the returned results may be adequate to fulfil the users' tasks. It would be interesting to compare these results with those of more contemporary ranking techniques (and fusion models) to see how this approach compares. To do this, though, would require the current approach to be extended to incorporate the other MPEG-7 features that are regularly used, including Colour Layout and Homogeneous Texture.
IMAGE NOISE
Noise is any undesired information that contaminates an image. Noise appears in an image from various sources. The digital image acquisition process, which converts an optical image into a continuous electrical signal that is then sampled, is the primary process by which noise appears in a digital image. There are several ways through which noise can be introduced into an image, depending on how the image is created [9]. A satellite image containing noise signals becomes distorted and cannot be understood and studied properly; this requires the use of appropriate filters to limit or reduce much of the noise, which makes better interpretation of the image content possible.
A) TYPES OF NOISE
Following are the different types of image noise:
1. Random Variation Impulsive Noise (RVIN)
This type of noise, also called Gaussian noise or normal noise, occurs randomly as white intensity values.
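For illustration, a minimal NumPy sketch of how Gaussian (normal) noise might be injected into an 8-bit grayscale image; the function name and the default standard deviation are assumptions made here, not values from the paper:

```python
import numpy as np

def add_gaussian_noise(image, sigma=20.0):
    """Add zero-mean Gaussian noise with standard deviation sigma and
    clip the result back to the valid 8-bit range."""
    noisy = image.astype(np.float64) + np.random.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```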
2. Salt & Pepper Noise (SPN) |
This type contains random occurrences of both black and white intensity values, and often caused by threshold of noise image. Salt and pepper noise is an impulse type of noise, which is also referred to as intensity spikes. This is caused generally due to errors in data transmission. |
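Since the proposed method introduces noise at different density levels, the following sketch shows one common way to corrupt an image with salt-and-pepper noise at a given density; the function name and the even salt/pepper split are our assumptions:

```python
import numpy as np

def add_salt_pepper_noise(image, density=0.1):
    """Replace a `density` fraction of pixels with impulses:
    half become 0 (pepper), half become 255 (salt)."""
    noisy = image.copy()
    mask = np.random.rand(*image.shape)
    noisy[mask < density / 2.0] = 0          # pepper
    noisy[mask > 1.0 - density / 2.0] = 255  # salt
    return noisy
```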
3. Speckle Noise (SPKN)
Speckle noise is a multiplicative noise. It is a ubiquitous artifact that limits the interpretation of optical-coherence and remote-sensing images. This type of noise occurs in almost all coherent imaging systems such as laser, acoustic and SAR (Synthetic Aperture Radar) imagery. The source of this noise is attributed to random interference between the coherent returns.
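A minimal sketch of the multiplicative speckle model, following the common formulation in which each pixel is perturbed by noise proportional to its own intensity; the variance default and function name are assumptions:

```python
import numpy as np

def add_speckle_noise(image, variance=0.04):
    """Multiplicative speckle: each pixel is scaled by (1 + n), where n is
    zero-mean Gaussian noise with the given variance."""
    n = np.random.randn(*image.shape) * np.sqrt(variance)
    noisy = image.astype(np.float64) * (1.0 + n)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```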
4. Blurred Noise
Image blurring has received a lot of attention in the computer graphics and vision communities. We model a blurred, noisy image as the convolution of a latent sharp image with a known shift-invariant kernel plus additive white Gaussian noise, whose result is potentially down-sampled. Specifically, blur formation is modeled as [10]:
B = D(I * K) + N,
where K is the blur kernel, N is the noise, and D(I) down-samples an image by point-sampling I(m, n) = I(sm, sn) at a sampling rate s for integer pixel coordinates (m, n).
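A sketch of this blur-formation model using SciPy; the kernel, noise level and sampling rate are placeholders chosen for illustration, not values from [10]:

```python
import numpy as np
from scipy.signal import fftconvolve

def blur_model(sharp, kernel, sigma_n=2.0, s=1):
    """Simulate B = D(I * K) + N: convolve the latent sharp image with the
    blur kernel, point-sample every s-th pixel, and add Gaussian noise."""
    blurred = fftconvolve(sharp.astype(np.float64), kernel, mode="same")  # I * K
    sampled = blurred[::s, ::s]                                           # D(.)
    noisy = sampled + np.random.normal(0.0, sigma_n, sampled.shape)       # + N
    return np.clip(noisy, 0, 255).astype(np.uint8)
```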
EFFECTS OF NOISE
1. The effect of noise on digital reconstruction and enhancement is determined from the statistics of the amount of perturbation caused by the noise.
2. Salt-and-pepper artifacts produced by random noise in the intensity channel are particularly visible in flat fields [11].
3. Noise in the display spot deflection circuits can strongly affect the result.
4. The size of the image sensor, or effective light collection area per pixel sensor, is the largest determinant of signal levels, which determine the signal-to-noise ratio and hence the apparent noise level.
5. Temperature can also have an effect on the amount of noise produced by an image sensor due to leakage.
EDGE DETECTION
Edge detection is sometimes an abused notion in image processing, yet it is one of the subjects of basic importance in the field. The parts of an image where the grey tones change abruptly are called "edges". An edge can also be defined as a discontinuity in image intensity from one pixel to another [12]. Edge detection is difficult in noisy images, since both the noise and the edges contain high-frequency content, and attempts to reduce the noise result in blurred and distorted edges. Edge detection refers to the process of identifying and locating sharp discontinuities in an image.
Edge detection is the most familiar approach for detecting significant discontinuities in intensity values. There are three different types of discontinuities in the grey level: points, lines and edges. Spatial masks are used to detect all three types of discontinuities in an image. There are many different approaches to edge detection. The most commonly used discontinuity-based edge detection approaches are reviewed in this section.
Different Approaches of Edge Detection:
1. Gradient Edge Detection
2. Laplacian Edge Detection
3. Robert's Edge Detection
4. Prewitt Edge Detection
5. Canny's Edge Detection
6. Sobel Edge Detection
1. Gradient-based Edge Detection
It detects edges by looking for the maximum and minimum in the first derivative of the image. Sharpening an image results in the detection of fine details as well as the enhancement of blurred ones. The magnitude of the gradient is the most powerful technique that forms the basis for various approaches to sharpening.
2. Laplacian Edge Detection
The Laplacian method searches for zero crossings in the second derivative of the image to find edges. An edge has the one-dimensional shape of a ramp, and calculating the derivative of the image can highlight its location.
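A rough SciPy sketch of this idea, using a Laplacian-of-Gaussian response and marking sign changes between neighbouring pixels as zero crossings; the sigma value and function name are our assumptions:

```python
import numpy as np
from scipy import ndimage

def log_zero_crossings(image, sigma=2.0):
    """Smooth with a Gaussian, take the Laplacian, then flag pixels whose
    response changes sign relative to a right or lower neighbour."""
    log = ndimage.gaussian_laplace(image.astype(np.float64), sigma=sigma)
    edges = np.zeros(log.shape, dtype=bool)
    edges[:, :-1] |= np.sign(log[:, :-1]) != np.sign(log[:, 1:])  # horizontal crossings
    edges[:-1, :] |= np.sign(log[:-1, :]) != np.sign(log[1:, :])  # vertical crossings
    return edges
```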
3. Robert's Edge Detection
The Roberts Cross operator performs a simple, quick-to-compute, 2-D spatial gradient measurement on an image [13]. Pixel values at each point in the output represent the estimated absolute magnitude of the spatial gradient of the input image at that point.
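The 2x2 Roberts Cross kernels and the usual |Gx| + |Gy| approximation of the gradient magnitude, sketched with SciPy; the names are illustrative:

```python
import numpy as np
from scipy import ndimage

ROBERTS_X = np.array([[1, 0],
                      [0, -1]], dtype=np.float64)
ROBERTS_Y = np.array([[0, 1],
                      [-1, 0]], dtype=np.float64)

def roberts_magnitude(image):
    """Estimated absolute gradient magnitude from the two diagonal differences."""
    img = image.astype(np.float64)
    gx = ndimage.convolve(img, ROBERTS_X)
    gy = ndimage.convolve(img, ROBERTS_Y)
    return np.abs(gx) + np.abs(gy)
```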
4. Prewitt Edge Detection
The Prewitt filter is very similar to the Sobel filter and is a fast method for edge detection. The difference with respect to the Sobel filter is the spectral response. It is only suitable for well-contrasted, noiseless images. The Prewitt operator does not place any emphasis on pixels that are closer to the centre of the masks, and it is an approximate way to estimate the magnitude and orientation of the edge.
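A sketch of the 3x3 Prewitt kernels, which weight all rows and columns equally (unlike Sobel's centre-weighted kernels), returning the edge magnitude and orientation; the function name is ours:

```python
import numpy as np
from scipy import ndimage

PREWITT_X = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]], dtype=np.float64)
PREWITT_Y = PREWITT_X.T

def prewitt_edges(image):
    """Approximate gradient magnitude and orientation with the Prewitt operator."""
    img = image.astype(np.float64)
    gx = ndimage.convolve(img, PREWITT_X)
    gy = ndimage.convolve(img, PREWITT_Y)
    return np.hypot(gx, gy), np.arctan2(gy, gx)
```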
5. Canny Edge Detection
The Canny edge detection algorithm [14] is known to many as the optimal edge detector. The first and most obvious criterion is a low error rate: it is important that edges occurring in images are not missed and that there are no responses to non-edges. The second criterion is that the edge points be well localized; in other words, the distance between the edge pixels found by the detector and the actual edge is to be at a minimum. A third criterion is to have only one response to a single edge. Based on these criteria, the Canny edge detector first smooths the image to eliminate noise. It then finds the image gradient to highlight regions with high spatial derivatives.
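Rather than re-implementing the full pipeline (smoothing, gradient computation, non-maximum suppression and hysteresis), one can call an existing implementation; the sketch below assumes scikit-image is available and uses an arbitrary sigma:

```python
from skimage import feature

def canny_edges(image, sigma=1.5):
    """Binary edge map from scikit-image's Canny detector: Gaussian smoothing,
    gradient computation, non-maximum suppression and hysteresis thresholding."""
    return feature.canny(image, sigma=sigma)
```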
6. Sobel Edge Detection
It performs a 2-D spatial gradient measurement on an image and so emphasizes regions of high spatial frequency that correspond to edges [15]. The Sobel operator computes an approximation of the partial derivatives of the gradient in a digital image. Typically it is used to find the approximate absolute gradient magnitude at each point in an input gray-scale image.
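A SciPy sketch of the Sobel gradient magnitude; the function name is ours, and ndimage.sobel applies the standard 3x3 centre-weighted kernels along each axis:

```python
import numpy as np
from scipy import ndimage

def sobel_magnitude(image):
    """Approximate absolute gradient magnitude using the Sobel operator."""
    img = image.astype(np.float64)
    gx = ndimage.sobel(img, axis=1)  # horizontal derivative
    gy = ndimage.sobel(img, axis=0)  # vertical derivative
    return np.hypot(gx, gy)
```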
FILTERS
1. Mean Filter (MF)
The Mean Filter (MF) is a simple linear filter and an intuitive, easy-to-implement method of smoothing images, i.e. reducing the amount of intensity variation between one pixel and the next. It is often used to reduce noise in images. The idea of mean filtering is simply to replace each pixel value in an image with the mean (average) value of its neighbours, including itself. The mean filter uses a mask over each pixel in the signal; the pixel values that fall under the mask are averaged together to form a single pixel. This filter is also called the average filter.
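A minimal sketch of mean filtering with SciPy, replacing each pixel by the average of its size x size neighbourhood; the window size is an illustrative choice:

```python
from scipy import ndimage

def mean_filter(image, size=3):
    """Average (mean) filter over a size x size neighbourhood."""
    return ndimage.uniform_filter(image.astype(float), size=size)
```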
2. Standard Median Filter (SMF)
The median filter is a non-linear filter which changes the image intensity mean value if the spatial noise distribution in the image is not symmetrical within the window. The median filter reduces the variance of the intensities in the image. Median filtering is a spatial filtering operation, so it uses a 2-D mask that is applied to each pixel in the input image. Applying the mask means centring it on a pixel, evaluating the brightness of the covered pixels and determining which brightness value is the median.
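The corresponding median-filtering sketch, again with SciPy and an arbitrary 3x3 window:

```python
from scipy import ndimage

def median_filter(image, size=3):
    """Replace each pixel with the median of its size x size window."""
    return ndimage.median_filter(image, size=size)
```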
3. Adaptive Wiener Filter (AWF)
The Adaptive Wiener Filter (AWF) changes its behaviour based on the statistical characteristics of the image inside the filter window. Adaptive filter performance is usually superior to that of non-adaptive counterparts, but the improved performance comes at the cost of added filter complexity. Mean and variance are two important statistical measures using which adaptive filters can be designed.
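A sketch of locally adaptive Wiener filtering using SciPy's wiener function, which estimates the mean and variance inside each window to control the amount of smoothing; the window size is an assumption:

```python
from scipy.signal import wiener

def adaptive_wiener(image, window=(5, 5)):
    """Adaptive Wiener filter driven by local window mean and variance."""
    return wiener(image.astype(float), mysize=window)
```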
4. Gaussian Filter (GF)
The Gaussian low-pass filter has a Gaussian impulse response; Gaussian filters are designed to give no overshoot to a step function input while minimizing the rise and fall time. The Gaussian filter is a smoothing filter used in 2-D convolution to remove noise and blur from an image.
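A one-call Gaussian smoothing sketch with SciPy; sigma is an illustrative choice:

```python
from scipy import ndimage

def gaussian_smooth(image, sigma=1.0):
    """2-D Gaussian smoothing by convolution with a Gaussian kernel."""
    return ndimage.gaussian_filter(image.astype(float), sigma=sigma)
```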
5. Adaptive Median Filter (AMF)
The Adaptive Median Filter (AMF) is designed to eliminate the problems faced with the Standard Median Filter. The basic difference between the two filters is that in the Adaptive Median Filter the size of the window surrounding each pixel is variable. This variation depends on the median of the pixels in the present window. If the median value is an impulse, then the size of the window is expanded.
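A sketch of the classic adaptive median filter logic described above: the window grows until its median is no longer an impulse, and the centre pixel is replaced only if it is itself an impulse. This is an unoptimized, illustrative implementation, and the maximum window size is our assumption:

```python
import numpy as np

def adaptive_median_filter(image, max_window=7):
    """Adaptive median filter with a variable window size per pixel."""
    img = image.astype(np.float64)
    out = img.copy()
    rows, cols = img.shape
    pad = max_window // 2
    padded = np.pad(img, pad, mode="edge")
    for r in range(rows):
        for c in range(cols):
            w = 3
            while w <= max_window:
                half = w // 2
                win = padded[r + pad - half:r + pad + half + 1,
                             c + pad - half:c + pad + half + 1]
                zmin, zmed, zmax = win.min(), np.median(win), win.max()
                if zmin < zmed < zmax:                 # window median is not an impulse
                    if not (zmin < img[r, c] < zmax):  # centre pixel is an impulse
                        out[r, c] = zmed
                    break
                w += 2                                 # expand the window and retry
            else:
                out[r, c] = zmed                       # window limit reached
    return out
```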
CONCLUSION
Edge detection is the initial step in the recognition of objects, but it is important to differentiate between the various edge detection techniques. Here, various edge detection techniques are used which help in identifying and locating sharp discontinuities in an image, but some techniques prove less efficient in detecting and maintaining finer edges in an image. The purpose of this study is to assess the potential of using neural networks to predict the performance on images. In this paper, we have used various filters for filtering the images, including the Adaptive Median Filter and the Haar Denoising Method, and we then compare the PSNR, MSE and noise suppression rate of every image. Further on, we compute the edges using the Canny edge detector. It is further suggested that the proposed algorithm may be extended, which may further improve the image denoising and edge detection performance.
FUTURE SCOPE
In future work, the noisy image will be divided into three categories: red, green and blue. A pixel-level check will then be done to determine whether the ratio of the found components matches our requirement. As mentioned in the proposed methodology, the processing will go through each and every pixel. Every image in the database will be partitioned into pixels, and its red, green and blue components will be compared with the previously stored components. If the found component value is larger, the image moves down the queue; in this manner each and every image is compared and stored into an array. Finally, a sort is performed to find the best-suited image after the removal of the noise from the image.
References |
|