
Abstract

Despite the success of various enhancement techniques in many bio-medical applications, edge-preserving image enhancement remains a limiting factor for image quality and thus for the usefulness of these techniques. In this paper, a new enhancement technique combining variational mode decomposition (VMD) with the Sobel gradient and an equalization technique is proposed. The proposed algorithm first decomposes the image into sub-modes according to their frequency content. The low-frequency components are equalized using a conventional equalization technique, whereas the high-frequency components are de-noised using a traditional filter. Finally, the edge information of the original image is added to the processed image for quality assurance. The proposed algorithm has two advantages over existing approaches: it enhances only the low-frequency components to reveal hidden artefacts, and it specifically de-noises the high-frequency components. This process not only enhances the contrast but also preserves the brightness of the image. A comprehensive study of the experimental results on benchmark test images, using different performance metrics, was conducted to quantify the effectiveness of the approach. In terms of both subjective and objective evaluation, the reconstructed image is found to be more accurate and visually pleasing. The method also outperforms state-of-the-art image-fusion methods, especially in terms of PSNR, RMSE, mutual information, and structural similarity.

Keywords: VMD, Image enhancement, Microscopic image, Sobel operator, Median filter, Curvelet transform, Dual-tree CWT, MSVD, CS-MCA, NSST.

Received: 1 March 2022 / Revised: 6 April 2022 / Accepted: 22 April 2022 / Published: 9 May 2022

Contribution/ Originality

This method combines the advantages of variational mode decomposition with multi-technology fusion to preserve the source image features while improving image clarity, removing blur and noise, and increasing contrast.

1. INTRODUCTION

Medical images now play a vital role in clinical assessment, and proper diagnosis requires images of good visual quality in which the hidden information is visible [1]. In practice, however, obtaining such images is difficult because the images collected by electron microscopes or other medical image acquisition equipment are often below standard, so post-processing must be deployed for better results and perception. Contrast enhancement is a popular image and video signal processing technique used to improve image quality in applications where human perception and recognition play a vital role. In some cases, it is also critical for highlighting the essential image domain features for automatic pattern recognition and machine learning. In the view of different researchers, finding the right balance of brightness and contrast is necessary for a good quality image [1]. Based on this principle, image enhancement is an important preprocessing step for better image analysis. Image enhancement methods are broadly divided into spatial domain and transform domain techniques [2]. The common spatial domain techniques are histogram equalization, stretching and matching, gamma correction, logarithmic transformation, sharpening, spatial image smoothing filters, etc. [3-9]. Traditional histogram equalization can efficiently raise pixel intensities in digital images, but it has a propensity to over-enhance when the histogram has high peaks, resulting in a noisy output image. Various strategies have been developed to reduce the degree of over-enhancement, including adaptive histogram equalization (AHE), contrast-limited adaptive histogram equalization (CLAHE), and brightness-preserving bi-histogram equalization (BBHE) [6, 10-13]. Image decomposition before enhancement constitutes the second group of contrast enhancement techniques; it prevents artifacts and improves subjective quality. The transform domain techniques are based on the Laplacian pyramid, discrete cosine transform (DCT), discrete wavelet transform (DWT), and empirical mode decomposition (EMD). Variational mode decomposition (VMD) is a fully image-dependent technique recently proposed for the decomposition of an image [14, 15], with numerous advantages over DCT, DWT, and EMD. Each of the above techniques has its own set of benefits and limitations; in particular, they also amplify the noise present in the original image [16-18]. A fusion technique effectively addresses these limitations by integrating the information contained within the individual images. Image fusion can be applied in the spatial domain or the transform domain. In the former, because fusion takes place directly between the pixels of the source images, information loss and brightness distortion are common issues; transform domain fusion methods, on the other hand, suffer from artifacts around edges and poor visual quality.

A new image-dependent enhancement technique is proposed to limit the over-enhancement problem of spatial domain equalization methods and to restrain the noise parameters effectively. This fusion technique takes advantage of both the spatial and the transform domain. In this approach, variational mode decomposition is used to segregate the image into modes ranging from low to high frequency, which carry the approximation information and the edge details, respectively. As reported in the literature, noise is generally high-frequency in nature [1], so only the low-frequency components are enhanced, using contrast-limited adaptive histogram equalization [2]. The Sobel operator is used to extract the edge information. The rest of this paper is organized as follows: Section 2 describes the proposed algorithm in detail, experimental results are presented in Section 3, and conclusions are given in Section 4.

2. PROPOSED WORK

In the case of microscopic or medical images, the traditional decomposition techniques such as wavelets, BEMD, and empirical wavelet transform generate certain artefacts at the boundary [19].

Figure 1. The flow chart of microscopic image enhancement algorithm.

Our proposed work aims to integrate the details of the edge information and hidden structures into a unique image. The advantage of this method is that it highlights the detailed information while reducing the noise effect. For this, we have proposed a multimodal image fusion based on the modes of VMD, Sobel, and CLAHE, as shown in Figure 1.

The algorithm proceeds in four steps, as depicted in Figure 1 and outlined below:

Step 1. The edge information of the image I(x, y) is extracted using the Sobel gradient operator and is denoted N(x, y).
Step 2. The original image is decomposed into k modes using VMD. In this algorithm, k is fixed at 4. The decomposed modes are denoted M0, M1, M2, and M3, ordered from low to high frequency. The low-frequency mode M0 is enhanced using contrast-limited adaptive histogram equalization (CLAHE) to obtain M(x, y).
Step 3. The modes containing high-frequency information (M1, M2, and M3) are processed through a median filter, producing the image P(x, y).
Step 4. The final enhanced image Y(x, y) is generated as a weighted sum of M(x, y), N(x, y), and P(x, y), as given in Equation 1.

Y(x, y) = α · M(x, y) + β · N(x, y) + γ · P(x, y)             (1)

where α, β, and γ are the weighting coefficients of the enhanced low-frequency component M(x, y), the edge information N(x, y), and the filtered detail information P(x, y), respectively [19].
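To make the four steps concrete, the following is a minimal Python/OpenCV sketch (the paper's experiments were run in MATLAB; this re-implementation is only illustrative). The function vmd2d is a hypothetical stand-in for any two-dimensional VMD routine returning the K modes ordered from low to high frequency, and the CLAHE clip limit, tile grid, and median window are illustrative defaults rather than the paper's exact settings; the fusion weights follow the values reported in Section 4.

```python
import cv2
import numpy as np

def sobel_edges(img):
    # Step 1: edge map N(x, y) as the gradient magnitude of 3x3 Sobel kernels
    gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    return cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def enhance(img, vmd2d, alpha=0.75, beta=0.25, gamma=1.0):
    """Fusion pipeline of Figure 1. `vmd2d` is a placeholder for a 2-D VMD
    implementation; its keyword names are assumptions, not a real API."""
    n = sobel_edges(img)                                        # Step 1: N(x, y)
    m0, m1, m2, m3 = vmd2d(img, K=4, bandwidth=1000, tol=1e-5)  # Step 2: modes M0..M3
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    m = clahe.apply(cv2.convertScaleAbs(m0))                    # Step 2: CLAHE on M0 -> M(x, y)
    detail = cv2.convertScaleAbs(m1 + m2 + m3)
    p = cv2.medianBlur(detail, 3)                               # Step 3: filtered detail P(x, y)
    # Step 4: weighted fusion (Equation 1)
    y = alpha * m.astype(np.float64) + beta * n.astype(np.float64) + gamma * p.astype(np.float64)
    return cv2.normalize(y, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```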

A. Sobel Operator

Sobel and Feldman presented the idea of an image gradient operator in 1968. In this approach, filters are applied in both the horizontal and vertical directions to create an edge-dependent image. For convolution, the operator employs kernels that extract horizontal and vertical changes; the kernel size chosen here is 3 × 3. The horizontal and vertical changes in the image I(x, y) can be represented as shown below.
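Assuming the standard 3 × 3 Sobel kernels, the gradient components and the resulting edge map N(x, y) used in Step 1 are

\[
G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * I(x, y),
\qquad
G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix} * I(x, y),
\qquad
N(x, y) = \sqrt{G_x^2 + G_y^2},
\]

where * denotes two-dimensional convolution.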

Figure 2. (a) Original image (b) Sobel gradient transform output.

B. Variational Mode Decomposition

The objective of two-dimensional VMD is to decompose the input image into K sub-images, or modes, based on the formulation of an optimization problem given by Dragomiretskiy and Zosso [14, 15]. In this decomposition, K modes are extracted together with their respective centre frequencies; during this process, each mode is concentrated around a central pulsation ωk [14]. The evaluation of the modes of an original image X follows the formulation of Dragomiretskiy and Zosso [15].
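In rough terms, following [15], the modes u_k and their centre frequencies ω_k are obtained as the solution of a constrained variational problem of the form

\[
\min_{\{u_k\},\{\omega_k\}} \; \sum_{k=1}^{K} \left\| \nabla \!\left[ u_{AS,k}(\mathbf{x})\, e^{-j \langle \omega_k, \mathbf{x} \rangle} \right] \right\|_2^2
\quad \text{subject to} \quad \sum_{k=1}^{K} u_k(\mathbf{x}) = X(\mathbf{x}),
\]

where u_{AS,k} denotes the two-dimensional analytic signal of mode u_k, so that each demodulated mode occupies a minimal bandwidth around its centre frequency.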

The modes are evaluated by solving this constrained variational problem using the augmented Lagrangian and the Fourier isometry for an iterative estimation of the modes [20]. The parameters used for image mode extraction are α (bandwidth constraint), K (number of modes), tol (tolerance), and the centre frequencies. In the present study, we have considered K = 4, α = 1000, and tol = 10⁻⁵ [16]. Figure 3 depicts the modes of the original image obtained with the VMD technique. As shown in this figure, the original image data are divided into several modes based on frequency content, ranging from low to high. The information on vertical, horizontal, and diagonal edges is captured in modes 2, 3, and 4, whereas mode 1 captures the low-frequency part of the original image. Hence, quality measures calculated from the modal details of the images are helpful for quantifying losses of image information.

Figure 3. Decomposed image using VMD: (a) M0, (b) M1, (c) M2, and (d) M3.

C. Contrast Limited Adaptive Histogram Equalization

The traditional histogram equalization technique increases the contrast of background noise while decreasing the useful signal during contrast enhancement. This limitation is overcome by adopting CLAHE, a variant of adaptive histogram equalization [2], as it prevents contrast over-amplification. The method first subdivides the entire image into small tiles and applies histogram equalization to each tile individually, then combines all the tiles using bilinear interpolation. Bilinear interpolation removes the artificial boundaries created where adjoining tiles meet. This algorithm is popularly used in medical imaging for contrast improvement.
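For reference, this scheme is exposed directly by OpenCV; the following is a minimal sketch comparing conventional equalization with CLAHE (the file name, clip limit, and tile grid are illustrative values, not the paper's settings).

```python
import cv2

# Hypothetical grayscale microscopic image
img = cv2.imread("sample_microscopic.png", cv2.IMREAD_GRAYSCALE)

# Conventional histogram equalization: global, may over-amplify noise near histogram peaks
global_he = cv2.equalizeHist(img)

# CLAHE: per-tile equalization with a clip limit, tiles blended by bilinear interpolation
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
local_he = clahe.apply(img)
```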

D. Median Filter

The median filter is a non-linear filter capable of suppressing Gaussian, speckle, and salt-and-pepper noise. It uses a pre-defined window size; during filtering, it replaces each pixel value with the median value of its neighbouring pixels. Because edge information is crucial for an image, the median filter plays a vital role in preserving edges during the smoothing process. Equation 6 presents the filtration process, as recalled below.
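Assuming the standard definition, the filter replaces each pixel by the median of its neighbourhood within a window W:

\[
P(x, y) = \operatorname{median}\{\, I(x + s,\; y + t) \mid (s, t) \in W \,\},
\]

where I is the image being filtered (here, the high-frequency modes) and W is the pre-defined window, e.g., 3 × 3.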

E. Image Fusion Methods

Image fusion techniques are broadly divided into two categories: spatial domain-based and transform domain-based. The basic advantage of spatial domain fusion is that it is easy to implement and computationally efficient as long as it is limited to a few datasets; for most data sets, however, it gives the reverse result by reducing contrast, and in some cases it produces brightness or colour distortions. In transform domain fusion, pyramidal methods generate artefacts across the edges of the processed image. Wavelet transforms are useful for supplying directional information, but most of them exhibit artefacts around the borders of the transformed image owing to their shift-variant nature. Multiscale geometric analysis provides good results but loses texture detail. The available fusion methods are listed in Table 1.

Table 1. Spatial domain and transform domain image fusion methods.

Spatial domain: Average, minimum, maximum, morphological operators [21]; principal component analysis [22]; independent component analysis [22].
Transform domain – wavelet transform: Discrete wavelet transform (DWT) [20]; shift-invariant discrete wavelet transform (SIDWT) [23]; dual-tree complex wavelet transform (DTCWT).
Transform domain – pyramidal methods: Contrast pyramid [24]; ratio of low-pass pyramid; Laplacian pyramid [25].
Transform domain – multi-scale geometric analysis: Curvelet; contourlet [26]; shearlet; non-subsampled shearlet transform (NSST) [27].

3. EXPERIMENTAL RESULTS AND ANALYSIS

This section presents the results of the proposed algorithm. The experiments were conducted on a PC with a Pentium(R) Dual-Core E5800 CPU @ 3.20 GHz and 4 GB of RAM, running MATLAB R2014b. Different publicly available data sets were studied [28, 29]. All the considered images are registered at a resolution of 256 × 256. The results of the proposed algorithm are validated using both qualitative and quantitative methods and compared with existing state-of-the-art methods for a fair assessment.

Figure 4. Sample set of images.

A. Subjective Assessment

A set of sample images, shown in Figure 4, is chosen for the qualitative analysis of the proposed algorithm. Figure 5 shows a comparative view of a sample image (Figure 4(d)) processed by various transformation techniques: curvelet transform (CVT), dual-tree complex wavelet transform (DTCWT), multi-resolution singular value decomposition (MSVD), convolutional sparsity-based morphological component analysis (CS-MCA), non-subsampled shearlet transform (NSST), and the proposed method. Compared with the listed methods, our proposed algorithm demonstrates suitable brightness with a more appropriate structure. Figure 5 shows that the results of the DTCWT and MSVD methods have low contrast, whereas NSST-based fusion produces over-enhanced images. Although CS-MCA provides a better structure, it fails to enhance the contrast. The proposed method not only enhances the contrast but also preserves the brightness.

Figure 5. Processed image of the sample image (Figure 4(d)) using: (a) CVT, (b) DTCWT, (c) MSVD, (d) CS-MCA, (e) NSST, (f) proposed method.

B. Objective Assessment

In this section, the performance of the proposed method is evaluated objectively using several metrics to quantify its effectiveness. The performance metrics considered here are as follows:

(i) Edge intensity (EI): In digital image processing, EI represents the brightness difference of the images along the gradient direction. This intensity is represented as:
Edge intensity = 100 * (Maximum edge intensity + 10 * Average edge intensity)             (7)
where the scaling factors considered here are 10 and 100.
(ii) Mutual information (MI): Determines how much information is shared between the source and the processed images.
(iii) Visual information fidelity (VIF): A full-reference image quality metric that measures image fidelity using information theory; VIF is computed using a Gaussian scale mixture model.
(iv) Structural similarity index measure (SSIM): Estimates the structural difference between the reference image and the source image, quantifying image quality deterioration.
(v) Root mean square error (RMSE): Measures the difference between the source image and the processed image; the lower the RMSE, the better the performance.
(vi) Peak signal-to-noise ratio (PSNR): Used as a quality measure between two images, with a higher value indicating better quality of the processed image. The standard definitions of SSIM, RMSE, and PSNR assumed here are recalled after this list.
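Assuming the usual definitions, with I the reference image, Y the processed image, and M × N the image size:

\[
\mathrm{SSIM}(I, Y) = \frac{(2\mu_I \mu_Y + C_1)(2\sigma_{IY} + C_2)}{(\mu_I^2 + \mu_Y^2 + C_1)(\sigma_I^2 + \sigma_Y^2 + C_2)},
\]
\[
\mathrm{RMSE} = \sqrt{\frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left[ I(i, j) - Y(i, j) \right]^2},
\qquad
\mathrm{PSNR} = 20 \log_{10}\!\left( \frac{L_{\max}}{\mathrm{RMSE}} \right),
\]

where μ, σ², and σ_IY denote local means, variances, and covariance, C1 and C2 are small stabilizing constants, and L_max is the maximum possible pixel value (255 for 8-bit images).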

Fifty different images are considered for this evaluation. The average value of each metric is presented as a bar graph in Figure 6 for easy analysis. In each case, the proposed method yields better results than the other methods. The proposed method has a root mean square error of 0.020, while the other methods, except NSST, have a value of 0.029; the non-subsampled shearlet transform has an RMSE of 0.022.

Figure 6. Graphical representation of comparative objective matrix. (a) Visual Information Fidelity (b) Structural Similarity Index Measure. (c) Edge Intensity (d) Mutual Information (e) PSNR.

The higher values of VIF, SSIM, EI, MI, and PSNR depicted in Figure 6 are evidence of the effectiveness of the proposed method.

4. CONCLUSIONS

In this work, we have proposed a new fusion method using VMD, the image gradient operator, and a median filter to enhance image contrast. The prime objective of this approach is to exploit the advantages of variational mode decomposition within a multi-resolution structure. Experiments on a set of benchmark images demonstrate that the proposed technique outperforms similar fusion algorithms, particularly in terms of EI, MI, the VIF index, and visual effect. In this approach, we have set the values of α, β, and γ to 0.75, 0.25, and 1, respectively. We suggest that future research investigate an effective soft computing method for selecting the values of α, β, and γ automatically.

Funding: This study received no specific financial support.  

Competing Interests: The authors declare that they have no competing interests.

Authors’ Contributions: Both authors contributed equally to the conception and design of the study.

REFERENCES

[1]          R. Gonzalez and E. Woods, Digital image processing. Upper Saddle River, NJ: Prentice-Hall, 2008.

[2]          H. Lidong, Z. Wei, W. Jun, and S. Zebin, "Combination of contrast limited adaptive histogram equalisation and discrete wavelet transform for image enhancement," IET Image Processing, vol. 9, pp. 908-915, 2015.Available at: https://doi.org/10.1049/iet-ipr.2015.0150.

[3]          Z. Wei, H. Lidong, W. Jun, and S. Zebin, "Entropy maximisation histogram modification scheme for image enhancement," IET Image Processing, vol. 9, pp. 226-235, 2015.Available at: https://doi.org/10.1049/iet-ipr.2014.0347.

[4]          C. Wang, J. Peng, and Z. Ye, "Image enhancement using background brightness preserving histogram equalization," Electronics Letters, vol. 48, pp. 155-157, 2012.Available at: https://doi.org/10.1049/el.2011.3421.

[5]          Y.-T. Kim, "Contrast enhancement using brightness preserving bi-histogram equalization," IEEE transactions on Consumer Electronics, vol. 43, pp. 1-8, 1997.Available at: https://doi.org/10.1109/30.580378.

[6]          C. Wang and Z. Ye, "Brightness preserving histogram equalization with maximum entropy: A variational perspective," IEEE transactions on Consumer Electronics, vol. 51, pp. 1326-1334, 2005.Available at: https://doi.org/10.1109/tce.2005.1561863.

[7]          T. Celik and T. Tjahjadi, "Automatic image equalization and contrast enhancement using Gaussian mixture modeling," IEEE Transactions on Image Processing, vol. 21, pp. 145-156, 2011.Available at: https://doi.org/10.1109/tip.2011.2162419.

[8]          S. C. Huang, F. C. Cheng, and Y. S. Chiu, "Efficient contrast enhancement using adaptive gamma correction with weighting distribution," IEEE Transactions on Image Processing, vol. 224, pp. 1032-1041, 2013.

[9]          T. Arici, S. Dikbas, and Y. Altunbasak, "A histogram modification framework and its application for image contrast enhancement," IEEE Transactions on Image Processing, vol. 18, pp. 1921-1935, 2009.Available at: https://doi.org/10.1109/tip.2009.2021548.

[10]        S. M. Pizer, E. P. Amburn, J. D. Austin, R. Cromartie, A. Geselowitz, T. Greer, B. ter Haar Romeny, J. B. Zimmerman, and K. Zuiderveld, "Adaptive histogram equalization and its variations," Computer Vision, Graphics, and Image Processing, vol. 39, pp. 355-368, 1987.

[11]        D. Menotti, L. Najman, J. Facon, and A. d. A. Araújo, "Multi-histogram equalization methods for contrast enhancement and brightness preserving," IEEE transactions on Consumer Electronics, vol. 53, pp. 1186-1194, 2007.Available at: https://doi.org/10.1109/tce.2007.4341603.

[12]        T. Tan, K. Sim, and C. P. Tso, "Image enhancement using background brightness preserving histogram equalisation," Electronics Letters, vol. 48, pp. 155-157, 2012.Available at: https://doi.org/10.1049/el.2011.3421.

[13]        H. Ibrahim and N. S. P. Kong, "Brightness preserving dynamic histogram equalization for image contrast enhancement," IEEE transactions on Consumer Electronics, vol. 53, pp. 1752-1758, 2007.Available at: https://doi.org/10.1109/tce.2007.4429280.

[14]        K. Dragomiretskiy and D. Zosso, "Variational mode decomposition," IEEE Transactions on Signal Processing, vol. 62, pp. 531-544, 2013.

[15]        K. Dragomiretskiy and D. Zosso, "Two-dimensional variational mode decomposition," presented at the International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition, Springer International Publishing, 2015.

[16]        L. M. Satapathy and P. Das, "Bio-medical image enhancement using adaptive multi-resolution technique," presented at the 2019 International Conference on Applied Machine Learning (ICAML), IEEE, 2019.

[17]        D. Z. H. Yanhong, "An image enhancement algorithm based on wavelet frequency division and bi-histogram equalization," Computer Applications and Software, vol. 24, pp. 159-161, 2007.

[18]        Z. Wang and A. C. Bovik, "A universal image quality index," IEEE Signal Processing Letters, vol. 9, pp. 81-84, 2002.Available at: https://doi.org/10.1109/97.995823.

[19]        W. Chen, X. Mao, and H. Ma, "Low-contrast microscopic image enhancement based on multi-technology fusion," presented at the 2010 IEEE International Conference on Intelligent Computing and Intelligent Systems, IEEE, 2010.

[20]        S. Li, X. Kang, L. Fang, J. Hu, and H. Yin, "Pixel-level image fusion: A survey of the state of the art," information Fusion, vol. 33, pp. 100-112, 2017.Available at: https://doi.org/10.1016/j.inffus.2016.05.004.

[21]        R. Vijayarajan and S. Muttan, "Iterative block level principal component averaging medical image fusion," Optik, vol. 125, pp. 4751-4757, 2014.Available at: https://doi.org/10.1016/j.ijleo.2014.04.068.

[22]        Y. Kirankumar and S. Shenbaga Devi, "Transform-based medical image fusion," International Journal of Biomedical Engineering and Technology, vol. 1, pp. 101-110, 2007.

[23]        H. Wan, X. Tang, Z. Zhu, B. Xiao, and W. Li, "Multi-focus color image fusion based on quaternion multi-scale singular value decomposition," Frontiers in Neurorobotics, p. 76, 2021.Available at: https://doi.org/10.3389/fnbot.2021.695960.

[24]        V. Naidu, "Image fusion technique using multi-resolution singular value decomposition," Defence Science Journal, vol. 61, pp. 479-484, 2011.Available at: https://doi.org/10.14429/dsj.61.705.

[25]        N. Mitianoudis and T. Stathaki, "Pixel-based and region-based image fusion schemes using ICA bases," information Fusion, vol. 8, pp. 131-142, 2007.Available at: https://doi.org/10.1016/j.inffus.2005.09.001.

[26]        S. Singh and R. Anand, "Multimodal medical image sensor fusion model using sparse K-SVD dictionary learning in nonsubsampled shearlet domain," IEEE Transactions on Instrumentation and Measurement, vol. 69, pp. 593-607, 2019.Available at: https://doi.org/10.1109/tim.2019.2902808.

[27]        M. Yin, X. Liu, Y. Liu, and X. Chen, "Medical image fusion with parameter-adaptive pulse coupled neural network in nonsubsampled shearlet transform domain," IEEE Transactions on Instrumentation and Measurement, vol. 68, pp. 49-64, 2018.Available at: https://doi.org/10.1109/tim.2018.2838778.

[28]        K. A. Johnson and J. A. Becker, "Brain MRI Image Data set." Retrieved from: http://www.med.harvard.edu/AANLIB/home.html. [Accessed 1 January, 2022], 2022.

[29]        N. Chakrabarty, "A set of Brain MRI images." Retrieved from: https://www.kaggle.com/navoneel/brain-mri-images-for-brain-tumor–detection. [Accessed March 15, 2021], 2021.

Views and opinions expressed in this article are the views and opinions of the author(s), Review of Computer Engineering Research shall not be responsible or answerable for any loss, damage or liability etc. caused in relation to/arising out of the use of the content.