
An adaptive gamma correction for image enhancement

Abstract

Due to the limitations of image-capturing devices or the presence of a non-ideal environment, the quality of digital images may be degraded. Despite many advances in imaging science, captured images do not always fulfill users’ expectations of clear and soothing views. Most of the existing methods focus on either global or local enhancement, which might not be suitable for all types of images. These methods do not consider the nature of the image, whereas different types of degraded images may demand different treatments. Hence, we classify images into several classes based on the statistical information of the respective images. Afterwards, an adaptive gamma correction (AGC) is proposed to appropriately enhance the contrast of the image, where the parameters of AGC are set dynamically based on the image information. Extensive experiments along with qualitative and quantitative evaluations show that the performance of AGC is better than that of other state-of-the-art techniques.

1 Introduction

Since digital cameras have become inexpensive, people have been capturing a large number of images in everyday life. These images are often affected by atmospheric changes [1], the poor quality of the image-capturing devices, the lack of operator expertise, etc. In many cases, these images might demand enhancement for making them more acceptable to the common people. Furthermore, image enhancement is needed because of its wide range of application in areas such as atmospheric sciences [2], astrophotography [3], medical image processing [4], satellite image analysis [5], texture analysis and synthesis [6], remote sensing [7], digital photography, surveillance [8], and video processing applications [9].

Enhancement covers different aspects of image correction such as saturation, sharpness, denoising, tonal adjustment, tonal balance, and contrast correction/enhancement. This paper mainly focuses on contrast enhancement for different types of images. The existing contrast enhancement techniques can be categorized into three groups: global, local, and hybrid techniques. In global enhancement techniques, each pixel of an image is transformed by a single transformation function. However, different parts of an image might demand different types of enhancement, and thus global techniques may create over-enhancement and/or under-enhancement problems in some parts of the image [10]. To solve this problem, local enhancement techniques have been proposed, where the transformation of an image pixel depends on the neighboring pixels’ information. However, such a transformation lacks global brightness information and may introduce local artifacts [11]. Moreover, the computational complexities of these methods are high compared to those of global enhancement techniques. Hybrid enhancement techniques comprise both global and local enhancement: the transformation considers both the neighboring pixels’ and the global image information [12]. However, the parameter(s) controlling the contributions of the local and global transformations to the final output need to be tuned differently for different images. Hence, a trade-off must be made in choosing the type of enhancement technique. In this work, we focus on deriving a global enhancement technique which is computationally less complex and, at the same time, suitable for a large variety of images.

A very common observation for most of the available techniques is that no single technique performs well across different images, owing to the differing characteristics of the images. Figure 1 presents two visually unpleasant images on which two renowned global image enhancement techniques, i.e., histogram equalization (HE) [13] and adaptive gamma correction with weighting distribution (AGCWD) [14], have been applied. The results show that HE produces a better result for the “bean” image but not for the “girl” image, while AGCWD produces a better result for the “girl” image but not for the “bean” image. Hence, to overcome this problem with a single technique, the image characteristics should be analyzed first, and based on these characteristics, images need to be separated into classes. An enhancement technique should then transform the images appropriately according to the class they belong to.

Fig. 1

Enhancement by different methods (top-down → “bean,” “girl”). a Original. b HE. c AGCWD

To handle different types of images, Tsai et al. [15] classified images into six groups and applied enhancement techniques for the respective groups of images. However, the predefined values used in the classification may not work for all the cases, whereas an adaptive classification method based on the statistical information is expected to work well in most of the cases.

To mitigate these problems, we propose a global technique named adaptive gamma correction (AGC), which requires less computation and enhances each type of image according to its characteristics. To this end, the main contributions of our work are as follows:

  • We propose an automatic image classification technique based on the statistical information of an image.

  • For enhancing the contrast of each class of images, we develop a modified gamma correction technique where the parameters are dynamically set, resulting in quite different transformation functions for different classes of images while requiring less computation time.

Experimental results show that the dynamic parameters are set appropriately to produce the expected improvement of the images.

The rest of this paper is organized as follows. Section 2 presents an overview of the existing works. Section 3 presents our proposed solution. Section 4 demonstrates the efficacy of AGC and lists the experimental results to illustrate its performance as compared to other existing methods. Finally, Section 5 concludes the findings.

2 Literature review

To enhance the contrast of an image, various image enhancement techniques have been proposed [10, 16–19]. Histogram equalization (HE) is one such widely used technique [13]. However, HE does not always give satisfactory results since it might cause over-enhancement for frequent gray levels and loss of contrast for less frequent ones [18]. In order to mitigate over-enhancement problems, brightness preserving bi-histogram equalization (BBHE) [19], dualistic sub-image histogram equalization (DSIHE) [20], and minimum mean brightness error bi-histogram equalization (MMBEBHE) [21] have been proposed, which partition a histogram before applying HE. BBHE partitions the histogram based on the image mean, whereas DSIHE uses the image median. MMBEBHE recursively partitions the image histogram into multiple groups based on the mean brightness error (MBE). With this technique, however, the desired improvement may not always be achieved, and the difference between the input and output images may be minimal [18]. Moreover, because of the recursive calculation of MBE, its computational complexity is much higher than that of the other techniques [22].

A combination of BBHE and DSIHE is the recursively separated and weighted histogram equalization (RSWHE) [18], which preserves the brightness and enhances the contrast of an image. The core idea of this algorithm is to break down a histogram into two or more parts and apply weights in the form of a normalized power law function for modifying the sub-histograms. Finally, it performs histogram equalization on each of the weighted histograms. However, statistical information of the image may be lost after the transformation, deteriorating the quality of the image [14].

Some other methods have also been proposed, ranging from traditional gamma correction to more complex methods utilizing a depth image histogram [23], pixel contextual information [11], etc., for analyzing image context, and pipelining of different stages [24] to speed up the process. Celik and Tjahjadi propose contextual and variational contrast (CVC) [11], which exploits inter-pixel contextual information and performs the enhancement using a smoothed 2D target histogram. As a result, the computational complexity of this technique becomes very large. Adaptive gamma correction with weighting distribution (AGCWD) [14] derives a hybrid histogram modification function combining traditional gamma correction and histogram equalization. Although AGCWD enhances the contrast while preserving the overall brightness of an image, this technique may not give the desired results when an input image lacks bright pixels, since the highest intensity in the output image is bounded by the maximum intensity of the input image [25]: the highest enhanced intensity will never exceed the maximum intensity of the input image.

Coltuc et al. propose exact histogram specification (EHS) [16] based on a strict ordering of the pixels of an image. It guarantees that the histogram will be uniform [26] after enhancement, and thus increases the contrast of the image while ignoring insignificant errors. However, EHS uses a Gaussian model, which is not appropriate for most natural images [27]. Tsai and Yeh introduce an appropriate piecewise linear transformation (APLT) function for color images by analyzing the contents of an image [28]. APLT may cause over-enhancement and loss of image details in some cases, e.g., when an image contains a homogeneous background [29].

Celik and Tjahjadi have recently proposed an adaptive image equalization algorithm [30] where the input histogram is first modeled as a Gaussian mixture, and the intersection points of the Gaussian components are then used to partition the dynamic range of the image. This technique may not enhance very poorly illuminated images [31].

The layered difference representation (LDR) proposed by Lee et al. [32] divides an image into multiple layers, derives a transformation function for each layer, and aggregates them into a final desired transformation function. Here, all pixels are considered equally, though foreground pixels have more importance than background pixels [33]. The histogram modification framework (HMF) [34] handles such problems by reducing the contributions of large smooth areas, which often correspond to background regions. Thus, it enhances object details by degrading background details, and hence this method may not suffice if we want to see the background details [35].

In order to enhance different parts of an image in different ways, bilateral Bezier curve (BBC) method [36] partitions the image histogram into dark and bright regions, creates transformation curves separately for each segment, and merges these two curves to get the final mapping. However, BBC often generates significant distortions in the image due to brightening and over-enhancement [37].

In general, most of the contrast enhancement techniques fail to produce satisfactory results for diversified images such as dark, low-contrast, bright, mostly dark, high-contrast, mostly bright ones. To get rid of this problem, Tsai et al. [15] propose a decision tree-based contrast enhancement technique, which first classifies images into six groups, and then applies a piecewise linear transformation for each group of the images. However, the classification is performed using manually defined thresholds which may not always fit to enhance different types of images properly.

From the above discussion, it is evident that the available techniques for enhancing the contrast of an image might not be applied for all types of images. A technique producing good results for some images may fail on some other images. To solve this problem, we propose a computationally simple method utilizing an automatic image classification mechanism along with a suitable enhancement method for each of the image classes.

3 Proposed method

The main objective of the proposed technique is to transform an image into a visually pleasing one by maximizing the detail information. This is done by increasing the contrast and brightness without incurring any visual artifact. To achieve this, we propose an adaptive gamma correction (AGC) method which dynamically determines an intensity transformation function according to the characteristics of the input image. The proposed AGC consists of several steps, as presented in Fig. 2. The details of each step are described in the following subsections.

Fig. 2

Functional block diagram of the proposed technique

3.1 Color transformation

Several color models [13], such as red-green-blue (RGB), Lab, HSV, and YUV, are available in the image processing domain. However, images are usually available in the RGB color space, where the three channels are highly correlated. Hence, intensity transformations done in the RGB space are likely to change the color of the image. For AGC, we adopt the HSV color space, which separates the color and brightness information of an image into hue (H), saturation (S), and value (V) channels. The HSV color model provides a number of advantages, such as representing color in a way well suited to human perception and separating the color information completely from the brightness (or lightness) information [28, 38, 39]. Hence, enhancing the V-channel does not change the original color of a pixel.
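As an illustration, this channel separation can be sketched per pixel with Python’s standard `colorsys` module (a minimal sketch; a real implementation would operate on whole arrays, e.g., via OpenCV’s `cvtColor`, and `enhance_pixel` is a hypothetical helper, not part of the paper):

```python
import colorsys

def enhance_pixel(r, g, b, transform):
    """Convert an RGB pixel (values in [0, 1]) to HSV, apply an intensity
    transformation to the V channel only, and convert back. Hue and
    saturation are untouched, so the pixel's color is preserved."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return colorsys.hsv_to_rgb(h, s, transform(v))

# Example: a simple gamma curve applied to V only (gamma = 0.5 brightens).
r, g, b = enhance_pixel(0.2, 0.1, 0.05, lambda v: v ** 0.5)
```

Because only V changes, the output RGB triple keeps the same hue and saturation as the input; the three channels are simply rescaled together.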

3.2 Image classification

Every image has its own characteristics, and the enhancement should be done based on that. To appropriately handle different images, the proposed AGC first classifies an input image I into either low-contrast class ϱ 1 or high (or moderate) contrast class ϱ 2 depending on the available contrast of the image using Eq. (1).

$$ g(I)= \left\{\begin{array}{ll} {\varrho_{1}}, & {D}\leq 1/\tau\\ {\varrho_{2}}, & \text{otherwise} \end{array}\right. $$
(1)

where D = diff((μ+2σ), (μ−2σ)) = 4σ and τ is a parameter used for defining the contrast of an image. σ and μ are the standard deviation and the mean of the image intensity, respectively.

Equation (1) classifies an image as a low-contrast one when most of the pixel intensities of that image are clustered within a small range (cf. Fig. 3). The criterion in Eq. (1) is guided by Chebyshev’s inequality, which states that at least 75 % of the values of any distribution stay within 2σ of its mean on both sides [40]. This leads to the simpler form of the criterion for an image to be classified as low contrast: 4σ ≤ 1/τ. From our experience, we have found that τ=3 is a suitable choice for characterizing the contrasts of different images.
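The Chebyshev bound behind this criterion is easy to verify numerically; the sketch below (using an assumed, deliberately skewed test distribution) checks that at least 75 % of the samples fall within 2σ of the mean:

```python
import numpy as np

# Empirical check of the Chebyshev bound that motivates Eq. (1):
# for any distribution, at least 75% of values lie within 2*sigma of the mean.
rng = np.random.default_rng(0)
x = rng.exponential(size=100_000)  # a skewed, non-Gaussian distribution
mu, sigma = x.mean(), x.std()
frac = np.mean(np.abs(x - mu) <= 2 * sigma)
# For this exponential sample, frac is about 0.95, comfortably above 0.75.
```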

Fig. 3

Low-contrast image nature

Again, depending on the brightness of the image, different image intensities should be modified differently. Hence, we divide each of the ϱ 1 and ϱ 2 classes into two sub-classes, bright and dark, based on whether the image mean intensity μ≥0.5 or not. Thus, AGC makes use of the four classes as shown in Fig. 4.
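Putting the two splits together, the classification step can be sketched as follows (`classify` is a hypothetical helper operating on a V-channel normalized to [0, 1]):

```python
import numpy as np

def classify(v, tau=3):
    """Assign a V-channel image (values in [0, 1]) to one of the four AGC
    classes of Fig. 4: Eq. (1) decides low vs. high/moderate contrast
    (D = 4*sigma compared against 1/tau), and the mean decides dark vs. bright."""
    mu, sigma = float(v.mean()), float(v.std())
    contrast = "low" if 4 * sigma <= 1.0 / tau else "high"
    tone = "bright" if mu >= 0.5 else "dark"
    return contrast, tone
```

For example, a nearly uniform dark patch lands in the low-contrast dark class, while intensities spread over the full range land in the high-contrast class.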

Fig. 4

Image classification

3.3 Intensity transformation

The transformation function of the proposed AGC is based on the traditional gamma correction given by

$$ I_{\text{out}} = c\,I_{\text{in}}^{\gamma} $$
(2)

where I in and I out are the input and output image intensities, respectively. c and γ are two parameters that control the shape of the transformation curve. In contrast to traditional gamma correction, AGC sets the values of γ and c automatically using image information, making it an adaptive method. In the following subsections, we describe the procedure of setting these two parameters for different classes of images.

3.3.1 Enhancement of low-contrast image

According to the classification done in Eq. (1), the images falling into group ϱ 1 have poor contrast. Low σ implies that most of the pixels have similar intensities. So, the pixel values should be scattered over a wider range to enhance the contrast.

In gamma correction, γ controls the slope of the transformation function. The higher the value of γ is, the steeper the transformation curve becomes. And the steeper the curve is, the more the corresponding intensities are spread, causing more increase of contrast. In AGC, we conveniently do this for low-contrast images by choosing the value of γ calculated by

$$ \gamma=-\log_{2}(\sigma) $$
(3)

Figure 5 demonstrates a plot of γ with respect to σ using the above formula, which shows a decreasing curve. Note that σ is small in the ϱ 1 class; hence, large γ values are obtained, which cause a large increase of contrast, as expected.

Fig. 5

γ values for different σ

In traditional gamma correction, c is used for brightening or darkening the output image intensities. However, in AGC, we allow c to have more influence on the transformation. The proposed AGC uses different values of c for different images, depending on the nature of the respective image, according to

$$ c=\frac{1}{1+\text{Heaviside}(0.5-\mu)\times(k-1)} $$
(4)

where k is defined by

$$ k=I_{\text{in}}^{\gamma} + \left(1-I_{\text{in}}^{\gamma}\right)\times\mu^{\gamma} $$
(5)

and the Heaviside function [41] is given by

$$ \text{Heaviside}(x)= \left\{ \begin{array}{ll} 0, & x \leq 0\\ 1, & x > 0 \end{array}\right. $$
(6)

Such choices of γ and c enable AGC to handle bright and dark images in ϱ 1 class in different and appropriate manners. The effectiveness of the proposed transformation function is described in the following subsections.

3.3.1.1 Bright images in ϱ 1

For low-contrast bright images (μ≥0.5), the major concern is to increase the contrast for better distinguishability of the image details that are made up of high intensities. Hence, in AGC, according to Eq. (4), c becomes 1 for such images, and the transformation function becomes

$$ I_{\text{out}}= I_{\text{in}}^{\gamma} $$
(7)

For increasing the contrast in this type of images, the transformation curve should spread out the bright intensities over a wider range of darker intensities. To achieve this, according to AGC, we need γ to be larger than 1, which is assured by Lemma 1.

Lemma 1

For low-contrast images, γ remains greater than 1.

Proof

For low-contrast images, the minimum value of γ in AGC will be \(\gamma _{\text {min}}= -\log _{2}\left (\sigma _{\text {max}}\right)= -\log _{2}\left (\frac {1}{4\tau }\right)\). For a choice of τ=3, we get γ min=− log2(0.0833)=3.585 >1. □

The lower curves in Fig. 6 represent the transformation effects for low-contrast bright images. We get different curves with different slopes depending on the value of σ. A lower σ produces higher spread of intensities, resulting in more increase of contrast.

Fig. 6

Transformation curves for images with low-contrast

3.3.1.2 Dark images in ϱ 1

Most of the intensities of an image in this class are clustered in a small range of dark gray levels around the image mean. For increasing the contrast of such images, the transformation curve needs to spread out the dark intensities toward the higher intensities. This requires a transformation curve that lies above the line I out=I in. The transformation function is also desired to spread the “clustered” intensities more than the other intensities.

For a dark image (μ<0.5) with low-contrast, Eqs. (4) and (5) are used and the final transformation function becomes

$$ I_{\text{out}}=\frac{I_{\text{in}}^{\gamma}}{I_{\text{in}}^{\gamma} + \left(1-I_{\text{in}}^{\gamma}\right)\times \mu^{\gamma}} $$
(8)

Figure 6 shows that the transformation functions produced by AGC for low-contrast dark images indeed fall above the line I out=I in. Again, the steepness of the curves is greater for the lower contrast (i.e., low σ) images, as desired. More interestingly, the steep portion of the curve moves with the value of μ. This ensures that the intensities around μ are spread more in the output image. Such a behavior of the transformation is very much expected, since most of the intensities fall around μ in this class of images.

Figure 7 presents two low-contrast dark and bright images and their histograms along with the corresponding transformation curves as well as the enhanced images and histograms after applying AGC. In the input image histograms, most of the intensities are accumulated within a very limited range. After applying the AGC, the intensities are distributed over wider ranges.

Fig. 7

Low-contrast images. a Original dark (“bean”), b enhanced by AGC along with transformation curve of a. c Original bright (“cup”), d enhanced by AGC along with transformation curve of c
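The two ϱ 1 cases above combine into a single routine; the sketch below is our reading of Eqs. (3)–(8) on a V-channel normalized to [0, 1] (note that k, and hence c, depends on the pixel intensity itself):

```python
import numpy as np

def agc_low_contrast(v):
    """Sketch of the AGC transform for low-contrast (rho_1) images.

    v: V-channel array with intensities in [0, 1].
    Bright images (mu >= 0.5): Heaviside(0.5 - mu) = 0, so c = 1 and the
    result is v**gamma (Eq. 7). Dark images: c = 1/k, giving Eq. (8)."""
    mu, sigma = float(v.mean()), float(v.std())
    gamma = -np.log2(sigma)                  # Eq. (3)
    vg = v ** gamma
    if mu >= 0.5:
        return vg                            # Eq. (7)
    k = vg + (1.0 - vg) * mu ** gamma        # Eq. (5), per pixel
    return vg / k                            # Eq. (8)
```

On a narrow band of dark intensities this pushes values upward and spreads them out; on a narrow band of bright intensities it spreads them down over a wider range, as Fig. 6 illustrates.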

3.3.2 Enhancement of high- or moderate-contrast image

An image falls into the ϱ 2 class when its intensities are appreciably scattered over the available dynamic range. Brightness adjustment is usually more important than contrast enhancement for such images. In this case, I out and c are calculated as in Eqs. (2) and (4). γ is now calculated differently, using Eq. (9), so as not to stretch the contrast much

$$ \gamma=\exp{\left[(1-(\mu+\sigma))/2 \right]} $$
(9)

Lemma 2 confirms that γ falls within a small range around 1 for this class of images, as desired, ensuring not much change in contrast.

Lemma 2

For high- or moderate-contrast images, γ ∈ [0.90, 1.65].

Proof

The minimum value of γ is found when (μ+σ) takes its maximum possible value \(\max \left (\mu + \sqrt {\mu -\mu ^{2} }\right)\), since for intensities in [0,1] we have σ 2≤μ−μ 2. Thus, the maximum of (μ+σ) is \( \frac {1}{2}+ \frac {1}{\sqrt {2}} =1.2071\). This gives the minimum value \(\gamma =\exp {\left [\frac {\left (\frac {1}{2} - \frac {1}{\sqrt {2}} \right)}{2}\right ]}=0.9016278\), while (μ+σ)=0 gives the maximum \(\gamma =\sqrt e\). Hence, \( 0.9016278 \leq \gamma \leq \sqrt e =1.64872\). □
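As a quick numerical sanity check of Lemma 2, the sketch below sweeps μ over (0, 1) with the largest feasible σ = √(μ−μ²) (the bound used in the proof) and evaluates Eq. (9):

```python
import numpy as np

# Sweep feasible (mu, sigma) pairs at the sigma bound and evaluate Eq. (9).
mus = np.linspace(0.001, 0.999, 999)
sigmas = np.sqrt(mus - mus ** 2)
gammas = np.exp((1.0 - (mus + sigmas)) / 2.0)

gamma_min = gammas.min()   # attained near mu = 1/2 + 1/(2*sqrt(2)), about 0.9016
gamma_max = np.exp(0.5)    # mu = sigma = 0 gives sqrt(e), about 1.6487
```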

We now discuss the effect of γ on the dark and bright images.

3.3.2.1 Dark images in ϱ 2

For images with μ<0.5, we have (μ+σ)≤1, since both μ and σ are less than (or equal to) 0.5, which implies γ≥1.

Figure 8 presents the transformation curves for different values of μ and σ. Here, we see that the transformation curves pass above the linear curve I out=I in, transforming dark pixels into brighter ones. Note also that the lower the mean of the input image, the sharper the increase in the darker pixel values (steeper curves in Fig. 8). This increases the visibility of dark images.

Fig. 8

Transformation curve for high- or moderate-contrast dark images

For dark images with a comparatively larger mean (μ → 0.5 but μ<0.5), the transformation curves are very close to the linear curve, i.e., not much change is made to the intensities.

3.3.2.2 Bright images in ϱ 2

For this class of images, I out, c, and γ are calculated using Eqs. (2), (4), and (9), respectively. In this case, the images already have good quality with respect to brightness and contrast, so the main target is to preserve the image quality. Figure 9 shows the transformation curves for different values of μ and σ. Here, the curves lie very close to the line I out=I in, causing little change in contrast and ensuring few changes to the intensities, as expected.

Fig. 9

Transformation curve for high or moderate contrast bright images

Note that for the maximally scattered image, i.e., for \(\sigma =\sigma _{\text {max}}=\frac {1}{2} \) and \(\mu =\frac {1}{2}\) (i.e., when half of the image pixels are at zero intensity and the other half at the maximum intensity 1), we need not change the image: it has the maximum contrast and is already enhanced. Here, we need a linear transformation curve, and Eq. (9) produces exactly γ=1, meeting the requirement.

Figure 10 presents two moderate- or high-contrast images and their histograms along with the corresponding transformation curves as well as the enhanced images and histograms after applying AGC. Upon the application of AGC, the gray levels of the images are distributed over wider ranges in the histograms as desired.

Fig. 10

Moderate- or high-contrast image. a Original dark (“fountain”), b enhanced by AGC along with transformation curve of a. c Original bright (“lenna”), d enhanced by AGC along with transformation curve of c
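The pieces of Section 3 assemble into one short routine; the sketch below is our reading of the full method on a normalized V-channel, dispatching between Eq. (3) and Eq. (9) for γ and applying the shared c/k machinery of Eqs. (4)–(6). The `eps` guard against log₂(0) is our own assumption, not part of the paper:

```python
import numpy as np

def agc(v, tau=3, eps=1e-6):
    """Sketch of the full AGC transform on a V-channel in [0, 1]."""
    mu, sigma = float(v.mean()), float(v.std())
    if 4 * sigma <= 1.0 / tau:                      # rho_1: low contrast, Eq. (1)
        gamma = -np.log2(max(sigma, eps))           # Eq. (3); eps guards log2(0)
    else:                                           # rho_2: moderate/high contrast
        gamma = np.exp((1.0 - (mu + sigma)) / 2.0)  # Eq. (9)
    vg = v ** gamma
    if mu >= 0.5:                                   # bright: Heaviside(0.5 - mu) = 0, c = 1
        return vg
    k = vg + (1.0 - vg) * mu ** gamma               # Eq. (5)
    return vg / k                                   # Eqs. (4), (8): c = 1/k
```

For the maximally scattered image discussed above (half the pixels at 0, half at 1), this returns the input unchanged, since γ = 1 and c = 1.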

4 Experimental result

In this paper, we have three major concerns: (i) appropriate classification of an image, (ii) transformation of the image for acceptable contrast enhancement, and (iii) low computational complexity. We compare the performance of the proposed AGC with some other state-of-the-art techniques, namely HE [13], EHS [16], CVC [11], LDR [32], HMF [34], RSWHE [18], and AGCWD [14]. The comparison is performed in both qualitative and quantitative manners.

4.1 Qualitative assessment

For qualitative assessment, we first consider a few representative images from each of the four proposed classes (shown in Fig. 11). Besides these, we also consider some other images used in [42] and [13].

Fig. 11

Enhancement results of different methods (top-down → “cat”, “bean”, “cup”, “woman”, “cave” and “rose”). a Original. b HE. c EHS. d CVC. e LDR. f HMF. g RSWHE. h AGCWD. i Proposed

The “bean” and “cat” images belong to the dark low-contrast class (on a scale of 255: bean, D=69.05, μ=38.73; cat, D=44.17, μ=82.95) and contain most of their pixels in the dark region. The main challenge for these images is to remove the haziness and increase the brightness without creating any artifact. To demonstrate the superiority of AGC, consider the portion of the “cat” image (the red rectangular region in Fig. 11) that represents a plain wall (where intensities vary from 88 to 99). HE creates abnormal effects because the intensities of this region vary from 161 to 255 in the output image, which creates local artifacts. Over-enhancement is also observed in EHS. In contrast, following Eqs. (3), (4), (5), and (8), AGC transforms the intensities to the range [202, 255]. As a result, a visually soothing area is created on the wall in the transformed image. Similar effects are also observed in the “bean” image for the different methods. Large contrast is desired to clearly visualize the black dots on the white beans. HE and EHS are almost successful in this case. However, CVC highly deteriorates the image quality due to the creation of local artifacts. AGCWD does not create any significant effect on the input images because this method is unable to increase the brightness beyond the highest input intensity (104 for “cat” and 83 for “bean”). RSWHE slightly equalizes the original image in order to preserve the original brightness but does not improve the image quality. LDR increases the brightness considerably, whereas HMF fails to increase it sufficiently. In contrast, owing to the proposed classification along with the proper transformation function, the results of AGC are quite acceptable.

The “cup” image of Fig. 11 is also of low contrast, but most of its pixels are too bright (on a scale of 255, D=81.70, μ=204.86). The most frequent intensity of the cup image is 255, which is also the maximum intensity of this image, whereas the minimum intensity is 111. In the case of HE and EHS, some of the pixels become too dark, which is not desirable. CVC, LDR, and HMF fail to extract detail information in a few cases. AGCWD transforms most of the intensities into a white range ([128, 255]) and makes the image brighter than expected. On the other hand, AGC appropriately transforms (using Eqs. (3), (4), (5), and (7)) a large number of the brighter pixels into darker ones, enhancing the contrast. The proposed AGC performs better than the others because one pixel does not directly influence the transformation of its neighboring pixels. Rather, global information such as the image mean and standard deviation influences the transformation of AGC.

The “cave” and “woman” images represent the dark moderate-/high-contrast class (on a scale of 255: cave, D=261.22, μ=61.99; woman, D=210.73, μ=85.22). This is because the pixels of these images are scattered over the whole histogram, and more than half of them are in the dark region. The main challenge of the “cave” and “woman” images is to increase the brightness without creating any artifact, especially in the light regions of the images. In the “woman” image, 75 % of the pixel intensities are lower than 100, and we need to preserve the effect of the lights. HE and EHS over-enhance the image because intensity values above 100 are dramatically changed, creating local artifacts around the lights. LDR, HMF, and CVC produce almost identical results, and the outputs are mostly dark. RSWHE largely deteriorates the original image. Although AGCWD produces comparatively better results than the other existing methods, its outputs lack the desired brightness. The proposed method (AGC) increases the brightness of the input images and also keeps good contrast using Eqs. (2), (4), and (9). In the “cave” image, 85 % of the intensities are lower than 75, but the maximum intensity is 255. As with the “woman” image, similar effects are observed for the “cave” image: AGC produces better contrast by effectively classifying and transforming the pixel intensities using Eqs. (2), (4), and (9).

The “rose” image has good contrast and soothing brightness (on a scale of 255, D=265.51, μ=218.79). We choose this image to examine the outcomes of the different methods on a good-quality image. In this image, the minimum and maximum intensities are 11 and 255, respectively, and more than 60 % of the intensities are exactly 255 (mostly coming from the white background). HE and EHS create annoying artifacts which are strongly visible on the leaves and the petals. HE transforms most of the bright pixels to darker values (e.g., 254 to 101, 200 to 53); as a result, the petals of the rose become dark. Similarly, EHS and RSWHE transform pixels of intensity 255 to lower ones and create dark shades in the background. CVC, LDR, and HMF make the image darker, whereas RSWHE and AGCWD increase the brightness. In all of these cases, the output image loses its original contrast. However, AGC does not affect most of the intensities because of the nearly linear nature of its transformation curve (shown in Fig. 9, using Eqs. (2), (4), and (9)). Hence, the visual information of the image is preserved and slightly enhanced.

Although we have discussed our four different cases with example images (Fig. 11), several other images from [42] and [13] are also considered to show the robustness of AGC, as shown in Fig. 12. These results also advocate the superiority of AGC. For example, in the case of the “girl” image, HE makes the thin necklace visible, but there are too many artifacts on the hair. LDR, HMF, and CVC produce good results on the hair, but they do not keep the original color of the face, and the brightness is not increased. EHS enhances the image with too many artifacts on the hair. Although AGCWD produces comparatively better results than the other existing methods, the output lacks the desired brightness. On the other hand, AGC increases the brightness of the input image and also keeps the desired color contrast. For “dark-ocean,” the main challenges are to increase the brightness while keeping the sunlight ray and the river stream arcs clearly visible. Figure 12 (“room”) presents a raw image which is almost dark; in this case, too, AGC increases the contrast and brightness better than the other techniques. The “white-rose” image in Fig. 12 contains pattern noise [13]. Here, our method yields a better image than the other methods without amplifying the noise much.

Fig. 12

Enhancement results of different methods (top-down → “girl,” “dark-ocean,” “fountain,” “building,” “room,” and “white-rose”). a Original. b HE. c EHS. d CVC. e LDR. f HMF. g RSWHE. h AGCWD. i Proposed

4.2 Quantitative measurement

Enhancement or improvement of the visual quality of an image is a subjective matter, because its judgment varies from person to person. Quantitative measurements allow us to establish numerical justifications. However, quantitative evaluation of contrast enhancement is not an easy task due to the lack of any universally accepted criterion. Here, we assess the performance of the enhancement techniques using three quality metrics, namely root-mean-square (rms) contrast, execution time (ET), and discrete entropy (DE).

4.2.1 Root-mean-square (rms) contrast

A common way to define the contrast of an image is to measure its rms contrast [43, 44], defined in Eq. (10), where the image has M×N pixels, I_ij denotes the intensity of the pixel at position (i, j), and μ is the mean intensity. A larger rms value represents better image contrast.

$$ \text{rms}=\sqrt{\frac{1}{MN}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\left(\mu-I_{ij}\right)^{2}} $$
(10)
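As a sketch, the rms contrast of Eq. (10) can be computed directly with NumPy (the function name and the use of float64 are our own choices, not part of the paper):

```python
import numpy as np

def rms_contrast(image):
    """Root-mean-square contrast of a grayscale image, as in Eq. (10):
    the standard deviation of the pixel intensities about the mean."""
    I = np.asarray(image, dtype=np.float64)
    mu = I.mean()
    return float(np.sqrt(np.mean((mu - I) ** 2)))
```

For a flat image the rms contrast is 0; a checkerboard of 0 and 255 yields 127.5.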

Tables 1 and 2 present the rms contrast produced by the different methods. In Table 1, we use the images presented in Fig. 11. To further demonstrate the robustness of the proposed method, in Table 2, we consider a collection of 4000 images (1000 for each group: low-contrast bright (G1) and dark (G2) images, and moderate-/high-contrast bright (G3) and dark (G4) images). We select the images from Gonzalez et al. [13], Extended Yale B [45], Caltech-UCSD birds 200 [46], Corel [47], Caltech 256 [48], and Outex-TC-00034 [49] datasets. From these two tables, it is clear that the proposed method provides larger rms values than the other methods. Though HE produces the highest rms in three groups (shown in Table 2), these results are due to the over-enhancement of the images. Apart from this, AGC demonstrates the best performance among the state-of-the-art techniques due to its better classification and appropriate transformation function.

Table 1 Root-mean-square contrast for the images used in Fig. 11
Table 2 Root-mean-square contrast calculated for the test images of four groups

4.2.2 Execution time

Execution time is an important metric in image processing, as it measures the practical running cost of an algorithm. Figure 13 presents the execution time needed to run each algorithm. CVC requires more execution time than all the other methods because it considers neighboring pixel information. LDR, EHS, AGCWD, RSWHE, and HMF also take comparatively longer. The execution time of AGC is always the lowest among the compared methods, even lower than that of HE, because its transformation function considers only the current pixel’s intensity, the image mean, and the standard deviation. Hence, owing to its low computational cost, AGC can be easily adopted in real-time applications such as surveillance cameras and webcams.
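The adaptive transformation of AGC is defined earlier in the paper; as an illustration of why such global gamma-style mappings are cheap, the sketch below applies a plain fixed-gamma correction through a 256-entry lookup table, so the per-pixel cost is a single table access. Note that this is not the proposed AGC (whose gamma is set dynamically from the image statistics); here gamma is a user-supplied constant:

```python
import numpy as np

def gamma_correct(image, gamma):
    """Plain (fixed-gamma) correction of an 8-bit grayscale image via a
    precomputed 256-entry lookup table: out = 255 * (in / 255) ** gamma.
    Illustrative only; the paper's AGC chooses gamma adaptively."""
    lut = ((np.arange(256) / 255.0) ** gamma * 255.0).round().astype(np.uint8)
    return lut[np.asarray(image, dtype=np.uint8)]
```

With gamma = 1 the mapping is the identity; gamma < 1 brightens dark regions, gamma > 1 darkens them.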

Fig. 13

Execution time (log10 scale)

4.2.3 Discrete entropy

Entropy is a measure of the uncertainty of a random variable. The more random the pixel intensities, the more entropy the image has [50]; low entropy indicates low image contrast. Tables 3 and 4 illustrate the DE values achieved by the different methods. Table 3 shows that AGC produces larger entropy in most of the cases and performs best among the compared methods. Table 4 presents the DE values calculated for the 4000 images mentioned earlier, which also show that AGC performs better than the other state-of-the-art methods.

Table 3 Discrete entropy for the images used in Fig. 11
Table 4 Discrete entropy calculated for the test images of four groups
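The discrete entropy used above is the Shannon entropy of the intensity histogram, H = -Σ p(k) log2 p(k) over the 256 gray levels. A minimal NumPy sketch (function name is ours):

```python
import numpy as np

def discrete_entropy(image):
    """Discrete (Shannon) entropy of an 8-bit grayscale image, in bits:
    H = -sum_k p(k) * log2 p(k), over the 256 intensity levels."""
    hist, _ = np.histogram(np.asarray(image).ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins; by convention 0 * log 0 = 0
    return float(-np.sum(p * np.log2(p)))
```

A constant image has entropy 0 bits; an image split evenly between two gray levels has entropy 1 bit, and a fully uniform histogram reaches the maximum of 8 bits.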

5 Conclusions

In this paper, we have proposed a simple, efficient, and effective technique for contrast enhancement, called adaptive gamma correction (AGC). This method generates visually pleasing enhancement for different types of images. Unlike most existing methods, AGC sets the values of its parameters dynamically based on the image information. Performance comparisons with other state-of-the-art enhancement algorithms show that AGC achieves the most satisfactory contrast enhancement under different illumination conditions. As AGC has a low computational cost, it can be incorporated into several application areas such as digital photography, video processing, and other consumer electronics applications.

6 Endnote

1 For better understanding, we use the [0-255] scale instead of the [0-1] scale.

References

1. S-C Huang, B-H Chen, W-J Wang, Visibility restoration of single hazy images captured in real-world weather conditions. Circ. Syst. Video Technol. IEEE Trans. 24(10), 1814–1824 (2014).

2. GP Ellrod, Advances in the detection and analysis of fog at night using GOES multispectral infrared imagery. Weather Forecast. 10(3), 606–619 (1995).

3. S Bedi, R Khandelwal, Various image enhancement techniques—a critical review. Int. J. Adv. Res. Comput. Commun. Eng. 2(3), 1605–1609 (2013).

4. M Zikos, E Kaldoudi, S Orphanoudakis, Medical image processing. Stud. Health Technol. Inf. 43(Pt B), 465–469 (1997).

5. M Neteler, H Mitasova, in Open Source GIS: A Grass GIS Approach. Satellite image processing (Springer, New York, 2002), pp. 207–262.

6. AA Efros, WT Freeman, in Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques. Image quilting for texture synthesis and transfer (ACM, Los Angeles, 2001), pp. 341–346.

7. JR Jensen, K Lulla, Introductory digital image processing: a remote sensing perspective. Geocarto International. 2(1), 65 (1987).

8. MA Renkis, Video surveillance sharing system and method. Google Patents. US Patent 8,842,179 (2014).

9. F Arman, A Hsu, M-Y Chiu, in Proceedings of the First ACM International Conference on Multimedia. Image processing on compressed data for large video databases (ACM, Anaheim, 1993), pp. 267–272.

10. H Cheng, X Shi, A simple and effective histogram equalization approach to image enhancement. Digital Signal Process. 14(2), 158–170 (2004).

11. T Celik, T Tjahjadi, Contextual and variational contrast enhancement. Image Process. IEEE Trans. 20(12), 3431–3441 (2011).

12. A Ross, A Jain, J Reisman, A hybrid fingerprint matcher. Pattern Recog. 36(7), 1661–1673 (2003).

13. RC Gonzalez, RE Woods, Digital Image Processing, 3rd edn. (Pearson/Prentice Hall, Upper Saddle River, 2008).

14. S-C Huang, F-C Cheng, Y-S Chiu, Efficient contrast enhancement using adaptive gamma correction with weighting distribution. Image Process. IEEE Trans. 22(3), 1032–1041 (2013).

15. C-M Tsai, Z-M Yeh, Y-F Wang, Decision tree-based contrast enhancement for various color images. Mach. Vis. Appl. 22(1), 21–37 (2011).

16. D Coltuc, P Bolon, J-M Chassery, Exact histogram specification. Image Process. IEEE Trans. 15(5), 1143–1152 (2006).

17. K Hussain, S Rahman, S Khaled, M Abdullah-Al-Wadud, M Shoyaib, in Software, Knowledge, Information Management and Applications (SKIMA), 2014 8th International Conference On. Dark image enhancement by locally transformed histogram (IEEE, Dhaka, 2014), pp. 1–7.

18. M Kim, MG Chung, Recursively separated and weighted histogram equalization for brightness preservation and contrast enhancement. Consum. Electron. IEEE Trans. 54(3), 1389–1397 (2008).

19. Y-T Kim, Contrast enhancement using brightness preserving bi-histogram equalization. Consum. Electron. IEEE Trans. 43(1), 1–8 (1997).

20. Y Wang, Q Chen, B Zhang, Image enhancement based on equal area dualistic sub-image histogram equalization method. Consum. Electron. IEEE Trans. 45(1), 68–75 (1999).

21. S-D Chen, AR Ramli, Minimum mean brightness error bi-histogram equalization in contrast enhancement. Consum. Electron. IEEE Trans. 49(4), 1310–1319 (2003).

22. S-D Chen, AR Ramli, Preserving brightness in histogram equalization based contrast enhancement techniques. Digital Signal Process. 14(5), 413–428 (2004).

23. S-W Jung, Image contrast enhancement using color and depth histograms. Signal Process. Lett. IEEE. 21(4), 382–385 (2014).

24. S-C Huang, W-C Chen, A new hardware-efficient algorithm and reconfigurable architecture for image contrast enhancement. Image Process. IEEE Trans. 23(10), 4426–4437 (2014).

25. S Rahman, MM Rahman, K Hussain, SM Khaled, M Shoyaib, in Computer and Information Technology (ICCIT), 2014 17th International Conference On. Image enhancement in spatial domain: a comprehensive study (IEEE, Dhaka, 2014), pp. 368–373.

26. D Sen, SK Pal, Automatic exact histogram specification for contrast enhancement and visual system based quantitative evaluation. Image Process. IEEE Trans. 20(5), 1211–1220 (2011).

27. Y Wan, D Shi, Joint exact histogram specification and image enhancement through the wavelet transform. Image Process. IEEE Trans. 16(9), 2245–2250 (2007).

28. C-M Tsai, Z-M Yeh, Contrast enhancement by automatic and parameter-free piecewise linear transformation for color images. Consum. Electron. IEEE Trans. 54(2), 213–219 (2008).

29. S-H Yun, JH Kim, S Kim, in Consumer Electronics (ICCE), 2011 IEEE International Conference On. Contrast enhancement using a weighted histogram equalization (IEEE, Las Vegas, 2011), pp. 203–204.

30. T Celik, T Tjahjadi, Automatic image equalization and contrast enhancement using Gaussian mixture modeling. Image Process. IEEE Trans. 21(1), 145–156 (2012).

31. G Phadke, R Velmurgan, in Applications of Computer Vision (WACV), 2013 IEEE Workshop On. Illumination invariant mean-shift tracking (IEEE, Florida, 2013), pp. 407–412.

32. C Lee, C Lee, C-S Kim, Contrast enhancement based on layered difference representation of 2D histograms. Image Process. IEEE Trans. 22(12), 5372–5384 (2013).

33. J-T Lee, C Lee, J-Y Sim, C-S Kim, in Image Processing (ICIP), 2014 IEEE International Conference On. Depth-guided adaptive contrast enhancement using 2D histograms (IEEE, Paris, 2014), pp. 4527–4531.

34. T Arici, S Dikbas, Y Altunbasak, A histogram modification framework and its application for image contrast enhancement. Image Process. IEEE Trans. 18(9), 1921–1935 (2009).

35. C Lee, C Lee, Y-Y Lee, C-S Kim, Power-constrained contrast enhancement for emissive displays based on histogram equalization. Image Process. IEEE Trans. 21(1), 80–93 (2012).

36. F-C Cheng, S-C Huang, Efficient histogram modification using bilateral Bezier curve for the contrast enhancement. Display Technol. J. 9(1), 44–50 (2013).

37. EF Arriaga-Garcia, RE Sanchez-Yanez, J Ruiz-Pinales, M de Guadalupe Garcia-Hernandez, Adaptive sigmoid function bihistogram equalization for image contrast enhancement. J. Electron. Imaging. 24(5), 053009 (2015).

38. H-D Cheng, X Jiang, Y Sun, J Wang, Color image segmentation: advances and prospects. Pattern Recog. 34(12), 2259–2281 (2001).

39. NA Ibraheem, MM Hasan, RZ Khan, PK Mishra, Understanding color models: a review. ARPN J. Sci. Technol. 2(3), 265–275 (2012).

40. JG Saw, MC Yang, TC Mo, Chebyshev inequality with estimated mean and variance. Am. Stat. 38(2), 130–132 (1984).

41. KF Riley, MP Hobson, SJ Bence, Mathematical Methods for Physics and Engineering: a Comprehensive Guide (Cambridge University Press, Cambridge, 2006).

42. AR Rivera, B Ryu, O Chae, Content-aware dark image enhancement through channel division. Image Process. IEEE Trans. 21(9), 3967–3980 (2012).

43. E Peli, Contrast in complex images. JOSA A. 7(10), 2032–2040 (1990).

44. Z Wang, AC Bovik, HR Sheikh, EP Simoncelli, Image quality assessment: from error visibility to structural similarity. Image Process. IEEE Trans. 13(4), 600–612 (2004).

45. KC Lee, J Ho, D Kriegman, Acquiring linear subspaces for face recognition under variable lighting. IEEE Trans. Pattern Anal. Mach. Intelligence. 27(5), 684–698 (2005).

46. P Welinder, S Branson, T Mita, C Wah, F Schroff, S Belongie, P Perona, Caltech-UCSD birds 200. Tech. Report CNS-TR-2010-001, California Institute of Technology (2010).

47. P Duygulu, K Barnard, JF de Freitas, DA Forsyth, in Computer Vision-ECCV 2002. Object recognition as machine translation: learning a lexicon for a fixed image vocabulary (Springer, Copenhagen, 2002), pp. 97–112.

48. G Griffin, A Holub, P Perona, Caltech-256 object category dataset. Technical Report CNS-TR-2007-001, California Institute of Technology (2007). http://authors.library.caltech.edu/7694/.

49. T Ojala, T Mäenpää, M Pietikainen, J Viertola, J Kyllönen, S Huovinen, in Pattern Recognition, 2002. Proceedings. 16th International Conference On, 1. Outex-new framework for empirical evaluation of texture analysis algorithms (IEEE, Quebec, 2002), pp. 701–706.

50. S Gull, J Skilling, Maximum entropy method in image processing. Commun. Radar Signal Process. IEE Proc. F. 131(6), 646–659 (1984).

Acknowledgements

We are grateful to the anonymous reviewers for the corrections and useful suggestions that have substantially improved the paper. Further, we would like to thank M. G. Rabbani for discussions on some statistical issues. We would also like to thank C. Lee, Chul Lee, and C.-S. Kim for providing their source codes and A. R. Rivera and A. Seal for providing a few test images.

Authors’ contributions

SR, MR, AW, GA and MS have contributed in designing, developing, and analyzing the methodology, performing the experimentation, and writing and modifying the manuscript. All the authors have read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Corresponding author

Correspondence to Shanto Rahman.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Rahman, S., Rahman, M.M., Abdullah-Al-Wadud, M. et al. An adaptive gamma correction for image enhancement. J Image Video Proc. 2016, 35 (2016). https://doi.org/10.1186/s13640-016-0138-1
