
A review on dark channel prior based image dehazing algorithms

Abstract

The presence of haze in the atmosphere degrades the quality of images captured by visible camera sensors. The removal of haze, called dehazing, is typically performed under the physical degradation model, which necessitates the solution of an ill-posed inverse problem. To relieve the difficulty of the inverse problem, a novel prior called dark channel prior (DCP) was recently proposed and has received a great deal of attention. The DCP is derived from the characteristic of natural outdoor images that the intensity value of at least one color channel within a local window is close to zero. Based on the DCP, the dehazing is accomplished through four major steps: atmospheric light estimation, transmission map estimation, transmission map refinement, and image reconstruction. This four-step dehazing process makes it possible to provide a step-by-step approach to the complex solution of the ill-posed inverse problem. This also enables us to shed light on the systematic contributions of recent research related to the DCP for each step of the dehazing process. Our detailed survey and experimental analysis of DCP-based methods will help readers understand the effectiveness of each individual step of the dehazing process and will facilitate the development of advanced dehazing algorithms.

1 Review

1.1 Introduction

Due to absorption and scattering by atmospheric particles in haze, outdoor images have poor visibility under inclement weather. Poor visibility negatively impacts not only consumer photography but also computer vision applications for outdoor environments, such as object detection [1] and video surveillance [2]. Haze removal, which is referred to as dehazing, is considered an important process because haze-free images are visually pleasing and can significantly improve the performance of computer vision tasks.

Methods presented in earlier studies required multiple images to perform dehazing. For example, polarization-based methods [3–5] use the polarization property of scattered light to restore the scene depth information from two or more images taken with different degrees of polarization. Similarly, in [6, 7], multiple images of the same scene are captured under different weather conditions to be used as reference images with clear weather conditions. However, these methods with multiple reference images have limitations in online image dehazing applications [6, 7] and may need a special imaging sensor [13]. This has led researchers to focus on dehazing methods that use a single image. Single image based methods rely on the typical characteristics of haze-free images. Tan [8] proposed a method that takes into account the characteristic that a haze-free image has a higher contrast than a hazy image. By maximizing the local contrast of the input hazy image, it enhances the visibility but introduces blocking artifacts around depth discontinuities. Fattal [9] proposed a method that infers the medium transmission by estimating the albedo of the scene. The underlying assumption is that the transmission and surface shading are locally uncorrelated, which does not hold under a dense haze.

Observing the properties of haze-free outdoor images, He et al. [10] proposed a novel prior, the dark channel prior (DCP). The DCP is based on the property of “dark pixels,” which have a very low intensity in at least one color channel, except for the sky region. Owing to its effectiveness in dehazing, the majority of recent dehazing techniques [10–36] have adopted the DCP. The DCP-based dehazing techniques are composed of four major steps: atmospheric light estimation, transmission map estimation, transmission map refinement, and image reconstruction. In this paper, we perform an in-depth analysis of the DCP-based methods from this four-step point of view.

We note that there are several review papers on image dehazing or defogging [37–42]. In [37], five physical model-based dehazing algorithms are compared. In [38, 39], several enhancement-based and restoration-based defogging methods are investigated. In [40], fog removal algorithms that use depth and prior information are analyzed. In [41], a comparative study on four representative dehazing methods [4, 9, 10, 43] is performed. In [42], many visibility enhancement techniques developed for homogeneous and heterogeneous fog are discussed. To the best of our knowledge, our paper is the first one dedicated to DCP-based methods. This survey is expected to assist researchers in their endeavors toward improving the original DCP method.

The rest of the paper is organized as follows. In Section 1.2, the original DCP-based dehazing method is first reviewed. Section 1.3 provides an in-depth survey of conventional DCP-based methods. Section 1.4 discusses the performance evaluation methods for image dehazing, and Section 1.5 concludes the paper.

1.2 Dark channel prior based image dehazing

1.2.1 Degradation model

A hazy image formed as shown in Fig. 1 can be mathematically modeled as follows [44, 45]

$$ I(x)=J(x){e}^{-\beta d(x)}+A\left(1-{e}^{-\beta d(x)}\right), $$
(1)
Fig. 1 Formation of a hazy image

where x represents the image coordinates, I is the observed hazy image, J is the haze-free image, A is the global atmospheric light, β is the scattering coefficient of the atmosphere, and d is the scene depth. Here, \( {e}^{-\beta d} \) is often represented as the transmission map and is given by

$$ t(x) = {e}^{-\beta d(x)}. $$
(2)

In clear weather conditions, we have β ≈ 0, and thus I ≈ J. However, β becomes non-negligible for hazy images. The first term of Eq. (1), J(x)t(x) (the direct attenuation), decreases as the scene depth increases. In contrast, the second term of Eq. (1), A(1 − t(x)) (the airlight), increases as the scene depth increases. Since the goal of image dehazing is to recover J from I, once A and t are estimated from I, J can be arithmetically obtained as

$$ J(x)=\frac{I(x)-A}{t(x)}+A. $$
(3)

However, the estimation of A and t is non-trivial. In particular, since t varies spatially according to the scene depth, the number of unknowns is equivalent to the number of image pixels. Thus, a direct estimation of t from I is prohibitive without any prior knowledge or assumptions.

1.2.2 Dark channel prior (DCP)

He et al. [10] performed an empirical investigation of the characteristic of haze-free outdoor images. They found that there are dark pixels whose intensity values are very close to zero for at least one color channel within an image patch. Based on this observation, a dark channel is defined as follows:

$$ {J}^{\mathrm{dark}}(x) = \underset{y\in \varOmega (x)}{ \min}\left(\underset{c\in \left\{r,g,b\right\}}{ \min }{J}^c(y)\right), $$
(4)

where J c is the intensity for a color channel c ∈ {r, g, b} of the RGB image and Ω(x) is a local patch centered at pixel x. According to Eq. (4), the minimum value among the three color channels and all pixels in Ω(x) is chosen as the dark channel J dark(x).
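For concreteness, a minimal Python/NumPy sketch of Eq. (4) is given below. The function name, the assumed [0, 1] intensity range, and the default patch size are illustrative choices on our part and are not prescribed by [10].

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch_size=15):
    """Eq. (4): per-pixel minimum over the RGB channels, followed by a
    minimum filter over the local patch Omega(x).

    img: H x W x 3 array with values in [0, 1].
    """
    min_rgb = img.min(axis=2)                        # min over c in {r, g, b}
    return minimum_filter(min_rgb, size=patch_size)  # min over y in Omega(x)
```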

From 5000 dark channels of outdoor haze-free images, it was demonstrated that about 75 percent of the pixels in the dark channels have zero values and 90 percent of the pixels have values below 35 when the pixels in the sky region are excluded [10]. The low intensities in the dark channel are due to the following three main features: (i) shadows, e.g., shadows from cars and buildings in an urban scene or shadows from trees, leaves, and rocks in a landscape (Fig. 2a); (ii) colorful objects or surfaces, e.g., red or yellow flowers and leaves (Fig. 2b); and (iii) dark objects or surfaces, e.g., dark tree trunks and stones (Fig. 2c). Based on the above observation, the pixel value at the dark channel can be approximated as follows:

$$ {J}^{\mathrm{dark}}\approx 0. $$
(5)

This approximation to zero for the pixel value of the dark channel is called the DCP.

Fig. 2 Dark channels of outdoor images [53], where the size of Ω is 15 × 15. The pixel values for the dark channels are close to zero at (a) the shadows of buildings and rocks, (b) colorful flowers and scenes, and (c) tree trunks and stones

In contrast, the dark channels of hazy images contain pixels with values far above zero, as shown in Fig. 3. The global atmospheric light tends to be achromatic and bright, and the mixture of airlight and direct attenuation significantly increases the minimum value of the three color channels in the local patch. This implies that the pixel values of the dark channel can serve as an important clue for estimating the haze density. Successful dehazing results of various DCP-based dehazing algorithms [10–28] support the effectiveness of the DCP in image dehazing.

Fig. 3 Dark channel for a hazy image. a Hazy image. b Dark channel of (a)

1.2.3 DCP-based image dehazing

In the DCP-based dehazing algorithm [10], the dark channel is first constructed from the input image as in Eq. (4). The atmospheric light and the transmission map are then obtained from the dark channel. The transmission map is further refined, and the haze-free image is finally reconstructed as Eq. (3).

More specifically, given the degradation model of

$$ I(x)=J(x)t(x)+A\left(1-t(x)\right), $$
(6)

the minimum intensity in the local patch of each color channel is taken after dividing both sides of Eq. (6) by A c as follows:

$$ \underset{y\in \varOmega (x)}{ \min}\frac{I^c(y)}{A^c}=\tilde{t}(x)\underset{y\in \varOmega (x)}{ \min}\frac{J^c(y)}{A^c}+\left(1-\tilde{t}(x)\right). $$
(7)

Here the transmission in the local patch Ω(x) is assumed to be constant and is represented as \( \tilde{t}(x) \) [10]. Then, the min operator of the three color channels can be applied to Eq. (7) as follows:

$$ \underset{y\in \varOmega (x)}{ \min}\left(\underset{c}{ \min}\frac{I^c(y)}{A^c}\right)=\tilde{t}(x)\underset{y\in \varOmega (x)}{ \min}\left(\underset{c}{ \min}\frac{J^c(y)}{A^c}\right) + \left(1-\tilde{t}(x)\right). $$
(8)

According to the DCP approximation of Eq. (5), \( \tilde{t}(x) \) can be represented as

$$ \tilde{t}(x)=1-\underset{y\in \varOmega (x)}{ \min}\left(\underset{c}{ \min}\frac{I^c(y)}{A^c}\right). $$
(9)

Here, the atmospheric light A needs to be estimated in order to obtain the transmission map \( \tilde{t} \). Most of the previous single image based dehazing methods estimate A from the most haze-opaque pixels. As discussed in Section 1.2.2, the pixel value of the dark channel is highly correlated with the haze density. Therefore, the brightest 0.1 % of the pixels in the dark channel are first selected, and the pixel with the highest intensity among the selected pixels is then used as the value of A [10].
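As a hedged sketch of this selection rule (reusing the dark_channel helper above; treating the channel sum as the “highest intensity” criterion is our own assumption, since [10] does not specify it), the atmospheric light can be picked as follows.

```python
import numpy as np

def estimate_atmospheric_light(img, dark, p=0.1):
    """Select the top p% brightest dark-channel pixels and, among them, return the
    hazy-image pixel with the highest intensity as the atmospheric light A [10]."""
    h, w = dark.shape
    n_top = max(1, int(h * w * p / 100.0))
    candidate_idx = np.argsort(dark.ravel())[-n_top:]      # most haze-opaque pixels
    candidates = img.reshape(-1, 3)[candidate_idx]
    return candidates[np.argmax(candidates.sum(axis=1))]   # brightest candidate as A
```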

Figure 4 illustrates the process used to obtain A. If the pixel with the highest intensity value is used to estimate A, the pixels in the patches as shown in Fig. 4d, e would be selected, yielding significant estimation errors. Instead, by finding the candidate pixels from the dark channel as shown in Fig. 4b, the pixel that accurately estimates A can be found as shown in Fig. 4c.

Fig. 4 Estimation of the atmospheric light [10]. a Hazy image. b Dark channel, where the size of Ω is 15 × 15 and the region inside the red boundary lines corresponds to the most haze-opaque region. c Patch used to determine the atmospheric light. d, e Patches that contain intensity values higher than that of the atmospheric light

It is noted in [10] that the DCP is not reliable in the sky region. Fortunately, the color of the sky is close to A in hazy images, and thus, we have

$$ \underset{y\in \varOmega (x)}{ \min}\left(\underset{c}{ \min}\frac{I^c(y)}{A^c}\right) \approx 1\kern0.5em \mathrm{and}\kern0.5em \tilde{t}(x)\approx 0. $$
(10)

This corresponds to the definition of t(x) because d(x) approaches infinity for the sky region. Therefore, the sky does not need special treatment for estimating the transmission map if we obtain \( \tilde{t}(x) \) as Eq. (9). Given A, \( \tilde{t} \), and I, the dehazed image is obtained as

$$ J(x) = \frac{I(x)-A}{ \max \left(\tilde{t}(x),{t}_0\right)}+A, $$
(11)

where t 0 is used as a lower bound for the transmission map.
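Putting Eqs. (9) and (11) together, the whole pipeline can be sketched as follows. This reuses the dark_channel and estimate_atmospheric_light sketches above; the default parameter values simply follow the typical choices quoted in this paper, and clipping the output to [0, 1] is our own addition.

```python
import numpy as np

def estimate_transmission(img, A, patch_size=15):
    """Eq. (9): t(x) = 1 - dark channel of the normalized image I/A."""
    normalized = img / A.reshape(1, 1, 3)
    return 1.0 - dark_channel(normalized, patch_size)

def reconstruct(img, A, t, t0=0.1):
    """Eq. (11): J(x) = (I(x) - A) / max(t(x), t0) + A."""
    t_clipped = np.maximum(t, t0)[..., None]
    return (img - A.reshape(1, 1, 3)) / t_clipped + A.reshape(1, 1, 3)

# Typical usage for an H x W x 3 hazy image I with values in [0, 1]:
# dark = dark_channel(I)
# A = estimate_atmospheric_light(I, dark)
# t = estimate_transmission(I, A)
# J = np.clip(reconstruct(I, A, t), 0.0, 1.0)
```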

1.3 Analysis of DCP-based dehazing algorithms

In Section 1.2, we reviewed the original DCP-based dehazing algorithm [10]. The follow-up methods are based on the basic structure presented in [10] but differ in each step of the dehazing procedure. Table 1 shows the DCP-based dehazing algorithms from [10–24] that are investigated in this paper. Instead of analyzing each method individually, we classify all the methods in accordance with the four steps of image dehazing and then perform a step-by-step analysis. Each of the following subsections describes and compares the various methods used for each step.

Table 1 Comparison of DCP-based dehazing algorithms

1.3.1 Dark channel construction

Most conventional DCP-based dehazing methods estimate the dark channel from the input hazy image I. In Eq. (4), the size of the local patch Ω(x) is the only parameter that needs to be determined. Although the effect of the size of the local patch is significant, most conventional methods simply use a local patch with a fixed size or do not specify the size of the local patch. Table 2 shows typical patch sizes used in the previous methods.

Table 2 Local patch sizes used for previous methods

Figure 5a shows two hazy images. The top row in Fig. 5 corresponds to a remote aerial photograph with less local texture and heavy haze. Therefore, a small local patch is sufficient in order to estimate the dark channel, resulting in a reduction in the DCP calculation time. However, an image that has complicated local textures, as shown in the second row of Fig. 5, needs a larger local patch size to exclude false textures from the dark channel. Note that the block-min process of Eq. (4) inevitably decreases the apparent resolution of the dark channel as the size of the patch increases. Therefore, the minimum possible patch size that does not produce false textures in the dark channel needs to be found for every hazy image by considering application-dependent image local details.

Fig. 5 Dark channels of various patch sizes obtained by Eq. (4). a Hazy images. Dark channels obtained by Eq. (4) with the patch size of (b) 3 × 3, (c) 7 × 7, (d) 11 × 11, and (e) 15 × 15

Apart from the aforementioned general method for dark channel estimation, Zhang et al. [23] replaced the minimum operator over the local patch with the median operator as follows:

$$ {I}^{\mathrm{dark}}(x) = \underset{y\in \varOmega (x)}{\mathrm{median}}\left(\underset{c\in \left\{r,g,b\right\}}{ \min }{I}^c(y)\right). $$
(12)

As a result of the median operation, the dark channels become less blurry, as shown in Fig. 6. However, the median operator is computationally more complex than the minimum operator. Moreover, the median-based method is less physically meaningful because the DCP assumption no longer strictly holds. As shown in the second row of Fig. 6, dense image textures remain visible in the dark channel, even when a large patch size of 15 × 15 is used. For the visibility enhancement of hazy images, however, the median filter is somewhat effective because it does not require the complicated post-processing that is necessary for the smooth and blurry dark channels obtained by the minimum operator.

Fig. 6 Dark channels of various patch sizes obtained by Eq. (12). a Hazy images. Dark channels obtained by Eq. (12) with the patch size of (b) 3 × 3, (c) 7 × 7, (d) 11 × 11, and (e) 15 × 15

1.3.2 Atmospheric light estimation

The majority of conventional DCP-based dehazing methods estimate A as described in Section 1.2.3. In [19, 20], the pixel with the highest dark channel value is used directly as follows:

$$ A=I\left({\mathrm{argmax}}_x\left({I}^{\mathrm{dark}}(x)\right)\right). $$
(13)

However, the above method can select an incorrect pixel when the scene contains bright objects. Instead, the pixels with the top p% dark channel values are selected as the most haze-opaque pixels, and the one with the highest intensity is used to estimate A. This leaves one parameter p in the estimation of A, which is empirically set to 0.1 [10–15] or 0.2 [16].

In [21], to explicitly exclude bright objects from the estimation of A, the local entropy is measured as

$$ E(x)=-{\displaystyle {\sum}_{i=0}^N{p}_x(i)\ { \log}_2\left({p}_x(i)\right)}, $$
(14)

where p x (i) represents the probability of a pixel value i in the local patch centered at x, and N represents the maximum pixel value. The local entropy value is low for regions with smooth variations, which are highly likely to correspond to haze-opaque regions. Therefore, among the pixels with the highest p% dark channel values, the pixel with the lowest entropy value is used to obtain A (p = 0.1 [21]).
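A small sketch of this entropy computation is given below; the helper name and the 8-bit histogram binning are our assumptions. Among the top p% dark-channel pixels, the candidate minimizing this value would then be used to read off A.

```python
import numpy as np

def local_entropy(gray, x, y, patch_size=15, nbins=256):
    """Eq. (14): entropy of the pixel-value distribution in the local patch
    centered at (x, y); gray is assumed to hold integer values in [0, nbins - 1]."""
    half = patch_size // 2
    window = gray[max(0, x - half):x + half + 1, max(0, y - half):y + half + 1]
    hist, _ = np.histogram(window, bins=nbins, range=(0, nbins))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins to avoid log2(0)
    return -np.sum(p * np.log2(p))
```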

Table 3 lists the conventional methods that are used to estimate atmospheric light. To quantitatively evaluate atmospheric light estimation methods, we used the foggy road image database (FRIDA) [38], which consists of pairs of synthetic color and depth images. For a given depth image and β, the ground-truth transmission map can be constructed as \( t(x)={e}^{-\beta d(x)} \). The hazy image I is then obtained as Eq. (6) by using the atmospheric light A. Therefore, a variety of hazy images can be generated by changing β (haze density) and A (global lightness).
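The synthesis procedure is simple enough to state as a short sketch (the channel ordering and the [0, 1] value range are our assumptions):

```python
import numpy as np

def synthesize_hazy(J, depth, beta, A):
    """Generate a synthetic hazy image from a haze-free image J and its depth map
    using t(x) = exp(-beta * d(x)) and the degradation model of Eq. (6)."""
    t = np.exp(-beta * depth)[..., None]           # ground-truth transmission map
    return J * t + A.reshape(1, 1, 3) * (1.0 - t)
```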

Table 3 Conventional methods used to estimate atmospheric light

Figure 7 shows the average root-mean-square error (RMSE) between the ground-truth and estimated atmospheric lights for the 66 test images in the FRIDA. The RMSE is obtained as

$$ RMSE = \sqrt{\frac{1}{3}\left({\left({\widehat{A}}_R-{A}_R^{*}\right)}^2+{\left({\widehat{A}}_G-{A}_G^{*}\right)}^2+{\left({\widehat{A}}_B-{A}_B^{*}\right)}^2\right)}, $$
(15)
Fig. 7 The average RMSE between the ground-truth (A* = [220,235,254]) and the estimated atmospheric light. The atmospheric light is estimated from pixels with the highest p% dark channel values. Among the p% pixels, the pixel with (a) the highest intensity or (b) the lowest entropy value is used to estimate the atmospheric light. Sixty-six test images from the FRIDA were used

where \( {A}^{*}=\left({A}_R^{*}\ {A}_G^{*}\ {A}_B^{*}\right) \) and \( \widehat{A}=\left[{\widehat{A}}_R\ {\widehat{A}}_G\ {\widehat{A}}_B\right] \) represent the ground-truth and estimated atmospheric lights, respectively. Since the candidate pixels for the atmospheric light estimation are obtained from the dark channel, the local patch size also plays an important role in the accuracy of the estimation. When a small patch size is used, as shown in Fig. 8b, the pixels of bright objects are considered as candidate pixels, yielding inaccurate estimates of A. The use of a large patch size can prevent selecting such pixels, as shown in Fig. 8c. The quantitative evaluation result shown in Fig. 7a also supports this observation. The accuracy is rather insensitive to p when a large 32 × 32 patch is used. Therefore, a large patch size (e.g., 32 × 32) with p in the range of 0–0.4 % is effective when only the accuracy of the atmospheric light estimation is considered. One practical solution that takes into account the accuracy of both the dark channel and the atmospheric light involves using different patch sizes for the dark channel estimation and the atmospheric light estimation [26]. When the local entropy, as in Eq. (14), is used to prevent pixels of small bright objects from being selected, the estimation accuracy of the atmospheric light improves, as shown in Fig. 7b [21]. The estimation accuracy is still best for the largest patch size of 32 × 32 and is less sensitive to the p value due to the robustness of the candidate pixel selection.

Fig. 8 Atmospheric light estimation. a Hazy image. The pixels in the dark channel that are used to estimate the atmospheric light when the size of Ω is (b) 3 × 3 and (c) 32 × 32

1.3.3 Transmission map estimation

The transmission map \( \tilde{t}(x) \) defined in Eq. (9) is obtained from the DCP. If the DCP is not exploited, Eq. (9) can be rewritten as

$$ \tilde{t}(x)=1-\underset{y\in \varOmega (x)}{ \min}\left(\underset{c}{ \min}\frac{I^c(y)}{A^c}\right)+\tilde{t}(x)\cdot \underset{y\in \varOmega (x)}{ \min}\left(\underset{c}{ \min}\frac{J^c(y)}{A^c}\right). $$
(16)

As we observed in Section 1.2.2, the pixel value of the dark channel, J dark(x), is very likely to be close to zero, and so is (J/A)dark(x). However, if (J/A)dark(x) is not close to zero, the transmission map obtained as Eq. (9) is under-estimated since the positive offset in Eq. (16) is always neglected [28].

In the original DCP-based dehazing method, it is mentioned that the image may look unnatural if the haze is removed thoroughly [10]. A constant ω (0 < ω < 1) is thus used to retain a small amount of haze:

$$ \tilde{t}(x)=1-\omega \underset{y\in \varOmega (x)}{ \min}\left(\underset{c}{ \min}\frac{I^c(y)}{A^c}\right). $$
(17)

However, we consider that Eq. (17) can also yield a more accurate transmission map because multiplying by ω inadvertently compensates for the under-estimation of \( \tilde{t}(x) \).

Figure 9 shows that the transmission map is indeed under-estimated when \( \tilde{t} \) is obtained as Eq. (9). The mean values of the ground-truth transmission maps, as shown in Fig. 9b, are 0.5616 and 0.6365, respectively. However, the mean values for the estimated transmission maps, as shown in Fig. 9c, are obtained as 0.5125 and 0.6086, respectively. When the transmission map is obtained as Eq. (17) by using ω = 0.9, the under-estimation of the transmission map is considerably decreased, as shown in Fig. 10a, c, where the mean values are obtained as 0.5225 and 0.6058, respectively.

Fig. 9 Hazy images and ground-truth and estimated transmission maps from the FRIDA [46]. a Hazy images. b Ground-truth transmission maps, where A = [220,235,254] and β = 0.01. c Transmission maps obtained as Eq. (9). For visualization, transmission values are multiplied by 255

Fig. 10 Comparison of the estimated transmission maps using the FRIDA [46]. a, c Transmission maps obtained as Eq. (17) using ω = 0.9. b, d The transmission map obtained as Eq. (18) using ρ = 0.08, where A = [220,235,254] and β = 0.01

Xu et al. [14] explicitly addressed the aforementioned under-estimation problem of the transmission map and simply added a positive value ρ ∈ [0.08, 0.25] to the transmission map:

$$ \tilde{t}(x)=1-\underset{y\in \varOmega (x)}{ \min}\left(\underset{c}{ \min}\frac{I^c(y)}{A^c}\right) + \rho . $$
(18)

Figure 10b, d shows the estimated transmission maps when ρ = 0.08 is added, where the mean values are obtained as 0.5494 and 0.6431, respectively. The addition of ρ also plays a role similar to that of t 0 in Eq. (11), making ρ the minimum value of the transmission map. The under-estimation can be partly solved by using Eq. (17) or (18); however, the values of ω and ρ need to be carefully chosen. To this end, we measured the RMSE values between the ground-truth and estimated transmission maps for different ω and ρ values by using 66 synthetic test images from the FRIDA. Figure 11a, b indicates that ω around 0.9 and ρ around 0.12 are effective. An adaptive scheme also needs to be developed for a better compensation of the under-estimation.
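For reference, the two compensation schemes can be sketched as follows, reusing the dark_channel helper above; clipping the result of Eq. (18) to [0, 1] is our own choice and is not specified in [14].

```python
import numpy as np

def transmission_omega(img, A, patch_size=15, omega=0.9):
    """Eq. (17): retain a small amount of haze with 0 < omega < 1 [10]."""
    normalized = img / A.reshape(1, 1, 3)
    return 1.0 - omega * dark_channel(normalized, patch_size)

def transmission_rho(img, A, patch_size=15, rho=0.12):
    """Eq. (18): add a constant offset rho to compensate for under-estimation [14]."""
    normalized = img / A.reshape(1, 1, 3)
    return np.clip(1.0 - dark_channel(normalized, patch_size) + rho, 0.0, 1.0)
```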

Fig. 11 The average RMSE between the ground-truth and estimated transmission maps. The transmission maps are estimated (a) using Eq. (17) with various ω values and (b) using Eq. (18) with various ρ values, where A = [220,235,254] and β = 0.01

1.3.4 Transmission map refinement

Incorrect estimation of the transmission map can lead to problems such as false textures and blocking artifacts. In particular, the block-min process of Eq. (4) decreases the apparent resolution of the dark channel, resulting in blurry transmission maps. For this reason, many methods have been developed to further sharpen the transmission map [10, 11, 13, 14, 16–20, 22, 24].

In [42], it is especially mentioned that many dehazing methods differ in the way they smooth the transmission map. Table 4 lists post-filtering methods used to improve the accuracy of the transmission map. Some filtering methods, such as the Gaussian and bilateral filters, use only the transmission map, whereas the other methods, such as the soft matting, cross-bilateral filter, and guided filter, exploit the hazy color image as a guidance signal. Each method and its performance are analyzed in the following subsections.

Table 4 Conventional methods used to refine the transmission map

Gaussian filter

Denoting the transmission map to be refined as \( \tilde{t} \), the Gaussian filtered transmission map \( \widehat{t} \) is given as

$$ \widehat{t}(x) = \frac{1}{{\displaystyle {\sum}_{y\in \varOmega (x)}}{G}_{\sigma_s}\left(\left\Vert x-y\right\Vert \right)} \cdot {\displaystyle {\sum}_{y\in \varOmega (x)}{G}_{\sigma_s}\left(\left\Vert x-y\right\Vert \right)\tilde{t}(y)}, $$
(19)

where \( {G}_{\sigma_s} \) is the 2-D Gaussian function with the standard deviation σ s . The Gaussian filter is not very effective in sharpening a blurry transmission map due to its low-pass characteristic, but it is often useful in removing color textures remaining in the transmission map [19]. As discussed in Section 1.3.1, transmission maps obtained using a small local patch tend to have color textures, and thus, the Gaussian filter can improve the accuracy of the transmission maps. Figure 12 shows some examples before and after Gaussian filtering. As can be seen in Fig. 12b, c, the Gaussian filter is effective in removing false color textures in the transmission map. However, the Gaussian filter can unnecessarily blur the transmission map when there are no annoying false color textures in the transmission map, as shown in Fig. 12d, e.
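In practice, Eq. (19) amounts to an ordinary normalized Gaussian smoothing of the raw transmission map, as in the following sketch (the function name is ours; σ s = 5 matches the setting used in Fig. 12):

```python
from scipy.ndimage import gaussian_filter

def refine_gaussian(t_raw, sigma_s=5.0):
    """Eq. (19): normalized Gaussian smoothing of the raw transmission map."""
    return gaussian_filter(t_raw, sigma=sigma_s)
```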

Fig. 12 The result of the Gaussian filter. a Hazy images. b Transmission map obtained using the local patch with the size 3 × 3. c Gaussian filtered transmission map using σ s  = 5. d Transmission map obtained using the local patch with the size 15 × 15. e Gaussian filtered transmission map using σ s  = 5

Figure 13 shows the quantitative quality evaluation results. Here, the transmission maps are obtained as Eq. (17) with different sizes of the local patch. The Gaussian filter is then applied, and the filtered result is compared with the ground-truth transmission map, which can be reconstructed using the FRIDA [46]. As can be seen in Fig. 13, the Gaussian filter is effective when a proper patch size is used, but the RMSE starts increasing when the Gaussian blur becomes excessive. Therefore, the refinement by the Gaussian filter needs careful treatment in consideration of the color textures in the hazy image.

Fig. 13 The average RMSE between ground-truth and Gaussian filtered transmission maps using the FRIDA [46]. The RMSE results with respect to σ s when the local patch size is (a) 3 × 3, (b) 11 × 11, and (c) 15 × 15

Bilateral filter

The bilateral filter is a widely used edge-preserving smoothing filter. It averages neighboring pixel values with weights determined by both the spatial and range distances as follows:

$$ \widehat{t}(x) = \frac{1}{{\displaystyle {\sum}_{y\in \varOmega (x)}}{G}_{\sigma_s}\left(\left\Vert x-y\right\Vert \right){G}_{\sigma_r}\left(\left|\tilde{t}(x)-\tilde{t}(y)\right|\right)}\cdot {\displaystyle {\sum}_{y\in \varOmega (x)}{G}_{\sigma_s}\left(\left\Vert x-y\right\Vert \right){G}_{\sigma_r}\left(\left|\tilde{t}(x)-\tilde{t}(y)\right|\right)\tilde{t}(y),} $$
(20)

where \( {G}_{\sigma_s} \) and \( {G}_{\sigma_r} \) represent the spatial and range kernels with the standard deviations σ s and σ r , respectively. Since neighboring pixels whose values are similar to that of the center pixel are weighted heavily, edges in \( \tilde{t} \) can be preserved while noisy regions in \( \tilde{t} \) are smoothed. The bilateral-filtered transmission maps shown in Fig. 14 tend to exhibit sharper details than the Gaussian filtered transmission maps shown in Fig. 12.
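A straightforward, unoptimized sketch of Eq. (20) is given below; the window radius and the edge padding are our own choices. The cross-bilateral filter of Eq. (23) differs only in that the range weights are computed from the guidance image I instead of from the transmission map itself.

```python
import numpy as np

def refine_bilateral(t_raw, sigma_s=15.0, sigma_r=0.1, radius=15):
    """Eq. (20): edge-preserving smoothing of the transmission map itself,
    without a guidance image. Naive O(N * radius^2) implementation for clarity."""
    h, w = t_raw.shape
    out = np.empty_like(t_raw)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_s ** 2))               # G_sigma_s
    pad = np.pad(t_raw, radius, mode='edge')
    for i in range(h):
        for j in range(w):
            window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-((window - t_raw[i, j]) ** 2) / (2.0 * sigma_r ** 2))  # G_sigma_r
            weights = spatial * rng
            out[i, j] = np.sum(weights * window) / np.sum(weights)
    return out
```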

Fig. 14 The result of the bilateral filter. a Hazy images. b Transmission maps obtained using the local patch with the size 3 × 3. c Bilateral filtered transmission maps using σ s  = 15 and σ r  = 0.3. d Transmission maps obtained using the local patch with the size 15 × 15. e Bilateral filtered transmission maps using σ s  = 15 and σ r  = 0.1

We also evaluated the quantitative performance of the bilateral filter, as shown in Fig. 15, under the same experimental conditions as in Fig. 14. σ s is set to 15, and only the performance dependency on σ r is investigated. The results illustrate that the bilateral filter is not very effective in terms of the quantitative performance and tends to increase the RMSE when the standard deviation of the range kernel increases.

Fig. 15 The average RMSE between ground-truth and bilateral-filtered transmission maps using the FRIDA [46]. The RMSE values with respect to σ r when the local patch size is (a) 3 × 3, (b) 11 × 11, and (c) 15 × 15, where σ s  = 15 and σ r ∈ {0.01, 0.03, 0.06, 0.1, 0.15, 0.2, 0.25, 0.3}

Soft matting

We found that the Gaussian and bilateral filters are effective for removing false color textures in the transmission map. However, the transmission map should have a similar level of sharpness to the color image for dehazing, which is impossible if the color image is not used in the transmission map refinement. To this end, the original DCP-based dehazing algorithm [10] adopted the soft matting to refine the transmission map. From the observation that the degradation model in Eq. (6) is similar to the matting equation [47], the refined transmission map \( \widehat{t} \) is obtained by minimizing the following energy function:

$$ \widehat{t} = \underset{t}{ \arg\ \min}\left\{{t}^TLt + \lambda {\left(t-\tilde{t}\right)}^T\left(t-\tilde{t}\right)\right\}, $$
(21)

where \( \tilde{t} \) is the transmission map to be refined and a weight λ controls the importance of the data term. It is demonstrated in [11] that the solution of Eq. (21) is equivalent to that of the following sparse linear equation:

$$ \left(L+\lambda U\right)\widehat{t}=\lambda \tilde{t}, $$
(22)

where U represents an identity matrix. Note that in order to exploit sharp details in the hazy image, the Laplacian matrix L is determined from the hazy image. We refer readers to [11, 47] for more details about image matting. Figure 16 shows the refined transmission maps obtained by the soft matting. As can be seen, blurry edges in the transmission maps have been sharpened due to the use of the color images. It should be noted here that the bilateral filter was also applied to the result of the soft matting to further refine the transmission map [10]. To evaluate the performance of the soft matting alone, the bilateral filter is not applied here.
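Given a precomputed matting Laplacian L (built from the hazy image following Levin et al. [47]; its construction is outside the scope of this sketch), solving Eq. (22) reduces to a single sparse linear solve:

```python
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def refine_soft_matting(t_raw, L, lam=1e-4):
    """Eq. (22): solve (L + lambda * U) t_hat = lambda * t_tilde, where U is the
    identity matrix and L is the (n x n) matting Laplacian of the hazy image."""
    n = t_raw.size
    system = (L + lam * sp.identity(n, format='csr')).tocsr()
    t_hat = spsolve(system, lam * t_raw.ravel())
    return t_hat.reshape(t_raw.shape)
```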

Fig. 16 The result of the soft matting using λ = 10−4. a Hazy images. b Transmission maps obtained using the local patch with the size 3 × 3. c Soft matting results of (b). d Transmission maps obtained using the local patch with the size 15 × 15. e Soft matting results of (d)

Figure 17 shows the quantitative performance of the soft matting. Different values of λ and patch sizes were used to find out the dependency of the performance of the soft matting on the parameters. A large value of λ was preferred when a small local patch was used because the transmission map before the refinement tended to be inherently similar to the hazy image. When the local patch of the size 15 × 15 was used, a proper value of λ (=2 × 10− 4 in our experiment) showed the best performance.

Fig. 17 The average RMSE between ground-truth and soft matting filtered transmission maps using the FRIDA [46]. The RMSE values with respect to λ when the local patch size is (a) 3 × 3, (b) 11 × 11, and (c) 15 × 15, where λ ∈ {10−5, 10−4, 2 × 10−4, 10−3}

Cross-bilateral filter

The cross-bilateral filter (aka joint-bilateral filter) is a variant of the classic bilateral filter. Unlike the bilateral filter, the cross-bilateral filter computes the range kernel from a cross (guidance) channel as follows:

$$ \widehat{t}(x) = \frac{1}{{\displaystyle {\sum}_{y\in \varOmega (x)}}{G}_{\sigma_s}\left(\left\Vert x-y\right\Vert \right){G}_{\sigma_r}\left(\left\Vert I(x)-I(y)\right\Vert \right)}\cdot {\displaystyle {\sum}_{y\in \varOmega (x)}{G}_{\sigma_s}\left(\left\Vert x-y\right\Vert \right){G}_{\sigma_r}\left(\left\Vert I(x)-I(y)\right\Vert \right)\tilde{t}(y),} $$
(23)

where the guidance channel I corresponds to the hazy image as in Eq. (1). Therefore, the sharpness of I can be inherited by the transmission map \( \widehat{t} \). Figure 18 shows the result of the cross-bilateral filter, and Fig. 19 shows the quantitative performance evaluation result using a fixed value of σ s  = 15 and various σ r values. Owing to the use of the cross channel, the resultant transmission map can exhibit sharper edges than those obtained by the Gaussian and bilateral filters. The selection of σ r was also found to be important for the accuracy of the transmission map, and the best value of σ r was found to be around 0.1 regardless of the size of the local patch. Unlike the computationally expensive soft matting method (which takes 10–20 s on average for images with the size 600 × 400 [11]), it was shown in [20] that the cross-bilateral filter can be implemented in real-time using the GPU.

Fig. 18 The result of the cross-bilateral filter. a Hazy images. b Transmission map obtained using the local patch with the size 3 × 3. c Cross-bilateral filtered transmission map using σ s  = 15 and σ r  = 0.1. d Transmission map obtained using the local patch with the size 15 × 15. e Cross-bilateral filtered transmission map using σ s  = 15 and σ r  = 0.1

Fig. 19 The average RMSE between ground-truth and cross-bilateral-filtered transmission maps using the FRIDA [46]. The RMSE values with respect to σ r when the local patch size is (a) 3 × 3, (b) 11 × 11, and (c) 15 × 15, where σ s  = 15 and σ r ∈ {0.01, 0.04, 0.1, 0.15, 0.2, 0.25, 0.3}

Guided filter

To speed up the transmission map refinement, the authors of the original DCP-based dehazing method [10] replaced the soft matting with the guided filter [13, 18, 22, 48]. The guided filter also uses the hazy image I as a guidance image, but its novelty lies in adopting the following linear model:

$$ \widehat{t}(y)={a}_xI(y)+{b}_x,\ \forall y\in {\varOmega}_x, $$
(24)

where the coefficients a x and b x are assumed to be constant in Ω x and are derived by minimizing the following energy:

$$ E\left({a}_x,{b}_x\right) = {\displaystyle \sum_{y\in \varOmega (x)}}\left({\left({a}_xI(y)+{b}_x-\tilde{t}(y)\right)}^2+{\left(\varepsilon {a}_x\right)}^2\right), $$
(25)

where ε is a regularization parameter penalizing large a x . The solution (a x , b x ) can be obtained as

$$ {a}_x=\frac{\frac{1}{\left|w\right|}{\displaystyle {\sum}_{y\in {\varOmega}_x}}I(y)\tilde{t}(y)-{\mu}_x\overline{t}(x)}{\sigma_x^2+\varepsilon },\kern1em {b}_x=\overline{t}(x)-{a}_x{\mu}_x, $$
(26)

where μ x and \( {\sigma}_{\mathrm{x}}^2 \) are the mean and variance of the guidance image I in the window Ω x , respectively, |w| denotes the number of pixels in Ω x , and \( \overline{t}(x)=\frac{1}{\left|w\right|\ }{\displaystyle {\sum}_{y\in {\varOmega}_x}\tilde{t}(y)} \). Considering the overlapping windows involved in calculating a x and b x , the final refined transmission map \( \widehat{t}(x) \) is obtained as

$$ \widehat{t}(x)={\overline{a}}_xI(x) + {\overline{b}}_x, $$
(27)

where \( {\overline{a}}_x=\frac{1}{\left|w\right|}{\displaystyle {\sum}_{y\in \varOmega (x)}{a}_y} \) and \( {\overline{b}}_x=\frac{1}{\left|w\right|}{\displaystyle {\sum}_{y\in \varOmega (x)}{b}_y} \) denote the averages of the coefficients computed for all the windows that contain pixel x.
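Eqs. (24)–(27) translate almost directly into box-filter operations, as in the sketch below. For simplicity, a single-channel guidance image (e.g., a gray-scale version of the hazy image) is assumed; the color-guided variant in [48] replaces the scalar variance by a 3 × 3 covariance matrix.

```python
from scipy.ndimage import uniform_filter

def refine_guided(t_raw, guide, radius=30, eps=0.01):
    """Eqs. (24)-(27): guided filtering of the transmission map with a
    single-channel guidance image; box averages implement the 1/|w| sums."""
    mean = lambda f: uniform_filter(f, size=2 * radius + 1)
    mu = mean(guide)                          # mu_x
    t_bar = mean(t_raw)                       # t_bar(x)
    var = mean(guide * guide) - mu * mu       # sigma_x^2
    cov = mean(guide * t_raw) - mu * t_bar    # (1/|w|) sum I(y) t~(y) - mu_x t_bar(x)
    a = cov / (var + eps)                     # Eq. (26)
    b = t_bar - a * mu
    return mean(a) * guide + mean(b)          # Eq. (27)
```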

Figure 20 shows the result of the guided filter. Since the refined transmission map is expressed as a local linear transform of the hazy image, the resultant map contains a similar level of sharpness to the hazy image without yielding significant false color textures. Figure 21 shows the quantitative performance of the guided filter with different ε values. As ε increases, the transmission map becomes smoother, and thus, a proper selection of ε is important. In our experiments using the FRIDA, ε of 0.01 produced the smallest RMSE value regardless of the size of the local patch.

Fig. 20 The result of the refined transmission map using the guided filter. a Hazy images. b Transmission maps obtained using the local patch with the size 3 × 3. c Guided filtered transmission maps using ε = 0.01. d Transmission maps obtained using the local patch with the size 15 × 15. e Guided filtered transmission maps using ε = 0.01

Fig. 21 The average RMSE between ground-truth and guided filtered transmission maps using the FRIDA [46]. The RMSE values with respect to ε when the local patch size is (a) 3 × 3, (b) 11 × 11, and (c) 15 × 15, where ε ∈ {0.001, 0.005, 0.01, 0.015, 0.02}

Auxiliary methods for transmission map enhancement

Recent efforts have been made to enhance the transmission map [31, 32, 34–36], and they can be categorized into three different approaches. The first approach is to use a transmission map obtained at low resolution [34, 36]. In [34], the guided filter is performed at low resolution and the filter coefficients at the original resolution are obtained using bilinear interpolation, which speeds up the transmission map refinement. In [36], non-overlapping patches with the size 10 × 10 are used to obtain a very low resolution transmission map, which is then combined with a pixel-wise transmission map obtained by setting Ω(x) = {x} in Eq. (9). This combination scheme can make the transmission map refinement unnecessary.

In the second approach, the transmission map enhancement is achieved by applying a preprocessing filter to the hazy image. In [32], total variation based image restoration and morphological filtering are applied to the hazy image, which prevents unnecessary texture details from appearing in the estimated transmission map. In [34], an edge-enhanced hazy image is used as a guidance image in the guided filtering step to reconstruct a transmission map with sharp edges.

The third approach is to estimate the transmission map not from rectangular patches but from segments. In [31], watershed segmentation is performed to extract regions in which the transmission can be reliably estimated. In [35], gray-level thresholding is performed to divide an image into sky and non-sky regions, and transmission maps are then separately estimated for the two regions. Since blurry transmission maps originate from the rectangular patch-wise processing in Eq. (9), these segmentation-based methods tend to produce sharp transmission maps without further refinement.

Comparisons

In the above subsections, the transmission map refinement schemes were described individually. The parameter sensitivity of each method was also discussed in detail. We then empirically tuned the best parameter(s) for each method as shown in Table 5 and compared the performance of the methods. Figure 22 shows some refinement results of the five methods for the same transmission maps. As can be seen, the methods that use the hazy image as a cross channel (i.e., soft matting, cross-bilateral filter, and guided filter) provide sharper transmission maps than the methods that do not use the hazy image (i.e., Gaussian and bilateral filters).

Table 5 Parameters used for the performance comparison
Fig. 22 The result of the refined transmission map using the five major methods with the patch size 15 × 15. a Gaussian filter. b Bilateral filter. c Soft matting. d Cross-bilateral filter. e Guided filter

Quantitative quality evaluation is also possible because the ground-truth transmission map of the FRIDA can be used as the common reference frame. Table 6 compares the RMSE values obtained by the five methods. The soft matting performed the best, and the cross-bilateral and guided filters showed comparable second-best performance.

Table 6 Comparison of the RMSE values obtained by the five transmission map refinement methods. The patch size is set to 15 × 15

In addition, we measured the processing time required by the transmission map refinement methods, as shown in Table 7. A PC with Windows 8, a 3.60 GHz CPU, 8 GB of RAM, and MATLAB 2014a was used for the evaluation. The memory requirement was also measured using the peak and total memory [49]. The results indicate that the filter-based methods such as the bilateral and cross-bilateral filters are memory-efficient. The guided filter requires the most memory, but its time complexity is low compared to the other methods.

Table 7 Comparison of transmission map refinement methods with respect to the time complexity and memory requirements

1.3.5 Dehazed image construction

After estimating the atmospheric light \( \widehat{A} \) and the transmission map \( \widehat{t} \), the dehazed image J can be readily obtained from the degradation model of Eq. (6). Specifically, J is given as

$$ J(x)=\frac{I(x)-\widehat{A}\ }{ \max \left(\widehat{t}(x),{t}_0\right)}+\widehat{A}, $$
(28)

where t 0 is a lower bound used to avoid a small denominator. Most DCP-based dehazing methods set t 0 to 0.1 [11–14, 20, 21, 25–27]. Figure 23 shows the dehazed images obtained using the top three transmission map refinement methods. As can be seen, the reconstruction of J by Eq. (28) can yield visually pleasing dehazed images.

Fig. 23 The image dehazing results when the transmission maps shown in Fig. 22 are used. a Hazy images. b Dehazed image using Fig. 22d. c Dehazed image using Fig. 22c. d Dehazed image using Fig. 22e

When the hazy image contains significant color distortion caused by abnormal weather such as a sandstorm [12], the estimated atmospheric light \( \widehat{A} \) becomes far from achromatic, and thus, color correction is required during the dehazed image reconstruction. In such a case, J is obtained as follows:

$$ {J}^c(x)=\frac{I^c(x)-\left({\widehat{A}}^c-{d}^c\right)}{ \max \left(\widehat{t}(x),{t}_0\right)}+\left({\widehat{A}}^c-{d}^c\right), $$
(29)

where the superscript c represents the color channel, c ∈ {r, g, b}, and d c denotes the difference between the average values of the red and c channels of I. In other words, the offset in the red channel caused by the sandstorm is subtracted before the construction of the dehazed image. Figure 24b, c shows the dehazed images obtained by using the same refined transmission map shown in Fig. 22e but with Eqs. (28) and (29), respectively. The experimental results demonstrate the necessity of the color correction at the dehazed image construction stage. Equation (29) can also be easily extended to images that appear greenish or bluish due to other abnormal weather conditions or improper camera parameter settings. There are also several works considering the noise amplification problem during dehazed image construction [29, 35]. In addition, to obtain high-contrast dehazed images, some image processing techniques can be applied, including linear stretching [30], gamma correction [32], and histogram specification [33].
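A sketch of Eq. (29) is given below; the RGB channel ordering and the [0, 1] value range are our assumptions.

```python
import numpy as np

def reconstruct_color_corrected(img, A, t, t0=0.1):
    """Eq. (29): subtract the per-channel offset d^c (mean of the red channel of I
    minus the mean of channel c of I) from A before reconstruction."""
    means = img.reshape(-1, 3).mean(axis=0)
    d = means[0] - means                       # d^r = 0 by construction
    A_corr = (A - d).reshape(1, 1, 3)
    t_clipped = np.maximum(t, t0)[..., None]
    return (img - A_corr) / t_clipped + A_corr
```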

Fig. 24 Image dehazing results for an image captured under abnormal weather conditions. a Hazy images. b Dehazed image using Eq. (28). c Dehazed image using Eq. (29)

Finally, Fig. 25 shows the time consumption of each dehazing step. Specifically, using the same experimental condition mentioned in Section 1.3.4, the average processing time of the 30 FRIDA test images was measured. The transmission map refinement step required the longest time when the bilateral and cross-bilateral filters were used, and the dark channel construction and transmission map estimation steps also required non-negligible time due to the block-min process in Eqs. (4) and (9).

Fig. 25 Comparison of the time consumption for each dehazing step with different transmission map refinement methods

1.4 Performance evaluation methods

In Section 1.3, we reviewed the conventional DCP-based dehazing algorithms by dividing them into subcomponents and discussing various methods used in each subcomponent. Finally, we need to objectively evaluate the quality of the dehazed images. In this section, we first study the existing metrics developed for evaluating the quality of dehazed images.

Table 8 lists the metrics used for evaluating the quality of dehazed images. The most widely used metric is the ratio of visible edges between the dehazed and hazy images (denoted as Q e) [23, 42, 50, 51]. Since the dehazed image tends to have sharper details than the hazy image, it is considered that the higher the Q e value, the better the dehazed image. In order to more precisely measure the local image sharpness, the ratio of visible edges’ gradients between the dehazed and hazy images (denoted as Q g) is also evaluated [23, 42, 50, 51]. In a similar manner, the higher the Q g value, the better the dehazed image. In [38, 50, 51], the percentage of pixels which become completely black or completely white after dehazing (denoted as Q o) is measured. As Q o accounts for the over-enhancement, the smaller the Q o value, the better the dehazed image. Other quality metrics developed for general image restoration problems, such as image entropy [23] and the Q-metric [27], are often directly used to evaluate the quality of dehazed images; these are not discussed further in this section.
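As an example of how directly these definitions translate into code, a minimal reading of Q o is sketched below (the exact formulation in [38, 50, 51] may differ in details such as channel handling); Q e and Q g additionally require the visible-edge detector of [51] and are therefore omitted here.

```python
import numpy as np

def q_o(dehazed):
    """Q_o: percentage of pixels that are completely black or completely white
    after dehazing (values assumed to be in [0, 1]); lower is better."""
    saturated = np.any((dehazed <= 0.0) | (dehazed >= 1.0), axis=2)
    return 100.0 * saturated.mean()
```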

Table 8 Quality metrics developed for evaluating the quality of dehazed images

One problem is that the reliability of the aforementioned metrics has not been verified yet. As the quality of the dehazed image is strongly dependent on the accuracy of the transmission map, we relate the RMSE (between the ground-truth and estimated transmission maps) and the quality metrics of image dehazing. Figure 26 compares the RMSE and quality metrics, where the red curves denote the fitted functions. Each point indicates the result for a haze density β ∈ [0.005, 0.015]. As can be seen, the general tendency of the quality metrics is consistent with their definitions (i.e., the RMSE value tends to decrease as Q e and Q g increase, whereas it tends to increase as Q o increases).

Fig. 26 Comparison of the RMSE (between the ground-truth and estimated transmission maps) and Q-metrics. a Q g. b Q e. c Q o. Estimated transmission maps are obtained by (top) soft matting, (middle) cross-bilateral filter, and (bottom) guided filter

Figure 27 shows a case in which Q e and Q g are not trustworthy. As can be seen in Fig. 27b, d, some false positive edges are detected, and they tend to unnecessarily increase the Q e and Q g values. As a result, the dehazed image obtained using the Gaussian filter has even higher Q e and Q g values than that obtained using the soft matting (in Fig. 27, Gaussian filter: Q e = 1.7311, Q g = 1.1073; soft matting: Q e = 1.1675, Q g = 0.8774). Therefore, Q e and Q g should be used with the characteristics of the dehazing algorithms in mind. More dedicated quality evaluation methods need to be developed for image dehazing.

Fig. 27 Dehazed images and Q g maps for the Gaussian filter and the soft matting. a Hazy image. b Dehazed image using the Gaussian filter. c Dehazed image using the soft matting. d The map of the ratio of the gradients at visible edges (Q g) for (b). e The map of the ratio of the gradients at visible edges (Q g) for (c)

Lastly, an application-specific quality metric of image dehazing is also presented [52]. When image dehazing is developed for computer vision applications, it is expected that the dehazed image results in the performance enhancement of computer vision tasks such as object detection and recognition. Since detection and matching of feature points play an important role for such computer vision tasks, the numbers of matched feature points are compared between hazy and dehazed image pairs [52]. It is then assumed that the more the matched feature points, the better the image dehazing algorithm. We believe that other application-specific quality metrics can be devised in a similar manner.

1.5 Summary

In this paper, we performed an in-depth survey on DCP-based image dehazing methods. Especially, we classified relevant research articles related to the DCP according to the four steps and performed a step-by-step analysis. Our findings can be summarized as follows.

  • Dark channel construction: the local patch size is a very important parameter for dark channel construction. Color textures are transferred to the dark channel when a small local patch is used, whereas blurry dark channels are obtained when a large local patch is used. In addition, the median filter, which is physically less meaningful, is found to be not very effective for dark channel construction.

  • Atmospheric light estimation: the atmospheric light is reliably estimated from the dark channel, especially when the dark channel is obtained using a large local patch. Therefore, if the local patch size used in dark channel construction is not large enough, it is recommended to use an additional dark channel with a larger local patch size only for atmospheric light estimation. The use of local entropy is also found to be effective in enhancing the estimation accuracy because atmospheric light estimation from bright objects can be prevented.

  • Transmission map estimation: the under-estimation problem of the transmission map is addressed. The conventional gain and offset control methods are examined, but an adaptive correction scheme is found to be necessary for precise estimation of the transmission map, which is missing in the current literature.

  • Transmission map refinement: the performance of transmission map refinement is improved when the hazy image is used as a guidance image. The soft matting method shows the best transmission map estimation accuracy, and the guided and cross-bilateral filters show the second-best accuracy. The Gaussian and guided filters perform best in terms of the computational complexity, but the guided filter requires the most memory among the five investigated refinement schemes.

  • Quality metric for image dehazing: the performance of the image dehazing can be indirectly measured by comparing the ground-truth and estimated transmission maps. The conventional quantitative quality metrics using only the dehazed image are investigated, but they are found to be not trustworthy enough. An advanced or application-specific quality metric needs to be developed.

2 Conclusions

In this paper, we performed an in-depth study on one of the most successful dehazing algorithms: the DCP-based image dehazing algorithm. Considering the four major steps of the DCP-based image dehazing, which are atmospheric light estimation, transmission map estimation, transmission map refinement, and image reconstruction, we classified recent research articles related to the DCP according to these four steps and performed a step-by-step analysis of conventional methods. Moreover, the conventional methods developed for evaluating the performance of image dehazing were also summarized and discussed. We believe that our detailed survey and experimental analysis will help readers understand the DCP-based dehazing methods and will facilitate development of advanced dehazing algorithms.

Abbreviations

DCP:

dark channel prior

FRIDA:

foggy road image database

RMSE:

root-mean-square error

References

  1. E Kermani, D Asemani, A robust adaptive algorithm of moving object detection for video surveillance. EURASIP J. Image Video Process. 2014(27), 1–9 (2014)


  2. M Ozaki, K Kakimuma, M Hashimoto, K Takahashi, Laser-based pedestrian tracking in outdoor environments by multiple mobile robots, in Proceedings of Annual Conference on IEEE Industrial Electronics Society 2011 (IECON, Melbourne, 2011), pp. 197–202


  3. YY Schechner, SG Narasimhan, SK Nayar, Polarization-based vision through haze. Appl. Optics 42(3), 511–525 (2003)

  4. YY Schechner, SG Narasimhan, Instant dehazing of images using polarization, in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR, Kauai, 2001), pp. 325–332

  5. S Shwartz, E Namer, YY Schechner, Blind haze separation, in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (CVPR, Anchorage, 2006), pp. 1984–1991


  6. SG Narasimhan, SK Nayar, Contrast restoration of weather degraded images. IEEE Trans. Pattern Anal. Mach. Intell. 25(6), 713–724 (2003)


  7. SK Nayar, SG Narasimhan, Vision in bad weather, in Proceedings of the 7th IEEE International Conference on Computer Vision (ICCV, Kerkyra, 1999), pp. 820–827


  8. RT Tan, Visibility in bad weather from a single image, in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR, Anchorage, 2008), pp. 1–8


  9. R Fattal, Single image dehazing. ACM Trans. Graph. 72(3), 72:1-72:9 (2008)


  10. K He, J Sun, X Tang, Single image haze removal using dark channel prior, in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR, Miami, 2009), pp. 1956–1963

  11. K He, J Sun, X Tang, Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2010)


  12. SC Huang, BH Chen, WJ Wang, Visibility restoration of single hazy images captured in real-world weather conditions. IEEE Trans. Circuits Sys. Video Tech. 24(10), 1814–1824 (2014)


  13. Y Linan, P Yan, Y Xiaoyuan, Video defogging based on adaptive tolerance. TELKOMNIKA Indonesian Journal of Elec. 10(7), 1644–1654 (2012)


  14. H Xu, J Guo, Q Liu, L Ye, Fast image dehazing using improved dark channel prior, in Proceedings of International Conference on Information Science and Technology (ICIST, Hubei, 2012), pp. 663–667


  15. Z Tan, X Bai, A Higashi, Fast single-image defogging. FUJITSU Sci. Tech. J. 50(1), 60–65 (2014)


  16. C Xiao, J Gan, Fast image dehazing using guided joint bilateral filter. Vis. Comput. 28(6-8), 713–721 (2012)


  17. H Yang, J Wan, H Yang, J Wang, H Yang, J Wang, Color image contrast enhancement by co-occurrence histogram equalization and dark channel prior, in Proceedings of 3rd International Congress on Image and Signal Processing (CISP, Yantai, 2010), pp. 659–663


  18. MS Sandeep, Remote sensing image dehazing using guided filter. IJRSCSE. 1(3), 44–49 (2014)


  19. J Long, Z Shi, W Tang, Fast haze removal for a single remote sensing image using dark channel prior, in Proceedings of International Conference on Computer Vision in Remote Sensing (CVRS, Xiamen, 2012), pp. 132–135


  20. X Lv, W Chen, IF Shen, Real-time dehazing for image and video, in Proceedings of the 18 th IEEE Pacific Conference on Computer Graphics and Applications (HangZhou, 2010), pp. 62–69

  21. S Jeong, S Lee, The single image dehazing based on efficient transmission estimation, in Proceedings of IEEE International Conference on Consumer Electronics (ICCE, Las Vegas, 2013), pp. 376–377


  22. Z Lin, X Wang, Dehazing for image and video using guided filter. Appl. Sci. 2(4B), 123–127 (2012)


  23. YQ Zhang, Y Ding, JS Xiao, J Liu, Z Guo, Visibility enhancement using an image filtering approach. EURASIP J. Adv. Signal Process. 2012(220), 1–6 (2012)


  24. J Yu, C Xiao, D Li, Physics-based fast single image fog removal, in Proceedings of IEEE 10th International Conference on Signal Processing (ICSP, Beijing, 2010), p. 1048


  25. TH Kil, SH Lee, NI Cho, Single image dehazing based on reliability map of dark channel prior, in Proceedings of IEEE 20 th International Conference on Image Processing (ICIP, Melbourne, 2013), pp. 882–885


  26. YJ Cheng, BH Chen, SC Huang, SY Kuo, A Kopylov, O Seredin, L Mestetskiy, B Vishnyakov, Y Vizilter, O Vygolov, CR Lian, CT Wu, Visibility enhancement of single hazy images using hybrid dark channel prior, in Proceedings of IEEE International Conference on Systems, Man, and Cybernetics (SMC, Manchester, 2013), pp. 3267–3632


  27. X Lan, L Zhang, H Shen, Q Yuan, H Li, Single image haze removal considering sensor blur and noise. EURASIP J. Adv. Signal Process. 2013(86), 1–13 (2013)


  28. JB Wang, N He, LL Zhang, K Lu, Single image dehazing with a physical model and dark channel prior. Neurocomputing 149(B), 718–728 (2015)


  29. T Zhang, Y Chen, Single image dehazing based on improved dark channel prior, in Advances in Swarm and Computational Intelligence, ed. by Y Tan et al., vol. 9142 (Springer, 2015), p. 205-212

  30. Y Song, H Luo, B Hui, Z Chang, An improved image dehazing and enhancing method using dark channel prior, in Proceedings of Control and Decision Conference (CCDC) (IEEE, Qingdao, 2015), pp. 5840–5845


  31. B Huo, F Yin, Image dehazing with dark channel prior and novel estimation model. Int. J. Multiphase Ubiquitous Engineering 10(3), 13–22 (2015)


  32. Y Li, Q Fu, F Ye, H Shouno, Dark channel prior based blurred image restoration method using total variation and morphology. J. Syst. Eng. Electron. 26(2), 359–366 (2015)


  33. Z Qingsong, Y Shuai, X Yaoqin, An improved single image haze removal algorithm based on dark channel prior and histogram specification, in Proceedings of 3rd International Conference on Multimedia Technology (ICMT, Atlantis Press, Guangzhou, 2013), pp. 279–292


  34. X Zhu, Y Li, Y Qiao, Fast single image dehazing through edge-guided interpolated filter, in Proceedings of 14th IEEE International Conference on Machine Vision Applications (IAPR, Tokyo, 2015), pp. 443–446


  35. C Chengtao, Z Qiuyu, L Yanhua, Improved dark channel prior dehazing approach using adaptive factor, in Proceedings of IEEE International Conference on Mechatronics and Automation (ICMA, Beijing, 2015), pp. 1703–1707


  36. T Yu, I Riaz, J Piao, H Shin, Real-time single image dehazing using block-to-pixel interpolation and adaptive dark channel prior. IET Image Process. 9(9), 725–734 (2015)


  37. Q Liu, H Zhang, M Lin, Y Wu, Research on image dehazing algorithms based on physical model, in Proceedings of International Conference on Multimedia Technology (ICMT, Hangzhou, 2011), pp. 467–470


  38. AK Tripathi, S Mukhopadhyay, Removal of fog from images: a review. IETE Tech. Rev. 29(2), 148–156 (2012)


  39. MK Saggu, S Singh, A review on various haze removal techniques for image processing. International Journal of Current Engineering and Technology. 5(3), 1500–1505 (2015)


  40. A Shrivastava, ER Kumari, Review on single image fog removal. International Journal of Advanced Research in Computer Science and Software Engineering. 3(8), 423–427 (2013)


  41. V Sahu, M Singh, A survey paper on single image dehazing. International Journal on Recent and Innovation Trends in Computing and Communication. 3(2), 85–88 (2015)


  42. JP Tarel, N Hautière, L Caraffa, A Cord, H Halmaoui, D Gruyer, Vision enhancement in homogeneous and heterogeneous fog. IEEE Intell. Transp. Syst. Mag. 4(2), 6–20 (2012)


  43. S Lex, F Clement, S Sabine, Color image dehazing using the near-infrared, in Proceedings of IEEE International Conference on Image Processing (ICIP, Cairo, 2009), pp. 1629–1632


  44. H Koschmieder, Die Sichtweite im Nebel und die Möglichkeiten ihrer künstlichen Beeinflussung, vol. 640 (Springer, 1959), pp. 33–55. p. 171-181

  45. N Hautière, JP Tarel, J Lavenant, D Aubert, Automatic fog detection and estimation of visibility distance through use of onboard camera. Mach. Vis. Appl. 17(1), 8–20 (2006)


  46. IFSTTAR. http://www.sciweavers.org/read/frida-foggy-road-image-database-evaluation-database-for-visibility-restoration-algorithms-184350. Accessed 6 November 2012

  47. A Levin, D Lischinski, Y Weiss, A closed-form solution to natural image matting. IEEE Trans. Pattern Anal. Mach. Intell. 30(2), 228–242 (2008)


  48. K He, J Sun, X Tang, Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 35(6), 1397–1409 (2013)

  49. F Fang, F Li, T Zeng, Single image dehazing and denoising: a fast variational approach. SIAM Journal on Imaging Sciences. 7(2), 969–996 (2014)


  50. JP Tarel, N Hautière, Fast visibility restoration from a single color or gray level image, in Proceedings of IEEE 12th International Conference on Computer Vision (ICCV, Kyoto, 2009), pp. 2201–2208


  51. N Hautière, JP Tarel, D Aubert, E Dumont, Blind contrast enhancement assessment by gradient ratioing at visible edges. Image Analysis & Stereology Journal. 27(2), 87–95 (2008)


  52. C Ancuti, CO Ancuti, Effective contrast-based dehazing for robust image matching. IEEE Geosci. Remote Sens. Lett. 11(11), 1871–1875 (2014)


  53. Computational Visual Cognition Laboratory. http://cvcl.mit.edu/database.htm. Accessed 20 January 2015


Acknowledgements

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Science, ICT and Future Planning (NRF-2014R1A1A2057970).

Author information


Corresponding author

Correspondence to Seung-Won Jung.

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Lee, S., Yun, S., Nam, JH. et al. A review on dark channel prior based image dehazing algorithms. J Image Video Proc. 2016, 4 (2016). https://doi.org/10.1186/s13640-016-0104-y

