Rain, fog, vog, mist, fumes, smog, hail, snow, etc. are considered sources of atmospheric particulate matter (APM). Researchers are fighting the challenge posed by APM, which is largely due to unplanned urbanization and technological advancement. Satellite images show that Asia, Africa, and a few parts of the Americas have the most polluted atmospheres, deteriorating yearly. APM is both natural and man-made. It is a mixture of solid particles and liquid droplets of various sizes: coarse APM as PM10-PM2.5 (micrometre diameter), fine as PM2.5, and ultrafine below PM0.1 [1]. Computer vision (CV) encompasses object tracking, object recognition, surveillance, image enhancement, etc., and a clear image is a fundamental requirement in CV applications. APM degrades the visibility of the received image. Distance, airlight, transmission, and the scattering coefficient influence image formation at the viewer point (i.e., the camera) [2]. Classical enhancement techniques such as histogram equalization, image adjustment, and adaptive histogram equalization work well for normal image enhancement, but they fail in the special case of bad weather that lowers image visibility. Single image visibility restoration is the most challenging of all visibility improvement problems because no ground truth or reference image is available. Single image dehazing is one of the most sought-after single image visibility restoration techniques. Any clear outdoor image inherently possesses high contrast, airlight does not affect the richness of the image, pixel intensities are well distributed, and pixel over-saturation and under-saturation do not exist [22]. By contrast, hazy images have low contrast and airlight makes them whitish; most pixel intensities are very high and crowded together. Over-saturation may occur in one channel of the degraded image when an illuminant with a strong colour cast and a sensor/camera that responds differently to the different colour channels produce achromatic image artifacts. Visibility improvement falls under the category of ill-posed inverse problems [1-16, 31-33]: the best image has to be estimated from the attenuated received image. By inverting the image formation optical model, a reconstructed image can be found as close to the original image as the application requires. The paper is arranged as follows: Section 2 presents the literature survey. The main contribution of the work is identified in Section 3. The proposed method with its mathematical modelling is illustrated in Section 4. The results are described in Section 5 with qualitative and quantitative analysis. Finally, Section 6 concludes the paper.
In some research work, the dark channel prior (DCP) has been used, which is a statistical prior on haze-free images. This prior indicates that in a normal RGB image about 75% of the pixels of the dark channel are zero, where the dark channel denotes the lowest intensity among the three RGB channels at each pixel, and about 90% of its pixels are below 25. However, the scenario drifts radically in degraded weather, which corresponds to a high-intensity dark channel: atmospheric airlight shifts the pixel intensities to very high values, producing an almost white image. The method is effective but takes a long time to run, so it is not useful for real-time applications [4]. The work of R. Tan is based on two observations: the contrast of the image is compromised in the degraded image, a normal image having more contrast than a hazy one; and a degraded image has more airlight, which increases with distance, so the distant parts become smoother and invisible. The method is efficient as it requires only a single image, but it is not applicable in real time [5].
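For concreteness, a minimal sketch of the patch-based dark channel statistic used in [4] is given below (an illustration, not the authors' code), assuming a float RGB image with values in [0, 1]; the 75%/90% statistics quoted above can be checked by histogramming the output on haze-free images.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, patch=15):
    """Patch-based dark channel of an RGB image (values in [0, 1]).

    Per-pixel minimum over the three colour channels, followed by a
    15x15 local minimum filter, as described in [4]."""
    per_pixel_min = image.min(axis=2)
    return minimum_filter(per_pixel_min, size=patch)
```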
The algorithm proposed by J. P. Tarel is fast, with complexity linear in the number of image pixels for both colour and gray images. It is tuned by only four parameters: atmospheric veil inference, image restoration, smoothing, and tone mapping [6]. The research work of R. Fattal is based on haze estimation and scattered-light estimation, from which the contrast of the haze-free image is recovered. It assumes that the transmission and the surface shading are locally uncorrelated; this simple statistical assumption removes other complexities such as the surface albedo. The challenge of the method is to solve for the pixels where no transmission information is available; an implicit graphical model makes it possible to extrapolate the solution to those pixels [7]. The prior of D. Berman et al. is a non-local prior, in contrast to the earlier patch-based priors. They emphasised that degradation is not uniform: it differs from pixel to pixel and is controlled by the transmission coefficient. They proposed that the colours of a haze-free image form clusters spread over the entire image, whereas in a hazy image each cluster is stretched into a line of colours, called a haze line. The method also recovers the distance map. The algorithm is linear, fast, and deterministic, and no training is required [8].
The authors of the present work have also worked on visibility improvement. Earlier works were DCP-based vision improvements in which the speed of the original algorithm was improved with reduced complexity and sky masking [9]. In [10] the authors proposed three algorithms and revised DCP with gamma correction, a contrast controller, sky masking, and guided filtering. In [11, 12, 13] the authors emphasised the objective evaluation of the DCP method and the mathematical modelling of image formation. DCP is basically a patch-based, or local, prior; the patch size in [4] was 15x15 and omega was 0.95, and these two parameters play a significant role, as shown in [14]. DCP with sky masking is a useful algorithm, but its optimum parameter value is difficult to find and is usually evaluated manually. In [15] this difficulty was overcome by using the Cuckoo Search Algorithm (CSA); the resulting image removes the sky-reflection artifacts very well. Visibility improvement is a classical inverse problem, and haze is always associated with blurring; here both have been treated and removed [3-16].
As discussed above, single image colour dehazing is challenging and complex in nature. In this work, a low-complexity non-linear noise-removal model for the depth map is estimated. The image degradation optical model, with the transmission refined via RLaMs-based depth map estimation, produces the reconstructed output. Apart from that, the haziness factor k is evaluated automatically depending on the spread of intensity in the depth map [33].
Blur is an integral part of any degraded image, and it comes along with non-linear noise. The prior model of the degrading system has to be reconstructed from the blurred or degraded images. Linear filters such as the Wiener and least-squares filters, and non-linear filters such as the Lucy-Richardson filter, have also been studied. RLaMs have been applied to remove blur and compared with classical methods using parametric assessment of PSNR and time consumed [24, 25]. RLaMs are effective and important in computer graphics applications because they are non-iterative, fast, and bypass the problem of parameterizing the system's degrees of freedom; finally, their computational complexity is O(n) [28]. These advantages of LaMs have been adopted in this research work. The PSF is a quantity that determines the power of an optical system; better resolution may be achieved by narrowing the PSF. It is the spread of a point source of light as it passes through a system. Ideally, a point source in space is defined by the delta function, with an infinite spectrum in the spatial frequencies kx, ky. The PSF of an image-forming optical system is determined by the parameters of the optical system and the distance, or depth, of the object being imaged [29]. Figure 1 shows the PSF (Point Spread Function) with a 3x3 Gaussian kernel, standard deviation 10, and noise variance 0.1. Twelve different outdoor natural degraded images have been recovered with the Regularized Lagrange Multiplier using the above PSF. The discretization of linear ill-conditioned problems is often encountered in engineering and science applications; it leads to large ill-conditioned linear systems whose right-hand side is corrupted by noise [27]. The solution of this kind of linear system requires solving a minimization problem that depends on an estimate of the noise variance. This approach is well known as regularization, and the Lagrangian is a technique for solving this type of noise-constrained regularization problem.
Fig. 1 Point Spread Function used with Gaussian
kernel 3x3 and standard deviation 10, noise variance 0.1.
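As a point of reference for the classical deconvolution baselines mentioned above (Wiener and Lucy-Richardson), a minimal sketch is given below, assuming scikit-image and SciPy are available; the placeholder image, the balance parameter, and the iteration count are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.signal import convolve2d
from skimage import restoration

def gaussian_psf(size=3, sigma=10.0):
    """3x3 Gaussian PSF with standard deviation 10, normalised to unit sum (cf. Fig. 1)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

psf = gaussian_psf()
rng = np.random.default_rng(0)
f = np.full((64, 64), 0.5)                              # placeholder grayscale image in [0, 1]
g = convolve2d(f, psf, mode="same", boundary="symm")    # simulated blur
g = np.clip(g + rng.normal(0.0, np.sqrt(0.1), f.shape), 0.0, 1.0)  # additive noise, variance 0.1

f_wiener = restoration.wiener(g, psf, balance=0.1)          # linear Wiener-type deconvolution
f_rl = restoration.richardson_lucy(g, psf, num_iter=30)     # non-linear iterative Lucy-Richardson
```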
Initially, the image restoration method is considered under the category of linear, spatially invariant restoration filters. The blurring function is modelled as a point spread function (PSF), or convolution kernel, h(n1, n2). The statistical properties (mean, correlation) of the original image are assumed not to change spatially. Under these conditions the image formation mathematical model is formulated. Here f(n1, n2) is the ideal spatially discrete image with no blur or noise. The received image is

$g(n_1, n_2) = h(n_1, n_2) * f(n_1, n_2) = \sum_{k_1}\sum_{k_2} h(k_1, k_2)\, f(n_1 - k_1, n_2 - k_2)$    (3.2.1)

The above equation can be rewritten in matrix form. F is the matrix form of the original image and G is the corresponding degraded image, with pixel elements f_{i,j}, i = 1, ..., r and j = 1, ..., n; H is the degradation matrix. The rows of the two matrices are related by

$g_i = H f_i, \qquad i = 1, \ldots, r$    (3.2.2)
where $f_i$ represents the ith row of the original image F (as a column vector of n elements) and $g_i$ represents the ith row of the degraded image G. The process is repeated for each row of the matrix and develops, for every row, a system of m simultaneous equations in n = m + l - 1 unknowns. The PSF is assumed to be spatially invariant, and the degradation matrix H is built with zero boundary conditions. The length of the blur, l pixels, is also known as the degradation index and is an integer. The degradation index l is very difficult to find and has to be approximated from the degraded image; it can be recovered by two methods: i) the one-dimensional cepstral method, and ii) the two-dimensional cepstral method. The ith row of the blurred image, $g_i$, is obtained from the ith row of the original image using eq. 3.2.2,

$g_i = H f_i$    (3.2.3)

where $H = \begin{bmatrix} h_l & h_{l-1} & \cdots & h_1 & & & \\ & h_l & h_{l-1} & \cdots & h_1 & & \\ & & \ddots & & & \ddots & \\ & & & h_l & h_{l-1} & \cdots & h_1 \end{bmatrix}$ is the m x n banded Toeplitz matrix formed from the PSF coefficients $h_1, \ldots, h_l$ (blank entries are zeros).
The main objective is to retrieve the original image from the degraded image G and a priori knowledge of the degradation matrix H. The blurred image G can be written as

$G = \begin{bmatrix} g_1^T \\ g_2^T \\ \vdots \\ g_r^T \end{bmatrix} = F H^T$    (3.2.4)

which can equivalently be rewritten as

$G^T = H F^T$    (3.2.5)

It is clear that there are infinitely many exact solutions for f satisfying eqs. 3.2.2 and 3.2.5; out of them, the sharpest restored matrix is required. The vertical blur model is given by

$G = H_c F$    (3.2.6)

It is now assumed that the blurring of the rows is independent of the blurring of the columns of the image. Consequently there exist two matrices, $H_c$ and $H_r$, and the degradation can be expressed as

$G = H_c F H_r^T$    (3.2.7)

where n = m_2 + l_1 - 1, r = m_1 + l_2 - 1, l_1 is the linear horizontal blur in pixels, and l_2 is the linear vertical blur in pixels.
In this section an established method, the Lagrange Multiplier (LM), is reviewed. This is a linear blur model; the main purpose of the LM is to remove linear blur and recover the original image as close to the optimum as possible. It is assumed that the blur length is an integer number of pixels and that the resolution of the recovered image is very high. From eq. 3.2.2, g = Hf, and $\tilde{f}$ is the vector of the first m components of f that has minimum distance from the measured data g. It is now assumed that $\tilde{f} = P f$, where P is the m x n matrix that projects f onto the measured data domain of g,

$P = \begin{bmatrix} I_m & O \end{bmatrix}$    (3.3.1)

where $I_m$ denotes the identity matrix of size m x m and O signifies the m x (l - 1) null matrix. The original restoration problem is then redefined as the optimization

$\min_{f} \; \lVert P f - g \rVert^{2}$    (3.3.2)

subject to the constraint $H f = g$. Equation 3.3.2 and its constraint together form a constrained optimization problem. Using LMs, an alternative optimization problem without the constraint can be modelled:
$V(f, \lambda) = \lVert P f - g \rVert^{2} + \lambda\, \lVert H f - g \rVert^{2}$    (3.3.3)

λ is known as the Lagrange multiplier. Equation 3.3.3 is strictly convex and lower semi-continuous with respect to the weak-star bounded space topology [24-26]. Taking the partial derivative of V with respect to the unknown f (for very high λ) and setting it to zero gives

$\frac{\partial V}{\partial f} = 2 P^{T}(P f - g) + 2 \lambda H^{T}(H f - g) = 0$    (3.3.4)

$\left(P^{T} P + \lambda H^{T} H\right) f = \left(P^{T} + \lambda H^{T}\right) g$    (3.3.5)
The solution of eq. 3.3.5 for a single row is $\hat{f} = \left(P^{T} P + \lambda H^{T} H\right)^{-1}\left(P^{T} + \lambda H^{T}\right) g$; written in matrix form, i.e. as the recovered image in the horizontal blurring condition, this is

$\hat{F} = G \left(P + \lambda H\right)\left(P^{T} P + \lambda H^{T} H\right)^{-1}$    (3.3.6)

In the vertical blurring scenario, equations 3.2.6 and 3.2.7 lead to the corresponding column-wise solution

$\hat{F} = \left(P_c^{T} P_c + \lambda H_c^{T} H_c\right)^{-1}\left(P_c^{T} + \lambda H_c^{T}\right) G$    (3.3.7)

Now, for a two-dimensional separable blurring process, the recovered image is

$\hat{F} = \left(P_c^{T} P_c + \lambda H_c^{T} H_c\right)^{-1}\left(P_c^{T} + \lambda H_c^{T}\right) G \left(P_r + \lambda H_r\right)\left(P_r^{T} P_r + \lambda H_r^{T} H_r\right)^{-1}$    (3.3.8)

where $P_r$ and $P_c$ are the projection matrices of eq. 3.3.1 corresponding to the horizontal and vertical blur lengths l_1 and l_2 respectively.
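A minimal numerical sketch of the per-row solution in eqs. (3.3.1)-(3.3.6) is given below (an illustration under the stated assumptions, not the authors' implementation); the 1-D PSF h of length l, the zero boundary handling, and the value of λ are assumptions made for the example.

```python
import numpy as np

def row_blur_matrix(h, m):
    """m x n banded Toeplitz degradation matrix H built from a 1-D PSF h of length l,
    with n = m + l - 1 and zero boundary conditions (cf. eqs. 3.2.2-3.2.3)."""
    l = len(h)
    n = m + l - 1
    H = np.zeros((m, n))
    for i in range(m):
        H[i, i:i + l] = h[::-1]
    return H

def restore_rows(G, h, lam):
    """Regularized Lagrange-multiplier restoration applied row by row.

    Solves (P^T P + lam H^T H) f = (P^T + lam H^T) g for every row g of G
    (eq. 3.3.5) and returns the stacked solutions (eq. 3.3.6)."""
    m = G.shape[1]
    H = row_blur_matrix(h, m)
    n = H.shape[1]
    P = np.hstack([np.eye(m), np.zeros((m, n - m))])   # P = [I_m  O], eq. (3.3.1)
    A = P.T @ P + lam * (H.T @ H)                      # n x n, symmetric
    B = P.T + lam * H.T                                # n x m
    return np.linalg.solve(A, B @ G.T).T               # one restored n-vector per row of G

# Example usage (values are illustrative, cf. Table V):
# h = np.ones(5) / 5.0            # assumed 5-pixel uniform horizontal blur
# F_hat = restore_rows(G, h, lam=0.39)
```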
There are numerous algorithms to solve any specific problem, and one of them has to be chosen; there are also several criteria for fitting an algorithm to a problem. Efficiency is the criterion used here for algorithm selection in computational computing. Efficiency encompasses three criteria: i) time efficiency, ii) space efficiency, and iii) development efficiency [30]. Time complexity, in terms of execution time and big-O notation, has been measured experimentally and is one way of classifying and comparing algorithms.
Using the techniques described above, a novel algorithm has been designed to remove atmospheric turbulence as well as system degradation from a single colour image. The complete algorithm with its detailed mathematical modelling is given below in the corresponding sub-sections.
In this paper, a novel algorithm based on inverting the H. Koschmieder and E. J. McCartney image formation optical model [2, 32] is presented. The transmission is refined through Lagrange Multiplier-based depth map estimation, followed by YCbCr correction, as shown in Fig. 2 [24-29]. The steps are elaborated below.
Figure 2. Block Diagram of the RLaMs (Proposed Model).
Algorithm 1: RLaMs
Input: hazy image I | Computational complexity
Step I | Average of the minimum of the three channels as Imin | O(n)
Step II | Average of the maximum of the three channels as Imax | O(n)
Step III | Haziness factor k = Imin / Imax, Eq. (4.4.1) | O(n)
Step IV | Airlight estimation | O(n)
Step V | Estimation of the minimum intensity channel | O(n)
Step VI | Refinement / noise removal of the minimum intensity channel by the Regularized Lagrange Multiplier technique (used as depth estimation) [28] | O(n)
Step VII | Transmission estimation from Step VI, Eq. (4.3.4) | O(n)
Step VIII | Recovery of the dehazed image with the image degradation optical model [2, 38, 39], Eq. (4.6.1) | O(n)
Step IX | YCbCr correction | O(n)
Step X | Evaluation of contrast, k, β, and dmax of the dehazed image | O(n)
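To make the ordering of the ten steps concrete, an illustrative driver is sketched below; the helper names (estimate_airlight, rlams_refine, ycbcr_correct), the λ value, and the lower bound on the transmission are assumptions made for the sketch, not the authors' code.

```python
import numpy as np

def rlams_dehaze(I, lam=0.39):
    """Illustrative driver for Algorithm 1 on a float RGB image I in [0, 1]."""
    I_min = I.min(axis=2)                        # Step V: minimum intensity channel
    I_max = I.max(axis=2)
    k = I_min.mean() / I_max.mean()              # Steps I-III: haziness factor, Eq. (4.4.1)
    A = estimate_airlight(I)                     # Step IV: top 0.1% brightest pixels (Sec. 4.5)
    d = rlams_refine(I_min, lam)                 # Step VI: RLaMs refinement, used as depth map
    t = 1.0 - k * d                              # Step VII: refined transmission, Eq. (4.3.5)
    t = np.maximum(t, 1e-3)[..., None]           # numerical safeguard (an added assumption)
    J = (I - A) / t + A                          # Step VIII: invert the optical model, Eq. (4.6.1)
    return ycbcr_correct(np.clip(J, 0.0, 1.0))   # Step IX: YCbCr correction
```

Each line above operates once per pixel, matching the O(n) complexity column of Algorithm 1.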
The image formation model, also known as the airlight scattering model, was proposed by H. Koschmieder and E. J. McCartney [2, 32] and is represented as an ill-posed problem in equation (4.1.1):

$I(x) = J(x)\, t(x) + A\,\big(1 - t(x)\big)$    (4.1.1)

$t(x) = e^{-\beta d(x)}$    (4.1.2)

where I(x) is the hazy image at a point x, J(x) is the haze-free image, t(x) is the transmission map, A is the atmospheric light, β is the atmospheric extinction coefficient, and d is the depth of the scene, i.e., the distance between the scene point and the camera. Here I, J, and A are 3-D RGB image arrays. Of the variables above, only I, the hazy image, is known; J has to be recovered from I, t, A, and β. The estimation of A, t, and β is responsible for the quality of the dehazed image.
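A short sketch of the forward model in eqs. (4.1.1)-(4.1.2) is given below for reference (an illustration; the default A and β values are arbitrary assumptions). It can be used to generate synthetic hazy images from a clean image and a depth map.

```python
import numpy as np

def synthesize_haze(J, d, A=1.0, beta=1.0):
    """Koschmieder/McCartney forward model.

    J    : clean RGB image, float values in [0, 1], shape (H, W, 3)
    d    : per-pixel scene depth map, shape (H, W)
    A    : atmospheric light (scalar or length-3 array)
    beta : atmospheric extinction coefficient
    """
    t = np.exp(-beta * d)                                  # Eq. (4.1.2)
    return J * t[..., None] + A * (1.0 - t)[..., None]     # Eq. (4.1.1)
```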
Noise is an integral part of an imaging system. The transmission map, a 2-D image array, is severely corrupted by noise. The dehazed image is restored by refining the TM, which is obtained by inverting the DM with a proper selection of the haziness factor [33]. The transmission map is solely associated with depth information; inaccurate estimation leads to the halo effect and to a high computational cost [3-16, 33], and it is associated with an understanding of the geometric relationships in a scene. Single image DM estimation is far harder than estimation from multiple images and, simplifying the patch-based dark channel [4], the DM is estimated here as the minimum of the three channels. Random noise on the minimum channel is eliminated by the RLaMs mentioned earlier in Section 3.1.
$I_{c}^{min}(x) = \min_{c \in \{r, g, b\}} I_{c}(x)$    (4.2.1)

where $I_c$ denotes the individual channels of the RGB image, $I_c^{min}$ the (noisy) minimum of the three channels, and $I_{c,LMs}^{min}$ the RLaMs-refined (noise-free) $I_c^{min}$. The noisy $I_c^{min}$ is made noise-free through the Regularized Lagrange Multiplier and is considered the refined, normalized DM:

$I_{c,LMs}^{min}(x) = \mathrm{RLaMs}\big(I_{c}^{min}(x)\big)$    (4.2.2)
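The two steps in eqs. (4.2.1)-(4.2.2) can be sketched as below, building on the restore_rows sketch given after eq. (3.3.8); how the row-wise operator is applied to the 2-D minimum channel, the default 1-D PSF h, and the projection back to the first m columns are assumptions made purely for illustration.

```python
import numpy as np

def min_channel(I):
    """Eq. (4.2.1): per-pixel minimum over the three RGB channels (noisy depth-map estimate)."""
    return I.min(axis=2)

def rlams_refine(I_min, lam, h=None):
    """Eq. (4.2.2): RLaMs refinement of the noisy minimum channel.

    Reuses restore_rows() from the Section 3 sketch and projects the result back
    to the first m columns (f_tilde = P f), keeping the map normalized in [0, 1]."""
    if h is None:
        h = np.ones(5) / 5.0          # assumed blur kernel, for illustration only
    F_full = restore_rows(I_min, h, lam)
    m = I_min.shape[1]
    return np.clip(F_full[:, :m], 0.0, 1.0)
```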
Complementing equation (4.2.2) produces a low-complexity, edge-preserving, smooth maximum-intensity channel; this maximum-intensity channel is treated as the TM t(x) [4, 17-29]. The proposed concept is computationally simple and easy to implement.
The intensity of a far point in the minimum intensity channel may be completely faded with distance, approaching the atmospheric light A ∈ [0, 1] of equation (4.1.1). Taking the channel-wise minimum of equation (4.1.1), it can be rewritten as

$\min_{c} I_{c}(x) = t(x)\, \min_{c} J_{c}(x) + A\,\big(1 - t(x)\big)$    (4.3.1)

In the marginal case, with the atmospheric light A taken as one for the far end point and the haze-free minimum channel tending to zero, equation (4.3.1) can be refined to

$I_{c}^{min}(x) = 1 - t(x)$    (4.3.2)

A further improvement is needed because $I_{c}^{min}$ is noisy; after the RLaMs-based refinement of the depth map estimate according to equation (4.2.2), the transmission equation is rewritten as

$I_{c,LMs}^{min}(x) = 1 - t(x)$    (4.3.3)

$t(x) = 1 - I_{c,LMs}^{min}(x)$    (4.3.4)
Now, after the refined transmission map is obtained for an individual image, it will not be the same for every image, as each scenario is different. Therefore an additional factor, the haziness factor k, is introduced:

$t_{new}(x) = 1 - k\, I_{c,LMs}^{min}(x)$    (4.3.5)

k is a proportionality constant for aerial perspective: zero indicates clear visibility, as in a clear-day scene, whereas one indicates almost no visibility, as in thick fog [4, 7, 33].

$k = \dfrac{\mathrm{mean}\big(I^{min}\big)}{\mathrm{mean}\big(I^{max}\big)}$    (4.4.1)
It has already been stated that the haziness factor k indicates the amount of haze present in the image of interest. So far this has been estimated manually by visual inspection of the amount of haze, which cannot be done in real-time applications. The authors have already worked on this [31, 33]. Here the haziness factor k is taken as the ratio of the average of the minimum intensity channel to the average of the maximum intensity channel, as in equation (4.4.1). This concept works well for real-time adaptive visibility improvement.
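A minimal sketch of eqs. (4.3.5) and (4.4.1) is given below (illustrative, assuming a float RGB image in [0, 1] and the refined minimum channel from the previous sketch).

```python
import numpy as np

def haziness_factor(I):
    """Eq. (4.4.1): ratio of the mean of the minimum intensity channel to the mean of the
    maximum intensity channel; close to 0 for clear scenes, close to 1 for thick haze."""
    return I.min(axis=2).mean() / I.max(axis=2).mean()

def refined_transmission(I_min_refined, k):
    """Eq. (4.3.5): t_new(x) = 1 - k * refined minimum intensity channel."""
    return 1.0 - k * I_min_refined
```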
The image formation optical model indicates that the transmission decays exponentially with distance, eqs. (4.1.1)-(4.1.2). At the far end, the background becomes whitish and technically equal to A, the atmospheric light [2, 4, 32, 33], and the distant pixels are maximally bright due to haze. The atmospheric light is therefore proposed to be the brightest pixels in the image. For a more robust estimate, the atmospheric light A is taken from the top 0.1% brightest pixels of each channel.
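A sketch of this estimate is given below; averaging the selected pixels (rather than taking their maximum) is an assumption made for the illustration.

```python
import numpy as np

def estimate_airlight(I, frac=0.001):
    """Per-channel atmospheric light A from the top 0.1% brightest pixels of each channel (Sec. 4.5)."""
    n_top = max(1, int(frac * I.shape[0] * I.shape[1]))
    A = np.empty(I.shape[2])
    for c in range(I.shape[2]):
        channel = np.sort(I[..., c].ravel())
        A[c] = channel[-n_top:].mean()      # average of the brightest pixels (illustrative choice)
    return A
```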
An example is shown in Fig. 5: the degraded image with its depth map and transmission map, and the recovered image with its depth map and transmission map produced by the proposed RLaMs-Dehazing algorithm. It is evident, not only from the recovered image but also from the recovered depth map and transmission map, that the proposed algorithm works well and serves its purpose of cleaning the image.
The main objective of the work is to retrieve the original haze-free, or scene radiance, image. From equation (4.1.1) the scene radiance can therefore be recovered as

$J(x) = \dfrac{I(x) - A}{t_{new}(x)} + A$    (4.6.1)
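The inversion in eq. (4.6.1) can be sketched as below; the lower bound on the transmission and the clipping to [0, 1] are added numerical safeguards, not steps stated in the text.

```python
import numpy as np

def recover_radiance(I, A, t_new, t_min=1e-3):
    """Eq. (4.6.1): J = (I - A) / t_new + A, inverting the optical model of Eq. (4.1.1)."""
    t_safe = np.maximum(t_new, t_min)[..., None]
    return np.clip((I - A) / t_safe + A, 0.0, 1.0)
```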
Y is the luma, intensity, or achromatic colour channel of a colour image, while Cb and Cr are the blue-difference and red-difference chroma channels respectively. The luminance channel Y is independent of the colour information, which is why the YCbCr format performs better here: by controlling the Y channel intensity while keeping the Cb and Cr channels unaffected, the brightness of the radiance image may be enhanced so that a gloomy radiance image looks brighter. This is shown by an example in Fig. 3. The visibility of the dehazed radiance image may be enhanced by this YCbCr correction.
Fig. 3. Left: Hazy input, Middle: Scene Radiance,
Right: YCbCr correction
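A sketch of this luma-only adjustment is given below, assuming scikit-image is available; the gamma adjustment is one possible choice of Y-channel control used for illustration, not necessarily the specific correction used in the experiments.

```python
import numpy as np
from skimage import color, exposure

def ycbcr_correct(J, gamma=0.8):
    """Brighten only the luma (Y) channel, leaving Cb and Cr untouched."""
    ycbcr = color.rgb2ycbcr(J)                     # Y in [16, 235], Cb/Cr in [16, 240]
    y = (ycbcr[..., 0] - 16.0) / 219.0             # normalise Y to [0, 1]
    ycbcr[..., 0] = exposure.adjust_gamma(y, gamma) * 219.0 + 16.0
    return np.clip(color.ycbcr2rgb(ycbcr), 0.0, 1.0)
```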
In this section, the performance of the RLaMs dehazing technique is examined from various aspects. First, ten sample hazy images from the O-Haze dataset were processed with the RLaMs technique; the GT, DM, refined DM, TM, refined TM, and RLaMs output were assessed qualitatively and quantitatively.
Haze is not uniform and changes its density in each situation. The acceptance of any dehazing algorithm depends on its qualitative and quantitative visibility improvement [4-16, 30-35]. Ten hazy images from the O-Haze dataset were selected randomly to investigate the effectiveness of the RLaMs. The results in Fig. 4 show those randomly picked images along with their DM, TM, refined DM, refined TM, and dehazed outputs. The figure reveals the essential details along with compatible DM and TM textures for human perception cues. Artifact-free, colour-balanced, and clear outputs are obtained; the sky and clouds look natural, and the different depths and haze densities of those images are handled well.
Fig. 4. Performance of the proposed RLaMs dehazing method. Left to right: dense input hazy image, GT, DM, refined DM, TM, refined TM, RLaMs output. Ten images from the O-Haze dataset [34].
To validate the proposed algorithm, two types of image datasets were selected: real-world and synthetic.
a) Comparative study on real-world images, Ref. [36]
Fig. 5. Subjective performance of the state-of-the-art techniques and RLaMs using real-world images [36].
To verify the power of the RLaMs, eight state-of-the-art techniques (DEFADE [37], NLD [38], IDE [39], MSCNN [40], ProxNet [41], EPDN [42], MSBDN [43], and IDRLP [35]) were compared with the RLaMs. Images from the real-world varied-haze-thickness dataset of ref [36] are used here. The subjective results are shown in Fig. 5. Figs. 5(b-i) generate clear outputs applicable to computer vision, while Fig. 5(j) produces exceptionally clear and detailed textures in comparison with Figs. 5(b-i). Fig. 6(ZP1-ZP9) shows the artifacts and shortcomings of [37-43, 35] in handling haze, darkness, sky regions, and colour over-saturation. In Fig. 5(j), most of the shortcomings seen in Figs. 5(a-i) are circumvented.
b) Comparative study on synthetic images with haze-free GT, Refs. [44, 34, 45]
Single image dehazing is the most difficult of all image restoration problems, and haze-free GT is an important criterion for validating any image restoration algorithm. GT haze-free images from the SOTS, I-Haze, and O-Haze datasets [44, 34, 45] have been used in Fig. 6 for a comparative analysis of RLaMs with the eight state-of-the-art techniques (DEFADE, NLD, IDE, MSCNN, ProxNet, EPDN, MSBDN, and IDRLP) of [37-43, 35] respectively. Six images, two from each dataset, have been selected at random in Fig. 6. As Fig. 6 shows, RLaMs generates clear, textured outputs befitting CV applications. In this analytical study, subjective results comparable to those in Fig. 5 were observed, and a high resemblance is obtained between Fig. 6(b), the GT, and Fig. 6(k), the RLaMs output.
Fig. 6. Qualitative comparison between the proposed RLaMs and state-of-the-art techniques on six synthetic images.
The quantitative assessment of the proposed model using PSNR and SSIM is summarized in Table I. The results show the effectiveness of the RLaMs in removing different kinds of haze. For the six images from the three datasets in Fig. 6, the RLaMs performs best among the nine methods in terms of the average PSNR/SSIM values, and its results are comparably close to the GT. The per-image ranking of the nine techniques is also shown in Table I; the ranking of the RLaMs-Dehazing technique (2, 1, 4, 1, 1, 1, 3, 1, 1, 1, 1, and 1) clarifies its effectiveness and shows that the proposed technique is quantitatively fit. Table II shows the overall ranking list, which the RLaMs_Dehazing tops by a considerable margin.
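For reference, the two metrics can be computed with scikit-image as sketched below (an illustrative helper, assuming float RGB images in [0, 1]).

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(dehazed, gt):
    """PSNR and SSIM of a dehazed image against its haze-free ground truth (cf. Tables I-III)."""
    psnr = peak_signal_noise_ratio(gt, dehazed, data_range=1.0)
    ssim = structural_similarity(gt, dehazed, channel_axis=-1, data_range=1.0)
    return psnr, ssim
```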
TABLE I
PSNR AND SSIM Analysis of DEFADE, NLD, IDE,
MSCNN, ProxNet, EPDN, MSBDN, IDRLP and RLaMs_Dehazing in Fig. 6.
TABLE II
Ranking list (sum of PSNR ranks / sum of SSIM ranks) of the nine techniques on the images in Fig. 6

PSNR/SSIM | DEFADE | NLD   | IDE   | MSCNN | ProxNet | EPDN  | MSBDN | IDRLP | RLaMs
Ranking   | 31/35  | 49/37 | 38/41 | 30/36 | 36/32   | 42/44 | 19/23 | 13/14 | 12/06
In Table III, a performance analysis was conducted on the SOTS, I-Haze, and O-Haze datasets of Fig. 6. The average PSNR/SSIM of those images is listed per dataset for the eight above-mentioned benchmark algorithms and the RLaMs_Dehazing method, along with their ranking. In Table III, the ranking of the RLaMs again shows its superiority over the other eight methods.
TABLE III
Performance as average PSNR/SSIM of DEFADE, NLD, IDE, MSCNN, ProxNet, EPDN, MSBDN, IDRLP and RLaMs_Dehazing on the SOTS, I-Haze, and O-Haze datasets
Apart from subjective and objective evaluation, computational complexity plays an important role in algorithmic performance for computer vision applications [23]. As shown in Algorithm 1, each of the ten steps has O(n) complexity, so the overall complexity of Algorithm 1 is O(n); that is, the algorithm scales linearly with the size of the image (M x N). The time complexity of the eight state-of-the-art models and the RLaMs, tested with R1, R2, R3, and R4 in Fig. 5, is summarized in Table IV. The results for the different resolutions of those images are listed in Table IV. The RLaMs-Dehazing proves to be the fastest on all types of images, beating the other contestants.
Five images from O-Haze, I-Haze, and SOTS (11, 21, 11, 21, 1410.11) were randomly chosen, with their GT, hazy, and RLaMs-Dehazing versions shown in Fig. 7. In Fig. 8, a cropped version of Fig. 7 shows the effectiveness of RLaMs with the seven Lagrange Multiplier values of Table V. No artifacts were found. Moreover, this also verified the range of RLaMs values [0.0312, 1.0953] for effective results.
Fig. 7. First row: hazy images (O-Haze (11, 21), I-Haze (11, 21), SOTS); second row: GT; third row: RLaMs dehazed images.
TABLE IV
Processing time (seconds) of DEFADE, NLD, IDE, MSCNN, ProxNet, EPDN, MSBDN, IDRLP, and RLaMs_Dehazing with R1, R2, R3, and R4 in Fig. 6
Fig. 8. Cropped version of Fig. 7 with seven Lagrange Multiplier values; O-Haze (11, 21), I-Haze (11, 21), SOTS (1410.11).
TABLE V
Lagrange Multiplier values used with the above five images

Lagrange Multiplier | 1.0953 | 0.3916 | 0.1563 | 0.1322 | 0.1037 | 0.0698 | 0.0312
Particles suspended in the air hinder the path of light. This effect produces serious artifacts and degradation in the image formation process of a digital imaging system, with poor or no visibility. To improve visibility in the digital image, a low-complexity, fast, robust visibility improvement technique is presented. The Lagrange-based regularized method is incorporated to refine the TM through the DM, which improves the transmission; this is followed by inverting the image formation optical model, and finally YCbCr correction enhances the results. In this paper, a novel regularized Lagrange Multiplier-based image visibility improvement technique, RLaMs-Dehaze, is presented in which the TM is purified through a clean DM obtained by the RLaMs optimization technique. The RLaMs are powerful, robust, and of low time complexity, linear in the size of the image under investigation. Experiments on a diverse set of real-world images and synthetic haze datasets demonstrate the pre-eminence of RLaMs-Dehaze over benchmark methods: eight benchmark methods were selected for the experiments, and the results show the superiority of the RLaMs-Dehazing both qualitatively and quantitatively.
The RLaMs method produces better visibility than existing procedures. Further modifications are possible to improve the algorithm, depending on the transmission map (TM) and atmospheric light (A) estimations. The weather condition of each image is unique; therefore no single method can be claimed to solve the problem equally optimally in all cases.
Conflict of interest:
No conflict of
interest.
Highlights of the RLaMs Dehazing:
RLaMs DM refinement, followed by TM correction; inverting the optical image formation model; YCbCr correction.
Acronym:
DM: Depth Map;
TM: Transmission Map; GT: Ground Truth; CV: Computer Vision
Resource:
a) Software: MATLAB 2014a is used for the experiments.
b) Hardware: a six-year-old machine with an Intel Core i3-3110M CPU @ 2.40 GHz, 4.0 GB RAM, and Intel HD Graphics 4000 has been used for the research.
c) Dataset: SOTS, I-Haze, and O-Haze [44, 34, 45]
Potential Application:
This algorithm can be used in surveillance, military, underwater and outdoor image post-processing, and onboard moving vehicles to enhance visibility and provide clear vision.
[1] J Mao, Study of Image
Dehazing with the self-adjustment of the Haze Degree, Ph.D. Thesis, Division of
Production and Information
Systems Engineering,
Muroran Institute of Technology, 2015
[2] H. Koschmieder, Theorie der horizontalen Sichtweite, Beitr. Phys. Freien Atm., vol. 12, 1924, pp. 171-181.
[3] S Roy, S S Chaudhuri,
Modeling of Ill-Posed Inverse Problem, IJMECS, 2016, 12,pp- 46-55
[4] K. He, J. Sun, and X. Tang, Single image haze removal using dark channel prior, IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, 2009, pp. 1956-1963.
[5] R. Tan, Visibility in Bad Weather from a Single Image, IEEE CVPR 2008, DOI: 10.1109/CVPR.2008.4587643, ISSN: 1063-6919.
[6]
Tarel, J.-P.,
Hautiere, N.:
Fast visibility restoration from a single color or gray level image, IEEE 12th
International conference on
Computer Vision (2009)
2201 – 2208.
[7] R. Fattal, Single Image Dehazing, ACM Transactions on Graphics (TOG), vol. 27, issue 3, August 2008.
[8] D Berman, T Treibitz, S Avidan, Non-local
Image Dehazing, CVPR2016.
[9] D. Das, S. Roy, S. S. Chaudhuri, Dehazing Technique based on Dark Channel Prior model with Sky Masking and its quantitative analysis, CIEC 2016, IEEE Xplore, ISBN: 978-1-5090-0035-7.
[10] S Roy, S S Chaudhuri, Development of Real
Time Visibility Enrichment Algorithms NCECERS2016
[11] S. K. Datta, M. Hore, S. Roy, Objective Evaluation of Dehazed Image by DCP, NCECERS 2016.
[12] S K Datta, M Hore, S Roy, Mathematical
Modelling of Image Formation through Atmosphere, NSAMTM2016
[13] M Hore, S K Datta, S Roy, Subjective &
Objective Evaluation of Dehazed Image by DCP, IC2C2SE2016
[14] D Roy, S Banerjee, S Roy, S S Chaudhuri,
Removal of the Artifacts Present in the Existing Dehazing Techniques,
IC2C2SE2016
[15] S Roy, S S Chaudhuri, Modelling and control
of sky pixels in visibility improvement through CSA, IC2C2SE2016
[16]
S Roy, S S Chaudhuri, Modeling of Haze Image as Ill posed Inverse Problem &
its solution, IJMECS, vol:8, no:12, pp:46-55, December2016.
[17]
I Pitas, A N Venetsanopoulos, Order Statistics in Digital Signal Processing,
Proceedings of IEEE, Vol-80, No-12, December 1992.pp-1893-1921.
[18]
R C Gonzalez, R E Woods, Digital Image Processing, 3rd
Edition,
Pearson.
[19] N Hautiere, J P Tarel, D Aubert, E Doumont,
Blind contrast enhancement assessment by gradient rationing at visible edges,
Image Analysis and stereology.
[20] V Radhika, G Padmavati, Performance of
various order statistics filters in impulse and mixed noise removal for RS
images, Signal & Image Processing: An International Journal(SIPIJ)
Vol.1, No.2, December 2010
[21] C Mythili, V Kavitha,Efficient technique for
color image noise reduction, The Research Bulletin of Jordan ACM,Vol II(III)
[22] X. Zhang, D. H. Brainard, Estimation of Saturated Pixel Values in Digital Colour Imaging, Journal of the Optical Society of America A, vol. 21, no. 8, pp. 2301-2310.
[23] M Sipser, Introduction to the theory of
computation, Cengage Learning.3rd
Edition, 2013.
[24] I. Stojanovic, P. Stanimirovic, M. Miladinovic, Applying the algorithm of Lagrange multipliers in digital image restoration.
[25] T. Chan, S. Esedoglu, F. Park, A. Yip, Recent Trends in Total Variation Image Restoration, Mathematical Models in Computer Vision: A Handbook, Department of Mathematics, University of California, pp. 1-18, 2004.
[26] A. Buades, B. Coll, J. M. Morel, A Review of Image Denoising Algorithms, with a New One, Multiscale Model. Simul., vol. 4, no. 2, pp. 490-530, 2005.
[27] G Landi, Lagrangian Methods for the
regularization of the discrete ill-posed problems, Computational Optimization
and Applications, 39(3), 347-368, Springer US,2008.
[28] D. Baraff, Linear Time Dynamics using Lagrange Multipliers, Carnegie Mellon University, 1996.
[29] M Subbarao, On the Depth Information in The
Point Spread Function of a Defocused Optical System, State University of New
York, 1999.
[30] E. Pontelli, K. Vellaverde, Complexity of Algorithms, Department of Computer Science, New Mexico State University.
[31] Sangita Roy, Sheli Sinha Chaudhuri, Low
Complexity Single Colour Image Dehazing Technique, Intelligent Multidimensional
Data and Image Processing,2018, IGI Global (formerly Idea Group Inc.).
[32] E J McCartney, Optics of the Atmosphere:
Scattering by Molecules and Particles, New York, NY, USA:Wiley, 1976.
[33] Sangita Roy & Sheli Sinha Chaudhuri
(2022) WLMS-based Transmission Refined Self-Adjusted No Reference Weather
Independent Image Visibility Improvement, IETE Journal of Research, 68:3,
1635-1651, DOI: 10.1080/03772063.2019.1662335.
[34] C. O. Ancuti, C. Ancuti, R. Timofte and C. De
Vleeschouwer, "O-HAZE: A Dehazing Benchmark with Real Hazy and Haze-Free
Outdoor Images," 2018 IEEE/CVF Conference on Computer Vision and
Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 2018, pp.
867-8678, doi: 10.1109/CVPRW.2018.00119.
[35]
M. Ju, C.
Ding, C. A. Guo, W. Ren and D. Tao, "IDRLP: Image Dehazing Using Region
Line Prior," in IEEE Transactions on Image Processing, vol. 30, pp.
9043-9057, 2021, doi: 10.1109/TIP.2021.3122088.
[36] R
Fattal, Dehazing
using color-lines.
ACM transactions on graphics (TOG)
34.1 (2014): 1-14.
[37]
L. K. Choi, J. You, and A. C. Bovik,
“Referenceless prediction of perceptual fog density and perceptual image
defogging,”
IEEE Trans. Image Process, vol. 24, no. 11, pp. 3888–3901,
Nov. 2015.
[38] D. Berman,
T. Treibitz, and S. Avidan, “Single image dehazing using haze-lines,”
IEEE
Trans. Pattern Anal. Mach. Intell., vol. 42, no. 3, pp. 720–734, Mar. 2020.
[39] M. Ju, C.
Ding, W. Ren, Y. Yang, D. Zhang, and Y. J. Guo, “IDE: Image dehazing and
exposure using an enhanced atmospheric scat- tering model,” IEEE Trans. Image
Process., vol. 30, pp. 2180–2192, 2021.
[40]
W. Ren, S. Liu,
H. Zhang, J. Pan, X. Cao, and M.-H. Yang, “Single image dehazing via
multi-scale convolutional neural networks,” in Com- puter Vision—ECCV 2016, B.
Leibe, J. Matas, N. Sebe, and M. Welling, Eds. Cham, Switzerland: Springer,
2016, pp. 154–169.
[41] D. Yang and J. Sun, “Proximal
dehaze-net: A prior learning-based deep network for single image dehazing,” in
Proc. Eur. Conf. Comput. Vis. (ECCV), 2018, pp. 702–717.
[42] Y. Qu, Y. Chen, J. Huang, and Y. Xie,
“Enhanced Pix2pix dehazing network,” in Proc. IEEE/CVF Conf. Comput. Vis.
Pattern Recognit. (CVPR), Jun. 2019, pp. 8152–8160.
[43] H. Dong et al., “Multi-scale boosted
dehazing network with dense feature fusion,” in Proc. IEEE/CVF Conf. Comput.
Vis. Pattern Recognit. (CVPR), Jun. 2020, pp. 2154–2164.
[44] B. Li et al., “Benchmarking
single-image dehazing and beyond,” IEEE Trans. Image Process., vol. 28, no. 1,
pp. 492–505, Jan. 2019.
[45]
Ancuti, Cosmin, et al. "I-HAZE: a dehazing benchmark with real hazy
and haze-free indoor images."
Advanced Concepts for Intelligent Vision Systems: 19th
International Conference, ACIVS 2018, Poitiers, France, September 24–27, 2018,
Proceedings 19.
Springer International
Publishing, 2018.