Estimation of Ambient Light and Transmission Map with Common Convolutional Architecture
Young-Sik Shin, Younggun Cho, Gaurav Pandey, Ayoung Kim
Department of Civil and Environmental Engineering, KAIST, S. Korea {youngsik.shin, yg.cho,
Department of Electrical Engineering, IIT, Kanpur, India

Abstract—This paper presents a method for effective ambient light and transmission estimation in underwater images using a common convolutional network architecture. The estimated ambient light and the transmission map are used to dehaze the underwater images. Dehazing underwater images is especially challenging due to the unknown and significantly varying ambient light in underwater environments. Unlike common dehazing methods, the proposed method estimates the ambient light along with the transmission map, thereby improving the reconstruction quality of the dehazed images. We evaluate the dehazing performance of the proposed method on real underwater images and compare our method to current state-of-the-art techniques.

I. INTRODUCTION

Capturing high-resolution color images of underwater environments has many applications in ocean engineering. A good-quality image of the deep sea can be very useful for scientists studying various underwater phenomena. Despite significant advancements in camera technology, high-quality underwater image acquisition remains an unsolved problem. The scattering of light by water particles, together with the attenuation and color change of different wavelengths of ambient light (including external light sources), causes a hazing effect in captured underwater images, as shown in Fig. 1(a). This hazing effect needs to be removed from the images so that a clear picture of the underwater scene can be visualized. Several methods for haze removal based on prior information have been proposed in the past [1]–[6].
Schechner [1] used prior information from multiple images taken under different environmental conditions and at different degrees of polarization. Narasimhan [2] used available depth information to enhance dehazing performance. Fattal [3] presented the first single-image dehazing technique, using independent component analysis (ICA) to decorrelate the transmission and the surface shading. This technique relies on the assumption that the transmission map and the surface shading factor are locally uncorrelated. He et al. [4] proposed a haze removal method using the dark channel prior (DCP). This approach uses a strong prior that at least one of the color channels has low intensity in haze-free images. Zhu et al. [5] proposed the color attenuation prior (CAP), which uses the fact that the saturation of hazy pixels in an image becomes much lower than that of haze-free pixels. Carlevaris-Bianco et al. [6] used the strong difference between the red color channel and the other channels to estimate the depth of the scene from a single underwater image.

Fig. 1. Haze removal on underwater images. (a) Original hazy image. (b) The resulting dehazed image.

Recently, convolutional neural networks (CNNs) have provided promising solutions to many vision tasks [7]–[12], including dehazing [13], [14]. Mai et al. [13] proposed a back-propagation neural network (BPNN) model to estimate the transmission map. Cai et al. [14] proposed a CNN architecture called DehazeNet for the estimation of the transmission map: a hazy image is provided as input to the convolutional architecture, and a regression model is learned to predict the transmission map. This transmission map is then used to remove haze via an atmospheric scattering model. This is the work most closely related to ours; however, the proposed method differs in that we also estimate the ambient light in addition to the transmission map, for a better reconstruction of the hazy image.
Image dehazing is an effective approach to increase the visibility and recover the true radiance of a hazy image. It should be noted that the ambient light in underwater environments is significantly biased, and estimating it accurately improves many underwater vision applications. However, despite the utility of correctly estimating the ambient light, it is usually selected arbitrarily from the brightest or the median pixel within the region of lowest estimated transmission [4]–[6], [15]. Therefore, in this work we focus on the fast and effective estimation of both the ambient light and the transmission map. We propose a common convolutional architecture to simultaneously estimate the ambient light and the transmission map for visibility enhancement of a scene degraded by an underwater environment. The rest of the paper is organized as follows. In Section II, we describe the atmospheric scattering model used in
this work. Section III describes the proposed convolutional architecture. Section IV presents results from simulated and real data captured underwater. In Section V, we present our concluding remarks.

Fig. 2. The overall convolutional architecture. The network contains three stages: multi-scale fusion, multi-scale feature extraction, and nonlinear regression.

II. ATMOSPHERIC SCATTERING MODEL

In this paper, we adopt the haze model described in [4], which considers the hazed image as a weighted sum of the haze-free image J and the ambient light A. For a pixel (u, v), the hazed pixel value I(u, v) is modeled as

I(u, v) = J(u, v) t(u, v) + A (1 − t(u, v)),    (1)

where I is the observed hazy image, J is the scene radiance, A is the global atmospheric light, and t is the transmission, i.e., the portion of light that reaches the camera without scattering. The transmission value t(u, v) at any pixel decreases exponentially with the distance the light travels,

t(u, v) = e^(−β d(u, v)),    (2)

where d(u, v) is the depth of the scene point and β is the attenuation coefficient of the medium. If we recover the transmission map t and the global atmospheric light A from a given hazed image, the original scene radiance J can be recovered from the model in (1).

III. CONVOLUTIONAL NEURAL NETWORK (CNN) FOR AMBIENT LIGHT AND TRANSMISSION ESTIMATION

A. Model Architecture

The proposed CNN architecture is composed of three stages of network propagation, as shown in Fig. 2. The first stage of the network is a multi-scale fusion stage inspired by [16]. In this stage, we use an element-wise summation of pixels for dimensionality reduction. This allows us to generate more feature maps in each layer, thereby improving the training accuracy.
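As a concrete illustration, the scattering model in Eqs. (1) and (2) can be sketched in a few lines of NumPy. The scene radiance, depth map, attenuation coefficient, and ambient light below are hypothetical values, not taken from the paper:

```python
import numpy as np

def transmission_from_depth(depth, beta):
    """Eq. (2): transmission decays exponentially with scene depth."""
    return np.exp(-beta * depth)

def apply_haze(J, t, A):
    """Eq. (1): hazed pixel as a weighted sum of radiance and ambient light."""
    t = t[..., None]                       # broadcast per-pixel t over color channels
    return J * t + A * (1.0 - t)

# Hypothetical example: a 2x2 RGB scene of constant radiance with increasing depth.
J = np.full((2, 2, 3), 0.8)                # haze-free radiance
depth = np.array([[1.0, 2.0], [4.0, 8.0]])
t = transmission_from_depth(depth, beta=0.5)
A = np.array([0.1, 0.4, 0.6])              # biased, underwater-like ambient light
I = apply_haze(J, t, A)
```

Note how the deepest pixel is dominated by the ambient term; this is exactly the haze effect the network has to invert.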
Since we estimate both the ambient light and the transmission map from the same architecture, we fix the value of one while training for the other variable. Using more feature maps reduces the uncertainty of the unknown variables (e.g., the transmission when learning the ambient light, and vice versa). The final stage consists of a nonlinear regression layer for ambient light and transmission map estimation. A similar idea was recently proposed in [14], which used a multi-scale mapping layer and a maxout operation for haze-relevant feature extraction. However, our model differs in that we estimate both the ambient light and the transmission map from the same network architecture.

1) Multi-scale fusion: The first stage in the architecture is the multi-scale fusion layer, which has been widely used for image enhancement including single-image dehazing [14], [16], [17]. We use three parallel convolutional layers, with filters of size [3×3×32], [5×5×32], and [7×7×32]. We choose more feature maps than DehazeNet to reduce the uncertainty in the unknown variables. Moreover, at the end of this stage we perform an element-wise summation, whereas DehazeNet only stacks up the multi-scale layers. The summation of the multi-scale layers helps reduce the computational complexity of the later stages.

2) Feature extraction: To handle the ill-posed nature of single-image dehazing, previous methods have assumed various features that are closely related to the properties of hazy images. For example, the dark channel, hue disparity, and RGB channel disparity have been used as haze-relevant features [4]–[6]. Inspired by these methods, the second stage is designed to extract haze-relevant features. This stage consists of a maxout unit, two convolutional layers with a Rectified Linear Unit (ReLU) activation function, and a max-pooling layer. The maxout unit [18] is selected to find features along the depth (channel) direction of the input data.
After the maxout unit, we use two convolutional layers with filters of size [3×3×32]. Lastly, a max-pooling layer is chosen to obtain spatially invariant features. In conventional CNNs, max-pooling layers are used to overcome local sensitivity and to reduce the resolution of the feature maps. In contrast, we apply the operation densely to prevent loss of resolution, which enables the use of the CNN for image restoration.

3) Nonlinear regression: The last stage is the nonlinear regression layer that performs the estimation of the transmission and the ambient light. The convolutional layer used in
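The two operations described above, element-wise summation of parallel multi-scale feature maps and a maxout over channel groups, can be sketched in NumPy as follows; the spatial size and the maxout group size here are illustrative, not the paper's exact configuration:

```python
import numpy as np

def multiscale_fusion(feature_maps):
    """Element-wise summation of parallel conv outputs, each of shape [H, W, C]."""
    return np.sum(np.stack(feature_maps, axis=0), axis=0)

def maxout(x, group_size):
    """Maxout: max over contiguous channel groups, [H, W, C] -> [H, W, C // group_size]."""
    h, w, c = x.shape
    assert c % group_size == 0
    return x.reshape(h, w, c // group_size, group_size).max(axis=3)

# Illustrative: three parallel 'conv' outputs with 32 feature maps each.
rng = np.random.default_rng(0)
f3 = rng.standard_normal((8, 8, 32))   # stand-in for the 3x3 branch output
f5 = rng.standard_normal((8, 8, 32))   # stand-in for the 5x5 branch output
f7 = rng.standard_normal((8, 8, 32))   # stand-in for the 7x7 branch output
fused = multiscale_fusion([f3, f5, f7])   # still [8, 8, 32], unlike stacking
features = maxout(fused, group_size=4)    # [8, 8, 8]
```

The summation keeps the channel count fixed, which is why it is cheaper for the later stages than DehazeNet's stacking of the three branches.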
this stage consists of a single filter of size [3×3×32]. We also add the widely used ReLU layer after every convolutional layer to avoid problems of slow convergence and local minima during the training phase [7], [10], [11].

Fig. 3. The process of haze removal on underwater images. (a) The original image, (b) the transmission map, and (c) the estimated ambient light. Finally, the dehazed image is recovered in (d).

B. Training of the CNN

1) Training data: Training the CNN requires pairs of hazy patches and the corresponding haze-free information (e.g., transmission map and ambient light). In practice, it is very difficult to obtain such a training dataset experimentally. Therefore, we use the haze model in (1) and synthesize hazed patches from haze-free image patches to train our CNN architecture. We use two publicly available datasets, ICL-NUIM [19] and the SUN database [20], for training. We apply a random transmission t ∈ (0, 1) and a random ambient light A ∈ (0, 1) to small haze-free image patches, assuming that the transmission and the ambient light are locally constant on small patches. Note that we use random ambient light for underwater images, which is generally a valid assumption for underwater environments. Moreover, the dataset generated in this manner enables the network to estimate more accurate transmission on hazy images with color distortion. In this way, we generate a large number of hazy image patches from haze-free image patches with random transmission and ambient light. This training dataset is used to learn the three-stage CNN architecture described above.

2) Training Method: In the proposed model, we use supervised learning between hazy image patches and label data (the transmission value or the ambient light value). The filter weights of the model are learned by minimizing a loss function.
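The patch-synthesis step can be sketched as follows. The patch size and random seed are placeholders; only the use of Eq. (1) with a locally constant random t and A follows the paper:

```python
import numpy as np

def synthesize_hazy_patch(clear_patch, rng):
    """Apply Eq. (1) with a random transmission and a random ambient light,
    both assumed constant over the small patch."""
    t = rng.uniform(0.0, 1.0)             # random transmission in (0, 1)
    A = rng.uniform(0.0, 1.0, size=3)     # random (possibly biased) RGB ambient light
    hazy = clear_patch * t + A * (1.0 - t)
    return hazy, t, A                     # (network input, training labels)

rng = np.random.default_rng(42)
clear = rng.uniform(0.0, 1.0, size=(16, 16, 3))   # stand-in for a haze-free patch
hazy, t, A = synthesize_hazy_patch(clear, rng)
```

Drawing each RGB component of A independently is what produces the color-cast (biased ambient light) patches that balanced-light methods struggle with.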
Given a pair of hazy patches generated as above and their corresponding labels, we use the mean squared error (MSE) as the loss function,

L(Θ) = (1/N) Σ_{i=1}^{N} ‖F(p_i; Θ) − l_i‖²,    (3)

where p_i is the i-th input hazy patch, l_i is its label, and Θ denotes the filter weights. We employ the widely used stochastic gradient descent (SGD) algorithm to train our model.

C. Balanced Scene Radiance Recovery

Once the transmission t(u, v) and the atmospheric light A are obtained, the original scene radiance J can be recovered from the atmospheric scattering model in (1). It is conventionally recovered from the inverse atmospheric scattering model

J(u, v) = (I(u, v) − A) / max(t(u, v), t_0) + A.    (4)

However, this model cannot recover the original scene radiance in an underwater environment. The attenuation of ambient light underwater depends not only on the distance travelled and the density of particles in the light path, but also on the color/wavelength of the light. For instance, the intensity of the red channel decreases rapidly, whereas the intensity of the blue or green channel decreases slowly. Hence, the ambient light component in images captured underwater is not the true ambient light, which affects the recovery of the scene radiance from the conventional atmospheric scattering model in (4). To solve this problem, we propose a novel balanced scene recovery model,

J(u, v) = (I(u, v) − Â) / max(t̂(u, v), t_0) + A_b,    (5)

where the first term is the direct scene radiance and A_b is the balanced ambient light; t̂(u, v) is the estimated transmission value and Â is the ambient light estimated by our CNN model. The balanced ambient light A_b is defined as

A_b = ‖Â‖ a_b,    (6)

where a_b is the fixed vector [1/√3, 1/√3, 1/√3], which represents the balanced ambient light direction in RGB space. Here we assume that the balanced ambient light has the same magnitude ‖Â‖ for all three color channels.
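The training objective in Eq. (3) is a plain mean squared error over a batch of patches. A minimal sketch, with hypothetical predicted and ground-truth transmission values:

```python
import numpy as np

def mse_loss(predictions, labels):
    """Eq. (3): mean squared error between network outputs and their labels."""
    predictions = np.asarray(predictions, dtype=float)
    labels = np.asarray(labels, dtype=float)
    return np.mean((predictions - labels) ** 2)

# Hypothetical predicted transmissions vs. ground truth for four patches.
pred = np.array([0.8, 0.5, 0.3, 0.9])
true = np.array([0.7, 0.5, 0.4, 1.0])
loss = mse_loss(pred, true)
```

In training, SGD would update the filter weights Θ along the negative gradient of this loss; the same objective is reused unchanged for the ambient-light labels.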
The full dehazing process, with the image reconstructed using the balanced scene radiance recovery model, is shown in Fig. 3.
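Given an estimated transmission map and ambient light, the balanced recovery of Eqs. (4)–(6) can be sketched as follows; the observation, the estimates, and the lower bound t_0 = 0.1 are hypothetical values:

```python
import numpy as np

def recover_balanced(I, t_hat, A_hat, t0=0.1):
    """Eqs. (5)-(6): direct scene radiance plus a balanced ambient term."""
    a_b = np.ones(3) / np.sqrt(3.0)          # balanced direction in RGB space
    A_b = np.linalg.norm(A_hat) * a_b        # Eq. (6): same magnitude as A_hat
    t = np.maximum(t_hat, t0)[..., None]     # clamp transmission away from zero
    return (I - A_hat) / t + A_b             # Eq. (5)

# Hypothetical observation with a blue-green, underwater-like ambient light.
A_hat = np.array([0.1, 0.5, 0.6])
J_true = np.full((4, 4, 3), 0.7)
t_hat = np.full((4, 4), 0.5)
I = J_true * t_hat[..., None] + A_hat * (1.0 - t_hat[..., None])
J_rec = recover_balanced(I, t_hat, A_hat)
```

Replacing the additive term Â of Eq. (4) with A_b is what removes the color cast: the direct radiance is restored with the biased estimate, but the re-added ambient light is neutral across channels.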
IV. RESULTS

We trained the proposed architecture on about 1 million synthetic hazed patches generated from two publicly available datasets, ICL-NUIM [19] and the SUN database [20]. A mixture of patches from the two datasets was used to capture both indoor (ICL-NUIM) and outdoor (SUN database) scenes. We used the open-source Caffe framework [21] to train our convolutional network. We performed several experiments to verify the robustness of the proposed convolutional architecture. We also compared the proposed method with several state-of-the-art dehazing methods in the literature [4]–[6], [14]. These algorithms can be broadly classified into (i) conventional computer vision techniques that use prior information [4]–[6] and (ii) CNN-based dehazing methods, such as DehazeNet [14] and the proposed method.

Fig. 4. Two types of synthetic hazy patches. (a) Balanced ambient light patches for lighting without color cast (no bias). (b) Biased ambient light patches for color-cast lighting.

Fig. 5. Error statistics as a function of the saturation value of the ambient light on 15K synthetic patches, for (a, e) He [4], (b, f) Zhu [5], (c, g) Cai [14], and (d, h) the proposed method. The red line shows the estimation error and the gray boundary its spread. The first row shows results under balanced ambient light and the second row under biased ambient light.

TABLE I. TRANSMISSION MAP ACCURACY, MSE (×10⁻²): columns DCP [4], CAP [5], DehazeNet [14], Ours; rows "No color cast" and "With color cast".

A. Transmission Map Estimation

In the haze removal process, transmission estimation accuracy is the dominant factor in dehazing performance. In the atmospheric scattering model (1), the transmission describes the portion of light that reaches the camera. When the light is scattered and the transmission attenuated, haze appears in the image.
In underwater environments, this attenuation occurs under significantly biased ambient light, which produces color saturation in the hazy region. Therefore, the accuracy of the transmission map estimation has a significant effect on the estimation of the original scene radiance. We compare the accuracy of the estimated transmission (computed with the various methods) on 15K sample patches (selected apart from the training sets) under two different haze conditions. One haze model is without color cast (i.e., no bias in the ambient light, as in aerial images) and the other is with color cast in the ambient light (i.e., strong bias, as in underwater images). We synthetically generated the two hazy image sets accordingly (Fig. 4), one with balanced ambient light and the other with biased ambient light. The transmission map accuracy of the different methods is compared in Table I and Fig. 5.

Fig. 6. Comparison of the estimated transmission maps on a real underwater image. The original hazy image is shown in (a). Some results show promising performance, as in (b) Carlevaris-Bianco [6], (c) He [4], and (f) the proposed method. The others, (d) Zhu [5] and (e) Cai [14], are unsuccessful at estimating the transmission underwater due to highly saturated color regions.

It should be noted that the biased ambient light underwater disrupts the transmission estimation of some methods because they depend on balanced ambient light conditions. Table I compares the MSE between the estimated transmission and the ground truth under the two ambient light conditions. Fig. 5 presents error statistics over the 15K test sample patches. Note that the proposed method shows the best accuracy in transmission map estimation. The performance of DehazeNet [14] and CAP [5] depends on the ambient light condition: they are competent under balanced ambient light but fail under biased ambient light. We also observed that DCP [4] performs consistently regardless of color cast. We believe this is mainly because the bias in the ambient light does not affect the DCP values of the local patches. The proposed method outperforms DCP under balanced ambient light and still robustly estimates the transmission under biased ambient light. A summarizing illustration of the transmission maps estimated by the different methods is shown in Fig. 6. Feasible transmission map estimates are produced by He [4], Carlevaris-Bianco [6], and our method, while the others show insufficient performance due to the high color saturation in the water. As these methods depend heavily on RGB values, additional color correction is required to improve their performance (e.g., white balancing [22] and lαβ color correction [23]).

B. Real Underwater Image Dehazing

We applied the trained network to a set of real underwater images with different levels of haze. Typical samples of underwater images with various color casts were used, as shown in Fig. 7. The six test images have various ambient light conditions. Fig. 7 shows the dehazing results and the estimated ambient light for each method. Carlevaris-Bianco [6] shows good dehazing and color balance among the previously reported methods, because it uses a prior associated with the color-dependent attenuation of light specific to underwater scenes. He [4] and Zhu [5] enhance the contrast of the dehazed images; however, as can be seen in the ambient light estimation row, the estimated ambient light is not accurate, as they merely compute it from the estimated transmission map. Cai [14] in particular fails to recover the scene radiance on color-cast underwater images because the algorithm assumes balanced ambient light. Overall, the proposed dehazing network shows reliable performance for underwater images. Note that in this experiment both the transmission map and the ambient light were estimated by the proposed common convolutional architecture. These results show good dehazing performance in underwater environments.

Fig. 7. Comparison of dehazing results with other methods under various ambient light conditions: (a) original images, (b) Carlevaris-Bianco [6], (c) He [4], (d) Zhu [5], (e) Cai [14], (f) proposed results. A small color box represents the estimated ambient light for each method, and the images below the color box show the dehazing results. Note that the best performance is shown in (b) and (f) regardless of the ambient light condition.

V. CONCLUSION

In this paper, we presented a CNN-based ambient light and transmission estimation framework with a common convolutional architecture for single-image haze removal. We evaluated the performance of the proposed method on synthetic data and compared it with existing methods. We also evaluated the qualitative performance of the proposed method on real underwater images. The preliminary results show the promising dehazing ability of the proposed method.
ACKNOWLEDGMENT

This work is supported through a grant from KAIST via the High Risk High Return Project (Award #N111685) and NRF (Award #N115984), and the Ministry of Land, Infrastructure and Transport's U-city program.

REFERENCES
[1] Y. Schechner and N. Karpel, "Recovery of underwater visibility and structure by polarization analysis," IEEE Journal of Oceanic Engineering, July 2005.
[2] S. G. Narasimhan and S. Nayar, "Interactive deweathering of an image using physical models," in IEEE Workshop on Color and Photometric Methods in Computer Vision, in conjunction with ICCV, October 2003.
[3] R. Fattal, "Single image dehazing," ACM Transactions on Graphics (TOG), vol. 27, no. 3, pp. 72:1–72:9, Aug. 2008.
[4] K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, Dec. 2011.
[5] Q. Zhu, J. Mai, and L. Shao, "A fast single image haze removal algorithm using color attenuation prior," IEEE Transactions on Image Processing, vol. 24, no. 11, Nov. 2015.
[6] N. Carlevaris-Bianco, A. Mohan, and R. M. Eustice, "Initial results in underwater single image dehazing," in Proceedings of the IEEE/MTS OCEANS Conference and Exhibition, Sept. 2010.
[7] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," arXiv preprint, 2015.
[8] T. Naseer, L. Spinello, W. Burgard, and C. Stachniss, "Robust visual robot localization across seasons using network flows," in Proceedings of the National Conference on Artificial Intelligence (AAAI), 2014.
[9] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 2014.
[10] J. Kim, J. K. Lee, and K. M. Lee, "Accurate image super-resolution using very deep convolutional networks," arXiv preprint, 2015.
[11] J. Sun, W. Cao, Z. Xu, and J. Ponce, "Learning a convolutional neural network for non-uniform motion blur removal," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 2015.
[12] L. Xu, J. S. Ren, C. Liu, and J. Jia, "Deep convolutional neural network for image deconvolution," in Advances in Neural Information Processing Systems, 2014.
[13] J. Mai, Q. Zhu, D. Wu, Y. Xie, and L. Wang, "Back propagation neural network dehazing," in Proc. IEEE Conf. Robotics and Biomimetics, Dec. 2014.
[14] B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, "DehazeNet: An end-to-end system for single image haze removal," arXiv preprint, 2016.
[15] K. Tang, J. Yang, and J. Wang, "Investigating haze-relevant features in a learning framework for image dehazing," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014.
[16] C. Ancuti, C. O. Ancuti, T. Haber, and P. Bekaert, "Enhancing underwater images and videos by fusion," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2012.
[17] Y. Liu, S. Liu, and Z. Wang, "A general framework for image fusion based on multi-scale transform and sparse representation," Information Fusion, vol. 24, 2015.
[18] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio, "Maxout networks," in Proceedings of the 30th International Conference on Machine Learning (ICML), Atlanta, GA, USA, June 2013.
[19] A. Handa, T. Whelan, J. McDonald, and A. Davison, "A benchmark for RGB-D visual odometry, 3D reconstruction and SLAM," in Proceedings of the IEEE International Conference on Robotics and Automation, Hong Kong, China, May 2014.
[20] J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba, "SUN database: Large-scale scene recognition from abbey to zoo," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2010.
[21] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, "Caffe: Convolutional architecture for fast feature embedding," in Proc. ACM Conf. on Multimedia, 2014.
[22] Y.-C. Liu, W.-H. Chan, and Y.-Q. Chen, "Automatic white balance for digital still camera," IEEE Transactions on Consumer Electronics, vol. 41, no. 3, 1995.
[23] G. Bianco, M. Muzzupappa, F. Bruno, R. Garcia, and L. Neumann, "A new color correction method for underwater imaging," The International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 40, no. 5, 2015.
5th International Conference on Measurement, Instrumentation and Automation (ICMIA 2016) A Novel Multi-Frame Color Images Super-Resolution Framewor based on Deep Convolutional Neural Networ Zhe Li, Shu
More informationarxiv: v1 [cs.cv] 4 Oct 2018
Progressive Feature Fusion Network for Realistic Image Dehazing Kangfu Mei 1[0000 0001 8949 9597], Aiwen Jiang 1[0000 0002 5979 7590], Juncheng Li 2[0000 0001 7314 6754], and Mingwen Wang 1 arxiv:1810.02283v1
More informationIMPLEMENTATION OF THE CONTRAST ENHANCEMENT AND WEIGHTED GUIDED IMAGE FILTERING ALGORITHM FOR EDGE PRESERVATION FOR BETTER PERCEPTION
IMPLEMENTATION OF THE CONTRAST ENHANCEMENT AND WEIGHTED GUIDED IMAGE FILTERING ALGORITHM FOR EDGE PRESERVATION FOR BETTER PERCEPTION Chiruvella Suresh Assistant professor, Department of Electronics & Communication
More informationOutline Radiometry of Underwater Image Formation
Outline - Introduction - Features and Feature Matching - Geometry of Image Formation - Calibration - Structure from Motion - Dense Stereo - Radiometry of Underwater Image Formation - Conclusion 1 pool
More informationFinding Tiny Faces Supplementary Materials
Finding Tiny Faces Supplementary Materials Peiyun Hu, Deva Ramanan Robotics Institute Carnegie Mellon University {peiyunh,deva}@cs.cmu.edu 1. Error analysis Quantitative analysis We plot the distribution
More informationSelf-supervised Multi-level Face Model Learning for Monocular Reconstruction at over 250 Hz Supplemental Material
Self-supervised Multi-level Face Model Learning for Monocular Reconstruction at over 250 Hz Supplemental Material Ayush Tewari 1,2 Michael Zollhöfer 1,2,3 Pablo Garrido 1,2 Florian Bernard 1,2 Hyeongwoo
More informationRemoving rain from single images via a deep detail network
Removing rain from single images via a deep detail network Xueyang Fu 1 Jiabin Huang 1 Delu Zeng 2 Yue Huang 1 Xinghao Ding 1 John Paisley 3 1 Key Laboratory of Underwater Acoustic Communication and Marine
More informationarxiv: v1 [cs.cv] 31 Mar 2016
Object Boundary Guided Semantic Segmentation Qin Huang, Chunyang Xia, Wenchao Zheng, Yuhang Song, Hao Xu and C.-C. Jay Kuo arxiv:1603.09742v1 [cs.cv] 31 Mar 2016 University of Southern California Abstract.
More informationLearning image representations equivariant to ego-motion (Supplementary material)
Learning image representations equivariant to ego-motion (Supplementary material) Dinesh Jayaraman UT Austin dineshj@cs.utexas.edu Kristen Grauman UT Austin grauman@cs.utexas.edu max-pool (3x3, stride2)
More informationResearch on Clearance of Aerial Remote Sensing Images Based on Image Fusion
Research on Clearance of Aerial Remote Sensing Images Based on Image Fusion Institute of Oceanographic Instrumentation, Shandong Academy of Sciences Qingdao, 266061, China E-mail:gyygyy1234@163.com Zhigang
More informationChannel Locality Block: A Variant of Squeeze-and-Excitation
Channel Locality Block: A Variant of Squeeze-and-Excitation 1 st Huayu Li Northern Arizona University Flagstaff, United State Northern Arizona University hl459@nau.edu arxiv:1901.01493v1 [cs.lg] 6 Jan
More informationA Review on Different Image Dehazing Methods
A Review on Different Image Dehazing Methods Ruchika Sharma 1, Dr. Vinay Chopra 2 1 Department of Computer Science & Engineering, DAV Institute of Engineering & Technology Jalandhar, India 2 Department
More informationDepth Estimation from a Single Image Using a Deep Neural Network Milestone Report
Figure 1: The architecture of the convolutional network. Input: a single view image; Output: a depth map. 3 Related Work In [4] they used depth maps of indoor scenes produced by a Microsoft Kinect to successfully
More informationRobust Face Recognition Based on Convolutional Neural Network
2017 2nd International Conference on Manufacturing Science and Information Engineering (ICMSIE 2017) ISBN: 978-1-60595-516-2 Robust Face Recognition Based on Convolutional Neural Network Ying Xu, Hui Ma,
More informationReal-time Hand Tracking under Occlusion from an Egocentric RGB-D Sensor Supplemental Document
Real-time Hand Tracking under Occlusion from an Egocentric RGB-D Sensor Supplemental Document Franziska Mueller 1,2 Dushyant Mehta 1,2 Oleksandr Sotnychenko 1 Srinath Sridhar 1 Dan Casas 3 Christian Theobalt
More informationLearning and Inferring Depth from Monocular Images. Jiyan Pan April 1, 2009
Learning and Inferring Depth from Monocular Images Jiyan Pan April 1, 2009 Traditional ways of inferring depth Binocular disparity Structure from motion Defocus Given a single monocular image, how to infer
More informationVolumetric and Multi-View CNNs for Object Classification on 3D Data Supplementary Material
Volumetric and Multi-View CNNs for Object Classification on 3D Data Supplementary Material Charles R. Qi Hao Su Matthias Nießner Angela Dai Mengyuan Yan Leonidas J. Guibas Stanford University 1. Details
More informationApplications of Light Polarization in Vision
Applications of Light Polarization in Vision Lecture #18 Thanks to Yoav Schechner et al, Nayar et al, Larry Wolff, Ikeuchi et al Separating Reflected and Transmitted Scenes Michael Oprescu, www.photo.net
More informationOne Network to Solve Them All Solving Linear Inverse Problems using Deep Projection Models
One Network to Solve Them All Solving Linear Inverse Problems using Deep Projection Models [Supplemental Materials] 1. Network Architecture b ref b ref +1 We now describe the architecture of the networks
More informationSpecular Reflection Separation using Dark Channel Prior
2013 IEEE Conference on Computer Vision and Pattern Recognition Specular Reflection Separation using Dark Channel Prior Hyeongwoo Kim KAIST hyeongwoo.kim@kaist.ac.kr Hailin Jin Adobe Research hljin@adobe.com
More informationA Fast Semi-Inverse Approach to Detect and Remove the Haze from a Single Image
A Fast Semi-Inverse Approach to Detect and Remove the Haze from a Single Image Codruta O. Ancuti, Cosmin Ancuti, Chris Hermans, Philippe Bekaert Hasselt University - tul -IBBT, Expertise Center for Digital
More informationCANDY: Conditional Adversarial Networks based Fully End-to-End System for Single Image Haze Removal
CANDY: Conditional Adversarial Networks based Fully End-to-End System for Single Image Haze Removal Kunal Swami and Saikat Kumar Das (Abstract) Single image haze removal is a very challenging and ill-posed
More informationRobust Image Dehazing and Matching Based on Koschmieder s Law And SIFT Descriptor
Robust Image Dehazing and Matching Based on Koschmieder s Law And SIFT Descriptor 1 Afthab Baik K.A, 2 Beena M.V 1 PG Scholar, 2 Asst. Professor 1 Department of CSE 1 Vidya Academy of Science And Technology,
More informationAutomatic Image De-Weathering Using Physical Model and Maximum Entropy
Automatic Image De-Weathering Using Physical Model and Maximum Entropy Xin Wang, Zhenmin TANG Dept. of Computer Science & Technology Nanjing Univ. of Science and Technology Nanjing, China E-mail: rongtian_helen@yahoo.com.cn
More informationarxiv: v1 [cs.cv] 16 Nov 2015
Coarse-to-fine Face Alignment with Multi-Scale Local Patch Regression Zhiao Huang hza@megvii.com Erjin Zhou zej@megvii.com Zhimin Cao czm@megvii.com arxiv:1511.04901v1 [cs.cv] 16 Nov 2015 Abstract Facial
More informationFaceted Navigation for Browsing Large Video Collection
Faceted Navigation for Browsing Large Video Collection Zhenxing Zhang, Wei Li, Cathal Gurrin, Alan F. Smeaton Insight Centre for Data Analytics School of Computing, Dublin City University Glasnevin, Co.
More informationSingle Image Dehazing Using Fixed Points and Nearest-Neighbor Regularization
Single Image Dehazing Using Fixed Points and Nearest-Neighbor Regularization Shengdong Zhang and Jian Yao Computer Vision and Remote Sensing (CVRS) Lab School of Remote Sensing and Information Engineering,
More informationFeature-Fused SSD: Fast Detection for Small Objects
Feature-Fused SSD: Fast Detection for Small Objects Guimei Cao, Xuemei Xie, Wenzhe Yang, Quan Liao, Guangming Shi, Jinjian Wu School of Electronic Engineering, Xidian University, China xmxie@mail.xidian.edu.cn
More informationFog Simulation and Refocusing from Stereo Images
Fog Simulation and Refocusing from Stereo Images Yifei Wang epartment of Electrical Engineering Stanford University yfeiwang@stanford.edu bstract In this project, we use stereo images to estimate depth
More informationExtend the shallow part of Single Shot MultiBox Detector via Convolutional Neural Network
Extend the shallow part of Single Shot MultiBox Detector via Convolutional Neural Network Liwen Zheng, Canmiao Fu, Yong Zhao * School of Electronic and Computer Engineering, Shenzhen Graduate School of
More informationDeep Learning-driven Depth from Defocus via Active Multispectral Quasi-random Projections with Complex Subpatterns
Deep Learning-driven Depth from Defocus via Active Multispectral Quasi-random Projections with Complex Subpatterns Avery Ma avery.ma@uwaterloo.ca Alexander Wong a28wong@uwaterloo.ca David A Clausi dclausi@uwaterloo.ca
More informationPedestrian Detection based on Deep Fusion Network using Feature Correlation
Pedestrian Detection based on Deep Fusion Network using Feature Correlation Yongwoo Lee, Toan Duc Bui and Jitae Shin School of Electronic and Electrical Engineering, Sungkyunkwan University, Suwon, South
More informationDeepIM: Deep Iterative Matching for 6D Pose Estimation - Supplementary Material
DeepIM: Deep Iterative Matching for 6D Pose Estimation - Supplementary Material Yi Li 1, Gu Wang 1, Xiangyang Ji 1, Yu Xiang 2, and Dieter Fox 2 1 Tsinghua University, BNRist 2 University of Washington
More informationPhotometric Stereo with Auto-Radiometric Calibration
Photometric Stereo with Auto-Radiometric Calibration Wiennat Mongkulmann Takahiro Okabe Yoichi Sato Institute of Industrial Science, The University of Tokyo {wiennat,takahiro,ysato} @iis.u-tokyo.ac.jp
More informationClassifying a specific image region using convolutional nets with an ROI mask as input
Classifying a specific image region using convolutional nets with an ROI mask as input 1 Sagi Eppel Abstract Convolutional neural nets (CNN) are the leading computer vision method for classifying images.
More informationInternational Journal of Computer Engineering and Applications, Volume XII, Special Issue, September 18,
REAL-TIME OBJECT DETECTION WITH CONVOLUTION NEURAL NETWORK USING KERAS Asmita Goswami [1], Lokesh Soni [2 ] Department of Information Technology [1] Jaipur Engineering College and Research Center Jaipur[2]
More information/17/$ IEEE 3205
HAZERD: AN OUTDOOR SCENE DATASET AND BENCHMARK FOR SINGLE IMAGE DEHAZING Yanfu Zhang, Li Ding, and Gaurav Sharma Dept. of Electrical and r Engineering, University of Rochester, Rochester, NY ABSTRACT In
More informationREGION AVERAGE POOLING FOR CONTEXT-AWARE OBJECT DETECTION
REGION AVERAGE POOLING FOR CONTEXT-AWARE OBJECT DETECTION Kingsley Kuan 1, Gaurav Manek 1, Jie Lin 1, Yuan Fang 1, Vijay Chandrasekhar 1,2 Institute for Infocomm Research, A*STAR, Singapore 1 Nanyang Technological
More informationDeep Neural Networks:
Deep Neural Networks: Part II Convolutional Neural Network (CNN) Yuan-Kai Wang, 2016 Web site of this course: http://pattern-recognition.weebly.com source: CNN for ImageClassification, by S. Lazebnik,
More informationDirect Methods in Visual Odometry
Direct Methods in Visual Odometry July 24, 2017 Direct Methods in Visual Odometry July 24, 2017 1 / 47 Motivation for using Visual Odometry Wheel odometry is affected by wheel slip More accurate compared
More informationMULTI-SCALE OBJECT DETECTION WITH FEATURE FUSION AND REGION OBJECTNESS NETWORK. Wenjie Guan, YueXian Zou*, Xiaoqun Zhou
MULTI-SCALE OBJECT DETECTION WITH FEATURE FUSION AND REGION OBJECTNESS NETWORK Wenjie Guan, YueXian Zou*, Xiaoqun Zhou ADSPLAB/Intelligent Lab, School of ECE, Peking University, Shenzhen,518055, China
More informationCombining Semantic Scene Priors and Haze Removal for Single Image Depth Estimation
Combining Semantic Scene Priors and Haze Removal for Single Image Depth Estimation Ke Wang Enrique Dunn Joseph Tighe Jan-Michael Frahm University of North Carolina at Chapel Hill Chapel Hill, NC, USA {kewang,dunn,jtighe,jmf}@cs.unc.edu
More informationWhen Big Datasets are Not Enough: The need for visual virtual worlds.
When Big Datasets are Not Enough: The need for visual virtual worlds. Alan Yuille Bloomberg Distinguished Professor Departments of Cognitive Science and Computer Science Johns Hopkins University Computational
More informationRendering and Modeling of Transparent Objects. Minglun Gong Dept. of CS, Memorial Univ.
Rendering and Modeling of Transparent Objects Minglun Gong Dept. of CS, Memorial Univ. Capture transparent object appearance Using frequency based environmental matting Reduce number of input images needed
More informationHigh-Resolution Image Dehazing with respect to Training Losses and Receptive Field Sizes
High-Resolution Image Dehazing with respect to Training osses and Receptive Field Sizes Hyeonjun Sim, Sehwan Ki, Jae-Seok Choi, Soo Ye Kim, Soomin Seo, Saehun Kim, and Munchurl Kim School of EE, Korea
More informationMulti-scale Single Image Dehazing using Perceptual Pyramid Deep Network
Multi-scale Single Image Dehazing using Perceptual Pyramid Deep Network He Zhang Vishwanath Sindagi Vishal M. Patel Department of Electrical and Computer Engineering Rutgers University, Piscataway, NJ
More informationAn ICA based Approach for Complex Color Scene Text Binarization
An ICA based Approach for Complex Color Scene Text Binarization Siddharth Kherada IIIT-Hyderabad, India siddharth.kherada@research.iiit.ac.in Anoop M. Namboodiri IIIT-Hyderabad, India anoop@iiit.ac.in
More informationSingle Image Dehazing Using Fixed Points and Nearest-Neighbor Regularization
Single Image Dehazing Using Fixed Points and Nearest-Neighbor Regularization Shengdong Zhang and Jian Yao (B) Computer Vision and Remote Sensing (CVRS) Lab, School of Remote Sensing and Information Engineering,
More informationDetecting motion by means of 2D and 3D information
Detecting motion by means of 2D and 3D information Federico Tombari Stefano Mattoccia Luigi Di Stefano Fabio Tonelli Department of Electronics Computer Science and Systems (DEIS) Viale Risorgimento 2,
More informationTransfer Learning. Style Transfer in Deep Learning
Transfer Learning & Style Transfer in Deep Learning 4-DEC-2016 Gal Barzilai, Ram Machlev Deep Learning Seminar School of Electrical Engineering Tel Aviv University Part 1: Transfer Learning in Deep Learning
More informationCycle-Dehaze: Enhanced CycleGAN for Single Image Dehazing
Cycle-Dehaze: Enhanced CycleGAN for Single Image Dehazing Deniz Engin Anıl Genç Hazım Kemal Ekenel SiMiT Lab, Istanbul Technical University, Turkey {deniz.engin, genca16, ekenel}@itu.edu.tr Abstract In
More informationSupplementary Material: Unsupervised Domain Adaptation for Face Recognition in Unlabeled Videos
Supplementary Material: Unsupervised Domain Adaptation for Face Recognition in Unlabeled Videos Kihyuk Sohn 1 Sifei Liu 2 Guangyu Zhong 3 Xiang Yu 1 Ming-Hsuan Yang 2 Manmohan Chandraker 1,4 1 NEC Labs
More informationarxiv: v2 [cs.cv] 14 May 2018
ContextVP: Fully Context-Aware Video Prediction Wonmin Byeon 1234, Qin Wang 1, Rupesh Kumar Srivastava 3, and Petros Koumoutsakos 1 arxiv:1710.08518v2 [cs.cv] 14 May 2018 Abstract Video prediction models
More informationRecursive Deep Residual Learning for Single Image Dehazing
Recursive Deep Residual Learning for Single Image Dehazing Yixin Du and Xin Li West Virginia University LCSEE, 395 Evansdale Drive, Morgantown, WV 26506-6070, U.S.A. yidu@mix.wvu.edu Xin.Li@mail.wvu.edu
More informationEfficient Image Dehazing with Boundary Constraint and Contextual Regularization
013 IEEE International Conference on Computer Vision Efficient Image Dehazing with Boundary Constraint and Contextual Regularization Gaofeng MENG, Ying WANG, Jiangyong DUAN, Shiming XIANG, Chunhong PAN
More informationDay/Night Unconstrained Image Dehazing
Day/Night Unconstrained Image Dehazing Sanchayan Santra, Bhabatosh Chanda Electronics and Communication Sciences Unit Indian Statistical Institute Kolkata, India Email: {sanchayan r, chanda}@isical.ac.in
More informationarxiv: v1 [cs.cv] 15 May 2018
A DEEPLY-RECURSIVE CONVOLUTIONAL NETWORK FOR CROWD COUNTING Xinghao Ding, Zhirui Lin, Fujin He, Yu Wang, Yue Huang Fujian Key Laboratory of Sensing and Computing for Smart City, Xiamen University, China
More informationImage Denoising and Blind Deconvolution by Non-uniform Method
Image Denoising and Blind Deconvolution by Non-uniform Method B.Kalaiyarasi 1, S.Kalpana 2 II-M.E(CS) 1, AP / ECE 2, Dhanalakshmi Srinivasan Engineering College, Perambalur. Abstract Image processing allows
More informationSupplementary Material for Zoom and Learn: Generalizing Deep Stereo Matching to Novel Domains
Supplementary Material for Zoom and Learn: Generalizing Deep Stereo Matching to Novel Domains Jiahao Pang 1 Wenxiu Sun 1 Chengxi Yang 1 Jimmy Ren 1 Ruichao Xiao 1 Jin Zeng 1 Liang Lin 1,2 1 SenseTime Research
More informationAn Approach for Real Time Moving Object Extraction based on Edge Region Determination
An Approach for Real Time Moving Object Extraction based on Edge Region Determination Sabrina Hoque Tuli Department of Computer Science and Engineering, Chittagong University of Engineering and Technology,
More informationRobust color segmentation algorithms in illumination variation conditions
286 CHINESE OPTICS LETTERS / Vol. 8, No. / March 10, 2010 Robust color segmentation algorithms in illumination variation conditions Jinhui Lan ( ) and Kai Shen ( Department of Measurement and Control Technologies,
More informationRecovering Realistic Texture in Image Super-resolution by Deep Spatial Feature Transform. Xintao Wang Ke Yu Chao Dong Chen Change Loy
Recovering Realistic Texture in Image Super-resolution by Deep Spatial Feature Transform Xintao Wang Ke Yu Chao Dong Chen Change Loy Problem enlarge 4 times Low-resolution image High-resolution image Previous
More informationImproving Latent Fingerprint Matching Performance by Orientation Field Estimation using Localized Dictionaries
Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 3, Issue. 11, November 2014,
More informationA Survey of Light Source Detection Methods
A Survey of Light Source Detection Methods Nathan Funk University of Alberta Mini-Project for CMPUT 603 November 30, 2003 Abstract This paper provides an overview of the most prominent techniques for light
More informationDepth image super-resolution via multi-frame registration and deep learning
Depth image super-resolution via multi-frame registration and deep learning Ching Wei Tseng 1 and Hong-Ren Su 1 and Shang-Hong Lai 1 * and JenChi Liu 2 1 National Tsing Hua University, Hsinchu, Taiwan
More informationAir-Light Estimation Using Haze-Lines
Air-Light Estimation Using Haze-Lines Dana Berman Tel Aviv University danamena@post.tau.ac.il Tali Treibitz University of Haifa ttreibitz@univ.haifa.ac.il Shai Avidan Tel Aviv University avidan@eng.tau.ac.il
More informationHuman Detection and Tracking for Video Surveillance: A Cognitive Science Approach
Human Detection and Tracking for Video Surveillance: A Cognitive Science Approach Vandit Gajjar gajjar.vandit.381@ldce.ac.in Ayesha Gurnani gurnani.ayesha.52@ldce.ac.in Yash Khandhediya khandhediya.yash.364@ldce.ac.in
More informationBilevel Sparse Coding
Adobe Research 345 Park Ave, San Jose, CA Mar 15, 2013 Outline 1 2 The learning model The learning algorithm 3 4 Sparse Modeling Many types of sensory data, e.g., images and audio, are in high-dimensional
More information