Contrast adjustment via Bayesian sequential partitioning

Zhiyu Wang, Shuo Xie, Bai Jiang

Abstract

Photographs taken in dim light have low color contrast. However, traditional methods for adjusting contrast suffer from boundary artifacts created as a side effect of Gaussian smoothing. This paper presents a method for adjusting contrast without creating boundary artifacts, by relying on Bayesian sequential partitioning to group similar colors and then adjusting the grouped colors together.

1 Introduction

The perceptual phenomenon of local contrast is fundamental for image enhancement. We perceive the color of a feature in an image relative to the colors in its immediate neighborhood. Thus, a color will appear darker if it is surrounded by bright colors, and the same color will appear lighter if it is surrounded by dark colors. A number of well-known optical illusions demonstrate this phenomenon; Figure (1) provides an example. The two small boxes inside the larger boxes appear to have different shades of gray; in fact, they are the same color. We perceive the small box on the right as lighter than the box on the left because of local contrast. Therefore, when enhancing images, we aim to distinguish features relative to their neighborhoods.

[Figure 1: Example of local contrast]

2 Computation of local contrast

2.1 Smoothed image

Local contrast is usually measured by taking the difference between the original image and a smoothed version of the image. The smoothed image is obtained by convolving the input signal with a kernel, e.g. a Gaussian kernel. The detail of an image is defined by

    Detail = Original Input Signal - Smoothed Signal    (1)
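As a rough illustration of this decomposition, the sketch below computes a Gaussian-smoothed signal and the detail layer of Equation (1) for a single-channel image; the use of scipy's Gaussian filter and the value of sigma are illustrative assumptions, not values prescribed by this paper.

```python
# Sketch: smoothed signal and detail layer of Equation (1).
# Assumes a single-channel image stored as a float NumPy array; sigma is an
# illustrative kernel width, not a value specified by the method.
import numpy as np
from scipy.ndimage import gaussian_filter

def detail_layer(image: np.ndarray, sigma: float = 5.0):
    """Return (smoothed, detail) with detail = image - smoothed."""
    smoothed = gaussian_filter(image, sigma=sigma)  # convolution with a Gaussian kernel
    detail = image - smoothed                       # Equation (1)
    return smoothed, detail
```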

We then obtain the enhanced image by adding the detail back into the adjusted smoothed image. Taking the image of Figure (1) as an example, the results are shown in Figures (2) and (3).

[Figure 2: Smoothed image]
[Figure 3: Traditional enhanced image]

2.2 Creation of boundary artifacts

The enhancement procedure produces a boundary artifact, visible in Figure (3): the color at the boundary of the box is over-adjusted, while the color on the inside of the box has not been adjusted as drastically ([3]). This creates an artifact, since the boundary now has a much different color than the inside of the box. We say that the enhanced signal is not propagated properly into the whole image. Ideally, the boundary of the enhanced picture should have the same color as the inside of the box, as it did in the original image. Thus, traditional contrast adjustment methods suffer from the production of boundary artifacts. To prevent such boundary artifacts from occurring, one must ensure that the boundaries of objects are not treated too differently from the interiors of the same objects. Our approach is to cluster pixels with similar features into several groups and to change the intensity of the pixels in each group identically. We explain this method in the next section.

3 Our method

The key idea in our method is that pixels which are close in the RGB scale in the original image should remain close in the output image. The image enhancement should change a group of pixels with similar colors in the same way. To do this, we turn to unsupervised learning approaches to cluster pixels based on color similarity.

3.1 Bayesian sequential partitioning

We identify groups of similarly colored pixels using Bayesian sequential partitioning (BSP). BSP was developed by Wing Wong et al. for density estimation; see ([1]). BSP not only clusters points into groups (regions), as the usual K-means would do, but, more importantly, also learns the density of each group. More concretely, suppose we have n training data x^{(1)}, x^{(2)}, ..., x^{(n)} \in R^d, assumed i.i.d. We want to estimate the probability P(x \in A) for a given region A. The BSP method partitions the region into small rectangles A_i and estimates the density \alpha_i within each rectangle; see Figure (4) for a 2-dimensional example. The probability of a point x and the likelihood of the data are given by

    P(x | \alpha_1, ..., \alpha_t, A_1, ..., A_t) = \sum_{k=1}^{t} \frac{\alpha_k}{|A_k|} \mathbf{1}\{x \in A_k\}

    P(x^{(1)}, ..., x^{(n)} | \alpha_1, ..., \alpha_t, A_1, ..., A_t) = \prod_{i=1}^{n} P(x^{(i)} | \alpha_1, ..., \alpha_t, A_1, ..., A_t) = \prod_{k=1}^{t} \frac{\alpha_k^{n_k}}{|A_k|^{n_k}}

respectively, where |A_k| denotes the volume of A_k and n_k denotes the number of training data that fall into region A_k. All parameters of the BSP package are set following the methods in ([1]).
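The following toy sketch illustrates this piecewise-constant density and its likelihood for a partition that is already given as a list of axis-aligned boxes. It does not perform the BSP search of ([1]), which selects the partition itself; setting alpha_k to the empirical fraction n_k / n is only an illustrative assumption.

```python
# Toy illustration of the piecewise-constant density above for a *given* partition.
# Not the BSP search of [1]; alpha_k is simply the empirical fraction n_k / n of
# points falling into box A_k.
import numpy as np

def box_density(points, boxes):
    """points: (n, d) array; boxes: list of (lo, hi) corner pairs."""
    n = len(points)
    alphas, volumes = [], []
    for lo, hi in boxes:
        inside = np.all((points >= lo) & (points < hi), axis=1)
        alphas.append(inside.sum() / n)                                   # alpha_k = n_k / n
        volumes.append(float(np.prod(np.asarray(hi) - np.asarray(lo))))   # |A_k|
    return np.array(alphas), np.array(volumes)

def log_likelihood(points, boxes):
    """log of prod_k alpha_k^{n_k} / |A_k|^{n_k}, skipping empty boxes."""
    alphas, volumes = box_density(points, boxes)
    n_k = alphas * len(points)
    mask = n_k > 0
    return float(np.sum(n_k[mask] * (np.log(alphas[mask]) - np.log(volumes[mask]))))
```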

[Figure 4: BSP method illustration. (a) Partition of the region into small regions; (b) density \alpha for each small region.]

3.2 Graph representation

Each pixel is considered a three-dimensional vector (r, g, b). In what follows we first take the log of (r, g, b), based on Retinex theory ([4]). After we apply BSP, let N denote the number of groups obtained. We construct a graph of N nodes, representing each group as a node. If we assign a three-dimensional vector (a_i^r, a_i^g, a_i^b) to each node i and set every pixel clustered into group i to (a_i^r, a_i^g, a_i^b), we obtain a new image. Furthermore, we define two nodes to be adjacent if the corresponding groups are adjacent in the partition obtained by BSP. For example, in Figure (4), A_1 and A_2 are adjacent, while A_1 and A_4 are not. Denote by \alpha_i (1 ≤ i ≤ N) the density of group i learned by the BSP method. We define the weight between adjacent nodes to be

    w_{ij} = \exp\left\{ -\frac{(\alpha_i - \alpha_j)^2}{2\sigma^2 (\alpha_i + \alpha_j)^2} \right\}

while the weight between non-adjacent nodes is set to zero. We then obtain an undirected graph with a weight on each edge; see Figure (5).

Remark: We obtain the graph representation from the original image, and we will use it to obtain the BSP smoothed image and the output, as discussed below.

[Figure 5: Graph representation after BSP partition]
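As a rough sketch of this construction, the snippet below builds the weight matrix and the corresponding graph Laplacian (used in the next subsection) from the group densities. The densities alpha and the list of adjacent group pairs are assumed to come from the BSP step, and sigma is a free parameter.

```python
# Sketch of the graph of subsection 3.2: weights between adjacent groups and the
# resulting graph Laplacian. alpha[i] is the BSP density of group i;
# adjacent_pairs lists the (i, j) groups whose partition cells touch.
import numpy as np

def build_graph(alpha, adjacent_pairs, sigma=1.0):
    N = len(alpha)
    W = np.zeros((N, N))
    for i, j in adjacent_pairs:
        # w_ij = exp{ -(alpha_i - alpha_j)^2 / (2 sigma^2 (alpha_i + alpha_j)^2) }
        w = np.exp(-(alpha[i] - alpha[j]) ** 2 /
                   (2.0 * sigma ** 2 * (alpha[i] + alpha[j]) ** 2))
        W[i, j] = W[j, i] = w                 # undirected graph: symmetric weights
    L = np.diag(W.sum(axis=1)) - W            # graph Laplacian, used in subsection 3.3
    return W, L
```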

3.3 BSP smoothed image

Using the graph representation introduced in subsection 3.2, let (s_i^r, s_i^g, s_i^b), i = 1, 2, ..., N, be the vectors corresponding to the smoothed image obtained by traditional convolution, and denote s^c = (s_1^c, s_2^c, ..., s_N^c)^T, where c \in {r, g, b}. We want to generate a modified smoothed image which is as close as possible to the smoothed image, but which is free of the problematic boundary effect discussed in subsection 2.2. The modified image should thus have the following properties:

(a) maintain the enhancement at the boundary, as the smoothed image does;
(b) propagate the signal enhancement to the remaining area of the image.

We denote by a^c = (a_1^c, a_2^c, ..., a_N^c)^T the modified image we want to obtain. Mathematically, in order to satisfy the properties above, we want a_i^c and a_j^c to be close when w_{ij} is large, and a_i^c to be close to s_i^c. We design the objective function W(a^c) to be

    W(a^c) = \frac{1}{2} \sum_{1 \le i,j \le N} w_{ij} (a_i^c - a_j^c)^2 + \sum_{i=1}^{N} d_i^c (a_i^c - s_i^c)^2    (2)

where w_{ij} is the weight in the graph and d_i^c = |Detail_i^c|. In more compact form, we can rewrite W(a^c) as

    W(a^c) = (a^c)^T L a^c + (a^c - s^c)^T D (a^c - s^c)    (3)

where L is the graph Laplacian and D = diag(d_1^c, ..., d_N^c). Our goal is to minimize W(a^c) to obtain a^c. Since the graph Laplacian is symmetric and positive semidefinite ([2]), W(a^c) is convex, and we can minimize Equation (3) in closed form by setting \nabla W(a^c) = 0, which gives

    a^c = (L + D)^{-1} D s^c

Remark: Since c \in {r, g, b}, we are essentially solving (3) three times, once each for r, g and b.
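A minimal sketch of this closed-form update for one color channel is given below; the Laplacian L is the one built in the graph sketch above, and the vectors s (traditionally smoothed values per group) and d (the per-group detail weights) are assumed to come from the earlier steps.

```python
# Sketch: closed-form solution a^c = (L + D)^{-1} D s^c for a single channel.
# L: (N, N) graph Laplacian; s: (N,) smoothed values per group; d: (N,) weights d_i^c.
import numpy as np

def modified_smoothed(L, s, d):
    D = np.diag(d)
    # Solve (L + D) a = D s instead of forming the explicit inverse.
    return np.linalg.solve(L + D, D @ s)

# The same routine would be called three times, once for each of the r, g, b
# channels, as noted in the remark above.
```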

3.4 Output image

Our method for the output image is

    Output = \exp\{ B \cdot (Modified Smoothed Image) + \delta \cdot Detail \}    (4)

where B is defined shortly. We take the exponential since all the previous computations used the log of (r, g, b); see subsection 3.2. In the graph representation, our modified smoothed image is a^c. Now let us define B = diag(b_1^c, ..., b_N^c). First, choose

    \lambda_i^c = C \, |a_i^c - Input_i^c| / |Input_i^c|

where C is a scalar constant. This choice locally suppresses the dynamic range of the modified smoothed image. Then choose b^c to be

    b^c = \arg\min_{b^c} \left\{ (b^c)^T L b^c + \sum_{i=1}^{N} \lambda_i^c (b_i^c - \lambda_i^c)^2 \right\}    (5)

which can be solved similarly to Equation (3). This optimization for b^c ensures that the adjustment is aligned with the image structure.

Examples of the BSP modified smoothed image. After we solve for a^c, we use the modified smoothed image instead of the original smoothed image, and by Formula (4) we obtain the pictures in Figures (6) and (7), which are free of boundary artifacts.

[Figure 6: Modified smoothed image]
[Figure 7: Enhanced image]

4 More complicated examples

We applied our method to real photographs; see Figure (8). As demonstrated by our examples, our contrast adjustment method based on BSP can substantially improve the contrast in an image without introducing boundary artifacts.

[Figure 8: Image enhancement example. (a) Input image; (b) Modified smoothed image; (c) Output image.]

Acknowledgements

We are grateful for the help of the following individuals: Tung-Yu Wu, from Prof. Wong's lab, for initial ideas and fruitful thoughts, especially on the application of the BSP method, and for providing the BSP code; Chen-Yu Tseng for instruction on the background of image processing; Yingzhou Li for technical support; and Charles Zheng for proofreading our report.

References

[1] Lu, L., Jiang, H., and Wong, W. H. (2013). Multivariate density estimation by Bayesian sequential partitioning. Journal of the American Statistical Association (just accepted).
[2] Cvetkovic, D. M., Doob, M., and Sachs, H. (1980). Spectra of Graphs: Theory and Application. Vol. 413. New York: Academic Press.
[3] Jobson, D. J., Rahman, Z. U., and Woodell, G. A. (1997). A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Transactions on Image Processing, 6(7), 965-976.
[4] Land, E. H. (1986). An alternative technique for the computation of the designator in the retinex theory of color vision. Proc. Natl. Acad. Sci., 83(10), 3078-3080.