UNIVERSITY OF CALIFORNIA, BERKELEY
CS294-26: Image Manipulation and Computational Photography
Final Project Report
Ling-Qi Yan
December 14, 2015

1 INTRODUCTION

Sand painting is the art of pouring colored sands, together with powdered pigments from minerals, crystals, or other natural or synthetic sources, onto a surface to make a fixed or unfixed sand painting.

Figure 1: Illustration of sand painting. (Left) Before painting. (Middle) After painting. (Right) Colored sand used for painting.

However, generating sand paintings automatically with computers remains largely untouched.

The main reason is that sand paintings have an apparent granular appearance, which is dominated by high-frequency content. State-of-the-art image processing methods, including those from computer vision and machine learning, are mostly built on a low-frequency assumption and thus cannot be easily adopted for sand painting. On the other hand, in the rendering domain, [4] proposed a method for rendering glints from high-frequency normal-mapped surfaces, which is promising for generating realistic glints that make sand paintings look more convincing. However, their method handles only direct reflection, i.e., single scattering without interactions among sand crystals, whereas in reality multiple scattering within the sand volume dominates. So it is also not suitable for our needs.

In this report, we propose a novel method to generate sand paintings. Our method is purely image-based: given an input photograph with different regions, we use a random process to generate sand crystals for each region. Note that our method has nothing to do with image quilting; the sand is generated from the input image itself rather than copied from anywhere else.

2 IMAGE-BASED SAND PAINTING

2.1 MOTIVATION AND BACKGROUND

Figure 2: Steps of sand painting. (Left) Step 1: Pick out the sticker with a toothpick. (Right) Step 2: Lightly sprinkle with the right colored sand.

Our work is motivated by the actual process of modern sand painting. The customer is given an outline of the painting, which divides it into regions, and each region is pre-glued and covered with a sticker.

When painting, there are three main steps:

1. Pick out the sticker with a toothpick.
2. Lightly sprinkle with the right colored sand, and smear it evenly with a finger or a tool.
3. Shake off the rest of the sand.

Repeating this for the other regions with different colors finishes the sand painting.

2.2 OUR METHOD

We incorporate the physical steps above into our method, with slight modifications. Our method consists of the following steps:

1. Perform image segmentation.
2. Simulate the process of sprinkling sand.
3. Extract boundaries.
4. Sand paint the boundaries.

We now look at each individual step in detail.

2.2.1 IMAGE SEGMENTATION

Figure 3: An example of image segmentation. (Left) The input image. (Right) The output image after image segmentation.

Given an input image, we first perform image segmentation to obtain the different regions. This is a preparation step, since real-world images are certainly not pre-segmented like the outlines given for sand painting. We use the K-means algorithm [2], an iterative technique that partitions an image into K clusters. It works by randomly selecting K cluster centers and then assigning each pixel to the cluster that minimizes the distance between the pixel and the cluster center, where distance means the difference in both color and location between a pair of pixels.
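To make this step concrete, below is a minimal Python sketch of such a color-plus-location K-means segmentation, assuming NumPy and scikit-learn are available. It illustrates the idea rather than reproducing our exact implementation; in particular, the spatial_weight parameter is a hypothetical knob that balances the color and location terms of the distance.

    import numpy as np
    from sklearn.cluster import KMeans

    def segment_image(img, k=8, spatial_weight=0.5):
        # img: float RGB image of shape (H, W, 3), values in [0, 1].
        # Returns per-pixel region labels (H, W) and the average color of each region.
        h, w, _ = img.shape
        ys, xs = np.mgrid[0:h, 0:w]
        # Each pixel becomes a 5D feature: color plus weighted normalized position,
        # so the K-means distance mixes color difference and spatial difference.
        pos = np.stack([ys / h, xs / w], axis=-1).reshape(-1, 2)
        feats = np.hstack([img.reshape(-1, 3), spatial_weight * pos])
        labels = KMeans(n_clusters=k, n_init=4).fit_predict(feats).reshape(h, w)
        # Average color per region, used later when sprinkling sand.
        avg_colors = np.array([img[labels == i].mean(axis=0) for i in range(k)])
        return labels, avg_colors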

The image segmentation step gives us, for each pixel, the index of the region it belongs to. We use this to find the average color of each region. Fig. 3 shows an example: after segmentation, the image is partitioned into several regions of constant color.

2.2.2 SAND SPRINKLING

Figure 4: An example of sand sprinkling. (Left) The input image. (Right) The output image after image segmentation and sand sprinkling.

Since image segmentation tells us how the input image is partitioned into regions, we are now able to sand paint each region. Think about the physical process: when we pour sand onto a sticky region, some of the sand crystals stick immediately, while others bounce and stick somewhere else. If we focus on a specific location (i.e., one pixel of the output image) and throw a sand crystal right at it, it might not end up exactly there. Thus, we model the probability of landing at different locations using a Gaussian distribution,

p(x + i, y + j) = G(i; σ_p) · G(j; σ_p)    (1)

where (x, y) is the pixel we want the sand crystal to land on and (x + i, y + j) is the location where it actually lands. Using a Gaussian distribution is reasonable, since a sand crystal tends to land close to where it should be, but can occasionally end up as an outlier.

Given this distribution, we perform a random process to simulate sand sprinkling for each pixel: we pick several sand crystals and perturb their locations with the Gaussian distribution above. This can be done with any Gaussian random number generator, such as the Box-Muller transform.
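As an illustration of this random process, the sketch below (same assumptions as before; the crystal count and σ_p value are illustrative, not measured) perturbs each crystal's landing position with the Gaussian of Eq. (1), using NumPy's normal generator in place of an explicit Box-Muller transform. For simplicity, each crystal here carries the average color of its region; the per-crystal color variation described in the next paragraphs would be applied on top.

    import numpy as np

    def sprinkle(labels, avg_colors, crystals_per_pixel=3, sigma_p=1.5):
        # labels: (H, W) region index per pixel; avg_colors: (K, 3) region colors.
        h, w = labels.shape
        out = np.zeros((h, w, 3))
        ys, xs = np.mgrid[0:h, 0:w]
        target = avg_colors[labels]            # color each crystal is aimed to carry
        for _ in range(crystals_per_pixel):
            # Gaussian offsets (i, j) around the intended pixel, as in Eq. (1).
            di = np.random.normal(0.0, sigma_p, size=(h, w))
            dj = np.random.normal(0.0, sigma_p, size=(h, w))
            yi = np.clip(np.round(ys + di).astype(int), 0, h - 1)
            xj = np.clip(np.round(xs + dj).astype(int), 0, w - 1)
            out[yi, xj] = target               # last crystal to land on a pixel wins
        return out                             # pixels no crystal reached stay blank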

Once the location of a sand crystal is decided, we need to figure out its color. However, a sand crystal looks quite different when viewed from different directions. Instead of rendering its actual appearance, we seek a simple yet effective and accurate solution. To do that, we analyze an actual sand painting and measure how colors are distributed within regions where the original image would have constant color. Once we know this distribution, we can directly apply it to choose the color of any sand crystal, given the corresponding average color from the image segmentation step.

Figure 5: Illustration of analyzing color distributions. (Left) An actual sand painting. (Middle) A patch extracted from the painting, which would have constant color if it were not sand painted. (Right) The histogram of the patch; note its shape, mean, and standard deviation.

Fig. 5 illustrates the idea. We take a patch that should have constant color from an actual sand painting and plot the histogram of its grayscale version. We find that the color distribution closely resembles a GGX distribution [3] with standard deviation σ_c ≈ 0.3. Thus, the color of a sand crystal can be written as

c(c_0) = X(c_0; σ_c)    (2)

where c_0 is the average color of the segment containing the pixel at which the sand crystal is thrown. However, a GGX distribution alone does not produce enough glints to match the real appearance of sand paintings, so we manually boost the intensities of a random set of pixels to simulate a glinty appearance.

2.2.3 EXTRACT BOUNDARIES

As Fig. 1 shows, sand paintings often have clear boundaries drawn by the manufacturer. To make the result look more realistic, we need to sand paint these boundaries as well. We use a simple Sobel edge detector, extracting horizontal and vertical gradients and using the gradient magnitude as the edge strength. Fig. 6 shows the extracted boundaries. Note that this step is optional, especially if the input image already has clear, thick boundaries.
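A minimal sketch of this boundary extraction is shown below, assuming SciPy's ndimage module; the normalization to [0, 1] at the end is our own convenience, so the result can double as the per-pixel transparency used in Section 2.2.4.

    import numpy as np
    from scipy import ndimage

    def edge_strength(gray):
        # gray: (H, W) grayscale image.
        gray = np.asarray(gray, dtype=float)
        gx = ndimage.sobel(gray, axis=1)       # horizontal gradient (vertical edges)
        gy = ndimage.sobel(gray, axis=0)       # vertical gradient (horizontal edges)
        mag = np.hypot(gx, gy)                 # gradient magnitude = edge strength
        return mag / (mag.max() + 1e-8)        # normalized so it can act as transparency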

Figure 6: An example of extracting boundaries. (Left) The input image. (Right) Sobel boundaries.

2.2.4 SAND PAINT BOUNDARIES

Once the boundaries are extracted, we perform similar sand sprinkling along them, but we vary the transparency of the sprinkled sand according to the edge strength rather than using region colors. Again, this step is optional: we can also mark the boundaries directly without sand painting them.

3 RESULTS

Here we show a couple of results generated with our method, together with the original images for comparison. As Fig. 7 and Fig. 8 show, our method generates visually convincing sand paintings. The platform is a 2013 MacBook Pro laptop with a 2.4 GHz Intel Core i7 processor; generating a 480p sand painting takes no longer than one minute with a single-core Python implementation.

4 CONCLUSION AND FUTURE WORK

In conclusion, we have presented an image-based method for generating sand paintings. Our method is physically based, requires no training or extra data, and generates a sand painting in under a minute. Comparisons with the original input images show that our sand paintings are visually convincing and far from mere noise.

In the future, we would like to model the process in more detail, for example smearing the sand so that there are no blank dots in the output image, and convolving each region with a filter to produce a more saturated appearance that mimics multiple scattering within the sand volume. In fact, the more faithfully we model the actual physical process, the more realistic our sand paintings will be. Another direction is generating sand videos, which would be challenging since the granular appearance must not change across frames; a deterministic random process [1] might help there.

REFERENCES

[1] JAKOB, W., HAŠAN, M., YAN, L.-Q., LAWRENCE, J., RAMAMOORTHI, R., AND MARSCHNER, S. Discrete stochastic microfacet models. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2014) 33, 4 (2014).

[2] KASHANIPOUR, A., MILANI, N. S., KASHANIPOUR, A. R., AND EGHRARY, H. H. Robust color classification using fuzzy rule-based particle swarm optimization. In Congress on Image and Signal Processing (CISP '08) (2008), vol. 2, IEEE, pp. 110–114.

[3] WALTER, B., MARSCHNER, S. R., LI, H., AND TORRANCE, K. E. Microfacet models for refraction through rough surfaces. In Proceedings of the 18th Eurographics Conference on Rendering Techniques (2007), Eurographics Association, pp. 195–206.

[4] YAN, L.-Q., HAŠAN, M., JAKOB, W., LAWRENCE, J., MARSCHNER, S., AND RAMAMOORTHI, R. Rendering glints on high-resolution normal-mapped specular surfaces. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2014) 33, 4 (2014).

Figure 7: Results, part one. (Left column) The input images. (Right column) The generated sand paintings.

Figure 8: Results, part two. (Left column) The input images. (Right column) The generated sand paintings.