IDENTIFYING OPTICAL TRAP


Yulwon Cho, Yuxin Zheng
12/16/2011

1. BACKGROUND AND MOTIVATION

Optical trapping (also known as optical tweezers) has been widely used in recent years to study a variety of biological systems. In optical trapping, a focused laser beam provides an attractive or repulsive force (typically on the order of piconewtons), depending on the refractive-index mismatch, to physically hold and move microscopic dielectric objects. A free microscopic particle in fluid undergoes Brownian motion at all times, whereas a trapped particle exhibits significantly suppressed Brownian motion and stays confined in the trap. When the trap region is large and the trapping strength is weak, multiple particles aggregate around the trap.

The goal of this project is to identify and locate the optical trap from video clips of fluorescent particles undergoing Brownian motion in various fluidic environments, with the possible presence of other aggregation mechanisms such as convection and thermophoresis. In addition, the particles are of different sizes, resulting in faster or slower Brownian motion, and a particle may stop moving simply by adsorbing to the substrate. All of these effects produce patterns that resemble optical trapping in some respects but are not exactly the same, and can therefore be distinguished with appropriate machine learning algorithms.

The data set was collected by recording optical trapping experiments performed in our lab. Two distinct models are attempted in terms of data processing and feature-set design. The first is based on tracking the motion of individual particles; the second is based on principal component analysis of the temporal correlation map of each video. The algorithms we use are Bayesian logistic regression and SVM. We compare the two models in terms of accuracy and computational efficiency, and finally combine some of the features from both to achieve a test accuracy of 92.5%.

Fig 1. Illustration of optical trapping. Red cones are focused laser light and blue spheres are trapped beads.

2. DATA SET GATHERING

We recorded 80 video clips from the experiments, half of which contain an optical trap. The polystyrene particles are suspended in pure water at various concentrations, and their diameters range from 200 nm to 1 um. Each video is recorded in NTSC format (~30 frames/sec) for 3 seconds at a frame size of 320x240. As can be seen from the snapshots in Figure 2, it is not possible to identify an optical trap from a single frame of the video; the motion of the particles must be known. Moreover, the other non-trapping aggregation effects and varying fluid conditions make identification considerably more challenging.
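The discriminating signal described above, suppressed Brownian motion inside a trap, can be illustrated with a minimal simulation. This is only a sketch, not part of the project's pipeline; the overdamped harmonic-trap (Ornstein-Uhlenbeck) model and all parameter values are our own assumptions:

```python
import numpy as np

def simulate_particle(n_steps, dt, diffusion, trap_stiffness=0.0, seed=0):
    """2D particle position trace: free Brownian motion when trap_stiffness
    is 0, an overdamped harmonic (optical) trap otherwise."""
    rng = np.random.default_rng(seed)
    pos = np.zeros((n_steps, 2))
    kicks = np.sqrt(2 * diffusion * dt) * rng.standard_normal((n_steps - 1, 2))
    for k in range(1, n_steps):
        drift = -trap_stiffness * pos[k - 1] * dt   # restoring pull of the trap
        pos[k] = pos[k - 1] + drift + kicks[k - 1]
    return pos

free = simulate_particle(3000, dt=0.01, diffusion=1.0)
trapped = simulate_particle(3000, dt=0.01, diffusion=1.0, trap_stiffness=20.0)
# the trapped particle's position variance is strongly suppressed relative
# to the free particle's spreading random walk
```

The trapped trajectory stays confined near the trap center (equilibrium variance of order diffusion/stiffness), while the free trajectory's variance keeps growing; this gap is exactly what the motion-variance features below exploit.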

Fig 2. Trapping cases: (a) trapping of multiple particles, (b) weak trapping, (c) trapping of a single particle.

Fig 3. Non-trapping cases: (a) pure Brownian motion, (b) a big particle at the center, (c) heat-induced particle aggregation.

3. ALGORITHM DESIGN AND TEST RESULTS

3.1. Approach #1 (Particle Motion Tracking)

In the first approach, we extract features from the particle tracking results. Tracking is done with a free Matlab particle tracking program written by Prof. E. W. Hansen of Dartmouth College, since our focus is on the machine learning algorithm. An example plot of the trajectory of a trapped particle is shown in Fig 4.

Fig 4. Example trajectory of a trapped particle.

3.1.1. FEATURE EXPLORATION

BASIC FEATURE SET

We focus on the three special particles most likely to be optically trapped: (1) the one with the smallest motion variance, (2) the one with the biggest radius, and (3) the brightest one. For each of these three particles we extract the motion variance, the radius, and the brightness, forming 9 independent features x_j (j = 1~9).

EXTENDED FEATURE SET

We added more features: the square of each basic feature and their pairwise product terms. Our initial insight was that these extended terms would allow a nonlinear decision boundary. Our optical trapping examples have complicated characteristics. For example, a particle with very low motion variance could be a physically stationary particle, which is not an optical trap; at the same time, a particle with high variance could also be untrapped. By adding square terms, we expected that when a value is too large, the small negative coefficient of the square term would drive the final decision toward non-trap, while when the value is not large, the coefficient of the 1st-order term is

expected to be more influential. In addition, our thermodynamic analysis suggests that the product of variance, radius, and brightness has a physical implication for identifying optical traps.

DENSITY-BASED FEATURE SET

We also exploit the fact that the number of particles in an optical trap gradually grows over time. This feature turns out to be effective for differentiating optical traps from stationary particles and heat-induced aggregation. Specifically, we divide the whole screen into an N-by-N grid and take a linear regression of the density in each grid cell over time, obtaining two more features C1 and C2:

    Density(x, y, t) = C1(x, y) * t + C2(x, y)    (t: frame number; (x, y): grid position)

Close observation also revealed an interesting fact: the distribution of (C1, C2) for each example has a different but consistent shape depending on whether it is an optical trap or a non-trap (including stationary particles and heat-induced aggregation). Thus we extract one more feature, the principal axis of this distribution.

3.1.2. FEATURE SELECTION

In total we have 93 feature candidates. We run the logistic regression algorithm on these features, using cross validation and forward search to find the feature set with the highest test accuracy. When we first tried the basic feature set x_j (j = 1~9), only three features were needed to reach the highest test accuracy of 87%: x1, the smallest mean-square motion variance, and x2 and x3, the radius and brightness of that same particle; the corresponding training accuracy is 90%. After adding the extended feature set, one more feature, the product of x1 and x2, becomes significant, raising the training accuracy to 93.75% and the test accuracy to 88%.
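The 93 feature candidates of Section 3.1.1 could be assembled along these lines. This is a sketch under stated assumptions: the function names, the grid size, and the use of the mean slope/intercept as the two regression features are our own choices, not taken from the report:

```python
import numpy as np

def basic_extended_features(variance, radius, brightness):
    """x1..x90: basic features of the three 'special' particles, their
    squares, and the ordered products of distinct basic features."""
    stats = np.column_stack([variance, radius, brightness]).astype(float)
    picks = [np.argmin(stats[:, 0]),   # smallest motion variance
             np.argmax(stats[:, 1]),   # biggest radius
             np.argmax(stats[:, 2])]   # brightest
    basic = stats[picks].ravel()       # x1..x9
    squares = basic ** 2               # x10..x18
    i, j = np.where(~np.eye(9, dtype=bool))
    products = basic[i] * basic[j]     # x19..x90 (72 ordered products)
    return np.concatenate([basic, squares, products])

def density_features(positions, frame_size, n_grid=8):
    """x91..x93: per-cell linear fit density(t) ~ C1*t + C2, then the
    principal-axis angle of the (C1, C2) cloud (aggregation assumed)."""
    h, w = frame_size
    t = np.arange(len(positions), dtype=float)
    density = np.zeros((len(positions), n_grid * n_grid))
    for k, pts in enumerate(positions):
        pts = np.asarray(pts, float)
        hist, _, _ = np.histogram2d(pts[:, 1], pts[:, 0],
                                    bins=n_grid, range=[[0, h], [0, w]])
        density[k] = hist.ravel()
    C1, C2 = np.polyfit(t, density, deg=1)          # slope/intercept per cell
    cov = np.cov(np.column_stack([C1, C2]).T)
    _, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, -1]                           # largest-eigenvalue axis
    return np.array([C1.mean(), C2.mean(), np.arctan2(axis[1], axis[0])])

# toy example: three tracked particles, 20 frames of (x, y) positions
feat90 = basic_extended_features([0.1, 2.0, 0.5], [1.0, 3.0, 2.0], [10., 5., 20.])
rng = np.random.default_rng(1)
positions = [rng.uniform(0, 100, size=(10 + k, 2)) for k in range(20)]
feat3 = density_features(positions, (100, 100), n_grid=4)
features = np.concatenate([feat90, feat3])          # the 93 candidates
```

The forward-search selection itself would then wrap a logistic regression in a cross-validated loop over subsets of these 93 columns.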
Lastly, after adding the density-based features, x92 and x93 become significant, increasing the training accuracy to 95% and the test accuracy to 91%.

Basic feature set
  x1~x3  : variance, radius, brightness of the particle with the smallest motion variance
  x4~x6  : variance, radius, brightness of the particle with the biggest radius
  x7~x9  : variance, radius, brightness of the brightest particle
Extended feature set
  x10~x18: squares of the basic features
  x19~x90: products of different basic features
Density-based feature set
  x91~x92: linear regression coefficients (density = x91 * time + x92)
  x93    : principal axis of the (x91, x92) distribution

Figure 4 shows the decision boundary projected onto the (x1, x2) plane, and Figure 5 shows the average test error rate for each example when all the features are added.

Fig 4. Data points and decision boundary projected on the (x1, x2) feature subspace (horizontal axis: minrms).

Fig 5. Test error plot generated by cross validation.

3.1.3. ANALYSIS OF THIS APPROACH

Even though adding all these features (while excluding useless ones) achieved 95% training accuracy, the test accuracy improved only slightly. We therefore analyzed the original video

data with the highest error rates (wrong labels in testing). Most of them are cases where the trapping is weak or where particles cluster and adsorb to the substrate. In the first case, the variance of the particle motion is not small enough for the particle to be identified as optically trapped; in the latter case, each individual particle behaves just as if it were optically trapped. However, close inspection reveals that these similar cases actually have different spatial patterns. In the weak trapping case, particles tend to move along a ring-shaped path around the center of the trap; in the adsorption case, particles adsorb all over the substrate. In short, optical trapping could be more identifiable if the spatial pattern were taken into account.

3.2. Approach #2 (Temporal and Spatial Correlation)

In our second approach, we design features that convey both temporal and spatial information. At first we simply flattened the whole video into one very high-dimensional feature vector and fed it to an SVM, but this always gave 100% training accuracy and less than 50% test accuracy, indicating high variance in learning. So we redesigned the feature. For each pixel in the frame we calculate the autocorrelation function

    g(i, j, m) = sum_k f(i, j, k) * f(i, j, k + m)

where f is the pixel value (brightness), (i, j) is the pixel position in the frame, and m is the frame lag. We then obtain the effective width of the autocorrelation function in time, which roughly represents the duration for which the pixel remains bright:

    tau(i, j) = sum_m g(i, j, m) / max_m g(i, j, m)

We call tau(i, j) the temporal correlation map. We have verified that it effectively captures the suppression of Brownian motion in optical trapping and other aggregation phenomena; a large value in the map indicates particles hovering around that pixel. Some examples are shown below.

Fig 6. Optical trapping: (a) snapshot of the video and (b) its temporal correlation map.

Fig 7. Thermal aggregation (no trap): (a) snapshot of the video and (b) its temporal correlation map.

ALGORITHM

First, we compute tau(i, j) for all pixels and take the location of the maximum as a potential trapping location. Second, we take a small window (30x30) around the selected location and perform PCA (principal component analysis) to reduce the feature dimension. Lastly, we feed this feature to an SVM.

RESULT

Compared with our initial SVM scheme, the feature dimension is reduced from 320*240 (the frame size) to 35 (the number of influential principal components). The training accuracy of both schemes is 100%, but the test accuracy of the new algorithm using the 35 eigenpatterns is 90%, whereas the initial SVM scheme gives less than 50%. If we use the 30x30 correlation map directly as a training vector without PCA, the test accuracy is 85%; with 35 principal components, the high-variance problem is partly fixed and the final test accuracy becomes 90%.
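The temporal correlation map and window extraction could be sketched as follows. The FFT-based implementation, the epsilon guard for dark pixels, and the window-clipping logic are our own assumptions for illustration:

```python
import numpy as np

def temporal_correlation_map(frames):
    """tau(i, j): effective width of each pixel's temporal autocorrelation,
    g(i, j, m) = sum_k f(i, j, k) f(i, j, k + m), tau = sum_m g / max_m g."""
    f = np.asarray(frames, dtype=float)
    n = f.shape[0]
    # linear autocorrelation along the time axis for every pixel at once
    spec = np.fft.rfft(f, n=2 * n, axis=0)
    g = np.fft.irfft(spec * np.conj(spec), axis=0)[:n]   # lags m = 0..n-1
    return g.sum(axis=0) / (g.max(axis=0) + 1e-12)       # epsilon: dark pixels

def trap_window(frames, half=15):
    """30x30 window (for half=15) around the tau-map maximum. Across the
    training videos these windows would be reduced to ~35 principal
    components and classified with an SVM (e.g. scikit-learn PCA + SVC)."""
    tau = temporal_correlation_map(frames)
    i, j = np.unravel_index(np.argmax(tau), tau.shape)
    i0 = int(np.clip(i - half, 0, max(tau.shape[0] - 2 * half, 0)))
    j0 = int(np.clip(j - half, 0, max(tau.shape[1] - 2 * half, 0)))
    return tau[i0:i0 + 2 * half, j0:j0 + 2 * half].ravel()

# toy video: one persistently bright pixel (confined bead) and one blinking pixel
rng = np.random.default_rng(0)
frames = np.zeros((64, 16, 16))
frames[:, 4, 4] = 1.0
frames[:, 10, 10] = rng.random(64) < 0.5
tau = temporal_correlation_map(frames)
```

A pixel that stays bright (a confined particle) decorrelates slowly and gets a large tau, while a pixel that blinks as free particles pass through decorrelates quickly and gets a small one.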

A few eigenpatterns are shown below.

Fig 8. Examples of eigenpatterns generated by PCA.

DISCUSSION

The SVM using these features gives a test accuracy of 90%; again, cross validation is used to obtain a reliable test accuracy. While this method has accuracy similar to the previous one, it is much faster, as it does not involve tracking all the particles. When tested on our computer (Intel Core i5, 2.67 GHz, 4 GB DRAM), this method runs roughly 48 times faster than the previous one.

3.3. Combining the two approaches

Finally, we combined the two approaches by adding a few more features to the first approach. In the second approach we used a 35-dimensional vector as the SVM feature; here, instead of using the full vector, we extracted the following three distinguishing factors:

  - the maximum of the temporal correlation map
  - the radius of potential trap regions
  - the number of potential trap regions

After adding these three features, the training and test accuracy increased as follows:

  Feature set                        Training accuracy   Test accuracy
  Basic feature set                  90%                 87%
  + extended feature set             93.75%              88%
  + density-based feature set        95%                 91%
  + pseudo-eigenpattern feature set  98%                 92.5%

This improvement is expected, as the two methods capture different aspects and are therefore complementary.

4. CONCLUSION

We have applied different machine learning algorithms to identify the optical trapping effect in complex environments where other, similar aggregation mechanisms can occur. One approach is based on particle tracking, the other on the temporal autocorrelation of each pixel of the video. The two modeling methods give comparable test accuracy, but the second is more computationally efficient. By combining the features of the two methods we achieve an accuracy of 92.5%. Finally, this project may be generalized and extended to identify various physical, chemical, or biological processes in more complicated systems, which may be of greater interest for applications.
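The three combined factors of Section 3.3 could be extracted from a temporal correlation map along these lines. This is a sketch: the thresholding rule and the area-to-radius conversion are our own assumptions, not the report's method:

```python
import numpy as np
from scipy import ndimage

def combined_trap_factors(tau, threshold=None):
    """Return (maximum of the map, radius of the largest candidate trap
    region, number of candidate trap regions) from a tau map."""
    tau = np.asarray(tau, dtype=float)
    if threshold is None:
        threshold = tau.mean() + 2 * tau.std()       # assumed cutoff
    labels, n_regions = ndimage.label(tau > threshold)
    if n_regions == 0:
        return tau.max(), 0.0, 0
    # pixel count of each labeled region, converted to an equivalent radius
    sizes = ndimage.sum(np.ones_like(tau), labels, range(1, n_regions + 1))
    radius = float(np.sqrt(np.max(sizes) / np.pi))
    return tau.max(), radius, n_regions

# toy map: a 3x3 hot spot plus one isolated hot pixel -> two regions
tau = np.zeros((20, 20))
tau[5:8, 5:8] = 10.0
tau[15, 15] = 10.0
peak, radius, count = combined_trap_factors(tau, threshold=5.0)
```

These three scalars would then be appended to the 93 tracking-based features before running the final classifier.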