Particle Image Understanding: A Primer


Abstract: This paper will discuss the use of pattern recognition techniques to identify and differentiate different particle types contained in a heterogeneous solution. This application involves imaging the microscopic particles in real-time as they flow in a solution, segregating each individual particle as a separate image, and then applying pattern recognition techniques to differentiate the individual particle types. A framework for discussing the complexity of a pattern recognition operation in this application will be proposed, along with some specific examples showing how this framework applies.

Lew Brown
Director of Marketing
Fluid Imaging Technologies, Inc.
65 Forest Falls Drive
Yarmouth, ME
lew@fluidimaging.com

I. Introduction

The computational method known as pattern recognition has been around for many years now, beginning early in the 1960s with military uses centered upon remote sensing (aerial and satellite imaging). Use of these techniques then expanded into the fields of medical imaging, machine vision and others. The enormous computational demands of these applications limited early use of the technologies to institutions and organizations that could afford the high cost of the hardware necessary to perform these operations. The cost has been high because pattern recognition attempts to mathematically duplicate cognitive processes that the human eye/brain combination performs with ease. Indeed, many simple pattern recognition operations that we as humans take for granted in our day-to-day lives are extremely difficult (if not impossible) to reproduce using computational methods.
As the cost of computing hardware has dropped precipitously while its performance has risen exponentially, it has become increasingly possible to perform some of the most basic pattern recognition operations on common, inexpensive computing platforms such as personal computers.

Figure 1: Most particle analyzers give a distribution of particle size only, as shown by the graph on the left. Imaging particle analysis yields size, shape and gray-scale information, enabling the use of pattern recognition algorithms to automatically distinguish different particle types in a heterogeneous sample, as shown by the images on the right.

This paper will discuss the use of pattern recognition techniques to identify and differentiate different particle types contained in a heterogeneous solution. This application involves imaging microscopic particles in real-time as they flow in a solution, segregating each individual particle as a separate image, and then applying pattern recognition techniques to differentiate the individual particle types. A framework for discussing the complexity of a pattern recognition operation in this application will be proposed, along with some specific examples showing how this framework applies.

II. Human Vision versus Computational Vision

Pattern recognition was initially limited to attempting to uncover or find some object(s) within a static image. Applications which involve pattern recognition on moving objects, such as machine vision, have typically required special-purpose hardware in order to perform these operations. As briefly discussed in the introduction, duplicating even simple human vision processes that are intuitively easy for us can be an extremely daunting task within a computational system. The human eye/brain system is the most powerful computational system known. It is estimated that over 20% of the neurons in the cortex of the human brain are concentrated

on the task of vision (1). An entire branch of computational research has been devoted to trying to understand and duplicate the functions that the human visual system performs on a regular basis. This area of research is referred to as Image Understanding. For the purposes of this paper, we will use the definition contained in the Encyclopedia of Artificial Intelligence by J.K. Tsotsos:

"Image Understanding (IU) is the research area concerned with the design and experimentation of computer systems that integrate explicit models of a visual problem domain with one or more methods for extracting features from images and one or more methods for matching features with models using a control structure. Given a goal, or a reason for looking at a particular scene, these systems produce descriptions of both the images and the real world scenes that the images represent." (2)

Image understanding is really a sub-discipline of the broader research area, pattern recognition:

"Pattern recognition is the research area that studies the operation and design of systems that recognize patterns in data. It encloses subdisciplines like discriminant analysis, feature extraction, error estimation, cluster analysis (together sometimes called statistical pattern recognition), grammatical inference and parsing (sometimes called syntactical pattern recognition). Important application areas are image analysis, character recognition, speech analysis, man and machine diagnostics, person identification and industrial inspection." (3)

As seen from the above definitions, image understanding (IU) confines itself to the domain of image processing, whereas the term pattern recognition is applied to many diverse fields such as speech recognition and character recognition.
As such, image understanding is a better term for the topic under discussion, not only because it is narrower in scope, but, more importantly, because it more intuitively describes what is being attempted: namely, to understand the contents of a digital image. In the introduction, it was also mentioned that pattern recognition techniques have historically been applied primarily to what I will call "needle in the haystack" type problems, where a static image is analyzed to try to pull out a desired feature(s). The earliest work in this area can be found in the military, where scanned aerial images were analyzed for features; an example would be "find the tank in the forest." The objective of this work was to off-load some of the image interpretation work classically done by humans in the intelligence community to the computer. By doing this, higher volumes of image data could be analyzed in less time, yielding a sharp increase in the amount of intelligence that could be gathered. This type of work was categorized as remote sensing, and became more common when non-classified data sources such as LANDSAT became available. In parallel to this, newer sources for medical imaging such as CT scanners became available, where the same types of techniques could be used; an example would be "find the tumor in the image." Just as IU techniques could be used to analyze larger quantities of remotely sensed data, the same workflow could be applied to microscopic images. Instead of presenting microscopic images to a scientist for interpretation, using IU techniques enabled some microscopic interpretation to be automated, for example cell counting. This enables analysis of larger quantities of data, which yields higher statistical significance for any results presented.
Particle Image Understanding (PIU) as described in this paper goes one step further: the image understanding techniques are applied to particles which are flowing through a microscopic system in real-time; the particles being analyzed are not static. This means that thousands of particles are being analyzed in the time that a human observer might be able to analyze, at best, a few hundred under a microscope. Once again, this yields huge benefits in the area of greater statistical significance for the results. Imagine trying to characterize the particle contents of ten gallons of liquid through a microscope: a human would only be able to characterize a couple hundred particles in an hour, whereas the system described here could analyze 100,000 particles in a matter of a couple of minutes. Obviously, a 100,000-particle sample from the ten gallons holds much more statistical significance than 200 particles would. Further, this information is gathered in significantly less time! The distinct advantages of using digital imaging for particle analysis as opposed to other automated techniques (such as electrozone counters or laser diffraction systems) are well understood and documented (4). The Particle Image Understanding method consists of two distinct steps: first, the particles are segregated from the background into individual particle images; then, IU techniques are used to extract information from each particle image. Typically, the information to be extracted consists of classifying each particle into different types of particles. This could be as simple as identifying contaminants in a homogeneous sample, or as complex as identifying different types of algae contained in a water sample. Much of the research into IU has pointed toward parallel processing architectures as being the only possible way to duplicate complex visual processes (5). Since the human visual system is so complex, many diverse areas of research become involved when trying to characterize the

performance of the human visual system, such as computer vision, neurophysiology, neuroanatomy, and psychology (6). To this day, this process is not completely understood. However, some basic measurements are postulated: one interesting study estimates a time of around 250 milliseconds for recognition of a simple target in a non-complex background (7). In this example, the observer is given pre-cognitive information on what the target he is looking for is, and the time to process that information is not included. This example also assumes that the eye/brain system architecture is massively parallel. In 1988, a workshop was conducted entitled The DARPA Integrated Image Understanding Benchmark (8). This workshop was specifically oriented toward parallel processing architectures, and results were reported for several different existing and purely theoretical computing architectures. The benchmark consisted of identifying several objects in a 2 1/2-D image (one gray-scale image co-registered with a range image). Even though this was almost 20 years ago, and computing hardware speed was only a fraction of what is available today, the results still yield an interesting statement on the computational complexity of a relatively simple IU task. A commercially available single-processor UNIX workstation required seconds to perform the total task. A commercially available 8-processor mini-supercomputer required seconds to perform the task. Finally, a Thinking Machines Connection Machine having 64,000 processors (although these machines could be purchased, they were, at the time, considered somewhat "experimental") was simulated to perform the task in 0.63 seconds (and this was not for the complete task, only the low-level portion of it) (9). One final discussion is warranted concerning pattern recognition in general, which is the difference between supervised classification and unsupervised classification.
In supervised classification algorithms, some a priori information is supplied to the computer beforehand. This is based on a human identifying "training sets" of data that can be used as a reference for a particular object or image prior to the analysis. In IU, this is manifested by the user identifying objects in an image as belonging to a class, and then instructing the computer to find all the other objects in the image that belong to this class. By contrast, in unsupervised classification algorithms, the system is given no a priori knowledge of patterns to look for; instead, the computer looks for statistical regularities in the data to establish its own classes. All of the IU techniques discussed in this paper are supervised in nature.

III. Particle Image Understanding: Levels of Understanding

In PIU, the first step taken is to segregate particles from the background (the fluid containing the particles) in the image. This is done using a simple gray-scale thresholding operation: a threshold level of gray scale is set by which the particle is extracted from the background. A digital camera takes an image of the field of view of the microscope, which divides the field of view into pixels (the number of pixels in the field is determined by the camera's resolution). For each pixel in the field, a gray-scale value is recorded which corresponds to the intensity (for simplicity, we will limit this discussion to a monochrome camera; a color system works the same way except that it records a red, green and blue value for each pixel). In a typical digital camera, the gray scale is measured as an 8-bit number, which represents 256 discrete levels of intensity. In the thresholding process, each pixel's gray scale is compared to the normal background (fluid only) level, and if the difference exceeds the threshold value, the pixel is counted as a "particle" pixel. This creates a binary representation of the original image, where each pixel is classified either as background or particle.
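A minimal sketch of this thresholding step, together with the pixel-grouping that follows it, might look like the following in Python; the frame contents, gray levels, threshold value and function name are all invented for illustration and are not the instrument's actual software:

```python
import numpy as np
from collections import deque

def segment_particles(frame, background, threshold=20):
    """Threshold a gray-scale frame against a calibrated background level,
    then group adjoining particle pixels into blobs (4-connectivity).
    Returns the binary image and a list of pixel-coordinate blobs."""
    binary = np.abs(frame.astype(int) - background) > threshold
    seen = np.zeros_like(binary, dtype=bool)
    blobs = []
    for r, c in zip(*np.nonzero(binary)):
        if seen[r, c]:
            continue
        queue, blob = deque([(r, c)]), []
        seen[r, c] = True
        while queue:  # breadth-first walk over adjoining particle pixels
            y, x = queue.popleft()
            blob.append((y, x))
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                        and binary[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    queue.append((ny, nx))
        blobs.append(blob)
    return binary, blobs

# Uniform background (gray level 200) with two darker "particles"
frame = np.full((6, 8), 200, dtype=np.uint8)
frame[1:3, 1:3] = 120          # particle 1: 2x2 pixels
frame[4, 5:7] = 90             # particle 2: 1x2 pixels
binary, blobs = segment_particles(frame, background=200)
print(len(blobs), [len(b) for b in blobs])   # 2 [4, 2]
```

Note that the dilution requirement described below is what makes the simple connectivity-based grouping valid: touching particles would merge into a single blob.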
The binary image is then scanned by the software to group together adjoining pixels that have been classified as "particle," which creates groups of pixels representing each particle. It is important to note that for this technique to work properly, the particles must be in a solution that is dilute enough that each particle is physically separated from the others when presented to the microscope in the fluid. Otherwise, multiple particles will end up being grouped together by the algorithm as one image. The final step is to "cookie cut" each particle out as a separate image to be measured and stored. It is important to note that while the binary image is used to segregate the particles from the background, the full gray-scale image of the particle is what is actually stored. Two different types of measurements can be made from each particle image: spatial and gray-scale. Spatial measurements such as length, width, perimeter, etc. are carried out on the binary thresholded image, which greatly increases the speed at which the measurements can be made. Gray-scale measurements such as transparency and sigma intensity are obviously calculated using the full gray-scale image. Since the image is stored in gray scale, it can later be viewed by a human observer for subtle features or classification which might not be possible via machine-based pattern recognition. It has the added benefit of being a permanent record of each particle, so that unexpected (or even expected) results can be studied after the fact; in

other words, the images represent an audit trail for verification of the automated results.

One final note that has to be considered is the spatial resolution of the system; this is a measure of how many pixels correspond to a unit area (usually expressed as pixels per unit length, assuming the pixels on the sensor are square in geometry). The more pixels covering each particle, the more detail that is captured for the particle image. The more detailed the image in terms of spatial resolution, the more information and precision are associated with the particle's measurements. A simple example is shown in Figure 2: at a resolution of 1 pixel per unit area, a 4-pixel particle measures ESD = 2·√(4/π) = 2.26 units, while a 2-pixel particle measures ESD = 2·√(2/π) = 1.60 units.

With this as a backdrop, I propose a system of different levels of "particle image understanding":

Level 0 (Figure 3): At this level, the only measurement that can be made is whether a particle is present or not. The only data that can be gathered at Level 0 is a particle count or concentration.

Figure 3: Level 0 Particle Image Understanding. The gray-scale image is thresholded into a binary image; attributes captured: count only.

Level 1 (Figure 4): We begin to get more information at this level. On the spatial side, we can now both count and size the particles. It is important to note, however, that at this level a severe restriction is placed on the spatial data by the assumption that all particles are spherical in shape. The size of the particles is expressed as an Equivalent Spherical Diameter (ESD), which can be thought of as "scrunching" the particle down into a sphere and then calculating its diameter. In imaging particle analysis, this is done by taking the area of the thresholded image and reporting the diameter of a circle of equal area.
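The ESD calculation just described can be sketched as follows; the function name and the conversion from pixel count to area are our own illustrative choices:

```python
import math

def esd(particle_pixels, pixels_per_unit_area=1.0):
    """Equivalent Spherical Diameter: ESD = 2 * sqrt(Area / pi).

    `particle_pixels` is the number of thresholded "particle" pixels;
    dividing by `pixels_per_unit_area` converts that count into a
    physical area. (Illustrative helper, not the instrument software.)
    """
    area = particle_pixels / pixels_per_unit_area
    return 2.0 * math.sqrt(area / math.pi)

print(round(esd(4), 2))      # 2.26 units at 1 pixel per unit area
print(round(esd(12, 4), 2))  # 1.95 units at 4 pixels per unit area
```

The two calls reproduce the worked numbers from Figure 2 at the two sampling resolutions.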
Other particle analysis techniques use different measurement methodologies than imaging does (usually volumetric data), but the end result is the same due to the assumption that the particles are spherical. This is where these techniques stop, however, because they can only produce count and size (based on ESD).

Figure 2: The particles on the right are being sampled at 2X the resolution of the particles on the left (at 4 pixels per unit area, 12 pixels give ESD = 2·√((12/4)/π) = 1.95 units and 6 pixels give ESD = 2·√((6/4)/π) = 1.38 units). Note the increase in detail within the binary (thresholded) images on the right. Also note that the size gains accuracy with the added resolution. Size is based upon the Equivalent Spherical Diameter (ESD), which is calculated as follows: ESD = 2·√(Area/π).

In the imaging particle analysis system, we can now add in gray-scale attributes such as average intensity and transparency, which give us more information about the particle. These gray-scale measurements are unique to imaging particle analysis. As we will see later in the discussion, the number

of unique measurements that can be made for each particle greatly affects how discriminating the pattern recognition algorithm can be. More data points (measurements) per particle allow more subtle discriminations to be made amongst different particle types numerically.

Figure 4: Level 1 Particle Image Understanding. Attributes captured: count, area, size (ESD), average intensity, transparency.

Level 2 (Figure 5): At Level 2, we begin to gather morphological information on the shape of the particle by now measuring the particle's length and width. No longer is the assumption being made that a particle is spherical in shape. We are no longer limited to a size measurement based upon ESD (although this measurement is still made). You may be noticing that, as we go to higher levels of image understanding, more spatial resolution is required for the higher-level measurements. This was hinted at earlier, but becomes quite clear when looking at the diagrams of the various levels of image understanding.

Level 3 (Figure 6): At Level 3 and higher, the increased spatial resolution enables us to add much higher-level morphological attributes to the measurements made. For example, the circularity of a particle can now be described by measuring the actual perimeter of the particle and comparing it against the theoretical perimeter of a spherical particle having the equivalent ESD. As we will see later, the more discrete measurements that can be made for each particle, the more information is available to the pattern recognition algorithms, allowing more subtle differentiations between different particle types.
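One common convention for the circularity measurement mentioned above can be sketched as follows; the instrument's exact formula may differ, so treat this as an illustration of the idea rather than the product's definition:

```python
import math

def circularity(area, perimeter):
    """Perimeter of the circle of equal area divided by the particle's
    measured perimeter: 1.0 for a perfect circle, lower for elongated
    or rough outlines. (One common convention; the instrument's exact
    formula may differ.)"""
    equivalent_perimeter = 2.0 * math.pi * math.sqrt(area / math.pi)
    return equivalent_perimeter / perimeter

# A circle of radius 1 (area pi, perimeter 2*pi) scores exactly 1.0;
# a 2 x (pi/2) rectangle of the same area scores lower.
print(circularity(math.pi, 2 * math.pi))            # 1.0
print(round(circularity(math.pi, 4 + math.pi), 2))  # 0.88
```

Because the circle minimizes perimeter for a given area, the ratio can never exceed 1 for a real outline, which makes it a convenient normalized shape descriptor.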
As discussed previously, however, the higher-level measurements require higher spatial resolution on the sample (for the spatial measurements especially). This is a key limitation of the imaging particle analysis technique. Without going into a detailed discussion of sampling theory and diffraction limitations, suffice it to say that this technique really does not allow for the higher levels of image understanding for particles below 2 microns in diameter.

Figure 5: Level 2 Particle Image Understanding. Attributes captured: count, area, size (ESD), length, width, aspect ratio, average intensity, transparency.

To understand this better, consider the following: for this system, the highest spatial resolution available is around 0.25µ/pixel. If we image a 1µ ESD sphere, the sphere will be captured in a 4x4 pixel square. Because of this, we can realistically only expect to get Level 1 (count and size) information from the image. We simply need more pixels in order to reliably gather the data necessary for higher levels of PIU. More pixels covering the object are

always better, but because of the limitations of optical microscopy, we can only get more pixels on larger objects. For this reason, we are being realistic in saying that higher levels of PIU can only be obtained when looking at particles which are larger than 2µ in ESD.

IV. Pattern Recognition Applied to Particle Image Understanding

Once the data has been captured by the PIU system (step 1 of the process), we are ready to attempt classification of the data. The system used for this paper is the FlowCAM, manufactured by Fluid Imaging Technologies of Yarmouth, ME. After the FlowCAM has acquired the data, we have two sets of files: each individual particle image is stored in a TIFF file, and each particle has an associated row in a spreadsheet file which contains all of the measurements made on that particle. The VisualSpreadsheet software includes a third, proprietary file, which references each particle image to the corresponding row of data for that particle in the spreadsheet. This makes it possible to look at any particle image and automatically view a readout of all measurements associated with that particle. VisualSpreadsheet operates just as any other spreadsheet does; it can perform sorting and filtering operations on the data. The difference is that rather than interacting with the tabular spreadsheet itself, the operator queries, sorts and filters the data via the images, with the results of any operation displayed as the particle images themselves as opposed to thousands of rows of numbers only. At any point during these operations, the user can view the tabular data generated by a sort or filter as a summary of statistics (means, standard deviations, Coefficient of Variation (CV), etc.) while also observing all of the images associated with this data.
The two types of classification that can be performed in VisualSpreadsheet are value filtering and statistical filtering. Both of these will be described in more detail below, but it is important to remember that both of these processes represent supervised classification. This means that the operator provides a priori knowledge to the system beforehand on exactly what we are looking to classify.

Figure 6: Level 3 Particle Image Understanding. Attributes captured: count, area, size (ESD), length, width, aspect ratio, circularity, elongation, perimeter, convex perimeter, roughness, average intensity, transparency.

Value Filtering

Value filtering is the type of filtering most spreadsheet users are familiar with. The user inputs values (or ranges) for given variables (columns in the spreadsheet), and then the computer filters all of the records (rows in the spreadsheet) to find only those records that meet the filter criteria defined by the user. As a simple example, one could query the spreadsheet to find all records which have a particle diameter (ESD) between 10 and 20µ, and the results would be only those particle records that meet this criterion. Because the FlowCAM can record up to 26 different measurements for each particle, very complex queries can be built to look for specific particle types. For example, one could create a filter to find large fibers in a sample by querying on aspect ratio (w/l) <0.2, length >200µ, and transparency <0.4. The more subtle the distinctions that need to be made, the more variables can be filtered on. This is done via the dialog box shown in Figure 7. With 26 variables to choose from, one can obviously build very specific filters, but this requires a lot of user interaction and an understanding of which variables are best to use in order to classify a particular particle type.
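A value filter of this kind reduces to a conjunction of range tests over the measurement columns. A minimal sketch, with field names invented for illustration and the fiber criteria taken from the example above:

```python
def value_filter(particles, criteria):
    """Keep records whose every listed measurement falls inside its
    allowed (inclusive low, high) range -- an AND of range tests.
    Field names here are illustrative, not VisualSpreadsheet's
    actual column names."""
    def passes(p):
        return all(lo <= p[name] <= hi for name, (lo, hi) in criteria.items())
    return [p for p in particles if passes(p)]

particles = [
    {"id": 1, "aspect_ratio": 0.15, "length": 250.0, "transparency": 0.3},
    {"id": 2, "aspect_ratio": 0.90, "length": 40.0,  "transparency": 0.3},
]

# The "large fiber" filter from the text:
# aspect ratio < 0.2, length > 200 um, transparency < 0.4
fibers = value_filter(particles, {
    "aspect_ratio": (0.0, 0.2),
    "length": (200.0, float("inf")),
    "transparency": (0.0, 0.4),
})
print([p["id"] for p in fibers])  # [1]
```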
This is where VisualSpreadsheet provides a far more intuitive method for accomplishing the same end result: the user merely selects a particle (or group of particles) and instructs the software to "filter like selected particles." This automatically builds a value filter containing the data ranges for each particle attribute contained in the selected images. By default, this fills in the value ranges of all available variables (particle measurements) for the selected particle images (the "training set"). If desired, the user

can fine-tune the search by editing parameter ranges, excluding parameters, or applying percent tolerances to the ranges. For instance, if one were looking for a very specific particle shape, but knew that these particles could be present over a very broad size range, one would merely exclude the size parameter from the value filter.

Statistical Filtering

While value filtering can produce some very impressive results, it still has some limitations. For one, it requires that the user have some knowledge of exactly which variables can best be used to pull out the type of particle one is looking for. It is further limited in that it makes a straight AND comparison amongst any variables selected. In other words, if any particle's data value falls outside the given range on any measurement, then it will not be considered part of the class, even if all the other selected variables fall within the given ranges. So, in the end, the classification is purely binary: either the particle fits within all the ranges provided (and is classified as a class member) or it does not. Statistical filtering overcomes these shortcomings by using statistically based weighting on each variable to decide how much emphasis should be placed on each particle measurement. To reuse the brief example discussed above, if we are looking for a certain particle shape that may be present over a broad size range, statistical filtering will determine that the size (ESD) of the particle is not very significant, and will therefore weight that variable very low compared to other variables which show a tighter range of values derived from the training set particles. It is critical to note that the degree of success with either value or statistical filtering is very dependent upon the user wisely choosing the training set particle images.
In the above example, if the user chose training set particles with the same shape, but also around the same size (ESD), then the algorithm would decide that size (ESD) was an important variable, and would only find particles of the desired shape within a narrow size range! This is a critical point: human knowledge and input is still the most valuable contribution to the process. Someone who really understands the characteristics of the particles we want to find has to define an optimized training set to best pull out the particles that belong in the class. This is the "supervised" part of supervised classification! The good news here is that once an optimum training set has been defined, it can be saved and used to classify other data sets containing the particle type (class) we are looking for. In fact, this is desirable, because if we use the same training set (called a "library" in VisualSpreadsheet) for other populations, then we are always making the same statistical comparison, thereby increasing the statistical confidence in the classification when showing differences between different samples of the same fluid.

Figure 7: VisualSpreadsheet Value Filter Definition Dialog.

Without going into a detailed discussion of the statistical methods employed in statistical filtering, let us briefly describe how it works. Basically, the training set is used to generate statistics (such as mean, standard deviation and CV) for each data variable being used. In the case of PIU, these data variables are particle measurements such as ESD, length, width, circularity, transparency, and average intensity (up to 26 of these measurements are generated for each particle in the FlowCAM). For each training set (or class), the statistics generate a normalized point in a multidimensional space representing the class.
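The statistics-generation and normalization steps can be sketched as follows; the feature names and the use of a simple z-score normalization are our assumptions about one reasonable implementation, not the commercial software's method:

```python
def class_statistics(training_set):
    """Per-feature mean and (population) standard deviation for a
    training set given as a list of measurement dicts."""
    stats = {}
    for name in training_set[0]:
        values = [p[name] for p in training_set]
        mean = sum(values) / len(values)
        var = sum((v - mean) ** 2 for v in values) / len(values)
        stats[name] = (mean, var ** 0.5)
    return stats

def normalize(particle, stats):
    """Map raw measurements to z-scores so that features with very
    different units and ranges can be compared on an equal footing."""
    return {name: (particle[name] - mean) / std if std else 0.0
            for name, (mean, std) in stats.items()}

training = [{"esd": 10.0, "aspect_ratio": 0.3},
            {"esd": 20.0, "aspect_ratio": 0.5}]
stats = class_statistics(training)
print(stats["esd"])  # (15.0, 5.0): mean and standard deviation

# A particle at the feature-space midpoint normalizes to z-scores near 0
print(normalize({"esd": 15.0, "aspect_ratio": 0.4}, stats))
```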
The data has to be normalized because each measurement has a different range of potential values (and different units); normalization allows each variable to be evaluated equally. At this point, when the classification is run, each

particle image has its own normalized point plotted in the same multidimensional space, where it can be compared against the points previously defined for the training sets, and the similarity of the incoming particle to each of the target classes can be determined. While there are literally thousands of different classification algorithms published, the one used most typically is called a Euclidean Distance classifier. The distance in the multidimensional space from the sample particle's point to each of the class points is calculated, and the class which is closest ("minimum distance") to the target particle is assigned to it. As a gross oversimplification, consider the illustration shown in Figure 8, where a target point's distance to each of two classes is calculated.

Figure 8: Nearest-neighbor calculation for a target point and two classes, plotted on axes of aspect ratio (width/length) versus diameter (ESD). Since D1 < D2, Class A is closer to the target point, so the target point belongs to Class A.

Recall that in value filtering, a binary decision is made that the target particle either belongs to or does not belong to a class. In statistical filtering, we instead calculate the probability of a target particle belonging to a class; essentially this allows for "gray area" decisions where a particle may belong to a class. First we calculate the minimum distance for the target particle, which tells us that the particle is most likely a member of the class having the minimum distance to it. But the distance to this class also gives us a relative measure of how similar the target particle is to the class selected. If the distance to the class is very short, then the particle is much more likely to actually be a member of that class than if the distance is very large! The algorithm establishes the degree of similarity to the class by assigning a normalized score to each particle based upon this distance.
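A minimum-distance (Euclidean) classification over normalized feature points can be sketched as follows; the class centers, feature names and values are invented for illustration, and the commercial software's scoring is certainly more sophisticated:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two points stored as feature dicts."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def classify(point, class_centers):
    """Minimum-distance assignment: return the label of the class whose
    center point is closest to the target point, together with that
    distance as a similarity score (smaller = more similar)."""
    label = min(class_centers, key=lambda c: euclidean(point, class_centers[c]))
    return label, euclidean(point, class_centers[label])

centers = {
    "Class A": {"esd": 1.0, "aspect_ratio": 0.8},
    "Class B": {"esd": 3.0, "aspect_ratio": 0.2},
}
target = {"esd": 1.4, "aspect_ratio": 0.7}

print(classify(target, centers)[0])  # Class A (D1 < D2, as in Figure 8)
```

The returned distance plays the role of the normalized score discussed above: sorting particles by it in ascending order puts the most likely class members first.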
A particle with a score of 1 is definitely a member of the class, while a particle with a higher score (say 9) is probably not a member of the class. Since the measurements are normalized, the filter score cut-off for membership in the class will vary depending on how tight the distribution of the library particles is. A tighter distribution will yield a lower filter score cut-off number. In VisualSpreadsheet, when a statistical filter is run, each particle is assigned this score, and the results are then displayed by showing the particle images sorted in ascending order by filter score. A statistical tolerance is defined for the score, which says that all particles with a score of X or lower are determined to belong to the class. These images are highlighted with a red box, and all other images are not. We can then visually inspect the results by looking at the images, as shown in Figure 9.

Figure 9: After statistical filtering, particles below the filter score are highlighted in red. Note that particle #9121 is the first non-selected particle, as its filter score is 8.09, which exceeds the statistically determined cut-off for membership in the class.

At this point, the user can interactively edit the classification to include particles which were not selected or to remove particles which were selected. This is generally only necessary in a very sparse sample where we need to actually count (enumerate) individual particles in a class. An example of this would be a water analysis where a particular algal species of interest might be present in a very

low concentration (10s of particles/mL). In most applications, where we are merely interested in the relative concentrations of different particle types, we can accept the results of the statistical filter without any further interaction, because a variation of a few particles out of thousands will not be statistically significant. Also, it is important to remember that if we apply the same statistical filter to multiple samples, the results are statistically normalized, because we are using the same library (training set) for the calculations.

V. Examples of Particle Image Understanding

The following two examples illustrate Particle Image Understanding in practice. The first is a relatively simple example where the object is merely to determine the concentration of a single type of particle in a heterogeneous sample, whereas in the second example we will be trying to actually enumerate the quantity of two different particle types in a single sample.

Figure 10: Typical results for a FlowCAM run on a chocolate sample. 20,000 particles were imaged, stored and measured. The summary statistics appear in the left-hand window, with particle images displayed in the right-hand window. Note the diversity of particle size, shape and transparency.

Example 1: Quantifying Sugar in Chocolates

In this example, the object was to quantify the amount of sugars contained in chocolate samples. As can be seen in Figure 10, the chocolate contains many different types of particles. However, the larger crystalline particles are known to be sugars. Because the sugars have a very distinct appearance and fall within a relatively narrow size range, this is an example where a value filter will work quite well. A library (training set) of sugar particle images was built first, and then used as a value filter to quantify the amount of sugar found in multiple samples. VisualSpreadsheet enables the user to use any library directly for either value or statistical filtering.
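The binary value-filter decision can be sketched as follows (a minimal illustration; the two measurements, the library values, and the use of simple min/max acceptance ranges are assumptions for the example, not the instrument's actual measurement set):

```python
def build_ranges(library):
    """Min/max acceptance range for each measurement, taken from the training set."""
    return [(min(col), max(col)) for col in zip(*library)]

def passes_value_filter(particle, ranges):
    """Binary decision: the particle belongs to the class only if every
    measurement falls inside the library's range."""
    return all(lo <= v <= hi for v, (lo, hi) in zip(particle, ranges))

# Hypothetical sugar library measured on (diameter ESD, aspect ratio):
library = [(40.0, 0.85), (55.0, 0.90), (48.0, 0.80)]
ranges = build_ranges(library)
print(passes_value_filter((50.0, 0.88), ranges))  # → True
print(passes_value_filter((12.0, 0.30), ranges))  # → False
```

Because the decision is a hard in/out test on each measurement, it works well precisely when, as here, the target particles occupy a distinct, narrow region of the measurement space.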
When used as a value filter, the software can also perform the filtering on the fly during acquisition, displaying the filter results as an additional summary statistic both during acquisition and afterward.

Figure 11: Results of the value filter classification on the common milk chocolate sample. Out of 20,020 particles measured, 319 of them were classified as sugars, representing a Volume Percentage of the sample of 11.61%. The Library window shows the particle images used for the training set.
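A Volume Percentage like the one reported in Figure 11 can be derived from the particle sizes; here is a sketch under the assumption that each particle's volume is modeled as a sphere of its equivalent spherical diameter (ESD), with invented example diameters:

```python
import math

def sphere_volume(esd):
    """Volume of a sphere whose diameter is the particle's ESD."""
    return math.pi * esd ** 3 / 6.0

def volume_percent(class_esds, all_esds):
    """Share of total particle volume contributed by the classified particles."""
    class_vol = sum(sphere_volume(d) for d in class_esds)
    total_vol = sum(sphere_volume(d) for d in all_esds)
    return 100.0 * class_vol / total_vol

# Hypothetical ESDs (microns): two classified particles out of four total.
print(round(volume_percent([50.0, 46.0], [50.0, 12.0, 46.0, 8.0]), 1))  # → 99.0
```

Note how strongly the cubic dependence on diameter weights the result toward large particles: a handful of large classified particles can dominate the volume percentage even in a sample with many small particles.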

The first sample run was a common milk chocolate, and the results of a typical run are shown in Figure 11. For each sample, multiple runs were made in order to check for repeatability, and the results were found to be very repeatable.

The second sample run was a premium dark chocolate. Once again, multiple runs were made in order to check for repeatability. The same library of particle images was used to perform the value filtering that had been used on the previous sample. As noted previously, using the same library particles for the filter on each sample ensures that the same particles are being searched for in both samples; in other words, the same statistical comparisons are being made on both samples. The results for a typical run with the premium dark chocolate sample are shown in Figure 12.

Figure 12: Results of the value filter classification on the premium dark chocolate sample. Out of 20,026 particles measured, 118 of them were classified as sugars, representing a Volume Percentage of the sample of 7.25%.

In comparing the results of the two samples, one sees a volume percent composition of 11.6% for the common milk chocolate, as opposed to 7.25% for the premium dark chocolate. These results agree with a qualitative taste testing of the two chocolates: the milk chocolate is higher in sugar content than the dark chocolate, which leads to it tasting sweeter. Although this example involves food, the same technique is applicable to virtually any type of sample. For instance, a chemical manufacturer may want to know the individual percent content of a number of different particle types contained in a mixture. Another example would be quantifying the amount of oil contained in water that may have many other particle types present, such as in the petrochemical industry when evaluating produced water.

Example 2: Enumerating Algae in a Drinking Water Supply

In this second example, the goal is to actually quantify (enumerate) the amount of two different algae types contained in a water sample obtained from a public surface water supply. Some algae can cause noticeable taste and odor issues within drinking water, and are therefore undesirable. If these algae go untreated and bloom, residue will end up at the consumer's tap, causing complaints and possibly panic. If the bloom is allowed to go this far, a major quantity of chemicals will be required in the reservoir to remove the algae. However, if the algae's presence can be detected prior to a bloom, a small amount of treatment will prevent the bloom, saving money for the utility and preventing unnecessary complaints.

To prevent a bloom from occurring, the utility will take daily samples from the reservoir for analysis. In the past, these samples had to be examined under a microscope by trained technicians to enumerate the different species of algae present. This is not only expensive and time consuming, but also allows only small amounts of sample to be analyzed, due to the time required to perform the enumeration manually. The FlowCAM is successfully being used by a number of water utilities to automate this process using statistical pattern recognition techniques.

Figure 13 shows a screenshot of a typical water sample after data acquisition in the FlowCAM. Water samples tend to be extremely sparse by nature, having a very low number of particles per unit volume (in this case, 1,708 particles/mL). Because of this, using manual techniques with a microscope is very time consuming, since only a very small amount of sample can be viewed at a time; the quantities that can be analyzed manually therefore usually do not represent a good statistical sample. This is a perfect example where automated imaging particle analysis using statistical pattern recognition can offer

immense savings in time and money. The technique is very straightforward: libraries (training sets) are built for each species of algae to be enumerated, and then statistical pattern recognition is used to automatically quantify the amount of each species within each sample. Remember that once these libraries are built satisfactorily (by a trained expert), they are saved and used as the basis for comparison on every incoming sample; the automatic analysis does not require an operator with any specific knowledge of algae identification once the libraries have been built.

Figure 13: Overview of results from FlowCAM acquisition of particles from a water sample. Note the sparseness of the sample; it contains only 1,708 particles/mL. Also note the diversity of different particle types found in the right-hand window.

Figure 14 shows a screenshot of the water sample with example images of the two types of algae we are interested in quantifying, Asterionella and Tabellaria. You will note that these two types of algae have very similar sizes, transparency, and other characteristics. It would be very difficult to construct a simple value filter that could easily distinguish between these two particle types. Statistical pattern recognition, however, can be used to discriminate between the two algae in a very straightforward manner. First, two libraries (training sets) are built by an expert who can identify good examples of each type of algae, similar to the ones shown in Figure 14. At this point, the statistical filter calculates the point for each class in a multidimensional space using all 26 variables collected by the FlowCAM. Finally, each particle in the sample is compared in this multidimensional space against the two libraries and scored against each. Each particle is then assigned to the nearest class, but only if its filter score is less than the confidence score determined by the filter.
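This assignment rule — nearest class wins, but only when the score beats a confidence cutoff — can be sketched as follows (a simplified two-measurement illustration; the class points, feature values, and cutoff are invented, and the real filter works on all 26 normalized measurements):

```python
import math

def classify_with_cutoff(particle, class_points, cutoff):
    """Score the particle against each class's point in the (normalized)
    feature space; assign it to the nearest class only if its score is
    within the cutoff, otherwise leave it unclassified."""
    name, score = min(
        ((n, math.dist(particle, p)) for n, p in class_points.items()),
        key=lambda pair: pair[1],
    )
    return name if score <= cutoff else "unclassified"

# Hypothetical 2-D stand-ins for the two algae libraries:
libraries = {"Asterionella": (0.2, 0.8), "Tabellaria": (0.6, 0.5)}
print(classify_with_cutoff((0.25, 0.75), libraries, cutoff=0.3))  # → Asterionella
print(classify_with_cutoff((0.9, 0.1), libraries, cutoff=0.3))    # → unclassified
```

The cutoff is what keeps debris and other particle types from being forced into one of the two classes merely because every point is nearest to something.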
The remaining particles are left as unclassified. Figure 15 shows the results of this automated classification.

Figure 14: Water sample showing the two types of algae that need to be quantified: the upper-right window shows Asterionella, and the lower-right window shows Tabellaria. These two types of algae are very similar in size and transparency, so they cannot be easily distinguished automatically.

Figure 15: Results of the automated statistical classification. The Classify window shows the particles identified as members of each class. Note that there are two tabs in this window, one for each class; in this case the window shows the particles classified as Asterionella, but clicking on the tab labelled Tabellaria would show the particles classified as that type. Particles in the right-hand window are those left over as unclassified. Note also, in the lower part of the left-hand window, that exact statistics (including count and concentration) for each class are summarized.

Although the two classes of particles found in this example may be easy to distinguish by the human eye/brain (as seen in Figure 14), this is computationally a fairly advanced discrimination to make mathematically. The fact that the FlowCAM records 26 different measurements for each particle gives the statistical filter the amount of data necessary to make such a subtle discrimination automatically. In this example, it makes it realistically possible to analyze enough sample to give statistically significant results in a very short period of time. Such an analysis performed manually by humans counting through a microscope would be time- and cost-prohibitive to perform on a regular basis.

VI. Conclusions

The two examples detailed above show how statistical pattern recognition can be used to automatically differentiate and quantify unique particle types contained in a heterogeneous sample. The more subtle the distinctions being made, the higher the level of Particle Image Understanding required in order to make the distinction. Example 1 required only a simple Level 2 value filter in order to quantify the sugars in the chocolate, whereas Example 2 required a Level 3 statistical filter in order to distinguish and quantify the two different algae types of interest. These types of analyses may seem quite simple to the human eye/brain system, but are quite complex to accomplish automatically using mathematics in a computer. However, as we have seen, this type of mathematical analysis is now within the realm of possibility on common personal computers using off-the-shelf software. The key to performing this type of automated analysis is having an image acquisition system capable of producing the number of different measurements required to make higher-level Particle Image Understanding segmentations. Finally, the ability of such a system to collect enough particle images to produce statistically significant sample quantities will enable the use of these techniques in applications where performing manual analysis through a microscope would be cost- and time-prohibitive.

VII. References

1.) Kandel, E. & Schwartz, J. (Eds.) (1981). Principles of Neural Science. New York: Elsevier/North Holland.

2.) Tsotsos, J.K. Image Understanding. From Shapiro, S. C. (Ed.) (1987). Encyclopedia of Artificial Intelligence. New York: John Wiley & Sons.

3.) Association for the Advancement of Artificial Intelligence web page ( ); description of the Pattern Recognition research area at the University of Delft ( ).

4.) Brown, L. (2004). Continuous Imaging Fluid Particle Analysis: A Primer. Fluid Imaging Technologies White Paper ( ).

5,6.) Tsotsos, J.K. How Does Human Vision Beat the Computational Complexity of Visual Perception? From Pylyshyn, Z. (Ed.) (1988). Computational Processes in Human Vision: An Interdisciplinary Perspective. Norwood, NJ: Ablex Press.

7.) Duncan, J., Ward, J., & Shapiro, K. (1994). Direct Measurement of Attention Dwell Time in Human Vision. Nature 369. New York, NY: Nature Publishing Group.

8,9.) Weems, C., Riseman, E., Hanson, A. & Rosenfeld, A. (1991). The DARPA Image Understanding Benchmark for Parallel Computers. Journal of Parallel and Distributed Computing 11. Amsterdam, NL: Academic Press, Inc. (Elsevier).


More information

Applying Supervised Learning

Applying Supervised Learning Applying Supervised Learning When to Consider Supervised Learning A supervised learning algorithm takes a known set of input data (the training set) and known responses to the data (output), and trains

More information

The Bizarre Truth! Automating the Automation. Complicated & Confusing taxonomy of Model Based Testing approach A CONFORMIQ WHITEPAPER

The Bizarre Truth! Automating the Automation. Complicated & Confusing taxonomy of Model Based Testing approach A CONFORMIQ WHITEPAPER The Bizarre Truth! Complicated & Confusing taxonomy of Model Based Testing approach A CONFORMIQ WHITEPAPER By Kimmo Nupponen 1 TABLE OF CONTENTS 1. The context Introduction 2. The approach Know the difference

More information

Unsupervised Learning : Clustering

Unsupervised Learning : Clustering Unsupervised Learning : Clustering Things to be Addressed Traditional Learning Models. Cluster Analysis K-means Clustering Algorithm Drawbacks of traditional clustering algorithms. Clustering as a complex

More information

Recent Progress on RAIL: Automating Clustering and Comparison of Different Road Classification Techniques on High Resolution Remotely Sensed Imagery

Recent Progress on RAIL: Automating Clustering and Comparison of Different Road Classification Techniques on High Resolution Remotely Sensed Imagery Recent Progress on RAIL: Automating Clustering and Comparison of Different Road Classification Techniques on High Resolution Remotely Sensed Imagery Annie Chen ANNIEC@CSE.UNSW.EDU.AU Gary Donovan GARYD@CSE.UNSW.EDU.AU

More information

Visual Design. Simplicity, Gestalt Principles, Organization/Structure

Visual Design. Simplicity, Gestalt Principles, Organization/Structure Visual Design Simplicity, Gestalt Principles, Organization/Structure Many examples are from Universal Principles of Design, Lidwell, Holden, and Butler Why discuss visual design? You need to present the

More information

Fast Fuzzy Clustering of Infrared Images. 2. brfcm

Fast Fuzzy Clustering of Infrared Images. 2. brfcm Fast Fuzzy Clustering of Infrared Images Steven Eschrich, Jingwei Ke, Lawrence O. Hall and Dmitry B. Goldgof Department of Computer Science and Engineering, ENB 118 University of South Florida 4202 E.

More information

Event: PASS SQL Saturday - DC 2018 Presenter: Jon Tupitza, CTO Architect

Event: PASS SQL Saturday - DC 2018 Presenter: Jon Tupitza, CTO Architect Event: PASS SQL Saturday - DC 2018 Presenter: Jon Tupitza, CTO Architect BEOP.CTO.TP4 Owner: OCTO Revision: 0001 Approved by: JAT Effective: 08/30/2018 Buchanan & Edwards Proprietary: Printed copies of

More information

Error Analysis, Statistics and Graphing

Error Analysis, Statistics and Graphing Error Analysis, Statistics and Graphing This semester, most of labs we require us to calculate a numerical answer based on the data we obtain. A hard question to answer in most cases is how good is your

More information

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor COSC160: Detection and Classification Jeremy Bolton, PhD Assistant Teaching Professor Outline I. Problem I. Strategies II. Features for training III. Using spatial information? IV. Reducing dimensionality

More information

Contour LS-K Optical Surface Profiler

Contour LS-K Optical Surface Profiler Contour LS-K Optical Surface Profiler LightSpeed Focus Variation Provides High-Speed Metrology without Compromise Innovation with Integrity Optical & Stylus Metrology Deeper Understanding More Quickly

More information

Structural and Syntactic Pattern Recognition

Structural and Syntactic Pattern Recognition Structural and Syntactic Pattern Recognition Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr CS 551, Fall 2017 CS 551, Fall 2017 c 2017, Selim Aksoy (Bilkent

More information

Experiments with Edge Detection using One-dimensional Surface Fitting

Experiments with Edge Detection using One-dimensional Surface Fitting Experiments with Edge Detection using One-dimensional Surface Fitting Gabor Terei, Jorge Luis Nunes e Silva Brito The Ohio State University, Department of Geodetic Science and Surveying 1958 Neil Avenue,

More information

Machine Learning (CSMML16) (Autumn term, ) Xia Hong

Machine Learning (CSMML16) (Autumn term, ) Xia Hong Machine Learning (CSMML16) (Autumn term, 28-29) Xia Hong 1 Useful books: 1. C. M. Bishop: Pattern Recognition and Machine Learning (2007) Springer. 2. S. Haykin: Neural Networks (1999) Prentice Hall. 3.

More information

CSE4334/5334 DATA MINING

CSE4334/5334 DATA MINING CSE4334/5334 DATA MINING Lecture 4: Classification (1) CSE4334/5334 Data Mining, Fall 2014 Department of Computer Science and Engineering, University of Texas at Arlington Chengkai Li (Slides courtesy

More information

CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS

CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS CHAPTER 4 CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS 4.1 Introduction Optical character recognition is one of

More information

OPTIMISATION OF PIN FIN HEAT SINK USING TAGUCHI METHOD

OPTIMISATION OF PIN FIN HEAT SINK USING TAGUCHI METHOD CHAPTER - 5 OPTIMISATION OF PIN FIN HEAT SINK USING TAGUCHI METHOD The ever-increasing demand to lower the production costs due to increased competition has prompted engineers to look for rigorous methods

More information

Images Reconstruction using an iterative SOM based algorithm.

Images Reconstruction using an iterative SOM based algorithm. Images Reconstruction using an iterative SOM based algorithm. M.Jouini 1, S.Thiria 2 and M.Crépon 3 * 1- LOCEAN, MMSA team, CNAM University, Paris, France 2- LOCEAN, MMSA team, UVSQ University Paris, France

More information

MIT 801. Machine Learning I. [Presented by Anna Bosman] 16 February 2018

MIT 801. Machine Learning I. [Presented by Anna Bosman] 16 February 2018 MIT 801 [Presented by Anna Bosman] 16 February 2018 Machine Learning What is machine learning? Artificial Intelligence? Yes as we know it. What is intelligence? The ability to acquire and apply knowledge

More information

Data: a collection of numbers or facts that require further processing before they are meaningful

Data: a collection of numbers or facts that require further processing before they are meaningful Digital Image Classification Data vs. Information Data: a collection of numbers or facts that require further processing before they are meaningful Information: Derived knowledge from raw data. Something

More information

CHAPTER 3 A FAST K-MODES CLUSTERING ALGORITHM TO WAREHOUSE VERY LARGE HETEROGENEOUS MEDICAL DATABASES

CHAPTER 3 A FAST K-MODES CLUSTERING ALGORITHM TO WAREHOUSE VERY LARGE HETEROGENEOUS MEDICAL DATABASES 70 CHAPTER 3 A FAST K-MODES CLUSTERING ALGORITHM TO WAREHOUSE VERY LARGE HETEROGENEOUS MEDICAL DATABASES 3.1 INTRODUCTION In medical science, effective tools are essential to categorize and systematically

More information

How to Tell a Human apart from a Computer. The Turing Test. (and Computer Literacy) Spring 2013 ITS B 1. Are Computers like Human Brains?

How to Tell a Human apart from a Computer. The Turing Test. (and Computer Literacy) Spring 2013 ITS B 1. Are Computers like Human Brains? How to Tell a Human apart from a Computer The Turing Test (and Computer Literacy) Spring 2013 ITS102.23 - B 1 Are Computers like Human Brains? The impressive contributions of computers during World War

More information

Model-based segmentation and recognition from range data

Model-based segmentation and recognition from range data Model-based segmentation and recognition from range data Jan Boehm Institute for Photogrammetry Universität Stuttgart Germany Keywords: range image, segmentation, object recognition, CAD ABSTRACT This

More information

Cutting edge solution in Particle Size and Shape analysis

Cutting edge solution in Particle Size and Shape analysis Cutting edge solution in Particle Size and Shape analysis PSA300 Automated image analysis to measure particle size and shape. Why Image Analysis? The HORIBA PSA300 is a state of the art turnkey image analysis

More information

EE 589 INTRODUCTION TO ARTIFICIAL NETWORK REPORT OF THE TERM PROJECT REAL TIME ODOR RECOGNATION SYSTEM FATMA ÖZYURT SANCAR

EE 589 INTRODUCTION TO ARTIFICIAL NETWORK REPORT OF THE TERM PROJECT REAL TIME ODOR RECOGNATION SYSTEM FATMA ÖZYURT SANCAR EE 589 INTRODUCTION TO ARTIFICIAL NETWORK REPORT OF THE TERM PROJECT REAL TIME ODOR RECOGNATION SYSTEM FATMA ÖZYURT SANCAR 1.Introductıon. 2.Multi Layer Perception.. 3.Fuzzy C-Means Clustering.. 4.Real

More information

On the Visibility of the Shroud Image. Author: J. Dee German ABSTRACT

On the Visibility of the Shroud Image. Author: J. Dee German ABSTRACT On the Visibility of the Shroud Image Author: J. Dee German ABSTRACT During the 1978 STURP tests on the Shroud of Turin, experimenters observed an interesting phenomenon: the contrast between the image

More information

Digital Image Processing. Prof. P. K. Biswas. Department of Electronic & Electrical Communication Engineering

Digital Image Processing. Prof. P. K. Biswas. Department of Electronic & Electrical Communication Engineering Digital Image Processing Prof. P. K. Biswas Department of Electronic & Electrical Communication Engineering Indian Institute of Technology, Kharagpur Lecture - 21 Image Enhancement Frequency Domain Processing

More information

Classification. Vladimir Curic. Centre for Image Analysis Swedish University of Agricultural Sciences Uppsala University

Classification. Vladimir Curic. Centre for Image Analysis Swedish University of Agricultural Sciences Uppsala University Classification Vladimir Curic Centre for Image Analysis Swedish University of Agricultural Sciences Uppsala University Outline An overview on classification Basics of classification How to choose appropriate

More information

Figure 1: Workflow of object-based classification

Figure 1: Workflow of object-based classification Technical Specifications Object Analyst Object Analyst is an add-on package for Geomatica that provides tools for segmentation, classification, and feature extraction. Object Analyst includes an all-in-one

More information

Database Management System Prof. D. Janakiram Department of Computer Science & Engineering Indian Institute of Technology, Madras Lecture No.

Database Management System Prof. D. Janakiram Department of Computer Science & Engineering Indian Institute of Technology, Madras Lecture No. Database Management System Prof. D. Janakiram Department of Computer Science & Engineering Indian Institute of Technology, Madras Lecture No. # 20 Concurrency Control Part -1 Foundations for concurrency

More information

UNDERSTANDING CALCULATION LEVEL AND ITERATIVE DECONVOLUTION

UNDERSTANDING CALCULATION LEVEL AND ITERATIVE DECONVOLUTION UNDERSTANDING CALCULATION LEVEL AND ITERATIVE DECONVOLUTION Laser diffraction particle size analyzers use advanced mathematical algorithms to convert a measured scattered light intensity distribution into

More information

Keywords Binary Linked Object, Binary silhouette, Fingertip Detection, Hand Gesture Recognition, k-nn algorithm.

Keywords Binary Linked Object, Binary silhouette, Fingertip Detection, Hand Gesture Recognition, k-nn algorithm. Volume 7, Issue 5, May 2017 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Hand Gestures Recognition

More information

Scattering/Wave Terminology A few terms show up throughout the discussion of electron microscopy:

Scattering/Wave Terminology A few terms show up throughout the discussion of electron microscopy: 1. Scattering and Diffraction Scattering/Wave Terology A few terms show up throughout the discussion of electron microscopy: First, what do we mean by the terms elastic and inelastic? These are both related

More information

A Generalized Method to Solve Text-Based CAPTCHAs

A Generalized Method to Solve Text-Based CAPTCHAs A Generalized Method to Solve Text-Based CAPTCHAs Jason Ma, Bilal Badaoui, Emile Chamoun December 11, 2009 1 Abstract We present work in progress on the automated solving of text-based CAPTCHAs. Our method

More information

CHAPTER 4: CLUSTER ANALYSIS

CHAPTER 4: CLUSTER ANALYSIS CHAPTER 4: CLUSTER ANALYSIS WHAT IS CLUSTER ANALYSIS? A cluster is a collection of data-objects similar to one another within the same group & dissimilar to the objects in other groups. Cluster analysis

More information

Blood Microscopic Image Analysis for Acute Leukemia Detection

Blood Microscopic Image Analysis for Acute Leukemia Detection I J C T A, 9(9), 2016, pp. 3731-3735 International Science Press Blood Microscopic Image Analysis for Acute Leukemia Detection V. Renuga, J. Sivaraman, S. Vinuraj Kumar, S. Sathish, P. Padmapriya and R.

More information

Fabric Defect Detection Based on Computer Vision

Fabric Defect Detection Based on Computer Vision Fabric Defect Detection Based on Computer Vision Jing Sun and Zhiyu Zhou College of Information and Electronics, Zhejiang Sci-Tech University, Hangzhou, China {jings531,zhouzhiyu1993}@163.com Abstract.

More information

Mouse Pointer Tracking with Eyes

Mouse Pointer Tracking with Eyes Mouse Pointer Tracking with Eyes H. Mhamdi, N. Hamrouni, A. Temimi, and M. Bouhlel Abstract In this article, we expose our research work in Human-machine Interaction. The research consists in manipulating

More information

Semi-Supervised Clustering with Partial Background Information

Semi-Supervised Clustering with Partial Background Information Semi-Supervised Clustering with Partial Background Information Jing Gao Pang-Ning Tan Haibin Cheng Abstract Incorporating background knowledge into unsupervised clustering algorithms has been the subject

More information

Efficient Object Extraction Using Fuzzy Cardinality Based Thresholding and Hopfield Network

Efficient Object Extraction Using Fuzzy Cardinality Based Thresholding and Hopfield Network Efficient Object Extraction Using Fuzzy Cardinality Based Thresholding and Hopfield Network S. Bhattacharyya U. Maulik S. Bandyopadhyay Dept. of Information Technology Dept. of Comp. Sc. and Tech. Machine

More information

Clustering CS 550: Machine Learning

Clustering CS 550: Machine Learning Clustering CS 550: Machine Learning This slide set mainly uses the slides given in the following links: http://www-users.cs.umn.edu/~kumar/dmbook/ch8.pdf http://www-users.cs.umn.edu/~kumar/dmbook/dmslides/chap8_basic_cluster_analysis.pdf

More information