
Breaking Wave Velocity Estimation
Final Report
ME Machine Vision
Instructor: Dr. Wayne Daley
Nathan Young
December 7th, 2007

Table of Contents:
Motivation
Problem Definition
Problem Statement
Literature Review
    Near-Shore Image Analysis Techniques
    Segmentation using Energy Minimization
    Summary
Approaches
    1) Read Image
    2) Subtract Images
    3) Window and Filter Images
    4) Apply Image Transects
    5) Create Intensity Profiles
    6) Approximate Velocity
Results
    Intensity Profiling
    Velocity Estimation
    Summary
Discussion
    Intermediate Results
        Image Subtraction and Windowing
        Filtering and Thresholding
    Primary Results
        Intensity Profiling
        Velocity Estimation
Conclusions and Future Work
References
Appendix A - Timeline
Appendix B - Deliverables
Page 2 of 26

Table of Figures:
Figure 1: Sample images of breaking waves
Figure 2: Approach flow diagram
Figure 3: Sample image data
Figure 4: Image subtraction
Figure 5: Example of image windows
Figure 6: Unsharp contrast enhancement filter
Figure 7: Example of transect mapping
Figure 8: Example intensity profile
Figure 9: Profile mapping for intensity data
Figure 10: Intensity profile data
Figure 11: Breaking wave interpretation
Figure 12: Breaking wave velocity plot
Figure 13: Image subtraction
Figure 14: Image region windowing
Figure 15: Region 1 filtering operations
Figure 16: Region 2 filtering operations
Figure 17: Region 3 filtering operations

Motivation:

Topographic changes in beach areas (termed the foreshore region) are of particular interest for two interrelated reasons: recreation and local economies, since local economies rely on beach recreation to attract the public. Changes in the foreshore region can reduce the usable beach area, and this loss can significantly impact the local economy as visitors no longer seek out the beach or the community surrounding it.

In this project, the foam of breaking waves impinging on the beach is tracked, providing a means to calculate the velocity of the breaking waves. Breaking waves are analyzed because of their significant impact on sediment transport. From estimates of sediment transport, the evolution of foreshore topography can be monitored to provide feedback on the causes of the appearance or disappearance of beaches, allowing local communities to take action to reduce sediment loss.

1 Problem Definition:

As displayed in Figure 1, beach conditions vary with many factors, among them the time of day and current weather patterns. The breaking waves can be seen in Figure 1 as the foam present at the front of incoming waves on the beach; the waves associated with this foam are typically referred to as bores. Tracking the waves requires recognizing bore foam, labeling bores, and following each bore across a sequence of images until it disappears. The disappearance of a bore is difficult to track because of interactions with other bores; therefore, an algorithm that can track bores until recession is required. From this definition, the algorithm must be capable of maintaining appropriate labels and position data for breaking wave foam in a sequence of images.

Figure 1: Sample images of breaking waves.

Problem Statement:

Given a sequence of images from an image acquisition system located on a beach, develop an algorithm capable of tracking wave velocity in sequential images to aid in the approximation of sediment transport.

2 Literature Review:

The measurement of the velocity and frequency of breaking wave events is critical to approximating the energy processes associated with sediment transport along the beach. The body of literature on the characterization and analysis of near-shore image data is very broad in terms of the near-shore features extracted from image data, while the literature that focuses directly on the image analysis of breaking waves is somewhat disparate. In this literature review, a set of near-shore image analysis approaches and general segmentation approaches based upon the minimization of some cost or energy function are presented.

Near-Shore Image Analysis Techniques:

The study of nearshore processes can be divided into two interrelated components: the dynamics of waves and the subsequent bathymetry evolution due to the sediment transport associated with wave field dynamics. In this section, parametric inference models are addressed as they pertain to the wave energy transfer processes that contribute to sediment transport.

The most relevant approach to approximating the velocity of breaking waves in the surf zone combines scatterometry and video data. A scatterometer returns data on the backscatter from breaking waves, from which information such as bathymetry and wave energy fluxes can be inferred. Video data is used in conjunction with this technology to validate the information gathered from the scatterometer. In this approach, sea spikes (whitecaps) are measured from the water surface by means of microwave backscatter, and from this backscatter a normalized radar cross section (NRCS) is retrieved.
From the NRCS, sea spikes are interpreted as any deviation above the NRCS mean for a neighborhood comprised of more than one data point. To ensure that the sea spikes in the NRCS are breaking waves, image data is recorded for the scene of interest. Two parallel transects, normal to the wave direction, are overlaid on the image scene where the backscatter is recorded. From each frame, an intensity value is recorded for every transect pixel; by storing each pixel, an intensity profile is created as a function of frame number. By cross-correlating the backscatter with the intensity profiles, it is shown that the phases of breaking waves can generally be extracted from the profile by observing local optima and comparing the two transect profiles (Haller 2003).
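The two-transect intensity-profiling idea can be sketched in code: record the intensity at each transect over a sequence of frames, then cross-correlate the two time series to find the lag at which the same wave pattern arrives at the second transect. This is a minimal illustration with synthetic profiles; the crest shape, transect spacing, and frame rate are assumed values, not data from Haller (2003).

```python
# Sketch: estimate wave propagation speed from two parallel transects
# by cross-correlating their intensity time series. The signal values,
# transect spacing, and frame rate are synthetic illustrations.

def best_lag(a, b, max_lag):
    """Return the lag (in frames) at which b best matches a delayed copy of a."""
    best, best_score = 0, float("-inf")
    for lag in range(0, max_lag + 1):
        score = sum(a[i] * b[i + lag] for i in range(len(a) - max_lag))
        if score > best_score:
            best, best_score = lag, score
    return best

# Synthetic intensity profiles: a bright wave crest passes transect A,
# then transect B three frames later.
frames = 40
crest = [0, 40, 120, 200, 120, 40, 0]
profile_a = [0] * frames
profile_b = [0] * frames
for i, v in enumerate(crest):
    profile_a[10 + i] = v
    profile_b[13 + i] = v          # same crest, delayed 3 frames

lag = best_lag(profile_a, profile_b, max_lag=8)
transect_spacing_px = 30           # assumed pixel distance between transects
frame_rate = 2.0                   # assumed frames per second
speed = transect_spacing_px / (lag / frame_rate)
print(lag, speed)                  # lag of 3 frames -> 20.0 px/s
```

The same lag search, applied per transect pair, is what allows local optima in the two profiles to be matched and turned into a propagation speed.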

Problem Connection: In this approach, sea spikes are tracked using both backscatter and image data. Of particular interest is the image processing technique, as it directly applies to the estimation of breaking wave velocity. To extend the sea spike intensity profiling technique to surf zone wave velocity estimation, a series of transects could be applied to a localized region of the image. Using the intensity profiles, the maxima, representing the wave crest, could be tracked across the transects. Using a rectification operation to transform image coordinates to real-world coordinates, the wave crest velocity could then be computed.

Another technique for the extraction of nearshore image data involves the estimation of longshore currents over large time spans, often called the optical current meter (OCM). In this approach, the foam left behind by breaking waves is used as a tracer for longshore current calculations. To record the nearshore image data, a video camera is placed on a hotel balcony approximately 45 meters above the beach. To create a mapping between real-world and image coordinates, control points were recorded from a ground survey. After the images are rectified, there are five processing steps:

1) Identify a longshore array of points centered about the location of current measurement.
2) Record the pixel locations corresponding to the ground points of interest.
3) Calculate the intensity values from each frame using a 2D spatial linear interpolation scheme, stacking the intensity values for 128 frames into a timestack.
4) Apply a median filter to the timestack and perform a 2D FFT to generate a foam intensity spectrum in wavenumber and frequency space, producing a 1D spectrum in velocity space.
5) To account for noise in the velocity spectrum, apply a model assuming a Gaussian distribution of foam wavelengths.
In the OCM approach, the longshore current can be estimated very well, especially for the very low frequency components (Haas 2006).

Problem Connection: The focus of this approach is on estimating longshore currents, but several of its techniques could be applied to the proposed problem of breaking wave velocity estimation. The rectification operation can be applied to translate image coordinates to ground coordinates, and a median filter could be used to reduce some of the noise associated with uninteresting foam features.

Segmentation using Energy Minimization:

The following approach is a more general machine vision method for segmenting an image in a way that promotes the classification of image regions; in particular, it attempts to classify different textures within an image. The general process is as follows: given a sequence of images with two or more distinct regions, estimate the regions and the model parameters of each region, namely the initial state and covariance of the driving process. For each region, define a discrepancy metric among regions, which compares classes and parameters such as subspace angles. Since the region of interest may not be known and textures depend upon the pixel values of the entire region, there is a resulting chicken-and-egg problem. To solve this problem, a two-step algorithm is implemented: first, a local signature is associated with each pixel by integrating visual information over a fixed spatial neighborhood, the signature being computed from the subspace angle relative to the reference model for a texture.

1) Calculating Signatures: Considering local neighborhoods around each pixel, generate a spatio-temporal signature from the cosines of the subspace angles between the signature and a reference model. By minimizing a cost function, which requires minimizing the inhomogeneity of signatures within each region and minimizing the length of the separating boundary, a segmentation of the image plane is achieved.

2) Level Set Formulation: A level set representation is used to identify the solution boundaries, as it does not restrict topology or boundary evolution and does not require regridding of marker points.

3) Energy Minimization: In incremental steps, estimate the mean signatures and the boundary evolution: average the signatures over each phase, and for fixed regions use gradient descent on the level set formulation to model the boundary evolution.

This method yields a framework for effectively segmenting dynamically behaving regions (Doretto 2003).

Problem Connection: The idea of dynamic textures is directly analogous to the idea of foam as a texture on the ocean. In the proposed problem, areas such as wave crests in the foam pattern would be treated as blobs. For each blob, a signature would be computed relative to a model of the ideal signature; implementing this would require the analyst to identify ideal wave crest models against which to judge the blobs in each image.
Once the crests are identified and signed, the energy minimization can be applied, whereby the boundary evolution of each crest is tracked. By tracking the boundary transition between images, the breaking wave velocity could be calculated. It must be observed that this approach could be too computationally expensive to apply to large amounts of image data.

The next approach is related to the previous one in that it is based on the minimization of a cost function. In this approach, the image is segmented into regions of motion using an adapted Mumford-Shah energy function; essentially, motion vectors are used to segment the regions. A major assumption is that the intensity of a moving point is constant; hence, this approach cannot be used for dynamically changing textures. A probabilistic formulation is used to predict whether a point of a spatio-temporal derivative of an image sequence is part of a specific region. From this probability, the vectors associated with these regions can be determined to be orthogonal or parallel, thus guiding the contour. An energy function is created which incorporates the summation of derivatives of the spatio-temporal gradient of the homogeneous velocity for each region contained in an image, plus a term containing the length of the contour. Essentially, a region is added to the image and matched to the boundary contour of a region by minimizing the energy function.

Using this approach, the boundary of a moving object such as a car can be tracked using both a static and a moving camera (Cremers 2003).

Problem Connection: This approach focuses on the motion of segmented regions of an image, such as a moving car. Since wave crests are moving, evolving regions of an image, tracking segmented regions could be of value. The vectors associated with the wave crest region could be tracked and the wave crest velocity directly extracted; using these vectors, the wave crests could be continuously tracked, resulting in a model of the velocity across various images. It must be observed, however, that the pixel value of a moving point is assumed constant. This could prevent robust tracking of wave crests because the wave crest intensity values evolve constantly.

By solving the piecewise-smooth Mumford-Shah energy function with a level set formulation, an image can be analyzed by detecting edges, employing active contours to locate boundaries, denoising, and segmenting to locate objects. The following approach proposes a level set formulation for minimizing the Mumford-Shah energy function, presented in the form of three case studies: one-dimensional, two-dimensional/two-phase, and two-dimensional/four-phase.

The one-dimensional case study involves segmenting an image based upon its signal representation. For signal segmentation, only one level set function (a Lipschitz function) is needed to represent a piecewise-smooth Mumford-Shah energy function. By employing gradient ascent or descent along the signal, a signal jump can be robustly discovered, thus detecting an edge. For the two-dimensional case with a two-phase model, it is assumed that the set of contours used to detect edges can be represented by one level set function.
When applied to the image, this representation assigns a higher energy value to objects than to the background. The energy function of the image can then be searched with a gradient descent algorithm until the object boundaries are located. For the two-dimensional case with a four-phase model, it is assumed that if a boundary cannot be represented with a single level set function, it can be located using two level set functions. This idea is the operating principle behind the Four Color Theorem: regions can be colored with one of only four colors, represented by one of two level set functions and their positive or negative states, and adjacent regions are required to have different colors. By employing the Heaviside step function and the relation between the four colors, the energy minimization can be represented in level set formulation (Chan 2001).

Problem Connection: In this application, the one-dimensional case could be applied to the proposed problem for computational efficiency. The signal representation of an image could be formed, and using a gradient ascent/descent algorithm, the signal jumps could be detected, resulting in the tracking of multiple crests within the same image. Essentially, the gradient ascent/descent must scan the entire image to ensure that all crests are found and robustly represented from image to image.
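The one-dimensional jump-detection idea can be sketched simply: scan the discrete gradient of a transect signal and flag locations where it exceeds a threshold, marking the rising and falling edges of crest plateaus. The signal and the jump threshold below are illustrative choices, not values from the cited work.

```python
# Sketch: locating intensity jumps in a 1D transect signal by scanning
# its discrete gradient, in the spirit of the one-dimensional case above.

def find_jumps(signal, threshold):
    """Return indices i where |signal[i+1] - signal[i]| exceeds threshold."""
    return [i for i in range(len(signal) - 1)
            if abs(signal[i + 1] - signal[i]) > threshold]

# Piecewise-constant signal with two "wave crest" plateaus.
signal = [10] * 5 + [200] * 4 + [10] * 6 + [180] * 3 + [10] * 4
jumps = find_jumps(signal, threshold=100)
print(jumps)   # rising and falling edges of the two plateaus
```

Paired rising/falling indices bracket each crest, so multiple crests in the same transect can be tracked from frame to frame.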

Summary:

In this literature review, a set of near-shore image analysis approaches and a general set of segmentation approaches based upon the minimization of some cost or energy function were presented. Of these, the two coastal engineering applications, intensity profiling of sea spikes and estimation of longshore currents, appear to be the most feasible problem-solving approaches because they make fewer assumptions with respect to image processing and have succeeded in their current applications. Specifically, image rectification, median filtering, the FFT, and intensity profiling are very promising techniques that provide a basis for the project approach planning to begin. Conversely, the energy minimization approaches are not chosen because of their computational demands relative to the amount of image data required for nearshore monitoring, the absence of ideal wave crest models, and the assumption of tracking regions with constant pixel values.

3 Approaches:

The selected solution approach combines two of the aforementioned coastal engineering applications for approximating nearshore current characteristics using remote sensing. From these two approaches, image windowing, median filtering, thresholding operations, and intensity profiling are used to develop a robust approach to calculating breaking wave velocity from a single elevated video camera. In this section of the report, these tools are articulated in the order of implementation to provide a logical explanation of the selected approach. The general flow diagram for this approach is shown in Figure 2.

1) Read Image
2) Subtract Images
3) Window and Filter Images
4) Apply Image Transects
5) Create Intensity Profiles
6) Approximate Pixel-Based Velocity

Figure 2: Approach flow diagram.

1) Read Image

The images are retrieved from a two-frame-per-second frame dump of the video segment. A sequence of nine images is shown in Figure 3. These images were selected for their good quality and weather conditions.

Figure 3: Sample image data.

As shown in Figure 3, developing waves can be recognized by two features: 1) the white foam front and 2) the dark region due to ambient lighting. These features can be seen in the circled portion of the ninth image. Breaking wave recognition is discussed further in the image windowing step.

2) Subtract Images

To extract the dynamic breaking wave features, an image subtraction is applied by iteratively subtracting consecutive images from the supplied data. An example image subtraction is shown in Figure 4.

Figure 4: Image subtraction.

The image subtraction results in a new image in which constant pixel intensities produce a difference of zero, whereas dynamic data is represented by the difference between pixel values. The breaking wave fronts are constantly changing; therefore, the fronts can be extracted by image subtraction, as shown by the red arrows in Figure 4.

3) Window and Filter Images

Due to the nature of the image data, the image is windowed to eliminate superfluous data such as open ocean and beach area. An example of the selected windows is displayed in Figure 5.

Figure 5: Example of image windows.

The image is split into regions to reduce computation and to accommodate three areas of interest representing various levels of difficulty for feature recognition. The first window captures the initial breaking wave. The second window aids image processing of the most complex region of the surf zone, where spurious foam is more prevalent and waves tend to interact due to varying breaking wave velocities. The third region displays the breaking wave front impinging on the beach, represented by a rather distinct white line. By separating these regions, different filtering operations can be used to further extract features. An example of a filtering operation is shown in Figure 6.

Figure 6: Unsharp contrast enhancement filter.

The filtering operation drastically enhances the breaking wave features in region 1. Because of the different wave behavior in each region, separate filtering techniques are employed. Also, a noise filtering operation will be attempted to combat the spurious noise that appears as a result of the contrast enhancement filter.

4) Apply Image Transects

Image transects are applied to the windowed regions of the image to act as points of profile measurement. An example of this process is shown in Figure 7.

Figure 7: Example of transect mapping.

Transects are placed every 10 pixels in the M direction and run the entire width of the windowed regions.
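Steps 2 and 3 can be sketched on tiny synthetic frames: an absolute per-pixel difference isolates the moving front, and a crop keeps only the window of interest. The frame contents and window coordinates are illustrative, not taken from the project's image data.

```python
# Sketch of steps 2 and 3: subtract consecutive frames to isolate moving
# foam, then crop a window of interest. Frames are tiny synthetic
# grayscale grids; real frames would come from the 2 fps video dump.

def subtract(frame_a, frame_b):
    """Absolute per-pixel difference of two equal-sized grayscale frames."""
    return [[abs(a - b) for a, b in zip(ra, rb)]
            for ra, rb in zip(frame_a, frame_b)]

def window(image, top, bottom, left, right):
    """Crop rows top..bottom-1 and columns left..right-1."""
    return [row[left:right] for row in image[top:bottom]]

# A bright front (value 200) moves one column to the right between frames.
frame1 = [[0, 200, 0, 0],
          [0, 200, 0, 0],
          [0,   0, 0, 0]]
frame2 = [[0, 0, 200, 0],
          [0, 0, 200, 0],
          [0, 0,   0, 0]]

diff = subtract(frame2, frame1)
region = window(diff, 0, 2, 1, 3)   # rows 0-1, columns 1-2
print(region)   # [[200, 200], [200, 200]]
```

Static pixels cancel to zero while the moving front survives in the difference image, which is exactly why the subtraction highlights the foam fronts.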

5) Create Intensity Profiles

From the transect mapping, intensity values are recorded every 25 pixels in the N direction for each region. This intensity profile provides a means to track the breaking wave front. An example intensity profile (intensity value, 0-255, versus pixel position) is shown in Figure 8.

Figure 8: Example intensity profile.

The intensity profile represents the intensity values along a transect of points for one frame; in this case, the profile shows the intensity values for the first transect in the N direction. By tracking the local minima on the wave fronts, a velocity can be extracted.

6) Approximate Velocity

Since the distance (in pixels) between the transects is known, a velocity can be approximated from the frequency at which wave fronts (signal spikes) cross the transects. To perform this, the intensity profiles are mapped relative to time. The pixel distance traveled, divided by the time corresponding to the number of frames required to travel that distance, yields an approximation of the velocity.
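Steps 5 and 6 can be sketched with binarized profiles: each transect yields a per-frame presence signal, and the difference between the frames at which the front first crosses two transects, together with the known transect spacing, gives a velocity. The crossing frames below are synthetic; the 10-pixel spacing and 2 fps frame rate follow the report.

```python
# Sketch of steps 5 and 6: build a binary intensity profile per transect
# (presence of the wave front per frame) and approximate velocity from
# the frames needed to travel the known pixel spacing.

def first_crossing(profile):
    """Frame index at which the binarized wave front first appears."""
    for frame, present in enumerate(profile):
        if present:
            return frame
    return None

# Binarized profiles for two transects 10 pixels apart,
# sampled at 2 frames per second.
transect_near = [0, 0, 0, 1, 1, 1, 0, 0]   # front arrives at frame 3
transect_far  = [0, 0, 0, 0, 0, 1, 1, 0]   # front arrives at frame 5

spacing_px = 10
frame_rate = 2.0
dt = (first_crossing(transect_far) - first_crossing(transect_near)) / frame_rate
velocity = spacing_px / dt
print(velocity)   # 10 px over 1.0 s -> 10.0 px/s
```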

4 Results:

Having implemented the selected approach in a computing system, several interesting results were obtained pertaining to both intensity profiling and breaking wave velocity estimation. Intensity profiling is discussed first, as it is the foundation of the velocity estimation.

Intensity Profiling

Intensity profiling involves the measurement of intensity values across a transect. For this project, the data representation for the intensity profile is augmented from the profile proposed in the Approaches section, owing to the vast amount of image data that must be processed. The profile mapping is displayed in Figure 9.

Figure 9: Profile mapping for intensity data.

The red, vertical arrows represent the mapping along which intensities are measured for each frame; these lines denote approximate positions of the profile mappings. The direction of the arrow represents the motion of the waves and the origin of distance measurement: the termination end of the arrow represents pixel 183, whereas the origin of the arrow represents pixel 0. These values can be correlated to the dependent axes shown in Figure 10, which presents the resulting intensity profile evolution over the entire range of image data (55 frames). Intensities are measured across each pixel row.

Figure 10: Intensity profile data.

Each subplot of Figure 10 displays the intensity data for one profile. Since the image regions were binarized, the data is represented as points: the presence of a point represents the passing of a breaking wave across a row of the intensity profile path. For each intensity profile, the dependent axis represents the pixel number and the independent axis represents the frame number, which correlates to time. Frames were dumped at two frames per second; therefore, each tick on the x-axis corresponds to approximately one half second. From this distance-versus-time data, velocity can be estimated, as discussed in the next section.

Velocity Estimation

The velocity estimation performed in this project is implemented as a manual process requiring human intervention. The intensity profile matrices of Figure 10 were added together, and the indices of nonzero elements were recorded. The resulting data is displayed as a pixel-versus-frame plot in Figure 11.

Figure 11: Breaking wave interpretation.

To estimate the velocities, the breaking waves were identified by human inference. Waves 1, 5, and 6 are quite distinct; therefore, lines were generated from the minimum frame point to the maximum frame point. For waves 2, 3, and 4, the wave front is not as distinct; after observing the frame data, it was concluded that waves 2, 3, and 4 could still be inferred, and lines were created accordingly. Table 1 contains the distances traveled and the times of the breaking wave data.

Table 1: Breaking wave velocity data. Columns: Distance (pixels), Time (s), Velocity (pixels/s); rows: Waves 1 through 6, ordered towards the beach.
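The manual line fits of Figure 11 could be replaced by a least-squares fit: the slope of a wave's (frame, pixel) points gives velocity in pixels per frame, converted to pixels per second at the 2 fps frame rate. The point data below is illustrative, not taken from Table 1.

```python
# Sketch: automated version of the Figure 11 line fits. Ordinary
# least-squares slope of pixel position against frame number gives a
# velocity in pixels/frame; multiplying by the frame rate gives px/s.

def slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

frames = [0, 1, 2, 3, 4]    # frame numbers for one labeled wave (synthetic)
pixels = [0, 2, 4, 6, 8]    # front position advances 2 px per frame

frame_rate = 2.0
velocity = slope(frames, pixels) * frame_rate
print(velocity)   # 2 px/frame * 2 frames/s = 4.0 px/s
```

A fit over all points of a labeled wave is also less sensitive to a single noisy crossing than the two-endpoint lines drawn by hand.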

As shown by the data, the velocity ranged from 5.5 to -0.2 pixels per second, decreasing as the waves approached the beach. This decrease can be attributed to the dissipation of energy as the waves run up on the beach. A graphical representation of this data, velocity (pixels/s) versus wave label, is shown in Figure 12.

Figure 12: Breaking wave velocity plot.

As mentioned before, the velocity decreases as the waves approach the beach. This occurs in a linear fashion, as shown by the regression line plotted with the data.

Summary

After performing the image analysis, a manual approach to calculating velocity has been presented. From this approach, a decreasing trend in breaking wave velocity has been discovered as waves move toward the beach. This observation appears consistent with the dissipation of wave energy during run-up on the beach. A discussion of the results is presented in the next section to critically evaluate the quality of the selected solution approach.

5 Discussion:

The overall solution approach is evaluated to identify further processing needs that could yield better velocity estimates. The results are discussed in two sections: 1) intermediate results, including image subtraction, windowing, and filtering, and 2) primary results, including intensity profiling and velocity estimation.

Intermediate Results

Intermediate results are discussed first to maintain the logical flow of information through the image analysis solution approach. This discussion includes image subtraction, windowing, and filtering.

Image Subtraction and Windowing

Image subtraction involves subtracting successive frames of intensity data, capturing the changing data between images such as breaking waves and foam. A sample of this operation is shown in Figure 13.

Figure 13: Image subtraction.

The image subtraction works very well at capturing information that denotes a breaking wave. The subtraction results in an image that appears segmented in a fashion that would produce a good intensity profile. In many cases, this data could be used to determine local optima in the intensity profile which could denote a wave, but due to spurious foam and wave evolution this approach seems limited. The spurious foam and wave evolution cannot be easily seen in Figure 13; in many cases, this foam and dynamic wave data can produce false breaking waves or ambiguous velocity data. The next section discusses a mechanism proposed to accommodate regions with unique wave characteristics and the registration of features within those regions.

The image is windowed in three locations, as shown in Figure 14. This windowing is performed for three reasons: computational efficiency, wave behavior, and wave recognition.

Figure 14: Image region windowing.

The argument for computational efficiency is that superfluous image data need not be processed, saving time. The three regions, visible in Figure 14, are chosen for intensity variation and wave dynamics. The intensity of the breaking waves appears to decrease as the waves approach the beach, making it more difficult to distinguish a breaking wave from foam. In this implementation, filtering is used to cope with the sea foam, but a sounder approach may be an adaptive thresholding operation whereby the maximum intensity of a region is recorded and applied to that region. Finally, wave dynamics refers to the way in which the wave breaks. In region 1, breaking waves are quite easy to observe, as the large amount of foam makes feature extraction straightforward. Conversely, regions 2 and 3 do not contain the same amount of foam, resulting in a tougher extraction. Region 2 also contains a great deal of spurious foam due to the mixing of waves in this region.

Filtering and Thresholding

Filtering is performed for each region of the image data. For region 1, a series of filtering and thresholding operations is performed to extract the breaking wave feature. This process is shown in Figure 15.

Figure 15: Region 1 filtering operations.

To begin the process, an unsharp contrast enhancement filter is applied to increase the contrast between the breaking wave and the dark image background. A median filter is then applied to the contrast-enhanced image to eliminate extraneous foam. A threshold is applied to extract the feature, and finally another median filter further eliminates foam data. This series of filters produces binary information for the presence of a breaking wave. As mentioned before, a gradient-based technique could be applied to the intensity profile of the subtracted data; this approach was not selected due to the difficulty of finding the local optima of a sharply changing intensity function. For future work, a different series of filters could be applied to generate a better feature extraction in region 1.
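The region 1 pipeline can be sketched on a single image row: an unsharp enhancement sharpens the crest against the dark background, a sliding median suppresses an isolated foam spike, and a fixed threshold binarizes the front. The kernel sizes, sharpening amount, and threshold value are illustrative choices; the report's actual filters operate on full 2D regions.

```python
# Sketch of the region 1 pipeline on one image row: unsharp contrast
# enhancement, a median filter to suppress isolated foam pixels, then a
# fixed threshold to binarize the wave front.

from statistics import median

def unsharp(row, amount=1.0):
    """Sharpen by adding back the difference from a 3-sample mean blur."""
    out = row[:]
    for i in range(1, len(row) - 1):
        blurred = (row[i - 1] + row[i] + row[i + 1]) / 3
        out[i] = row[i] + amount * (row[i] - blurred)
    return out

def median3(row):
    """3-sample sliding median; endpoints pass through unchanged."""
    out = row[:]
    for i in range(1, len(row) - 1):
        out[i] = median(row[i - 1:i + 2])
    return out

row = [10, 10, 180, 200, 190, 10, 90, 10, 10]   # crest plus one foam spike
sharpened = unsharp(row)
despiked = median3(sharpened)
binary = [1 if v > 100 else 0 for v in despiked]
print(binary)   # the crest survives; the lone foam spike does not
```

The isolated foam pixel is removed by the median step while the contiguous crest remains, which is the behavior the report relies on when it follows the contrast enhancement with median filtering.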

In region 2, another series of filtering and thresholding operations is performed: median filtering, thresholding, and closing. This process is shown in Figure 16.

Figure 16: Region 2 filtering operations.

A median filter is applied to region 2 to reduce the amount of noise in the image, a thresholding operation extracts the feature, and a closing operation recovers some of the breaking wave front profile. In this region, the filtering operations are not robust at recovering the entire range of wave states due to the complexity associated with wave mixing: waves tend to disappear and merge, leaving too little information to maintain a consistent breaking wave front. For future work, an algorithm might be created to infer a breaking wave front from the line created during feature extraction.

The final image region includes the beach front. This region required a series of filtering operations including thresholding, median filtering, and majority filling. This process is displayed in Figure 17.

Figure 17: Region 3 filtering operations.

Region 3 represents an area of the image data in which the wave is impinging on the beach. The effect of decreasing water depth is more prevalent here, causing the breaking wave foam to spread and dissipate and resulting in a more subtle breaking wave profile. To extract the wave front, thresholding and median filtering are performed to extract the feature and remove noise, respectively. Finally, a majority operation enhances the breaking wave front by setting a pixel to 1 when the majority of its 3 x 3 neighborhood is 1. The final image region contains a segmented wave front; this segmentation results from the lack of recoverable image information after the filtering operations.

The filtering and thresholding operations provide a means to extract the breaking wave fronts as binarized features, but performing these operations loses some critical image information. This information could be maintained with a different solution approach, such as a gradient-based approach to the intensity profile. With such an approach, the intensity profile information could be maintained in a form similar to an image subtraction, yielding continuous information for a breaking wave. By labeling and tracking local optima, the wave velocity could then be better estimated because the information is maintained in a form closer to the original.

Primary Results

Primary results include intensity profiling and velocity estimation. Intensity profiling is discussed first, as it is required before velocity estimation can be performed.

Intensity Profiling

In this implementation, the intensity profiles are plotted as scatter diagrams to capture the change in the breaking wave data over time. This approach results in a logical representation of the data, but the data would be more interpretable if the waves were labeled. Labeling the waves would create an unambiguous definition of the data that could be easily implemented in a computing system, and would also avoid the human interpretation error that could affect the validity of the results.

Velocity Estimation

For velocity estimation, an automated method would be preferred over the implemented method of manual estimation.
Manual estimation was chosen to provide a general interpretation of the data so that the validity of the results can be tested against future field data. If the results correlate, an automated method could then be adopted to generate a more accurate set of velocity estimates. Automating velocity estimation would also increase the data processing capability of the algorithm by several orders of magnitude. An automated method would be straightforward to implement once the waves are robustly labeled across image frames: the velocity is simply the distance the wave travels in the image divided by the elapsed number of frames (or time). After the velocity is approximated in pixels, a rectification algorithm could transform image coordinates into real-world coordinates, yielding a true estimate of velocity.
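Once a wave front is robustly labeled across frames, the automated estimate described above reduces to displacement over elapsed time. The following is a minimal sketch in Python (the project's deliverables were Matlab functions, which are not reproduced here); the track format, the frame rate, and the pixels-per-meter scale factor standing in for a full rectification step are all assumptions for illustration.

```python
def estimate_velocity(track, frame_rate, pixels_per_meter=None):
    """Speed of one labeled wave front from its first and last tracked positions.

    track: list of (frame_index, position_px) pairs for a single wave front.
    frame_rate: camera frame rate in frames per second.
    """
    (f0, p0), (f1, p1) = track[0], track[-1]
    elapsed = (f1 - f0) / frame_rate      # seconds between first and last frame
    v_px = (p1 - p0) / elapsed            # pixels per second in the image plane
    if pixels_per_meter is None:
        return v_px
    return v_px / pixels_per_meter        # crude scaling in place of rectification
```

A front tracked from pixel row 100 to row 50 over 10 frames at 5 frames per second, for example, moved 50 pixels shoreward in 2 seconds, i.e. 25 pixels per second.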

Conclusions and Future Work:

In this project, a literature review of current approaches relevant to breaking wave velocity estimation was presented. From that review, an approach for estimating breaking wave velocity from a single remote sensing device was proposed. The approach has six phases: 1) reading images, 2) subtracting images, 3) windowing and filtering, 4) transect application, 5) intensity profiling, and 6) velocity estimation. In this implementation, the velocity is interpreted manually, and the results show a linear decrease in velocity as the breaking waves approach the beach. From these results, the following future work is proposed to better approximate breaking wave velocity:

Adaptive thresholding: an operation could be used to automatically determine the maximum intensity value in a windowed region and thereby identify the appropriate threshold for each frame.

Different filtering process: the filtering process applied to the subtracted image data could be changed to remove more spurious noise while maintaining the breaking wave front.

Bridging breaks in wave front foam: for a binarized image, a technique could be developed to bridge gaps in the breaking wave front, creating a continuous, more distinct front.

Gradient-based intensity profile technique: the local optima of an intensity profile of the subtracted images could be tracked to calculate velocity.

Labeling waves: by labeling waves, using either a level set formulation with an energy function or gradient-based intensity profile tracking, the velocity could be easily interpreted.

Automated velocity estimation: an automatic velocity estimation system based on the gradient-based approach or on wave labeling would scale up the amount of data that can be processed and provide a means to determine breaking wave velocity more accurately.
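Several of the refinements proposed above (per-frame adaptive thresholding, noise filtering, bridging breaks with a closing, and the majority rule used in region 3) could be combined into a single per-frame extraction step along the following lines. This is an illustrative Python sketch, not the project's Matlab code; the threshold fraction and the 3 x 3 structuring element are assumed tuning choices, and the majority rule here keeps a pixel when at least 5 of the 9 pixels in its neighborhood are set.

```python
import numpy as np
from scipy import ndimage

def extract_wave_front(frame_region, fraction=0.6):
    """Extract a binarized wave front from one windowed region of a subtracted frame."""
    # Median filter suppresses speckle noise in the subtracted image.
    smoothed = ndimage.median_filter(frame_region, size=3)
    # Adaptive threshold: a fixed fraction of this frame's maximum intensity,
    # so the threshold adapts frame by frame as lighting changes.
    binary = smoothed >= fraction * smoothed.max()
    # Closing bridges small breaks in the binarized wave-front foam.
    bridged = ndimage.binary_closing(binary, structure=np.ones((3, 3)))
    # Majority fill: keep a pixel when at least 5 of the 9 pixels in its
    # 3 x 3 neighborhood are set, consolidating the front.
    counts = ndimage.convolve(bridged.astype(int), np.ones((3, 3), dtype=int),
                              mode='constant')
    return counts >= 5
```

Applied to a region containing a bright horizontal band of foam against a dark background, the function returns a boolean mask covering the interior of the band while rejecting the empty background.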

References

Chan, T. F., Vese, L. A. (2001). A level set algorithm for minimizing the Mumford-Shah functional in image processing. Workshop on Variational and Level Set Methods in Computer Vision.

Cremers, D. (2003). A Variational Framework for Image Segmentation Combining Motion Estimation and Shape Regularization. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Madison, WI.

Doretto, G., Cremers, D., Favaro, P., Soatto, S. (2003). Dynamic Texture Segmentation. Proceedings of the Ninth IEEE International Conference on Computer Vision (ICCV'03), Nice, France, IEEE Computer Society.

Haas, K. A., Cambazoglu, M. K. (2006). Video Observations of Longshore Currents, Myrtle Beach, South Carolina. Proceedings of the 30th International Conference on Coastal Engineering, San Diego, CA.

Haller, M. C., Lyzenga, D. R. (2003). "Comparison of Radar and Video Observations of Shallow Water Breaking Waves." IEEE Transactions on Geoscience and Remote Sensing 41(4):

Appendix A - Timeline

ME6406 Project Schedule (weekly Gantt grid, M/W/F columns, spanning 15-Oct through 3-Dec; chart not reproduced here)

Literature Review: search for and locate appropriate literature; create literature notes; write a literature review from the notes. Outputs: appropriate literature, literature notes, literature review.

Approaches Section: identify approaches from the literature; write an explanation of the various approaches; identify the principal set of approaches to be used; document the step-by-step procedure of the approach; deliver the approaches section. Outputs: explanation of approaches, principal approaches, procedure of approach, approaches section.

First Executable Code: create code for wave crest recognition, labeling, and tracking; deliver an executable code. Outputs: code for wave crest recognition, code for wave crest labeling, code for wave crest tracking, executable code.

Final Project Executable Code and Report: test, tweak, and finalize the code; assemble assignments and outline the report; write the report; proofread the report. Outputs: report outline, final report rough draft, proofread and adjusted rough draft, final report.

Appendix B - Deliverables

Literature Review (October 30th): relevant literature; complete literature review.

Approaches Section (November 13th): various approaches from the literature; principal approach/approaches undertaken; step-by-step procedure for the approach/approaches undertaken.

First Executable Code (November 27th): complete set of Matlab functions; executable Matlab script with functions; image data for the project; readme file for operating the code.

Final Project Executable Code (December 7th): updated set of Matlab functions and executable script; image data; updated readme file for operating the code.

Final Report (December 7th): literature review; approaches section; procedural guide for the chosen approach; results; discussion of results; conclusions from the discussion.


Perception. Autonomous Mobile Robots. Sensors Vision Uncertainties, Line extraction from laser scans. Autonomous Systems Lab. Zürich. Autonomous Mobile Robots Localization "Position" Global Map Cognition Environment Model Local Map Path Perception Real World Environment Motion Control Perception Sensors Vision Uncertainties, Line extraction

More information

Auto-Digitizer for Fast Graph-to-Data Conversion

Auto-Digitizer for Fast Graph-to-Data Conversion Auto-Digitizer for Fast Graph-to-Data Conversion EE 368 Final Project Report, Winter 2018 Deepti Sanjay Mahajan dmahaj@stanford.edu Sarah Pao Radzihovsky sradzi13@stanford.edu Ching-Hua (Fiona) Wang chwang9@stanford.edu

More information

(Refer Slide Time: 0:51)

(Refer Slide Time: 0:51) Introduction to Remote Sensing Dr. Arun K Saraf Department of Earth Sciences Indian Institute of Technology Roorkee Lecture 16 Image Classification Techniques Hello everyone welcome to 16th lecture in

More information

AN EFFICIENT BINARIZATION TECHNIQUE FOR FINGERPRINT IMAGES S. B. SRIDEVI M.Tech., Department of ECE

AN EFFICIENT BINARIZATION TECHNIQUE FOR FINGERPRINT IMAGES S. B. SRIDEVI M.Tech., Department of ECE AN EFFICIENT BINARIZATION TECHNIQUE FOR FINGERPRINT IMAGES S. B. SRIDEVI M.Tech., Department of ECE sbsridevi89@gmail.com 287 ABSTRACT Fingerprint identification is the most prominent method of biometric

More information

Introduction to Medical Imaging (5XSA0) Module 5

Introduction to Medical Imaging (5XSA0) Module 5 Introduction to Medical Imaging (5XSA0) Module 5 Segmentation Jungong Han, Dirk Farin, Sveta Zinger ( s.zinger@tue.nl ) 1 Outline Introduction Color Segmentation region-growing region-merging watershed

More information

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor COSC160: Detection and Classification Jeremy Bolton, PhD Assistant Teaching Professor Outline I. Problem I. Strategies II. Features for training III. Using spatial information? IV. Reducing dimensionality

More information

Motion Detection Algorithm

Motion Detection Algorithm Volume 1, No. 12, February 2013 ISSN 2278-1080 The International Journal of Computer Science & Applications (TIJCSA) RESEARCH PAPER Available Online at http://www.journalofcomputerscience.com/ Motion Detection

More information

Practice Exam Sample Solutions

Practice Exam Sample Solutions CS 675 Computer Vision Instructor: Marc Pomplun Practice Exam Sample Solutions Note that in the actual exam, no calculators, no books, and no notes allowed. Question 1: out of points Question 2: out of

More information

Short Survey on Static Hand Gesture Recognition

Short Survey on Static Hand Gesture Recognition Short Survey on Static Hand Gesture Recognition Huu-Hung Huynh University of Science and Technology The University of Danang, Vietnam Duc-Hoang Vo University of Science and Technology The University of

More information

CS4442/9542b Artificial Intelligence II prof. Olga Veksler

CS4442/9542b Artificial Intelligence II prof. Olga Veksler CS4442/9542b Artificial Intelligence II prof. Olga Veksler Lecture 8 Computer Vision Introduction, Filtering Some slides from: D. Jacobs, D. Lowe, S. Seitz, A.Efros, X. Li, R. Fergus, J. Hayes, S. Lazebnik,

More information

Histogram and watershed based segmentation of color images

Histogram and watershed based segmentation of color images Histogram and watershed based segmentation of color images O. Lezoray H. Cardot LUSAC EA 2607 IUT Saint-Lô, 120 rue de l'exode, 50000 Saint-Lô, FRANCE Abstract A novel method for color image segmentation

More information

Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision

Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision What Happened Last Time? Human 3D perception (3D cinema) Computational stereo Intuitive explanation of what is meant by disparity Stereo matching

More information

Lab 9. Julia Janicki. Introduction

Lab 9. Julia Janicki. Introduction Lab 9 Julia Janicki Introduction My goal for this project is to map a general land cover in the area of Alexandria in Egypt using supervised classification, specifically the Maximum Likelihood and Support

More information

A Direct Simulation-Based Study of Radiance in a Dynamic Ocean

A Direct Simulation-Based Study of Radiance in a Dynamic Ocean A Direct Simulation-Based Study of Radiance in a Dynamic Ocean Lian Shen Department of Civil Engineering Johns Hopkins University Baltimore, MD 21218 phone: (410) 516-5033 fax: (410) 516-7473 email: LianShen@jhu.edu

More information

ECG782: Multidimensional Digital Signal Processing

ECG782: Multidimensional Digital Signal Processing Professor Brendan Morris, SEB 3216, brendan.morris@unlv.edu ECG782: Multidimensional Digital Signal Processing Spring 2014 TTh 14:30-15:45 CBC C313 Lecture 10 Segmentation 14/02/27 http://www.ee.unlv.edu/~b1morris/ecg782/

More information

Mobile Human Detection Systems based on Sliding Windows Approach-A Review

Mobile Human Detection Systems based on Sliding Windows Approach-A Review Mobile Human Detection Systems based on Sliding Windows Approach-A Review Seminar: Mobile Human detection systems Njieutcheu Tassi cedrique Rovile Department of Computer Engineering University of Heidelberg

More information

Visualization and Analysis of Inverse Kinematics Algorithms Using Performance Metric Maps

Visualization and Analysis of Inverse Kinematics Algorithms Using Performance Metric Maps Visualization and Analysis of Inverse Kinematics Algorithms Using Performance Metric Maps Oliver Cardwell, Ramakrishnan Mukundan Department of Computer Science and Software Engineering University of Canterbury

More information

CHAPTER 3 IMAGE ENHANCEMENT IN THE SPATIAL DOMAIN

CHAPTER 3 IMAGE ENHANCEMENT IN THE SPATIAL DOMAIN CHAPTER 3 IMAGE ENHANCEMENT IN THE SPATIAL DOMAIN CHAPTER 3: IMAGE ENHANCEMENT IN THE SPATIAL DOMAIN Principal objective: to process an image so that the result is more suitable than the original image

More information

Using Edge Detection in Machine Vision Gauging Applications

Using Edge Detection in Machine Vision Gauging Applications Application Note 125 Using Edge Detection in Machine Vision Gauging Applications John Hanks Introduction This application note introduces common edge-detection software strategies for applications such

More information

International Journal of Computer Engineering and Applications, Volume XII, Issue I, Jan. 18, ISSN

International Journal of Computer Engineering and Applications, Volume XII, Issue I, Jan. 18,   ISSN International Journal of Computer Engineering and Applications, Volume XII, Issue I, Jan. 18, www.ijcea.com ISSN 2321-3469 SURVEY ON OBJECT TRACKING IN REAL TIME EMBEDDED SYSTEM USING IMAGE PROCESSING

More information

Aircraft Tracking Based on KLT Feature Tracker and Image Modeling

Aircraft Tracking Based on KLT Feature Tracker and Image Modeling Aircraft Tracking Based on KLT Feature Tracker and Image Modeling Khawar Ali, Shoab A. Khan, and Usman Akram Computer Engineering Department, College of Electrical & Mechanical Engineering, National University

More information

Computer Vision with MATLAB MATLAB Expo 2012 Steve Kuznicki

Computer Vision with MATLAB MATLAB Expo 2012 Steve Kuznicki Computer Vision with MATLAB MATLAB Expo 2012 Steve Kuznicki 2011 The MathWorks, Inc. 1 Today s Topics Introduction Computer Vision Feature-based registration Automatic image registration Object recognition/rotation

More information

Depth. Common Classification Tasks. Example: AlexNet. Another Example: Inception. Another Example: Inception. Depth

Depth. Common Classification Tasks. Example: AlexNet. Another Example: Inception. Another Example: Inception. Depth Common Classification Tasks Recognition of individual objects/faces Analyze object-specific features (e.g., key points) Train with images from different viewing angles Recognition of object classes Analyze

More information