Blue Sky Detection for Picture Quality Enhancement
Bahman Zafarifar 2,3 and Peter H. N. de With 1,2

1 Eindhoven University of Technology, PO Box 513, 5600 MB Eindhoven, The Netherlands, {B.Zafarifar, P.H.N.de.With}@tue.nl
2 LogicaCMG, PO Box 7089, 5600 JB Eindhoven, The Netherlands
3 Philips Innovative Applications (CE), Pathoekeweg 11, 8000 Bruges, Belgium

Abstract. Content analysis of video and still images is attractive for multiple reasons, such as enabling content-based actions and image manipulation. This paper presents a new algorithm and feature model for blue-sky detection. The algorithm classifies sky areas by computing a pixel-accurate sky probability. Such a probabilistic measure matches well with the requirements of typical video enhancement functions in TVs. The algorithm enables not only content-adaptive picture quality improvement, but also more advanced applications such as content-based annotation of, and retrieval from, image and video databases. Compared to existing algorithms, our proposal shows considerable improvements in the correct detection/rejection rate of sky areas, and an improved consistency of the segmentation results.

1 Introduction

Sky is among the objects of high visual importance, appearing often in video sequences and photos. A sky-detection system can be used for different applications. At the semantic level, sky detection can contribute to image understanding, e.g. for indoor/outdoor classification or automatic detection of image orientation. At this level, applications of sky detection include content-based actions such as image and video selection and retrieval from databases, or object-based video coding. At the pixel level, sky detection can be used for content-based image manipulation, like picture quality improvement using color enhancement and noise reduction, or as background detection for 3D depth-map generation. Content-adaptive processing in general, and sky detection in particular, can be used in high-end televisions.
Modern TVs employ a variety of signal-processing algorithms for improving the quality of the received video signal. The settings of these processing blocks are often globally constant or adapted to some local pictorial features, like color or the existence of edges in the direct neighborhood. Such features are often too simple to deal with the diverse contents of video sequences, leading to a sub-optimal picture quality as compared to a system that locally adapts the processing to the content of the image. The above-mentioned local adaptation can be realized if the image is analyzed by a number of object
detectors, after which areas of similar appearance are segmented and processed with algorithms optimized to the features of each area [1]. Because sky regions have a smooth appearance, noise and other artifacts are clearly visible in them. This motivates using appropriate image enhancement techniques specifically in the sky regions. The existence of special circuits in high-end TVs for improving the color in the range of sky-blue also illustrates the subjective importance of sky. Our objective is to develop a sky-detection algorithm suitable for image enhancement of video sequences. This implies that the detection must be pixel-accurate and consistent, and allow for real-time embedded implementation. Previous work on sky detection includes a system [2][3] based on calculating an initial sky-belief map using color values and a Neural Network, followed by connected-area extraction. These areas may be accepted or rejected using texture and vertical color analysis, and the degree of fitting to a two-dimensional (2D) spatial model. While this method yields useful results in annotating sky regions, we found it unsuitable for the spatial-consistency requirements of video applications. The algorithm takes crisp classification decisions per connected area, leading to abrupt changes in the classification result. As an example, patches of sky may be rejected when their size reduces during a camera zoom-out. A second system, proposed in [4][5], is based on the assumption that sky regions are smooth and are normally found at the top of the image. Using predefined settings, an initial sky probability is calculated based on color, texture and vertical position, after which the settings are adapted to regions with higher initial sky probability. These adapted settings are used for calculating a final sky probability. The employed pixel-oriented technique (as opposed to the connected-area approach of the first system) makes this system suitable for video applications.
However, due to its simple color modeling, this method often leads to false detections, such as accepting non-sky blue objects as sky, and false rejections, like a partial rejection of sky regions when they cover a large range in the color space. We propose an algorithm that builds upon the above-mentioned second system and exploits its suitability for video applications, while considerably improving the false detection/rejection rates. The proposed sky detector is confined to blue-sky regions, which include both clear blue sky and blue sky containing clouds. Experimental simulations of our new proposal indicate a substantial improvement in the correct detection of sky regions covering a large color range, and in the correct rejection of non-sky objects, when compared to the algorithm proposed in [4][5], as well as an improved spatial consistency with respect to the system described in [2][3]. (In this paper, color denotes all color components. When a distinction between chromaticity and gray values is required, we use the terms luminance and chrominance.)
The remainder of the paper is organized as follows. Section 2 characterizes the sky features, Section 3 describes the proposed algorithm, Section 4 presents the results and Section 5 concludes the paper.

Fig. 1. Various appearances of sky, from left to right: dark, light, large color range, occluded.

2 Observation of Sky Properties

In this section, we discuss the features of sky images and address the challenges of modeling the sky. Sky can have a variety of appearances, such as clear sky, cloudy sky, and overcast sky (see Fig. 1). Sky color can cover a large part of the color space, from saturated blue to gray, or even orange and red during sunset. Consequently, a system based on temporally fixed color settings is likely to fail in correctly detecting different sky appearances. In addition, sky regions can vary significantly in color within an image: a wide-shot clear-sky image tends to be more saturated at the top and less saturated near the horizon, while the luminance tends to increase from the top of the image towards the horizon. As a result, a sky detector using a spatially fixed color is likely to reject parts of the sky region when the sky color changes considerably within one image. An additional challenge is the partial occlusion of sky by foreground objects, cutting the sky into many disconnected parts. In order to prevent artifacts in the post-processed video, it is important that all sky areas are assigned coherent probabilities. Another non-trivial task is distinguishing between sky and objects which look similar to sky but are actually not a part of it. Examples are areas of water, reflections of sky, or other objects with color and texture similar to sky. In the following section, we propose a system that addresses the aforementioned issues.
3 Algorithm Description

3.1 Sky Detector Overview

We propose a sky-detection system based on the observation that blue-sky regions are more likely to be found at the top of the image, cover a certain part of the color space, have a smooth texture, and show limited horizontal and vertical gradients in their pixel values. The algorithm contains three stages, as depicted in Fig. 2.

Fig. 2. Block diagram of the sky detector, divided into three stages.

Stage 1: Initial sky probability. In this stage, an initial sky-probability map is calculated based on the color, vertical position and texture of the image pixels. The texture analysis also includes horizontal and vertical gradient measures. The settings of this stage are fixed, and are chosen such that all targeted sky appearances can be captured.

Stage 2: Analysis and sky-model creation. In this stage, the fixed settings of the first stage are adapted to the image under process. As such, the settings for the vertical-position probability and the expected sky color are adapted to the areas with high sky probability. For the expected color, a spatially-varying 2D model is created that prescribes the sky color for each image position.

Stage 3: Final sky probability. In this stage, a pixel-accurate sky probability is calculated based on the color, vertical position and (optionally) texture of the image, using the adaptive model created in Stage 2.
With respect to implementation, we have adopted the YUV color space, because the sky chrominance components in the vertical direction of the image tend to traverse linearly in the UV plane from saturated blue through gray to red. In order to reduce the amount of computations, the image is down-scaled to QCIF resolution for usage in Stages 1 and 2. However, Stage 3 uses the image at the original resolution in order to produce pixel-accurate results. Sections 3.2, 3.3 and 3.4 describe the three stages of the algorithm in more detail.
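As an illustration of this organization, the sketch below down-scales one channel to QCIF resolution (176×144) with a crude box filter. The paper does not specify its down-scaler, so the function here is our own assumption, not the authors' implementation:

```python
import numpy as np

def downscale(channel, target_h=144, target_w=176):
    """Box-average a single channel to a QCIF-sized grid (144 rows x 176 cols).

    Stand-in for the paper's unspecified down-scaler: each output pixel is
    the mean of a rectangular block of input pixels.
    """
    h, w = channel.shape
    rows = np.linspace(0, h, target_h + 1).astype(int)
    cols = np.linspace(0, w, target_w + 1).astype(int)
    out = np.empty((target_h, target_w), dtype=float)
    for i in range(target_h):
        for j in range(target_w):
            out[i, j] = channel[rows[i]:rows[i + 1],
                                cols[j]:cols[j + 1]].mean()
    return out
```

Stages 1 and 2 would then operate on `downscale(Y)`, `downscale(U)` and `downscale(V)`, while Stage 3 revisits the full-resolution channels.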
3.2 Initial Sky Probability

Using predefined settings, an initial sky probability P_sky,initial is calculated on a down-scaled version of the image. We combine color, vertical position, and texture as

  P_sky,initial = P_color · P_position · P_texture.

1. The color probability is calculated using a three-dimensional Gaussian function of the Y, U and V components, centered at predetermined positions Y_0, U_0 and V_0 (representing the expected sky color), with corresponding standard deviations σ_y1, σ_u1 and σ_v1. The settings are chosen such that all desired sky appearances are captured. The color probability is defined as

  P_color = exp( −[ ((Y − Y_0)/σ_y1)^2 + ((U − U_0)/σ_u1)^2 + ((V − V_0)/σ_v1)^2 ] ).

2. The vertical-position probability is defined by a Gaussian function which has its center at the top of the image, starting with unity value and decreasing to 0.36 at the bottom of the image:

  P_position = exp( −(r/height)^2 ),

where r is the vertical coordinate of the current pixel (at the top of the image r = 0) and height denotes the total number of rows (i.e. TV lines) of the image.

3. The calculation of the texture probability is based on a multi-resolution analysis of the luminance channel of the image. The analysis assigns low probabilities to parts of the image containing high luminance variation, or excessive horizontal or vertical gradients. This probability can be used to eliminate the textured areas from the initial sky probability. More specifically, three downscaled (by factors of 2) versions of the luminance channel are analyzed using a fixed window size (of 5×5 pixels), and the results are combined at the lowest resolution using the minimum operator. The texture analysis uses the following two measures.

SAD: The local smoothness of the image can be measured by the luminance variation.
Using the Sum of Absolute Differences (SAD) between horizontally-adjacent and vertically-adjacent pixels in the analysis window, we calculate the luminance variation in the surroundings of the current pixel. The horizontal and vertical SAD (SAD_hor and SAD_ver) lead to a probabilistic measure P_SAD as follows:

  SAD_hor(r,c) = (1/N_SAD) Σ_{i=−w..w} Σ_{j=−w..w−1} |Y(r+i, c+j) − Y(r+i, c+j+1)|,
  SAD_ver(r,c) = (1/N_SAD) Σ_{i=−w..w−1} Σ_{j=−w..w} |Y(r+i, c+j) − Y(r+i+1, c+j)|,
  P_SAD = exp( −([SAD_hor + SAD_ver − T_SAD]_0)^2 ).

Here, r and c are the coordinates of the pixel in the image, w defines the size of the analysis window (window size = 2w + 1), and i and j are indices within the window. The factor 1/N_SAD normalizes the SAD to the total number of pixel differences within the window (N_SAD = (2w+1) · 2w), and T_SAD is a noise-dependent threshold level. The symbol [·]_a^b denotes a clipping function defined as [f]_a^b = Min(Max(f, a), b); a single lower index, as in [f]_0, clips only from below at 0.

Gradient: we observe that luminance values of sky regions have limited horizontal and vertical gradients, and that the luminance often increases in the top-down direction. We define the vertical gradient grad_ver as the difference between the sum of pixel values in the upper half of the analysis window and the sum of pixel values in the lower half. The horizontal gradient grad_hor is defined similarly, using the pixels of the left half and the right half of the analysis window. For pixel coordinate (r, c) this leads to

  grad_hor(r,c) = (1/N_grad) [ Σ_{i=−w..w} Σ_{j=−w..−1} Y(r+i, c+j) − Σ_{i=−w..w} Σ_{j=1..w} Y(r+i, c+j) ],
  grad_ver(r,c) = (1/N_grad) [ Σ_{i=−w..−1} Σ_{j=−w..w} Y(r+i, c+j) − Σ_{i=1..w} Σ_{j=−w..w} Y(r+i, c+j) ],

where the factor 1/N_grad normalizes the gradient to the size of the window (N_grad = w · (2w + 1)). Using appropriate threshold levels, the horizontal and vertical gradients are translated to a probability P_grad, calculated as

  P_grad = exp( −([T_vl − grad_ver]_0 + [grad_ver − T_vu]_0 + [|grad_hor| − T_h]_0)^2 ),

where T_vl and T_vu are the threshold levels for the lower and upper bounds of the vertical gradient, respectively, and T_h is the threshold level for the horizontal gradient. These thresholds are fixed values, determined from a set of training images. Using separate thresholds for the upper and lower bounds in the vertical direction allows an increase, and penalizes a decrease, of the luminance in the downward image direction.
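The gradient measure can be sketched as follows. The window size and the threshold values below are illustrative placeholders, not the trained values from the paper, and the edge-replicating border handling is our own choice:

```python
import numpy as np

def gradient_probability(y, w=2, t_vl=-2.0, t_vu=4.0, t_h=2.0):
    """Half-window gradient measure mapped to a probability.

    grad_h: left half of the window minus right half (normalized).
    grad_v: upper half minus lower half (normalized).
    Excursions beyond the (placeholder) thresholds are penalized.
    """
    h, wd = y.shape
    n = w * (2 * w + 1)                      # pixels per half-window
    grad_h = np.zeros((h, wd), dtype=float)
    grad_v = np.zeros((h, wd), dtype=float)
    p = np.pad(y.astype(float), w, mode='edge')
    for di in range(-w, w + 1):              # left half minus right half
        for dj in range(-w, 0):
            grad_h += (p[w+di:w+di+h, w+dj:w+dj+wd]
                       - p[w+di:w+di+h, w-dj:w-dj+wd])
    for di in range(-w, 0):                  # upper half minus lower half
        for dj in range(-w, w + 1):
            grad_v += (p[w+di:w+di+h, w+dj:w+dj+wd]
                       - p[w-di:w-di+h, w+dj:w+dj+wd])
    grad_h /= n
    grad_v /= n
    penalty = (np.maximum(t_vl - grad_v, 0.0)        # [T_vl - grad_ver]_0
               + np.maximum(grad_v - t_vu, 0.0)      # [grad_ver - T_vu]_0
               + np.maximum(np.abs(grad_h) - t_h, 0.0))  # [|grad_hor| - T_h]_0
    return np.exp(-penalty ** 2)
```

A flat luminance patch yields probability 1 everywhere, while a strong horizontal ramp drives the probability towards 0, matching the intended suppression of strongly graded regions.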
Finally, the texture probability P_texture combines P_SAD and P_grad as

  P_texture = P_SAD · P_grad.
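A minimal sketch of the Stage-1 combination is given below. The color center, standard deviations and SAD threshold are illustrative placeholders (the paper does not publish its trained settings); the gradient term and the multi-resolution analysis are omitted for brevity, and a square SAD window approximates the paper's exact index ranges:

```python
import numpy as np

def color_probability(y, u, v, y0=140.0, u0=150.0, v0=110.0,
                      sy=40.0, su=20.0, sv=20.0):
    """3D Gaussian color probability; (y0, u0, v0) is a placeholder
    sky-blue center in 8-bit YUV, not the paper's trained setting."""
    return np.exp(-(((y - y0) / sy) ** 2
                    + ((u - u0) / su) ** 2
                    + ((v - v0) / sv) ** 2))

def position_probability(height, width):
    """Gaussian in the row coordinate: 1.0 at the top row, ~0.36 at the bottom."""
    r = np.arange(height, dtype=float).reshape(-1, 1)
    return np.broadcast_to(np.exp(-(r / height) ** 2), (height, width)).copy()

def box_sum(a, w):
    """Sum of `a` over a (2w+1) x (2w+1) neighbourhood (zero-padded borders)."""
    p = np.pad(a, w)
    out = np.zeros(a.shape, dtype=float)
    for di in range(2 * w + 1):
        for dj in range(2 * w + 1):
            out += p[di:di + a.shape[0], dj:dj + a.shape[1]]
    return out

def sad_probability(y, w=2, t_sad=20.0):
    """SAD-based texture probability; t_sad is a placeholder threshold."""
    dh = np.abs(np.diff(y, axis=1))      # horizontal neighbour differences
    dv = np.abs(np.diff(y, axis=0))      # vertical neighbour differences
    n_sad = (2 * w + 1) * 2 * w
    sad_h = box_sum(np.pad(dh, ((0, 0), (0, 1))), w) / n_sad
    sad_v = box_sum(np.pad(dv, ((0, 1), (0, 0))), w) / n_sad
    return np.exp(-np.maximum(sad_h + sad_v - t_sad, 0.0) ** 2)

def initial_sky_probability(y, u, v):
    """P_sky,initial = P_color * P_position * P_texture
    (gradient term and multi-resolution analysis omitted here)."""
    return (color_probability(y, u, v)
            * position_probability(*y.shape)
            * sad_probability(y))
```

For a flat patch at the placeholder sky color, the probability is 1 at the top row and decays smoothly towards the bottom, as the position Gaussian prescribes.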
3.3 Analysis and Sky-Model Creation

In this stage, the initial sky probability (calculated in Stage 1) is analyzed in order to create the adaptive models for color and vertical position used in the final sky-probability calculation. This involves the following steps.

1. Calculating an adaptive threshold level and a global sky-confidence metric: the initial sky probability needs to be segmented in order to create a map of regions with high probability. Simple measures for threshold determination, such as using the maximum of the sky-probability map as proposed in [5], can perform inadequately, for example by favoring small objects with high sky probability over larger, non-perfect sky regions. In order to avoid this problem, we propose a more robust method that takes both the size and the probability of sky regions into account, by computing an adaptive threshold and a global sky-confidence metric Q_sky. The confidence metric yields a high value if the image contains a significant number of pixels with high initial sky probability. This prevents small sky-blue objects from being accepted as sky in images where no large areas with high sky probability are present. The calculation steps are as follows: first the Cumulative Distribution Function (CDF) of the initial sky probability is computed, after which it is weighted using a function that emphasizes the higher sky-probability values and decreases to zero towards the lower sky-probability values. Due to this weighting, the position of the maximum of the resulting function (the weighted CDF) reflects our preference for higher probability values, while remaining dependent on the distribution of the initial sky-probability values. Therefore, this position can be used to determine the desired adaptive threshold. The maximum amplitude of the weighted CDF depends on the number of pixels with relatively high sky probability, and thus can be used for determining the aforementioned confidence metric Q_sky.

2. Adaptive vertical position: the areas with high sky probability are segmented by thresholding the initial sky-probability map with the threshold level described in the previous paragraph, after which the mean vertical position of the segmented areas is computed. This adaptive vertical position is used to define a function which equals unity at the top of the image and linearly decreases towards the bottom of the segmented sky region. This function is then used for computing the final sky probability.

3. Adaptive expected sky color: as mentioned in Section 2, the sky detector needs to deal with the wide range of sky color values within and between different frames. In [5], it is proposed to use frame-adaptive, but otherwise spatially-constant, expected colors. This method addresses the problem of large color variation between frames, but fails when the sky covers a considerable color range within one frame, resulting in a partial rejection of the sky areas. To address this problem, we propose to use a spatially-adaptive expected sky color. To this end, each signal component (Y, U, and V) is modeled by a spatially-varying 2D function that is fitted to a selected set of pixels with high sky probability. An example of the model-fitting technique is as follows. Using a proper adaptive threshold, the initial sky probability is segmented to select sky regions with
high sky probability. Next, the segmented pixels are selected with a decreasing density in the top-down direction. This exploits our assumption that the pixels at the top are more important for model fitting than those near the bottom, and ensures that the model parameters are less influenced by reflections of sky or other non-sky blue objects below the actual sky region. The last step is to use the values of the (Y, U, V) signal components of these selected pixels to fit the 2D function of the corresponding signal component. The choice of the color model and the fitting strategy of the 2D functions depend on the required accuracy and the permitted computational complexity. We implemented (1) a 2D second-degree polynomial, in combination with a least-squares optimization for estimating the model parameters, and (2) a model which uses a matrix of values, per color component, for representing the image color [6]. The second-degree polynomial model offers sufficient spatial flexibility to represent typical sky colors, but is computationally expensive (the results presented in this paper use this model). The second model also offers the necessary flexibility, and is in addition more suitable for hardware implementation.

3.4 Final Sky Probability

Using the adaptive model created in Stage 2, we compute a pixel-accurate final sky-probability map as

  P_sky,final = P_color2 · P_position2 · P_texture2 · Q_sky,

where Q_sky denotes the sky-confidence metric. The required pixel accuracy is achieved by using the original image resolution, and by applying a moderate texture measure to prevent distortion in the final sky-probability map near the edges of non-sky objects. The following paragraphs further describe the features applied in this stage.
1. The color probability is calculated using a 3D Gaussian function of the Y, U and V components, centered at the spatially-varying values Y_0(r,c), U_0(r,c) and V_0(r,c) (representing the expected sky color at spatial position (r,c)), with corresponding standard deviations σ_y2, σ_u2 and σ_v2. In order to reduce false detections, these standard deviations are reduced with respect to the values of Stage 1.

2. As opposed to the fixed vertical-position function used for the initial sky probability, the final stage uses an adaptive vertical probability function, which is tuned to cover the sky areas with high sky probability, as calculated in Stage 2.

3. The inclusion and the type of texture measure depend on the application for which the sky-detection output is used. For some applications, using a texture measure in the final sky-probability calculation could lead to undesirable effects in the post-processed image, while other applications may require some form of texture measure. For example, for noise removal in the sky regions, we found it necessary to reduce the sky probability of pixels around the edges of objects adjacent to sky, in order to retain the edge sharpness. This was done by taking the Sobel edge detector as the texture measure.
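The spatially-varying color model underlying the final color probability can be sketched as a second-degree 2D polynomial fitted by least squares. The decreasing-density pixel selection is replaced here by a plain boolean mask, and all names are our own assumptions:

```python
import numpy as np

def fit_sky_color_model(channel, sky_mask):
    """Least-squares fit of
    c(r, x) = a0 + a1*r + a2*x + a3*r^2 + a4*r*x + a5*x^2
    to the channel values of the pixels flagged in sky_mask."""
    r, x = np.nonzero(sky_mask)
    r = r.astype(float)
    x = x.astype(float)
    A = np.column_stack([np.ones_like(r), r, x, r ** 2, r * x, x ** 2])
    coeffs, *_ = np.linalg.lstsq(A, channel[sky_mask], rcond=None)
    return coeffs

def evaluate_model(coeffs, shape):
    """Expected channel value at every pixel position of the image."""
    r, x = np.indices(shape, dtype=float)
    a0, a1, a2, a3, a4, a5 = coeffs
    return a0 + a1 * r + a2 * x + a3 * r ** 2 + a4 * r * x + a5 * x ** 2
```

Fitting one such model per Y, U and V channel gives the spatially-varying centers Y_0(r,c), U_0(r,c), V_0(r,c); the final color probability then compares each pixel against `evaluate_model(...)` with the reduced standard deviations of Stage 3.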
Fig. 3. Examples of improved correct detection; left: input, middle: result of [4][5], right: our algorithm.

Fig. 4. Examples of improved correct rejection; left: input, middle: result of [4][5], right: our algorithm.

Fig. 5. Examples of improved spatial accuracy; left: input, middle: result of [2][3] (courtesy of Eastman Kodak), right: our algorithm.
4 Experimental Results

We applied the proposed algorithm to more than 200 sky images. The images were selected to present a large variety of sky appearances, many including sky reflections and other challenging situations. Figure 3 compares our results to [4][5]. In Fig. 3-top, the halo (top-middle) results from the spatially-constant color model used in [4][5], while the spatially-adaptive color model employed in our algorithm is capable of dealing with the large color range of the sky area (top-right). A similar difference in the results can be seen in Fig. 3-bottom, where, in addition, the reflection of the sky is removed by the gradient analysis. Figure 4 shows the improved correct rejection of non-sky objects (areas of water in Fig. 4-top and mountains in Fig. 4-bottom), which has been achieved by the multi-resolution texture analysis. Lastly, Fig. 5 shows the greatly improved spatial accuracy of our results in comparison to [2][3]. This is due to the two-pass approach for calculating the sky probability, in which the second pass uses the original image resolution and a moderate texture measure. When compared to [4][5], our experiments indicate a substantial improvement in the correct detection of sky regions covering a large color range, due to the spatially-varying color model, and an improved correct rejection of non-sky objects, due to the multi-resolution texture analysis. When compared to [2][3], we observed an improved spatial consistency of the segmentation results. Here, a notable improvement in the correct detection was found in 16 out of 23 images for which a side-by-side visual comparison was made. In the remaining cases, our proposal performed comparably to the existing system. In many of these cases we still prefer our proposal, as it is based on a smooth probability measure, whereas the existing system produces crisp results; crisp decisions are more problematic for video applications in the case of false detections.
An experiment with a video sequence indicated that the spatial consistency also improves the temporal behavior of the system. More algorithmic tuning and experiments will have to be conducted to validate this conjecture. A simplified version of the proposed algorithm is currently being implemented as a real-time embedded system, using FPGA technology. Preliminary mapping results indicate that a real-time implementation is feasible on a standard FPGA device.

5 Conclusions

Sky detection for video sequences and still images can be used for various purposes, such as automatic image manipulation (e.g. picture quality improvement) and content-based directives (e.g. interactive selection and retrieval from multimedia databases). The main problems with the existing algorithms are incomplete detection of sky areas with large ranges of color, false detection of sky reflections or other blue objects, and inconsistent detection of small sky areas. This paper has presented a sky-detection algorithm which significantly reduces these problems, and which has suitable properties for video applications. This was
achieved by constructing a sky model that incorporates a 2D spatially-varying color model, while reusing the vertical-position probability from an existing method. Moreover, we have introduced a confidence metric for improving the consistency and for removing small blue objects. False detection of reflections of sky areas and other non-sky objects has been reduced by employing a gradient analysis of the luminance component of the sky. Experimental results show that the proposed algorithm is capable of handling a broad range of sky appearances. The two primary advantages of the proposed algorithm are increased correct detection/rejection rates, and improved spatial accuracy and consistency of the detection results. Our future work includes developing additional measures for meeting the requirements of real-time video applications. In particular, the key parameters of the system, such as the vertical-position model, the color model, and the confidence metric, need to be kept consistent over time. Furthermore, the algorithm will be optimized for implementation in consumer television systems.

6 Acknowledgement

The authors gratefully acknowledge Dr. Erwin Bellers for his specific input on existing algorithms. We are also thankful to Dr. Jiebo Luo for providing us with the results of the sky-detection algorithm described in [2][3] on a number of sample images.

References

1. S. Herman and J. Janssen, System and method for performing segmentation-based enhancements of a video image, European Patent EP , date of publication: January .
2. A.C. Gallagher, J. Luo, and W. Hao, Improved blue sky detection using polynomial model fit, in IEEE International Conference on Image Processing, October 2004, pp. .
3. J. Luo and S. Etz, Method for detecting sky in images, European Patent EP , date of publication: February .
4. S. Herman and E. Bellers, Locally-adaptive processing of television images based on real-time image segmentation, in IEEE International Conference on Consumer Electronics, June 2002, pp. .
5. S. Herman and E. Bellers, Adaptive segmentation of television images, European Patent EP , date of publication: September .
6. P.H.N. de With and B. Zafarifar, Adaptive modeling of sky for video processing and coding applications, in 27th Symposium on Information Theory in the Benelux, June 2006, pp. .
More informationCORRELATION BASED CAR NUMBER PLATE EXTRACTION SYSTEM
CORRELATION BASED CAR NUMBER PLATE EXTRACTION SYSTEM 1 PHYO THET KHIN, 2 LAI LAI WIN KYI 1,2 Department of Information Technology, Mandalay Technological University The Republic of the Union of Myanmar
More informationImproved Seam Carving for Video Retargeting. By Erik Jorgensen, Margaret Murphy, and Aziza Saulebay
Improved Seam Carving for Video Retargeting By Erik Jorgensen, Margaret Murphy, and Aziza Saulebay CS 534 Fall 2015 Professor Dyer December 21, 2015 Table of Contents 1. Abstract.....3 2. Introduction.......3
More informationSearching Video Collections:Part I
Searching Video Collections:Part I Introduction to Multimedia Information Retrieval Multimedia Representation Visual Features (Still Images and Image Sequences) Color Texture Shape Edges Objects, Motion
More informationAdditional Material (electronic only)
Additional Material (electronic only) This additional material contains a presentation of additional capabilities of the system, a discussion of performance and temporal coherence as well as other limitations.
More informationAutomatic Video Caption Detection and Extraction in the DCT Compressed Domain
Automatic Video Caption Detection and Extraction in the DCT Compressed Domain Chin-Fu Tsao 1, Yu-Hao Chen 1, Jin-Hau Kuo 1, Chia-wei Lin 1, and Ja-Ling Wu 1,2 1 Communication and Multimedia Laboratory,
More informationColor Characterization and Calibration of an External Display
Color Characterization and Calibration of an External Display Andrew Crocker, Austin Martin, Jon Sandness Department of Math, Statistics, and Computer Science St. Olaf College 1500 St. Olaf Avenue, Northfield,
More informationFPGA IMPLEMENTATION FOR REAL TIME SOBEL EDGE DETECTOR BLOCK USING 3-LINE BUFFERS
FPGA IMPLEMENTATION FOR REAL TIME SOBEL EDGE DETECTOR BLOCK USING 3-LINE BUFFERS 1 RONNIE O. SERFA JUAN, 2 CHAN SU PARK, 3 HI SEOK KIM, 4 HYEONG WOO CHA 1,2,3,4 CheongJu University E-maul: 1 engr_serfs@yahoo.com,
More informationLecture 10: Semantic Segmentation and Clustering
Lecture 10: Semantic Segmentation and Clustering Vineet Kosaraju, Davy Ragland, Adrien Truong, Effie Nehoran, Maneekwan Toyungyernsub Department of Computer Science Stanford University Stanford, CA 94305
More informationDrywall state detection in image data for automatic indoor progress monitoring C. Kropp, C. Koch and M. König
Drywall state detection in image data for automatic indoor progress monitoring C. Kropp, C. Koch and M. König Chair for Computing in Engineering, Department of Civil and Environmental Engineering, Ruhr-Universität
More informationVideo Aesthetic Quality Assessment by Temporal Integration of Photo- and Motion-Based Features. Wei-Ta Chu
1 Video Aesthetic Quality Assessment by Temporal Integration of Photo- and Motion-Based Features Wei-Ta Chu H.-H. Yeh, C.-Y. Yang, M.-S. Lee, and C.-S. Chen, Video Aesthetic Quality Assessment by Temporal
More informationGENERAL AUTOMATED FLAW DETECTION SCHEME FOR NDE X-RAY IMAGES
GENERAL AUTOMATED FLAW DETECTION SCHEME FOR NDE X-RAY IMAGES Karl W. Ulmer and John P. Basart Center for Nondestructive Evaluation Department of Electrical and Computer Engineering Iowa State University
More informationSUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS
SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS Cognitive Robotics Original: David G. Lowe, 004 Summary: Coen van Leeuwen, s1460919 Abstract: This article presents a method to extract
More informationSegmentation and Grouping
Segmentation and Grouping How and what do we see? Fundamental Problems ' Focus of attention, or grouping ' What subsets of pixels do we consider as possible objects? ' All connected subsets? ' Representation
More informationExperiments with Edge Detection using One-dimensional Surface Fitting
Experiments with Edge Detection using One-dimensional Surface Fitting Gabor Terei, Jorge Luis Nunes e Silva Brito The Ohio State University, Department of Geodetic Science and Surveying 1958 Neil Avenue,
More informationIEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, VOL. 6, NO. 5, SEPTEMBER
IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, VOL. 6, NO. 5, SEPTEMBER 2012 411 Consistent Stereo-Assisted Absolute Phase Unwrapping Methods for Structured Light Systems Ricardo R. Garcia, Student
More informationAutomatic Segmentation of Moving Objects in Video Sequences: A Region Labeling Approach
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 12, NO. 7, JULY 2002 597 Automatic Segmentation of Moving Objects in Video Sequences: A Region Labeling Approach Yaakov Tsaig and Amir
More informationEECS490: Digital Image Processing. Lecture #19
Lecture #19 Shading and texture analysis using morphology Gray scale reconstruction Basic image segmentation: edges v. regions Point and line locators, edge types and noise Edge operators: LoG, DoG, Canny
More information(Refer Slide Time 00:17) Welcome to the course on Digital Image Processing. (Refer Slide Time 00:22)
Digital Image Processing Prof. P. K. Biswas Department of Electronics and Electrical Communications Engineering Indian Institute of Technology, Kharagpur Module Number 01 Lecture Number 02 Application
More informationLearning and Inferring Depth from Monocular Images. Jiyan Pan April 1, 2009
Learning and Inferring Depth from Monocular Images Jiyan Pan April 1, 2009 Traditional ways of inferring depth Binocular disparity Structure from motion Defocus Given a single monocular image, how to infer
More informationAn Efficient Single Chord-based Accumulation Technique (SCA) to Detect More Reliable Corners
An Efficient Single Chord-based Accumulation Technique (SCA) to Detect More Reliable Corners Mohammad Asiful Hossain, Abdul Kawsar Tushar, and Shofiullah Babor Computer Science and Engineering Department,
More informationFundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision
Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision What Happened Last Time? Human 3D perception (3D cinema) Computational stereo Intuitive explanation of what is meant by disparity Stereo matching
More informationAn Efficient Fully Unsupervised Video Object Segmentation Scheme Using an Adaptive Neural-Network Classifier Architecture
616 IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 14, NO. 3, MAY 2003 An Efficient Fully Unsupervised Video Object Segmentation Scheme Using an Adaptive Neural-Network Classifier Architecture Anastasios Doulamis,
More informationCritique: Efficient Iris Recognition by Characterizing Key Local Variations
Critique: Efficient Iris Recognition by Characterizing Key Local Variations Authors: L. Ma, T. Tan, Y. Wang, D. Zhang Published: IEEE Transactions on Image Processing, Vol. 13, No. 6 Critique By: Christopher
More informationContext based optimal shape coding
IEEE Signal Processing Society 1999 Workshop on Multimedia Signal Processing September 13-15, 1999, Copenhagen, Denmark Electronic Proceedings 1999 IEEE Context based optimal shape coding Gerry Melnikov,
More informationMAXIMIZING BANDWIDTH EFFICIENCY
MAXIMIZING BANDWIDTH EFFICIENCY Benefits of Mezzanine Encoding Rev PA1 Ericsson AB 2016 1 (19) 1 Motivation 1.1 Consumption of Available Bandwidth Pressure on available fiber bandwidth continues to outpace
More informationRegion-based Segmentation and Object Detection
Region-based Segmentation and Object Detection Stephen Gould Tianshi Gao Daphne Koller Presented at NIPS 2009 Discussion and Slides by Eric Wang April 23, 2010 Outline Introduction Model Overview Model
More informationComputer Vision 2. SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung. Computer Vision 2 Dr. Benjamin Guthier
Computer Vision 2 SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung Computer Vision 2 Dr. Benjamin Guthier 1. IMAGE PROCESSING Computer Vision 2 Dr. Benjamin Guthier Content of this Chapter Non-linear
More informationDetecting motion by means of 2D and 3D information
Detecting motion by means of 2D and 3D information Federico Tombari Stefano Mattoccia Luigi Di Stefano Fabio Tonelli Department of Electronics Computer Science and Systems (DEIS) Viale Risorgimento 2,
More informationBiometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong)
Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong) References: [1] http://homepages.inf.ed.ac.uk/rbf/hipr2/index.htm [2] http://www.cs.wisc.edu/~dyer/cs540/notes/vision.html
More informationEE368 Project: Visual Code Marker Detection
EE368 Project: Visual Code Marker Detection Kahye Song Group Number: 42 Email: kahye@stanford.edu Abstract A visual marker detection algorithm has been implemented and tested with twelve training images.
More informationLine Segment Based Watershed Segmentation
Line Segment Based Watershed Segmentation Johan De Bock 1 and Wilfried Philips Dep. TELIN/TW07, Ghent University Sint-Pietersnieuwstraat 41, B-9000 Ghent, Belgium jdebock@telin.ugent.be Abstract. In this
More informationMultimedia Technology CHAPTER 4. Video and Animation
CHAPTER 4 Video and Animation - Both video and animation give us a sense of motion. They exploit some properties of human eye s ability of viewing pictures. - Motion video is the element of multimedia
More informationMotion Estimation for Video Coding Standards
Motion Estimation for Video Coding Standards Prof. Ja-Ling Wu Department of Computer Science and Information Engineering National Taiwan University Introduction of Motion Estimation The goal of video compression
More informationCONTENT ADAPTIVE SCREEN IMAGE SCALING
CONTENT ADAPTIVE SCREEN IMAGE SCALING Yao Zhai (*), Qifei Wang, Yan Lu, Shipeng Li University of Science and Technology of China, Hefei, Anhui, 37, China Microsoft Research, Beijing, 8, China ABSTRACT
More informationEvaluation of Moving Object Tracking Techniques for Video Surveillance Applications
International Journal of Current Engineering and Technology E-ISSN 2277 4106, P-ISSN 2347 5161 2015INPRESSCO, All Rights Reserved Available at http://inpressco.com/category/ijcet Research Article Evaluation
More informationDECONFLICTION AND SURFACE GENERATION FROM BATHYMETRY DATA USING LR B- SPLINES
DECONFLICTION AND SURFACE GENERATION FROM BATHYMETRY DATA USING LR B- SPLINES IQMULUS WORKSHOP BERGEN, SEPTEMBER 21, 2016 Vibeke Skytt, SINTEF Jennifer Herbert, HR Wallingford The research leading to these
More informationIntroduction to Video Compression
Insight, Analysis, and Advice on Signal Processing Technology Introduction to Video Compression Jeff Bier Berkeley Design Technology, Inc. info@bdti.com http://www.bdti.com Outline Motivation and scope
More informationCS231A Course Project Final Report Sign Language Recognition with Unsupervised Feature Learning
CS231A Course Project Final Report Sign Language Recognition with Unsupervised Feature Learning Justin Chen Stanford University justinkchen@stanford.edu Abstract This paper focuses on experimenting with
More informationIRIS SEGMENTATION OF NON-IDEAL IMAGES
IRIS SEGMENTATION OF NON-IDEAL IMAGES William S. Weld St. Lawrence University Computer Science Department Canton, NY 13617 Xiaojun Qi, Ph.D Utah State University Computer Science Department Logan, UT 84322
More informationImage Compression for Mobile Devices using Prediction and Direct Coding Approach
Image Compression for Mobile Devices using Prediction and Direct Coding Approach Joshua Rajah Devadason M.E. scholar, CIT Coimbatore, India Mr. T. Ramraj Assistant Professor, CIT Coimbatore, India Abstract
More informationEdge detection. Stefano Ferrari. Università degli Studi di Milano Elaborazione delle immagini (Image processing I)
Edge detection Stefano Ferrari Università degli Studi di Milano stefano.ferrari@unimi.it Elaborazione delle immagini (Image processing I) academic year 2011 2012 Image segmentation Several image processing
More informationImage Segmentation. Selim Aksoy. Bilkent University
Image Segmentation Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr Examples of grouping in vision [http://poseidon.csd.auth.gr/lab_research/latest/imgs/s peakdepvidindex_img2.jpg]
More informationImage Segmentation. Selim Aksoy. Bilkent University
Image Segmentation Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr Examples of grouping in vision [http://poseidon.csd.auth.gr/lab_research/latest/imgs/s peakdepvidindex_img2.jpg]
More informationRobust color segmentation algorithms in illumination variation conditions
286 CHINESE OPTICS LETTERS / Vol. 8, No. / March 10, 2010 Robust color segmentation algorithms in illumination variation conditions Jinhui Lan ( ) and Kai Shen ( Department of Measurement and Control Technologies,
More informationAn Image Based Approach to Compute Object Distance
An Image Based Approach to Compute Object Distance Ashfaqur Rahman * Department of Computer Science, American International University Bangladesh Dhaka 1213, Bangladesh Abdus Salam, Mahfuzul Islam, and
More informationError-Diffusion Robust to Mis-Registration in Multi-Pass Printing
Error-Diffusion Robust to Mis-Registration in Multi-Pass Printing Zhigang Fan, Gaurav Sharma, and Shen-ge Wang Xerox Corporation Webster, New York Abstract Error-diffusion and its variants are commonly
More informationColor Local Texture Features Based Face Recognition
Color Local Texture Features Based Face Recognition Priyanka V. Bankar Department of Electronics and Communication Engineering SKN Sinhgad College of Engineering, Korti, Pandharpur, Maharashtra, India
More informationSpatial Adaptive Filter for Object Boundary Identification in an Image
Advances in Computational Sciences and Technology ISSN 0973-6107 Volume 9, Number 1 (2016) pp. 1-10 Research India Publications http://www.ripublication.com Spatial Adaptive Filter for Object Boundary
More informationVideo De-interlacing with Scene Change Detection Based on 3D Wavelet Transform
Video De-interlacing with Scene Change Detection Based on 3D Wavelet Transform M. Nancy Regina 1, S. Caroline 2 PG Scholar, ECE, St. Xavier s Catholic College of Engineering, Nagercoil, India 1 Assistant
More informationTime Stamp Detection and Recognition in Video Frames
Time Stamp Detection and Recognition in Video Frames Nongluk Covavisaruch and Chetsada Saengpanit Department of Computer Engineering, Chulalongkorn University, Bangkok 10330, Thailand E-mail: nongluk.c@chula.ac.th
More informationA Feature Point Matching Based Approach for Video Objects Segmentation
A Feature Point Matching Based Approach for Video Objects Segmentation Yan Zhang, Zhong Zhou, Wei Wu State Key Laboratory of Virtual Reality Technology and Systems, Beijing, P.R. China School of Computer
More informationMoving Object Segmentation Method Based on Motion Information Classification by X-means and Spatial Region Segmentation
IJCSNS International Journal of Computer Science and Network Security, VOL.13 No.11, November 2013 1 Moving Object Segmentation Method Based on Motion Information Classification by X-means and Spatial
More informationWATERMARKING FOR LIGHT FIELD RENDERING 1
ATERMARKING FOR LIGHT FIELD RENDERING 1 Alper Koz, Cevahir Çığla and A. Aydın Alatan Department of Electrical and Electronics Engineering, METU Balgat, 06531, Ankara, TURKEY. e-mail: koz@metu.edu.tr, cevahir@eee.metu.edu.tr,
More informationPresented at the FIG Congress 2018, May 6-11, 2018 in Istanbul, Turkey
Presented at the FIG Congress 2018, May 6-11, 2018 in Istanbul, Turkey Evangelos MALTEZOS, Charalabos IOANNIDIS, Anastasios DOULAMIS and Nikolaos DOULAMIS Laboratory of Photogrammetry, School of Rural
More informationAnno accademico 2006/2007. Davide Migliore
Robotica Anno accademico 6/7 Davide Migliore migliore@elet.polimi.it Today What is a feature? Some useful information The world of features: Detectors Edges detection Corners/Points detection Descriptors?!?!?
More informationUNIVERSITY OF DUBLIN TRINITY COLLEGE
UNIVERSITY OF DUBLIN TRINITY COLLEGE FACULTY OF ENGINEERING, MATHEMATICS & SCIENCE SCHOOL OF ENGINEERING Electronic and Electrical Engineering Senior Sophister Trinity Term, 2010 Engineering Annual Examinations
More informationCorrecting User Guided Image Segmentation
Correcting User Guided Image Segmentation Garrett Bernstein (gsb29) Karen Ho (ksh33) Advanced Machine Learning: CS 6780 Abstract We tackle the problem of segmenting an image into planes given user input.
More informationFace Detection and Recognition in an Image Sequence using Eigenedginess
Face Detection and Recognition in an Image Sequence using Eigenedginess B S Venkatesh, S Palanivel and B Yegnanarayana Department of Computer Science and Engineering. Indian Institute of Technology, Madras
More informationDATA and signal modeling for images and video sequences. Region-Based Representations of Image and Video: Segmentation Tools for Multimedia Services
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 9, NO. 8, DECEMBER 1999 1147 Region-Based Representations of Image and Video: Segmentation Tools for Multimedia Services P. Salembier,
More informationChapter 3: Intensity Transformations and Spatial Filtering
Chapter 3: Intensity Transformations and Spatial Filtering 3.1 Background 3.2 Some basic intensity transformation functions 3.3 Histogram processing 3.4 Fundamentals of spatial filtering 3.5 Smoothing
More informationMulti-View Image Coding in 3-D Space Based on 3-D Reconstruction
Multi-View Image Coding in 3-D Space Based on 3-D Reconstruction Yongying Gao and Hayder Radha Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48823 email:
More informationCOSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor
COSC160: Detection and Classification Jeremy Bolton, PhD Assistant Teaching Professor Outline I. Problem I. Strategies II. Features for training III. Using spatial information? IV. Reducing dimensionality
More informationKeywords: Thresholding, Morphological operations, Image filtering, Adaptive histogram equalization, Ceramic tile.
Volume 3, Issue 7, July 2013 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Blobs and Cracks
More informationA Vision System for Monitoring Intermodal Freight Trains
A Vision System for Monitoring Intermodal Freight Trains Avinash Kumar, Narendra Ahuja, John M Hart Dept. of Electrical and Computer Engineering University of Illinois,Urbana-Champaign Urbana, Illinois
More informationGraph Matching Iris Image Blocks with Local Binary Pattern
Graph Matching Iris Image Blocs with Local Binary Pattern Zhenan Sun, Tieniu Tan, and Xianchao Qiu Center for Biometrics and Security Research, National Laboratory of Pattern Recognition, Institute of
More informationApplying Synthetic Images to Learning Grasping Orientation from Single Monocular Images
Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images 1 Introduction - Steve Chuang and Eric Shan - Determining object orientation in images is a well-established topic
More informationClassification of Protein Crystallization Imagery
Classification of Protein Crystallization Imagery Xiaoqing Zhu, Shaohua Sun, Samuel Cheng Stanford University Marshall Bern Palo Alto Research Center September 2004, EMBC 04 Outline Background X-ray crystallography
More informationFog Detection System Based on Computer Vision Techniques
Fog Detection System Based on Computer Vision Techniques S. Bronte, L. M. Bergasa, P. F. Alcantarilla Department of Electronics University of Alcalá Alcalá de Henares, Spain sebastian.bronte, bergasa,
More informationReal-time Detection of Illegally Parked Vehicles Using 1-D Transformation
Real-time Detection of Illegally Parked Vehicles Using 1-D Transformation Jong Taek Lee, M. S. Ryoo, Matthew Riley, and J. K. Aggarwal Computer & Vision Research Center Dept. of Electrical & Computer Engineering,
More informationMultimedia Systems Video II (Video Coding) Mahdi Amiri April 2012 Sharif University of Technology
Course Presentation Multimedia Systems Video II (Video Coding) Mahdi Amiri April 2012 Sharif University of Technology Video Coding Correlation in Video Sequence Spatial correlation Similar pixels seem
More informationCS 664 Segmentation. Daniel Huttenlocher
CS 664 Segmentation Daniel Huttenlocher Grouping Perceptual Organization Structural relationships between tokens Parallelism, symmetry, alignment Similarity of token properties Often strong psychophysical
More informationClassification. Vladimir Curic. Centre for Image Analysis Swedish University of Agricultural Sciences Uppsala University
Classification Vladimir Curic Centre for Image Analysis Swedish University of Agricultural Sciences Uppsala University Outline An overview on classification Basics of classification How to choose appropriate
More information