Joint Key-frame Extraction and Object-based Video Segmentation


Xiaomu Song, School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK 74078, USA, xiaomu.song@okstate.edu
Guoliang Fan, School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK 74078, USA, guoliang.fan@okstate.edu

This work was supported by the National Science Foundation (NSF) under Grant IIS-0347613 (CAREER).

Abstract

In this paper, we propose a coherent framework for joint key-frame extraction and object-based video segmentation. Conventional key-frame extraction and object segmentation are usually implemented independently and separately because they operate at different semantic levels, which ignores the inherent relationship between key-frames and objects. The proposed method extracts a small number of key-frames within a shot so that the divergence between video objects in a feature space is maximized, supporting robust and efficient object segmentation. The method exploits the advantages of both temporal and object-based video segmentation, and it helps build a unified framework for content-based analysis and structured video representation. Theoretical analysis and simulation results on both synthetic and real video sequences demonstrate the efficiency and robustness of the proposed method.

1. Introduction

Video segmentation is a fundamental step towards structured video representation, which supports the interpretability and manipulability of visual data. Depending on the semantic level, video segmentation falls into two categories: temporal and object-based video segmentation. A video sequence comprises a group of video shots, and a video shot is an unbroken sequence of frames captured from one perspective. Temporal video segmentation partitions a video sequence into a set of shots, and some key-frames are extracted to represent each shot. In this work, we only consider key-frame extraction within one shot, which can be carried out by a clustering process based on similarity measurements [11, 22] or by statistical modeling [10]. Extracted key-frames provide a compact representation for video indexing and browsing, but they cannot support content-based video analysis at a higher semantic level [5]. Object-based video segmentation extracts objects for content-based analysis and provides a structured representation for many object-oriented video applications. Current object-based video segmentation methods can be classified into three types: segmentation with spatial priority, segmentation with temporal priority, and joint spatial and temporal segmentation [17]. More recent interest lies in joint spatial and temporal video segmentation [3, 6, 8, 9, 21], owing to the nature of human vision, which recognizes salient video structures jointly in the spatial and temporal domains [7]. Hence, both spatial and temporal pixel-wise features are extracted to construct a multi-dimensional feature space for object segmentation. Compared with key-frame extraction methods that use frame-wise features, e.g., the color histogram, these approaches are usually more computationally expensive. Because of the different semantic levels, key-frame extraction and object segmentation are usually implemented independently and separately. The work in [5] presents a universal framework where key-frame extraction and object segmentation independently support content-based video analysis at different semantic levels, and their results can only be unified via a high-level description.
In order to make content analysis and representation more efficient, comprehensive, and flexible, it is helpful to exploit the inherent relationship between key-frame extraction and object segmentation. In addition, the new MPEG-7 standard provides a generic segment-based representation model for video data [16], and both key-frame extraction and object segmentation could be grouped into a unified paradigm, where video key-frames are extracted to support efficient and robust object segmentation and to facilitate the construction of the universal description scheme suggested in [5]. Recently, we proposed a combined key-frame extraction and object-based video segmentation method [15], in which the extracted key-frames are used to estimate statistical models for model-based object segmentation, and the object segmentation results are used in turn to refine the initially extracted key-frames.

That approach significantly reduces the model estimation time compared with [8, 9] and provides more representative key-frames. However, the relationship between key-frame extraction and object segmentation is still not explicit: it is not shown how key-frame extraction affects object segmentation. In addition, some predefined, data-dependent thresholds are needed that influence the final results. In this work, we attempt to make the relationship between key-frame extraction and object segmentation explicit, and we propose a coherent framework for their joint solution. The key point is to treat key-frame extraction as a feature selection process; the maximum average interclass Kullback-Leibler distance (MAIKLD) criterion is used together with an efficient key-frame extraction method. Compared with [15], the proposed method provides an explicit relationship between key-frame extraction and object segmentation.

2. Unified Feature Space

Video key-frame extraction and object segmentation are usually based on different feature subsets, so a unified feature subset is necessary for joint key-frame extraction and object-based video segmentation. This feature subset should contain both spatial and temporal features that are easy to extract. In this work, we use the pixel-wise 7-D feature vector suggested in [15], comprising the YUV color features, the x-y spatial location, the time T, and the intensity change over time, which provides additional motion information.

The underlying idea comes from feature selection in pattern recognition. Given a candidate feature set X = {x_i | i = 1, 2, ..., n}, where i is the feature index, feature selection aims at selecting a subset X̂ = {x_i | i = 1, 2, ..., m}, m < n, from X so that an objective function F(X̂) related to the classification performance is optimized:

    X̂ = arg max_{Z ⊂ X} F(Z).   (1)

Generally, the goal of feature selection is to reduce the feature dimension. In this work, we apply feature selection to extract video key-frames rather than to reduce the feature dimension. According to [1], the video frames within a shot represent a spatially and temporally continuous action and share common visual and often semantically related characteristics, which results in tremendous redundancy. Since a video shot should be characterized both spatially and temporally, a set of key-frames can be enough to model the object behavior in the shot. Moreover, by extracting a set of representative key-frames that supports a salient and condensed object representation in the feature space, we obtain a compact video representation and efficient object segmentation simultaneously. Thus, the issue is how to find a set of key-frames that facilitates object segmentation.

Figure 1. Unified feature space: the N video frames of a shot, the video objects, and the key-frames are related through a common feature space.

For example, in Fig. 1, a video shot of N frames contains three objects. Outliers, including noise and insignificant objects that may appear at random, usually cause the feature-space clusters of the major objects to overlap. Therefore, key-frame extraction can be treated as a feature selection process in which key-frames are extracted to minimize the feature-space overlap among the three objects. One frequently used feature selection criterion is to maximize the cluster divergence in the feature space; below, we discuss such a criterion and its application to joint key-frame extraction and object-based video segmentation.
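For concreteness, the following is a minimal sketch, not the authors' code, of how such a 7-D pixel-wise feature vector might be assembled with NumPy; the function name and the assumption that the shot is available as an array of YUV frames are illustrative.

```python
import numpy as np

def pixel_features(frames_yuv):
    """Build the 7-D pixel-wise features (Y, U, V, x, y, t, dI) of a shot.

    frames_yuv: array of shape (T, H, W, 3) holding YUV frames.
    Returns an array of shape (T, H, W, 7); the last channel is the
    frame-to-frame intensity (Y) change used as a simple motion cue.
    """
    T, H, W, _ = frames_yuv.shape
    y, x = np.mgrid[0:H, 0:W].astype(np.float64)
    feats = np.empty((T, H, W, 7), dtype=np.float64)
    feats[..., 0:3] = frames_yuv                    # Y, U, V color features
    feats[..., 3] = x                               # horizontal location
    feats[..., 4] = y                               # vertical location
    feats[..., 5] = np.arange(T)[:, None, None]     # time index
    dI = np.zeros((T, H, W))
    dI[1:] = frames_yuv[1:, ..., 0] - frames_yuv[:-1, ..., 0]
    feats[..., 6] = dI                              # intensity change over time
    return feats
```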
3. Proposed Method

3.1. Maximum Average Interclass Kullback-Leibler Distance

The Kullback-Leibler distance (KLD) measures the distance between two probability density functions [14]. In this section, we discuss how to apply a KLD-based feature selection method to jointly extract key-frames and objects. A frequently used criterion is to minimize the KLD between the true density and the density estimated from feature subsets. Nevertheless, this approach aims at minimizing the approximation error rather than extracting the most discriminative feature subsets. Although it is often hoped that this criterion also leads to good discrimination among classes, the assumption is not always valid [18]. For the purpose of robust classification, a divergence-based feature selection criterion is preferable [18]. Given two probability densities f_i(x) and f_j(x), the KLD between them is defined as:

    KL(f_i, f_j) = ∫ f_i(x) ln [f_i(x) / f_j(x)] dx.   (2)

The KLD is not a symmetric distance measurement; it is symmetrized by adding KL(f_i, f_j) and KL(f_j, f_i) together:

    D(f_i, f_j) = KL(f_i, f_j) + KL(f_j, f_i).   (3)
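Since the video objects are later modeled with Gaussian mixtures, it is worth noting that (2) and (3) admit a closed form for Gaussian densities. The sketch below computes that closed form; it is an illustrative aid, not part of the original paper.

```python
import numpy as np

def kl_gaussian(mu0, cov0, mu1, cov1):
    """Closed-form KL(f0 || f1) for two multivariate Gaussians, cf. eq. (2)."""
    d = mu0.shape[0]
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - d
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def symmetric_kld(mu0, cov0, mu1, cov1):
    """Symmetrized divergence of eq. (3): KL(f0||f1) + KL(f1||f0)."""
    return (kl_gaussian(mu0, cov0, mu1, cov1)
            + kl_gaussian(mu1, cov1, mu0, cov0))
```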

The KLD is often used as a divergence measurement between clusters in the feature space: ideally, the larger the KLD, the better the separability between clusters. If there are M clusters, the average interclass KLD (AIKLD) is defined as:

    D̄ = (1/C) Σ_{i=1}^{M} Σ_{j>i} D(f_i, f_j),   (4)

where C = M(M-1)/2. Conventional approaches that reduce the feature dimension based on the maximum AIKLD (MAIKLD) usually have D̄_0 ≤ D̄, where D̄_0 is the AIKLD of the clusters in the reduced feature space. As mentioned before, key-frame extraction is formulated here as a feature selection process, and we want to extract the set of key-frames for which the average pairwise cluster divergence is maximized. Let X be the original video shot with N frames and M objects, represented as a set of frames X = {x_i, 1 ≤ i ≤ N} with cardinality |X| = N, and let Z be any subset of X with cardinality |Z| ≤ N. The objective function is defined as:

    X̂ = arg max_{Z ⊆ X, |Z| ≤ N} D̄_Z,   (5)

where X̂ is the subset of X that is optimal in the sense of MAIKLD, and D̄_Z is the AIKLD of the M objects within Z in the 7-D feature space. We may have D̄_X̂ ≥ D̄_X because some frames contain the aforementioned outliers, which deteriorate the cluster separability and decrease D̄_X; removing those noisy frames mitigates the cluster overlap problem. According to [2], MAIKLD is optimal in the sense of minimum Bayes error, and if we assign a zero-one cost to the classification, this leads to a maximum a posteriori (MAP) estimation. Therefore, an optimal solution of (5) yields an optimal subset of key-frames that minimizes the error probability of video object segmentation. Nevertheless, finding an optimal solution is not easy, especially when N is large, so a suboptimal but computationally efficient solution may be preferred.

3.2. Key-Frame Extraction

Feature selection methods have been well studied, and some very good reviews can be found in [12, 13]. It is well known that exhaustive search guarantees the optimality of the selected subset with respect to the objective function. Nevertheless, exhaustive search is computationally expensive and impractical for large candidate sets: for a video shot X of N frames, it would have to examine 2^N possible frame subsets. Various suboptimal approaches have been suggested, and among them a deterministic method called Sequential Forward Floating Selection (SFFS) shows good performance [19]; when N is not very large, SFFS can even provide optimal solutions. For simplicity, we do not begin with all N frames of X but apply the method in [11, 15] to extract N_1 ≤ N initial key-frames, which are usually redundant. In the following, we call these initially extracted key-frames key-frame candidates. Based on the N_1 key-frame candidates, a Gaussian mixture model (GMM) is used to model the video objects coherently in the unified feature space. The iterative expectation-maximization (EM) algorithm [4] is applied together with the minimum description length (MDL) model selection criterion [20]. After the model estimation, the objects in all key-frame candidates are segmented out using the maximum likelihood (ML) criterion.
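As a rough illustration of this model estimation step, the sketch below fits GMMs by EM and selects the number of components with the BIC score, used here as a stand-in for the MDL criterion of [20] (the two coincide up to constants); scikit-learn and the function name are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_object_model(features, max_components=8, seed=0):
    """Fit a GMM to the unified pixel features of the key-frame candidates
    via EM, keeping the model order with the lowest BIC score.

    features: (n_pixels, 7) array of unified pixel-wise features.
    """
    best_model, best_score = None, np.inf
    for k in range(1, max_components + 1):
        gmm = GaussianMixture(n_components=k, covariance_type="full",
                              random_state=seed).fit(features)
        score = gmm.bic(features)          # lower BIC = better trade-off
        if score < best_score:
            best_model, best_score = gmm, score
    return best_model

# ML segmentation: each pixel is assigned to its most likely component, e.g.
# labels = fit_object_model(feats).predict(feats) for feats of shape (n, 7).
```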
Then the proposed key-frame extraction algorithm is performed as follows, where SFFS is initialized by sequential forward selection (SFS):

(1) Start with an empty set X̂ (no key-frame); let n denote the cardinality of X̂, i.e., n = |X̂|, with n = 0 initially.
(2) Based on the MAIKLD criterion, first use SFS to generate a combination of two key-frame candidates, so that |X̂| = 2.
(3) Search for the key-frame candidate that maximizes the AIKLD when |X̂| = n + 1, add it to X̂, and let n = n + 1.
(4) If n > 2, remove one key-frame candidate from X̂, compute the AIKLD based on the key-frame candidates remaining in X̂, and go to (5); otherwise go to (3).
(5) Determine whether the AIKLD increases after removing the selected key-frame candidate. If it does, let n = n - 1 and go to (4); otherwise go to (3).

The algorithm stops when n reaches a prescribed number or when the number of SFFS iterations reaches a given limit (e.g., 10). A sketch of this selection loop is given at the end of this section.

The proposed segmentation method has several significant advantages: (1) since the model estimation is based on a small number of key-frames, it is computationally efficient compared with methods that use all frames [8]; (2) an optimal or near-optimal set of key-frames that maximizes the AIKLD can be extracted for robust object segmentation, and these key-frames are more representative than those extracted by our previous method [15]; (3) the algorithm is flexible, with no significant data-dependent thresholds. This work develops a unified framework for key-frame extraction and object segmentation, which supports more coherent content-based analysis and structured video representation.
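To make the add-one / conditionally-remove-one loop concrete, here is a compact sketch of SFFS selection driven by the AIKLD of eq. (4). For clarity it models each object with a single Gaussian (the paper uses GMMs), reuses symmetric_kld from the sketch in Sec. 3.1, and assumes an object_pixels layout in which every object appears in every candidate frame; the names and stopping constants are illustrative, not the authors' exact procedure.

```python
import itertools
import numpy as np

def aikld(frame_ids, object_pixels):
    """Average interclass KLD (eq. 4), one Gaussian per object.

    object_pixels: dict mapping object id -> {frame id: (n_i, 7) array}.
    """
    stats = []
    for per_frame in object_pixels.values():
        pts = np.vstack([per_frame[f] for f in frame_ids if f in per_frame])
        stats.append((pts.mean(axis=0), np.cov(pts, rowvar=False)))
    pairs = list(itertools.combinations(range(len(stats)), 2))
    return sum(symmetric_kld(*stats[i], *stats[j]) for i, j in pairs) / len(pairs)

def sffs_keyframes(candidates, object_pixels, max_frames, max_sweeps=10):
    """SFFS over key-frame candidates, maximizing the AIKLD."""
    selected = []
    for _ in range(max_sweeps):
        if len(selected) >= max_frames:
            break
        remaining = [f for f in candidates if f not in selected]
        if not remaining:
            break
        # Inclusion: add the candidate that maximizes the AIKLD.
        best = max(remaining, key=lambda f: aikld(selected + [f], object_pixels))
        selected.append(best)
        # Conditional exclusion: drop a frame while doing so raises the AIKLD.
        while len(selected) > 2:
            scores = {f: aikld([g for g in selected if g != f], object_pixels)
                      for f in selected}
            worst = max(scores, key=scores.get)
            if scores[worst] > aikld(selected, object_pixels):
                selected.remove(worst)
            else:
                break
    return selected
```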

4. Simulations and Discussions

The proposed method is tested on both synthetic and real video sequences. The purpose of using synthetic video is to evaluate the video object segmentation performance numerically: we compute the segmentation accuracy, precision, and recall with respect to all moving objects. In order to show the validity of MAIKLD, we also compare the proposed method with our previous one [15] on these videos. The frame size of all video sequences is 176 x 144. For convenience, we denote the method in [15] as Method-I and the proposed method as Method-II.

Figure 2. Synthetic videos: Video-A (first row), Video-B (second row).

Figure 3. Real video sequences: (a) Car, (b) People, (c) Face.

Figure 4. Segmented moving objects of Videos-A and -B: (a) Method-I, (b) Method-II.

Methods-I and -II are first tested on two synthetic video sequences of 36 frames each, as illustrated in Fig. 2. The first row of Fig. 2 shows three frames of Video-A, in which a circular object moves sigmoidally. There are two moving objects in Video-B, shown in the second row of Fig. 2: one is an elliptic object that moves diagonally from the top-left to the bottom-right corner while changing its size, and the other is a rectangular object moving horizontally from right to left. Some additive white Gaussian noise (AWGN) is deliberately added to the synthetic videos. The key-frame extraction is stopped after 10 SFFS iterations or when n > N_1/2.

Table 1. Key-frame numbers.

Video sequence      | Key-frame candidates | Extracted key-frames
Video-A (36 frames) | 8                    | 9
Video-B (36 frames) | 9                    | 9
Car (39 frames)     | 0                    | 3
People (50 frames)  | 6                    | 3
Face (50 frames)    | 6                    | 8

The numerical results are shown in Fig. 5. Both methods have similar segmentation performance on the moving object of Video-A, while Method-II uses fewer key-frames, as listed in Table 1. In particular, both methods detect the moving object with 100% recall. Method-II outperforms Method-I on Video-B even though Method-II uses fewer key-frames for object segmentation.

From Fig. 4(a), we can see that Method-I cannot discriminate the moving rectangle from a static background object (the dark square). Moreover, the moving rectangle is misclassified into two separate objects in the latter part of Video-B. This indicates that Method-II extracts key-frames that are more representative and salient with respect to the video objects than those of Method-I.

Figure 5. Numerical results (accuracy, precision, and recall) for (a) Video-A and (b) Video-B. Dashed and solid lines indicate the results of Methods-I and -II, respectively.

We also compare the two methods on three real video sequences, shown in Fig. 3. The numbers of initial key-frame candidates and finally extracted key-frames are listed in Table 1. In order to demonstrate the effectiveness of Method-II, we change the initial threshold for key-frame extraction in Method-I so that object segmentation is based on the same number of key-frames as in Method-II. As can be seen in Fig. 6, with the same number of key-frames, the performance of Method-II is better than that of Method-I. In particular, if we stop the key-frame extraction for the Car video using the same criterion as for Videos-A and -B, the two methods provide similar segmentation results. However, if we deliberately stop the key-frame extraction when the key-frame number n > N_1/3, Method-II provides much more representative key-frames for object segmentation than Method-I, as shown in Fig. 6(a) and (b). In Method-I, the key-frames are extracted using the frame-wise color histogram without local spatial information, so they may not be representative enough for video object segmentation. In Method-II, by contrast, the key-frames are extracted by considering both spatial and temporal information in the unified feature space; consequently, the extracted key-frames capture the dynamics of the video objects more accurately.

5. Conclusions

This paper presents a coherent framework for joint key-frame extraction and object-based segmentation within a video shot, where key-frames are extracted by maximizing the AIKLD of the major video objects in the unified feature space. The suggested framework provides an integrated platform in which the inherent, explicit relationship between key-frames and video objects is revealed. Simulation results on both synthetic and real video sequences show that, compared with our previous work, the proposed approach provides robust and accurate object segmentation with a more compact temporal representation of a video shot by key-frames. This work also opens a new avenue for content-based video analysis.

References

[1] G. Davenport, T. A. Smith, and N. Pincever. Cinematic primitives for multimedia. IEEE Computer Graphics and Applications, 11(4):67-74, July 1991.
[2] H. P. Decell and J. A. Quirein. An iterative approach to the feature selection problem. In Proc. of Purdue Univ. Conf. on Machine Processing of Remotely Sensed Data, pages 3B1-3B12, 1972.
[3] D. DeMenthon and R. Megret. Spatio-temporal segmentation of video by hierarchical mean shift analysis. Technical Report LAMP-TR-090/CAR-TR-978/CS-TR-4388/UMIACS-TR-2002-68, 2002.
[4] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. J. Royal Stat. Soc. B, 39(1):1-38, 1977.
[5] A. M. Ferman, A. M. Tekalp, and R. Mehrotra. Effective content representation for video. In Proc. IEEE Int'l Conference on Image Processing, Chicago, IL, 1998.
[6] C. Fowlkes, S. Belongie, and J. Malik. Efficient spatiotemporal grouping using the Nystrom method. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, volume 1, pages 231-238, 2001.
[7] S. Gepshtein and M. Kubovy. The emergence of visual objects in space-time. In Proc. of the National Academy of Sciences, volume 97, pages 8186-8191, USA, 2000.
[8] H. Greenspan, J. Goldberger, and A. Mayer. A probabilistic framework for spatio-temporal video representation and indexing. In Proc. European Conf. on Computer Vision, volume 4, pages 461-475, Berlin, Germany, 2002.
[9] H. Greenspan, J. Goldberger, and A. Mayer. Probabilistic space-time video modeling via piecewise GMM. IEEE Trans. Pattern Analysis and Machine Intelligence, 26(3):384-396, March 2004.
[10] R. Hammoud and R. Mohr. A probabilistic framework of selecting effective key frames for video browsing and indexing. In International Workshop on Real-Time Image Sequence Analysis, 2000.
[11] A. Hanjalic and H. J. Zhang. An integrated scheme for automated video abstraction based on unsupervised cluster-validity analysis. IEEE Trans. on Circuits and Systems for Video Technology, 9(8):1280-1289, 1999.
[12] A. K. Jain, R. P. W. Duin, and J. Mao. Statistical pattern recognition: a review. IEEE Trans. Pattern Analysis and Machine Intelligence, 22(1), January 2000.
[13] A. K. Jain and D. Zongker. Feature selection: Evaluation, application, and small sample performance. IEEE Trans. Pattern Analysis and Machine Intelligence, 19(2):153-158, Feb. 1997.
[14] S. Kullback. Information Theory and Statistics. Dover, New York, 1968.
[15] L. Liu and G. Fan. Combined key-frame extraction and object-based video segmentation. IEEE Trans. Circuits and Systems for Video Technology, 2005, to appear.
[16] J. M. Martinez. MPEG-7 overview (ver. 8). ISO/IEC JTC1/SC29/WG11 N4980, July 2002.
[17] R. Megret and D. DeMenthon. A survey of spatio-temporal grouping techniques. Technical report, University of Maryland, College Park, March 2002. http://www.umiacs.umd.edu/lamp/pubs/techreports/.
[18] J. Novovicova, P. Pudil, and J. Kittler. Divergence based feature selection for multimodal class densities. IEEE Trans. Pattern Analysis and Machine Intelligence, 18(2):218-223, 1996.
[19] P. Pudil, J. Novovicova, and J. Kittler. Floating search methods in feature selection. Pattern Recognition Letters, pages 1119-1125, Nov. 1994.
[20] J. Rissanen. A universal prior for integers and estimation by minimum description length. Annals of Statistics, 11(2):416-431, 1983.
[21] J. Shi and J. Malik. Motion segmentation and tracking using normalized cuts. In Proc. of Int. Conf. on Computer Vision, pages 1154-1160, 1998.
[22] Y. Zhuang, Y. Rui, T. S. Huang, and S. Mehrotra. Adaptive key frame extraction using unsupervised clustering. In Proc. of IEEE Int'l Conf. on Image Processing, pages 866-870, Chicago, IL, 1998.

Figure 6. Segmentation results on the real video sequences using the same number of key-frames: (a), (c), (e) Method-I; (b), (d), (f) Method-II.