1270 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 6, NO. 9, SEPTEMBER 1997

Occlusion-Adaptive, Content-Based Mesh Design and Forward Tracking

Yucel Altunbasak and A. Murat Tekalp, Senior Member, IEEE

Abstract: Two-dimensional (2-D) mesh-based motion compensation preserves neighboring relations (through the connectivity of the mesh) and allows warping transformations between pairs of frames; thus, it effectively eliminates the blocking artifacts that are common in motion compensation by block matching. However, available 2-D mesh models, whether uniform or nonuniform, enforce connectivity everywhere within a frame, which is clearly not suitable across occlusion boundaries. To this effect, we propose an occlusion-adaptive forward-tracking mesh model, where the connectivity of the mesh elements (patches) across covered and uncovered region boundaries is broken. This is achieved by allowing no node points within the background to be covered (BTBC) region and by refining the mesh structure within the model failure (MF) region(s) at each frame. The proposed content-based mesh structure enables better rendition of the motion (compared to a uniform or a hierarchical mesh), while tracking is necessary to avoid transmission of all node locations at each frame. Experimental results show successful motion compensation and tracking.

Index Terms: Mesh refinement, occlusion-adaptive forward tracking, 2-D content-based mesh design.

I. INTRODUCTION

AUDIO-VISUAL services over very low bit rate channels, such as the public switched telephone network (PSTN) and wireless media, are an important emerging application [1]-[4]. The International Telecommunications Union (ITU-T) has recently adopted the draft recommendation H.263 [5] as a worldwide standard for video compression/decompression at less than 64 kb/s.
Recommendation H.263 employs an improved version of the classical block-based motion-compensated discrete cosine transform (MC-DCT) (hybrid) coding method, which also forms the basis of the prior H.261, MPEG-1, and MPEG-2 standards. The MC-DCT coding strategy is based on the source model of 2-D translational blocks, which is known to result in blocking and mosquito artifacts at very low bit rates, especially with common intermediate format (CIF) or larger video formats. This is because the translational block model i) cannot handle image rotations and scaling, and ii) cannot preserve the neighboring relations between the blocks, resulting in blocking artifacts.

Manuscript received December 26, 1995; revised January 20. This work was supported in part by a National Science Foundation IUCRC grant and a New York State Science and Technology Foundation grant to the Center for Electronic Imaging Systems, University of Rochester. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Eric Dubois. The authors are with the Department of Electrical Engineering and Center for Electronic Imaging Systems, University of Rochester, Rochester, NY USA (e-mail: altunbas@ee.rochester.edu).

Generalized block matching [6] was proposed to allow spatial transformations (e.g., affine, perspective) to overcome the limitations of the translational model. However, it does not address the discontinuity problem at the block boundaries. Overlapped block motion compensation, which is included in the advanced prediction mode of H.263, offers a partial solution to both problems [7]. A promising alternative is motion compensation by 2-D mesh models: 2-D meshes allow spatial transformations and preserve neighboring relations between patches, while requiring transmission of only a coarsely sampled motion field (a dense motion field can be interpolated from the vectors at the node locations).
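The last point can be made concrete: with a triangular patch, the six affine parameters follow uniquely from the three vertex correspondences. The following is a minimal numpy sketch of this computation (the function names are ours, for illustration only; this is not the paper's implementation):

```python
import numpy as np

def affine_from_triangle(src, dst):
    """Solve for the six affine parameters mapping the triangle src onto dst.
    src, dst: 3x2 arrays of (x, y) vertex coordinates.
    The mapping is  x' = a1*x + a2*y + a3,  y' = a4*x + a5*y + a6."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    # Design matrix: one row [x, y, 1] per vertex.
    A = np.column_stack([src, np.ones(3)])
    # Two 3x3 linear systems, one per output coordinate.
    ax = np.linalg.solve(A, dst[:, 0])   # (a1, a2, a3)
    ay = np.linalg.solve(A, dst[:, 1])   # (a4, a5, a6)
    return ax, ay

def apply_affine(ax, ay, pts):
    """Map Nx2 points through the affine transform (ax, ay)."""
    pts = np.asarray(pts, dtype=float)
    A = np.column_stack([pts, np.ones(len(pts))])
    return np.column_stack([A @ ax, A @ ay])
```

For example, a triangle translated by (2, 3) yields ax = (1, 0, 2) and ay = (0, 1, 3), and every interior point is displaced by the same amount; a dense motion field within the patch is thus interpolated from the three node-point vectors.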
2-D meshes can be classified as uniform (regular) meshes, with equal-size elements, and nonuniform (hierarchical or content-based) meshes, which adapt to the particular scene content. Nonuniform mesh modeling/design may be viewed as a method of irregular sampling of the dense motion field. Brusewitz [8] was among the first to propose triangle-based motion compensation, where a regular triangular mesh is overlaid on the image. Sullivan and Baker [9] used quadrilateral meshes for motion compensation under the name control grid interpolation. Motion compensation within each mesh element (patch) is accomplished by means of a spatial transformation (affine, bilinear, etc.) whose parameters can be computed from the node-point motion vectors. Recently, Nakaya et al. [10] proposed a hexagonal matching procedure for motion estimation based on a uniform mesh. Uniform meshes are suitable for motion compensation by "redesign"; that is, a new uniform mesh is overlaid on each frame, and motion vectors at the node points are estimated from frame k to frame k+1 for motion compensation. However, uniform meshes are often inadequate for motion rendition in the vicinity of object boundaries, where a patch may contain two or more different motions. This problem may be addressed by splitting those patches with more than one motion into smaller elements, resulting in a hierarchical mesh. Huang et al. [11] employed a hierarchical mesh based on a quadtree structure. Nonuniform mesh design using edge analysis was proposed in [12]. Clearly, information about mesh elements that are split must be transmitted as overhead. A more fundamental approach to overcoming the problem of mesh elements with more than one motion is to design a content-based mesh, which is not limited to a pseudoregular hierarchical structure. Wang et al. [13] advanced an optimization framework for motion compensation based on an active mesh, which adapts to scene content.
However, content-based meshes are not suitable for motion compensation by redesign,

because transmission of all node locations at each frame constitutes an excessive amount of overhead. Therefore, motion compensation by a content-based mesh must be coupled with a forward-tracking procedure, where a new mesh is designed only at selected key frames and, in between, it is tracked by node-point motion vectors estimated from frame to frame. Such a scheme works well except around occlusion boundaries, where reliable forward tracking is not possible with the available methods.

Note that an important drawback of all these mesh models (uniform, hierarchical, or content-based) is that they enforce connectivity of the structure everywhere. This is similar to imposing a global smoothness constraint on the 2-D motion field, which is clearly unsuitable across motion and occlusion boundaries. To this effect, we introduce the concept of occlusion-adaptive mesh modeling. Occlusion regions, classified as background to be covered (BTBC) and uncovered background (UB), may appear at the object boundaries (due to global object motion) or within objects (due to local motion or deformations). The latter is generally referred to as self-occlusion. We hereby propose an occlusion-adaptive, content-based mesh design and forward-tracking procedure, where no node points are allowed in the BTBC regions and the mesh within the model failure region(s) is redefined for subsequent tracking of these regions. The success of forward tracking is closely related to how well we can detect occlusion and model failure regions and estimate the motion field in the vicinity of their boundaries. Some known occlusion detection methods include [14]-[16]. In motion compensation by forward tracking, the positions of all nodes need to be transmitted only in selected key frames; in all other frames, it suffices to transmit the boundaries of the BTBC regions (to determine the nodes to be killed) and the locations of the newly born nodes.
Thus, the proposed approach combines the benefit of a content-based mesh with a small amount of overhead transmission.

The paper is organized as follows. Section II proposes a new, efficient, occlusion-adaptive, content-based mesh design procedure that is employed to design the initial mesh and to select new node points within the UB region for the subsequent frames. Section III covers node-point motion estimation for the purposes of motion compensation and mesh tracking, followed by a forward node-tracking and mesh refinement algorithm. An occlusion-adaptive motion compensation algorithm by redesign is presented in the Appendix (for comparison of the results). Experimental results with frames of the mother and daughter sequence, provided in Section IV, show that the proposed occlusion-adaptive, content-based mesh design and tracking procedure is successful.

II. OCCLUSION-ADAPTIVE MESH DESIGN

This section starts with a discussion of the forward-tracking occlusion-adaptive mesh concept. We then provide an algorithm to determine the BTBC region(s) and the approximation of their boundaries by polygons. Next, we present a practical method to design an occlusion-adaptive, content-based mesh with triangular patches, such that patch boundaries conform with the boundaries of objects in the scene, BTBC and UB regions, and motions, as much as possible.

Fig. 1. Illustration of the occlusion-adaptive mesh concept.

A. The Occlusion-Adaptive Mesh Concept

We introduce the occlusion-adaptive mesh concept to overcome a fundamental limitation of standard mesh models (which enforce continuity of motion across the whole frame) in dealing with multiple motion and occlusion regions. Let us consider the example of a single moving object against a still background, which is shown in Fig. 1.
Suppose that the elliptical object is translating to the right, resulting in two types of motion boundaries: to the right is the motion boundary associated with the BTBC region, and to the left is the boundary associated with the UB region. Assuming that these boundaries and the motion vectors in their vicinity can be correctly estimated, the BTBC region in frame k (marked in Fig. 1) should completely disappear (get covered) in frame k+1, and the node points in frame k along the motion boundary associated with the UB region (the left boundary in Fig. 1) should be split into two. The splitting is a result of the fact that these nodes belong to triangles covering both the background and the object; thus, they need to be assigned two different motion vectors (one to model the motion of the background and another for the motion of the object). An occlusion-adaptive mesh should, therefore, have the following properties: i) no nodes are present within the BTBC region(s) in the temporally first frame, since they will be covered in the next frame and hence no meaningful motion vectors can be computed for them; ii) nodes that are on object boundaries may be assigned more than one motion vector, since they are common to two or more objects; and iii) the mesh is redesigned within the UB region(s) in the temporally next frame for subsequent tracking of the newly exposed regions. Clearly, accurate modeling of these regions using a mesh model requires placement of node points along them (polygon approximation of these regions). The success of this scheme, however, depends on how reliably the BTBC and UB regions, and the motion vectors in their vicinity, can be estimated. Because, in practice, the BTBC and UB regions cannot be accurately estimated due to the inaccuracy of motion vectors in their vicinity, in this paper we have chosen an implementation that favors robustness of the motion compensation in the presence of such estimation errors. To this effect,

we estimate the model failure (MF) region instead of the UB region (see Section III-C). That is, our mesh design algorithm first estimates the BTBC region(s) in the temporally first frame and does not place any nodes within them. Then, forward motion vectors at the node points are estimated, and affine mapping parameters are computed for each mesh element from these node-point motion vectors. Motion compensation of the temporally next frame is performed by inverting each affine mapping. Finally, the MF region is estimated (in the temporally next frame) from the displaced frame difference as a function of the parametric motion field; the intensity within the MF region is intracoded, and the mesh within the MF region is refined for purposes of tracking these regions (e.g., newly appearing objects) in the subsequent frames. Ideally, as the estimated motion vectors approach the true motion vectors, the MF region approaches the UB region; therefore, our implementation is consistent with the above-cited properties of an occlusion-adaptive mesh. The robustness of our implementation is due to the facts that i) the estimated BTBC region corresponds to pixels where motion vectors are unreliable (thus, placing nodes in these regions should be avoided); and ii) the estimated MF regions correspond to pixels where the resulting affine motion compensation is erroneous (thus, mesh refinement within such regions improves motion compensation accuracy for future frames).

B. BTBC Region Detection

The BTBC region(s) refers to clusters of pixels within the temporally first frame which are covered in the temporally next frame (see Fig. 1). The boundaries of the BTBC region(s) can be estimated by thresholding the displaced frame difference, computed from the forward dense motion field estimated from frame k to frame k+1.
Clearly, the accuracy of the occlusion boundaries depends on the accuracy of the dense motion estimates, especially in the vicinity of the occlusion regions. In this work, we have used optic-flow-equation-based methods, such as the methods of Lucas-Kanade [20] and Horn-Schunck [21], since they yield smoother (hence closer to the actual) motion fields than, for example, block matching. It should be noted that the values of the smoothness and/or blurring parameters in these methods are important, since, if the motion field is oversmoothed, motion tracking over small regions becomes impossible. For the sake of minimizing the computational burden, 2-D dense motion estimation and occlusion detection are performed only over the changed regions (which can easily be detected from the frame difference). A summary of the BTBC detection algorithm is as follows.

1) Calculate the frame difference (FD), and determine the change detection mask (CDM) by thresholding and postprocessing as follows: a) apply median filtering with a 5x5 kernel; b) apply three successive morphological closing operations with a 3x3 kernel, followed by three morphological opening operations with the same kernel; c) eliminate regions which are smaller than a predetermined size.

2) Estimate the dense motion field from frame k to frame k+1, within the CDM (e.g., using the method of Lucas-Kanade [20] or Horn-Schunck [21]).

3) Motion compensate frame k from frame k+1 to compute the predicted estimate using the estimated dense motion field.

4) If the displaced frame difference DFD(x), the difference between the observed intensity and its motion-compensated prediction at pixel x, is greater than a predefined threshold, then label x as an occlusion pixel.

5) Perform postprocessing to form smooth BTBC region(s) as in Step 1).

Fig. 2. Approximation of the occlusion region by a polygon.

The thresholding operation may result in small pixel clusters. Postprocessing is applied following thresholding to form contiguous CDM and BTBC regions.
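Steps 4) and 5) above can be sketched in a few lines of numpy, assuming a precomputed DFD array. This is a simplified toy version (names ours): it applies a single 3x3 closing and opening pass in place of the paper's 5x5 median filter and three morphological passes, with the mask treated as false outside the frame.

```python
import numpy as np

def dilate3(m):
    """3x3 binary dilation (values outside the frame taken as False)."""
    h, w = m.shape
    p = np.pad(m, 1)
    out = np.zeros_like(m)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + h, dx:dx + w]
    return out

def erode3(m):
    """3x3 binary erosion (values outside the frame taken as False)."""
    h, w = m.shape
    p = np.pad(m, 1)
    out = np.ones_like(m)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= p[dy:dy + h, dx:dx + w]
    return out

def detect_occlusion(dfd, thresh):
    """Threshold the DFD, then smooth the binary mask:
    closing fills small holes, opening removes small clusters."""
    mask = np.abs(dfd) > thresh
    mask = erode3(dilate3(mask))   # morphological closing
    mask = dilate3(erode3(mask))   # morphological opening
    return mask
```

On a DFD with one large high-error block containing a one-pixel hole plus one isolated high-error pixel, the closing fills the hole and the opening removes the isolated pixel, leaving a single contiguous region.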
Median and morphological filtering operations remove small pixel clusters and also fill in small holes. The sizes of the small clusters to be eliminated and of the holes to be filled in depend on the filter kernel size. Note that in the case of backward motion compensation, the roles of the BTBC and UB regions are interchanged. Next, the boundaries of these regions are approximated by polygons to facilitate occlusion-adaptive mesh design.

C. Polygon Approximation

The boundary of the occlusion regions (BTBC and UB) needs to be approximated by a shape model that can be represented with a few parameters. The most commonly employed shape models are polygonal and/or B-spline approximations. Here, we employ a polygonal approximation (see Fig. 2) because of its simplicity and robustness. Furthermore, a polygonal boundary naturally fits with the boundaries of the proposed occlusion-adaptive mesh model. We use a polygon approximation algorithm similar to that proposed by Gerken [18]. It is summarized below.

1) Find the pair of pixels on the boundary of the occlusion region, among all possible pairs, with the maximum distance between them. The line drawn between these two points (vertices 1 and 2) is called the main axis.

2) Find the two points on the boundary with the largest perpendicular distance from the main axis, to the left and to the right of the axis (vertices 3 and 4). These four vertices are called the initial vertices.

3) Draw a straight line from vertex 1 to the nearest vertex (clockwise). If, for every point on the boundary between

these two vertices, the maximum distance between the straight line and the boundary is below a certain threshold, and the area between the boundary and the straight line is less than 5% of the area of the entire occlusion region, then no new vertices need to be inserted on this segment of the boundary. If, however, either criterion is not satisfied, a new vertex is inserted at the pixel with the maximum distance from the straight line, and this procedure is repeated until no new vertices are needed within each new pair of boundary segments.

4) Repeat 3) for each of the three remaining segments between the initial vertices.

The vertices of the polygon approximations will serve as node points of the 2-D mesh (see the mesh refinement algorithm in Section III-D), so that the motion in the vicinity of occlusion regions can be accurately tracked by means of the motion vectors at these node points.

D. Content-Based Mesh Design

Prior work on adaptive mesh design includes optimization and split-and-merge methods. Designing an optimal mesh structure requires global optimization of a suitable cost function [13]. However, most practical optimization methods converge to a local minimum that is close to a uniform mesh (assuming the initial mesh is a uniform one). Split-and-merge methods successively divide, starting possibly with a uniform mesh, those patches that do not satisfy a predetermined criterion, in an attempt to find locally optimum solutions [11], [17]. Some of the inserted nodes may need to be treated specially to preserve the connectivity (geometry) of the mesh and/or the parent-child patch relationships in hierarchical mesh refinement procedures [11]. Here, we propose a new, computationally efficient algorithm for content-based 2-D triangular mesh design. The algorithm consists of a nonuniformly spaced node-point selection procedure, followed by triangulation using the selected node points.
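The recursive vertex insertion used in the polygon approximation of Section II-C can be sketched as follows. This is a simplified version (names ours) that applies only the perpendicular-distance criterion; the 5% area test and the main-axis initialization of Steps 1) and 2) are omitted for brevity.

```python
import numpy as np

def point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    a, b, p = (np.asarray(v, dtype=float) for v in (a, b, p))
    d = b - a
    n = np.hypot(*d)
    if n == 0.0:
        return float(np.hypot(*(p - a)))
    # 2-D cross product = twice the triangle area; divide by the base length.
    return abs(d[0] * (p[1] - a[1]) - d[1] * (p[0] - a[0])) / n

def _simplify(boundary, i, j, tol, out):
    """Recursively insert the farthest boundary point between vertices i and j
    while it lies farther than tol from the chord."""
    if j <= i + 1:
        return
    dists = [point_line_dist(boundary[k], boundary[i], boundary[j])
             for k in range(i + 1, j)]
    k = i + 1 + int(np.argmax(dists))
    if dists[k - i - 1] > tol:
        _simplify(boundary, i, k, tol, out)
        out.append(k)
        _simplify(boundary, k, j, tol, out)

def approx_polygon(boundary, tol):
    """Approximate an open boundary segment by a polyline within tol.
    Returns the indices of the retained vertices."""
    out = [0]
    _simplify(boundary, 0, len(boundary) - 1, tol, out)
    out.append(len(boundary) - 1)
    return out
```

For an L-shaped boundary, only the two endpoints and the corner survive: the corner pixel is the farthest point from the chord joining the endpoints, and once it is inserted, all remaining points lie on the two straight segments.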
The basic principle of the algorithm is to place node points in such a way that mesh boundaries align with object boundaries and the density of node points is proportional to the local motion activity. The former is attempted by placing node points on spatial edges (pixels with high spatial gradient). The latter is achieved by allocating node points in such a way that a predefined function of the displaced frame difference (DFD) within each patch attains approximately the same value. An outline of the algorithm is as follows.

1) Label all pixels, except those in the BTBC polygon(s), as unmarked. Include all corner points of the BTBC polygon(s) (see Section II-C) in the list of selected node points.

2) Compute the average displaced frame difference

   DFD_avg = (1/N) * sum over unmarked x of |DFD(x)|^p    (1)

where DFD(x) stands for the displaced frame difference as a function of the dense motion field computed in the BTBC region detection procedure above, the summation is over all unmarked points, N is the number of unmarked pixels, and p is a positive number.

3) For each unmarked pixel x, compute the cost function

   C(x) = I_x^2(x) + I_y^2(x)    (2)

where I_x and I_y stand for the partials of the intensity with respect to the x and y coordinates, evaluated at the pixel. The cost function is related to the spatial intensity gradient, so that the selected node points, and hence the boundaries of the patches, coincide with spatial edges.

4) Find the unmarked pixel with the highest C(x) that is not closer to any previously selected node point than a prespecified distance. Label this point as a node point.

5) Grow a circle about this node point until the accumulated DFD within the circle is greater than DFD_avg. Label all pixels within the circle (depicted in Fig. 3) as marked.

6) Go to 2) until a desired number of node points are selected, or the distance criterion in 4) is violated.

Fig. 3. Demonstration of proximity constraints in the mesh design algorithm.
7) Given the selected node points, apply a triangulation procedure (e.g., Delaunay triangulation [19]) to obtain a content-based mesh.

Steps 1)-6) of this procedure constitute the node-point selection algorithm. A flowchart of the above mesh design procedure is shown in Fig. 4. Node points are placed at pixels with the highest spatial gradient among all eligible (unmarked) points, so that they correspond to spatial edge points. In reference to Fig. 3, a small circle indicates high temporal activity, while a large circle indicates low temporal activity. The pixels within each circle are marked, so that another node point cannot be placed within a marked circle. This process controls the node-point density in proportion to the local temporal activity (motion).
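The greedy loop of Steps 3)-6) can be sketched as follows. This is a simplified illustration (names and defaults ours): the cost is the squared gradient magnitude of (2), and a fixed per-node DFD budget stands in for the average of (1).

```python
import numpy as np

def select_nodes(grad_mag, dfd, n_nodes, d_min=2.0, dfd_budget=None):
    """Greedy node selection: repeatedly pick the unmarked pixel with the
    largest spatial-gradient cost, then mark a disk around it whose radius
    grows until the DFD it contains exceeds the per-node budget."""
    h, w = grad_mag.shape
    if dfd_budget is None:
        dfd_budget = dfd.sum() / n_nodes   # crude stand-in for the average DFD
    unmarked = np.ones((h, w), dtype=bool)
    yy, xx = np.mgrid[0:h, 0:w]
    nodes = []
    for _ in range(n_nodes):
        if not unmarked.any():
            break
        cost = np.where(unmarked, grad_mag, -np.inf)
        y, x = np.unravel_index(np.argmax(cost), cost.shape)
        # Distance criterion: stop if the best candidate is too close
        # to a previously selected node.
        if any((y - ny) ** 2 + (x - nx) ** 2 < d_min ** 2 for ny, nx in nodes):
            break
        nodes.append((int(y), int(x)))
        # Grow the disk until its accumulated DFD exceeds the budget.
        r = 1.0
        while True:
            disk = (yy - y) ** 2 + (xx - x) ** 2 <= r * r
            if dfd[disk].sum() > dfd_budget or disk.all():
                break
            r += 1.0
        unmarked &= ~disk
    return nodes
```

With a uniform DFD, every disk grows to the same size, so the nodes spread evenly; where the DFD is large (high temporal activity), the disks stay small and the node density rises, as intended.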

Fig. 4. A flowchart of the content-based mesh design algorithm.

Fig. 5. Illustration of inconsistent motion vectors.

III. FORWARD TRACKING

This section presents methods for i) node-point motion estimation, ii) affine motion compensation, iii) model failure region detection, and iv) forward mesh tracking. The forward motion compensation algorithm aims to achieve the goals that the BTBC region(s) in frame k should disappear (get covered) in frame k+1, and the MF region(s) in frame k+1 should reduce to the UB region(s).

A. Node-Point Motion Computation

Motion compensation using a 2-D mesh requires computation of the parameters of a spatial transformation within each triangular mesh element (patch). It is well known that the parameters of an affine mapping can be uniquely estimated from three point correspondences (e.g., at the vertices of a triangular patch). Therefore, in order to estimate the affine mapping parameters for motion compensation of each patch, it suffices to estimate the motion vectors at the node points. There are two approaches to node-point motion estimation: i) estimate the motion vectors at the node points independently of each other (e.g., perform block matching at each node point, or sample motion vectors from a dense optical flow field); or ii) estimate motion vectors optimized for the warping transformations of each triangular patch (e.g., using a constrained search [10], [23] or closed-form solutions [24]). Because the latter option is computationally more intensive, this paper employs the former. That is, we sample (pick) our node-point motion vectors from the dense motion field, which is already estimated during the BTBC region detection/mesh-design procedures. An optical-flow-based method has been preferred in Section II-B over block matching because optical-flow methods yield smoother dense motion fields, which are more suitable for parameterization [22].
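Sampling node-point motion vectors from the dense field amounts to interpolating the flow at (possibly non-integer) node locations. A minimal sketch, assuming the dense field is stored as an H x W x 2 array (names ours):

```python
import numpy as np

def sample_flow(flow, nodes):
    """Bilinearly sample a dense motion field (H x W x 2) at real-valued
    node locations given as (y, x) pairs."""
    h, w, _ = flow.shape
    out = np.empty((len(nodes), 2))
    for i, (y, x) in enumerate(nodes):
        # Clamp the base cell so the 2x2 neighborhood stays inside the frame.
        y0 = min(max(int(np.floor(y)), 0), h - 2)
        x0 = min(max(int(np.floor(x)), 0), w - 2)
        fy, fx = y - y0, x - x0
        out[i] = ((1 - fy) * (1 - fx) * flow[y0, x0]
                  + (1 - fy) * fx * flow[y0, x0 + 1]
                  + fy * (1 - fx) * flow[y0 + 1, x0]
                  + fy * fx * flow[y0 + 1, x0 + 1])
    return out
```

Bilinear interpolation is exact for a flow field that varies linearly with position, which fits the smooth fields produced by the optical-flow methods above better than the piecewise-constant fields of block matching.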
Although hierarchical block matching for dense motion estimation yields a high motion-compensation peak signal-to-noise ratio (PSNR), the resulting motion field usually contains several outliers (because of the aperture problem and the lack of explicit smoothness constraints across the blocks). It should be noted that the value of the smoothness parameter in [20] or [21] is important, since, if the motion field is oversmoothed, tracking of motion over small regions becomes impossible.

It is possible that motion vectors sampled from a dense motion field are inconsistent, in the sense that they do not preserve the connectivity of the mesh structure. This is illustrated in Fig. 5, where a node n in frame k is connected with its neighboring nodes. The motion vector at node n must move it to a point inside the polygon formed by the motion-compensated neighboring nodes in frame k+1. However, because the dense motion estimation method utilized to compute the node-point motion vector at node n does not employ such a constraint, this condition may be violated, as shown in Fig. 5. When such a condition occurs, the estimated motion vector should be replaced by a motion vector that is interpolated from those of the surrounding nodes, or, alternatively, a local search may be conducted to estimate the motion vectors at such nodes. Therefore, we propose a motion-vector postprocessing algorithm to preserve the connectivity of the patches. This is achieved by ordering the sampled motion vectors according to a measure of confidence (as a function of the resulting local displaced frame difference). In the case of a motion-vector crossover, the lowest confidence motion vectors are interpolated from the higher confidence ones to eliminate crossovers, as described in the following.

1) Determine the ordering of the nodes for postprocessing as follows.

a) For each node i, find its enclosing polygon (determined by the nodes connected to node i).
b) Calculate the criterion function

   C_i = (alpha / N_i) * sum over x in P_i of |DFD(x)| + beta * sigma_i^2    (3)

where alpha and beta are positive scalars, and DFD(x), sigma_i^2, and N_i denote the displaced frame difference, its variance over the polygon P_i enclosing node i, and the number of points in P_i, respectively.

c) The node with the highest C_i is processed first. Prioritization enables resolution of conflicts at the nodes with the lowest confidence motion vectors first, in favor of those nodes with more reliable motion vectors.

2) Scan the nodes in the order determined in Step 1) to detect nodes with inconsistent motion vectors, as follows. At each node i,

a) Find all the nodes connected to node i, and label them as i_1, ..., i_L, where L is the number of nodes connected to node i.

b) Find the motion-compensated node locations in frame k+1 using the motion vectors at the nodes i_1, ..., i_L. Form the polygon defined by these locations.

c) Motion compensate the node i to find its location in frame k+1. If it is inside the polygon, go to the next node in the order; otherwise, go to 3).

3) Interpolate the motion of the node i from its neighbors, weighting the motion vector of each neighbor i_j inversely by the distance between node i and node i_j.

Postprocessing provides increased robustness to errors in motion estimation around occlusion boundaries. The affine mapping parameters for each triangular patch are then computed from the postprocessed, sampled node-point motion vectors at the respective vertices.

B. Motion Compensation

Node-point motion vectors establish a set of point correspondences from frame k to frame k+1, which are used to determine a set of backward affine spatial transformations from frame k+1 to frame k (see Fig. 6), given by

   x_k = a_1 x_{k+1} + a_2 y_{k+1} + a_3
   y_k = a_4 x_{k+1} + a_5 y_{k+1} + a_6    (4)-(7)

where (x_k, y_k) denotes the coordinates of the pixel in frame k which corresponds to the point (x_{k+1}, y_{k+1}) in frame k+1. Note that, since the node-point motion vectors are computed from frame k to frame k+1, the coordinates (x_k, y_k) are real numbers that may not correspond to pixel locations.

Fig. 6. Affine motion compensation.

The computation of the set of affine parameters and the affine motion compensation can be described through the following procedure.

1) For each patch at frame k+1, the affine mapping parameters a_1, ..., a_6 are computed by solving the linear system

   x_k^(m) = a_1 x_{k+1}^(m) + a_2 y_{k+1}^(m) + a_3
   y_k^(m) = a_4 x_{k+1}^(m) + a_5 y_{k+1}^(m) + a_6,   m = 1, 2, 3    (8)

where (x_k^(m), y_k^(m)) and (x_{k+1}^(m), y_{k+1}^(m)) are the corresponding node positions.
2) All pixels within the current patch at frame k+1 are motion compensated from frame k by using the affine mapping (4)-(7). If the corresponding location in frame k is not a pixel location, then bilinear interpolation is used.

C. Model Failure Region Detection

Following motion compensation by the parametric motion field, the MF region is detected by thresholding the displaced frame difference,

   MF = { x : |I_{k+1}(x) - Î_{k+1}(x)| > T_MF }    (9)

where Î_{k+1} denotes the motion-compensated frame computed using the affine motion field of Section III-B, and T_MF is the MF region detection threshold. Small pixel clusters and small holes are removed using the same postprocessing steps as discussed for BTBC region detection (see Section II-B). The MF regions correspond to clusters of pixels where affine motion compensation fails. This may be due to the presence of uncovered background (UB) regions, inaccuracy of the node-point motion estimation, or insufficiency of the affine motion model. (Note that the UB region(s) refer to pixels in the temporally next frame, frame k+1, which are uncovered as a result of the interframe motion [see Fig. 1].) Hence, it is expected that, with successful tracking, the MF regions should reduce to the UB regions.

D. Mesh Refinement

The simplest scheme for forward tracking of the mesh from frame k to frame k+1 is to propagate all nodes by their motion vectors. However, since motion compensation is deemed unsuccessful within the MF region, nodes that map into the MF

Fig. 7. Addition, deletion, and propagation of nodes for mesh refinement.

region in frame k+1 cannot be considered reliable. Thus, we model the boundary of the MF region with a polygon (similar to the procedure given in Section II-C) and delete all nodes that are mapped into the MF polygon in frame k+1. Note that the coordinates of the corners of the boundary polygon are sufficient to describe all nodes that should be deleted. It is assumed that the contents of the MF polygon (the intensity values within the MF polygon) at each frame have to be transmitted once as overhead information. Finally, the mesh tracked from frame k is refined by redesigning a new mesh within the MF region (excluding the BTBC region of the next frame) for the purposes of tracking the contents of the MF polygon in the subsequent frames. The mesh refinement procedure in frame k+1 does not place any nodes in the BTBC region between frames k+1 and k+2, since this region will vanish (get covered) in frame k+2 (just as the initial mesh design procedure does not place any node points in the BTBC region in frame k). The regions within which nodes are added, deleted, and propagated are illustrated in Fig. 7. Note that the corners of the BTBC polygon (between frames k+1 and k+2) are accepted as nodes for mesh refinement. Additional nodes within the BTBC region may also be included, as determined by the mesh design procedure in Section II-D.

E. The Complete Forward-Tracking Algorithm

This section summarizes the complete forward-tracking mesh algorithm, including the BTBC region detection, mesh design, node-point motion estimation, motion compensation, MF region detection, and mesh refinement steps, as follows.

1) Set k = 1. Find the BTBC region(s) in the kth frame, given the dense motion field from the kth to the (k+1)st frame (see Sections II-B and II-C).

2) Design a 2-D content-based mesh for the kth frame, where no nodes are allowed in the BTBC region (see Section II-D).

Fig. 8.
Flowchart of the motion compensation/tracking algorithms. (a) Motion compensation by the forward-tracking algorithm. (b) Motion compensation by the redesign algorithm.

3) Compute the node-point motion vectors as described in Section III-A, and perform motion compensation as in Section III-B.

4) Propagate all node points by their computed motion vectors. Increment k by one. Find all MF regions using the actual and reconstructed kth frames (as described in Section III-C). Find the enclosing polygon for each MF region. Delete all node points inside the enclosing polygons.

5) Compute the dense motion field from the kth to the (k+1)st frame, and find the BTBC region(s) in the kth frame. Employ the following mesh refinement algorithm in the MF region (excluding the BTBC region).

a) Insert the corner points of the MF polygon in the new node list.

b) Apply the node-point selection algorithm (see Section II-D) within the MF region (excluding the BTBC region).

c) Reapply the triangulation.

d) Go to Step 3).

A flowchart of the forward-tracking algorithm is shown in Fig. 8(a).

IV. RESULTS

Experimental results are provided to compare the performance of the proposed motion compensation by occlusion-adaptive forward mesh tracking versus two benchmark methods over frames of the MPEG-4 mother and daughter test sequence. The sequence starts with slow movements of the head of the mother. However, the part between frames 56 and

87 is especially challenging and well suited to demonstrating the occlusion-adaptive mesh tracking concept, since the mother's hand enters the field of view (covering the daughter's face), generating significant BTBC and UB regions. The processed video is in SIF format, converted from the original ITU-R 601 format by using the subsampling filter specified in document [25].

Fig. 9. Initial mesh structures. (a) Uniform mesh. (b) Content-based mesh overlaid on the first frame.

We identified two benchmark methods: motion compensation by redesigning 1) a new uniform mesh at each frame, and 2) a new content-based mesh at each frame. An algorithmic description of the motion-compensation-by-redesign approach is given in the Appendix. Redesigning a uniform mesh is expected to be a lower bound on the performance of motion compensation by the proposed forward-tracking content-based mesh, since the structure of the mesh may not fit the motion boundaries well, leading to multiple motions within a single patch. However, it requires no overhead transmission to describe the mesh structure. Redesigning a content-based mesh at each frame, on the other hand, is expected to be an upper bound, since the mesh structure should fit the scene content well at each frame; however, it requires transmission of all node locations at each frame. The proposed forward-tracking content-based mesh is a compromise between these two, since it yields a mesh structure that fits the scene content without excessive overhead transmission. To this effect, the efficacy of the proposed method has been evaluated by how it compares against these benchmarks in terms of motion-compensation PSNR (MC-PSNR) and the number of node points whose coordinates need to be transmitted at each frame.
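The uniform-mesh benchmark obtains its mesh by splitting the frame into equal-size triangular elements over a regular node grid. A minimal sketch is given below; the two-right-triangles-per-cell split direction is our assumption, not something the paper specifies:

```python
def uniform_triangular_mesh(width, height, nx, ny):
    """Place an nx-by-ny grid of nodes over a width-by-height frame and
    split every grid cell into two right triangles (node-index triples)."""
    # Node coordinates on a regular grid, frame corners included.
    xs = [round(i * (width - 1) / (nx - 1)) for i in range(nx)]
    ys = [round(j * (height - 1) / (ny - 1)) for j in range(ny)]
    nodes = [(x, y) for y in ys for x in xs]  # row-major: index = j * nx + i

    tris = []
    for j in range(ny - 1):
        for i in range(nx - 1):
            n00 = j * nx + i      # top-left of cell (i, j)
            n10 = n00 + 1         # top-right
            n01 = n00 + nx        # bottom-left
            n11 = n01 + 1         # bottom-right
            tris.append((n00, n10, n11))
            tris.append((n00, n11, n01))
    return nodes, tris
```

For instance, a hypothetical 15 x 15 grid over a 352 x 288 frame yields 225 nodes and 2 * 14 * 14 = 392 triangles, i.e., a node budget in the neighborhood of the ~214 nodes used for the content-based benchmarks; since the node positions are implied by the grid, no mesh-structure overhead needs to be transmitted.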
The MC-PSNR values refer to the prediction PSNR of each frame based on the original of the previous frame, using the affine motion field interpolated from the node-point motion vectors. An initial content-based mesh with 200 nodes is designed for the first frame of the sequence and tracked by the proposed algorithm. The initial uniform and content-based mesh models, overlaid on the first frame, are depicted in Fig. 9(a) and (b), respectively. The MC-PSNR versus frame number obtained by the forward-tracking algorithm is plotted in Fig. 10. The variation of the number of nodes versus frame number (governed by the mesh refinement algorithm), as well as the number of nodes whose coordinates need to be transmitted (as overhead) at each frame, are plotted in Fig. 11. The fluctuation in the number of nodes used at each frame is due to the fact that the number of added nodes may differ from the number of deleted nodes. The average number of nodes used over 100 frames is approximately 214. Hence, approximately the same number of nodes is used in the benchmark experiments. That is, in the case of redesigning a uniform mesh, a regular grid of nodes is used at each frame [see Fig. 9(a)]; and in the case of redesigning a content-based mesh, 214 nodes are employed at each frame. The MC-PSNR values for each frame by the two redesign approaches are also plotted in Fig. 10. In all of these experiments, the same dense motion field has been used, estimated by the method of Lucas-Kanade implemented in three steps using 11 × 11, 9 × 9, and 7 × 7 square blocks, respectively. For the purpose of spatio-temporal gradient estimation, each frame is blurred with a 5 × 5 Gaussian kernel with variance 2.5 in each direction. (Recall that BTBC detection requires dense motion estimates; furthermore, the node-point motion vectors are sampled from this dense motion field.) The BTBC detection threshold is set equal to eight; the MF threshold is nine.
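The MC-PSNR metric defined above can be sketched directly: prediction PSNR of the motion-compensated frame against the actual frame, assuming the conventional 255 peak for 8-bit video and frames represented as flat pixel lists.

```python
import math

def mc_psnr(actual, predicted, peak=255.0):
    """Motion-compensation PSNR (dB) between an actual frame and its
    prediction from the previous original frame (flat pixel lists)."""
    assert len(actual) == len(predicted)
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
    if mse == 0:
        return float("inf")  # perfect prediction
    return 10.0 * math.log10(peak * peak / mse)
```

As a sanity check, a prediction that is uniformly off by 16 gray levels has MSE = 256 and an MC-PSNR of about 24.05 dB.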
Node-point motion vectors have been quantized with 0.25-pixel accuracy. The tracking performance of the mesh over frames 58-61 is demonstrated in Fig. 12. The detected BTBC region (in frame 59) and MF region (in frame 60) are depicted in Fig. 13(a) and (b), respectively. Inspection of Fig. 10 shows that the occlusion-adaptive forward mesh tracking algorithm is almost as good as redesigning a content-based mesh for each frame, and it is clearly superior to redesigning a uniform mesh at each frame. The average MC-PSNR over 100 frames was computed as 39.88 dB and 39.67 dB for the content-based redesign and occlusion-adaptive forward tracking methods, respectively, with the uniform redesign method approximately 2 dB lower. This is a very encouraging result, considering that the content-based redesign method requires transmission of all (214) node coordinates, whereas the tracking method only needs to transmit the boundary nodes of the BTBC regions (it suffices to describe the nodes to be deleted) and the coordinates of the nodes to be added. The average (over 100 frames) number of nodes whose coordinates need to be transmitted has been 11 for the mother

and daughter sequence (see Fig. 11).

Fig. 10. Motion-compensation PSNR versus frame number for a uniform mesh redesigned at each frame (uniform mesh); the forward-tracking, occlusion-adaptive, content-based mesh (occlusion-adaptive tracking); and a content-based mesh redesigned at each frame (redesign).

Fig. 11. Number of nodes whose coordinates need to be transmitted, and total number of nodes, versus frame number for the forward-tracking, occlusion-adaptive, content-based mesh.

Whereas redesigning a uniform mesh at each frame does not require any overhead transmission, its PSNR is well below (by approximately 2 dB) that of motion compensation by the forward-tracking mesh. That is, a significant improvement in PSNR can be achieved by sending an overhead that does not exceed a few hundred bits per frame.

V. DISCUSSION AND CONCLUSIONS

Forward-tracking mesh-based motion compensation serves as more than a mere tool for improving compression efficiency. It also enables such functionalities as nonlinear video editing for special effects, image registration for augmented reality, and synthetic-natural hybrid coding [23]. The proposed occlusion-adaptive, content-based mesh design and forward-tracking algorithm has several desirable characteristics, described below.

Robustness against occlusion boundary and motion estimation errors: Accurate estimation of occlusion boundaries is a challenging task, since motion estimates around occlusion regions are generally inaccurate. The proposed scheme provides the desired robustness against such errors by redesigning the mesh within the MF region at each frame. The efficacy of the resulting tracking algorithm can be assessed by how close the MC-PSNR value at each frame comes to that of redesigning a content-based mesh at each frame.
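The per-patch affine field underlying these MC-PSNR figures is obtained by interpolating the three node-point motion vectors of each triangle. A sketch via barycentric weights follows; the function and variable names are ours, not the paper's:

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates of point p with respect to triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    den = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / den
    w2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / den
    return w1, w2, 1.0 - w1 - w2

def affine_motion(p, tri_nodes, node_vectors):
    """Dense motion vector at pixel p inside a triangular patch,
    interpolated from the motion vectors at the patch's three nodes."""
    w1, w2, w3 = barycentric(p, *tri_nodes)
    vx = w1 * node_vectors[0][0] + w2 * node_vectors[1][0] + w3 * node_vectors[2][0]
    vy = w1 * node_vectors[0][1] + w2 * node_vectors[1][1] + w3 * node_vectors[2][1]
    return vx, vy
```

At a vertex the interpolated vector equals that node's vector, and it varies linearly inside the patch, so each triangle carries exactly one affine mapping; this is why only the coarsely sampled node vectors need to be transmitted.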
Extensibility for object scalability: Object-scalable mesh design and tracking introduces the additional requirement that no patch should cross over object boundaries in any frame, which imposes a constraint on the mesh design and tracking algorithms. The proposed scheme can easily be extended for object scalability, although that is not addressed in this paper.

The proposed method requires a forward dense motion field (within the changed region) for BTBC region estimation. If the estimation of a full dense motion field cannot be accommodated within the processing requirements of the application, the field may be interpolated from a subsampled motion field estimate, trading accuracy for speed. The density of the subsampled motion estimation may be adapted to the computational complexity requirements of the application.

Finally, the proposed occlusion-adaptive forward mesh tracking algorithm can be extended to a complete 2-D mesh-based codec by adding a model-failure (intra) coding module, node-point and motion-vector coding modules, and a bit-budget control module including a node-number control scheme. This is currently under investigation.

Fig. 12. Comparison of forward mesh tracking and redesign on frames 58-61: (a)-(d) tracking; (e)-(h) redesign.

Fig. 13. (a) BTBC region in frame 59. (b) MF region in frame 60.

APPENDIX
MOTION COMPENSATION BY REDESIGN

Here we present a forward motion-compensation scheme in which a new mesh is designed between each pair of frames, for comparison purposes. This method does not involve tracking; thus, it can be used as a benchmark to evaluate the performance of the proposed forward-tracking algorithm. Note that the redesign method can be used for either forward or backward motion compensation. The algorithm for the case of forward motion compensation is summarized below.

1) Set k = 1.
2) Estimate the dense motion field from the kth to the (k+1)st frame, and find the BTBC region in the kth frame (see Sections II-B and II-C).
3) Design a mesh (uniform or content-based) for the kth frame.
4) Estimate node-point motion vectors as described in Section III-A, and perform motion compensation (see Section III-B).
5) Increment k by 1. Go to Step 2).

A flowchart of this algorithm is shown in Fig. 8(b). A uniform mesh is obtained by segmenting the frame into equal-size triangular elements; hence, there is no overhead to transmit the mesh structure. Note that in case a content-based mesh is employed, no nodes are allowed within the BTBC region (see Section II-D), and all node locations need to be transmitted at each frame.

REFERENCES

[1] H. G. Musmann, M. Hotter, and J. Osterman, "Object oriented analysis-synthesis coding of moving images," Signal Process.: Image Commun., vol. 1, Oct.
[2] H. Li, A. Lundmark, and R. Forchheimer, "Image sequence coding at very low bit-rates: A review," IEEE Trans. Image Processing, vol. 3, Sept.
[3] K. Aizawa and T. S. Huang, "Model-based image coding: Advanced video coding techniques for very low bit-rate applications," Proc. IEEE, vol. 83, Feb.
[4] T. Ebrahimi, E. Reusens, and W. Li, "New trends in very low bit-rate video coding," Proc. IEEE, vol. 83, June.
[5] ITU-T Draft Recommend. H.263, "Video coding for low bit-rate communication (TMN5)," July.
[6] V. Seferidis and M. Ghanbari, "General approach to block-matching motion estimation," Opt. Eng., vol. 32, July.
[7] M. Orchard and G. Sullivan, "Overlapped block motion compensation: An estimation-theoretic approach," IEEE Trans. Image Processing, vol. 3, Sept.
[8] H. Brusewitz, "Motion compensation with triangles," in Proc. 3rd Int. Conf. 64-kbit Coding of Moving Video, Rotterdam, The Netherlands, Sept.
[9] G. J. Sullivan and R. L. Baker, "Motion compensation for video compression using control grid interpolation," in Proc. ICASSP '91, Toronto, Canada.
[10] Y. Nakaya and H. Harashima, "Motion compensation based on spatial transformations," IEEE Trans. Circuits Syst. Video Technol., vol. 4, June.
[11] C. L. Huang and C. Y. Hsu, "A new motion compensation method for image sequence coding using hierarchical grid interpolation," IEEE Trans. Circuits Syst. Video Technol., vol. 4.
[12] J. Nieweglowski, T. G. Campbell, and P. Haavisto, "A novel video coding scheme based on temporal prediction using digital image warping," IEEE Trans. Consumer Electron., vol. 39, Aug.
[13] Y. Wang and O. Lee, "Active mesh: A feature seeking and tracking image sequence representation scheme," IEEE Trans. Image Processing, vol. 3, Sept.
[14] W. B. Thompson, K. M. Mutch, and V. A. Berzins, "Dynamic occlusion analysis in optical flow fields," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-7.
[15] M. Hoetter and R. Thoma, "Image segmentation based on object-oriented mapping parameter estimation," Signal Process., vol. 15.
[16] M. Irani, B. Rousso, and S. Peleg, "Computing occluding and transparent motions," Int. J. Comput. Vis., vol. 12, pp. 5-16.
[17] W.-F. Lee and C.-K. Chan, "Two-dimensional split and merge algorithm for image coding," in Proc. SPIE Conf. Visual Communications and Image Processing '95, May 1995, vol. 2501.
[18] P. Gerken, "Object-based analysis-synthesis coding of image sequences at very low bit-rates," IEEE Trans. Circuits Syst. Video Technol., vol. 4, June.
[19] D. T. Lee and B. J. Schachter, "Two algorithms for constructing a Delaunay triangulation," Int. J. Comput. Inform. Sci., vol. 9, no. 3.
[20] B. D. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," in Proc. DARPA Image Understanding Workshop, 1981.
[21] B. K. P. Horn and B. G. Schunck, "Determining optical flow," Artif. Intell., vol. 17.
[22] A. M. Tekalp, Digital Video Processing. Englewood Cliffs, NJ: Prentice-Hall.
[23] C. Toklu, A. T. Erdem, M. I. Sezan, and A. M. Tekalp, "Tracking motion and intensity variations using hierarchical 2-D mesh modeling for synthetic object transfiguration," Graph. Models Image Process., vol. 58, Nov.
[24] Y. Altunbasak and A. M. Tekalp, "Closed-form connectivity-preserving solutions for motion compensation using 2-D meshes," this issue.
[25] ISO/IEC JTC1/SC29/WG11 MPEG Document N999.

Yucel Altunbasak, for a photograph and biography, see this issue.
A. Murat Tekalp (S'80-M'82-SM'91), for a photograph and biography, see this issue.


More information

HOUGH TRANSFORM CS 6350 C V

HOUGH TRANSFORM CS 6350 C V HOUGH TRANSFORM CS 6350 C V HOUGH TRANSFORM The problem: Given a set of points in 2-D, find if a sub-set of these points, fall on a LINE. Hough Transform One powerful global method for detecting edges

More information

Compression of Light Field Images using Projective 2-D Warping method and Block matching

Compression of Light Field Images using Projective 2-D Warping method and Block matching Compression of Light Field Images using Projective 2-D Warping method and Block matching A project Report for EE 398A Anand Kamat Tarcar Electrical Engineering Stanford University, CA (anandkt@stanford.edu)

More information

EECS 556 Image Processing W 09

EECS 556 Image Processing W 09 EECS 556 Image Processing W 09 Motion estimation Global vs. Local Motion Block Motion Estimation Optical Flow Estimation (normal equation) Man slides of this lecture are courtes of prof Milanfar (UCSC)

More information

Learning based face hallucination techniques: A survey

Learning based face hallucination techniques: A survey Vol. 3 (2014-15) pp. 37-45. : A survey Premitha Premnath K Department of Computer Science & Engineering Vidya Academy of Science & Technology Thrissur - 680501, Kerala, India (email: premithakpnath@gmail.com)

More information

Image Coding with Active Appearance Models

Image Coding with Active Appearance Models Image Coding with Active Appearance Models Simon Baker, Iain Matthews, and Jeff Schneider CMU-RI-TR-03-13 The Robotics Institute Carnegie Mellon University Abstract Image coding is the task of representing

More information

Adaptive Multi-Stage 2D Image Motion Field Estimation

Adaptive Multi-Stage 2D Image Motion Field Estimation Adaptive Multi-Stage 2D Image Motion Field Estimation Ulrich Neumann and Suya You Computer Science Department Integrated Media Systems Center University of Southern California, CA 90089-0781 ABSRAC his

More information

Predictive Interpolation for Registration

Predictive Interpolation for Registration Predictive Interpolation for Registration D.G. Bailey Institute of Information Sciences and Technology, Massey University, Private bag 11222, Palmerston North D.G.Bailey@massey.ac.nz Abstract Predictive

More information

Pre- and Post-Processing for Video Compression

Pre- and Post-Processing for Video Compression Whitepaper submitted to Mozilla Research Pre- and Post-Processing for Video Compression Aggelos K. Katsaggelos AT&T Professor Department of Electrical Engineering and Computer Science Northwestern University

More information

Depth Estimation for View Synthesis in Multiview Video Coding

Depth Estimation for View Synthesis in Multiview Video Coding MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Depth Estimation for View Synthesis in Multiview Video Coding Serdar Ince, Emin Martinian, Sehoon Yea, Anthony Vetro TR2007-025 June 2007 Abstract

More information

Multiple Motion and Occlusion Segmentation with a Multiphase Level Set Method

Multiple Motion and Occlusion Segmentation with a Multiphase Level Set Method Multiple Motion and Occlusion Segmentation with a Multiphase Level Set Method Yonggang Shi, Janusz Konrad, W. Clem Karl Department of Electrical and Computer Engineering Boston University, Boston, MA 02215

More information

Reduced Frame Quantization in Video Coding

Reduced Frame Quantization in Video Coding Reduced Frame Quantization in Video Coding Tuukka Toivonen and Janne Heikkilä Machine Vision Group Infotech Oulu and Department of Electrical and Information Engineering P. O. Box 500, FIN-900 University

More information

Optimizing the Deblocking Algorithm for. H.264 Decoder Implementation

Optimizing the Deblocking Algorithm for. H.264 Decoder Implementation Optimizing the Deblocking Algorithm for H.264 Decoder Implementation Ken Kin-Hung Lam Abstract In the emerging H.264 video coding standard, a deblocking/loop filter is required for improving the visual

More information

Video Compression Method for On-Board Systems of Construction Robots

Video Compression Method for On-Board Systems of Construction Robots Video Compression Method for On-Board Systems of Construction Robots Andrei Petukhov, Michael Rachkov Moscow State Industrial University Department of Automatics, Informatics and Control Systems ul. Avtozavodskaya,

More information

Fast Natural Feature Tracking for Mobile Augmented Reality Applications

Fast Natural Feature Tracking for Mobile Augmented Reality Applications Fast Natural Feature Tracking for Mobile Augmented Reality Applications Jong-Seung Park 1, Byeong-Jo Bae 2, and Ramesh Jain 3 1 Dept. of Computer Science & Eng., University of Incheon, Korea 2 Hyundai

More information

Fast Decision of Block size, Prediction Mode and Intra Block for H.264 Intra Prediction EE Gaurav Hansda

Fast Decision of Block size, Prediction Mode and Intra Block for H.264 Intra Prediction EE Gaurav Hansda Fast Decision of Block size, Prediction Mode and Intra Block for H.264 Intra Prediction EE 5359 Gaurav Hansda 1000721849 gaurav.hansda@mavs.uta.edu Outline Introduction to H.264 Current algorithms for

More information

Reducing/eliminating visual artifacts in HEVC by the deblocking filter.

Reducing/eliminating visual artifacts in HEVC by the deblocking filter. 1 Reducing/eliminating visual artifacts in HEVC by the deblocking filter. EE5359 Multimedia Processing Project Proposal Spring 2014 The University of Texas at Arlington Department of Electrical Engineering

More information

International Journal of Emerging Technology and Advanced Engineering Website: (ISSN , Volume 2, Issue 4, April 2012)

International Journal of Emerging Technology and Advanced Engineering Website:   (ISSN , Volume 2, Issue 4, April 2012) A Technical Analysis Towards Digital Video Compression Rutika Joshi 1, Rajesh Rai 2, Rajesh Nema 3 1 Student, Electronics and Communication Department, NIIST College, Bhopal, 2,3 Prof., Electronics and

More information

CONTENT ADAPTIVE SCREEN IMAGE SCALING

CONTENT ADAPTIVE SCREEN IMAGE SCALING CONTENT ADAPTIVE SCREEN IMAGE SCALING Yao Zhai (*), Qifei Wang, Yan Lu, Shipeng Li University of Science and Technology of China, Hefei, Anhui, 37, China Microsoft Research, Beijing, 8, China ABSTRACT

More information

A Comparison of Still-Image Compression Standards Using Different Image Quality Metrics and Proposed Methods for Improving Lossy Image Quality

A Comparison of Still-Image Compression Standards Using Different Image Quality Metrics and Proposed Methods for Improving Lossy Image Quality A Comparison of Still-Image Compression Standards Using Different Image Quality Metrics and Proposed Methods for Improving Lossy Image Quality Multidimensional DSP Literature Survey Eric Heinen 3/21/08

More information

Peripheral drift illusion

Peripheral drift illusion Peripheral drift illusion Does it work on other animals? Computer Vision Motion and Optical Flow Many slides adapted from J. Hays, S. Seitz, R. Szeliski, M. Pollefeys, K. Grauman and others Video A video

More information

Nonrigid Surface Modelling. and Fast Recovery. Department of Computer Science and Engineering. Committee: Prof. Leo J. Jia and Prof. K. H.

Nonrigid Surface Modelling. and Fast Recovery. Department of Computer Science and Engineering. Committee: Prof. Leo J. Jia and Prof. K. H. Nonrigid Surface Modelling and Fast Recovery Zhu Jianke Supervisor: Prof. Michael R. Lyu Committee: Prof. Leo J. Jia and Prof. K. H. Wong Department of Computer Science and Engineering May 11, 2007 1 2

More information

Motion Estimation using Block Overlap Minimization

Motion Estimation using Block Overlap Minimization Motion Estimation using Block Overlap Minimization Michael Santoro, Ghassan AlRegib, Yucel Altunbasak School of Electrical and Computer Engineering, Georgia Institute of Technology Atlanta, GA 30332 USA

More information

Noise Reduction in Image Sequences using an Effective Fuzzy Algorithm

Noise Reduction in Image Sequences using an Effective Fuzzy Algorithm Noise Reduction in Image Sequences using an Effective Fuzzy Algorithm Mahmoud Saeid Khadijeh Saeid Mahmoud Khaleghi Abstract In this paper, we propose a novel spatiotemporal fuzzy based algorithm for noise

More information

DETECTION AND ROBUST ESTIMATION OF CYLINDER FEATURES IN POINT CLOUDS INTRODUCTION

DETECTION AND ROBUST ESTIMATION OF CYLINDER FEATURES IN POINT CLOUDS INTRODUCTION DETECTION AND ROBUST ESTIMATION OF CYLINDER FEATURES IN POINT CLOUDS Yun-Ting Su James Bethel Geomatics Engineering School of Civil Engineering Purdue University 550 Stadium Mall Drive, West Lafayette,

More information

Reconstruction PSNR [db]

Reconstruction PSNR [db] Proc. Vision, Modeling, and Visualization VMV-2000 Saarbrücken, Germany, pp. 199-203, November 2000 Progressive Compression and Rendering of Light Fields Marcus Magnor, Andreas Endmann Telecommunications

More information

Enhancing DubaiSat-1 Satellite Imagery Using a Single Image Super-Resolution

Enhancing DubaiSat-1 Satellite Imagery Using a Single Image Super-Resolution Enhancing DubaiSat-1 Satellite Imagery Using a Single Image Super-Resolution Saeed AL-Mansoori 1 and Alavi Kunhu 2 1 Associate Image Processing Engineer, SIPAD Image Enhancement Section Emirates Institution

More information

Digital Video Processing

Digital Video Processing Video signal is basically any sequence of time varying images. In a digital video, the picture information is digitized both spatially and temporally and the resultant pixel intensities are quantized.

More information

Research Article Block-Matching Translational and Rotational Motion Compensated Prediction Using Interpolated Reference Frame

Research Article Block-Matching Translational and Rotational Motion Compensated Prediction Using Interpolated Reference Frame Hindawi Publishing Corporation EURASIP Journal on Advances in Signal Processing Volume 2010, Article ID 385631, 9 pages doi:10.1155/2010/385631 Research Article Block-Matching Translational and Rotational

More information

EE Low Complexity H.264 encoder for mobile applications

EE Low Complexity H.264 encoder for mobile applications EE 5359 Low Complexity H.264 encoder for mobile applications Thejaswini Purushotham Student I.D.: 1000-616 811 Date: February 18,2010 Objective The objective of the project is to implement a low-complexity

More information

A MOTION MODEL BASED VIDEO STABILISATION ALGORITHM

A MOTION MODEL BASED VIDEO STABILISATION ALGORITHM A MOTION MODEL BASED VIDEO STABILISATION ALGORITHM N. A. Tsoligkas, D. Xu, I. French and Y. Luo School of Science and Technology, University of Teesside, Middlesbrough, TS1 3BA, UK E-mails: tsoligas@teihal.gr,

More information

An Automated Image-based Method for Multi-Leaf Collimator Positioning Verification in Intensity Modulated Radiation Therapy

An Automated Image-based Method for Multi-Leaf Collimator Positioning Verification in Intensity Modulated Radiation Therapy An Automated Image-based Method for Multi-Leaf Collimator Positioning Verification in Intensity Modulated Radiation Therapy Chenyang Xu 1, Siemens Corporate Research, Inc., Princeton, NJ, USA Xiaolei Huang,

More information

An Approach for Reduction of Rain Streaks from a Single Image

An Approach for Reduction of Rain Streaks from a Single Image An Approach for Reduction of Rain Streaks from a Single Image Vijayakumar Majjagi 1, Netravati U M 2 1 4 th Semester, M. Tech, Digital Electronics, Department of Electronics and Communication G M Institute

More information

Light Field Occlusion Removal

Light Field Occlusion Removal Light Field Occlusion Removal Shannon Kao Stanford University kaos@stanford.edu Figure 1: Occlusion removal pipeline. The input image (left) is part of a focal stack representing a light field. Each image

More information

Video Alignment. Literature Survey. Spring 2005 Prof. Brian Evans Multidimensional Digital Signal Processing Project The University of Texas at Austin

Video Alignment. Literature Survey. Spring 2005 Prof. Brian Evans Multidimensional Digital Signal Processing Project The University of Texas at Austin Literature Survey Spring 2005 Prof. Brian Evans Multidimensional Digital Signal Processing Project The University of Texas at Austin Omer Shakil Abstract This literature survey compares various methods

More information

Quality versus Intelligibility: Evaluating the Coding Trade-offs for American Sign Language Video

Quality versus Intelligibility: Evaluating the Coding Trade-offs for American Sign Language Video Quality versus Intelligibility: Evaluating the Coding Trade-offs for American Sign Language Video Frank Ciaramello, Jung Ko, Sheila Hemami School of Electrical and Computer Engineering Cornell University,

More information

Particle Tracking. For Bulk Material Handling Systems Using DEM Models. By: Jordan Pease

Particle Tracking. For Bulk Material Handling Systems Using DEM Models. By: Jordan Pease Particle Tracking For Bulk Material Handling Systems Using DEM Models By: Jordan Pease Introduction Motivation for project Particle Tracking Application to DEM models Experimental Results Future Work References

More information

NEW CONCEPT FOR JOINT DISPARITY ESTIMATION AND SEGMENTATION FOR REAL-TIME VIDEO PROCESSING

NEW CONCEPT FOR JOINT DISPARITY ESTIMATION AND SEGMENTATION FOR REAL-TIME VIDEO PROCESSING NEW CONCEPT FOR JOINT DISPARITY ESTIMATION AND SEGMENTATION FOR REAL-TIME VIDEO PROCESSING Nicole Atzpadin 1, Serap Askar, Peter Kauff, Oliver Schreer Fraunhofer Institut für Nachrichtentechnik, Heinrich-Hertz-Institut,

More information

Accurate 3D Face and Body Modeling from a Single Fixed Kinect

Accurate 3D Face and Body Modeling from a Single Fixed Kinect Accurate 3D Face and Body Modeling from a Single Fixed Kinect Ruizhe Wang*, Matthias Hernandez*, Jongmoo Choi, Gérard Medioni Computer Vision Lab, IRIS University of Southern California Abstract In this

More information

An Edge-Based Approach to Motion Detection*

An Edge-Based Approach to Motion Detection* An Edge-Based Approach to Motion Detection* Angel D. Sappa and Fadi Dornaika Computer Vison Center Edifici O Campus UAB 08193 Barcelona, Spain {sappa, dornaika}@cvc.uab.es Abstract. This paper presents

More information

Reduction of Blocking artifacts in Compressed Medical Images

Reduction of Blocking artifacts in Compressed Medical Images ISSN 1746-7659, England, UK Journal of Information and Computing Science Vol. 8, No. 2, 2013, pp. 096-102 Reduction of Blocking artifacts in Compressed Medical Images Jagroop Singh 1, Sukhwinder Singh

More information

SINGLE PASS DEPENDENT BIT ALLOCATION FOR SPATIAL SCALABILITY CODING OF H.264/SVC

SINGLE PASS DEPENDENT BIT ALLOCATION FOR SPATIAL SCALABILITY CODING OF H.264/SVC SINGLE PASS DEPENDENT BIT ALLOCATION FOR SPATIAL SCALABILITY CODING OF H.264/SVC Randa Atta, Rehab F. Abdel-Kader, and Amera Abd-AlRahem Electrical Engineering Department, Faculty of Engineering, Port

More information

A New Fast Motion Estimation Algorithm. - Literature Survey. Instructor: Brian L. Evans. Authors: Yue Chen, Yu Wang, Ying Lu.

A New Fast Motion Estimation Algorithm. - Literature Survey. Instructor: Brian L. Evans. Authors: Yue Chen, Yu Wang, Ying Lu. A New Fast Motion Estimation Algorithm - Literature Survey Instructor: Brian L. Evans Authors: Yue Chen, Yu Wang, Ying Lu Date: 10/19/1998 A New Fast Motion Estimation Algorithm 1. Abstract Video compression

More information

Optimal Estimation for Error Concealment in Scalable Video Coding

Optimal Estimation for Error Concealment in Scalable Video Coding Optimal Estimation for Error Concealment in Scalable Video Coding Rui Zhang, Shankar L. Regunathan and Kenneth Rose Department of Electrical and Computer Engineering University of California Santa Barbara,

More information