Framework for Processing Videos in the Presence of Spatially Varying Motion Blur


AFRL-AFOSR-JP-TR

Framework for Processing Videos in the Presence of Spatially Varying Motion Blur

Ambasamudram Rajagopalan
INDIAN INSTITUTE OF TECHNOLOGY MADRAS

02/10/2016
Final Report

Air Force Research Laboratory
AF Office Of Scientific Research (AFOSR)/IOA
Arlington, Virginia
Air Force Materiel Command

REPORT DOCUMENTATION PAGE (Standard Form 298, Rev. 8/98; prescribed by ANSI Std. Z39.18)

1. REPORT DATE (DD-MM-YYYY):
2. REPORT TYPE: Final
3. DATES COVERED (From - To): 30 Sep – Sep
4. TITLE AND SUBTITLE: Framework for Processing Videos in the Presence of Spatially Varying Motion Blur
5a. CONTRACT NUMBER: FA
5b. GRANT NUMBER: 13RSZ116_
5c. PROGRAM ELEMENT NUMBER: 61102F
6. AUTHOR(S): Prof. Ambasamudram Narayanan Rajagopalan
7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES): Indian Institute of Technology Madras (IIT Madras), Chennai, India
8. PERFORMING ORGANIZATION REPORT NUMBER: N/A
9. SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS(ES): AOARD, UNIT, APO AP
10. SPONSOR/MONITOR'S ACRONYM(S): AFRL/AFOSR/IOA (AOARD)
11. SPONSOR/MONITOR'S REPORT NUMBER(S): AOARD
12. DISTRIBUTION/AVAILABILITY STATEMENT: Distribution A: Approved for public release. Distribution is unlimited.
13. SUPPLEMENTARY NOTES:
14. ABSTRACT: Motion blurring is both a bane and a boon. Most works treat motion blur as a nuisance and seek ways and means to mitigate its effects so as to restore the original image. Unlike optical blur, motion blur can be space-varying even when the scene is planar; an example case is that of a rotating camera imaging a distant scene. However, it must be emphasized that motion blur can also serve as a vital cue for camera motion estimation, depth recovery, super-resolution, and image forensics.
15. SUBJECT TERMS: full motion video analysis, Image Processing, Video analysis, Information Technology
16. SECURITY CLASSIFICATION OF: a. REPORT U; b. ABSTRACT U; c. THIS PAGE U
17. LIMITATION OF ABSTRACT: SAR
18. NUMBER OF PAGES: 41
19a. NAME OF RESPONSIBLE PERSON: Seng Hong, Ph.D.
19b. TELEPHONE NUMBER (Include area code):

FINAL REPORT FOR AOARD-AFRL (FA , Year )

TITLE: Framework for Processing Videos in the Presence of Spatially Varying Motion Blur
PI: Prof. A.N. Rajagopalan, Indian Institute of Technology Madras
AFRL POC: Dr. Guna Seetharaman, Civ DR-IV AFRL/RIEA
AOARD PM: Dr. Seng Hong
AFOSR PM: Dr. Tristan Nguyen, AFOSR/RSL
Duration: Oct 1, 2013 – September 30, 2014
Cost: 50K (FY14)

1 Introduction

The current proposal is focused on basic (6.1) level research on full motion video analysis - a topic of importance to the U.S. Air Force - with a potential impact on image analysis, characterization and exploitation. The volume of full motion video clips that we process has grown exponentially. These images are typically acquired for surveillance purposes, collected persistently over a fixed field of view, albeit with varying degrees of relative motion between the camera and objects within the scene. The key challenge is to handle the complexities (including loss of resolution) that arise from space-varying local blurring due to camera motion. Recent times have seen the resurgence of motion blur as an area of great interest to computer vision and image processing researchers. Motion blur results when there is relative motion between the camera and the scene. For planar scenes, the shape of the blur kernel is a function of camera motion, while the weights of the kernel can be related to the exposure time corresponding to the set of geometric transformations that the camera traversed along its motion trajectory. Motion blur has acquired special significance with hand-held imaging, aerial imaging, and imaging on the move shooting into prominence. It is also relevant to situations where the camera is still but the scene comprises several moving objects. Motion blurring is both a bane and a boon. Most works treat motion blur as a nuisance and seek ways and means

to mitigate its effects so as to restore the original image. Unlike optical blur, motion blur can be space-varying even when the scene is planar; an example case is that of a rotating camera imaging a distant scene. However, it must be emphasized that motion blur can also serve as a vital cue for camera motion estimation, depth recovery, super-resolution, image forensics, etc. In this report, we discuss the efforts carried out jointly with Dr. Guna Seetharaman during the period Oct. 1, 2013 to Sept. 30, 2014. There was regular exchange of information between the PI and the AFRL collaborator, including physical meetings along the sidelines of conferences. We first investigate the recovery of the normal of a planar scene imaged by a moving camera. Here, the motion blur is harnessed as a cue for estimation of the normal, since the extent of motion blur at an image point is dictated both by scene structure and camera motion. We have developed a scheme for recovering the orientation of a planar scene from a single translationally motion blurred image. By leveraging the homography relationship among image coordinates of 3D points lying on a plane, and by exploiting natural correspondences among the extremities of the blur kernels derived from the motion blurred observation, the proposed method can accurately infer the normal of the planar surface. We validate our approach on synthetic as well as real planar scenes. Next, we addressed the problem of image registration using low-rank, sparse-error matrix decomposition when there are geometric as well as photometric differences in the given image pair. An additional challenge is to perform registration and change detection for large motion blurred images. The unreasonable demand that this task puts on computational and memory resources precludes the possibility of any direct attempt at solving this problem.
We handle this issue by observing that the camera motion experienced by a sufficiently large sub-image is approximately the same as that of the entire image itself. We devise an algorithm for judicious sub-image selection so that the camera motion can be deciphered correctly, irrespective of the presence or absence of occluders. We adopt a reblur-difference framework to detect changes, as this is an artifact-free pipeline unlike the traditional deblur-difference approach. We demonstrate the results of our algorithm on both synthetic and real data. Following this, we attempt to solve the problem of motion deblurring, which has significant ramifications in aerial imaging. Our work deals with deblurring of aerial imagery, and we develop a methodology for blind restoration of spatially varying blur induced by camera motion caused by instabilities of the moving platform. A sharp image is beneficial not only from the perspective of visual appeal but also because it forms the basis for applications such as moving object tracking, change detection, and robust feature extraction. In the presence of general camera motion, the apparent motion of scene points in the image will vary at different locations, resulting in space-variant blurring.

However, due to the large distances involved in aerial imaging, we show that the blurred image of the ground plane can be expressed as a weighted average of geometrically warped instances of the original focused but unknown image. The weight corresponding to each warp denotes the fraction of the total exposure duration the camera spent in that pose. Given a single motion blurred aerial observation, we propose a scheme to estimate the original focused image affected by arbitrarily-shaped blur kernels. The latent image and its associated warps are estimated by optimizing suitably derived cost functions with judiciously chosen priors within an alternating minimization framework. Several results are given on the challenging VIRAT aerial dataset for validation. In the following sections, we discuss each of the above problems in more detail. While some of the results of these efforts have already been published in IEEE conferences and journals, others are under review in prestigious venues.

2 Normal Inference from a Single Motion Blurred Image

An extensively researched area in computer vision is the recovery of 3D structure from image intensities [1]. Well-known cues for depth recovery include disparity, optical flow, texture, shading, defocus blur and motion blur, to name a few. While estimation of 3D depth/shape has been of general interest, there have also been works targeting the special case of inferring planar 3D geometry (such as the Manhattan model). This is due to the fact that the world around us can, in many cases, be modeled as piecewise planar. Approximating a 3D scene with planes (where possible) has a tremendous advantage in terms of reducing computational complexity. Estimation of the surface normals of a scene/object plays a crucial role in identifying the 3D geometry/shape of that scene/object. The elegant homography relationship between two images (original and transformed due to relative motion between camera and scene) holds for scene points lying on a plane in the 3D world.
Estimating a plane involves finding its surface normal and the perpendicular distance from the center of the camera to the plane. The relevance of this problem is evident from the many works that exist in the literature. Clark et al. [2] implemented a technique to recover the orientation of text planes using perspective geometry. In [3], Farid et al. reveal that the projection of a planar texture having random phase leads to higher-order correlations in the frequency domain, and that these correlations are proportional to the orientation of the plane. Greinera et al. [4] have proposed a method to determine the surface normal using projective geometry and spectral analysis. Haines et al. [5] describe a technique that makes use of prior training data gathered in an urban environment to classify planar/non-planar surfaces

and to compute the orientation of the planes. We propose to use motion blur as a cue to estimate the orientation of a planar scene given a single motion blurred image of the plane. Usually, blurring is considered a nuisance whose effect needs to be removed. However, works do exist that, in fact, use blur (optical/motion) as a cue to infer valuable information such as scene depth and the relative motion of the camera with respect to the scene. To the best of our knowledge, the only method to estimate plane orientation using blur as a cue is the recent work by McCloskey et al. [6], who proposed a method based on blur gradients to evaluate the planar orientation (slant and tilt angles) from a single image using optical blur as a cue. They exploit the relationship between blur variations for the equifocal (fronto-parallel) plane and a plane's tilt and slant angles. For a fronto-parallel scene, all the pixels in the image have the same amount of blur. In the case of an inclined plane, the amount of blur varies inversely with depth. The user has to manually mark a patch of interest for which slant and tilt angles are estimated. Their work assumes a homogeneously textured observation. We propose an interesting approach (a first of its kind) to determine the surface normal of a plane from a single motion-blurred image. We exploit the homography relation that exists in the image domain under camera motion to determine the surface normal. For a planar scene, the blurred image can be represented as a weighted average of warped versions of the unblurred image. This representation helps in characterizing the space-variant blur by a set of global homographies. We extract patches from the image and estimate blur kernels at these patches. Using the correspondences among the extremities of blur kernels at different locations, we set up a system of linear equations that is solved to yield the surface normal.
2.1 Planar motion blur

Motion blur in an image is due to relative motion between the camera and the scene during the exposure time. Since the camera sensor sees different scene points at different instants of time within the exposure window, these intensities get averaged, resulting in a blurred image. Let g be the blurred image captured by a camera with exposure time E_t, and let f be the original image (without camera shake). During the exposure time, f may have undergone a set of transformations due to relative motion between the camera and the scene. The transformed image at time instant τ can be explained using the homography H_τ as g_τ(H_τ(x)) = f(x), where x represents pixel coordinates. Therefore, the blurred image can be modeled as the average of transformed versions of f during the exposure time E_t. The blurred image intensity at

a location x can then be expressed as

    g(x) = (1/E_t) ∫_0^{E_t} f(H_τ^{-1}(x)) dτ

The homography relation in the image domain holds only for the set of scene points lying on a plane. The homography at time instant τ is given by H_τ = K (R_τ + (1/d) t_τ n^T) K^{-1}, where

    K = [q 0 0; 0 q 0; 0 0 1]

with q being the focal length of the camera (in pixels). Here R_τ denotes the rotation matrix at time instant τ and is a combination of the rotation matrices about the X, Y and Z axes; d is the perpendicular distance from the center of the camera to the plane and is a constant for the entire plane; t_τ = [T_{Xτ} T_{Yτ} T_{Zτ}]^T represents the 3D translation vector at time τ; and n = [N_X N_Y N_Z]^T denotes the surface normal of the planar scene. Following recent works, we assume that the motion blur is due to camera translations only. Therefore, R_τ is the 3×3 identity matrix I and the homography simplifies to

    H_τ = K (I + (1/d) t_τ n^T) K^{-1}.    (1)

2.2 Normal from point-correspondences

The aim of our work is to use a single motion blurred image to estimate the surface normal of a planar scene. It is straightforward to show that the blur kernel centered at location x can be written as

    h(x, u) = (1/E_t) ∫_0^{E_t} δ(u − (H_τ(x) − x)) dτ    (2)

i.e., the PSF represents the displacements undergone by an image point due to a set of motion transformations. The blur kernel induced will ideally consist of impulses at the corresponding shifts, and the weight of each impulse is governed by the fraction of the exposure time spent in that homography/pose. For a fronto-parallel scene, i.e., when n = [0 0 1]^T, the blur induced would be space-invariant when the camera undergoes only in-plane translations in the xy plane. This is because for a transformation t_τ = [T_{Xτ} T_{Yτ} 0]^T and n = [0 0 1]^T, we obtain

    [x_τ]   [1 0 qT_{Xτ}/d] [x]
    [y_τ] = [0 1 qT_{Yτ}/d] [y]    (3)
    [ 1 ]   [0 0     1    ] [1]

Clearly, the displacements in the x and y directions are constant (independent of the spatial location) and equal qT_{Xτ}/d and qT_{Yτ}/d, respectively. However, for a general inclined plane, the blur induced would be space-variant (due to the change

in depth of the scene) even for pure in-plane translational motion. Corresponding to this situation, we will have (for t_τ = [T_{Xτ} T_{Yτ} 0]^T)

    [x_τ]   [1 + N_X T_{Xτ}/d      N_Y T_{Xτ}/d     qN_Z T_{Xτ}/d] [x]
    [y_τ] = [    N_X T_{Yτ}/d   1 + N_Y T_{Yτ}/d    qN_Z T_{Yτ}/d] [y]    (4)
    [ 1 ]   [        0                  0                 1      ] [1]

Note that the displacements along x and y are no longer constant and, in fact, vary as a function of the spatial location of the image point. Since our interest is in estimating the surface normal n = [N_X N_Y N_Z]^T (and not the camera motion per se), we rewrite equation (4) as

    x_τ = [x y 1] [1 + N_X T_{Xτ}/d,  N_Y T_{Xτ}/d,  qN_Z T_{Xτ}/d]^T    (5)

and

    y_τ = [x y 1] [N_X T_{Yτ}/d,  1 + N_Y T_{Yτ}/d,  qN_Z T_{Yτ}/d]^T    (6)

In equations (5) and (6), assuming the point correspondences between (x, y) and (x_τ, y_τ) to be known, the unknowns are N_X, N_Y, N_Z, T_{Xτ}, T_{Yτ} and d, and these appear in the right-most column vector. Note that the ratio T_{Xτ}/d (or T_{Yτ}/d) is a common scale factor multiplying the normal and hence need not be estimated. At first glance, it might appear that one can enforce unit norm on the normal to reduce the unknowns by one. However, we refrain from doing so since we would lose the elegance of the linear equations (5) and (6) in the process. Thus, there are effectively three unknowns (N_X, N_Y, N_Z) to be estimated. Hence, we need at least three point correspondences to solve this problem. If we can find point displacements at other locations in the image corresponding to the same motion [T_{Xτ} T_{Yτ} 0]^T, then it should theoretically be possible to determine the unknowns. This, in fact, forms the basic premise of our method. As discussed earlier in equation (2), the PSF or blur kernel encapsulates the displacements of pixels under the influence of camera motion. Thus, if we can establish point correspondences (all influenced by the same motion) across at least three blur kernels, then we can solve for the surface normal. However, because blur kernel estimation is itself prone to small errors, it is only prudent that we use as many correspondences as possible.
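The mapping in equations (4)–(6) is easy to sanity-check numerically. The sketch below (NumPy; the focal length, normal and translation values are illustrative assumptions, not the report's data) builds the translational homography of equation (1) and evaluates per-pixel displacements, which come out constant for a fronto-parallel plane and space-varying for an inclined one:

```python
import numpy as np

def translational_homography(t, n, q, d=1.0):
    """H = K (I + t n^T / d) K^{-1} of eq. (1), with K = diag(q, q, 1).
    t: 3D camera translation, n: plane normal, q: focal length in pixels,
    d: camera-to-plane distance."""
    K = np.diag([q, q, 1.0])
    Kinv = np.diag([1.0 / q, 1.0 / q, 1.0])
    t = np.asarray(t, float).reshape(3, 1)
    n = np.asarray(n, float).reshape(1, 3)
    return K @ (np.eye(3) + t @ n / d) @ Kinv

def displacement(x, y, H):
    """Displacement (x_tau - x, y_tau - y) of pixel (x, y) under H,
    i.e. the per-pixel blur-kernel shift of eqs. (4)-(6)."""
    v = H @ np.array([x, y, 1.0])
    return v[0] / v[2] - x, v[1] / v[2] - y
```

Averaging warped copies of a sharp image over such homographies, with weights summing to one, reproduces the blurred-observation model of Section 2.1.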
Note that we need to identify corresponding points among the PSFs with respect to the same homography. On this issue, we wish to

point out an interesting fact: a natural correspondence exists among the extremities of blur kernels (i.e., the non-zero impulses at maximum distance from the origin of the PSF, on either side of the origin) across the image. We could potentially use the left (or right) extremity of the blur kernel in equation (5) or (6). Although it might then appear that one can solve for the normal, there is an ambiguity issue which we wish to highlight. Since the blur kernels are estimated independently across the image, there is a possibility of incurring spatial shifts in the PSFs when employing any blind deblurring method. A blurred patch b can be represented as the convolution of a latent patch l and a blur kernel h, i.e., b = l * h. Note that a shifted version (translational shift along the x and y directions) of the true h also satisfies the convolution relation because b(x) = l(x − s_0) * h(x + s_0): the shift introduced in the blur kernel is equivalently compensated in the latent image. Hence, if we choose only one extremity from the blur kernels, the surface normal cannot be estimated correctly due to possible misalignment errors. In order to resolve this issue, we choose the displacement between the extremities for computing correspondences, since this displacement is independent of any shift in the blur kernel. From equation (5), the x coordinate of the left-extreme point of a PSF (say h_1) due to translation (say T_{Xp}) can be expressed as

    x_{l1} = [x_1 y_1 1] [1 + N_X T_{Xp}/d,  N_Y T_{Xp}/d,  qN_Z T_{Xp}/d]^T    (7)

where (x_1, y_1) is the spatial location of the origin of h_1. Similarly, the x coordinate of the right-extreme point of h_1 due to another translation (say T_{Xq}) will be

    x_{r1} = [x_1 y_1 1] [1 + N_X T_{Xq}/d,  N_Y T_{Xq}/d,  qN_Z T_{Xq}/d]^T    (8)

Subtracting equation (7) from equation (8), we get

    Δx_1 = [x_1 y_1 1] [N_X (T_{Xq} − T_{Xp})/d,  N_Y (T_{Xq} − T_{Xp})/d,  qN_Z (T_{Xq} − T_{Xp})/d]^T    (9)

where Δx_1 indicates the difference between the x coordinates of the two extreme points of the blur kernel h_1. If we

can determine M such PSFs in the given blurred image, then we have a set of M (≥ 3) linear equations given by

    [Δx_1]   [x_1 y_1 1]
    [Δx_2] = [x_2 y_2 1] [N_X (T_{Xq} − T_{Xp})/d,  N_Y (T_{Xq} − T_{Xp})/d,  qN_Z (T_{Xq} − T_{Xp})/d]^T    (10)
    [ ⋮  ]   [    ⋮    ]
    [Δx_M]   [x_M y_M 1]

where (x_i, y_i) represents the spatial location of the origin of the i-th PSF. Note that (T_{Xq} − T_{Xp})/d is a constant that multiplies every component of n and hence need not be estimated. Therefore, one can solve equation (10) using least squares to infer the surface normal. The procedure explained above is, in fact, equally applicable to extreme points along the y direction too. Since our scheme relies on pixel motion, we propose to use Δx_i or Δy_i, whichever is higher in magnitude. Note that the fronto-parallel plane is a special case of our formulation in that the PSFs will be identical at all locations, i.e., Δx_i = k for all i, from which the solution can be inferred as n = [0 0 1]^T. Due to the translational motion of the camera, the PSFs vary with the spatial location of the patch: a patch closer to the camera contains more blur than a patch farther away from the camera. To determine the extremities of a PSF, we calculate the row sums and column sums of the PSF and choose the positions of the first and last non-zero values as the extreme points. These points are indicated by red (left-most) and green (right-most) pixels; pixels with the same color constitute point correspondences. Therefore, all the red (green) points correspond to the same homography.

PSF estimation

Although our interest is not in estimating camera motion, we need to determine the PSFs at different spatial locations in the blurred image. There exist several methods in the literature for blur kernel estimation. We use an off-the-shelf blind motion deblurring technique [7] to estimate the blur kernel for a selected patch. Estimating the PSF from a single motion blurred image is a very ill-posed problem, since there exist many possible combinations of PSF and latent image that can lead to the same blurred image.
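The two computational steps just described — extremity detection from row/column sums, and the least-squares solve of equation (10) — can be sketched as follows (NumPy; the helper names are ours, not the report's, and the synthetic values used to exercise them are illustrative):

```python
import numpy as np

def extremity_span(psf):
    """Displacement between the two extremities of a PSF along x and y,
    found from the first/last non-zero column and row sums. The span is
    invariant to any spatial shift of the kernel."""
    xs = np.flatnonzero(psf.sum(axis=0) > 0)  # column sums -> support in x
    ys = np.flatnonzero(psf.sum(axis=1) > 0)  # row sums    -> support in y
    return xs[-1] - xs[0], ys[-1] - ys[0]

def normal_from_spans(coords, spans, q):
    """Least-squares solve of eq. (10). coords: (M, 2) PSF origins (x_i, y_i),
    spans: (M,) extremity displacements, q: focal length in pixels.
    Returns the unit normal (the scale (T_Xq - T_Xp)/d drops out)."""
    A = np.hstack([np.asarray(coords, float), np.ones((len(coords), 1))])
    v, *_ = np.linalg.lstsq(A, np.asarray(spans, float), rcond=None)
    n = np.array([v[0], v[1], v[2] / q])  # undo the q factor on N_Z
    return n / np.linalg.norm(n)
```

Since the right-hand side of equation (10) is c·[N_X, N_Y, qN_Z]^T for an unknown constant c, the recovered vector has its third component divided by q and is then normalised, giving the normal up to scale (and sign).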
Hence, blind motion deblurring methods typically impose priors on the PSF and the latent image. The method of [7] shows that strong edges need not always lead to accurate PSF estimation and employs a two-phase approach to estimate the PSF. In the first phase, the authors define a metric to identify

useful edges. These edges are considered to estimate a coarse blur kernel. In the second phase, an iterative support detection method is used (instead of hard thresholding) to estimate the sparse blur kernel. The method executes fast and the accuracy of PSF estimation is quite satisfactory [7].

2.3 Experiments

In this section, we validate the proposed method with examples, both synthetic and real. Since both PSF estimation and extreme point detection can involve small errors, we propose to use about eight point correspondences (instead of the minimum of three) in equation (10) for robustness against noise. For the synthetic case, we chose a focal length q = 1200 pixels, which is a practical value. For these experiments, we assumed a surface normal, applied a set of homographies (camera translations) to an unblurred texture image, and computed the weighted average of the transformed images to yield the blurred observation. For the real case, the focal length (usually in mm) is gathered from the metadata itself and converted into pixels using the sensor dimensions and the resolution of the image. The value of d in equation (10) is the same for all points lying on the plane and can be any constant (other than zero). In this work, we are interested only in the orientation of the plane (and not in d, which is embedded in the constant that multiplies n in equation (10)).

2.3.1 Synthetic case

In the first example, we assume a fronto-parallel planar scene (n = [0 0 1]^T). We applied a set of translations along both the x and y directions, and the blurred image thus obtained is shown in Fig. 1(a). Due to the fronto-parallel nature of the scene, all the 3D points are at the same distance from the camera and experience identical blur. We randomly selected eight (spatially well-separated) patches in Fig. 1(a) and estimated their PSFs using [7]. These PSFs are shown in Fig. 1(b) and, as expected, have the same form.
The extreme points in each PSF are detected as discussed earlier, and the displacements between these points are substituted into equation (10) to solve for the normal. The estimated normal turned out to be n̂ = [ ], which is quite close to the true normal. Next, we used the same image as in the earlier example but assumed an inclined plane with normal n = [ ]. Following the procedure outlined earlier for the fronto-parallel case, a blurred observation (Fig. 2(a)) was generated using a set of transformations for the camera motion. We randomly selected eight patches (each of size pixels) and the corresponding PSFs estimated using [7] are shown in Fig. 2(b). Note that the blur kernel is space-variant, as expected. The extreme point correspondences among the blur kernels have also been indicated in Fig.

4(b). From the displacements of the extremities, the normal was estimated using equation (10) by employing only the x translations. The result was n̂ = [ ], which is close to the actual normal. The angular error between the actual (red arrow) and the estimated (green arrow) normal is only 3.7 degrees, as depicted in Fig. 2(a).

Figure 1: (a) A fronto-parallel scene with translational blur. (b) PSFs estimated using [7] at random locations in (a).
Figure 2: (a) Inclined plane with motion blur. (b) PSFs estimated at different locations in (a).
Figure 3: (a) Fronto-parallel blurred image. (b) PSFs estimated at different locations in (a).

2.3.2 Real case

We used a Canon 60D camera to capture real data. The sensor width of the camera was 23.2 mm and the spatial resolution was pixels. For the real experiments, we employed a translation stage to induce translational motion blur along both the x and y directions. In the first example, we captured a translationally blurred fronto-parallel textured board (Fig. 3(a)). Akin to the synthetic case, we chose eight different patches (again of size pixels) and determined the PSFs corresponding to the centers of these patches using [7]. The estimated PSFs are shown in Fig. 3(b) with extreme points marked. The normal estimated using equation (10) was found to be n̂ = [0 0 1]. Since we know a priori that the scene is fronto-parallel, we can conclude that the estimated normal is indeed correct.

Figure 4: (a)-(c) Blurred images of an inclined plane for different camera translations. (d)-(f) PSFs corresponding to figures (a)-(c), respectively.

Next, we captured a blurred image of an inclined plane as shown in Fig. 4(a). One can visually perceive the space-variant nature of the blur in this image. We randomly picked eight patches and the estimated PSFs are shown in Fig. 4(d). The extreme points in each PSF are represented with red (left-most) and green (right-most) colors. By following the same procedure discussed in the earlier experiments, the surface normal was found to be n̂ = [ ]. Since this is a real example, we do not know the true normal. We ascertained the correctness of the estimated normal by capturing blurred images of the same plane with two different camera translations. Ideally, the estimated normals should be identical irrespective of the camera motion. We captured two more blurred

images with different in-plane translations; these are shown in Figs. 4(b)-(c), with their corresponding PSFs in Figs. 4(e)-(f). The estimated normals were found to be n̂ = [ ] and [ ], respectively. Note that the estimated normals in all three cases are quite close to one another, reaffirming the correctness of our procedure. Furthermore, we physically measured the orientation of the plane and found it to be 30 degrees. This is indeed close to the value of 28 degrees obtained using the proposed method.

Figure 5: (a) Planar surface with motion blur. (b) PSFs estimated at different locations in (a).

We show another real example in Fig. 5(a). We captured an outdoor ground plane with the optical axis of the camera approximately parallel to the ground plane. The focal length was 18 mm. From Fig. 5(a), we observe that the image has significant variations in blur: the lower portion of the image (closer to the camera) has more blur than the upper portion (farther from the camera). To recover the surface normal, we selected eight patches spread out across the image. Their estimated PSFs are shown in Fig. 5(b). From the PSFs, we can infer that the translation is more prevalent along the x direction. The detected extremities in each PSF are also indicated in Fig. 5(b). By following the procedure discussed earlier, the normal was computed as [ ]. Because the optical axis was not exactly parallel to the plane, the resulting angle turns out to be 81 degrees, which is as expected. In the final example, a planar scene was imaged as shown in Fig. 6(a). The bottom of the plane is closest to the camera while the top edge of the plane is the farthest. The focal length of the camera (18 mm) was obtained from the image metadata. Using the sensor dimensions and image size, the focal length translates to 581 pixels. We randomly picked eight patches throughout the image and their corresponding PSFs are shown in Fig. 6(b).
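The metadata conversion used here is presumably the standard one: the focal length in millimetres scales to pixels via the ratio of image width to sensor width. A one-line sketch (the 5184-pixel width below is a made-up example, not the report's value):

```python
def focal_mm_to_pixels(f_mm, sensor_width_mm, image_width_px):
    """q [px] = f [mm] * (image width [px] / sensor width [mm])."""
    return f_mm * image_width_px / sensor_width_mm
```

For a downsampled image, the effective image width (and hence q) shrinks proportionally, which would explain an 18 mm lens mapping to a few hundred pixels.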
After substituting the focal length and the displacements of each PSF into equation (10), the computed surface normal was found to be [ ], indicated by a green arrow.

Figure 6: (a) A planar surface with translational motion blur. (b) Blur kernels extracted at different spatial locations in (a).

3 Efficient Change Detection for Large Motion Blurred Images

Feature-based approaches are commonly used in image registration. There are several methods for feature extraction, such as SIFT, SURF, ORB and MSER (Lowe et al. [8], Bay et al. [9], Rublee et al. [10], Matas et al. [11]). These algorithms are primarily designed to work on small to medium-sized images. Memory requirement is an important factor to consider when employing these approaches for high-resolution images. Huo et al. [12] showed that SIFT features require a prohibitively huge amount of memory for very large images. Another drawback of feature-based approaches on large images is incorrect feature matching due to the occurrence of multiple instances of similar objects across the image (Carleer et al. [13]). Coarse-to-fine strategies for feature matching are followed by Yu et al. [14] and Huo et al. [12] to enable matching. Within the scope of the problem tackled here, there is yet another deterrent to adopting a feature-based approach, and that is blur. Motion blur is a common occurrence in aerial imagery where the imaging vehicle is always on the move. In addition to geometric matching, photometric matching becomes essential in such a scenario. Feature-based approaches are not designed to handle the presence of blur and fail to reliably detect features in its presence. A traditional approach to handle this situation is to first deblur the observation, and then pass the resultant image on to the change detection pipeline, where it is compared with a clean reference image after feature-based registration. A number of approaches already exist in the literature to perform deblurring. Blind deconvolution methods recover a sharp image from the blurred image with an unknown blur kernel under the assumption of space-invariant blur. Fergus et al.
[15] take a natural-image-statistics-based Bayesian approach to estimate the blur kernel and deblur using the Richardson-Lucy algorithm. A two-phase approach, with kernel initialisation using edge priors and kernel refinement

based on iterative support detection, is employed by Xu et al. [16] for kernel estimation, and the deblurring is sought through TV-ℓ1 deconvolution. Space-variant blur approaches include that of Gupta et al. [17], who model a motion density function to represent the time spent in each camera pose, generate spatially varying blur kernels, and eventually restore the deblurred image using gradient-based optimisation. Whyte et al. [18] define a transformation spread function for space-variant blur, analogous to the point spread function for space-invariant blur, to restore the motion blurred image using a MAP approach. Hu et al. [19] estimate weights for each camera pose in a restricted pose space using a backprojection model, while deblurring is carried out by employing a gradient-based prior. Leveraging gradient sparsity, Xu et al. [20] propose a unified framework to perform both uniform and non-uniform image deblurring. An issue with such a deblur-difference framework is that it must deal with the annoying problem of artifacts that tend to get introduced during the course of deblurring. A more serious issue within the context of this work is that none of the deblurring methods are designed to handle very large images. Furthermore, the deblurring methods would fail if the occluder were not static, since the image would then be governed by two independent motions. In the problem of change detection, the goal is to detect the difference between a reference image with no artifacts and an observed image which is blurred and has viewpoint changes as well. We develop a unified framework to register the reference image with the blurred image and to simultaneously detect occlusions. The occluder is not constrained to be static. To address the issue of image size, we show that the camera motion can be elegantly extracted from only a part of the observation. For the reasons discussed earlier, we follow a reblur-difference pipeline instead of a deblur-difference pipeline. While Punnappurath et al.
[21] also followed a reblur-difference strategy, our work is more general and, in fact, subsumes theirs. Specifically, we use an optimisation framework with a partial non-negativity constraint which can handle occlusions of either polarity, and we efficiently tackle the issue of large image dimensions. In addition, our algorithm can deal with dynamic occluders. In our approach, the estimated camera motion is used to reblur the reference image to photometrically match it with the observed image, thereby detecting the changes. We develop a scheme to automatically select good sub-images from the given observation to enable reliable estimation of the camera motion. We propose a memory- and computation-efficient registration scheme to estimate the camera motion from the selected sub-image, irrespective of the presence or absence of occlusions in the sub-image. We advocate a reblur-difference pipeline for geometric as well as photometric registration of the reference image and the blurred observation for robust change detection.

3.1 Blur, Registration and Occlusion

In this section, we briefly discuss the motion blur model in a camera. We then show how to invoke an optimisation framework to simultaneously register the reference image with the blurred image and detect occlusions, if any.

3.1.1 Motion Blur Model

Each pixel in a digital camera embeds a sensor which collects photons from the scene. A digital circuit provides the intensity value based on the number of photons received. All the pixels are exposed for a finite period T_e, the exposure time of the image. The resultant intensity at each pixel is the average of all intensities that the pixel sees during the exposure period. Let us denote the camera path during the image exposure period by p(t) for 0 <= t <= T_e. Let f represent the image observed by the camera during an infinitesimal amount of time, and let g be the image observed by the camera with an exposure time T_e. Let the number of rows and columns in the images be M and N respectively, so that f, g \in R^{MN \times 1}. Then, we have

g = \frac{1}{T_e} \int_0^{T_e} f_{p(t)} \, dt,    (11)

where f_{p(t)} is the image observed by the camera due to the pose p(t) at a particular time t. When there is no motion, the camera observes the same scene during the entire exposure time, and hence a clean image without any blur is observed. In this case, p(t) = 0 for all 0 <= t <= T_e, and g = f. Thus f also represents the image seen by the camera with no motion during the exposure time T_e. In the presence of camera motion, the sensor array records different scenes at every instant during the exposure time. The resultant image thus embodies blur, and we have g \neq f. We discretise the continuous model in (11) with respect to a finite camera pose space P. We assume that the camera can undergo only a finite set of poses during the exposure time. Let us define P = \{p_i\}_{i=1}^{|P|} as the set of possible camera poses. We can write (11) equivalently as

g = \sum_{p_k \in P} \omega_{p_k} f_{p_k},    (12)

where f_{p_k} is the warped reference image f due to the camera pose p_k.
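The discretised model in (12) is just a weighted average of warped copies of f. Below is a minimal sketch, using integer in-plane translations as stand-in poses (real poses would also include rotations); `warp_translate` and `reblur` are hypothetical helper names, not from the report:

```python
import numpy as np

def warp_translate(f, dx, dy):
    """Warp f by an integer in-plane translation (a stand-in for a general
    pose warp; the report's poses also include in-plane rotations)."""
    return np.roll(np.roll(f, dy, axis=0), dx, axis=1)

def reblur(f, poses, weights):
    """Discrete blur model of (12): weighted average of warped copies of f."""
    assert abs(sum(weights) - 1.0) < 1e-9   # weights are exposure-time fractions
    g = np.zeros_like(f, dtype=float)
    for (dx, dy), w in zip(poses, weights):
        g += w * warp_translate(f, dx, dy)
    return g
```

With two poses of equal weight, the result is simply the average of the two shifted images, which is the behaviour equation (12) describes.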
Each scalar \omega_{p_k} represents the fraction of the exposure time that the camera stayed in the pose p_k. Thus we have \sum_{p_k} \omega_{p_k} = 1 if the camera takes only poses from the defined pose set P. The weights of all poses are stacked in the pose weight vector \omega. Since the averaging

effect removes the time dependency of the continuous camera path p(t), this discretisation model is valid. We assume that the scene is far enough from the camera that planarity can be assumed.

3.1.2 Joint Registration and Occlusion Detection

We now consider the problem of estimating the camera poses during exposure. Given a reference image f captured with no camera motion, and a blurred image g arising from an unknown camera motion, the following problem can be posed to solve for the camera motion:

\omega^* = \arg\min_{\omega} \|g - F\omega\|_2^2 + \lambda \|\omega\|_1 \quad \text{subject to } \omega \succeq 0.    (13)

Here F is the matrix which contains the warped copies of the reference image f in its columns, one for each camera pose in P. Within the whole pose space, the camera moves through only a small set of poses. This prior appears as the l1 norm in (13), which promotes sparsity of the pose weight vector. The above problem seeks the sparsest non-negative pose weight vector which satisfies the relation between the reference and blurred images. The matrix-vector multiplication F\omega is an equivalent form of (12). This model, however, does not accommodate occluding objects in the observation g, although occlusions are quite common in aerial surveillance. To handle this, let g_occ be the observed image captured with blur and occlusions. We model the occlusion as an additive term to g, giving g_occ = g + \chi. The occlusion image \chi can take both positive and negative values, since the occluded pixels can have intensities greater or lesser than the intensities explained purely by blur. This model can then be written as

g_{occ} = \begin{bmatrix} F & I_N \end{bmatrix} \begin{bmatrix} \omega \\ \chi \end{bmatrix} = A\xi.    (14)

Here A is a combined dictionary of warped reference images to represent blur and the N x N identity matrix to represent occlusions, and \xi is the combined weight vector, the first |P| elements of which represent the pose weights \omega while the remaining N elements represent the occlusion vector \chi. To solve this under-determined system, we leverage prior information about the camera motion and occlusion, viz.
the sparsity of the camera motion in the pose space and the sparsity of the occlusion in the spatial domain. Thus we impose an l1 norm prior on \xi. We estimate the combined weight vector by solving the following optimisation problem:

\xi^* = \arg\min_{\xi} \|g_{occ} - A\xi\|_2^2 + \lambda \|\xi\|_1 \quad \text{subject to } C\xi \succeq 0.    (15)
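Problem (15) can be illustrated with a small proximal-gradient (ISTA-style) solver, in which the partial constraint C\xi >= 0 becomes a projection of only the first |P| entries onto the non-negative orthant. This is an illustrative stand-in for whatever solver the report actually uses, under an assumed small dense A:

```python
import numpy as np

def solve_partial_nonneg_lasso(A, y, P, lam=1e-3, iters=200):
    """ISTA-style sketch of (15): l1-regularised least squares where only the
    first P entries (the pose weights) are constrained to be non-negative;
    the remaining entries (the occlusion vector) may take either sign."""
    xi = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    for _ in range(iters):
        grad = A.T @ (A @ xi - y)
        z = xi - grad / L
        xi = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        xi[:P] = np.maximum(xi[:P], 0.0)     # non-negativity on pose weights only
    return xi
```

On a toy identity dictionary this recovers a sparse weight vector whose pose part is non-negative while the occlusion part keeps its sign, which is exactly the asymmetry the constraint C\xi >= 0 encodes.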

where

C = \begin{bmatrix} I_{|P|} & 0 \\ 0 & 0 \end{bmatrix}.

As mentioned earlier, the occlusion vector can take both positive and negative values. Thus, unlike the work of Punnappurath et al. [21], who modify the signs of the identity matrix, we impose the non-negativity constraint only on the elements of the pose weight vector.

3.2 Registration of Very Large Images

Building the matrix A in (14) is a crucial step in our problem. The occlusion part of the matrix, I_N, can be stored and processed efficiently since it is a diagonal matrix. The first part of the matrix, F, contains the warped versions of f for all the poses in P. Though the reference image f operates in the intensity range [0, 255] and requires only an unsigned 8-bit integer per pixel, this is not the case for the storage of the warped versions. The pixel values of the warped image f_{p_k} can take floating-point values due to the bilinear interpolation used in its generation. Rounding off during the interpolation makes the equality in (12) only approximate, and hence might lead to a wrong solution. A single warped image needs MNd bits of storage, where d is the number of bits required to store a floating-point number. For even a 25-megapixel image with 5000 rows and 5000 columns, and with d = 32 bits, a warped image requires 8 x 10^8 bits, that is, 95.3 megabytes. If all three colour channels are used, this value triples. Storing all warps for the pose space as the matrix F thus warrants a huge amount of memory, which is infeasible in practical situations.

3.2.1 Pose Weight Estimation from Sub-images

Our solution to the large image problem stems from the observation that all the pixels in an image experience the same camera motion during the exposure period. We leverage this fact to estimate the pose weight vector from a subset of pixels in the image. Let f^(S) and g^(S) represent a portion of the reference and blurred images, respectively. The sub-image size is S x S, and f^(S), g^(S) \in R^{S^2 \times 1}.
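The storage figure quoted above can be checked directly; this assumes single-precision floats and binary megabytes:

```python
# Memory for one warped copy of f: M*N*d bits (d = bits per floating-point value).
M = N = 5000            # 25-megapixel image
d = 32                  # single-precision float
bits = M * N * d        # 8e8 bits for one warp
megabytes = bits / 8 / 2**20
print(round(megabytes, 1))   # about 95.4 MB per warp, per colour channel
```

So a pose space of even a few hundred poses would demand tens of gigabytes for F alone, which motivates the sub-image strategy that follows.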
We call these the reference sub-image and the blurred sub-image, respectively. We ignore the presence of occlusion in this discussion for clarity. The relation in (12) holds for f^(S) and g^(S) as well, i.e.,

g^{(S)} = \sum_{p_k \in P} \omega_{p_k} f^{(S)}_{p_k}.    (16)

The estimated pose weight vector \omega will be the same irrespective of whether we use f and g or f^(S) and g^(S) in (13). We propose to estimate the camera motion using only the sub-images, thus effectively circumventing the issue

Figure 7: Some of the reference (top row) and blurred (bottom row) images used in our experiments in Section 3.2.

of memory storage. To verify our proposition, we now perform experiments to estimate the camera motion from sub-images of large synthetically blurred images. We simulate five different continuous camera paths for a predefined set of discrete translation and rotation ranges, and use a set of five images for this experiment. We thus have a set of five reference images f and 25 blurred images g. Some of the reference and blurred images are shown in Fig. 7. We pick a pair of f and g, and for a given S we pick the sub-images f^(S) and g^(S). Using these two images, we estimate the pose weight vector \omega using (13). Since the motion involves combinations of rotations and translations, direct comparison of the original and estimated motion vectors may not yield a correct measure of error. Hence we measure the success of our estimation by reblurring. We warp f using the poses in P with the estimated weights \omega, and perform a weighted average of the warps, resulting in a reblurred reference image. We then calculate the reconstruction PSNR of the reblurred reference image with respect to the original blurred image g. If the motion estimation from the sub-image is correct, then the reblurred image will be close in appearance to the original blurred image, resulting in a high PSNR. We repeat this experiment for different values of S. The variation of PSNR with respect to S is shown in Fig. 8(a) for the two image sizes considered. For small values of S, the variation of motion blur within the sub-image is small and approximately mimics space-invariant blur. Hence solving (13) results in a wrong pose weight estimate, which in turn results in a poor PSNR between the reblurred and blurred images. The PSNR increases as S increases, since the blur variation inside the sub-image also increases. We observe that the PSNR value stabilises after a particular value of S. Beyond this

Figure 8: (a) PSNR in dB, and (b) correlation measure, for different sub-image sizes S. The two original image sizes are shown as blue circles and red squares.

Figure 9: Estimated blur kernels for sub-image sizes S = 100, 300 and 600. The blur kernels are displayed as binary images with non-zero values shown in white.

point, any further increase in S yields only marginal benefits in terms of correct estimation of the pose weights. The size of the sub-image is thus an important factor in estimating the true camera motion. Too small an S renders the notion of space-variant blur inside the sub-image invalid, and results in a wrong pose weight estimate. Too large an S raises storage and processing problems. In the following subsection, we formulate a method to automatically choose good sub-images for reliably estimating the camera motion.
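The reconstruction PSNR used above to validate the estimated pose weights is the standard definition; a small helper, assuming 8-bit intensities (peak value 255):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Reconstruction PSNR in dB between the reblurred reference image and
    the observed blurred image: 10*log10(peak^2 / MSE)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```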

3.2.2 Choosing a Good Sub-image

It is important to devise an automatic method to select a sub-image of a particular size at a particular location from the given large blurred observation. We develop a measure that indicates the quality of the selected sub-image for estimating the camera motion. Given a pair of reference and blurred sub-images f^(S) and g^(S) of size S, we randomly select N_h scattered locations across the sub-image. We crop small patches, f^(S)_k and g^(S)_k, from f^(S) and g^(S) respectively, for k = 1 to N_h. We approximate the blur as space-invariant within these patches, and estimate blur kernels using (13), allowing the pose space to contain only in-plane translations. Let us denote these blur kernels by h_k for k = 1 to N_h. If the selected sub-image has sufficient variation in blur across it, then each of these blur kernels will be different, since the patches are spread out spatially. Hence a comparison of these estimated kernels is a good way to decide the suitability of the sub-image for motion estimation. We advocate the use of normalised cross-correlation of the kernels for this decision. The normalised cross-correlation between two 2D kernels h_i and h_j is given by

NCC(h_i, h_j) = \frac{corr(h_i, h_j)}{\|h_i\|_2 \|h_j\|_2}.    (17)

Values of the matrix NCC lie in [0, 1]. We use the maximum value of this matrix as our measure to compare the blur kernels, i.e., the correlation measure

m(h_i, h_j) = \max NCC(h_i, h_j).    (18)

Note that m(h_i, h_j) attains a peak value of 1 if the two blur kernels are the same. If the sub-image size is small, then there will not be sufficient blur variation across it, and our measure will be close to 1. If the kernels are dissimilar, then m takes values close to 0. Fig. 9 shows four blur kernels of patches extracted randomly from sub-images of sizes S = 100, 300 and 600, with N_h = 4. Blur kernels corresponding to space-invariant blur will appear the same irrespective of the spatial location.
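Equations (17)-(18) can be sketched as follows. The explicit quadruple loop computes the full 2D cross-correlation over all relative offsets (fine for small kernels); `correlation_measure` is a hypothetical helper name:

```python
import numpy as np

def correlation_measure(h_i, h_j):
    """Correlation measure m of (17)-(18): the peak of the normalised 2D
    cross-correlation of two blur kernels; identical kernels give m = 1."""
    Hi, Wi = h_i.shape
    Hj, Wj = h_j.shape
    best = 0.0
    # slide h_j over h_i across all relative offsets (full correlation)
    for dy in range(-Hj + 1, Hi):
        for dx in range(-Wj + 1, Wi):
            s = 0.0
            for y in range(max(0, dy), min(Hi, dy + Hj)):
                for x in range(max(0, dx), min(Wi, dx + Wj)):
                    s += h_i[y, x] * h_j[y - dy, x - dx]
            best = max(best, s)
    return best / (np.linalg.norm(h_i) * np.linalg.norm(h_j))
```

By the Cauchy-Schwarz inequality the measure never exceeds 1, and it reaches 1 when the two kernels agree up to a translation, which is exactly the "similar kernels" case that flags an unsuitable sub-image.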
For a small sub-image of size S = 100, it can be clearly observed that the four kernels are similar. Hence the camera motion cannot be correctly explained by this sub-image. For S = 300, the blur kernels are more dissimilar, and for S = 600, they look completely different. Thus, higher values of S describe the motion better. From these four blur kernels, six measure values m are estimated, one for every pair. Fig. 8(b) shows the plot of the mean of these six values with respect to the sub-image size. The curve falls with increasing sub-image size, as expected, due to the continuous decrease in kernel similarity. A correspondence can be observed between the plots in Figs. 8(a) and (b): the correlation measure decreases initially with increasing S and stays almost constant after a certain

Figure 10: PSNR in dB for S = 600 and different occlusion sizes K.

value of S. Similarly, the reconstruction PSNR stabilises after it reaches a particular sub-image size. Based on these observations, we define a threshold T_m = 0.6 m_100, where m_100 is the correlation measure for S = 100, to accept or reject a sub-image for motion estimation. If m_{S_0} for a sub-image of a specific size S_0 is less than this threshold, we decide that the quality of the selected sub-image of size S_0 is good, and that the camera motion can be estimated from it.

3.2.3 Presence of Occlusion

A natural question to ask is how well our algorithm fares when there is occlusion in the selected sub-image itself. We add a random occlusion patch of size K x K to the reference image f. We blur this image using the generated camera motion path, the resultant image being g_occ. We slice the sub-images f^(S) and g^(S)_occ from f and g_occ respectively. We do not restrict the position of the sub-image with respect to the occlusion. Therefore, the sub-image can include the full occlusion, a part of the occlusion, or be devoid of the occlusion completely. Our combined dictionary A in (14) tackles the presence of blur and occlusion simultaneously. If occlusion is present, either fully or partially, it is accommodated by the weights of the identity matrix in A. If no occlusion is present, the occlusion weight vector will be zero. Thus, irrespective of the absence or presence (complete or partial) of the occluder in the sub-image, our formulation handles it elegantly. We next discuss the effect of the size of the occlusion for a chosen sub-image size S. We consider the worst case of the occlusion being present completely inside the chosen sub-image. We solve the optimisation problem in (15) with f^(S) and g^(S)_occ to arrive at the combined pose weight and occlusion weight vectors. Using the estimated \omega, we reblur the large reference image f.
We compare this reblurred image with the large blurred image g_occ, ignoring the values in the occlusion region, since this comparison is meant to verify the success of our motion estimation. Fig. 10 shows

how the PSNR varies with respect to the value of K for S = 600. We note that our algorithm tackles the presence of occlusion quite well. The motion estimation is correct, and thus the PSNR values are good, even when the occluder occupies up to half the sub-image area. Algorithm 1 shows our complete framework for choosing good sub-images automatically, estimating the motion and detecting changes. In our experiments, we use an \alpha value of 5, and the upper limit of S, S_max, is chosen as 900 based on the many experiments we carried out.

Algorithm 1:
Inputs: Reference image f, blurred and occluded image g.
Init: Pick four sub-images f^(100) based on Hu et al. [22], extract blur kernels and calculate m for each of the kernels. Average the four values to get m_100. Initialise S to a small starting size.
1. Pick a sub-image of size S. If m < 0.6 m_100, go to Step 4. Else, choose a different sub-image of the same size at a different location.
2. If a particular S has been chosen \alpha times, update S to the next larger size. Go to Step 1.
3. If S > S_max, declare the blur to be space-invariant. Use one of the estimated blur kernels itself as the camera pose weight vector. Go to Step 5.
4. Estimate the pose weight vector and occlusion weight vector for the selected sub-images f^(S) and g^(S) using (15).
5. Reblur the original reference image f using the estimated pose weight vector \omega.
6. Detect the changes by differencing the reblurred image and the original blurred image g.
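The sub-image selection loop of Algorithm 1 can be sketched as a small driver. The callables `measure` and `estimate`, the initial size `s0 = 200` and the increment `step = 100` are assumptions for illustration (the report does not state the initial size or increment):

```python
def select_subimage_and_estimate(f, g, m100, measure, estimate,
                                 s_max=900, alpha=5, step=100, s0=200):
    """Sketch of Algorithm 1's selection loop. measure(f, g, S) returns the
    correlation measure m for a candidate sub-image of size S; estimate(f, g, S)
    returns the pose/occlusion weights. Both stand in for the paper's steps."""
    S = s0
    while S <= s_max:
        for _ in range(alpha):                  # try alpha locations per size
            if measure(f, g, S) < 0.6 * m100:   # good: blur varies inside it
                return estimate(f, g, S)
        S += step                               # grow the sub-image and retry
    return None   # blur deemed space-invariant; fall back to a single kernel
```

The `None` return corresponds to Step 3 of Algorithm 1, where a single estimated kernel itself serves as the pose weight vector.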

3.3 Experiments

We first evaluate the performance of our algorithm using a synthetic example. A reference image is shown in Fig. 11(a). We selected the pose space as follows: in-plane translations [-8 : 1 : 8] pixels, in-plane rotations [-3 : 1 : 3] degrees. These ranges are also practically meaningful. To simulate the blur incurred due to camera shake, we manually generated a camera motion with a connected path in the pose space and initialised the weights. The synthesised camera motion was then applied to the same scene taken from a different viewpoint, with synthetically added occluders, to produce the blurred and occluded image in Fig. 11(b). To evaluate the proposed method, we followed the steps outlined in Algorithm 1, selected four sub-images of size 100 x 100 pixels (based on Hu et al. [22]), and calculated m_100 independently for each. The average value of m_100 was computed. Next, we picked a sub-image and calculated m. The four kernels computed within the sub-image bore a large degree of similarity, indicating that the space-varying nature of the blur was not being captured at this size. This step was repeated for five different sub-images of the same size, but the value of m they yielded was approximately equal to m_100, revealing that only a bigger sub-image can encapsulate the space-varying camera motion. Our algorithm converged for a larger sub-image for which the computed m was less than 0.6 m_100. The selected sub-images from the focused image, and from the blurred and occluded input image, are shown in Figs. 11(c) and 11(d), respectively. The positions of these sub-images are indicated by red boxes in Figs. 11(a) and (b). Note that the selected sub-image, incidentally, does not contain any occlusion. To handle pose changes between the two images, we first coarsely aligned the reference image and the blurred and occluded image at a lower resolution using a multiscale implementation similar to [21]. Fig. 11(e) shows the reference image reblurred using the estimated \omega.
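The final differencing step (reblur, subtract, threshold) might look like the following sketch; the threshold `tau` is an illustrative value, and simple thresholding stands in for whatever decision rule the report applies after differencing:

```python
import numpy as np

def detect_changes(reblurred, observed, tau=25.0):
    """Reblur-difference change detection: flag pixels whose absolute
    difference from the reblurred reference exceeds a threshold tau."""
    diff = np.abs(observed.astype(float) - reblurred.astype(float))
    return diff > tau
```

Because the reference has been photometrically matched to the observation by reblurring, the residual difference is dominated by the occluders rather than by the blur itself.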
The detected occlusions shown in Fig. 11(f) are found by comparing the blurred and occluded observation (Fig. 11(b)) with the reblurred reference image (Fig. 11(e)). For our next experiment, we used the publicly available VIRAT database (Oh et al. [23]), which is a benchmark for video surveillance and change detection. Two frames corresponding to the reference image and the occluded image (shown in Figs. 12(a) and (b), respectively) were manually extracted from an aerial video. The frames are at the resolution of the original video. Since the resolution is low, we run our algorithm directly on the whole image instead of a sub-image. The detected occlusion is shown in Fig. 12(c). Although, strictly speaking, the images are not high resolution at all, the purpose is to demonstrate the effectiveness of our method for aerial imaging. This example also illustrates how the proposed method elegantly subsumes the work in [21] for the case of low

Figure 11: (a) Reference image, (b) synthetically blurred and occluded observation from a different viewpoint, (c) sub-image from (a), (d) sub-image from (b), (e) reference image reblurred using the estimated camera motion, and (f) detected occlusion.

Figure 12: (a) Reference image, (b) real blurred and occluded observation, and (c) detected occlusion.

Figure 13: (a) Reference image, (b) real blurred and occluded observation, and (c) multiple occluders found.

resolution images. A final real example is shown in Fig. 13. The two images in Figs. 13(a) and (b) were captured from the same viewpoint but with a small time lapse, using a Google Nexus 4 mobile phone which has an 8 MP camera. Observe how even small occluders with intensities close to the background are correctly detected by our algorithm (Fig. 13(c)). This example threw up small spurious non-connected occlusions in the bottom half of the image, due to the movement of leaves, and these were removed by simple post-processing. For a quantitative assessment of our method, we computed the following metrics, which are well known in the change detection community: percentage of correct classification (PCC), Jaccard coefficient (JC) and Yule coefficient (YC) [24]. For the real experiments, the ground-truth occlusion was obtained by asking different individuals to manually mark out the occluded regions as per their individual perception. The efficacy of our algorithm is further evident from the values in Table 1.

Table 1: Quantitative metrics (PCC, JC and YC) for our results in Figs. 11 to 13.

The maximum size of the images considered was 18 MP. Despite our best attempts, we could not find any database containing image pairs of very large sizes (of the order of 100 MP) that could be used for testing. Nevertheless, the framework proposed here has the potential to handle even very large images. Due to file size constraints, we have included only the downscaled images in the PDF, and not the original high-resolution images.

4 Space-variant Deblurring of Aerial Imagery

Blur in images resulting from motion of the camera during exposure is an issue in many areas of optical imaging, such as remote sensing, aerial reconnaissance and digital photography. For instance, images captured by cameras attached to airplanes or helicopters are blurred due to both the forward motion of the aircraft and vibrations. Manufacturers of aerial imaging systems employ compensation mechanisms such as gyroscopic gimbals to mitigate the effect of vibrations. Although this reduces the blur due to jitter to some extent, there is no straightforward way to do the same for the forward movement. Moreover, these hardware solutions come at the expense of higher cost, weight and energy consumption. A system that can remove the blur by algorithmic post-processing provides an elegant solution to this problem. Traditionally, image restoration techniques have modelled blurring due to camera shake as a convolution with a single blur kernel [25, 26, 27, 28, 29].
However, it is a well-established fact that a convolution model employing a uniform blur kernel or point spread function (PSF) across the image is not sufficient to model the blurring phenomenon if the motion is not composed merely of in-plane translations. In fact, camera tilts and rotations occur frequently [30], and the blur induced by camera shake is typically non-uniform. This is especially true in the case of aerial imagery,

where the blur incurred is due not just to the linear motion of the aircraft but also to vibrations. Approaches to handling non-uniform blur broadly fall into two categories. The first relies on local uniformity of the blur. Based on the assumption that a continuously varying blur can be approximated by a spatially varying combination of localized uniform blurs, Hirsch et al. [31] proposed a method to restore non-uniform motion blur using an efficient filter flow framework. Building on the idea of a motion density function, yet another scheme for space-varying blur has been proposed by Gupta et al. [32]; the motion-blurred image is modelled by considering the camera motion to comprise only in-plane translations and in-plane rotations. The second and more recent non-uniform deblurring approach uses an elegant global model [30, 33, 34] in which the blurred image is represented as the weighted average of warped instances of the latent image, for a constant-depth scene. The warped instances can be viewed as the intermediate images observed by the camera during the exposure time when the camera undergoes shake. Tai et al. [35] have proposed a non-blind deblurring scheme based on modifying the Richardson-Lucy deconvolution technique for space-variant blur. However, they assume that the blurring function is known a priori and does not need to be estimated. Whyte et al. [30, 36] proposed a non-uniform image restoration technique where the blurring function is represented on a 3D grid corresponding to the three directions of camera rotation. As pointed out in [34], the main disadvantage of this global geometric model is the heavy computational load due to the dense sampling of poses in the high-dimensional camera motion space. A common approach to tackling this problem is to adopt a multi-scale strategy that involves constructing an image pyramid and using coarse-grained sampling. But this simplification inevitably introduces reconstruction errors [34].
Hu and Yang [34] present a fast non-uniform deblurring technique that uses locally estimated blur kernels to restrict the possible camera poses to a low-dimensional subspace. But the kernels themselves need to be input by the user, and the final deblurring quality depends on the accuracy of the estimated PSFs. An unnatural L0 sparse representation for uniform and non-uniform deblurring has also been proposed recently by Xu et al. [37]. Among hardware-assisted restoration techniques, Joshi et al. [38] attach sensors to the camera to determine the blurring function, while Tai et al. [39] propose a deblurring scheme that uses coded exposure and some simple user interactions to determine the PSF. In this work, we propose a fully blind single-image non-uniform deblurring algorithm suited for aerial imagery that does not require any additional hardware. We reduce the computational overhead by approximating the camera motion with a 3D pose space and optimizing only over a subspace of active camera poses. This reduction in dimensionality allows us to use dense sampling, and our results compare favourably with state-of-the-art deblurring algorithms. In contrast to [34], our alternating minimization algorithm, which uses a novel camera pose initialization and pose

perturbation step, works on the global geometric model and does not require the calculation of blur kernels at various image locations, thereby eliminating the need for user interaction.

4.1 The motion blur model

In this section, we review the non-uniform blur model for aerial images. Since the distances involved are quite large, the ground scene can be modelled as approximately planar. When the motion of the camera is not restricted to in-plane translations, the paths traced by scene points in the image plane vary across the image, resulting in space-variant blur. The convolution model with a single blur kernel does not hold in such a scenario. However, when the scene is planar, the blurred image can be accurately modelled as the weighted average of warped instances of the latent image using the projective model in [40, 30, 32, 34]. In the discrete domain, this can be represented as

b(i, j) = \sum_{k \in T} \omega(k) \, l(H_k(i, j)),    (19)

where l(i, j) denotes the latent image of the scene, b(i, j) is the blurred observation, and H_k(i, j) denotes the image coordinates when a homography H_k is applied to the point (i, j). The parameter \omega, also called the transformation spread function (TSF) [40] in the literature, depicts the camera motion, and \omega(k) denotes the fraction of the total exposure duration for which the camera stayed in the position that caused the transformation H_k. Akin to a PSF, \sum_{k \in T} \omega(k) = 1. The TSF \omega is defined on the discrete transformation space T, which is the set of sampled camera poses. The transformation space is discretized in such a manner that the difference in the displacements of a point light source due to two different transformations from the discrete set T is at least one pixel. Note that although the apparent motion of scene points in the image varies from location to location when the camera motion is unrestricted, the blurring operation can still be described by a single TSF using equation (19).
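Equation (19), in its matrix form (21) below, says the blur is a single matrix B = \sum_k \omega(k) H_k acting on the vectorised latent image. A toy sketch, with cyclic shifts standing in for the homography warps (the warp matrices of the report would instead resample under H_k):

```python
import numpy as np

def permutation_warp(n, shift):
    """Toy warp matrix H_k: a cyclic shift of an n-pixel vectorised image."""
    H = np.zeros((n, n))
    for i in range(n):
        H[i, (i + shift) % n] = 1.0
    return H

# B = sum_k w(k) H_k performs the whole non-uniform blur as one matrix
n = 5
weights, shifts = [0.6, 0.4], [0, 1]
B = sum(w * permutation_warp(n, s) for w, s in zip(weights, shifts))
l = np.arange(n, dtype=float)
b = B @ l     # blurred image as a single matrix-vector product
```

Each row of B here contains the (tiny) blur kernel seen at that pixel, mirroring the report's remark that the rows of B are the per-pixel kernels.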
For example, if the camera undergoes only in-plane rotations, the TSF will have non-zero weights only for the rotational transformations. Observe that if the camera motion is confined to 2D translations, the PSF and TSF are equivalent. If l and b represent the latent image and the blurred image, respectively, lexicographically ordered as vectors, then, in matrix-vector notation, equation (19) can be expressed as

b = A\omega,    (20)

where A is the matrix whose columns contain projectively transformed copies of l, and \omega denotes the vector of weights \omega(k). Note that \omega is a sparse vector, since the blur is typically due to incidental camera shake and only a small fraction

of the poses in T will have non-zero weights in \omega. Alternately, b can also be represented as

b = \left( \sum_{k \in T} \omega(k) H_k \right) l = Bl,    (21)

where H_k is the matrix that warps the latent image l according to the homography H_k, while B = \sum_{k \in T} \omega(k) H_k is the matrix that performs the non-uniform blurring operation. Note that B is a sparse square matrix that can be stored efficiently in memory, and each row of B corresponds to the blur kernel at that particular pixel location. The homography H_k in equation (19), in terms of the camera parameters, is given by

H_k = K_v \left( R_k + \frac{1}{d_0} T_k [0 \; 0 \; 1] \right) K_v^{-1},    (22)

where T_k = [T_{X_k} \; T_{Y_k} \; T_{Z_k}]^T is the translation vector, and d_0 is the scene depth, an unknown constant. The rotation matrix R_k is parameterized [30] in terms of \theta_X, \theta_Y and \theta_Z, the angles of rotation about the three axes. The camera intrinsic matrix K_v is assumed to be of the form K_v = diag(v, v, 1), where v is the focal length. Six degrees of freedom arise from T_k and R_k (three each). However, it has been shown in [30] that the 6D camera pose space can be approximated by 3D rotations, without considering translations, when the focal length is large. An alternate approach [32, 34] is to model out-of-plane rotations by in-plane translations under the same assumption of a sufficiently long focal length. This is the approach we take to reduce the dimensionality of the problem, i.e., the set of transformations T becomes a 3D space defined by the axes t_X, t_Y and \theta_Z, corresponding to in-plane translations along the X and Y axes and in-plane rotation about the Z axis, respectively. The homography in equation (22) then simplifies to

H_k = \begin{bmatrix} \cos\theta_{Z_k} & -\sin\theta_{Z_k} & t_{X_k} \\ \sin\theta_{Z_k} & \cos\theta_{Z_k} & t_{Y_k} \\ 0 & 0 & 1 \end{bmatrix},    (23)

where the translation parameters are given by t_{X_k} = v T_{X_k} / d_0 and t_{Y_k} = v T_{Y_k} / d_0.

4.2 Single image deblurring

In order to recover the latent image l, our alternating minimization (AM) algorithm proceeds by updating the estimate of the TSF at one step, and the latent image at the next.
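The reduced homography of (23) is easy to construct; `inplane_homography` is a hypothetical helper name:

```python
import numpy as np

def inplane_homography(theta_z, t_x, t_y):
    """H_k of (23): in-plane rotation by theta_z (radians) plus translation
    (t_x, t_y) -- the reduced 3D pose space used instead of full 6D motion."""
    c, s = np.cos(theta_z), np.sin(theta_z)
    return np.array([[c, -s, t_x],
                     [s,  c, t_y],
                     [0., 0., 1.]])
```

Applying H_k to a homogeneous point [x, y, 1] rotates it about the optical axis and then translates it, which is precisely the motion the 3D pose space (t_X, t_Y, \theta_Z) can express.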
We minimize the following energy function over the variables l and \omega:

E(l, \omega) = \left\| \left( \sum_{k \in T} \omega(k) H_k \right) l - b \right\|_2^2 + \alpha \Phi_1(l) + \beta \Phi_2(\omega).    (24)

The energy function consists of three terms. The first measures fidelity to the data and emanates from our acquisition model (21). The remaining two are regularization terms on the latent image l and the weights \omega, respectively, with positive weighting constants \alpha and \beta that attract the minimum of E to an admissible set of solutions. The regularization terms are explained in the following sub-sections. Our algorithm requires the user to specify a rough guess of the extent of the blur (translation in pixels along the X and Y axes, and rotation in degrees about the Z axis) to build the initial TSF. The 3D camera pose space, whose limits are specified by the user, is uniformly sampled to select the initial set of camera poses. We denote this sampled pose space by S, where S \subset T. In our experiments, the initial TSF contained 200 poses, which is still much smaller than the number of poses that the whole space T would contain even for small to moderate blurs. Note that our algorithm requires no other user input. In contrast, Hu and Yang [34], whose work comes closest to ours, require the user to input the blur kernels at various locations in the image, and we observed that the final deblurring quality depends greatly on the number, location and correctness of these kernels. Furthermore, since we model the blur using in-plane rotations and translations, we do not need to know the focal length of the camera, unlike [30], whose camera pose space is composed of 3D rotations. In the TSF estimation step, we compute \omega given the current estimate of the latent image l based on equation (24).

4.2.1 Image Prediction

Similar to [28], we perform an image prediction step at each iteration before TSF estimation to obtain more accurate results and to facilitate faster convergence. The prediction step consists of bilateral filtering, shock filtering and gradient magnitude thresholding. Details of the implementation can be found in [28].
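Of the three prediction-step ingredients, shock filtering is the least standard; a minimal sketch is given below (periodic boundaries via np.roll; the report follows [28], whose exact filter and parameters differ):

```python
import numpy as np

def shock_filter(img, dt=0.1, iters=10):
    """Shock filtering sharpens edges by moving intensities against the sign
    of the Laplacian: img <- img - dt * sign(lap(img)) * |grad(img)|.
    (Bilateral filtering and gradient thresholding, the other two
    ingredients of the prediction step, are omitted here.)"""
    img = img.astype(float).copy()
    for _ in range(iters):
        gy, gx = np.gradient(img)
        grad_mag = np.hypot(gx, gy)
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
        img -= dt * np.sign(lap) * grad_mag
    return img
```

Flat regions are left untouched (zero gradient), while smoothed edges are pushed toward step edges, which is why the predicted image is a better input for kernel/TSF estimation than the current latent estimate.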
The predicted image, denoted by l̃, is sharper than the current estimate of the latent image l and has fewer artifacts.

TSF estimation on a subspace of T

In the first iteration, we optimize over the initial TSF by minimizing the following energy function

E(ω) = ‖Aω − b‖² + βΦ₂(ω) (25)

where A is the matrix whose columns are H_k l̃, k ∈ S, and Φ₂(ω) = ‖ω‖₁. Similar to [28], we work on gradients instead of image intensities in our implementation of equation (25), since image derivatives have been shown to be effective in reducing ringing effects [26]. This optimization problem can be solved using the nnleastr function of the Lasso algorithm [41].
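Equation (25) is a non-negative Lasso problem. The report solves it with the nnleastr routine [41]; as an illustrative stand-in (not the report's solver), projected gradient descent handles both constraints directly, since on the non-negative orthant ‖ω‖₁ = Σ_k ω(k) and the l1 term contributes a constant gradient β:

```python
import numpy as np

def nn_l1_least_squares(A, b, beta=0.1, n_iters=500):
    """Minimize ||A w - b||^2 + beta * ||w||_1 subject to w >= 0
    by projected gradient descent. Sketch only: a stand-in for the
    nnleastr solver cited in the report, not the code actually used."""
    w = np.zeros(A.shape[1])
    # Safe step size: 1 / Lipschitz constant of the smooth part.
    step = 1.0 / (2 * np.linalg.norm(A, 2) ** 2 + 1e-12)
    for _ in range(n_iters):
        grad = 2 * A.T @ (A @ w - b) + beta   # data term + constant l1 gradient
        w = np.maximum(0.0, w - step * grad)  # gradient step, then project onto w >= 0
    return w
```

The l1 penalty drives the weights of outlier poses exactly to zero, which is what lets the algorithm prune the candidate set S down to the dominant poses.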

This solver considers the additional l1-norm constraint and imposes non-negativity on the TSF weights. Only the dominant poses in the initial TSF S are selected as a result of the sparsity constraint imposed by the l1 norm, and the remaining poses, which are outliers, are removed. We now rebuild the set S for the second iteration so that its cardinality is the same as that of the initial TSF. The new poses are picked around the selected dominant poses by sampling using a Gaussian distribution. This pose perturbation step is based on the notion that the camera trajectory forms a connected 1D path in the camera motion space and, therefore, the poses close to the dominant ones are most likely to be inliers. In the next iteration, equation (25) is minimized over this new active set of poses. The variance of the Gaussian distribution is gradually reduced with iterations as the estimated TSF converges to the true TSF. Experiments on synthetic and real data show that our pose perturbation step lends robustness to the algorithm, and it does not get stuck in local minima. Note that the number of columns in A equals the cardinality of the set S, which is much less than the total number of poses in T. This allows us to compute the matrix A at the highest image resolution without running into memory issues. We use a β value of 0.1 for our experiments.

Image estimation

In this step, the latent image l is estimated by fixing the TSF weights ω. The blurring matrix is constructed using only the poses in the active set, since the weights of the poses of the inactive set are zero, i.e., B = Σ_{k∈S} ω(k)H_k, and the energy function to be minimized takes the form

E(l) = ‖Bl − b‖² + αΦ₁(l) (26)

We use the regularization term Φ₁(l) = ‖∇l‖₂² as in [28] and a conjugate gradient method to solve this problem.

4.3 Experiments

This section consists of two parts. We first evaluate the performance of our algorithm on synthetic data and also compare our results with various state-of-the-art single image deblurring techniques.
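The pose perturbation step described above can be sketched as follows. This is a hypothetical implementation: the allocation of new samples in proportion to the surviving weights and the per-axis standard deviations are illustrative assumptions, and in practice sigma would be shrunk from one iteration to the next as the TSF converges.

```python
import numpy as np

def perturb_poses(poses, weights, n_total=200, sigma=(1.0, 1.0, 0.25), seed=0):
    """Rebuild the active pose set S: keep the dominant poses (those with
    non-zero TSF weight) and draw the remaining samples from Gaussians
    centred on them, since poses near the dominant ones on the connected
    1D camera trajectory are the most likely inliers. Sketch only."""
    rng = np.random.default_rng(seed)
    keep = poses[weights > 0]                     # surviving dominant poses
    n_new = n_total - len(keep)
    # Allocate new samples to dominant poses in proportion to their weights.
    p = weights[weights > 0] / weights[weights > 0].sum()
    centres = keep[rng.choice(len(keep), size=n_new, p=p)]
    new = centres + rng.normal(0.0, sigma, size=(n_new, keep.shape[1]))
    return np.vstack([keep, new])
```

Because the rebuilt set always has the same cardinality as the initial TSF, the matrix A in equation (25) keeps a fixed, small number of columns at every iteration.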
Following this, we demonstrate the applicability of the proposed method on real images using the challenging VIRAT [42] aerial dataset.

We begin with a synthetic example. The latent image is shown in Fig. 14(a). In order to demonstrate our algorithm's ability to handle 6D motion using just a 3D TSF, we choose the following 6D TSF space: in-plane translations in pixels: [−8 : 1 : 8], in-plane rotations: [−1.5° : 0.5° : 1.5°], out-of-plane translations: [0.95 : 0.05 : 1.05] on the image plane, and out-of-plane rotations: [−4/3° : 1/3° : 4/3°].

Figure 14: A synthetic example. Rows 1 and 2: (a) Latent image, (b) synthetically blurred image, (c), (d), (e) deblurred outputs obtained using the state-of-the-art deconvolution methods in [37], [34], [30], respectively, (f) deblurred output obtained using the proposed method. Rows 3, 4 and 5: Three zoomed-in patches from the images (a) to (f) demonstrating our algorithm's ability to produce artifact-free deblurred outputs.

Figure 15: Deblurring results on the VIRAT aerial database using the proposed method. The first column contains the blurred frames while the second shows the deblurred outputs obtained by our algorithm.

Table 2: Comparison of PSNR (in dB) and SSIM [43] against the state-of-the-art methods of Xu et al. [37], Hu and Yang [34], and Whyte et al. [30] for the synthetic example in Fig. 14.

Exercises of PIV. incomplete draft, version 0.0. October 2009

Exercises of PIV. incomplete draft, version 0.0. October 2009 Exercises of PIV incomplete raft, version 0.0 October 2009 1 Images Images are signals efine in 2D or 3D omains. They can be vector value (e.g., color images), real (monocromatic images), complex or binary

More information

Online Appendix to: Generalizing Database Forensics

Online Appendix to: Generalizing Database Forensics Online Appenix to: Generalizing Database Forensics KYRIACOS E. PAVLOU an RICHARD T. SNODGRASS, University of Arizona This appenix presents a step-by-step iscussion of the forensic analysis protocol that

More information

Efficient Change Detection for Very Large Motion Blurred Images

Efficient Change Detection for Very Large Motion Blurred Images Efficient Change Detection for Very Large Motion Blurred Images Vijay Rengarajan, Abhijith Punnappurath, A.N. Rajagopalan Indian Institute of Technology Madras Chennai, India {ee11d35,ee1d38,raju}@ee.iitm.ac.in

More information

Classical Mechanics Examples (Lagrange Multipliers)

Classical Mechanics Examples (Lagrange Multipliers) Classical Mechanics Examples (Lagrange Multipliers) Dipan Kumar Ghosh Physics Department, Inian Institute of Technology Bombay Powai, Mumbai 400076 September 3, 015 1 Introuction We have seen that the

More information

Refinement of scene depth from stereo camera ego-motion parameters

Refinement of scene depth from stereo camera ego-motion parameters Refinement of scene epth from stereo camera ego-motion parameters Piotr Skulimowski, Pawel Strumillo An algorithm for refinement of isparity (epth) map from stereoscopic sequences is propose. The metho

More information

CAMERAS AND GRAVITY: ESTIMATING PLANAR OBJECT ORIENTATION. Zhaoyin Jia, Andrew Gallagher, Tsuhan Chen

CAMERAS AND GRAVITY: ESTIMATING PLANAR OBJECT ORIENTATION. Zhaoyin Jia, Andrew Gallagher, Tsuhan Chen CAMERAS AND GRAVITY: ESTIMATING PLANAR OBJECT ORIENTATION Zhaoyin Jia, Anrew Gallagher, Tsuhan Chen School of Electrical an Computer Engineering, Cornell University ABSTRACT Photography on a mobile camera

More information

Computer Graphics Chapter 7 Three-Dimensional Viewing Viewing

Computer Graphics Chapter 7 Three-Dimensional Viewing Viewing Computer Graphics Chapter 7 Three-Dimensional Viewing Outline Overview of Three-Dimensional Viewing Concepts The Three-Dimensional Viewing Pipeline Three-Dimensional Viewing-Coorinate Parameters Transformation

More information

New Geometric Interpretation and Analytic Solution for Quadrilateral Reconstruction

New Geometric Interpretation and Analytic Solution for Quadrilateral Reconstruction New Geometric Interpretation an Analytic Solution for uarilateral Reconstruction Joo-Haeng Lee Convergence Technology Research Lab ETRI Daejeon, 305 777, KOREA Abstract A new geometric framework, calle

More information

Transient analysis of wave propagation in 3D soil by using the scaled boundary finite element method

Transient analysis of wave propagation in 3D soil by using the scaled boundary finite element method Southern Cross University epublications@scu 23r Australasian Conference on the Mechanics of Structures an Materials 214 Transient analysis of wave propagation in 3D soil by using the scale bounary finite

More information

Classifying Facial Expression with Radial Basis Function Networks, using Gradient Descent and K-means

Classifying Facial Expression with Radial Basis Function Networks, using Gradient Descent and K-means Classifying Facial Expression with Raial Basis Function Networks, using Graient Descent an K-means Neil Allrin Department of Computer Science University of California, San Diego La Jolla, CA 9237 nallrin@cs.ucs.eu

More information

Coupling the User Interfaces of a Multiuser Program

Coupling the User Interfaces of a Multiuser Program Coupling the User Interfaces of a Multiuser Program PRASUN DEWAN University of North Carolina at Chapel Hill RAJIV CHOUDHARY Intel Corporation We have evelope a new moel for coupling the user-interfaces

More information

A Plane Tracker for AEC-automation Applications

A Plane Tracker for AEC-automation Applications A Plane Tracker for AEC-automation Applications Chen Feng *, an Vineet R. Kamat Department of Civil an Environmental Engineering, University of Michigan, Ann Arbor, USA * Corresponing author (cforrest@umich.eu)

More information

Image Segmentation using K-means clustering and Thresholding

Image Segmentation using K-means clustering and Thresholding Image Segmentation using Kmeans clustering an Thresholing Preeti Panwar 1, Girhar Gopal 2, Rakesh Kumar 3 1M.Tech Stuent, Department of Computer Science & Applications, Kurukshetra University, Kurukshetra,

More information

Multimodal Stereo Image Registration for Pedestrian Detection

Multimodal Stereo Image Registration for Pedestrian Detection Multimoal Stereo Image Registration for Peestrian Detection Stephen Krotosky an Mohan Trivei Abstract This paper presents an approach for the registration of multimoal imagery for peestrian etection when

More information

Robust Camera Calibration for an Autonomous Underwater Vehicle

Robust Camera Calibration for an Autonomous Underwater Vehicle obust Camera Calibration for an Autonomous Unerwater Vehicle Matthew Bryant, Davi Wettergreen *, Samer Aballah, Alexaner Zelinsky obotic Systems Laboratory Department of Engineering, FEIT Department of

More information

A Duality Based Approach for Realtime TV-L 1 Optical Flow

A Duality Based Approach for Realtime TV-L 1 Optical Flow A Duality Base Approach for Realtime TV-L 1 Optical Flow C. Zach 1, T. Pock 2, an H. Bischof 2 1 VRVis Research Center 2 Institute for Computer Graphics an Vision, TU Graz Abstract. Variational methos

More information

Shift-map Image Registration

Shift-map Image Registration Shift-map Image Registration Linus Svärm Petter Stranmark Centre for Mathematical Sciences, Lun University {linus,petter}@maths.lth.se Abstract Shift-map image processing is a new framework base on energy

More information

Figure 1: 2D arm. Figure 2: 2D arm with labelled angles

Figure 1: 2D arm. Figure 2: 2D arm with labelled angles 2D Kinematics Consier a robotic arm. We can sen it commans like, move that joint so it bens at an angle θ. Once we ve set each joint, that s all well an goo. More interesting, though, is the question of

More information

Fast Window Based Stereo Matching for 3D Scene Reconstruction

Fast Window Based Stereo Matching for 3D Scene Reconstruction The International Arab Journal of Information Technology, Vol. 0, No. 3, May 203 209 Fast Winow Base Stereo Matching for 3D Scene Reconstruction Mohamma Mozammel Chowhury an Mohamma AL-Amin Bhuiyan Department

More information

Discriminative Filters for Depth from Defocus

Discriminative Filters for Depth from Defocus Discriminative Filters for Depth from Defocus Fahim Mannan an Michael S. Langer School of Computer Science, McGill University Montreal, Quebec HA 0E9, Canaa. {fmannan, langer}@cim.mcgill.ca Abstract Depth

More information

Dense Disparity Estimation in Ego-motion Reduced Search Space

Dense Disparity Estimation in Ego-motion Reduced Search Space Dense Disparity Estimation in Ego-motion Reuce Search Space Luka Fućek, Ivan Marković, Igor Cvišić, Ivan Petrović University of Zagreb, Faculty of Electrical Engineering an Computing, Croatia (e-mail:

More information

Shift-map Image Registration

Shift-map Image Registration Shift-map Image Registration Svärm, Linus; Stranmark, Petter Unpublishe: 2010-01-01 Link to publication Citation for publishe version (APA): Svärm, L., & Stranmark, P. (2010). Shift-map Image Registration.

More information

Dual Arm Robot Research Report

Dual Arm Robot Research Report Dual Arm Robot Research Report Analytical Inverse Kinematics Solution for Moularize Dual-Arm Robot With offset at shouler an wrist Motivation an Abstract Generally, an inustrial manipulator such as PUMA

More information

Skyline Community Search in Multi-valued Networks

Skyline Community Search in Multi-valued Networks Syline Community Search in Multi-value Networs Rong-Hua Li Beijing Institute of Technology Beijing, China lironghuascut@gmail.com Jeffrey Xu Yu Chinese University of Hong Kong Hong Kong, China yu@se.cuh.eu.h

More information

6 Gradient Descent. 6.1 Functions

6 Gradient Descent. 6.1 Functions 6 Graient Descent In this topic we will iscuss optimizing over general functions f. Typically the function is efine f : R! R; that is its omain is multi-imensional (in this case -imensional) an output

More information

Real Time On Board Stereo Camera Pose through Image Registration*

Real Time On Board Stereo Camera Pose through Image Registration* 28 IEEE Intelligent Vehicles Symposium Einhoven University of Technology Einhoven, The Netherlans, June 4-6, 28 Real Time On Boar Stereo Camera Pose through Image Registration* Fai Dornaika French National

More information

Particle Swarm Optimization Based on Smoothing Approach for Solving a Class of Bi-Level Multiobjective Programming Problem

Particle Swarm Optimization Based on Smoothing Approach for Solving a Class of Bi-Level Multiobjective Programming Problem BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 17, No 3 Sofia 017 Print ISSN: 1311-970; Online ISSN: 1314-4081 DOI: 10.1515/cait-017-0030 Particle Swarm Optimization Base

More information

THE BAYESIAN RECEIVER OPERATING CHARACTERISTIC CURVE AN EFFECTIVE APPROACH TO EVALUATE THE IDS PERFORMANCE

THE BAYESIAN RECEIVER OPERATING CHARACTERISTIC CURVE AN EFFECTIVE APPROACH TO EVALUATE THE IDS PERFORMANCE БСУ Международна конференция - 2 THE BAYESIAN RECEIVER OPERATING CHARACTERISTIC CURVE AN EFFECTIVE APPROACH TO EVALUATE THE IDS PERFORMANCE Evgeniya Nikolova, Veselina Jecheva Burgas Free University Abstract:

More information

Radar Tomography of Moving Targets

Radar Tomography of Moving Targets On behalf of Sensors Directorate, Air Force Research Laboratory Final Report September 005 S. L. Coetzee, C. J. Baker, H. D. Griffiths University College London REPORT DOCUMENTATION PAGE Form Approved

More information

A Framework for Dialogue Detection in Movies

A Framework for Dialogue Detection in Movies A Framework for Dialogue Detection in Movies Margarita Kotti, Constantine Kotropoulos, Bartosz Ziólko, Ioannis Pitas, an Vassiliki Moschou Department of Informatics, Aristotle University of Thessaloniki

More information

Using the disparity space to compute occupancy grids from stereo-vision

Using the disparity space to compute occupancy grids from stereo-vision The 2010 IEEE/RSJ International Conference on Intelligent Robots an Systems October 18-22, 2010, Taipei, Taiwan Using the isparity space to compute occupancy gris from stereo-vision Mathias Perrollaz,

More information

Using Vector and Raster-Based Techniques in Categorical Map Generalization

Using Vector and Raster-Based Techniques in Categorical Map Generalization Thir ICA Workshop on Progress in Automate Map Generalization, Ottawa, 12-14 August 1999 1 Using Vector an Raster-Base Techniques in Categorical Map Generalization Beat Peter an Robert Weibel Department

More information

AN INVESTIGATION OF FOCUSING AND ANGULAR TECHNIQUES FOR VOLUMETRIC IMAGES BY USING THE 2D CIRCULAR ULTRASONIC PHASED ARRAY

AN INVESTIGATION OF FOCUSING AND ANGULAR TECHNIQUES FOR VOLUMETRIC IMAGES BY USING THE 2D CIRCULAR ULTRASONIC PHASED ARRAY AN INVESTIGATION OF FOCUSING AND ANGULAR TECHNIQUES FOR VOLUMETRIC IMAGES BY USING THE D CIRCULAR ULTRASONIC PHASED ARRAY S. Monal Lonon South Bank University; Engineering an Design 103 Borough Roa, Lonon

More information

Estimating Velocity Fields on a Freeway from Low Resolution Video

Estimating Velocity Fields on a Freeway from Low Resolution Video Estimating Velocity Fiels on a Freeway from Low Resolution Vieo Young Cho Department of Statistics University of California, Berkeley Berkeley, CA 94720-3860 Email: young@stat.berkeley.eu John Rice Department

More information

Threshold Based Data Aggregation Algorithm To Detect Rainfall Induced Landslides

Threshold Based Data Aggregation Algorithm To Detect Rainfall Induced Landslides Threshol Base Data Aggregation Algorithm To Detect Rainfall Inuce Lanslies Maneesha V. Ramesh P. V. Ushakumari Department of Computer Science Department of Mathematics Amrita School of Engineering Amrita

More information

Chapter 5 Proposed models for reconstituting/ adapting three stereoscopes

Chapter 5 Proposed models for reconstituting/ adapting three stereoscopes Chapter 5 Propose moels for reconstituting/ aapting three stereoscopes - 89 - 5. Propose moels for reconstituting/aapting three stereoscopes This chapter offers three contributions in the Stereoscopy area,

More information

Non-homogeneous Generalization in Privacy Preserving Data Publishing

Non-homogeneous Generalization in Privacy Preserving Data Publishing Non-homogeneous Generalization in Privacy Preserving Data Publishing W. K. Wong, Nios Mamoulis an Davi W. Cheung Department of Computer Science, The University of Hong Kong Pofulam Roa, Hong Kong {wwong2,nios,cheung}@cs.hu.h

More information

Research Article Inviscid Uniform Shear Flow past a Smooth Concave Body

Research Article Inviscid Uniform Shear Flow past a Smooth Concave Body International Engineering Mathematics Volume 04, Article ID 46593, 7 pages http://x.oi.org/0.55/04/46593 Research Article Invisci Uniform Shear Flow past a Smooth Concave Boy Abullah Mura Department of

More information

Bends, Jogs, And Wiggles for Railroad Tracks and Vehicle Guide Ways

Bends, Jogs, And Wiggles for Railroad Tracks and Vehicle Guide Ways Ben, Jogs, An Wiggles for Railroa Tracks an Vehicle Guie Ways Louis T. Klauer Jr., PhD, PE. Work Soft 833 Galer Dr. Newtown Square, PA 19073 lklauer@wsof.com Preprint, June 4, 00 Copyright 00 by Louis

More information

Politehnica University of Timisoara Mobile Computing, Sensors Network and Embedded Systems Laboratory. Testing Techniques

Politehnica University of Timisoara Mobile Computing, Sensors Network and Embedded Systems Laboratory. Testing Techniques Politehnica University of Timisoara Mobile Computing, Sensors Network an Embee Systems Laboratory ing Techniques What is testing? ing is the process of emonstrating that errors are not present. The purpose

More information

Almost Disjunct Codes in Large Scale Multihop Wireless Network Media Access Control

Almost Disjunct Codes in Large Scale Multihop Wireless Network Media Access Control Almost Disjunct Coes in Large Scale Multihop Wireless Network Meia Access Control D. Charles Engelhart Anan Sivasubramaniam Penn. State University University Park PA 682 engelhar,anan @cse.psu.eu Abstract

More information

Module13:Interference-I Lecture 13: Interference-I

Module13:Interference-I Lecture 13: Interference-I Moule3:Interference-I Lecture 3: Interference-I Consier a situation where we superpose two waves. Naively, we woul expect the intensity (energy ensity or flux) of the resultant to be the sum of the iniviual

More information

Offloading Cellular Traffic through Opportunistic Communications: Analysis and Optimization

Offloading Cellular Traffic through Opportunistic Communications: Analysis and Optimization 1 Offloaing Cellular Traffic through Opportunistic Communications: Analysis an Optimization Vincenzo Sciancalepore, Domenico Giustiniano, Albert Banchs, Anreea Picu arxiv:1405.3548v1 [cs.ni] 14 May 24

More information

Unknown Radial Distortion Centers in Multiple View Geometry Problems

Unknown Radial Distortion Centers in Multiple View Geometry Problems Unknown Raial Distortion Centers in Multiple View Geometry Problems José Henrique Brito 1,2, Rolan Angst 3, Kevin Köser 3, Christopher Zach 4, Pero Branco 2, Manuel João Ferreira 2, Marc Pollefeys 3 1

More information

Queueing Model and Optimization of Packet Dropping in Real-Time Wireless Sensor Networks

Queueing Model and Optimization of Packet Dropping in Real-Time Wireless Sensor Networks Queueing Moel an Optimization of Packet Dropping in Real-Time Wireless Sensor Networks Marc Aoun, Antonios Argyriou, Philips Research, Einhoven, 66AE, The Netherlans Department of Computer an Communication

More information

Estimation of large-amplitude motion and disparity fields: Application to intermediate view reconstruction

Estimation of large-amplitude motion and disparity fields: Application to intermediate view reconstruction c 2000 SPIE. Personal use of this material is permitte. However, permission to reprint/republish this material for avertising or promotional purposes or for creating new collective works for resale or

More information

Questions? Post on piazza, or Radhika (radhika at eecs.berkeley) or Sameer (sa at berkeley)!

Questions? Post on piazza, or  Radhika (radhika at eecs.berkeley) or Sameer (sa at berkeley)! EE122 Fall 2013 HW3 Instructions Recor your answers in a file calle hw3.pf. Make sure to write your name an SID at the top of your assignment. For each problem, clearly inicate your final answer, bol an

More information

DISTRIBUTION A: Distribution approved for public release.

DISTRIBUTION A: Distribution approved for public release. AFRL-OSR-VA-TR-2014-0232 Design Optimizations Simulation of Wave Propagation in Metamaterials Robert Freund MASSACHUSETTS INSTITUTE OF TECHNOLOGY 09/24/2014 Final Report DISTRIBUTION A: Distribution approved

More information

Comparison of Methods for Increasing the Performance of a DUA Computation

Comparison of Methods for Increasing the Performance of a DUA Computation Comparison of Methos for Increasing the Performance of a DUA Computation Michael Behrisch, Daniel Krajzewicz, Peter Wagner an Yun-Pang Wang Institute of Transportation Systems, German Aerospace Center,

More information

CONSTRUCTION AND ANALYSIS OF INVERSIONS IN S 2 AND H 2. Arunima Ray. Final Paper, MATH 399. Spring 2008 ABSTRACT

CONSTRUCTION AND ANALYSIS OF INVERSIONS IN S 2 AND H 2. Arunima Ray. Final Paper, MATH 399. Spring 2008 ABSTRACT CONSTUCTION AN ANALYSIS OF INVESIONS IN S AN H Arunima ay Final Paper, MATH 399 Spring 008 ASTACT The construction use to otain inversions in two-imensional Eucliean space was moifie an applie to otain

More information

State Indexed Policy Search by Dynamic Programming. Abstract. 1. Introduction. 2. System parameterization. Charles DuHadway

State Indexed Policy Search by Dynamic Programming. Abstract. 1. Introduction. 2. System parameterization. Charles DuHadway State Inexe Policy Search by Dynamic Programming Charles DuHaway Yi Gu 5435537 503372 December 4, 2007 Abstract We consier the reinforcement learning problem of simultaneous trajectory-following an obstacle

More information

Calculation on diffraction aperture of cube corner retroreflector

Calculation on diffraction aperture of cube corner retroreflector November 10, 008 / Vol., No. 11 / CHINESE OPTICS LETTERS 8 Calculation on iffraction aperture of cube corner retroreflector Song Li (Ó Ø, Bei Tang (», an Hui Zhou ( ï School of Electronic Information,

More information

Learning Subproblem Complexities in Distributed Branch and Bound

Learning Subproblem Complexities in Distributed Branch and Bound Learning Subproblem Complexities in Distribute Branch an Boun Lars Otten Department of Computer Science University of California, Irvine lotten@ics.uci.eu Rina Dechter Department of Computer Science University

More information

A Comparative Evaluation of Iris and Ocular Recognition Methods on Challenging Ocular Images

A Comparative Evaluation of Iris and Ocular Recognition Methods on Challenging Ocular Images A Comparative Evaluation of Iris an Ocular Recognition Methos on Challenging Ocular Images Vishnu Naresh Boeti Carnegie Mellon University Pittsburgh, PA 523 naresh@cmu.eu Jonathon M Smereka Carnegie Mellon

More information

Exploring Context with Deep Structured models for Semantic Segmentation

Exploring Context with Deep Structured models for Semantic Segmentation 1 Exploring Context with Deep Structure moels for Semantic Segmentation Guosheng Lin, Chunhua Shen, Anton van en Hengel, Ian Rei between an image patch an a large backgroun image region. Explicitly moeling

More information

Computer Organization

Computer Organization Computer Organization Douglas Comer Computer Science Department Purue University 250 N. University Street West Lafayette, IN 47907-2066 http://www.cs.purue.eu/people/comer Copyright 2006. All rights reserve.

More information

More Raster Line Issues. Bresenham Circles. Once More: 8-Pt Symmetry. Only 1 Octant Needed. Spring 2013 CS5600

More Raster Line Issues. Bresenham Circles. Once More: 8-Pt Symmetry. Only 1 Octant Needed. Spring 2013 CS5600 Spring 03 Lecture Set 3 Bresenham Circles Intro to Computer Graphics From Rich Riesenfel Spring 03 More Raster Line Issues Fat lines with multiple pixel with Symmetric lines n point geometry how shoul

More information

Characterizing Decoding Robustness under Parametric Channel Uncertainty

Characterizing Decoding Robustness under Parametric Channel Uncertainty Characterizing Decoing Robustness uner Parametric Channel Uncertainty Jay D. Wierer, Wahee U. Bajwa, Nigel Boston, an Robert D. Nowak Abstract This paper characterizes the robustness of ecoing uner parametric

More information

Figure 1: Schematic of an SEM [source: ]

Figure 1: Schematic of an SEM [source:   ] EECI Course: -9 May 1 by R. Sanfelice Hybri Control Systems Eelco van Horssen E.P.v.Horssen@tue.nl Project: Scanning Electron Microscopy Introuction In Scanning Electron Microscopy (SEM) a (bunle) beam

More information

MORA: a Movement-Based Routing Algorithm for Vehicle Ad Hoc Networks

MORA: a Movement-Based Routing Algorithm for Vehicle Ad Hoc Networks : a Movement-Base Routing Algorithm for Vehicle A Hoc Networks Fabrizio Granelli, Senior Member, Giulia Boato, Member, an Dzmitry Kliazovich, Stuent Member Abstract Recent interest in car-to-car communications

More information

Spherical Billboards and their Application to Rendering Explosions

Spherical Billboards and their Application to Rendering Explosions Spherical Billboars an their Application to Renering Explosions Tamás Umenhoffer László Szirmay-Kalos Gábor Szijártó Department of Control Engineering an Information Technology Buapest University of Technology,

More information

The Reconstruction of Graphs. Dhananjay P. Mehendale Sir Parashurambhau College, Tilak Road, Pune , India. Abstract

The Reconstruction of Graphs. Dhananjay P. Mehendale Sir Parashurambhau College, Tilak Road, Pune , India. Abstract The Reconstruction of Graphs Dhananay P. Mehenale Sir Parashurambhau College, Tila Roa, Pune-4030, Inia. Abstract In this paper we iscuss reconstruction problems for graphs. We evelop some new ieas lie

More information

Solution Representation for Job Shop Scheduling Problems in Ant Colony Optimisation

Solution Representation for Job Shop Scheduling Problems in Ant Colony Optimisation Solution Representation for Job Shop Scheuling Problems in Ant Colony Optimisation James Montgomery, Carole Faya 2, an Sana Petrovic 2 Faculty of Information & Communication Technologies, Swinburne University

More information

AnyTraffic Labeled Routing

AnyTraffic Labeled Routing AnyTraffic Labele Routing Dimitri Papaimitriou 1, Pero Peroso 2, Davie Careglio 2 1 Alcatel-Lucent Bell, Antwerp, Belgium Email: imitri.papaimitriou@alcatel-lucent.com 2 Universitat Politècnica e Catalunya,

More information

Feature Extraction and Rule Classification Algorithm of Digital Mammography based on Rough Set Theory

Feature Extraction and Rule Classification Algorithm of Digital Mammography based on Rough Set Theory Feature Extraction an Rule Classification Algorithm of Digital Mammography base on Rough Set Theory Aboul Ella Hassanien Jafar M. H. Ali. Kuwait University, Faculty of Aministrative Science, Quantitative

More information

An Algorithm for Building an Enterprise Network Topology Using Widespread Data Sources

An Algorithm for Building an Enterprise Network Topology Using Widespread Data Sources An Algorithm for Builing an Enterprise Network Topology Using Wiesprea Data Sources Anton Anreev, Iurii Bogoiavlenskii Petrozavosk State University Petrozavosk, Russia {anreev, ybgv}@cs.petrsu.ru Abstract

More information

Video-based Characters Creating New Human Performances from a Multi-view Video Database

Video-based Characters Creating New Human Performances from a Multi-view Video Database Vieo-base Characters Creating New Human Performances from a Multi-view Vieo Database Feng Xu Yebin Liu? Carsten Stoll? James Tompkin Gaurav Bharaj? Qionghai Dai Hans-Peter Seiel? Jan Kautz Christian Theobalt?

More information

Message Transport With The User Datagram Protocol

Message Transport With The User Datagram Protocol Message Transport With The User Datagram Protocol User Datagram Protocol (UDP) Use During startup For VoIP an some vieo applications Accounts for less than 10% of Internet traffic Blocke by some ISPs Computer

More information

Here are a couple of warnings to my students who may be here to get a copy of what happened on a day that you missed.

Here are a couple of warnings to my students who may be here to get a copy of what happened on a day that you missed. Preface Here are my online notes for my Calculus I course that I teach here at Lamar University. Despite the fact that these are my class notes, they shoul be accessible to anyone wanting to learn Calculus

More information

A Neural Network Model Based on Graph Matching and Annealing :Application to Hand-Written Digits Recognition

A Neural Network Model Based on Graph Matching and Annealing :Application to Hand-Written Digits Recognition ITERATIOAL JOURAL OF MATHEMATICS AD COMPUTERS I SIMULATIO A eural etwork Moel Base on Graph Matching an Annealing :Application to Han-Written Digits Recognition Kyunghee Lee Abstract We present a neural

More information

Multilevel Linear Dimensionality Reduction using Hypergraphs for Data Analysis
