Parallax360: Stereoscopic 360° Scene Representation for Head-Motion Parallax


Bicheng Luo, Feng Xu, Christian Richardt and Jun-Hai Yong

Bicheng Luo, Feng Xu (corresponding author) and Jun-Hai Yong are with the School of Software at Tsinghua University, China. E-mail: luobc14@mails.tsinghua.edu.cn, feng-xu@tsinghua.edu.cn, yongjh@tsinghua.edu.cn. Christian Richardt is with the University of Bath, UK. E-mail: christian@richardt.name.

Fig. 1. We propose a novel image-based representation to generate novel views with head-motion parallax for a 360° scene: (a) When a viewer changes their orientation and position, correct novel views are presented. (b, c) The original stereo views. (d, e) The novel stereo views. Notice that the positional relationship between the bench and the trees changes according to the head motion.

Abstract: We propose a novel 360° scene representation for converting real scenes into stereoscopic 3D virtual reality content with head-motion parallax. Our image-based scene representation enables efficient synthesis of novel views with six degrees-of-freedom (6-DoF) by fusing motion fields at two scales: (1) disparity motion fields carry implicit depth information and are robustly estimated from multiple laterally displaced auxiliary viewpoints, and (2) pairwise motion fields enable real-time flow-based blending, which improves the visual fidelity of results by minimizing ghosting and view transition artifacts. Based on our scene representation, we present an end-to-end system that captures real scenes with a robotic camera arm, processes the recorded data, and finally renders the scene in a head-mounted display in real time (more than 40 Hz). Our approach is the first to support head-motion parallax when viewing real 360° scenes. We demonstrate compelling results that illustrate the enhanced visual experience, and hence sense of immersion, achieved with our approach compared to widely-used stereoscopic panoramas.

Index Terms: 360° scene capture, scene representation, head-motion parallax, 6 degrees-of-freedom (6-DoF), image-based rendering

1 INTRODUCTION

Experiencing virtual reality (VR) has become increasingly easy in recent years thanks to the availability of commodity head-mounted displays (HMDs). However, good VR content is still scarce and might become a bottleneck for VR technology. Currently, most VR content is computer-generated from virtual 3D scenes, which are tedious to model and animate, and therefore often lack the realism of real scenes. On the other hand, capturing real scenes and converting them into highly realistic VR content is a promising avenue for creating visually rich VR content. Many applications, such as virtual tourism, teleconferencing, and film and television production, would benefit from VR content that is captured from the real world.

The most common representations for 360° VR content of real scenes are panoramas (360° images) and stereo panoramas. Panoramas can be easily obtained by stitching multiple images [41], but they do not carry any 3D information about the scene. Stereo panoramas, on the other hand, contain parallax (or disparity) [31, 39]. However, as all the scene information is represented by two panoramas with a fixed parallax, stereo panoramas cannot generate head-motion parallax, where the occlusion relationship changes with the viewpoint, as this requires a different parallax for every head position.
One potential solution for full 3D perception from arbitrary viewpoints is to reconstruct the complete 3D geometry of the scene. However, this is very difficult to achieve for arbitrary real scenes, as real scenes are generally large and complex, while the viewpoints for recording are usually restricted to a small region.

We propose a novel image-based representation for modeling a 3D 360° scene using implicit depth information (see Figure 1). The core of our representation builds on two-scale motion fields, disparity and pairwise motion fields, which extract 3D information of the scene and improve the rendering quality and performance. Using our representation, we do not require a dense light-field sampling with huge recording and storage cost, but only 72×8 images with the two-scale motion fields. In addition, our representation enables real-time viewpoint synthesis and smooth viewpoint transitions without requiring explicit 3D scene geometry. Our approach makes the following contributions:

- A novel image-based representation for 360° real scenes. Based on this representation, we propose an end-to-end system that ranges from recording to rendering of real scenes. The results show that our system enables head-motion parallax, which is missing from existing panorama-based approaches.
- A robust curve-based fitting method for estimating disparity motion fields, which implicitly convey depth information for a viewpoint. Disparity motion fields are estimated more robustly compared to traditional stereo matching-based depth estimation.
- Novel-view synthesis with real-time flow-based blending between synthesized images from different viewpoints, which produces smooth viewpoint transitions with minimal visual artifacts. By combining disparity and pairwise motion fields on the fly, we avoid costly online motion estimation and achieve real-time rendering.

Fig. 2. Illustration of our 360° scene representation: (a) Key frames are captured uniformly on the surface of a sphere (orange circles). (b) The local region on the sphere is remapped into the 2D latitude-longitude plane. (c) The three types of information stored in our representation: (1) key frames (bordered in orange), (2) disparity motion fields (the curves fitted to the colored motion vectors), and (3) pairwise motion fields (the color-coded flow fields between adjacent key frames).

2 RELATED WORK

Image-Based Scene Modeling
A 3D scene can be modeled with different forms of information, and visualized with different image-based rendering techniques [37]. On the one hand, scenes can be represented purely with images and without any geometric information, using a light field [24] or lumigraph [13]. Even though techniques in this category are highly developed [23, 25, 27, 36, 47], they still require extremely densely sampled images of a scene, which is impractical for a 360° scene. To represent a scene more efficiently, image-based rendering uses sparsely sampled images with different forms of depth information, such as the unstructured lumigraph [5] that generalizes light-field techniques. Using depth for synthesizing novel viewpoints has been shown to produce increasingly high-quality results [6, 7, 10, 12, 15, 42]. However, as these depth-based techniques are not designed for 360° scenes, it is unclear how to sample an entire 360° scene to guarantee both a small baseline between images (which is desirable for view synthesis) and an efficient representation with a low sampling density of images. Huang et al. [16] use dense scene reconstruction to create VR videos with head-motion parallax from a single 360° video. However, geometry-based image warping limits the visual quality of results, as straight lines bend and occlusions lead to stretching artifacts as the viewer moves away from the capture viewpoint.

3D Scene Reconstruction
On the other hand, a virtual 3D scene can be rendered very efficiently if it is available in the form of textured geometry. PMVS [11] is widely used for reconstructing 3D models of real scenes from multi-view input images. Using depth sensors, larger, mostly indoor, scenes can also be reconstructed [2, 29, 30, 43, 46], but with a relatively long capturing process. Besides static scenes, dynamic objects can also be reconstructed from multi-view input [8, 9] or single-view input [18, 28]. Hedman et al. [14] reconstruct textured geometry from casually captured input photos, but their approach cannot handle view-dependent effects. However, current 3D reconstruction techniques are still not able to handle large outdoor scenes, due to the lack of ability to reconstruct distant objects and the difficulty of achieving multi-view coverage of all objects in a scene.
Panoramas
Currently, the most convenient way to visualize a real 360° scene within a head-mounted display is to use a panorama (or 360° video), which is created by aligning and stitching multiple images (or videos) corresponding to different viewing directions. Panorama stitching techniques are widely covered in the literature [4, 33, 41]. For generating panoramas, Zelnik-Manor et al. [45] project the images onto a sphere to represent the 360° scene. Kopf et al. [20] propose locally-adapted projections to reduce distortions in the generated panoramas. Optical flow is also used to minimize undesired motion parallax when synthesizing panoramas [40], as it compensates for motions in the scene. Also based on optical flow, Kang et al. [19] propose multi-perspective plane sweeping to seamlessly stitch images. Perazzi et al. [32] stitch panoramic videos from unstructured camera arrays by removing parallax with local warps. We also use flow-based blending to seamlessly align and fuse multiple synthesized images into a single coherent image. Recently, Lee et al. [22] used a deformed sphere to perform spherical projection, and a non-uniform sampling to represent important regions in a scene with higher resolution.

Stereo Panoramas
Stereo panoramas are a better representation for 360° scenes than a single panorama, as the latter does not provide any 3D information. Stereo panoramas can be generated by moving a camera along a circular trajectory with radial [31] or tangential [38] viewing direction, and stitching vertical stripes from the images. At the same time, the depth of a panorama can also be estimated, and thus stereo vision can be achieved [39]. Recently, Richardt et al. [35] proposed a practical solution for creating high-quality stereo panoramas by stabilizing and correcting input images, and seamlessly stitching them using flow-based ray upsampling. Custom multi-camera rigs also enable the capture of stereo video panoramas to represent dynamic scenes [1, 26]. However, techniques based on stereo panoramas assume that the viewer only rotates around a vertical axis between the two eyes, and thus stereo vision is limited to a fixed viewpoint with varying view directions, but without head-motion parallax. Light fields have also been used for stitching panoramas [3] and estimating depth maps [21] of 360° scenes. Our 360° scene representation overcomes these limitations by using image-based rendering with multiple key frames, leading to results of higher visual quality.

3 PARALLAX360 SCENE REPRESENTATION

Our 360° scene representation consists of three types of information (illustrated in Figure 2):
1. key frames that represent the color information of the scene,
2. disparity motion fields that represent implicit 3D information of the scene at each key frame, and
3. pairwise motion fields for efficient and smooth viewpoint transitions in novel-view synthesis.
Disparity and pairwise motion fields form our two-scale motion fields.
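To make these three components concrete, the sketch below shows one possible way to organize the representation in memory. It is an illustrative data layout only, not the authors' implementation; all class and field names are assumptions.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple
import numpy as np

@dataclass
class PatchMotionCurve:
    """Curve-based disparity motion for one image patch of a key frame."""
    ellipse: np.ndarray   # fitted ellipse parameters (encode motion magnitude)
    angles: np.ndarray    # polar angles of the six motion vectors (encode direction)

@dataclass
class KeyFrame:
    image: np.ndarray                         # H x W x 3 color image
    lat_lon_deg: Tuple[float, float]          # position on the sampling sphere
    disparity_field: List[PatchMotionCurve]   # one curve per non-overlapping patch

@dataclass
class Parallax360Scene:
    key_frames: List[KeyFrame]                           # e.g. 72 x 8 key frames
    pairwise_fields: Dict[Tuple[int, int], np.ndarray]   # patch-level flow between adjacent key frames
```

Only the key frames and the two kinds of patch-level motion fields are kept, which is what keeps the representation far smaller than a densely sampled light field.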

Fig. 3. Our capture scheme: (a) Key frames on the sampling sphere, shown as orange points. (b) The sampling circle of relative frames (blue points) around one key frame. (c) One key frame I_k (bordered in orange), and its six relative frames (bordered in blue).

Fig. 4. Our robotic capture device for real scenes: (a) Schematic drawing with the main components (robotic arm, stepper motors, camera), and (b) photo of our capture device.

Key Frames
To capture a complete scene in 360°, we sample discrete positions uniformly in latitude and longitude on a sphere (called the sampling sphere), and record key frame images at these positions looking radially outward (see Figure 2a,b). In all our experiments, we capture key frames at uniform angular increments of 5° in both latitude and longitude on a sphere with a diameter of 1.25 m.

Disparity Motion Fields
While a key frame describes the visual information of one specific viewpoint, the disparity motion field conveys its implicit depth information. Specifically, the disparity represents the motion between a point and its corresponding points in images from surrounding viewpoints. For this, we extend the common case of binocular disparity in stereo matching to multiple viewpoints surrounding one key frame. We estimate the motion vectors for six surrounding images for each key frame, and represent the motion vectors using curves, as illustrated in Figure 2c, for more robust motion estimation and efficient computation during view synthesis. Given the key frames, the curve-based disparity motion fields can be used to synthesize nearby viewpoints, similar to depth information used for view synthesis. Our curve-based representation has three advantages over explicit depth maps. First, it does not require camera calibration, which is normally necessary for depth estimation with stereo matching. Second, the curve is fitted to multiple motion vectors, and is thus more robust to errors in the motion estimation. Third, our representation is redundant compared with explicit depth maps, which is useful for synthesizing novel viewpoints in different directions. We discuss the computation of disparity motion fields in Section 4.2, and their usage in Section 4.3.1.

Pairwise Motion Fields
Given a key frame and its disparity motion field, we can render novel viewpoints near the position of the key frame. However, if the viewpoint moves from one key frame to another, the sudden switch between key frames will generate noticeable popping artifacts. To solve this problem, and achieve a smooth transition between key frames, the flow fields between their respective results are required to synthesize a smooth interpolation. In our approach, we precompute the optical flow between pairs of adjacent key frames, and calculate the blending motion fields online using simple flow arithmetic. This avoids expensive online optical flow estimation, and ensures real-time performance. The pairwise motion fields are visualized in Figure 2c, and we describe their usage in Section 4.3.2.

4 METHOD

The pipeline of our end-to-end system comprises three steps: image capture, motion field precomputation, and novel-view synthesis.

4.1 Image Capture

Besides the key frames, we also capture another kind of frames, called relative frames, which we use to compute the disparity motion fields. Specifically, for a key frame I_k, we first capture it at its defined position on the sampling sphere with a radially outward viewing direction. We then capture six relative frames at positions close to the key frame. In our approach, relative frames are captured on a circle centered at the key frame position, with a fixed radius of 25 mm in our experiments. Figure 3b shows the sampling pattern of relative frames in our approach.
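As a concrete illustration of this sampling scheme, the short sketch below enumerates the capture poses it implies (5° steps, a 1.25 m sampling sphere, and six relative frames on a 25 mm circle). The exact latitude range and the helper names are assumptions made for illustration, not taken from the authors' code.

```python
import numpy as np

SPHERE_RADIUS_M = 1.25 / 2    # sampling sphere with 1.25 m diameter
CIRCLE_RADIUS_M = 0.025       # relative frames on a 25 mm circle around each key frame

def key_frame_positions(step_deg=5.0):
    """Key-frame poses: 360 degrees in longitude x 40 degrees in latitude, 5-degree steps."""
    poses = []
    for lon in np.arange(0.0, 360.0, step_deg):        # 72 longitude samples
        for lat in np.arange(-20.0, 20.0, step_deg):   # 8 latitude samples (assumed centering)
            poses.append((lat, lon))                   # camera looks radially outward
    return poses                                       # 72 x 8 = 576 key frames

def relative_frame_offsets(num_frames=6):
    """In-plane offsets of the six relative frames on the sampling circle."""
    angles = np.linspace(0.0, 2.0 * np.pi, num_frames, endpoint=False)
    return [CIRCLE_RADIUS_M * np.array([np.cos(a), np.sin(a)]) for a in angles]
```

With the default 5° step this yields the 72×8 = 576 key-frame poses used in all experiments, each paired with six relative-frame poses.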
Finally, these seven images (shown in Figure 3c) are grouped together for the next steps. Note that the relative frames are discarded after the motion field computation, and only the key frames are kept.

To capture the key frames and the corresponding relative frames for a real scene, we developed the capture device shown in Figure 4. We use a robotic arm with three degrees-of-freedom to control the position and orientation of a camera. We convert sampling coordinates to the movement of the stepper motors, so that the camera can be placed at the camera position required by our capture scheme. In our experiments, we capture a range of 360° horizontally and 40° vertically in steps of 5°. Our device captures the corresponding 72×8 = 576 key frames and 576×6 = 3,456 relative frames in less than 2 hours. This time could be reduced by using faster stepper motors.

Fig. 5. Disparity motion fields: (a) Relative frames. (b) Motion fields {f_1, f_2, ..., f_6} between the key frame and each relative frame. (c) Disparity motion curves {C_P} for each image patch P in the key frame I_k.

4.2 Motion Field Precomputation

In this section, we introduce the computation of the disparity motion fields from the captured key frames and relative frames, and then discuss the computation of the pairwise motion fields between key frames.

4.2.1 Disparity Motion Fields

Inspired by the motion compensation techniques used in video compression, we use motion fields to synthesize new views, rather than rely on an estimate of scene depth, which is more difficult to estimate accurately, particularly for large outdoor scenes. The disparity motion field defines the 2D flow field from a key frame to any viewpoint surrounding the key frame, and is used to synthesize these novel views. All disparity motion fields are computed independently. We start by computing optical flow [44] between the key frame and all its relative frames. Let us denote the flow fields using F = {f_1, f_2, ..., f_6} (see Figure 5b), where f_i(p) is the motion vector for pixel p in the flow field f_i. For storage and computational efficiency, we aggregate the motion vectors of individual pixels p into non-overlapping image patches P of size 8×8 pixels by averaging them. As demonstrated by our results, patch-level motion fields are sufficient for synthesizing high-quality novel views.

To merge the individual motion vectors of the relative frames into a single disparity motion field, we propose a curve-based motion representation. While the sampling pattern of relative frames is regular by construction (see Figure 6a), the motion vectors v_P^i corresponding to the relative frames are generally irregular (see Figure 6b). To achieve a robust motion encoding, we therefore separately encode the magnitude and direction of the motion vectors. We fit an ellipse C_P (by least-squares fitting [34]) to the endpoints of all six motion vectors to encode the motion magnitude, and separately store the polar angles θ_P^i of the motion vectors v_P^i to encode their direction. The parameters of the ellipses C_P and the motion directions θ_P^i define our disparity motion fields, which are stored in our final scene representation. Notice that the fitted curves provide a robust fit to the six input motion vectors: estimation errors in individual vectors are reduced by the fitting. After computing the disparity motion field (visualized in Figure 5c) for each key frame, we discard all relative frames, as they are not required any more. As we will see in Section 4.3.1, this representation is well-suited for efficiently interpolating motion fields for any novel viewpoint near the key frame.

4.2.2 Pairwise Motion Fields

We compute the pairwise motion fields f_{i→j} between every pair I_i and I_j of neighboring key frames using optical flow [44]. For storage efficiency, we then again downsample the pairwise motion fields to the same resolution as the disparity motion fields, i.e. to one motion vector per patch of 8×8 pixels. These pairwise motion fields are also stored in our scene representation, and make it complete.

4.3 Novel-View Synthesis

We synthesize novel viewpoints anywhere inside the sampling sphere using a two-step process: (1) we synthesize a novel viewpoint on the sampling sphere, with a radially outward viewing direction (Section 4.3.2), and then (2) we warp the synthesized image to match the desired viewpoint in the sphere, with any viewing direction (Section 4.3.3). However, before discussing these steps, we first describe our approach for interpolating disparity motion fields for a novel viewpoint (Section 4.3.1), which is used in the first step.

Fig. 6. Motion field interpolation using the disparity motion fields: (a) The sampling circle with six relative frames r_i and the target viewpoint r*. (b) The motion vectors of the patch P in the six relative frames (v_P^i for i = 1, ..., 6), and in the target frame (v_P*).

Fig. 7. The process of synthesizing a novel view (from two key frames): (a) The inputs are two key frames, I_1 and I_2, and their pairwise motion field f_{1→2}. (b) The two intermediate interpolated images I_1* and I_2*, and the blending flow field f*_{1→2} between them. (c) The final target view I*.

4.3.1 Disparity Motion Field Interpolation

We propose a coordinate transfer scheme to interpolate the disparity motion field and obtain the motion field for any novel viewpoint (tangent to the sampling sphere). First, based on the sampling scheme introduced in Section 4.1 and Figure 3, we represent the sampling positions of a key frame and its relative frames as the center of a circle and points on the circle (see Figure 6a). The vectors r_i indicate their displacement relative to the key frame. The position of any novel target viewpoint r* can be represented similarly. We next express the direction of the novel viewpoint r* as a linear combination of the nearest two relative frames, e.g. r_1 and r_2 in Figure 6a, using θ(r*) = α θ(r_1) + β θ(r_2) in this case.
Here α + β = 1, and θ(r) denotes the polar angle of the 2D vector r. Then, we use the coefficients α and β to interpolate the target motion field for this novel viewpoint using the curve-based motion field C_P and the angles θ_P^i (see Figure 6b). To be specific, for a patch P in the key frame, we compute its motion vector using

v_P* = (|r*| / |r_1|) C_P(α θ_P^1 + β θ_P^2).

Here C_P(θ) is the motion vector from the center to the point on the ellipse with polar angle θ. We obtain the target motion field for the novel viewpoint by estimating the motion vectors for all patches accordingly.

4.3.2 Novel-View Synthesis on the Sampling Sphere

For a target viewpoint on the sampling sphere, we first find its K nearest key frames, ordered by distance (from nearest to furthest), which we denote by I_1, I_2, ..., I_K without loss of generality. Each of the key frames is used to synthesize a target image I_k* via its disparity motion field, and these target images are then aligned and fused into the final target image I* via flow-based blending with the pairwise motion fields.

First, using the disparity motion field of key frame k, we obtain the motion field f_k* between I_k and the target image I_k* using the interpolation method in Section 4.3.1. Next, we use this disparity-derived motion field f_k* to synthesize the target view I_k* using

I_k*(p) = I_k((f_k*)^{-1}(p)),    (1)

where p is a pixel in the target image I_k*, and (f_k*)^{-1} transforms the pixels of the target frame I_k* to the key frame I_k. Notice that the motion field f_k* is originally defined per patch P, and not per pixel p. We thus use bilinear interpolation on the 2D image domain to smoothly propagate the patch-level motion field to all pixels of the image. Figure 7a,b shows an example for two key frames, I_1 and I_2, and the target views I_1* and I_2* interpolated from them.

If we simply alpha-blended all interpolated target views I_1*, I_2*, ..., I_K*, the final output I* would likely contain ghosting artifacts, because there is no guarantee that the same pixel in the individually interpolated target views I_1*, I_2*, ..., I_K* corresponds to the same scene point. If we set K = 1 (using only the nearest key frame), there will be no ghosting artifacts, but the viewpoint transition will not be smooth when the used key frame switches. This is known as popping artifacts.

To synthesize a final target image I* without ghosting artifacts, we align the target images of all key frames using flow-based blending. We estimate the blending motion field f*_{1→k} using optical flow from target image I_1* to target image I_k*, as shown in Figure 7b. Ideally, the pixel p_k* = f*_{1→k}(p_1*) in I_k* should correspond to the same scene point as pixel p_1* in I_1*. So, given all the blending motion fields f*_{1→k}, we get the corresponding pixel coordinates of the scene point in all the images I_k*. Then we can estimate its final pixel position in I* by weighted averaging, where the weight is inversely proportional to the distance r_k between the target viewpoint and key frame k in their sampling circle spaces:

w̃_k = 1 − r_k / (Σ_{i=1}^{K} r_i),    w_k = w̃_k / (Σ_{i=1}^{K} w̃_i) = w̃_k / (K − 1),    (2)

when the weights are normalized to sum to one. Mathematically, we then fuse the positions of the corresponding pixels using

p* = Σ_{k=1}^{K} w_k p_k*,    (3)

where p* is the final position of the point in the fused target image I*. Next, we calculate the color of the point p* by fusing the colors of the corresponding points in the synthesized images {I_k* | k = 1, ..., K}:

I*(p*) = Σ_{k=1}^{K} w_k I_k*(p_k*).    (4)

Notice that p* may fall between the pixel grid of image I*; we again use bilinear interpolation on the 2D image plane to estimate the coordinates of all p_k* for points p* at the center of a pixel grid.

Another obstacle that prevents achieving real-time performance is the calculation of the blending flow fields f*_{i→k}. We propose to use the precomputed disparity and pairwise motion fields (Section 4.2) to infer f*_{i→k}, rather than calculate optical flow between the target images I_i* and I_k* online. Starting from the target image I_i*, we first apply the inverse of the disparity-derived motion field f_i* to map pixels into the coordinate frame of key frame I_i. We then apply the pairwise motion field f_{i→k} to map to key frame I_k. Finally, we apply the disparity-derived motion field f_k* to arrive in the coordinates of the target image I_k*. Mathematically, we can express this using the flow concatenation operator ∘ as

f*_{i→k} = f_k* ∘ f_{i→k} ∘ (f_i*)^{-1}.    (5)

The whole synthesis process is illustrated for two key frames in Figure 7. As flow concatenation is essentially addition, the blending motion fields f*_{i→k} can now be computed very efficiently.

To obtain stereoscopic results, we render two separate views, one for each eye of the head-mounted display. In our experiments, we use the nearest two to three key frames (i.e. K = 2, 3), which strikes a suitable balance between synthesis quality and performance. To achieve real-time performance, the algorithm in this section is implemented on the GPU using Direct3D 11 HLSL. We compare the performance of CPU and GPU implementations in Section 5.2.2.

Fig. 8. Comparison of our method (right) to stereo panoramas [35] (left) on two synthetic scenes. For each method, we show two stereo pairs viewed from different viewpoints (left/right). Zoomed crops of each view are shown below each result. Our method preserves head-motion parallax between different viewpoints, as seen in the displacement of the nearby tree (top) or lamp post (bottom) compared to the trees (top) or shop window (bottom) in the background. Note that the full views of these results are captured directly from the DK2 headset, which has a lower resolution than the input images, and applies chromatic aberration compensation for the headset's optics, resulting in shifted color channels and thus color fringes.

4.3.3 Viewpoint Extension

For a target viewpoint inside the sampling sphere, we first find the intersection of the target viewing ray and the sampling sphere. We then synthesize the target image at this intersection point, looking radially outward, following the method described in the previous section. Finally, to match the target view, we apply a homography warp that first rotates the virtual camera from the synthesized to the target viewing direction on the sampling sphere, and then applies a scaling transform that approximates the change in viewpoint from the position on the sphere to the target viewpoint inside the sphere.
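To make this warp concrete, here is a minimal sketch of a rotation-plus-scaling homography applied to the image synthesized on the sphere. It assumes a pinhole intrinsic matrix K for that image and zooms about the principal point; how the scale factor would be derived from the viewpoint offset is not specified above, so it is passed in as a parameter. All names are illustrative, and OpenCV is used only for the final resampling; this is not the authors' implementation.

```python
import numpy as np
import cv2

def rotation_between(a, b):
    """Rotation matrix taking unit vector a to unit vector b (Rodrigues formula);
    assumes a and b are not opposite."""
    a = np.asarray(a, dtype=float) / np.linalg.norm(a)
    b = np.asarray(b, dtype=float) / np.linalg.norm(b)
    v, c = np.cross(a, b), float(np.dot(a, b))
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

def extend_viewpoint(image_on_sphere, K, dir_sphere, dir_target, scale):
    """Warp the view synthesized on the sampling sphere towards a viewpoint inside it:
    a pure-rotation homography followed by a zoom about the principal point.
    The scaling model is an assumption of this sketch."""
    R = rotation_between(dir_sphere, dir_target)
    H_rot = K @ R @ np.linalg.inv(K)              # camera rotation expressed in pixel space
    cx, cy = K[0, 2], K[1, 2]
    S = np.array([[scale, 0.0, cx * (1.0 - scale)],
                  [0.0, scale, cy * (1.0 - scale)],
                  [0.0, 0.0, 1.0]])               # zoom about the principal point
    h, w = image_on_sphere.shape[:2]
    return cv2.warpPerspective(image_on_sphere, S @ H_rot, (w, h))
```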
Although this synthesis scheme does not exactly reflect the change in perspective resulting from moving the viewpoint inside the sampling sphere, we find that, in practice, it is an extremely efficient approximation that nonetheless produces visually highly compelling results, as demonstrated in the following section. We restrict the distance between the target viewpoint and the sampling sphere to be less than a tenth of the diameter of the sampling sphere. Otherwise, artifacts may appear in the results.
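Before moving on, the flow-based blending of Section 4.3.2 can be summarized compactly in code. The sketch below implements Equations (2) to (5) under simplifying assumptions: motion fields are dense per-pixel offset maps rather than patch-level fields, resampling is nearest-neighbor rather than bilinear, and everything runs on the CPU; function and variable names are illustrative, not taken from the authors' implementation.

```python
import numpy as np

def blend_weights(r):
    """Equation (2): blending weights from the distances r_k to the K >= 2 nearest key frames."""
    r = np.asarray(r, dtype=float)
    w = 1.0 - r / r.sum()
    return w / w.sum()                      # equivalent to dividing by (K - 1)

def concat_flows(f_i_inv, f_pair, f_k):
    """Equation (5): infer the blending flow f*_{i->k} by chaining the inverted
    disparity-derived flow, the pairwise flow and the disparity-derived flow.
    All flows are H x W x 2 pixel offsets; chaining uses nearest-neighbor lookups."""
    H, W = f_i_inv.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    x, y = xs, ys
    for flow in (f_i_inv, f_pair, f_k):
        xi = np.clip(np.rint(x).astype(int), 0, W - 1)
        yi = np.clip(np.rint(y).astype(int), 0, H - 1)
        x, y = x + flow[yi, xi, 0], y + flow[yi, xi, 1]
    return np.stack([x - xs, y - ys], axis=-1)

def fuse_views(target_views, flows_from_first, weights):
    """Equations (3) and (4): fuse corresponding pixel positions and colors across
    the synthesized target views (flows_from_first[0] is the zero flow)."""
    H, W, _ = target_views[0].shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    px, py = np.zeros((H, W)), np.zeros((H, W))
    color = np.zeros((H, W, 3))
    for w_k, view, flow in zip(weights, target_views, flows_from_first):
        xk, yk = xs + flow[..., 0], ys + flow[..., 1]       # corresponding pixel in view k
        px, py = px + w_k * xk, py + w_k * yk               # Equation (3)
        xi = np.clip(np.rint(xk).astype(int), 0, W - 1)
        yi = np.clip(np.rint(yk).astype(int), 0, H - 1)
        color += w_k * view[yi, xi]                         # Equation (4)
    out = np.zeros((H, W, 3))
    count = np.zeros((H, W))
    xi = np.clip(np.rint(px).astype(int), 0, W - 1)         # splat fused colors at fused positions
    yi = np.clip(np.rint(py).astype(int), 0, H - 1)
    np.add.at(out, (yi, xi), color)
    np.add.at(count, (yi, xi), 1.0)
    return out / np.maximum(count[..., None], 1.0)
```

In the actual system, the per-pixel fields come from bilinearly upsampled patch-level fields, and the whole procedure runs in a Direct3D 11 HLSL shader, which is how the reported real-time rates are achieved.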

Fig. 9. Comparison of our method (right) to stereo panoramas [35] (left) on three real scenes. For each method, we show two stereo pairs viewed from different viewpoints (left/right). Zoomed crops of each view are shown below each result. Top: As our method supports head-motion parallax, one can see the flowers behind the vase from some viewpoints (right, circled in green), which is not the case for stereo panoramas (see left half of figure). Middle: Thanks to motion parallax, one can also look behind the leg of the statue using our method. Bottom: When the headset is rotated, our method correctly matches the 3D rotation of the bench (with perspective effect), while the stereo panorama only applies a global 2D transform in image space. Note that the full views of these results are captured directly from the DK2 headset, which has a lower resolution than the input images, and applies chromatic aberration compensation for the headset's optics, resulting in shifted color channels and thus color fringes.

5 RESULTS

We perform all experiments on a standard PC with a 3.6 GHz Intel Core i7 quad-core CPU, 16 GB memory, and an Nvidia GeForce 980 GPU. We use an Oculus Rift Development Kit 2 ("DK2" for short) head-mounted display, which has a display resolution of 960×1080 pixels for each eye. The valid moving range of a user is 360° horizontally and 40° vertically, and key frames are captured at an interval of 5°. Our representation requires 216 MB per scene for storing the 72×8 images and the corresponding two motion fields, which is much less than a full light-field representation. Our system achieves an average frame rate of 41.2 Hz on the DK2. In the following, we first compare our solution to existing stereo panoramas. We then evaluate the main components of our technique in terms of quality and performance. Please see our supplemental video for further results and comparisons.

5.1 Comparison

Figures 8 and 9 show the results of our approach compared to existing stereo panoramas, on synthetic and real scenes, respectively. For synthetic scenes, we render the input views using Unity, and for real scenes we use our robotic arm to capture them (see Section 4.1). In both cases, we then compute motion fields as described in Section 4.2 to complete our scene representation. We compare our results with stereo panoramas created by the state-of-the-art Megastereo technique [35]. Unlike Megastereo, our results clearly show head-motion parallax. Video comparisons are shown in the supplemental video.

Figure 8 shows a comparison on synthetic input images for two different viewing positions. When changing the viewing position, stereo panoramas only apply a homography warp, which is mostly a horizontal translation of the shown imagery. In this case, there is no head-motion parallax. In our results, on the other hand, a change of viewpoint leads to head-motion parallax, which is visible in the shift of nearby and far objects relative to each other, such as the trees in the first scene. In the city scene, one can clearly see the change of position of the lamp post compared to the store window behind it.

Figure 9 shows our results on three real-world scenes, compared to stereo panoramas. The plants scene (top) shows a flower pot behind a vase. In our results, one can clearly see the change of position between the large vase and the small flower pot behind it. This cannot be observed in the stereo panorama, where the flower pot is permanently hidden behind the vase. Stereo panoramas do not provide any new viewpoints, but instead show a fixed spatial configuration for all head rotations. In the statue scene (middle), one can see behind the statue's leg with our approach. In the stereo panorama, the rendered images keep the same positional relationship. Although binocular disparity provides a sense of depth perception to viewers, head rotation is not reflected in the results beyond a simple pan, which diminishes the sense of immersion. The third scene (bench) shows a bench in front of several trees. When the head-mounted display is rotated to face the bench, our approach correctly shows the 3D rotation of the bench and the resulting change in perspective, while the stereo panorama only applies a much simpler global transform. Our results reveal the shape of the bench by enlarging the part of it closest to the viewer. Stereo panoramas provide essentially the same image contents no matter where the viewer looks or moves. Compared with stereo panoramas, which combine all information into two images, our approach takes advantage of an image-based representation that also compactly stores the depth information of a scene. Note that we are not showing any panorama results here (but we show them in the supplemental video), because they are visually similar to stereo panoramas.

Fig. 10. Comparison of different novel-view synthesis methods. For each method, the left two columns show two consecutive synthesized images, while the right column shows their absolute difference. Left: Using only the nearest key frame results in clearly visible differences (known as popping artifacts) when the nearest key frame for a synthesized view changes. Middle: Alpha blending (between two key frames) hides popping artifacts by blending multiple synthesized images, but poor alignment results in ghosting artifacts and a blurry synthesized image. Right: Our flow-based blending approach (here between two key frames) results in crisp synthesized images without popping or ghosting artifacts.

5.2 Evaluation

To further evaluate our method, we compare three different solutions for synthesizing novel views:

- Nearest key frame (NKF) renders a novel view using only the nearest key frame I_1 (K = 1). The target view is identical to I_1*.
- Alpha blending (AB) renders a novel view using the nearest K = 2 key frames. The target views I_1* and I_2* are alpha-blended without fusing the positions of the corresponding pixels. This corresponds to using p_1* = p_2* = p* in Equation 4.
- Flow-based blending (FBB) renders a novel view using our full rendering method, as described in Section 4.3.

5.2.1 View Synthesis Quality

Figure 10 compares the three novel-view synthesis methods on two real scenes. To better demonstrate their differences, we consider an HMD path from the position of one key frame to another, and pick two consecutive result views t and t+1, where the nearest key frame switches. For a clear comparison, we only show a cropped image region. We also show video results in the supplemental video. Using only the nearest key frame (NKF) to synthesize a novel view results in abrupt changes between frames t and t+1, as the nearest key frame changes. Alpha blending (AB) between views synthesized from multiple key frames suppresses abrupt changes between t and t+1, because blending weights change smoothly as the viewpoint changes.

Table 1. Stereo synthesis performance for different blending approaches.
View blending approach                        Reduced resolution    Full resolution
Nearest key frame (NKF)                       44.9 Hz               20.5 Hz
Alpha blending (AB)                           22.0 Hz                9.8 Hz
Naïve flow-based blending (FBB)                2.4 Hz                1.0 Hz
  using pairwise motion fields (+pMF)         22.5 Hz                9.9 Hz
  implemented on GPU (+GPU)                   94.5 Hz               41.2 Hz

However, the views synthesized from different key frames are not always perfectly aligned (e.g. at object edges), which causes ghosting artifacts that result in blurry synthesized views (see Figure 10, middle). In our approach, we perform flow-based blending (FBB), which aligns the views synthesized from different key frames before blending them. This removes the ghosting artifacts seen in the alpha-blended result, and produces a clean, crisp result, even when changing viewpoints.

5.2.2 View Synthesis Performance

Table 1 compares the frame rates of different synthesis methods for stereoscopic rendering. Our experiments test two resolutions on the DK2, the higher one being the full display resolution. Comparing these resolution levels, we see an approximately inverse linear relationship between frame rate and resolution across all methods: the frame rate is roughly halved when rendering twice as many pixels. Alpha blending (AB) is nearly twice as expensive (with K = 2) as using only the nearest key frame (NKF). Naïve flow-based blending, which computes the blending motion fields f*_{i→k} using optical flow during rendering, comes at great computational cost. Using our precomputed pairwise motion fields to infer the blending motion fields f*_{i→k} dramatically improves performance ("FBB+pMF" in Table 1). Our GPU implementation using Direct3D 11 HLSL ("FBB+pMF+GPU") achieves a further speed-up to more than 40 Hz for the full display resolution of the DK2 headset. In the supplemental video, we show that this improvement in performance does not sacrifice rendering quality compared to online flow-based blending (FBB).

5.3 Discussion

Our approach requires a great many images as input. This increases the difficulty of recording and affects the efficiency of the motion field computation. In our experiments, capturing the 576 + 3,456 = 4,032 input images takes almost 2 hours, and precomputation takes almost 24 hours on a quad-core PC. The main bottleneck in our implementation is the optical flow computation [44], which takes about 99% of the time. However, note that our approach is independent of any particular flow implementation, so in principle any implementation could be used. Using a real-time optical flow method [17] could reduce our precomputation to less than one minute.

Fig. 11. View-synthesis artifacts caused by incorrect motion fields.

In addition, our scene representation has increased storage requirements compared to stereo panoramas, as it stores hundreds of key frames plus the associated patch-level disparity and pairwise motion fields. All this data is required to achieve head-motion parallax, but compression schemes used for light fields [5, 24] could likely be adapted to reduce storage requirements by an order of magnitude.

Like previous work on stereo panoramas [1, 35], the quality of our synthesized views depends mainly on the correctness of the computed optical flow. If the optical flow contains errors, the final results will likely contain artifacts. Figure 11 shows an example of such artifacts, which are caused by incorrect optical flow. In this case, a repetitive pattern leads to incorrect correspondences and hence poorly synthesized results. This problem could potentially be ameliorated by enforcing geometric consistency checks between computed flow fields.

6 CONCLUSION

In this paper, we presented Parallax360, a novel 360° scene representation based on two-scale motion fields, i.e. the disparity and pairwise motion fields. Building on this representation, we propose a complete system for capturing and representing a real scene, and rendering it in a VR HMD in real time. Our approach is the first to generate novel views of a real 360° scene with head-motion parallax in real time. As shown by our results, our system generates a much more realistic 3D effect compared to stereo panoramas. Our representation is designed to handle large real scenes, where the disparity motion fields achieve implicit depth estimation by a robust curve-fitting technique that filters out noise, and the precomputed pairwise motion fields guarantee high-quality flow-based viewpoint synthesis with real-time performance. We also developed a robotic capture device to automatically and accurately capture the input images required by our approach.

ACKNOWLEDGMENTS

This work was supported by the NSFC, the National Key Technologies R&D Program of China (No. 2015BAF23B03) and EPSRC grant CAMERA (EP/M023281/1).

REFERENCES

[1] R. Anderson, D. Gallup, J. T. Barron, J. Kontkanen, N. Snavely, C. Hernández, S. Agarwal, and S. M. Seitz. Jump: virtual reality video. ACM Transactions on Graphics, 35(6):198.
[2] M. Arikan, R. Preiner, and M. Wimmer. Multi-depth-map ray tracing for efficient large-scene reconstruction. IEEE Transactions on Visualization and Computer Graphics, 22(2).
[3] C. Birklbauer and O. Bimber. Panorama light-field imaging. Computer Graphics Forum, 33(2):43-52.
[4] M. Brown and D. G. Lowe. Automatic panoramic image stitching using invariant features. International Journal of Computer Vision, 74(1):59-73.
[5] C. Buehler, M. Bosse, L. McMillan, S. Gortler, and M. Cohen. Unstructured lumigraph rendering. In SIGGRAPH.
[6] G. Chaurasia, S. Duchene, O. Sorkine-Hornung, and G. Drettakis. Depth synthesis and local warps for plausible image-based navigation. ACM Transactions on Graphics, 32(3):30.
[7] S. E. Chen and L. Williams. View interpolation for image synthesis. In SIGGRAPH.
[8] A. Collet, M. Chuang, P. Sweeney, D. Gillett, D. Evseev, D. Calabrese, H. Hoppe, A. Kirk, and S. Sullivan. High-quality streamable free-viewpoint video. ACM Transactions on Graphics, 34(4):69.
[9] M. Dou, S. Khamis, Y. Degtyarev, P. Davidson, S. R. Fanello, A. Kowdle, S. O. Escolano, C. Rhemann, D. Kim, J. Taylor, P. Kohli, V. Tankovich, and S. Izadi. Fusion4D: real-time performance capture of challenging scenes. ACM Transactions on Graphics, 35(4):114.
[10] M. Eisemann, B. De Decker, M. Magnor, P. Bekaert, E. de Aguiar, N. Ahmed, C. Theobalt, and A. Sellent. Floating textures. Computer Graphics Forum, 27(2).
[11] Y. Furukawa and J. Ponce. Accurate, dense, and robust multiview stereopsis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(8).
[12] M. Goesele, J. Ackermann, S. Fuhrmann, C. Haubold, R. Klowsky, D. Steedly, and R. Szeliski. Ambient point clouds for view interpolation. ACM Transactions on Graphics, 29(4):95.
[13] S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. F. Cohen. The lumigraph. In SIGGRAPH.
[14] P. Hedman, S. Alsisan, R. Szeliski, and J. Kopf. Casual 3D photography. ACM Transactions on Graphics, 36(6):234:1-15.
[15] P. Hedman, T. Ritschel, G. Drettakis, and G. Brostow. Scalable inside-out image-based rendering. ACM Transactions on Graphics, 35(6):231:1-11.
[16] J. Huang, Z. Chen, D. Ceylan, and H. Jin. 6-DOF VR videos with a single 360-camera. In Proceedings of IEEE Virtual Reality (VR).
[17] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox. FlowNet 2.0: Evolution of optical flow estimation with deep networks. In CVPR.
[18] M. Innmann, M. Zollhöfer, M. Nießner, C. Theobalt, and M. Stamminger. VolumeDeform: Real-time volumetric non-rigid reconstruction. In ECCV.
[19] S. B. Kang, R. Szeliski, and M. Uyttendaele. Seamless stitching using multi-perspective plane sweep. Technical Report, Microsoft Research.
[20] J. Kopf, D. Lischinski, O. Deussen, D. Cohen-Or, and M. Cohen. Locally adapted projections to reduce panorama distortions. Computer Graphics Forum, 28(4).
[21] B. Krolla, M. Diebold, B. Goldlücke, and D. Stricker. Spherical light fields. In BMVC.
[22] J. Lee, B. Kim, K. Kim, Y. Kim, and J. Noh. Rich360: optimized spherical representation from structured panoramic camera arrays. ACM Transactions on Graphics, 35(4):63.
[23] A. Levin and F. Durand. Linear view synthesis using a dimensionality gap light field prior. In CVPR.
[24] M. Levoy and P. Hanrahan. Light field rendering. In SIGGRAPH.
[25] K. Marwah, G. Wetzstein, Y. Bando, and R. Raskar. Compressive light field photography using overcomplete dictionaries and optimized projections. ACM Transactions on Graphics, 32(4):46.
[26] K. Matzen, M. F. Cohen, B. Evans, J. Kopf, and R. Szeliski. Low-cost 360 stereo photography and video capture. ACM Transactions on Graphics, 36(4):148.
[27] K. Mitra and A. Veeraraghavan. Light field denoising, light field super-resolution and stereo camera based refocussing using a GMM light field patch prior. In CVPR Workshops.
[28] R. A. Newcombe, D. Fox, and S. M. Seitz. DynamicFusion: Reconstruction and tracking of non-rigid scenes in real-time. In CVPR.
[29] R. A. Newcombe, S. Izadi, O. Hilliges, D. Molyneaux, D. Kim, A. J. Davison, P. Kohli, J. Shotton, S. Hodges, and A. Fitzgibbon. KinectFusion: Real-time dense surface mapping and tracking. In ISMAR.
[30] M. Nießner, M. Zollhöfer, S. Izadi, and M. Stamminger. Real-time 3D reconstruction at scale using voxel hashing. ACM Transactions on Graphics, 32(6):169:1-11.
[31] S. Peleg, M. Ben-Ezra, and Y. Pritch. Omnistereo: Panoramic stereo imaging. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(3).
[32] F. Perazzi, A. Sorkine-Hornung, H. Zimmer, P. Kaufmann, O. Wang, S. Watson, and M. Gross. Panoramic video from unstructured camera arrays. Computer Graphics Forum, 34(2):57-68.
[33] S. Philip, B. Summa, J. Tierny, P.-T. Bremer, and V. Pascucci. Distributed seams for gigapixel panoramas. IEEE Transactions on Visualization and Computer Graphics, 21(3).
[34] L. Piegl and W. Tiller. The NURBS Book. Springer.
[35] C. Richardt, Y. Pritch, H. Zimmer, and A. Sorkine-Hornung. Megastereo: Constructing high-resolution stereo panoramas. In CVPR, 2013.

[36] L. Shi, H. Hassanieh, A. Davis, D. Katabi, and F. Durand. Light field reconstruction using sparsity in the continuous Fourier domain. ACM Transactions on Graphics, 34(1):12.
[37] H.-Y. Shum, S.-C. Chan, and S. B. Kang. Image-Based Rendering. Springer.
[38] H.-Y. Shum and L.-W. He. Rendering with concentric mosaics. In SIGGRAPH.
[39] H.-Y. Shum and R. Szeliski. Stereo reconstruction from multiperspective panoramas. In ICCV.
[40] H.-Y. Shum and R. Szeliski. Systems and experiment paper: Construction of panoramic image mosaics with global and local alignment. International Journal of Computer Vision, 36(2).
[41] R. Szeliski. Image alignment and stitching: A tutorial. Foundations and Trends in Computer Graphics and Vision, 2(1):1-104.
[42] S. Wanner and B. Goldluecke. Variational light field analysis for disparity estimation and super-resolution. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(3).
[43] K. Xu, H. Huang, Y. Shi, H. Li, P. Long, J. Caichen, W. Sun, and B. Chen. Autoscanning for coupled scene reconstruction and proactive object analysis. ACM Transactions on Graphics, 34(6):177.
[44] L. Xu, J. Jia, and Y. Matsushita. Motion detail preserving optical flow estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(9).
[45] L. Zelnik-Manor, G. Peters, and P. Perona. Squaring the circle in panoramas. In ICCV.
[46] Y. Zhang, W. Xu, Y. Tong, and K. Zhou. Online structure analysis for real-time indoor scene reconstruction. ACM Transactions on Graphics, 34(5):159.
[47] Z. Zhang, Y. Liu, and Q. Dai. Light field from micro-baseline image pair. In CVPR, 2015.


More information

Gauss-Jordan Algorithm

Gauss-Jordan Algorithm Gauss-Jordan Algorihm The Gauss-Jordan algorihm is a sep by sep procedure for solving a sysem of linear equaions which may conain any number of variables and any number of equaions. The algorihm is carried

More information

An Improved Square-Root Nyquist Shaping Filter

An Improved Square-Root Nyquist Shaping Filter An Improved Square-Roo Nyquis Shaping Filer fred harris San Diego Sae Universiy fred.harris@sdsu.edu Sridhar Seshagiri San Diego Sae Universiy Seshigar.@engineering.sdsu.edu Chris Dick Xilinx Corp. chris.dick@xilinx.com

More information

Research Article Auto Coloring with Enhanced Character Registration

Research Article Auto Coloring with Enhanced Character Registration Compuer Games Technology Volume 2008, Aricle ID 35398, 7 pages doi:0.55/2008/35398 Research Aricle Auo Coloring wih Enhanced Characer Regisraion Jie Qiu, Hock Soon Seah, Feng Tian, Quan Chen, Zhongke Wu,

More information

Projection & Interaction

Projection & Interaction Projecion & Ineracion Algebra of projecion Canonical viewing volume rackball inerface ransform Hierarchies Preview of Assignmen #2 Lecure 8 Comp 236 Spring 25 Projecions Our lives are grealy simplified

More information

MATH Differential Equations September 15, 2008 Project 1, Fall 2008 Due: September 24, 2008

MATH Differential Equations September 15, 2008 Project 1, Fall 2008 Due: September 24, 2008 MATH 5 - Differenial Equaions Sepember 15, 8 Projec 1, Fall 8 Due: Sepember 4, 8 Lab 1.3 - Logisics Populaion Models wih Harvesing For his projec we consider lab 1.3 of Differenial Equaions pages 146 o

More information

A time-space consistency solution for hardware-in-the-loop simulation system

A time-space consistency solution for hardware-in-the-loop simulation system Inernaional Conference on Advanced Elecronic Science and Technology (AEST 206) A ime-space consisency soluion for hardware-in-he-loop simulaion sysem Zexin Jiang a Elecric Power Research Insiue of Guangdong

More information

High Resolution Passive Facial Performance Capture

High Resolution Passive Facial Performance Capture High Resoluion Passive Facial Performance Capure Derek Bradley1 Wolfgang Heidrich1 Tiberiu Popa1,2 Alla Sheffer1 1) Universiy of Briish Columbia 2) ETH Zu rich Figure 1: High resoluion passive facial performance

More information

Collision-Free and Curvature-Continuous Path Smoothing in Cluttered Environments

Collision-Free and Curvature-Continuous Path Smoothing in Cluttered Environments Collision-Free and Curvaure-Coninuous Pah Smoohing in Cluered Environmens Jia Pan 1 and Liangjun Zhang and Dinesh Manocha 3 1 panj@cs.unc.edu, 3 dm@cs.unc.edu, Dep. of Compuer Science, Universiy of Norh

More information

Improving Occupancy Grid FastSLAM by Integrating Navigation Sensors

Improving Occupancy Grid FastSLAM by Integrating Navigation Sensors Improving Occupancy Grid FasSLAM by Inegraing Navigaion Sensors Chrisopher Weyers Sensors Direcorae Air Force Research Laboraory Wrigh-Paerson AFB, OH 45433 Gilber Peerson Deparmen of Elecrical and Compuer

More information

Upper Body Tracking for Human-Machine Interaction with a Moving Camera

Upper Body Tracking for Human-Machine Interaction with a Moving Camera The 2009 IEEE/RSJ Inernaional Conference on Inelligen Robos and Sysems Ocober -5, 2009 S. Louis, USA Upper Body Tracking for Human-Machine Ineracion wih a Moving Camera Yi-Ru Chen, Cheng-Ming Huang, and

More information

Real-time 2D Video/3D LiDAR Registration

Real-time 2D Video/3D LiDAR Registration Real-ime 2D Video/3D LiDAR Regisraion C. Bodenseiner Fraunhofer IOSB chrisoph.bodenseiner@iosb.fraunhofer.de M. Arens Fraunhofer IOSB michael.arens@iosb.fraunhofer.de Absrac Progress in LiDAR scanning

More information

Image warping Li Zhang CS559

Image warping Li Zhang CS559 Wha is an image Image arping Li Zhang S559 We can hink of an image as a funcion, f: R 2 R: f(, ) gives he inensi a posiion (, ) defined over a recangle, ih a finie range: f: [a,b][c,d] [,] f Slides solen

More information

Proceeding of the 6 th International Symposium on Artificial Intelligence and Robotics & Automation in Space: i-sairas 2001, Canadian Space Agency,

Proceeding of the 6 th International Symposium on Artificial Intelligence and Robotics & Automation in Space: i-sairas 2001, Canadian Space Agency, Proceeding of he 6 h Inernaional Symposium on Arificial Inelligence and Roboics & Auomaion in Space: i-sairas 00, Canadian Space Agency, S-Huber, Quebec, Canada, June 8-, 00. Muli-resoluion Mapping Using

More information

Virtual Recovery of Excavated Archaeological Finds

Virtual Recovery of Excavated Archaeological Finds Virual Recovery of Excavaed Archaeological Finds Jiang Yu ZHENG, Zhong Li ZHANG*, Norihiro ABE Kyushu Insiue of Technology, Iizuka, Fukuoka 820, Japan *Museum of he Terra-Coa Warrlors and Horses, Lin Tong,

More information

A MRF formulation for coded structured light

A MRF formulation for coded structured light A MRF formulaion for coded srucured ligh Jean-Philippe Tardif Sébasien Roy Déparemen d informaique e recherche opéraionnelle Universié de Monréal, Canada {ardifj, roys}@iro.umonreal.ca Absrac Mulimedia

More information

Image segmentation. Motivation. Objective. Definitions. A classification of segmentation techniques. Assumptions for thresholding

Image segmentation. Motivation. Objective. Definitions. A classification of segmentation techniques. Assumptions for thresholding Moivaion Image segmenaion Which pixels belong o he same objec in an image/video sequence? (spaial segmenaion) Which frames belong o he same video sho? (emporal segmenaion) Which frames belong o he same

More information

ACQUIRING high-quality and well-defined depth data. Online Temporally Consistent Indoor Depth Video Enhancement via Static Structure

ACQUIRING high-quality and well-defined depth data. Online Temporally Consistent Indoor Depth Video Enhancement via Static Structure SUBMITTED TO TRANSACTION ON IMAGE PROCESSING 1 Online Temporally Consisen Indoor Deph Video Enhancemen via Saic Srucure Lu Sheng, Suden Member, IEEE, King Ngi Ngan, Fellow, IEEE, Chern-Loon Lim and Songnan

More information

RGB-D Object Tracking: A Particle Filter Approach on GPU

RGB-D Object Tracking: A Particle Filter Approach on GPU RGB-D Objec Tracking: A Paricle Filer Approach on GPU Changhyun Choi and Henrik I. Chrisensen Cener for Roboics & Inelligen Machines College of Compuing Georgia Insiue of Technology Alana, GA 3332, USA

More information

THE micro-lens array (MLA) based light field cameras,

THE micro-lens array (MLA) based light field cameras, IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL., NO., A Generic Muli-Projecion-Cener Model and Calibraion Mehod for Ligh Field Cameras Qi hang, Chunping hang, Jinbo Ling, Qing Wang,

More information

The Impact of Product Development on the Lifecycle of Defects

The Impact of Product Development on the Lifecycle of Defects The Impac of Produc Developmen on he Lifecycle of Rudolf Ramler Sofware Compeence Cener Hagenberg Sofware Park 21 A-4232 Hagenberg, Ausria +43 7236 3343 872 rudolf.ramler@scch.a ABSTRACT This paper invesigaes

More information

Simultaneous Localization and Mapping with Stereo Vision

Simultaneous Localization and Mapping with Stereo Vision Simulaneous Localizaion and Mapping wih Sereo Vision Mahew N. Dailey Compuer Science and Informaion Managemen Asian Insiue of Technology Pahumhani, Thailand Email: mdailey@ai.ac.h Manukid Parnichkun Mecharonics

More information

Design Alternatives for a Thin Lens Spatial Integrator Array

Design Alternatives for a Thin Lens Spatial Integrator Array Egyp. J. Solids, Vol. (7), No. (), (004) 75 Design Alernaives for a Thin Lens Spaial Inegraor Array Hala Kamal *, Daniel V azquez and Javier Alda and E. Bernabeu Opics Deparmen. Universiy Compluense of

More information

MOTION TRACKING is a fundamental capability that

MOTION TRACKING is a fundamental capability that TECHNICAL REPORT CRES-05-008, CENTER FOR ROBOTICS AND EMBEDDED SYSTEMS, UNIVERSITY OF SOUTHERN CALIFORNIA 1 Real-ime Moion Tracking from a Mobile Robo Boyoon Jung, Suden Member, IEEE, Gaurav S. Sukhame,

More information

Multi-Target Detection and Tracking from a Single Camera in Unmanned Aerial Vehicles (UAVs)

Multi-Target Detection and Tracking from a Single Camera in Unmanned Aerial Vehicles (UAVs) 2016 IEEE/RSJ Inernaional Conference on Inelligen Robos and Sysems (IROS) Daejeon Convenion Cener Ocober 9-14, 2016, Daejeon, Korea Muli-Targe Deecion and Tracking from a Single Camera in Unmanned Aerial

More information

Audio Engineering Society. Convention Paper. Presented at the 119th Convention 2005 October 7 10 New York, New York USA

Audio Engineering Society. Convention Paper. Presented at the 119th Convention 2005 October 7 10 New York, New York USA Audio Engineering Sociey Convenion Paper Presened a he 119h Convenion 2005 Ocober 7 10 New Yor, New Yor USA This convenion paper has been reproduced from he auhor's advance manuscrip, wihou ediing, correcions,

More information

Landmarks: A New Model for Similarity-Based Pattern Querying in Time Series Databases

Landmarks: A New Model for Similarity-Based Pattern Querying in Time Series Databases Lmarks: A New Model for Similariy-Based Paern Querying in Time Series Daabases Chang-Shing Perng Haixun Wang Sylvia R. Zhang D. So Parker perng@cs.ucla.edu hxwang@cs.ucla.edu Sylvia Zhang@cle.com so@cs.ucla.edu

More information

Robust Visual Tracking for Multiple Targets

Robust Visual Tracking for Multiple Targets Robus Visual Tracking for Muliple Targes Yizheng Cai, Nando de Freias, and James J. Lile Universiy of Briish Columbia, Vancouver, B.C., Canada, V6T 1Z4 {yizhengc, nando, lile}@cs.ubc.ca Absrac. We address

More information

Robot localization under perceptual aliasing conditions based on laser reflectivity using particle filter

Robot localization under perceptual aliasing conditions based on laser reflectivity using particle filter Robo localizaion under percepual aliasing condiions based on laser refleciviy using paricle filer DongXiang Zhang, Ryo Kurazume, Yumi Iwashia, Tsuomu Hasegawa Absrac Global localizaion, which deermines

More information

A Review on Block Matching Motion Estimation and Automata Theory based Approaches for Fractal Coding

A Review on Block Matching Motion Estimation and Automata Theory based Approaches for Fractal Coding Regular Issue A Review on Block Maching Moion Esimaion and Auomaa Theory based Approaches for Fracal Coding Shailesh D Kamble 1, Nileshsingh V Thakur 2, and Preei R Bajaj 3 1 Compuer Science & Engineering,

More information

A Fast Non-Uniform Knots Placement Method for B-Spline Fitting

A Fast Non-Uniform Knots Placement Method for B-Spline Fitting 2015 IEEE Inernaional Conference on Advanced Inelligen Mecharonics (AIM) July 7-11, 2015. Busan, Korea A Fas Non-Uniform Knos Placemen Mehod for B-Spline Fiing T. Tjahjowidodo, VT. Dung, and ML. Han Absrac

More information

Dynamic Depth Recovery from Multiple Synchronized Video Streams 1

Dynamic Depth Recovery from Multiple Synchronized Video Streams 1 Dynamic Deph Recoery from Muliple ynchronized Video reams Hai ao, Harpree. awhney, and Rakesh Kumar Deparmen of Compuer Engineering arnoff Corporaion Uniersiy of California a ana Cruz Washingon Road ana

More information

A Face Detection Method Based on Skin Color Model

A Face Detection Method Based on Skin Color Model A Face Deecion Mehod Based on Skin Color Model Dazhi Zhang Boying Wu Jiebao Sun Qinglei Liao Deparmen of Mahemaics Harbin Insiue of Technology Harbin China 150000 Zhang_dz@163.com mahwby@hi.edu.cn sunjiebao@om.com

More information

MODEL BASED TECHNIQUE FOR VEHICLE TRACKING IN TRAFFIC VIDEO USING SPATIAL LOCAL FEATURES

MODEL BASED TECHNIQUE FOR VEHICLE TRACKING IN TRAFFIC VIDEO USING SPATIAL LOCAL FEATURES MODEL BASED TECHNIQUE FOR VEHICLE TRACKING IN TRAFFIC VIDEO USING SPATIAL LOCAL FEATURES Arun Kumar H. D. 1 and Prabhakar C. J. 2 1 Deparmen of Compuer Science, Kuvempu Universiy, Shimoga, India ABSTRACT

More information

Feature-Preserving Reconstruction of Singular Surfaces

Feature-Preserving Reconstruction of Singular Surfaces Eurographics Symposium on Geomery Processing 2012 Eian Grinspun and Niloy Mira (Gues Ediors) Volume 31 (2012), Number 5 Feaure-Preserving Reconsrucion of Singular Surfaces T. K. Dey 1 and X. Ge 1 and Q.

More information

Image warping/morphing

Image warping/morphing Image arping/morphing Image arping Digial Visual Effecs Yung-Yu Chuang ih slides b Richard Szeliski, Seve Seiz, Tom Funkhouser and leei Efros Image formaion Sampling and quanizaion B Wha is an image We

More information

Rao-Blackwellized Particle Filtering for Probing-Based 6-DOF Localization in Robotic Assembly

Rao-Blackwellized Particle Filtering for Probing-Based 6-DOF Localization in Robotic Assembly MITSUBISHI ELECTRIC RESEARCH LABORATORIES hp://www.merl.com Rao-Blackwellized Paricle Filering for Probing-Based 6-DOF Localizaion in Roboic Assembly Yuichi Taguchi, Tim Marks, Haruhisa Okuda TR1-8 June

More information

An Adaptive Spatial Depth Filter for 3D Rendering IP

An Adaptive Spatial Depth Filter for 3D Rendering IP JOURNAL OF SEMICONDUCTOR TECHNOLOGY AND SCIENCE, VOL.3, NO. 4, DECEMBER, 23 175 An Adapive Spaial Deph Filer for 3D Rendering IP Chang-Hyo Yu and Lee-Sup Kim Absrac In his paper, we presen a new mehod

More information

Schedule. Curves & Surfaces. Questions? Last Time: Today. Limitations of Polygonal Meshes. Acceleration Data Structures.

Schedule. Curves & Surfaces. Questions? Last Time: Today. Limitations of Polygonal Meshes. Acceleration Data Structures. Schedule Curves & Surfaces Sunday Ocober 5 h, * 3-5 PM *, Room TBA: Review Session for Quiz 1 Exra Office Hours on Monday (NE43 Graphics Lab) Tuesday Ocober 7 h : Quiz 1: In class 1 hand-wrien 8.5x11 shee

More information

Computer representations of piecewise

Computer representations of piecewise Edior: Gabriel Taubin Inroducion o Geomeric Processing hrough Opimizaion Gabriel Taubin Brown Universiy Compuer represenaions o piecewise smooh suraces have become vial echnologies in areas ranging rom

More information

arxiv: v1 [cs.cv] 18 Apr 2017

arxiv: v1 [cs.cv] 18 Apr 2017 Ligh Field Blind Moion Deblurring Praul P. Srinivasan 1, Ren Ng 1, Ravi Ramamoorhi 2 1 Universiy of California, Berkeley 2 Universiy of California, San Diego 1 {praul,ren}@eecs.berkeley.edu, 2 ravir@cs.ucsd.edu

More information

Video-Based Face Recognition Using Probabilistic Appearance Manifolds

Video-Based Face Recognition Using Probabilistic Appearance Manifolds Video-Based Face Recogniion Using Probabilisic Appearance Manifolds Kuang-Chih Lee Jeffrey Ho Ming-Hsuan Yang David Kriegman klee10@uiuc.edu jho@cs.ucsd.edu myang@honda-ri.com kriegman@cs.ucsd.edu Compuer

More information

NURBS rendering in OpenSG Plus

NURBS rendering in OpenSG Plus NURS rering in OpenSG Plus F. Kahlesz Á. alázs R. Klein Universiy of onn Insiue of Compuer Science II Compuer Graphics Römersrasse 164. 53117 onn, Germany Absrac Mos of he indusrial pars are designed as

More information

Dynamic Route Planning and Obstacle Avoidance Model for Unmanned Aerial Vehicles

Dynamic Route Planning and Obstacle Avoidance Model for Unmanned Aerial Vehicles Volume 116 No. 24 2017, 315-329 ISSN: 1311-8080 (prined version); ISSN: 1314-3395 (on-line version) url: hp://www.ijpam.eu ijpam.eu Dynamic Roue Planning and Obsacle Avoidance Model for Unmanned Aerial

More information

User Adjustable Process Scheduling Mechanism for a Multiprocessor Embedded System

User Adjustable Process Scheduling Mechanism for a Multiprocessor Embedded System Proceedings of he 6h WSEAS Inernaional Conference on Applied Compuer Science, Tenerife, Canary Islands, Spain, December 16-18, 2006 346 User Adjusable Process Scheduling Mechanism for a Muliprocessor Embedded

More information

Low-Cost WLAN based. Dr. Christian Hoene. Computer Science Department, University of Tübingen, Germany

Low-Cost WLAN based. Dr. Christian Hoene. Computer Science Department, University of Tübingen, Germany Low-Cos WLAN based Time-of-fligh fligh Trilaeraion Precision Indoor Personnel Locaion and Tracking for Emergency Responders Third Annual Technology Workshop, Augus 5, 2008 Worceser Polyechnic Insiue, Worceser,

More information

arxiv: v1 [cs.cv] 25 Apr 2017

arxiv: v1 [cs.cv] 25 Apr 2017 Sudheendra Vijayanarasimhan Susanna Ricco svnaras@google.com ricco@google.com... arxiv:1704.07804v1 [cs.cv] 25 Apr 2017 SfM-Ne: Learning of Srucure and Moion from Video Cordelia Schmid Esimaed deph, camera

More information

Probabilistic Detection and Tracking of Motion Discontinuities

Probabilistic Detection and Tracking of Motion Discontinuities Probabilisic Deecion and Tracking of Moion Disconinuiies Michael J. Black David J. Flee Xerox Palo Alo Research Cener 3333 Coyoe Hill Road Palo Alo, CA 94304 fblack,fleeg@parc.xerox.com hp://www.parc.xerox.com/fblack,fleeg/

More information

Scheduling. Scheduling. EDA421/DIT171 - Parallel and Distributed Real-Time Systems, Chalmers/GU, 2011/2012 Lecture #4 Updated March 16, 2012

Scheduling. Scheduling. EDA421/DIT171 - Parallel and Distributed Real-Time Systems, Chalmers/GU, 2011/2012 Lecture #4 Updated March 16, 2012 EDA421/DIT171 - Parallel and Disribued Real-Time Sysems, Chalmers/GU, 2011/2012 Lecure #4 Updaed March 16, 2012 Aemps o mee applicaion consrains should be done in a proacive way hrough scheduling. Schedule

More information

Precise Voronoi Cell Extraction of Free-form Rational Planar Closed Curves

Precise Voronoi Cell Extraction of Free-form Rational Planar Closed Curves Precise Voronoi Cell Exracion of Free-form Raional Planar Closed Curves Iddo Hanniel, Ramanahan Muhuganapahy, Gershon Elber Deparmen of Compuer Science Technion, Israel Insiue of Technology Haifa 32000,

More information

Visual Perception as Bayesian Inference. David J Fleet. University of Toronto

Visual Perception as Bayesian Inference. David J Fleet. University of Toronto Visual Percepion as Bayesian Inference David J Flee Universiy of Torono Basic rules of probabiliy sum rule (for muually exclusive a ): produc rule (condiioning): independence (def n ): Bayes rule: marginalizaion:

More information

Learning in Games via Opponent Strategy Estimation and Policy Search

Learning in Games via Opponent Strategy Estimation and Policy Search Learning in Games via Opponen Sraegy Esimaion and Policy Search Yavar Naddaf Deparmen of Compuer Science Universiy of Briish Columbia Vancouver, BC yavar@naddaf.name Nando de Freias (Supervisor) Deparmen

More information

Scale Recovery for Monocular Visual Odometry Using Depth Estimated with Deep Convolutional Neural Fields

Scale Recovery for Monocular Visual Odometry Using Depth Estimated with Deep Convolutional Neural Fields Scale Recovery for Monocular Visual Odomery Using Deph Esimaed wih Deep Convoluional Neural Fields Xiaochuan Yin, Xiangwei Wang, Xiaoguo Du, Qijun Chen Tongji Universiy yinxiaochuan@homail.com,wangxiangwei.cpp@gmail.com,

More information

A Hierarchical Object Recognition System Based on Multi-scale Principal Curvature Regions

A Hierarchical Object Recognition System Based on Multi-scale Principal Curvature Regions A Hierarchical Objec Recogniion Sysem Based on Muli-scale Principal Curvaure Regions Wei Zhang, Hongli Deng, Thomas G Dieerich and Eric N Morensen School of Elecrical Engineering and Compuer Science Oregon

More information

X-Splines : A Spline Model Designed for the End-User

X-Splines : A Spline Model Designed for the End-User X-Splines : A Spline Model Designed for he End-User Carole Blanc Chrisophe Schlic LaBRI 1 cours de la libéraion, 40 alence (France) [blancjschlic]@labri.u-bordeaux.fr Absrac his paper presens a new model

More information