Haoyu Wang's Research in the SENSEI Project


Haoyu Wang's Research in the SENSEI Project Haoyu Wang Electronic Visualization Laboratory University of Illinois at Chicago

What is the SENSEI project? Short for the Sensor Environment Imaging (SENSEI) Instrument Project. Goal: a scientific camera and display system for fully surrounding stereo cinema, for scientific visual and depth data acquisition. SENSEI team: faculty and students from different universities and institutions; the software group is in EVL.

SENSEI project. Capture dynamic omnidirectional visual data, by CAVEcam or, later, by the camera prototype. Create a VR-like experience from image-based material: 360-by-180 panorama construction for both the left and right eye, via Point Cloud Reprojection or 2D image stitching (my task), and 360-by-180 stereo video from panorama sequences. Develop display, transmission, and storage systems to support scientific explorations.
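At the heart of the 360-by-180 panorama construction mentioned above is a mapping from viewing directions onto an equirectangular image. A minimal sketch of that mapping, assuming a unit-length direction and a y-up coordinate convention (the function name, axes, and image size are illustrative, not from the SENSEI code):

```python
import math

def direction_to_equirect(x, y, z, width, height):
    """Map a unit-length 3D viewing direction to pixel coordinates
    in a 360-by-180 equirectangular panorama.

    Longitude spans the full 360 degrees horizontally; latitude
    spans 180 degrees vertically."""
    theta = math.atan2(x, z)                  # longitude in [-pi, pi]
    phi = math.asin(max(-1.0, min(1.0, y)))   # latitude in [-pi/2, pi/2]
    u = (theta / (2 * math.pi) + 0.5) * width
    v = (0.5 - phi / math.pi) * height
    return u, v
```

With this convention the forward direction (0, 0, 1) lands at the center of the panorama.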

Point Cloud Reprojection. Point cloud reprojection using depth maps: a PhD thesis project by Jason Juang, now being worked on by Ji Dai & Jurgen Schultz. Brief description: compute dense disparity maps from each pair of images; based on knowledge of camera position and camera movement, reconstruct the point cloud for the whole 3D space; project the point cloud onto two spheres from the virtual eye positions.

Steps in Point Cloud Reprojection: disparity map from each image pair; disparity to depth; point cloud from the disparity map.
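The disparity-to-depth step uses the standard stereo relation Z = f·B/d, and the point cloud step lifts each pixel through a pinhole model. A sketch of both (parameter names and units are illustrative):

```python
def disparity_to_depth(d, focal_px, baseline_m):
    """Standard stereo relation: depth Z = f * B / d,
    with f in pixels, baseline B in meters, disparity d in pixels."""
    if d <= 0:
        return None  # no valid match / point at infinity
    return focal_px * baseline_m / d

def backproject(u, v, d, focal_px, baseline_m, cx, cy):
    """Lift pixel (u, v) with disparity d to a 3D point in the
    left-camera frame using the pinhole model; (cx, cy) is the
    principal point."""
    z = disparity_to_depth(d, focal_px, baseline_m)
    if z is None:
        return None
    x = (u - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return (x, y, z)
```

Running `backproject` over every valid pixel of a dense disparity map yields the point cloud that is then reprojected onto the two viewing spheres.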

Result of Point Cloud Reprojection: panorama of synthesized data (left and right views).

Result of Point Cloud Reprojection: panorama of real data (basement image set).

Conclusions on the Point Cloud method. Pros: geometrically correct; no vertical misalignment and no parallax error; with correct dense point clouds, it can provide a view from any position around the camera, not only from its shooting spot. Cons: it needs accurate dense disparity maps for perfect reconstruction of the point clouds, which is a time-consuming task; it needs to fill the black holes left after reconstruction.

2D-stitching method for panoramas. How the 2D stitching method generates a panorama: image set → control-point (cp) matching → pairwise alignment → optimization → blending.
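The flow above can be sketched as a pipeline skeleton. The four stage functions are placeholders for whatever matcher, aligner, optimizer, and blender are plugged in; all names here are illustrative, not the actual SENSEI code:

```python
def stitch_panorama(images, match_cps, align_pair, optimize, blend):
    """2D stitching pipeline: cp matching -> pairwise alignment ->
    global optimization -> blending into one panorama."""
    # Control points between each adjacent image pair
    cps = [match_cps(a, b) for a, b in zip(images, images[1:])]
    # One relative transform per adjacent pair
    pair_tf = [align_pair(cp) for cp in cps]
    # Globally consistent per-image transforms
    global_tf = optimize(pair_tf)
    # Warp every image by its transform and blend the result
    return blend(images, global_tf)
```

The point of the skeleton is the data flow: n images yield n-1 pairwise constraints, which the optimization turns into one consistent transform per image before blending.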

Conclusions on the 2D stitching method. Pros: it does not need disparity maps; there are no black holes in the final panorama. Cons: it does suffer from vertical misalignment and parallax error; it can only provide the scene from the position of the camera.

Software for image stitching. Many software packages can do image stitching: PTGui, Autopano, ICE. Why not use them?

Improvement to the stitching with depth: cp matching without depth information uses only the left and right image sets; cp matching with depth information also uses the disparity map.
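One plausible way a disparity map can help cp matching is as a consistency check: keep a left-right match only when the horizontal offset it implies agrees with what the dense disparity map predicts. The slide does not spell out the exact test, so this sketch is an assumption, with illustrative names:

```python
def filter_matches_by_depth(matches, disparity_at, tol=2.0):
    """Keep a control-point match ((uL, vL), (uR, vR)) only when the
    horizontal offset uL - uR agrees, within tol pixels, with the
    disparity the dense map predicts at the left pixel."""
    kept = []
    for (uL, vL), (uR, vR) in matches:
        d = disparity_at(uL, vL)
        if d is not None and abs((uL - uR) - d) <= tol:
            kept.append(((uL, vL), (uR, vR)))
    return kept
```

A match whose offset contradicts the disparity map is likely a false correspondence and is dropped before pairwise alignment.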

Pairwise stitching results before and after depth-matching.

Another problem for stereo panoramas: vertical disparity (left vs. right panorama).

Left panorama for basement

Right panorama for basement

Solution to the vertical disparity problem: the left and right image sets each go through cp matching, pairwise alignment, and optimization, with a joint stereo optimization step linking the two, before blending into the left and right panoramas.

Solution to vertical disparity: stereo optimization. Let I_i^l and I_i^r, i ∈ {1, 2, ..., m}, be the m pairs of images, and let M(i) be the set of features that can be found in both the i-th left and i-th right image. u_{i,l,k} and u_{i,r,k} are the positions of the k-th matched feature. Each feature is transformed by its image's alignment, v = R u + T, and the per-image rotations R = {R_1^l, R_2^l, ...} and translations T = {T_1^l, T_2^l, ...} are chosen as (R, T) = argmin of the total vertical disparity between the transformed matched features.
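The objective above can be sketched as follows: after applying each image's alignment to its features, sum the squared vertical offsets between matched left/right features; the stereo optimization searches for the transforms that minimize this sum. The transform interface here is illustrative:

```python
def vertical_disparity_cost(matches, apply_tf):
    """Sum of squared vertical offsets between matched features.

    matches: list of (i, uL, uR) where uL and uR are (x, y) feature
    positions in the i-th left and right images; apply_tf(i, side, u)
    returns the feature position after that image's alignment."""
    cost = 0.0
    for i, uL, uR in matches:
        _, yL = apply_tf(i, 'l', uL)
        _, yR = apply_tf(i, 'r', uR)
        cost += (yL - yR) ** 2
    return cost
```

With identity transforms this is simply the vertical misalignment already present in the raw matches; the optimizer's job is to drive it toward zero.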

Result of panorama of 72 images: left

Result of panorama of 72 images: right

Six cube faces
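Rendering the panorama as six cube faces amounts to assigning each viewing direction to the face of its dominant axis. A minimal sketch of that face selection (labels are illustrative):

```python
def cube_face(x, y, z):
    """Return which of the six cube-map faces a viewing direction
    hits: the face of the axis with the largest absolute component."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return '+x' if x > 0 else '-x'
    if ay >= az:
        return '+y' if y > 0 else '-y'
    return '+z' if z > 0 else '-z'
```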

The end. Q & A.