HUMAN COMPUTER INTERACTION


1 HUMAN COMPUTER INTERACTION 7. UI DEVICES AND SYSTEMS I-Chen Lin, National Chiao Tung University, Taiwan

2 Objectives Unique or interesting control/input devices and technologies. Display interfaces in addition to PC monitors. Devices conforming to users' perceptual abilities. The applications of advanced displays and virtual environments. Several slides are from the following references: D.A. Bowman, E. Kruijff, J.J. LaViola, I. Poupyrev, 3D User Interfaces: Theory and Practice, Addison Wesley Professional. Course notes, 3D User Interfaces, CS, Columbia Univ. Course notes, Introduction to Human-Computer Interaction Design, CS, Stanford Univ.

3 Dimensions of Performance Continuous vs. discrete Resolution and accuracy Sampling rate and latency Noise, aliasing and nonlinearity Direct vs. Indirect Absolute vs. relative Control-to-display ratio Physical property sensed Position, motion, force Degrees of freedom
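As a concrete illustration of the control-to-display ratio, here is a minimal Python sketch (all names and values are illustrative, not from any specific toolkit) of mapping relative device motion to cursor motion:

```python
# Minimal sketch: applying a control-to-display (C-D) gain to a relative
# (motion-sensing) pointing device. All names and values are illustrative.

def apply_cd_gain(dx_device_mm, dy_device_mm, cd_gain=2.0, dpi=96):
    """Map device motion (mm) to cursor motion (pixels) via a C-D gain."""
    mm_to_px = dpi / 25.4                      # pixels per millimetre on the display
    dx_px = dx_device_mm * cd_gain * mm_to_px  # higher gain => less hand motion,
    dy_px = dy_device_mm * cd_gain * mm_to_px  # but coarser positioning precision
    return dx_px, dy_px

print(apply_cd_gain(10.0, 0.0))  # 10 mm of mouse travel -> ~75.6 px of cursor travel
```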

4 Symbolic Input Task of entering Text Numbers Symbols Desktop symbolic input Mobile symbolic input Standing, walking, communicating 3D UI symbolic input Tracking, gesture, etc.

5 Symbolic Input (cont.) Conditions for 3D environment Mobile users Not seated: standing, crouching, About to move Actively moving No dedicated desk surface Hands busy or full Eyes busy, occluded, or in low light

6 Keyboards

7 Some Ergonomic Issues Hand position - Freedom of hand for positioning device Touch typing vs. hunt and peck Repetitive stress / fatigue One handed use Need for support

8 Mobile (Chord, Multi-press, ...)

9 Specific Symbolic Input J. Mankoff & G. Abowd, UIST 98 Samsung Scurry I. Poupyrev, N. Tomokazu, S. Weghorst, VRAIS 98

10 Specific Symbolic Input Data gloves Key poses or continuous sign recognition. HMM-based recognition Learnability

11 Projected Keyboards Virtual Keyboard

12 Pointing Devices Target acquisition Steering / positioning Tracking Gesture Freehand drawing Drawing lines Tracing and digitizing Clicking, Double-clicking, dragging

13 Trackball, Trackpad, Trackpoint What is Sensed Motion (e.g., mouse) Position (e.g., trackpad) Force (e.g., trackpoint)

14 Tracking Pointers ASL EYE-TRAC 7 eye tracking Brain-Computer Interface, MMSPG, EPFL Tobii X1 Light Eye Tracker

15 Fitts' Law MT = a + b log2(A/W + 1), where a is the start/stop time of the device, b is the inherent speed of the device, A is the distance from the starting point to the center of the target, and W is the width of the target (along the axis of motion). Target acquisition time is proportional to the log of the ratio of the distance to the width of the target. Applies to position-control devices; the same holds for direct and indirect devices.
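A minimal sketch of evaluating this prediction, assuming illustrative device constants a and b (in practice they are fit by regression on measured acquisition times):

```python
import math

# Minimal sketch of the Fitts' law prediction on this slide. The device
# constants a and b are illustrative; in practice they are fit by regression
# on measured target-acquisition times.

def fitts_mt(A, W, a=0.1, b=0.15):
    """Predicted movement time (s) for target distance A and width W (same units)."""
    index_of_difficulty = math.log2(A / W + 1)   # bits
    return a + b * index_of_difficulty

print(fitts_mt(A=256, W=16))  # harder: ID ~ 4.09 bits
print(fitts_mt(A=64,  W=32))  # easier: ID ~ 1.58 bits
```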

16 Fitts' Law (cont.) At each time step, a user travels a proportion (1 - r) of the remaining distance, for some 0 < r < 1, towards the target, so the distance left to the target's centre decays exponentially over time: the remaining distance after the N-th sub-movement is r^N A. The target is reached when the remaining distance is less than half the width of the target: r^N A < W/2. Taking logarithms, N log2(r) < log2(W / (2A)), i.e. N > log2(2A/W) / log2(1/r). The number of sub-movements, and hence the movement time, therefore grows with log2(2A/W), matching the form MT = a + b log2(A/W + 1).
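A small sketch that simulates this sub-movement model and checks it against the closed-form step count derived above (the values of A, W, and r are illustrative):

```python
import math

# Minimal sketch of the iterative sub-movement model: each step covers a
# fraction (1 - r) of the remaining distance, so after N steps the remaining
# distance is r^N * A. The loop count matches the closed form
# N = ceil(log2(2A/W) / log2(1/r)) for these illustrative values.

def submovements_to_target(A, W, r=0.07):
    remaining, steps = A, 0
    while remaining >= W / 2:      # stop once inside half the target width
        remaining *= r             # each sub-movement leaves a fraction r
        steps += 1
    return steps

A, W, r = 256.0, 16.0, 0.07
print(submovements_to_target(A, W, r))
print(math.ceil(math.log2(2 * A / W) / math.log2(1 / r)))  # same value
```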

17 Pen/Touch Input Good for pointing and drawing Natural for gestures Possibility of multiple pointers Not an efficient way for text entry Handwriting Problems of interpretation Special characters/gestures

18 Touch Screen HTC One M8; iPad 2 (Fig. from en.wikipedia.org/wiki/ipad); Surface Pro 3, Microsoft

19 Multi-Touch Screen J.Y. Han, Low-Cost Multi-Touch Sensing through Frustrated Total Internal Reflection, Proc. UIST 05.

20 Multi-Touch Screen (cont.) Simple image processing operations: rectification, background subtraction, noise removal, and connected components analysis. Low-cost video capturing. Inherent drawbacks of a vision-based system: it requires a significant amount of space and is sensitive to skin reflectance, pixel resolution, contaminated surfaces, etc.
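A hedged sketch of that pipeline on synthetic frames, using common OpenCV calls for background subtraction, noise removal, and connected components (the thresholds and blob sizes are illustrative, and rectification is omitted):

```python
import numpy as np
import cv2

# Minimal sketch of an FTIR-style touch pipeline: background subtraction,
# noise removal, connected components. Synthetic frames stand in for the IR
# camera images; thresholds and blob sizes are illustrative.

background = np.zeros((240, 320), dtype=np.uint8)           # empty-surface reference frame
frame = background.copy()
cv2.circle(frame, (100, 120), 6, 255, -1)                    # two bright "touch" blobs
cv2.circle(frame, (200, 80), 5, 255, -1)

diff = cv2.absdiff(frame, background)                        # background subtraction
diff = cv2.medianBlur(diff, 3)                               # remove salt-and-pepper noise
_, binary = cv2.threshold(diff, 50, 255, cv2.THRESH_BINARY)  # keep bright contact spots

n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
for i in range(1, n):                                        # label 0 is the background
    if stats[i, cv2.CC_STAT_AREA] > 10:                      # ignore tiny noise blobs
        x, y = centroids[i]
        print(f"touch at ({x:.1f}, {y:.1f}), area {stats[i, cv2.CC_STAT_AREA]}")
```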

21 Tangible or Mixed Interaction S. Klemmer et al., The Designers' Outpost: A Tangible Interface for Collaborative Web Site Design, Proc. UIST 01. Designers' Outpost

22 Designers' Outpost (Figure labels: ADD LINK, REMOVE INK, MOVE, MENU, SAVE)

23 Designers' Outpost A touch-sensitive SMART Board augmented with two digital cameras

24 Tangible or Mixed Interaction metaDESK

25 Tangible UI (Illuminating Clay) B. Piper, C. Ratti, Y. Wang, B. Zhu, S. Getzoyan, & H. Ishii, Proc. CHI. Allows users to interact with real freeform surfaces: the user molds the surface of clay; the system determines the height of the surface and projects imagery corresponding to the task.

26 Illuminating Clay (cont.)

27 Pointing in 3D User points with a virtual ray Visualized as a line segment attached to hand Immersive: Ray attached to 6DOF-tracked hand Desktop: Ray controlled by mouse Disambiguate by selecting closest object Hand/tracker jitter amplified by distance Makes it difficult to pick distant objects with precision M. Mine, ISAAC, 1997
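A minimal sketch of closest-object ray selection, assuming illustrative object positions and a hand-attached ray (note how a small angular jitter in the ray direction shifts the perpendicular distances more for far objects):

```python
import numpy as np

# Minimal sketch of ray-based selection: pick the object whose centre lies
# closest to the pointing ray (and in front of the hand). Object positions
# and the ray are illustrative.

def pick_by_ray(origin, direction, objects):
    d = direction / np.linalg.norm(direction)
    best, best_dist = None, np.inf
    for name, centre in objects.items():
        v = centre - origin
        t = np.dot(v, d)                 # distance along the ray
        if t < 0:                        # behind the hand: not selectable
            continue
        perp = np.linalg.norm(v - t * d) # perpendicular distance to the ray
        if perp < best_dist:
            best, best_dist = name, perp
    return best, best_dist

objects = {"lamp": np.array([0.2, 1.5, -3.0]), "chair": np.array([1.0, 0.5, -2.0])}
print(pick_by_ray(np.zeros(3), np.array([0.1, 0.5, -1.0]), objects))
```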

28 Pointing in 3D Pointing: Fishing Reel Use an additional device to specify the length of the pointing vector. Allows the user to reel an object in and out, changing its position along the ray. Bowman, 1997

29 3D and Haptic DataGlove Phantom GyroMouse CyberGrasp

30 Wii and Wii remote

31 Wii Remote Controller Accurate pointing: IR camera + sensor bar. The distance between the sensor-bar LEDs is fixed, so relative positions can be estimated from the camera's view of them.
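A minimal sketch of that idea, assuming illustrative values for the LED spacing and the IR camera's focal length (neither is taken from the actual hardware specification):

```python
# Minimal sketch of how a fixed LED separation lets the controller estimate
# its distance from the sensor bar with a pinhole-camera model. The focal
# length (in pixels) and LED spacing below are assumed, illustrative values.

LED_SPACING_M = 0.20        # assumed physical distance between the two LED clusters
FOCAL_LENGTH_PX = 1300.0    # assumed IR-camera focal length in pixels

def distance_from_blobs(u_left, u_right):
    """Estimate camera-to-bar distance from the pixel separation of the blobs."""
    pixel_separation = abs(u_right - u_left)
    return FOCAL_LENGTH_PX * LED_SPACING_M / pixel_separation  # Z = f * B / d

print(f"{distance_from_blobs(400, 530):.2f} m")  # wider separation -> closer to the bar
print(f"{distance_from_blobs(480, 545):.2f} m")
```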

32 Wii Remote Controller (cont.) 3-axis linear accelerometer: at least +/- 3 g with 10% sensitivity. (Recently, accelerometers have started being used in laptops and cell phones for damage protection or entertainment purposes.) MotionPlus (add-on): gyroscope for angular velocities.
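Since a static 3-axis accelerometer mostly measures gravity, device tilt can be recovered from a single reading; a minimal sketch with illustrative axis conventions (angular velocity, as noted above, still requires the MotionPlus gyroscope):

```python
import math

# Minimal sketch of recovering tilt (pitch/roll) from a 3-axis accelerometer
# reading when the device is roughly static, so the measurement is dominated
# by gravity. Axis conventions and the sample values are illustrative.

def tilt_from_accel(ax, ay, az):
    """Return (pitch, roll) in degrees from acceleration in g units."""
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

print(tilt_from_accel(0.0, 0.0, 1.0))    # lying flat: ~ (0, 0)
print(tilt_from_accel(-0.5, 0.0, 0.87))  # pitched up by roughly 30 degrees
```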

33 Pointing in 3D Vibration (micro vibrator). Sensors: touch sensor / 6-axis (acceleration + angular speed) motion sensor. Continuous standby time: approx. 18 days. Charge time: approx. 3 hours.

34 Active Depth Sensors Can handle the matching problem in textureless regions. Instead of using multiple cameras, active vision systems project laser, IR, or visible-light stripes or patterns. (Figures: Xtion Pro Live, ASUS; depth image from Xtion Pro Live; Kinect, Microsoft.)
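A minimal sketch of the underlying triangulation: treating the projector and IR camera as a stereo pair, depth follows from the observed pattern shift as Z = f·b/d. The focal length and baseline below are assumed, illustrative values, not the specification of any particular sensor:

```python
# Minimal sketch of triangulation as used by active depth sensors: the
# projector and IR camera form a stereo pair, and the shift (disparity) of a
# projected pattern encodes depth as Z = f * b / d. Values are assumed.

FOCAL_PX = 575.0    # assumed focal length of the IR camera, in pixels
BASELINE_M = 0.075  # assumed projector-camera baseline, in metres

def depth_from_disparity(disparity_px):
    if disparity_px <= 0:
        return float("inf")            # no measurable shift -> no depth estimate
    return FOCAL_PX * BASELINE_M / disparity_px

for d in (10.0, 20.0, 40.0):
    print(f"disparity {d:5.1f} px -> depth {depth_from_disparity(d):.2f} m")
```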

35 Motion Capture Utilizing multiple infrared cameras for stable tracking. Protruding markers help with shape analysis but may obstruct facial motion. Commercial motion capture devices are very accurate but also expensive.

36 Motion Capture (cont.) Visualeyez, Phoenix Technologies Inc.

37 Motion Capture (cont.) Active markers or passive ones? Optotrak, NDI vicon8

38 Marker Tracking (Figure: markers A, B, C, D in the 1st frame and their potential 3D candidates over frames t = 1 to t = 4.)

39 The Tracking Problem Without missing markers: a graph problem, minimum-cost network flow (MCNF), O(N^3 log(NC)). With missing markers: optimal solutions require time-consuming dynamic programming, etc.; for near real-time tracking, Kalman filters alone are insufficient.
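For contrast with those optimal methods, a minimal sketch of a simple greedy, per-frame association step with a constant-velocity prediction (an illustrative heuristic, not the method of the cited work; it is exactly the kind of approach that loses markers under occlusion):

```python
import numpy as np

# Minimal sketch: predict each trajectory with a constant-velocity model and
# greedily assign the nearest detected 3D candidate, rejecting matches beyond
# a gate distance. Positions, velocities, and the gate are illustrative.

GATE = 0.05  # metres; illustrative association threshold

def track_step(prev_pos, prev_vel, candidates):
    predicted = {k: prev_pos[k] + prev_vel[k] for k in prev_pos}  # constant velocity
    assignments, used = {}, set()
    for k in sorted(predicted):                                   # fixed visit order
        dists = np.linalg.norm(candidates - predicted[k], axis=1)
        for j in np.argsort(dists):
            if j not in used and dists[j] < GATE:
                assignments[k] = int(j)
                used.add(j)
                break
    return assignments

prev_pos = {"A": np.array([0.0, 0.0, 1.0]), "B": np.array([0.3, 0.0, 1.0])}
prev_vel = {"A": np.array([0.01, 0.0, 0.0]), "B": np.array([0.0, 0.01, 0.0])}
candidates = np.array([[0.012, 0.001, 1.0], [0.30, 0.012, 1.0], [0.9, 0.9, 0.9]])
print(track_step(prev_pos, prev_vel, candidates))                 # {'A': 0, 'B': 1}
```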

40 False Tracking and Conflict (Figure: markers A, B, C, D in the 1st frame, potential 3D candidates, and missing markers over frames t = 1 to t = 4.)

41 False Tracking and Conflict The effects of false tracking are cumulative. (Figure: results at t = 20, 50, 100, 300, 500.) I.-C. Lin et al., Visual Comp 05

42 Detection and Rectification (Figure: markers A, B, C, D in the 1st frame, potential 3D candidates, and conjectural markers over frames t = 1 to t = 4.)

43 Motion Tracking (Body Surface) Data collection & cleaning Merging disconnected trajectories Hole filling Skin animation Capture session Resulting animation Surface model in rest pose S. Park, J.K. Hodgins, Capturing and Animating Skin Deformation in Human Motion, Proc. SIGGRAPH 06

44 Data Collection and Cleaning Occlusions happen in a large region for a long time

45 Data Collection and Cleaning Merging disconnected trajectories Hole filling Reference pose PCA

46 Capture and Synthesis

47 Walking Natural Provides vestibular cues Must track entire VE Scaling is possible Tracker weight, cables, environmental obstacles Outdoor, as well as indoor Fatigue UNC Ceiling Tracker Columbia Univ. Touring Machine

48 Walking Simulators Early attempts used a conventional treadmill No natural way to change direction Track user's head/feet and rotate entire treadmill Omnidirectional treadmills Two sets of orthogonal rollers/belts allow arbitrary yaw, but slowly GaitMaster (H. Iwata et al., U. Tsukuba) Tracked user walks on two platforms that can be continuously moved in 3D Torus Treadmill Typically slow and dangerous! GaitMaster

49 Walking Simulators CirculaFloor (H. Iwata et al., SIGGRAPH 04 E-Tech, IEEE CG&A 05) Four tracked moving tiles automatically reconfigure to meet feet of (slow) tracked user Different configurations correspond to tracked user direction

50 Displays Displays, aka output devices, present information to human senses. For a single user or multiple users. Scene representations are rendered or composed into a form that can be presented by a display. Computer graphics and image processing tech. Properties Sampling Resolution

51 Visual Perception Range 400~700 nm

52 Visual Perception (cont.) Temporal resolution: ~50-60 Hz (and higher), the flicker fusion frequency. Video frame rates are not necessarily equal to flicker rates. Dependent on brightness and location on the retina (it is higher for a brighter light source). Field of view: both eyes ~200° horizontal with 120° stereo overlap; one eye ~60° toward the nose, 90° toward the side, 50° up, 70° down.

53 Geometry of Large display Surface geometry Planar Curved Hemispheric Multiple surfaces

54 Interactive Fog Screen I. Rakkolainen et al., The Interactive FogScreen (SIGGRAPH Emerging Tech.)

55 Visual Cues E.g. depth cues, helping us understand the depth of objects Monocular, static (pictorial) Monocular, dynamic Binocular

56 Monocular, Static Cues Relative size Occlusion Perspective Linear Aerial

57 Monocular, Static Cues Lighting Shadow Texture gradients

58 Monocular, Dynamic Cues Motion parallax

59 Binocular Cues (Figure: stereopsis illustration)

60 Optical See-Through Displays Augmented info via optics: mirror + beam splitter = combiner. The real world is at full resolution/FOV, but the virtual world cannot block the real world; virtual and real worlds appear at different distances. Saab AddVisor 150; Google Glass (Fig. from en.wikipedia.org/wiki/google_glass)

61 Video See-Through Displays Video mixing. The real world is at low resolution/FOV, but the virtual world can block the real world; virtual and real worlds appear at the same distance. Problems with camera mounting/optics. Camera + synthetic graphics. Mixed Reality Systems Lab COASTAR

62 Stereoscopic Viewing Spatial multiplexing Stereoscope Presents each eye with its own view Viewing in virtual environment Changing views according to head poses. users.telenet.be/thomasweynants/stereoscope.html

63 Stereoscopic Viewing Temporal multiplexing Liquid crystal shutter eyewear Right/left lenses alternately opacify in synchrony with display of left/right views Synchronized to an infrared emitter

64 Stereoscopic Viewing Spectral multiplexing Red-blue anaglyph stereo: display overlapped left/right images in red/blue. Can process only edges to retain color in the object interior. mars.jpl.nasa.gov/mpf/mpf/anaglyph-arc.htm
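A minimal sketch of composing a red-blue anaglyph from a left/right pair, using synthetic images in place of a real stereo pair:

```python
import numpy as np

# Minimal sketch of spectral multiplexing: build a red-blue anaglyph by taking
# the red channel from the left view and the green/blue channels from the
# right view. The two synthetic views stand in for a real stereo pair.

left = np.zeros((120, 160, 3), dtype=np.uint8)
right = np.zeros((120, 160, 3), dtype=np.uint8)
left[40:80, 60:100] = 200     # the same object...
right[40:80, 66:106] = 200    # ...shifted horizontally in the right view (disparity)

anaglyph = np.zeros_like(left)
anaglyph[..., 0] = left[..., 0]    # red channel from the left eye's image
anaglyph[..., 1] = right[..., 1]   # green and blue channels from the right eye's image
anaglyph[..., 2] = right[..., 2]
print(anaglyph.shape, anaglyph[60, 62], anaglyph[60, 102])  # red-only and cyan-only pixels
```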

65 Stereoscopic Viewing Polarization multiplexing Linear (horizontal/vertical): limited head tilting Circular (left-handed/right-handed) Screen must preserve polarization (metallized)

66 Autostereoscopic Viewing Spatial multiplexing Parallax barrier methods Parallax stereogram by interleaving the columns from two images
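A minimal sketch of that column interleaving on two synthetic views (even columns from the left image, odd from the right); a real display would pair this with a physical barrier in front of the panel:

```python
import numpy as np

# Minimal sketch of the column interleaving behind a parallax stereogram:
# even display columns come from the left view, odd columns from the right,
# so a barrier in front of the panel shows each eye only its own columns.

left = np.zeros((120, 160, 3), dtype=np.uint8)
left[..., 0] = 255                       # stand-in left view (red)
right = np.zeros((120, 160, 3), dtype=np.uint8)
right[..., 2] = 255                      # stand-in right view (blue)

interleaved = np.empty_like(left)
interleaved[:, 0::2] = left[:, 0::2]     # even columns -> left eye
interleaved[:, 1::2] = right[:, 1::2]    # odd columns  -> right eye
print(interleaved[0, :4].tolist())       # alternating left/right pixels
```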

67 Autostereoscopic Viewing Spatial multiplexing Parallax barrier methods Parallax panoramagram Can use physically moving mechanical or virtually moving liquid crystal barrier to create active display for a tracked user

68 Autostereoscopic Viewing Spatial multiplexing Lenticular approach Array of cylindrical lenses Brighter than barrier

69 Multi-Layer 3D Display PureDepth Multi-Layer Display

70 CAVE (Cave Automatic Virtual Environment)

71 Advanced Virtual Environment M. Gross et al., blue-c: A Spatially Immersive Display and 3D Video Portal for Telepresence, Proc. SIGGRAPH 03.

72 Blue-C

73 Blue-C (cont.)

74 Consumer-level Virtual Environment Recently, the prices of virtual environment products have reached the consumer level. VR may become more common and closer to our daily lives. Oculus Rift Virtuix Omni
