Gaze interaction (2): models and technologies


1 Gaze interaction (2): models and technologies
Corso di Interazione uomo-macchina II
Prof. Giuseppe Boccignone, Dipartimento di Scienze dell'Informazione, Università di Milano
Reference: A. Vinciarelli, M. Pantic, H. Bourlard, Social Signal Processing: Survey of an Emerging Domain, Image and Vision Computing (2008)

2 Gaze estimation without eye trackers
The problem: detect the existence of eyes, accurately interpret eye positions in the images using the pupil or iris center, and, for video, track the detected eyes from frame to frame.
Gaze estimation: the detected eyes are then used to estimate and track where a person is looking in 3D (the point of regard), or alternatively to determine the 3D line of sight.

3 //eye models
Identify a model of the eye which is sufficiently expressive to account for large variability in appearance and dynamics, while also sufficiently constrained to be computationally efficient.
Eyelids may appear straight from one view but highly curved from another; the iris contour also changes with viewing angle. (In the figure, the dashed lines indicate where the eyelids appear straight; the solid yellow lines represent the major axis of the iris ellipse.) Even for the same subject, a relatively small variation in viewing angle can cause significant changes in appearance.
The eye image may be characterized by the intensity distribution of the pupil, iris, and cornea, and by their shapes. Ethnicity, viewing angle, head pose, color, texture, lighting conditions, the position of the iris within the eye socket, and the state of the eye (open/closed) all heavily influence its appearance. The intended application and the available image data lead to different prior eye models. The prior model is typically applied at different positions, orientations, and scales to reject false candidates.

4 //eye models
Shape-based methods use a prior model of eye shape and surrounding structures: fixed shape or deformable shape.
Appearance-based methods rely on models built directly on the appearance of the eye region: template matching by constructing an image patch model and performing eye detection through model matching with a similarity measure; intensity-based methods; subspace-based methods.
Hybrid methods combine feature, shape, and appearance approaches to exploit their respective benefits.
//eye models: Shape-Based Approaches
Shape-based methods use a prior model of eye shape and a similarity measure. The prior model covers the iris and pupil contours and the exterior shape of the eye (eyelids), either as simple ellipses or in more complex forms. The parameters of the geometric model define the allowable template deformations, with parameters for rigid (similarity) transformations and parameters for nonrigid template deformations. This gives the ability to handle shape, scale, and rotation changes.

5 //eye models: Shape-Based Approaches
Simple Elliptical Shape Models. Example: Valenti and Gevers use isophote (curves connecting points of equal intensity) properties to infer the centers of the (semi)circular patterns that represent the eyes.

6-8 //eye models: Shape-Based Approaches
Simple Elliptical Shape Models. Example: Webcam-based Visual Gaze Estimation (Valenti et al.), based on isophotes and requiring no head pose: each pixel votes along the direction to the center of its isophote, and a scale-space framework provides multiresolution processing.
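To make the voting idea concrete, here is a minimal sketch of isophote-based center voting in the spirit of Valenti and Gevers: each pixel casts a vote, weighted by curvedness, at the center of its osculating circle. All names and parameter values are illustrative; the published method additionally filters votes by the sign of the isophote curvature (dark centers only) and tunes the smoothing scale.

```python
# Hedged sketch of isophote center voting (Valenti-Gevers style).
import numpy as np
from scipy.ndimage import gaussian_filter

def isophote_center_votes(eye_patch, sigma=2.0):
    """Accumulate displacement votes toward isophote centers."""
    I = gaussian_filter(eye_patch.astype(float), sigma)
    # First- and second-order image derivatives
    Ly, Lx = np.gradient(I)
    Lyy, Lyx = np.gradient(Ly)
    Lxy, Lxx = np.gradient(Lx)
    grad_sq = Lx**2 + Ly**2
    # Denominator of the isophote curvature (avoid division by zero)
    denom = Ly**2 * Lxx - 2 * Lx * Lxy * Ly + Lx**2 * Lyy
    denom[np.abs(denom) < 1e-9] = 1e-9
    # Displacement from each pixel to the center of its osculating circle
    Dx = -Lx * grad_sq / denom
    Dy = -Ly * grad_sq / denom
    # "Curvedness" weights the vote: strong, curved structures count most
    curvedness = np.sqrt(Lxx**2 + 2 * Lxy**2 + Lyy**2)
    acc = np.zeros_like(I)
    h, w = I.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx = np.round(xs + Dx).astype(int)
    cy = np.round(ys + Dy).astype(int)
    valid = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
    np.add.at(acc, (cy[valid], cx[valid]), curvedness[valid])
    # The maximum of the smoothed accumulator is the center estimate
    acc = gaussian_filter(acc, sigma)
    return np.unravel_index(np.argmax(acc), acc.shape)  # (row, col)
```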

9 //eye models: Shape-Based Approaches
Simple Elliptical Shape Models. Example: Webcam-based Visual Gaze Estimation (Valenti et al.) uses simple interpolants for easy calibration.
//eye models: Shape-Based Approaches
Complex Shape Models. Example: Yuille's deformable templates.

10 //eye models: Shape-Based Approaches
Complex Shape Models. Example: Yuille's deformable templates (continued).

11 //eye models: Shape-Based Approaches
Complex Shape Models: 1. computationally demanding, 2. may require high-contrast images, and 3. usually need to be initialized close to the eye for successful localization. For large head movements, they consequently need other methods to provide a good initialization.
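As a toy illustration of the template idea (and of why initialization matters), the sketch below fits only the iris circle of a Yuille-style eye template by locally minimizing an energy over an edge map. The function names, the 64-point sampling, and the Nelder-Mead optimizer are assumptions, not the original formulation, which also includes parabolic eyelid curves and intensity valley/peak potentials.

```python
# Toy deformable-template fragment: fit an iris circle to an edge map.
import numpy as np
from scipy.optimize import minimize

def circle_energy(params, edge_map):
    """Negative mean edge strength along a circle (lower = better fit)."""
    cx, cy, r = params
    h, w = edge_map.shape
    theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    xs = np.clip(np.round(cx + r * np.cos(theta)).astype(int), 0, w - 1)
    ys = np.clip(np.round(cy + r * np.sin(theta)).astype(int), 0, h - 1)
    return -edge_map[ys, xs].mean()

def fit_iris(edge_map, init=(20.0, 20.0, 8.0)):
    # Local optimization only: as the slide notes, the template must be
    # initialized close to the eye to converge to the right minimum.
    res = minimize(circle_energy, init, args=(edge_map,),
                   method="Nelder-Mead")
    return res.x  # (cx, cy, r)
```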

12 //eye models: Feature-Based Shape Methods
Explore the characteristics of the human eye to identify a set of distinctive features around the eyes. The limbus, pupil (dark/bright pupil images), and corneal reflections are common features used for eye localization.
Local Features by Intensity: the eye region contains several boundaries that may be detected by gray-level differences.
Local Features by Filter Responses: filter responses enhance particular characteristics in the image while suppressing others; a filter bank may therefore enhance desired features of the image and, if appropriately defined, deemphasize irrelevant ones.

13 //eye models: Feature-Based Shape Methods
Local Features by Intensity: boundaries detected by gray-level differences (Harper et al.), using a sequential search strategy.
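A generic illustration of the gray-level-difference idea (not the specific method on the slide) is the classic integral-projection strategy: summing inverted intensities along rows and columns makes the dark eye regions stand out as projection peaks. All names here are illustrative.

```python
# Minimal integral-projection sketch for intensity-based localization.
import numpy as np

def projection_peaks(face_gray):
    """Return (row, col) of the strongest dark-region projections."""
    dark = 255.0 - face_gray.astype(float)
    vertical = dark.sum(axis=1)     # one value per image row
    horizontal = dark.sum(axis=0)   # one value per image column
    return int(np.argmax(vertical)), int(np.argmax(horizontal))
```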

14 //eye models: Feature-Based Shape Methods
Local Features by Intensity: boundaries detected by gray-level differences (continued).

15 //eye models: Feature-Based Shape Methods
Local Features by Filter Responses: filter responses enhance particular characteristics in the image while suppressing others. Example (Sirohey and Rosenfeld): edges of the eye's sclera are detected with four Gabor wavelets. A nonlinear filter is constructed to detect left and right eye-corner candidates; the corners determine eye regions for further analysis, and postprocessing steps eliminate spurious corner candidates. A voting method locates the edge of the iris: since the upper part of the iris may not be visible, votes are accumulated by summing edge pixels in a U-shaped annular region, and the annulus center receiving the most votes is selected as the iris center. To detect the edge of the upper eyelid, all edge segments in the eye region are examined and fitted to a third-degree polynomial.
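The first step of such a pipeline, an oriented filter bank, can be sketched as follows: four Gabor kernels at 0, 45, 90, and 135 degrees, with the maximum absolute response taken per pixel. The kernel parameters are illustrative, not those of the original paper.

```python
# Hedged sketch of a four-orientation Gabor filter bank.
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(theta, wavelength=6.0, sigma=3.0, size=15):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates to the filter orientation
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def filter_bank_response(image):
    """Max response over four orientations (0, 45, 90, 135 degrees)."""
    responses = [np.abs(convolve(image.astype(float), gabor_kernel(t)))
                 for t in np.deg2rad([0, 45, 90, 135])]
    return np.max(responses, axis=0)
```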

16 //eye models: Feature-Based Shape Methods
Local Features by Filter Responses. Example: Sirohey and Rosenfeld (continued).

17 //eye models: Feature-Based Shape Methods
Local Features by Filter Responses. Example: Sirohey and Rosenfeld (continued).
//eye models
Appearance-based methods rely on models built directly on the appearance of the eye region: template matching by constructing an image patch model and performing eye detection through model matching with a similarity measure; intensity-based methods; subspace-based methods.

18 //eye models
Appearance-based methods, intensity-based. Example (Grauman et al.): during the first stage of processing, the eyes are automatically located by searching temporally for "blink-like" motion.
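A hedged sketch of the blink-search stage via frame differencing follows; the actual Grauman et al. system adds morphological cleanup and a pairing test requiring two similar blobs at roughly the same height. Thresholds and names here are illustrative.

```python
# Sketch: find "blink-like" motion blobs between two grayscale frames.
import numpy as np
from scipy.ndimage import label

def blink_candidates(prev_frame, curr_frame, thresh=25, min_area=20):
    """Return centroids of compact motion blobs (candidate eyes)."""
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    mask = diff > thresh
    labels, n = label(mask)
    centroids = []
    for k in range(1, n + 1):
        ys, xs = np.nonzero(labels == k)
        if len(ys) >= min_area:  # ignore tiny noise blobs
            centroids.append((ys.mean(), xs.mean()))
    # A real blink detector would keep only pairs of similar blobs
    # lying on roughly the same horizontal line.
    return centroids
```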

19 //eye models
Appearance-based methods, intensity-based. Example: Grauman et al. (continued).

20 //eye models
Appearance-based methods: subspace methods (eigeneyes).

21 //eye models
Appearance-based methods: subspace methods (eigeneyes). How can we find an efficient representation of such a data set? Rather than storing every image, we might try to represent the images more effectively, e.g., in a lower-dimensional subspace. We seek a linear basis with which each image in the ensemble is approximated as a linear combination of basis images; let's select the basis to minimize the squared reconstruction error.

22 //eye models
Appearance-based methods: subspace methods (eigeneyes). The eigenvectors of the sample covariance matrix of the image data provide the major axes of this subspace.
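A compact eigeneyes sketch, assuming vectorized eye patches as rows of a matrix: the SVD of the centered data yields the principal axes directly, and a candidate patch is scored by its reconstruction error in the learned subspace. Data shapes, the choice of k, and the detection threshold are assumptions.

```python
# Eigeneyes sketch: PCA basis + reconstruction-error scoring.
import numpy as np

def train_eigeneyes(patches, k=10):
    """patches: (N, h*w) matrix of vectorized training eye images."""
    mean = patches.mean(axis=0)
    X = patches - mean
    # SVD of the centered data gives the principal axes directly
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return mean, Vt[:k]            # top-k eigeneyes (basis images)

def reconstruction_error(patch, mean, basis):
    x = patch - mean
    coeffs = basis @ x             # project onto the eigeneye subspace
    x_hat = basis.T @ coeffs       # reconstruct from k coefficients
    return np.linalg.norm(x - x_hat)  # small error => eye-like patch
```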

23 //in summary...
Shape-based methods: a prior model of eye shape and surrounding structures (fixed shape or deformable shape).
Appearance-based methods: models built directly on the appearance of the eye region (template matching via an image patch model and a similarity measure; intensity-based methods; subspace-based methods).
Hybrid methods: combine feature, shape, and appearance approaches to exploit their respective benefits.
Other methods: eye trackers with active (IR) light; we have already considered these.

24 Gaze estimation
Gaze: the gaze direction, or the point of regard (PoR, or fixation). Gaze modeling consequently focuses on the relations between the image data and the point of regard/gaze direction.
//some general problems
1. camera calibration: determining intrinsic camera parameters;
2. geometric calibration: determining the relative locations and orientations of the different units in the setup, such as camera, light sources, and monitor;
3. personal calibration: estimating cornea curvature and the angular offset between visual and optical axes;
4. gaze-mapping calibration: determining the parameters of the eye-gaze mapping functions.

25 Gaze estimation //methods
IR light and feature extraction: 2D regression-based gaze estimation; 3D model-based gaze estimation.
Appearance-based methods: similarly to the appearance models of the eyes, appearance-based models for gaze estimation do not explicitly extract features, but rather use the image contents as input with the intention of mapping these directly to screen coordinates (PoR). They do not require calibration of cameras and geometry data, since the mapping is made directly on the image contents.
Natural light methods: natural light approaches face several new challenges, such as illumination changes in the visible spectrum and lower-contrast images, but they are not as sensitive to IR light in the environment and may thus be better suited for outdoor use.
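The 2D regression idea can be illustrated with the typical quadratic mapping from the pupil-glint vector to screen coordinates, with coefficients fitted from a handful of calibration points. The feature choice and polynomial order are common practice, not taken from a specific system.

```python
# Sketch: quadratic 2D regression from pupil-glint vectors to screen.
import numpy as np

def design_matrix(p):
    """Quadratic terms of the pupil-glint vectors p, shape (N, 2)."""
    px, py = p[:, 0], p[:, 1]
    return np.column_stack([np.ones_like(px), px, py,
                            px * py, px**2, py**2])

def calibrate(pupil_glint, screen_xy):
    """Least-squares fit: one coefficient vector per screen axis."""
    A = design_matrix(pupil_glint)
    coeffs, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)
    return coeffs                    # shape (6, 2)

def estimate_gaze(pupil_glint, coeffs):
    return design_matrix(pupil_glint) @ coeffs   # (N, 2) screen points
```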

26 Gaze estimation //methods
Appearance-based methods. Example (K.-H. Tan, D.J. Kriegman, and N. Ahuja): the appearance manifold model. Treat an image as a point in a high-dimensional space: a 20x20-pixel intensity image can be considered a 400-component vector, i.e., a point in a 400-dimensional space (the appearance manifold). Each manifold point s is an image of an eye, labeled with the 2D coordinates of a point on a display; the gaze mapping is obtained by manifold learning over these labeled samples.
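A sketch of the manifold idea: a new eye image is mapped to the screen by interpolating the labels of its nearest training neighbors. The inverse-distance weights below stand in for the barycentric weights that the original work derives from a local linear reconstruction; names and k are illustrative.

```python
# Sketch: nearest-neighbor interpolation on a labeled appearance manifold.
import numpy as np

def manifold_gaze(query, train_imgs, train_points, k=5):
    """train_imgs: (N, d) image vectors; train_points: (N, 2) labels."""
    dists = np.linalg.norm(train_imgs - query, axis=1)
    idx = np.argsort(dists)[:k]
    w = 1.0 / (dists[idx] + 1e-9)    # inverse-distance weights
    w /= w.sum()
    return w @ train_points[idx]     # weighted 2D point of regard
```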

27 Gaze estimation //methods
Appearance-based methods. Example: Tan, Kriegman, and Ahuja (continued).
Example (Williams, Blake & Cipolla): mapping images to continuous output spaces using powerful Bayesian learning techniques.

28 Gaze estimation //methods
Appearance-based methods. Example (Williams, Blake & Cipolla): mapping images to continuous output spaces using powerful Bayesian learning techniques; calibration provides the labeled examples. Rather than using raw pixel data, input images are processed to obtain different types of features. To infer the input-output mapping for unseen inputs in real time, a sparse regression model (Gaussian processes) is used. The method is fully Bayesian: output predictions come with a measure of uncertainty, and during the learning phase all unknown modeling parameters are inferred from data as part of the Bayesian framework, so the dynamics need not be known a priori.
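A hedged sketch of the Bayesian mapping idea, with one Gaussian process regressor per screen axis so that each prediction carries an uncertainty estimate. The Williams-Blake-Cipolla system uses a sparse, semi-supervised GP, so this dense-GP stand-in (via scikit-learn) only illustrates the principle; all names are assumptions.

```python
# Sketch: GP regression from eye descriptors to screen coordinates.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_gaze_gp(features, screen_xy):
    """features: (N, d) eye descriptors; screen_xy: (N, 2) targets."""
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
    gps = []
    for axis in range(2):  # one GP for x, one for y
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(features, screen_xy[:, axis])
        gps.append(gp)
    return gps

def predict_gaze(gps, features):
    """Per-axis means and standard deviations (the uncertainty)."""
    out = [gp.predict(features, return_std=True) for gp in gps]
    means = np.column_stack([m for m, _ in out])
    stds = np.column_stack([s for _, s in out])
    return means, stds
```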

29 Gaze estimation //methods
Appearance-based methods. Example (Williams, Blake & Cipolla): the approach can be applied to other contexts.

30 Gaze estimation //using other cues
Gaze estimation //head-tracking
The Watson head tracker is a real-time object tracker that uses range and appearance information from a stereo camera to recover the 3D rotation and translation of objects, or of the camera itself. The system can be connected to a face detector and used as an accurate head tracker; additional supporting algorithms can improve its accuracy. (Software is available for download.)

31 The Watson head tracker //head pointing

32 The Watson head tracker //Interactive Kiosk
Shared attention: can shared attention arise through gaze interactions?

33 Shared attention //Developmental timeline
Mutual gaze; gaze following.

34 Shared attention
Imperative pointing; declarative pointing (creates shared attention).
Shared attention //Open questions

35 Shared attention //Models (B. Scassellati, MIT)

36 Shared attention //Robots that Learn to Converse

37 Shared attention //Robots that Learn to Converse (continued)
