Object Move Controlling in Game Implementation Using OpenCV

Professor: Dr. Ali Arya
Reported by: Farzin Farhadi-Niaki, Lindsay Coderre
Department of Systems and Computer Engineering, Carleton University, Ottawa, Canada

I. INTRODUCTION

Computer vision is a rapidly growing field, partly as a result of cheaper and more capable cameras, partly because of affordable processing power, and partly because vision algorithms are starting to mature. OpenCV itself has played a role in the growth of computer vision by enabling thousands of people to do more productive work in vision. With its focus on real-time vision, OpenCV helps students and professionals implement projects efficiently and jump-start research by providing them with a computer vision and machine learning infrastructure that was previously available only in a few mature research labs.

II. METHODOLOGY

Computer vision is the transformation of data from a still or video camera into either a decision or a new representation. All such transformations are done to achieve some particular goal. The input data may include contextual information such as "the camera is mounted in a car" or "the laser range finder indicates an object is 1 meter away". The decision might be "there is a person in this scene" or "there are 14 tumour cells on this slide". A new representation might mean turning a color image into a grayscale image or removing camera motion from an image sequence.

For robotics, we need object recognition (what) and object location (where):

a) Object recognition. OpenCV offers a wide range of approaches to detect an object, such as convolution/filters, thresholds, histograms and matching, contours, and efficient nearest-neighbour matching to recognize objects using large learned databases of objects.

b) Object location. OpenCV likewise provides various techniques for finding an object's location, e.g. background subtraction (to find the moving objects), corner finding, optical flow, mean-shift and CamShift tracking, structure from motion using SIFT descriptors and SURF gradient histogram grids, or simply finding the edge of the object and checking the location of the object's centre at each moment as it moves pixel by pixel.

A. Convolution

Convolution is the basis of many of the transformations. In the abstract, this term means something we do to every part of an image. What a particular convolution "does" is determined by the form of the convolution kernel being used. This kernel is essentially just a fixed-size array of numerical coefficients along with an anchor point in that array, which is typically located at the center. The size of the array is called the support of the kernel.

We can express this procedure in the form of an equation. If we define the image to be I(x, y), the kernel to be G(i, j) (where 0 ≤ i ≤ M_i − 1 and 0 ≤ j ≤ M_j − 1), and the anchor point to be located at (a_i, a_j) in the coordinates of the kernel, then the convolution H(x, y) is defined by the following expression:

H(x, y) = Σ_{i=0}^{M_i − 1} Σ_{j=0}^{M_j − 1} I(x + i − a_i, y + j − a_j) G(i, j)
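
As an illustration of this operation (not code from the report), the following minimal sketch applies a 3x3 box kernel to a grayscale image with cvFilter2D() from the OpenCV 1.x C API. The file name is hypothetical, and for a symmetric kernel such as this one the correlation that cvFilter2D() computes coincides with the convolution defined above.

/* Minimal sketch: convolving a grayscale image with a 3x3 averaging kernel. */
#include <cv.h>
#include <highgui.h>

int main( void )
{
    IplImage* src = cvLoadImage( "input.png", CV_LOAD_IMAGE_GRAYSCALE );
    if( !src ) return -1;

    IplImage* dst = cvCreateImage( cvGetSize( src ), IPL_DEPTH_8U, 1 );

    /* 3x3 box kernel: every coefficient is 1/9, anchor at the centre (1, 1). */
    float k[9] = { 1/9.f, 1/9.f, 1/9.f,
                   1/9.f, 1/9.f, 1/9.f,
                   1/9.f, 1/9.f, 1/9.f };
    CvMat kernel = cvMat( 3, 3, CV_32FC1, k );

    /* Slide the kernel over every pixel, summing coefficient-pixel products. */
    cvFilter2D( src, dst, &kernel, cvPoint( 1, 1 ) );

    cvNamedWindow( "convolved", CV_WINDOW_AUTOSIZE );
    cvShowImage( "convolved", dst );
    cvWaitKey( 0 );

    cvReleaseImage( &src );
    cvReleaseImage( &dst );
    return 0;
}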

B. Canny

The most significant new dimension to the Canny algorithm is that it tries to assemble the individual edge-candidate pixels into contours. These contours are formed by applying a hysteresis threshold to the pixels. This means that there are two thresholds, an upper and a lower. If a pixel has a gradient larger than the upper threshold, it is accepted as an edge pixel; if a pixel is below the lower threshold, it is rejected. If the pixel's gradient is between the thresholds, it is accepted only if it is connected to a pixel that is above the upper threshold.

void cvCanny(
    const CvArr* img,
    CvArr*       edges,
    double       lowThresh,
    double       highThresh,
    int          apertureSize = 3
);

The cvCanny() function expects an input image, which must be grayscale, and an output image, which must also be grayscale.

C. Threshold

double cvThreshold(
    CvArr* src,
    CvArr* dst,
    double threshold,
    double max_value,
    int    threshold_type
);

Frequently we have done many layers of processing steps and want either to make a final decision about the pixels in an image or to categorically reject those pixels below or above some value while keeping the others. The OpenCV function cvThreshold() accomplishes these tasks. The basic idea is that an array is given, along with a threshold, and then something happens to every element of the array depending on whether it is below or above the threshold. The cvThreshold() function handles only 8-bit or floating-point grayscale source images. The available threshold types and their behaviour are:

CV_THRESH_BINARY      dst_i = (src_i > T) ? M : 0
CV_THRESH_BINARY_INV  dst_i = (src_i > T) ? 0 : M
CV_THRESH_TRUNC       dst_i = (src_i > T) ? T : src_i
CV_THRESH_TOZERO      dst_i = (src_i > T) ? src_i : 0
CV_THRESH_TOZERO_INV  dst_i = (src_i > T) ? 0 : src_i

Fig 1. Each threshold type corresponds to a particular comparison operation between the i-th source pixel (src_i) and the threshold (denoted in the table by T). Depending on the relationship between the source pixel and the threshold, the destination pixel dst_i may be set to 0, to src_i, or to the max_value (denoted in the table by M).
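
As a concrete illustration (not code from the report), the following minimal sketch runs cvCanny() and cvThreshold() on the same grayscale image; the file name and the threshold values are arbitrary choices.

/* Minimal sketch: hysteresis edge detection and a global binary threshold. */
#include <cv.h>
#include <highgui.h>

int main( void )
{
    IplImage* gray = cvLoadImage( "input.png", CV_LOAD_IMAGE_GRAYSCALE );
    if( !gray ) return -1;

    IplImage* edges  = cvCreateImage( cvGetSize( gray ), IPL_DEPTH_8U, 1 );
    IplImage* binary = cvCreateImage( cvGetSize( gray ), IPL_DEPTH_8U, 1 );

    /* Pixels with gradient above 100 are edges, below 50 are rejected,
       and in-between pixels are kept only if connected to a strong edge. */
    cvCanny( gray, edges, 50, 100, 3 );

    /* CV_THRESH_BINARY: pixels above T = 128 become M = 255, the rest 0. */
    cvThreshold( gray, binary, 128, 255, CV_THRESH_BINARY );

    cvNamedWindow( "edges",  CV_WINDOW_AUTOSIZE );
    cvNamedWindow( "binary", CV_WINDOW_AUTOSIZE );
    cvShowImage( "edges",  edges );
    cvShowImage( "binary", binary );
    cvWaitKey( 0 );
    return 0;
}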

D. Adaptive Threshold

There is a modified threshold technique in which the threshold level is itself variable. In OpenCV, this method is implemented in the cvAdaptiveThreshold() function:

void cvAdaptiveThreshold(
    CvArr* src,
    CvArr* dst,
    double max_val,
    int    adaptive_method = CV_ADAPTIVE_THRESH_MEAN_C,
    int    threshold_type  = CV_THRESH_BINARY,
    int    block_size      = 3,
    double param1          = 5
);

cvAdaptiveThreshold() allows for two different adaptive threshold types depending on the setting of adaptive_method. In both cases the adaptive threshold T(x, y) is set on a pixel-by-pixel basis by computing a weighted average of the b-by-b region around each pixel location minus a constant, where b is given by block_size and the constant is given by param1. If the method is set to CV_ADAPTIVE_THRESH_MEAN_C, then all pixels in the area are weighted equally. If it is set to CV_ADAPTIVE_THRESH_GAUSSIAN_C, then the pixels in the region around (x, y) are weighted according to a Gaussian function of their distance from that center point.

The adaptive threshold technique is useful when there are strong illumination or reflectance gradients, so that the threshold must be set relative to the general intensity gradient. This function handles only single-channel 8-bit or floating-point images, and it requires that the source and destination images be distinct.
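
A minimal sketch (not from the report) contrasting the global and adaptive thresholds on an unevenly lit grayscale image; the file name, block size and offset are illustrative values only.

/* Minimal sketch: global threshold versus mean-based adaptive threshold. */
#include <cv.h>
#include <highgui.h>

int main( void )
{
    IplImage* gray = cvLoadImage( "page.png", CV_LOAD_IMAGE_GRAYSCALE );
    if( !gray ) return -1;

    IplImage* global_bin = cvCreateImage( cvGetSize( gray ), IPL_DEPTH_8U, 1 );
    IplImage* adapt_bin  = cvCreateImage( cvGetSize( gray ), IPL_DEPTH_8U, 1 );

    /* One threshold for the whole image. */
    cvThreshold( gray, global_bin, 128, 255, CV_THRESH_BINARY );

    /* Per-pixel threshold: mean of the 71x71 neighbourhood minus 15, so the
       decision adapts to local illumination. */
    cvAdaptiveThreshold( gray, adapt_bin, 255,
                         CV_ADAPTIVE_THRESH_MEAN_C, CV_THRESH_BINARY,
                         71, 15 );

    cvNamedWindow( "global",   CV_WINDOW_AUTOSIZE );
    cvNamedWindow( "adaptive", CV_WINDOW_AUTOSIZE );
    cvShowImage( "global",   global_bin );
    cvShowImage( "adaptive", adapt_bin );
    cvWaitKey( 0 );
    return 0;
}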

Fig 2. Binary threshold versus adaptive binary threshold: the input image (top) was turned into a binary image using a global threshold (lower left) and an adaptive threshold (lower right); raw image courtesy of Kurt Konolige.

E. Contours

Although algorithms like the Canny edge detector can be used to find the edge pixels that separate different segments in an image, they do not tell us anything about those edges as entities in themselves. The next step is to assemble those edge pixels into contours, and cvFindContours() is a convenient OpenCV function that does exactly this for us. Specifically, by assigning memory storages, OpenCV functions gain access to memory when they need to construct new objects dynamically; we then use sequences (something similar to generic container classes), which are the objects used to represent contours in general. With those concepts in hand, we can get into contour finding.

A contour is a list of points that represents a curve in an image. Contours are represented in OpenCV by sequences in which every entry encodes information about the location of the next point on the curve. The function cvFindContours() computes contours from binary images. It can take images created by cvCanny(), which have edge pixels in them, or images created by functions like cvThreshold() or cvAdaptiveThreshold(), in which the edges are implicit as boundaries between positive and negative regions.

Drawing a contour on the screen using the cvDrawContours() function is the next step. Here we create a window with an image in it, a trackbar sets a simple threshold, and the contours in the thresholded image are drawn. The image is updated whenever the trackbar is adjusted.
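
A minimal sketch of this contour step (the file name and threshold are chosen for illustration): the thresholded image is handed to cvFindContours(), and the resulting sequences are drawn with cvDrawContours().

/* Minimal sketch: threshold an image, find its contours, and draw them. */
#include <cv.h>
#include <highgui.h>

int main( void )
{
    IplImage* gray = cvLoadImage( "input.png", CV_LOAD_IMAGE_GRAYSCALE );
    if( !gray ) return -1;

    IplImage* bin = cvCreateImage( cvGetSize( gray ), IPL_DEPTH_8U, 1 );
    IplImage* out = cvCreateImage( cvGetSize( gray ), IPL_DEPTH_8U, 3 );
    cvZero( out );

    cvThreshold( gray, bin, 100, 255, CV_THRESH_BINARY );

    /* The memory storage holds the dynamically built contour sequences. */
    CvMemStorage* storage  = cvCreateMemStorage( 0 );
    CvSeq*        contours = NULL;

    /* cvFindContours() modifies its input and fills 'contours' with one
       sequence of points per connected boundary it finds. */
    cvFindContours( bin, storage, &contours, sizeof( CvContour ),
                    CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE, cvPoint( 0, 0 ) );

    /* Draw every contour: exterior boundaries in green, holes in blue. */
    cvDrawContours( out, contours, CV_RGB( 0, 255, 0 ), CV_RGB( 0, 0, 255 ),
                    100, 2, 8, cvPoint( 0, 0 ) );

    cvNamedWindow( "contours", CV_WINDOW_AUTOSIZE );
    cvShowImage( "contours", out );
    cvWaitKey( 0 );

    cvReleaseMemStorage( &storage );
    return 0;
}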

III. DISCUSSION

OpenCV, short for Open Source Computer Vision, is used in our project so that movement captured by the camera is translated into movement of objects within the game interface. A basic framework is used as the foundation for creating any simple game interface. In terms of code, three classes are created: Camera, Game and Object, which together contain all of the basic functions that such games require.

A. Camera Class

In this project, using OpenCV, we have implemented a simulation of object-movement detection (Fig. 3) that can be used for game control.

Fig 3. The camera result (including four windows).

The implementation in the Camera class proceeds through the following steps; a combined sketch is shown after the list:

1) Capturing the image as a frame from a camera.

2) Converting the frame to an 8-bit grayscale image so that it can be used by filters such as Canny.

3) Creating trackbars for the upper and lower thresholds (Fig. 4).

Fig 4. Trackbars for the upper and lower thresholds.

4) Finding the edge of the object using the Canny or Threshold (Adaptive Threshold) functions (Fig. 5).

Fig 5. Edge of the object.

5) Creating the dynamic structure and sequence for the contours, and finding the current location (x, y) of the object's centre after finding its contours.

6) Finally, transferring the object's location (x, y) to the Game class to control the object's movements on the screen.
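
The sketch below strings the six steps together as a single free function rather than the report's actual Camera class. The window name, trackbar ranges, and the use of the first contour's bounding-box centre as (u, v) are assumptions made for illustration.

/* Minimal sketch of the capture -> grayscale -> Canny -> contours -> centre
   pipeline described above (OpenCV 1.x C API). */
#include <cv.h>
#include <highgui.h>

int g_lowThresh  = 50;    /* adjusted by the trackbars (step 3) */
int g_highThresh = 150;

int main( void )
{
    CvCapture* capture = cvCaptureFromCAM( 0 );               /* step 1 */
    if( !capture ) return -1;

    cvNamedWindow( "camera", CV_WINDOW_AUTOSIZE );
    cvCreateTrackbar( "low",  "camera", &g_lowThresh,  255, NULL );
    cvCreateTrackbar( "high", "camera", &g_highThresh, 255, NULL );

    CvMemStorage* storage = cvCreateMemStorage( 0 );

    while( cvWaitKey( 10 ) != 27 )                             /* ESC quits */
    {
        IplImage* frame = cvQueryFrame( capture );
        if( !frame ) break;

        IplImage* gray  = cvCreateImage( cvGetSize( frame ), IPL_DEPTH_8U, 1 );
        IplImage* edges = cvCreateImage( cvGetSize( frame ), IPL_DEPTH_8U, 1 );

        cvCvtColor( frame, gray, CV_BGR2GRAY );                /* step 2 */
        cvCanny( gray, edges, g_lowThresh, g_highThresh, 3 );  /* step 4 */

        CvSeq* contours = NULL;                                /* step 5 */
        cvClearMemStorage( storage );
        cvFindContours( edges, storage, &contours, sizeof( CvContour ),
                        CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE,
                        cvPoint( 0, 0 ) );

        if( contours )
        {
            /* The centre of the first contour's bounding box becomes (u, v),
               which would be handed to the Game class (step 6). */
            CvRect r = cvBoundingRect( contours, 0 );
            int u = r.x + r.width  / 2;
            int v = r.y + r.height / 2;
            cvCircle( frame, cvPoint( u, v ), 4, CV_RGB( 255, 0, 0 ), -1 );
        }

        cvShowImage( "camera", frame );
        cvReleaseImage( &gray );
        cvReleaseImage( &edges );
    }

    cvReleaseMemStorage( &storage );
    cvReleaseCapture( &capture );
    return 0;
}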

B. Object Class

The Object class fundamentally contains the functions to draw a box and a circle. Both of these functions have variables to describe the object's size, position and colour. Depending on the game, other variables and functions are added, such as a velocity, if the object moves continually around the game, or detectCollision(), if the object is going to interact with other objects.

void Object::drawCircle( CvArr* dst, int centerx, int centery,
                         int radius, int b, int g, int r )
{
    cvCircle( dst, cvPoint( centerx, centery ), radius,
              cvScalar( b, g, r ), -1 );
}

C. Game Class

The Game class contains all of the game logic. It initializes all of the objects and the camera, creates the game interface window, and draws the scene. Any other game properties can also be added, such as a score or a function to continually animate an object around the screen. Two of the more important functions are the movement functions. They describe how the objects' movements are controlled by the user, whether in every direction or only vertically or horizontally. These functions use the u and v values output by the Camera class, where u and v become the x and y positions of the designated object.
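
A sketch of how such a movement function might look; the Game and Object members shown here are hypothetical stand-ins for the report's classes, and the mapping simply rescales the camera coordinates to the game window.

struct Object { int x, y; };

struct Game
{
    int camWidth, camHeight;   /* resolution of the camera frames */
    int winWidth, winHeight;   /* resolution of the game window   */

    /* Full two-axis control: (u, v) from the Camera class becomes the
       object's on-screen position. */
    void moveObject( Object& obj, int u, int v ) const
    {
        obj.x = u * winWidth  / camWidth;
        obj.y = v * winHeight / camHeight;
    }

    /* Vertical-only control, e.g. for a Pong paddle: u is ignored. */
    void movePaddle( Object& paddle, int v ) const
    {
        paddle.y = v * winHeight / camHeight;
    }
};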

IV. RESULTS

To explore the possibilities of this framework, some games were developed.

A. Pong

Pong was created first to explore the classes (Fig. 6). Two paddles are controlled by the user to hit a ball back and forth, and the game keeps score of how many times a paddle lets the ball past. Some extra functions were added; for example, the ball gains spin when it hits a paddle. Depending on where the ball hits, the y velocity of the ball increases or decreases.

void spinball( Object paddle )
{
    int   balllocation;
    float ballpercent;

    balllocation = ball.cy - paddle.y;
    ballpercent  = (float)balllocation / (float)paddle.height;
    ballpercent  = ballpercent * 10;
    ball.yvel    = (int)(ballpercent + 0.5);
}

Fig 6. Pong game result.

B. Maze

Another game created was a Maze (Fig. 7). A ball moves along a path. The background image of the path is analyzed and its colour values are stored in a matrix. Before the ball moves, it evaluates the desired destination based on the u and v values from the camera. If the value in the matrix indicates that the colour there is not black, the ball moves to the new location; this way the ball can only travel on the path. Setting the matrix:

for( int i = 0; i < bg->width; i++ )
    for( int j = 0; j < bg->height; j++ )
    {
        CvScalar pixelvalue = cvGet2D( bg, j, i );
        cvmSet( matrix, i, j, pixelvalue.val[0] );
    }

Fig 7. Maze game result.
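
The movement test itself might look like the following hypothetical sketch; the reduced Object struct and the assumption that the matrix holds the blue-channel value indexed as (row = i = x, column = j = y) follow the report's description rather than its actual code, and bounds checking is omitted.

#include <cv.h>

struct Object { int cx, cy; };   /* reduced to the ball's centre coordinates */

/* Move the ball to (u, v) only if the stored background value there is not
   black, i.e. the wanted destination is still on the path. */
void moveBallOnPath( CvMat* matrix, Object& ball, int u, int v )
{
    double value = cvmGet( matrix, u, v );   /* same (i, j) order as cvmSet */

    if( value > 0 )
    {
        ball.cx = u;
        ball.cy = v;
    }
}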

C. Third Game

There are endless possibilities for simple games using this framework. A ball could collect boxes by adding detectCollision() for each box together with a visibility Boolean that determines whether the box remains on screen after it collides with the ball (Fig. 8). A Breakout-style game could be made using an animated ball, a paddle with a horizontal spin function, and a set of boxes with the same detectCollision() and visibility-Boolean combination. The game logic for an individual game can be made into its own class, and members can be added to the Object class to make each game as efficient as possible.

Fig 8. Third game result.
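
One possible form of the detectCollision() test mentioned above is a circle-versus-box overlap check; the types and members here are hypothetical.

struct Box  { int x, y, width, height; bool visible; };
struct Ball { int cx, cy, radius; };

bool detectCollision( const Ball& ball, const Box& box )
{
    /* Clamp the ball centre to the box, then compare the remaining distance
       with the ball's radius. */
    int nearestX = ball.cx < box.x ? box.x
                 : ( ball.cx > box.x + box.width  ? box.x + box.width  : ball.cx );
    int nearestY = ball.cy < box.y ? box.y
                 : ( ball.cy > box.y + box.height ? box.y + box.height : ball.cy );

    int dx = ball.cx - nearestX;
    int dy = ball.cy - nearestY;
    return dx * dx + dy * dy <= ball.radius * ball.radius;
}

/* Collecting a box: once hit, it simply stops being drawn. */
void collect( const Ball& ball, Box& box )
{
    if( box.visible && detectCollision( ball, box ) )
        box.visible = false;
}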

V. CONCLUSION

Computer vision applications are growing rapidly, from product inspection to image and video indexing on the Web, to medical applications, and even to local navigation on Mars. OpenCV is also growing to accommodate these developments. One of the key new development areas for OpenCV is robotic perception. This effort focuses on 3D perception as well as 2D plus 3D object recognition, since the combination of data types makes for better features for use in object detection, segmentation and recognition. Robotic perception relies heavily on 3D sensing, so efforts are under way to extend camera calibration, rectification and correspondence to multiple cameras and to camera-plus-laser-rangefinder combinations.

Creating capable robots subsumes most fields of computer vision and artificial intelligence, from accurate 3D reconstruction to tracking, identifying humans, object recognition, and image stitching, and on to learning, control, planning, and decision making. Any higher-level task, such as planning, is made much easier by rapid and accurate depth perception and recognition. It is in these areas especially that OpenCV hopes to enable rapid advance by encouraging many groups to contribute and use ever better methods to solve the difficult problems of real-world perception, recognition, and learning. OpenCV will, of course, support many other areas as well, from image and movie indexing on the web to security systems and medical analysis. The wishes of the general community will heavily influence OpenCV's direction and growth.

There is a worldwide community of interactive artists who use OpenCV so that viewers can interact with their art in dynamic ways. The most commonly used routines for this application are face detection, optical flow, and tracking. The focused effort on improving object recognition will allow different modes of interacting with art, because objects can then be used as modal controls. With the ability to capture 3D meshes, it may also be possible to "import" the viewer into the art and so allow the artist to gain a better feel for recognizing user action; this, in turn, could be used to enhance dynamic interaction. The needs and desires of the artistic community for using computer vision will receive enhanced priority in OpenCV's future.

A group of manufacturers is aiming to develop cell-phone projectors, which would be perfect for robots because most cell phones are lightweight, low-energy devices whose circuits already include an embedded camera. This opens the way for close-range portable structured light and thereby accurate depth maps, which are just what we need for robot manipulation and 3D object scanning.

Computer vision has a rich future ahead, and it seems likely to be one of the key enabling technologies for the 21st century. Likewise, OpenCV seems likely to be (at least in part) one of the key enabling technologies for computer vision. Endless opportunities for creativity and profound contribution lie ahead.

REFERENCES

[1] Bradski, G. and Kaehler, A., 2008. Learning OpenCV: Computer Vision with the OpenCV Library. O'Reilly Media, Inc., Sebastopol, CA.
