The NAO Robot, a case study Robotics Franchi Alessio Mauro


The NAO Robot, a case study Robotics 2013-2014 Franchi Alessio Mauro alessiomauro.franchi@polimi.it

Who am I? Franchi Alessio Mauro Master's Degree in Computer Science Engineering at Politecnico di Milano First-year Ph.D. student, collaborating with Prof. Gini Research areas: cognitive robotics; symbolic reasoning; grasping If you need to contact me: mail: alessiomauro.franchi@polimi.it office: AirLab DEIB, building 20, ground floor telephone: 3565

This lesson Two sections Computer vision - OpenCV (~60 mins): loading/reading images; image manipulation; image filtering; video streaming Introducing the NAO robot (~20 mins): structure; sensors; capabilities

Computer Vision/OpenCV What is OpenCV (Open Source Computer Vision)? A set of libraries focused on real-time image processing. A project started by Intel Russia in 1998, now maintained by Itseez and Willow Garage. 2500 optimized algorithms. Full-featured CUDA and OpenCL interfaces are being actively developed right now. http://opencv.org/ Built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products.

Computer Vision/OpenCV Some applications: detect and recognize faces; identify objects; classify human actions in videos; track camera movements and moving objects; extract 3D models of objects and produce 3D point clouds; recognize scenery and establish markers to overlay it with augmented reality; etc.

Augmented Reality/OpenCV The technology works by enhancing one's current perception of reality: a live, direct or indirect view of a physical real-world environment whose elements are augmented/extended.

Let's start with OpenCV How to install OpenCV on Windows + Visual Studio 2012 1. Go to http://opencv.org/ 2. Download the self-extracting archive or the binary files to be compiled 3. Set the environment variables a) Open a command prompt and type setx -m OPENCV_DIR D:\OpenCV\Build\x86\vc11 b) Right click on Computer -> Properties -> Advanced system settings -> Advanced tab -> Environment Variables -> System variables c) Add %OPENCV_DIR%\bin to PATH d) Reboot the system

Let's start with OpenCV Now create a new Visual Studio project 1. Open Visual Studio 2012 2. Select File -> New -> Project 3. Once created, right click on the project name -> Properties 4. Configuration Properties -> C/C++ -> General: add $(OPENCV_DIR)\..\..\include to Additional Include Directories 5. Linker -> General: add $(OPENCV_DIR)\lib to Additional Library Directories 6. Linker -> Input: add the following list to Additional Dependencies: opencv_core249d.lib;opencv_imgproc249d.lib;opencv_highgui249d.lib;opencv_ml249d.lib;opencv_video249d.lib;opencv_features2d249d.lib;opencv_calib3d249d.lib;opencv_objdetect249d.lib;opencv_contrib249d.lib;opencv_legacy249d.lib;opencv_flann249d.lib

Let's start with OpenCV The «Hello world» example

Some theory How images are composed Black and white Any digital image is a matrix of pixels. Each pixel has a value between 0 (black) and a maximum that depends on the number of bits of the image (8 bits = 255 = white). The image depth refers to the bits allocated for each pixel. As the value of a pixel increases, so does its intensity. Example Image depth: 8 bits 1 channel, grayscale image Height is 4 pixels, width is 5 pixels Resolution = 4x5.
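The idea can be sketched in a few lines of plain Python (illustrative only, not OpenCV): a grayscale image is just a matrix of pixel values, and the image depth fixes the maximum value.

```python
# Illustrative sketch (plain Python, no OpenCV): a grayscale image is a
# matrix of pixel values; the image depth fixes the maximum value.
bit_depth = 8
max_value = 2 ** bit_depth - 1  # 8 bits -> 255 = white, 0 = black

# A 1-channel image with height 4 and width 5 (resolution 4x5),
# a ramp from black (0) to white (255) along each row.
image = [
    [0, 64, 128, 192, 255],
    [0, 64, 128, 192, 255],
    [0, 64, 128, 192, 255],
    [0, 64, 128, 192, 255],
]
height, width = len(image), len(image[0])
print(max_value, height, width)  # 255 4 5
```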

Some theory How images are composed Color A color image is composed of 3/4 planes: Red, Green, Blue and Alpha (optional). Each pixel's final color is the combination of the corresponding pixel values in the three/four planes: (255, 0, 0) = red; (0, 255, 0) = green; (255, 0, 255) = violet.
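As a quick illustration of channel combination (plain Python; the 24-bit packing scheme is a common convention, not an OpenCV API), three 8-bit channels can be combined into one value and split back:

```python
# Illustrative sketch: a color pixel as (R, G, B) channel values, packed
# into a single 24-bit integer as many image formats do.
def pack_rgb(r, g, b):
    return (r << 16) | (g << 8) | b

def unpack_rgb(v):
    return ((v >> 16) & 255, (v >> 8) & 255, v & 255)

print(unpack_rgb(pack_rgb(255, 0, 255)))  # (255, 0, 255) -> violet
```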

Some theory Introducing the concept of feature An "interesting" part of an image. Human beings are naturally able to find these «features»; it is something innate. As a consequence, we are able to tell that two images are similar, to compose images, or to solve puzzles. A simple game: can you tell me the exact location of each patch?


Let's go on with OpenCV Dilation The value of the output pixel is the maximum value of all the pixels in the input pixel's neighborhood; pixels beyond the image border are assigned the minimum. Erosion The value of the output pixel is the minimum value of all the pixels in the input pixel's neighborhood; pixels beyond the image border are assigned the maximum.
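The two operations can be sketched in plain Python (an illustrative re-implementation with a 3x3 neighborhood, not OpenCV's cv::dilate/cv::erode), using exactly the border rule described above:

```python
def morph(img, op, pad):
    """Apply `op` (max or min) over each pixel's 3x3 neighborhood;
    out-of-border pixels take the value `pad`."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neigh = [img[y + dy][x + dx]
                     if 0 <= y + dy < h and 0 <= x + dx < w else pad
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = op(neigh)
    return out

def dilate(img):  # border pixels count as the minimum (0)
    return morph(img, max, 0)

def erode(img):   # border pixels count as the maximum (255)
    return morph(img, min, 255)

# A single bright pixel grows under dilation and vanishes under erosion.
img = [[0, 0, 0], [0, 255, 0], [0, 0, 0]]
print(dilate(img))  # every 3x3 neighborhood contains the bright pixel
print(erode(img))   # every 3x3 neighborhood contains a 0 pixel
```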

Let's go on with OpenCV Canny edge detector 1. Filter out any noise; the Gaussian filter is used for this purpose 2. Four filters are used to detect horizontal, vertical and diagonal edges in the blurred image 3. Search for local maxima in the gradient map with non-maximum suppression 4. Thresholding with hysteresis: lower than the low threshold: the pixel is not considered; higher than the high threshold: the pixel is on an edge; between the two thresholds: the pixel is considered only if it is near an already considered pixel.
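Step 4 is the least obvious, so here is a compact sketch of hysteresis thresholding in plain Python (illustrative, not OpenCV's cv::Canny; the gradient map and the two thresholds are assumed given):

```python
def hysteresis(grad, low, high):
    """Keep pixels above `high` as edges; pixels between `low` and `high`
    survive only if connected (8-neighborhood) to an already-kept pixel."""
    h, w = len(grad), len(grad[0])
    edge = [[grad[y][x] > high for x in range(w)] for y in range(h)]
    stack = [(y, x) for y in range(h) for x in range(w) if edge[y][x]]
    while stack:  # grow strong edges into between-threshold neighbors
        y, x = stack.pop()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w and not edge[ny][nx]
                        and grad[ny][nx] >= low):
                    edge[ny][nx] = True
                    stack.append((ny, nx))
    return edge

# 120 is a strong edge; 60 is kept because it touches it; 10 is discarded.
print(hysteresis([[120, 60, 10]], low=50, high=100))  # [[True, True, False]]
```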

Let's go on with OpenCV Harris corner detector 1. Corners represent a variation of the gradient in the image, and we look for this variation: sweep a window in both directions and compute the variation of intensity 2. Maximize the equation above to find windows with a large variation 3. A window with a score R greater than a certain value is considered a corner
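The equation referenced on the slide did not survive the transcription; in the standard Harris formulation the score is R = det(M) - k * trace(M)^2, where M is the 2x2 matrix of gradient products summed over the window and k is typically 0.04-0.06. A sketch in plain Python (illustrative; the window sums Ixx, Ixy, Iyy are assumed precomputed):

```python
def harris_response(Ixx, Ixy, Iyy, k=0.04):
    """R = det(M) - k * trace(M)^2 for M = [[Ixx, Ixy], [Ixy, Iyy]].
    Large positive R -> corner; large negative R -> edge; small |R| -> flat."""
    det = Ixx * Iyy - Ixy * Ixy
    trace = Ixx + Iyy
    return det - k * trace * trace

# Gradients strong in both directions (corner-like window):
print(harris_response(100.0, 0.0, 100.0))  # 10000 - 0.04 * 200**2 = 8400.0
# Gradient strong in one direction only (edge-like window):
print(harris_response(200.0, 0.0, 0.0))    # 0 - 0.04 * 200**2 = -1600.0
```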

Let's go on with OpenCV The Hough transform A technique for detecting significant configurations of points in the image: lines, segments, curves, ... The shapes can be expressed by a function in a new parameter space

Let's go on with OpenCV The Hough transform How are points transformed? A point is the intersection of two or more lines, so a point becomes a curve in the parameter space

Let's go on with OpenCV The Hough transform How are lines transformed? The curves corresponding to the three points on a line intersect in a single point in the parameter space; this point is the image of the line.
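The point-to-curve mapping described above can be sketched with a toy accumulator in plain Python (illustrative; OpenCV's cv::HoughLines uses the same (theta, rho) parameterization rho = x*cos(theta) + y*sin(theta)):

```python
import math

def hough_votes(points, n_theta=180):
    """Each point (x, y) votes for every line rho = x*cos(t) + y*sin(t)
    it could lie on, tracing a curve in (theta, rho) space."""
    votes = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            votes[(t, rho)] = votes.get((t, rho), 0) + 1
    return votes

# Three collinear points on the line y = x: their three curves intersect
# in one accumulator cell, which therefore collects all three votes.
votes = hough_votes([(0, 0), (1, 1), (2, 2)])
best_cell, count = max(votes.items(), key=lambda kv: kv[1])
print(count)  # 3
```

The cell with the most votes is the image of the line; scanning the accumulator for such peaks is how the transform detects lines among arbitrary edge points.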

The NAO robot NAO is a programmable, 58 cm tall humanoid robot with: 25 degrees of freedom (DOF); 2 cameras, 4 directional microphones, sonar, 2 IR receivers/emitters, 1 inertial board, 9 tactile sensors and 8 pressure sensors; a voice synthesizer, LED lights, and 2 speakers

Links and axes Link lengths Axis definition The X axis is positive toward NAO's front, the Y axis runs from right to left, and the Z axis is vertical.

Joints

Sensors Sensor network: 2 cameras, 4 directional microphones, a sonar rangefinder, 2 IR emitters and receivers, 1 inertial board, 9 tactile sensors, 8 pressure sensors, a voice synthesizer, LED lights, 2 speakers

Some examples Some basic movements: standing up; sitting down; walking and some more complex: dancing; tracking objects; climbing stairs; RoboCup. Videos!