LUMS Mine Detector Project
Visual servoing: using visual information to control a robot (Hutchinson et al. 1996). Vision may or may not be used in the feedback loop. Image-based visual features such as points, lines, and regions can be used, for example, to align a manipulator or gripper with an object.
The central question: can we move the manipulator so that the current image matches a reference image? [Diagram: the vision system extracts measurements (corners, lines, regions) from the current and reference images to drive robot movement]
Open-loop robot control: the extraction of image information and the control of the robot are two separate tasks. Once the information is extracted, a control sequence is generated and the robot moves blindly, assuming there is no change in the environment. Visual information is extracted only once.
Visual Servoing (Hill & Park 1979): dynamic look-and-move systems. Control of the robot is done in two stages: the vision system provides the input to the robot controller, which in turn uses joint feedback to internally stabilize the robot. Visual information is extracted continuously.
Visual Servoing (Hill & Park 1979): direct visual servo systems. Here the visual controller directly computes the input to the robot joints, and the robot controller is eliminated altogether.
Image-based visual servo systems: 2D image measurements are used directly, and the goal is to reduce the error between a set of current and desired image features. Position-based visual servo systems: 3D information about the scene is estimated with a known camera model, and the control task is defined in 3D world coordinates. Hybrid visual servo systems: a combination of the previous two approaches, also called 2½D visual servoing.
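The image-based approach above can be sketched numerically. A minimal numpy sketch, assuming made-up feature values and a hypothetical interaction matrix, of the classic control law v = -λ L⁺ e that drives the feature error to zero:

```python
import numpy as np

# Image-based visual servoing sketch: reduce the error e = s - s*
# between current and desired image features with the classic law
# v = -lambda * pinv(L) @ e. All numbers here (features, interaction
# matrix L) are made-up illustrative values, not project data.
s = np.array([0.12, 0.30, -0.08, 0.25])       # current features
s_star = np.array([0.10, 0.28, -0.10, 0.22])  # desired features
e = s - s_star

# Interaction matrix (image Jacobian) relating the 6-DoF camera
# velocity to the feature velocities; 4x6, hypothetical entries.
L = np.array([
    [-1.0,  0.0,  0.12,  0.04, -1.01,  0.30],
    [ 0.0, -1.0,  0.30,  1.09, -0.04, -0.12],
    [-1.0,  0.0, -0.08, -0.02, -1.01,  0.25],
    [ 0.0, -1.0,  0.25,  1.06,  0.02,  0.08],
])

lam = 0.5                           # control gain
v = -lam * np.linalg.pinv(L) @ e    # commanded camera velocity
```

Applying v for one step moves the features so that the error shrinks by roughly the factor λ, which is what makes the closed loop converge.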
Number of cameras: one (eye-in-hand or stand-alone), two (eye-in-hand or stand-alone), or more than two (redundant camera system).
The goal is to maintain a fixed distance and orientation of the arm with respect to the ground. Two main tasks: visual perception for ground profiling, and arm joint control for obtaining the desired wrist configuration.
Our configuration: binocular, stand-alone, position-based, dynamic look-and-move.
Stereo vision: custom-built rig with two Logitech C500 webcams, total cost under $100, using the OpenCV library.
Motivation: stereo is used for 3D reconstruction of a scene captured simultaneously by two cameras; depth information is not available from a single image.
Motivation: by capturing images of a scene from two viewpoints, we can calculate depth through triangulation. The depth of a point is inversely proportional to its disparity.
Camera calibration: estimate each camera's intrinsic parameters, namely the camera matrix (focal lengths and principal point offset) and the radial and tangential distortion coefficients. This is done by capturing images of a known 3D object (the calibration object) and solving the pinhole camera model equations for the required unknowns.
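The pinhole model that calibration solves can be sketched in a few lines. The camera matrix K below is an assumed example, not our rig's calibrated values:

```python
import numpy as np

# Pinhole camera model: a 3D point P in the camera frame projects to
# pixel p via p ~ K P. Calibration estimates K and the distortion
# coefficients; the K below is an assumed illustrative matrix.
K = np.array([[800.0,   0.0, 320.0],   # fx, 0,  cx
              [  0.0, 800.0, 240.0],   # 0,  fy, cy
              [  0.0,   0.0,   1.0]])

def project(P):
    """Project a 3D point in the camera frame to pixel coordinates."""
    p = K @ P
    return p[:2] / p[2]   # divide by depth (homogeneous normalisation)

u, v = project(np.array([0.1, -0.05, 2.0]))  # a point 2 m ahead
# u, v -> 360.0, 220.0
```

Calibration inverts this relationship: given many known 3D points and their observed pixels, it solves for the entries of K (and the distortion terms).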
The Calibration Process
Stereo calibration: after the individual cameras are calibrated, the stereo parameters must be estimated. These describe the relative placement of the two cameras in space and include the translation vector, the rotation matrix, the essential matrix, and the fundamental matrix. The procedure is the same as single-camera calibration. OpenCV provides routines for both single-camera and stereo calibration.
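The stereo parameters are tightly linked: the essential matrix is built directly from the rotation and translation, E = [t]× R. A small sketch, assuming an idealised rig with a pure 60 mm horizontal baseline (not our rig's measured geometry):

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix: skew(t) @ x equals np.cross(t, x)."""
    return np.array([[  0.0, -t[2],  t[1]],
                     [ t[2],   0.0, -t[0]],
                     [-t[1],  t[0],   0.0]])

# Assumed stereo geometry: cameras related by rotation R and
# translation t (here, a pure 60 mm horizontal baseline).
R = np.eye(3)
t = np.array([-0.06, 0.0, 0.0])

# Essential matrix E = [t]x R. Corresponding normalised image points
# x1, x2 then satisfy the epipolar constraint x2^T E x1 = 0; the
# fundamental matrix adds the intrinsics: F = K2^-T E K1^-1.
E = skew(t) @ R
```

The epipolar constraint encoded by E (or F, in pixel coordinates) is exactly what rectification exploits to shrink the correspondence search to a single line.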
Image rectification for faster correspondences: use the epipolar constraint to reduce the search space. We can even transform the images so that the epipolar lines are horizontal and the images are row-aligned. [Diagram: epipolar geometry]
Image rectification for faster correspondences: OpenCV provides two methods. Uncalibrated rectification: the stereo pair need not be calibrated, and the calibration parameters are estimated along with the rest of the unknowns. Calibrated rectification: the stereo pair is calibrated beforehand, which makes this method more accurate. Since our stereo calibration parameters are available beforehand, we use calibrated rectification, also known as Bouguet's method.
Some rectification results from local outdoor experiments
Finding correspondences and generating the disparity maps: the disparity can be calculated easily once the images are row-aligned. It is the difference between the horizontal image coordinates of corresponding points, d = x_L - x_R, and is inversely proportional to depth.
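The disparity-to-depth relationship is Z = f·B/d for a rectified pair with focal length f (in pixels) and baseline B. A minimal sketch, assuming illustrative values of f and B rather than our rig's measured parameters:

```python
# Depth from disparity after rectification: Z = f * B / d, where
# f is the focal length in pixels, B the baseline, and
# d = x_L - x_R the disparity. The values of f and B below are
# assumptions for illustration, not measured rig parameters.
f = 800.0   # focal length (pixels)
B = 0.06    # baseline (metres)

def depth_from_disparity(x_left, x_right):
    d = x_left - x_right    # disparity in pixels
    return f * B / d        # depth in metres

Z = depth_from_disparity(400.0, 380.0)   # d = 20 px
# Z -> 2.4 (metres)
```

Note how halving the disparity doubles the depth: far points have small disparities, which is why depth resolution degrades with distance.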
Finding correspondences and generating the disparity maps: OpenCV provides three algorithms for correspondences: block matching, semi-global block matching, and the graph-cut algorithm. Block matching performs matching through correlation, where the correlation function is a simple sum of squared differences (SSD) window. It does not find many correspondences, but it gives results in real time.
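The SSD search described above can be sketched with numpy. This is a toy single-pixel version for illustration, not OpenCV's optimised block matcher:

```python
import numpy as np

def ssd_disparity(left, right, x, y, half=2, max_disp=16):
    """Toy SSD block matcher: slide a (2*half+1)^2 window from the
    left image along the same row of the right image and return the
    shift with the smallest sum of squared differences. A simplified
    sketch of what a block-matching stereo algorithm does."""
    block = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    best_d, best_ssd = 0, np.inf
    for d in range(max_disp + 1):
        if x - d - half < 0:      # window would leave the image
            break
        cand = right[y - half:y + half + 1,
                     x - d - half:x - d + half + 1].astype(float)
        ssd = np.sum((block - cand) ** 2)
        if ssd < best_ssd:
            best_d, best_ssd = d, ssd
    return best_d

# Synthetic pair: the right view is the left view shifted 4 px left,
# so every pixel should have a disparity of 4.
rng = np.random.default_rng(0)
left = rng.integers(0, 255, (20, 40))
right = np.roll(left, -4, axis=1)
d = ssd_disparity(left, right, x=20, y=10)
# d -> 4
```

Because the search is restricted to one row, rectification (previous section) is what makes this per-pixel scan cheap enough to run in real time.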
Disparity Maps
Generating the 3D point cloud: the disparity map can be used to obtain the point cloud with the help of the extrinsic and intrinsic camera parameters derived from the calibration process.
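Per pixel, the back-projection works as sketched below. The intrinsics and baseline are assumed illustrative values; OpenCV's reprojectImageTo3D performs the equivalent computation for the whole disparity map via the Q matrix:

```python
import numpy as np

# Back-project a pixel with known disparity to a 3D point using the
# intrinsics and baseline from calibration. The values below are
# assumptions for illustration, not our rig's calibration output.
f, cx, cy = 800.0, 320.0, 240.0   # focal length, principal point (px)
B = 0.06                          # baseline (metres)

def reproject(u, v, d):
    """Pixel (u, v) with disparity d -> (X, Y, Z) in the camera frame."""
    Z = f * B / d            # depth from disparity
    X = (u - cx) * Z / f     # invert the pinhole projection in x
    Y = (v - cy) * Z / f     # ...and in y
    return np.array([X, Y, Z])

P = reproject(400.0, 300.0, 20.0)
# P -> [0.24, 0.18, 2.4]
```

Applying this to every valid disparity pixel yields the point cloud used for plane fitting in the next step.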
Plane fitting through PCA: the point cloud can now be used to calculate the normal vector of the visible terrain; this vector will eventually be used to adjust the angle of the arm. The normal is simply the singular vector of the centred point cloud with the smallest singular value.
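The PCA plane fit is a few lines of numpy. A sketch on synthetic sloped-ground data (the slope and noise level are made-up test values):

```python
import numpy as np

def terrain_normal(points):
    """Plane fitting through PCA: centre the N x 3 point cloud and
    take the right singular vector with the smallest singular value
    as the unit normal of the best-fit plane."""
    centred = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return vt[-1]   # rows of vt are sorted by decreasing singular value

# Synthetic sloped ground z = 0.1 * x with a little noise.
rng = np.random.default_rng(1)
xy = rng.uniform(-1.0, 1.0, (200, 2))
z = 0.1 * xy[:, 0] + 0.01 * rng.standard_normal(200)
n = terrain_normal(np.column_stack([xy, z]))
```

The two larger singular vectors span the plane itself; the smallest one is the direction in which the points vary least, i.e. the terrain normal the arm controller needs.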
The arm has a 2-DoF P-R (prismatic-revolute) joint configuration with sensory feedback, built on National Instruments hardware.
Lab experimental setup
Sensors and circuitry: motor drive (C-Series module), interface and power distribution, rotary encoder, linear encoder, NI sbRIO.
National Instruments Single-Board RIO (sbRIO): real-time processor, reconfigurable FPGA, analog and digital I/O, C-Series connectivity, stand-alone operation, communication interfaces, programmable with LabVIEW.
Programming environment: LabVIEW 2010 (graphical, with a real-time module and built-in parallelism). Interfacing: OpenCV code with LabVIEW, and the sbRIO with the PC. [Diagram: program structure]
Main control loop: simple on-off control. Two tasks: visual ground profiling through stereo, and joint motor control (time-critical).
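The on-off scheme can be sketched as follows. The actual loop runs in LabVIEW on the sbRIO; this Python sketch only illustrates the decision logic, and the deadband width is an assumed value:

```python
def on_off_command(error, deadband=2.0):
    """Simple on-off (bang-bang) joint control: switch the motor on
    toward the target whenever the position error leaves a small
    deadband. The deadband value here is an assumption; the real
    controller is implemented in LabVIEW on the sbRIO."""
    if error > deadband:
        return 1     # drive in the positive direction
    if error < -deadband:
        return -1    # drive in the negative direction
    return 0         # within deadband: motor off

commands = [on_off_command(e) for e in (5.0, -3.5, 0.4)]
# commands -> [1, -1, 0]
```

The deadband prevents the motor from chattering when the joint is already close to the target position.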
The speed breaker experiment
References:
Chaumette and Hutchinson (2006), Visual Servo Control, Part I: Basic Approaches
Chaumette and Hutchinson (2007), Visual Servo Control, Part II: Advanced Approaches
Kragic and Christensen (?), visual servoing survey
Bradski and Kaehler, Learning OpenCV
ni.com