IN5520 Digital Image Analysis. Two old exams. Practical information for any written exam Exam 4300/9305, Fritz Albregtsen

IN5520 Digital Image Analysis: Two old exams. Practical information for any written exam. Exam 4300/9305, 2016. Exam 4300/9305, 2017. Fritz Albregtsen, 27.11.2018. F13 27.11.18 IN 5520 1

Practical information for exam - I
Only allowed aid: a simple calculator.
Read the entire exercise text before you start solving the exercises, and check that the exam paper is complete. If you think that some information is missing from the exam text, you may make your own assumptions, as long as they are in the spirit of the exercise. In that case, you should make it clear what assumptions you have made.

Practical exam information - II
You should spend your 4 hours in such a manner that you get to answer all exercises, at least briefly. If you get stuck on one question, move on to the next question. Your answers should be short; typically a few sentences and/or a sketch may be sufficient. Never give only a numerical answer, e.g. "1/30 mm", but give your reasoning, e.g.: "The sampling theorem, f_S > 2 f_max, where f_S is the sampling rate, must be satisfied in order to avoid aliasing. With sampling period T_S = 1/f_S, this gives T_S < 1/(2 f_max) = 1/30 mm."
Use the number of pages that you need! Anything else? Questions?

Note on solutions on the web
We have put several previous exams on the web, including solution hints. The latter are hints and sketches, not complete solutions that would give you an A!

2016: Some statistics
[Figures: mean ± standard deviation of the score for each sub-exercise (Ex. 1a-e, Ex. 2a-h); normalized score per candidate; scatter plot of the two examiners' scores (SENSOR 1 vs. SENSOR 2); histogram of INF4300 ECTS letter grades, F-A.]

2016: Exercise 1, a-c

2016: Exercise 1, d
Answer: For an ellipse oriented parallel to the axis system, the quantities a, b, c and d in the table are all zero. This implies that Hu's moments 3 to 7 are zero, so only Hu's first and second moments are useful. And since the moments are rotation invariant, this also holds for all other orientations.
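The claim can be checked numerically. The sketch below (my own illustration, not part of the exam solution; assumes NumPy) rasterizes an axis-aligned binary ellipse and computes Hu's seven invariants from the normalized central moments η_pq; invariants 3 to 7 come out as zero, while the first two do not:

```python
import numpy as np

def hu_moments(mask):
    """Hu's seven moment invariants from a binary object mask."""
    ys, xs = np.nonzero(mask)
    x = xs - xs.mean()
    y = ys - ys.mean()
    m00 = float(len(xs))
    def eta(p, q):
        # Normalized central moment: eta_pq = mu_pq / mu00^(1+(p+q)/2)
        return np.sum(x**p * y**q) / m00**(1 + (p + q) / 2)
    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return [
        n20 + n02,
        (n20 - n02)**2 + 4*n11**2,
        (n30 - 3*n12)**2 + (3*n21 - n03)**2,
        (n30 + n12)**2 + (n21 + n03)**2,
        (n30 - 3*n12)*(n30 + n12)*((n30 + n12)**2 - 3*(n21 + n03)**2)
        + (3*n21 - n03)*(n21 + n03)*(3*(n30 + n12)**2 - (n21 + n03)**2),
        (n20 - n02)*((n30 + n12)**2 - (n21 + n03)**2)
        + 4*n11*(n30 + n12)*(n21 + n03),
        (3*n21 - n03)*(n30 + n12)*((n30 + n12)**2 - 3*(n21 + n03)**2)
        - (n30 - 3*n12)*(n21 + n03)*(3*(n30 + n12)**2 - (n21 + n03)**2),
    ]

# Axis-aligned binary ellipse with semi-axes 40 and 20
yy, xx = np.mgrid[-50:51, -50:51]
ellipse = (xx / 40.0)**2 + (yy / 20.0)**2 <= 1.0
phi = hu_moments(ellipse)
```

Because the rasterized ellipse is symmetric about both axes, all odd central moments and n11 vanish, which is exactly why invariants 3 to 7 are zero.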

2016: Exercise 1, e
e) What may cause zero entries in the table above to show up as non-zero in digital images?
Answer: Noise, caused by the discrete sampling and by the quantization.

2016: Exercise 2, a-c
Absolute code: start at the start point, and keep the coding compass fixed.
Relative code: rotate the coding compass to fit the step from the previous point to the next pixel. Code from the next pixel onwards, and put in the last absolute code as prefix.
[Figure: 8-direction code compass, directions 0-7.]
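As an illustration of the relative code, here is a minimal sketch under the common convention that each relative code is the difference of successive absolute 8-direction codes modulo 8, with the first absolute code kept as prefix (the function name and example are my own):

```python
def relative_chain_code(absolute):
    """Relative 8-direction chain code: the first absolute code is
    kept as prefix; each later code is the change of direction
    relative to the previous step, modulo 8."""
    rel = [absolute[0]]
    for prev, cur in zip(absolute, absolute[1:]):
        rel.append((cur - prev) % 8)
    return rel

# Absolute code of a path going right, right, up, up: 0, 0, 2, 2
code = relative_chain_code([0, 0, 2, 2])
```

The relative code [0, 0, 2, 0] reads as "go straight, turn left 90°, go straight", which is rotation invariant apart from the prefix.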

2016: Exercise 2, d

2016: Exercise 2, e

2016: 2f (for the PhDs)

2016: 2g (for the PhDs)

2016: 2h (for the PhDs)

2016: Ex 3 & 4: Anne

2017: Some statistics
[Figures: mean ± standard deviation of the score for each sub-exercise (Ex. 1a-f); histogram of INF4300 ECTS letter grades, F-A.]

2017: 1a - c
Given a concave object outlined by its one pixel wide perimeter in a binary image.
a) What do we mean by projections of such an object?
Answer: Most often this means the 1D horizontal and vertical projections of the object pixels onto the (x,y) axes. It may also mean the projection onto a given axis.
b) What do we mean by the signature of the object perimeter, and what is the relation between signature and projection?
Answer: The signature is a 1D functional representation of the 2D boundary of an object. It can be represented in several ways; a simple choice is radius vs. angle, using the center of mass as origin. Thus, a radial projection with reference to the centroid gives a signature.
c) Describe and sketch the convex hull of such a concave object, and list some object features based on the convex deficiency.
Answer: The convex hull (CH) is the smallest convex set containing the object. Features: solidity/convexity = (object area)/(CH area); the number and the distribution of the areas of the convex deficiency components.
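The radius-vs-angle signature in b) can be sketched in a few lines (my own illustration, assuming NumPy; the boundary is given as a list of (x, y) points):

```python
import numpy as np

def radial_signature(boundary):
    """Radius-vs-angle signature of a boundary, centroid as origin.

    boundary: (N, 2) array of (x, y) perimeter points.
    Returns (theta, r), sorted by angle."""
    d = boundary - boundary.mean(axis=0)
    theta = np.arctan2(d[:, 1], d[:, 0])
    r = np.hypot(d[:, 0], d[:, 1])
    order = np.argsort(theta)
    return theta[order], r[order]

# A circular boundary gives a flat (constant) signature;
# a square would give four periodic peaks.
t = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
circle = np.column_stack((10.0 * np.cos(t), 10.0 * np.sin(t)))
theta, r = radial_signature(circle)
```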

2017: 1d
Describe an algorithm to obtain a simplifying polygon from the given object, so that the resulting vertexes are a subset of the original perimeter pixels.
Answer: Draw a line between the initial breakpoints: the two boundary points farthest apart.
1. For each intermediate point, compute its distance to the line.
2. If the greatest distance is larger than a given threshold, that point becomes a new breakpoint.
3. The previous line segment is replaced by two line segments; repeat steps 1-2 for each of them.
The procedure is repeated until all contour points are within a distance T from a corresponding line segment. The resulting ordered set of breakpoints is then the set of vertices of a polygon approximating the original contour, and this set of breakpoints is a subset of the set of boundary points.
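This splitting scheme is essentially the Ramer-Douglas-Peucker algorithm. A minimal recursive sketch (my own, assuming NumPy, and assuming the two endpoints of the processed segment are distinct; for a closed contour one would first pick the two farthest-apart boundary points, as stated above):

```python
import numpy as np

def split_polygon(points, T):
    """Top-down splitting: if the point farthest from the chord
    between the endpoints lies more than T away, make it a new
    breakpoint and recurse on both halves."""
    pts = np.asarray(points, dtype=float)
    a, b = pts[0], pts[-1]
    if len(pts) < 3:
        return [tuple(a), tuple(b)]
    # Perpendicular distance of each point to the chord a-b
    ab = b - a
    norm = np.hypot(ab[0], ab[1])
    d = np.abs(ab[0] * (pts[:, 1] - a[1]) - ab[1] * (pts[:, 0] - a[0])) / norm
    k = int(np.argmax(d))
    if d[k] <= T:
        return [tuple(a), tuple(b)]
    left = split_polygon(pts[:k + 1], T)
    right = split_polygon(pts[k:], T)
    return left[:-1] + right      # drop the duplicated breakpoint

# An L-shaped contour reduces to its three corner vertices
contour = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0), (4, 1), (4, 2), (4, 3)]
vertices = split_polygon(contour, T=0.5)
```

Every returned vertex is one of the input points, as the exercise requires.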

2017: 1e
Assume that you have a large set of binary images like the one produced in d), each containing only one object. The objects are all polygons, some concave, some convex. The number of polygon edges is unknown, but may vary. Explain the steps to obtain the Hough transform of each of these images, using the normal representation of a line.
Answer: Textbook stuff. Define a 2D accumulator matrix, and determine the resolution in r and θ. Find the orientation of each edge pixel (using e.g. the Sobel operator). Run through all edge pixels and, for each, accumulate its distance from the origin (r) and its angle to the x-axis (θ).
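The accumulation step can be sketched as follows (my own illustration, assuming NumPy; for brevity every θ cell is accumulated for every edge pixel, rather than restricting θ to the gradient orientation as the answer suggests):

```python
import numpy as np

def hough_accumulate(binary, n_theta=180, n_rho=200):
    """Accumulator for the normal line representation
    rho = x*cos(theta) + y*sin(theta)."""
    ys, xs = np.nonzero(binary)
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta, endpoint=False)
    diag = np.hypot(*binary.shape)
    rho_bins = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in zip(xs, ys):
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        # Vote in the (rho, theta) cell for every theta column
        for t, r in enumerate(np.digitize(rho, rho_bins) - 1):
            acc[r, t] += 1
    return acc, rho_bins, thetas

# 40 collinear pixels on the row y = 20 give one dominant peak
img = np.zeros((50, 50), dtype=bool)
img[20, 5:45] = True
acc, rho_bins, thetas = hough_accumulate(img)
r, t = np.unravel_index(int(np.argmax(acc)), acc.shape)
```

The peak height equals the number of collinear pixels, and its θ index corresponds to the line's normal direction.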

2017: 1f (PhD)
Based on the information given in e) above, please explain how a number of projections less than or equal to the number of peaks in the Hough domain can easily distinguish concave from convex objects. Why?
Answer: Why "less than or equal to": for convex polygons, the number of peaks in the HT equals the number of sides in the polygon, whereas for concave polygons, co-linear line segments may imply that the number of peaks in the HT is less than the number of sides. Besides, you may detect a concavity before having tested all HT peaks.
For each peak in the HT, reconstruct the corresponding line in the image, and project the object onto an axis that is orthogonal to the line. For a convex polygon, the projection will always fall on one side of the line, whereas for a concave polygon some projections (at least two of them) will fall on both sides of the line, and the test may be terminated before all HT peaks have been tested.

2017: 2a
Assume that you have a binary image containing both large objects and small noise elements. The noise elements vary in shape, but are much smaller than the real objects. The latter have small details that are important for the subsequent image analysis. What filters would you consider for removal of the noise elements, and what are the pros and cons of these filters?
Answer: A simple lowpass filter of sufficient size would remove at least the smallest noise elements, but would not preserve the detailed shape of the bigger objects. A median filter of sufficient size would also remove the smallest noise elements, but would not preserve details on the perimeter of the bigger objects. Adaptive lowpass and median filters would be somewhat better.

2017: 2b
The basic morphological operators erosion and dilation, and the simplest combinations of these, are often used for this task. How would you use them, and what are the pros and cons?
Answer: Erosion with a suitable structuring element would remove the smallest noise elements, but would also affect the size of the bigger objects, and would probably also remove the fine details on the objects. Dilation of the result of the erosion (the combination is known as opening) would regain the size of the objects, but not the details that were lost in the erosion.
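The opening described above can be sketched in plain NumPy (my own illustration; the 3×3 square structuring element and the test image are assumptions, not from the exam):

```python
import numpy as np

def erode(img, k=3):
    """Binary erosion with a k x k square structuring element."""
    p = k // 2
    padded = np.pad(img, p, constant_values=False)
    out = np.ones_like(img, dtype=bool)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def dilate(img, k=3):
    """Binary dilation with a k x k square structuring element."""
    p = k // 2
    padded = np.pad(img, p, constant_values=False)
    out = np.zeros_like(img, dtype=bool)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

img = np.zeros((20, 20), dtype=bool)
img[4:14, 4:14] = True   # large square object
img[17, 17] = True       # single-pixel noise element
opened = dilate(erode(img))   # opening = erosion followed by dilation
```

The single-pixel noise disappears, and the large square is restored to its original 10×10 extent because it is larger than the structuring element; a finely detailed perimeter, however, would not be restored, which is the con mentioned above.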

2017: 2c
Please give one more advanced combination of morphological operators for this task, and discuss the pros and cons.
Answer: The top-hat transform detects light objects on a dark background (also called the white top-hat). It is given by the image minus its opening, for a given structuring element. Pro: it will handle all noise element shapes. Con: it needs a large structuring element.
The hit-or-miss operator can do a template matching of specific small elements, so that elements of a given size, shape and orientation can be removed without affecting any other elements and objects. The con is that to remove all small elements, the process must be repeated using a list of templates.

2017: 2d (PhD's only)
It is possible to remove the noise elements completely without altering the shape of the objects. Please explain how.
Answer: This is covered by a group exercise on shape representation:
7. Compute region objects in MATLAB and create a region label image to study how the regions correspond to numbers:
b1 = bwlabel(bw);
cc = bwconncomp(b1);
ll = labelmatrix(cc);
regim = label2rgb(ll);
imshow(regim);
8. Compute simple region properties using regionprops in MATLAB. Can you use region area to remove the frame and noisy segments?
By region analysis, each object/noise element is represented by a list of pixels. One region property is the area, so we can simply remove the shortest lists, and then regenerate the image from the remaining object lists.
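The same idea (label connected components, then drop the smallest ones) can be sketched in Python without any toolbox; the function names, connectivity choice and test image below are my own illustration, not from the exercise:

```python
import numpy as np
from collections import deque

def label_regions(img):
    """4-connected component labeling by flood fill;
    returns (label image, number of regions)."""
    labels = np.zeros(img.shape, dtype=int)
    n = 0
    for sy, sx in zip(*np.nonzero(img)):
        if labels[sy, sx]:
            continue
        n += 1
        labels[sy, sx] = n
        queue = deque([(sy, sx)])
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                        and img[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = n
                    queue.append((ny, nx))
    return labels, n

def remove_small_regions(img, min_area):
    """Keep only regions of at least min_area pixels; kept objects
    are reproduced pixel for pixel, so their shape is unaltered."""
    labels, n = label_regions(img)
    out = np.zeros(img.shape, dtype=bool)
    for lab in range(1, n + 1):
        region = labels == lab
        if region.sum() >= min_area:
            out |= region
    return out

img = np.zeros((20, 20), dtype=bool)
img[2:12, 2:12] = True    # large object
img[15, 15] = True        # single-pixel noise
img[18, 3:5] = True       # two-pixel noise
cleaned = remove_small_regions(img, min_area=10)
```

Unlike filtering or morphology, this removes the noise elements completely while leaving every surviving object's shape untouched, which is exactly the point of the answer.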

2017: Ex 3 & 4: Anne

2017: 5a
The Gray Level Co-occurrence Matrix (GLCM) method is a frequently used texture analysis method. In 2D gray level images we often use isotropic GLCMs based on symmetric matrices obtained for a given pixel distance, d, in a limited number of directions θ. How do you obtain an isotropic symmetric GLCM for a given (d,θ), and how can you normalize it?
Answer: Count the number of times that pixel pairs with gray levels z_i and z_j occur in the image when moving a distance d in direction θ. This gives P(i,j | d,θ), which can be normalized by c(i,j | d,θ) = P(i,j | d,θ)/S, where S is the sum of all elements in P(i,j | d,θ). The normalized GLCM for the opposite direction, c(d,θ+π), is given by the transpose c(d,θ)^T. Adding the two gives a normalized symmetric matrix: C(d,θ) = [c(d,θ) + c(d,θ)^T]/2. A weighted sum of the four normalized symmetric matrices (θ = 0°, 45°, 90°, 135°) then gives an isotropic GLCM.
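The construction for a single direction can be sketched as follows (my own illustration, assuming NumPy; the displacement is given as (dx, dy) rather than (d, θ), and the test image is an assumption):

```python
import numpy as np

def glcm_symmetric(img, dx, dy, levels):
    """Normalized symmetric GLCM for displacement (dx, dy)."""
    P = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    # Count co-occurring gray-level pairs within the image bounds
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            P[img[y, x], img[y + dy, x + dx]] += 1
    c = P / P.sum()          # normalize: entries sum to 1
    return (c + c.T) / 2     # symmetrize: add the transpose (opposite direction)

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=int)
C = glcm_symmetric(img, dx=1, dy=0, levels=4)
```

An isotropic GLCM would then be a weighted sum of the symmetric matrices for the four displacements (1,0), (1,1), (0,1) and (-1,1).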

2017: 5b
Assume, without knowing what the image looks like, that you have obtained a normalized GLCM as described above. There are several possible formulae describing features that can be extracted from the GLCM. One of them is the entropy, E = -Σ_i Σ_j P(i,j) log(P(i,j)), where the weight function W(i,j) = -log(P(i,j)) is given in the graph. Discuss, based on the figure above, what the output value E will be (high or low) for homogeneous and inhomogeneous images, respectively.
Answer: Remember that since we are talking about a normalized GLCM, the P(i,j) values sum to 1. In inhomogeneous scenes, a lot of relatively small P(i,j) values are present in the GLCM, so we are in the left part of the figure, making the entropy value high. If all P(i,j) values are equal, the entropy will attain its maximum value of log_2(G) bits per pixel. An extremely homogeneous image will have only one gray level, with P = 1, so we are in the right-hand part of the figure, giving an entropy value of zero.
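Both extremes can be verified directly (my own sketch, assuming NumPy; the entropy here is computed over the whole G×G joint distribution, so a uniform 4×4 GLCM gives log2(16) = 4 bits):

```python
import numpy as np

def glcm_entropy(C):
    """Entropy E = -sum_ij P(i,j) * log2 P(i,j), skipping zero entries."""
    p = C[C > 0]
    return float(-np.sum(p * np.log2(p)))

G = 4
# Maximally inhomogeneous: all gray-level pairs equally likely
uniform = np.full((G, G), 1.0 / G**2)
# Extremely homogeneous: a single gray level, P(0,0) = 1
peaked = np.zeros((G, G))
peaked[0, 0] = 1.0
```

The uniform GLCM lands in the left part of the weight-function graph (many small P values, high entropy), while the peaked one lands at P = 1, where the weight, and hence the entropy, is zero.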

2017: 5c (PhD's only)
Is the feature above invariant to a linear scaling of the pixel values in the image? Please explain, and also discuss another feature that behaves oppositely!
Answer: No! We will still have a normalized GLCM, but it will be distributed over a larger/smaller range of gray levels, so the entropy will be larger/smaller, since the weight function is not linear. The ASM feature, on the other hand, is invariant to a linear scaling, since its weight function is linear. (E and ASM are discussed on p. 41 of lecture 2.)

Thank You for Your Attention!