Camouflage Breaking: A Review of Contemporary Techniques
Amy Whicker
University of South Carolina
Columbia, South Carolina
rossam2@cse.sc.edu

Abstract. Camouflage is an attempt to make a target "invisible" by making the target's appearance blend into the background. Camouflage-related work typically falls into either camouflage assessment and design or camouflage breaking. This paper discusses two current methods of camouflage breaking. Camouflage breaking is important for obvious military applications, for background subtraction, and for general knowledge of object extraction. The first method is multiple camouflage breaking by co-occurrence and Canny. The second is convexity-based camouflage breaking. Both methods achieve their intended results, but the convexity-based method is clearly the more robust algorithm.

1 Introduction

Camouflage is a way of making the foreground appear to be background, thereby concealing objects in plain view. The word camouflage comes from the French camoufler, meaning to blind or veil. In the late 1800s Abbott Thayer observed that animals use counter-shading as a way to camouflage themselves. This observation was the beginning of modern-day camouflage. In 1915, the French army created the first of what we now know as camouflage. Assessment and design of camouflage have been researched and developed ever since, to ensure the best camouflage available. Although assessment and design have been thoroughly researched over the years, camouflage breaking seems to have gone largely unexamined. Camouflage breaking is important for obvious military applications, for background subtraction, and for understanding the detection of non-camouflaged objects. This paper reviews two of the current techniques for camouflage breaking.

1.1 Co-occurrence and Canny

P. Nagabhushan and Nagappa U. Bhajantri developed the co-occurrence and Canny method and published their findings in [1]. This method can be broken into two parts.
The first part determines whether there is a camouflaged object within the image by calculating the gray level co-occurrence matrix of the image and comparing it with the gray level co-occurrence matrix of the background. Once it is known that there is a camouflaged object within the image, the second part of the process begins. The second part consists of repeated application of the Canny edge detection operator until effective visualization of the camouflaged objects is achieved.

1.2 Convexity-based

Ariel Tankus and Yehezkel Yeshurun developed convexity-based camouflage breaking and first published their findings in [3]. This method uses an operator (D_arg) to create an output image whose intensity level reflects the convexity of the original image. The D_arg operator is defined as the sum of Y_arg computed on the image rotated by 0, 90, 180, and 270 degrees. Y_arg is the y-derivative of the gradient argument (the polar angle of the gradient) of the original image. Y_arg detects zero-crossings of the gradient argument; thus Y_arg detects convexity, because the zero-crossings of the first derivative locate the local minima and maxima of the original function. Once the D_arg output is obtained, the image is thresholded to find the most convex points. Any object of interest should therefore be labeled by this method, whether camouflaged or not.

2 Method

The co-occurrence and Canny camouflage breaking and the convexity-based camouflage breaking are vastly different methods of solving the same problem. In the following sections we take an in-depth look at each of these methods.

2.1 Co-occurrence and Canny Method

As stated before, the co-occurrence and Canny method can be broken into two parts: assessing the image for possible camouflaged objects, and then bringing those objects to the foreground. In this section we describe each step of the co-occurrence and Canny method in detail.

Step 1: Camouflage Detection. The gray level co-occurrence matrix is used to determine how often a pattern is repeated in an image. This allows repeated patterns to be treated as a homogeneous region.
This can be very useful when analyzing an image with a noisy or cluttered background.
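As a concrete illustration of the co-occurrence counting just described, here is a sketch (not the authors' implementation); the 4x4 test image and its two gray levels are invented for illustration:

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Gray level co-occurrence matrix P_{phi,d}(a, b): relative frequencies
    of pixel pairs with gray levels (a, b) separated by offset (dx, dy).
    Each pair is counted in both directions, so the matrix is symmetric."""
    P = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                a, b = img[y, x], img[y2, x2]
                P[a, b] += 1
                P[b, a] += 1  # symmetry
    return P / P.sum()  # normalize counts to relative frequencies

# Tiny invented image with gray levels {0, 1}
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 1, 1, 0],
                [0, 1, 1, 0]])
P = glcm(img, dx=1, dy=0, levels=2)  # phi = 0 degrees, d = 1
print(P)
```

Because the matrix is symmetric, only the upper (or lower) triangle actually needs to be computed in practice, exactly as the text notes below.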
First we must determine the gray level co-occurrence matrix. The gray level co-occurrence matrix, P_{φ,d}(a, b), is the matrix of relative frequencies of the occurrence of gray level configurations. Namely, P_{φ,d}(a, b) records how frequently two pixels with gray levels a and b appear in the image separated by distance d in direction φ. For better understanding, an example from [1] illustrates how to calculate the gray level co-occurrence matrix. Notice that P_{φ,d}(a, b) is a symmetric matrix, so in practice only half of the matrix needs to be calculated, but for clarity the whole matrix is shown. Once the gray level co-occurrence matrices for both the input image and the background of the input image are calculated, a comparison of the texture parameters can be made to determine whether any camouflaged objects are contained within the image. The texture parameters used are energy, entropy, maximum probability, contrast, inverse difference moment, and correlation.

1. Energy: the measure of homogeneity of an image; the more homogeneous the image, the higher the energy.

   Σ_{a,b} P_{φ,d}(a, b)².   (1)

2. Entropy: the measure of randomness of the gray-level pair distribution; the more uniform the distribution, the higher the entropy.

   -Σ_{a,b} P_{φ,d}(a, b) log₂ P_{φ,d}(a, b).   (2)

3. Maximum Probability: yields the most predominant pixel pair.

   max_{a,b} P_{φ,d}(a, b).   (3)

4. Contrast: the measure of local image variations.

   Σ_{a,b} |a - b|^k P_{φ,d}(a, b)^λ, typically k = 2, λ = 1.   (4)
5. Inverse Difference Moment: measures the smoothness of the image.

   Σ_{a,b: a≠b} P_{φ,d}(a, b)^λ / |a - b|^k.   (5)

6. Correlation: the measure of image linearity.

   ( Σ_{a,b} a·b·P_{φ,d}(a, b) - μ_x μ_y ) / (σ_x σ_y),   (6)

   where μ_x, μ_y are means and σ_x, σ_y are standard deviations:
   μ_x = Σ_a a Σ_b P_{φ,d}(a, b), μ_y = Σ_b b Σ_a P_{φ,d}(a, b),
   σ_x² = Σ_a (a - μ_x)² Σ_b P_{φ,d}(a, b), σ_y² = Σ_b (b - μ_y)² Σ_a P_{φ,d}(a, b).

With these texture parameters the image is analyzed for camouflaged objects; if an image is determined to contain a camouflaged object, the second phase of the method begins.

Step 2: Visualization of Camouflaged Objects. To make the camouflaged object visible to the human eye we repeatedly apply the Canny edge detection operator. This brings the object to the foreground, but it does not extract the object from the image; further processing is needed to extract the object.

2.2 Convexity-based Method

The convexity-based method determines the places of highest convexity. Most foreground objects are convex, so this method locates foreground objects regardless of colorization by camouflage. Since this method is convexity-based rather than edge-based, it is reliable in detecting camouflaged objects, where edges tend to be misleading. We now look at the development of this algorithm.

Let I(x, y) be an input image. The Cartesian representation of the gradient map is

   ∇I(x, y) = (∂I/∂x, ∂I/∂y).   (7)

Let us convert the gradient of I(x, y) into its polar representation. The gradient argument is defined by

   θ(x, y) = arctan( (∂I/∂y) / (∂I/∂x) ).   (8)

Polar coordinates are used because convexity is generalized from basic paraboloids. The y-derivative of the gradient argument of a paraboloid tends to infinity at the negative x-axis, which gives a basis for the D_arg operator. [2]
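The six texture parameters of Section 2.1 can be computed directly from a normalized co-occurrence matrix. The sketch below assumes k = 2 and λ = 1 as in the text, and the sample matrix is invented for illustration; it is not the authors' code:

```python
import numpy as np

def texture_params(P):
    """Texture parameters (Eqs. 1-6) of a normalized co-occurrence matrix P."""
    a = np.arange(P.shape[0], dtype=float)[:, None]  # row gray level
    b = np.arange(P.shape[1], dtype=float)[None, :]  # column gray level
    nz = P > 0                                       # avoid log(0)
    energy = (P ** 2).sum()                          # Eq. (1)
    entropy = -(P[nz] * np.log2(P[nz])).sum()        # Eq. (2)
    max_prob = P.max()                               # Eq. (3)
    contrast = (np.abs(a - b) ** 2 * P).sum()        # Eq. (4), k=2, lam=1
    off = np.abs(a - b) > 0                          # off-diagonal pairs only
    idm = (P[off] / np.abs(a - b)[off] ** 2).sum()   # Eq. (5)
    mu_x = (a * P).sum()
    mu_y = (b * P).sum()
    var_x = (((a - mu_x) ** 2) * P).sum()
    var_y = (((b - mu_y) ** 2) * P).sum()
    corr = ((a * b * P).sum() - mu_x * mu_y) / np.sqrt(var_x * var_y)  # Eq. (6)
    return dict(energy=energy, entropy=entropy, max_prob=max_prob,
                contrast=contrast, idm=idm, correlation=corr)

# Invented normalized co-occurrence matrix, for illustration only
P_demo = np.array([[0.4, 0.1],
                   [0.1, 0.4]])
feats = texture_params(P_demo)
print(feats)
```

Comparing these six numbers between the input image and the known background is what flags a possible camouflaged object in the first phase of the method.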
So the y-derivative of the gradient argument is Y_arg, which is used to create D_arg by summing the rotations of Y_arg at 0, 90, 180, and 270 degrees. Y_arg is represented by

   Y_arg(x, y) = ∂θ/∂y.   (9)

Fig. 1. (a) Paraboloidal intensity function I(x, y). (b) Gradient argument of (a); discontinuity ray at the negative x-axis. (c) Y_arg of (a) (derivative of (b)). (d) Rotation of (a) (90° c.c.w.), calculation of gradient argument. (e) Rotation of (a) (90° c.c.w.), calculation of Y_arg. (f) Response of D_arg, the isotropic operator. [4]

As figure 1 shows, the reaction of Y_arg and D_arg to a paraboloid is clear. This reaction creates an output image that highlights the convex areas of the original image. Using a threshold on the intensity of the output image, we can pinpoint the most convex areas of the image.

3 Implementation

We plan future work in camouflage breaking, which will include the implementation and assessment of both the co-occurrence and Canny method and the convexity-based method. The implementation of these two methods will give more information on the ability of each algorithm to truly detect and locate camouflaged objects within an image.
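A minimal sketch of how such a D_arg implementation might look, reconstructed from the description above and not the authors' code: finite differences stand in for the derivatives, arctan2 is used as a robust form of Eq. (8), np.rot90 supplies the four rotations, and the paraboloid test image is invented:

```python
import numpy as np

def grad_argument(I):
    """theta(x, y): polar angle of the gradient (robust form of Eq. 8)."""
    Iy, Ix = np.gradient(I.astype(float))  # axis 0 is y, axis 1 is x
    return np.arctan2(Iy, Ix)

def y_arg(I):
    """Y_arg: y-derivative of the gradient argument (Eq. 9)."""
    return np.gradient(grad_argument(I), axis=0)

def d_arg(I):
    """D_arg: sum of Y_arg over image rotations by 0, 90, 180, 270 degrees,
    each response rotated back so the four responses align."""
    out = np.zeros_like(I, dtype=float)
    for k in range(4):
        out += np.rot90(y_arg(np.rot90(I, k)), -k)
    return out

# Invented test image: a paraboloid-like bump, brightest at the center
y, x = np.mgrid[-10:11, -10:11]
I = 100.0 - (x ** 2 + y ** 2)
response = np.abs(d_arg(I))
mask = response > np.percentile(response, 95)  # threshold: most convex points
```

On this test image the response concentrates near the paraboloid's peak, which is exactly the behavior figure 1(f) illustrates for the isotropic operator.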
4 Analysis

The results of each method are clearly different, as seen below. Each method has merit in what it accomplishes. In the following sections we analyze the results of each method.

4.1 Co-occurrence and Canny Method

This method has only been tested on synthetic images with known backgrounds. One drawback of this method is that you must have the background image to use as a gauge of whether a camouflaged object exists within another image. Obtaining the background image is typically not easy in real applications. Putting that drawback aside, let's look at the results from the synthetic images.

Fig. 2. (a) Camouflaged image; l and I are camouflaged in 1. (b) The Canny edge detection results. [1]

As figure 2 shows, the repeated application of the Canny edge detector was able to bring out the foreign objects in this image. The outcome appears to achieve the desired result, but the process of extracting these objects was never addressed. There is also some concern about the theory versus the application of this method: will it stand up to non-synthesized image data? Future analysis after implementation of this method should prove more informative.
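The repeated edge-detection loop of Section 2.1 can be illustrated in code. As a stated assumption, the fragment below substitutes a plain Sobel gradient-magnitude pass for the full Canny operator (no smoothing, non-maximum suppression, or hysteresis), and the test image is invented, so it only shows the iteration structure, not the published results:

```python
import numpy as np

def sobel_edges(img):
    """Gradient magnitude via Sobel kernels -- a simplified stand-in for Canny."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")  # replicate border pixels
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]  # 3x3 neighborhood of pixel (i, j)
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

def repeated_edges(img, n_iter=3):
    """Apply the edge operator repeatedly, feeding each edge map back in."""
    out = img
    for _ in range(n_iter):
        out = sobel_edges(out)
    return out

# Invented image: a faint square on a flat background
img = np.zeros((16, 16))
img[5:11, 5:11] = 1.0
edges = repeated_edges(img, n_iter=2)
```

Each pass re-emphasizes the boundary structure while flat regions stay at zero, which is the mechanism the method relies on to pull a camouflaged object into the foreground.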
4.2 Convexity-based Method

This method has been thoroughly tested with an array of input images. The algorithm is robust because it continues to produce the desired results under various conditions, such as different illuminations, variously scaled objects of interest, different orientations, and cluttered or textured images, as seen in figure 3 and figure 4.

Fig. 3. Left: robustness to illumination. D_arg reacts strongly to the funnel and is robust to changes in lighting direction. Middle: robustness to scale changes. A vase is shown in 5 different scales. Right: robustness to orientation variations. The flashlight changes its orientation from vertical to horizontal. D_arg reacts strongly to the cylindrical flashlight; detection of the flashlight is independent of orientation. [6]

Fig. 4. The main focus (right: vase, left: man) remains a dominant feature in these highly textured or cluttered images and is detected with D_arg. [3]
The convexity-based approach is also invariant to differentiable, strictly monotonically increasing transformations of the gray-level function, as seen in figure 5.

Fig. 5. Notice that D_arg (bottom row) remains similar for all 3 images. [3]

As for camouflage, figures 6 and 7 show that the D_arg operator does extremely well at determining the focus of an image even when the main focus is camouflaged. Figures 6 and 7 compare a known edge-based detection method with the convexity-based method. Clearly, in this application, the convexity-based method is a vastly superior algorithm.

Fig. 6. D_arg (top row) is able to unmistakably mark the hunter, where radial symmetry found the trees to the left of the hunter. [3]
Fig. 7. D_arg (top row) is able to clearly mark the two soldiers, where radial symmetry marked various positions, most of which were not the soldiers. [3]

As you can see, the convexity-based method is a remarkable approach to camouflage detection. A non-edge-based method is logical and has proven quite effective at solving this problem. I look forward to exploring the convexity-based method's use in face detection and to implementing the algorithm for camouflage breaking.

5 Conclusion

In conclusion, both methods detect camouflaged objects within an image, and each has its own strengths and weaknesses. The co-occurrence and Canny method is a simple algorithm and creates a good outline of the object, but it does not extract the object, it requires the known background, and it has only been tested on synthetic images, so it may not be effective in real applications. Further processing must be done to actually extract the camouflaged object. The convexity-based method is a robust algorithm and is precise in finding foreground objects, but it also does not extract the object, and a threshold must be determined, which can change the results. Each method thus has advantages and disadvantages. We hope in future work to explore each in more detail.

References

1. Nagabhushan, P., Bhajantri, N. U.: Multiple Camouflage Breaking by Co-occurrence and Canny, University of Mysore, Manasa Gangotri.
2. Tankus, A., Yeshurun, Y., Intrator, N.: Face Detection by Direct Convexity Estimation, Pattern Recognition Letters 18(9) (1997).
3. Tankus, A., Yeshurun, Y.: Detection of Regions of Interest and Camouflage Breaking by Direct Convexity Estimation, IEEE International Workshop on Visual Surveillance, pages 42-48, Bombay, India (1998). In conjunction with ICCV (1998).
4. Tankus, A., Yeshurun, Y.: A Model for Visual Camouflage Breaking, 1st IEEE International Workshop on Biologically Motivated Computer Vision (BMCV), Seoul, Korea (2000).
5. Tankus, A., Yeshurun, Y.: Convexity-based Camouflage Breaking, International Conference on Pattern Recognition (ICPR), Barcelona, Spain (2000).
6. Tankus, A., Yeshurun, Y.: Convexity-based Visual Camouflage Breaking, Computer Vision and Image Understanding 82 (2001).
BSB663 Image Processing Pinar Duygulu Slides are adapted from Selim Aksoy Image matching Image matching is a fundamental aspect of many problems in computer vision. Object or scene recognition Solving
More informationImage Processing
Image Processing 159.731 Canny Edge Detection Report Syed Irfanullah, Azeezullah 00297844 Danh Anh Huynh 02136047 1 Canny Edge Detection INTRODUCTION Edges Edges characterize boundaries and are therefore
More informationBiometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong)
Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong) References: [1] http://homepages.inf.ed.ac.uk/rbf/hipr2/index.htm [2] http://www.cs.wisc.edu/~dyer/cs540/notes/vision.html
More informationCS334: Digital Imaging and Multimedia Edges and Contours. Ahmed Elgammal Dept. of Computer Science Rutgers University
CS334: Digital Imaging and Multimedia Edges and Contours Ahmed Elgammal Dept. of Computer Science Rutgers University Outlines What makes an edge? Gradient-based edge detection Edge Operators From Edges
More informationClassification of Protein Crystallization Imagery
Classification of Protein Crystallization Imagery Xiaoqing Zhu, Shaohua Sun, Samuel Cheng Stanford University Marshall Bern Palo Alto Research Center September 2004, EMBC 04 Outline Background X-ray crystallography
More informationLecture 6: Multimedia Information Retrieval Dr. Jian Zhang
Lecture 6: Multimedia Information Retrieval Dr. Jian Zhang NICTA & CSE UNSW COMP9314 Advanced Database S1 2007 jzhang@cse.unsw.edu.au Reference Papers and Resources Papers: Colour spaces-perceptual, historical
More informationCOLOR IMAGE SEGMENTATION IN RGB USING VECTOR ANGLE AND ABSOLUTE DIFFERENCE MEASURES
COLOR IMAGE SEGMENTATION IN RGB USING VECTOR ANGLE AND ABSOLUTE DIFFERENCE MEASURES Sanmati S. Kamath and Joel R. Jackson Georgia Institute of Technology 85, 5th Street NW, Technology Square Research Building,
More informationCapturing, Modeling, Rendering 3D Structures
Computer Vision Approach Capturing, Modeling, Rendering 3D Structures Calculate pixel correspondences and extract geometry Not robust Difficult to acquire illumination effects, e.g. specular highlights
More informationAutomatic Tracking of Moving Objects in Video for Surveillance Applications
Automatic Tracking of Moving Objects in Video for Surveillance Applications Manjunath Narayana Committee: Dr. Donna Haverkamp (Chair) Dr. Arvin Agah Dr. James Miller Department of Electrical Engineering
More informationA Patch Prior for Dense 3D Reconstruction in Man-Made Environments
A Patch Prior for Dense 3D Reconstruction in Man-Made Environments Christian Häne 1, Christopher Zach 2, Bernhard Zeisl 1, Marc Pollefeys 1 1 ETH Zürich 2 MSR Cambridge October 14, 2012 A Patch Prior for
More informationRobust Shape Retrieval Using Maximum Likelihood Theory
Robust Shape Retrieval Using Maximum Likelihood Theory Naif Alajlan 1, Paul Fieguth 2, and Mohamed Kamel 1 1 PAMI Lab, E & CE Dept., UW, Waterloo, ON, N2L 3G1, Canada. naif, mkamel@pami.uwaterloo.ca 2
More informationTexture Similarity Measure. Pavel Vácha. Institute of Information Theory and Automation, AS CR Faculty of Mathematics and Physics, Charles University
Texture Similarity Measure Pavel Vácha Institute of Information Theory and Automation, AS CR Faculty of Mathematics and Physics, Charles University What is texture similarity? Outline 1. Introduction Julesz
More informationFace Detection and Recognition in an Image Sequence using Eigenedginess
Face Detection and Recognition in an Image Sequence using Eigenedginess B S Venkatesh, S Palanivel and B Yegnanarayana Department of Computer Science and Engineering. Indian Institute of Technology, Madras
More informationScene-Consistent Detection of Feature Points in Video Sequences
Scene-Consistent Detection of Feature Points in Video Sequences Ariel Tankus and Yehezkel Yeshurun School of Computer Science Tel-Aviv University Tel-Aviv 69978, Israel {arielt,hezy}@post.tau.ac.il Abstract
More informationRobust Ring Detection In Phase Correlation Surfaces
Griffith Research Online https://research-repository.griffith.edu.au Robust Ring Detection In Phase Correlation Surfaces Author Gonzalez, Ruben Published 2013 Conference Title 2013 International Conference
More informationDigital Image Processing COSC 6380/4393
Digital Image Processing COSC 6380/4393 Lecture 21 Nov 16 th, 2017 Pranav Mantini Ack: Shah. M Image Processing Geometric Transformation Point Operations Filtering (spatial, Frequency) Input Restoration/
More informationA Robust Method for Circle / Ellipse Extraction Based Canny Edge Detection
International Journal of Research Studies in Science, Engineering and Technology Volume 2, Issue 5, May 2015, PP 49-57 ISSN 2349-4751 (Print) & ISSN 2349-476X (Online) A Robust Method for Circle / Ellipse
More informationScale Invariant Feature Transform
Why do we care about matching features? Scale Invariant Feature Transform Camera calibration Stereo Tracking/SFM Image moiaicing Object/activity Recognition Objection representation and recognition Automatic
More information[Programming Assignment] (1)
http://crcv.ucf.edu/people/faculty/bagci/ [Programming Assignment] (1) Computer Vision Dr. Ulas Bagci (Fall) 2015 University of Central Florida (UCF) Coding Standard and General Requirements Code for all
More informationDigital Image Processing. Prof. P. K. Biswas. Department of Electronic & Electrical Communication Engineering
Digital Image Processing Prof. P. K. Biswas Department of Electronic & Electrical Communication Engineering Indian Institute of Technology, Kharagpur Lecture - 21 Image Enhancement Frequency Domain Processing
More informationPractical Image and Video Processing Using MATLAB
Practical Image and Video Processing Using MATLAB Chapter 14 Edge detection What will we learn? What is edge detection and why is it so important to computer vision? What are the main edge detection techniques
More informationSubject-Oriented Image Classification based on Face Detection and Recognition
000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050
More informationImage Processing: Final Exam November 10, :30 10:30
Image Processing: Final Exam November 10, 2017-8:30 10:30 Student name: Student number: Put your name and student number on all of the papers you hand in (if you take out the staple). There are always
More informationLocal Feature Detectors
Local Feature Detectors Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr Slides adapted from Cordelia Schmid and David Lowe, CVPR 2003 Tutorial, Matthew Brown,
More informationA Road Marking Extraction Method Using GPGPU
, pp.46-54 http://dx.doi.org/10.14257/astl.2014.50.08 A Road Marking Extraction Method Using GPGPU Dajun Ding 1, Jongsu Yoo 1, Jekyo Jung 1, Kwon Soon 1 1 Daegu Gyeongbuk Institute of Science and Technology,
More informationCHAPTER VIII SEGMENTATION USING REGION GROWING AND THRESHOLDING ALGORITHM
CHAPTER VIII SEGMENTATION USING REGION GROWING AND THRESHOLDING ALGORITHM 8.1 Algorithm Requirement The analysis of medical images often requires segmentation prior to visualization or quantification.
More informationSIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014
SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014 SIFT SIFT: Scale Invariant Feature Transform; transform image
More informationTEXTURE. Plan for today. Segmentation problems. What is segmentation? INF 4300 Digital Image Analysis. Why texture, and what is it?
INF 43 Digital Image Analysis TEXTURE Plan for today Why texture, and what is it? Statistical descriptors First order Second order Gray level co-occurrence matrices Fritz Albregtsen 8.9.21 Higher order
More informationOriented Filters for Object Recognition: an empirical study
Oriented Filters for Object Recognition: an empirical study Jerry Jun Yokono Tomaso Poggio Center for Biological and Computational Learning, M.I.T. E5-0, 45 Carleton St., Cambridge, MA 04, USA Sony Corporation,
More informationCHAPTER 4 SEMANTIC REGION-BASED IMAGE RETRIEVAL (SRBIR)
63 CHAPTER 4 SEMANTIC REGION-BASED IMAGE RETRIEVAL (SRBIR) 4.1 INTRODUCTION The Semantic Region Based Image Retrieval (SRBIR) system automatically segments the dominant foreground region and retrieves
More informationFace Detection for Skintone Images Using Wavelet and Texture Features
Face Detection for Skintone Images Using Wavelet and Texture Features 1 H.C. Vijay Lakshmi, 2 S. Patil Kulkarni S.J. College of Engineering Mysore, India 1 vijisjce@yahoo.co.in, 2 pk.sudarshan@gmail.com
More information