Set Size, Clutter & Complexity

1 Set Size, Clutter & Complexity A review Aude Oliva "I think the next century will be the century of complexity." Stephen Hawking

2 To be complex or not to be complex Image examples: unfamiliar characters, mathematics, transparency, camouflage, number of objects, number of elements

3 Definition Set size: increases with the number of objects in the display. Clutter: increases with the number of features or objects and their variance. Complexity: increases with the number of distinguishable parts and the number of connections between these parts. Two domains of interest: image memory and visual search

4 Role of background complexity on visual search performance The interaction of set size x complexity. Wolfe, J.M., Oliva, A., Horowitz, T.S., Butcher, S., & Bompas, A. (2002). Segmentation of Objects from Backgrounds in Visual Search Tasks. Vision Research, 42,

5 Visual Search in Complex Background Classical visual search vs. visual search in a background. Background complexity manipulations: clutter, junctions, camouflage, scaling of the object. In the classical plot of RT against set size (# of items), the slope (e.g., 40 msec/item) indexes search efficiency and the intercept indexes a fixed cost. Wolfe, Oliva, Horowitz et al (2002)

6 Visual Search in Complex Background Hypothesis 1: Each object must be separately extracted from the background. Increasing background complexity should add a cost for each item examined, so complex backgrounds should produce steeper slopes. Hypothesis 2: An initial separation mechanism. A single operation separates possible target objects from the background, and then search is performed on a display with the same set size (complexity may add additional candidate objects). Separating more complex backgrounds may take longer. (Plots: RT against set size (# of items) for simple and complex backgrounds; different slopes under Hypothesis 1, same slope with a shifted intercept under Hypothesis 2.)
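The two hypotheses make different predictions about how background complexity enters the standard linear search model, RT = intercept + slope x set size. A minimal sketch (all slope, intercept and cost values here are illustrative, not taken from the experiments):

```python
# Linear search model: RT = intercept + slope * set_size (msec).
def predict_rt(set_size, slope, intercept):
    return intercept + slope * set_size

# Hypothesis 1: per-item extraction cost -> complexity raises the SLOPE.
def hyp1_rt(set_size, base_slope=30, per_item_cost=10, intercept=500):
    return predict_rt(set_size, base_slope + per_item_cost, intercept)

# Hypothesis 2: one-shot separation -> complexity raises the INTERCEPT only.
def hyp2_rt(set_size, slope=30, separation_cost=100, intercept=500):
    return predict_rt(set_size, slope, intercept + separation_cost)

# Under Hyp 1 the complex-minus-simple RT gap grows with set size;
# under Hyp 2 it stays constant (an additive cost).
gap_h1 = [hyp1_rt(n) - predict_rt(n, 30, 500) for n in (1, 4, 7, 10)]
gap_h2 = [hyp2_rt(n) - predict_rt(n, 30, 500) for n in (1, 4, 7, 10)]
```

Comparing the two gap vectors is exactly the logic of the experiments below: a constant gap across set sizes favors the clean-up (separation) account.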

7 Effect of background on Search The effect of background on object search: background and objects are separated initially in a pre-attentive step, and the search operates on a subset of candidate items. Example: search for a T among Ls. (Figure: initial percept, separation stage, search stage.) Wolfe, Oliva, Horowitz et al (2002)

8 Experiment 1: Search among cluttered desks RT (msec) as a function of set size: empty desk (31 msec/item), simple desk (37 msec/item), messy desk (35 msec/item). Clutter produces an additive RT cost.

9 Experiment 2: The Walls Conditions: T junctions, broken T junctions, T junctions with line-terminator control, no junctions, no verticals, X junctions, broken X junctions, X junctions with line-terminator control, blank square control

10 Search among junction walls Slopes and mean RTs per condition: 17 msec/item (mean 562 msec); 16 (580); 17 (587); 21 (618); 18 (628); 24 (637); 22 (596); 21 (664). Efficiency of search (slope) is not affected by the background complexity. The complexity produces an additive RT cost. Results favor the clean-up hypothesis.

11 Experiment 3: Camouflaged Target Purpose: to systematically vary the similarity of target and background spatial frequency (SF) content. Method: backgrounds were textures composed of the same spatial frequencies as the target, of a lower SF component (0.5x and 0.125x), or of a higher SF (2x and 8x). We plot relative log SF from low to high SF (-0.9, -0.3, 0, 0.3, 0.9 relative log units). Participants performed two search tasks: searching for one target or searching for two targets. Set sizes were 1, 4, 7, 10 items. Logic: if clean-up is done only once, then the cost of the background will be similar for the 1T and 2T tasks. All the backgrounds had the same histogram (a Gaussian distribution of gray levels). The targets and distractors were of different contrasts: 3 stdev (easily discriminable), 2 stdev, and 1 stdev (almost camouflaged) from the background mean.

12 Camouflaged search (Backgrounds range from coarser to finer scale relative to the target frequency F.) Hypothesis 1: clean up twice (once for T1, again for T2), so the background cost in relative RT should be larger for the 2T search than for the 1T search. Hypothesis 2: clean up once, so the 1T and 2T curves of relative RT against background frequency (low to high) should show the same background cost.

13 Experiment 3: Results Relative mean RT (ms) as a function of background frequency (log), for target present and target absent. Target present: slopes for 1 target = 44, 31, 51, 44, 37 msec/item (differences not significant); slopes for 2 targets = 80, 80, 80, 84, 80 msec/item (not significant). Target absent: slopes for 1 target = 99, 95, 84, 92, 102 msec/item (not significant); slopes for 2 targets = 140, 123, 122, 132, 145 msec/item (not significant). Efficiency of the search (slope) is not affected by the background complexity. The complexity produces an additive RT cost that depends on the spatial frequency similarity between the target and the background. Results favor the clean-up hypothesis (#2).

14 Level of camouflage (Figure: search displays at background frequency F with low-contrast and high-contrast targets.)

15 Exp 4: Complexity as a scaling of the target object Backgrounds at frequency F/8 (complexity = log(0.125)), F/4 (complexity = log(0.25)), F (maximum complexity), and 2F (complexity = log(2)), shown with color-cue stimuli. (Plots: RT (msec) and slope (msec/item) against background frequency (in log), with and without a color cue.)

16 Background Complexity Representation For the purpose of a search task, a precise representation of the structure of the background may not be relevant. However, background complexity affects the initial perceptual stage. Observers are able to quickly and effortlessly determine the mean size of a set of heterogeneous circles (Treisman and colleagues). Is the background scene, for the purpose of a search task, represented by a statistical summary of features?

17 Background Complexity Representation

18 Experiment 5: Examples of Backgrounds Single patterns: frequency F/8, 4 regions, complexity = log(4); frequency F/4, 16 regions, complexity = log(16); frequency F, 256 regions, complexity = log(256); frequency 2F, 1024 regions, complexity = log(1024); shown with color-cue stimuli. Composed pattern: 1/2 Pattern(F/8) + 1/4 Pattern(F/4) + 1/4 Pattern(F), so its complexity is 1/2 log(4) + 1/4 log(16) + 1/4 log(256) = log(16), equivalent to a single 16-region pattern. These two patterns have the same level of complexity with regard to the target.
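The equivalence claimed on this slide is a short arithmetic check: with complexity defined as the log of the number of regions, the area-weighted average of the component complexities equals the complexity of a single 16-region pattern (a verification sketch; the log base is irrelevant to the equivalence):

```python
import math

# Complexity of a single pattern = log(number of regions).
def complexity(regions):
    return math.log(regions)

# Composed pattern: 1/2 Pattern(F/8) + 1/4 Pattern(F/4) + 1/4 Pattern(F).
composed = 0.5 * complexity(4) + 0.25 * complexity(16) + 0.25 * complexity(256)

# 1/2*log(4) + 1/4*log(16) + 1/4*log(256) = log(16): the composed
# background matches a single 16-region pattern in complexity.
equivalent = complexity(16)
```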

19 Statistical summary of background complexity Composed pattern of complexity equivalent to a single pattern (1024 regions). (Plot: RT (msec) against background frequency (log); observed and predicted RTs, with and without a color cue.) Predicted RT = 0.5 x RT(pattern of complexity 256) + 0.25 x RT(pattern of complexity 1024) + 0.25 x RT(pattern of complexity 16384). Performance of visual search on the composed backgrounds may be predicted from the weighted average of the single-region backgrounds.

20 Search in background A single operation separates possible target objects from the background. Search then proceeds through the set of target objects, ignoring the background. (Summary: clutter, junctions, camouflage, and target scaling all produce an additive cost rather than a per-item cost.)

21 Conclusions Observers can separate candidate targets from a complex background in a single preattentive step. Background information adds an additive RT cost at the beginning of the search. This initial separation mechanism makes sense with regard to a gist mechanism: a very fast computation of features over the whole image that corresponds to background information.

22 Definition Set size: increases with the number of objects in the display. Clutter: increases with the quantity of features or objects and their variance. Complexity: increases with the quantity of distinguishable parts and the quantity of connections between these parts. Two domains of interest: image memory and visual search

23 Why study visual complexity? It is a paradox: when the parts of a complex are separated, or conceptualized as a whole, the valence of the complexity changes and the pattern becomes simpler (no crowding). There are almost no studies of visual complexity in real-world scenes. It is "impossible": characterizing visual complexity is itself too complex. Scene gist does not care: our model of scene categorization (the Spatial Envelope) is independent of the visual complexity of the image.

24 But really? Why? Scenes are composed of numerous objects, textures and colors arranged in a variety of spatial layouts. However, scene categorization (e.g. street, kitchen, park), unlike other visual processes (e.g. search), seems to be unaffected by the level of visual complexity of a scene. (Figure: a conceptual space with axes such as ruggedness and openness, in which neighbours like coast, landscape, forest and mountain cluster; example descriptions: "large space (200 m), urban scenes"; "large space (200 m), urban scenes, in perspective, busy".)

25 Spatial Envelope A picture can be represented by a vector of length N corresponding to N perceptual properties of space. Here N = 3: scene i is described by (O_i, Ex_i, Rn_i) or (O_i, Rg_i, Rn_i), for openness, expansion or ruggedness, and roughness. Each perceptual dimension corresponds to one axis of a multidimensional space, into which scenes with similar space properties are projected together.
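Under this scheme each scene reduces to a short property vector, and scene similarity is simply distance in that space. A toy sketch (the scene names and property values are invented for illustration, not measured envelope values):

```python
# Each scene -> vector (openness, expansion, roughness), scaled to [0, 1].
scenes = {
    "highway": (0.9, 0.8, 0.1),   # open, strong perspective, smooth
    "street":  (0.5, 0.9, 0.4),
    "forest":  (0.1, 0.3, 0.9),   # enclosed, little expansion, rough
}

def envelope_distance(a, b):
    """Euclidean distance between two spatial-envelope vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Scenes with similar space properties project close together:
d_highway_street = envelope_distance(scenes["highway"], scenes["street"])
d_highway_forest = envelope_distance(scenes["highway"], scenes["forest"])
```

On these toy values the highway sits nearer the street than the forest, which is the behavior the projection is meant to capture.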

26 Complex From the Latin complexus = entwined, twisted together. In order to have a complex, you need 2 or more elements joined in such a way that it is difficult to separate them. Intuitively, an object is more complex if more parts can be distinguished and if more connections exist between them. Logically, more parts to be represented means more time to search among them or to compute over them. The components of a complex cannot be separated without destroying it (by separation, you break the connections). The method of analysis, or decomposition into independent modules, may not be used to simplify the modeling of a complex object. From Heylighen (1997)

27 Complexity is Variety The representation of visual complexity is likely to combine both levels of variety (parts and surface styles). Intuitively, complex scenes should contain a larger variety of parts and surface styles, as well as more relationships between these regions, than do simpler scenes. Oliva et al (2004)

28 Complexity is a complex property Visual complexity may be a function of: variety of elements (contours, objects); variety of surface styles (textures, colors, materials); variety of surface modulators (shadows, light sources); variety of symmetries; variety of spatial layout; subjective experience (familiarity)

29 Image Regularities Simple scene Perceptually good Complex scene Perceptually bad

30 Image Regularities Simple scene Perceptually good Complex scene Perceptually bad Mirror symmetry

33 Image Regularities Simple scene Perceptually good Complex scene Perceptually bad Let's consider toy examples of the simplest and the most complex scene (a spatially-variant pattern)

34 Image Regularity The good: an empty pattern. The bad: a random pattern. Mathematics defines simplicity as the degree to which an object can be faithfully compressed, meaning without loss of information (Feldman, 1997, 2004).
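This compressibility notion can be illustrated with an off-the-shelf compressor: an "empty" pattern compresses to almost nothing, while a random pattern barely compresses at all. (zlib here stands in for the idealized measure; true Kolmogorov complexity is uncomputable, so any real compressor is only a proxy.)

```python
import random
import zlib

N = 10_000
empty_pattern = bytes(N)                                   # all zeros
rng = random.Random(0)                                     # fixed seed
random_pattern = bytes(rng.getrandbits(8) for _ in range(N))

# Compressed size as a (rough) proxy for descriptive complexity.
simple_size = len(zlib.compress(empty_pattern))
complex_size = len(zlib.compress(random_pattern))
# The empty pattern shrinks to a few dozen bytes; the random one
# stays close to its original length.
```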

35 Perceptual Regularity: Symmetry The good: an empty pattern. The bad: a random pattern. No features, or many features displayed randomly, share similar perceptual regularities.

36 Perceptual Regularity: Symmetry The good: an empty space. The bad: a random space. Empty and random spaces are maximally symmetric: not in the absolute position of the parts, but in the probability that a part will be found at a particular location. The essence of symmetry: one part is sufficient to reconstruct the whole.

37 Perceptual Regularity: Stationarity The good: an empty space. The bad: a random space. Features in empty and random spaces are stationary. Stationarity: the probability that a component or set of features will be found at any location in the pattern is the same.

38 Perceptual Regularity: Stationarity The good: an empty space. The bad: a random space. Features in empty and random spaces are stationary: the probability that a component or set of features will be found at any location in the pattern is the same.

39 Perceptual Regularity The good (an empty space), the bad (a random space), and the ugly.

40 Perceptual Regularities The good (an empty space), the bad (a random space), and the ugly. The ugly pattern has lower symmetry and lower stationarity, yet is it really less complex than the bad one?

41 What is visual complexity? Optimum regularity? Increased performance? Variety of features? (Plotted against the degree of perceived visual complexity.)

44 What is visual complexity? (Examples of low, medium and high complexity scenes.) Any rigorous study of the perception of visual complexity requires a precise definition of what visual complexity is. Two levels of visual complexity: (1) the complexity inside the image (perceptual complexity); (2) the task-related visual complexity (cognitive complexity). First question: how can we represent the perceptual complexity of a scene?

45 Visual Complexity Research Program (1) How do human observers represent visual complexity? What is the content of that representation? We start by searching for the perceptual dimension(s) underlying perceived visual complexity. (2) How does our visual system handle visual complexity in scenes? What are the perceptual and mnemonic mechanisms used to faithfully (or not) compress visual complexity? We start by looking at individual scenes of various degrees of visual complexity in memory tasks.

46 Representation of Visual Complexity in natural scenes Oliva, A., Mack, M.L., Shrestha, M., & Peeper, A. (2004). Identifying the Perceptual Dimensions of Visual Complexity of Scenes. The 26th Annual Meeting of the Cognitive Science Society, Chicago, August 2004

47 Representing complexity: Textures Rao and Lohse (1993), Heaps & Handel (1999): the visual complexity of a texture is defined as the degree of difficulty in providing a verbal description of the texture. The degree of perceivable structure of a texture (goodness or simplicity) depends on two major perceptual dimensions: (1) repetitiveness (vs. disorganization) and (2) uniform orientation (vs. randomness).

48 Representing visual scene complexity Question: how can a cognitive system represent the degree of visual complexity of a scene (e.g. the variety of objects)? Hypothesis 1: the visual complexity of a scene can be represented along a single dimension (e.g. there exists a "eureka" filter computing visual complexity). Alternative hypothesis: visual complexity is represented by a multi-dimensional space of perceptual dimensions. How do task constraints modulate the perceived visual complexity of a scene? (I.e., how flexible are the features used to represent visual complexity?)

49 The Shape of a complexity representation 1. Unique perceptual dimension C: the features or properties related to visual complexity can be combined into one perceptual dimension (like mean depth estimation). 2. Multi-dimensional space {c1, c2, ..., cn}: most of visual complexity variability is explained by an identifiable number of perceptual dimensions. The weight of each dimension may vary with task constraints, but the principal dimensional vocabulary remains the same (like determining the basic-level category of a scene). 3. Flexible space (Space 1, ..., Space N): the properties that each human observer uses to represent visual complexity vary. There is no specific dimensional vocabulary used for representing visual complexity (maybe like the emotional valence of a scene).

50 Representing Visual complexity These three hypotheses about the representation of visual complexity are not mutually exclusive: for a particular task, the visual complexity space could be skewed towards a line (e.g. one perceptual property, like quantity of objects, is preferentially used), but for a different task, the space of visual complexity might take into account multiple dimensions. In a first study, we aim to tease apart the three levels of representation. We evaluated the degree of agreement between participants asked to judge the perceived visual complexity of indoor scenes.

51 Norming visual complexity Rating the visual complexity of ~1000 scenes (frequency distribution over high, medium and low complexity). Norming: 100 scenes (selected at random among the 1000) were presented on a 23" monitor, and 40 participants were asked to organize the images into groups of visual complexity (minimum 3 groups, maximum 24), taking into account objects, colors, textures, space and lighting information. Complexity was defined as follows: if you glanced only once at the picture, how difficult would it be to describe the scene to somebody else so that she could find it among similar images? Participants did an average of 4 trials of 100 different images each.

52 Hierarchical Classification Task 100 scenes selected along the full complexity scale. After each subdivision, participants described the criteria they used to split the images.

53 Hierarchical grouping task

54 Constraints on the complexity space Two groups of participants (N=17 per group) were given different definitions of visual complexity. Control group: visual simplicity is related to how easy it is to give a verbal description of the image and remember it after seeing it for a short time; visual complexity is related to how difficult that is. Structure group: visual complexity is related to the structure of the scene, and not merely to color or brightness. Simplicity is seeing that objects and regions go well together; complexity is related to how difficult it is to make sense of the structure of the scene.

55 Representing Visual Complexity To differentiate between the three shapes of complexity space (1 Dimensional, N Dimensional or N spaces): (1) Qualitative analysis: which criteria did participants use? (2) How consistent are participants in ranking images along visual complexity? (3) What is the underlying representation of visual complexity? (Multi-dimensional scaling)

56 Criteria of Visual Complexity Criteria of visual complexity and their percentages for the primary and secondary divisions (reported separately for the structure and control groups): quantity of objects, textures and colors; clutter; symmetry; openness; layout organization; contrast. Taxonomy of visual criteria: (1) Quantity refers to objects, textures, colors. (2) Clutter is a relational criterion: the relationship between the quantity of objects and space. (3) Openness refers to the amount of space. (4) Symmetry refers to mirror symmetry. (5) Layout organization: description of the type of layout (centralized, grid).

57 Rankings Correlation With the hierarchical grouping task, scenes were classified into 8 bins of complexity. Images within each group were given the same complexity value. Within each group, we computed Spearman's rank-order correlation for each possible pairing of subjects. If participants were consistent, the ranking correlation should be high. Control group: r = 0.62 (0.15). Structure group: r = 0.61 (0.14). (Examples of high- and low-consistency rankings.)
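Because the grouping task assigns the same complexity value to every image in a bin, the rank correlation has to handle ties. A small sketch using average ranks (a reimplementation for illustration, not the authors' analysis code):

```python
def average_ranks(values):
    """Rank values 1..n, giving tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1            # average of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Two subjects who bin the images identically give rho = 1; perfectly reversed orderings give rho = -1; the reported r of about 0.6 sits between those extremes.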

58 Multi-dimensional scaling MDS provides a visual representation of the pattern of proximities (i.e., similarities or distances) among the images and informs us about the underlying representation. Criteria given by participants may be redundant with each other (a scene cannot have a high degree of clutter and a lot of open space). The dimensions of an MDS space are decorrelated: they correspond to the number of independent ways in which images can be perceived to resemble or differ from one another.
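The classical (Torgerson) form of MDS can be sketched in a few lines of numpy: double-center the squared distance matrix and read the configuration off its top eigenvectors. (A generic sketch; the study's specific MDS variant is not stated in this transcript.)

```python
import numpy as np

def classical_mds(D, k=2):
    """Embed n points in k dimensions from an n x n distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    eigval, eigvec = np.linalg.eigh(B)
    top = np.argsort(eigval)[::-1][:k]       # largest eigenvalues first
    scale = np.sqrt(np.clip(eigval[top], 0, None))
    return eigvec[:, top] * scale            # n x k coordinates

# Images judged similar end up close in the recovered space; the
# decorrelated axes are then inspected (e.g. clutter, symmetry).
```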

59 Multi-dimensional scaling (structure group) First axis: Clutter/Quantity

60 Second Axis of MDS Structure group: no mirror symmetry. Control group: mirror symmetry. The correlation between images projected onto the second principal axis in each group drops to 0.33.

61 Conclusion The correlation between the image ranks projected onto the first axis of the MDS for the control group and for the structure group is 0.98, suggesting the existence of a principal dimension of complexity (clutter). Additional secondary dimensions participate in the estimation of complexity (e.g. symmetry, openness). The dimensions of complexity are modulated by task constraints. What is the shape of the complexity space? It looks like a multi-dimensional space (clutter, quantity of color, texture, openness, symmetry, layout organization), possibly skewed towards a principal dimension (clutter).

62 Memorize these pictures

63

64

65

66

67

68

69

70

71

72 Which of the following pictures have you already seen?

73

74 NO

75

76 NO

77

78 NO

79

80 NO

81

82 YES

83

84 NO

85 Memory Confusion You have seen these pictures You were tested with these pictures

86 Memory Confusion You have seen these pictures You were tested with these pictures

87 Human image memory Memory for complex images is outstanding, but we remember the meaning or gist of an image and its spatial layout, not all of its objects.

88 Question Memory for real-world images is known to be very good, but little is known about the mechanisms that observers may use to encode and represent complex visual information in the domain of natural images and scenes. A characteristic of natural scenes is their variability in quantity of objects, spatial arrangement and scale. This variability raises the question of whether perceptual and mnemonic mechanisms depend on the degree of visual complexity of a scene image.

89 Role of Visual complexity on memory Low clutter High clutter

90 Role of Visual complexity on memory Low clutter: easy to remember (less confusion)? High clutter: difficult to remember (more confusion, an increase in false alarms)?

91 Memory Confusion: very simple and very complex images are equally difficult to remember (Plots: % of errors (false alarms) and d' for high- vs. low-complexity images.)
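The d' panel reports discriminability computed from hit and false-alarm rates; the standard signal-detection formula is easy to reproduce (a textbook sketch with made-up rates, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection discriminability: d' = z(H) - z(FA)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# More false alarms at equal hit rate -> lower d' (harder to discriminate):
medium_complexity = d_prime(0.90, 0.10)   # distinctive scenes (hypothetical)
extreme_complexity = d_prime(0.90, 0.30)  # very simple/complex scenes (hypothetical)
```

In these terms, the slide's claim is that very simple and very complex scenes both land in the higher-false-alarm, lower-d' regime.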

92 Possible Interpretation Performance for images of high and low complexity was equivalent, suggesting that image encoding is not a mechanism depending merely on the quantity of objects. Overall, the results suggest that scenes of high and low visual complexity have a lower degree of discriminability than scenes of medium complexity. What do images of high and low visual complexity have in common? (1) A low distinctiveness value. (2) A lack of variation in the quantity and type of information (uniform detail across the image).

93 Summary Temporal browsing of information: each picture can be shown only briefly in a sequence (Mary Potter, 1975) and still be recognized. You will memorize the storytelling of the picture, its meaning and spatial layout, but miss a lot of visual information, including some important objects. Visual complexity may have a complex interaction with memory processes.

94 Complexity in spatial scales Does the spatial layout resolution needed for scene recognition vary with image category and task? Or is there a universal spatial layout resolution, independent of categorization?

95 Spatial Scale Layout and Scene Categories Categorization performance in 8 basic-level groups (highway, street, close-up, building, coast, country, forest, mountain; chance level is 12.5%), for scene representations at 0, 2 and 4 cycles/image. The diagnostic spatial layout resolution varies with scene category. Increasing spatial layout resolution does not mean increasing visual complexity.


More information

Feature Selection for Image Retrieval and Object Recognition

Feature Selection for Image Retrieval and Object Recognition Feature Selection for Image Retrieval and Object Recognition Nuno Vasconcelos et al. Statistical Visual Computing Lab ECE, UCSD Presented by Dashan Gao Scalable Discriminant Feature Selection for Image

More information

Chapter 4: Analyzing Bivariate Data with Fathom

Chapter 4: Analyzing Bivariate Data with Fathom Chapter 4: Analyzing Bivariate Data with Fathom Summary: Building from ideas introduced in Chapter 3, teachers continue to analyze automobile data using Fathom to look for relationships between two quantitative

More information

Approaches to Visual Mappings

Approaches to Visual Mappings Approaches to Visual Mappings CMPT 467/767 Visualization Torsten Möller Weiskopf/Machiraju/Möller Overview Effectiveness of mappings Mapping to positional quantities Mapping to shape Mapping to color Mapping

More information

GIST. GPU Implementation. Prakhar Jain ( ) Ejaz Ahmed ( ) 3 rd May, 2009

GIST. GPU Implementation. Prakhar Jain ( ) Ejaz Ahmed ( ) 3 rd May, 2009 GIST GPU Implementation Prakhar Jain ( 200601066 ) Ejaz Ahmed ( 200601028 ) 3 rd May, 2009 International Institute Of Information Technology, Hyderabad Table of Contents S. No. Topic Page No. 1 Abstract

More information

Context. CS 554 Computer Vision Pinar Duygulu Bilkent University. (Source:Antonio Torralba, James Hays)

Context. CS 554 Computer Vision Pinar Duygulu Bilkent University. (Source:Antonio Torralba, James Hays) Context CS 554 Computer Vision Pinar Duygulu Bilkent University (Source:Antonio Torralba, James Hays) A computer vision goal Recognize many different objects under many viewing conditions in unconstrained

More information

CIE L*a*b* color model

CIE L*a*b* color model CIE L*a*b* color model To further strengthen the correlation between the color model and human perception, we apply the following non-linear transformation: with where (X n,y n,z n ) are the tristimulus

More information

Part-based and local feature models for generic object recognition

Part-based and local feature models for generic object recognition Part-based and local feature models for generic object recognition May 28 th, 2015 Yong Jae Lee UC Davis Announcements PS2 grades up on SmartSite PS2 stats: Mean: 80.15 Standard Dev: 22.77 Vote on piazza

More information

Robust Shape Retrieval Using Maximum Likelihood Theory

Robust Shape Retrieval Using Maximum Likelihood Theory Robust Shape Retrieval Using Maximum Likelihood Theory Naif Alajlan 1, Paul Fieguth 2, and Mohamed Kamel 1 1 PAMI Lab, E & CE Dept., UW, Waterloo, ON, N2L 3G1, Canada. naif, mkamel@pami.uwaterloo.ca 2

More information

Joint design of data analysis algorithms and user interface for video applications

Joint design of data analysis algorithms and user interface for video applications Joint design of data analysis algorithms and user interface for video applications Nebojsa Jojic Microsoft Research Sumit Basu Microsoft Research Nemanja Petrovic University of Illinois Brendan Frey University

More information

Content Based Image Retrieval

Content Based Image Retrieval Content Based Image Retrieval R. Venkatesh Babu Outline What is CBIR Approaches Features for content based image retrieval Global Local Hybrid Similarity measure Trtaditional Image Retrieval Traditional

More information

Cluster Analysis. Mu-Chun Su. Department of Computer Science and Information Engineering National Central University 2003/3/11 1

Cluster Analysis. Mu-Chun Su. Department of Computer Science and Information Engineering National Central University 2003/3/11 1 Cluster Analysis Mu-Chun Su Department of Computer Science and Information Engineering National Central University 2003/3/11 1 Introduction Cluster analysis is the formal study of algorithms and methods

More information

5. Feature Extraction from Images

5. Feature Extraction from Images 5. Feature Extraction from Images Aim of this Chapter: Learn the Basic Feature Extraction Methods for Images Main features: Color Texture Edges Wie funktioniert ein Mustererkennungssystem Test Data x i

More information

Edge and local feature detection - 2. Importance of edge detection in computer vision

Edge and local feature detection - 2. Importance of edge detection in computer vision Edge and local feature detection Gradient based edge detection Edge detection by function fitting Second derivative edge detectors Edge linking and the construction of the chain graph Edge and local feature

More information

Image Features: Local Descriptors. Sanja Fidler CSC420: Intro to Image Understanding 1/ 58

Image Features: Local Descriptors. Sanja Fidler CSC420: Intro to Image Understanding 1/ 58 Image Features: Local Descriptors Sanja Fidler CSC420: Intro to Image Understanding 1/ 58 [Source: K. Grauman] Sanja Fidler CSC420: Intro to Image Understanding 2/ 58 Local Features Detection: Identify

More information

Does everyone have an override code?

Does everyone have an override code? Does everyone have an override code? Project 1 due Friday 9pm Review of Filtering Filtering in frequency domain Can be faster than filtering in spatial domain (for large filters) Can help understand effect

More information

Grade 5: PA Academic Eligible Content and PA Common Core Crosswalk

Grade 5: PA Academic Eligible Content and PA Common Core Crosswalk Grade 5: PA Academic Eligible and PA Common Core Crosswalk Alignment of Eligible : More than Just The crosswalk below is designed to show the alignment between the PA Academic Standard Eligible and the

More information

MSA220 - Statistical Learning for Big Data

MSA220 - Statistical Learning for Big Data MSA220 - Statistical Learning for Big Data Lecture 13 Rebecka Jörnsten Mathematical Sciences University of Gothenburg and Chalmers University of Technology Clustering Explorative analysis - finding groups

More information

Practice Exam Sample Solutions

Practice Exam Sample Solutions CS 675 Computer Vision Instructor: Marc Pomplun Practice Exam Sample Solutions Note that in the actual exam, no calculators, no books, and no notes allowed. Question 1: out of points Question 2: out of

More information

Every Picture Tells a Story: Generating Sentences from Images

Every Picture Tells a Story: Generating Sentences from Images Every Picture Tells a Story: Generating Sentences from Images Ali Farhadi, Mohsen Hejrati, Mohammad Amin Sadeghi, Peter Young, Cyrus Rashtchian, Julia Hockenmaier, David Forsyth University of Illinois

More information

Enhanced Hemisphere Concept for Color Pixel Classification

Enhanced Hemisphere Concept for Color Pixel Classification 2016 International Conference on Multimedia Systems and Signal Processing Enhanced Hemisphere Concept for Color Pixel Classification Van Ng Graduate School of Information Sciences Tohoku University Sendai,

More information

A NOVEL FEATURE EXTRACTION METHOD BASED ON SEGMENTATION OVER EDGE FIELD FOR MULTIMEDIA INDEXING AND RETRIEVAL

A NOVEL FEATURE EXTRACTION METHOD BASED ON SEGMENTATION OVER EDGE FIELD FOR MULTIMEDIA INDEXING AND RETRIEVAL A NOVEL FEATURE EXTRACTION METHOD BASED ON SEGMENTATION OVER EDGE FIELD FOR MULTIMEDIA INDEXING AND RETRIEVAL Serkan Kiranyaz, Miguel Ferreira and Moncef Gabbouj Institute of Signal Processing, Tampere

More information

CS 534: Computer Vision Segmentation and Perceptual Grouping

CS 534: Computer Vision Segmentation and Perceptual Grouping CS 534: Computer Vision Segmentation and Perceptual Grouping Ahmed Elgammal Dept of Computer Science CS 534 Segmentation - 1 Outlines Mid-level vision What is segmentation Perceptual Grouping Segmentation

More information

TRANSPARENCY. Dan Stefanescu

TRANSPARENCY. Dan Stefanescu MASSACHUSETTS INSTITUTE OF TECHNOLOGY ARTIFICIAL INTELLIGENCE LABORATORY Working Paper 107 July 1975 Dan Stefanescu This report describes research done at the Artificial Intelligence Laboratory of the

More information

Digital Image Processing

Digital Image Processing Digital Image Processing Part 9: Representation and Description AASS Learning Systems Lab, Dep. Teknik Room T1209 (Fr, 11-12 o'clock) achim.lilienthal@oru.se Course Book Chapter 11 2011-05-17 Contents

More information

Eye Detection by Haar wavelets and cascaded Support Vector Machine

Eye Detection by Haar wavelets and cascaded Support Vector Machine Eye Detection by Haar wavelets and cascaded Support Vector Machine Vishal Agrawal B.Tech 4th Year Guide: Simant Dubey / Amitabha Mukherjee Dept of Computer Science and Engineering IIT Kanpur - 208 016

More information

Information Fusion Dr. B. K. Panigrahi

Information Fusion Dr. B. K. Panigrahi Information Fusion By Dr. B. K. Panigrahi Asst. Professor Department of Electrical Engineering IIT Delhi, New Delhi-110016 01/12/2007 1 Introduction Classification OUTLINE K-fold cross Validation Feature

More information

Sketchable Histograms of Oriented Gradients for Object Detection

Sketchable Histograms of Oriented Gradients for Object Detection Sketchable Histograms of Oriented Gradients for Object Detection No Author Given No Institute Given Abstract. In this paper we investigate a new representation approach for visual object recognition. The

More information

PITSCO Math Individualized Prescriptive Lessons (IPLs)

PITSCO Math Individualized Prescriptive Lessons (IPLs) Orientation Integers 10-10 Orientation I 20-10 Speaking Math Define common math vocabulary. Explore the four basic operations and their solutions. Form equations and expressions. 20-20 Place Value Define

More information

UNIVERSITY OF OSLO. Faculty of Mathematics and Natural Sciences

UNIVERSITY OF OSLO. Faculty of Mathematics and Natural Sciences UNIVERSITY OF OSLO Faculty of Mathematics and Natural Sciences Exam: INF 4300 / INF 9305 Digital image analysis Date: Thursday December 21, 2017 Exam hours: 09.00-13.00 (4 hours) Number of pages: 8 pages

More information

Previously. Part-based and local feature models for generic object recognition. Bag-of-words model 4/20/2011

Previously. Part-based and local feature models for generic object recognition. Bag-of-words model 4/20/2011 Previously Part-based and local feature models for generic object recognition Wed, April 20 UT-Austin Discriminative classifiers Boosting Nearest neighbors Support vector machines Useful for object recognition

More information

Partitioning Data. IRDS: Evaluation, Debugging, and Diagnostics. Cross-Validation. Cross-Validation for parameter tuning

Partitioning Data. IRDS: Evaluation, Debugging, and Diagnostics. Cross-Validation. Cross-Validation for parameter tuning Partitioning Data IRDS: Evaluation, Debugging, and Diagnostics Charles Sutton University of Edinburgh Training Validation Test Training : Running learning algorithms Validation : Tuning parameters of learning

More information

Visual words. Map high-dimensional descriptors to tokens/words by quantizing the feature space.

Visual words. Map high-dimensional descriptors to tokens/words by quantizing the feature space. Visual words Map high-dimensional descriptors to tokens/words by quantizing the feature space. Quantize via clustering; cluster centers are the visual words Word #2 Descriptor feature space Assign word

More information

COMPUTER AND ROBOT VISION

COMPUTER AND ROBOT VISION VOLUME COMPUTER AND ROBOT VISION Robert M. Haralick University of Washington Linda G. Shapiro University of Washington T V ADDISON-WESLEY PUBLISHING COMPANY Reading, Massachusetts Menlo Park, California

More information

UNIT 1: NUMBER LINES, INTERVALS, AND SETS

UNIT 1: NUMBER LINES, INTERVALS, AND SETS ALGEBRA II CURRICULUM OUTLINE 2011-2012 OVERVIEW: 1. Numbers, Lines, Intervals and Sets 2. Algebraic Manipulation: Rational Expressions and Exponents 3. Radicals and Radical Equations 4. Function Basics

More information

Pattern recognition. Classification/Clustering GW Chapter 12 (some concepts) Textures

Pattern recognition. Classification/Clustering GW Chapter 12 (some concepts) Textures Pattern recognition Classification/Clustering GW Chapter 12 (some concepts) Textures Patterns and pattern classes Pattern: arrangement of descriptors Descriptors: features Patten class: family of patterns

More information

Automatic Colorization of Grayscale Images

Automatic Colorization of Grayscale Images Automatic Colorization of Grayscale Images Austin Sousa Rasoul Kabirzadeh Patrick Blaes Department of Electrical Engineering, Stanford University 1 Introduction ere exists a wealth of photographic images,

More information

GRADE 4 MATH COMPETENCY STATEMENTS / PERFORMANCE INDICATORS

GRADE 4 MATH COMPETENCY STATEMENTS / PERFORMANCE INDICATORS Common Core State Standards Alignment Codes Everyday Math Strands & Goals Alignment Codes OA; Operations and Algebraic Thinking NBT; Number and Operations in Base Ten NOF; Number and Operations - Fractions

More information

(Refer Slide Time: 0:51)

(Refer Slide Time: 0:51) Introduction to Remote Sensing Dr. Arun K Saraf Department of Earth Sciences Indian Institute of Technology Roorkee Lecture 16 Image Classification Techniques Hello everyone welcome to 16th lecture in

More information

Montana City School GRADE 5

Montana City School GRADE 5 Montana City School GRADE 5 Montana Standard 1: Students engage in the mathematical processes of problem solving and reasoning, estimation, communication, connections and applications, and using appropriate

More information

Object Classification Problem

Object Classification Problem HIERARCHICAL OBJECT CATEGORIZATION" Gregory Griffin and Pietro Perona. Learning and Using Taxonomies For Fast Visual Categorization. CVPR 2008 Marcin Marszalek and Cordelia Schmid. Constructing Category

More information

An Experiment in Visual Clustering Using Star Glyph Displays

An Experiment in Visual Clustering Using Star Glyph Displays An Experiment in Visual Clustering Using Star Glyph Displays by Hanna Kazhamiaka A Research Paper presented to the University of Waterloo in partial fulfillment of the requirements for the degree of Master

More information

Visual Computing. Lecture 2 Visualization, Data, and Process

Visual Computing. Lecture 2 Visualization, Data, and Process Visual Computing Lecture 2 Visualization, Data, and Process Pipeline 1 High Level Visualization Process 1. 2. 3. 4. 5. Data Modeling Data Selection Data to Visual Mappings Scene Parameter Settings (View

More information

TRANSFORM FEATURES FOR TEXTURE CLASSIFICATION AND DISCRIMINATION IN LARGE IMAGE DATABASES

TRANSFORM FEATURES FOR TEXTURE CLASSIFICATION AND DISCRIMINATION IN LARGE IMAGE DATABASES TRANSFORM FEATURES FOR TEXTURE CLASSIFICATION AND DISCRIMINATION IN LARGE IMAGE DATABASES John R. Smith and Shih-Fu Chang Center for Telecommunications Research and Electrical Engineering Department Columbia

More information

Dimension Reduction CS534

Dimension Reduction CS534 Dimension Reduction CS534 Why dimension reduction? High dimensionality large number of features E.g., documents represented by thousands of words, millions of bigrams Images represented by thousands of

More information

A SYNOPTIC ACCOUNT FOR TEXTURE SEGMENTATION: FROM EDGE- TO REGION-BASED MECHANISMS

A SYNOPTIC ACCOUNT FOR TEXTURE SEGMENTATION: FROM EDGE- TO REGION-BASED MECHANISMS A SYNOPTIC ACCOUNT FOR TEXTURE SEGMENTATION: FROM EDGE- TO REGION-BASED MECHANISMS Enrico Giora and Clara Casco Department of General Psychology, University of Padua, Italy Abstract Edge-based energy models

More information

Scott Foresman Investigations in Number, Data, and Space Content Scope & Sequence

Scott Foresman Investigations in Number, Data, and Space Content Scope & Sequence Scott Foresman Investigations in Number, Data, and Space Content Scope & Sequence Correlated to Academic Language Notebooks The Language of Math Grade 4 Content Scope & Sequence Unit 1: Factors, Multiples,

More information

Wavelet Applications. Texture analysis&synthesis. Gloria Menegaz 1

Wavelet Applications. Texture analysis&synthesis. Gloria Menegaz 1 Wavelet Applications Texture analysis&synthesis Gloria Menegaz 1 Wavelet based IP Compression and Coding The good approximation properties of wavelets allow to represent reasonably smooth signals with

More information

Efficient Visual Coding: From Retina To V2

Efficient Visual Coding: From Retina To V2 Efficient Visual Coding: From Retina To V Honghao Shan Garrison Cottrell Computer Science and Engineering UCSD La Jolla, CA 9093-0404 shanhonghao@gmail.com, gary@ucsd.edu Abstract The human visual system

More information

Machine learning Pattern recognition. Classification/Clustering GW Chapter 12 (some concepts) Textures

Machine learning Pattern recognition. Classification/Clustering GW Chapter 12 (some concepts) Textures Machine learning Pattern recognition Classification/Clustering GW Chapter 12 (some concepts) Textures Patterns and pattern classes Pattern: arrangement of descriptors Descriptors: features Patten class:

More information

8 th Grade Pre Algebra Pacing Guide 1 st Nine Weeks

8 th Grade Pre Algebra Pacing Guide 1 st Nine Weeks 8 th Grade Pre Algebra Pacing Guide 1 st Nine Weeks MS Objective CCSS Standard I Can Statements Included in MS Framework + Included in Phase 1 infusion Included in Phase 2 infusion 1a. Define, classify,

More information

Ballston Spa Central School District The Common Core State Standards in Our Schools Fourth Grade Math

Ballston Spa Central School District The Common Core State Standards in Our Schools Fourth Grade Math 1 Ballston Spa Central School District The Common Core State s in Our Schools Fourth Grade Math Operations and Algebraic Thinking Use the four operations with whole numbers to solve problems 4.OA.1. Interpret

More information

Production of Video Images by Computer Controlled Cameras and Its Application to TV Conference System

Production of Video Images by Computer Controlled Cameras and Its Application to TV Conference System Proc. of IEEE Conference on Computer Vision and Pattern Recognition, vol.2, II-131 II-137, Dec. 2001. Production of Video Images by Computer Controlled Cameras and Its Application to TV Conference System

More information

Using the Forest to See the Trees: Context-based Object Recognition

Using the Forest to See the Trees: Context-based Object Recognition Using the Forest to See the Trees: Context-based Object Recognition Bill Freeman Joint work with Antonio Torralba and Kevin Murphy Computer Science and Artificial Intelligence Laboratory MIT A computer

More information

Filters (cont.) CS 554 Computer Vision Pinar Duygulu Bilkent University

Filters (cont.) CS 554 Computer Vision Pinar Duygulu Bilkent University Filters (cont.) CS 554 Computer Vision Pinar Duygulu Bilkent University Today s topics Image Formation Image filters in spatial domain Filter is a mathematical operation of a grid of numbers Smoothing,

More information

Lecture 3: Linear Classification

Lecture 3: Linear Classification Lecture 3: Linear Classification Roger Grosse 1 Introduction Last week, we saw an example of a learning task called regression. There, the goal was to predict a scalar-valued target from a set of features.

More information

Houghton Mifflin MATHEMATICS Level 1 correlated to NCTM Standard

Houghton Mifflin MATHEMATICS Level 1 correlated to NCTM Standard Number and Operations Standard Understand numbers, ways of representing numbers, relationships among numbers, and number systems count with understanding and recognize TE: 191A 195B, 191 195, 201B, 201

More information

4th grade Math (3rd grade CAP)

4th grade Math (3rd grade CAP) Davison Community Schools ADVISORY CURRICULUM COUNCIL Phase II, April 20, 2015 Julie Crockett, Matt Lobban 4th grade Math (3rd grade CAP) Course Essential Questions (from Phase I report): How do we multiply/divide

More information

University of Florida CISE department Gator Engineering. Visualization

University of Florida CISE department Gator Engineering. Visualization Visualization Dr. Sanjay Ranka Professor Computer and Information Science and Engineering University of Florida What is visualization? Visualization is the process of converting data (information) in to

More information

String distance for automatic image classification

String distance for automatic image classification String distance for automatic image classification Nguyen Hong Thinh*, Le Vu Ha*, Barat Cecile** and Ducottet Christophe** *University of Engineering and Technology, Vietnam National University of HaNoi,

More information

Perceived 3D metric (or Euclidean) shape is merely ambiguous, not systematically distorted

Perceived 3D metric (or Euclidean) shape is merely ambiguous, not systematically distorted Exp Brain Res (2013) 224:551 555 DOI 10.1007/s00221-012-3334-y RESEARCH ARTICLE Perceived 3D metric (or Euclidean) shape is merely ambiguous, not systematically distorted Young Lim Lee Mats Lind Geoffrey

More information

arxiv: v3 [cs.cv] 3 Oct 2012

arxiv: v3 [cs.cv] 3 Oct 2012 Combined Descriptors in Spatial Pyramid Domain for Image Classification Junlin Hu and Ping Guo arxiv:1210.0386v3 [cs.cv] 3 Oct 2012 Image Processing and Pattern Recognition Laboratory Beijing Normal University,

More information

CPSC 340: Machine Learning and Data Mining. Principal Component Analysis Fall 2016

CPSC 340: Machine Learning and Data Mining. Principal Component Analysis Fall 2016 CPSC 340: Machine Learning and Data Mining Principal Component Analysis Fall 2016 A2/Midterm: Admin Grades/solutions will be posted after class. Assignment 4: Posted, due November 14. Extra office hours:

More information

Texture. COS 429 Princeton University

Texture. COS 429 Princeton University Texture COS 429 Princeton University Texture What is a texture? Antonio Torralba Texture What is a texture? Antonio Torralba Texture What is a texture? Antonio Torralba Texture Texture is stochastic and

More information

Features Points. Andrea Torsello DAIS Università Ca Foscari via Torino 155, Mestre (VE)

Features Points. Andrea Torsello DAIS Università Ca Foscari via Torino 155, Mestre (VE) Features Points Andrea Torsello DAIS Università Ca Foscari via Torino 155, 30172 Mestre (VE) Finding Corners Edge detectors perform poorly at corners. Corners provide repeatable points for matching, so

More information

Preprocessing Short Lecture Notes cse352. Professor Anita Wasilewska

Preprocessing Short Lecture Notes cse352. Professor Anita Wasilewska Preprocessing Short Lecture Notes cse352 Professor Anita Wasilewska Data Preprocessing Why preprocess the data? Data cleaning Data integration and transformation Data reduction Discretization and concept

More information

GRAPHING BAYOUSIDE CLASSROOM DATA

GRAPHING BAYOUSIDE CLASSROOM DATA LUMCON S BAYOUSIDE CLASSROOM GRAPHING BAYOUSIDE CLASSROOM DATA Focus/Overview This activity allows students to answer questions about their environment using data collected during water sampling. Learning

More information

Last week. Multi-Frame Structure from Motion: Multi-View Stereo. Unknown camera viewpoints

Last week. Multi-Frame Structure from Motion: Multi-View Stereo. Unknown camera viewpoints Last week Multi-Frame Structure from Motion: Multi-View Stereo Unknown camera viewpoints Last week PCA Today Recognition Today Recognition Recognition problems What is it? Object detection Who is it? Recognizing

More information

An Introduction to Content Based Image Retrieval

An Introduction to Content Based Image Retrieval CHAPTER -1 An Introduction to Content Based Image Retrieval 1.1 Introduction With the advancement in internet and multimedia technologies, a huge amount of multimedia data in the form of audio, video and

More information

Parametric Texture Model based on Joint Statistics

Parametric Texture Model based on Joint Statistics Parametric Texture Model based on Joint Statistics Gowtham Bellala, Kumar Sricharan, Jayanth Srinivasa Department of Electrical Engineering, University of Michigan, Ann Arbor 1. INTRODUCTION Texture images

More information

Texture. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors

Texture. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors Texture The most fundamental question is: How can we measure texture, i.e., how can we quantitatively distinguish between different textures? Of course it is not enough to look at the intensity of individual

More information

CITS 4402 Computer Vision

CITS 4402 Computer Vision CITS 4402 Computer Vision A/Prof Ajmal Mian Adj/A/Prof Mehdi Ravanbakhsh, CEO at Mapizy (www.mapizy.com) and InFarm (www.infarm.io) Lecture 02 Binary Image Analysis Objectives Revision of image formation

More information

Statistical Pattern Recognition

Statistical Pattern Recognition Statistical Pattern Recognition Features and Feature Selection Hamid R. Rabiee Jafar Muhammadi Spring 2014 http://ce.sharif.edu/courses/92-93/2/ce725-2/ Agenda Features and Patterns The Curse of Size and

More information