
MISB ST 1202.1

STANDARD

Generalized Transformation Parameters

26 February 2015

1 Scope

This Standard (ST) describes a generalized method of transforming two-dimensional data (or points) from one coordinate system into a second two-dimensional coordinate system. This Generalized Transformation may be used for various image-to-image transformations, such as an affine transformation, by simply setting some parameters equal to zero. In addition, this Generalized Transformation may describe some homographic-like transformations.

This ST defines three items:

1) The different methods of implementation and the constraints that need to be enforced to maintain certain transformation relationships.
2) The mandatory method of uncertainty propagation to be implemented on systems where uncertainty information is needed.
3) The KLV Local Set (LS) that represents all the parameters for the Generalized Transformation.

2 References

2.1 Normative References

The following references, and the references contained therein, are normative.

[1] Edward M. Mikhail, James S. Bethel, and J. Chris McGlone. Introduction to Modern Photogrammetry. New York: John Wiley & Sons, Inc., 2001
[2] MISB ST 0107, Bit and Byte Order for Metadata in Motion Imagery Files and Streams, Feb 2014
[3] MISB ST 1010, Generalized Standard Deviation and Correlation Coefficient Metadata, Feb 2014
[4] MISB ST 1201, Floating Point to Integer Mapping, Feb 2014

2.2 Informative References

[5] MISB ST 1002, Range Image Metadata Set, Feb 2014
[6] NGA.STND.0017_3.0, Community Sensor Model (CSM) Technical Requirements Document (TRD) Version 3.0, Nov 2010
[7] MISB ST 1107, Metric Geopositioning Metadata Set, Feb 2014

26 February 2015 Motion Imagery Standards Board 1

3 Revision History

Revision | Date | Summary of Changes
ST 1202.1 | 2/26/2015 | Added requirements to clarify implementation of standard deviation values and correlation coefficients; added information on Fast Steering Mirrors; added a reference

4 Abbreviations and Acronyms

CCFPA    Combined Composite Focal Plane Array
CPT      Child-Parent Transformation
CSM TRD  Community Sensor Model Technical Requirements Documents
CT       Chipping Transformation
DPIT     Default Pixel-Space to Image-Space Transformation
FLOAT    IEEE Single precision floating point number
FLP      Floating Length Pack
FPA      Focal Plane Array
FSM      Fast Steering Mirror
INT      IEEE Integer
KLV      Key-Length-Value
LS       Local Set
MISB     Motion Imagery Standards Board
NDT      No Defined Transformation
OT       Optical Transformation

5 Introduction

This standard defines a Generalized Transformation based on the foundational two-dimensional projective transformation. From the Generalized Transformation, this ST defines four types of commonly used derived transformations, and a methodology for extending support for additional derived transformations.

All derived transformations assume that when a parameter is not given, it is equal to zero. This assumption simplifies the implementation of transformations: if all parameters are assumed to be equal to zero, then the resulting transformation returns an output identically equal to its input.

In addition, transformation data may be accompanied by uncertainty information that describes the quality of the transformation; however, this is not required. This ST defines a method to describe the standard deviation and correlation coefficient information that accompanies the transformation. To prevent incorrect error propagation, all constraints that describe the individual transformation must be accounted for when invoking the stochastic model.

Finally, a LS (Local Set) is defined that contains the transformation parameters necessary to implement the Generalized Transformation.
This LS maps the 16-byte Universal Keys assigned to each individual parameter of the Generalized Transformation to 1-byte Tags for efficiency.

The transformation data provides the parameters for mapping between two-dimensional spaces. In order to use this transformation, it must be associated with one of the two-dimensional spaces, which provides the context of the transformation. To provide this context, this LS must be used within a parent KLV set; that is, this LS is never used standalone or without being embedded in another LS.

6 Generalized Transformation

The Generalized Transformation describes a class of two-dimensional projective transformations intended for image-space coordinate transformations. The two-dimensional projective transformation is the foundation of the Generalized Transformation. The purpose of this transformation is to define a mathematical mapping from points on one plane to points on another plane. For this reason, a system of homogeneous coordinates is used. The following two equations provide a mathematical description of the plane-to-plane projective transformation relating input to output image coordinates.

x_out = [(1 - A) x_in + B y_in + C] / [G x_in + H y_in + 1]   (Equation 1)

y_out = [D x_in + (1 - E) y_in + F] / [G x_in + H y_in + 1]   (Equation 2)

The form of Equation 1 and Equation 2 is slightly different from how a projective transformation is typically described, where the terms (1 - A) and (1 - E) are normally expressed as just A and E respectively. This modification allows all values in the expression to be equal to zero without any need for special cases. With all constants (A through H) equal to zero, the transformation yields coordinates identically equal to the input. This is advantageous because the transformation can always be executed regardless of the input data (e.g. if one or more of the parameters are zero). As this transformation is a projective transformation, the inverse may be written as a function of the original parameters. This, again, is advantageous because only one set of parameters is needed to define both the forward transformation and the inverse transformation. The inverse of Equation 1 and Equation 2, derived through a series of algebraic steps, results in Equation 3 and Equation 4 respectively.

x_in = [((1 - E) - F H) x_out + (C H - B) y_out + (B F - C (1 - E))] / [(D H - G (1 - E)) x_out + (G B - H (1 - A)) y_out + ((1 - A)(1 - E) - D B)]   (Equation 3)

y_in = [(G F - D) x_out + ((1 - A) - C G) y_out + (D C - F (1 - A))] / [(D H - G (1 - E)) x_out + (G B - H (1 - A)) y_out + ((1 - A)(1 - E) - D B)]   (Equation 4)

Equations 1-4 define a number of two-dimensional projective transformations and their inverses. In many imagery applications, a two-dimensional affine transformation requires six parameters. This set of six parameters is a subset of the eight parameters described above. The Appendix contains various formulations of projective transformations and the constraints needed to create various standard transformations.

6.1 Transformation Types

The derived transformations defined in this document are identified in Table 1 and are represented by their enumeration value in the Generalized Transformation Local Set (LS).

Table 1: Derived Transformations
Enumeration Value | Description | Units
0 | Other - No Defined Transformation (NDT) | None
1 | Chipping Transformation (CT) | Pixels
2 | Child-Parent Transformation (CPT) | Millimeters
3 | Default Pixel-Space to Image-Space Transformation (DPIT) | Millimeters
4 | Optical Transformation (OT) | Millimeters

Other - No Defined Transformation

An enumeration value equal to zero (0) implies the transformation type is not defined; however, this does not prevent the user from exploiting the information contained within the Generalized Transformation LS.

Chipping Transformation

An enumeration value equal to one (1) signifies the transmitted image is a chip (or sub-region) of a larger image. Examples of a chipped image are: 1) a sub-region of an image that may be digitally enlarged (zoom); 2) a sub-region of an image selected to reduce bandwidth, or to provide higher quality within the sub-region. Further information on this transformation is given in the Appendix.

Child-Parent Transformation

An enumeration value equal to two (2) indicates the transformation of a child focal plane array (FPA) to its parent FPA (e.g. the example defined in MISB ST 1002[5]). This CPT is a plane-to-plane transformation used to transform between FPAs in image space. Further description of this transformation is given in the Appendix.

Default Pixel-Space to Image-Space Transformation

An enumeration value equal to three (3) indicates the default pixel-space to image-space transformation. Further information on this transformation is given in the Appendix.

Optical Transformation

An enumeration value equal to four (4) indicates the pixel data of an image is a translation, rotation, scale or skew from the originating FPA to the final optical focal plane. This may occur when the originating FPA is a subset of an entire optical focal plane. An example is a Combined Composite Focal Plane Array (CCFPA) sensor, where multiple focal plane array detectors combine to image a single optical focal plane. This optical transformation is a plane-to-plane transformation from the FPA to the optical image plane.

In addition to providing a transformation from FPA to CCFPA, the optical transformation may also support the effects of coudé paths or Fast Steering Mirrors (FSM). Coudé path and FSM effects may mimic those of the transformation between FPA and CCFPA. They may also differ, however, by translating, rotating, scaling or skewing the optical image plane. This document does not provide a description of how to model coudé path and FSM; however, through analytics the effects of coudé path and FSM can be modeled through the optical transformation. Further description of this transformation is given in the Appendix.

Extensibility for New Transformations

Additional derived transformations may be added to this ST to support new capabilities.

Requirement
Additional derived transformations supported by the MISB shall be added to MISB ST 1202 Table 1 along with supporting information regarding type and use.

6.2 Uncertainty Propagation

In many applications, knowledge of the uncertainty of all estimated values is critical to understanding the performance of a system.
Thus, it is desirable to provide a means to propagate the uncertainty information of the transformation parameters. The Generalized Transformation LS utilizes the format described in MISB ST 1010[3] for transmitting the standard deviation and correlation coefficient information.

Requirement
When uncertainty information of the Generalized Transformation parameters is available, uncertainty information shall be represented by a Standard Deviation and Correlation Coefficient Floating Length Pack (FLP) in accordance with MISB ST 1010[3].

The Standard Deviation and Correlation Coefficient FLP, as defined in MISB ST 1010, requires the parent LS (e.g. the Generalized Transformation LS in this case) to define the order of parameters to associate uncertainty information.

Requirement(s)
- The matrix size in the Standard Deviation and Correlation Coefficient FLP shall be eight (8) to represent all the parameters in the Generalized Transformation.
- The Standard Deviation and Correlation Coefficient FLP shall order its entries for the eight elements of the Generalized Transformation LS in the same order as the first eight parameters of MISB ST 1202 Table 2.
- Standard deviation values shall be represented by four (4) byte floats.
- Correlation coefficient values shall be mapped into two (2) byte integers using IMAPB(-1.0, 1.0, 2) (see MISB ST 1201[4]).

The projective transformation is the general case of the two-dimensional to two-dimensional transformation, and no constraints exist on the uncertainty propagation. Further information on how to handle the uncertainty propagation for other transformations is addressed in the Appendix.

6.3 Concatenation of Transformations

A benefit of projective transformations is that a combination of projective transformations is itself a projective transformation; however, the order in which these transformations are performed is critical. In the case of determining these transformations for sensor modeling purposes, which assumes an image-to-ground sequence, the order is defined by the following.

Requirement
Transformations shall be performed in the following order: 1) chipping, 2) child-parent, 3) default pixel-space to image-space and 4) image-space coordinates imaged on the focal plane into the optical image-space coordinate system.

The chipping or digital zoom transformation is the first transformation to be performed.
This transformation transforms the image coordinates of the chipped or digitally zoomed image into the original image coordinate system. This transformation is described in the Appendix.

The Child-Parent transformation is the second transformation to be performed. This transformation transforms the original image coordinates above of the child image into the image coordinate system of a parent image. This transformation is described in the Appendix.

The default pixel-space to image-space transformation is the third transformation to be performed. This transforms the pixel coordinates into units of millimeters and moves the origin to the center of the image. This transformation is described in the Appendix.

The fourth and final transformation transforms the image-space coordinates imaged on the focal plane into the optical image-space coordinate system. This transformation is described in the Appendix.

The ground-to-image projection sequence is the inverse of the image-to-ground sequence. Uncertainty information may accompany all of the above transformations.

6.4 Generalized Transformation Local Set

The Generalized Transformation LS as defined in this ST has the following requirements:

Requirement(s)
- All metadata shall be expressed in accordance with MISB ST 0107[2].
- The version of MISB ST 1202 utilized shall always be sent in the Generalized Transformation LS.
- When the enumeration value corresponding to the transformation type is not populated in the Generalized Transformation LS, the value shall be assumed to be equal to zero, indicating No Defined Transformation.
- The MISB ST 1202 Local Set shall be embedded within a LS that provides context for the transformation.

Tags and Keys within the Generalized Transformation LS

Table 2 defines the Generalized Transformation LS data elements and data order.

Table 2: Generalized Transformation LS
Local Set Key: 06.0E.2B... (CRC 40498) | Name: Generalized Transformation LS

Constituent Elements
Tag ID | Key (CRC) | Key Name | Symbol / Notes | Units / Range | Format | Length (bytes)
1 | (CRC 39709) | x Equation Numerator - x factor | A in Equation 1 | N/A | FLOAT | 4
2 | (CRC 49741) | x Equation Numerator - y factor | B in Equation 1 | N/A | FLOAT | 4
3 | (CRC 62845) | x Equation Numerator - Constant factor | C in Equation 1 | N/A | FLOAT | 4
4 | (CRC 28909) | y Equation Numerator - x factor | D in Equation 2 | N/A | FLOAT | 4
5 | (CRC 18397) | y Equation Numerator - y factor | E in Equation 2 | N/A | FLOAT | 4
6 | (CRC 7821) | y Equation Numerator - Constant factor | F in Equation 2 | N/A | FLOAT | 4
7 | (CRC 10685) | Denominator - x factor | G in Equation 1 and Equation 2 | N/A | FLOAT | 4
8 | (CRC 1420) | Denominator - y factor | H in Equation 1 and Equation 2 | N/A | FLOAT | 4
9 | 06.0E.2B... (CRC 64882) | Standard Deviation and Correlation Coefficient FLP | This Key defined in MISB ST 1010[3] | N/A | N/A | N/A
10 | (CRC 56368) | Document Version | document_version | [0 255] | BER-OID | 1
11 | (CRC 3109) | Transformation Enumeration | Transformation Type defined in Table 1 | [0 255] | UINT8 | 1

7 Appendix

This appendix provides further details on the mapping of the parameters in the unique transformations represented within this ST. These transformation types, their inverses, and the uncertainty propagation are given. As a general rule, the uncertainty propagation is defined as if all eight parameters are being used. With this assumption, special cases are not needed in the algorithm or in usage. It is the responsibility of the data provider to populate the uncertainty information correctly in order to properly represent the uncertainties in the transformation.

7.1 Generalized Transformation LS

The Generalized Transformation Local Set supports a number of transformation types that may be needed in the development of a sensor model. One transformation, the default pixel-space to image-space transformation (enumeration value = 3), is performed on all data. The remaining transformation types are performed according to the needs of the dataset; however, a specific ordering of these transformations is mandatory.
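As an illustration of Equations 1 through 4 (not part of the standard; the parameter values below are invented, and the function names are hypothetical), the following sketch implements the forward Generalized Transformation and its closed-form inverse, confirming that all-zero parameters yield the identity mapping:

```python
def forward(x_in, y_in, A=0.0, B=0.0, C=0.0, D=0.0, E=0.0, F=0.0, G=0.0, H=0.0):
    """Equations 1 and 2: map input-plane coordinates to the output plane."""
    w = G * x_in + H * y_in + 1.0  # homogeneous denominator
    x_out = ((1.0 - A) * x_in + B * y_in + C) / w
    y_out = (D * x_in + (1.0 - E) * y_in + F) / w
    return x_out, y_out

def inverse(x_out, y_out, A=0.0, B=0.0, C=0.0, D=0.0, E=0.0, F=0.0, G=0.0, H=0.0):
    """Equations 3 and 4: recover the input coordinates from the same eight parameters."""
    den = ((D * H - G * (1.0 - E)) * x_out
           + (G * B - H * (1.0 - A)) * y_out
           + ((1.0 - A) * (1.0 - E) - D * B))
    x_in = (((1.0 - E) - F * H) * x_out + (C * H - B) * y_out
            + (B * F - C * (1.0 - E))) / den
    y_in = ((G * F - D) * x_out + ((1.0 - A) - C * G) * y_out
            + (D * C - F * (1.0 - A))) / den
    return x_in, y_in

# All parameters defaulting to zero yield the identity, as the standard notes.
assert forward(3.5, -2.0) == (3.5, -2.0)

# Round trip with arbitrary, purely illustrative parameters.
params = dict(A=0.1, B=0.02, C=5.0, D=-0.01, E=0.05, F=-3.0, G=1e-4, H=2e-4)
x, y = forward(100.0, 50.0, **params)
xr, yr = inverse(x, y, **params)
assert abs(xr - 100.0) < 1e-9 and abs(yr - 50.0) < 1e-9
```

Because the inverse is expressed directly in the forward parameters (the adjugate of the underlying homography matrix), a single parameter set serves both directions, as the standard observes.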

The following four subsections describe these transformations.

Chipping Transformation (CT)

The chipping transformation (enumeration value = 1) is utilized for image chipping and for a special subset of image chipping known as digital zoom. The chipping transformation is performed in the pixel coordinate system defined by the Community Sensor Model (CSM) Technical Requirements Document (TRD)[6] (e.g. line and sample (or row and column) measured from the upper left hand corner). This is shown in Figure 1 and Figure 2.

Figure 1: Pixel Coordinate System per CSM TRD

Figure 2: Pixel Coordinate System in an Image

The general form of the chipping transformation is given in Equation 5. A chipped or zoomed image is a sub-region of a larger image without rotation, as illustrated in Figure 3. The transformations needed for executing a sensor model must transform the chipped image coordinates into the original image coordinate space (i.e. the location of the original pixels must be known).

Figure 3: Example Chipping Transformation

[L_O]   [1/sf   0  ] [L_C]   [L_T - (1/sf)(H_C/2)]
[S_O] = [ 0   1/sf ] [S_C] + [S_T - (1/sf)(W_C/2)]   (Equation 5)

The transformation parameters for chipping are computed from a combination of parameters, described below.

A = 1 - 1/sf   (Equation 6)
B = 0   (Equation 7)
C = L_T - (1 - A)(H_C/2)   (Equation 8)
D = 0   (Equation 9)
E = 1 - 1/sf   (Equation 10)
F = S_T - (1 - E)(W_C/2)   (Equation 11)
G = 0   (Equation 12)
H = 0   (Equation 13)

The translation values, L_T and S_T, in Equation 8 and Equation 11 describe the location of the center of the chipped image within the original image. The value sf is the scale factor used to scale the image; it is assumed that sf is applied equally to both line and sample. The variables L and S describe the line and sample coordinates, respectively, of the point of interest. In Equation 5, the subscript O refers to the original image coordinates and the subscript C refers to the chipped image coordinates. Finally, the variables H_C and W_C are the chipped image height and width, respectively.

A special case of the chipping transformation is a Digital Zoom of the original image. A Digital Zoom uses the center region of the original image and produces a new image with new coordinates and the same dimensions as the original image, as illustrated in Figure 4. For this special case the last terms of Equation 5 can be computed from the size of the original image, as shown in Equation 14.

Figure 4: Digital Zoom Transformation

[L_O]   [1/sf   0  ] [L_C]   [(H_O/2)(1 - 1/sf)]
[S_O] = [ 0   1/sf ] [S_C] + [(W_O/2)(1 - 1/sf)]   (Equation 14)

The transformation parameters for digital zoom are computed from a combination of parameters, described below.

A = 1 - 1/sf   (Equation 15)
B = 0   (Equation 16)
C = (H_O/2)(1 - 1/sf)   (Equation 17)

D = 0   (Equation 18)
E = 1 - 1/sf   (Equation 19)
F = (W_O/2)(1 - 1/sf)   (Equation 20)
G = 0   (Equation 21)
H = 0   (Equation 22)

The value sf is the scale value used to apply a digital zoom to an image; for example, for a 2X digital zoom, sf = 2. It is assumed sf is applied equally to both line and sample. The variables L and S describe the line and sample coordinates, respectively, of the point of interest. The subscript O refers to the original image coordinates and the subscript C refers to the chipped image coordinates. Finally, the variables H_O and W_O are the original image height and width, respectively.

The chipping transformation only produces rescaled and translated images. The parameters that describe the chipping transformation are assumed to be known, needing no uncertainty information; because of this, there is typically no stochastic model that accompanies this transformation. The values defined in Equation 15 through Equation 22, or Equation 6 through Equation 13, can be used to define the inverse transformation using Equation 3 and Equation 4.

Child-Parent Transformation (CPT)

The Child-Parent Transformation (enumeration value = 2) is used in transforming a child focal plane array to its parent focal plane array. These two arrays are related within multiple sensors. An example of this is a co-boresighted sensor system with sensors contained within the same turret. In this formulation, one focal plane must be chosen as the parent focal plane; this focal plane is what metadata, such as photogrammetry metadata, is in reference to. The child focal plane is the image being transformed into the parent sensor's coordinate system. The transformation can include rotation, translation and scaling. This is done by applying an eight parameter transformation via the Generalized Transformation described in Equation 1 and Equation 2, where the child image coordinates are represented by the "in" subscripts, the parent image coordinates are represented by the "out" subscripts, the variables L_in and S_in describe the line and sample coordinates in the child image, and the variables L_out and S_out describe the line and sample coordinates in the parent image.

L_out = [(1 - A) L_in + B S_in + C] / [G L_in + H S_in + 1]   (Equation 23)

S_out = [D L_in + (1 - E) S_in + F] / [G L_in + H S_in + 1]   (Equation 24)

The CPT may be inserted into the parent LS invoking this transformation. The CPT does not require any unique mapping into the metadata stream. The transformation values in Equation 23 and Equation 24 can be used to define the inverse transformation in Equation 3 and Equation 4.

Default Pixel-Space to Image-Space Transformation (DPIT)

The default pixel-space to image-space transformation has two representations: the first is the CSM TRD defined approach for motion imagery; the second is a generalized approach for other imagery modalities. The definitions for these cases are given in the following two subsections.

CSM TRD Default Pixel to Image-Space Transformation

For motion imagery, the default transformation (enumeration value = 3) is the assumed transformation in constructing a CSM compliant sensor model of a full image. That is, the full focal plane array is transmitted and represented by the metadata stream. This is the transformation assumed to be contained within MISB ST 1107[7] that converts the pixel coordinates into the image coordinates for the sensor model. This transformation is defined by Equation 25.

[x]   [  0    p2m_x] [L - H/2]
[y] = [p2m_y    0  ] [S - W/2]   (Equation 25)

The values p2m_x and p2m_y are the dimensions given to each individual pixel; these pixels may, or may not, be square. The variables L and S describe the line and sample pixel coordinates, respectively, of the point of interest. The variables H and W are the full image height and width, respectively. Finally, the variables x and y are the image coordinates.

This transformation may be inserted into the parent LS invoking the default transformation; however, it is not needed because it is assumed to be contained within MISB ST 1107. If it is present in the metadata stream, the following mapping is applied:

A = 1   (Equation 26)
B = p2m_x   (Equation 27)
C = -p2m_x (W/2)   (Equation 28)

D = p2m_y   (Equation 29)
E = 1   (Equation 30)
F = -p2m_y (H/2)   (Equation 31)
G = 0   (Equation 32)
H = 0   (Equation 33)

As this is the default transformation and considered a known constant, there is typically not a stochastic model that accompanies it. The values defined in Equation 25 through Equation 33 may be used to define the inverse transformation using Equation 3 and Equation 4.

Generalized Pixel to Image-Space Transformation

For other imagery modalities not using the CSM TRD defined pixel-space to image-space transformation, the transformation (enumeration value = 3) is the assumed generalized transformation between the pixel image coordinate system and the image coordinate system. This is accomplished by applying the Generalized Transformation described in Equation 34 and Equation 35, where the pixel image coordinates are represented by the "in" subscripts, and the image coordinates are represented by the "out" subscripts. This transformation was defined in the main body of the text, but is repeated below for reference.

x_out = [(1 - A) L_in + B S_in + C] / [G L_in + H S_in + 1]   (Equation 34)

y_out = [D L_in + (1 - E) S_in + F] / [G L_in + H S_in + 1]   (Equation 35)

The generalized default pixel-space to image-space transformation introduces no new variables. It uses the variables L_in and S_in to describe the line and sample coordinates in the pixel image, and the variables x_out and y_out to describe the x and y coordinates in the image coordinate system. The transformation values in Equation 34 and Equation 35 may be used to define the inverse transformation in Equation 3 and Equation 4.

Optical Transformation (OT)

The optical transformation (enumeration value = 4) is the assumed transformation used in transforming an FPA to a CCFPA, or the effects of coudé path and FSM. These two arrays are optically related within one sensor.
The relationship occurs when the pixel data of the image is a translation, rotation, scale or skew from the optical focal plane. A transformation must be done to transform the focal plane array to the optical focal plane array. Similar to the CPT, the two arrays consist of an "out" and an "in" array: the optical focal plane array is considered the "out" array, while the originating FPA is considered the "in" array. The eight parameter transformation via the Generalized Transformation described in Equation 1 and Equation 2 is applied, and is repeated below for reference.

x_out = [(1 - A) x_in + B y_in + C] / [G x_in + H y_in + 1]   (Equation 36)

y_out = [D x_in + (1 - E) y_in + F] / [G x_in + H y_in + 1]   (Equation 37)

The OT introduces no new variables. It uses the variables x_in and y_in to describe the originating FPA image coordinates, and the variables x_out and y_out to describe the optical image coordinates. The OT may be inserted into the parent LS invoking the optical transformation. The OT does not require any unique mapping into the metadata stream.

The OT above is the Generalized Transformation described in the main body of this document. Uncertainty propagation for the OT is the same as for the Generalized Transformation, as described in depth in Section 6.2. The values defined in Equation 36 and Equation 37 may be used to define the inverse transformation in Equation 3 and Equation 4.
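As a worked illustration of the Appendix's chipping case (Equations 6 through 13), the sketch below computes the eight parameters for a chip and maps a chipped coordinate back into the original image. The image dimensions and chip location are invented for illustration, and the helper names are not from the standard.

```python
def chipping_params(L_T, S_T, sf, H_C, W_C):
    """Equations 6-13: eight parameters for a chip centered at (L_T, S_T)
    in the original image, scaled by sf, with chip height H_C and width W_C."""
    A = 1.0 - 1.0 / sf                 # Equation 6
    E = 1.0 - 1.0 / sf                 # Equation 10
    C = L_T - (1.0 - A) * H_C / 2.0    # Equation 8
    F = S_T - (1.0 - E) * W_C / 2.0    # Equation 11
    return dict(A=A, B=0.0, C=C, D=0.0, E=E, F=F, G=0.0, H=0.0)

def to_original(L_C, S_C, p):
    """Equations 1-2 with B = D = G = H = 0 (scale and translation only)."""
    L_O = (1.0 - p["A"]) * L_C + p["C"]
    S_O = (1.0 - p["E"]) * S_C + p["F"]
    return L_O, S_O

# A 2X digital-zoom chip whose center sits at line 540, sample 960 of the
# original image; the chip itself is 1080 x 1920 pixels (illustrative numbers).
p = chipping_params(L_T=540.0, S_T=960.0, sf=2.0, H_C=1080.0, W_C=1920.0)

# The center pixel of the chip maps back to the chip's center in the original.
assert to_original(540.0, 960.0, p) == (540.0, 960.0)
```

Because B, D, G and H are all zero, the denominator of Equations 1-2 is one and the mapping reduces to the pure scale-plus-translation of Equation 5; the inverse follows from Equations 3 and 4 with the same parameter set.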


More information

CS 231A Computer Vision (Winter 2015) Problem Set 2

CS 231A Computer Vision (Winter 2015) Problem Set 2 CS 231A Computer Vision (Winter 2015) Problem Set 2 Due Feb 9 th 2015 11:59pm 1 Fundamental Matrix (20 points) In this question, you will explore some properties of fundamental matrix and derive a minimal

More information

CV: 3D to 2D mathematics. Perspective transformation; camera calibration; stereo computation; and more

CV: 3D to 2D mathematics. Perspective transformation; camera calibration; stereo computation; and more CV: 3D to 2D mathematics Perspective transformation; camera calibration; stereo computation; and more Roadmap of topics n Review perspective transformation n Camera calibration n Stereo methods n Structured

More information

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical

More information

1 Scope MISB EG Engineering Guideline. 3 September Video Moving Target Indicator Local Data Set

1 Scope MISB EG Engineering Guideline. 3 September Video Moving Target Indicator Local Data Set MISB EG 0903.0 Engineering Guideline 3 September 2009 Video Moving Target Indicator Local Data Set 1 Scope This Engineering Guideline (EG) defines a Local Data Set (LDS) that may be used to deliver Video

More information

Perspective Projection in Homogeneous Coordinates

Perspective Projection in Homogeneous Coordinates Perspective Projection in Homogeneous Coordinates Carlo Tomasi If standard Cartesian coordinates are used, a rigid transformation takes the form X = R(X t) and the equations of perspective projection are

More information

Rigid Body Motion and Image Formation. Jana Kosecka, CS 482

Rigid Body Motion and Image Formation. Jana Kosecka, CS 482 Rigid Body Motion and Image Formation Jana Kosecka, CS 482 A free vector is defined by a pair of points : Coordinates of the vector : 1 3D Rotation of Points Euler angles Rotation Matrices in 3D 3 by 3

More information

MISB ST 1101 STANDARD. 23 October STANAG 4586 Control of UAS Motion Imagery Payloads. 1 Scope. 2 References. 2.1 Normative References

MISB ST 1101 STANDARD. 23 October STANAG 4586 Control of UAS Motion Imagery Payloads. 1 Scope. 2 References. 2.1 Normative References MISB ST 1101 STANDARD STANAG 4586 Control of UAS Motion Imagery Payloads 23 October 2014 1 Scope This Standard (ST) provides guidance for the control of Motion Imagery (MI) payloads on a platform, such

More information

CIS 580, Machine Perception, Spring 2015 Homework 1 Due: :59AM

CIS 580, Machine Perception, Spring 2015 Homework 1 Due: :59AM CIS 580, Machine Perception, Spring 2015 Homework 1 Due: 2015.02.09. 11:59AM Instructions. Submit your answers in PDF form to Canvas. This is an individual assignment. 1 Camera Model, Focal Length and

More information

Hand-Eye Calibration from Image Derivatives

Hand-Eye Calibration from Image Derivatives Hand-Eye Calibration from Image Derivatives Abstract In this paper it is shown how to perform hand-eye calibration using only the normal flow field and knowledge about the motion of the hand. The proposed

More information

MONO-IMAGE INTERSECTION FOR ORTHOIMAGE REVISION

MONO-IMAGE INTERSECTION FOR ORTHOIMAGE REVISION MONO-IMAGE INTERSECTION FOR ORTHOIMAGE REVISION Mohamed Ibrahim Zahran Associate Professor of Surveying and Photogrammetry Faculty of Engineering at Shoubra, Benha University ABSTRACT This research addresses

More information

Multitemporal Geometric Distortion Correction Utilizing the Affine Transformation

Multitemporal Geometric Distortion Correction Utilizing the Affine Transformation Purdue University Purdue e-pubs LARS Symposia Laboratory for Applications of Remote Sensing 10-1-1973 Multitemporal Geometric Distortion Correction Utilizing the Affine Transformation R. A. Emmert Lawrence

More information

MISB RP November Security Metadata Universal Set for Digital Motion Imagery. 1. Scope. 2. References

MISB RP November Security Metadata Universal Set for Digital Motion Imagery. 1. Scope. 2. References Motion Imagery Standards Board Recommended Practice: Security Metadata Universal Set for Digital Motion Imagery MISB RP 0102.2 25 November 2003 1. Scope This Recommended Practice (RP) describes the use

More information

THE COMPUTER MODELLING OF GLUING FLAT IMAGES ALGORITHMS. Alekseí Yu. Chekunov. 1. Introduction

THE COMPUTER MODELLING OF GLUING FLAT IMAGES ALGORITHMS. Alekseí Yu. Chekunov. 1. Introduction MATEMATIČKI VESNIK MATEMATIQKI VESNIK 69, 1 (2017), 12 22 March 2017 research paper originalni nauqni rad THE COMPUTER MODELLING OF GLUING FLAT IMAGES ALGORITHMS Alekseí Yu. Chekunov Abstract. In this

More information

MISB ST STANDARD. Timestamps for Class 1/Class 2 Motion Imagery. 25 February Scope. 2 References

MISB ST STANDARD. Timestamps for Class 1/Class 2 Motion Imagery. 25 February Scope. 2 References MISB ST 0604.4 STANDARD Timestamps for Class 1/Class 2 Motion Imagery 25 February 2016 1 Scope The MISP mandates that a Precision Time Stamp be inserted into all Class 0/1/2 Motion Imagery. This standard

More information

Camera Model and Calibration

Camera Model and Calibration Camera Model and Calibration Lecture-10 Camera Calibration Determine extrinsic and intrinsic parameters of camera Extrinsic 3D location and orientation of camera Intrinsic Focal length The size of the

More information

Computer Graphics Hands-on

Computer Graphics Hands-on Computer Graphics Hands-on Two-Dimensional Transformations Objectives Visualize the fundamental 2D geometric operations translation, rotation about the origin, and scale about the origin Learn how to compose

More information

To Do. Outline. Translation. Homogeneous Coordinates. Foundations of Computer Graphics. Representation of Points (4-Vectors) Start doing HW 1

To Do. Outline. Translation. Homogeneous Coordinates. Foundations of Computer Graphics. Representation of Points (4-Vectors) Start doing HW 1 Foundations of Computer Graphics Homogeneous Coordinates Start doing HW 1 To Do Specifics of HW 1 Last lecture covered basic material on transformations in 2D Likely need this lecture to understand full

More information

THE COMPUTER MODELLING OF GLUING FLAT IMAGES ALGORITHMS. Alekseí Yu. Chekunov. 1. Introduction

THE COMPUTER MODELLING OF GLUING FLAT IMAGES ALGORITHMS. Alekseí Yu. Chekunov. 1. Introduction MATEMATIQKI VESNIK Corrected proof Available online 01.10.2016 originalni nauqni rad research paper THE COMPUTER MODELLING OF GLUING FLAT IMAGES ALGORITHMS Alekseí Yu. Chekunov Abstract. In this paper

More information

MISB Standard Standard. 03 September Security Metadata Universal and Local Sets for Digital Motion Imagery. 1. Scope. 2.

MISB Standard Standard. 03 September Security Metadata Universal and Local Sets for Digital Motion Imagery. 1. Scope. 2. MISB Standard 0102.7 Standard Security Metadata Universal and Local Sets for Digital Motion Imagery 03 September 2009 1. Scope This Standard describes the use of security metadata in MPEG-2 digital motion

More information

Camera Calibration. Schedule. Jesus J Caban. Note: You have until next Monday to let me know. ! Today:! Camera calibration

Camera Calibration. Schedule. Jesus J Caban. Note: You have until next Monday to let me know. ! Today:! Camera calibration Camera Calibration Jesus J Caban Schedule! Today:! Camera calibration! Wednesday:! Lecture: Motion & Optical Flow! Monday:! Lecture: Medical Imaging! Final presentations:! Nov 29 th : W. Griffin! Dec 1

More information

Machine vision. Summary # 11: Stereo vision and epipolar geometry. u l = λx. v l = λy

Machine vision. Summary # 11: Stereo vision and epipolar geometry. u l = λx. v l = λy 1 Machine vision Summary # 11: Stereo vision and epipolar geometry STEREO VISION The goal of stereo vision is to use two cameras to capture 3D scenes. There are two important problems in stereo vision:

More information

Camera Models and Image Formation. Srikumar Ramalingam School of Computing University of Utah

Camera Models and Image Formation. Srikumar Ramalingam School of Computing University of Utah Camera Models and Image Formation Srikumar Ramalingam School of Computing University of Utah srikumar@cs.utah.edu Reference Most slides are adapted from the following notes: Some lecture notes on geometric

More information

Advanced Encryption Standard and Modes of Operation. Foundations of Cryptography - AES pp. 1 / 50

Advanced Encryption Standard and Modes of Operation. Foundations of Cryptography - AES pp. 1 / 50 Advanced Encryption Standard and Modes of Operation Foundations of Cryptography - AES pp. 1 / 50 AES Advanced Encryption Standard (AES) is a symmetric cryptographic algorithm AES has been originally requested

More information

Midterm Exam Solutions

Midterm Exam Solutions Midterm Exam Solutions Computer Vision (J. Košecká) October 27, 2009 HONOR SYSTEM: This examination is strictly individual. You are not allowed to talk, discuss, exchange solutions, etc., with other fellow

More information

Camera Models and Image Formation. Srikumar Ramalingam School of Computing University of Utah

Camera Models and Image Formation. Srikumar Ramalingam School of Computing University of Utah Camera Models and Image Formation Srikumar Ramalingam School of Computing University of Utah srikumar@cs.utah.edu VisualFunHouse.com 3D Street Art Image courtesy: Julian Beaver (VisualFunHouse.com) 3D

More information

Material Exchange Format (MXF) Mapping Type D-10 Essence Data to the MXF Generic Container

Material Exchange Format (MXF) Mapping Type D-10 Essence Data to the MXF Generic Container PROPOSED SMPTE 386M SMPTE STANDARD for Television Material Exchange Format (MXF) Mapping Type D-1 Essence Data to the MXF Generic Container Table of Contents 1 Scope 2 Normative References 3 Glossary of

More information

Visual Tracking (1) Feature Point Tracking and Block Matching

Visual Tracking (1) Feature Point Tracking and Block Matching Intelligent Control Systems Visual Tracking (1) Feature Point Tracking and Block Matching Shingo Kagami Graduate School of Information Sciences, Tohoku University swk(at)ic.is.tohoku.ac.jp http://www.ic.is.tohoku.ac.jp/ja/swk/

More information

Today. Today. Introduction. Matrices. Matrices. Computergrafik. Transformations & matrices Introduction Matrices

Today. Today. Introduction. Matrices. Matrices. Computergrafik. Transformations & matrices Introduction Matrices Computergrafik Matthias Zwicker Universität Bern Herbst 2008 Today Transformations & matrices Introduction Matrices Homogeneous Affine transformations Concatenating transformations Change of Common coordinate

More information

Midterm Examination CS 534: Computational Photography

Midterm Examination CS 534: Computational Photography Midterm Examination CS 534: Computational Photography November 3, 2016 NAME: Problem Score Max Score 1 6 2 8 3 9 4 12 5 4 6 13 7 7 8 6 9 9 10 6 11 14 12 6 Total 100 1 of 8 1. [6] (a) [3] What camera setting(s)

More information

Index. 3D reconstruction, point algorithm, point algorithm, point algorithm, point algorithm, 263

Index. 3D reconstruction, point algorithm, point algorithm, point algorithm, point algorithm, 263 Index 3D reconstruction, 125 5+1-point algorithm, 284 5-point algorithm, 270 7-point algorithm, 265 8-point algorithm, 263 affine point, 45 affine transformation, 57 affine transformation group, 57 affine

More information

Motion Analysis. Motion analysis. Now we will talk about. Differential Motion Analysis. Motion analysis. Difference Pictures

Motion Analysis. Motion analysis. Now we will talk about. Differential Motion Analysis. Motion analysis. Difference Pictures Now we will talk about Motion Analysis Motion analysis Motion analysis is dealing with three main groups of motionrelated problems: Motion detection Moving object detection and location. Derivation of

More information

Chapters 1 7: Overview

Chapters 1 7: Overview Chapters 1 7: Overview Chapter 1: Introduction Chapters 2 4: Data acquisition Chapters 5 7: Data manipulation Chapter 5: Vertical imagery Chapter 6: Image coordinate measurements and refinements Chapter

More information

3D Geometry and Camera Calibration

3D Geometry and Camera Calibration 3D Geometry and Camera Calibration 3D Coordinate Systems Right-handed vs. left-handed x x y z z y 2D Coordinate Systems 3D Geometry Basics y axis up vs. y axis down Origin at center vs. corner Will often

More information

EE640 FINAL PROJECT HEADS OR TAILS

EE640 FINAL PROJECT HEADS OR TAILS EE640 FINAL PROJECT HEADS OR TAILS By Laurence Hassebrook Initiated: April 2015, updated April 27 Contents 1. SUMMARY... 1 2. EXPECTATIONS... 2 3. INPUT DATA BASE... 2 4. PREPROCESSING... 4 4.1 Surface

More information

MISB EG October Predator UAV Basic Universal Metadata Set. 1 Scope. 2 References. 3 Introduction

MISB EG October Predator UAV Basic Universal Metadata Set. 1 Scope. 2 References. 3 Introduction Motion Imagery Standards Board Engineering Guideline: Predator UAV Basic Universal Metadata Set MISB EG 0104.1 11 October 2001 1 Scope This Engineering Guideline (EG) documents the basic Predator UAV (Unmanned

More information

The Course Structure for the MCA Programme

The Course Structure for the MCA Programme The Course Structure for the MCA Programme SEMESTER - I MCA 1001 Problem Solving and Program Design with C 3 (3-0-0) MCA 1003 Numerical & Statistical Methods 4 (3-1-0) MCA 1007 Discrete Mathematics 3 (3-0-0)

More information

Massachusetts Institute of Technology Department of Computer Science and Electrical Engineering 6.801/6.866 Machine Vision QUIZ II

Massachusetts Institute of Technology Department of Computer Science and Electrical Engineering 6.801/6.866 Machine Vision QUIZ II Massachusetts Institute of Technology Department of Computer Science and Electrical Engineering 6.801/6.866 Machine Vision QUIZ II Handed out: 001 Nov. 30th Due on: 001 Dec. 10th Problem 1: (a (b Interior

More information

Index. 3D reconstruction, point algorithm, point algorithm, point algorithm, point algorithm, 253

Index. 3D reconstruction, point algorithm, point algorithm, point algorithm, point algorithm, 253 Index 3D reconstruction, 123 5+1-point algorithm, 274 5-point algorithm, 260 7-point algorithm, 255 8-point algorithm, 253 affine point, 43 affine transformation, 55 affine transformation group, 55 affine

More information

This main body of this document describes the recommended practices with a particular focus on problems (1) and (4).

This main body of this document describes the recommended practices with a particular focus on problems (1) and (4). MISB RP 1204 Recommended Practice Motion Imagery Identification System (MIIS) June 7 th 2012 1 Scope Motion imagery data is generated by many different sensors, distributed across many different networks

More information

Arrays. Defining arrays, declaration and initialization of arrays. Designed by Parul Khurana, LIECA.

Arrays. Defining arrays, declaration and initialization of arrays. Designed by Parul Khurana, LIECA. Arrays Defining arrays, declaration and initialization of arrays Introduction Many applications require the processing of multiple data items that have common characteristics (e.g., a set of numerical

More information

CS6670: Computer Vision

CS6670: Computer Vision CS6670: Computer Vision Noah Snavely Lecture 7: Image Alignment and Panoramas What s inside your fridge? http://www.cs.washington.edu/education/courses/cse590ss/01wi/ Projection matrix intrinsics projection

More information

Vector Algebra Transformations. Lecture 4

Vector Algebra Transformations. Lecture 4 Vector Algebra Transformations Lecture 4 Cornell CS4620 Fall 2008 Lecture 4 2008 Steve Marschner 1 Geometry A part of mathematics concerned with questions of size, shape, and relative positions of figures

More information

Sedat Doğan Introduction

Sedat Doğan Introduction Acta Montanistica Slovaca Ročník 18 (2013), číslo 4, 239-253 Calibration of Digital Amateur Cameras for Optical 3D Measurements With Element-Wise Weighted Total Least Squares (EW-WTLS) and Classical Least

More information

CS201 Computer Vision Camera Geometry

CS201 Computer Vision Camera Geometry CS201 Computer Vision Camera Geometry John Magee 25 November, 2014 Slides Courtesy of: Diane H. Theriault (deht@bu.edu) Question of the Day: How can we represent the relationships between cameras and the

More information

Translations. Geometric Image Transformations. Two-Dimensional Geometric Transforms. Groups and Composition

Translations. Geometric Image Transformations. Two-Dimensional Geometric Transforms. Groups and Composition Geometric Image Transformations Algebraic Groups Euclidean Affine Projective Bovine Translations Translations are a simple family of two-dimensional transforms. Translations were at the heart of our Sprite

More information

Spatial Enhancement Definition

Spatial Enhancement Definition Spatial Enhancement Nickolas Faust The Electro- Optics, Environment, and Materials Laboratory Georgia Tech Research Institute Georgia Institute of Technology Definition Spectral enhancement relies on changing

More information

Augmented Reality II - Camera Calibration - Gudrun Klinker May 11, 2004

Augmented Reality II - Camera Calibration - Gudrun Klinker May 11, 2004 Augmented Reality II - Camera Calibration - Gudrun Klinker May, 24 Literature Richard Hartley and Andrew Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, 2. (Section 5,

More information

Mathematics Scope & Sequence Grade 8 Revised: June 2015

Mathematics Scope & Sequence Grade 8 Revised: June 2015 Mathematics Scope & Sequence 2015-16 Grade 8 Revised: June 2015 Readiness Standard(s) First Six Weeks (29 ) 8.2D Order a set of real numbers arising from mathematical and real-world contexts Convert between

More information

Monocular Vision-based Displacement Measurement System Robust to Angle and Distance Using Homography

Monocular Vision-based Displacement Measurement System Robust to Angle and Distance Using Homography 6 th International Conference on Advances in Experimental Structural Engineering 11 th International Workshop on Advanced Smart Materials and Smart Structures Technology August 1-2, 2015, University of

More information

Camera Model and Calibration. Lecture-12

Camera Model and Calibration. Lecture-12 Camera Model and Calibration Lecture-12 Camera Calibration Determine extrinsic and intrinsic parameters of camera Extrinsic 3D location and orientation of camera Intrinsic Focal length The size of the

More information

Grade 6 Middle School Math Solution Alignment to Oklahoma Academic Standards

Grade 6 Middle School Math Solution Alignment to Oklahoma Academic Standards 6.N.1 Read, write, and represent integers and rational numbers expressed as fractions, decimals, percents, and ratios; write positive integers as products of factors; use these representations in real-world

More information

An Overview of Matchmoving using Structure from Motion Methods

An Overview of Matchmoving using Structure from Motion Methods An Overview of Matchmoving using Structure from Motion Methods Kamyar Haji Allahverdi Pour Department of Computer Engineering Sharif University of Technology Tehran, Iran Email: allahverdi@ce.sharif.edu

More information

CHAPTER 3. Single-view Geometry. 1. Consequences of Projection

CHAPTER 3. Single-view Geometry. 1. Consequences of Projection CHAPTER 3 Single-view Geometry When we open an eye or take a photograph, we see only a flattened, two-dimensional projection of the physical underlying scene. The consequences are numerous and startling.

More information

Graphics and Interaction Transformation geometry and homogeneous coordinates

Graphics and Interaction Transformation geometry and homogeneous coordinates 433-324 Graphics and Interaction Transformation geometry and homogeneous coordinates Department of Computer Science and Software Engineering The Lecture outline Introduction Vectors and matrices Translation

More information

COMP30019 Graphics and Interaction Transformation geometry and homogeneous coordinates

COMP30019 Graphics and Interaction Transformation geometry and homogeneous coordinates COMP30019 Graphics and Interaction Transformation geometry and homogeneous coordinates Department of Computer Science and Software Engineering The Lecture outline Introduction Vectors and matrices Translation

More information

Advanced Lighting Techniques Due: Monday November 2 at 10pm

Advanced Lighting Techniques Due: Monday November 2 at 10pm CMSC 23700 Autumn 2015 Introduction to Computer Graphics Project 3 October 20, 2015 Advanced Lighting Techniques Due: Monday November 2 at 10pm 1 Introduction This assignment is the third and final part

More information

Shape as a Perturbation to Projective Mapping

Shape as a Perturbation to Projective Mapping Leonard McMillan and Gary Bishop Department of Computer Science University of North Carolina, Sitterson Hall, Chapel Hill, NC 27599 email: mcmillan@cs.unc.edu gb@cs.unc.edu 1.0 Introduction In the classical

More information

Investigations in Number, Data, and Space for the Common Core 2012

Investigations in Number, Data, and Space for the Common Core 2012 A Correlation of Investigations in Number, Data, and Space for the Common Core 2012 to the Common Core State s with California Additions s Map Kindergarten Mathematics Common Core State s with California

More information

CS4670: Computer Vision

CS4670: Computer Vision CS4670: Computer Vision Noah Snavely Lecture 9: Image alignment http://www.wired.com/gadgetlab/2010/07/camera-software-lets-you-see-into-the-past/ Szeliski: Chapter 6.1 Reading All 2D Linear Transformations

More information

Game Mathematics. (12 Week Lesson Plan)

Game Mathematics. (12 Week Lesson Plan) Game Mathematics (12 Week Lesson Plan) Lesson 1: Set Theory Textbook: Chapter One (pgs. 1 15) We begin the course by introducing the student to a new vocabulary and set of rules that will be foundational

More information

Image Formation. Antonino Furnari. Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania

Image Formation. Antonino Furnari. Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania Image Formation Antonino Furnari Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania furnari@dmi.unict.it 18/03/2014 Outline Introduction; Geometric Primitives

More information

Motion Tracking and Event Understanding in Video Sequences

Motion Tracking and Event Understanding in Video Sequences Motion Tracking and Event Understanding in Video Sequences Isaac Cohen Elaine Kang, Jinman Kang Institute for Robotics and Intelligent Systems University of Southern California Los Angeles, CA Objectives!

More information

Implemented by Valsamis Douskos Laboratoty of Photogrammetry, Dept. of Surveying, National Tehnical University of Athens

Implemented by Valsamis Douskos Laboratoty of Photogrammetry, Dept. of Surveying, National Tehnical University of Athens An open-source toolbox in Matlab for fully automatic calibration of close-range digital cameras based on images of chess-boards FAUCCAL (Fully Automatic Camera Calibration) Implemented by Valsamis Douskos

More information

Computer Vision Projective Geometry and Calibration. Pinhole cameras

Computer Vision Projective Geometry and Calibration. Pinhole cameras Computer Vision Projective Geometry and Calibration Professor Hager http://www.cs.jhu.edu/~hager Jason Corso http://www.cs.jhu.edu/~jcorso. Pinhole cameras Abstract camera model - box with a small hole

More information

Maths for Signals and Systems Linear Algebra in Engineering. Some problems by Gilbert Strang

Maths for Signals and Systems Linear Algebra in Engineering. Some problems by Gilbert Strang Maths for Signals and Systems Linear Algebra in Engineering Some problems by Gilbert Strang Problems. Consider u, v, w to be non-zero vectors in R 7. These vectors span a vector space. What are the possible

More information

Smartphone Video Guidance Sensor for Small Satellites

Smartphone Video Guidance Sensor for Small Satellites SSC13-I-7 Smartphone Video Guidance Sensor for Small Satellites Christopher Becker, Richard Howard, John Rakoczy NASA Marshall Space Flight Center Mail Stop EV42, Huntsville, AL 35812; 256-544-0114 christophermbecker@nasagov

More information

Local Image Registration: An Adaptive Filtering Framework

Local Image Registration: An Adaptive Filtering Framework Local Image Registration: An Adaptive Filtering Framework Gulcin Caner a,a.murattekalp a,b, Gaurav Sharma a and Wendi Heinzelman a a Electrical and Computer Engineering Dept.,University of Rochester, Rochester,

More information

calibrated coordinates Linear transformation pixel coordinates

calibrated coordinates Linear transformation pixel coordinates 1 calibrated coordinates Linear transformation pixel coordinates 2 Calibration with a rig Uncalibrated epipolar geometry Ambiguities in image formation Stratified reconstruction Autocalibration with partial

More information

Detecting Burnscar from Hyperspectral Imagery via Sparse Representation with Low-Rank Interference

Detecting Burnscar from Hyperspectral Imagery via Sparse Representation with Low-Rank Interference Detecting Burnscar from Hyperspectral Imagery via Sparse Representation with Low-Rank Interference Minh Dao 1, Xiang Xiang 1, Bulent Ayhan 2, Chiman Kwan 2, Trac D. Tran 1 Johns Hopkins Univeristy, 3400

More information

1-5 Parent Functions and Transformations

1-5 Parent Functions and Transformations Describe the following characteristics of the graph of each parent function: domain, range, intercepts, symmetry, continuity, end behavior, and intervals on which the graph is increasing/decreasing. 1.

More information

Utilization of Similarity Metric in the Implementation of an Object Recognition Algorithm using Java

Utilization of Similarity Metric in the Implementation of an Object Recognition Algorithm using Java Utilization of Similarity Metric in the Implementation of an Object Recognition Algorithm using Java Fadzliana Saad, Member, IEEE, Rainer Stotzka Abstract An algorithm utilizing similarity metric to find

More information

CS 231A Computer Vision (Winter 2014) Problem Set 3

CS 231A Computer Vision (Winter 2014) Problem Set 3 CS 231A Computer Vision (Winter 2014) Problem Set 3 Due: Feb. 18 th, 2015 (11:59pm) 1 Single Object Recognition Via SIFT (45 points) In his 2004 SIFT paper, David Lowe demonstrates impressive object recognition

More information

Technical Publications

Technical Publications GE Medical Systems Technical Publications Direction 2188003-100 Revision 0 Tissue Volume Analysis DICOM for DICOM V3.0 Copyright 1997 By General Electric Co. Do not duplicate REVISION HISTORY REV DATE

More information

Overview. By end of the week:

Overview. By end of the week: Overview By end of the week: - Know the basics of git - Make sure we can all compile and run a C++/ OpenGL program - Understand the OpenGL rendering pipeline - Understand how matrices are used for geometric

More information

Visual Tracking (1) Tracking of Feature Points and Planar Rigid Objects

Visual Tracking (1) Tracking of Feature Points and Planar Rigid Objects Intelligent Control Systems Visual Tracking (1) Tracking of Feature Points and Planar Rigid Objects Shingo Kagami Graduate School of Information Sciences, Tohoku University swk(at)ic.is.tohoku.ac.jp http://www.ic.is.tohoku.ac.jp/ja/swk/

More information

MISB RP 1302 RECOMMENDED PRACTICE. 27 February Inserting KLV in Session Description Protocol (SDP) 1 Scope. 2 References

MISB RP 1302 RECOMMENDED PRACTICE. 27 February Inserting KLV in Session Description Protocol (SDP) 1 Scope. 2 References MISB RP 1302 RECOMMENDED PRACTICE Inserting KLV in Session Description Protocol (SDP) 27 February 2014 1 Scope This MISB Recommended Practice (RP) presents a method to insert KLV (Key-Length-Value) encoded

More information

Lecture 6 Stereo Systems Multi-view geometry

Lecture 6 Stereo Systems Multi-view geometry Lecture 6 Stereo Systems Multi-view geometry Professor Silvio Savarese Computational Vision and Geometry Lab Silvio Savarese Lecture 6-5-Feb-4 Lecture 6 Stereo Systems Multi-view geometry Stereo systems

More information

Study of the Effects of Target Geometry on Synthetic Aperture Radar Images using Simulation Studies

Study of the Effects of Target Geometry on Synthetic Aperture Radar Images using Simulation Studies Study of the Effects of Target Geometry on Synthetic Aperture Radar Images using Simulation Studies K. Tummala a,*, A. K. Jha a, S. Kumar b a Geoinformatics Dept., Indian Institute of Remote Sensing, Dehradun,

More information

Humanoid Robotics. Projective Geometry, Homogeneous Coordinates. (brief introduction) Maren Bennewitz

Humanoid Robotics. Projective Geometry, Homogeneous Coordinates. (brief introduction) Maren Bennewitz Humanoid Robotics Projective Geometry, Homogeneous Coordinates (brief introduction) Maren Bennewitz Motivation Cameras generate a projected image of the 3D world In Euclidian geometry, the math for describing

More information

TRAINING MATERIAL HOW TO OPTIMIZE ACCURACY WITH CORRELATOR3D

TRAINING MATERIAL HOW TO OPTIMIZE ACCURACY WITH CORRELATOR3D TRAINING MATERIAL WITH CORRELATOR3D Page2 Contents 1. UNDERSTANDING INPUT DATA REQUIREMENTS... 4 1.1 What is Aerial Triangulation?... 4 1.2 Recommended Flight Configuration... 4 1.3 Data Requirements for

More information

Depth Measurement and 3-D Reconstruction of Multilayered Surfaces by Binocular Stereo Vision with Parallel Axis Symmetry Using Fuzzy

Depth Measurement and 3-D Reconstruction of Multilayered Surfaces by Binocular Stereo Vision with Parallel Axis Symmetry Using Fuzzy Depth Measurement and 3-D Reconstruction of Multilayered Surfaces by Binocular Stereo Vision with Parallel Axis Symmetry Using Fuzzy Sharjeel Anwar, Dr. Shoaib, Taosif Iqbal, Mohammad Saqib Mansoor, Zubair

More information

EE 264: Image Processing and Reconstruction. Image Motion Estimation I. EE 264: Image Processing and Reconstruction. Outline

EE 264: Image Processing and Reconstruction. Image Motion Estimation I. EE 264: Image Processing and Reconstruction. Outline 1 Image Motion Estimation I 2 Outline 1. Introduction to Motion 2. Why Estimate Motion? 3. Global vs. Local Motion 4. Block Motion Estimation 5. Optical Flow Estimation Basics 6. Optical Flow Estimation

More information

Linear Algebra and Image Processing: Additional Theory regarding Computer Graphics and Image Processing not covered by David C.

Linear Algebra and Image Processing: Additional Theory regarding Computer Graphics and Image Processing not covered by David C. Linear Algebra and Image Processing: Additional Theor regarding Computer Graphics and Image Processing not covered b David C. La Dr. D.P. Huijsmans LIACS, Leiden Universit Februar 202 Differences in conventions

More information