IMAGINE AutoSync User's Guide, September 2008



Copyright 2008 ERDAS, Inc. All rights reserved. Printed in the United States of America.

The information contained in this document is the exclusive property of ERDAS, Inc. This work is protected under United States copyright law and other international copyright treaties and conventions. No part of this work may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or by any information storage or retrieval system, except as expressly permitted in writing by ERDAS, Inc. All requests should be sent to the attention of: Manager, Technical Documentation, ERDAS, Inc., Peachtree Corners Circle, Suite 100, Norcross, GA, USA. The information contained in this document is subject to change without notice.

Government Reserved Rights. MrSID technology incorporated in the Software was developed in part through a project at the Los Alamos National Laboratory, funded by the U.S. Government, managed under contract by the University of California (University), and is under exclusive commercial license to LizardTech, Inc. It is used under license from LizardTech. MrSID is protected by U.S. Patent No. 5,710,835. Foreign patents pending. The U.S. Government and the University have reserved rights in MrSID technology, including without limitation: (a) The U.S. Government has a non-exclusive, nontransferable, irrevocable, paid-up license to practice or have practiced throughout the world, for or on behalf of the United States, inventions covered by U.S. Patent No. 5,710,835, and has other rights under 35 U.S.C. and applicable implementing regulations; (b) If LizardTech's rights in the MrSID Technology terminate during the term of this Agreement, you may continue to use the Software. Any provisions of this license which could reasonably be deemed to do so would then protect the University and/or the U.S. Government; and (c) The University has no obligation to furnish any know-how, technical assistance, or technical data to users of MrSID software and makes no warranty or representation as to the validity of U.S. Patent 5,710,835, nor that the MrSID Software will not infringe any patent or other proprietary right. For further information about these provisions, contact LizardTech, 1008 Western Ave., Suite 200, Seattle, WA.

ERDAS, ERDAS IMAGINE, IMAGINE OrthoBASE, Stereo Analyst, and IMAGINE VirtualGIS are registered trademarks; IMAGINE OrthoBASE Pro is a trademark of ERDAS, Inc. SOCET SET is a registered trademark of BAE Systems Mission Solutions. Other companies and products mentioned herein are trademarks or registered trademarks of their respective owners.


Table of Contents

List of Tables

Preface
  About This Manual
  Example Data
  Documentation
  Conventions Used in This Book
  Getting Started
    ERDAS IMAGINE Icon Panel
    ERDAS IMAGINE Menu Bar
  Dialogs
  More Information/Help

Introduction to IMAGINE AutoSync
  Introduction
    Overview
    Benefits
    Constraints
  Data Preparation
    Input Images
    Reference Images
    Selecting Input and Reference Images
    Digital Elevation Model (DEM)
  APM Engine
    APM Strategy Parameters
    Ideal Situations for Good APM Performance
    Situations to Avoid
    APM Troubleshooting and Tips
  Modeling
    Image to Image (2D) Transforms
    Ground to Image (3D) Transforms
    Approximate Sensor Models
    Selecting a Model
    Selecting a DEM/DTM
    Modeling Troubleshooting and Tips
  IMAGINE AutoSync Tips and Hints
    Interpreting Results
    Using the IMAGINE AutoSync Workstation
    Using the IMAGINE AutoSync Wizards
    IMAGINE AutoSync Workflows
    General IMAGINE AutoSync Tips and Hints
  Summary
    General Guidelines

Using IMAGINE AutoSync
  Introduction
  Using the Edge Matching Wizard
    Using the Input tab
    Using the APM Strategy tab
    Using the Edge Match Strategy tab
    Using the Projection tab
    Using the Output tab
    Display Output Image
    View Summary Report
  Using the AutoSync Workstation
    Create New IMAGINE AutoSync Project
    Add Input Image
    Add Reference Image
    Collect Manual Tie Points
    Run APM
    Preview Output Image
    Improve Output Image Results
    Review Image Map Data
    Set Output Image Projection
    Resample Output Image
    Verify Output Image
    View Summary Report

Index

List of Tables

Table 1: Session Menu Options
Table 2: Main Menu Options
Table 3: Tools Menu Options
Table 4: Utility Menu Options
Table 5: Help Menu Options
Table 6: APM Parameter Tuning
Table 7: Tie Point-Based Model Selection


Preface

About This Manual

The IMAGINE AutoSync User's Guide serves as a handy guide to help you use IMAGINE AutoSync. A comprehensive index is included so that you can reference particular information later.

Example Data

Sample data sets are provided with the software. This data is installed separately from the data DVD. For the purposes of documentation, <ERDAS_Data_Home> represents the name of the directory where sample data is installed. The Tour Guides refer to specific data stored in <ERDAS_Data_Home>/examples.

Documentation

This manual is part of a suite of on-line documentation that you receive with ERDAS IMAGINE software. There are two basic types of documents: digital hardcopy documents, delivered as PDF files suitable for printing or on-line viewing, and On-Line Help documentation, delivered as HTML files. The PDF documents are found in <IMAGINE_HOME>\help\hardcopy, where <IMAGINE_HOME> represents the name of the directory where ERDAS IMAGINE is installed. Many of these documents are available from the ERDAS Start menu. The on-line help system is accessed by clicking the Help button in a dialog or by selecting an item from a Help menu.

Conventions Used in This Book

In ERDAS IMAGINE, the names of menus, menu options, buttons, and other components of the interface are shown in bold type. For example: In the Select Layer To Add dialog, select the Fit to Frame option.

When asked to use the mouse, you are directed to click, Shift-click, middle-click, right-click, hold, or drag:

  click designates clicking with the left mouse button.
  Shift-click designates holding the Shift key down on your keyboard and simultaneously clicking with the left mouse button.
  middle-click designates clicking with the middle mouse button.
  right-click designates clicking with the right mouse button.
  hold designates holding down the left (or right, as noted) mouse button.
  drag designates dragging the mouse while holding down the left mouse button.

The following paragraph styles are used throughout the ERDAS IMAGINE documentation:

  Paragraphs that contain strong warnings.
  Paragraphs that provide software-specific information.
  Paragraphs that contain important tips.
  Paragraphs that lead you to other areas of this book or other ERDAS manuals for additional information.

NOTE: Notes give additional instruction.

Shaded Boxes

Shaded boxes contain supplemental information that is not required to execute the steps of a tour guide, but is noteworthy. Generally, this is technical information.

Getting Started

ERDAS IMAGINE Icon Panel

To start ERDAS IMAGINE, type imagine in a UNIX command window, or select ERDAS IMAGINE from the Start -> ERDAS menu. ERDAS IMAGINE begins running, and the icon panel automatically opens.

The ERDAS IMAGINE icon panel contains icons and menus for accessing ERDAS IMAGINE functions. You have the option (through the Session -> Preferences menu) to display the icon panel horizontally across the top of the screen or vertically down the left side of the screen. The default is a horizontal display. The various icons that are present on your icon panel depend on the components and add-on modules you have purchased with your system.

ERDAS IMAGINE Menu Bar

The menus on the ERDAS IMAGINE menu bar are: Session, Main, Tools, Utilities, and Help. These menus are described in this section.

NOTE: Any items which are unavailable in these menus are shaded and inactive.

Session Menu

1. Click the word Session in the upper left corner of the ERDAS IMAGINE menu bar. The Session menu opens. These menus are identical to the ones on the icon panel. You can also place the cursor anywhere in the icon panel and press Ctrl-Q to exit ERDAS IMAGINE.

The following table contains the Session menu selections and their functionalities:

Table 1: Session Menu Options

  Preferences: Set individual or global default options for many ERDAS IMAGINE functions (Viewer, Map Composer, Spatial Modeler, etc.).
  Configuration: Configure peripheral devices for ERDAS IMAGINE.
  Session Log: View a real-time record of ERDAS IMAGINE messages and commands, and issue commands.
  Active Process List: View and cancel currently active processes running in ERDAS IMAGINE.
  Commands: Open a command shell, in which you can enter commands to activate or cancel processes.
  Enter Log Message: Insert text into the Session Log.

Table 1: Session Menu Options (Continued)

  Start Recording Batch Commands: Open the Batch Wizard. Collect commands as they are generated by clicking the Batch button that is available on many ERDAS IMAGINE dialogs.
  Open Batch Command File: Open a Batch Command File (*.bcf) you have saved previously.
  View Offline Batch Queue: Open the Scheduled Batch Job list dialog, which gives information about pending batch jobs.
  Flip Icons: Specify horizontal or vertical icon panel display.
  Tile Viewers: Rearrange two or more Viewers on the screen so that they do not overlap.
  Close All Viewers: Close all Viewers that are currently open.
  Main: Access a menu of tools that corresponds to the icons along the ERDAS IMAGINE icon bar.
  Tools: Access a menu of tools that allow you to view and edit various text and image files.
  Utilities: Access a menu of utility items that allow you to perform general tasks in ERDAS IMAGINE.
  Help: Access the ERDAS IMAGINE On-Line Help.
  Properties: Display the ERDAS IMAGINE Properties dialog, where system, environment, and licensing information is available.
  Generate System Information Report: Provides a mechanism for printing essential IMAGINE operating system parameters.
  Exit IMAGINE: Exit the ERDAS IMAGINE session (keyboard shortcut: Ctrl-Q).

Main Menu

2. Click the word Main in the ERDAS IMAGINE menu bar. The Main menu opens.

The following table contains the Main menu selections and their functionalities:

Table 2: Main Menu Options

  Start IMAGINE Viewer: Start an empty Viewer.
  Import/Export: Open the Import/Export dialog.
  Data Preparation: Open the Data Preparation menu.
  Map Composer: Open the Map Composer menu.
  Image Interpreter: Open the Image Interpreter menu.
  Image Catalog: Open the Image Catalog dialog.
  Image Classification: Open the Classification menu.
  Spatial Modeler: Open the Spatial Modeler menu.
  Vector: Open the Vector Utilities menu.
  Radar: Open the Radar menu.
  VirtualGIS: Open the VirtualGIS menu.
  LPS Project Manager: Open the LPS Project Manager Startup dialog.
  Stereo Analyst: Open the Stereo Analyst Workspace.

Tools Menu

3. Click the word Tools in the ERDAS IMAGINE menu bar. The Tools menu opens.

The following table contains the Tools menu selections and their functionalities:

Table 3: Tools Menu Options

  Edit Text Files: Create and edit ASCII text files.
  Edit Raster Attributes: Edit raster attribute data.
  View Binary Data: View the contents of binary files in a number of different ways.
  View IMAGINE HFA File Structure: View the contents of the ERDAS IMAGINE hierarchical files.
  Annotation Information: View information for annotation files, including number of elements and projection information.
  Image Information: Obtain full image information for a selected ERDAS IMAGINE raster image.
  Vector Information: Obtain full information for a selected ERDAS IMAGINE vector coverage.
  Image Command Tool: Open the Image Command dialog.
  Coordinate Calculator: Transform coordinates from one spheroid or datum to another.
  Create/Display Movie Sequences: View a series of images in rapid succession.
  Create/Display Viewer Sequences: View a series of images saved from the Viewer.
  Image Drape: Create a perspective view by draping imagery over a terrain DEM.
  DPPDB Workstation: Start the Digital Point Positioning DataBase Workstation (if installed).
  View EML ScriptFiles (UNIX only): Open the EML View dialog, which enables you to view, edit, and print ERDAS IMAGINE dialogs.

Utilities Menu

4. Click Utilities on the ERDAS IMAGINE menu bar. The Utilities menu opens.

The following table contains the Utilities menu selections and their functionalities:

Table 4: Utility Menu Options

  JPEG Compress Images: Compress raster images using the JPEG compression technique and save them in an ERDAS IMAGINE format.
  Decompress JPEG Images: Decompress images compressed using the JPEG Compress Images utility.
  Convert Pixels to ASCII: Output raster data file values to an ASCII file.
  Convert ASCII to Pixels: Create an image from an ASCII file.
  Convert Images to Annotation: Convert a raster image to polygons saved as ERDAS IMAGINE annotation (.ovr).
  Convert Annotation to Raster: Convert an annotation file containing vector graphics to a raster image file.
  Create/Update Image Chips: Provide a direct means of creating chips for one or more images.
  Mount/Unmount CD-ROM (UNIX only): Mount and unmount a CD-ROM drive.
  Create Lowercase Parallel Links (UNIX only): Make a set of links to items on CD for systems that convert CD paths to uppercase.

Table 4: Utility Menu Options (Continued)

  Create Font Tables: Create a map of characters in a particular font.
  Font to Symbol: Create a symbol library to use as annotation characters from an existing font.
  Compare Images: Open the Image Compare dialog. Compare layers, raster, map info, etc.
  Reconfigure Raster Formats: Start a DLL to reconfigure raster formats.
  Reconfigure Vector Formats: Start a DLL to reconfigure vector formats.
  Reconfigure Resample Methods: Start a DLL to reconfigure resampling methods.
  Reconfigure Geometric Models: Start a DLL to reconfigure the geometric models.

Help Menu

5. Select Help from the ERDAS IMAGINE menu bar. The Help menu opens.

NOTE: The Help menu is also available from the Session menu.

The following table contains the Help menu selections and their functionalities:

Table 5: Help Menu Options

  Help for Icon Panel: View the On-Line Help for the ERDAS IMAGINE icon panel.
  IMAGINE Online Documentation: Access the root of the On-Line Help tree.

Table 5: Help Menu Options (Continued)

  IMAGINE Version: View which version of ERDAS IMAGINE you are running.
  IMAGINE DLL Information: Display and edit DLL class information and DLL instance information.
  About ERDAS IMAGINE: Open the ERDAS IMAGINE Credits.

Dialogs

A dialog is a window in which you enter file names, set parameters, and execute processes. Most of the functions in ERDAS IMAGINE are accessible through dialogs, and in most dialogs very little typing is required; simply use the mouse to click the options you want to use. Most of the dialogs used throughout the tour guides are reproduced from the software, with arrows showing you where to click. These instructions are for reference only; follow the numbered steps to actually select dialog options. For On-Line Help with a particular dialog, click the Help button in that dialog. All of the dialogs that accompany the raster and vector editing tools, as well as the Select Layer To Add dialog, contain a Preview window, which enables you to view the changes you make to the Viewer image before you click Apply.

More Information/Help

On-Line Help

As you go through the tour guides, or as you work with ERDAS IMAGINE on your own, there are several ways to obtain more information regarding dialogs, tools, or menus, as described below. There are two main ways you can access On-Line Help in ERDAS IMAGINE:

  select the Help option from a menu bar
  click the Help button on any dialog

Status Bar Help

The status bar at the bottom of the Viewer displays a quick explanation for a button when the mouse cursor is placed over it. It is a good idea to keep an eye on this status bar, since helpful information displays here, even for other dialogs.

Bubble Help

The User Interface and Session category of the Preference Editor enables you to turn on Bubble Help, so that single-line Help displays directly below your cursor when it rests on a button or frame part. This is helpful if the status bar is obscured by other windows.

Introduction to IMAGINE AutoSync

Introduction

This chapter explains what to expect from IMAGINE AutoSync and how to adjust the underlying engine for optimal results. It also provides practical tips and hints, and describes the best strategies for handling difficult situations you may encounter.

Overview

Imagery always needs to be geometrically corrected to a map coordinate system to be useful. This is especially true for applications such as change detection, resolution merge (pan sharpening), mosaicking, and even simple layer stacking. The geometric correction must be highly accurate, because misalignment of features at the same location could render the results useless. Additionally, many applications require the creation of a large database of georeferenced images. The current process of manual point measurement can be prohibitively labor intensive for large applications, and it does not enforce sub-pixel correlation between images, due to the limitations of human visual interpretation. Block triangulation, although it ties imagery together photogrammetrically, does not enforce any correlation to already existing image layers.

IMAGINE AutoSync uses an automatic point matching algorithm to generate thousands of tie points, and produces a mathematical model to tie the images together. The resulting workflows significantly reduce, and sometimes completely eliminate, manual point collection. The output is generally equal to or better in accuracy than the current methodology.

Benefits

Some of the benefits of using IMAGINE AutoSync include:

  Automatic point matching algorithm for mass tie point generation.
  Automatic sensor detection (if sensor information is available).
  Streamlined wizard workflow that can handle a large number of similar images.
  A powerful workstation that provides an efficient environment and tools for tie point quality assessment, point measurement, preview of the output, organization of complex projects, and so on.
  Iterative model refinement and instantaneous rectification results in the workstation environment.

Constraints

As with any tool, poor data quality and/or inappropriate parameters can produce less than desirable results. As a user, you should be aware of the following:

  Default parameters are appropriate for most data, but may require adjustment to work well for more varied scenes.
  Data quality of the input images, reference image, and Digital Elevation Model (DEM) directly affects the final output results.
  User intervention may be required in some workflows to compensate for poor data quality, erroneous model selection, incorrect tie point measurement, and so on.
  The more experienced you are with IMAGINE AutoSync, and the more knowledge you have about the data and workflow, the better the output will be.

Data Preparation

The quality of your input data plays a crucial role in determining the accuracy of the output and the extent of user intervention required. Additionally, the type of the input data largely determines which workflow to follow for optimal results. This section discusses the various data you can use in IMAGINE AutoSync and how best to prepare the data. It also provides suggested remedies for potential problems.

Input Images

When using the edge matching workflow, you can use georeferenced or calibrated input images. In the georeferencing workflow, input images can be georeferenced, calibrated, or raw images. You can also use images that have map information but are not georeferenced to a particular projected coordinate system. If you are using raw input images, you must first establish a footprint with the reference image before running automatic point measurement. This is a necessary step, since raw images lack the map information needed to place the image at an approximate location that overlaps the reference image.

Another consideration when using raw imagery is the potential for matching problems between the uncorrected, vertically displaced mountainous regions in the raw image and an orthorectified reference image. This displacement can cause poor points to be generated by the automatic point measurement process. You can alleviate this problem by choosing an appropriate sensor model that allows for the specification of a DEM (DLT, RPC, or ROP) and using an accurate DEM. See the Modeling section for more details.

Sensor metadata can be very helpful in establishing models for rectification. For example, QuickBird images contain enough information to build a rigorous model. Generally, data that are rectified using a rigorous model and an accurate DEM produce the best results.

Reference Images

The reference image must be georeferenced and have projection information. For best results, the reference image should have a spatial resolution higher than or equal to that of the input images. IMAGINE AutoSync has been tested with datasets that do not adhere to this suggestion; at a factor-of-six difference, it was generally found that tie point editing was required to improve the solution.

Selecting Input and Reference Images

When selecting input and reference images, it is preferable to use images with maximum similarity. This greatly improves the result of the automatic point measurement. If the images are too dissimilar, the results from the automatic point measurement process may be undesirable. Some of the main factors that affect the similarity of images include the following:

Time of Capture: The time of capture, especially the season, can greatly alter the radiometric characteristics of the images. For example, a winter scene will not match well with a summer scene with high vegetation.

Resolution: Resolution also affects point matching results, because it creates a difference in the details of the two images. Avoid mixing input and reference images with a resolution difference larger than a factor of six.

Elevation Variation: Variation in elevation can also cause a difference between the input and reference images. This is because the reference will most likely be an orthorectified image, in which vertical displacement is minimal compared to the input. As a result, features that should be in the same location could be far apart when attempting to match the input and reference images. To avoid this problem, you can select a model that allows for the specification of a DEM.

Spectral Range of Selected Band: The automatic point matching process uses a single band of a multiband image to locate the tie point pairs. Consequently, it is important to make sure similar bands are selected from the input image and the reference image to achieve maximum similarity in radiometric characteristics. Selecting a band within the visible spectrum will produce the best results. Infrared and thermal bands should generally be avoided.

Sensor: The sensor used to capture the image (Landsat, IKONOS, SPOT, and so on) affects the resolution and radiometric characteristics of the images. It also determines the mathematical models that can be used in the rectification. For example, the Rigorous Orbital Pushbroom sensor model can only be applied to sensors such as IRS, QuickBird, and so on. See the Modeling section for details on selecting an appropriate model.
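The factor-of-six resolution guideline above can be expressed as a quick compatibility check. This is purely illustrative; the function name, arguments, and the 6.0 default are not part of the IMAGINE AutoSync interface:

```python
def resolution_compatible(input_gsd, reference_gsd, max_factor=6.0):
    """Check the factor-of-six guideline for mixing image resolutions.

    input_gsd, reference_gsd: ground sample distances in the same units
    (for example, meters per pixel). Returns True when the coarser image
    is no more than max_factor times coarser than the finer one.
    """
    ratio = max(input_gsd, reference_gsd) / min(input_gsd, reference_gsd)
    return ratio <= max_factor

# A 1 m input against a 5 m reference is within the guideline;
# 1 m against a 10 m reference is not.
```

Even within the guideline, the manual notes that large resolution differences may require manual tie point editing to improve the solution.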

Digital Elevation Model (DEM)

The availability of a high resolution Digital Elevation Model (DEM) can drastically impact the quality of rectification results, especially for mountainous areas. A DEM provides additional model-solving information for determining the location of features in the output. This can greatly reduce the negative impact of vertical displacement when matching input and reference images.

APM Engine

Automatic Point Measurement (APM) is a software tool that uses image-matching technology to automatically recognize and measure corresponding image points between two raster images. In IMAGINE AutoSync, APM aims to deliver the coordinates of evenly distributed corresponding points between an input image and a reference image. APM works automatically to find the needed image points, but there is a set of parameters you can adjust in circumstances where the default settings fail to produce acceptable results. An Advanced Point Matching Strategy dialog is also provided for more control over the process. Note that the defaults that appear in these dialogs can be set from IMAGINE properties under the IMAGINE AutoSync listing.

APM Strategy Parameters

APM Strategy Tab

You can adjust APM parameters on the APM Strategy tab of the IMAGINE AutoSync Project Properties dialog in the workstation, or on the APM Strategy tab in the wizards.

Input Layer to Use: Select the layer of the input image to use for APM. IMAGINE AutoSync automatically assigns a layer name to each layer in the input image, using the following format: Layer_1, Layer_2, and so on. Layer_1 refers to band one of the image. If you have multiple input images in your list and they contain different numbers of bands, all possible bands are listed. If you choose Layer_5 and one or more images contain only four bands, band 5 is used on any images containing five bands, but band 4 (or the next available band) is used on any images with fewer than five bands.

Reference Layer to Use: Select the layer of the reference image to use for APM. IMAGINE AutoSync uses the actual reference layer names. Try to select layers of the two images with the same or similar spectrum. For example, if you use two color images, try to choose the same green bands. The more similar the radiometric characteristics of the two images are, the better the APM results. If you are using images with different band widths (such as a black-and-white image and a multispectral image), you may still get good results, but you may also encounter less accurate matching points, depending on the specific situation.

Find Points With: Select either the Default Distribution or Defined Pattern type of point distribution.

Default Distribution: APM superimposes a regularly spaced grid onto each image in an attempt to find matching points in a well distributed pattern. When you are attempting to collect fewer than 100 points, a 5 x 5 grid is used. If you want to collect 100 points or more, a 10 x 10 grid is used. APM tries to collect matching points within an area of 512 x 512 pixels centered on the corresponding grid intersection of each image. If a matching point is not found within this area, APM moves to the next grid intersection. This optimizes the likelihood of finding well distributed points while minimizing search time on large images.
The 5 x 5 grid places grid lines at 10%, 30%, 50%, 70%, and 90% of each image dimension. The 10 x 10 grid begins at 5% and increments by 10%. In both cases, the maximum search area is 512 x 512 pixels centered on each grid intersection.
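The grid placement just described can be sketched as follows. This is a reconstruction from the documented percentages, not AutoSync's actual implementation, and the function name and arguments are hypothetical:

```python
def grid_intersections(width, height, intended_points):
    """Pixel locations of the APM search grid described above.

    A 5 x 5 grid (10% to 90% in steps of 20%) is used when fewer than
    100 points are requested; a 10 x 10 grid (5% to 95% in steps of
    10%) is used for 100 points or more.
    """
    if intended_points < 100:
        fractions = [0.10 + 0.20 * i for i in range(5)]   # 10%, 30%, ..., 90%
    else:
        fractions = [0.05 + 0.10 * i for i in range(10)]  # 5%, 15%, ..., 95%
    return [(round(fx * width), round(fy * height))
            for fy in fractions for fx in fractions]

# Around each intersection, APM searches at most a 512 x 512 pixel window.
points = grid_intersections(2000, 2000, intended_points=400)
```

For a request of 400 points this yields 100 candidate intersections per image, each the center of at most one matched point.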

Default Distribution is most appropriate for standard aerial photography and less so for the more varied possible inputs to IMAGINE AutoSync. For that reason, Default Distribution is available as an option for those working with aerials, but the standard default setting for finding points is Defined Pattern, in anticipation of wider use of data.

Defined Pattern: If you select Defined Pattern, the Starting Column, Starting Line, Column Increment, Line Increment, Ending Column, and Ending Line options become active. Using those options, you can define the exact placement of tie points throughout the images of the block. With this option, you can also define the intended number of points per pattern.

Intended Number of Points/Pattern (Image): This section of the APM Strategy tab changes depending on whether you have selected Default Distribution or Defined Pattern, above. For Default Distribution, enter the intended number of tie points generated for each image. The minimum is 9 and the maximum is 500; the default is 400. For Defined Pattern, enter the intended number of tie points generated per pattern. The minimum number of points per pattern is 1 and the maximum is 8; the default is 1. If you want to define a tie point pattern, select Defined Pattern. However, you should consider the overlap percentage when you define your own pattern, and try to place each pattern location as far inside the overlap as possible.

Keep All Points: Select this checkbox to use all tie points generated, regardless of accuracy or distribution. If this checkbox is active, the number of collected tie points will be greater than the intended number of points per image. You do not normally need to choose this option unless your images have low contrast, yielding few points without it.

Starting Column, Starting Line: Define the starting location, in pixels, of the tie points you want to find on the image. You will get better results if you define the starting location close to the upper-left corner of the overlap area on the higher resolution image. It is safe to define the location close to the upper-left corner of the image, but you may get bad results if it is close to the lower-right corner.

Column Increment, Line Increment: Define the increment, in pixels, for tie point locations along the column and line directions. APM will try to find tie points around the image locations offset by these increments from the previous locations.

Ending Column, Ending Line: Define the last column and line for tie point collection. If you define them, they should not exceed the lower-right corner of the overlap area. If you leave them at the default of 0, 0, APM automatically uses the last column and last line of the overlap area.

Automatically Remove Blunders: Select this checkbox to remove blunders (wrong tie points) automatically from the APM-generated tie points. Removing blunders is an iterative process based on a 3rd order polynomial model. When this option is selected, points that do not fit well with the majority of tie points are considered blunders and discarded. This option is selected by default. You should deselect it only if you suspect that it is removing correct tie points, for example, when most of the APM tie points are wrong, or when there is a large difference in terrain between the input and reference images.

Maximum Blunder Removal Iterations: This option becomes available when you choose to automatically remove blunders with the Automatically Remove Blunders option. The default is 2. In most cases, increasing this number means more iterations of the blunder removal algorithm will be run; as a result, more tie points will be considered blunders and discarded.

Advanced Point Matching Strategy Dialog

On the Advanced Point Matching Strategy dialog in IMAGINE AutoSync, you can adjust the more advanced APM parameters to optimize automatic tie point collection.

Search Size: Enter the window size, in pixels, to use when searching for corresponding points. IMAGINE AutoSync searches for the corresponding point within a square window defined by this parameter. The default value is 17 (a 17 x 17 pixel window). For flat areas this value can be smaller; for steeper areas, larger. A larger value increases computation time and can produce more wrong points, while a smaller value can result in fewer matched points.
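The iterative blunder removal described above might look like the following sketch: fit a 3rd order polynomial to the tie points, flag points with large residuals, and repeat. The k * RMSE cutoff and the 1-pixel floor are assumptions for illustration; the manual does not document AutoSync's internal threshold:

```python
import numpy as np

def poly3_design(xy):
    # All ten terms of a full 3rd order bivariate polynomial.
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y,
                            x * x, x * y, y * y,
                            x ** 3, x * x * y, x * y * y, y ** 3])

def remove_blunders(src, dst, iterations=2, k=3.0, floor=1.0):
    """Return a boolean mask of the tie points kept after blunder removal.

    src, dst: (N, 2) arrays of matched pixel coordinates. On each pass,
    points whose residual against the fitted 3rd order polynomial exceeds
    max(k * RMSE, floor) are discarded.
    """
    keep = np.ones(len(src), dtype=bool)
    scale = np.abs(src).max() or 1.0  # normalize for numerical stability
    s = src / scale
    for _ in range(iterations):
        design = poly3_design(s[keep])
        coeffs, *_ = np.linalg.lstsq(design, dst[keep], rcond=None)
        resid = np.linalg.norm(poly3_design(s) @ coeffs - dst, axis=1)
        rmse = np.sqrt(np.mean(resid[keep] ** 2))
        keep = resid <= max(k * rmse, floor)
    return keep
```

Each pass tightens the fit around the surviving points, which matches the manual's warning that raising Maximum Blunder Removal Iterations causes more points to be treated as blunders.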

Feature Point Density: Defines the feature point density as a percentage of an internal default. Increasing the value above 100% (for example, in a poor contrast area) generates more feature points and therefore more matched points. Decreasing the value below 100% (for example, in an area with crowded detail) generates fewer feature points to speed up the computation. Normally, you do not need to adjust this parameter if you are using scanned aerial photos. However, if you select the Avoid Shadow option, you should set this parameter to a higher value (for example, 200%).

Correlation Size: Enter the window size in pixels for cross-correlation. The default window size is 11 (11 x 11). A larger window size could cause a smaller correlation coefficient due to the geometric difference within the two correlation windows and, therefore, fewer matched points. A smaller window size could result in a larger correlation coefficient due to insufficient content, therefore yielding more bad points.

Minimum Point Match Quality: Enter the lower limit for the cross-correlation coefficient; allowed values lie between 0.6 and 1.0. A larger limit results in fewer points accepted and less error. A smaller limit results in more correlated points but possibly more errors.

Least Squares Size: Enter the window size in pixels for least squares matching. A larger window size could reduce the number of badly matched points, but could also reduce the number of good points. A smaller window size could increase the number of both bad and good points. The default is 21. Increase this number for flatter areas and decrease it for steeper areas. Setting the window size too small could leave insufficient content in the window for the least squares computation.

Initial Accuracy: Enter the relative accuracy of the initial values used by the automatic tie point generation process.
Generally, a large value here increases the initial search area for possible corresponding points during the initial estimation phase. This value can be seen as the relative accuracy of the source you have chosen for your initial values (for example, initial map coordinates or relative terrain change). The default value is 10%. You should use initial values with an accuracy of 10% or better for automatic tie point collection.

Avoid Shadow: When you select this checkbox, APM tries to avoid generating tie points in areas of shadow. Avoiding shadowed areas improves tie point accuracy. When you use this option, you must also specify the type of image you are working with: a negative or a positive image scan. You do not need to use this option unless shadows are very prominent in your images.
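The Correlation Size and Minimum Point Match Quality parameters above both revolve around a correlation score between two pixel windows. A minimal sketch of a normalized cross-correlation coefficient follows; the actual APM formula may differ in detail, and the windows shown are illustrative:

```python
import math

def correlation_coefficient(win_a, win_b):
    """Normalized cross-correlation of two equal-size pixel windows
    (each a flat list of gray values). This is the kind of score the
    Minimum Point Match Quality limit is compared against; the exact
    APM formula is not documented here.
    """
    n = len(win_a)
    mean_a = sum(win_a) / n
    mean_b = sum(win_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(win_a, win_b))
    var_a = sum((a - mean_a) ** 2 for a in win_a)
    var_b = sum((b - mean_b) ** 2 for b in win_b)
    if var_a == 0 or var_b == 0:
        return 0.0  # flat window: no texture to correlate
    return cov / math.sqrt(var_a * var_b)

# Identical structure correlates perfectly even under a brightness and
# contrast change (b = 2*a + 10), which is why the score is normalized.
a = [10, 20, 30, 40, 50, 60, 70, 80, 90]
b = [2 * v + 10 for v in a]
score = correlation_coefficient(a, b)
```

A flat (textureless) window returns 0.0 here, mirroring why low-contrast areas yield fewer usable matches.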

Image Scan Type: Positive: Select this option if you are working with a positive image (bare ground appears white in the image).

Image Scan Type: Negative: Select this option if you are working with a negative image (bare ground appears black in the image).

Use Manual Tie Points for Initial Connection between Images: Select this option if manually measured tie points should be used as initial values for APM to find additional tie points automatically. Select this option when the initial map coordinates are very coarse, or when no map information is available for one or more of the images. If you are rectifying a raw image to another raw image, you should also select this option. You should manually collect a minimum of three points.

Exclude Background Area: Select this option if you want to exclude the background of the image. When you select this option, a bounding box excluding the background area is calculated and used as the active image area for APM. The default starting column and starting row move inside this bounding box. If you manually changed the values in the Starting Column and Starting Row fields, your new values take precedence over the calculated bounding box.

Background Value: This option becomes available when you select the Exclude Background Area option. Enter the background value of the image. The default value is zero. If you do not know the background value, you can use the IMAGINE ImageInfo tool to review the pixel values of the image.

Ideal Situations for Good APM Performance

For the best APM results, try to ensure that the following conditions are met as much as possible. Not meeting one or more of these conditions does not necessarily mean that the APM results will be of poor quality.

Use images with an overlap larger than 40%.

Use images with the same or similar resolution or pixel size.
Use images that were captured in the same season, at the same time of day (similar illumination conditions), and in similar weather with good visibility.

Use images that were captured by the same or a similar sensor.

Select the same band or a similar band in the images for point matching to ensure similar radiometric characteristics.

Use images that are properly orthorectified (if appropriate). This reduces the impact of vertical displacement and other distortions. Pay special attention to the quality of the orthorectification: a poorly orthorectified image produces bad results while misleadingly raising your expectations.

Use images with relatively flat terrain. Such images have minimal vertical displacement, and their radiometric characteristics are better preserved because they have not gone through extensive modification in a prior rectification process.

Ensure good initial map information is available for the images. Images with less than 10% misalignment in the overlap region tend to yield better results. When there is no initial image map information, you need to perform an initial manual registration. You can do this by digitizing 3-4 high quality points that are evenly distributed and preferably placed close to the image corners.

When using images with mountainous terrain, use an accurate DEM in order to remove the image displacement caused by the steep terrain change.

Situations to Avoid

These are some situations you should try to avoid, since any one or a combination of the following conditions could result in poorly matched points. If you are unable to improve any of the following conditions, refer to "APM Troubleshooting and Tips" on page 11 to learn about possible remedies through adjusting the APM parameters.

Using images with an overlap smaller than 256 x 256 pixels or with an overlap region that is too narrow. Since APM requires a sufficient region to deploy the matching strategy, an overlap of less than 20% will not produce desirable results.

Resolution differences (or pixel size differences) larger than 6 times. This is the threshold for any meaningful results.

Using images with a drastic time difference. For instance, a winter scene does not match well with a summer scene due to the change in vegetation, and images captured many years apart do not match well if there has been a high level of change.

The bands selected for matching in the input and reference images should not differ too much in electromagnetic wavelength. For example, an infrared band might not match well with a blue band.
Sensor characteristics that are too different between the images can affect performance. For instance, imagery from a very high flying sensor does not match well with imagery from a low flying sensor. The band width used and the sensor mechanism (pushbroom or discrete sensors) could all play a role in making the images distinct from each other.

Using images with large, uncorrected terrain relief. The vertical displacement can drastically reduce the likelihood of successful APM results.

Large scale, high-rise urban scenes do not match well due to the vertical displacement effect, which is difficult to orthorectify.

Bad initial image registration. If misalignment in the overlap region is larger than 20%, or manually digitized points are unevenly distributed, too far from the image corners, or of poor quality, results may be affected.

Inappropriate placement of the starting point too close to the ending edge (lower-right corner) of the overlap area. This leaves the overlap area without enough search regions.

APM Troubleshooting and Tips

Refer to this section if the APM results are not as expected. If APM results in a large RMSE, it may indicate bad APM results and/or inappropriate modeling. Examine the tie points carefully to ensure that the problem is from the APM results (many bad points, not enough points, and so on) before applying the following steps to fine-tune the APM parameters for improved results. If the APM points are correct but the output does not reflect the quality of the points, you most likely have chosen an inappropriate model.

Problem Diagnosis and Solutions

Sometimes you may not be able to rectify or improve any of the conditions. If that occurs, you can adjust the APM parameters by following the steps listed in the next section.

APM Parameter Tuning

When you need to adjust the APM parameters, first match your situation with one of the following, and then adjust the APM parameters accordingly.

Table 6: APM Parameter Tuning

Situation: Many points, but many poor quality points
Remedies: On the APM Strategy tab, change one or more of the following parameters: increase the Minimum Point Match Quality (> 0.9); increase the Correlation Size and Least Squares Size; decrease the Intended Number of Points.

Situation: Too few points, but good quality
Remedies: On the APM Strategy tab, change one or more of the following parameters: increase the Intended Number of Points; decrease the Column Increment and Line Increment.

Situation: Too few points, and poor quality
Remedies: On the APM Strategy tab, change one or more of the following parameters: increase the Search Size; decrease the Correlation Size; decrease the Least Squares Size; increase the Intended Number of Points. NOTE: Most likely, you have one or more undesirable conditions, such as a large misalignment. If the problem is a large misalignment, manually collect a few tie points and select the Use Manual Tie Points for Initial Connection between Images option under APM Strategy Advanced Options.

Situation: Too many points
Remedies: On the APM Strategy tab, change one or more of the following parameters: decrease the Intended Number of Points; increase the Column Increment and Line Increment; increase the Minimum Point Match Quality (> 0.9) and increase the Correlation Size and Least Squares Size. NOTE: A large number of points is unnecessary and slows down APM processing, and it does not necessarily improve the accuracy. Try to trim down the numbers and keep the best points.

Edge Matching

APM by its nature avoids the edges of images. If no points were found along an edge, features there could be misaligned. The edge matching workflow typically aligns features toward the center of the overlap. If the next step of processing is mosaicking, a cutline down the center of the overlap will eliminate issues on the edge. To match the entire overlap, it may be necessary to manually collect points on the edge, especially on linear features, to obtain full alignment.

Modeling

This section provides a brief explanation of the various mathematical models available in IMAGINE AutoSync. You can apply these models to the input images in order to geometrically correct them. Understanding the model properties helps you select the model that will generate the most accurate results for your dataset.

Image to Image (2D) Transforms

An image-to-image transform can warp one image onto another without the use of an earth model (DEM). The fit will not be as good as with a rigorous sensor model that requires a DEM, because much of the distortion comes from the terrain.

Rubber Sheeting

Rubber sheeting is a two-dimensional image-to-image transformation implemented as a piecewise transformation based on the triangles formed from the tie points. It has the property that the transformation is always exact at the control points, and there is always a well-behaved transition from triangle to triangle. However, if an image has hilly or mountainous terrain, you will have to collect a large number of tie points; in effect, the tie points form a model of the terrain surface. This can be impractical, since the performance of rubber sheeting decreases as the number of points increases. Rubber sheeting is best used in an area of moderate relief when an actual sensor model and a DEM are not available.

Polynomial

A polynomial model is a two-dimensional image-to-image transformation. For a polynomial of order n, the model has the form:

X = Σ A_ij x^i y^j    Y = Σ B_ij x^i y^j    (summed over i + j ≤ n)

If the order is set to 1, the result is an affine (linear) transformation, which is appropriate for cases where there is little or no terrain displacement and most of the image-to-image difference is in the form of scale, offset, and rotation. You can use higher orders in cases where there are slowly varying terrain effects in addition to scale, offset, and rotation. In both cases, it is assumed that the actual sensor model and DEM are not available.

Ground to Image (3D) Transforms

Every image is a mapping of three-dimensional (3D) coordinates onto a two-dimensional (2D) plane. A ground-to-image transformation models this mapping using a DEM as the earth model.
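Returning to the polynomial image-to-image model above: its first-order (affine) case can be written out explicitly. The coefficient values below are illustrative, not taken from any real solved model:

```python
def affine_transform(params, x, y):
    """First-order (affine) case of the 2D polynomial model:
        X = a0 + a1*x + a2*y
        Y = b0 + b1*x + b2*y
    which captures offset, scale, and rotation between two images.
    Coefficients here are illustrative only.
    """
    a0, a1, a2, b0, b1, b2 = params
    return (a0 + a1 * x + a2 * y,
            b0 + b1 * x + b2 * y)

# A pure offset: shift every pixel by (+100, +50).
shift = (100.0, 1.0, 0.0, 50.0, 0.0, 1.0)
X, Y = affine_transform(shift, 10, 20)
```

In practice the six coefficients are solved from the tie points by least squares; a higher-order model simply adds terms (x^2, x*y, y^2, ...) and coefficients to the same pattern.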

Rigorous Orbital Pushbroom (ROP)

Many current satellite imaging systems use pushbroom sensors. A pushbroom sensor has a linear sensing array associated with an optical system; the whole system is moved forward by the orbital motion of the satellite. As it moves, the single line is pushed forward, scanning a whole image. The linear array can be thought of as a very narrow camera, which can be modeled with the same six parameters as a frame camera. Each line has a different set of six parameters. Because the orbit of the satellite is very stable and well known, the position (x, y, z) and orientation omega, phi, and kappa (ω, φ, κ) can be modeled as time-varying parameters. The time is related to each line, so these parameters can be computed given the line number in the image. In the case of the Rigorous Orbital Pushbroom (ROP) model, the parameters of the orbit model are refined using the tie points.

Satellites currently handled by the ROP model include:

SPOT 5
EROS 1A
ASTER
QuickBird
OrbView

Approximate Sensor Models

It is possible to use mathematical approximations for many sensor models. These approximations do not directly model the sensor. Instead, they are based on mathematical formulas whose results are very close to those of the rigorous sensor model.

Projective Transform Model

The Projective Transform model provides a more powerful modeling capability for multiperspective satellite images such as Landsat, SPOT, and QuickBird. While the best way to transform is to use the sensor-specific models, the Projective Transform can be used in situations where no ephemeris is available, where there is no applicable sensor model, or where the satellite image has already been geometrically corrected.

Rational Polynomial Coefficients (RPC)

Rational Polynomial Coefficients are, as the name implies, ratios of polynomials. These can model reasonably complex transformations and remain stable (high order polynomials tend to be unstable). Like a polynomial, a rational polynomial is described by an order and a set of coefficients for the numerator and denominator of the X and Y terms. A rational polynomial is a 3D-to-2D transformation in the sense that a ground x, y, and z are used along with the coefficients to compute the image row and column values. The RPC coefficients are typically computed using a solved rigorous model. One reason to use RPCs is that you do not need to know the original rigorous model, so they provide a common framework that is independent of the actual sensor used. The following satellite data and formats provide RPCs:

IKONOS
QuickBird
OrbView
NITF Data

The RPC values themselves cannot be computed from the tie points and ground points. However, existing RPC values can be refined to provide more accurate transforms.
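The ratio-of-polynomials idea behind RPCs can be sketched as follows. For brevity the polynomials here are first order; real RPCs are typically third order with 20 coefficients per polynomial and operate on normalized coordinates. All coefficient values are illustrative:

```python
def rpc_eval(num, den, x, y, z):
    """Evaluate one rational polynomial term: P_num(x,y,z) / P_den(x,y,z).

    Both polynomials are first order here (c0 + c1*x + c2*y + c3*z)
    to keep the sketch short; a full RPC uses one such ratio for the
    image row and another for the column.
    """
    def poly(c):
        return c[0] + c[1] * x + c[2] * y + c[3] * z
    return poly(num) / poly(den)

# Illustrative coefficients only: the row depends on y and (weakly) on
# the ground height z, with a denominator of exactly 1.
row = rpc_eval((5.0, 0.0, 2.0, 0.1), (1.0, 0.0, 0.0, 0.0),
               x=0.0, y=10.0, z=100.0)
```

The z term is what makes an RPC a 3D-to-2D transform: the same ground (x, y) maps to a different image position at a different elevation, which is why a DEM is needed to use it.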

Direct Linear Transform (DLT)

The Direct Linear Transform (DLT) is actually an RPC whose order is equal to one. The DLT is an excellent approximation for frame cameras; when it is known that the data comes from a frame camera, you can use the DLT without knowing the specifics of the camera. The DLT coefficients can be computed from the tie points and 3D points.

Selecting a Model

You will get the best results when using a rigorous sensor model and an accurate DEM. Most satellite data are shipped with sensor model data (either parameters for the rigorous orbital pushbroom or RPCs), which IMAGINE AutoSync can read. If the model is unknown but a DEM is available, it is a reasonable strategy to first try a DLT. If the results from the DLT are not acceptable, the image may have been created using a pushbroom sensor; in that case, the pushbroom orbital parameters are unknown, so the next best candidate is one of the image-to-image (2D) transformations described above.

Selecting a DEM/DTM

The quality and accuracy of the results will be directly tied to the quality of the DEM or DTM (Digital Terrain Model) used. A DTM usually does not include man-made structures such as buildings or bridges, so these features can be expected to have the most mismatch in the final results.

Modeling Troubleshooting and Tips

Refer to this section for additional modeling troubleshooting help and tips.

The more rigorous the model, the better the result. Follow this list to make the best of the available information from the data. The recommended order of models, from the most rigorous to the least, is:

1. Rigorous Orbital Pushbroom (ROP)
2. Rational Polynomial Coefficients (RPC)
3. Direct Linear Transform (DLT)
4. Polynomial
5. Rubber Sheeting

The quality of the points is essential in determining the outcome of the image-to-image transformation models. Ideally they should be evenly distributed and closely matched.
If a particular region has very few points, you may collect some points manually to compensate. If you determine that an image-to-image (2D) transform is the best available model for the images, the following tips can help you decide whether to use an Affine, Polynomial, or Rubber Sheeting model.

Table 7: Tie Point-Based Model Selection

Number of Tie Points / Appropriate Model
< 10: Affine
10 - 50: Polynomial (3rd Order)
> 50: Polynomial or Rubber Sheeting

Note: When the density of points is satisfactory with an even distribution, use Rubber Sheeting. Otherwise, use the 3rd Order Polynomial.

Rubber sheeting models always have an RMSE of zero, so points need to be manually reviewed in order to assess accuracy. When using the Linear or Non-Linear Rubber Sheeting model, you can first try a Polynomial model for the data. Then use the RMSE as a threshold to find and remove any mismatched points with the RMSE threshold selection tool in the IMAGINE AutoSync Workstation. The RMSE threshold selection tool selects all points that do not meet the specified value. By right-clicking in the far left column of the CellArray, you can delete all selected points quickly, eliminating the bad points. Finally, apply the Rubber Sheeting model. Only the remaining points will be used, yielding better results.

When results from the Linear or Non-Linear Rubber Sheeting model show significant misalignment in a region, selectively collect some manual tie points in that area and re-solve the model to correct the situation. This process can be very effective when the overall model fits well but is problematic in a small number of regions.

Deleting points with an error greater than Error Mean + 2 x Standard Deviation, and re-solving the model 3-4 times, will drastically improve the overall error results. Repeating this process more than 3-4 times will have diminishing returns. This is more applicable when using a rigorous model such as RPC, Orbital Pushbroom, or DLT. When using other models (Polynomial, Linear or Non-Linear Rubber Sheeting, and so on), this method may not be as beneficial. During the process of re-solving the model, you can undo the last deletion of points if the model results are not what you expected.
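The Error Mean + 2 x Standard Deviation rule of thumb above is easy to sketch. The function name and return format below are illustrative, not the workstation's actual selection tool:

```python
import statistics

def suspect_points(errors, k=2.0):
    """Return indices of tie points whose error exceeds
    mean + k * standard deviation, the rule of thumb suggested in the
    text for the RMSE threshold selection tool (names illustrative).
    """
    mean_e = statistics.mean(errors)
    std_e = statistics.pstdev(errors)
    threshold = mean_e + k * std_e
    return [i for i, e in enumerate(errors) if e > threshold]

# One point with a gross error stands out against ten good ones.
errors = [0.3, 0.4, 0.5, 0.4, 0.3, 0.5, 0.4, 0.3, 0.5, 0.4, 9.0]
bad = suspect_points(errors)
```

Because each deletion changes the mean and standard deviation, re-solving the model and re-applying the rule can flag further points, which is why the text suggests repeating the cycle 3-4 times and no more.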
IMAGINE AutoSync Tips and Hints

This section provides additional tips and hints for using IMAGINE AutoSync to generate the best results.

Interpreting Results

After careful data preparation, you can run APM and tie the images together through a mathematical model. Then you can review the results. This section explains how to correctly interpret the results, identify any problems, and resolve them.

Visual Inspection

Visual inspection in the workstation is the most reliable method of verifying results. Use the Swipe tool on overlaid images to inspect them for proper alignment. A well-aligned set of images will swipe smoothly without sudden visual interruption, except where there are real changes (for example, new buildings) or shadows.

Tie Point Quality Analysis

Inspect selected areas and use the Zoom tool in the workstation to assess quality. Also look for an uneven distribution of tie points; if this occurs, you may need to manually collect some points to compensate. An easy way to locate suspect points is to select points based on RMSE (for instance, RMSE > Error Mean + 2 x Standard Deviation), then use the Drive To tool to locate the points quickly.

RMSE Analysis

RMSE is the cumulative result of point matching and modeling, so a large RMSE could be caused by either or both. Inspect the tie points to determine whether the points are the culprit. Often, the inappropriate use of a model is responsible for a large error value; an example would be when the point quality is very good, yet the results show a large RMSE. Conversely, a small RMSE may not necessarily indicate good overall results. It could be an artifact of the model being used. For example, the Linear or Non-Linear Rubber Sheeting model by definition produces a zero error, because all tie points are made to match exactly in the results while the regions surrounding the points may be distorted. Therefore, you should always perform a visual inspection along with analyzing the RMSE to ensure a correct judgment of the error conditions.
Follow-up Actions

When your analysis of the results points to problems in either APM or modeling, refer to the proper sections of this chapter for specific tips for improvement:

For better data preparation, refer to Data Preparation on page 2.
For APM parameter tuning, refer to APM Engine on page 4.
For a better model choice, refer to Modeling on page 12.

Using the IMAGINE AutoSync Workstation

This section provides some helpful tips for using the IMAGINE AutoSync Workstation.

Use the IMAGINE AutoSync Workstation for complex workflows that require more user intervention. The workstation provides more flexibility, tools for visual inspection of the results, and tools for manually collecting tie points.

Set the color of the GCPs in the IMAGINE AutoSync preferences before running APM in the workstation. This avoids having to individually select all of the GCPs generated by APM to change their color. Selecting a large number of GCPs (> 2000) can be very time-consuming.

Use the Preview Output option on the Input Image context menu to view the results of the model before calibrating or resampling the imagery. While in preview mode, you can continue to delete GCPs and re-solve the model. Whenever you select Preview Output again, the model is recomputed and the viewer updated appropriately. This avoids having to return to the Point Review mode.

When both the input and reference images contain projection information and the two images are of different resolutions, click the Set Same Scale icon on the toolbar in the workstation to set the display scale of each image to be the same. During point review, this makes it easier to find similar features between the two images.

If large errors are produced when using imagery with no projection information, review the manually collected points to ensure they were collected over the same features. Low quality manual GCPs will produce low quality models.

Using very low resolution elevation data with the rigorous models may result in a shearing effect in the output imagery. This is due to the difference in resolution between the input image and the elevation data.

It saves time to turn off the display of a large number of tie points in the Overview in the workstation.
Using the IMAGINE AutoSync Wizards

This section provides some helpful tips for using the IMAGINE AutoSync wizards.

Use the Georeferencing or Edge Matching wizards when you want to be guided through a workflow. The wizard workflow can be labor-saving when the datasets are homogeneous, large, and can be processed with the same parameter and model settings.

Do not mix different types of datasets, where each dataset requires different settings, in the same wizard workflow. Separate the datasets into different projects.

Before batching large homogeneous datasets with the wizard, run one dataset through the wizard and inspect the results in the workstation. Experiment with the settings of the workflow to find one that generates optimal output. Then batch the large datasets with the same settings.

After you determine a good workflow, you can make batching easier by creating a template IMAGINE AutoSync .lap file with the proper settings but no images in the workstation. Then load the template .lap file in the wizard and add the large datasets.

IMAGINE AutoSync Workflows

IMAGINE AutoSync supports three main types of workflows. Use the workflow that suits the nature of the data and your applications.

Georeferencing Workflow

Use the georeferencing workflow if you know that one input image is clearly of better accuracy, and both images are georeferenced. For example, use the georeferencing workflow if you have a database of high-accuracy images and you want to introduce another georeferenced image of lesser quality to the database.

Edge Matching Workflow

Use the edge matching workflow to bring the overlap area of image pairs into alignment. This workflow modifies both images, because the required shifts are divided between the images. Edge matching may also be a good choice when the overlap area is small and it is undesirable to apply a transform that is suitable for the overlap region to the entire image.

Raw Imagery Workflow

Use the raw imagery workflow when an image does not have georeferencing information available, or when that information is unreliable. When the georeferencing information is unreliable, you should ignore it.
With raw images, you need to manually collect at least three tie points that are evenly distributed, close to the image corners, and of high quality.

General IMAGINE AutoSync Tips and Hints

Some miscellaneous tips and hints for using IMAGINE AutoSync:

Calculating statistics on all imagery before using it within IMAGINE AutoSync will improve overall performance.

IMAGINE AutoSync recomputes pyramids at the start of the APM process for any image that does not have 3 x 3 pyramids previously computed. ERDAS IMAGINE defaults to producing 2 x 2 pyramids; you can change the ERDAS IMAGINE Image File preferences to always produce 3 x 3 pyramids to save time.

Work with local files whenever possible. Saving IMAGINE AutoSync project files that contain a large number of GCPs may be slow over a network.

When resampling, ensure the output cell size is reasonable for the images. The IMAGINE AutoSync defaults may not be suitable for your application.

Summary

When properly used, IMAGINE AutoSync is a powerful tool for fast image rectification with a tremendous saving of manual labor. This is achieved by a streamlined workflow, a user-friendly workstation environment, a state-of-the-art automatic point matching engine, and a wide selection of intelligent modeling methods. The final output from IMAGINE AutoSync is the cumulative result of the workflow you select, the data quality, the APM engine usage (parameter settings), and the model selected. To ensure the best results, you should make careful and judicious decisions on these factors, starting with the data preparation and proceeding through the steps as outlined in the sections of this chapter. As with any sophisticated system, using IMAGINE AutoSync requires a basic understanding of the various components of the embedded technologies. The more knowledge you have of the data and the internal workings of IMAGINE AutoSync, the better your chance of success.

General Guidelines

Some general guidelines you should follow when using IMAGINE AutoSync:

1. Start with careful data preparation to ensure that you obtain the best data available (refer to Data Preparation on page 2).

2. Ensure that you understand the parameters of the APM engine. Analyze the data to see how many ideal or undesirable scenarios the data may exhibit, and try to rectify them
(refer to APM Engine on page 4).

3. Select the most accurate model for rectification and utilize the provided metadata. Understand the limitations of each model and troubleshoot accordingly (refer to Modeling on page 12).

4. Follow the tips and hints in IMAGINE AutoSync Tips and Hints on page 17. This will help you avoid frustration caused by improper use.

5. If your APM results are not as expected, analyze whether this is the result of bad tie points or an improper choice of sensor model (refer to Modeling on page 12). Then proceed to rectify the situation accordingly.

We are continually improving the technology in IMAGINE AutoSync to improve results and widen the scope of the product.

Using IMAGINE AutoSync

Introduction

IMAGINE AutoSync provides both wizard-driven and workstation workflows for the automated rectification of imagery. The IMAGINE AutoSync wizards guide you through the geometric correction process, or you can use the workstation for customizable control of your workflow. This chapter explains the steps for edge matching two images using the Edge Matching Wizard, and how to use the IMAGINE AutoSync Workstation to georeference a raw image. All of the data used in this chapter are in the <ERDAS_Data_Home>/examples directory.

Using the Edge Matching Wizard

In this section, you use the IMAGINE AutoSync Edge Matching Wizard to align two images so that features in the overlapping area match up. The two files to be edge matched are air-photo-1.img and air-photo-2.img. These data files are air photo images of the Oxford, Ohio area. You must have ERDAS IMAGINE running.

1. Click the IMAGINE AutoSync icon on the ERDAS IMAGINE icon panel. The IMAGINE AutoSync menu opens.

2. Select Edge Matching Wizard... from the IMAGINE AutoSync menu. The IMAGINE AutoSync Edge Matching Wizard opens. Using IMAGINE AutoSync 23

Using the Input tab

In the Input tab, you add the images to be edge matched. IMAGINE AutoSync edge matches neighboring images, so the input image order in the CellArray is important.

1. In the Input tab, click the Open File icon. The Input Images dialog opens.

2. In the Input Images dialog, under File name, select air-photo-1.img from the file list.

3. Click OK in the Input Images dialog. The file air-photo-1.img displays in the Input Images column in the Edge Matching Wizard dialog.

4. Repeat step 1 through step 3 for the second image, this time selecting air-photo-2.img.

5. Click Next> to continue to the APM Strategy tab in the Edge Matching Wizard.

Using the APM Strategy tab

In the APM Strategy tab, you can adjust the algorithm settings that control the placement of automatically generated tie points in your images. You can also select which input image layer to use to achieve a better point matching result.

1. Accept the default settings in the APM Strategy tab. Make sure that Defined Pattern is selected.

2. Click Next> to continue to the Edge Match Strategy tab in the Edge Matching Wizard.

Using the Edge Match Strategy tab

In the Edge Match Strategy tab, you can select a refinement method and choose whether to apply the refinement to the overlapping area only or to the whole image.

1. In the Edge Match Strategy tab, click the Refinement Method list and select Linear Rubber Sheeting.

2. Accept the Apply Refinement to default of Overlapping Area Only to apply refinement only to the overlapping area between the images.

3. In the Edge Match Strategy tab, keep the default value in the Buffer Around the Overlapping Area (pixels): field.

4. Click Next> to continue to the Projection tab in the Edge Matching Wizard.

Using the Projection tab

In the Projection tab, you can set a projection for your output images. You can use the same projection as the corresponding input image, or another specified projection.

NOTE: The Output Projection fields are greyed out in the Projection tab until you select the Resample geocorrection method in the Output tab.

1. In the Projection tab, accept the default Output Projection of Same as Input Image.

2. Click Next> to continue to the Output tab in the Edge Matching Wizard.

Using the Output tab

In the Output tab, you specify the properties for your output images, including selecting the geocorrection method and specifying names for the output files and the summary report.

1. In the Output tab, select the Resample geocorrection method.

2. Click Resample Settings... in the Output tab.

The Resample Settings dialog opens.

For more information on the geocorrection methods, see Resampling vs. Calibration on page 30.

3. Accept the default settings in the Resample Settings dialog. Make sure the Cubic Convolution resample method is selected.

4. Click OK in the Resample Settings dialog.

5. In the Output tab, click the Set Output File Names... button.

The Output File Names dialog opens.

6. In the Output File Names dialog, click the File Selector icon to select a default output directory of your choice.

7. In the Default Output File Name Suffix field, enter a default file name suffix of your choice, or use the default _output.

8. Click OK in the Output File Names dialog.

9. In the Output tab, make sure the Generate Summary Report checkbox is selected and enter a name of your choice for the HTML summary report. You can also click the File Selector icon to select a directory of your choice.

10. In the Output tab, click Save to save the project. A File Selector opens, and you can save the project to a directory of your choice.

11. In the Output tab, click Finish to complete the edge matching process.

The AutoSync Job Status dialog appears, showing the progress of the edge match operation.

12. Click OK in the AutoSync Job Status dialog when the operation is finished.

NOTE: Edge matching can take several minutes to run, depending on your hardware capabilities and the size of the image files.

Resampling vs. Calibration

Resampling

Resampling is the process of calculating the file values for the rectified image and creating the new file. All of the raster data layers in the source file are resampled, so the output image has as many layers as the input image. ERDAS IMAGINE provides these widely known resampling algorithms:

Nearest Neighbor
Bilinear Interpolation
Cubic Convolution
Bicubic Spline

Calibration

Instead of creating a new, rectified image by resampling the original image based on the mathematical model, calibrating an image simply saves the mathematical model into the original image as auxiliary information. Calibration does not generate new images, so when the calibrated image is used, the math model comes into play as needed. For example, to see the calibrated image in its rectified map space in a Viewer, select the Orient image to map system option in the Select Layer To Add dialog; the image is then resampled on the fly based on the math model.

A major drawback to image calibration is that processing of the calibrated image slows down significantly if the math model is complicated. A minor advantage is that calibration uses less disk space and leaves the image's spectral information undisturbed.

NOTE: We recommend that image calibration be used only when necessary, due to the drawbacks of the process.

Display Output Image

1. Click the Viewer icon in the ERDAS IMAGINE icon panel.

The Select Viewer Type dialog opens.

2. Select Classic Viewer.

3. Click OK in the Select Viewer Type dialog. A new Viewer displays.

4. Click the Open icon in the Viewer you just created.

The Select Layer To Add dialog opens.

5. Click the Raster Options tab at the top of the Select Layer To Add dialog.

6. Select the Background Transparent option.

7. Click the File tab at the top of the Select Layer To Add dialog.

8. In the Select Layer To Add dialog under Filename, select the output images from the directory in which you saved them.

9. Click OK in the Select Layer To Add dialog.

The edge matched output images display in the Viewer.

Use the Viewer Swipe Tool

1. To perform visual verification using the Viewer Swipe tool, select Utility -> Swipe... from the Viewer menu bar.

The Viewer Swipe dialog opens.

2. Check Auto Mode in the Viewer Swipe dialog, and type 500 for the Speed.

You can watch as the swipe tool slowly works its way over the images, allowing you to evaluate the quality. Experiment with both Vertical and Horizontal directions and different speeds.

View Summary Report

Once you have finished edge matching the images, you can view the HTML summary report to review information about the error, tie points, and so on.

1. In a Windows Explorer window, browse to the directory where you saved the HTML report (in the Output tab).

2. Open the HTML file. The summary report opens in a browser window.

You can experiment with selecting different options in the Edge Matching Wizard tabs to produce different results. For example, in the Edge Match Strategy tab, apply refinement to the Whole Image (instead of the Overlapping Area Only), or change the Buffer Around the Overlapping Area (pixels) value to see the differences in the resulting output images.

Using the AutoSync Workstation

In this section, you use the georeference workflow in the IMAGINE AutoSync Workstation to georeference a raw Landsat TM image of Atlanta, Georgia, using a SPOT panchromatic image of the same area. The raw Landsat TM image does not have any map information; the SPOT image is rectified to the State Plane map projection.

This section explains the steps for using the georeference workflow in the workstation to georeference a raw image (an image without any map information). When georeferencing a rectified image, you do not need to manually collect tie points before running APM.

To georeference a raw image in the IMAGINE AutoSync Workstation, follow these basic steps:

create a new IMAGINE AutoSync project
add an input image
add an image to reference against the input image
collect manual tie points
run APM
preview the output image
improve output image results (if necessary)
review the input and reference image map data information
set the output image projection
resample or calibrate the output image
verify the rectification process
view the summary report

Create New IMAGINE AutoSync Project

First, create a new IMAGINE AutoSync project. You must have ERDAS IMAGINE running.

1. Click the AutoSync icon on the ERDAS IMAGINE icon panel.

The AutoSync menu opens.

2. Select AutoSync Workstation... from the AutoSync menu.

The IMAGINE AutoSync Workstation Startup dialog opens.

3. On the IMAGINE AutoSync Workstation Startup dialog, select Create a new project.

4. Click OK.

The Create New Project dialog opens.

5. On the Create New Project dialog, select the Georeference workflow.

6. In the Project File (*.lap) field, click the File Selector icon or enter a project file name of your choice.

7. On the Create New Project dialog, select the Resample geocorrection method.

8. Click the Resample Settings... button.

The Resample Settings dialog opens.

9. Accept the default settings in the Resample Settings dialog. Make sure the Cubic Convolution resample method is selected.

IMAGINE AutoSync provides these widely known resampling algorithms: Nearest Neighbor, Bilinear Interpolation, Cubic Convolution, and Bicubic Spline. In some cases you may want to change the Resample Method, but for this chapter, leave it set to Cubic Convolution.

10. Click OK in the Resample Settings dialog.

11. In the Create New Project dialog, in the Default Output Directory: (*) field, click the File Selector icon to select a default output directory of your choice.

12. In the Default Output File Name Suffix field, enter a default file name suffix of your choice, or keep the default _output.

13. In the Create New Project dialog, make sure the Generate Summary Report checkbox is selected. The name of the project in the Project File field defaults as the summary report name, but you can also click the File Selector icon to select a different name and directory of your choice.

14. If you are using SPOT DIMAP for input and you selected an imagery.tif file for the input image file name, you can run APM without manually measured points.
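For intuition about what the resampling methods just listed do, the sketch below (illustrative Python, not ERDAS IMAGINE code — the function names and the tiny sample image are invented for this example) shows how the two simplest methods derive one output pixel value from source pixels: Nearest Neighbor copies the closest pixel, while Bilinear Interpolation averages the surrounding 2 x 2 block.

```python
import numpy as np

def nearest_neighbor(img, x, y):
    # Take the value of the single closest source pixel.
    r = min(max(int(round(y)), 0), img.shape[0] - 1)
    c = min(max(int(round(x)), 0), img.shape[1] - 1)
    return img[r, c]

def bilinear(img, x, y):
    # Distance-weighted average of the 2 x 2 block of source pixels around (x, y).
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bottom = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bottom

# A made-up 2 x 2 single-band image, purely for illustration.
img = np.array([[10.0, 20.0], [30.0, 40.0]])
print(nearest_neighbor(img, 0.4, 0.4))  # 10.0 (snaps to the nearest pixel)
print(bilinear(img, 0.5, 0.5))          # 25.0 (average of all four pixels)
```

Cubic Convolution and Bicubic Spline work on the same principle but weight a larger 4 x 4 neighborhood, which is why they produce smoother output at a higher computational cost.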

The IMAGINE AutoSync Workstation contains a menu bar, the IMAGINE AutoSync toolbar, the Project Explorer, the Viewer panes, the GCP toolbar, the CellArray, and a Status bar.

Add Input Image

After you have created the IMAGINE AutoSync project, the next step is to add the input image you want to georeference.

1. To add an input image to the project, do one of the following:

In the IMAGINE AutoSync toolbar, click the Open Input Images icon
Select File -> Add Images -> Input Images... from the menu bar
Right-click on the Input Images folder in the Project Explorer Tree View and select Add Input Image...

The Select Images To Open dialog opens.

2. In the Select Images To Open dialog under Filename, click the file tmatlanta.img. This file is a Landsat TM image of Atlanta that has not been rectified.

3. Click OK in the Select Images To Open dialog.

The input image tmatlanta.img displays in the IMAGINE AutoSync Workstation.

The input file displays in the Project Explorer and in the left Viewer panes.

Add Reference Image

After you have added an input image, the next step is to add an image to reference against the input image.

1. To add a reference image to the project, do one of the following:

In the IMAGINE AutoSync toolbar, click the Open Reference Images icon
Select File -> Add Images -> Set Reference Image... from the menu bar
Right-click on the Reference Image folder in the Project Explorer Tree View and select Set Reference Image...

The Select Images To Open dialog opens.

2. In the Select Images To Open dialog under Filename, click the file panatlanta.img. This file is a SPOT panchromatic image of Atlanta that has been georeferenced to the State Plane map projection.

3. Click OK in the Select Images To Open dialog.

The reference image panatlanta.img displays in the IMAGINE AutoSync Workstation.

The reference file displays in the Project Explorer and in the right Viewer panes.

Collect Manual Tie Points

Once you have loaded the input and reference images in the IMAGINE AutoSync Workstation, you can manually collect tie points. This step is necessary here because the input image (tmatlanta.img) is a raw image without any map information. When using the IMAGINE AutoSync Workstation to georeference images that have map information, you do not need to manually collect tie points before running APM, and you can skip this step.

1. In the GCP toolbar, click the Create GCP icon.

2. In the GCP toolbar, click the Show Selected Points icon.

3. To make the input image tie points easier to see in the Viewer on the left, right-click in the Color column to the right of Point ID in the first row of the CellArray and select the color Yellow. Repeat this for each tie point in the CellArray.

4. To make the reference image tie points easier to see in the Viewer on the right, right-click in the Color column to the right of Y Input in the first row of the CellArray and select the color Yellow. Repeat this for each tie point in the CellArray.

5. In the Main View pane of the input image, click a location to collect a tie point.

The point you have created is labeled 1 in the Main View pane, and its X and Y inputs are listed in the CellArray. Also notice that the input image icon in the Project Tree View now has a green border, because the image now has tie points.

6. In the Main View pane of the reference image, click the same location to collect a tie point.

The CellArray lists the X and Y map coordinates of each tie point in the input image (tmatlanta.img) and in the reference image (panatlanta.img).

7. Collect at least three manual tie points in both the input and reference images.

You should choose points that are easily identifiable in both images, such as road intersections and landmarks, so that the images match properly. Also, scatter your tie points around the images so they are not all concentrated in one place; try to collect tie points close to each of the four corners of the images.

If you are using SPOT DIMAP for input and you selected an imagery.tif file for the input image file name, you can run APM without manually measured points.

Your project in the IMAGINE AutoSync Workstation should now look similar to the following:

Run APM

After collecting several tie points in the input and reference images, the next step is to run automatic point matching (APM) to automatically generate more control points for your images.

1. To run APM, do one of the following:

In the IMAGINE AutoSync toolbar, click the Run APM icon
Select Process -> Run APM from the menu bar
Right-click on the input image (tmatlanta.img) in the Project Explorer Tree View and select Run APM

The Status bar at the bottom of the workstation displays the RMSE and Error Standard Deviation results. The generated APM points populate the CellArray and display in the Viewers for both the input and reference images.
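As a rough illustration of what these Status bar statistics summarize (the formulas below are the standard definitions of RMSE and standard deviation, not necessarily the exact computation IMAGINE AutoSync performs, and the residual values are made up):

```python
import math

def rmse_and_std(residuals):
    # residuals: per-tie-point errors, e.g. the distance in pixels between
    # the model-predicted and the measured position of each point.
    n = len(residuals)
    rmse = math.sqrt(sum(r * r for r in residuals) / n)
    mean = sum(residuals) / n
    std = math.sqrt(sum((r - mean) ** 2 for r in residuals) / n)
    return rmse, std

# Hypothetical residuals for five tie points, in pixels.
rmse, std = rmse_and_std([0.4, 0.9, 1.3, 0.6, 2.4])
print(f"RMSE: {rmse:.2f}  Error Std. Dev.: {std:.2f}")
```

A low RMSE means the fitted model passes close to all the tie points, while a low standard deviation means the errors are consistent; a single bad tie point tends to inflate both numbers, which is why the next sections show how to find and delete such points.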

Preview Output Image

After you run APM, you can preview the output image to make sure you are satisfied with the results before resampling or calibrating.

1. To preview the output image, right-click on the input image (tmatlanta.img) in the Project Explorer Tree View and select Preview Output.

The output image displays in the Viewer.

Improve Output Image Results

If you preview the output results and the image is warped, shows black areas, or produces other unacceptable output, it is most likely the result of incorrect APM tie points or an inappropriate sensor model. In this chapter, if you are dissatisfied with the results, there are most likely incorrect tie points that you should delete before resampling. If the Error Std. Dev. is higher than 2.0, you should also follow these steps to improve the tie points. If you delete incorrect APM points and the results are still poor, try changing the sensor model in the IMAGINE AutoSync Project Properties dialog.

To learn more about improving APM results, see APM Engine.

1. Right-click on the input image (tmatlanta.img) in the Project Explorer Tree View and select Review Points.

The input and reference images display in the Viewer panes, showing the tie points.

2. In the GCP toolbar, enter 2 in the error threshold text box, or use the nudgers to the right of the field to select it.

3. In the GCP toolbar, click the Select GCPs with Error Threshold icon.

The tie points with an error higher than 2 are highlighted in the CellArray.
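Conceptually, the threshold selection flags every tie point whose residual error exceeds the value you entered. A minimal sketch (the point IDs and error values are hypothetical stand-ins for CellArray rows, not the actual data structure):

```python
# Hypothetical (Point ID, error) pairs standing in for CellArray rows.
points = [("1", 0.6), ("2", 2.7), ("3", 1.1), ("4", 3.4), ("5", 0.9)]

threshold = 2.0
flagged = [pid for pid, err in points if err > threshold]
print(flagged)  # ['2', '4'] -- the points a user would review and possibly delete
```

The steps that follow do exactly this interactively: drive to each flagged point, inspect it in the Viewers, and delete it if it is genuinely wrong.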

4. Click the Drive To icon to step through the selected points in the CellArray.

As you click through the points, each point is highlighted with a box in the Viewers.

5. When you find a point with a high error, click the Delete GCP icon.

The selected point is deleted from the Viewers and the CellArray.

6. After you delete all the points with a high error, right-click on the input image (tmatlanta.img) in the Project Explorer Tree View and select Preview Output.

Review Image Map Data

The next step is to review the image map data for the input and reference images. Reviewing the map and projection information helps you determine whether you want the output image to have the same projection as the reference image.

1. Click on the reference image in the Main View pane or in the Project Explorer Tree View to select it.

2. On the IMAGINE AutoSync toolbar, click the ImageInfo icon.

The ImageInfo dialog for panatlanta.img opens.

Note the information in the Map Info section, and that the Projection Info section shows the image is georeferenced to State Plane.

3. When you are finished, select File -> Close in the ImageInfo dialog.

4. Click on the input image in the Main View pane or in the Project Explorer Tree View to select it.

5. On the IMAGINE AutoSync toolbar, click the ImageInfo icon.

The ImageInfo dialog for tmatlanta.img opens.

Note the information in the Map Info section, and that the Projection Info section shows this is a raw image with no projection information. Therefore, for this chapter, use the projection from the reference image (panatlanta.img) for the output image.

Set Output Image Projection

Since the image to be georeferenced is a raw image with no projection information, you need to change the project properties to set the output projection before resampling or calibrating the image.

1. To change the output projection, do one of the following:

Select Process -> Project Properties... from the menu bar
In the IMAGINE AutoSync toolbar, click the Edit Project Properties icon

The IMAGINE AutoSync Project Properties dialog opens.

2. Click the Projection tab.

The Projection tab opens in the IMAGINE AutoSync Project Properties dialog.

3. In the Output Projection section, select Same as Reference Image.

The projection information from the reference image displays (greyed out).

4. Click the OK button to close the IMAGINE AutoSync Project Properties dialog.

Resample Output Image

Resampling is the process of calculating the file values for the rectified image and creating the output file. All of the raster data layers in the source file are resampled, so the output image has as many layers as the input image.

1. To resample the output image, do one of the following:

Select Process -> Calibrate/Resample from the menu bar
Right-click on the input image (tmatlanta.img) in the Project Explorer Tree View and select Calibrate/Resample

You can change the resample settings in the Output tab of the IMAGINE AutoSync Project Properties dialog. You may want to experiment with changing the resample settings later, but this chapter uses the default settings.

The resampled output image (tmatlanta_output.img) displays in the workstation Viewer, and the output image name now displays in the Output Images folder in the Project Explorer. Your project in the IMAGINE AutoSync Workstation should now look similar to the following:

Verify Output Image

Once the output image is created, you can use the workstation to verify it. You can verify that the input image (tmatlanta.img) has been correctly georeferenced to the reference image (panatlanta.img) by visually checking that the two images conform to each other, using the Viewer Blend/Fade, Viewer Swipe, or Viewer Flicker verification tools.

Use the Viewer Blend/Fade Tool

1. To perform visual verification using the Viewer Blend/Fade tool, click the Start Blend Tool icon on the IMAGINE AutoSync toolbar.

The Viewer Blend/Fade dialog opens.

2. Select Auto Mode in the Viewer Blend/Fade dialog, and type 500 for the Speed.

You can watch as the tool slowly blends the images, allowing you to evaluate the quality. Experiment with different speeds, or use the slider to blend and fade the images manually.

Use the Viewer Swipe Tool

1. To perform visual verification using the Viewer Swipe tool, click the Start Swipe Tool icon on the IMAGINE AutoSync toolbar.

The Viewer Swipe dialog opens.

2. Check Auto Mode in the Viewer Swipe dialog, and type 500 for the Speed.

You can watch as the swipe tool slowly works its way over the images, allowing you to evaluate the quality. Experiment with both Vertical and Horizontal directions and different speeds.
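The Blend/Fade tool works by displaying a weighted mix of the two co-registered images and sweeping the weight back and forth. A minimal sketch of that idea (illustrative only, with a made-up blend function and constant-valued stand-in images, not ERDAS code):

```python
import numpy as np

def blend(input_img, reference_img, alpha):
    # alpha = 1.0 shows only the input image, alpha = 0.0 only the reference;
    # sweeping alpha between 0 and 1 reproduces the auto-mode fade.
    return alpha * input_img + (1.0 - alpha) * reference_img

# Constant-valued stand-ins for the two co-registered images.
a = np.full((2, 2), 100.0)
b = np.full((2, 2), 200.0)
print(blend(a, b, 0.25))  # every pixel is 0.25*100 + 0.75*200 = 175.0
```

Swipe and Flicker rely on the same co-registration: if the images are well rectified, features stay put as the mix, divider, or visible layer changes; misregistration shows up as features that appear to jump.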

Use the Viewer Flicker Tool

1. To perform visual verification using the Viewer Flicker tool, click the Start Flicker Tool icon on the IMAGINE AutoSync toolbar.

The Viewer Flicker dialog opens.

2. Check Auto Mode in the Viewer Flicker dialog, and type 500 for the Speed.

You can watch as the flicker tool switches between the top and bottom images, allowing you to evaluate the quality. You can also click Manual Flicker to quickly switch between the images. Experiment with different speeds.

View Summary Report

You can view the HTML summary report to review information about the error, tie points, and so on.

1. To view the summary report, do one of the following:

In the GCP toolbar, click the Summary Report icon
Right-click on the input image (tmatlanta.img) in the Project Explorer Tree View and select Review Report
Open a Windows Explorer window, browse to the directory where you saved the HTML report (in the Create New Project dialog), and open the .html file

The summary report opens in a separate browser window.


New! Analysis Ready Data Tools Add-on package for image preprocessing for multi-temporal analysis. Example of satellite imagery time series of Canada Highlights New! Analysis Ready Data Tools Add-on package for image preprocessing for multi-temporal analysis Rigorous scientific preprocessing Example of satellite imagery time series of Canada A new industry

More information

Geometric Accuracy Evaluation, DEM Generation and Validation for SPOT-5 Level 1B Stereo Scene

Geometric Accuracy Evaluation, DEM Generation and Validation for SPOT-5 Level 1B Stereo Scene Geometric Accuracy Evaluation, DEM Generation and Validation for SPOT-5 Level 1B Stereo Scene Buyuksalih, G.*, Oruc, M.*, Topan, H.*,.*, Jacobsen, K.** * Karaelmas University Zonguldak, Turkey **University

More information

Office 2016 Excel Basics 01 Video/Class Project #13 Excel Basics 1: Excel Grid, Formatting, Formulas, Cell References, Page Setup (O16-13)

Office 2016 Excel Basics 01 Video/Class Project #13 Excel Basics 1: Excel Grid, Formatting, Formulas, Cell References, Page Setup (O16-13) Office 2016 Excel Basics 01 Video/Class Project #13 Excel Basics 1: Excel Grid, Formatting, Formulas, Cell References, Page Setup (O16-13) Topics Covered in Video: 1) Excel file = Workbook, not Document

More information

Administrator Guide. Oracle Health Sciences Central Designer 2.0. Part Number: E

Administrator Guide. Oracle Health Sciences Central Designer 2.0. Part Number: E Administrator Guide Oracle Health Sciences Central Designer 2.0 Part Number: E37912-01 Copyright 2013, Oracle and/or its affiliates. All rights reserved. The Programs (which include both the software and

More information

Analysis Ready Data For Land (CARD4L-ST)

Analysis Ready Data For Land (CARD4L-ST) Analysis Ready Data For Land Product Family Specification Surface Temperature (CARD4L-ST) Document status For Adoption as: Product Family Specification, Surface Temperature This Specification should next

More information

Automatic DEM Extraction

Automatic DEM Extraction Automatic DEM Extraction The Automatic DEM Extraction module allows you to create Digital Elevation Models (DEMs) from stereo airphotos, stereo images and RADAR data. Image correlation is used to extract

More information

Guide to WB Annotations

Guide to WB Annotations Guide to WB Annotations 04 May 2016 Annotations are a powerful new feature added to Workbench v1.2.0 (Released May 2016) for placing text and symbols within wb_view tabs and windows. They enable generation

More information

POSITIONING A PIXEL IN A COORDINATE SYSTEM

POSITIONING A PIXEL IN A COORDINATE SYSTEM GEOREFERENCING AND GEOCODING EARTH OBSERVATION IMAGES GABRIEL PARODI STUDY MATERIAL: PRINCIPLES OF REMOTE SENSING AN INTRODUCTORY TEXTBOOK CHAPTER 6 POSITIONING A PIXEL IN A COORDINATE SYSTEM The essential

More information

Automatic DEM Extraction

Automatic DEM Extraction Technical Specifications Automatic DEM Extraction The Automatic DEM Extraction module allows you to create Digital Elevation Models (DEMs) from stereo airphotos, stereo images and RADAR data. Image correlation

More information

Producing Ortho Imagery In ArcGIS. Hong Xu, Mingzhen Chen, Ringu Nalankal

Producing Ortho Imagery In ArcGIS. Hong Xu, Mingzhen Chen, Ringu Nalankal Producing Ortho Imagery In ArcGIS Hong Xu, Mingzhen Chen, Ringu Nalankal Agenda Ortho imagery in GIS ArcGIS ortho mapping solution Workflows - Satellite imagery - Digital aerial imagery - Scanned imagery

More information

CREATING CUSTOMIZED SPATIAL MODELS WITH POINT CLOUDS USING SPATIAL MODELER OPERATORS TO PROCESS POINT CLOUDS IN IMAGINE 2014

CREATING CUSTOMIZED SPATIAL MODELS WITH POINT CLOUDS USING SPATIAL MODELER OPERATORS TO PROCESS POINT CLOUDS IN IMAGINE 2014 CREATING CUSTOMIZED SPATIAL MODELS WITH POINT CLOUDS USING SPATIAL MODELER OPERATORS TO PROCESS POINT CLOUDS IN IMAGINE 2014 White Paper December 22, 2016 Contents 1. Introduction... 3 2. ERDAS IMAGINE

More information

Training i Course Remote Sensing Basic Theory & Image Processing Methods September 2011

Training i Course Remote Sensing Basic Theory & Image Processing Methods September 2011 Training i Course Remote Sensing Basic Theory & Image Processing Methods 19 23 September 2011 Geometric Operations Michiel Damen (September 2011) damen@itc.nl ITC FACULTY OF GEO-INFORMATION SCIENCE AND

More information

Mosaic Tutorial: Advanced Workflow

Mosaic Tutorial: Advanced Workflow Mosaic Tutorial: Advanced Workflow This tutorial demonstrates how to mosaic two scenes with different color variations. You will learn how to: Reorder the display of the input scenes Achieve a consistent

More information

Geomatica OrthoEngine Course exercises

Geomatica OrthoEngine Course exercises Course exercises Geomatica Version 2017 SP4 Course exercises 2017 PCI Geomatics Enterprises, Inc. All rights reserved. COPYRIGHT NOTICE Software copyrighted by PCI Geomatics Enterprises, Inc., 90 Allstate

More information

TrueOrtho with 3D Feature Extraction

TrueOrtho with 3D Feature Extraction TrueOrtho with 3D Feature Extraction PCI Geomatics has entered into a partnership with IAVO to distribute its 3D Feature Extraction (3DFE) software. This software package compliments the TrueOrtho workflow

More information

ezimagex2 User s Guide Version 1.0

ezimagex2 User s Guide Version 1.0 ezimagex2 User s Guide Version 1.0 Copyright and Trademark Information The products described in this document are copyrighted works of AVEN, Inc. 2015 AVEN, Inc. 4595 Platt Rd Ann Arbor, MI 48108 All

More information

Image georeferencing is the process of developing a model to transform from pixel coordinates into GIS coordinates such as meters on the ground.

Image georeferencing is the process of developing a model to transform from pixel coordinates into GIS coordinates such as meters on the ground. Image georeferencing is the process of developing a model to transform from pixel coordinates into GIS coordinates such as meters on the ground. Image rectification is the process of using your georeferencing

More information

Setting up a 3D Environment for the City of Portland

Setting up a 3D Environment for the City of Portland Setting up a 3D Environment for the City of Portland www.learn.arcgis.com 380 New York Street Redlands, California 92373 8100 USA Copyright 2018 Esri All rights reserved. Printed in the United States of

More information

New Features in SOCET SET Stewart Walker, San Diego, USA

New Features in SOCET SET Stewart Walker, San Diego, USA New Features in SOCET SET Stewart Walker, San Diego, USA 2610083107A EXPORT CONTROL DATA. This presentation is approved for export as of 31 August 2007. The actual product and its technical information

More information

Location Intelligence Infrastructure Asset Management. Confirm. Confirm Mapping Link to MapInfo Professional Version v18.00b.am

Location Intelligence Infrastructure Asset Management. Confirm. Confirm Mapping Link to MapInfo Professional Version v18.00b.am Location Intelligence Infrastructure Asset Management Confirm Confirm Mapping Link to MapInfo Professional Version v18.00b.am Information in this document is subject to change without notice and does not

More information

IMAGINE Advantage PRODUCT DESCRIPTION

IMAGINE Advantage PRODUCT DESCRIPTION IMAGINE Advantage PRODUCT DESCRIPTION age 1 of 8 IMAGINE Advantage Product Description More than 30 years of geospatial research and geospatial software development has cumulated in the ERDAS IMAGINE suite

More information

ENVI ANALYTICS ANSWERS YOU CAN TRUST

ENVI ANALYTICS ANSWERS YOU CAN TRUST ENVI ANALYTICS ANSWERS YOU CAN TRUST HarrisGeospatial.com Since its launch in 1991, ENVI has enabled users to leverage remotely sensed data to better understand our complex world. Over the years, Harris

More information

Using Inspiration 7 I. How Inspiration Looks SYMBOL PALETTE

Using Inspiration 7 I. How Inspiration Looks SYMBOL PALETTE Using Inspiration 7 Inspiration is a graphic organizer application for grades 6 through adult providing visual thinking tools used to brainstorm, plan, organize, outline, diagram, and write. I. How Inspiration

More information

ACCURACY OF DIGITAL ORTHOPHOTOS FROM HIGH RESOLUTION SPACE IMAGERY

ACCURACY OF DIGITAL ORTHOPHOTOS FROM HIGH RESOLUTION SPACE IMAGERY ACCURACY OF DIGITAL ORTHOPHOTOS FROM HIGH RESOLUTION SPACE IMAGERY Jacobsen, K.*, Passini, R. ** * University of Hannover, Germany ** BAE Systems ADR, Pennsauken, NJ, USA acobsen@ipi.uni-hannover.de rpassini@adrinc.com

More information

Orthorectification and DEM Extraction of CARTOSAT-1 Imagery

Orthorectification and DEM Extraction of CARTOSAT-1 Imagery Orthorectification and DEM Extraction of CARTOSAT-1 Imagery TUTORIAL CARTOSAT-1 is the eleventh satellite to be built in the Indian Remote Sensing (IRS) series. This sunsynchronous satellite was launched

More information

ERDAS IMAGINE THE WORLD S MOST WIDELY-USED REMOTE SENSING SOFTWARE PACKAGE

ERDAS IMAGINE THE WORLD S MOST WIDELY-USED REMOTE SENSING SOFTWARE PACKAGE PRODUCT BROCHURE ERDAS IMAGINE THE WORLD S MOST WIDELY-USED REMOTE SENSING SOFTWARE PACKAGE 1 ERDAS IMAGINE The world s most widely-used remote sensing software package 2 ERDAS IMAGINE The world s most

More information

Layout and display. STILOG IST, all rights reserved

Layout and display. STILOG IST, all rights reserved 2 Table of Contents I. Main Window... 1 1. DEFINITION... 1 2. LIST OF WINDOW ELEMENTS... 1 Quick Access Bar... 1 Menu Bar... 1 Windows... 2 Status bar... 2 Pop-up menu... 4 II. Menu Bar... 5 1. DEFINITION...

More information

Geometric Rectification of Remote Sensing Images

Geometric Rectification of Remote Sensing Images Geometric Rectification of Remote Sensing Images Airborne TerrestriaL Applications Sensor (ATLAS) Nine flight paths were recorded over the city of Providence. 1 True color ATLAS image (bands 4, 2, 1 in

More information

Managing Image Data on the ArcGIS Platform Options and Recommended Approaches

Managing Image Data on the ArcGIS Platform Options and Recommended Approaches Managing Image Data on the ArcGIS Platform Options and Recommended Approaches Peter Becker Petroleum requirements for imagery and raster Traditional solutions and issues Overview of ArcGIS imaging capabilities

More information

Exercise #5b - Geometric Correction of Image Data

Exercise #5b - Geometric Correction of Image Data Exercise #5b - Geometric Correction of Image Data 5.6 Geocoding or Registration of geometrically uncorrected image data 5.7 Resampling 5.8 The Ukrainian coordinate system 5.9 Selecting Ground Control Points

More information

Georeferencing Imagery in ArcGIS 10.3.x

Georeferencing Imagery in ArcGIS 10.3.x Georeferencing Imagery in ArcGIS 10.3.x Georeferencing is the process of aligning imagery (maps, air photos, etc.) with spatial data such as point, lines or polygons (for example, roads and water bodies).

More information

Scenario Manager User Guide. Release September 2013

Scenario Manager User Guide. Release September 2013 Scenario Manager User Guide Release 6.2.1 September 2013 Scenario Manager User Guide Release 6.2.1 September 2013 Document Control Number: 9MN12-62110017 Document Number: SMUG-13-FCCM-0017-6.2.1-01 Oracle

More information

Image Services for Elevation Data

Image Services for Elevation Data Image Services for Elevation Data Peter Becker Need for Elevation Using Image Services for Elevation Data sources Creating Elevation Service Requirement: GIS and Imagery, Integrated and Accessible Field

More information

DIGITAL SURFACE MODELS OF CITY AREAS BY VERY HIGH RESOLUTION SPACE IMAGERY

DIGITAL SURFACE MODELS OF CITY AREAS BY VERY HIGH RESOLUTION SPACE IMAGERY DIGITAL SURFACE MODELS OF CITY AREAS BY VERY HIGH RESOLUTION SPACE IMAGERY Jacobsen, K. University of Hannover, Institute of Photogrammetry and Geoinformation, Nienburger Str.1, D30167 Hannover phone +49

More information

Digital Photogrammetric System. Version 5.3 USER GUIDE. Processing of UAV data

Digital Photogrammetric System. Version 5.3 USER GUIDE. Processing of UAV data Digital Photogrammetric System Version 5.3 USER GUIDE Table of Contents 1. Workflow of UAV data processing in the system... 3 2. Create project... 3 3. Block forming... 5 4. Interior orientation... 6 5.

More information

IMAGE CORRECTION FOR DIGITAL MAPPING

IMAGE CORRECTION FOR DIGITAL MAPPING IMAGE CORRECTION FOR DIGITAL MAPPING Yubin Xin PCI Geomatics Inc. 50 West Wilmot St., Richmond Hill, Ontario L4B 1M5, Canada Fax: 1-905-7640153 xin@pcigeomatics.com ABSTRACT Digital base maps generated

More information

Operation Guide <Functions Edition> Click on the button to jump to the desired section.

Operation Guide <Functions Edition> Click on the button to jump to the desired section. Operation Guide Click on the button to jump to the desired section. Using the Scanner Function Sending Scanned Image Data to Your Computer Sending Scanned Image Data by Email Using

More information

PHOTOGRAMMETRIC SOLUTIONS OF NON-STANDARD PHOTOGRAMMETRIC BLOCKS INTRODUCTION

PHOTOGRAMMETRIC SOLUTIONS OF NON-STANDARD PHOTOGRAMMETRIC BLOCKS INTRODUCTION PHOTOGRAMMETRIC SOLUTIONS OF NON-STANDARD PHOTOGRAMMETRIC BLOCKS Dor Yalon Co-Founder & CTO Icaros, Inc. ABSTRACT The use of small and medium format sensors for traditional photogrammetry presents a number

More information

MANUAL NO. OPS647-UM-151 USER S MANUAL

MANUAL NO. OPS647-UM-151 USER S MANUAL MANUAL NO. OPS647-UM-151 USER S MANUAL Software Usage Agreement Graphtec Corporation ( Graphtec ) hereby grants the purchaser and authorized User (the User ) the right to use the software (the Software

More information

Introduction to ERDAS IMAGINE. (adapted/modified from Jensen 2004)

Introduction to ERDAS IMAGINE. (adapted/modified from Jensen 2004) Introduction to ERDAS IMAGINE General Instructions (adapted/modified from Jensen 2004) Follow the directions given to you in class to login the system. If you haven t done this yet, create a folder and

More information

button in the lower-left corner of the panel if you have further questions throughout this tutorial.

button in the lower-left corner of the panel if you have further questions throughout this tutorial. Mosaic Tutorial: Simple Workflow This tutorial demonstrates how to use the Seamless Mosaic tool to mosaic six overlapping digital aerial scenes. You will learn about displaying footprints and image data

More information

User Guide. Oracle Health Sciences Central Designer Release 2.0. Part Number: E

User Guide. Oracle Health Sciences Central Designer Release 2.0. Part Number: E User Guide Oracle Health Sciences Central Designer Release 2.0 Part Number: E37919-01 Copyright 2013, Oracle and/or its affiliates. All rights reserved. The Programs (which include both the software and

More information

LIDAR MAPPING FACT SHEET

LIDAR MAPPING FACT SHEET 1. LIDAR THEORY What is lidar? Lidar is an acronym for light detection and ranging. In the mapping industry, this term is used to describe an airborne laser profiling system that produces location and

More information

DEM Extraction Module User s Guide

DEM Extraction Module User s Guide DEM Extraction Module User s Guide DEM Extraction Module Version 4.7 August, 2009 Edition Copyright ITT Visual Information Solutions All Rights Reserved 20DEM47DOC Restricted Rights Notice The IDL, IDL

More information

Using Imagery for Intelligence Analysis

Using Imagery for Intelligence Analysis 2013 Esri International User Conference July 8 12, 2013 San Diego, California Technical Workshop Using Imagery for Intelligence Analysis Renee Bernstein Natalie Campos Esri UC2013. Technical Workshop.

More information

Table of Contents. iii

Table of Contents. iii Photo to Movie 4.5 Table of Contents Photo to Movie Introduction... 1 Introduction... 1 Installation... 2 Organizing Your Movie... 5 Planning your movie... 5 Adding photos to your slide show... 5 Choosing

More information

Introducing ArcScan for ArcGIS

Introducing ArcScan for ArcGIS Introducing ArcScan for ArcGIS An ESRI White Paper August 2003 ESRI 380 New York St., Redlands, CA 92373-8100, USA TEL 909-793-2853 FAX 909-793-5953 E-MAIL info@esri.com WEB www.esri.com Copyright 2003

More information

ENVI. Get the Information You Need from Imagery.

ENVI. Get the Information You Need from Imagery. Visual Information Solutions ENVI. Get the Information You Need from Imagery. ENVI is the premier software solution to quickly, easily, and accurately extract information from geospatial imagery. Easy

More information

ENVI Tutorial: Introduction to ENVI

ENVI Tutorial: Introduction to ENVI ENVI Tutorial: Introduction to ENVI Table of Contents OVERVIEW OF THIS TUTORIAL...1 GETTING STARTED WITH ENVI...1 Starting ENVI...1 Starting ENVI on Windows Machines...1 Starting ENVI in UNIX...1 Starting

More information

Overview. Image Geometric Correction. LA502 Special Studies Remote Sensing. Why Geometric Correction?

Overview. Image Geometric Correction. LA502 Special Studies Remote Sensing. Why Geometric Correction? LA502 Special Studies Remote Sensing Image Geometric Correction Department of Landscape Architecture Faculty of Environmental Design King AbdulAziz University Room 103 Overview Image rectification Geometric

More information

DataMaster for Windows

DataMaster for Windows DataMaster for Windows Version 3.0 April 2004 Mid America Computer Corp. 111 Admiral Drive Blair, NE 68008-0700 (402) 426-6222 Copyright 2003-2004 Mid America Computer Corp. All rights reserved. Table

More information

PRODUCT DESCRIPTION PRODUCT DESCRIPTION

PRODUCT DESCRIPTION PRODUCT DESCRIPTION IMAGESTATION PRODUCT DESCRIPTION PRODUCT DESCRIPTION IMAGESTATION FAMILY OF PRODUCTS ImageStation includes applications for your entire production workflow including project creation, orientation and triangulation,

More information

AEMLog Users Guide. Version 1.01

AEMLog Users Guide. Version 1.01 AEMLog Users Guide Version 1.01 INTRODUCTION...2 DOCUMENTATION...2 INSTALLING AEMLOG...4 AEMLOG QUICK REFERENCE...5 THE MAIN GRAPH SCREEN...5 MENU COMMANDS...6 File Menu...6 Graph Menu...7 Analysis Menu...8

More information

IMAGINE Objective. The Future of Feature Extraction, Update & Change Mapping

IMAGINE Objective. The Future of Feature Extraction, Update & Change Mapping IMAGINE ive The Future of Feature Extraction, Update & Change Mapping IMAGINE ive provides object based multi-scale image classification and feature extraction capabilities to reliably build and maintain

More information

TOPOGRAPHIC NORMALIZATION INTRODUCTION

TOPOGRAPHIC NORMALIZATION INTRODUCTION TOPOGRAPHIC NORMALIZATION INTRODUCTION Use of remotely sensed data from mountainous regions generally requires additional preprocessing, including corrections for relief displacement and solar illumination

More information

Administration Tools User Guide. Release April 2015

Administration Tools User Guide. Release April 2015 Administration Tools User Guide Release 6.2.5 April 2015 Administration Tools User Guide Release 6.2.5 April 2015 Part Number: E62969_05 Oracle Financial Services Software, Inc. 1900 Oracle Way Reston,

More information

Sentinel-1 Toolbox. Offset Tracking Tutorial Issued August Jun Lu Luis Veci

Sentinel-1 Toolbox. Offset Tracking Tutorial Issued August Jun Lu Luis Veci Sentinel-1 Toolbox Offset Tracking Tutorial Issued August 2016 Jun Lu Luis Veci Copyright 2016 Array Systems Computing Inc. http://www.array.ca/ http://step.esa.int Offset Tracking Tutorial The goal of

More information

Modern Surveying Techniques. Prof. S. K. Ghosh. Department of Civil Engineering. Indian Institute of Technology, Roorkee.

Modern Surveying Techniques. Prof. S. K. Ghosh. Department of Civil Engineering. Indian Institute of Technology, Roorkee. Modern Surveying Techniques Prof. S. K. Ghosh Department of Civil Engineering Indian Institute of Technology, Roorkee Lecture - 12 Rectification & Restoration In my previous session, I had discussed regarding

More information

Video Georegistration: Key Challenges. Steve Blask Harris Corporation GCSD Melbourne, FL 32934

Video Georegistration: Key Challenges. Steve Blask Harris Corporation GCSD Melbourne, FL 32934 Video Georegistration: Key Challenges Steve Blask sblask@harris.com Harris Corporation GCSD Melbourne, FL 32934 Definitions Registration: image to image alignment Find pixel-to-pixel correspondences between

More information

In addition, the image registration and geocoding functionality is also available as a separate GEO package.

In addition, the image registration and geocoding functionality is also available as a separate GEO package. GAMMA Software information: GAMMA Software supports the entire processing from SAR raw data to products such as digital elevation models, displacement maps and landuse maps. The software is grouped into

More information

Selective Space Structures Manual

Selective Space Structures Manual Selective Space Structures Manual February 2017 CONTENTS 1 Contents 1 Overview and Concept 4 1.1 General Concept........................... 4 1.2 Modules................................ 6 2 The 3S Generator

More information

COMPARATIVE CHARACTERISTICS OF DEM OBTAINED FROM SATELLITE IMAGES SPOT-5 AND TK-350

COMPARATIVE CHARACTERISTICS OF DEM OBTAINED FROM SATELLITE IMAGES SPOT-5 AND TK-350 COMPARATIVE CHARACTERISTICS OF DEM OBTAINED FROM SATELLITE IMAGES SPOT-5 AND TK-350 Dr. V. F. Chekalin a*, M. M. Fomtchenko a* a Sovinformsputnik, 47, Leningradsky Pr., 125167 Moscow, Russia common@sovinformsputnik.com

More information

Comparison of Image Orientation by IKONOS, QuickBird and OrbView-3

Comparison of Image Orientation by IKONOS, QuickBird and OrbView-3 Comparison of Image Orientation by IKONOS, QuickBird and OrbView-3 K. Jacobsen University of Hannover, Germany Keywords: Orientation, Mathematical Models, IKONOS, QuickBird, OrbView-3 ABSTRACT: The generation

More information

GE1LC7 - Getting to Know Bentley Descartes for Advanced Image Viewing, Editing and Processing

GE1LC7 - Getting to Know Bentley Descartes for Advanced Image Viewing, Editing and Processing GE1LC7 - Getting to Know Bentley Descartes for Advanced Image Viewing, Editing and Processing Inga Morozoff Introduction: Raster data is everywhere - whether you do CAD design, mapping or GIS analysis,

More information

Creating, balancing and mosaicing Orthophotos

Creating, balancing and mosaicing Orthophotos Creating, balancing and mosaicing Orthophotos Wizards Map production 3D presentations Annotation Orthophoto Surface gridding Contouring Image mosaicing Data compression Geocoding Spatial analysis Raster

More information

MrSID Plug-in for 3D Analyst

MrSID Plug-in for 3D Analyst LizardTech MrSID Plug-in for 3D Analyst User Manual Copyrights Copyright 2009 2010 LizardTech. All rights reserved. Information in this document is subject to change without notice. The software described

More information

ArcScan for ArcGIS Tutorial

ArcScan for ArcGIS Tutorial ArcGIS 9 ArcScan for ArcGIS Tutorial Copyright 00 008 ESRI All rights reserved. Printed in the United States of America. The information contained in this document is the exclusive property of ESRI. This

More information

MARS v Release Notes Revised: May 23, 2018 (Builds and )

MARS v Release Notes Revised: May 23, 2018 (Builds and ) MARS v2018.0 Release Notes Revised: May 23, 2018 (Builds 8302.01 8302.18 and 8350.00 8352.00) Contents New Features:... 2 Enhancements:... 6 List of Bug Fixes... 13 1 New Features: LAS Up-Conversion prompts

More information

Lab 9. Julia Janicki. Introduction

Lab 9. Julia Janicki. Introduction Lab 9 Julia Janicki Introduction My goal for this project is to map a general land cover in the area of Alexandria in Egypt using supervised classification, specifically the Maximum Likelihood and Support

More information

Getting Started with ShowcaseChapter1:

Getting Started with ShowcaseChapter1: Chapter 1 Getting Started with ShowcaseChapter1: In this chapter, you learn the purpose of Autodesk Showcase, about its interface, and how to import geometry and adjust imported geometry. Objectives After

More information

Using ArcScan for ArcGIS

Using ArcScan for ArcGIS ArcGIS 9 Using ArcScan for ArcGIS Copyright 00 005 ESRI All rights reserved. Printed in the United States of America. The information contained in this document is the exclusive property of ESRI. This

More information

Questions and Answers

Questions and Answers Autodesk AutoCAD Raster Design 2011 Questions and Answers AutoCAD Raster Design 2011 Questions and Answers Make the most of rasterized scanned drawings, maps, aerial photos, satellite imagery, and digital

More information

DIGITAL HEIGHT MODELS BY CARTOSAT-1

DIGITAL HEIGHT MODELS BY CARTOSAT-1 DIGITAL HEIGHT MODELS BY CARTOSAT-1 K. Jacobsen Institute of Photogrammetry and Geoinformation Leibniz University Hannover, Germany jacobsen@ipi.uni-hannover.de KEY WORDS: high resolution space image,

More information